WorldWideScience

Sample records for pairwise comparison approach

  1. Preference Learning and Ranking by Pairwise Comparison

    Science.gov (United States)

    Fürnkranz, Johannes; Hüllermeier, Eyke

    This chapter provides an overview of recent work on preference learning and ranking via pairwise classification. The learning by pairwise comparison (LPC) paradigm is the natural machine learning counterpart to the relational approach to preference modeling and decision making. From a machine learning point of view, LPC is especially appealing as it decomposes a possibly complex prediction problem into a certain number of learning problems of the simplest type, namely binary classification. We explain how to approach different preference learning problems, such as label and instance ranking, within the framework of LPC. We primarily focus on methodological aspects, but also address theoretical questions as well as algorithmic and complexity issues.

  2. Socio-economic scenario development for the assessment of climate change impacts on agricultural land use: a pairwise comparison approach

    DEFF Research Database (Denmark)

    Abildtrup, Jens; Audsley, E.; Fekete-Farkas, M.

    2006-01-01

    Assessment of the vulnerability of agriculture to climate change is strongly dependent on concurrent changes in socio-economic development pathways. This paper presents an integrated approach to the construction of socio-economic scenarios required for the analysis of climate change impacts...... on European agricultural land use. The scenarios are interpreted from the storylines described in the intergovernmental panel on climate change (IPCC) special report on emission scenarios (SRES), which ensures internal consistency between the evolution of socio-economics and climate change. A stepwise...... downscaling procedure based on expert-judgement and pairwise comparison is presented to obtain quantitative socio-economic parameters, e.g. prices and productivity estimates that are input to the ACCELERATES integrated land use model. In the first step, the global driving forces are identified and quantified...

  3. Evaluation of users' satisfaction on pedestrian facilities using pair-wise comparison approach

    Science.gov (United States)

    Zainol, R.; Ahmad, F.; Nordin, N. A.; Aripin, A. W. M.

    2014-02-01

    Global climate change issues demand that people around the world change the way they live today. Thus, current cities need to be redeveloped towards less use of carbon in their day-to-day operations. A pedestrianized environment is one of the approaches used to reduce the carbon footprint of cities. Heritage cities are the first to be looked into, since they were built in an era in which motorized vehicles were minimal. Therefore, this research explores users' satisfaction with the physical attributes of pedestrianization in Melaka Historical City, a UNESCO World Heritage Site. It aims to examine users' satisfaction with the pedestrian facilities provided within the study area using a pair-wise comparison questionnaire approach. A survey of 200 respondents using random sampling was conducted at six different sites, namely Jonker Street, Church Street, Kota Street, Goldsmith Street, Merdeka Street to Taming Sari Tower and Merdeka Street to River Cruise terminal. The survey consists of an assessment tool based on a nine-point scale of users' satisfaction with pathway properties, zebra pedestrian crossings, street furniture, personal safety, adjacency to traffic flow, aesthetics and amenities. The analytic hierarchy process (AHP) was used to avoid bias in analyzing the collected data. Findings show that Merdeka Street to Taming Sari Tower scores the highest satisfaction level and fulfils all the required needs of a pedestrianized environment. Similar assessment elements can be used to evaluate existing streets in other cities, and these criteria should also be used in planning future cities.
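
    As an illustration of the AHP machinery referred to above, the sketch below derives criterion weights from a single pairwise comparison matrix using the common geometric-mean approximation to the principal eigenvector. The 3x3 matrix and the criterion names are made-up examples, not data from the study.

```python
# Illustrative sketch (not the authors' code): deriving priority weights from a
# single AHP pairwise comparison matrix with the geometric-mean approximation.
import numpy as np

A = np.array([
    [1.0,  3.0, 5.0],   # e.g. pathway properties vs. street furniture vs. amenities
    [1/3., 1.0, 2.0],
    [1/5., 1/2., 1.0],
])

# Geometric mean of each row, normalized to sum to 1, approximates the
# principal-eigenvector priorities used in AHP.
gm = A.prod(axis=1) ** (1.0 / A.shape[0])
weights = gm / gm.sum()
print(weights)  # relative weights of the three criteria
```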

  4. Doctoral Program Selection Using Pairwise Comparisons.

    Science.gov (United States)

    Tadisina, Suresh K.; Bhasin, Vijay

    1989-01-01

    The application of a pairwise comparison methodology (Saaty's Analytic Hierarchy Process) to the doctoral program selection process is illustrated. A hierarchy for structuring and facilitating the doctoral program selection decision is described. (Author/MLW)

  5. Evaluation of users' satisfaction on pedestrian facilities using pair-wise comparison approach

    International Nuclear Information System (INIS)

    Zainol, R; Ahmad, F; Nordin, N A; Aripin, A W M

    2014-01-01

    Global climate change issues demand that people around the world change the way they live today. Thus, current cities need to be redeveloped towards less use of carbon in their day-to-day operations. A pedestrianized environment is one of the approaches used to reduce the carbon footprint of cities. Heritage cities are the first to be looked into, since they were built in an era in which motorized vehicles were minimal. Therefore, this research explores users' satisfaction with the physical attributes of pedestrianization in Melaka Historical City, a UNESCO World Heritage Site. It aims to examine users' satisfaction with the pedestrian facilities provided within the study area using a pair-wise comparison questionnaire approach. A survey of 200 respondents using random sampling was conducted at six different sites, namely Jonker Street, Church Street, Kota Street, Goldsmith Street, Merdeka Street to Taming Sari Tower and Merdeka Street to River Cruise terminal. The survey consists of an assessment tool based on a nine-point scale of users' satisfaction with pathway properties, zebra pedestrian crossings, street furniture, personal safety, adjacency to traffic flow, aesthetics and amenities. The analytic hierarchy process (AHP) was used to avoid bias in analyzing the collected data. Findings show that Merdeka Street to Taming Sari Tower scores the highest satisfaction level and fulfils all the required needs of a pedestrianized environment. Similar assessment elements can be used to evaluate existing streets in other cities, and these criteria should also be used in planning future cities.

  6. Nonparametric predictive pairwise comparison with competing risks

    International Nuclear Information System (INIS)

    Coolen-Maturi, Tahani

    2014-01-01

    In reliability, failure data often correspond to competing risks, where several failure modes can cause a unit to fail. This paper presents nonparametric predictive inference (NPI) for pairwise comparison with competing risks data, assuming that the failure modes are independent. These failure modes could be the same or different among the two groups, and these can be both observed and unobserved failure modes. NPI is a statistical approach based on few assumptions, with inferences strongly based on data and with uncertainty quantified via lower and upper probabilities. The focus is on the lower and upper probabilities for the event that the lifetime of a future unit from one group, say Y, is greater than the lifetime of a future unit from the second group, say X. The paper also shows how the two groups can be compared based on particular failure mode(s), and the comparison of the two groups when some of the competing risks are combined is discussed

  7. Supplier Evaluation Process by Pairwise Comparisons

    Directory of Open Access Journals (Sweden)

    Arkadiusz Kawa

    2015-01-01

    We propose to assess suppliers by using consistency-driven pairwise comparisons for tangible and intangible criteria. Tangible criteria are simpler to compare (e.g., the price of a service is lower than that of another service with identical characteristics). Intangible criteria are more difficult to assess. The proposed model combines assessments of both types of criteria. The main contribution of this paper is the presentation of an extension framework for the selection of suppliers in a procurement process. The final weights are computed from relative pairwise comparisons. For the needs of the paper, surveys were conducted among Polish managers dealing with supplier cooperation in their enterprises. Polish practice and restricted bidding are discussed, too.

  8. Selecting numerical scales for pairwise comparisons

    International Nuclear Information System (INIS)

    Elliott, Michael A.

    2010-01-01

    It is often desirable in decision analysis problems to elicit from an individual the rankings of a population of attributes according to the individual's preference and to understand the degree to which each attribute is preferred to the others. A common method for obtaining this information involves the use of pairwise comparisons, which allows an analyst to convert subjective expressions of preference between two attributes into numerical values indicating preferences across the entire population of attributes. Key to the use of pairwise comparisons is the underlying numerical scale that is used to convert subjective linguistic expressions of preference into numerical values. This scale represents the psychological manner in which individuals perceive increments of preference among abstract attributes and it has important implications about the distribution and consistency of an individual's preferences. Three popular scale types, the traditional integer scales, balanced scales and power scales are examined. Results of a study of 64 individuals responding to a hypothetical decision problem show that none of these scales can accurately capture the preferences of all individuals. A study of three individuals working on an actual engineering decision problem involving the design of a decay heat removal system for a nuclear fission reactor show that the choice of scale can affect the preferred decision. It is concluded that applications of pairwise comparisons would benefit from permitting participants to choose the scale that best models their own particular way of thinking about the relative preference of attributes.
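
    The three scale families discussed in this record can be sketched as simple mappings from a 1-9 linguistic preference level to a numeric ratio. The parameterizations below (a Salo-Hamalainen style balanced scale and a base-2 power scale) are common choices assumed here for illustration; the exact scales examined in the paper may differ.

```python
# Hedged sketch of three candidate numerical scales for a 1-9 linguistic
# preference level x; parameterizations are illustrative assumptions.
def integer_scale(x):          # traditional linear (Saaty-style) scale
    return float(x)

def balanced_scale(x):         # balanced-weight style: value = w / (1 - w)
    w = 0.45 + 0.05 * x        # maps x = 1..9 to w = 0.5..0.9
    return w / (1.0 - w)

def power_scale(x, base=2.0):  # geometric/power scale, one common choice of base
    return base ** ((x - 1) / 2.0)

for x in range(1, 10):
    print(x, integer_scale(x), round(balanced_scale(x), 2), round(power_scale(x), 2))
```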

  9. Modeling Expressed Emotions in Music using Pairwise Comparisons

    DEFF Research Database (Denmark)

    Madsen, Jens; Nielsen, Jens Brehm; Jensen, Bjørn Sand

    2012-01-01

    We introduce a two-alternative forced-choice experimental paradigm to quantify expressed emotions in music using the two wellknown arousal and valence (AV) dimensions. In order to produce AV scores from the pairwise comparisons and to visualize the locations of excerpts in the AV space, we...

  10. A predictive model of music preference using pairwise comparisons

    DEFF Research Database (Denmark)

    Jensen, Bjørn Sand; Gallego, Javier Saez; Larsen, Jan

    2012-01-01

    Music recommendation is an important aspect of many streaming services and multi-media systems; however, it is typically based on so-called collaborative filtering methods. In this paper we consider the recommendation task from a personal viewpoint and examine to which degree music preference can...... be elicited and predicted using simple and robust queries such as pairwise comparisons. We propose to model - and in turn predict - the pairwise music preference using a very flexible model based on Gaussian Process priors for which we describe the required inference. We further propose a specific covariance...

  11. Pairwise Comparison and Distance Measure of Hesitant Fuzzy Linguistic Term Sets

    Directory of Open Access Journals (Sweden)

    Han-Chen Huang

    2014-01-01

    A hesitant fuzzy linguistic term set (HFLTS), which allows experts to use several possible linguistic terms to assess a qualitative linguistic variable, is very useful for expressing people's hesitancy in practical decision-making problems. Up to now, little research has been done on the comparison and distance measure of HFLTSs. In this paper, we present a comparison method for HFLTSs based on pairwise comparisons of each linguistic term in the two HFLTSs. Then, a distance measure method based on the pairwise comparison matrix of HFLTSs is proposed, and we prove that this distance is equal to the distance of the average values of the HFLTSs, which makes the distance measure much simpler. Finally, the pairwise comparison and distance measure methods are utilized to develop two multicriteria decision-making approaches under hesitant fuzzy linguistic environments. The results analysis shows that the methods proposed in this paper are more reasonable.
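
    A minimal sketch of the averaged-value distance described in the abstract, under the assumption that an HFLTS is encoded as a set of integer term indices on a scale 0..g; the paper's formal definition may include additional normalization.

```python
# Minimal sketch (our reading of the abstract, not the paper's exact definition):
# if an HFLTS is a set of integer term indices on a scale 0..g, the proposed
# distance reduces to the distance between the average indices.
def hflts_distance(A, B, g):
    """A, B: sets of term indices; g: index of the largest linguistic term."""
    avg_a = sum(A) / len(A)
    avg_b = sum(B) / len(B)
    return abs(avg_a - avg_b) / g   # normalized to [0, 1]

print(hflts_distance({3, 4}, {1, 2, 3}, g=6))
```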

  12. Determinants of sovereign debt yield spreads under EMU: Pairwise approach

    NARCIS (Netherlands)

    Fazlioglu, S.

    2013-01-01

    This study aims at providing an empirical analysis of long-term determinants of sovereign debt yield spreads under the European EMU (Economic and Monetary Union) through a pairwise approach within a panel framework. Panel gravity models are increasingly used in the cross-market correlation literature while

  13. Exact p-values for pairwise comparison of Friedman rank sums, with application to comparing classifiers

    NARCIS (Netherlands)

    Eisinga, R.N.; Heskes, T.M.; Pelzer, B.J.; Grotenhuis, H.F. te

    2017-01-01

    Background: The Friedman rank sum test is a widely-used nonparametric method in computational biology. In addition to examining the overall null hypothesis of no significant difference among any of the rank sums, it is typically of interest to conduct pairwise comparison tests. Current approaches to
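
    For context, the sketch below shows the usual large-sample normal approximation for comparing two Friedman rank sums; the paper's contribution is to replace this approximation with exact p-values. The toy data are random.

```python
# Sketch of the common normal approximation for pairwise comparison of Friedman
# rank sums (illustrative background, not the paper's exact method).
# data: n blocks (rows, e.g. datasets) x k treatments (columns, e.g. classifiers).
import numpy as np
from scipy.stats import rankdata, norm

def pairwise_friedman_z(data, i, j):
    n, k = data.shape
    ranks = np.apply_along_axis(rankdata, 1, data)   # rank within each block
    R = ranks.sum(axis=0)                            # rank sum per treatment
    se = np.sqrt(n * k * (k + 1) / 6.0)              # sd of R_i - R_j under H0
    z = (R[i] - R[j]) / se
    return z, 2 * norm.sf(abs(z))                    # two-sided approximate p-value

data = np.random.rand(20, 4)                         # toy example: 20 blocks, 4 treatments
print(pairwise_friedman_z(data, 0, 1))
```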

  14. Pairwise comparisons and visual perceptions of equal area polygons.

    Science.gov (United States)

    Adamic, P; Babiy, V; Janicki, R; Kakiashvili, T; Koczkodaj, W W; Tadeusiewicz, R

    2009-02-01

    Studies related to visual perception have been plentiful in recent years. Participants rated the areas of five randomly generated shapes of equal area, using a reference unit area that was displayed together with the shapes. Respondents were 179 university students from Canada and Poland. The average error made by respondents using the unit square was 25.75%. The error was substantially decreased, to 5.51%, when the shapes were compared to one another in pairs. The gain of 20.24% in this two-dimensional experiment was substantially better than the 11.78% gain reported in previous one-dimensional experiments. This is the first statistically sound two-dimensional experiment demonstrating that pairwise comparisons improve accuracy.

  15. Screening synteny blocks in pairwise genome comparisons through integer programming.

    Science.gov (United States)

    Tang, Haibao; Lyons, Eric; Pedersen, Brent; Schnable, James C; Paterson, Andrew H; Freeling, Michael

    2011-04-18

    It is difficult to accurately interpret chromosomal correspondences such as true orthology and paralogy due to significant divergence of genomes from a common ancestor. Analyses are particularly problematic among lineages that have repeatedly experienced whole genome duplication (WGD) events. To compare multiple "subgenomes" derived from genome duplications, we need to relax the traditional requirements of "one-to-one" syntenic matchings of genomic regions in order to reflect "one-to-many" or more generally "many-to-many" matchings. However this relaxation may result in the identification of synteny blocks that are derived from ancient shared WGDs that are not of interest. For many downstream analyses, we need to eliminate weak, low scoring alignments from pairwise genome comparisons. Our goal is to objectively select subset of synteny blocks whose total scores are maximized while respecting the duplication history of the genomes in comparison. We call this "quota-based" screening of synteny blocks in order to appropriately fill a quota of syntenic relationships within one genome or between two genomes having WGD events. We have formulated the synteny block screening as an optimization problem known as "Binary Integer Programming" (BIP), which is solved using existing linear programming solvers. The computer program QUOTA-ALIGN performs this task by creating a clear objective function that maximizes the compatible set of synteny blocks under given constraints on overlaps and depths (corresponding to the duplication history in respective genomes). Such a procedure is useful for any pairwise synteny alignments, but is most useful in lineages affected by multiple WGDs, like plants or fish lineages. For example, there should be a 1:2 ploidy relationship between genome A and B if genome B had an independent WGD subsequent to the divergence of the two genomes. We show through simulations and real examples using plant genomes in the rosid superorder that the quota
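
    The quota-based screening idea can be sketched as a small binary integer program. The toy model below uses the PuLP modelling library and made-up block scores, regions and quotas; it is not the QUOTA-ALIGN implementation and it omits the overlap constraints on block coordinates.

```python
# Toy sketch of quota-based synteny block screening as a binary integer program.
import pulp

# Hypothetical synteny blocks: (score, region on genome A, region on genome B)
blocks = [(50, "a1", "b1"), (40, "a1", "b2"), (30, "a2", "b1"), (20, "a2", "b3")]
quota_a, quota_b = 1, 2      # e.g. a 1:2 ploidy relationship between A and B

prob = pulp.LpProblem("quota_align", pulp.LpMaximize)
x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(len(blocks))]

# Objective: keep the highest-scoring compatible subset of blocks.
prob += pulp.lpSum(b[0] * x[i] for i, b in enumerate(blocks))

# Depth constraints: each region of A (resp. B) may appear in at most quota blocks.
for region in {b[1] for b in blocks}:
    prob += pulp.lpSum(x[i] for i, b in enumerate(blocks) if b[1] == region) <= quota_a
for region in {b[2] for b in blocks}:
    prob += pulp.lpSum(x[i] for i, b in enumerate(blocks) if b[2] == region) <= quota_b

prob.solve()
print([i for i in range(len(blocks)) if x[i].value() == 1])   # selected block indices
```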

  16. Revision of Begomovirus taxonomy based on pairwise sequence comparisons

    KAUST Repository

    Brown, Judith K.

    2015-04-18

    Viruses of the genus Begomovirus (family Geminiviridae) are emergent pathogens of crops throughout the tropical and subtropical regions of the world. By virtue of having a small DNA genome that is easily cloned, and due to the recent innovations in cloning and low-cost sequencing, there has been a dramatic increase in the number of available begomovirus genome sequences. Even so, most of the available sequences have been obtained from cultivated plants and are likely a small and phylogenetically unrepresentative sample of begomovirus diversity, a factor constraining taxonomic decisions such as the establishment of operationally useful species demarcation criteria. In addition, problems in assigning new viruses to established species have highlighted shortcomings in the previously recommended mechanism of species demarcation. Based on the analysis of 3,123 full-length begomovirus genome (or DNA-A component) sequences available in public databases as of December 2012, a set of revised guidelines for the classification and nomenclature of begomoviruses are proposed. The guidelines primarily consider a) genus-level biological characteristics and b) results obtained using a standardized classification tool, Sequence Demarcation Tool, which performs pairwise sequence alignments and identity calculations. These guidelines are consistent with the recently published recommendations for the genera Mastrevirus and Curtovirus of the family Geminiviridae. Genome-wide pairwise identities of 91 % and 94 % are proposed as the demarcation threshold for begomoviruses belonging to different species and strains, respectively. Procedures and guidelines are outlined for resolving conflicts that may arise when assigning species and strains to categories wherever the pairwise identity falls on or very near the demarcation threshold value.
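
    Applying the proposed demarcation thresholds to a precomputed genome-wide pairwise identity (for example, from Sequence Demarcation Tool output) reduces to a simple rule, sketched below; cases falling on or very near the thresholds are resolved by the paper's additional guidelines.

```python
# Simple sketch of the proposed species/strain demarcation thresholds.
def classify_pair(identity_percent):
    if identity_percent < 91.0:
        return "different species"
    elif identity_percent < 94.0:
        return "same species, different strains"
    return "same strain"

print(classify_pair(92.5))
```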

  17. Revision of Begomovirus taxonomy based on pairwise sequence comparisons

    KAUST Repository

    Brown, Judith K.; Zerbini, F. Murilo; Navas-Castillo, Jesús; Moriones, Enrique; Ramos-Sobrinho, Roberto; Silva, José C. F.; Fiallo-Olivé, Elvira; Briddon, Rob W.; Hernández-Zepeda, Cecilia; Idris, Ali; Malathi, V. G.; Martin, Darren P.; Rivera-Bustamante, Rafael; Ueda, Shigenori; Varsani, Arvind

    2015-01-01

    Viruses of the genus Begomovirus (family Geminiviridae) are emergent pathogens of crops throughout the tropical and subtropical regions of the world. By virtue of having a small DNA genome that is easily cloned, and due to the recent innovations in cloning and low-cost sequencing, there has been a dramatic increase in the number of available begomovirus genome sequences. Even so, most of the available sequences have been obtained from cultivated plants and are likely a small and phylogenetically unrepresentative sample of begomovirus diversity, a factor constraining taxonomic decisions such as the establishment of operationally useful species demarcation criteria. In addition, problems in assigning new viruses to established species have highlighted shortcomings in the previously recommended mechanism of species demarcation. Based on the analysis of 3,123 full-length begomovirus genome (or DNA-A component) sequences available in public databases as of December 2012, a set of revised guidelines for the classification and nomenclature of begomoviruses are proposed. The guidelines primarily consider a) genus-level biological characteristics and b) results obtained using a standardized classification tool, Sequence Demarcation Tool, which performs pairwise sequence alignments and identity calculations. These guidelines are consistent with the recently published recommendations for the genera Mastrevirus and Curtovirus of the family Geminiviridae. Genome-wide pairwise identities of 91 % and 94 % are proposed as the demarcation threshold for begomoviruses belonging to different species and strains, respectively. Procedures and guidelines are outlined for resolving conflicts that may arise when assigning species and strains to categories wherever the pairwise identity falls on or very near the demarcation threshold value.

  18. Predictive Modeling of Expressed Emotions in Music Using Pairwise Comparisons

    DEFF Research Database (Denmark)

    Madsen, Jens; Jensen, Bjørn Sand; Larsen, Jan

    2013-01-01

    We introduce a two-alternative forced-choice (2AFC) experimental paradigm to quantify expressed emotions in music using the arousal and valence (AV) dimensions. A wide range of well-known audio features are investigated for predicting the expressed emotions in music using learning curves...... and essential baselines. We furthermore investigate the scalability issues of using 2AFC in quantifying emotions expressed in music on large-scale music databases. The possibility of dividing the annotation task between multiple individuals, while pooling individuals’ comparisons is investigated by looking...... comparisons at random by using learning curves. We show that a suitable predictive model of expressed valence in music can be achieved from only 15% of the total number of comparisons when using the Expected Value of Information (EVOI) active learning scheme. For the arousal dimension we require 9...

  19. Improving pairwise comparison of protein sequences with domain co-occurrence

    Science.gov (United States)

    Gascuel, Olivier

    2018-01-01

    Comparing and aligning protein sequences is an essential task in bioinformatics. More specifically, local alignment tools like BLAST are widely used for identifying conserved protein sub-sequences, which likely correspond to protein domains or functional motifs. However, to limit the number of false positives, these tools are used with stringent sequence-similarity thresholds and hence can miss several hits, especially for species that are phylogenetically distant from reference organisms. A solution to this problem is to integrate additional contextual information into the procedure. Here, we propose to use domain co-occurrence to increase the sensitivity of pairwise sequence comparisons. Domain co-occurrence is a strong feature of proteins, since most protein domains tend to appear with a limited number of other domains on the same protein. We propose a method to take this information into account in a typical BLAST analysis and to construct new domain families on the basis of these results. We used Plasmodium falciparum as a case study to evaluate our method. The experimental findings showed an increase of 14% in the number of significant BLAST hits and an increase of 25% in the proteome area that can be covered with a domain. Our method identified 2240 new domains for which, in most cases, no model of the Pfam database could be linked. Moreover, our study of the quality of the new domains in terms of alignment and physicochemical properties shows that they are close to those of standard Pfam domains. Source code of the proposed approach and supplementary data are available at: https://gite.lirmm.fr/menichelli/pairwise-comparison-with-cooccurrence PMID:29293498
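
    A minimal sketch of the domain co-occurrence signal the method builds on: counting how often pairs of domains appear on the same protein in an annotation table. The protein-to-domain mapping below is hypothetical; the paper integrates such counts into a BLAST-based pipeline rather than using them in isolation.

```python
# Illustrative sketch: counting domain co-occurrence from a protein -> domains mapping.
from collections import Counter
from itertools import combinations

proteins = {                      # hypothetical domain annotations
    "P1": ["PF00001", "PF00002"],
    "P2": ["PF00001", "PF00002", "PF00003"],
    "P3": ["PF00002", "PF00003"],
}

cooccurrence = Counter()
for domains in proteins.values():
    for a, b in combinations(sorted(set(domains)), 2):
        cooccurrence[(a, b)] += 1

print(cooccurrence.most_common(3))   # most frequently co-occurring domain pairs
```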

  20. Image ranking in video sequences using pairwise image comparisons and temporal smoothing

    CSIR Research Space (South Africa)

    Burke, Michael

    2016-12-01

    The ability to predict the importance of an image is highly desirable in computer vision. This work introduces an image ranking scheme suitable for use in video or image sequences. Pairwise image comparisons are used to determine image ‘interest...

  1. Optimal Inconsistency Repairing of Pairwise Comparison Matrices Using Integrated Linear Programming and Eigenvector Methods

    Directory of Open Access Journals (Sweden)

    Haiqing Zhang

    2014-01-01

    Satisfying the consistency requirements of a pairwise comparison matrix (PCM) is a critical step in decision-making methodologies. An algorithm has been proposed to find a new modified consistent PCM that can replace the original inconsistent PCM in the analytic hierarchy process (AHP) or in fuzzy AHP. This paper defines the modified consistent PCM as a combination of the original inconsistent PCM and an adjustable consistent PCM. The algorithm adopts a segment tree to gradually approach the greatest lower bound of the distance to the original PCM in order to obtain the middle value of an adjustable PCM. It also proposes a theorem to obtain the lower value and the upper value of an adjustable PCM based on two constraints. The experiments for crisp elements show that the proposed approach can preserve more of the original information than previous works with the same consistency value. The convergence rate of our algorithm is significantly faster than that of previous works with respect to different parameters. The experiments for fuzzy elements show that our method can obtain suitable modified fuzzy PCMs.
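
    The consistency requirement that such repair algorithms target is usually quantified by the standard AHP consistency ratio, sketched below with commonly used random-index values for small matrices; this is background for the record above, not the authors' repair algorithm itself.

```python
# Sketch of the standard AHP consistency check (illustrative, not the paper's code).
import numpy as np

def consistency_ratio(A, RI={3: 0.58, 4: 0.90, 5: 1.12}):
    n = A.shape[0]
    lam_max = np.real(np.linalg.eigvals(A)).max()   # principal eigenvalue
    CI = (lam_max - n) / (n - 1)                    # consistency index
    return CI / RI[n]                               # CR < 0.1 is usually considered acceptable

A = np.array([[1, 2, 4], [0.5, 1, 2], [0.25, 0.5, 1.0]])  # perfectly consistent example
print(consistency_ratio(A))                                # ~0.0
```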

  2. Estimators of the Relations of Equivalence, Tolerance and Preference Based on Pairwise Comparisons with Random Errors

    Directory of Open Access Journals (Sweden)

    Leszek Klukowski

    2012-01-01

    This paper presents a review of the author's results in the area of estimation of the relations of equivalence, tolerance and preference within a finite set, based on multiple, stochastically independent pairwise comparisons with random errors, in binary and multivalent forms. These estimators require weaker assumptions than those used in the literature on the subject. Estimates of the relations are obtained from solutions to discrete optimization problems. They allow application of both types of comparisons - binary and multivalent (this fact relates to the tolerance and preference relations). The estimates can be verified in a statistical way; in particular, it is possible to verify the type of the relation. The estimates have been applied by the author to problems regarding forecasting, financial engineering and bio-cybernetics. (original abstract)

  3. Market Competitiveness Evaluation of Mechanical Equipment with a Pairwise Comparisons Hierarchical Model.

    Science.gov (United States)

    Hou, Fujun

    2016-01-01

    This paper provides a description of how market competitiveness evaluations concerning mechanical equipment can be made in the context of multi-criteria decision environments. It is assumed that, when evaluating market competitiveness, there is a limited number of candidates with some required qualifications, and that the alternatives are pairwise compared on a ratio scale. The qualifications are depicted as criteria in a hierarchical structure. A hierarchical decision model called PCbHDM was used in this study based on an analysis of its desirable traits. Illustration and comparison show that PCbHDM provides a convenient and effective tool for evaluating the market competitiveness of mechanical equipment. Researchers and practitioners might use the findings of this paper when applying PCbHDM.

  4. Classification of forest-based ecotourism areas in Pocahontas County of West Virginia using GIS and pairwise comparison method

    Science.gov (United States)

    Ishwar Dhami; Jinyang. Deng

    2012-01-01

    Many previous studies have examined ecotourism primarily from the perspective of tourists while largely ignoring ecotourism destinations. This study used geographical information system (GIS) and pairwise comparison to identify forest-based ecotourism areas in Pocahontas County, West Virginia. The study adopted the criteria and scores developed by Boyd and Butler (1994...

  5. Left auditory cortex is involved in pairwise comparisons of the direction of frequency modulated tones

    Directory of Open Access Journals (Sweden)

    Nicole Angenstein

    2013-07-01

    Evaluating a series of complex sounds like those in speech and music requires sequential comparisons to extract task-relevant relations between subsequent sounds. With the present functional magnetic resonance imaging (fMRI) study, we investigated whether sequential comparison of a specific acoustic feature within pairs of tones leads to a change in lateralized processing in the auditory cortex of humans. For this we used the active categorization of the direction (up versus down) of slow frequency modulated (FM) tones. Several studies suggest that this task is mainly processed in the right auditory cortex. These studies, however, tested only the categorization of the FM direction of each individual tone. In the present study we ask whether the right-lateralized processing changes when, in addition, the FM direction is compared within pairs of successive tones. For this we use an experimental approach involving contralateral noise presentation in order to explore the contributions made by the left and right auditory cortex in the completion of the auditory task. This method has already been applied to confirm the right-lateralized processing of the FM direction of individual tones. In the present study, the subjects were additionally required to perform a sequential comparison of the FM direction in pairs of tones. The results suggest a division of labor between the two hemispheres such that the FM direction of each individual tone is mainly processed in the right auditory cortex whereas the sequential comparison of this feature between tones in a pair is probably performed in the left auditory cortex.

  6. Pairwise comparisons of ten porcine tissues identify differential transcriptional regulation at the gene, isoform, promoter and transcription start site level

    International Nuclear Information System (INIS)

    Farajzadeh, Leila; Hornshøj, Henrik; Momeni, Jamal; Thomsen, Bo; Larsen, Knud; Hedegaard, Jakob; Bendixen, Christian; Madsen, Lone Bruhn

    2013-01-01

    Highlights: •Transcriptome sequencing yielded 223 million porcine RNA-seq reads and 59,000 transcribed locations. •Establishment of unique transcription profiles for ten porcine tissues including four brain tissues. •Comparison of transcription profiles at the gene, isoform, promoter and transcription start site level. •Highlights a high level of regulation of neuro-related genes at the gene, isoform, and TSS level. •Our results emphasize the pig as a valuable animal model with respect to human biological issues. -- Abstract: The transcriptome is the absolute set of transcripts in a tissue or cell at the time of sampling. In this study RNA-Seq is employed to enable the differential analysis of the transcriptome profile for ten porcine tissues in order to evaluate differences between the tissues at the gene and isoform expression level, together with an analysis of variation in transcription start sites, promoter usage, and splicing. In total, 223 million RNA fragments were sequenced, leading to the identification of 59,930 transcribed gene locations and 290,936 transcript variants using Cufflinks, with similarity to approximately 13,899 annotated human genes. Pairwise analysis of tissues for differential expression at the gene level showed that the smallest differences were between tissues originating from the porcine brain. Interestingly, the relative level of differential expression at the isoform level generally did not vary between tissue contrasts. Furthermore, analysis of differential promoter usage between tissues revealed a proportionally higher variation between cerebellum (CBE) versus frontal cortex and cerebellum versus hypothalamus (HYP) than in the remaining comparisons. In addition, the comparison of differential transcription start sites showed that the number of these sites is generally increased in comparisons including hypothalamus in contrast to other pairwise assessments. A comprehensive analysis of one of the tissue contrasts, i

  7. The effect of fiscal incentives on market penetration of electric vehicles: A pairwise comparison of total cost of ownership

    International Nuclear Information System (INIS)

    Lévay, Petra Zsuzsa; Drossinos, Yannis; Thiel, Christian

    2017-01-01

    An important barrier to electric vehicle (EV) sales is their high purchase price compared to internal combustion engine (ICE) vehicles. We conducted total cost of ownership (TCO) calculations to study how costs and sales of EVs relate to each other and to examine the role of fiscal incentives in reducing TCO and increasing EV sales. We composed EV-ICE vehicle pairs that allowed cross-segment and cross-country comparison in eight European countries. Actual car prices were used to calculate the incentives for each model in each country. We found a negative TCO-sales relationship that differs across car segments. Compared to their ICE vehicle pair, big EVs have lower TCO, higher sales, and seem to be less price responsive than small EVs. Three country groups can be distinguished according to the level of fiscal incentives and their impact on TCO and EV sales. In Norway, incentives led to the lowest TCO for the EVs. In the Netherlands, France, and UK the TCO of EVs is close to the TCO of the ICE pairs. In the other countries the TCO of EVs exceeds that of the ICE vehicles. We found that exemptions from flat taxes favour big EVs, while lump-sum subsidies favour small EVs. - Highlights: • Pairwise comparison of EV and ICE vehicle TCO and sales in eight European countries. • In NO, EV TCO is lower than ICE TCO; in NL, FR, and UK, EV TCO is slightly higher. • Compared to ICE vehicles, big EVs have lower TCO and higher sales than small EVs. • Exemptions from flat taxes favour big EVs, lump-sum subsidies favour small EVs. • Most popular EV models: Tesla Model S, Nissan Leaf, Mitsubishi Outlander PHEV.
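
    A back-of-the-envelope sketch of a total cost of ownership comparison in the spirit of the study; the purchase prices, subsidy, running costs, holding period and discount rate below are invented placeholders, not figures from the paper.

```python
# Illustrative TCO sketch: purchase price minus incentives plus discounted running costs.
def total_cost_of_ownership(price, subsidy, annual_energy_cost, annual_tax,
                            years=4, discount_rate=0.05):
    tco = price - subsidy
    for t in range(1, years + 1):
        tco += (annual_energy_cost + annual_tax) / (1 + discount_rate) ** t
    return tco

# Hypothetical EV vs. ICE comparison (all numbers made up for illustration).
ev = total_cost_of_ownership(price=35000, subsidy=5000, annual_energy_cost=500, annual_tax=0)
ice = total_cost_of_ownership(price=25000, subsidy=0, annual_energy_cost=1500, annual_tax=300)
print(round(ev), round(ice))
```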

  8. Grip-Pattern Verification for Smart Gun Based on Maximum-Pairwise Comparison and Mean-Template Comparison

    NARCIS (Netherlands)

    Shang, X.; Veldhuis, Raymond N.J.

    2008-01-01

    In our biometric verification system of a smart gun, the rightful user of a gun is authenticated by grip-pattern recognition. In this work verification will be done using two types of comparison methods, respectively. One is mean-template comparison, where the matching score between a test image and

  9. On the non-convergence of energy intensities: evidence from a pair-wise econometric approach

    International Nuclear Information System (INIS)

    Le Pen, Yannick; Sevi, Benoit

    2008-01-01

    This paper evaluates convergence of energy intensities for a group of 97 countries in the period 1971-2003. Convergence is tested using a recent method proposed by Pesaran (2007) [M.H. Pesaran. A pairwise approach to testing for output and growth convergence. Journal of Econometrics 138, 312-355.] based on the stochastic convergence criterion. The main advantages of this method are that results do not depend on a benchmark against which convergence is assessed, and that it is more robust. Applications of several unit-root tests as well as a stationarity test uniformly reject the global convergence hypothesis. Locally, for the Middle East, OECD and Europe sub-groups, non-convergence is less strongly rejected. The introduction of possible structural breaks in the analysis only marginally provides more support to the convergence hypothesis. (authors)
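
    The pairwise testing idea can be sketched as follows: for every pair of countries, the gap between their energy-intensity series is tested for a unit root, and convergence is supported when a large share of the gaps is found stationary. The toy series and the 5% cut-off below are illustrative assumptions, not the paper's data or exact test battery.

```python
# Sketch of pairwise convergence testing via unit-root tests on country-pair gaps.
import itertools
import numpy as np
from statsmodels.tsa.stattools import adfuller

series = {c: np.random.rand(30) for c in ["A", "B", "C"]}   # toy energy-intensity series

rejections = 0
pairs = list(itertools.combinations(series, 2))
for c1, c2 in pairs:
    gap = series[c1] - series[c2]
    pvalue = adfuller(gap)[1]           # ADF test on the pairwise gap
    rejections += pvalue < 0.05

print(rejections, "of", len(pairs), "pairwise gaps reject a unit root")
```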

  10. Randomized Approaches for Nearest Neighbor Search in Metric Space When Computing the Pairwise Distance Is Extremely Expensive

    Science.gov (United States)

    Wang, Lusheng; Yang, Yong; Lin, Guohui

    Finding the closest object for a query in a database is a classical problem in computer science. For some modern biological applications, computing the similarity between two objects might be very time consuming. For example, it takes a long time to compute the edit distance between two whole chromosomes and the alignment cost of two 3D protein structures. In this paper, we study the nearest neighbor search problem in metric space, where the pairwise distance between two objects in the database is known and we want to minimize the number of distances computed on-line between the query and objects in the database in order to find the closest object. We have designed two randomized approaches for indexing metric space databases, where objects are purely described by their distances with each other. Analysis and experiments show that our approaches only need to compute distances to O(log n) objects in order to find the closest object, where n is the total number of objects in the database.
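
    One standard way to reduce the number of on-line distance computations in a metric space, in the spirit of this record, is pivot-based pruning via the triangle inequality; the sketch below is a generic illustration with a cheap toy metric, not the authors' randomized indexing schemes.

```python
# Sketch of pivot-based pruning: distances to a few pivots are precomputed off-line,
# and the triangle inequality prunes most expensive on-line distance computations.
import random

def nearest_neighbor(query, objects, dist, pivots, pivot_table):
    d_qp = {p: dist(query, p) for p in pivots}          # few on-line computations
    best, best_d = None, float("inf")
    for x in objects:
        # lower bound on dist(query, x) derived from every pivot
        lb = max(abs(d_qp[p] - pivot_table[(p, x)]) for p in pivots)
        if lb >= best_d:
            continue                                     # pruned: no distance computed
        d = dist(query, x)
        if d < best_d:
            best, best_d = x, d
    return best, best_d

# Toy 1-D example where |a - b| stands in for an "expensive" metric.
objs = list(range(0, 1000, 7))
piv = random.sample(objs, 5)
table = {(p, x): abs(p - x) for p in piv for x in objs}
print(nearest_neighbor(123, objs, lambda a, b: abs(a - b), piv, table))
```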

  11. Historical demography of common carp estimated from individuals collected from various parts of the world using the pairwise sequentially markovian coalescent approach.

    Science.gov (United States)

    Yuan, Zihao; Huang, Wei; Liu, Shikai; Xu, Peng; Dunham, Rex; Liu, Zhanjiang

    2018-04-01

    The inference of the historical demography of a species is helpful for understanding the species' differentiation and its population dynamics. However, such inference has previously been difficult due to the lack of proper analytical methods and the limited availability of genetic data. A recently developed method called the Pairwise Sequentially Markovian Coalescent (PSMC) offers the capability to estimate the trajectories of historical populations over considerable time periods using genomic sequences. In this study, we applied this approach to infer the historical demography of the common carp using samples collected from Europe, Asia and the Americas. Comparison between Asian and European common carp populations showed that the last glacial period starting 100 ka BP likely caused a significant decline in population size of the wild common carp in Europe, while it did not have much of an impact on its counterparts in Asia. This was probably caused by differences in glacial activity in East Asia and Europe, and suggests a separation of the European and Asian clades before the last glacial maximum. The North American clade, which is an invasive population, shared a similar demographic history with the European populations, consistent with the idea that the North American common carp probably had European ancestral origins. Our analysis represents the first reconstruction of the historical population demography of the common carp, which is important for elucidating the separation of the European and Asian common carp clades during the Quaternary glaciation, as well as the dispersal of the common carp across the world.

  12. Theory of pairwise lesion interaction

    International Nuclear Information System (INIS)

    Harder, Dietrich; Virsik-Peuckert, Patricia; Bartels, Ernst

    1992-01-01

    A comparison between repair time constants measured at both the molecular and cellular levels has shown that the DNA double strand break is the molecular change of key importance in the causation of cellular effects such as chromosome aberrations and cell inactivation. Cell fusion experiments provided the evidence that the pairwise interaction between two double strand breaks - or more exactly between the two ''repair sites'' arising from them in the course of enzymatic repair - is needed to produce the faulty chromatin crosslink which leads to cytogenetic and cytolethal effects. These modern experiments have confirmed the classical assumption of pairwise lesion interaction (PLI) on which the models of Lea and Neary were based. It seems worthwhile to continue and complete the mathematical treatment of their proposed mechanism in order to show in quantitative terms that the well-known fractionation, protraction and linear energy transfer (LET) irradiation effects are consequences of, or can at least be partly attributed to, PLI. Arithmetic treatment of PLI - a second order reaction - also has the advantage of providing a prerequisite for further investigations into the stages of development of misrepair products such as chromatin crosslinks. It has been possible to formulate a completely arithmetic theory of PLI by consistently applying three biophysically permitted approximations - pure first order lesion repair kinetics, dose-independent repair time constants and a low yield of ionization/lesion conversion. The mathematical approach is summarized here, including several formulae not elaborated at the time of previous publications. We also study an application which sheds light on the chain of events involved in PLI. (author)

  13. The use of a modified pairwise comparison method in evaluating critical success factors for community-based rural homestay programmes

    Science.gov (United States)

    Daud, Shahidah Md; Ramli, Razamin; Kasim, Maznah Mat; Kayat, Kalsom; Razak, Rafidah Abd

    2014-12-01

    The tourism industry has become a highlighted sector that has markedly increased the national income level. Despite the tourism industry being one of the highest income-generating sectors, the Homestay Programme, as a Community-Based Tourism (CBT) product in Malaysia, has not absorbed much of the incoming wealth. A Homestay Programme refers to a programme in a community where a tourist stays together with a host family and experiences the everyday way of life of the family in both a direct and an indirect manner. There are over 100 Homestay Programmes currently registered with the Ministry of Culture and Tourism Malaysia, mostly located in rural areas, but only a few excel and enjoy the fruits of the booming industry. Hence, this article seeks to identify the critical success factors for a Community-Based Rural Homestay Programme in Malaysia. A modified pairwise method is utilized to further evaluate the identified success factors in a more meaningful way. The findings will help the Homestay Programme function as a community development tool that manages tourism resources and thus help the community in improving the local economy and creating job opportunities.

  14. Evaluation of criteria for sustainability of community-based rural homestay programs via a modified pairwise comparison method

    Science.gov (United States)

    Ramli, Rohaini; Kasim, Maznah Mat; Ramli, Razamin; Kayat, Kalsom; Razak, Rafidah Abd

    2014-12-01

    The Ministry of Tourism and Culture Malaysia has long introduced homestay programs across the country to enhance the quality of life of people, especially those living in rural areas. This type of program is classified as community-based tourism (CBT), as it is expected to economically improve livelihoods through cultural and community-associated activities. It is the aspiration of the ministry to see the income imbalance between people in rural and urban areas reduced, which would contribute towards creating more developed states in Malaysia. Since the 1970s, 154 homestay programs have been registered with the ministry. However, the performance and sustainability of the programs are still not satisfactory; only a small number of homestay programs perform well and are able to sustain themselves. Thus, the aim of this paper is to identify relevant criteria contributing to the sustainability of a homestay program. The criteria are evaluated for their levels of importance via the use of a modified pairwise method and analyzed for other potentials. The findings will help homestay operators to focus on the necessary criteria and thus perform effectively as a CBT business initiative.

  15. Pareto optimal pairwise sequence alignment.

    Science.gov (United States)

    DeRonne, Kevin W; Karypis, George

    2013-01-01

    Sequence alignment using evolutionary profiles is a commonly employed tool when investigating a protein. Many profile-profile scoring functions have been developed for use in such alignments, but there has not yet been a comprehensive study of Pareto optimal pairwise alignments for combining multiple such functions. We show that the problem of generating Pareto optimal pairwise alignments has an optimal substructure property, and develop an efficient algorithm for generating Pareto optimal frontiers of pairwise alignments. All possible sets of two, three, and four profile scoring functions are used from a pool of 11 functions and applied to 588 pairs of proteins in the ce_ref data set. The performance of the best objective combinations on ce_ref is also evaluated on an independent set of 913 protein pairs extracted from the BAliBASE RV11 data set. Our dynamic-programming-based heuristic approach produces approximated Pareto optimal frontiers of pairwise alignments that contain comparable alignments to those on the exact frontier, but on average in less than 1/58th the time in the case of four objectives. Our results show that the Pareto frontiers contain alignments whose quality is better than the alignments obtained by single objectives. However, the task of identifying a single high-quality alignment among those in the Pareto frontier remains challenging.
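
    The core notion of Pareto optimality used above can be illustrated by a brute-force non-dominated filter over candidate alignments scored by several objectives (higher is better); the authors' dynamic-programming construction of the frontier is more involved than this sketch.

```python
# Sketch of non-dominated ("Pareto optimal") filtering of scored candidates.
def pareto_frontier(candidates):
    """candidates: list of (alignment_id, tuple_of_scores); higher scores are better."""
    frontier = []
    for cid, scores in candidates:
        dominated = any(
            all(o >= s for o, s in zip(other, scores)) and other != scores
            for _, other in candidates
        )
        if not dominated:
            frontier.append((cid, scores))
    return frontier

cands = [("a", (3, 5)), ("b", (4, 4)), ("c", (2, 2)), ("d", (4, 5))]
print(pareto_frontier(cands))   # only 'd' survives: it dominates 'a', 'b' and 'c'
```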

  16. Comparison of sputum collection methods for tuberculosis diagnosis: a systematic review and pairwise and network meta-analysis.

    Science.gov (United States)

    Datta, Sumona; Shah, Lena; Gilman, Robert H; Evans, Carlton A

    2017-08-01

    The performance of laboratory tests to diagnose pulmonary tuberculosis depends on the quality of the sputum sample tested. The relative merits of sputum collection methods to improve tuberculosis diagnosis are poorly characterised. We therefore aimed to investigate the effects of sputum collection methods on tuberculosis diagnosis. We did a systematic review and meta-analysis to investigate whether non-invasive sputum collection methods in people aged at least 12 years improve the diagnostic performance of laboratory testing for pulmonary tuberculosis. We searched PubMed, Google Scholar, ProQuest, Web of Science, CINAHL, and Embase up to April 14, 2017, to identify relevant experimental, case-control, or cohort studies. We analysed data by pairwise meta-analyses with a random-effects model and by network meta-analysis. All diagnostic performance data were calculated at the sputum-sample level, except where authors only reported data at the individual patient level. Heterogeneity was assessed, with potential causes identified by logistic meta-regression. We identified 23 eligible studies published between 1959 and 2017, involving 8967 participants who provided 19 252 sputum samples. Brief, on-demand spot sputum collection was the main reference standard. Pooled sputum collection increased tuberculosis diagnosis by microscopy (odds ratio [OR] 1·6, 95% CI 1·3-1·9). Network meta-analysis confirmed these findings, and revealed that both pooled and instructed spot sputum collections were similarly effective techniques for increasing the diagnostic performance of microscopy. Tuberculosis diagnoses were substantially increased either by pooled collection or by providing instruction on how to produce a sputum sample taken at any time of the day. Both interventions had a similar effect to that reported for the introduction of new, expensive laboratory tests, and therefore warrant further exploration in the drive to end the global tuberculosis epidemic. Funding: Wellcome Trust.

  17. A pairwise unit-root-test based approach to investigating convergence of household debts in South Africa and the United States

    Directory of Open Access Journals (Sweden)

    Ntebogang Dinah Moroke

    2015-05-01

    The purpose of this paper is to test the convergence of household debts in the United States and South Africa using a pairwise unit-root-test based approach. A substantial number of studies have dealt with convergence of several macroeconomic variables, but to my knowledge no study has considered this subject with respect to the household debts of the identified countries. Quarterly data on household debts, consisting of 88 observations for South Africa and the United States spanning the period 1990 to 2013, were collected from the South African and St. Louis Federal Reserve Banks. Focused on the absolute value of household debts, this study shows that South Africa is far from catching up with the United States in terms of overcoming household debts for the selected period. The findings of this study can be used by relevant authorities to help improve ways and means of dealing with household debts in South Africa.

  18. Pairwise Choice Markov Chains

    OpenAIRE

    Ragain, Stephen; Ugander, Johan

    2016-01-01

    As datasets capturing human choices grow in richness and scale---particularly in online domains---there is an increasing need for choice models that escape traditional choice-theoretic axioms such as regularity, stochastic transitivity, and Luce's choice axiom. In this work we introduce the Pairwise Choice Markov Chain (PCMC) model of discrete choice, an inferentially tractable model that does not assume any of the above axioms while still satisfying the foundational axiom of uniform expansio...

  19. Metabolic network prediction through pairwise rational kernels.

    Science.gov (United States)

    Roche-Lima, Abiel; Domaratzki, Michael; Fristensky, Brian

    2014-09-26

    Metabolic networks are represented by the set of metabolic pathways. Metabolic pathways are a series of biochemical reactions, in which the product (output) from one reaction serves as the substrate (input) to another reaction. Many pathways remain incompletely characterized. One of the major challenges of computational biology is to obtain better models of metabolic pathways. Existing models are dependent on the annotation of the genes. This propagates error accumulation when the pathways are predicted by incorrectly annotated genes. Pairwise classification methods are supervised learning methods used to classify new pair of entities. Some of these classification methods, e.g., Pairwise Support Vector Machines (SVMs), use pairwise kernels. Pairwise kernels describe similarity measures between two pairs of entities. Using pairwise kernels to handle sequence data requires long processing times and large storage. Rational kernels are kernels based on weighted finite-state transducers that represent similarity measures between sequences or automata. They have been effectively used in problems that handle large amount of sequence information such as protein essentiality, natural language processing and machine translations. We create a new family of pairwise kernels using weighted finite-state transducers (called Pairwise Rational Kernel (PRK)) to predict metabolic pathways from a variety of biological data. PRKs take advantage of the simpler representations and faster algorithms of transducers. Because raw sequence data can be used, the predictor model avoids the errors introduced by incorrect gene annotations. We then developed several experiments with PRKs and Pairwise SVM to validate our methods using the metabolic network of Saccharomyces cerevisiae. As a result, when PRKs are used, our method executes faster in comparison with other pairwise kernels. Also, when we use PRKs combined with other simple kernels that include evolutionary information, the accuracy

  20. Pairwise harmonics for shape analysis

    KAUST Repository

    Zheng, Youyi

    2013-07-01

    This paper introduces a simple yet effective shape analysis mechanism for geometry processing. Unlike traditional shape analysis techniques which compute descriptors per surface point up to certain neighborhoods, we introduce a shape analysis framework in which the descriptors are based on pairs of surface points. Such a pairwise analysis approach leads to a new class of shape descriptors that are more global, discriminative, and can effectively capture the variations in the underlying geometry. Specifically, we introduce new shape descriptors based on the isocurves of harmonic functions whose global maximum and minimum occur at the point pair. We show that these shape descriptors can infer shape structures and consistently lead to simpler and more efficient algorithms than the state-of-the-art methods for three applications: intrinsic reflectional symmetry axis computation, matching shape extremities, and simultaneous surface segmentation and skeletonization. © 2012 IEEE.

  1. Pairwise conjoint analysis of activity engagement choice

    NARCIS (Netherlands)

    Wang, Donggen; Oppewal, H.; Timmermans, H.J.P.

    2000-01-01

    Information overload is a well-known problem of conjoint choice models when respondents have to evaluate a large number of attributes and/or attribute levels. In this paper we develop an alternative conjoint modelling approach, called pairwise conjoint analysis. It differs from conventional conjoint

  2. A pragmatic pairwise group-decision method for selection of sites for nuclear power plants

    International Nuclear Information System (INIS)

    Kutbi, I.I.

    1987-01-01

    A pragmatic pairwise group-decision approach is applied to compare two regions in order to select the more suitable one for construction of nuclear power plants in the Kingdom of Saudi Arabia. The selection methodology is based on pairwise comparison by forced choice. The method facilitates rating of the regions or sites using simple calculations. Two regions, one close to Dhahran on the Arabian Gulf and another close to Jeddah on the Red Sea, are evaluated. No specific site in either region is considered at this stage. The comparison is based on a set of selection criteria which include (i) topography, (ii) geology, (iii) seismology, (iv) meteorology, (v) oceanography, (vi) hydrology and (vii) proximity to oil and gas fields. The comparison shows that the Jeddah region is more suitable than the Dhahran region. (orig.)
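
    The forced-choice rating described above can be sketched as a simple tally: for each criterion the panel must choose one region, and the counts give the overall rating. The per-criterion winners listed below are invented for illustration and do not reproduce the study's assessments.

```python
# Sketch of a forced-choice tally over the selection criteria (votes are hypothetical).
criteria_votes = {
    "topography": "Jeddah", "geology": "Dhahran", "seismology": "Jeddah",
    "meteorology": "Jeddah", "oceanography": "Dhahran",
    "hydrology": "Jeddah", "proximity to oil and gas fields": "Dhahran",
}

scores = {"Jeddah": 0, "Dhahran": 0}
for criterion, winner in criteria_votes.items():
    scores[winner] += 1          # each forced choice adds one point to the chosen region

print(max(scores, key=scores.get), scores)
```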

  3. Pseudo inputs for pairwise learning with Gaussian processes

    DEFF Research Database (Denmark)

    Nielsen, Jens Brehm; Jensen, Bjørn Sand; Larsen, Jan

    2012-01-01

    We consider learning and prediction of pairwise comparisons between instances. The problem is motivated from a perceptual viewpoint, where pairwise comparisons serve as an effective and extensively used paradigm. A state-of-the-art method for modeling pairwise data in high dimensional domains...... is based on a classical pairwise probit likelihood imposed with a Gaussian process prior. While extremely flexible, this non-parametric method struggles with an inconvenient O(n³) scaling in terms of the n input instances, which limits the method to smaller problems. To overcome this, we derive...... to other similar approximations that have been applied in standard Gaussian process regression and classification problems such as FI(T)C and PI(T)C....
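
    The classical pairwise probit likelihood mentioned in this record can be written in a few lines: the probability that instance u is preferred to instance v given latent utilities f(u) and f(v) and comparison noise sigma. The sketch below is generic background, not the paper's sparse pseudo-input approximation.

```python
# Sketch of the classical pairwise probit likelihood used with Gaussian process priors.
import numpy as np
from scipy.stats import norm

def pairwise_probit_likelihood(f_u, f_v, sigma=1.0):
    """Probability that u is preferred to v, given latent values f(u), f(v)."""
    return norm.cdf((f_u - f_v) / (np.sqrt(2.0) * sigma))

print(pairwise_probit_likelihood(1.2, 0.4))   # probability that u beats v
```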

  4. Pairwise Sequence Alignment Library

    Energy Technology Data Exchange (ETDEWEB)

    2015-05-20

    Vector extensions, such as SSE, have been part of the x86 CPU since the 1990s, with applications in graphics, signal processing, and scientific computing. Although many algorithms and applications can naturally benefit from automatic vectorization techniques, there are still many that are difficult to vectorize due to their dependence on irregular data structures, dense branch operations, or data dependencies. Sequence alignment, one of the most widely used operations in bioinformatics workflows, has a computational footprint that features complex data dependencies. The trend of widening vector registers adversely affects the state-of-the-art sequence alignment algorithm based on striped data layouts. Therefore, a novel SIMD implementation of a parallel scan-based sequence alignment algorithm that can better exploit wider SIMD units was implemented as part of the Parallel Sequence Alignment Library (parasail). Parasail features: reference implementations of all known vectorized sequence alignment approaches; implementations of Smith-Waterman (SW), semi-global (SG), and Needleman-Wunsch (NW) sequence alignment algorithms; implementations across all modern CPU instruction sets including AVX2 and KNC; and language interfaces for C/C++ and Python.
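
    A hedged usage sketch of the library's Python interface is shown below; the function name parasail.nw, the substitution matrix object parasail.blosum62 and the result's score attribute are recalled from the project documentation and should be verified against the parasail README before use.

```python
# Hedged sketch of calling parasail's Python bindings for a global (NW) alignment;
# function/attribute names are assumptions to check against the project README.
import parasail

result = parasail.nw("AAGCTT", "AAGGTT", 10, 1, parasail.blosum62)  # gap open 10, extend 1
print(result.score)
```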

  5. A Novel Pairwise Comparison-Based Method to Determine Radiation Dose Reduction Potentials of Iterative Reconstruction Algorithms, Exemplified Through Circle of Willis Computed Tomography Angiography.

    Science.gov (United States)

    Ellmann, Stephan; Kammerer, Ferdinand; Brand, Michael; Allmendinger, Thomas; May, Matthias S; Uder, Michael; Lell, Michael M; Kramer, Manuel

    2016-05-01

    The aim of this study was to determine the dose reduction potential of iterative reconstruction (IR) algorithms in computed tomography angiography (CTA) of the circle of Willis using a novel method of evaluating the quality of radiation dose-reduced images. This study relied on ReconCT, proprietary reconstruction software that allows simulating CT scans acquired with reduced radiation dose based on the raw data of true scans. To evaluate the performance of ReconCT in this regard, a phantom study was performed to compare the image noise of true and simulated scans within simulated vessels of a head phantom. Following that, 10 patients scheduled for CTA of the circle of Willis were scanned according to our institute's standard protocol (100 kV, 145 reference mAs). Subsequently, CTA images of these patients were reconstructed as either a full-dose weighted filtered back projection or with radiation dose reductions down to 10% of the full-dose level and Sinogram-Affirmed Iterative Reconstruction (SAFIRE) with either strength 3 or 5. Images were marked with arrows pointing at vessels of different sizes, and image pairs were presented to observers. Five readers assessed image quality with 2-alternative forced choice comparisons. In the phantom study, no significant differences were observed between the noise levels of simulated and true scans in filtered back projection, SAFIRE 3, and SAFIRE 5 reconstructions. The dose reduction potential for patient scans showed a strong dependence on IR strength as well as on the size of the vessel of interest. Thus, the potential radiation dose reductions ranged from 84.4% for the evaluation of great vessels reconstructed with SAFIRE 5 to 40.9% for the evaluation of small vessels reconstructed with SAFIRE 3. This study provides a novel image quality evaluation method based on 2-alternative forced choice comparisons. In CTA of the circle of Willis, higher IR strengths and greater vessel sizes allowed higher degrees of radiation dose

  6. Analyzing the drivers of green manufacturing with fuzzy approach

    DEFF Research Database (Denmark)

    Govindan, Kannan; Diabat, Ali; Madan Shankar, K.

    2015-01-01

    India, and aided by their replies; a pair-wise comparison was made among the drivers. The pair-wise comparison is used as input data, and the drivers were analyzed on that basis. The analysis resorted to a fuzzy Multi Criteria Decision Making (MCDM) approach. The obtained results...

  7. SVM-dependent pairwise HMM: an application to protein pairwise alignments.

    Science.gov (United States)

    Orlando, Gabriele; Raimondi, Daniele; Khan, Taushif; Lenaerts, Tom; Vranken, Wim F

    2017-12-15

    Methods able to provide reliable protein alignments are crucial for many bioinformatics applications. In recent years many different algorithms have been developed, and various kinds of information, from sequence conservation to secondary structure, have been used to improve alignment performance. This is especially relevant for proteins with highly divergent sequences. However, recent works suggest that different features may have different importance in diverse protein classes, and it would be an advantage to have more customizable approaches, capable of dealing with different alignment definitions. Here we present Rigapollo, a highly flexible pairwise alignment method based on a pairwise HMM-SVM that can use any type of information to build alignments. Rigapollo lets the user choose the optimal features for aligning their protein class of interest. It outperforms current state-of-the-art methods on two well-known benchmark datasets when aligning highly divergent sequences. A Python implementation of the algorithm is available at http://ibsquare.be/rigapollo. wim.vranken@vub.be. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  8. Statistical physics of pairwise probability models

    Directory of Open Access Journals (Sweden)

    Yasser Roudi

    2009-11-01

    Full Text Available Statistical models for describing the probability distribution over the states of biological systems are commonly used for dimensional reduction. Among these models, pairwise models are very attractive in part because they can be fit using a reasonable amount of data: knowledge of the means and correlations between pairs of elements in the system is sufficient. Not surprisingly, then, using pairwise models for studying neural data has been the focus of many studies in recent years. In this paper, we describe how tools from statistical physics can be employed for studying and using pairwise models. We build on our previous work on the subject and study the relation between different methods for fitting these models and evaluating their quality. In particular, using data from simulated cortical networks we study how the quality of various approximate methods for inferring the parameters in a pairwise model depends on the time bin chosen for binning the data. We also study the effect of the size of the time bin on the model quality itself, again using simulated data. We show that using finer time bins increases the quality of the pairwise model. We offer new ways of deriving the expressions reported in our previous work for assessing the quality of pairwise models.
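
    The abstract above concerns fitting pairwise models from means and pairwise correlations. The sketch below shows one standard textbook approximation of this kind, the naive mean-field (nMF) inversion, applied to random stand-in data; it is a generic illustration, not the authors' code, and assumes ±1 spins.

        import numpy as np

        rng = np.random.default_rng(0)
        spins = rng.choice([-1, 1], size=(5000, 10))       # stand-in for binned neural data

        m = spins.mean(axis=0)                             # means <s_i>
        C = np.cov(spins, rowvar=False)                    # connected pairwise correlations
        J = -np.linalg.inv(C)                              # nMF couplings (off-diagonal part)
        np.fill_diagonal(J, 0.0)
        h = np.arctanh(m) - J @ m                          # nMF external fields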

  9. Robust Subjective Visual Property Prediction from Crowdsourced Pairwise Labels.

    Science.gov (United States)

    Fu, Yanwei; Hospedales, Timothy M; Xiang, Tao; Xiong, Jiechao; Gong, Shaogang; Wang, Yizhou; Yao, Yuan

    2016-03-01

    The problem of estimating subjective visual properties from image and video has attracted increasing interest. A subjective visual property is useful either on its own (e.g. image and video interestingness) or as an intermediate representation for visual recognition (e.g. a relative attribute). Due to its ambiguous nature, annotating the value of a subjective visual property for learning a prediction model is challenging. To make the annotation more reliable, recent studies employ crowdsourcing tools to collect pairwise comparison labels. However, using crowdsourced data also introduces outliers. Existing methods rely on majority voting to prune the annotation outliers/errors. They thus require a large amount of pairwise labels to be collected. More importantly as a local outlier detection method, majority voting is ineffective in identifying outliers that can cause global ranking inconsistencies. In this paper, we propose a more principled way to identify annotation outliers by formulating the subjective visual property prediction task as a unified robust learning to rank problem, tackling both the outlier detection and learning to rank jointly. This differs from existing methods in that (1) the proposed method integrates local pairwise comparison labels together to minimise a cost that corresponds to global inconsistency of ranking order, and (2) the outlier detection and learning to rank problems are solved jointly. This not only leads to better detection of annotation outliers but also enables learning with extremely sparse annotations.
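
    The paper above treats outlier detection and learning to rank jointly. For contrast, the sketch below shows only the simplest non-robust baseline: turning a handful of pairwise margin labels into a global score by least squares (a HodgeRank-style estimate). The items and margins are made up for illustration.

        import numpy as np

        comparisons = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 2.0), (2, 3, 1.0), (3, 0, -2.5)]
        n = 4
        A = np.zeros((len(comparisons), n))
        y = np.zeros(len(comparisons))
        for row, (i, j, margin) in enumerate(comparisons):
            A[row, i], A[row, j] = 1.0, -1.0               # encode score difference s_i - s_j
            y[row] = margin
        scores, *_ = np.linalg.lstsq(A, y, rcond=None)     # minimum-norm least-squares scores
        ranking = np.argsort(-scores)                      # best item first
        print(ranking)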

  10. Statistical pairwise interaction model of stock market

    Science.gov (United States)

    Bury, Thomas

    2013-03-01

    Financial markets are a classical example of complex systems as they are composed of many interacting stocks. As such, we can obtain a surprisingly good description of their structure by making the rough simplification of binary daily returns. Spin glass models have been applied and gave some valuable results, but at the price of restrictive assumptions on the market dynamics; alternatively, agent-based models rely on rules designed to recover some empirical behaviors. Here we show that the pairwise model is actually statistically consistent with the observed first and second moments of the stock orientations, without making such restrictive assumptions. This is done with an approach based only on empirical data of price returns. Our data analysis of six major indices suggests that the actual interaction structure may be thought of as an Ising model on a complex network with interaction strengths scaling as the inverse of the system size. This has potentially important implications since many properties of such a model are already known and some techniques of spin glass theory can be straightforwardly applied. Typical behaviors, such as multiple equilibria or metastable states, different characteristic time scales, spatial patterns, and order-disorder phenomena, could find an explanation in this picture.

  11. Predicting community composition from pairwise interactions

    Science.gov (United States)

    Friedman, Jonathan; Higgins, Logan; Gore, Jeff

    The ability to predict the structure of complex, multispecies communities is crucial for understanding the impact of species extinction and invasion on natural communities, as well as for engineering novel, synthetic communities. Communities are often modeled using phenomenological models, such as the classical generalized Lotka-Volterra (gLV) model. While a lot of our intuition comes from such models, their predictive power has rarely been tested experimentally. To directly assess the predictive power of this approach, we constructed synthetic communities comprised of up to 8 soil bacteria. We measured the outcome of competition between all species pairs, and used these measurements to predict the composition of communities composed of more than 2 species. The pairwise competitions resulted in a diverse set of outcomes, including coexistence, exclusion, and bistability, and displayed evidence for both interference and facilitation. Most pair outcomes could be captured by the gLV framework, and the composition of multispecies communities could be predicted for communities composed solely of such pairs. Our results demonstrate the predictive ability and utility of simple phenomenology, which enables accurate predictions in the absence of mechanistic details.
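
    The phenomenological model referred to above is the generalized Lotka-Volterra (gLV) system dx_i/dt = x_i (r_i + sum_j A_ij x_j). A generic two-species sketch follows; the growth rates and interaction coefficients are made-up illustrative values (chosen so the pair coexists), not parameters estimated from the experiments.

        import numpy as np
        from scipy.integrate import solve_ivp

        r = np.array([1.0, 0.8])                 # intrinsic growth rates
        A = np.array([[-1.0, -0.5],              # self-limitation and competition
                      [-0.5, -1.0]])

        def glv(t, x):
            return x * (r + A @ x)               # generalized Lotka-Volterra dynamics

        sol = solve_ivp(glv, (0.0, 50.0), [0.01, 0.01])
        print(sol.y[:, -1])                      # long-run abundances (here: coexistence)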

  12. Statistical physics of pairwise probability models

    DEFF Research Database (Denmark)

    Roudi, Yasser; Aurell, Erik; Hertz, John

    2009-01-01

    (no Danish abstract available) Statistical models for describing the probability distribution over the states of biological systems are commonly used for dimensional reduction. Among these models, pairwise models are very attractive in part because they can be fit using a reasonable amount of data......: knowledge of the means and correlations between pairs of elements in the system is sufficient. Not surprisingly, then, using pairwise models for studying neural data has been the focus of many studies in recent years. In this paper, we describe how tools from statistical physics can be employed for studying...

  13. ProteinWorldDB: querying radical pairwise alignments among protein sets from complete genomes.

    Science.gov (United States)

    Otto, Thomas Dan; Catanho, Marcos; Tristão, Cristian; Bezerra, Márcia; Fernandes, Renan Mathias; Elias, Guilherme Steinberger; Scaglia, Alexandre Capeletto; Bovermann, Bill; Berstis, Viktors; Lifschitz, Sergio; de Miranda, Antonio Basílio; Degrave, Wim

    2010-03-01

    Many analyses in modern biological research are based on comparisons between biological sequences, resulting in functional, evolutionary and structural inferences. When large numbers of sequences are compared, heuristics are often used resulting in a certain lack of accuracy. In order to improve and validate results of such comparisons, we have performed radical all-against-all comparisons of 4 million protein sequences belonging to the RefSeq database, using an implementation of the Smith-Waterman algorithm. This extremely intensive computational approach was made possible with the help of World Community Grid, through the Genome Comparison Project. The resulting database, ProteinWorldDB, which contains coordinates of pairwise protein alignments and their respective scores, is now made available. Users can download, compare and analyze the results, filtered by genomes, protein functions or clusters. ProteinWorldDB is integrated with annotations derived from Swiss-Prot, Pfam, KEGG, NCBI Taxonomy database and gene ontology. The database is a unique and valuable asset, representing a major effort to create a reliable and consistent dataset of cross-comparisons of the whole protein content encoded in hundreds of completely sequenced genomes using a rigorous dynamic programming approach. The database can be accessed through http://proteinworlddb.org

  14. Unjamming in models with analytic pairwise potentials

    NARCIS (Netherlands)

    Kooij, S.; Lerner, E.

    Canonical models for studying the unjamming scenario in systems of soft repulsive particles assume pairwise potentials with a sharp cutoff in the interaction range. The sharp cutoff renders the potential nonanalytic but makes it possible to describe many properties of the solid in terms of the

  15. A decomposition of pairwise continuity via ideals

    Directory of Open Access Journals (Sweden)

    Mahes Wari

    2016-02-01

    Full Text Available In this paper, we introduce and study the notions of (i, j)-regular-ℐ-closed sets, (i, j)-Aℐ-sets, (i, j)-ℐ-locally closed sets, p-Aℐ-continuous functions and p-ℐ-LC-continuous functions in ideal bitopological spaces and investigate some of their properties. Also, a new decomposition of pairwise continuity is obtained using these sets.

  16. PAIRWISE BLENDING OF HIGH LEVEL WASTE

    International Nuclear Information System (INIS)

    CERTA, P.J.

    2006-01-01

    The primary objective of this study is to demonstrate a mission scenario that uses pairwise and incidental blending of high level waste (HLW) to reduce the total mass of HLW glass. Secondary objectives include understanding how recent refinements to the tank waste inventory and solubility assumptions affect the mass of HLW glass and how logistical constraints may affect the efficacy of HLW blending

  17. Unjamming in models with analytic pairwise potentials

    Science.gov (United States)

    Kooij, Stefan; Lerner, Edan

    2017-06-01

    Canonical models for studying the unjamming scenario in systems of soft repulsive particles assume pairwise potentials with a sharp cutoff in the interaction range. The sharp cutoff renders the potential nonanalytic but makes it possible to describe many properties of the solid in terms of the coordination number z , which has an unambiguous definition in these cases. Pairwise potentials without a sharp cutoff in the interaction range have not been studied in this context, but should in fact be considered to understand the relevance of the unjamming phenomenology in systems where such a cutoff is not present. In this work we explore two systems with such interactions: an inverse power law and an exponentially decaying pairwise potential, with the control parameters being the exponent (of the inverse power law) for the former and the number density for the latter. Both systems are shown to exhibit the characteristic features of the unjamming transition, among which are the vanishing of the shear-to-bulk modulus ratio and the emergence of an excess of low-frequency vibrational modes. We establish a relation between the pressure-to-bulk modulus ratio and the distance to unjamming in each of our model systems. This allows us to predict the dependence of other key observables on the distance to unjamming. Our results provide the means for a quantitative estimation of the proximity of generic glass-forming models to the unjamming transition in the absence of a clear-cut definition of the coordination number and highlight the general irrelevance of nonaffine contributions to the bulk modulus.

  18. Robot Trajectories Comparison: A Statistical Approach

    Directory of Open Access Journals (Sweden)

    A. Ansuategui

    2014-01-01

    Full Text Available The task of planning a collision-free trajectory from a start to a goal position is fundamental for an autonomous mobile robot. Although path planning has been extensively investigated since the beginning of robotics, there is no agreement on how to measure the performance of a motion algorithm. This paper presents a new approach to perform robot trajectories comparison that could be applied to any kind of trajectories and in both simulated and real environments. Given an initial set of features, it automatically selects the most significant ones and performs a statistical comparison using them. Additionally, a graphical data visualization named polygraph which helps to better understand the obtained results is provided. The proposed method has been applied, as an example, to compare two different motion planners, FM2 and WaveFront, using different environments, robots, and local planners.

  19. Robot Trajectories Comparison: A Statistical Approach

    Science.gov (United States)

    Ansuategui, A.; Arruti, A.; Susperregi, L.; Yurramendi, Y.; Jauregi, E.; Lazkano, E.; Sierra, B.

    2014-01-01

    The task of planning a collision-free trajectory from a start to a goal position is fundamental for an autonomous mobile robot. Although path planning has been extensively investigated since the beginning of robotics, there is no agreement on how to measure the performance of a motion algorithm. This paper presents a new approach to perform robot trajectories comparison that could be applied to any kind of trajectories and in both simulated and real environments. Given an initial set of features, it automatically selects the most significant ones and performs a statistical comparison using them. Additionally, a graphical data visualization named polygraph which helps to better understand the obtained results is provided. The proposed method has been applied, as an example, to compare two different motion planners, FM2 and WaveFront, using different environments, robots, and local planners. PMID:25525618

  20. Pairwise Trajectory Management (PTM): Concept Overview

    Science.gov (United States)

    Jones, Kenneth M.; Graff, Thomas J.; Chartrand, Ryan C.; Carreno, Victor; Kibler, Jennifer L.

    2017-01-01

    Pairwise Trajectory Management (PTM) is an Interval Management (IM) concept that utilizes airborne and ground-based capabilities to enable the implementation of airborne pairwise spacing capabilities in oceanic regions. The goal of PTM is to use airborne surveillance and tools to manage an "at or greater than" inter-aircraft spacing. Due to the precision of Automatic Dependent Surveillance-Broadcast (ADS-B) information and the use of airborne spacing guidance, the PTM minimum spacing distance will be less than distances a controller can support with current automation systems that support oceanic operations. Ground tools assist the controller in evaluating the traffic picture and determining appropriate PTM clearances to be issued. Avionics systems provide guidance information that allows the flight crew to conform to the PTM clearance issued by the controller. The combination of a reduced minimum distance and airborne spacing management will increase the capacity and efficiency of aircraft operations at a given altitude or volume of airspace. This paper provides an overview of the proposed application, description of a few key scenarios, high level discussion of expected air and ground equipment and procedure changes, overview of a potential flight crew human-machine interface that would support PTM operations and some initial PTM benefits results.

  1. Measuring pair-wise molecular interactions in a complex mixture

    Science.gov (United States)

    Chakraborty, Krishnendu; Varma, Manoj M.; Venkatapathi, Murugesan

    2016-03-01

    Complex biological samples such as serum contain thousands of proteins and other molecules spanning up to 13 orders of magnitude in concentration. Present measurement techniques do not permit the analysis of all pair-wise interactions between the components of such a complex mixture to a given target molecule. In this work we explore the use of nanoparticle tags which encode the identity of the molecule to obtain the statistical distribution of pair-wise interactions using their Localized Surface Plasmon Resonance (LSPR) signals. The nanoparticle tags are chosen such that the binding between two molecules conjugated to the respective nanoparticle tags can be recognized by the coupling of their LSPR signals. A numerical simulation using the discrete dipole approximation (DDA) is carried out to investigate this approach using a reduced system consisting of three nanoparticles (a gold ellipsoid with aspect ratio 2.5 and short axis 16 nm, and two silver ellipsoids with aspect ratios 3 and 2 and short axes 8 nm and 10 nm respectively) and the set of all possible dimers formed between them. Incident light was circularly polarized and all possible particle and dimer orientations were considered. We observed that the minimum peak separation between two spectra is 5 nm while the maximum is 184 nm.

  2. Locating one pairwise interaction: Three recursive constructions

    Directory of Open Access Journals (Sweden)

    Charles J. Colbourn

    2016-09-01

    Full Text Available In a complex component-based system, choices (levels) for components (factors) may interact to cause faults in the system behaviour. When faults may be caused by interactions among few factors at specific levels, covering arrays provide a combinatorial test suite for discovering the presence of faults. While well studied, covering arrays do not enable one to determine the specific levels of factors causing the faults; locating arrays ensure that the results from test suite execution suffice to determine the precise levels and factors causing faults, when the number of such causes is small. Constructions for locating arrays are at present limited to heuristic computational methods and quite specific direct constructions. In this paper three recursive constructions are developed for locating arrays to locate one pairwise interaction causing a fault.
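
    To make the covering-array guarantee concrete, the toy check below verifies that every pairwise combination of factor levels appears in at least one test of a made-up three-factor suite; locating arrays additionally make it possible to identify which pairwise interaction caused an observed fault. The factors, levels and tests here are hypothetical examples.

        from itertools import combinations, product

        levels = {"os": ["linux", "windows"], "db": ["mysql", "sqlite"], "ui": ["web", "cli"]}
        suite = [
            {"os": "linux",   "db": "mysql",  "ui": "web"},
            {"os": "linux",   "db": "sqlite", "ui": "cli"},
            {"os": "windows", "db": "mysql",  "ui": "cli"},
            {"os": "windows", "db": "sqlite", "ui": "web"},
        ]
        # A pair (f1=v1, f2=v2) is covered if some test assigns both values simultaneously.
        missing = [(f1, v1, f2, v2)
                   for f1, f2 in combinations(levels, 2)
                   for v1, v2 in product(levels[f1], levels[f2])
                   if not any(t[f1] == v1 and t[f2] == v2 for t in suite)]
        print("pairwise coverage complete" if not missing else missing)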

  3. Sequence comparison alignment-free approach based on suffix tree and L-words frequency.

    Science.gov (United States)

    Soares, Inês; Goios, Ana; Amorim, António

    2012-01-01

    The vast majority of methods available for sequence comparison rely on a first sequence alignment step, which requires a number of assumptions on evolutionary history and is sometimes very difficult or impossible to perform due to the abundance of gaps (insertions/deletions). In such cases, an alternative alignment-free method would prove valuable. Our method starts by a computation of a generalized suffix tree of all sequences, which is completed in linear time. Using this tree, the frequency of all possible words with a preset length L (L-words) in each sequence is rapidly calculated. Based on the L-words frequency profile of each sequence, a pairwise standard Euclidean distance is then computed producing a symmetric genetic distance matrix, which can be used to generate a neighbor joining dendrogram or a multidimensional scaling graph. We present an improvement to word counting alignment-free approaches for sequence comparison, by determining a single optimal word length and combining suffix tree structures to the word counting tasks. Our approach is, thus, a fast and simple application that proved to be efficient and powerful when applied to mitochondrial genomes. The algorithm was implemented in Python language and is freely available on the web.
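
    The published method counts L-words with a generalized suffix tree in linear time; the sketch below illustrates the same profile-and-distance idea with a plain dictionary counter, which is simpler but slower. The sequences and the word length L=3 are arbitrary examples.

        from collections import Counter
        from itertools import product
        import math

        def l_word_profile(seq, L=3, alphabet="ACGT"):
            words = ["".join(p) for p in product(alphabet, repeat=L)]
            counts = Counter(seq[i:i + L] for i in range(len(seq) - L + 1))
            total = max(len(seq) - L + 1, 1)
            return [counts[w] / total for w in words]      # L-word frequency profile

        def euclidean(p, q):
            return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

        seqs = {"s1": "ACGTACGTACGA", "s2": "ACGTTTGTACGA", "s3": "TTTTACGAACCC"}
        profiles = {k: l_word_profile(v) for k, v in seqs.items()}
        dist = {(a, b): euclidean(profiles[a], profiles[b]) for a in seqs for b in seqs}
        print(dist[("s1", "s2")], dist[("s1", "s3")])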

  4. Sequence Comparison Alignment-Free Approach Based on Suffix Tree and L-Words Frequency

    Directory of Open Access Journals (Sweden)

    Inês Soares

    2012-01-01

    Full Text Available The vast majority of methods available for sequence comparison rely on a first sequence alignment step, which requires a number of assumptions on evolutionary history and is sometimes very difficult or impossible to perform due to the abundance of gaps (insertions/deletions. In such cases, an alternative alignment-free method would prove valuable. Our method starts by a computation of a generalized suffix tree of all sequences, which is completed in linear time. Using this tree, the frequency of all possible words with a preset length L—L-words—in each sequence is rapidly calculated. Based on the L-words frequency profile of each sequence, a pairwise standard Euclidean distance is then computed producing a symmetric genetic distance matrix, which can be used to generate a neighbor joining dendrogram or a multidimensional scaling graph. We present an improvement to word counting alignment-free approaches for sequence comparison, by determining a single optimal word length and combining suffix tree structures to the word counting tasks. Our approach is, thus, a fast and simple application that proved to be efficient and powerful when applied to mitochondrial genomes. The algorithm was implemented in Python language and is freely available on the web.

  5. A Bayesian approach to PET reconstruction using image-modeling Gibbs priors: Implementation and comparison

    International Nuclear Information System (INIS)

    Chan, M.T.; Herman, G.T.; Levitan, E.

    1996-01-01

    We demonstrate that (i) classical methods of image reconstruction from projections can be improved upon by considering the output of such a method as a distorted version of the original image and applying a Bayesian approach to estimate from it the original image (based on a model of distortion and on a Gibbs distribution as the prior) and (ii) by selecting an "image-modeling" prior distribution (i.e., one which is such that it is likely that a random sample from it shares important characteristics of the images of the application area) one can improve over another Gibbs prior formulated using only pairwise interactions. We illustrate our approach using simulated Positron Emission Tomography (PET) data from realistic brain phantoms. Since algorithm performance ultimately depends on the diagnostic task being performed, we examine a number of different medically relevant figures of merit to give a fair comparison. Based on a training-and-testing evaluation strategy, we demonstrate that statistically significant improvements can be obtained using the proposed approach

  6. A scalable pairwise class interaction framework for multidimensional classification

    DEFF Research Database (Denmark)

    Arias, Jacinto; Gámez, Jose A.; Nielsen, Thomas Dyhre

    2016-01-01

    We present a general framework for multidimensional classification that cap- tures the pairwise interactions between class variables. The pairwise class inter- actions are encoded using a collection of base classifiers (Phase 1), for which the class predictions are combined in a Markov random fie...

  7. SDT: a virus classification tool based on pairwise sequence alignment and identity calculation.

    Directory of Open Access Journals (Sweden)

    Brejnev Muhizi Muhire

    Full Text Available The perpetually increasing rate at which viral full-genome sequences are being determined is creating a pressing demand for computational tools that will aid the objective classification of these genome sequences. Taxonomic classification approaches that are based on pairwise genetic identity measures are potentially highly automatable and are progressively gaining favour with the International Committee on Taxonomy of Viruses (ICTV). There are, however, various issues with the calculation of such measures that could potentially undermine the accuracy and consistency with which they can be applied to virus classification. Firstly, pairwise sequence identities computed based on multiple sequence alignments rather than on multiple independent pairwise alignments can lead to the deflation of identity scores with increasing dataset sizes. Also, when gap-characters need to be introduced during sequence alignments to account for insertions and deletions, methodological variations in the way that these characters are introduced and handled during pairwise genetic identity calculations can cause high degrees of inconsistency in the way that different methods classify the same sets of sequences. Here we present Sequence Demarcation Tool (SDT), a free user-friendly computer program that aims to provide a robust and highly reproducible means of objectively using pairwise genetic identity calculations to classify any set of nucleotide or amino acid sequences. SDT can produce publication quality pairwise identity plots and colour-coded distance matrices to further aid the classification of sequences according to ICTV approved taxonomic demarcation criteria. Besides a graphical interface version of the program for Windows computers, command-line versions of the program are available for a variety of different operating systems (including a parallel version for cluster computing platforms).
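
    As a toy illustration of why gap handling matters for pairwise identity scores, the function below computes identity from two already-aligned sequences under one explicit convention (columns where both sequences have a gap are ignored, gap-to-residue columns count as mismatches); other conventions would yield different values. The sequences are made up.

        def pairwise_identity(a, b, gap="-"):
            assert len(a) == len(b)
            scored = [(x, y) for x, y in zip(a, b) if not (x == gap and y == gap)]
            matches = sum(1 for x, y in scored if x == y and x != gap)
            return matches / len(scored)

        print(pairwise_identity("ACG-TACGT", "ACGTTAC-T"))   # 7 matches over 9 scored columns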

  8. On Solving Intransitivities in Repeated Pairwise Choices

    NARCIS (Netherlands)

    A. Maas (Arne); Th.G.G. Bezembinder (Thom); P.P. Wakker (Peter)

    1995-01-01

    textabstractAn operational method is presented for deriving a linear ranking of alternatives from repeated paired comparisons of the alternatives. Intransitivities in the observed preferences are cleared away by the introduction of decision errors of varying importance. An observed preference

  9. Towards Predicting Expressed Emotion in Music from Pairwise Comparisons

    DEFF Research Database (Denmark)

    Madsen, Jens; Jensen, Bjørn Sand; Larsen, Jan

    2012-01-01

    We introduce five regression models for the modeling of expressed emotion in music using data obtained in a two alternative forced choice listening experiment. The predictive performance of the proposed models is compared using learning curves, showing that all models converge to produce a similar...

  10. Parent-Child Similarity in Environmental Attitudes: A Pairwise Comparison

    Science.gov (United States)

    Leppanen, Jaana M.; Haahla, Anu E.; Lensu, Anssi M.; Kuitunen, Markku T.

    2012-01-01

    Are adolescents' environmental attitudes similar to their parents' attitudes? The main objective of this study is to examine what quantitative associations, if any, exist in parent-child environmental attitudes within the family. The survey data was collected assessing attitudes toward the environment and nature from 15-year-old students (n = 237)…

  11. Comparison between Fisherian and Bayesian approach to ...

    African Journals Online (AJOL)

    ... of its simplicity and optimality properties is normally used for two group cases. However, Bayesian approach is found to be better than Fisher's approach because of its low misclassification error rate. Keywords: variance-covariance matrices, centroids, prior probability, mahalanobis distance, probability of misclassification ...

  12. Geometric measure of pairwise quantum discord for superpositions of multipartite generalized coherent states

    International Nuclear Information System (INIS)

    Daoud, M.; Ahl Laamara, R.

    2012-01-01

    We give the explicit expressions of the pairwise quantum correlations present in superpositions of multipartite coherent states. Special attention is devoted to the evaluation of the geometric quantum discord. The dynamics of quantum correlations under a dephasing channel is analyzed. A comparison of geometric measure of quantum discord with that of concurrence shows that quantum discord in multipartite coherent states is more resilient to dissipative environments than is quantum entanglement. To illustrate our results, we consider some special superpositions of Weyl–Heisenberg, SU(2) and SU(1,1) coherent states which interpolate between Werner and Greenberger–Horne–Zeilinger states. -- Highlights: ► Pairwise quantum correlations in multipartite coherent states. ► Explicit expression of geometric quantum discord. ► Entanglement sudden death and quantum discord robustness. ► Generalized coherent states interpolating between Werner and Greenberger–Horne–Zeilinger states

  13. Geometric measure of pairwise quantum discord for superpositions of multipartite generalized coherent states

    Energy Technology Data Exchange (ETDEWEB)

    Daoud, M., E-mail: m_daoud@hotmail.com [Department of Physics, Faculty of Sciences, University Ibnou Zohr, Agadir (Morocco); Ahl Laamara, R., E-mail: ahllaamara@gmail.com [LPHE-Modeling and Simulation, Faculty of Sciences, University Mohammed V, Rabat (Morocco); Centre of Physics and Mathematics, CPM, CNESTEN, Rabat (Morocco)

    2012-07-16

    We give the explicit expressions of the pairwise quantum correlations present in superpositions of multipartite coherent states. Special attention is devoted to the evaluation of the geometric quantum discord. The dynamics of quantum correlations under a dephasing channel is analyzed. A comparison of geometric measure of quantum discord with that of concurrence shows that quantum discord in multipartite coherent states is more resilient to dissipative environments than is quantum entanglement. To illustrate our results, we consider some special superpositions of Weyl–Heisenberg, SU(2) and SU(1,1) coherent states which interpolate between Werner and Greenberger–Horne–Zeilinger states. -- Highlights: ► Pairwise quantum correlations in multipartite coherent states. ► Explicit expression of geometric quantum discord. ► Entanglement sudden death and quantum discord robustness. ► Generalized coherent states interpolating between Werner and Greenberger–Horne–Zeilinger states.

  14. Criteria for the singularity of a pairwise l1-distance matrix and their generalizations

    International Nuclear Information System (INIS)

    D'yakonov, Alexander G

    2012-01-01

    We study the singularity problem for the pairwise distance matrix of a system of points, as well as generalizations of this problem that are connected with applications to interpolation theory and with an algebraic approach to recognition problems. We obtain necessary and sufficient conditions on a system under which the dimension of the range space of polynomials of bounded degree over the columns of the distance matrix is less than the number of points in the system.
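
    A minimal numerical check of the property studied above: build the pairwise l1-distance matrix of a small point system and test whether it is singular by comparing its rank with the number of points. The four points below are an arbitrary example.

        import numpy as np
        from scipy.spatial.distance import cdist

        points = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
        D = cdist(points, points, metric="cityblock")     # pairwise l1 distances
        print(np.linalg.matrix_rank(D), D.shape[0])       # rank < n  <=>  D is singular

    For this particular configuration (the corners of a unit square) the matrix turns out to be rank deficient (rank 3 for 4 points), showing that singular pairwise l1-distance matrices do occur.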

  15. Criteria for the singularity of a pairwise l{sub 1}-distance matrix and their generalizations

    Energy Technology Data Exchange (ETDEWEB)

    D'yakonov, Alexander G [M. V. Lomonosov Moscow State University, Faculty of Computational Mathematics and Cybernetics, Moscow (Russian Federation)]

    2012-06-30

    We study the singularity problem for the pairwise distance matrix of a system of points, as well as generalizations of this problem that are connected with applications to interpolation theory and with an algebraic approach to recognition problems. We obtain necessary and sufficient conditions on a system under which the dimension of the range space of polynomials of bounded degree over the columns of the distance matrix is less than the number of points in the system.

  16. Comparison of Traditional and Constructivist Teaching Approaches ...

    African Journals Online (AJOL)

    The second section had 47 students and was taught using the traditional teaching approach. A learning strategy inventory questionnaire, adapted from the Strategy Inventory for Language Learning (SILL) for L2 students of English (Oxford, 1990), was employed before and after students were taught using two ...

  17. Pairwise Constraint-Guided Sparse Learning for Feature Selection.

    Science.gov (United States)

    Liu, Mingxia; Zhang, Daoqiang

    2016-01-01

    Feature selection aims to identify the most informative features for a compact and accurate data representation. As typical supervised feature selection methods, Lasso and its variants using L1-norm-based regularization terms have received much attention in recent studies, most of which use class labels as supervised information. Besides class labels, there are other types of supervised information, e.g., pairwise constraints that specify whether a pair of data samples belong to the same class (must-link constraint) or different classes (cannot-link constraint). However, most of existing L1-norm-based sparse learning methods do not take advantage of the pairwise constraints that provide us weak and more general supervised information. For addressing that problem, we propose a pairwise constraint-guided sparse (CGS) learning method for feature selection, where the must-link and the cannot-link constraints are used as discriminative regularization terms that directly concentrate on the local discriminative structure of data. Furthermore, we develop two variants of CGS, including: 1) semi-supervised CGS that utilizes labeled data, pairwise constraints, and unlabeled data and 2) ensemble CGS that uses the ensemble of pairwise constraint sets. We conduct a series of experiments on a number of data sets from University of California-Irvine machine learning repository, a gene expression data set, two real-world neuroimaging-based classification tasks, and two large-scale attribute classification tasks. Experimental results demonstrate the efficacy of our proposed methods, compared with several established feature selection methods.

  18. A configuration space of homologous proteins conserving mutual information and allowing a phylogeny inference based on pair-wise Z-score probabilities

    OpenAIRE

    Maréchal Eric; Ortet Philippe; Roy Sylvaine; Bastien Olivier

    2005-01-01

    Abstract Background Popular methods to reconstruct molecular phylogenies are based on multiple sequence alignments, in which addition or removal of data may change the resulting tree topology. We have sought a representation of homologous proteins that would conserve the information of pair-wise sequence alignments, respect probabilistic properties of Z-scores (Monte Carlo methods applied to pair-wise comparisons) and be the basis for a novel method of consistent and stable phylogenetic recon...

  19. Automatic Camera Calibration Using Multiple Sets of Pairwise Correspondences.

    Science.gov (United States)

    Vasconcelos, Francisco; Barreto, Joao P; Boyer, Edmond

    2018-04-01

    We propose a new method to add an uncalibrated node into a network of calibrated cameras using only pairwise point correspondences. While previous methods perform this task using triple correspondences, these are often difficult to establish when there is limited overlap between different views. In such challenging cases we must rely on pairwise correspondences and our solution becomes more advantageous. Our method includes an 11-point minimal solution for the intrinsic and extrinsic calibration of a camera from pairwise correspondences with other two calibrated cameras, and a new inlier selection framework that extends the traditional RANSAC family of algorithms to sampling across multiple datasets. Our method is validated on different application scenarios where a lack of triple correspondences might occur: addition of a new node to a camera network; calibration and motion estimation of a moving camera inside a camera network; and addition of views with limited overlap to a Structure-from-Motion model.

  20. SFESA: a web server for pairwise alignment refinement by secondary structure shifts.

    Science.gov (United States)

    Tong, Jing; Pei, Jimin; Grishin, Nick V

    2015-09-03

    Protein sequence alignment is essential for a variety of tasks such as homology modeling and active site prediction. Alignment errors remain the main cause of low-quality structure models. A bioinformatics tool to refine alignments is needed to make protein alignments more accurate. We developed the SFESA web server to refine pairwise protein sequence alignments. Compared to the previous version of SFESA, which required a set of 3D coordinates for a protein, the new server will search a sequence database for the closest homolog with an available 3D structure to be used as a template. For each alignment block defined by secondary structure elements in the template, SFESA evaluates alignment variants generated by local shifts and selects the best-scoring alignment variant. A scoring function that combines the sequence score of profile-profile comparison and the structure score of template-derived contact energy is used for evaluation of alignments. PROMALS pairwise alignments refined by SFESA are more accurate than those produced by current advanced alignment methods such as HHpred and CNFpred. In addition, SFESA also improves alignments generated by other software. SFESA is a web-based tool for alignment refinement, designed for researchers to compute, refine, and evaluate pairwise alignments with a combined sequence and structure scoring of alignment blocks. To our knowledge, the SFESA web server is the only tool that refines alignments by evaluating local shifts of secondary structure elements. The SFESA web server is available at http://prodata.swmed.edu/sfesa.

  1. Scalable Bayesian nonparametric measures for exploring pairwise dependence via Dirichlet Process Mixtures.

    Science.gov (United States)

    Filippi, Sarah; Holmes, Chris C; Nieto-Barajas, Luis E

    2016-11-16

    In this article we propose novel Bayesian nonparametric methods using Dirichlet Process Mixture (DPM) models for detecting pairwise dependence between random variables while accounting for uncertainty in the form of the underlying distributions. A key criterion is that the procedures should scale to large data sets. In this regard we find that the formal calculation of the Bayes factor for a dependent-vs.-independent DPM joint probability measure is not feasible computationally. To address this, we present Bayesian diagnostic measures for characterising evidence against a "null model" of pairwise independence. In simulation studies, as well as for a real data analysis, we show that our approach provides a useful tool for the exploratory nonparametric Bayesian analysis of large multivariate data sets.

  2. A Relative-Localization Algorithm Using Incomplete Pairwise Distance Measurements for Underwater Applications

    Directory of Open Access Journals (Sweden)

    Kae Y. Foo

    2010-01-01

    Full Text Available The task of localizing underwater assets involves the relative localization of each unit using only pairwise distance measurements, usually obtained from time-of-arrival or time-delay-of-arrival measurements. In the fluctuating underwater environment, a complete set of pair-wise distance measurements can often be difficult to acquire, thus hindering a straightforward closed-form solution in deriving the assets' relative coordinates. An iterative multidimensional scaling approach is presented based upon a weighted-majorization algorithm that tolerates missing or inaccurate distance measurements. Substantial modifications are proposed to optimize the algorithm, while the effects of refractive propagation paths are considered. A parametric study of the algorithm based upon simulation results is shown. An acoustic field-trial was then carried out, presenting field measurements to highlight the practical implementation of this algorithm.
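
    The paper above uses an iterative weighted-majorization form of multidimensional scaling to cope with missing or noisy ranges. As background, the sketch below shows classical (Torgerson) MDS, which recovers relative coordinates from a complete, exact pairwise distance matrix; the distances correspond to a made-up 3-4-5 triangle and the recovered coordinates are defined only up to rotation and reflection.

        import numpy as np

        D = np.array([[0.0, 3.0, 4.0],
                      [3.0, 0.0, 5.0],
                      [4.0, 5.0, 0.0]])                    # complete pairwise distances
        n = D.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n                # centering matrix
        B = -0.5 * J @ (D ** 2) @ J                        # double-centered squared distances
        vals, vecs = np.linalg.eigh(B)
        idx = np.argsort(vals)[::-1][:2]                   # two largest eigenvalues
        coords = vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))
        print(coords)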

  3. Pairwise comparison of 89Zr- and 124I-labeled cG250 based on positron emission tomography imaging and nonlinear immunokinetic modeling: in vivo carbonic anhydrase IX receptor binding and internalization in mouse xenografts of clear-cell renal cell carcinoma

    International Nuclear Information System (INIS)

    Cheal, Sarah M.; Punzalan, Blesida; Doran, Michael G.; Osborne, Joseph R.; Evans, Michael J.; Lewis, Jason S.; Zanzonico, Pat; Larson, Steven M.

    2014-01-01

    The PET tracer, 124I-cG250, directed against carbonic anhydrase IX (CAIX) shows promise for presurgical diagnosis of clear-cell renal cell carcinoma (ccRCC) (Divgi et al. in Lancet Oncol 8:304-310, 2007; Divgi et al. in J Clin Oncol 31:187-194, 2013). The radiometal 89Zr, however, may offer advantages as a surrogate PET nuclide over 124I in terms of greater tumor uptake and retention (Rice et al. in Semin Nucl Med 41:265-282, 2011). We have developed a nonlinear immunokinetic model to facilitate a quantitative comparison of absolute uptake and antibody turnover between 124I-cG250 and 89Zr-cG250 using a human ccRCC xenograft tumor model in mice. We believe that this unique model better relates quantitative imaging data to the salient biological features of tumor antibody-antigen binding and turnover. We conducted experiments with 89Zr-cG250 and 124I-cG250 using a human ccRCC cell line (SK-RC-38) to characterize the binding affinity and internalization kinetics of the two tracers in vitro. Serial PET imaging was performed in mice bearing subcutaneous ccRCC tumors to simultaneously detect and quantify time-dependent tumor uptake in vivo. Using the known specific activities of the two tracers, the equilibrium rates of antibody internalization and turnover in the tumors were derived from the PET images using nonlinear compartmental modeling. The two tracers demonstrated virtually identical tumor cell binding and internalization but showed markedly different retentions in vitro. Superior PET images were obtained using 89Zr-cG250, owing to the more prolonged trapping of the radiolabel in the tumor and simultaneous washout from normal tissues. Estimates of cG250/CAIX complex turnover were 1.35-5.51 x 10^12 molecules per hour per gram of tumor (20% of receptors internalized per hour), and the ratio of 124I/89Zr atoms released per unit time by tumor was 17.5. Pairwise evaluation of 89Zr-cG250 and 124I-cG250 provided the basis for a nonlinear immunokinetic

  4. Pairwise comparison of {sup 89}Zr- and {sup 124}I-labeled cG250 based on positron emission tomography imaging and nonlinear immunokinetic modeling: in vivo carbonic anhydrase IX receptor binding and internalization in mouse xenografts of clear-cell renal cell carcinoma

    Energy Technology Data Exchange (ETDEWEB)

    Cheal, Sarah M.; Punzalan, Blesida; Doran, Michael G.; Osborne, Joseph R. [Memorial Sloan-Kettering Cancer Center, Department of Radiology, New York, NY (United States); Evans, Michael J. [Memorial Sloan-Kettering Cancer Center, Human Oncology and Pathogenesis Program, New York, NY (United States); Lewis, Jason S. [Memorial Sloan-Kettering Cancer Center, Department of Radiology, New York, NY (United States); Memorial Sloan-Kettering Cancer Center, Program in Molecular Pharmacology and Chemistry, New York, NY (United States); Memorial Sloan-Kettering Cancer Center, Radiochemistry and Imaging Sciences Service, New York, NY (United States); Zanzonico, Pat [Memorial Sloan-Kettering Cancer Center, Department of Radiology, New York, NY (United States); Memorial Sloan-Kettering Cancer Center, Molecular Pharmacology and Therapy Service, New York, NY (United States); Memorial-Sloan Kettering Cancer Center, New York, NY (United States); Larson, Steven M. [Memorial Sloan-Kettering Cancer Center, Department of Radiology, New York, NY (United States); Memorial Sloan-Kettering Cancer Center, Program in Molecular Pharmacology and Chemistry, New York, NY (United States); Memorial Sloan-Kettering Cancer Center, Molecular Pharmacology and Therapy Service, New York, NY (United States)

    2014-05-15

    The PET tracer, {sup 124}I-cG250, directed against carbonic anhydrase IX (CAIX) shows promise for presurgical diagnosis of clear-cell renal cell carcinoma (ccRCC) (Divgi et al. in Lancet Oncol 8:304-310, 2007; Divgi et al. in J Clin Oncol 31:187-194, 2013). The radiometal {sup 89}Zr, however, may offer advantages as a surrogate PET nuclide over {sup 124}I in terms of greater tumor uptake and retention (Rice et al. in Semin Nucl Med 41:265-282, 2011). We have developed a nonlinear immunokinetic model to facilitate a quantitative comparison of absolute uptake and antibody turnover between {sup 124}I-cG250 and {sup 89}Zr-cG250 using a human ccRCC xenograft tumor model in mice. We believe that this unique model better relates quantitative imaging data to the salient biological features of tumor antibody-antigen binding and turnover. We conducted experiments with {sup 89}Zr-cG250 and {sup 124}I-cG250 using a human ccRCC cell line (SK-RC-38) to characterize the binding affinity and internalization kinetics of the two tracers in vitro. Serial PET imaging was performed in mice bearing subcutaneous ccRCC tumors to simultaneously detect and quantify time-dependent tumor uptake in vivo. Using the known specific activities of the two tracers, the equilibrium rates of antibody internalization and turnover in the tumors were derived from the PET images using nonlinear compartmental modeling. The two tracers demonstrated virtually identical tumor cell binding and internalization but showed markedly different retentions in vitro. Superior PET images were obtained using {sup 89}Zr-cG250, owing to the more prolonged trapping of the radiolabel in the tumor and simultaneous washout from normal tissues. Estimates of cG250/CAIX complex turnover were 1.35 - 5.51 x 10{sup 12} molecules per hour per gram of tumor (20 % of receptors internalized per hour), and the ratio of {sup 124}I/{sup 89}Zr atoms released per unit time by tumor was 17.5. Pairwise evaluation of {sup 89}Zr-cG250 and {sup

  5. Comparison of Routable Control System Security Approaches

    Energy Technology Data Exchange (ETDEWEB)

    Edgar, Thomas W.; Hadley, Mark D.; Carroll, Thomas E.; Manz, David O.; Winn, Jennifer D.

    2011-06-01

    This document is a supplement to the 'Secure and Efficient Routable Control Systems.' It addresses security in routable control system communication. The control system environment that monitors and manages the power grid historically has utilized serial communication mechanisms. Leased-line serial communication environments operating at 1200 to 9600 baud rates are common. However, recent trends show that communication media such as fiber, optical carrier 3 (OC-3) speeds, mesh-based high-speed wireless, and the Internet are becoming the media of choice. In addition, a dichotomy has developed between the electrical transmission and distribution environments, with more modern communication infrastructures deployed by transmission utilities. The preceding diagram represents a typical control system. The Communication Links cloud supports all of the communication mechanisms a utility might deploy between the control center and devices in the field. Current methodologies used for security implementations are primarily led by single vendors or standards bodies. However, these entities tend to focus on individual protocols. The result is an environment that contains a mixture of security solutions that may only address some communication protocols at an increasing operational burden for the utility. A single approach is needed that meets operational requirements, is simple to operate, and provides the necessary level of security for all control system communication. The solution should be application independent (e.g., Distributed Network Protocol/Internet Protocol [DNP/IP], International Electrotechnical Commission [IEC] C37.118, Object Linking and Embedding for Process Control [OPC], etc.) and focus on the transport layer. In an ideal setting, a well-designed suite of standards for control system communication will be used for vendor implementation and compliance testing. An expected outcome of this effort is an international standard.

  6. Dynamics of pairwise motions in the Cosmic Web

    Science.gov (United States)

    Hellwing, Wojciech A.

    2016-10-01

    We present results of an analysis of the dark matter (DM) pairwise velocity statistics in different Cosmic Web environments. We use the DM velocity and density field from the Millennium 2 simulation together with the NEXUS+ algorithm to segment the simulation volume into voxels uniquely identifying one of the four possible environments: nodes, filaments, walls or cosmic voids. We show that the PDFs of the mean infall velocity v12, as well as its spatial dependence, together with the perpendicular and parallel velocity dispersions, bear a significant signal of the large-scale structure environment in which DM particle pairs are embedded. The pairwise flows are notably colder and have smaller mean magnitude in walls and voids, when compared to the much denser environments of filaments and nodes. We discuss our results, indicating that they are consistent with simple theoretical predictions for pairwise motions induced by the gravitational instability mechanism. Our results indicate that the Cosmic Web elements are coherent dynamical entities rather than just temporal geometrical associations. In addition, it should be possible to observationally test various Cosmic Web finding algorithms by segmenting available peculiar velocity data and studying the resulting pairwise velocity statistics.

  7. Improving prediction of heterodimeric protein complexes using combination with pairwise kernel.

    Science.gov (United States)

    Ruan, Peiying; Hayashida, Morihiro; Akutsu, Tatsuya; Vert, Jean-Philippe

    2018-02-19

    Since many proteins become functional only after they interact with their partner proteins and form protein complexes, it is essential to identify the sets of proteins that form complexes. Therefore, several computational methods have been proposed to predict complexes from the topology and structure of the experimental protein-protein interaction (PPI) network. These methods work well to predict complexes involving at least three proteins, but generally fail at identifying complexes involving only two different proteins, called heterodimeric complexes or heterodimers. There is, however, an urgent need for efficient methods to predict heterodimers, since the majority of known protein complexes are precisely heterodimers. In this paper, we use three promising kernel functions: the Min kernel and two pairwise kernels, which are the Metric Learning Pairwise Kernel (MLPK) and the Tensor Product Pairwise Kernel (TPPK). We also consider normalized forms of the Min kernel. Then, we combine the Min kernel or its normalized form and one of the pairwise kernels by plugging. We applied kernels based on PPI, domain, phylogenetic profile, and subcellular localization properties to predicting heterodimers. Then, we evaluate our method by employing C-Support Vector Classification (C-SVC), carrying out 10-fold cross-validation, and calculating the average F-measures. The results suggest that the combination of the normalized Min kernel and MLPK leads to the best F-measure and improved the performance of our previous work, which was previously the best existing method. We propose new methods to predict heterodimers, using a machine learning-based approach. We train a support vector machine (SVM) to discriminate interacting vs non-interacting protein pairs, based on information extracted from PPI, domain, phylogenetic profiles and subcellular localization. We evaluate in detail new kernel functions to encode these data, and report prediction performance that outperforms the state-of-the-art.
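
    Of the kernels named above, the tensor product pairwise kernel (TPPK) has a simple generic form: given a base kernel matrix K over individual proteins, the similarity between protein pairs (a, b) and (c, d) is K[a, c]*K[b, d] + K[a, d]*K[b, c]. The sketch below shows this standard construction with a made-up base kernel; it does not reproduce the authors' combination with the Min kernel.

        import numpy as np

        def tppk(K, pair1, pair2):
            a, b = pair1
            c, d = pair2
            return K[a, c] * K[b, d] + K[a, d] * K[b, c]   # symmetrized tensor product

        K = np.array([[1.0, 0.4, 0.2],
                      [0.4, 1.0, 0.5],
                      [0.2, 0.5, 1.0]])                    # made-up base similarities
        print(tppk(K, (0, 1), (0, 2)))                     # similarity between pairs (0,1) and (0,2)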

  8. A Comparison of MOOC Development and Delivery Approaches

    Science.gov (United States)

    Smith, Neil; Caldwell, Helen; Richards, Mike; Bandara, Arosha

    2017-01-01

    Purpose: The purpose of this paper is to present a comparison of two ways of developing and delivering massive open online courses (MOOCs). One was developed by The Open University in collaboration with FutureLearn; the other was developed independently by a small team at the Northampton University. Design/methodology/approach: The different…

  9. The pairwise disconnectivity index as a new metric for the topological analysis of regulatory networks

    Directory of Open Access Journals (Sweden)

    Wingender Edgar

    2008-05-01

    Full Text Available Abstract Background Currently, there is a gap between purely theoretical studies of the topology of large bioregulatory networks and the practical traditions and interests of experimentalists. While the theoretical approaches emphasize the global characterization of regulatory systems, the practical approaches focus on the role of distinct molecules and genes in regulation. To bridge the gap between these opposite approaches, one needs to combine 'general' with 'particular' properties and translate abstract topological features of large systems into testable functional characteristics of individual components. Here, we propose a new topological parameter – the pairwise disconnectivity index of a network's element – that is capable of such bridging. Results The pairwise disconnectivity index quantifies how crucial an individual element is for sustaining the communication ability between connected pairs of vertices in a network that is displayed as a directed graph. Such an element might be a vertex (i.e., molecules, genes), an edge (i.e., reactions, interactions), as well as a group of vertices and/or edges. The index can be viewed as a measure of topological redundancy of regulatory paths which connect different parts of a given network and as a measure of sensitivity (robustness) of this network to the presence (absence) of each individual element. Accordingly, we introduce the notion of a path-degree of a vertex in terms of its corresponding incoming, outgoing and mediated paths, respectively. The pairwise disconnectivity index has been applied to the analysis of several regulatory networks from various organisms. The importance of an individual vertex or edge for the coherence of the network is determined by the particular position of the given element in the whole network. Conclusion Our approach enables evaluation of the effect of removing each element (i.e., vertex, edge, or their combinations) from a network. The greatest potential value of
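
    One reasonable formalization of the index described above, for a single vertex v, is the fraction of ordered vertex pairs (not involving v) that are connected by a directed path in the full network but become disconnected once v is removed. The sketch below implements that reading with networkx on a made-up toy graph; it follows the abstract's verbal definition, not the authors' implementation.

        import itertools
        import networkx as nx

        def pairwise_disconnectivity(G, v):
            def connected_pairs(H):
                return {(s, t) for s, t in itertools.permutations(H.nodes, 2)
                        if nx.has_path(H, s, t)}
            before = {(s, t) for s, t in connected_pairs(G) if v not in (s, t)}
            H = G.copy()
            H.remove_node(v)
            after = connected_pairs(H)
            return 1.0 - len(after) / len(before) if before else 0.0

        G = nx.DiGraph([(1, 2), (2, 3), (3, 4), (1, 4)])
        print(pairwise_disconnectivity(G, 2))              # 1/3 of pairs lose their path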

  10. Living network meta-analysis compared with pairwise meta-analysis in comparative effectiveness research: empirical study.

    Science.gov (United States)

    Nikolakopoulou, Adriani; Mavridis, Dimitris; Furukawa, Toshi A; Cipriani, Andrea; Tricco, Andrea C; Straus, Sharon E; Siontis, George C M; Egger, Matthias; Salanti, Georgia

    2018-02-28

    To examine whether the continuous updating of networks of prospectively planned randomised controlled trials (RCTs) ("living" network meta-analysis) provides strong evidence against the null hypothesis in comparative effectiveness of medical interventions earlier than the updating of conventional, pairwise meta-analysis. Empirical study of the accumulating evidence about the comparative effectiveness of clinical interventions. Database of network meta-analyses of RCTs identified through searches of Medline, Embase, and the Cochrane Database of Systematic Reviews until 14 April 2015. Network meta-analyses published after January 2012 that compared at least five treatments and included at least 20 RCTs. Clinical experts were asked to identify in each network the treatment comparison of greatest clinical interest. Comparisons were excluded for which direct and indirect evidence disagreed, based on the side, or node, splitting test. Pairwise and network meta-analyses were performed for each selected comparison. Monitoring boundaries of statistical significance were constructed and the evidence against the null hypothesis was considered to be strong when the monitoring boundaries were crossed. The significance level was set at α=5%, the power at 90% (β=10%), and the anticipated treatment effect to detect was set equal to the final estimate from the network meta-analysis. The frequency of, and time to, strong evidence against the null hypothesis were compared between pairwise and network meta-analyses. 49 comparisons of interest from 44 networks were included; most (n=39, 80%) were between active drugs, mainly from the specialties of cardiology, endocrinology, psychiatry, and rheumatology. 29 comparisons were informed by both direct and indirect evidence (59%), 13 by indirect evidence (27%), and 7 by direct evidence (14%). Both network and pairwise meta-analysis provided strong evidence against the null hypothesis for seven comparisons, but for an additional 10 comparisons only network meta-analysis provided

  11. Dynamics of pairwise entanglement between two Tavis-Cummings atoms

    International Nuclear Information System (INIS)

    Guo Jinliang; Song Heshan

    2008-01-01

    We investigate the time evolution of pairwise entanglement between two Tavis-Cummings atoms for various entangled initial states, including pure and mixed states. We find that the phenomenon of entanglement sudden death is distinct in the evolution of entanglement for different initial states. Notably, the portion of the excited state in the initial state is responsible for the sudden death of entanglement, and the degree of this effect also depends on the initial states.

  12. Living network meta-analysis compared with pairwise meta-analysis in comparative effectiveness research: empirical study

    Science.gov (United States)

    Nikolakopoulou, Adriani; Mavridis, Dimitris; Furukawa, Toshi A; Cipriani, Andrea; Tricco, Andrea C; Straus, Sharon E; Siontis, George C M; Egger, Matthias

    2018-01-01

    Abstract Objective To examine whether the continuous updating of networks of prospectively planned randomised controlled trials (RCTs) ("living" network meta-analysis) provides strong evidence against the null hypothesis in comparative effectiveness of medical interventions earlier than the updating of conventional, pairwise meta-analysis. Design Empirical study of the accumulating evidence about the comparative effectiveness of clinical interventions. Data sources Database of network meta-analyses of RCTs identified through searches of Medline, Embase, and the Cochrane Database of Systematic Reviews until 14 April 2015. Eligibility criteria for study selection Network meta-analyses published after January 2012 that compared at least five treatments and included at least 20 RCTs. Clinical experts were asked to identify in each network the treatment comparison of greatest clinical interest. Comparisons were excluded for which direct and indirect evidence disagreed, based on the side, or node, splitting test. Pairwise and network meta-analyses were performed for each selected comparison. The frequency of, and time to, strong evidence against the null hypothesis were compared between pairwise and network meta-analyses. Results 49 comparisons of interest from 44 networks were included; most (n=39, 80%) were between active drugs, mainly from the specialties of cardiology, endocrinology, psychiatry, and rheumatology. 29 comparisons were informed by both direct and indirect evidence (59%), 13 by indirect evidence (27%), and 7 by direct evidence (14%). Both network and pairwise meta-analysis provided strong evidence against the null hypothesis for seven comparisons, but for an additional 10 comparisons only network meta-analysis provided strong evidence against the null hypothesis (P=0.002). The median time to strong evidence against the null hypothesis was 19 years with living network meta-analysis and 23 years with living pairwise meta-analysis (hazard ratio 2.78, 95% confidence interval 1.00 to 7.72, P=0.05). Studies directly comparing

  13. Data Analysis A Model Comparison Approach, Second Edition

    CERN Document Server

    Judd, Charles M; Ryan, Carey S

    2008-01-01

    This completely rewritten classic text features many new examples, insights and topics including mediational, categorical, and multilevel models. Substantially reorganized, this edition provides a briefer, more streamlined examination of data analysis. Noted for its model-comparison approach and unified framework based on the general linear model, the book provides readers with a greater understanding of a variety of statistical procedures. This consistent framework, including consistent vocabulary and notation, is used throughout to develop fewer but more powerful model building techniques. T

  14. Atomic pairwise distribution function analysis of the amorphous phase prepared by different manufacturing routes

    DEFF Research Database (Denmark)

    Boetker, Johan P.; Koradia, Vishal; Rades, Thomas

    2012-01-01

    was subjected to quench cooling thereby creating an amorphous form of the drug from both starting materials. The milled and quench cooled samples were, together with the crystalline starting materials, analyzed with X-ray powder diffraction (XRPD), Raman spectroscopy and atomic pair-wise distribution function...... (PDF) analysis of the XRPD pattern. When compared to XRPD and Raman spectroscopy, the PDF analysis was superior in displaying the difference between the amorphous samples prepared by milling and quench cooling approaches of the two starting materials....

  15. A Comparison of Microeconomic and Macroeconomic Approaches to Deforestation Analysis

    Directory of Open Access Journals (Sweden)

    Jeff Felardo

    2016-01-01

    Full Text Available The economics of deforestation has been explored in detail. Generally, the frame of analysis takes either a microeconomics or macroeconomics approach. The microeconomics approach assumes that individual decision makers are responsible for deforestation as a result of utility maximizing behavior and imperfect property right regimes. The macroeconomics approach explores nationwide trends thought to be associated with forest conversion. This paper investigates the relationship between these two approaches by empirically testing the determinants of deforestation using the same data set from Thailand. The theory for both the microeconomics-based and macroeconomics-based approaches is developed and then tested statistically. The models were constructed using established theoretical frames developed in the literature. The results from both models show statistical significance consistent with prior results in the tropical deforestation literature. A comparison of the two approaches demonstrates that the macro approach is useful in identifying relevant aggregate trends in the deforestation process; the micro approach provides the opportunity to isolate factors of those trends which are necessary for effective policy decisions.

  16. A comparison of verbal and numerical judgments in the analytic hierarchy process

    NARCIS (Netherlands)

    Huizingh, EKRE; Vrolijk, HCJ

    In the Analytic Hierarchy Process (AHP), decision makers make pairwise comparisons of alternatives and criteria. The AHP allows these pairwise comparisons to be made verbally or numerically. Although verbal statements are intuitively attractive for preference elicitation, there is overwhelming evidence
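    For reference, the sketch below shows the standard numerical treatment of such judgments: a toy reciprocal comparison matrix is reduced to a priority vector via its principal eigenvector, and a consistency ratio is computed. The matrix entries are invented, and the random-index values are those commonly quoted from Saaty's tables.

```python
import numpy as np

# Toy reciprocal pairwise comparison matrix for three criteria (Saaty's 1-9 scale).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                          # priority vector (normalized principal eigenvector)

n = A.shape[0]
lam_max = eigvals.real[k]
ci = (lam_max - n) / (n - 1)          # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]   # random index (standard tabulated values)
print("priorities:", w.round(3), "consistency ratio:", round(ci / ri, 3))
```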

  17. A comparison of approaches for simultaneous inference of fixed effects for multiple outcomes using linear mixed models

    DEFF Research Database (Denmark)

    Jensen, Signe Marie; Ritz, Christian

    2018-01-01

    Longitudinal studies with multiple outcomes often pose challenges for the statistical analysis. A joint model including all outcomes has the advantage of incorporating the simultaneous behavior but is often difficult to fit due to computational challenges. We consider 2 alternative approaches to ......, pairwise fitting shows a larger loss in efficiency than the marginal models approach. Using an alternative to the joint modelling strategy will lead to some but not necessarily a large loss of efficiency for small sample sizes....

  18. Pairwise Trajectory Management (PTM): Concept Description and Documentation

    Science.gov (United States)

    Jones, Kenneth M.; Graff, Thomas J.; Carreno, Victor; Chartrand, Ryan C.; Kibler, Jennifer L.

    2018-01-01

    Pairwise Trajectory Management (PTM) is an Interval Management (IM) concept that utilizes airborne and ground-based capabilities to enable the implementation of airborne pairwise spacing capabilities in oceanic regions. The goal of PTM is to use airborne surveillance and tools to manage an "at or greater than" inter-aircraft spacing. Due to the accuracy of Automatic Dependent Surveillance-Broadcast (ADS-B) information and the use of airborne spacing guidance, the minimum PTM spacing distance will be less than distances a controller can support with current automation systems that support oceanic operations. Ground tools assist the controller in evaluating the traffic picture and determining appropriate PTM clearances to be issued. Avionics systems provide guidance information that allows the flight crew to conform to the PTM clearance issued by the controller. The combination of a reduced minimum distance and airborne spacing management will increase the capacity and efficiency of aircraft operations at a given altitude or volume of airspace. This document provides an overview of the proposed application, a description of several key scenarios, a high level discussion of expected air and ground equipment and procedure changes, a description of a NASA human-machine interface (HMI) prototype for the flight crew that would support PTM operations, and initial benefits analysis results. Additionally, included as appendices, are the following documents: the PTM Operational Services and Environment Definition (OSED) document and a companion "Future Considerations for the Pairwise Trajectory Management (PTM) Concept: Potential Future Updates for the PTM OSED" paper, a detailed description of the PTM algorithm and PTM Limit Mach rules, initial PTM safety requirements and safety assessment documents, a detailed description of the design, development, and initial evaluations of the proposed flight crew HMI, an overview of the methodology and results of PTM pilot training

  19. Comparison of Different Approaches for Measuring Tibial Cartilage Thickness

    Directory of Open Access Journals (Sweden)

    Maier Jennifer

    2017-07-01

    Full Text Available Osteoarthritis is a degenerative disease affecting bones and cartilage especially in the human knee. In this context, cartilage thickness is an indicator for knee cartilage health. Thickness measurements are performed on medical images acquired in-vivo. Currently, there is no standard method agreed upon that defines a distance measure in articular cartilage. In this work, we present a comparison of different methods commonly used in the literature. These methods are based on nearest neighbors, surface normal vectors, local thickness and potential field lines. All approaches were applied to manual segmentations of the tibia and the lateral and medial tibial cartilage performed by experienced raters. The underlying data were contrast agent-enhanced cone-beam C-arm CT reconstructions of one healthy subject’s knee. The subject was scanned three times, once in supine position and two times in a standing weight-bearing position. A comparison of the resulting thickness maps shows similar distributions and high correlation coefficients (above 0.90) between the approaches. The nearest neighbor method results on average in the lowest cartilage thickness values, while the local thickness approach assigns the highest values. We showed that the different methods agree in their thickness distribution. The results will be used for a future evaluation of cartilage change under weight-bearing conditions.
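    A hedged sketch of the simplest of these measures, the nearest-neighbour distance between two surfaces, using synthetic point clouds in place of the segmented bone-cartilage interface and articular surface.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)

# Synthetic stand-ins for the bone-cartilage interface and the articular surface:
# two roughly parallel point clouds separated by ~2 mm plus some roughness.
bone = np.column_stack([rng.uniform(0, 30, 2000), rng.uniform(0, 30, 2000), np.zeros(2000)])
surface = bone + np.array([0.0, 0.0, 2.0]) + rng.normal(0, 0.1, bone.shape)

# Nearest-neighbour thickness: for each interface point, the distance to the
# closest articular-surface point.
dist, _ = cKDTree(surface).query(bone)
print(f"mean thickness {dist.mean():.2f} mm, max {dist.max():.2f} mm")
```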

  20. Extension of Pairwise Broadcast Clock Synchronization for Multicluster Sensor Networks

    Directory of Open Access Journals (Sweden)

    Bruce W. Suter

    2008-01-01

    Full Text Available Time synchronization is crucial for wireless sensor networks (WSNs) in performing a number of fundamental operations such as data coordination, power management, security, and localization. The Pairwise Broadcast Synchronization (PBS) protocol was recently proposed to minimize the number of timing messages required for global network synchronization, which enables the design of highly energy-efficient WSNs. However, PBS requires all nodes in the network to lie within the communication ranges of two leader nodes, a condition which might not be available in some applications. This paper proposes an extension of PBS to the more general class of sensor networks. Based on the hierarchical structure of the network, an energy-efficient pair selection algorithm is proposed to select the best pairwise synchronization sequence to reduce the overall energy consumption. It is shown that in a multicluster networking environment, PBS requires far fewer timing messages than other well-known synchronization protocols and incurs no loss in synchronization accuracy. Moreover, the proposed scheme presents significant energy savings for densely deployed WSNs.

  1. A risk based approach for SSTO/TSTO comparisons

    Science.gov (United States)

    Greenberg, Joel S.

    1996-03-01

    An approach has been developed for performing early comparisons of transportation architectures explicitly taking into account quantitative measures of uncertainty and resulting risk. Risk considerations are necessary since the transportation systems are likely to have significantly different levels of risk, both because of differing degrees of freedom in achieving desired performance levels and their different states of development and utilization. The approach considers the uncertainty of achievement of technology goals, effect that the achieved technology level will have on transportation system performance and the relationship between system performance/capability and the ability to accommodate variations in payload mass. The consequences of system performance are developed in terms of nonrecurring, recurring, and the present value of transportation system life cycle costs.

  2. A Comparison of Machine Learning Approaches for Corn Yield Estimation

    Science.gov (United States)

    Kim, N.; Lee, Y. W.

    2017-12-01

    Machine learning is an efficient empirical method for classification and prediction, and it offers another approach to crop yield estimation. The objective of this study is to estimate corn yield in the Midwestern United States by employing machine learning approaches such as the support vector machine (SVM), random forest (RF), and deep neural networks (DNN), and to perform a comprehensive comparison of their results. We constructed the database using satellite images from MODIS, climate data from the PRISM climate group, and GLDAS soil moisture data. In addition, to examine the seasonal sensitivities of corn yields, two period groups were set up: May to September (MJJAS) and July and August (JA). Overall, the DNN showed the highest accuracy in terms of the correlation coefficient for the two period groups. The differences between our predictions and USDA yield statistics were about 10-11%.
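    A sketch of this kind of model comparison with scikit-learn, using synthetic regression data in place of the MODIS/PRISM/GLDAS predictors; the model choices and hyperparameters are illustrative assumptions, not the authors' configuration.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic stand-in for per-county predictors (vegetation indices, climate, soil moisture).
X, y = make_regression(n_samples=400, n_features=12, noise=10.0, random_state=0)

models = {
    "SVM": make_pipeline(StandardScaler(), SVR(C=10.0)),
    "RF": RandomForestRegressor(n_estimators=200, random_state=0),
    "DNN": make_pipeline(StandardScaler(), MLPRegressor(hidden_layer_sizes=(64, 32),
                                                        max_iter=2000, random_state=0)),
}
for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean R^2 = {r2.mean():.3f}")
```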

  3. Comparison of Human Exploration Architecture and Campaign Approaches

    Science.gov (United States)

    Goodliff, Kandyce; Cirillo, William; Mattfeld, Bryan; Stromgren, Chel; Shyface, Hilary

    2015-01-01

    As part of an overall focus on space exploration, National Aeronautics and Space Administration (NASA) continues to evaluate potential approaches for sending humans beyond low Earth orbit (LEO). In addition, various external organizations are studying options for beyond LEO exploration. Recent studies include NASA's Evolvable Mars Campaign and Design Reference Architecture (DRA) 5.0, JPL's Minimal Mars Architecture; the Inspiration Mars mission; the Mars One campaign; and the Global Exploration Roadmap (GER). Each of these potential exploration constructs applies unique methods, architectures, and philosophies for human exploration. It is beneficial to compare potential approaches in order to better understand the range of options available for exploration. Since most of these studies were conducted independently, the approaches, ground rules, and assumptions used to conduct the analysis differ. In addition, the outputs and metrics presented for each construct differ substantially. This paper will describe the results of an effort to compare and contrast the results of these different studies under a common set of metrics. The paper will first present a summary of each of the proposed constructs, including a description of the overall approach and philosophy for exploration. Utilizing a common set of metrics for comparison, the paper will present the results of an evaluation of the potential benefits, critical challenges, and uncertainties associated with each construct. The analysis framework will include a detailed evaluation of key characteristics of each construct. These will include but are not limited to: a description of the technology and capability developments required to enable the construct and the uncertainties associated with these developments; an analysis of significant operational and programmatic risks associated with that construct; and an evaluation of the extent to which exploration is enabled by the construct, including the destinations

  4. A Comparison of Routing Protocol for WSNs: Redundancy Based Approach

    Directory of Open Access Journals (Sweden)

    Anand Prakash

    2014-03-01

    Full Text Available Wireless Sensor Networks (WSNs), with their dynamic applications, have gained tremendous attention from researchers. Constant monitoring of critical situations has attracted researchers to utilize WSNs on a vast range of platforms. The main focus in WSNs is to enhance network localization as far as possible for efficient and optimal utilization of resources. Different redundancy-based approaches have been proposed for optimum functionality. Localization is always related to the redundancy of sensor nodes deployed in remote areas for constant and fault-tolerant monitoring. In this work, we propose a comparison of classic flooding and the gossip protocol for homogeneous networks, which enhances stability and throughput quite significantly.
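    A toy simulation contrasting the two dissemination strategies on a random geometric graph (a stand-in for a homogeneous sensor field); the message-counting rules are simplified assumptions, not the exact protocols evaluated in the paper.

```python
import random
import networkx as nx

def flood(g, src):
    """Classic flooding: every node retransmits to all neighbours once; returns (messages, reached)."""
    informed, frontier, tx = {src}, [src], 0
    while frontier:
        nxt = []
        for u in frontier:
            for v in g.neighbors(u):
                tx += 1
                if v not in informed:
                    informed.add(v)
                    nxt.append(v)
        frontier = nxt
    return tx, len(informed)

def gossip(g, src, fanout=1, rounds=60):
    """Push gossip: each round, every informed node forwards to `fanout` random neighbours."""
    informed, tx = {src}, 0
    for _ in range(rounds):
        for u in list(informed):
            for v in random.sample(list(g.neighbors(u)), min(fanout, g.degree(u))):
                tx += 1
                informed.add(v)
        if len(informed) == g.number_of_nodes():
            break
    return tx, len(informed)

random.seed(0)
g = nx.random_geometric_graph(100, 0.2, seed=0)   # toy homogeneous sensor field
print("flooding (messages, reached):", flood(g, 0))
print("gossip   (messages, reached):", gossip(g, 0))
```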

  5. Comparison of two Minkowski-space approaches to heavy quarkonia

    Energy Technology Data Exchange (ETDEWEB)

    Leitao, Sofia; Biernat, Elmar P. [Universidade de Lisboa, CFTP, Instituto Superior Tecnico, Lisbon (Portugal); Li, Yang [Iowa State University, Department of Physics and Astronomy, Ames, IA (United States); College of William and Mary, Department of Physics, Williamsburg, VA (United States); Maris, Pieter; Vary, James P. [Iowa State University, Department of Physics and Astronomy, Ames, IA (United States); Pena, M.T. [Universidade de Lisboa, CFTP, Instituto Superior Tecnico, Lisbon (Portugal); Universidade de Lisboa, Departamento de Fisica, Instituto Superior Tecnico, Lisbon (Portugal); Stadler, Alfred [Universidade de Lisboa, CFTP, Instituto Superior Tecnico, Lisbon (Portugal); Universidade de Evora, Departamento de Fisica, Evora (Portugal)

    2017-10-15

    In this work we compare mass spectra and decay constants obtained from two recent, independent, and fully relativistic approaches to the quarkonium bound-state problem: the Basis Light-Front Quantization approach, where light-front wave functions are naturally formulated; and, the Covariant Spectator Theory (CST), based on a reorganization of the Bethe-Salpeter equation. Even though conceptually different, both solutions are obtained in Minkowski space. Comparisons of decay constants for more than ten states of charmonium and bottomonium show favorable agreement between the two approaches as well as with experiment where available. We also apply the Brodsky-Huang-Lepage prescription to convert the CST amplitudes into functions of light-front variables. This provides an ideal opportunity to investigate the similarities and differences at the level of the wave functions. Several qualitative features are observed in remarkable agreement between the two approaches even for the rarely addressed excited states. Leading-twist distribution amplitudes as well as parton distribution functions of heavy quarkonia are also analyzed. (orig.)

  6. Pair-Wise and Many-Body Dispersive Interactions Coupled to an Optimally Tuned Range-Separated Hybrid Functional.

    Science.gov (United States)

    Agrawal, Piyush; Tkatchenko, Alexandre; Kronik, Leeor

    2013-08-13

    We propose a nonempirical, pair-wise or many-body dispersion-corrected, optimally tuned range-separated hybrid functional. This functional retains the advantages of the optimal-tuning approach in the prediction of the electronic structure. At the same time, it gains accuracy in the prediction of binding energies for dispersively bound systems, as demonstrated on the S22 and S66 benchmark sets of weakly bound dimers.

  7. Exact method for the simulation of Coulombic systems by spherically truncated, pairwise r^-1 summation

    International Nuclear Information System (INIS)

    Wolf, D.; Keblinski, P.; Phillpot, S.R.; Eggebrecht, J.

    1999-01-01

    Based on a recent result showing that the net Coulomb potential in condensed ionic systems is rather short ranged, an exact and physically transparent method permitting the evaluation of the Coulomb potential by direct summation over the r^-1 Coulomb pair potential is presented. The key observation is that the problems encountered in determining the Coulomb energy by pairwise, spherically truncated r^-1 summation are a direct consequence of the fact that the system summed over is practically never neutral. A simple method is developed that achieves charge neutralization wherever the r^-1 pair potential is truncated. This enables the extraction of the Coulomb energy, forces, and stresses from a spherically truncated, usually charged environment in a manner that is independent of the grouping of the pair terms. The close connection of our approach with the Ewald method is demonstrated and exploited, providing an efficient method for the simulation of even highly disordered ionic systems by direct, pairwise r^-1 summation with spherical truncation at rather short range, i.e., a method which fully exploits the short-ranged nature of the interactions in ionic systems. The method is validated by simulations of crystals, liquids, and interfacial systems, such as free surfaces and grain boundaries. copyright 1999 American Institute of Physics
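    A hedged sketch of a damped, shifted, spherically truncated pairwise r^-1 sum in the spirit of this method; the exact shifted-potential form, the damping parameter, and the toy charge configuration are assumptions for illustration, and the original paper should be consulted for the derivation.

```python
import numpy as np
from scipy.special import erfc

def wolf_energy(pos, q, box, alpha=0.2, rc=10.0):
    """Damped, spherically truncated, charge-neutralised pairwise r^-1 sum
    (shifted-potential form), with minimum-image periodic boundary conditions."""
    e_pair = 0.0
    n = len(q)
    for i in range(n - 1):
        d = pos[i + 1:] - pos[i]
        d -= box * np.round(d / box)                   # minimum image convention
        r = np.linalg.norm(d, axis=1)
        mask = r < rc
        e_pair += np.sum(q[i] * q[i + 1:][mask] *
                         (erfc(alpha * r[mask]) / r[mask] - erfc(alpha * rc) / rc))
    e_self = -(erfc(alpha * rc) / (2 * rc) + alpha / np.sqrt(np.pi)) * np.sum(q ** 2)
    return e_pair + e_self

# Toy disordered arrangement of +1/-1 charges in a cubic box (arbitrary units).
rng = np.random.default_rng(0)
pos = rng.uniform(0, 20, size=(64, 3))
q = np.tile([1.0, -1.0], 32)
print("Wolf-style Coulomb energy:", round(wolf_energy(pos, q, box=20.0), 4))
```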

  8. A water market simulator considering pair-wise trades between agents

    Science.gov (United States)

    Huskova, I.; Erfani, T.; Harou, J. J.

    2012-04-01

    In many basins in England no further water abstraction licences are available. Trading water between water rights holders has been recognized as a potentially effective and economically efficient strategy to mitigate increasing scarcity. A screening tool that could assess the potential for trade through realistic simulation of individual water rights holders would help assess the solution's potential contribution to local water management. We propose an optimisation-driven water market simulator that predicts pair-wise trade in a catchment and represents its interaction with natural hydrology and engineered infrastructure. A model is used to emulate licence-holders' willingness to engage in short-term trade transactions. In their simplest form agents are represented using an economic benefit function. The working hypothesis is that trading behaviour can be partially predicted based on differences in marginal values of water over space and time and estimates of transaction costs on pair-wise trades. We discuss the further possibility of embedding rules, norms and preferences of the different water user sectors to more realistically represent the behaviours, motives and constraints of individual licence holders. The potential benefits and limitations of such a social simulation (agent-based) approach is contrasted with our simulator where agents are driven by economic optimization. A case study based on the Dove River Basin (UK) demonstrates model inputs and outputs. The ability of the model to suggest impacts of water rights policy reforms on trading is discussed.

  9. Hierarchical ordering with partial pairwise hierarchical relationships on the macaque brain data sets.

    Directory of Open Access Journals (Sweden)

    Woosang Lim

    Full Text Available Hierarchical organizations of information processing in brain networks are known to exist and have been widely studied. To find proper hierarchical structures in the macaque brain, traditional methods need the entire set of pairwise hierarchical relationships between cortical areas. In this paper, we present a new method that discovers hierarchical structures of macaque brain networks using partial information on pairwise hierarchical relationships. Our method uses graph-based manifold learning to exploit inherent relationships, and computes pseudo distances of hierarchical levels for every pair of cortical areas. Then, we compute the hierarchy levels of all cortical areas by minimizing the sum of squared hierarchical distance errors, given the hierarchical information of a few cortical areas. We evaluate our method on macaque brain data sets whose true hierarchical levels are known from the FV91 model. The experimental results show that the hierarchy levels computed by our method are similar to those of the FV91 model, and their errors are much smaller than the errors of hierarchical clustering approaches.
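    A toy sketch of the final least-squares step only: hierarchy levels are recovered from a few assumed pseudo hierarchical distances plus two anchored areas. The area names, distances, and anchor weights are invented, and the manifold-learning stage that produces the pseudo distances is not reproduced here.

```python
import numpy as np

# Hypothetical pseudo hierarchical distances d ~ h_hi - h_lo for a few area pairs,
# plus known (anchored) levels for two areas; everything here is illustrative.
areas = ["V1", "V2", "V4", "MT", "TEO"]
idx = {a: i for i, a in enumerate(areas)}
pairs = [("V2", "V1", 1.0), ("V4", "V2", 1.0), ("MT", "V2", 1.2), ("TEO", "V4", 1.1)]
anchors = {"V1": 0.0, "V4": 2.0}

rows, rhs = [], []
for hi, lo, d in pairs:                       # encode h_hi - h_lo = d
    r = np.zeros(len(areas)); r[idx[hi]], r[idx[lo]] = 1.0, -1.0
    rows.append(r); rhs.append(d)
for a, level in anchors.items():              # encode known levels with a large weight
    r = np.zeros(len(areas)); r[idx[a]] = 10.0
    rows.append(r); rhs.append(10.0 * level)

h, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
print({a: round(h[idx[a]], 2) for a in areas})
```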

  10. Linear VSS and Distributed Commitments Based on Secret Sharing and Pairwise Checks

    DEFF Research Database (Denmark)

    Fehr, Serge; Maurer, Ueli M.

    2002-01-01

    We present a general treatment of all non-cryptographic (i.e., information-theoretically secure) linear verifiable-secret-sharing (VSS) and distributed-commitment (DC) schemes, based on an underlying secret sharing scheme, pairwise checks between players, complaints, and accusations of the dealer. VSS and DC are main building blocks for unconditionally secure multi-party computation protocols. This general approach covers all known linear VSS and DC schemes. The main theorem states that the security of a scheme is equivalent to a pure linear-algebra condition on the linear mappings (e.g. described as matrices and vectors) describing the scheme. The security of all known schemes follows as corollaries whose proofs are pure linear-algebra arguments, in contrast to some hybrid arguments used in the literature. Our approach is demonstrated for the CDM DC scheme, which we generalize to be secure...

  11. Calibration of Smartphone-Based Weather Measurements Using Pairwise Gossip

    Directory of Open Access Journals (Sweden)

    Jane Louie Fresco Zamora

    2015-01-01

    Full Text Available Accurate and reliable daily global weather reports are necessary for weather forecasting and climate analysis. However, the availability of these reports continues to decline due to the lack of economic support and policies for maintaining the ground weather measurement systems from which these reports are obtained. Thus, to mitigate data scarcity, it is necessary to utilize weather information from existing sensors and built-in smartphone sensors. However, as smartphone usage often varies according to human activity, it is difficult to obtain accurate measurement data. In this paper, we present a heuristic-based pairwise gossip algorithm that calibrates smartphone-based pressure sensors with respect to fixed weather stations as our referential ground truth. Based on actual measurements, we have verified that smartphone-based readings are unstable when observed during movement. Using our calibration algorithm on actual smartphone-based pressure readings, the updated values were significantly closer to the ground truth values.
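    A generic toy of anchored pairwise gossip, not the paper's heuristic: phone calibration offsets are repeatedly averaged pairwise while the fixed stations keep the reference value, pulling the smartphone biases toward zero.

```python
import random

random.seed(0)
# Calibration offsets (hPa) of ten smartphone barometers plus two fixed stations
# (anchors, offset 0 relative to the ground truth).
offsets = [random.uniform(-3.0, 3.0) for _ in range(10)] + [0.0, 0.0]
anchors = {10, 11}

for _ in range(500):
    i, j = random.sample(range(len(offsets)), 2)     # random communicating pair
    avg = 0.5 * (offsets[i] + offsets[j])
    if i not in anchors:
        offsets[i] = avg
    if j not in anchors:
        offsets[j] = avg                              # anchors keep the reference value

print("max |calibration error| after gossip:",
      round(max(abs(o) for o in offsets[:10]), 3), "hPa")
```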

  12. Calibration of Smartphone-Based Weather Measurements Using Pairwise Gossip.

    Science.gov (United States)

    Zamora, Jane Louie Fresco; Kashihara, Shigeru; Yamaguchi, Suguru

    2015-01-01

    Accurate and reliable daily global weather reports are necessary for weather forecasting and climate analysis. However, the availability of these reports continues to decline due to the lack of economic support and policies in maintaining ground weather measurement systems from where these reports are obtained. Thus, to mitigate data scarcity, it is required to utilize weather information from existing sensors and built-in smartphone sensors. However, as smartphone usage often varies according to human activity, it is difficult to obtain accurate measurement data. In this paper, we present a heuristic-based pairwise gossip algorithm that will calibrate smartphone-based pressure sensors with respect to fixed weather stations as our referential ground truth. Based on actual measurements, we have verified that smartphone-based readings are unstable when observed during movement. Using our calibration algorithm on actual smartphone-based pressure readings, the updated values were significantly closer to the ground truth values.

  13. Urban pavement surface temperature. Comparison of numerical and statistical approach

    Science.gov (United States)

    Marchetti, Mario; Khalifa, Abderrahmen; Bues, Michel; Bouilloud, Ludovic; Martin, Eric; Chancibaut, Katia

    2015-04-01

    The forecast of pavement surface temperature is very specific in the context of urban winter maintenance, where it is used to manage the snow plowing and salting of roads. Such forecasts mainly rely on numerical models based on a description of the energy balance between the atmosphere, the buildings and the pavement, with a canyon configuration. Nevertheless, there is a specific need for the physical description and the numerical implementation of traffic in the energy flux balance. This traffic was originally considered as a constant. Many changes were made to a numerical model to describe as accurately as possible the traffic effects on this urban energy balance, such as tire friction, the pavement-air exchange coefficient, and the net infrared flux balance. Experiments based on infrared thermography and radiometry were then conducted to quantify the effect of traffic on urban pavement surfaces. Based on meteorological data, corresponding pavement temperature forecasts were calculated and compared with field measurements. Results indicated good agreement between these field measurements and the forecasts from the numerical model based on this energy balance approach. A complementary forecast approach based on principal component analysis (PCA) and partial least-squares regression (PLS) was also developed, with data from thermal mapping using infrared radiometry. The forecast of pavement surface temperature from air temperature was obtained for the specific case of the urban configuration, with traffic taken into account in the measurements used for the statistical analysis. A comparison between the results from the numerical model based on the energy balance and from PCA/PLS was then conducted, indicating the advantages and limits of each approach.
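    A sketch of the statistical branch only, using scikit-learn's PLSRegression on synthetic predictors standing in for the thermal-mapping data; the variable names and coefficients are invented.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Synthetic stand-ins for predictors: air temperature, traffic count,
# incoming infrared flux, wind speed (none of these are the study's actual data).
X = np.column_stack([rng.normal(5, 4, n), rng.poisson(30, n),
                     rng.normal(300, 40, n), rng.exponential(2, n)])
surface_t = (0.9 * X[:, 0] + 0.03 * X[:, 1] + 0.01 * X[:, 2] - 0.3 * X[:, 3]
             + rng.normal(0, 0.5, n))

X_tr, X_te, y_tr, y_te = train_test_split(X, surface_t, random_state=0)
pls = PLSRegression(n_components=2).fit(X_tr, y_tr)
print("R^2 on held-out data:", round(pls.score(X_te, y_te), 3))
```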

  14. Reducing Post-Decision Dissonance in International Decisions: The Analytic Hierarchy Process Approach.

    Science.gov (United States)

    DuBois, Frank L.

    1999-01-01

    Describes use of the analytic hierarchy process (AHP) as a teaching tool to illustrate the complexities of decision making in an international environment. The AHP approach uses managerial input to develop pairwise comparisons of relevant decision criteria to efficiently generate an appropriate solution. (DB)

  15. Probing dark energy models with extreme pairwise velocities of galaxy clusters from the DEUS-FUR simulations

    Science.gov (United States)

    Bouillot, Vincent R.; Alimi, Jean-Michel; Corasaniti, Pier-Stefano; Rasera, Yann

    2015-06-01

    Observations of colliding galaxy clusters with high relative velocity probe the tail of the halo pairwise velocity distribution, with the potential of providing a powerful test of cosmology. As an example, it has been argued that the discovery of the Bullet Cluster challenges standard Λ cold dark matter (ΛCDM) model predictions. Halo catalogues from N-body simulations have been used to estimate the probability of Bullet-like clusters. However, due to simulation volume effects, previous studies had to rely on a Gaussian extrapolation of the pairwise velocity distribution to high velocities. Here, we perform a detailed analysis using the halo catalogues from the Dark Energy Universe Simulation Full Universe Runs (DEUS-FUR), which enables us to resolve the high-velocity tail of the distribution and study its dependence on the halo mass definition, redshift and cosmology. Building upon these results, we estimate the probability of Bullet-like systems in the framework of Extreme Value Statistics. We show that the tail of extreme pairwise velocities significantly deviates from that of a Gaussian; moreover, it carries an imprint of the underlying cosmology. We find the Bullet Cluster probability to be two orders of magnitude larger than previous estimates, thus easing the tension with the ΛCDM model. Finally, the comparison of the inferred probabilities for the different DEUS-FUR cosmologies suggests that observations of extreme interacting clusters can provide constraints on dark energy models complementary to standard cosmological tests.

  16. Energetics of Brazilian ethanol: Comparison between assessment approaches

    International Nuclear Information System (INIS)

    Ramirez Triana, Carlos Ariel

    2011-01-01

    As with any other bioenergy product, bioethanol production requires fossil fuel inputs; hence the alleged benefits of energy security and carbon mitigation depend on the extent to which these inputs are capable of drawing a substantive bioenergetic yield. Brazilian ethanol, made out of sugarcane, has been reported as the most efficient gasoline substitute that is commercially available nowadays. For that reason it has been the object of several analyses of its energetics, i.e. energy balances. These studies surprisingly vary widely according to the scholarly approach taken and are not fully comparable among themselves due to divergences in the assessment methods. This paper standardises the results of the four most prominent authors in the field, establishing a point of comparison and shedding some light on energetics studies of biofuels. The main result is a homogenisation of the outcomes of the referred studies in terms of the unit of assessment used in the energy input analysis. This information is also charted, explaining the source of divergence among authors. This work ends with a short reference to, and comparison with, some energy balance studies carried out on feedstocks of diverse nature, highlighting the potential that sugarcane-based bioethanol represents nowadays. - Highlights: → The distribution stage could reduce the energy ratio but its contribution is not significant. → In Pimentel and Patzek there is an evident impact of the industrial stage. → A coincidence across the studies was the major impact of the agricultural stage. → Brazilian technology to produce ethanol was shown to be the most energy efficient.

  17. Comparison of different approaches of modelling in a masonry building

    Science.gov (United States)

    Saba, M.; Meloni, D.

    2017-12-01

    The objective of the present work is to model a simple masonry building using two different modelling methods, in order to assess their validity in terms of the evaluation of static stresses. Two of the most widely used commercial software packages for this kind of problem were chosen: 3Muri by S.T.A. Data S.r.l. and Sismicad12 by Concrete S.r.l. While the 3Muri software adopts the Frame by Macro Elements (FME) method, which should be more schematic and more efficient, the Sismicad12 software uses the Finite Element Method (FEM), which guarantees accurate results at a greater computational burden. Remarkable differences in the static stresses between the two approaches have been found for such a simple structure, and an interesting comparison and analysis of the reasons is proposed.

  18. A comparison of approaches in fitting continuum SEDs

    International Nuclear Information System (INIS)

    Liu Yao; Wang Hong-Chi; Madlener David; Wolf Sebastian

    2013-01-01

    We present a detailed comparison of two approaches, the use of a pre-calculated database and simulated annealing (SA), for fitting the continuum spectral energy distribution (SED) of astrophysical objects whose appearance is dominated by surrounding dust. While pre-calculated databases are commonly used to model SED data, only a few studies to date have employed SA due to its unclear accuracy and convergence time for this specific problem. From a methodological point of view, different approaches lead to different fitting quality, demand on computational resources and calculation time. We compare the fitting quality and computational costs of these two approaches for the task of SED fitting to provide a guide to the practitioner to find a compromise between desired accuracy and available resources. To reduce uncertainties inherent to real datasets, we introduce a reference model resembling a typical circumstellar system with 10 free parameters. We derive the SED of the reference model with our code MC3D at 78 logarithmically distributed wavelengths in the range [0.3 μm, 1.3 mm] and use this setup to simulate SEDs for the database and SA. Our result directly demonstrates the applicability of SA in the field of SED modeling, since the algorithm regularly finds better solutions to the optimization problem than a pre-calculated database. As both methods have advantages and shortcomings, a hybrid approach is preferable. While the database provides an approximate fit and overall probability distributions for all parameters deduced using Bayesian analysis, SA can be used to improve upon the results returned by the model grid.
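    A toy contrast of the two strategies on a two-parameter synthetic "SED" (a noisy power law sampled at 78 wavelengths): a coarse pre-computed grid versus scipy's dual_annealing. The model, noise level, and bounds are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import dual_annealing

# Toy two-parameter "SED": a power law F(lam) = a * lam**b, observed with noise.
lam = np.logspace(-0.5, 3.1, 78)            # wavelengths, arbitrary units
rng = np.random.default_rng(0)
obs = 2.5 * lam ** -1.3 * (1 + rng.normal(0, 0.05, lam.size))

def chi2(p):
    a, b = p
    return np.sum((np.log(obs) - np.log(a * lam ** b)) ** 2)

# Coarse pre-calculated "database" (grid) versus simulated annealing.
grid = [(a, b) for a in np.linspace(0.5, 5, 20) for b in np.linspace(-3, 0, 20)]
best_grid = min(grid, key=chi2)
res = dual_annealing(chi2, bounds=[(0.5, 5.0), (-3.0, 0.0)], seed=0)

print("grid best:", best_grid, "chi2 =", round(chi2(best_grid), 4))
print("SA best:  ", res.x.round(3), "chi2 =", round(res.fun, 4))
```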

  19. PSA data base, comparison of the German and French approach

    International Nuclear Information System (INIS)

    Kreuser, A.; Tirira, J.

    2001-01-01

    The results of probabilistic safety assessments (PSA) of nuclear power plants strongly depend on the reliability data used. This report coarsely describes the general process used to generate reliability data for components and summarises the differences between the German and French approaches. As has been shown in former studies comparing international PSA data, PSA data are closely related to the model definitions of the PSA. Therefore, single PSA data cannot be compared directly without regard, e.g., to the corresponding fault trees. These findings are confirmed by this study. The comparison of German and French methods shows many differences concerning various details of the data generation process. Some differences between single reliability data should be eliminated when taking into account the complete fault tree analysis. But there are some other differences which have a direct impact on the obtained results of a PSA. In view of all the differences between both approaches concerning the definition of data and the data collection process, it is not possible to compare German and French PSA data directly. However, the database differences give no indication of the influence on the PSA results. Therefore, there is a need to perform a common IPSN/GRS assessment of how the different databases impact the PSA results. (orig.)

  20. A general transformation to canonical form for potentials in pairwise interatomic interactions.

    Science.gov (United States)

    Walton, Jay R; Rivera-Rivera, Luis A; Lucchese, Robert R; Bevan, John W

    2015-06-14

    A generalized formulation of explicit force-based transformations is introduced to investigate the concept of a canonical potential in both fundamental chemical and intermolecular bonding. Different classes of representative ground electronic state pairwise interatomic interactions are referenced to a chosen canonical potential illustrating application of such transformations. Specifically, accurately determined potentials of the diatomic molecules H2, H2(+), HF, LiH, argon dimer, and one-dimensional dissociative coordinates in Ar-HBr, OC-HF, and OC-Cl2 are investigated throughout their bound potentials. Advantages of the current formulation for accurately evaluating equilibrium dissociation energies and a fundamentally different unified perspective on nature of intermolecular interactions will be emphasized. In particular, this canonical approach has significance to previous assertions that there is no very fundamental distinction between van der Waals bonding and covalent bonding or for that matter hydrogen and halogen bonds.

  1. Pairwise local structural alignment of RNA sequences with sequence similarity less than 40%

    DEFF Research Database (Denmark)

    Havgaard, Jakob Hull; Lyngsø, Rune B.; Stormo, Gary D.

    2005-01-01

    detect two genes with low sequence similarity, where the genes are part of a larger genomic region. Results: Here we present such an approach for pairwise local alignment which is based on FOLDALIGN and the Sankoff algorithm for simultaneous structural alignment of multiple sequences. We include...... the ability to conduct mutual scans of two sequences of arbitrary length while searching for common local structural motifs of some maximum length. This drastically reduces the complexity of the algorithm. The scoring scheme includes structural parameters corresponding to those available for free energy....... The structure prediction performance for a family is typically around 0.7 using Matthews correlation coefficient. In case (2), the algorithm is successful at locating RNA families with an average sensitivity of 0.8 and a positive predictive value of 0.9 using a BLAST-like hit selection scheme. Availability...

  2. Memory-efficient dynamic programming backtrace and pairwise local sequence alignment.

    Science.gov (United States)

    Newberg, Lee A

    2008-08-15

    A backtrace through a dynamic programming algorithm's intermediate results in search of an optimal path, or to sample paths according to an implied probability distribution, or as the second stage of a forward-backward algorithm, is a task of fundamental importance in computational biology. When there is insufficient space to store all intermediate results in high-speed memory (e.g. cache), existing approaches store selected stages of the computation and recompute missing values from these checkpoints on an as-needed basis. Here we present an optimal checkpointing strategy, and demonstrate its utility with pairwise local sequence alignment of sequences of length 10,000. Sample C++ code for optimal backtrace is available in the Supplementary Materials. Supplementary data are available at Bioinformatics online.

  3. Parasail: SIMD C library for global, semi-global, and local pairwise sequence alignments.

    Science.gov (United States)

    Daily, Jeff

    2016-02-10

    Sequence alignment algorithms are a key component of many bioinformatics applications. Though various fast Smith-Waterman local sequence alignment implementations have been developed for x86 CPUs, most are embedded into larger database search tools. In addition, fast implementations of Needleman-Wunsch global sequence alignment and its semi-global variants are not as widespread. This article presents the first software library for local, global, and semi-global pairwise intra-sequence alignments and improves the performance of previous intra-sequence implementations. A faster intra-sequence local pairwise alignment implementation is described and benchmarked, including new global and semi-global variants. Using a 375 residue query sequence, a speed of 136 billion cell updates per second (GCUPS) was achieved on a dual Intel Xeon E5-2670 24-core processor system, the highest reported for an implementation based on Farrar's 'striped' approach. Rognes's SWIPE optimal database search application is still generally the fastest available, at 1.2 to at best 2.4 times faster than Parasail for sequences shorter than 500 amino acids. However, Parasail was faster for longer sequences. For global alignments, Parasail's prefix scan implementation is generally the fastest, faster even than Farrar's 'striped' approach; however, the opal library is faster for single-threaded applications. The software library is designed for 64 bit Linux, OS X, or Windows on processors with SSE2, SSE4.1, or AVX2. Source code is available from https://github.com/jeffdaily/parasail under the Battelle BSD-style license. Applications that require optimal alignment scores could benefit from the improved performance. For the first time, SIMD global, semi-global, and local alignments are available in a stand-alone C library.
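    For orientation, a plain (non-vectorised, linear-gap) Smith-Waterman reference implementation of the kind these SIMD libraries accelerate; it is a didactic baseline, not Parasail's API or its striped/prefix-scan kernels, and real tools typically use affine gap penalties and substitution matrices.

```python
import numpy as np

def smith_waterman(s1, s2, match=2, mismatch=-1, gap=-2):
    """Plain O(n*m) Smith-Waterman local alignment score (no SIMD, linear gaps, reference only)."""
    h = np.zeros((len(s1) + 1, len(s2) + 1), dtype=int)
    for i in range(1, len(s1) + 1):
        for j in range(1, len(s2) + 1):
            diag = h[i - 1, j - 1] + (match if s1[i - 1] == s2[j - 1] else mismatch)
            h[i, j] = max(0, diag, h[i - 1, j] + gap, h[i, j - 1] + gap)
    return int(h.max())

print(smith_waterman("HEAGAWGHEE", "PAWHEAE"))   # small textbook-style example
```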

  4. Convergent cross-mapping and pairwise asymmetric inference.

    Science.gov (United States)

    McCracken, James M; Weigel, Robert S

    2014-12-01

    Convergent cross-mapping (CCM) is a technique for computing specific kinds of correlations between sets of time series. It was introduced by Sugihara et al. [Science 338, 496 (2012)] and is reported to be "a necessary condition for causation" capable of distinguishing causality from standard correlation. We show that the relationships between CCM correlations proposed by Sugihara et al. do not, in general, agree with intuitive concepts of "driving" and as such should not be considered indicative of causality. Whether the CCM algorithm implies causality is shown to depend on system parameters for simple linear and nonlinear systems. For example, in a circuit containing a single resistor and inductor, both voltage and current can be identified as the driver depending on the frequency of the source voltage. It is shown that the CCM algorithm can, however, be modified to identify relationships between pairs of time series that are consistent with intuition for the considered example systems, for which CCM causality analysis provided nonintuitive driver identifications. This modification of the CCM algorithm is introduced as "pairwise asymmetric inference" (PAI), and examples of its use are presented.

  5. Identifying the Academic Rising Stars via Pairwise Citation Increment Ranking

    KAUST Repository

    Zhang, Chuxu

    2017-08-02

    Predicting the fast-rising young researchers (the Academic Rising Stars) of the future provides useful guidance to the research community, e.g., offering competitive candidates to universities for young faculty hiring, as they are expected to have successful academic careers. In this work, given a set of young researchers who have recently published their first first-author paper, we solve the problem of how to effectively predict the top k% of researchers who achieve the highest citation increment in Δt years. We explore a series of factors that can drive an author to be fast-rising and design a novel pairwise citation increment ranking (PCIR) method that leverages those factors to predict the academic rising stars. Experimental results on the large ArnetMiner dataset with over 1.7 million authors demonstrate the effectiveness of PCIR. Specifically, it outperforms all given benchmark methods, with over 8% average improvement. Further analysis demonstrates that temporal features are the best indicators for rising star prediction, while venue features are less relevant.
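    A generic pairwise-ranking sketch in the spirit of ranking by citation increment, not the authors' PCIR model: a logistic model is fit on feature differences of author pairs, and authors are then scored and ranked. All features and labels here are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_authors, n_feats = 300, 8
X = rng.normal(size=(n_authors, n_feats))                # toy author features (temporal, venue, ...)
true_w = rng.normal(size=n_feats)
increment = X @ true_w + rng.normal(0, 0.5, n_authors)   # citation increment after Delta-t years

# Build pairwise training examples: label 1 if author a outgrows author b.
pairs = rng.integers(0, n_authors, size=(3000, 2))
pairs = pairs[pairs[:, 0] != pairs[:, 1]]
dX = X[pairs[:, 0]] - X[pairs[:, 1]]
y = (increment[pairs[:, 0]] > increment[pairs[:, 1]]).astype(int)

ranker = LogisticRegression(fit_intercept=False).fit(dX, y)
scores = X @ ranker.coef_.ravel()                        # higher score = predicted faster riser
top_k = np.argsort(scores)[::-1][: n_authors // 10]
print("top-10% hit rate:",
      round(np.mean(np.isin(top_k, np.argsort(increment)[::-1][: n_authors // 10])), 2))
```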

  6. Comparison of ductile-to-brittle transition curve fitting approaches

    International Nuclear Information System (INIS)

    Cao, L.W.; Wu, S.J.; Flewitt, P.E.J.

    2012-01-01

    Ductile-to-brittle transition (DBT) curve fitting approaches are compared over the transition temperature range for reactor pressure vessel steels with different kinds of data, including Charpy-V notch impact energy data and fracture toughness data. Three DBT curve fitting methods have been frequently used in the past, based on the Burr, S-Weibull and tanh distributions. In general there is greater scatter associated with test data obtained within the transition region. Therefore these methods give results with different accuracies, especially when fitting to small quantities of data. The comparison shows that the Burr distribution and tanh distribution can almost equally fit well-distributed and large data sets extending across the test temperature range to include the upper and lower shelves. The S-Weibull distribution fit is poor for the lower shelf of the DBT curve. Overall, for both large and small quantities of measured data, the Burr distribution provides the best description. - Highlights: ► The Burr distribution offers a better fit than the S-Weibull and tanh fits. ► The Burr and tanh methods show similar fitting ability for a large data set. ► The Burr method can fit sparse data well distributed across the test temperature range. ► The S-Weibull method cannot fit the lower shelf well and shows poor fitting quality.

  7. The Role of Middlemen inEfficient and Strongly Pairwise Stable Networks

    NARCIS (Netherlands)

    Gilles, R.P.; Chakrabarti, S.; Sarangi, S.; Badasyan, N.

    2004-01-01

    We examine the strong pairwise stability concept in network formation theory under collective network benefits. Strong pairwise stability allows a pair of players to add a link through mutual consent while permitting them to unilaterally delete any subset of links under their control. We examine

  8. BETASCAN: probable beta-amyloids identified by pairwise probabilistic analysis.

    Directory of Open Access Journals (Sweden)

    Allen W Bryan

    2009-03-01

    Full Text Available Amyloids and prion proteins are clinically and biologically important beta-structures, whose supersecondary structures are difficult to determine by standard experimental or computational means. In addition, significant conformational heterogeneity is known or suspected to exist in many amyloid fibrils. Recent work has indicated the utility of pairwise probabilistic statistics in beta-structure prediction. We develop here a new strategy for beta-structure prediction, emphasizing the determination of beta-strands and pairs of beta-strands as fundamental units of beta-structure. Our program, BETASCAN, calculates likelihood scores for potential beta-strands and strand-pairs based on correlations observed in parallel beta-sheets. The program then determines the strands and pairs with the greatest local likelihood for all of the sequence's potential beta-structures. BETASCAN suggests multiple alternate folding patterns and assigns relative a priori probabilities based solely on amino acid sequence, probability tables, and pre-chosen parameters. The algorithm compares favorably with the results of previous algorithms (BETAPRO, PASTA, SALSA, TANGO, and Zyggregator) in beta-structure prediction and amyloid propensity prediction. Accurate prediction is demonstrated for experimentally determined amyloid beta-structures, for a set of known beta-aggregates, and for the parallel beta-strands of beta-helices, amyloid-like globular proteins. BETASCAN is able both to detect beta-strands with higher sensitivity and to detect the edges of beta-strands in a richly beta-like sequence. For two proteins (Abeta and Het-s), there exist multiple sets of experimental data implying contradictory structures; BETASCAN is able to detect each competing structure as a potential structure variant. The ability to correlate multiple alternate beta-structures to experiment opens the possibility of computational investigation of prion strains and structural heterogeneity of amyloid

  9. GraphAlignment: Bayesian pairwise alignment of biological networks

    Directory of Open Access Journals (Sweden)

    Kolář Michal

    2012-11-01

    Full Text Available Abstract Background With increased experimental availability and accuracy of bio-molecular networks, tools for their comparative and evolutionary analysis are needed. A key component for such studies is the alignment of networks. Results We introduce the Bioconductor package GraphAlignment for pairwise alignment of bio-molecular networks. The alignment incorporates information both from network vertices and network edges and is based on an explicit evolutionary model, allowing inference of all scoring parameters directly from empirical data. We compare the performance of our algorithm to an alternative algorithm, Græmlin 2.0. On simulated data, GraphAlignment outperforms Græmlin 2.0 in several benchmarks except for computational complexity. When there is little or no noise in the data, GraphAlignment is slower than Græmlin 2.0. It is faster than Græmlin 2.0 when processing noisy data containing spurious vertex associations. Its typical case complexity grows approximately as O(N^2.6). On empirical bacterial protein-protein interaction networks (PIN) and gene co-expression networks, GraphAlignment outperforms Græmlin 2.0 with respect to coverage and specificity, albeit by a small margin. On large eukaryotic PIN, Græmlin 2.0 outperforms GraphAlignment. Conclusions The GraphAlignment algorithm is robust to spurious vertex associations, correctly resolves paralogs, and shows very good performance in identification of homologous vertices defined by high vertex and/or interaction similarity. The simplicity and generality of GraphAlignment edge scoring makes the algorithm an appropriate choice for global alignment of networks.

  10. High Performance Biological Pairwise Sequence Alignment: FPGA versus GPU versus Cell BE versus GPP

    Directory of Open Access Journals (Sweden)

    Khaled Benkrid

    2012-01-01

    Full Text Available This paper explores the pros and cons of reconfigurable computing in the form of FPGAs for high performance efficient computing. In particular, the paper presents the results of a comparative study between three different acceleration technologies, namely, Field Programmable Gate Arrays (FPGAs), Graphics Processor Units (GPUs), and IBM’s Cell Broadband Engine (Cell BE), in the design and implementation of the widely-used Smith-Waterman pairwise sequence alignment algorithm, with general purpose processors as a base reference implementation. Comparison criteria include speed, energy consumption, and purchase and development costs. The study shows that FPGAs largely outperform all other implementation platforms on the performance per watt criterion and perform better than all other platforms on the performance per dollar criterion, although by a much smaller margin. Cell BE and GPU come second and third, respectively, on both performance per watt and performance per dollar criteria. In general, in order to outperform other technologies on the performance per dollar criterion (using currently available hardware and development tools), FPGAs need to achieve at least two orders of magnitude speed-up compared to general-purpose processors and one order of magnitude speed-up compared to domain-specific technologies such as GPUs.

  11. The Dynamics of Multiple Pair-Wise Collisions in a Chain for Designing Optimal Shock Amplifiers

    Directory of Open Access Journals (Sweden)

    Bryan Rodgers

    2009-01-01

    Full Text Available The major focus of this work is to examine the dynamics of velocity amplification through pair-wise collisions between multiple masses in a chain, in order to develop useful machines. For instance, low-cost machines based on this principle could be used for detailed, very-high acceleration shock-testing of MEMS devices. A theoretical basis for determining the number and mass of intermediate stages in such a velocity amplifier, based on simple rigid body mechanics, is proposed. The influence of mass ratios and the coefficient of restitution on the optimisation of the system is identified and investigated. In particular, two cases are examined: in the first, the velocity of the final mass in the chain (that would have the object under test mounted on it) is maximised by defining the ratio of adjacent masses according to a power law relationship; in the second, the energy transfer efficiency of the system is maximised by choosing the mass ratios such that all masses except the final mass come to rest following impact. Comparisons are drawn between both cases and the results are used in proposing design guidelines for optimal shock amplifiers. It is shown that for most practical systems, a shock amplifier with mass ratios based on a power law relationship is optimal and can easily yield velocity amplifications of a factor 5–8 times. A prototype shock testing machine that was made using the above principles is briefly introduced.
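
    The amplification mechanism follows from momentum conservation and the coefficient of restitution at each pairwise impact: a moving mass m1 striking a stationary mass m2 gives m2 the velocity (1 + e) * m1 * v / (m1 + m2). The Python sketch below chains these impacts for masses related by a power-law ratio; the mass ratio, restitution coefficient and chain length are invented for illustration, not design values from the paper.

        def chain_velocity(masses, v0, e=0.9):
            """Velocity of the last mass after sequential pairwise impacts down the chain."""
            v = v0
            for m1, m2 in zip(masses, masses[1:]):
                v = (1.0 + e) * m1 * v / (m1 + m2)  # m2 is at rest before each impact
            return v

        r = 0.4                                   # assumed power-law ratio of adjacent masses
        masses = [1.0 * r**k for k in range(5)]   # 1.0, 0.4, 0.16, ...
        print(chain_velocity(masses, v0=1.0))     # amplification factor relative to v0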

  12. Comparison of Fuzzy AHP and Fuzzy TOPSIS for Road Pavement Maintenance Prioritization: Methodological Exposition and Case Study

    OpenAIRE

    Yashon O. Ouma; J. Opudo; S. Nyambenya

    2015-01-01

    For road pavement maintenance and repairs prioritization, a multiattribute approach that compares fuzzy Analytical Hierarchy Process (AHP) and fuzzy Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) is evaluated. The pavement distress data was collected through empirical condition surveys and rating by pavement experts. In comparison to the crisp AHP, the fuzzy AHP and fuzzy TOPSIS pairwise comparison techniques are considered to be more suitable for the subjective analysis of the pa...
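
    For context on the crisp pairwise-comparison step that both fuzzy variants build on, the sketch below derives AHP priority weights from a reciprocal comparison matrix using the geometric-mean method. The 3x3 judgement matrix is a made-up example, not distress data from the case study.

        def ahp_weights(matrix):
            """Priority vector of a reciprocal pairwise comparison matrix (geometric-mean method)."""
            geo_means = []
            for row in matrix:
                prod = 1.0
                for x in row:
                    prod *= x
                geo_means.append(prod ** (1.0 / len(row)))
            total = sum(geo_means)
            return [g / total for g in geo_means]

        # Illustrative judgements: how much more important criterion i is than criterion j.
        A = [[1.0, 3.0, 5.0],
             [1/3, 1.0, 2.0],
             [1/5, 1/2, 1.0]]
        print(ahp_weights(A))  # roughly [0.65, 0.23, 0.12]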

  13. A pairwise residue contact area-based mean force potential for discrimination of native protein structure

    Directory of Open Access Journals (Sweden)

    Pezeshk Hamid

    2010-01-01

    Full Text Available Abstract Background Considering energy function to detect a correct protein fold from incorrect ones is very important for protein structure prediction and protein folding. Knowledge-based mean force potentials are certainly the most popular type of interaction function for protein threading. They are derived from statistical analyses of interacting groups in experimentally determined protein structures. These potentials are developed at the atom or the amino acid level. Based on orientation dependent contact area, a new type of knowledge-based mean force potential has been developed. Results We developed a new approach to calculate a knowledge-based potential of mean-force, using pairwise residue contact area. To test the performance of our approach, we applied it to several decoy sets to measure its ability to discriminate native structures from decoys. This potential was able to distinguish native structures from the decoys in most cases. Further, the calculated Z-scores were quite high for all protein datasets. Conclusions This knowledge-based potential of mean force can be used in protein structure prediction, fold recognition, comparative modelling and molecular recognition. The program is available at http://www.bioinf.cs.ipm.ac.ir/softwares/surfield
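
    The general recipe behind such knowledge-based potentials is the inverse-Boltzmann conversion of observed versus expected contact statistics into pseudo-energies. The sketch below applies that recipe to raw pairwise contact counts; the counts are hypothetical, and the paper's orientation-dependent contact-area weighting is not reproduced here.

        import math

        def mean_force_potential(observed, expected, pseudo=1e-6):
            """Inverse-Boltzmann pseudo-energy (in kT units): E = -ln(observed / expected)."""
            return {pair: -math.log((obs + pseudo) / (expected[pair] + pseudo))
                    for pair, obs in observed.items()}

        # Hypothetical contact counts for two residue pairs across a structure database.
        observed = {("LEU", "ILE"): 900, ("LYS", "GLU"): 400}
        expected = {("LEU", "ILE"): 500, ("LYS", "GLU"): 450}
        print(mean_force_potential(observed, expected))
        # Favourable (negative) energy where contacts occur more often than expected.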

  14. Comparison of two novel approaches to model fibre reinforced concrete

    NARCIS (Netherlands)

    Radtke, F.K.F.; Simone, A.; Sluys, L.J.

    2009-01-01

    We present two approaches to model fibre reinforced concrete. In both approaches, discrete fibre distributions and the behaviour of the fibre-matrix interface are explicitly considered. One approach employs the reaction forces from fibre to matrix while the other is based on the partition of unity

  15. Power and sample-size estimation for microbiome studies using pairwise distances and PERMANOVA

    OpenAIRE

    Kelly, Brendan J.; Gross, Robert; Bittinger, Kyle; Sherrill-Mix, Scott; Lewis, James D.; Collman, Ronald G.; Bushman, Frederic D.; Li, Hongzhe

    2015-01-01

    Motivation: The variation in community composition between microbiome samples, termed beta diversity, can be measured by pairwise distance based on either presence–absence or quantitative species abundance data. PERMANOVA, a permutation-based extension of multivariate analysis of variance to a matrix of pairwise distances, partitions within-group and between-group distances to permit assessment of the effect of an exposure or intervention (grouping factor) upon the sampled microbiome. Within-...

  16. A Fiducial Approach to Extremes and Multiple Comparisons

    Science.gov (United States)

    Wandler, Damian V.

    2010-01-01

    Generalized fiducial inference is a powerful tool for many difficult problems. Based on an extension of R. A. Fisher's work, we used generalized fiducial inference for two extreme value problems and a multiple comparison procedure. The first extreme value problem is dealing with the generalized Pareto distribution. The generalized Pareto…

  17. Bispectral pairwise interacting source analysis for identifying systems of cross-frequency interacting brain sources from electroencephalographic or magnetoencephalographic signals

    Science.gov (United States)

    Chella, Federico; Pizzella, Vittorio; Zappasodi, Filippo; Nolte, Guido; Marzetti, Laura

    2016-05-01

    Brain cognitive functions arise through the coordinated activity of several brain regions, which actually form complex dynamical systems operating at multiple frequencies. These systems often consist of interacting subsystems, whose characterization is of importance for a complete understanding of the brain interaction processes. To address this issue, we present a technique, namely the bispectral pairwise interacting source analysis (biPISA), for analyzing systems of cross-frequency interacting brain sources when multichannel electroencephalographic (EEG) or magnetoencephalographic (MEG) data are available. Specifically, the biPISA makes it possible to identify one or many subsystems of cross-frequency interacting sources by decomposing the antisymmetric components of the cross-bispectra between EEG or MEG signals, based on the assumption that interactions are pairwise. Thanks to the properties of the antisymmetric components of the cross-bispectra, biPISA is also robust to spurious interactions arising from mixing artifacts, i.e., volume conduction or field spread, which always affect EEG or MEG functional connectivity estimates. This method is an extension of the pairwise interacting source analysis (PISA), which was originally introduced for investigating interactions at the same frequency, to the study of cross-frequency interactions. The effectiveness of this approach is demonstrated in simulations for up to three interacting source pairs and for real MEG recordings of spontaneous brain activity. Simulations show that the performances of biPISA in estimating the phase difference between the interacting sources are affected by the increasing level of noise rather than by the number of the interacting subsystems. The analysis of real MEG data reveals an interaction between two pairs of sources of central mu and beta rhythms, localizing in the proximity of the left and right central sulci.

  18. Gene ontology analysis of pairwise genetic associations in two genome-wide studies of sporadic ALS

    Directory of Open Access Journals (Sweden)

    Kim Nora

    2012-07-01

    analysis of pairwise genetic associations in two GWAS of sporadic ALS revealed a set of genes involved in cellular component organization and actin cytoskeleton, more specifically, that were not reported by prior GWAS. However, prior biological studies have implicated actin cytoskeleton in ALS and other motor neuron diseases. This study supports the idea that pathway-level analysis of GWAS data may discover important associations not revealed using conventional one-SNP-at-a-time approaches.

  19. A Quantitative Comparison of Semantic Web Page Segmentation Approaches

    NARCIS (Netherlands)

    Kreuzer, Robert; Hage, J.; Feelders, A.J.

    2015-01-01

    We compare three known semantic web page segmentation algorithms, each serving as an example of a particular approach to the problem, and one self-developed algorithm, WebTerrain, that combines two of the approaches. We compare the performance of the four algorithms for a large benchmark of modern

  20. Hong Kong Students' Approaches to Learning: Cross-Cultural Comparisons

    Science.gov (United States)

    Dasari, Bhoomiah

    2009-01-01

    Anecdotal evidence abounds in Hong Kong to the effect that students entering tertiary education are predisposed to a "rote" learning approach. With the internationalisation of higher education in many countries, there is still insufficient understanding of how Chinese students approach their learning. Except few studies were conducted…

  1. Comparison of effective Hough transform-based fingerprint alignment approaches

    CSIR Research Space (South Africa)

    Mlambo, CS

    2014-08-01

    Full Text Available points set with larger rotation and small number of points. The DRBA approach was found to perform better with minutiae points with large amount of translation, and the computational time was less than that of LMBA approach. However, the memory usage...

  2. Investigating design: A comparison of manifest and latent approaches

    DEFF Research Database (Denmark)

    Cash, Philip; Snider, Chris

    2014-01-01

    This paper contributes to the on-going focus on improving design research methods, by exploring and synthesising two key interrelated research approaches: manifest and latent. These approaches are widely used individually in design research, however, this paper represents the first work bringing...

  3. DIALIGN P: Fast pair-wise and multiple sequence alignment using parallel processors

    Directory of Open Access Journals (Sweden)

    Kaufmann Michael

    2004-09-01

    Full Text Available Abstract Background Parallel computing is frequently used to speed up computationally expensive tasks in Bioinformatics. Results Herein, a parallel version of the multi-alignment program DIALIGN is introduced. We propose two ways of dividing the program into independent sub-routines that can be run on different processors: (a) pair-wise sequence alignments that are used as a first step to multiple alignment account for most of the CPU time in DIALIGN. Since alignments of different sequence pairs are completely independent of each other, they can be distributed to multiple processors without any effect on the resulting output alignments. (b) For alignments of large genomic sequences, we use a heuristic by splitting up sequences into sub-sequences based on a previously introduced anchored alignment procedure. For our test sequences, this combined approach reduces the program running time of DIALIGN by up to 97%. Conclusions By distributing sub-routines to multiple processors, the running time of DIALIGN can be crucially improved. With these improvements, it is possible to apply the program in large-scale genomics and proteomics projects that were previously beyond its scope.
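
    The first strategy, distributing the mutually independent pairwise alignments over processors, maps naturally onto a process pool. The Python sketch below shows only the decomposition idea with a trivial stand-in scoring function; it is not DIALIGN's actual implementation, and the sequences are invented.

        from itertools import combinations
        from multiprocessing import Pool

        def align_pair(pair):
            """Stand-in for a pairwise alignment task; returns the pair and a dummy score."""
            a, b = pair
            return (a, b, sum(1 for x, y in zip(a, b) if x == y))  # naive identity count

        if __name__ == "__main__":
            seqs = ["ACGTACGT", "ACGAACGT", "TCGTACGA", "ACGTTCGT"]
            pairs = list(combinations(seqs, 2))        # all independent pairwise tasks
            with Pool(processes=4) as pool:
                results = pool.map(align_pair, pairs)  # distributed with no interdependencies
            for a, b, score in results:
                print(a, b, score)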

  4. A Comparison of Five Alternative Approaches to Information Systems Development

    Directory of Open Access Journals (Sweden)

    Rudy Hirschheim

    1997-11-01

    Full Text Available The field of information systems (IS) has grown dramatically over the past three decades. Recent trends have transformed the IS landscape. These trends include: the evolution of implementation technology from centralized mainframe environments towards distributed client-server architectures, embracing the internet and intranets; changes in user interface technology from character-based to graphical user interfaces, multimedia, and the World Wide Web; changes in applications from transaction processing systems towards systems supporting collaborative work; and the use of information technology as an enabler of business process reengineering and redesign. These technology changes coupled with changes in organizations and their operating environment, such as the growth of the network and virtual organization, internationalization and globalization of many organizations, intensified global competition, changes in values such as customer orientation (service quality and Quality of Working Life), have imposed new demands on the development of information systems. These changes have led to an increasing discussion about information systems development (ISD), and in particular, the various methods, tools, methodologies, and approaches for ISD. We believe such discussion has opened the door for new, alternative IS development approaches and methodologies. Our paper takes up this theme by describing five alternative ISD approaches, namely the Interactionist approach, the Speech Act-based approach, Soft Systems Methodology, the Trade Unionist approach, and the Professional Work Practices approach. Despite the fact that most of these approaches have a history of over 15 years, their relevance to IS development is not well recognized in the mainstream of IS practice and research, nor is their institutional status comparable to traditional approaches such as structured analysis and design methods. Therefore we characterize the five approaches as 'alternative' in

  5. Knowledge-based biomedical word sense disambiguation: comparison of approaches

    Directory of Open Access Journals (Sweden)

    Aronson Alan R

    2010-11-01

    Full Text Available Abstract Background Word sense disambiguation (WSD) algorithms attempt to select the proper sense of ambiguous terms in text. Resources like the UMLS provide a reference thesaurus to be used to annotate the biomedical literature. Statistical learning approaches have produced good results, but the size of the UMLS makes the production of training data infeasible to cover all the domain. Methods We present research on existing WSD approaches based on knowledge bases, which complement the studies performed on statistical learning. We compare four approaches which rely on the UMLS Metathesaurus as the source of knowledge. The first approach compares the overlap of the context of the ambiguous word to the candidate senses based on a representation built out of the definitions, synonyms and related terms. The second approach collects training data for each of the candidate senses to perform WSD based on queries built using monosemous synonyms and related terms. These queries are used to retrieve MEDLINE citations. Then, a machine learning approach is trained on this corpus. The third approach is a graph-based method which exploits the structure of the Metathesaurus network of relations to perform unsupervised WSD. This approach ranks nodes in the graph according to their relative structural importance. The last approach uses the semantic types assigned to the concepts in the Metathesaurus to perform WSD. The context of the ambiguous word and semantic types of the candidate concepts are mapped to Journal Descriptors. These mappings are compared to decide among the candidate concepts. Results are provided estimating accuracy of the different methods on the WSD test collection available from the NLM. Conclusions We have found that the last approach achieves better results compared to the other methods. The graph-based approach, using the structure of the Metathesaurus network to estimate the relevance of the Metathesaurus concepts, does not perform well

  6. Green Infrastructure and German Landscape Planning: A Comparison of Approaches

    Directory of Open Access Journals (Sweden)

    Catalina VIEIRA MEJÍA

    2015-11-01

    Full Text Available A variety of similarities between green infrastructure and the German landscape planning can be found in comparing the approaches of the two planning instruments. Principles of green infrastructure such as multifunctionality, the multi-scale approach and connectivity show correspondences with landscape planning elements. However, some differences are apparent. The objective of this paper is to determine whether the main aims of these two frameworks overlap. It also seeks to deduce what benefits from ecosystem services could be provided by integrating the green infrastructure approach into the German landscape planning system. The results show that the green infrastructure concept is not well-known in German planning practice, although its principles are generally implemented through traditional landscape planning. Nevertheless, green infrastructure could act as a supplementary approach to current landscape planning practices by improving public acceptance and strengthening the social focus of the current landscape planning system.

  7. Comparisons on International Approaches of Business and Project Risk Management

    OpenAIRE

    Nadia Carmen ENE

    2005-01-01

    In this article we intend to present a comparative approach between three recognized international methodologies for risk management: RISKMAN, Project Management Institute Methodology-PMBoK and Project Risk Analysis and Management Guide (produced by Association for Project Management).

  8. Comparison of topic extraction approaches and their results

    NARCIS (Netherlands)

    Velden, Theresa; Boyack, Kevin W.; Gläser, Jochen; Koopman, Rob; Scharnhorst, Andrea; Wang, Shenghui

    2017-01-01

    This is the last paper in the Synthesis section of this special issue on ‘Same Data, Different Results’. We first provide a framework of how to describe and distinguish approaches to topic extraction

  9. A configuration space of homologous proteins conserving mutual information and allowing a phylogeny inference based on pair-wise Z-score probabilities.

    Science.gov (United States)

    Bastien, Olivier; Ortet, Philippe; Roy, Sylvaine; Maréchal, Eric

    2005-03-10

    Popular methods to reconstruct molecular phylogenies are based on multiple sequence alignments, in which addition or removal of data may change the resulting tree topology. We have sought a representation of homologous proteins that would conserve the information of pair-wise sequence alignments, respect probabilistic properties of Z-scores (Monte Carlo methods applied to pair-wise comparisons) and be the basis for a novel method of consistent and stable phylogenetic reconstruction. We have built up a spatial representation of protein sequences using concepts from particle physics (configuration space) and respecting a frame of constraints deduced from pair-wise alignment score properties in information theory. The obtained configuration space of homologous proteins (CSHP) allows the representation of real and shuffled sequences, and thereupon an expression of the TULIP theorem for Z-score probabilities. Based on the CSHP, we propose a phylogeny reconstruction using Z-scores. Deduced trees, called TULIP trees, are consistent with multiple-alignment based trees. Furthermore, the TULIP tree reconstruction method provides a solution for some previously reported incongruent results, such as the apicomplexan enolase phylogeny. The CSHP is a unified model that conserves mutual information between proteins in the way physical models conserve energy. Applications include the reconstruction of evolutionary consistent and robust trees, the topology of which is based on a spatial representation that is not reordered after addition or removal of sequences. The CSHP and its assigned phylogenetic topology, provide a powerful and easily updated representation for massive pair-wise genome comparisons based on Z-score computations.
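
    The Z-scores referred to here come from comparing a pairwise alignment score against scores obtained for shuffled sequences (a Monte Carlo estimate). A minimal Python sketch of that computation is given below; the scoring function is a trivial placeholder rather than a real alignment score, and the sequences are invented.

        import random
        import statistics

        def score(a, b):
            """Placeholder pairwise score: number of identical positions."""
            return sum(1 for x, y in zip(a, b) if x == y)

        def z_score(seq_a, seq_b, n_shuffles=200, seed=0):
            """Monte Carlo Z-score of the pairwise score against shuffled versions of seq_b."""
            rng = random.Random(seed)
            real = score(seq_a, seq_b)
            shuffled_scores = []
            for _ in range(n_shuffles):
                letters = list(seq_b)
                rng.shuffle(letters)
                shuffled_scores.append(score(seq_a, "".join(letters)))
            mu = statistics.mean(shuffled_scores)
            sigma = statistics.stdev(shuffled_scores)
            return (real - mu) / sigma

        print(z_score("MKVLAAGIVLLA", "MKVLGAGLVLLA"))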

  10. A configuration space of homologous proteins conserving mutual information and allowing a phylogeny inference based on pair-wise Z-score probabilities

    Directory of Open Access Journals (Sweden)

    Maréchal Eric

    2005-03-01

    Full Text Available Abstract Background Popular methods to reconstruct molecular phylogenies are based on multiple sequence alignments, in which addition or removal of data may change the resulting tree topology. We have sought a representation of homologous proteins that would conserve the information of pair-wise sequence alignments, respect probabilistic properties of Z-scores (Monte Carlo methods applied to pair-wise comparisons) and be the basis for a novel method of consistent and stable phylogenetic reconstruction. Results We have built up a spatial representation of protein sequences using concepts from particle physics (configuration space) and respecting a frame of constraints deduced from pair-wise alignment score properties in information theory. The obtained configuration space of homologous proteins (CSHP) allows the representation of real and shuffled sequences, and thereupon an expression of the TULIP theorem for Z-score probabilities. Based on the CSHP, we propose a phylogeny reconstruction using Z-scores. Deduced trees, called TULIP trees, are consistent with multiple-alignment based trees. Furthermore, the TULIP tree reconstruction method provides a solution for some previously reported incongruent results, such as the apicomplexan enolase phylogeny. Conclusion The CSHP is a unified model that conserves mutual information between proteins in the way physical models conserve energy. Applications include the reconstruction of evolutionary consistent and robust trees, the topology of which is based on a spatial representation that is not reordered after addition or removal of sequences. The CSHP and its assigned phylogenetic topology, provide a powerful and easily updated representation for massive pair-wise genome comparisons based on Z-score computations.

  11. Pairwise additivity in the nuclear magnetic resonance interactions of atomic xenon.

    Science.gov (United States)

    Hanni, Matti; Lantto, Perttu; Vaara, Juha

    2009-04-14

    Nuclear magnetic resonance (NMR) of atomic (129/131)Xe is used as a versatile probe of the structure and dynamics of various host materials, due to the sensitivity of the Xe NMR parameters to intermolecular interactions. The principles governing this sensitivity can be investigated using the prototypic system of interacting Xe atoms. In the pairwise additive approximation (PAA), the binary NMR chemical shift, nuclear quadrupole coupling (NQC), and spin-rotation (SR) curves for the xenon dimer are utilized for fast and efficient evaluation of the corresponding NMR tensors in small xenon clusters Xe(n) (n = 2-12). If accurate, the preparametrized PAA enables the analysis of the NMR properties of xenon clusters, condensed xenon phases, and xenon gas without having to resort to electronic structure calculations of instantaneous configurations for n > 2. The binary parameters for Xe(2) at different internuclear distances were obtained at the nonrelativistic Hartree-Fock level of theory. Quantum-chemical (QC) calculations at the corresponding level were used to obtain the NMR parameters of the Xe(n) (n = 2-12) clusters at the equilibrium geometries. Comparison of PAA and QC data indicates that the direct use of the binary property curves of Xe(2) can be expected to be well-suited for the analysis of Xe NMR in the gaseous phase dominated by binary collisions. For use in condensed phases where many-body effects should be considered, effective binary property functions were fitted using the principal components of QC tensors from Xe(n) clusters. Particularly, the chemical shift in Xe(n) is strikingly well-described by the effective PAA. The coordination number Z of the Xe site is found to be the most important factor determining the chemical shift, with the largest shifts being found for high-symmetry sites with the largest Z. This is rationalized in terms of the density of virtual electronic states available for response to magnetic perturbations.
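
    In the pairwise additive approximation, the property of atom i in a cluster is the sum of the binary (dimer) property curve evaluated at each internuclear distance r_ij. The Python sketch below shows only that bookkeeping; the binary shift function is an arbitrary decaying placeholder, not the computed Xe2 curve, and the geometry is invented.

        import math

        def binary_shift(r):
            """Placeholder Xe2 chemical-shift curve vs. internuclear distance (arbitrary units)."""
            return 50.0 * math.exp(-r / 1.5)

        def pairwise_additive_shift(positions, i):
            """Chemical shift of atom i as a sum of binary contributions from all other atoms."""
            total = 0.0
            for j, pos_j in enumerate(positions):
                if j == i:
                    continue
                total += binary_shift(math.dist(positions[i], pos_j))
            return total

        # Three atoms on a line, 4.0 units apart (hypothetical geometry).
        cluster = [(0.0, 0.0, 0.0), (4.0, 0.0, 0.0), (8.0, 0.0, 0.0)]
        print([pairwise_additive_shift(cluster, i) for i in range(len(cluster))])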

  12. A Multiple Cross-Cultural Comparison of Approaches to Learning

    Science.gov (United States)

    Bowden, Mark P.; Abhayawansa, Subhash; Manzin, Gregoria

    2015-01-01

    This study compares learning approaches of local English-speaking students and students from Asian countries studying at an Australian metropolitan university. The sample consists of students across 13 different countries. Unlike previous studies, students from Asian countries are subdivided into two categories: students from Confucian Heritage…

  13. Comparison of educational facilitation approaches for Grade R ...

    African Journals Online (AJOL)

    The Early Childhood Development Manager in Mpumalanga is faced with the problem of providing evidence-based guidance of the best facilitation approach in the Grade R context. An investigation on the effect of facilitation, i.e. play-based or formal instruction, on Grade R performance scores in English Additional ...

  14. A Comparison of HPT and Traditional Training Approaches.

    Science.gov (United States)

    Kretz, Richard

    2002-01-01

    Focuses on the comparative use of training from human performance technology (HPT) and traditional training perspectives, based on taxonomy. Concludes that the primary difference is a holistic systems performance improvement approach by eliminating barriers with HPT versus reaction or response to a set of business objectives in traditional…

  15. Comparison of Calculation Approaches for Monopiles for Offshore Wind Turbines

    DEFF Research Database (Denmark)

    Augustesen, Anders Hust; Sørensen, Søren Peder Hyldal; Ibsen, Lars Bo

    2010-01-01

    Large-diameter (4 to 6m) monopiles are often used as foundations for offshore wind turbines. The monopiles are subjected to large horizontal forces and overturning moments and they are traditionally designed based on the p-y curve method (Winkler type approach). The p-y curves recommended in offs...

  16. Comparison of attitude determination approaches using multiple Global Positioning System (GPS antennas

    Directory of Open Access Journals (Sweden)

    Wang Bing

    2013-02-01

    Full Text Available A GPS-based attitude system is an important research field, since it is a valuable technique for the attitude determination of platforms. There exist two classes of approaches for attitude determination using GPS: one determines attitude via baseline estimates in two frames, while the other solves for attitude by incorporating the attitude parameters directly into the GPS measurements. However, comparisons between these two classes of approaches have remained unexplored. First, two algorithms representative of these two kinds of approaches are introduced in detail. We then present numerical simulations demonstrating the performance of the algorithms and provide a comparative evaluation.

  17. Comparison of Plant Life Management Approaches for Long Term Operations

    International Nuclear Information System (INIS)

    Kang, Kisig

    2012-01-01

    Plant life management can be defined as the integration of ageing and economic planning to maintain a high level of safety and optimize operations. Many Member States have given high priority to long term operation of nuclear power plants beyond the time frame originally anticipated (e. g. 30 or 40 years). Out of a total of 445 (369 GWe) operating nuclear power plants, 349 units (297 GWe) have been in operation for more than 20 years (as of November 2011). The need for engineering support to operation, maintenance, safety review and life management for long term operation as well as education and training in the field is increasingly evident. In addition, the Fukushima accident has rendered all stakeholders even more attentive to safety concerns and to the provision of beyond safety measures in the preparation and scrutiny of applications for operational design life extensions. In many countries, the safety performance of NPPs is periodically followed and characterized via the periodic safety review (PSR) approach. The regulatory review and acceptance of the PSR gives the licensee the permission to operate the plant for up to the end of the next PSR cycle (usually 10 years). In the USA and other countries operating US designed plants, the license renewal application is based on the five pre-requisite requirements, and ageing management programme for passive long life systems, structures and components (SSCs) and active systems is adequately addressed by the maintenance rule (MR) requirements and other established regulatory processes. Other Member States have adopted a combined approach that incorporates elements of both PSR and additional LRA specific requirements primarily focused on time limited ageing analysis. Taking into account this variety of approaches, the International Atomic Energy Agency (IAEA) initiated work for collecting and sharing information among Member States about good practices on plant life management for long term operation in

  18. Comparison of Different Approaches to the Cutting Plan Scheduling

    Directory of Open Access Journals (Sweden)

    Peter Bober

    2011-10-01

    Full Text Available Allocation of specific cutting plans and their scheduling to individual cutting machines presents a combinatorial optimization problem. In this respect, various approaches and methods are used to arrive at a viable solution. The paper reports three approaches represented by three discrete optimization methods. The first one is a back-tracing algorithm and serves as a reference to verify the functionality of the other two. The second method is optimization using genetic algorithms, and the third one presents a heuristic approach to optimization based on anticipated properties of an optimal solution. Research results indicate that genetic algorithms are computationally demanding though not dependent on the selected objective function. The heuristic algorithm is fast but dependent upon anticipated properties of the optimal solution. Hence, when the objective function changes, it has to be changed as well. When the scheduling by genetic algorithms is solvable in a sufficiently short period of time, it is more appropriate from a practical point of view than the heuristic algorithm. The back-tracing algorithm usually does not provide a result in a feasible period of time.

  19. Fast and accurate estimation of the covariance between pairwise maximum likelihood distances

    Directory of Open Access Journals (Sweden)

    Manuel Gil

    2014-09-01

    Full Text Available Pairwise evolutionary distances are a model-based summary statistic for a set of molecular sequences. They represent the leaf-to-leaf path lengths of the underlying phylogenetic tree. Estimates of pairwise distances with overlapping paths covary because of shared mutation events. It is desirable to take this covariance structure into account to increase precision in any process that compares or combines distances. This paper introduces a fast estimator for the covariance of two pairwise maximum likelihood distances, estimated under general Markov models. The estimator is based on a conjecture (going back to Nei & Jin, 1989) which links the covariance to path lengths. It is proven here under a simple symmetric substitution model. A simulation shows that the estimator outperforms previously published ones in terms of the mean squared error.
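
    As a point of reference for what a fast estimator replaces, the Python sketch below estimates the covariance of two pairwise distances by brute force: resampling alignment columns (a bootstrap) and computing the empirical covariance, using simple p-distances instead of maximum likelihood distances. The toy alignment is invented, and this is not the estimator proposed in the paper.

        import random

        def p_distance(a, b):
            """Fraction of differing sites between two aligned sequences."""
            return sum(1 for x, y in zip(a, b) if x != y) / len(a)

        def bootstrap_cov(seqs, pair1, pair2, n_boot=500, seed=0):
            """Empirical covariance of two pairwise distances under site resampling."""
            rng = random.Random(seed)
            length = len(next(iter(seqs.values())))
            d1s, d2s = [], []
            for _ in range(n_boot):
                cols = [rng.randrange(length) for _ in range(length)]
                resampled = {k: "".join(v[c] for c in cols) for k, v in seqs.items()}
                d1s.append(p_distance(resampled[pair1[0]], resampled[pair1[1]]))
                d2s.append(p_distance(resampled[pair2[0]], resampled[pair2[1]]))
            m1, m2 = sum(d1s) / n_boot, sum(d2s) / n_boot
            return sum((x - m1) * (y - m2) for x, y in zip(d1s, d2s)) / (n_boot - 1)

        # Toy alignment; the pairs (A,B) and (A,C) share the path leading to A.
        seqs = {"A": "ACGTACGTACGTACGT", "B": "ACGTACGAACGTACGT", "C": "ACGTTCGTACGTACCT"}
        print(bootstrap_cov(seqs, ("A", "B"), ("A", "C")))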

  20. Fast and accurate estimation of the covariance between pairwise maximum likelihood distances.

    Science.gov (United States)

    Gil, Manuel

    2014-01-01

    Pairwise evolutionary distances are a model-based summary statistic for a set of molecular sequences. They represent the leaf-to-leaf path lengths of the underlying phylogenetic tree. Estimates of pairwise distances with overlapping paths covary because of shared mutation events. It is desirable to take this covariance structure into account to increase precision in any process that compares or combines distances. This paper introduces a fast estimator for the covariance of two pairwise maximum likelihood distances, estimated under general Markov models. The estimator is based on a conjecture (going back to Nei & Jin, 1989) which links the covariance to path lengths. It is proven here under a simple symmetric substitution model. A simulation shows that the estimator outperforms previously published ones in terms of the mean squared error.

  1. An ANOVA approach for statistical comparisons of brain networks.

    Science.gov (United States)

    Fraiman, Daniel; Fraiman, Ricardo

    2018-03-16

    The study of brain networks has developed extensively over the last couple of decades. By contrast, techniques for the statistical analysis of these networks are less developed. In this paper, we focus on the statistical comparison of brain networks in a nonparametric framework and discuss the associated detection and identification problems. We tested network differences between groups with an analysis of variance (ANOVA) test we developed specifically for networks. We also propose and analyse the behaviour of a new statistical procedure designed to identify different subnetworks. As an example, we show the application of this tool in resting-state fMRI data obtained from the Human Connectome Project. We identify, among other variables, that the amount of sleep the days before the scan is a relevant variable that must be controlled. Finally, we discuss the potential bias in neuroimaging findings that is generated by some behavioural and brain structure variables. Our method can also be applied to other kind of networks such as protein interaction networks, gene networks or social networks.

  2. Comparison of different homogenization approaches for elastic–viscoplastic materials

    International Nuclear Information System (INIS)

    Mercier, S; Molinari, A; Berbenni, S; Berveiller, M

    2012-01-01

    Homogenization of linear viscoelastic and non-linear viscoplastic composite materials is considered in this paper. First, we compare two homogenization schemes based on the Mori–Tanaka method coupled with the additive interaction (AI) law proposed by Molinari et al (1997 Mech. Mater. 26 43–62) or coupled with a concentration law based on translated fields (TF) originally proposed for the self-consistent scheme by Paquin et al (1999 Arch. Appl. Mech. 69 14–35). These methods are also evaluated against (i) full-field calculations of the literature based on the finite element method and on fast Fourier transform, (ii) available analytical exact solutions obtained in linear viscoelasticity and (iii) homogenization methods based on variational approaches. Developments of the AI model are obtained for linear and non-linear material responses while results for the TF method are shown for the linear case. Various configurations are considered: spherical inclusions, aligned fibers, hard and soft inclusions, large material contrasts between phases, volume-preserving versus dilatant anelastic flow, non-monotonic loading. The agreement between the AI and TF methods is excellent and the correlation with full field calculations is in general of quite good quality (with some exceptions for non-linear composites with a large volume fraction of very soft inclusions for which a discrepancy of about 15% was found for macroscopic stress). Description of the material behavior with internal variables can be accounted for with the AI and TF approaches and therefore complex loadings can be easily handled in contrast with most hereditary approaches. (paper)

  3. A comparison of the Bayesian and frequentist approaches to estimation

    CERN Document Server

    Samaniego, Francisco J

    2010-01-01

    This monograph contributes to the area of comparative statistical inference. Attention is restricted to the important subfield of statistical estimation. The book is intended for an audience having a solid grounding in probability and statistics at the level of the year-long undergraduate course taken by statistics and mathematics majors. The necessary background on Decision Theory and the frequentist and Bayesian approaches to estimation is presented and carefully discussed in Chapters 1-3. The 'threshold problem' - identifying the boundary between Bayes estimators which tend to outperform st

  4. A comprehensive comparison of comparative RNA structure prediction approaches

    DEFF Research Database (Denmark)

    Gardner, P. P.; Giegerich, R.

    2004-01-01

    Background An increasing number of researchers have released novel RNA structure analysis and prediction algorithms for comparative approaches to structure prediction. Yet, independent benchmarking of these algorithms is rarely performed as is now common practice for protein-folding, gene-finding and multiple-sequence-alignment algorithms. Results Here we evaluate a number of RNA folding algorithms using reliable RNA data-sets and compare their relative performance. Conclusions We conclude that comparative data can enhance structure prediction but structure-prediction-algorithms vary widely in terms...

  5. Power and sample-size estimation for microbiome studies using pairwise distances and PERMANOVA.

    Science.gov (United States)

    Kelly, Brendan J; Gross, Robert; Bittinger, Kyle; Sherrill-Mix, Scott; Lewis, James D; Collman, Ronald G; Bushman, Frederic D; Li, Hongzhe

    2015-08-01

    The variation in community composition between microbiome samples, termed beta diversity, can be measured by pairwise distance based on either presence-absence or quantitative species abundance data. PERMANOVA, a permutation-based extension of multivariate analysis of variance to a matrix of pairwise distances, partitions within-group and between-group distances to permit assessment of the effect of an exposure or intervention (grouping factor) upon the sampled microbiome. Within-group distance and exposure/intervention effect size must be accurately modeled to estimate statistical power for a microbiome study that will be analyzed with pairwise distances and PERMANOVA. We present a framework for PERMANOVA power estimation tailored to marker-gene microbiome studies that will be analyzed by pairwise distances, which includes: (i) a novel method for distance matrix simulation that permits modeling of within-group pairwise distances according to pre-specified population parameters; (ii) a method to incorporate effects of different sizes within the simulated distance matrix; (iii) a simulation-based method for estimating PERMANOVA power from simulated distance matrices; and (iv) an R statistical software package that implements the above. Matrices of pairwise distances can be efficiently simulated to satisfy the triangle inequality and incorporate group-level effects, which are quantified by the adjusted coefficient of determination, omega-squared (ω2). From simulated distance matrices, available PERMANOVA power or necessary sample size can be estimated for a planned microbiome study. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
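
    The PERMANOVA statistic itself is straightforward to compute from a matrix of pairwise distances: within-group and total sums of squared distances give a pseudo-F, whose null distribution comes from permuting group labels. The Python sketch below is a minimal two-group version with made-up distances; the paper's actual contribution (simulating distance matrices with specified omega-squared effect sizes for power estimation) is not reproduced here.

        import itertools
        import random

        def pseudo_f(dist, groups):
            """PERMANOVA pseudo-F from a square distance matrix and group labels."""
            n = len(groups)
            a = len(set(groups))
            ss_total = sum(dist[i][j] ** 2 for i, j in itertools.combinations(range(n), 2)) / n
            ss_within = 0.0
            for g in set(groups):
                idx = [i for i in range(n) if groups[i] == g]
                ss_within += sum(dist[i][j] ** 2
                                 for i, j in itertools.combinations(idx, 2)) / len(idx)
            ss_between = ss_total - ss_within
            return (ss_between / (a - 1)) / (ss_within / (n - a))

        def permanova_p(dist, groups, n_perm=999, seed=1):
            """Permutation p-value for the observed pseudo-F."""
            rng = random.Random(seed)
            observed = pseudo_f(dist, groups)
            hits = sum(1 for _ in range(n_perm)
                       if pseudo_f(dist, rng.sample(groups, len(groups))) >= observed)
            return observed, (hits + 1) / (n_perm + 1)

        # Tiny illustrative 4-sample distance matrix and two groups (not real data).
        D = [[0.0, 0.2, 0.7, 0.8],
             [0.2, 0.0, 0.6, 0.9],
             [0.7, 0.6, 0.0, 0.3],
             [0.8, 0.9, 0.3, 0.0]]
        print(permanova_p(D, ["A", "A", "B", "B"]))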

  6. Perceptron learning of pairwise contact energies for proteins incorporating the amino acid environment

    Science.gov (United States)

    Heo, Muyoung; Kim, Suhkmann; Moon, Eun-Joung; Cheon, Mookyung; Chung, Kwanghoon; Chang, Iksoo

    2005-07-01

    Although a coarse-grained description of proteins is a simple and convenient way to attack the protein folding problem, the construction of a global pairwise energy function which can simultaneously recognize the native folds of many proteins has resulted in partial success. We have sought the possibility of a systematic improvement of this pairwise-contact energy function as we extended the parameter space of amino acids, incorporating local environments of amino acids, beyond a 20×20 matrix. We have studied the pairwise contact energy functions of 20×20, 60×60, and 180×180 matrices depending on the extent of parameter space, and compared their effect on the learnability of energy parameters in the context of a gapless threading, bearing in mind that a 20×20 pairwise contact matrix has been shown to be too simple to recognize the native folds of many proteins. In this paper, we show that the construction of a global pairwise energy function was achieved using 1006 training proteins of a homology of less than 30%, which include all representatives of different protein classes. After parametrizing the local environments of the amino acids into nine categories depending on three secondary structures and three kinds of hydrophobicity (desolvation), the 16290 pairwise contact energies (scores) of the amino acids could be determined by perceptron learning and protein threading. These could simultaneously recognize all the native folds of the 1006 training proteins. When these energy parameters were tested on the 382 test proteins of a homology of less than 90%, 370 (96.9%) proteins could recognize their native folds. We set up a simple thermodynamic framework in the conformational space of decoys to calculate the unfolded fraction and the specific heat of real proteins. The different thermodynamic stabilities of E. coli ribonuclease H (RNase H) and its mutants were well described in our calculation, agreeing with the experiment.
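
    The perceptron step used in this kind of threading-based learning is a simple margin update: whenever a decoy scores at least as low in energy as the native structure, the energy parameters are nudged to penalise the decoy's contacts and reward the native ones. The Python sketch below shows that update on contact-count vectors; the vectors and learning rate are placeholders, not the 16290-parameter setup of the paper.

        def perceptron_update(weights, native_counts, decoy_counts, eta=0.1):
            """One pass of perceptron learning enforcing E(native) < E(decoy) for each decoy.

            E(structure) = sum_k weights[k] * contact_counts[k]; the weights are the
            learned pairwise contact energies.
            """
            for decoy in decoy_counts:
                e_native = sum(w * c for w, c in zip(weights, native_counts))
                e_decoy = sum(w * c for w, c in zip(weights, decoy))
                if e_native >= e_decoy:  # violated: native is not lower in energy
                    weights = [w + eta * (d - n)
                               for w, n, d in zip(weights, native_counts, decoy)]
            return weights

        # Hypothetical 4-dimensional contact-count vectors for a native fold and two decoys.
        w = [0.0, 0.0, 0.0, 0.0]
        native = [3, 0, 2, 1]
        decoys = [[1, 2, 1, 2], [2, 1, 0, 3]]
        print(perceptron_update(w, native, decoys))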

  7. Comparison of interbody fusion approaches for disabling low back pain.

    Science.gov (United States)

    Hacker, R J

    1997-03-15

    This is a study comparing two groups of patients surgically treated for disabling low back pain. One group was treated with lumbar anteroposterior fusion (360 degrees fusion), the other with posterior lumbar interbody fusion and an interbody fixation device. To determine which approach provided the best and most cost-effective outcome using similar patient selection criteria. Others have shown that certain patients with disabling low back pain benefit from lumbar fusion. Although rarely reported, the costs of different surgical treatments appear to vary significantly, whereas the patient outcome may vary little. Since 1991, 75 patients have been treated. Starting in 1993, posterior lumbar interbody fusion (BAK) was offered to patients as an alternative to 360 degrees fusion. The treating surgeon reviewed the cases. The interbody fixation device used (BAK; Spine-Tech, Inc., Minneapolis, MN) was part of a Food and Drug Administration study. Patient selection criteria included examination, response to conservative therapy, imaging, psychological profile, and discography. North American Spine Society outcome questionnaires, BAK investigation data, radiographs, chart entries, billing records and patient interviews were the basis for assessment. Age, sex, compensable injury history and history of previous surgery were similar. Operative time, blood loss, hospitalization time, and total costs were significantly different. There was a quicker return to work and closure of workers compensation claims for the posterior lumbar interbody fusion-BAK group. Patient satisfaction was comparable at last follow-up. Posterior lumbar interbody fusion-BAK achieves equal patient satisfaction but fiscally surpasses the 360 degrees fusion approach. Today's environment of regulated medical practice requires the surgeon to consider cost effectiveness when performing fusion for low back pain.

  8. Ensemble survival tree models to reveal pairwise interactions of variables with time-to-events outcomes in low-dimensional setting

    Science.gov (United States)

    Dazard, Jean-Eudes; Ishwaran, Hemant; Mehlotra, Rajeev; Weinberg, Aaron; Zimmerman, Peter

    2018-01-01

    Unraveling interactions among variables such as genetic, clinical, demographic and environmental factors is essential to understand the development of common and complex diseases. To increase the power to detect such variable interactions associated with clinical time-to-events outcomes, we borrowed established concepts from random survival forest (RSF) models. We introduce a novel RSF-based pairwise interaction estimator and derive a randomization method with bootstrap confidence intervals for inferring interaction significance. Using various linear and nonlinear time-to-events survival models in simulation studies, we first show the efficiency of our approach: true pairwise interaction-effects between variables are uncovered, while they may not be accompanied with their corresponding main-effects, and may not be detected by standard semi-parametric regression modeling and test statistics used in survival analysis. Moreover, using a RSF-based cross-validation scheme for generating prediction estimators, we show that informative predictors may be inferred. We applied our approach to an HIV cohort study recording key host gene polymorphisms and their association with HIV change of tropism or AIDS progression. Altogether, this shows how linear or nonlinear pairwise statistical interactions of variables may be efficiently detected with a predictive value in observational studies with time-to-event outcomes. PMID:29453930

  9. Comparison of four approaches to a rock facies classification problem

    Science.gov (United States)

    Dubois, M.K.; Bohling, Geoffrey C.; Chakrabarti, S.

    2007-01-01

    In this study, seven classifiers based on four different approaches were tested in a rock facies classification problem: classical parametric methods using Bayes' rule, and non-parametric methods using fuzzy logic, k-nearest neighbor, and feed forward-back propagating artificial neural network. Determining the most effective classifier for geologic facies prediction in wells without cores in the Panoma gas field, in Southwest Kansas, was the objective. Study data include 3600 samples with known rock facies class (from core) with each sample having either four or five measured properties (wire-line log curves), and two derived geologic properties (geologic constraining variables). The sample set was divided into two subsets, one for training and one for testing the ability of the trained classifier to correctly assign classes. Artificial neural networks clearly outperformed all other classifiers and are effective tools for this particular classification problem. Classical parametric models were inadequate due to the nature of the predictor variables (high dimensional and not linearly correlated), and feature space of the classes (overlapping). The other non-parametric methods tested, k-nearest neighbor and fuzzy logic, would need considerable improvement to match the neural network effectiveness, but further work, possibly combining certain aspects of the three non-parametric methods, may be justified. © 2006 Elsevier Ltd. All rights reserved.

  10. Comparison of Resource Platform Selection Approaches for Scientific Workflows

    Energy Technology Data Exchange (ETDEWEB)

    Simmhan, Yogesh; Ramakrishnan, Lavanya

    2010-03-05

    Cloud computing is increasingly considered as an additional computational resource platform for scientific workflows. The cloud offers opportunity to scale-out applications from desktops and local cluster resources. At the same time, it can eliminate the challenges of restricted software environments and queue delays in shared high performance computing environments. Choosing from these diverse resource platforms for a workflow execution poses a challenge for many scientists. Scientists are often faced with deciding resource platform selection trade-offs with limited information on the actual workflows. While many workflow planning methods have explored task scheduling onto different resources, these methods often require fine-scale characterization of the workflow that is onerous for a scientist. In this position paper, we describe our early exploratory work into using blackbox characteristics to do a cost-benefit analysis of using cloud platforms. We use only very limited high-level information on the workflow length, width, and data sizes. The length and width are indicative of the workflow duration and parallelism. The data size characterizes the IO requirements. We compare the effectiveness of this approach to other resource selection models using two exemplar scientific workflows scheduled on desktops, local clusters, HPC centers, and clouds. Early results suggest that the blackbox model often makes the same resource selections as a more fine-grained whitebox model. We believe the simplicity of the blackbox model can help inform a scientist on the applicability of cloud computing resources even before porting an existing workflow.

  11. A Comparison of the Approaches to Customer Experience Analysis

    Directory of Open Access Journals (Sweden)

    Havíř David

    2017-08-01

    Full Text Available Nowadays, customer experience is receiving much attention in scientific and managerial community. Scholars and practitioners state that customer experience is the next area of competition. For a long time, there has been a call for a uniform, accurate definition, definition of its components, and the development of the customer experience frameworks. As this topic is new, there has been a considerable fragmentation. The question is if the fragmentation is still present and how can we address it. The aim of this paper is to summarize research on customer experience analysis and to explore and compare the dimensions describing customer experience listed in seven conceptual models with findings from 17 research projects on customer experience conducted after the year 2010. The purpose of this is to summarize recent knowledge, get the most comprehensive view on customer experience and its possible decomposition, and to reveal possible relationships between the dimensions. Based on a review of the available literature, the paper juxtaposes several approaches to customer experience analysis and compares their results to find similarities and differences among them. In the first step, the dimensions and factors of the customer experience were extracted from the seven models to analyze customer experience and they were compared with each other. This resulted in a set of dimensions and factors. In the next step, customer experience factors and dimensions were extracted from 17 practical research papers on customer experience. Finally, based on their descriptions and found similarities, the dimensions and factors were put together into several groups, as this grouping and creation of the new universal set of dimensions might solve the fragmentation issue.

  12. Comparison of wind mill cluster performance: A multicriteria approach

    Energy Technology Data Exchange (ETDEWEB)

    Rajakumar, D.G.; Nagesha, N. [Visvesvaraya Technological Univ., Karnataka (India)

    2012-07-01

    Energy is a crucial input for the economic and social development of any nation. Both renewable and non-renewable energy contribute in meeting the total requirement of the economy. As an affordable and clean energy source, wind energy is amongst the world's fastest growing renewable energy forms. Though there are several wind-mill clusters producing energy in different geographical locations, evaluating their performance is a complex task and not much literature is available in this area. Against this backdrop, an attempt is made in the current paper to estimate the performance of a wind-mill cluster through an index called the Cluster Performance Index (CPI), adopting a multi-criteria approach. The proposed CPI comprises four criteria, viz., Technical Performance Indicators (TePI), Economic Performance Indicators (EcPI), Environmental Performance Indicators (EnPI), and Sociological Performance Indicators (SoPI). Under each performance criterion a total of ten parameters are considered, with five subjective and five objective oriented responses. The methodology is implemented by collecting empirical data from three wind-mill clusters located at Chitradurga, Davangere, and Gadag in the southern Indian State of Karnataka. In total, fifteen different stakeholders were consulted through a set of structured, researcher-administered questionnaires to collect the relevant data in each wind farm. Stakeholders included engineers working in wind farms, wind farm developers, Government officials from the energy department and a few selected residents near the wind farms. The results of the study revealed that the Chitradurga wind farm performed much better, with a CPI of 45.267, as compared to the Gadag (CPI of 28.362) and Davangere (CPI of 19.040) wind farms. (Author)

  13. Assessment of crystalline disorder in cryo-milled samples of indomethacin using atomic pair-wise distribution functions

    DEFF Research Database (Denmark)

    Bøtker, Johan P; Karmwar, Pranav; Strachan, Clare J

    2011-01-01

    The aim of this study was to investigate the usefulness of the atomic pair-wise distribution function (PDF) to detect the extension of disorder/amorphousness induced into a crystalline drug using a cryo-milling technique, and to determine the optimal milling times to achieve amorphisation. The PDF was used to analyse the cryo-milled samples. The high similarity between the γ-indomethacin cryogenic ball milled samples and the crude γ-indomethacin indicated that milled samples retained residual order of the γ-form. The PDF analysis encompassed the capability of achieving a correlation with the physical properties determined from DSC, ss-NMR and stability experiments. Multivariate data analysis (MVDA) was used to visualize the differences in the PDF and XRPD data. The MVDA approach revealed that PDF is more efficient in assessing the introduced degree of disorder in γ-indomethacin after cryo-milling than

  14. Use of an excess variance approach for the certification of reference materials by interlaboratory comparison

    International Nuclear Information System (INIS)

    Crozet, M.; Rigaux, C.; Roudil, D.; Tuffery, B.; Ruas, A.; Desenfant, M.

    2014-01-01

    In the nuclear field, the accuracy and comparability of analytical results are crucial to ensure correct accountancy, good process control and safe operational conditions. All of these require reliable measurements based on reference materials whose certified values must be obtained by robust metrological approaches according to the requirements of ISO guides 34 and 35. The data processing of the characterization step is one of the key steps of a reference material production process. Among several methods, the use of interlaboratory comparison results for reference material certification is very common. The DerSimonian and Laird excess variance approach, described and implemented in this paper, is a simple and efficient method for the data processing of interlaboratory comparison results for reference material certification. By taking into account not only the laboratory uncertainties but also the spread of the individual results into the calculation of the weighted mean, this approach minimizes the risk of obtaining biased certified values in cases where one or several laboratories either underestimate their measurement uncertainties or do not identify all measurement biases. This statistical method has been applied to a new CETAMA plutonium reference material certified by interlaboratory comparison and has been compared to the classical weighted mean approach described in ISO Guide 35. This paper shows the benefits of using an 'excess variance' approach for the certification of reference material by interlaboratory comparison. (authors)
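
    For concreteness, the DerSimonian and Laird estimator adds a between-laboratory ("excess") variance tau^2 to each laboratory's own variance before computing the weighted mean. The Python sketch below implements the standard formulas on made-up laboratory results; it illustrates the calculation only and is not the certified data processing of the CETAMA material.

        def dersimonian_laird(values, std_uncs):
            """Weighted mean with DerSimonian-Laird excess (between-laboratory) variance."""
            k = len(values)
            w = [1.0 / s**2 for s in std_uncs]
            mean_fixed = sum(wi * y for wi, y in zip(w, values)) / sum(w)
            q = sum(wi * (y - mean_fixed) ** 2 for wi, y in zip(w, values))
            denom = sum(w) - sum(wi**2 for wi in w) / sum(w)
            tau2 = max(0.0, (q - (k - 1)) / denom)       # excess variance
            w_star = [1.0 / (s**2 + tau2) for s in std_uncs]
            mean = sum(wi * y for wi, y in zip(w_star, values)) / sum(w_star)
            u_mean = (1.0 / sum(w_star)) ** 0.5          # standard uncertainty of the mean
            return mean, u_mean, tau2

        # Hypothetical interlaboratory results (value, standard uncertainty) for one analyte.
        values = [10.02, 9.95, 10.10, 9.88]
        uncs = [0.03, 0.05, 0.04, 0.02]
        print(dersimonian_laird(values, uncs))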

  15. Pairwise protein expression classifier for candidate biomarker discovery for early detection of human disease prognosis

    Directory of Open Access Journals (Sweden)

    Kaur Parminder

    2012-08-01

    Full Text Available Abstract Background An approach to molecular classification based on the comparative expression of protein pairs is presented. The method overcomes some of the present limitations in using peptide intensity data for class prediction for problems such as the detection of a disease, disease prognosis, or for predicting treatment response. Data analysis is particularly challenging in these situations due to sample size (typically tens) being much smaller than the large number of peptides (typically thousands). Methods based upon high dimensional statistical models, machine learning or other complex classifiers generate decisions which may be very accurate but can be complex and difficult to interpret in simple or biologically meaningful terms. A classification scheme, called ProtPair, is presented that generates simple decision rules leading to accurate classification which is based on measurement of very few proteins and requires only relative expression values, providing specific targeted hypotheses suitable for straightforward validation. Results ProtPair has been tested against clinical data from 21 patients following a bone marrow transplant, 13 of which progress to idiopathic pneumonia syndrome (IPS). The approach combines multiple peptide pairs originating from the same set of proteins, with each unique peptide pair providing an independent measure of discriminatory power. The prediction rate of ProtPair for the IPS study as measured by leave-one-out CV is 69.1%, which can be very beneficial for clinical diagnosis as it may flag patients in need of closer monitoring. The “top ranked” proteins provided by ProtPair are known to be associated with the biological processes and pathways intimately associated with known IPS biology based on mouse models. Conclusions An approach to biomarker discovery, called ProtPair, is presented. ProtPair is based on the differential expression of pairs of peptides and the associated proteins. Using mass
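
    The decision rules described here are of the "relative expression" type: class membership is suggested by whether one protein's level exceeds its partner's within the same sample. The Python sketch below implements such pairwise rules plus a simple majority vote; the protein names, expression values and rules are invented for illustration and are not ProtPair's actual rules or IPS data.

        def pair_votes(sample, rules):
            """Each rule is (protein_a, protein_b, class_if_a_greater, class_if_not).

            Votes come from within-sample comparisons only, so no normalisation
            across samples is required.
            """
            votes = {}
            for a, b, cls_gt, cls_le in rules:
                cls = cls_gt if sample[a] > sample[b] else cls_le
                votes[cls] = votes.get(cls, 0) + 1
            return max(votes, key=votes.get)

        # Invented expression values and pairwise rules (not from the IPS study).
        sample = {"P1": 12.4, "P2": 8.9, "P3": 3.1, "P4": 5.6}
        rules = [("P1", "P2", "IPS", "control"),
                 ("P3", "P4", "IPS", "control"),
                 ("P1", "P4", "IPS", "control")]
        print(pair_votes(sample, rules))  # majority class over the pairwise rules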

  16. Document Level Assessment of Document Retrieval Systems in a Pairwise System Evaluation

    Science.gov (United States)

    Rajagopal, Prabha; Ravana, Sri Devi

    2017-01-01

    Introduction: The use of averaged topic-level scores can result in the loss of valuable data and can cause misinterpretation of the effectiveness of system performance. This study aims to use the scores of each document to evaluate document retrieval systems in a pairwise system evaluation. Method: The chosen evaluation metrics are document-level…

  17. Revisiting the classification of curtoviruses based on genome-wide pairwise identity

    KAUST Repository

    Varsani, Arvind

    2014-01-25

    Members of the genus Curtovirus (family Geminiviridae) are important pathogens of many wild and cultivated plant species. Until recently, relatively few full curtovirus genomes had been characterised. However, with the 19 full genome sequences now available in public databases, we revisit the proposed curtovirus species and strain classification criteria. Using pairwise identities coupled with phylogenetic evidence, revised species and strain demarcation guidelines have been instituted. Specifically, we have established 77% genome-wide pairwise identity as a species demarcation threshold and 94% genome-wide pairwise identity as a strain demarcation threshold. Hence, whereas curtovirus sequences with >77% genome-wide pairwise identity would be classified as belonging to the same species, those sharing >94% identity would be classified as belonging to the same strain. We provide step-by-step guidelines to facilitate the classification of newly discovered curtovirus full genome sequences and a set of defined criteria for naming new species and strains. The revision yields three curtovirus species: Beet curly top virus (BCTV), Spinach severe curly top virus (SpSCTV) and Horseradish curly top virus (HrCTV). © 2014 Springer-Verlag Wien.
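
    As a hypothetical illustration of how the quoted demarcation thresholds could be applied, the sketch below classifies a pair of genomes from their genome-wide pairwise identity. The identity function shown is a toy placeholder for the full-genome alignment actually used in such analyses.

```python
def classify_pair(identity):
    """Interpret a genome-wide pairwise identity (0-1) using the
    curtovirus demarcation thresholds quoted in the abstract."""
    if identity > 0.94:
        return "same strain"
    elif identity > 0.77:
        return "same species, different strains"
    else:
        return "different species"

def pairwise_identity(seq_a, seq_b):
    """Toy placeholder: fraction of identical positions in two pre-aligned
    sequences of equal length (real analyses use a full genome alignment)."""
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return matches / len(seq_a)

print(classify_pair(pairwise_identity("ACGTACGT", "ACGTACGA")))
```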

  18. Revisiting the classification of curtoviruses based on genome-wide pairwise identity

    KAUST Repository

    Varsani, Arvind; Martin, Darren Patrick; Navas-Castillo, Jesús; Moriones, Enrique; Hernández-Zepeda, Cecilia; Idris, Ali; Murilo Zerbini, F.; Brown, Judith K.

    2014-01-01

    Members of the genus Curtovirus (family Geminiviridae) are important pathogens of many wild and cultivated plant species. Until recently, relatively few full curtovirus genomes had been characterised. However, with the 19 full genome sequences now available in public databases, we revisit the proposed curtovirus species and strain classification criteria. Using pairwise identities coupled with phylogenetic evidence, revised species and strain demarcation guidelines have been instituted. Specifically, we have established 77% genome-wide pairwise identity as a species demarcation threshold and 94% genome-wide pairwise identity as a strain demarcation threshold. Hence, whereas curtovirus sequences with >77% genome-wide pairwise identity would be classified as belonging to the same species, those sharing >94% identity would be classified as belonging to the same strain. We provide step-by-step guidelines to facilitate the classification of newly discovered curtovirus full genome sequences and a set of defined criteria for naming new species and strains. The revision yields three curtovirus species: Beet curly top virus (BCTV), Spinach severe curly top virus (SpSCTV) and Horseradish curly top virus (HrCTV). © 2014 Springer-Verlag Wien.

  19. A gradient approximation for calculating Debye temperatures from pairwise interatomic potentials

    International Nuclear Information System (INIS)

    Jackson, D.P.

    1975-09-01

    A simple gradient approximation is given for calculating the effective Debye temperature of a cubic crystal from central pairwise interatomic potentials. For examples of the Morse potential applied to cubic metals, the results are generally in good agreement with experiment. (author)

  20. A Comparative Study of Pairwise Learning Methods Based on Kernel Ridge Regression.

    Science.gov (United States)

    Stock, Michiel; Pahikkala, Tapio; Airola, Antti; De Baets, Bernard; Waegeman, Willem

    2018-06-12

    Many machine learning problems can be formulated as predicting labels for a pair of objects. Problems of that kind are often referred to as pairwise learning, dyadic prediction, or network inference problems. During the past decade, kernel methods have played a dominant role in pairwise learning. They still achieve state-of-the-art predictive performance, but a theoretical analysis of their behavior remains underexplored in the machine learning literature. In this work we review and unify kernel-based algorithms that are commonly used in different pairwise learning settings, ranging from matrix filtering to zero-shot learning. To this end, we focus on closed-form efficient instantiations of Kronecker kernel ridge regression. We show that independent task kernel ridge regression, two-step kernel ridge regression, and a linear matrix filter arise naturally as special cases of Kronecker kernel ridge regression, implying that all these methods implicitly minimize a squared loss. In addition, we analyze universality, consistency, and spectral filtering properties. Our theoretical results provide valuable insights into assessing the advantages and limitations of existing pairwise learning methods.
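
    The closed-form Kronecker kernel ridge regression referred to above can be sketched as follows. This is a generic eigendecomposition-based implementation under the assumption of a complete training label matrix, not the authors' code, and the kernels and labels in the example are random toys.

```python
import numpy as np

def kronecker_krr_fit(Ku, Kv, Y, lam):
    """Closed-form Kronecker kernel ridge regression.

    Solves vec(A) = (Kv kron Ku + lam*I)^-1 vec(Y) without forming the
    Kronecker product, using eigendecompositions of the two kernels.
    Ku : (n_u, n_u) kernel over row objects
    Kv : (n_v, n_v) kernel over column objects
    Y  : (n_u, n_v) matrix of pairwise labels
    """
    lu, U = np.linalg.eigh(Ku)
    lv, V = np.linalg.eigh(Kv)
    S = 1.0 / (np.outer(lu, lv) + lam)   # one filter factor per eigenvalue pair
    A = U @ (S * (U.T @ Y @ V)) @ V.T    # dual coefficients
    return A

def kronecker_krr_predict(Ku_test, Kv_test, A):
    """Predictions for new (row, column) pairs from test kernel matrices."""
    return Ku_test @ A @ Kv_test.T

# Toy example with random positive semi-definite kernels and labels
rng = np.random.default_rng(0)
Xu, Xv = rng.normal(size=(6, 3)), rng.normal(size=(5, 3))
Ku, Kv = Xu @ Xu.T, Xv @ Xv.T
Y = rng.normal(size=(6, 5))
A = kronecker_krr_fit(Ku, Kv, Y, lam=1.0)
print(kronecker_krr_predict(Ku, Kv, A).shape)   # fitted values, shape (6, 5)
```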

  1. On the calculation of x-ray scattering signals from pairwise radial distribution functions

    DEFF Research Database (Denmark)

    Dohn, Asmus Ougaard; Biasin, Elisa; Haldrup, Kristoffer

    2015-01-01

    We derive a formulation for evaluating (time-resolved) x-ray scattering signals of solvated chemical systems, based on pairwise radial distribution functions, intended to accompany molecular dynamics simulations. The derivation is described in detail to eliminate any possi...

  2. A Study of the Use of Pairwise Comparison in the Context of Social Online Moderation

    Science.gov (United States)

    Tarricone, Pina; Newhouse, C. Paul

    2016-01-01

    Traditional moderation of student assessments is often carried out with groups of teachers working face-to-face in a specified location making judgements concerning the quality of representations of achievement. This traditional model has relied little on modern information communications technologies and has been logistically challenging. We…

  3. Modeling Temporal Structure in Music for Emotion Prediction using Pairwise Comparisons

    DEFF Research Database (Denmark)

    Madsen, Jens; Jensen, Bjørn Sand; Larsen, Jan

    2014-01-01

    such as emotions, genre, and similarity. This paper addresses the specific hypothesis whether temporal information is essential for predicting expressed emotions in music, as a prototypical example of a cognitive aspect of music. We propose to test this hypothesis using a novel processing pipeline: 1) Extracting...

  4. On the equivalence of generalized least-squares approaches to the evaluation of measurement comparisons

    Science.gov (United States)

    Koo, A.; Clare, J. F.

    2012-06-01

    Analysis of CIPM international comparisons is increasingly being carried out using a model-based approach that leads naturally to a generalized least-squares (GLS) solution. While this method offers the advantages of being easier to audit and having general applicability to any form of comparison protocol, there is a lack of consensus over aspects of its implementation. Two significant results are presented that show the equivalence of three differing approaches discussed by or applied in comparisons run by Consultative Committees of the CIPM. Both results depend on a mathematical condition equivalent to the requirement that any two artefacts in the comparison are linked through a sequence of measurements of overlapping pairs of artefacts. The first result is that a GLS estimator excluding all sources of error common to all measurements of a participant is equal to the GLS estimator incorporating all sources of error, including those associated with any bias in the standards or procedures of the measuring laboratory. The second result identifies the component of uncertainty in the estimate of bias that arises from possible systematic effects in the participants' measurement standards and procedures. The expression so obtained is a generalization of an expression previously published for a one-artefact comparison with no inter-participant correlations, to one for a comparison comprising any number of repeat measurements of multiple artefacts and allowing for inter-laboratory correlations.
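
    A generic generalized least-squares estimator, of the kind such comparison analyses build on, can be sketched as below. The design matrix, measurement values and covariance matrix are illustrative placeholders rather than any particular CIPM comparison model.

```python
import numpy as np

def gls_estimate(X, y, V):
    """Generalized least-squares estimate beta = (X'V^-1 X)^-1 X'V^-1 y
    together with its covariance matrix (X'V^-1 X)^-1.

    X : (n, p) design matrix linking measurements to artefact values/biases
    y : (n,) vector of measurement results
    V : (n, n) covariance matrix of the measurements (may include
        correlations between a participant's repeated measurements)
    """
    Vinv = np.linalg.inv(V)
    cov = np.linalg.inv(X.T @ Vinv @ X)
    beta = cov @ X.T @ Vinv @ y
    return beta, cov

# Toy example: two artefact values, each measured twice with different uncertainties
X = np.array([[1, 0], [0, 1], [1, 0], [0, 1]])
y = np.array([10.1, 20.3, 9.9, 20.1])
V = np.diag([0.04, 0.04, 0.09, 0.09])
beta, cov = gls_estimate(X, y, V)
print(beta, np.sqrt(np.diag(cov)))
```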

  5. Next-Generation Mitogenomics: A Comparison of Approaches Applied to Caecilian Amphibian Phylogeny

    OpenAIRE

    Maddock, Simon T.; Briscoe, Andrew G.; Wilkinson, Mark; Waeschenbach, Andrea; San Mauro, Diego; Day, Julia J.; Littlewood, D. Tim J.; Foster, Peter G.; Nussbaum, Ronald A.; Gower, David J.

    2016-01-01

    Mitochondrial genome (mitogenome) sequences are being generated with increasing speed due to the advances of next-generation sequencing (NGS) technology and associated analytical tools. However, detailed comparisons to explore the utility of alternative NGS approaches applied to the same taxa have not been undertaken. We compared a ‘traditional’ Sanger sequencing method with two NGS approaches (shotgun sequencing and non-indexed, multiplex amplicon sequencing) on four different sequencing pla...

  6. Analysis of pairwise correlations in multi-parametric PET/MR data for biological tumor characterization and treatment individualization strategies

    Energy Technology Data Exchange (ETDEWEB)

    Leibfarth, Sara; Moennich, David; Thorwarth, Daniela [University Hospital Tuebingen, Section for Biomedical Physics, Department of Radiation Oncology, Tuebingen (Germany); Simoncic, Urban [University Hospital Tuebingen, Section for Biomedical Physics, Department of Radiation Oncology, Tuebingen (Germany); University of Ljubljana, Faculty of Mathematics and Physics, Ljubljana (Slovenia); Jozef Stefan Institute, Ljubljana (Slovenia); Welz, Stefan; Zips, Daniel [University Hospital Tuebingen, Department of Radiation Oncology, Tuebingen (Germany); Schmidt, Holger; Schwenzer, Nina [University Hospital Tuebingen, Department of Diagnostic and Interventional Radiology, Tuebingen (Germany)

    2016-07-15

    The aim of this pilot study was to explore simultaneous functional PET/MR for biological characterization of tumors and potential future treatment adaptations. To investigate the extent of complementarity between different PET/MR-based functional datasets, a pairwise correlation analysis was performed. Functional datasets of N=15 head and neck (HN) cancer patients were evaluated. For patients of group A (N=7), combined PET/MR datasets including FDG-PET and ADC maps were available. Patients of group B (N=8) had FMISO-PET, DCE-MRI and ADC maps from combined PET/MRI, an additional dynamic FMISO-PET/CT acquired directly after FMISO tracer injection as well as an FDG-PET/CT acquired a few days earlier. From DCE-MR, parameter maps K^trans, v_e and v_p were obtained with the extended Tofts model. Moreover, parameter maps of mean DCE enhancement, ΔS_DCE, and mean FMISO signal 0-4 min p.i., Ā_FMISO, were derived. Pairwise correlations were quantified using the Spearman correlation coefficient (r) on both a voxel and a regional level within the gross tumor volume. Between some pairs of functional imaging modalities moderate correlations were observed with respect to the median over all patient datasets, whereas distinct correlations were only present on an individual basis. Highest inter-modality median correlations on the voxel level were obtained for FDG/FMISO (r = 0.56), FDG/Ā_FMISO (r = 0.55), Ā_FMISO/ΔS_DCE (r = 0.46), and FDG/ADC (r = -0.39). Correlations on the regional level showed comparable results. The results of this study suggest that the examined functional datasets provide complementary information. However, only pairwise correlations were examined, and correlations could still exist between combinations of three or more datasets. These results might contribute to the future design of individually adapted treatment approaches based on multiparametric functional imaging.
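
    A minimal sketch of the voxel-level pairwise correlation analysis described above might look as follows, assuming co-registered parameter maps and a tumour mask held as NumPy arrays. The map names and data are purely illustrative.

```python
import numpy as np
from itertools import combinations
from scipy.stats import spearmanr

def pairwise_spearman(maps, mask):
    """Voxel-level pairwise Spearman correlations between co-registered
    functional parameter maps, restricted to a region of interest.

    maps : dict of name -> 3D numpy array (same shape, co-registered)
    mask : boolean 3D array defining the gross tumour volume
    Returns a dict mapping (name_a, name_b) -> Spearman r.
    """
    results = {}
    for a, b in combinations(sorted(maps), 2):
        r, _ = spearmanr(maps[a][mask], maps[b][mask])
        results[(a, b)] = r
    return results

# Toy example with synthetic maps (names only illustrative)
rng = np.random.default_rng(1)
shape = (10, 10, 10)
maps = {"FDG": rng.normal(size=shape),
        "FMISO": rng.normal(size=shape),
        "ADC": rng.normal(size=shape)}
mask = np.ones(shape, dtype=bool)
print(pairwise_spearman(maps, mask))
```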

  7. Fluoroscopically-Guided Posterior Approach for Shoulder Magnetic Resonance Arthrography: Comparison with Conventional Anterior Approach

    International Nuclear Information System (INIS)

    Yoo, Koun J.; Ha, Doo Hoe; Lee, Sang Min

    2011-01-01

    To prospectively evaluate the usefulness of the fluoroscopically-guided posterior approach compared with the anterior approach for shoulder magnetic resonance (MR) arthrography. Institutional review board approval and informed consent were obtained. Among 60 shoulder MR arthrographies performed on 59 patients with symptomatic shoulders, an intra-articular injection was performed (30 cases using the anterior approach and 30 using the posterior approach). Procedure-related pain was assessed by using a 5-score visual analogue scale (VAS). Depth of the puncture and standardized depth of puncture by body mass index (BMI) were recorded. The contrast leakage along the course of the puncture was evaluated by reviewing the MR. The statistical analyses included the Mann-Whitney U and Kruskal-Wallis test. There was no significant difference in VAS scores between the anterior and posterior groups (1.77 ± 1.10 vs. 1.80 ± 0.96). Depth of puncture and standardized depth of puncture by BMI were significantly shorter in the posterior group than those in the anterior group (4.4 ± 0.8 cm and 1.8 ± 0.3 cm vs. 6.6 ± 0.9 cm and 2.8 ± 0.4 cm, p < 0.001), respectively. The incidence of contrast leakage was higher in the posterior group (p = 0.003). The posterior approach will be useful in shoulder MR arthrography for suspected anterior pathology, postoperative follow-up studies, or obese patients.

  8. A spreadsheet template compatible with Microsoft Excel and iWork Numbers that returns the simultaneous confidence intervals for all pairwise differences between multiple sample means.

    Science.gov (United States)

    Brown, Angus M

    2010-04-01

    The objective of the method described in this paper is to develop a spreadsheet template for the purpose of comparing multiple sample means. An initial analysis of variance (ANOVA) test on the data returns F, the test statistic. If F is larger than the critical F value drawn from the F distribution at the appropriate degrees of freedom, convention dictates rejection of the null hypothesis and allows subsequent multiple comparison testing to determine where the inequalities between the sample means lie. A variety of multiple comparison methods are described that return the 95% confidence intervals for differences between means using an inclusive pairwise comparison of the sample means. 2009 Elsevier Ireland Ltd. All rights reserved.
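
    A Python analogue of the workflow described (ANOVA followed by simultaneous pairwise confidence intervals) is sketched below using Tukey's HSD from statsmodels. This is not the spreadsheet template itself, and the sample values are invented.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Three hypothetical samples (groups) to be compared
a = np.array([4.1, 4.5, 4.3, 4.8, 4.6])
b = np.array([5.2, 5.6, 5.1, 5.4, 5.8])
c = np.array([4.9, 4.7, 5.0, 4.8, 5.1])

# Step 1: one-way ANOVA returns the F statistic and its p-value
F, p = f_oneway(a, b, c)
print(f"ANOVA: F = {F:.2f}, p = {p:.4f}")

# Step 2: if the null hypothesis is rejected, locate the inequalities with
# simultaneous 95% confidence intervals for all pairwise differences
values = np.concatenate([a, b, c])
groups = ["a"] * len(a) + ["b"] * len(b) + ["c"] * len(c)
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```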

  9. Assessing the Accuracy of Generalized Inferences From Comparison Group Studies Using a Within-Study Comparison Approach: The Methodology.

    Science.gov (United States)

    Jaciw, Andrew P

    2016-06-01

    Various studies have examined bias in impact estimates from comparison group studies (CGSs) of job training programs, and in education, where results are benchmarked against experimental results. Such within-study comparison (WSC) approaches investigate levels of bias in CGS-based impact estimates, as well as the success of various design and analytic strategies for reducing bias. This article reviews past literature and summarizes conditions under which CGSs replicate experimental benchmark results. It extends the framework to, and develops the methodology for, situations where results from CGSs are generalized to untreated inference populations. Past research is summarized; methods are developed to examine bias in program impact estimates based on cross-site comparisons in a multisite trial that are evaluated against site-specific experimental benchmarks. Students in Grades K-3 in 79 schools in Tennessee; students in Grades 4-8 in 82 schools in Alabama. Grades K-3 Stanford Achievement Test (SAT) in reading and math scores; Grades 4-8 SAT10 reading scores. Past studies show that bias in CGS-based estimates can be limited through strong design, with local matching, and appropriate analysis involving pretest covariates and variables that represent selection processes. Extension of the methodology to investigate accuracy of generalized estimates from CGSs shows bias from confounders and effect moderators. CGS results, when extrapolated to untreated inference populations, may be biased due to variation in outcomes and impact. Accounting for effects of confounders or moderators may reduce bias. © The Author(s) 2016.

  10. Classification between normal and tumor tissues based on the pair-wise gene expression ratio

    International Nuclear Information System (INIS)

    Yap, YeeLeng; Zhang, XueWu; Ling, MT; Wang, XiangHong; Wong, YC; Danchin, Antoine

    2004-01-01

    Precise classification of cancer types is critically important for early cancer diagnosis and treatment. Numerous efforts have been made to use gene expression profiles to improve precision of tumor classification. However, reliable cancer-related signals are generally lacking. Using recent datasets on colon and prostate cancer, a data transformation procedure from single gene expression to pair-wise gene expression ratio is proposed. Making use of the internal consistency of each expression profiling dataset, this transformation improves the signal-to-noise ratio of the dataset and uncovers new relevant cancer-related signals (features). The efficiency of using the transformed dataset to perform normal/tumor classification was investigated using feature partitioning with informative features (gene annotation) as discriminating axes (single gene expression or pair-wise gene expression ratio). Classification results were compared to the original datasets for up to 10-feature model classifiers. 82 and 262 genes that have high correlation to tissue phenotype were selected from the colon and prostate datasets, respectively. Remarkably, data transformation of the highly noisy expression data successfully lowered the coefficient of variation (CV) for the within-class samples as well as improved the correlation with tissue phenotypes. The transformed dataset exhibited lower CV when compared to that of single gene expression. In the colon cancer set, the minimum CV decreased from 45.3% to 16.5%. In prostate cancer, comparable CV was achieved with and without transformation. This improvement in CV, coupled with the improved correlation between the pair-wise gene expression ratio and tissue phenotypes, yielded higher classification efficiency, especially with the colon dataset (from 87.1% to 93.5%). Over 90% of the top ten discriminating axes in both datasets showed significant improvement after data transformation. The high classification efficiency achieved suggested
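
    The single-gene to pair-wise ratio transformation can be sketched as follows. This is a schematic illustration of the general idea with random data, not the authors' exact procedure or feature selection.

```python
import numpy as np
from itertools import combinations

def pairwise_ratio_features(X, eps=1e-9):
    """Transform an (n_samples, n_genes) expression matrix into pairwise
    expression-ratio features, one column per gene pair (i, j).

    Returns the transformed matrix and the list of gene-index pairs.
    """
    pairs = list(combinations(range(X.shape[1]), 2))
    ratios = np.column_stack([X[:, i] / (X[:, j] + eps) for i, j in pairs])
    return ratios, pairs

# Toy example: 4 samples x 5 genes -> 10 pairwise ratio features
rng = np.random.default_rng(2)
X = rng.uniform(1, 100, size=(4, 5))
R, pairs = pairwise_ratio_features(X)
print(R.shape, pairs[:3])
```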

  11. Pairwise correlations via quantum discord and its geometric measure in a four-qubit spin chain

    Directory of Open Access Journals (Sweden)

    Abdel-Baset A. Mohamed

    2013-04-01

    Full Text Available The dynamics of pairwise correlations, including quantum entanglement (QE) and quantum discord (QD) with the geometric measure of quantum discord (GMQD), are shown in the four-qubit Heisenberg XX spin chain. The results show that the effect of the entanglement degree of the initial state on the pairwise correlations is stronger for alternate qubits than it is for nearest-neighbor qubits. This parameter results in sudden death for QE, but it cannot do so for QD and GMQD. With different values for this entanglement parameter of the initial state, QD and GMQD differ and are sensitive to any change in this parameter. It is found that GMQD is more robust than both QD and QE in describing correlations with nonzero values, which offers a valuable resource for quantum computation.

  12. PTM Along Track Algorithm to Maintain Spacing During Same Direction Pair-Wise Trajectory Management Operations

    Science.gov (United States)

    Carreno, Victor A.

    2015-01-01

    Pair-wise Trajectory Management (PTM) is a cockpit-based, delegated-responsibility separation standard. When an air traffic service provider gives a PTM clearance to an aircraft and the flight crew accepts the clearance, the flight crew will maintain spacing and separation from a designated aircraft. A PTM along-track algorithm receives state information from the designated aircraft and from the ownship to produce speed guidance for the flight crew to maintain spacing and separation.

  13. Analysis of Geographic and Pairwise Distances among Chinese Cashmere Goat Populations

    OpenAIRE

    Liu, Jian-Bin; Wang, Fan; Lang, Xia; Zha, Xi; Sun, Xiao-Ping; Yue, Yao-Jing; Feng, Rui-Lin; Yang, Bo-Hui; Guo, Jian

    2013-01-01

    This study investigated the geographic and pairwise distances of nine Chinese local Cashmere goat populations through the analysis of 20 microsatellite DNA markers. Fluorescence PCR was used to identify the markers, which were selected based on their significance as identified by the Food and Agriculture Organization of the United Nations (FAO) and the International Society for Animal Genetics (ISAG). In total, 206 alleles were detected; the average allele number was 10.30; the polymorphism i...

  14. COMPARISON OF THE TRADITIONAL STRENGTH OF MATERIALS APPROACH TO DESIGN WITH THE FRACTURE MECHANICS APPROACH

    International Nuclear Information System (INIS)

    Z. Ceylan

    2002-01-01

    The objective of this activity is to show that the use of the traditional strength of materials approach to the drip shield and the waste package (WP) designs is bounding and appropriate when compared to the fracture mechanics approach. The scope of this activity is limited to determining the failure assessment diagrams for the two materials at issue: Ti-7 and Alloy 22. This calculation is intended for use in support of the license application design of the drip shield and the WP. This activity is associated with the drip shield and the WP designs. The activity evaluation for work package number P32 12234F2, included in ''Technical Work Plan for: Waste Package Design Description for LA'' (Ref. 1, p. A-6), has determined that the development of this document is subject to ''Quality Assurance Requirements and Description'' requirements. The control of the electronic management of data is accomplished in accordance with the methods specified in Reference 1, Section 10. AP-3.124, ''Design Calculations and Analysis'' (Ref. 2), is used to develop and document the calculation

  15. Simultaneous-Fault Diagnosis of Gas Turbine Generator Systems Using a Pairwise-Coupled Probabilistic Classifier

    Directory of Open Access Journals (Sweden)

    Zhixin Yang

    2013-01-01

    Full Text Available A reliable fault diagnostic system for the gas turbine generator system (GTGS), which is complicated and inherently prone to many types of component faults, is essential to avoid the interruption of electricity supply. However, GTGS diagnosis faces challenges in terms of the existence of simultaneous faults and the high cost of acquiring the exponentially increasing number of simultaneous-fault vibration signals needed to construct the diagnostic system. This research proposes a new diagnostic framework combining feature extraction, a pairwise-coupled probabilistic classifier, and decision threshold optimization. The feature extraction module adopts the wavelet packet transform and time-domain statistical features to extract vibration signal features. Kernel principal component analysis is then applied to further reduce the redundant features. The features of single faults in a simultaneous-fault pattern are extracted and then detected using a probabilistic classifier, namely a pairwise-coupled relevance vector machine, which is trained with single-fault patterns only. Therefore, a training dataset of simultaneous-fault patterns is unnecessary. To optimize the decision threshold, this research proposes to use a grid search method, which can ensure a global solution as compared with traditional computational intelligence techniques. Experimental results show that the proposed framework performs well for both single-fault and simultaneous-fault diagnosis and is superior to frameworks without feature extraction and pairwise coupling.
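
    The pairwise-coupling step, which combines the outputs of one-vs-one probabilistic classifiers into a single multi-class posterior, can be sketched generically as below (a Hastie-Tibshirani style iterative coupling). The relevance vector machine outputs of the paper are replaced here by made-up pairwise probabilities.

```python
import numpy as np

def couple_pairwise(r, n_iter=100, tol=1e-8):
    """Combine one-vs-one probabilities into class posteriors by
    iterative pairwise coupling (Hastie & Tibshirani style).

    r : (k, k) matrix with r[i, j] = estimated P(class i | class i or j),
        r[j, i] = 1 - r[i, j]; the diagonal is ignored.
    Returns a length-k probability vector p.
    """
    k = r.shape[0]
    p = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        p_old = p.copy()
        for i in range(k):
            mu_i = p[i] / (p[i] + p + 1e-12)          # current mu_ij for all j
            idx = [j for j in range(k) if j != i]
            p[i] *= np.sum(r[i, idx]) / np.sum(mu_i[idx])
            p /= p.sum()                               # renormalize after each update
        if np.max(np.abs(p - p_old)) < tol:
            break
    return p

# Toy example: three classes with one-vs-one probability estimates
r = np.array([[0.0, 0.8, 0.6],
              [0.2, 0.0, 0.4],
              [0.4, 0.6, 0.0]])
print(couple_pairwise(r))
```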

  16. Multiple Score Comparison: a network meta-analysis approach to comparison and external validation of prognostic scores

    Directory of Open Access Journals (Sweden)

    Sarah R. Haile

    2017-12-01

    Full Text Available Abstract Background Prediction models and prognostic scores have been increasingly popular in both clinical practice and clinical research settings, for example to aid in risk-based decision making or to control for confounding. In many medical fields, a large number of prognostic scores are available, but practitioners may find it difficult to choose between them due to lack of external validation as well as lack of comparisons between them. Methods Borrowing methodology from network meta-analysis, we describe an approach to Multiple Score Comparison meta-analysis (MSC) which permits concurrent external validation and comparisons of prognostic scores using individual patient data (IPD) arising from a large-scale international collaboration. We describe the challenges in adapting network meta-analysis to the MSC setting, for instance the need to explicitly include correlations between the scores on a cohort level, and how to deal with many multi-score studies. We propose first using IPD to make cohort-level aggregate discrimination or calibration scores, comparing all to a common comparator. Then, standard network meta-analysis techniques can be applied, taking care to consider correlation structures in cohorts with multiple scores. Transitivity, consistency and heterogeneity are also examined. Results We provide a clinical application, comparing prognostic scores for 3-year mortality in patients with chronic obstructive pulmonary disease using data from a large-scale collaborative initiative. We focus on the discriminative properties of the prognostic scores. Our results show clear differences in performance, with ADO and eBODE showing higher discrimination with respect to mortality than other considered scores. The assumptions of transitivity and local and global consistency were not violated. Heterogeneity was small. Conclusions We applied a network meta-analytic methodology to externally validate and concurrently compare the prognostic properties

  17. Biotic interactions in the face of climate change: a comparison of three modelling approaches.

    Directory of Open Access Journals (Sweden)

    Anja Jaeschke

    Full Text Available Climate change is expected to alter biotic interactions, and may lead to temporal and spatial mismatches of interacting species. Although the importance of interactions for climate change risk assessments is increasingly acknowledged in observational and experimental studies, biotic interactions are still rarely incorporated in species distribution models. We assessed the potential impacts of climate change on the obligate interaction between Aeshna viridis and its egg-laying plant Stratiotes aloides in Europe, based on an ensemble modelling technique. We compared three different approaches for incorporating biotic interactions in distribution models: (1) We separately modelled each species based on climatic information, and intersected the future range overlap ('overlap approach'). (2) We modelled the potential future distribution of A. viridis with the projected occurrence probability of S. aloides as a further predictor in addition to climate ('explanatory variable approach'). (3) We calibrated the model of A. viridis in the current range of S. aloides and multiplied the future occurrence probabilities of both species ('reference area approach'). Subsequently, all approaches were compared to a single species model of A. viridis without interactions. All approaches projected a range expansion for A. viridis. Model performance on test data and amount of range gain differed depending on the biotic interaction approach. All interaction approaches yielded lower range gains (up to 667% lower) than the model without interaction. Regarding the contribution of algorithm and approach to the overall uncertainty, the main part of explained variation stems from the modelling algorithm, and only a small part is attributed to the modelling approach. The comparison of the no-interaction model with the three interaction approaches emphasizes the importance of including obligate biotic interactions in projective species distribution modelling. We recommend the use of

  18. Pairwise structure alignment specifically tuned for surface pockets and interaction interfaces

    KAUST Repository

    Cui, Xuefeng

    2015-09-09

    To detect and evaluate the similarities between the three-dimensional (3D) structures of two molecules, various kinds of methods have been proposed for the pairwise structure alignment problem [6, 9, 7, 11]. The problem plays important roles when studying the function and the evolution of biological molecules. Recently, pairwise structure alignment methods have been extended and applied on surface pocket structures [10, 3, 5] and interaction interface structures [8, 4]. The results show that, even when there are no global similarities discovered between the global sequences and the global structures, biological molecules or complexes could share similar functions because of well conserved pockets and interfaces. Thus, pairwise pocket and interface structure alignments are promising for unveiling such shared functions that cannot be discovered by the well-studied global sequence and global structure alignments. State-of-the-art methods for pairwise pocket and interface structure alignments [4, 5] are direct extensions of the classic pairwise protein structure alignment methods, and thus such methods share a few limitations. First, the goal of the classic protein structure alignment methods is to align single-chain protein structures (i.e., a single fragment of residues connected by peptide bonds). However, we observed that pockets and interfaces tend to consist of tens of extremely short backbone fragments (i.e., three or fewer residues connected by peptide bonds). Thus, existing pocket and interface alignment methods based on the protein structure alignment methods still rely on the existence of long-enough backbone fragments, and the fragmentation issue of pockets and interfaces raises the risk of missing the optimal alignments. Moreover, existing interface structure alignment methods focus on protein-protein interfaces, and require a "blackbox preprocessing" before aligning protein-DNA and protein-RNA interfaces. Therefore, we introduce the PROtein STucture Alignment

  19. Comparison of two approaches for establishing performance criteria related to Maintenance Rule

    International Nuclear Information System (INIS)

    Jerng, Dong-Wook; Kim, Man Cheol

    2015-01-01

    Probabilistic safety assessment (PSA) serves as a tool for systemically analyzing the safety of nuclear power plants. This paper explains and compares two approaches for the establishment of performance criteria related to the Maintenance Rule: (1) the individual reliability-based approach, and (2) the PSA importance measure-based approach. Different characteristics of the two approaches were compared in a qualitative manner, while a quantitative comparison was performed through application of the two approaches to a nuclear power plant. It was observed that the individual reliability-based approach resulted in more conservative performance criteria, compared to the PSA importance measure-based approach. It is thus expected that the PSA importance measure-based approach will allow for more flexible maintenance policy under conditions of limited resources, while providing for a macroscopic view of overall plant safety. Based on insights derived through this analysis, we emphasize the importance of a balance between reliability and safety significance, and propose a balance measure accordingly. The conclusions of this analysis are likely to be applicable to other types of nuclear power plants. (author)

  20. Self-Regulatory Behaviors and Approaches to Learning of Arts Students: A Comparison between Professional Training and English Learning

    Science.gov (United States)

    Tseng, Min-chen; Chen, Chia-cheng

    2017-01-01

    This study investigated the self-regulatory behaviors of arts students, namely memory strategy, goal-setting, self-evaluation, seeking assistance, environmental structuring, learning responsibility, and planning and organizing. We also explored approaches to learning, including deep approach (DA) and surface approach (SA), in a comparison between…

  1. Statistical comparison of a hybrid approach with approximate and exact inference models for Fusion 2+

    Science.gov (United States)

    Lee, K. David; Wiesenfeld, Eric; Gelfand, Andrew

    2007-04-01

    One of the greatest challenges in modern combat is maintaining a high level of timely Situational Awareness (SA). In many situations, computational complexity and accuracy considerations make the development and deployment of real-time, high-level inference tools very difficult. An innovative hybrid framework that combines Bayesian inference, in the form of Bayesian Networks, and Possibility Theory, in the form of Fuzzy Logic systems, has recently been introduced to provide a rigorous framework for high-level inference. In previous research, the theoretical basis and benefits of the hybrid approach have been developed. However, a concrete experimental comparison of the hybrid framework with traditional fusion methods, to demonstrate and quantify this benefit, has been lacking. The goal of this research, therefore, is to provide a statistical comparison of the accuracy and performance of the hybrid network approach with pure Bayesian and Fuzzy systems and with an inexact Bayesian system approximated using Particle Filtering. To accomplish this task, domain-specific models will be developed under these different theoretical approaches and then evaluated, via Monte Carlo simulation, against situational ground truth to measure accuracy and fidelity. Following this, a rigorous statistical analysis of the performance results will be performed to quantify the benefit of hybrid inference over other fusion tools.

  2. Leaf habit does not determine the investment in both physical and chemical defences and pair-wise correlations between these defensive traits.

    Science.gov (United States)

    Moreira, X; Pearse, I S

    2017-05-01

    Plant life-history strategies associated with resource acquisition and economics (e.g. leaf habit) are thought to be fundamental determinants of the traits and mechanisms that drive herbivore pressure, resource allocation to plant defensive traits, and the simultaneous expression (positive correlations) or trade-offs (negative correlations) between these defensive traits. In particular, it is expected that evergreen species - which usually grow slower and support constant herbivore pressure in comparison with deciduous species - will exhibit higher levels of both physical and chemical defences and a higher predisposition to the simultaneous expression of physical and chemical defensive traits. Here, by using a dataset which included 56 oak species (Quercus genus), we investigated whether leaf habit of plant species governs the investment in both physical and chemical defences and pair-wise correlations between these defensive traits. Our results showed that leaf habit does not determine the production of most leaf physical and chemical defences. Although evergreen oak species had higher levels of leaf toughness and specific leaf mass (physical defences) than deciduous oak species, both traits are essentially prerequisites for evergreenness. Similarly, our results also showed that leaf habit does not determine pair-wise correlations between defensive traits because most physical and chemical defensive traits were simultaneously expressed in both evergreen and deciduous oak species. Our findings indicate that leaf habit does not substantially contribute to oak species differences in plant defence investment. © 2017 German Botanical Society and The Royal Botanical Society of the Netherlands.

  3. Pairwise contact energy statistical potentials can help to find probability of point mutations.

    Science.gov (United States)

    Saravanan, K M; Suvaithenamudhan, S; Parthasarathy, S; Selvaraj, S

    2017-01-01

    To adopt a particular fold, a protein requires several interactions between its amino acid residues. The energetic contribution of these residue-residue interactions can be approximated by extracting statistical potentials from known high-resolution structures. Several methods based on statistical potentials extracted from unrelated proteins are found to make a better prediction of the probability of point mutations. We postulate that statistical potentials extracted from known structures of similar folds with varying sequence identity can be a powerful tool to examine the probability of point mutation. With this in mind, we have derived pairwise residue and atomic contact energy potentials for the different functional families that adopt the (α/β)8 TIM-barrel fold. We carried out computational point mutations at various conserved residue positions in the yeast triose phosphate isomerase enzyme, for which experimental results have already been reported. We have also performed molecular dynamics simulations on a subset of point mutants to make a comparative study. The difference in pairwise residue and atomic contact energy between the wild type and various point mutations reveals the probability of mutation at a particular position. Interestingly, we found that our computational prediction agrees with the experimental studies of Silverman et al. (Proc Natl Acad Sci 2001;98:3092-3097) and performs better than I-Mutant and the Cologne University Protein Stability Analysis Tool. The present work thus suggests that deriving pairwise contact energy potentials and performing molecular dynamics simulations of functionally important folds could help predict the probability of point mutations, which may ultimately reduce the time and cost of mutation experiments. Proteins 2016; 85:54-64. © 2016 Wiley Periodicals, Inc.

  4. Bistability, non-ergodicity, and inhibition in pairwise maximum-entropy models.

    Science.gov (United States)

    Rostami, Vahid; Porta Mana, PierGianLuca; Grün, Sonja; Helias, Moritz

    2017-10-01

    Pairwise maximum-entropy models have been used in neuroscience to predict the activity of neuronal populations, given only the time-averaged correlations of the neuron activities. This paper provides evidence that the pairwise model, applied to experimental recordings, would produce a bimodal distribution for the population-averaged activity, and for some population sizes the second mode would peak at high activities that would experimentally be equivalent to 90% of the neuron population being active within time windows of a few milliseconds. Several problems are connected with this bimodality: 1. The presence of the high-activity mode is unrealistic in view of observed neuronal activity and on neurobiological grounds. 2. Boltzmann learning becomes non-ergodic, hence the pairwise maximum-entropy distribution cannot be found: in fact, Boltzmann learning would produce an incorrect distribution; similarly, common variants of mean-field approximations also produce an incorrect distribution. 3. The Glauber dynamics associated with the model is unrealistically bistable and cannot be used to generate realistic surrogate data. This bimodality problem is first demonstrated for an experimental dataset from 159 neurons in the motor cortex of a macaque monkey. Evidence is then provided that this problem affects typical neural recordings of population sizes of a couple of hundred or more neurons. The cause of the bimodality problem is identified as the inability of standard maximum-entropy distributions with a uniform reference measure to model neuronal inhibition. To eliminate this problem a modified maximum-entropy model is presented, which reflects a basic effect of inhibition in the form of a simple but non-uniform reference measure. This model does not lead to unrealistic bimodalities, can be found with Boltzmann learning, and has an associated Glauber dynamics which incorporates a minimal asymmetric inhibition.
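
    A minimal sketch of Glauber dynamics for a pairwise maximum-entropy model of binary units, of the kind used to generate surrogate data, is given below. The fields and couplings are random toy values rather than parameters fitted to any recording.

```python
import numpy as np

def glauber_sample(h, J, n_steps, rng=None):
    """Glauber dynamics for a pairwise maximum-entropy model of binary
    units s_i in {0, 1}: P(s) ~ exp(sum_i h_i s_i + sum_{i<j} J_ij s_i s_j).

    At each step one unit is picked at random and resampled from its
    conditional distribution given the rest of the network.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(h)
    s = rng.integers(0, 2, size=n)
    trace = np.empty((n_steps, n), dtype=int)
    for t in range(n_steps):
        i = rng.integers(n)
        field = h[i] + J[i] @ s - J[i, i] * s[i]   # local field on unit i
        p_on = 1.0 / (1.0 + np.exp(-field))        # conditional P(s_i = 1)
        s[i] = rng.random() < p_on
        trace[t] = s
    return trace

# Toy example: 20 units with weak random symmetric couplings
rng = np.random.default_rng(3)
n = 20
h = rng.normal(-1.0, 0.2, size=n)
J = rng.normal(0.0, 0.1, size=(n, n))
J = (J + J.T) / 2
np.fill_diagonal(J, 0.0)
trace = glauber_sample(h, J, n_steps=5000, rng=rng)
print("mean population activity:", trace.mean())
```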

  5. Computer-aided detection of masses in digital tomosynthesis mammography: Comparison of three approaches

    International Nuclear Information System (INIS)

    Chan Heangping; Wei Jun; Zhang Yiheng; Helvie, Mark A.; Moore, Richard H.; Sahiner, Berkman; Hadjiiski, Lubomir; Kopans, Daniel B.

    2008-01-01

    The authors are developing a computer-aided detection (CAD) system for masses on digital breast tomosynthesis mammograms (DBT). Three approaches were evaluated in this study. In the first approach, mass candidate identification and feature analysis are performed in the reconstructed three-dimensional (3D) DBT volume. A mass likelihood score is estimated for each mass candidate using a linear discriminant analysis (LDA) classifier. Mass detection is determined by a decision threshold applied to the mass likelihood score. A free response receiver operating characteristic (FROC) curve that describes the detection sensitivity as a function of the number of false positives (FPs) per breast is generated by varying the decision threshold over a range. In the second approach, prescreening of mass candidate and feature analysis are first performed on the individual two-dimensional (2D) projection view (PV) images. A mass likelihood score is estimated for each mass candidate using an LDA classifier trained for the 2D features. The mass likelihood images derived from the PVs are backprojected to the breast volume to estimate the 3D spatial distribution of the mass likelihood scores. The FROC curve for mass detection can again be generated by varying the decision threshold on the 3D mass likelihood scores merged by backprojection. In the third approach, the mass likelihood scores estimated by the 3D and 2D approaches, described above, at the corresponding 3D location are combined and evaluated using FROC analysis. A data set of 100 DBT cases acquired with a GE prototype system at the Breast Imaging Laboratory in the Massachusetts General Hospital was used for comparison of the three approaches. The LDA classifiers with stepwise feature selection were designed with leave-one-case-out resampling. In FROC analysis, the CAD system for detection in the DBT volume alone achieved test sensitivities of 80% and 90% at average FP rates of 1.94 and 3.40 per breast, respectively. With the

  6. Comparison of gimbal approaches to decrease drag force and radar cross sectional area in missile application

    Science.gov (United States)

    Sakarya, Doǧan Uǧur

    2017-05-01

    Drag force is an important aspect of range performance in missile applications, especially for long flight times. However, old-fashioned gimbal approaches force an increase in missile diameter. This increase has the negative consequence of raising both the drag force and the radar cross-sectional area. A new gimbal approach was proposed recently. It uses a beam-steering optical arrangement and therefore needs a smaller volume envelope for the same field of regard and the same optomechanical assembly than the old-fashioned gimbal approaches. In addition to the longer range achieved with the same fuel, the new gimbal approach provides a smaller cross-sectional area, which is less visible to enemy radar. In this paper, the two gimbal approaches, the old-fashioned one and the new one, are compared with the aim of decreasing drag force and radar cross-sectional area in a missile application. In this study, missile parameters are assumed in order to generate gimbal and optical design parameters. Optical design is performed according to these missile criteria. Two gimbal configurations are designed with respect to the modeled missile parameters. Analyses are also performed to show the decreased drag force and radar cross-sectional area of the new approach for comparison.

  7. Effect of pairwise additivity on finite-temperature behavior of classical ideal gas

    Science.gov (United States)

    Shekaari, Ashkan; Jafari, Mahmoud

    2018-05-01

    Finite-temperature molecular dynamics simulations have been applied to investigate the effect of pairwise additivity on the behavior of the classical ideal gas within the temperature range T = 250-4000 K, by applying a variety of pair potentials and then examining the temperature dependence of a number of thermodynamic properties. Examining the compressibility factor reveals the largest deviation from ideal-gas behavior for the Lennard-Jones system, mainly due to the presence of both the attractive and repulsive terms. The systems with either purely attractive or purely repulsive intermolecular potentials are found to bear no resemblance to real gases, but the greatest similarity to the ideal gas as temperature rises.
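
    What pairwise additivity means in practice can be sketched as below: the total potential energy of a configuration is the sum of a pair potential, here a Lennard-Jones form with both repulsive and attractive terms, over all particle pairs. The configuration and parameters are illustrative reduced-unit values, not those of the study.

```python
import numpy as np
from itertools import combinations

def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones pair potential with repulsive and attractive terms."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6**2 - sr6)

def total_pairwise_energy(positions, pair_potential):
    """Pairwise-additive total potential energy: sum the pair potential
    over every unordered pair of particles."""
    return sum(pair_potential(np.linalg.norm(positions[i] - positions[j]))
               for i, j in combinations(range(len(positions)), 2))

# Toy configuration of four particles (reduced units)
pos = np.array([[0.0, 0.0, 0.0],
                [1.1, 0.0, 0.0],
                [0.0, 1.2, 0.0],
                [1.0, 1.0, 1.0]])
print(total_pairwise_energy(pos, lennard_jones))
```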

  8. Structural profiles of human miRNA families from pairwise clustering

    DEFF Research Database (Denmark)

    Kaczkowski, Bogumil; Þórarinsson, Elfar; Reiche, Kristin

    2009-01-01

    secondary structure already predicted, little is known about the patterns of structural conservation among pre-miRNAs. We address this issue by clustering the human pre-miRNA sequences based on pairwise, sequence and secondary structure alignment using FOLDALIGN, followed by global multiple alignment...... of obtained clusters by WAR. As a result, the common secondary structure was successfully determined for four FOLDALIGN clusters: the RF00027 structural family of the Rfam database and three clusters with previously undescribed consensus structures. Availability: http://genome.ku.dk/resources/mirclust...

  9. The FOLDALIGN web server for pairwise structural RNA alignment and mutual motif search

    DEFF Research Database (Denmark)

    Havgaard, Jakob Hull; Lyngsø, Rune B.; Gorodkin, Jan

    2005-01-01

    FOLDALIGN is a Sankoff-based algorithm for making structural alignments of RNA sequences. Here, we present a web server for making pairwise alignments between two RNA sequences, using the recently updated version of FOLDALIGN. The server can be used to scan two sequences for a common structural RNA motif of limited size, or the entire sequences can be aligned locally or globally. The web server offers a graphical interface, which makes it simple to make alignments and manually browse the results. The web server can be accessed at http://foldalign.kvl.dk

  10. Computing the Skewness of the Phylogenetic Mean Pairwise Distance in Linear Time

    DEFF Research Database (Denmark)

    Tsirogiannis, Constantinos; Sandel, Brody Steven

    2014-01-01

    The phylogenetic Mean Pairwise Distance (MPD) is one of the most popular measures for computing the phylogenetic distance among a given group of species. More specifically, for a phylogenetic tree and for a set of species R represented by a subset of the leaf nodes of the tree, the MPD of R is equal to the average cost of all possible simple paths in the tree that connect pairs of nodes in R. Among other phylogenetic measures, the MPD is used as a tool for deciding if the species of a given group R are closely related. To do this, it is important to compute not only the value of the MPD for this group but also...
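
    For reference, the quantity itself can be computed naively in quadratic time as below, assuming a precomputed leaf-to-leaf distance matrix. The linear-time algorithms for the MPD and its skewness that the paper contributes are not reproduced here.

```python
import numpy as np
from itertools import combinations

def mean_pairwise_distance(D, leaves):
    """Naive O(|R|^2) mean pairwise distance for a species set R.

    D      : (n, n) matrix of path costs between all leaves of the tree
    leaves : indices of the leaves representing the species set R
    """
    pairs = list(combinations(leaves, 2))
    return np.mean([D[i, j] for i, j in pairs])

# Toy example: 4 leaves with a symmetric path-cost matrix
D = np.array([[0, 2, 5, 6],
              [2, 0, 5, 6],
              [5, 5, 0, 3],
              [6, 6, 3, 0]], dtype=float)
print(mean_pairwise_distance(D, [0, 1, 2, 3]))
```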

  11. Benefits of Using Pairwise Trajectory Management in the Central East Pacific

    Science.gov (United States)

    Chartrand, Ryan; Ballard, Kathryn

    2017-01-01

    Pairwise Trajectory Management (PTM) is a concept that utilizes airborne and ground-based capabilities to enable airborne spacing operations in procedural airspace. This concept makes use of updated ground automation, Automatic Dependent Surveillance-Broadcast (ADS-B) and on board avionics generating real time guidance. An experiment was conducted to examine the potential benefits of implementing PTM in the Central East Pacific oceanic region. An explanation of the experiment and some of the results are included in this paper. The PTM concept allowed for an increase in the average time an aircraft is able to spend at its desired flight level and a reduction in fuel burn.

  12. Technical note: Comparison of methane ebullition modelling approaches used in terrestrial wetland models

    Science.gov (United States)

    Peltola, Olli; Raivonen, Maarit; Li, Xuefei; Vesala, Timo

    2018-02-01

    Emission via bubbling, i.e. ebullition, is one of the main methane (CH4) emission pathways from wetlands to the atmosphere. Direct measurement of gas bubble formation, growth and release in the peat-water matrix is challenging and, in consequence, these processes are relatively poorly known and are coarsely represented in current wetland CH4 emission models. In this study we aimed to evaluate three ebullition modelling approaches and their effect on model performance. This was achieved by implementing the three approaches in one process-based CH4 emission model. All the approaches were based on some kind of threshold: either a CH4 pore water concentration (ECT), pressure (EPT) or free-phase gas volume (EBG) threshold. The model was run using 4 years of data from a boreal sedge fen and the results were compared with eddy covariance measurements of CH4 fluxes. Modelled annual CH4 emissions were largely unaffected by the different ebullition modelling approaches; however, temporal variability in CH4 emissions varied by an order of magnitude between the approaches. Hence the ebullition modelling approach drives the temporal variability in modelled CH4 emissions and therefore significantly impacts, for instance, high-frequency (daily scale) model comparison and calibration against measurements. The modelling approach based on the most recent knowledge of the ebullition process (volume threshold, EBG) agreed best with the measured fluxes (R2 = 0.63) and hence produced the most reasonable results, although there was a scale mismatch between the measurements (ecosystem scale with heterogeneous ebullition locations) and model results (single horizontally homogeneous peat column). The approach should be favoured over the other two, more widely used ebullition modelling approaches, and researchers are encouraged to implement it in their CH4 emission models.

  13. Incorporating risk attitude into comparison of reforestation alternatives

    International Nuclear Information System (INIS)

    Kangas, J.

    1994-01-01

    By using the approach presented in this paper, the decision-maker's risk attitude can be ascertained and taken into account in the comparison of reforestation alternatives for a forest stand. The risks associated with the reforestation alternatives are described using distributions of outcomes. Cardinal utility values of five accumulation points of the cumulative distributions of outcomes, calculated without considering risk preferences, are the variables included in a preference function. The parameters of that additive preference function represent the importance of the accumulation points in the choice of the reforestation alternatives; they indicate the decision-maker's attitude towards risk. The parameters are estimated on the basis of pairwise comparisons between the importance of the variables, using Saaty's eigenvalue method. Estimation, application, and interpretation of the preference function are simple to carry out, which is important for an approach applied to practical decision-making. The approach could also be applied, for example, to other forestry decision-making problems. 42 refs, 2 figs, 1 tab
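
    Saaty's eigenvalue method mentioned above can be sketched as follows: importance weights are taken from the principal eigenvector of a reciprocal pairwise comparison matrix, and a consistency ratio is reported alongside. The comparison values in the example are invented, and only three items are compared for brevity.

```python
import numpy as np

def saaty_weights(A):
    """Priority weights from a reciprocal pairwise comparison matrix A
    via Saaty's eigenvalue method, plus the consistency ratio.

    A[i, j] expresses how much more important item i is than item j.
    """
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)              # Perron (principal) eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    n = A.shape[0]
    lam_max = eigvals[k].real
    ci = (lam_max - n) / (n - 1)             # consistency index
    # Saaty's random consistency indices for n = 1..9
    ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
          6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}[n]
    cr = ci / ri if ri > 0 else 0.0
    return w, cr

# Illustrative pairwise comparisons of the importance of three criteria
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w, cr = saaty_weights(A)
print("weights:", np.round(w, 3), "consistency ratio:", round(cr, 3))
```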

  14. Primary total hip arthroplasty: a comparison of the lateral Hardinge approach to an anterior mini-invasive approach

    Directory of Open Access Journals (Sweden)

    Nathan Wayne

    2009-11-01

    Full Text Available The anterior mini-invasive (MI) approach to performing total hip arthroplasty (THA) is associated with less soft tissue damage and shorter postoperative recovery than other methods. Our hospital recently abandoned the traditional lateral Hardinge (LH) approach in favour of this new method. We compared the first 100 patients operated on after the changeover to the new method (MI group) to the last 100 patients operated on using the traditional method (LH group). Clinical and radiological parameters and complications were recorded pre- and postoperatively and the collected data of the two groups were statistically compared. There were no statistically significant differences between the two groups with regard to patient demographics or procedural data, placement of the femur component, postoperative leg discrepancy, prosthesis dislocation, blood transfusion, or postoperative dislocation of the components. The MI group had a significantly longer operating time, more bleeding, a higher rate of nerve damage, and a higher percentage of acetabular component malposition, whilst having a significantly shorter hospital stay and significantly fewer infections of the operative site in comparison to the LH group. Additionally, and perhaps most worrying, was the clinically significant increase in intraoperative femur fractures in the MI group. The changeover to the anterior mini-invasive approach, which was the surgeons' initial experience with the MI technique, resulted in a drastic increase in the number of overall complications accompanied by less soft tissue damage and a shorter period of rehabilitation. Our results suggest that further analysis of this surgical MI technique will be needed before it can be recommended for widespread adoption.

  15. Optimal definition of inter-residual contact in globular proteins based on pairwise interaction energy calculations, its robustness, and applications.

    Science.gov (United States)

    Fačkovec, Boris; Vondrášek, Jiří

    2012-10-25

    Although a contact is an essential measurement for the topology as well as strength of non-covalent interactions in biomolecules and their complexes, there is no general agreement in the definition of this feature. Most of the definitions work with simple geometric criteria which do not fully reflect the energy content or ability of the biomolecular building blocks to arrange their environment. We offer a reasonable solution to this problem by distinguishing between "productive" and "non-productive" contacts based on their interaction energy strength and properties. We have proposed a method which converts the protein topology into a contact map that represents interactions with statistically significant high interaction energies. We do not prove that these contacts are exclusively stabilizing, but they represent a gateway to thermodynamically important rather than geometry-based contacts. The process is based on protein fragmentation and calculation of interaction energies using the OPLS force field and relies on pairwise additivity of amino acid interactions. Our approach integrates the treatment of different types of interactions, avoiding the problems resulting from different contributions to the overall stability and the different effect of the environment. The first applications on a set of homologous proteins have shown the usefulness of this classification for a sound estimate of protein stability.

  16. GapMis: a tool for pairwise sequence alignment with a single gap.

    Science.gov (United States)

    Flouri, Tomás; Frousios, Kimon; Iliopoulos, Costas S; Park, Kunsoo; Pissis, Solon P; Tischler, German

    2013-08-01

    Pairwise sequence alignment has received new motivation due to the advent of next-generation sequencing technologies, particularly for the application of re-sequencing: the assembly of a genome directed by a reference sequence. After the fast alignment between a factor of the reference sequence and a high-quality fragment of a short read by a short-read alignment programme, an important problem is to find the alignment between a relatively short succeeding factor of the reference sequence and the remaining low-quality part of the read, allowing a number of mismatches and the insertion of a single gap in the alignment. We present GapMis, a tool for pairwise sequence alignment with a single gap. It is based on a simple algorithm, which computes a different version of the traditional dynamic programming matrix. The presented experimental results demonstrate that GapMis is more suitable and efficient than most popular tools for this task.
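
    A brute-force sketch of the single-gap alignment idea is given below: one run of gap characters is inserted into the shorter sequence at every possible position, and the placement with the fewest mismatches is kept. GapMis itself solves this with a dynamic-programming matrix and also handles bounded gap lengths and scoring, which this toy does not.

```python
def align_single_gap(a, b):
    """Brute-force single-gap alignment: insert one run of gap characters
    into the shorter sequence so both have equal length, try every gap
    position, and return the placement with the fewest mismatches.

    Returns (best_mismatches, gap_position, aligned_short_sequence).
    """
    short, long_ = (a, b) if len(a) <= len(b) else (b, a)
    gap_len = len(long_) - len(short)
    best = None
    for pos in range(len(short) + 1):
        padded = short[:pos] + "-" * gap_len + short[pos:]
        mismatches = sum(1 for s, l in zip(padded, long_)
                         if s != l and s != "-")
        if best is None or mismatches < best[0]:
            best = (mismatches, pos, padded)
    return best

# Toy example: a read missing three bases relative to the reference
print(align_single_gap("ACGTGGT", "ACGTTTAGGT"))
```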

  17. Pairwise registration of TLS point clouds using covariance descriptors and a non-cooperative game

    Science.gov (United States)

    Zai, Dawei; Li, Jonathan; Guo, Yulan; Cheng, Ming; Huang, Pengdi; Cao, Xiaofei; Wang, Cheng

    2017-12-01

    It is challenging to automatically register TLS point clouds with noise, outliers and varying overlap. In this paper, we propose a new method for pairwise registration of TLS point clouds. We first generate covariance matrix descriptors with an adaptive neighborhood size from the point clouds to find candidate correspondences; we then construct a non-cooperative game to isolate mutually compatible correspondences, which are considered as true positives. The method was tested on three models acquired by two different TLS systems. Experimental results demonstrate that our proposed adaptive covariance (ACOV) descriptor is invariant to rigid transformation and robust to noise and varying resolutions. The average registration errors achieved on the three models are 0.46 cm, 0.32 cm and 1.73 cm, respectively. The computation times for these models are about 288 s, 184 s and 903 s, respectively. Moreover, our registration framework using ACOV descriptors and a game-theoretic method is superior to the state-of-the-art methods in terms of both registration error and computational time. The experiment on a large outdoor scene further demonstrates the feasibility and effectiveness of our proposed pairwise registration framework.
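    The covariance-descriptor idea can be sketched in a few lines: build the covariance matrix of a local neighborhood and compare descriptors with a metric suited to symmetric positive-definite matrices. The ACOV descriptor of the record uses adaptively sized neighborhoods and richer per-point features, so the snippet below is only an assumption-laden toy version with invented data.

```python
import numpy as np

def covariance_descriptor(points: np.ndarray) -> np.ndarray:
    """Covariance matrix of a local neighborhood (N x 3 array of XYZ points).

    A real descriptor (e.g. the paper's ACOV) builds the covariance over richer
    per-point features and picks the neighborhood size adaptively; here we use
    plain XYZ coordinates to keep the sketch short.
    """
    centered = points - points.mean(axis=0)
    return centered.T @ centered / max(len(points) - 1, 1)

def log_euclidean_distance(c1: np.ndarray, c2: np.ndarray) -> float:
    """Compare two covariance descriptors on the manifold of SPD matrices."""
    def logm_spd(c):
        w, v = np.linalg.eigh(c + 1e-9 * np.eye(c.shape[0]))  # regularise for safety
        return (v * np.log(w)) @ v.T
    return float(np.linalg.norm(logm_spd(c1) - logm_spd(c2), ord="fro"))

# Toy usage: descriptors of two noisy samplings of the same flat surface patch.
rng = np.random.default_rng(0)
patch = rng.normal(size=(200, 3)) * np.array([1.0, 0.5, 0.05])
d1 = covariance_descriptor(patch + 0.01 * rng.normal(size=patch.shape))
d2 = covariance_descriptor(patch + 0.01 * rng.normal(size=patch.shape))
print(log_euclidean_distance(d1, d2))  # small value: similar local geometry
```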

  18. Multilevel summation with B-spline interpolation for pairwise interactions in molecular dynamics simulations

    International Nuclear Information System (INIS)

    Hardy, David J.; Schulten, Klaus; Wolff, Matthew A.; Skeel, Robert D.; Xia, Jianlin

    2016-01-01

    The multilevel summation method for calculating electrostatic interactions in molecular dynamics simulations constructs an approximation to a pairwise interaction kernel and its gradient, which can be evaluated at a cost that scales linearly with the number of atoms. The method smoothly splits the kernel into a sum of partial kernels of increasing range and decreasing variability with the longer-range parts interpolated from grids of increasing coarseness. Multilevel summation is especially appropriate in the context of dynamics and minimization, because it can produce continuous gradients. This article explores the use of B-splines to increase the accuracy of the multilevel summation method (for nonperiodic boundaries) without incurring additional computation other than a preprocessing step (whose cost also scales linearly). To obtain accurate results efficiently involves technical difficulties, which are overcome by a novel preprocessing algorithm. Numerical experiments demonstrate that the resulting method offers substantial improvements in accuracy and that its performance is competitive with an implementation of the fast multipole method in general and markedly better for Hamiltonian formulations of molecular dynamics. The improvement is great enough to establish multilevel summation as a serious contender for calculating pairwise interactions in molecular dynamics simulations. In particular, the method appears to be uniquely capable for molecular dynamics in two situations, nonperiodic boundary conditions and massively parallel computation, where the fast Fourier transform employed in the particle–mesh Ewald method falls short.

  19. Extraction of tacit knowledge from large ADME data sets via pairwise analysis.

    Science.gov (United States)

    Keefer, Christopher E; Chang, George; Kauffman, Gregory W

    2011-06-15

    Pharmaceutical companies routinely collect data across multiple projects for common ADME endpoints. Although at the time of collection the data is intended for use in decision making within a specific project, knowledge can be gained by data mining the entire cross-project data set for patterns of structure-activity relationships (SAR) that may be applied to any project. One such data mining method is pairwise analysis. This method has the advantage of being able to identify small structural changes that lead to significant changes in activity. In this paper, we describe the process for full pairwise analysis of our high-throughput ADME assays routinely used for compound discovery efforts at Pfizer (microsomal clearance, passive membrane permeability, P-gp efflux, and lipophilicity). We also describe multiple strategies for the application of these transforms in a prospective manner during compound design. Finally, a detailed analysis of the activity patterns in pairs of compounds that share the same molecular transformation reveals multiple types of transforms from an SAR perspective. These include bioisosteres, additives, multiplicatives, and a type we call switches as they act to either turn on or turn off an activity. Copyright © 2011 Elsevier Ltd. All rights reserved.
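    The pairwise idea itself is easy to sketch: collect compound pairs that share the same structural transformation and summarise how the ADME endpoint changes across each transform. Real matched-pair pipelines derive the transforms from the structures; in this toy sketch the transform labels and activity deltas are simply invented, and it is not the workflow described in the record.

```python
from collections import defaultdict
from statistics import mean

# Toy records: (compound_A, compound_B, transform_label, delta_activity).
# In a real pairwise analysis the transform would be derived from the two
# structures (e.g. "H -> F on an aromatic ring"); here it is supplied directly.
pairs = [
    ("cpd1", "cpd2", "H->F", 0.4),
    ("cpd3", "cpd4", "H->F", 0.6),
    ("cpd5", "cpd6", "OMe->OH", -1.1),
    ("cpd7", "cpd8", "OMe->OH", -0.9),
]

# Group the activity changes by transform and summarise each group.
by_transform = defaultdict(list)
for _, _, transform, delta in pairs:
    by_transform[transform].append(delta)

for transform, deltas in by_transform.items():
    print(f"{transform}: n={len(deltas)}, mean delta={mean(deltas):+.2f}")
```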

  20. Post-Hartree-Fock studies of the He/Mg(0001) interaction: Anti-corrugation, screening, and pairwise additivity

    Energy Technology Data Exchange (ETDEWEB)

    Lara-Castells, María Pilar de, E-mail: Pilar.deLara.Castells@csic.es [Instituto de Física Fundamental (CSIC), Serrano 123, E-28006 Madrid (Spain); Fernández-Perea, Ricardo [Instituto de Estructura de la Materia (CSIC), Serrano 123, E-28006 Madrid (Spain); Madzharova, Fani; Voloshina, Elena, E-mail: elena.voloshina@hu-berlin.de [Humboldt-Universität zu Berlin, Institut für Chemie, Unter den Linden 6, 10099 Berlin (Germany)

    2016-06-28

    The adsorption of noble gases on metallic surfaces represents a paradigmatic case of van-der-Waals (vdW) interaction due to the role of screening effects on the corrugation of the interaction potential [J. L. F. Da Silva et al., Phys. Rev. Lett. 90, 066104 (2003)]. The extremely small adsorption energy of He atoms on the Mg(0001) surface (below 3 meV) and the delocalized nature and mobility of the surface electrons make the He/Mg(0001) system particularly challenging, even for state-of-the-art vdW-corrected density functional-based (vdW-DFT) approaches [M. P. de Lara-Castells et al., J. Chem. Phys. 143, 194701 (2015)]. In this work, we meet this challenge by applying two different procedures. First, the dispersion-corrected second-order Møller-Plesset perturbation theory (MP2C) approach is adopted, using bare metal clusters of increasing size. Second, the method of increments [H. Stoll, J. Chem. Phys. 97, 8449 (1992)] is applied at the coupled cluster singles and doubles and perturbative triples level, using embedded cluster models of the metal surface. Both approaches provide clear evidence of the anti-corrugation of the interaction potential: the He atom prefers on-top sites instead of the expected hollow sites. This is interpreted as a signature of the screening of the He atom by the metal for the on-top configuration. The strong screening in the metal is clearly reflected in the relative contribution of successively deeper surface layers to the main dispersion contribution. To assist future dynamical simulations, a pairwise potential model for the He/surface interaction as a sum of effective He–Mg pair potentials is also presented, as an improvement over the approximation using isolated He–Mg pairs.

  1. A time warping approach to multiple sequence alignment.

    Science.gov (United States)

    Arribas-Gil, Ana; Matias, Catherine

    2017-04-25

    We propose an approach for multiple sequence alignment (MSA) derived from the dynamic time warping viewpoint and recent techniques of curve synchronization developed in the context of functional data analysis. Starting from pairwise alignments of all the sequences (viewed as paths in a certain space), we construct a median path that represents the MSA we are looking for. We establish a proof of concept that our method could be an interesting ingredient to include in refined MSA techniques. We present a simple synthetic experiment as well as the study of a benchmark dataset, together with comparisons against two widely used MSA software packages.
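    The pairwise building block of the method is a dynamic-time-warping style alignment. The sketch below is the textbook DTW recurrence for two numeric sequences; the paper's construction of a median path from many such pairwise alignments is not reproduced here, and the example sequences are invented.

```python
import numpy as np

def dtw(x, y):
    """Classic dynamic time warping cost between two 1-D sequences."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # Extend the cheapest of the three admissible predecessor cells.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Two warped versions of the same shape align cheaply.
print(dtw([0, 1, 2, 3, 2, 0], [0, 1, 1, 2, 3, 2, 1, 0]))
```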

  2. Improved scFv Anti-HIV-1 p17 Binding Affinity Guided from the Theoretical Calculation of Pairwise Decomposition Energies and Computational Alanine Scanning

    Directory of Open Access Journals (Sweden)

    Panthip Tue-ngeun

    2013-01-01

    Full Text Available Computational approaches have been used to evaluate and define important residues for protein-protein interactions, especially in antigen-antibody complexes. In our previous study, pairwise decomposition of the residue interaction energies of a single-chain Fv with HIV-1 p17 epitope variants indicated the key specific residues in the complementarity determining regions (CDRs) of scFv anti-p17. In the present investigation, computational alanine scanning was applied to determine whether specific side-chain groups of residues in the CDRs play an important role in bioactivity. Molecular dynamics simulations were carried out on several complexes of the original scFv anti-p17 and scFv anti-p17 mutants with HIV-1 p17 epitope variants, with production runs of up to 10 ns. Combining the pairwise decomposition of residue interactions with the alanine scanning calculations, a point mutation at position MET100 was initially selected to improve the residue binding affinity. The calculated docking interaction energies for single mutations of methionine to either arginine or glycine showed improved binding affinity compared to the wild type, a gain contributed by favourable (negative) electrostatic interaction energies. The theoretical calculations agreed well with the peptide ELISA results.

  3. Comparison of posterior retroperitoneal and transabdominal lateral approaches in robotic adrenalectomy: an analysis of 200 cases.

    Science.gov (United States)

    Kahramangil, Bora; Berber, Eren

    2018-04-01

    Although numerous studies have been published on robotic adrenalectomy (RA) in the literature, none has compared the posterior retroperitoneal (PR) and transabdominal lateral (TL) approaches. The aim of this study was to compare the outcomes of robotic PR and TL adrenalectomy. This is a retrospective analysis of a prospectively maintained database. Between September 2008 and January 2017, perioperative outcomes of patients undergoing RA through PR and TL approaches were recorded into an IRB-approved database. Clinical and perioperative parameters were compared using Student's t test, the Wilcoxon rank-sum test, and the χ2 test. Multivariate regression analysis was performed to determine factors associated with total operative time. 188 patients underwent 200 RAs. 110 patients were operated on through the TL approach and 78 patients through the PR approach. Overall, the conversion rate to open surgery was 2.5% and 90-day morbidity was 4.8%. The perioperative outcomes of the TL and PR approaches were similar regarding estimated blood loss, rate of conversion to open surgery, length of hospital stay, and 90-day morbidity. The PR approach resulted in a shorter mean ± SD total operative time (136.3 ± 38.7 vs. 154.6 ± 48.4 min; p = 0.005) and a lower visual analog scale pain score on postoperative day #1 (4.3 ± 2.5 vs. 5.4 ± 2.4; p = 0.001). After excluding tumors larger than 6 cm operated through the TL approach, the difference in operative times persisted (136.3 ± 38.7 vs. 153.7 ± 45.7 min; p = 0.009). On multivariate regression analysis, increasing BMI and the TL approach were associated with longer total operative time. This study shows that the robotic PR and TL approaches are equally safe and efficacious. With experience, shorter operative time and less postoperative pain can be achieved with the PR technique. This supports the preferential utilization of the PR approach in high-volume centers with enough experience.

  4. Fast pairwise structural RNA alignments by pruning of the dynamical programming matrix

    DEFF Research Database (Denmark)

    Havgaard, Jakob Hull; Torarinsson, Elfar; Gorodkin, Jan

    2007-01-01

    ...the advantage of providing the constraints dynamically. This has been included in a new implementation of the FOLDALIGN algorithm for pairwise local or global structural alignment of RNA sequences. It is shown that time and memory requirements are dramatically lowered while overall performance is maintained. Furthermore, a new divide and conquer method is introduced to limit the memory requirement during global alignment and backtrack of local alignment. All branch points in the computed RNA structure are found and used to divide the structure into smaller unbranched segments. Each segment is then realigned and backtracked in a normal fashion. Finally, the FOLDALIGN algorithm has also been updated with a better memory implementation and an improved energy model. With these improvements in the algorithm, the FOLDALIGN software package provides the molecular biologist with an efficient and user-friendly tool...

  5. Inferring Pairwise Interactions from Biological Data Using Maximum-Entropy Probability Models.

    Directory of Open Access Journals (Sweden)

    Richard R Stein

    2015-07-01

    Full Text Available Maximum entropy-based inference methods have been successfully used to infer direct interactions from biological datasets such as gene expression data or sequence ensembles. Here, we review undirected pairwise maximum-entropy probability models in two categories of data types, those with continuous and categorical random variables. As a concrete example, we present recently developed inference methods from the field of protein contact prediction and show that a basic set of assumptions leads to similar solution strategies for inferring the model parameters in both variable types. These parameters reflect interactive couplings between observables, which can be used to predict global properties of the biological system. Such methods are applicable to the important problems of protein 3-D structure prediction and association of gene-gene networks, and they enable potential applications to the analysis of gene alteration patterns and to protein design.
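    For the continuous-variable case mentioned in the record, the undirected pairwise maximum-entropy model that matches first and second moments is a multivariate Gaussian, and the pairwise couplings can be read off the precision (inverse covariance) matrix. The sketch below illustrates only this special case on simulated data; the categorical (Potts-like) models used for sequence ensembles require different inference machinery.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate data where variable 0 drives 1 and 2, but 1 and 2 are not directly coupled.
x0 = rng.normal(size=5000)
x1 = x0 + 0.5 * rng.normal(size=5000)
x2 = x0 + 0.5 * rng.normal(size=5000)
data = np.column_stack([x0, x1, x2])

# Precision matrix = inverse sample covariance; rescale to partial correlations.
precision = np.linalg.inv(np.cov(data, rowvar=False))
couplings = -precision / np.sqrt(np.outer(np.diag(precision), np.diag(precision)))
np.fill_diagonal(couplings, 0.0)

print(np.round(couplings, 2))   # strong (0,1) and (0,2) couplings, near-zero (1,2)
```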

  6. The pairwise phase consistency in cortical network and its relationship with neuronal activation

    Directory of Open Access Journals (Sweden)

    Wang Daming

    2017-01-01

    Full Text Available Gamma-band neuronal oscillation and synchronization in the range of 30-90 Hz are a ubiquitous phenomenon across numerous brain areas and various species, and are correlated with many cognitive functions. The phase of the oscillation, as one aspect of the CTC (Communication through Coherence) hypothesis, underlies various functions for feature coding, memory processing and behaviour performance. The PPC (Pairwise Phase Consistency), an improved coherence measure, statistically quantifies the strength of phase synchronization. In order to evaluate the PPC and its relationships with the input stimulus, neuronal activation and firing rate, a simplified spiking neuronal network is constructed to simulate orientation columns in primary visual cortex. If the input orientation stimulus is the preferred one for a certain orientation column, neurons within this column will attain a higher firing rate and stronger neuronal activation, which consequently engender higher PPC values, with higher PPC corresponding to higher firing rate. In addition, we investigate the PPC in a time-resolved analysis with a sliding window.
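    The PPC statistic itself is simple to compute: it is the average cosine of the phase differences over all pairs of observations, which has the closed form used in the sketch below. The von Mises phase data are simulated purely for illustration; the network simulation of the record is not reproduced.

```python
import numpy as np

def pairwise_phase_consistency(phases: np.ndarray) -> float:
    """Pairwise phase consistency: mean cosine of phase differences over all pairs.

    Closed form used here: with n phases and resultant length
    R = |sum(exp(i*theta))|, PPC = (R**2 - n) / (n * (n - 1)).
    """
    n = len(phases)
    resultant = np.abs(np.exp(1j * phases).sum())
    return float((resultant**2 - n) / (n * (n - 1)))

rng = np.random.default_rng(0)
locked = rng.vonmises(mu=0.0, kappa=4.0, size=500)     # strongly phase-locked spikes
uniform = rng.uniform(-np.pi, np.pi, size=500)         # no phase locking
print(pairwise_phase_consistency(locked), pairwise_phase_consistency(uniform))
```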

  7. Video-based depression detection using local Curvelet binary patterns in pairwise orthogonal planes.

    Science.gov (United States)

    Pampouchidou, Anastasia; Marias, Kostas; Tsiknakis, Manolis; Simos, Panagiotis; Fan Yang; Lemaitre, Guillaume; Meriaudeau, Fabrice

    2016-08-01

    Depression is an increasingly prevalent mood disorder. This is the reason why the field of computer-based depression assessment has been gaining the attention of the research community during the past couple of years. The present work proposes two algorithms for depression detection, one frame-based and the second video-based, both employing the Curvelet transform and Local Binary Patterns. The main advantage of these methods is that they have significantly lower computational requirements, as the extracted features are of very low dimensionality. This is achieved by modifying the previously proposed algorithm, which considers Three-Orthogonal-Planes, to one using only Pairwise-Orthogonal-Planes. Performance of the algorithms was tested on the benchmark dataset provided by the Audio/Visual Emotion Challenge 2014, with the person-specific system achieving 97.6% classification accuracy, and the person-independent one yielding promising preliminary results of 74.5% accuracy. The paper concludes with open issues, proposed solutions, and future plans.

  8. Solution to urn models of pairwise interaction with application to social, physical, and biological sciences

    Science.gov (United States)

    Pickering, William; Lim, Chjan

    2017-07-01

    We investigate a family of urn models that correspond to one-dimensional random walks with quadratic transition probabilities and that have highly diverse applications. Well-known instances of these two-urn models are the Ehrenfest model of molecular diffusion, the voter model of social influence, and the Moran model of population genetics. We also provide a generating function method for diagonalizing the corresponding transition matrix that is valid if and only if the underlying mean density satisfies a linear differential equation, and we express the eigenvector components in terms of ordinary hypergeometric functions. The nature of the models leads to a natural extension to interaction between agents in a general network topology. We analyze the dynamics on uncorrelated heterogeneous degree-sequence networks and relate the convergence times to the moments of the degree sequences for various pairwise interaction mechanisms.
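    A minimal simulation conveys the flavour of these pairwise-interaction urn models. The sketch below runs the two-state voter model (one of the quadratic-transition-probability models named in the record) until consensus; the parameters are arbitrary and the analytical generating-function machinery of the paper is not reproduced.

```python
import random

def voter_model_consensus_time(n_agents=100, n_plus=50, seed=0):
    """Simulate the pairwise voter model until consensus and return the step count.

    Each step picks an ordered pair of distinct agents at random; the first
    ('listener') copies the state of the second ('speaker').
    """
    rng = random.Random(seed)
    state = [1] * n_plus + [0] * (n_agents - n_plus)
    steps = 0
    while 0 < sum(state) < n_agents:     # stop when all agents share one opinion
        i, j = rng.sample(range(n_agents), 2)
        state[i] = state[j]
        steps += 1
    return steps

print(voter_model_consensus_time())
```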

  9. PairWise Neighbours database: overlaps and spacers among prokaryote genomes

    Directory of Open Access Journals (Sweden)

    Garcia-Vallvé Santiago

    2009-06-01

    Full Text Available Abstract Background Although prokaryotes live in a variety of habitats and possess different metabolic and genomic complexity, they have several genomic architectural features in common. Overlapping genes are a common feature of prokaryote genomes. The overlap lengths tend to be short because longer overlaps carry a higher risk of deleterious mutations. The spacers between genes also tend to be short because of the tendency to reduce non-coding DNA among prokaryotes. However, they must be long enough to maintain essential regulatory signals such as the Shine-Dalgarno (SD) sequence, which is responsible for efficient translation. Description PairWise Neighbours is an interactive and intuitive database used for retrieving information about the spacers and overlapping genes among bacterial and archaeal genomes. It contains 1,956,294 gene pairs from 678 fully sequenced prokaryote genomes and is freely available at the URL http://genomes.urv.cat/pwneigh. This database provides information about the overlaps and their conservation across species. Furthermore, it allows wide analysis of the intergenic regions, providing useful information such as the location and strength of the SD sequence. Conclusion There are experiments and bioinformatic analyses that rely on correct annotations of the initiation site. Therefore, a database that studies the overlaps and spacers among prokaryotes appears to be desirable. The PairWise Neighbours database permits reliability analysis of the overlapping structures and the study of SD presence and location among adjacent genes, which may help to check the annotation of initiation sites.

  10. Analysis of Geographic and Pairwise Distances among Chinese Cashmere Goat Populations

    Directory of Open Access Journals (Sweden)

    Jian-Bin Liu

    2013-03-01

    Full Text Available This study investigated the geographic and pairwise distances of nine Chinese local Cashmere goat populations through the analysis of 20 microsatellite DNA markers. Fluorescence PCR was used to identify the markers, which were selected based on their significance as identified by the Food and Agriculture Organization of the United Nations (FAO) and the International Society for Animal Genetics (ISAG). In total, 206 alleles were detected; the average allele number was 10.30; the polymorphism information content of loci ranged from 0.5213 to 0.7582; the number of effective alleles ranged from 4.0484 to 4.6178; the observed heterozygosity was from 0.5023 to 0.5602 for the practical sample; the expected heterozygosity ranged from 0.5783 to 0.6464; and allelic richness ranged from 4.7551 to 8.0693. These results indicated that Chinese Cashmere goat populations exhibited rich genetic diversity. Further, Wright's F-statistic of subpopulations within the total (FST) was 0.1184; the genetic differentiation coefficient (GST) was 0.0940; and the average gene flow (Nm) was 2.0415. All pairwise FST values among the populations were highly significant (p<0.01 or p<0.001), suggesting that the populations studied should all be considered separate breeds. Finally, the clustering analysis divided the Chinese Cashmere goat populations into at least four clusters, with the Hexi and Yashan goat populations alone in one cluster. These results have provided useful, practical, and important information for the future of Chinese Cashmere goat breeding.

  11. A Comparison of Approach and Avoidance Sexual Goals in Couples With Vulvodynia and Community Controls.

    Science.gov (United States)

    Dubé, Justin P; Bergeron, Sophie; Muise, Amy; Impett, Emily A; Rosen, Natalie O

    2017-11-01

    couples coping with PVD. Dubé JP, Bergeron S, Muise A, et al. A Comparison of Approach and Avoidance Sexual Goals in Couples With Vulvodynia and Community Controls. J Sex Med 2017;14:1412-1420. Copyright © 2017 International Society for Sexual Medicine. Published by Elsevier Inc. All rights reserved.

  12. Traction cytometry: regularization in the Fourier approach and comparisons with finite element method.

    Science.gov (United States)

    Kulkarni, Ankur H; Ghosh, Prasenjit; Seetharaman, Ashwin; Kondaiah, Paturu; Gundiah, Namrata

    2018-05-09

    Traction forces exerted by adherent cells are quantified using displacements of embedded markers on polyacrylamide substrates due to cell contractility. Fourier Transform Traction Cytometry (FTTC) is widely used to calculate tractions but has inherent limitations due to errors in the displacement fields; these are mitigated through a regularization parameter (γ) in the Reg-FTTC method. An alternate finite element (FE) approach computes tractions on a domain using known boundary conditions. Robust verification and recovery studies are lacking but essential in assessing the accuracy and noise sensitivity of the traction solutions from the different methods. We implemented the L2 regularization method and defined the point of maximum curvature in the traction-versus-γ plot as the optimal regularization parameter (γ*) in the Reg-FTTC approach. Traction reconstructions using γ* yield accurate values of low and maximum tractions (Tmax) in the presence of up to 5% noise. Reg-FTTC is hence a clear improvement over the FTTC method but is inadequate to reconstruct low stresses such as those at nascent focal adhesions. FE, implemented using a node-by-node comparison, showed an intermediate reconstruction quality compared to Reg-FTTC. We performed experiments using mouse embryonic fibroblasts (MEF) and compared results between these approaches. Tractions from FTTC and FE showed differences of ∼92% and 22%, respectively, as compared to Reg-FTTC. Selection of an optimum value of γ for each cell reduced variability in the computed tractions as compared to using a single value of γ for all the MEF cells in this study.
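    The γ* rule can be illustrated generically: solve a Tikhonov-regularised least-squares problem over a range of γ and pick the value at the point of maximum curvature of a solution-norm-versus-log γ curve. The operator, data, and curve used below are toy stand-ins, not the Fourier-space traction formulation of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.normal(size=(60, 40))                  # toy forward operator (displacement = G @ traction)
t_true = np.zeros(40); t_true[10:15] = 1.0     # localized "traction"
u = G @ t_true + 0.05 * rng.normal(size=60)    # noisy "displacements"

gammas = np.logspace(-4, 2, 40)
norms = []
for g in gammas:
    # L2 (Tikhonov) regularised least squares: (G^T G + g I) t = G^T u
    t_hat = np.linalg.solve(G.T @ G + g * np.eye(40), G.T @ u)
    norms.append(np.linalg.norm(t_hat))
norms = np.array(norms)

# Discrete curvature of the norm-vs-log(gamma) curve; gamma* at its maximum.
curvature = np.abs(np.gradient(np.gradient(norms, np.log(gammas)), np.log(gammas)))
gamma_star = gammas[np.argmax(curvature)]
print(f"selected gamma* = {gamma_star:.3g}")
```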

  13. Concentrations versus amounts of biomarkers in urine: a comparison of approaches to assess pyrethroid exposure

    Directory of Open Access Journals (Sweden)

    Bouchard Michèle

    2008-11-01

    Full Text Available Abstract Background Assessment of human exposure to non-persistent pesticides such as pyrethroids is often based on urinary biomarker measurements. Urinary metabolite levels of these pesticides are usually reported as volume-weighted concentrations or creatinine-adjusted concentrations measured in spot urine samples. It is known that these units are subject to intra- and inter-individual variations. This research aimed at studying the impact of these variations on the assessment of pyrethroid absorbed doses at the individual and population levels. Methods Using data obtained from various adult and infantile populations, the intra- and inter-individual variability in the urinary flow rate and creatinine excretion rate was first estimated. Individual absorbed doses were then calculated using volume-weighted or creatinine-adjusted concentrations according to published approaches and compared to those estimated from the amounts of biomarkers excreted in 15-h or 24-h urine collections, the latter serving as a benchmark unit. The effect of the unit of measurement (volume-weighted or creatinine-adjusted concentrations or 24-h amounts) on the comparison of pyrethroid biomarker levels between two populations was also evaluated. Results Estimation of daily absorbed doses of permethrin from volume-weighted or creatinine-adjusted concentrations of biomarkers was found to potentially lead to substantial under- or overestimation when compared to doses reconstructed directly from amounts excreted in urine during a given period of time (-70 to +573% and -83 to +167%, respectively). It was also shown that the variability in creatinine excretion rate and urinary flow rate may introduce a bias in between-population comparisons. Conclusion The unit chosen to express biomonitoring data may influence the validity of estimated individual absorbed doses as well as the outcome of between-population comparisons.

  14. Testing Process Predictions of Models of Risky Choice: A Quantitative Model Comparison Approach

    Directory of Open Access Journals (Sweden)

    Thorsten Pachur

    2013-09-01

    Full Text Available This article presents a quantitative model comparison contrasting the process predictions of two prominent views on risky choice. One view assumes a trade-off between probabilities and outcomes (or nonlinear functions thereof) and the separate evaluation of risky options (expectation models). Another view assumes that risky choice is based on comparative evaluation, limited search, aspiration levels, and the forgoing of trade-offs (heuristic models). We derived quantitative process predictions for a generic expectation model and for a specific heuristic model, namely the priority heuristic (Brandstätter, Gigerenzer, & Hertwig, 2006), and tested them in two experiments. The focus was on two key features of the cognitive process: acquisition frequencies (i.e., how frequently individual reasons are looked up) and direction of search (i.e., gamble-wise vs. reason-wise). In Experiment 1, the priority heuristic predicted direction of search better than the expectation model (although neither model predicted the acquisition process perfectly); acquisition frequencies, however, were inconsistent with both models. Additional analyses revealed that these frequencies were primarily a function of what Rubinstein (1988) called "similarity." In Experiment 2, the quantitative model comparison approach showed that people seemed to rely more on the priority heuristic in difficult problems, but to make more trade-offs in easy problems. This finding suggests that risky choice may be based on a mental toolbox of strategies.

  15. Testing process predictions of models of risky choice: a quantitative model comparison approach

    Science.gov (United States)

    Pachur, Thorsten; Hertwig, Ralph; Gigerenzer, Gerd; Brandstätter, Eduard

    2013-01-01

    This article presents a quantitative model comparison contrasting the process predictions of two prominent views on risky choice. One view assumes a trade-off between probabilities and outcomes (or non-linear functions thereof) and the separate evaluation of risky options (expectation models). Another view assumes that risky choice is based on comparative evaluation, limited search, aspiration levels, and the forgoing of trade-offs (heuristic models). We derived quantitative process predictions for a generic expectation model and for a specific heuristic model, namely the priority heuristic (Brandstätter et al., 2006), and tested them in two experiments. The focus was on two key features of the cognitive process: acquisition frequencies (i.e., how frequently individual reasons are looked up) and direction of search (i.e., gamble-wise vs. reason-wise). In Experiment 1, the priority heuristic predicted direction of search better than the expectation model (although neither model predicted the acquisition process perfectly); acquisition frequencies, however, were inconsistent with both models. Additional analyses revealed that these frequencies were primarily a function of what Rubinstein (1988) called “similarity.” In Experiment 2, the quantitative model comparison approach showed that people seemed to rely more on the priority heuristic in difficult problems, but to make more trade-offs in easy problems. This finding suggests that risky choice may be based on a mental toolbox of strategies. PMID:24151472

  16. Comparison of different modeling approaches to simulate contaminant transport in a fractured limestone aquifer

    DEFF Research Database (Denmark)

    Mosthaf, Klaus; Rosenberg, L.; Balbarini, Nicola

    Given available field data and model purpose, this paper aims to develop, examine and compare modeling approaches for transport of contaminants in fractured limestone aquifers. The model comparison was conducted for a contaminated site in Denmark, where a plume of a dissolved contaminant (PCE... was combined with an analysis of heterogeneities and fractures from a nearby excavation (analog site). Methods for translating the geological information and fracture mapping into each of the model concepts were examined. Each model was compared with available field data, considering both model fit... of field data is the determination of relevant hydraulic properties and the interpretation of aqueous and solid-phase contaminant concentration sampling data. Traditional water sampling has a bias towards fracture sampling; however, concentrations in the limestone matrix are needed for assessing contaminant...

  17. A Comparison of Proliferation Resistance Measures of Misuse Scenarios Using a Markov Approach

    International Nuclear Information System (INIS)

    Yue, M.; Cheng, L.-Y.; Bari, R.

    2008-01-01

    Misuse of declared nuclear facilities is one of the important proliferation threats. The robustness of a facility against these threats is characterized by a number of proliferation resistance (PR) measures. This paper evaluates and compares PR measures for several misuse scenarios using a Markov model approach to implement the pathway analysis methodology being developed by the PR and PP (Proliferation Resistance and Physical Protection) Expert Group. Different misuse strategies can be adopted by a proliferator and each strategy is expected to have a different impact on the proliferator's success. Selected as the probabilistic measure to represent proliferation resistance, the probabilities of the proliferator's success in misusing a hypothetical ESFR (Example Sodium Fast Reactor) facility system are calculated using the Markov model based on the pathways constructed for individual misuse scenarios. Insights from a comparison of strategies that are likely to be adopted by the proliferator are discussed in this paper.
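    The Markov pathway calculation reduces to standard absorbing-chain algebra: with transient-to-transient block Q and transient-to-absorbing block R, the absorption probabilities are (I − Q)⁻¹ R. The sketch below applies this to a toy misuse pathway with invented transition probabilities; it is not the ESFR model of the record.

```python
import numpy as np

# Toy absorbing Markov chain for a misuse pathway: transient states
# {start, diversion, processing} and absorbing states {detected, success}.
# All transition probabilities are invented for illustration only.
#              start  divert process  detect success
P = np.array([
    [0.0,   0.6,   0.0,     0.3,  0.1],   # start
    [0.0,   0.0,   0.7,     0.3,  0.0],   # diversion
    [0.0,   0.0,   0.0,     0.5,  0.5],   # processing
    [0.0,   0.0,   0.0,     1.0,  0.0],   # detected (absorbing)
    [0.0,   0.0,   0.0,     0.0,  1.0],   # success  (absorbing)
])

Q = P[:3, :3]                        # transient -> transient block
R = P[:3, 3:]                        # transient -> absorbing block
N = np.linalg.inv(np.eye(3) - Q)     # fundamental matrix
absorption = N @ R                   # absorption probabilities per starting state

print("P(success | start) =", absorption[0, 1])   # 0.31 for these toy numbers
```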

  18. The Innovative Approaches to Packaging – Comparison Analysis of Intelligent and Active Packaging Perceptions in Slovakia

    Directory of Open Access Journals (Sweden)

    Loucanova Erika

    2017-06-01

    Full Text Available Packaging has always served a practical function - to hold goods together and protect them when moving toward the customer through the distribution channel. Today packaging is also a container for promoting the product and making it easier and safer to use. The importance of packaging functions is still growing, and consequently companies are increasingly interested in approaching packaging in a more innovative and creative way. The paper deals with innovative approaches to packaging that result in packaging with interactive features, in the form of active and intelligent packaging. Using comparative analysis, we monitored the perception of active packaging functions in comparison to intelligent packaging functions among different age categories. We identified the age categories which are most interested in these functions.

  19. Rigorous approach to the comparison between experiment and theory in Casimir force measurements

    International Nuclear Information System (INIS)

    Klimchitskaya, G L; Chen, F; Decca, R S; Fischbach, E; Krause, D E; Lopez, D; Mohideen, U; Mostepanenko, V M

    2006-01-01

    In most experiments on the Casimir force the comparison between measurement data and theory was done using the concept of the root-mean-square deviation, a procedure that has been criticized in the literature. Here we propose a special statistical analysis which should be performed separately for the experimental data and for the results of the theoretical computations. In so doing, the random, systematic and total experimental errors are found as functions of separation, taking into account the distribution laws for each error at 95% confidence. Independently, all theoretical errors are combined to obtain the total theoretical error at the same confidence. Finally, the confidence interval for the differences between theoretical and experimental values is obtained as a function of separation. This rigorous approach is applied to two recent experiments on the Casimir effect

  20. Support vector methods for survival analysis: a comparison between ranking and regression approaches.

    Science.gov (United States)

    Van Belle, Vanya; Pelckmans, Kristiaan; Van Huffel, Sabine; Suykens, Johan A K

    2011-10-01

    To compare and evaluate ranking, regression and combined machine learning approaches for the analysis of survival data. The literature describes two approaches based on support vector machines to deal with censored observations. In the first approach the key idea is to rephrase the task as a ranking problem via the concordance index, a problem which can be solved efficiently in a context of structural risk minimization and convex optimization techniques. In the second approach, one uses a regression approach, dealing with censoring by means of inequality constraints. The goal of this paper is then twofold: (i) introducing a new model combining the ranking and regression strategy, which retains the link with existing survival models such as the proportional hazards model via transformation models; and (ii) comparison of the three techniques on 6 clinical and 3 high-dimensional datasets and discussing the relevance of these techniques over classical approaches for survival data. We compare SVM-based survival models based on ranking constraints, models based on regression constraints, and models based on both ranking and regression constraints. The performance of the models is compared by means of three different measures: (i) the concordance index, measuring the model's discriminating ability; (ii) the logrank test statistic, indicating whether patients with a prognostic index lower than the median prognostic index have a significantly different survival than patients with a prognostic index higher than the median; and (iii) the hazard ratio after normalization to restrict the prognostic index between 0 and 1. Our results indicate a significantly better performance for models including regression constraints above models only based on ranking constraints. This work gives empirical evidence that SVM-based models using regression constraints perform significantly better than SVM-based models based on ranking constraints. Our experiments show a comparable performance for methods
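    The concordance index used as the first performance measure can be computed with a plain pairwise loop, as in the sketch below. It ignores tied event times and efficiency concerns, the toy data are invented, and it is not the authors' implementation.

```python
def concordance_index(times, events, scores):
    """Concordance index for survival data.

    times  : observed time for each subject
    events : 1 if the event was observed, 0 if censored
    scores : model risk scores (higher = higher predicted risk / shorter survival)

    A pair (i, j) is comparable when the subject with the shorter time had an
    event; the pair is concordant when that subject also has the higher score.
    """
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if scores[i] > scores[j]:
                    concordant += 1
                elif scores[i] == scores[j]:
                    concordant += 0.5
    return concordant / comparable

# Toy check: a score that perfectly orders the event times gives a c-index of 1.0.
print(concordance_index([2, 5, 7, 9], [1, 1, 0, 1], [0.9, 0.6, 0.4, 0.1]))
```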

  1. Derringer desirability and kinetic plot LC-column comparison approach for MS-compatible lipopeptide analysis.

    Science.gov (United States)

    D'Hondt, Matthias; Verbeke, Frederick; Stalmans, Sofie; Gevaert, Bert; Wynendaele, Evelien; De Spiegeleer, Bart

    2014-06-01

    Lipopeptides are currently re-emerging as an interesting subgroup in the peptide research field, having historical applications as antibacterial and antifungal agents and new potential applications as antiviral, antitumor, immune-modulating and cell-penetrating compounds. However, due to their specific structure, chromatographic analysis often requires special buffer systems or the use of trifluoroacetic acid, limiting mass spectrometry detection. Therefore, we used a traditional aqueous/acetonitrile-based gradient system, containing 0.1% (m/v) formic acid, to separate four pharmaceutically relevant lipopeptides (polymyxin B1, caspofungin, daptomycin and gramicidin A1), which were selected based upon hierarchical cluster analysis (HCA) and principal component analysis (PCA). In total, the performance of four different C18 columns, including one UPLC column, was evaluated using two parallel approaches. First, a Derringer desirability function was used, whereby six single and multiple chromatographic response values were rescaled into one overall D-value per column. Using this approach, the YMC Pack Pro C18 column was ranked as the best column for general MS-compatible lipopeptide separation. Secondly, the kinetic plot approach was used to compare the different columns at different flow rate ranges. As the optimal kinetic column performance is obtained at its maximal pressure, the length elongation factor λ (Pmax/Pexp) was used to transform the obtained experimental data (retention times and peak capacities) and construct kinetic performance limit (KPL) curves, allowing a direct visual and unbiased comparison of the selected columns, whereby the YMC Triart C18 UPLC and ACE C18 columns performed best. Finally, differences in column performance and the (dis)advantages of both approaches are discussed.
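    The Derringer step can be sketched generically: rescale each response to a [0, 1] desirability and combine the individual desirabilities into one overall D-value by a geometric mean. The responses, ranges, and column names below are invented; the study combined six chromatographic responses rather than the two shown.

```python
import math

def desirability_larger_is_better(y, low, high, weight=1.0):
    """Derringer-type desirability for a 'larger is better' response."""
    if y <= low:
        return 0.0
    if y >= high:
        return 1.0
    return ((y - low) / (high - low)) ** weight

def overall_desirability(ds):
    """Combine single desirabilities into one overall D-value (geometric mean)."""
    return math.prod(ds) ** (1.0 / len(ds))

# Toy column comparison: two invented chromatographic responses per column
# (e.g. peak capacity and a separation score), rescaled and combined.
columns = {
    "column_A": [desirability_larger_is_better(180, 100, 250),
                 desirability_larger_is_better(0.8, 0.0, 1.0)],
    "column_B": [desirability_larger_is_better(220, 100, 250),
                 desirability_larger_is_better(0.6, 0.0, 1.0)],
}
for name, ds in columns.items():
    print(name, round(overall_desirability(ds), 3))
```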

  2. Comparison of Different Approaches to Predict the Performance of Pumps As Turbines (PATs)

    Directory of Open Access Journals (Sweden)

    Mauro Venturini

    2018-04-01

    Full Text Available This paper deals with the comparison of different methods which can be used for the prediction of the performance curves of pumps as turbines (PATs). The considered approaches are four, i.e., one physics-based simulation model (a “white box” model), two “gray box” models, which integrate theory on turbomachines with specific data correlations, and one “black box” model. More in detail, the modeling approaches are: (1) a physics-based simulation model developed by the same authors, which includes the equations for estimating head, power, and efficiency and uses loss coefficients and specific parameters; (2) a model developed by Derakhshan and Nourbakhsh, which first predicts the best efficiency point of a PAT and then reconstructs its complete characteristic curves by means of two ad hoc equations; (3) the prediction model developed by Singh and Nestmann, which predicts the complete turbine characteristics based on pump shape and size; (4) an Evolutionary Polynomial Regression model, which represents a data-driven hybrid scheme which can be used for identifying the explicit mathematical relationship between PAT and pump curves. All approaches are applied to literature data, relying on both pump and PAT performance curves of head, power, and efficiency over the entire range of operation. The experimental data were provided by Derakhshan and Nourbakhsh for four different turbomachines, working in both pump and PAT mode with specific speed values in the range 1.53–5.82. This paper provides a quantitative assessment of the predictions made by means of the considered approaches and also analyzes their consistency from a physical point of view. Advantages and drawbacks of each method are also analyzed and discussed.

  3. LEGAL CERTAINTY OF INDUSTRIAL DESIGN REVENUE IN INDONESIA BASED ON INTELLECTUAL PROPERTY APPROACH AND LEGAL COMPARISON

    Directory of Open Access Journals (Sweden)

    Ranti Fauza Mayana

    2018-03-01

    Full Text Available [Legal Certainty Of Industrial Design Revenue In Indonesia Based On Intellectual Property Approach And Legal Comparison] Protection of Industrial Designs, as intellectual property, is based on human creativity expressed through creation, taste and intention. According to Article 25 paragraph (1) of the TRIPs Agreement, a protected Industrial Design is a new or original Industrial Design. This provision holds the principle that novelty is established when a design differs from previous designs; novelty here encompasses both newness and originality, the principal basis for the grant of Industrial Design rights, whereas this principle is not fully adopted in the Indonesian Industrial Design provisions. The Industrial Design Decree in Indonesia only requires novelty, without clarifying how to interpret the novelty requirement, so that a large number of Industrial Design Rights are obtained based on the Minor Change approach, where slight differences in form and configuration have essentially been taken to demonstrate novelty. The minor change approach is considered to exclude the aspect of originality and is less able to provide legal certainty to the holder of registered Industrial Design Rights. This paper aims to explore the minor change approach as the basis for the evaluation of the novelty of Industrial Designs, from the perspective of comparative law in several countries of the world, namely the United States, Japan, the European Union and Australia, as a study and reference in an effort to establish protection of Industrial Design Rights in Indonesia that can provide legal certainty. Keywords: Industrial Design Revenue, Comparative Law.

  4. Next-Generation Mitogenomics: A Comparison of Approaches Applied to Caecilian Amphibian Phylogeny.

    Directory of Open Access Journals (Sweden)

    Simon T Maddock

    Full Text Available Mitochondrial genome (mitogenome) sequences are being generated with increasing speed due to the advances of next-generation sequencing (NGS) technology and associated analytical tools. However, detailed comparisons to explore the utility of alternative NGS approaches applied to the same taxa have not been undertaken. We compared a 'traditional' Sanger sequencing method with two NGS approaches (shotgun sequencing and non-indexed, multiplex amplicon sequencing) on four different sequencing platforms (Illumina's HiSeq and MiSeq, Roche's 454 GS FLX, and Life Technologies' Ion Torrent) to produce seven (near-) complete mitogenomes from six species that form a small radiation of caecilian amphibians from the Seychelles. The fastest, most accurate method of obtaining mitogenome sequences that we tested was direct sequencing of genomic DNA (shotgun sequencing) using the MiSeq platform. Bayesian inference and maximum likelihood analyses using seven different partitioning strategies were unable to resolve compellingly all phylogenetic relationships among the Seychelles caecilian species, indicating the need for additional data in this case.

  5. Next-Generation Mitogenomics: A Comparison of Approaches Applied to Caecilian Amphibian Phylogeny.

    Science.gov (United States)

    Maddock, Simon T; Briscoe, Andrew G; Wilkinson, Mark; Waeschenbach, Andrea; San Mauro, Diego; Day, Julia J; Littlewood, D Tim J; Foster, Peter G; Nussbaum, Ronald A; Gower, David J

    2016-01-01

    Mitochondrial genome (mitogenome) sequences are being generated with increasing speed due to the advances of next-generation sequencing (NGS) technology and associated analytical tools. However, detailed comparisons to explore the utility of alternative NGS approaches applied to the same taxa have not been undertaken. We compared a 'traditional' Sanger sequencing method with two NGS approaches (shotgun sequencing and non-indexed, multiplex amplicon sequencing) on four different sequencing platforms (Illumina's HiSeq and MiSeq, Roche's 454 GS FLX, and Life Technologies' Ion Torrent) to produce seven (near-) complete mitogenomes from six species that form a small radiation of caecilian amphibians from the Seychelles. The fastest, most accurate method of obtaining mitogenome sequences that we tested was direct sequencing of genomic DNA (shotgun sequencing) using the MiSeq platform. Bayesian inference and maximum likelihood analyses using seven different partitioning strategies were unable to resolve compellingly all phylogenetic relationships among the Seychelles caecilian species, indicating the need for additional data in this case.

  6. A direct comparison of remote sensing approaches for high-throughput phenotyping in plant breeding

    Directory of Open Access Journals (Sweden)

    Maria Tattaris

    2016-08-01

    Full Text Available Remote sensing (RS) of plant canopies permits non-intrusive, high-throughput monitoring of plant physiological characteristics. This study compared three RS approaches: a low-flying UAV (unmanned aerial vehicle), proximal sensing, and satellite-based imagery. Two physiological traits were considered, canopy temperature (CT) and a vegetation index (NDVI), to determine the most viable approaches for large-scale crop genetic improvement. The UAV-based platform achieves plot-level resolution while measuring several hundred plots in one mission via high-resolution thermal and multispectral imagery measured at altitudes of 30-100 m. The satellite measures multispectral imagery from an altitude of 770 km. Information was compared with proximal measurements using IR thermometers and an NDVI sensor at a distance of 0.5-1 m above plots. For robust comparisons, CT and NDVI were assessed on panels of elite cultivars under irrigated and drought conditions, in different thermal regimes, and on un-adapted genetic resources under water deficit. Correlations between airborne data and yield/biomass at maturity were generally higher than equivalent proximal correlations. NDVI was derived from high-resolution satellite imagery only for larger-sized plots (8.5 x 2.4 m) due to restricted pixel density. Results support the use of UAV-based RS techniques for high-throughput phenotyping for both precision and efficiency.
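    Of the two traits, NDVI is a simple band ratio, (NIR − Red) / (NIR + Red), which the sketch below applies to toy reflectance arrays; the values are invented and no platform-specific calibration is implied.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index from NIR and red reflectance bands."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + 1e-12)   # small epsilon avoids division by zero

# Toy 2x2 "plot" reflectances: dense canopy (high NIR) vs near-bare soil.
nir_band = np.array([[0.60, 0.55], [0.20, 0.18]])
red_band = np.array([[0.08, 0.10], [0.15, 0.16]])
print(np.round(ndvi(nir_band, red_band), 2))
```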

  7. Applicability of the cost-effectiveness approach for comparison of waste management options

    International Nuclear Information System (INIS)

    Vuori, S.; Peltonen, E.; Vieno, T.; Vira, J.

    1984-01-01

    There is an obvious need to consider the achievable level of safety of waste management in view of the costs involved. The feasibility of the cost-effectiveness approach for this purpose is discussed in the framework of practical case studies. The analysis indicates that such an approach has clear benefits, but it also reveals several issues and ambiguities in its application. The waste management alternatives considered include various concepts for the disposal of low- and intermediate-level reactor wastes as well as of unreprocessed spent fuel. The employed impact indicators describe both individual and collective risks. In addition, indicators that simultaneously give a perspective on other risks in society and a means of rank-ordering the alternative options are proposed. The cost-effectiveness ratios for collective risks vary in the range of ten to hundreds of millions of US $ per man-Sv. The examples considered also indicate that increased costs do not necessarily improve safety. Furthermore, the comparison of the safety of different options requires more sophisticated and realistic models than those employed in the present analyses, because an unbalanced degree of conservatism could result in misleading conclusions. (author)

  8. German nuclear codes revised: comparison with approaches used in other countries

    International Nuclear Information System (INIS)

    Raetzke, C.; Micklinghoff, M.

    2005-01-01

    The article deals with the plan of the German Federal Ministry for the Environment (BMU) to revise the German set of nuclear codes, and draws a comparison with approaches pursued in other countries in formulating and implementing new requirements imposed upon existing plants. A striking feature of the BMU project is the intention to have the codes reflect the state of the art in an entirely abstract way, irrespective of existing plants. This implies new requirements imposed on plant design, among other things. However, the state authorities, which establish the licensing conditions for individual plants in concrete terms, will not be able to apply these new codes, for legal reasons (protection of vested rights), to the extent that they incorporate changes in safety philosophy. The procedure adopted has also raised considerable concern. The processing time of two years is inordinately short, and participation of the public and of industry does not go beyond the strictly formal framework of general public participation. In the light of this absence of quality assurance, it would be surprising if this new set of codes did not suffer from considerable deficits in its contents. Other countries show that the BMU is embarking on an isolated approach in every respect. Elsewhere, backfitting requirements are developed carefully and over long periods of time; they are discussed in detail with the operators; costs and benefits are weighed, and the consequences are evaluated. These elements are common to procedures in all countries, irrespective of very different steps in detail. (orig.)

  9. Comparison of Two Probabilistic Fatigue Damage Assessment Approaches Using Prognostic Performance Metrics

    Directory of Open Access Journals (Sweden)

    Xuefei Guan

    2011-01-01

    Full Text Available In this paper, two probabilistic prognosis updating schemes are compared. One is based on the classical Bayesian approach and the other on the newly developed maximum relative entropy (MRE) approach. The algorithm performance of the two models is evaluated using a set of recently developed prognostics-based metrics. Various uncertainties from measurements, modeling, and parameter estimation are integrated into the prognosis framework as random input variables for fatigue damage of materials. Measurements of response variables are then used to update the statistical distributions of the random variables, and the prognosis results are updated using posterior distributions. The Markov Chain Monte Carlo (MCMC) technique is employed to provide the posterior samples for model updating in the framework. Experimental data are used to demonstrate the operation of the proposed probabilistic prognosis methodology. A set of prognostics-based metrics is employed to quantitatively evaluate the prognosis performance and compare the proposed entropy method with the classical Bayesian updating algorithm. In particular, model accuracy, precision, robustness and convergence are rigorously evaluated in addition to the qualitative visual comparison. Following this, potential developments and improvements for the prognostics-based metrics are discussed in detail.
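    The MCMC step can be illustrated with a generic random-walk Metropolis sampler updating a single scalar parameter from a prior and one noisy observation. The fatigue-model details, priors, and data of the record are not reproduced; every number below is invented.

```python
import numpy as np

def metropolis_hastings(log_post, x0, n_steps=5000, step=0.5, seed=0):
    """Random-walk Metropolis sampler returning posterior samples of a scalar parameter."""
    rng = np.random.default_rng(seed)
    samples = np.empty(n_steps)
    x, lp = x0, log_post(x0)
    for k in range(n_steps):
        prop = x + step * rng.normal()
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept / reject
            x, lp = prop, lp_prop
        samples[k] = x
    return samples

# Toy update of a single "damage parameter" m: Gaussian prior combined with a
# Gaussian likelihood from one noisy measurement (all numbers invented).
prior_mean, prior_sd = 3.0, 1.0
obs, obs_sd = 3.6, 0.3
log_post = lambda m: (-0.5 * ((m - prior_mean) / prior_sd) ** 2
                      - 0.5 * ((obs - m) / obs_sd) ** 2)

post = metropolis_hastings(log_post, x0=prior_mean)
print(post[1000:].mean(), post[1000:].std())   # posterior mean near the precision-weighted combination
```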

  10. The comparison of various approach to evaluation erosion risks and design control erosion measures

    Science.gov (United States)

    Kapicka, Jiri

    2015-04-01

    At present there is one methodology in the Czech Republic for computing and comparing erosion risks. This methodology also contains a method for designing erosion control measures. Its basis is the Universal Soil Loss Equation (USLE) and its result, the long-term average annual rate of erosion (G). The methodology is used by landscape planners. Data and statistics from the database of erosion events in the Czech Republic show that many troubles and damages stem from local episodic erosion events. The extent of these events and their impact depend on the local precipitation event, the current plant phase and the soil conditions. Such erosion events can cause trouble and damage to agricultural land, municipal property and hydraulic structures even where a location is, from the point of view of the long-term average annual rate of erosion, in good condition. Another way to compute and compare erosion risks is the episode-based approach. This paper presents a comparison of various approaches to computing erosion risks. The comparison was carried out for a locality from the database of erosion events on agricultural land in the Czech Republic where two erosion events have been recorded. The study area is a simple agricultural parcel without any barriers that could strongly influence water flow and soil sediment transport. The computation of erosion risks (for all methodologies) was based on laboratory analysis of soil samples taken in the study area. Results of the USLE and MUSLE methodologies and results from the mathematical model Erosion 3D were compared. Variances of the results in the spatial distribution of the places with the highest soil erosion were compared and discussed. Another part presents the variance of the designed erosion control measures whose design was based on the different methodologies. The results show the variance of erosion risks computed by the different methodologies. These variances can start a discussion about the different approaches to computing and evaluating erosion risks in areas
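    For reference, the USLE underlying the Czech methodology is a simple product of factors, A = R · K · LS · C · P. The sketch below evaluates it with invented factor values; the units depend on the factor scheme used and no values from the study area are implied.

```python
def usle_soil_loss(R, K, LS, C, P):
    """Universal Soil Loss Equation: long-term average annual soil loss A = R*K*LS*C*P."""
    return R * K * LS * C * P

# Illustrative factor values (invented, not from the study area):
# R rainfall erosivity, K soil erodibility, LS slope length/steepness,
# C cover-management factor, P support-practice factor.
print(usle_soil_loss(R=45.0, K=0.32, LS=1.8, C=0.25, P=1.0), "t/ha/yr")
```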

  11. Advances in the GRADE approach to rate the certainty in estimates from a network meta-analysis.

    Science.gov (United States)

    Brignardello-Petersen, Romina; Bonner, Ashley; Alexander, Paul E; Siemieniuk, Reed A; Furukawa, Toshi A; Rochwerg, Bram; Hazlewood, Glen S; Alhazzani, Waleed; Mustafa, Reem A; Murad, M Hassan; Puhan, Milo A; Schünemann, Holger J; Guyatt, Gordon H

    2018-01-01

    This article describes conceptual advances of the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) working group guidance to evaluate the certainty of evidence (confidence in evidence, quality of evidence) from network meta-analysis (NMA). Application of the original GRADE guidance, published in 2014, in a number of NMAs has resulted in advances that strengthen its conceptual basis and make the process more efficient. This guidance will be useful for systematic review authors who aim to assess the certainty of all pairwise comparisons from an NMA and who are familiar with the basic concepts of NMA and the traditional GRADE approach for pairwise meta-analysis. Two principles of the original GRADE NMA guidance are that we need to rate the certainty of the evidence for each pairwise comparison within a network separately and that in doing so we need to consider both the direct and indirect evidence. We present, discuss, and illustrate four conceptual advances: (1) consideration of imprecision is not necessary when rating the direct and indirect estimates to inform the rating of NMA estimates, (2) there is no need to rate the indirect evidence when the certainty of the direct evidence is high and the contribution of the direct evidence to the network estimate is at least as great as that of the indirect evidence, (3) we should not trust a statistical test of global incoherence of the network to assess incoherence at the pairwise comparison level, and (4) in the presence of incoherence between direct and indirect evidence, the certainty of the evidence of each estimate can help decide which estimate to believe. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Comparison of Australian and US Cost-Benefit Approaches to MEPS

    Energy Technology Data Exchange (ETDEWEB)

    McMahon, James E.

    2004-03-12

    The Australian Greenhouse Office contracted with the Collaborative Labeling and Appliance Standards Program (CLASP) for LBNL to compare US and Australian approaches to analyzing costs and benefits of minimum energy performance standards (MEPS). This report compares the approaches for three types of products: household refrigerators and freezers, small electric storage water heaters, and commercial/industrial air conditioners. This report presents the findings of similarities and differences between the approaches of the two countries and suggests changes to consider in the approach taken in Australia. The purpose of the Australian program is to reduce greenhouse gas emissions, while the US program is intended to increase energy efficiency; each program is thus subject to specific constraints. The market and policy contexts are different, with the USA producing most of its own products and conducting pioneering engineering-economic studies to identify maximum energy efficiency levels that are technologically feasible and economically justified. In contrast, Australia imports a large share of its products and adopts MEPS already in place elsewhere. With these differences in circumstances, Australia's analysis approach could be expected to have less analytical detail and still result in MEPS levels that are appropriate for their policy and market context. In practice, the analysis required to meet these different objectives is quite similar. To date, Australia's cost-benefit analysis has served the goals and philosophies of the program well and been highly effective in successfully identifying MEPS that are significantly reducing greenhouse gas emissions while providing economic benefits to consumers. In some cases, however, the experience of the USA--using more extensive data sets and more detailed analysis--suggests possible improvements to Australia's cost-benefit analysis. The principal findings of the comparison are: (1) The Technology and Market

  13. [Comparison of ablation of left-sided accessory pathway by atrial septal and retrograde arterial approach].

    Science.gov (United States)

    Zhu, J G; Bao, Z Y; Gu, X

    2017-03-07

    Objective: To compare the advantages and disadvantages of radiofrequency ablation of left-sided accessory pathways via a transseptal (atrial septal puncture) approach versus a retrograde transaortic approach. Methods: A total of 184 patients with left-sided accessory pathways were treated at Taizhou People's Hospital and Subei People's Hospital from March 2012 to August 2015. Of these, 103 cases were treated by the retrograde transaortic approach (transarterial group) and 81 cases by transseptal puncture with mapping and ablation in the left atrium (transseptal group). Ablation procedure time, acute success and relapse rates (overall and by accessory pathway location subgroup), safety (serious complications), and other intraoperative and postoperative complications were compared. Results: There was no significant difference (P>0.05) between the transarterial and transseptal groups in ablation procedure time ((25±18) vs (22±15) min), acute success rate (98.1% vs 97.5%), relapse rate (1.0% vs 1.2%), or safety (1 vs 0 serious complications). In the septal subgroup there was no statistical difference in procedure time ((22±18) vs (25±19) min), acute success (91.7% vs 89.9%) or relapse rate (0 vs 11.1%); in the posterior wall subgroup there was no statistical difference in procedure time ((18±15) vs (16±12) min), acute success (100% vs 100%) or relapse rate (0 vs 0); in the lateral wall subgroup there was no statistical difference in procedure time ((29±20) vs (21±18) min), acute success (98.3% vs 98.1%) or relapse rate (1.7% vs 0%) (all P>0.05). Conclusion: Ablation of left-sided accessory pathways by the transseptal and the transaortic approach shows no statistical difference in procedure time, acute success or relapse rates, or safety; in particular cases the two methods are complementary.

  14. Pair-Wise Trajectory Management-Oceanic (PTM-O) Concept of Operations—Version 3.9

    Science.gov (United States)

    Jones, Kenneth M.

    2014-01-01

    This document describes the Pair-wise Trajectory Management-Oceanic (PTM-O) Concept of Operations (ConOps). Pair-wise Trajectory Management (PTM) is a concept that includes airborne and ground-based capabilities designed to enable, and to benefit from, an airborne pair-wise distance-monitoring capability. PTM includes the capabilities needed for the controller to issue a PTM clearance that resolves a conflict for a specific pair of aircraft. PTM avionics include the capabilities needed for the flight crew to manage their trajectory relative to specific designated aircraft. PTM-Oceanic (PTM-O) is a region-specific application of the PTM concept. PTM is sponsored by the National Aeronautics and Space Administration (NASA) Concept and Technology Development Project (part of NASA's Airspace Systems Program). The goal of PTM is to use enhanced and distributed communications and surveillance along with airborne tools to permit reduced separation standards for given aircraft pairs, thereby increasing the capacity and efficiency of aircraft operations at a given altitude or volume of airspace.

  15. The structure of pairwise correlation in mouse primary visual cortex reveals functional organization in the absence of an orientation map.

    Science.gov (United States)

    Denman, Daniel J; Contreras, Diego

    2014-10-01

    Neural responses to sensory stimuli are not independent. Pairwise correlation can reduce coding efficiency, occur independent of stimulus representation, or serve as an additional channel of information, depending on the timescale of correlation and the method of decoding. Any role for correlation depends on its magnitude and structure. In sensory areas with maps, like the orientation map in primary visual cortex (V1), correlation is strongly related to the underlying functional architecture, but it is unclear whether this correlation structure is an essential feature of the system or arises from the arrangement of cells in the map. We assessed the relationship between functional architecture and pairwise correlation by measuring both synchrony and correlated spike count variability in mouse V1, which lacks an orientation map. We observed significant pairwise synchrony, which was organized by distance and relative orientation preference between cells. We also observed nonzero correlated variability in both the anesthetized (0.16) and awake states (0.18). Our results indicate that the structure of pairwise correlation is maintained in the absence of an underlying anatomical organization and may be an organizing principle of the mammalian visual system preserved by nonrandom connectivity within local networks. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  16. Fast-GPU-PCC: A GPU-Based Technique to Compute Pairwise Pearson's Correlation Coefficients for Time Series Data-fMRI Study.

    Science.gov (United States)

    Eslami, Taban; Saeed, Fahad

    2018-04-20

    Functional magnetic resonance imaging (fMRI) is a non-invasive brain imaging technique which has been used regularly for studying the brain's functional activities in the past few years. A widely used measure for capturing functional associations in the brain is Pearson's correlation coefficient, which is commonly employed for constructing functional networks and studying dynamic functional connectivity of the brain. These are useful measures for understanding the effects of brain disorders on connectivities among brain regions. fMRI scanners produce a huge number of voxels, and using traditional central processing unit (CPU)-based techniques for computing pairwise correlations is very time consuming, especially when a large number of subjects is being studied. In this paper, we propose a graphics processing unit (GPU)-based algorithm called Fast-GPU-PCC for computing pairwise Pearson's correlation coefficients. Based on the symmetry of Pearson's correlation, this approach returns the N(N-1)/2 correlation coefficients located in the strictly upper triangular part of the correlation matrix. Storing correlations in a one-dimensional array with the order proposed in this paper is useful for further usage. Our experiments on real and synthetic fMRI data for different numbers of voxels and varying lengths of time series show that the proposed approach outperformed state-of-the-art GPU-based techniques as well as sequential CPU-based versions. We show that Fast-GPU-PCC runs 62 times faster than the CPU-based version and about 2 to 3 times faster than two other state-of-the-art GPU-based methods.
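
    A minimal CPU-side NumPy sketch of the quantity Fast-GPU-PCC computes is given below; it is not the paper's GPU implementation, and the row-major ordering of the condensed output is an assumption for illustration only:

      import numpy as np

      def pairwise_pcc_upper(X):
          # X: (n_voxels, n_timepoints) array of fMRI time series.
          # Returns the N(N-1)/2 Pearson coefficients of the strictly upper
          # triangle of the voxel-by-voxel correlation matrix as a 1-D array.
          R = np.corrcoef(X)                        # full N x N correlation matrix
          iu = np.triu_indices(X.shape[0], k=1)     # strictly upper-triangle indices
          return R[iu]

      # Example: 1,000 voxels, 200 time points (a dense matrix is only feasible
      # for modest N; the GPU method targets much larger voxel counts)
      X = np.random.randn(1000, 200)
      condensed = pairwise_pcc_upper(X)             # shape (1000 * 999 / 2,)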

  17. A preliminary comparison of hydrodynamic approaches for flood inundation modeling of urban areas in Jakarta Ciliwung river basin

    Science.gov (United States)

    Rojali, Aditia; Budiaji, Abdul Somat; Pribadi, Yudhistira Satya; Fatria, Dita; Hadi, Tri Wahyu

    2017-07-01

    This paper addresses numerical modeling approaches for flood inundation in urban areas. A sound strategy for choosing between 1D, 2D, or hybrid 1D-2D models is important for optimizing flood inundation analyses. Finding a cost-effective yet robust and accurate model has been our priority and motivation in the absence of available High Performance Computing facilities. The application of 1D, 1D/2D, and full 2D modeling approaches to a river flood study in the Jakarta Ciliwung river basin, and a comparison of the approaches benchmarked for the inundation study, are presented. This study demonstrates the successful use of 1D/2D and 2D systems to model the Jakarta Ciliwung river basin in terms of inundation results and computational aspects. The findings of the study provide an interesting comparison between modeling approaches (HEC-RAS 1D, 1D-2D, 2D, and ANUGA) when benchmarked against the Manggarai water level measurements.

  18. Comparison of Exposure in the Kaplan Versus the Kocher Approach in the Treatment of Radial Head Fractures.

    Science.gov (United States)

    Barnes, Leslie Fink; Lombardi, Joseph; Gardner, Thomas R; Strauch, Robert J; Rosenwasser, Melvin P

    2018-01-01

    The aim of this study was to compare the complete visible surface area of the radial head, neck, and coronoid in the Kaplan and Kocher approaches to the lateral elbow. The hypothesis was that the Kaplan approach would afford greater visibility due to the differential anatomy of the intermuscular planes. Ten cadavers were dissected with the Kaplan and Kocher approaches, and the visible surface area was measured in situ using a 3-dimensional digitizer. Six measurements were taken for each approach by 2 surgeons, and the mean of these measurements was analyzed. The mean surface area visible with the lateral collateral ligament (LCL) preserved was 616.6 mm² in the Kaplan approach, compared with 136.2 mm² in the Kocher approach when the LCL was preserved. Using a 2-way analysis of variance, the difference between these 2 approaches was statistically significant. When the LCL complex was incised in the Kocher approach, the average visible surface area of the Kocher approach was 456.1 mm² and remained statistically smaller than that of the Kaplan approach. The average surface area of the coronoid visible using a proximally extended Kaplan approach was 197.8 mm². The Kaplan approach affords a significantly greater visible surface area of the proximal radius than the Kocher approach.

  19. Theory of pairwise coupling embedded in more general local dispersion relations

    International Nuclear Information System (INIS)

    Fuchs, V.; Bers, A.; Harten, L.

    1985-01-01

    Earlier work on the mode conversion theory by Fuchs, Ko, and Bers is detailed and expanded upon, and its relation to energy conservation is discussed. Given a local dispersion relation, D(ω; k, z) = 0, describing stable waves excited at an externally imposed frequency ω, a pairwise mode-coupling event embedded therein is extracted by expanding D(k, z) around a contour k = k_c(z) given by ∂D/∂k = 0. The branch points of D(k, z) = 0 are the turning points of a second-order differential-equation representation. In obtaining the fraction of mode-converted energy, the connection formula and conservation of energy must be used together. Also, proper attention must be given to distinguish cases for which the coupling disappears or persists upon confluence of the branches, a property which is shown to depend on the forward (v_g·v_ph > 0) or backward (v_g·v_ph < 0) nature of the waves. Examples occurring in ion-cyclotron and lower-hybrid heating are presented, illustrating the use of the theory

  20. pyRMSD: a Python package for efficient pairwise RMSD matrix calculation and handling.

    Science.gov (United States)

    Gil, Víctor A; Guallar, Víctor

    2013-09-15

    We introduce pyRMSD, an open source standalone Python package that aims at offering an integrative and efficient way of performing Root Mean Square Deviation (RMSD)-related calculations on large sets of structures. It is specially tuned to do fast collective RMSD calculations, such as pairwise RMSD matrices, implementing up to three well-known superposition algorithms. pyRMSD provides its own symmetric distance matrix class that, besides being usable as a regular matrix, helps to save memory and increases memory access speed. This last feature can dramatically improve the overall performance of any Python algorithm using it. In addition, its extensibility, testing suites and documentation make it a good choice for those in need of a workbench for developing or testing new algorithms. The source code (under MIT license), installer, test suites and benchmarks can be found at https://pele.bsc.es/ under the tools section. victor.guallar@bsc.es Supplementary data are available at Bioinformatics online.
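
    For readers unfamiliar with the underlying computation, the sketch below is a plain NumPy reference for a pairwise RMSD matrix using Kabsch superposition; it only illustrates what pyRMSD computes far more efficiently and does not use pyRMSD's own API:

      import numpy as np

      def kabsch_rmsd(P, Q):
          # P, Q: (n_atoms, 3) coordinates; minimum RMSD after optimal
          # rigid-body superposition of P onto Q (Kabsch algorithm).
          P = P - P.mean(axis=0)
          Q = Q - Q.mean(axis=0)
          U, _, Vt = np.linalg.svd(P.T @ Q)
          d = np.sign(np.linalg.det(Vt.T @ U.T))      # avoid improper rotations
          R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
          return float(np.sqrt(np.mean(np.sum((P @ R.T - Q) ** 2, axis=1))))

      def pairwise_rmsd_condensed(conformations):
          # conformations: (n_conf, n_atoms, 3); returns the upper triangle of
          # the symmetric pairwise RMSD matrix as a condensed 1-D array.
          n = len(conformations)
          return np.array([kabsch_rmsd(conformations[i], conformations[j])
                           for i in range(n) for j in range(i + 1, n)])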

  1. Evaluation of advanced multiplex short tandem repeat systems in pairwise kinship analysis.

    Science.gov (United States)

    Tamura, Tomonori; Osawa, Motoki; Ochiai, Eriko; Suzuki, Takanori; Nakamura, Takashi

    2015-09-01

    The AmpFLSTR Identifiler Kit, comprising 15 autosomal short tandem repeat (STR) loci, is commonly employed in forensic practice for calculating match probabilities and for parentage testing. The conventional system provides insufficient power for kinship analyses such as sibship testing because of the limited number of examined loci. This study evaluated the power of the PowerPlex Fusion System, GlobalFiler Kit, and PowerPlex 21 System, which comprise more than 20 autosomal STR loci, to estimate pairwise blood relatedness (i.e., parent-child, full siblings, second-degree relatives, and first cousins). The genotypes of all 24 STR loci in 10,000 putative pedigrees were constructed by simulation. The likelihood ratio for each locus was calculated from joint probabilities for relatives and non-relatives, and the combined likelihood ratio was calculated according to the product rule. The addition of STR loci improved separation between relatives and non-relatives. However, these systems were less effective when extended to inference for first cousins. In conclusion, these advanced systems will be useful in forensic personal identification, especially in the evaluation of full siblings and second-degree relatives. Moreover, the additional loci may give rise to two major issues: more frequent mutational events and several pairs of linked loci on the same chromosome. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
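
    A minimal sketch of the product-rule combination step described above is shown below; the per-locus likelihood ratios are hypothetical placeholder values, and the simplification assumes unlinked loci in linkage equilibrium:

      import math

      def combined_lr(per_locus_lrs):
          # Product rule: the combined likelihood ratio is the product of the
          # per-locus LRs, valid only for unlinked loci in linkage equilibrium.
          return math.prod(per_locus_lrs)

      # Hypothetical per-locus LRs for a tested sibling pair: values > 1 favour
      # the claimed relationship over the hypothesis that the pair is unrelated.
      lrs = [2.3, 0.8, 1.9, 4.1, 1.2]
      print(combined_lr(lrs))   # about 17.2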

  2. Visualization of pairwise and multilocus linkage disequilibrium structure using latent forests.

    Directory of Open Access Journals (Sweden)

    Raphaël Mourad

    Linkage disequilibrium study represents a major issue in statistical genetics as it plays a fundamental role in gene mapping and helps us to learn more about human history. The complex structure of linkage disequilibrium makes its exploratory data analysis essential yet challenging. Visualization methods, such as the triangular heat map implemented in Haploview, provide simple and useful tools to help understand complex genetic patterns, but remain insufficient to fully describe them. Probabilistic graphical models have been widely recognized as a powerful formalism allowing a concise and accurate modeling of dependences between variables. In this paper, we propose a method for short-range, long-range and chromosome-wide linkage disequilibrium visualization using forests of hierarchical latent class models. Thanks to its hierarchical nature, our method is shown to provide a compact view of both pairwise and multilocus linkage disequilibrium spatial structures for the geneticist. In addition, a multilocus linkage disequilibrium measure has been designed to evaluate linkage disequilibrium within hierarchy clusters. To learn the proposed model, a new scalable algorithm is presented. It constrains the dependence scope, relying on physical positions, and is able to deal with more than one hundred thousand single nucleotide polymorphisms. The proposed algorithm is fast and does not require phased genotype data.

  3. Benefits of Using Pairwise Trajectory Management in the Central East Pacific

    Science.gov (United States)

    Chartrand, Ryan; Ballard, Kathryn

    2016-01-01

    Pairwise Trajectory Management (PTM) is a concept that utilizes airborne and ground-based capabilities to enable airborne spacing operations in oceanic regions. The goal of PTM is to use enhanced surveillance, along with airborne tools, to manage the spacing between aircraft. Due to the enhanced airborne surveillance of Automatic Dependent Surveillance-Broadcast (ADS-B) information and reduced communication, the PTM minimum spacing distance will be less than distances currently required of an air traffic controller. Reduced minimum distance will increase the capacity of aircraft operations at a given altitude or volume of airspace, thereby increasing time on desired trajectory and overall flight efficiency. PTM is designed to allow a flight crew to resolve a specific traffic conflict (or conflicts), identified by the air traffic controller, while maintaining the flight crew's desired altitude. The air traffic controller issues a PTM clearance to a flight crew authorized to conduct PTM operations in order to resolve a conflict for the pair (or pairs) of aircraft (i.e., the PTM aircraft and a designated target aircraft). This clearance requires the flight crew of the PTM aircraft to use their ADS-B-enabled onboard equipment to manage their spacing relative to the designated target aircraft to ensure spacing distances that are no closer than the PTM minimum distance. When the air traffic controller determines that PTM is no longer required, the controller issues a clearance to cancel the PTM operation.

  4. constNJ: an algorithm to reconstruct sets of phylogenetic trees satisfying pairwise topological constraints.

    Science.gov (United States)

    Matsen, Frederick A

    2010-06-01

    This article introduces constNJ (constrained neighbor-joining), an algorithm for phylogenetic reconstruction of sets of trees with constrained pairwise rooted subtree-prune-regraft (rSPR) distance. We are motivated by the problem of constructing sets of trees that must fit into a recombination, hybridization, or similar network. Rather than first finding a set of trees that are optimal according to a phylogenetic criterion (e.g., likelihood or parsimony) and then attempting to fit them into a network, constNJ estimates the trees while enforcing specified rSPR distance constraints. The primary input for constNJ is a collection of distance matrices derived from sequence blocks which are assumed to have evolved in a tree-like manner, such as blocks of an alignment which do not contain any recombination breakpoints. The other input is a set of rSPR constraint inequalities for any set of pairs of trees. constNJ is consistent and a strict generalization of the neighbor-joining algorithm; it uses the new notion of maximum agreement partitions (MAPs) to assure that the resulting trees satisfy the given rSPR distance constraints.

  5. Prediction of microsleeps using pairwise joint entropy and mutual information between EEG channels.

    Science.gov (United States)

    Baseer, Abdul; Weddell, Stephen J; Jones, Richard D

    2017-07-01

    Microsleeps are involuntary and brief instances of complete loss of responsiveness, typically of 0.5-15 s duration. They adversely affect performance in extended attention-driven jobs and can be fatal. Our aim was to predict microsleeps from 16-channel EEG signals. Two information theoretic concepts - pairwise joint entropy and mutual information - were independently used to continuously extract features from the EEG signals. A k-nearest neighbour (kNN) estimator with k = 3 was used to calculate both joint entropy and mutual information. Highly correlated features were discarded and the rest were ranked using the Fisher score followed by the average 3-fold cross-validated area under the receiver operating characteristic curve (AUC_ROC). The leave-one-out method (LOOM) was used to test the performance of the microsleep prediction system on independent data. The best prediction 0.25 s ahead achieved an AUC_ROC, sensitivity, precision, geometric mean (GM), and φ of 0.93, 0.68, 0.33, 0.75, and 0.38, respectively, with joint entropy features and a single linear discriminant analysis (LDA) classifier.
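
    The sketch below illustrates the two information-theoretic features named above for a pair of EEG channels. For simplicity it uses a histogram (plug-in) estimator rather than the kNN (k = 3) estimator used in the study, so it illustrates the quantities rather than the authors' exact estimator; the channel data and bin count are hypothetical:

      import numpy as np

      def entropy(x, bins=16):
          # Plug-in estimate of H(X) in bits from a 1-D histogram.
          p, _ = np.histogram(x, bins=bins)
          p = p / p.sum()
          p = p[p > 0]
          return float(-np.sum(p * np.log2(p)))

      def joint_entropy(x, y, bins=16):
          # Plug-in estimate of H(X, Y) in bits from a 2-D histogram.
          pxy, _, _ = np.histogram2d(x, y, bins=bins)
          pxy = pxy / pxy.sum()
          nz = pxy > 0
          return float(-np.sum(pxy[nz] * np.log2(pxy[nz])))

      def mutual_information(x, y, bins=16):
          # I(X; Y) = H(X) + H(Y) - H(X, Y)
          return entropy(x, bins) + entropy(y, bins) - joint_entropy(x, y, bins)

      # Hypothetical 1-s epochs from two EEG channels sampled at 256 Hz
      rng = np.random.default_rng(0)
      ch1 = rng.standard_normal(256)
      ch2 = 0.6 * ch1 + 0.4 * rng.standard_normal(256)
      print(joint_entropy(ch1, ch2), mutual_information(ch1, ch2))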

  6. Effect of air turbulence on gas transport in soil; comparison of approaches

    Science.gov (United States)

    Pourbakhtiar, Alireza; Papadikis, Konstantinos; Poulsen, Tjalfe; Bridge, Jonathan; Wilkinson, Stephen

    2017-04-01

    Greenhouse gases play a key role in global warming, and soil is a source of greenhouse gases such as methane (CH4). Radon (Rn), a radioactive gas, can be emitted from the subsurface into the atmosphere and leads to health concerns in urban areas. Temperature, humidity, air pressure and vegetation can affect gas emissions in soil (Oertel et al., 2016). Wind-induced pressure fluctuations have been shown in many cases to be an important factor in the transport of gas through soil and other porous media, for example in landfill gas emissions (Poulsen et al., 2001). We applied an experimental set-up for measuring the effect of controlled air turbulence on gas transport in soil in relation to sample depth. Two approaches for measuring the effect of wind turbulence on gas transport were applied and compared. Experiments were carried out with diffusion of CO2 and air as tracer gases at average vertical wind speeds of 0 to 0.83 m s-1. In approach A, six different sample thicknesses from 5 to 30 cm were selected and a total of 4 different wind conditions with different speeds and fluctuations were applied. In approach B, a sample with constant depth was used, with five oxygen sensors placed inside the sample at different depths. A total of 111 experiments were carried out. Gas transport is described by the advection-dispersion equation and quantified as a dispersion coefficient. Oxygen breakthrough curves as a function of distance to the sample surface exposed to wind were derived numerically with an explicit forward-time, central-space finite-difference model to evaluate gas transport. We showed that wind-induced turbulent fluctuations are an important factor in gas transport and can increase gas transport by an average of 45 times relative to molecular diffusion under zero-wind conditions. Comparison of the two experimental strategies indicated that constant-depth samples (approach B) are more reliable for measuring gas transport under the influence of wind.
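
    As a sketch of the kind of numerical treatment mentioned above, the snippet below advances a one-dimensional advection-dispersion equation with an explicit forward-time, central-space (FTCS) scheme; the grid, coefficients and boundary values are hypothetical and not taken from the study:

      import numpy as np

      def ftcs_advection_dispersion(c0, D, v, dx, dt, n_steps, c_surface=1.0):
          # Explicit FTCS update for dc/dt = D d2c/dx2 - v dc/dx on a 1-D column.
          # Stability (roughly) requires dt <= dx**2 / (2 * D).
          c = c0.copy()
          for _ in range(n_steps):
              diff = D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
              adv = v * (c[2:] - c[:-2]) / (2 * dx)
              c[1:-1] += dt * (diff - adv)
              c[0] = c_surface          # fixed concentration at the exposed surface
              c[-1] = c[-2]             # zero-gradient condition at the bottom
          return c

      # Hypothetical 0.3 m soil column discretised into 1 cm cells, 1 h of transport
      profile = ftcs_advection_dispersion(np.zeros(31), D=1e-5, v=1e-6,
                                          dx=0.01, dt=1.0, n_steps=3600)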

  7. Surgically induced astigmatism after phacoemulsification by temporal clear corneal and superior clear corneal approach: a comparison

    Directory of Open Access Journals (Sweden)

    Nikose AS

    2018-01-01

    Archana Sunil Nikose, Dhrubojyoti Saha, Pradnya Mukesh Laddha, Mayuri Patil; Department of Ophthalmology, N.K.P. Salve Institute and LMH, Nagpur, Maharashtra, India. Introduction: Cataract surgery has undergone various advances since it evolved from ancient couching to modern phacoemulsification cataract surgery. Surgically induced astigmatism (SIA) remains one of the most common complications. The introduction of the sutureless clear corneal incision has gained increasing popularity worldwide because it offers several advantages over the traditional sutured limbal incision and scleral tunnel. A clear corneal incision has the benefit of being bloodless and easy to perform, but SIA is still a concern. Purpose: In this study, we evaluated the SIA in clear corneal incisions with temporal approach and superior approach phacoemulsification. The two incisions were compared using keratometric readings of the preoperative and postoperative refractive status. Methodology: This was a hospital-based prospective interventional comparative randomized controlled trial of 261 patients conducted in a rural tertiary care center from September 2012 to August 2014. Visual acuity and detailed anterior and posterior segment examinations were performed, and the cataract was graded according to the Lens Opacification Classification System II. Patients were divided into two groups for phacoemulsification, group A and group B, who underwent the temporal and superior clear corneal approach, respectively. The patients were followed up on days 1, 7, 30, and 90 postoperatively. The parameters recorded were uncorrected visual acuity, best-corrected visual acuity, slit lamp examination, and keratometry. The mean difference of SIA between the 30th and 90th day was statistically evaluated using the paired t-test, and all analyses were performed using SPSS 18.0 (SPSS Inc.) software. Results: The mean postoperative SIA in group A was 0.998 D on the 30th day, which

  8. Evaluating the effect of corridors and landscape heterogeneity on dispersal probability: a comparison of three spatially explicit modelling approaches

    DEFF Research Database (Denmark)

    Jepsen, J. U.; Baveco, J. M.; Topping, C. J.

    2004-01-01

    preferences of the modeller, rather than by a critical evaluation of model performance. We present a comparison of three common spatial simulation approaches (patch-based incidence-function model (IFM), individual-based movement model (IBMM), individual-based population model including detailed behaviour...

  9. Identifying functional reorganization of spelling networks: an individual peak probability comparison approach

    Science.gov (United States)

    Purcell, Jeremy J.; Rapp, Brenda

    2013-01-01

    Previous research has shown that damage to the neural substrates of orthographic processing can lead to functional reorganization during reading (Tsapkini et al., 2011); in this research we ask if the same is true for spelling. To examine the functional reorganization of spelling networks we present a novel three-stage Individual Peak Probability Comparison (IPPC) analysis approach for comparing the activation patterns obtained during fMRI of spelling in a single brain-damaged individual with dysgraphia to those obtained in a set of non-impaired control participants. The first analysis stage characterizes the convergence in activations across non-impaired control participants by applying a technique typically used for characterizing activations across studies: Activation Likelihood Estimate (ALE) (Turkeltaub et al., 2002). This method was used to identify locations that have a high likelihood of yielding activation peaks in the non-impaired participants. The second stage provides a characterization of the degree to which the brain-damaged individual's activations correspond to the group pattern identified in Stage 1. This involves performing a Mahalanobis distance statistics analysis (Tsapkini et al., 2011) that compares each of a control group's peak activation locations to the nearest peak generated by the brain-damaged individual. The third stage evaluates the extent to which the brain-damaged individual's peaks are atypical relative to the range of individual variation among the control participants. This IPPC analysis allows for a quantifiable, statistically sound method for comparing an individual's activation pattern to the patterns observed in a control group and, thus, provides a valuable tool for identifying functional reorganization in a brain-damaged individual with impaired spelling. Furthermore, this approach can be applied more generally to compare any individual's activation pattern with that of a set of other individuals. PMID:24399981
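
    Stage 2 of the IPPC approach rests on a Mahalanobis distance between an activation peak and the distribution of control peaks. A minimal sketch of that distance computation is given below; the coordinates are hypothetical peak locations, not data from the study:

      import numpy as np

      def mahalanobis_to_controls(peak, control_peaks):
          # peak: (3,) coordinate of the individual's nearest peak;
          # control_peaks: (n_controls, 3) peak coordinates from the control group.
          mu = control_peaks.mean(axis=0)
          cov = np.cov(control_peaks, rowvar=False)
          diff = peak - mu
          # pseudo-inverse guards against a near-singular covariance matrix
          return float(np.sqrt(diff @ np.linalg.pinv(cov) @ diff))

      controls = np.array([[-44.0, -60.0, -8.0], [-46.0, -57.0, -12.0],
                           [-41.0, -63.0, -5.0], [-49.0, -61.0, -10.0],
                           [-40.0, -55.0, -3.0], [-45.0, -66.0, -9.0]])
      print(mahalanobis_to_controls(np.array([-30.0, -70.0, 2.0]), controls))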

  10. A comparison and assessment of approaches for modelling flow over in-line tube banks

    International Nuclear Information System (INIS)

    Iacovides, Hector; Launder, Brian; West, Alastair

    2014-01-01

    Highlights: • We present wall-resolved LES and URANS simulations of periodic flow in heated in-line tube banks. • Simulations of flow in a confined in-line tube-bank are compared with experimental data. • When pitch-to-diameter (P/D) ratio becomes less than 1.6, the periodic flow becomes skewed. • URANS tested here unable to mimic the periodic flow at P/D = 1.6. • In confined tube banks URANS suggest alternate, in the axial direction, flow deflection. - Abstract: The paper reports experiences from applying alternative strategies for modelling turbulent flow and local heat-transfer coefficients around in-line tube banks. The motivation is the simulation of conditions in the closely packed cross-flow heat exchangers used in advanced gas-cooled nuclear reactors (AGRs). The main objective is the flow simulation in large-scale tube banks with confining walls. The suitability and accuracy of wall-resolved large-eddy simulation (LES) and Unsteady Reynolds-Averaged Navier–Stokes (URANS) approaches are examined for generic, square, in-line tube banks, where experimental data are limited but available. Within the latter approach, both eddy-viscosity and Reynolds-stress-transport models have been tested. The assumption of flow periodicity in all three directions is investigated by varying the domain size. It is found that the path taken by the fluid through the tube-bank configuration differs according to the treatment of turbulence and whether the flow is treated as two- or three-dimensional. Finally, the important effect of confining walls has been examined by making direct comparison with the experiments of the complete test rig of Aiba et al. (1982)

  11. Metal Mixture Modeling Evaluation project: 2. Comparison of four modeling approaches

    Science.gov (United States)

    Farley, Kevin J.; Meyer, Joe; Balistrieri, Laurie S.; DeSchamphelaere, Karl; Iwasaki, Yuichi; Janssen, Colin; Kamo, Masashi; Lofts, Steve; Mebane, Christopher A.; Naito, Wataru; Ryan, Adam C.; Santore, Robert C.; Tipping, Edward

    2015-01-01

    As part of the Metal Mixture Modeling Evaluation (MMME) project, models were developed by the National Institute of Advanced Industrial Science and Technology (Japan), the U.S. Geological Survey (USA), HDR|HydroQual, Inc. (USA), and the Centre for Ecology and Hydrology (UK) to address the effects of metal mixtures on biological responses of aquatic organisms. A comparison of the 4 models, as they were presented at the MMME Workshop in Brussels, Belgium (May 2012), is provided herein. Overall, the models were found to be similar in structure (free ion activities computed by WHAM; specific or non-specific binding of metals/cations in or on the organism; specification of metal potency factors and/or toxicity response functions to relate metal accumulation to biological response). Major differences in modeling approaches are attributed to various modeling assumptions (e.g., single versus multiple types of binding site on the organism) and specific calibration strategies that affected the selection of model parameters. The models provided a reasonable description of additive (or nearly additive) toxicity for a number of individual toxicity test results. Less-than-additive toxicity was more difficult to describe with the available models. Because of limitations in the available datasets and the strong inter-relationships among the model parameters (log K_M values, potency factors, toxicity response parameters), further evaluation of specific model assumptions and calibration strategies is needed.

  12. Perfluoroalkyl substances in aquatic environment-comparison of fish and passive sampling approaches.

    Science.gov (United States)

    Cerveny, Daniel; Grabic, Roman; Fedorova, Ganna; Grabicova, Katerina; Turek, Jan; Kodes, Vit; Golovko, Oksana; Zlabek, Vladimir; Randak, Tomas

    2016-01-01

    The concentrations of seven perfluoroalkyl substances (PFASs) were investigated in 36 European chub (Squalius cephalus) individuals from six localities in the Czech Republic. Chub muscle and liver tissue were analysed at all sampling sites. In addition, analyses of 16 target PFASs were performed in Polar Organic Chemical Integrative Samplers (POCISs) deployed in the water at the same sampling sites. We evaluated the possibility of using passive samplers as a standardized method for monitoring PFAS contamination in aquatic environments and the mutual relationships between determined concentrations. Only perfluorooctane sulphonate was above the LOQ in fish muscle samples and 52% of the analysed fish individuals exceeded the Environmental Quality Standard for water biota. Fish muscle concentration is also particularly important for risk assessment of fish consumers. The comparison of fish tissue results with published data showed the similarity of the Czech results with those found in Germany and France. However, fish liver analysis and the passive sampling approach resulted in different fish exposure scenarios. The total concentration of PFASs in fish liver tissue was strongly correlated with POCIS data, but pollutant patterns differed between these two matrices. The differences could be attributed to the metabolic activity of the living organism. In addition to providing a different view regarding the real PFAS cocktail to which the fish are exposed, POCISs fulfil the Three Rs strategy (replacement, reduction, and refinement) in animal testing. Copyright © 2015 Elsevier Inc. All rights reserved.

  13. On Thermally Interacting Multiple Boreholes with Variable Heating Strength: Comparison between Analytical and Numerical Approaches

    Directory of Open Access Journals (Sweden)

    Marc A. Rosen

    2012-08-01

    The temperature response in the soil surrounding multiple boreholes is evaluated analytically and numerically. The assumption of constant heat flux along the borehole wall is examined by coupling the problem to the heat transfer problem inside the borehole and presenting a model with variable heat flux along the borehole length. In the analytical approach, a line source of heat with a finite length is used to model the conduction of heat in the soil surrounding the boreholes. In the numerical method, a finite volume method in a three-dimensional meshed domain is used. In order to determine the heat flux boundary condition, the analytical quasi-three-dimensional solution to the heat transfer problem of the U-tube configuration inside the borehole is used. This solution takes into account the variation in heating strength along the borehole length due to the temperature variation of the fluid running in the U-tube. Thus, critical depths at which thermal interaction occurs can be determined. Finally, in order to examine the validity of the numerical method, a comparison is made with the results of the line source method.

  14. Sales Comparison Approach Indicating Heterogeneity of Particular Type of Real Estate and Corresponding Valuation Accuracy

    Directory of Open Access Journals (Sweden)

    Martin Cupal

    2017-01-01

    The article focuses on the heterogeneity of goods, namely real estate, and consequently deals with market valuation accuracy. Real estate property is heterogeneous in that every unit is unique in terms of its construction, condition, financing and, above all, location, so assessing its value is necessarily difficult. The research also indicates the efficiency of markets across property types based on their level of variability. It is based on two databases consisting of various types of real estate with specific market parameters; these parameters determine the differences across the types and reveal heterogeneity. The first database was built on valuations by the sales comparison approach and the second on data for real properties offered on the market. The methodology is based on univariate and multivariate statistics of key variables of those databases. The multivariate analysis is performed with a Hotelling T2 control chart and statistics with appropriate numerical characteristics. The results of the two databases were combined by weights with regard to the dependence criterion of the variables. The final results indicate potential valuation accuracy across the types. The main contribution of the research is that the evaluation was not only derived from the price deviation or distribution, but also draws on the causes of real property heterogeneity as a whole.

  15. Analyzing Information Systems Development: A Comparison and Analysis of Eight IS Development Approaches.

    Science.gov (United States)

    Iivari, Juhani; Hirschheim, Rudy

    1996-01-01

    Analyzes and compares eight information systems (IS) development approaches: Information Modelling, Decision Support Systems, the Socio-Technical approach, the Infological approach, the Interactionist approach, the Speech Act-based approach, Soft Systems Methodology, and the Scandinavian Trade Unionist approach. Discusses the organizational roles…

  16. Fast pairwise structural RNA alignments by pruning of the dynamical programming matrix.

    Directory of Open Access Journals (Sweden)

    Jakob H Havgaard

    2007-10-01

    It has become clear that noncoding RNAs (ncRNAs) play important roles in cells, and emerging studies indicate that there might be a large number of unknown ncRNAs in mammalian genomes. There exist computational methods that can be used to search for ncRNAs by comparing sequences from different genomes. One main problem with these methods is their computational complexity, and heuristics are therefore employed. Two heuristics are currently very popular: pre-folding and pre-aligning. However, these heuristics are not ideal, as pre-aligning is dependent on sequence similarity that may not be present and pre-folding ignores the comparative information. Here, pruning of the dynamical programming matrix is presented as an alternative novel heuristic constraint. All subalignments that do not exceed a length-dependent minimum score are discarded as the matrix is filled out, thus giving the advantage of providing the constraints dynamically. This has been included in a new implementation of the FOLDALIGN algorithm for pairwise local or global structural alignment of RNA sequences. It is shown that time and memory requirements are dramatically lowered while overall performance is maintained. Furthermore, a new divide and conquer method is introduced to limit the memory requirement during global alignment and backtracking of local alignments. All branch points in the computed RNA structure are found and used to divide the structure into smaller unbranched segments. Each segment is then realigned and backtracked in a normal fashion. Finally, the FOLDALIGN algorithm has also been updated with a better memory implementation and an improved energy model. With these improvements in the algorithm, the FOLDALIGN software package provides the molecular biologist with an efficient and user-friendly tool for searching for new ncRNAs. The software package is available for download at http://foldalign.ku.dk.
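
    To make the pruning idea concrete, the sketch below applies a length-dependent minimum-score filter while filling a simple Smith-Waterman-style matrix for plain sequences; FOLDALIGN applies the same idea to a far more expensive Sankoff-style structural alignment, so this only illustrates the heuristic, not the FOLDALIGN algorithm itself. The scoring values and threshold function are hypothetical:

      import numpy as np

      def pruned_local_align(a, b, match=2.0, mismatch=-1.0, gap=-2.0,
                             min_score=lambda length: 0.5 * length):
          # Local alignment DP in which any subalignment whose score does not
          # exceed a length-dependent minimum is discarded (reset to zero) as
          # the matrix is filled out.
          n, m = len(a), len(b)
          H = np.zeros((n + 1, m + 1))              # best score ending at (i, j)
          L = np.zeros((n + 1, m + 1), dtype=int)   # length of that subalignment
          best = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  s = match if a[i - 1] == b[j - 1] else mismatch
                  candidates = [
                      (0.0, 0),
                      (H[i - 1, j - 1] + s, L[i - 1, j - 1] + 1),
                      (H[i - 1, j] + gap, L[i - 1, j] + 1),
                      (H[i, j - 1] + gap, L[i, j - 1] + 1),
                  ]
                  score, length = max(candidates, key=lambda t: t[0])
                  if score <= 0 or score < min_score(length):
                      score, length = 0.0, 0        # prune this subalignment
                  H[i, j], L[i, j] = score, length
                  best = max(best, score)
          return best

      print(pruned_local_align("GCAUCGGC", "GCAACGGC"))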

  17. Pharmacological treatments in asthma-affected horses: A pair-wise and network meta-analysis.

    Science.gov (United States)

    Calzetta, L; Roncada, P; di Cave, D; Bonizzi, L; Urbani, A; Pistocchini, E; Rogliani, P; Matera, M G

    2017-11-01

    Equine asthma is a disease characterised by reversible airflow obstruction, bronchial hyper-responsiveness and airway inflammation following exposure of susceptible horses to specific airborne agents. Although clinical remission can be achieved in a low-airborne dust environment, repeated exacerbations may lead to irreversible airway remodelling. The available data on the pharmacotherapy of equine asthma result from several small studies, and no head-to-head clinical trials have been conducted among the available medications. To assess the impact of the pharmacological interventions in equine asthma and compare the effect of different classes of drugs on lung function. Pair-wise and network meta-analysis. Literature searches for clinical trials on the pharmacotherapy of equine asthma were performed. The risk of publication bias was assessed by funnel plots and Egger's test. Changes in maximum transpulmonary or pleural pressure, pulmonary resistance and dynamic lung compliance vs. control were analysed via random-effects models and Bayesian networks. The results obtained from 319 equine asthma-affected horses were extracted from 32 studies. Bronchodilators, corticosteroids and chromones improved maximum transpulmonary or pleural pressure (range: -8.0 to -21.4 cmH2O; Ptherapies. Long-term treatments were more effective than short-term treatments. Weak publication bias was detected. This study demonstrates that long-term treatments with inhaled corticosteroids and long-acting β2-AR agonists may represent the first choice for treating equine asthma. Further high quality clinical trials are needed to clarify whether inhaled bronchodilators should be preferred to inhaled corticosteroids or vice versa, and to investigate the potential superiority of combination therapy in equine asthma. © 2017 EVJ Ltd.

  18. Further investigations of the W-test for pairwise epistasis testing [version 1; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Richard Howey

    2017-07-01

    Background: In a recent paper, a novel W-test for pairwise epistasis testing was proposed that appeared, in computer simulations, to have higher power than competing alternatives. Application to genome-wide bipolar data detected significant epistasis between SNPs in genes of relevant biological function. Network analysis indicated that the implicated genes formed two separate interaction networks, each containing genes highly related to autism and neurodegenerative disorders. Methods: Here we investigate further the properties and performance of the W-test via theoretical evaluation, computer simulations and application to real data. Results: We demonstrate that, for common variants, the W-test is closely related to several existing tests of association allowing for interaction, including logistic regression on 8 degrees of freedom, although logistic regression can show inflated type I error for low minor allele frequencies, whereas the W-test shows good/conservative type I error control. Although in some situations the W-test can show higher power, logistic regression is not limited to tests on 8 degrees of freedom but can instead be tailored to impose greater structure on the assumed alternative hypothesis, offering a power advantage when the imposed structure matches the true structure. Conclusions: The W-test is a potentially useful method for testing for association - without necessarily implying interaction - between genetic variants and disease, particularly when one or more of the genetic variants are rare. For common variants, the advantages of the W-test are less clear, and, indeed, there are situations where existing methods perform better. In our investigations, we further uncover a number of problems with the practical implementation and application of the W-test (to bipolar disorder) previously described, apparently due to inadequate use of standard data quality-control procedures. This observation leads us to urge caution in
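
    The 8-degree-of-freedom logistic-regression comparator mentioned above can be sketched as a likelihood-ratio test of a saturated joint-genotype model against an intercept-only model. The sketch below is one plausible implementation (using statsmodels), not the exact analysis in the paper; in practice sparse genotype tables can cause separation and convergence problems:

      import numpy as np
      import statsmodels.api as sm
      from scipy import stats

      def joint_genotype_lrt(g1, g2, y):
          # g1, g2: genotype counts (0/1/2) for two SNPs; y: 0/1 disease status.
          # Full model: 9-level joint genotype factor (8 dummy columns + intercept).
          joint = g1.astype(int) * 3 + g2.astype(int)        # values 0..8
          dummies = np.eye(9)[joint][:, 1:]                  # drop the reference level
          X_full = sm.add_constant(dummies)
          ll_full = sm.Logit(y, X_full).fit(disp=0).llf
          ll_null = sm.Logit(y, np.ones((len(y), 1))).fit(disp=0).llf
          chi2 = 2.0 * (ll_full - ll_null)
          return chi2, stats.chi2.sf(chi2, df=8)             # 8-df association test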

  19. Heisenberg coupling constant predicted for molecular magnets with pairwise spin-contamination correction

    Energy Technology Data Exchange (ETDEWEB)

    Masunov, Artëm E., E-mail: amasunov@ucf.edu [NanoScience Technology Center, Department of Chemistry, and Department of Physics, University of Central Florida, Orlando, FL 32826 (United States); Photochemistry Center RAS, ul. Novatorov 7a, Moscow 119421 (Russian Federation); Gangopadhyay, Shruba [Department of Physics, University of California, Davis, CA 95616 (United States); IBM Almaden Research Center, 650 Harry Road, San Jose, CA 95120 (United States)

    2015-12-15

    A new method to eliminate the spin-contamination in broken symmetry density functional theory (BS DFT) calculations is introduced. Unlike the conventional spin-purification correction, this method is based on canonical Natural Orbitals (NO) for each high/low spin coupled electron pair. We derive an expression to extract the energy of the pure singlet state in terms of the energy of the BS DFT solution, the occupation number of the bonding NO, and the energy of the higher spin state built on these bonding and antibonding NOs (not the self-consistent Kohn–Sham orbitals of the high spin state). Compared to the other spin-contamination correction schemes, the spin-correction is applied to each correlated electron pair individually. We investigate two binuclear Mn(IV) molecular magnets using this pairwise correction. While one of the molecules is described by magnetic orbitals strongly localized on the metal centers, and the spin gap is accurately predicted by the Noodleman and Yamaguchi schemes, for the other one the gap is predicted poorly by these schemes due to strong delocalization of the magnetic orbitals onto the ligands. We show our new correction to yield more accurate results in both cases. - Highlights: • Magnetic orbitals obtained for high and low spin states are not related. • Spin-purification correction becomes inaccurate for delocalized magnetic orbitals. • We use the natural orbitals of the broken symmetry state to build the high spin state. • This new correction is made separately for each electron pair. • Our spin-purification correction is more accurate for delocalised magnetic orbitals.
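
    For context, the conventional Yamaguchi spin-projection scheme that the pairwise correction is compared against is usually written as follows (quoted as background rather than from the paper; sign and factor conventions for the Heisenberg Hamiltonian vary between authors):

      \hat{H} = -2J\,\hat{S}_A \cdot \hat{S}_B,
      \qquad
      J = \frac{E_{\mathrm{BS}} - E_{\mathrm{HS}}}
               {\langle \hat{S}^2 \rangle_{\mathrm{HS}} - \langle \hat{S}^2 \rangle_{\mathrm{BS}}}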

  20. A discrete model of Ostwald ripening based on multiple pairwise interactions

    Science.gov (United States)

    Di Nunzio, Paolo Emilio

    2018-06-01

    A discrete multi-particle model of Ostwald ripening based on direct pairwise interactions is developed for particles with incoherent interfaces as an alternative to the classical LSW mean field theory. The rate of matter exchange depends on the average surface-to-surface interparticle distance, a characteristic feature of the system which naturally incorporates the effect of volume fraction of second phase. The multi-particle diffusion is described through the definition of an interaction volume containing all the particles involved in the exchange of solute. At small volume fractions this is proportional to the size of the central particle, at higher volume fractions it gradually reduces as a consequence of diffusion screening described on a geometrical basis. The topological noise present in real systems is also included. For volume fractions below about 0.1 the model predicts broad and right-skewed stationary size distributions resembling a lognormal function. Above this value, a transition to sharper, more symmetrical but still right-skewed shapes occurs. An excellent agreement with experiments is obtained for 3D particle size distributions of solid-solid and solid-liquid systems with volume fraction 0.07, 0.30, 0.52 and 0.74. The kinetic constant of the model depends on the cube root of volume fraction up to about 0.1, then increases rapidly with an upward concavity. It is in good agreement with the available literature data on solid-liquid mixtures in the volume fraction range from 0.20 to about 0.75.

  1. Psychoeducation for hypochondriasis: A comparison of a cognitive-behavioural approach and a problem-solving approach

    NARCIS (Netherlands)

    Buwalda, Femke M.; Bouman, Theo. K.; van Duijn, Marijtje A. J.; Van der Duin, M.

    In this study, two 6-week psychoeducational courses for hypochondriasis are compared, one based on the cognitive-behavioural approach, and the other on the problem-solving approach. Effects of both courses on hypochondriacal complaints, depression, trait anxiety, and number of problems encountered

  2. Comparison of student's learning achievement through realistic mathematics education (RME) approach and problem solving approach on grade VII

    Science.gov (United States)

    Ilyas, Muhammad; Salwah

    2017-02-01

    This research was experimental. The purpose of this study was to determine the difference in, and the quality of, students' learning achievement between students taught through the Realistic Mathematics Education (RME) approach and students taught through the problem-solving approach. This was a quasi-experimental study with a non-equivalent experiment group design. The population was all grade VII students of one junior high school in Palopo in the second semester of the 2015/2016 academic year. Two classes were purposively selected as the sample: class VII-5 (28 students) as experiment group I and class VII-6 (23 students) as experiment group II. Experiment group I was taught using the RME approach, whereas experiment group II was taught using the problem-solving approach. Data were collected by giving students a pretest and a posttest. The analysis used descriptive statistics and inferential statistics based on the t-test. The descriptive analysis showed that the average mathematics score of students taught using the problem-solving approach was similar to that of students taught using the RME approach, with both at the high category. In addition, it can be concluded that (1) there was no difference in the mathematics learning results of students taught using the RME approach and students taught using the problem-solving approach, and (2) the quality of learning achievement of students who received the RME approach and the problem-solving approach was the same, both at the high category.

  3. Pairwise and higher-order correlations among drug-resistance mutations in HIV-1 subtype B protease

    Directory of Open Access Journals (Sweden)

    Morozov Alexandre V

    2009-08-01

    Background: The reaction of HIV protease to inhibitor therapy is characterized by the emergence of complex mutational patterns which confer drug resistance. The response of HIV protease to drugs often involves both primary mutations that directly inhibit the action of the drug, and a host of accessory resistance mutations that may occur far from the active site but may contribute to restoring the fitness or stability of the enzyme. Here we develop a probabilistic approach based on connected information that allows us to study residue, pair-level and higher-order correlations within the same framework. Results: We apply our methodology to a database of approximately 13,000 sequences which have been annotated by the treatment history of the patients from which the samples were obtained. We show that including pair interactions is essential for agreement with the mutational data, since neglect of these interactions results in order-of-magnitude errors in the probabilities of the simultaneous occurrence of many mutations. The magnitude of these pair correlations changes dramatically between sequences obtained from patients that were or were not exposed to drugs. Higher-order effects make a contribution of as much as 10% for residues taken three at a time, but increase to more than twice that for 10- to 15-residue groups. The sequence data are insufficient to determine the higher-order effects for larger groups. We find that higher-order interactions have a significant effect on the predicted frequencies of sequences with large numbers of mutations. While relatively rare, such sequences are more prevalent after multi-drug therapy. The relative importance of these higher-order interactions increases with the number of drugs the patient had been exposed to. Conclusion: Correlations are critical for the understanding of mutation patterns in HIV protease. Pair interactions have substantial qualitative effects, while higher-order interactions are
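
    For readers unfamiliar with the connected-information framework referred to above, it is commonly defined in the maximum-entropy literature (this definition is quoted as background, not from the paper) as the entropy drop when k-th order marginal constraints are added:

      I_C^{(k)} = H\left[\tilde{P}^{(k-1)}\right] - H\left[\tilde{P}^{(k)}\right],
      \qquad
      I_N \equiv \sum_{i} H[P_i] - H[P] = \sum_{k=2}^{N} I_C^{(k)}

    where \tilde{P}^{(k)} denotes the maximum-entropy distribution consistent with all marginals of order k, so the pairwise term I_C^{(2)} measures how much of the total multi-information I_N is captured by pair interactions alone.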

  4. Statistical approaches in published ophthalmic clinical science papers: a comparison to statistical practice two decades ago.

    Science.gov (United States)

    Zhang, Harrison G; Ying, Gui-Shuang

    2018-02-09

    The aim of this study is to evaluate the current practice of statistical analysis of eye data in clinical science papers published in the British Journal of Ophthalmology (BJO) and to determine whether the practice of statistical analysis has improved in the past two decades. All clinical science papers (n=125) published in BJO in January-June 2017 were reviewed for their statistical analysis approaches for analysing the primary ocular measure. We compared our findings to the results from a previous paper that reviewed BJO papers in 1995. Of 112 papers eligible for analysis, half of the studies analysed the data at an individual level because of the nature of observation, 16 (14%) studies analysed data from one eye only, 36 (32%) studies analysed data from both eyes at the ocular level, one study (1%) analysed the overall summary of ocular findings per individual and three (3%) studies used a paired comparison. Among studies with data available from both eyes, 50 (89%) of 56 papers in 2017 did not analyse data from both eyes or ignored the intereye correlation, compared with 60 (90%) of 67 papers in 1995 (P=0.96). Among studies that analysed data from both eyes at an ocular level, 33 (92%) of 36 studies completely ignored the intereye correlation in 2017, compared with 16 (89%) of 18 studies in 1995 (P=0.40). A majority of studies did not analyse the data properly when data from both eyes were available. The practice of statistical analysis did not improve in the past two decades. Collaborative efforts should be made in the vision research community to improve the practice of statistical analysis for ocular data. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  5. An empirical comparison of different approaches for combining multimodal neuroimaging data with Support Vector Machine

    Directory of Open Access Journals (Sweden)

    William ePettersson-Yeo

    2014-07-01

    In the pursuit of clinical utility, neuroimaging researchers of psychiatric and neurological illness are increasingly using analyses, such as support vector machine (SVM), that allow inference at the single-subject level. Recent studies employing single-modality data, however, suggest that classification accuracies must be improved for such utility to be realised. One possible solution is to integrate different data types to provide a single combined output classification, either by generating a single decision function based on an integrated kernel matrix, or by creating an ensemble of multiple single-modality classifiers and integrating their predictions. Here, we describe four integrative approaches: (1) an un-weighted sum of kernels, (2) multi-kernel learning, (3) prediction averaging, and (4) majority voting, and compare their ability to enhance classification accuracy relative to the best single-modality classification accuracy. We achieve this by integrating structural, functional and diffusion tensor magnetic resonance imaging data in order to compare ultra-high risk (UHR; n=19), first episode psychosis (FEP; n=19) and healthy control subjects (HCs; n=19). Our results show that (i) whilst integration can enhance classification accuracy by up to 13%, the frequency of such instances may be limited, (ii) where classification can be enhanced, simple methods may yield greater increases relative to more computationally complex alternatives, and (iii) the potential for classification enhancement is highly influenced by the specific diagnostic comparison under consideration. In conclusion, our findings suggest that for moderately sized clinical neuroimaging datasets, combining different imaging modalities in a data-driven manner is no magic bullet for increasing classification accuracy.
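
    Two of the simpler integration strategies listed above (the un-weighted kernel sum and prediction averaging) can be sketched as follows with scikit-learn; the linear kernels and array shapes are illustrative assumptions, not the features or pipeline used in the study:

      import numpy as np
      from sklearn.svm import SVC

      def kernel_sum_svm(train_modalities, y):
          # (1) Un-weighted sum of kernels: one SVM on the combined kernel matrix.
          # train_modalities: list of (n_subjects, n_features) arrays, one per modality.
          K = sum(X @ X.T for X in train_modalities)          # summed linear kernels
          clf = SVC(kernel="precomputed").fit(K, y)
          return clf   # prediction requires the summed test-vs-train kernel

      def prediction_averaging(train_modalities, y, test_modalities):
          # (3) Prediction averaging: one SVM per modality, average the decision values.
          scores = []
          for X_tr, X_te in zip(train_modalities, test_modalities):
              clf = SVC(kernel="linear").fit(X_tr, y)
              scores.append(clf.decision_function(X_te))
          return (np.mean(scores, axis=0) > 0).astype(int)    # ensemble class labels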

  6. Social Comparison and Body Image in Adolescence: A Grounded Theory Approach

    Science.gov (United States)

    Krayer, A.; Ingledew, D. K.; Iphofen, R.

    2008-01-01

    This study explored the use of social comparison appraisals in adolescents' lives with particular reference to enhancement appraisals which can be used to counter threats to the self. Social comparison theory has been increasingly used in quantitative research to understand the processes through which societal messages about appearance influence…

  7. The Pennsylvania Phosphorus Index and TopoSWAT: A comparison of transport components and approaches

    Science.gov (United States)

    The regional Chesapeake Bay Conservation Innovation Grant Initiative includes comparison of TopoSWAT results and Phosphorus Index (P Index) evaluations of eight study watersheds throughout the Chesapeake Bay watershed. While similarities exist between the P Index and TopoSWAT, further comparison of ...

  8. Distinguishing between forensic science and forensic pseudoscience: testing of validity and reliability, and approaches to forensic voice comparison.

    Science.gov (United States)

    Morrison, Geoffrey Stewart

    2014-05-01

    In this paper it is argued that one should not attempt to directly assess whether a forensic analysis technique is scientifically acceptable. Rather one should first specify what one considers to be appropriate principles governing acceptable practice, then consider any particular approach in light of those principles. This paper focuses on one principle: the validity and reliability of an approach should be empirically tested under conditions reflecting those of the case under investigation using test data drawn from the relevant population. Versions of this principle have been key elements in several reports on forensic science, including forensic voice comparison, published over the last four-and-a-half decades. The aural-spectrographic approach to forensic voice comparison (also known as "voiceprint" or "voicegram" examination) and the currently widely practiced auditory-acoustic-phonetic approach are considered in light of this principle (these two approaches do not appear to be mutually exclusive). Approaches based on data, quantitative measurements, and statistical models are also considered in light of this principle. © 2013.

  9. A prospective randomized peri- and post-operative comparison of the minimally invasive anterolateral approach versus the lateral approach

    OpenAIRE

    Stefan Landgraeber; Henning Quitmann; Sebastian Güth; Marcel Haversath; Wojciech Kowalczyk; Andrés Kecskeméthy; Hansjörg Heep; Marcus Jäger

    2013-01-01

    There is still controversy as to whether minimally invasive total hip arthroplasty enhances the postoperative outcome. The aim of this study was to compare the outcome of patients who underwent total hip replacement through an anterolateral minimally invasive (MIS) or a conventional lateral approach (CON). We performed a randomized, prospective study of 75 patients with primary hip arthritis, who underwent hip replacement through the MIS (n=36) or CON (n=39) approach. The Western Ontario and ...

  10. Pairwise comparisons of ten porcine tissues identify differential transcriptional regulation at the gene, isoform, promoter and transcription start site level

    DEFF Research Database (Denmark)

    Farajzadeh, Leila; Hornshøj, Henrik; Momeni, Jamal

    2013-01-01

    , isoform, and transcription start site (TSS), and promoter level showed that several of the genes differed at all four levels. Interestingly, these genes were mainly annotated to the "electron transport chain" and neuronal differentiation, emphasizing that "tissue important" genes are regulated at several...

  11. Detection of the pairwise kinematic Sunyaev-Zel'dovich effect with BOSS DR11 and the Atacama Cosmology Telescope

    Energy Technology Data Exchange (ETDEWEB)

    Bernardis, F. De; Vavagiakis, E.M.; Niemack, M.D.; Gallardo, P.A. [Department of Physics, Cornell University, Ithaca, NY 14853 (United States); Aiola, S. [Department of Physics and Astronomy, University of Pittsburgh, and Pittsburgh Particle Physics, Astrophysics, and Cosmology Center, 3941 O' Hara Street, Pittsburgh, PA 15260 (United States); Battaglia, N. [Department of Astrophysical Sciences, Peyton Hall, Princeton University, Princeton, NJ 08544 (United States); Beall, J.; Becker, D.T.; Cho, H.; Fox, A. [National Institute of Standards and Technology, Boulder, CO 80305 (United States); Bond, J.R. [CITA, University of Toronto, 60 St. George St., Toronto, ON M5S 3H8 (Canada); Calabrese, E.; Dunkley, J. [Sub-Department of Astrophysics, University of Oxford, Keble Road, Oxford, OX1 3RH (United Kingdom); Coughlin, K.; Datta, R. [Department of Physics, University of Michigan Ann Arbor, MI 48109 (United States); Devlin, M. [Department of Physics and Astronomy, University of Pennsylvania, 209 South 33rd Street, Philadelphia, PA 19104 (United States); Dunner, R. [Instituto de Astrofísica and Centro de Astro-Ingeniería, Facultad de Física, Pontificia Universidad Católica de Chile, Av. Vicuña Mackenna 4860, 7820436 Macul, Santiago (Chile); Ferraro, S. [Miller Institute for Basic Research in Science, University of California, Berkeley, CA 94720 (United States); Halpern, M. [University of British Columbia, Department of Physics and Astronomy, 6224 Agricultural Road, Vancouver BC V6T 1Z1 (Canada); Hand, N., E-mail: fdeberna@gmail.com [Astronomy Department, University of California, Berkeley, CA 94720 (United States); and others

    2017-03-01

    We present a new measurement of the kinematic Sunyaev-Zel'dovich effect using data from the Atacama Cosmology Telescope (ACT) and the Baryon Oscillation Spectroscopic Survey (BOSS). Using 600 square degrees of overlapping sky area, we evaluate the mean pairwise baryon momentum associated with the positions of 50,000 bright galaxies in the BOSS DR11 Large Scale Structure catalog. A non-zero signal arises from the large-scale motions of halos containing the sample galaxies. The data fits an analytical signal model well, with the optical depth to microwave photon scattering as a free parameter determining the overall signal amplitude. We estimate the covariance matrix of the mean pairwise momentum as a function of galaxy separation, using microwave sky simulations, jackknife evaluation, and bootstrap estimates. The most conservative simulation-based errors give signal-to-noise estimates between 3.6 and 4.1 for varying galaxy luminosity cuts. We discuss how the other error determinations can lead to higher signal-to-noise values, and consider the impact of several possible systematic errors. Estimates of the optical depth from the average thermal Sunyaev-Zel'dovich signal at the sample galaxy positions are broadly consistent with those obtained from the mean pairwise momentum signal.
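    The covariance estimation step described above can be illustrated with a generic delete-one jackknife. The sketch below is a hedged, stand-alone example in which the binned statistic and the synthetic data chunks are placeholders, not the ACT/BOSS pairwise-momentum pipeline.

```python
import numpy as np

def jackknife_covariance(chunks, statistic):
    """Delete-one jackknife covariance for a vector-valued statistic.

    chunks    : list of data subsets (e.g. sky regions or galaxy sub-samples)
    statistic : callable mapping a concatenated data set to a vector of bin values
    """
    n = len(chunks)
    full = statistic(np.concatenate(chunks))
    deletions = np.array([
        statistic(np.concatenate([c for j, c in enumerate(chunks) if j != i]))
        for i in range(n)
    ])
    diff = deletions - deletions.mean(axis=0)
    cov = (n - 1) / n * diff.T @ diff             # standard jackknife prefactor
    return full, cov

# toy usage: the "statistic" is just a binned mean of a scalar versus separation
rng = np.random.default_rng(1)
chunks = [rng.normal(size=(500, 2)) for _ in range(20)]   # columns: (value, separation)

def binned_mean(data, edges=np.linspace(0.0, 3.0, 6)):
    values, sep = data[:, 0], np.abs(data[:, 1])
    idx = np.digitize(sep, edges) - 1
    return np.array([values[idx == b].mean() for b in range(len(edges) - 1)])

signal, cov = jackknife_covariance(chunks, binned_mean)
print("per-bin signal-to-noise:", signal / np.sqrt(np.diag(cov)))
```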

  12. Self-regulatory Behaviors and Approaches to Learning of Arts Students: A Comparison Between Professional Training and English Learning.

    Science.gov (United States)

    Tseng, Min-Chen; Chen, Chia-Cheng

    2017-06-01

    This study investigated the self-regulatory behaviors of arts students, namely memory strategy, goal-setting, self-evaluation, seeking assistance, environmental structuring, learning responsibility, and planning and organizing. We also explored approaches to learning, including deep approach (DA) and surface approach (SA), in a comparison between students' professional training and English learning. The participants consisted of 344 arts majors. The Academic Self-Regulation Questionnaire and the Revised Learning Process Questionnaire were adopted to examine students' self-regulatory behaviors and their approaches to learning. The results show that a positive and significant correlation was found in students' self-regulatory behaviors between professional training and English learning. The results indicated that increases in using self-regulatory behaviors in professional training were associated with increases in applying self-regulatory behaviors in learning English. Seeking assistance, self-evaluation, and planning and organizing were significant predictors for learning English. In addition, arts students used the deep approach more often than the surface approach in both their professional training and English learning. A positive correlation was found in DA, whereas a negative correlation was shown in SA between students' self-regulatory behaviors and their approaches to learning. Students with high self-regulation adopted a deep approach, and they applied the surface approach less in professional training and English learning. In addition, a SEM model confirmed that DA had a positive influence; however, SA had a negative influence on self-regulatory behaviors.

  13. Post-closure biosphere assessment modelling: comparison of complex and more stylised approaches

    Energy Technology Data Exchange (ETDEWEB)

    Walke, Russell C. [Quintessa Limited, The Hub, 14 Station Road, Henley-on-Thames (United Kingdom); Kirchner, Gerald [University of Hamburg, ZNF, Beim Schlump 83, 20144 Hamburg (Germany); Xu, Shulan; Dverstorp, Bjoern [Swedish Radiation Safety Authority, SE-171 16 Stockholm (Sweden)

    2014-07-01

    to the biosphere. Some radionuclides do not reach equilibrium within the time frame that the biosphere evolves at the Forsmark site, making associated dose factors sensitive to time scales assumed for biosphere evolution. Comparison of the results generated by both types of model demonstrates that, for areas that evolve from marine, through lakes and mires to terrestrial systems with organic soils, the approach adopted in SKB's model is conservative. However, higher dose factors are possible when potential for long-term irrigation with shallow groundwater is considered. Surveys of groundwater wells in the Forsmark area today show that some shallow groundwater is used to water plants, which demonstrates that small scale irrigation from such sources cannot be ruled out for present-day or warmer climate states. Complex models use more of the available site-specific information and contribute to an understanding of complex process interactions and effects of system heterogeneity. The study shows, however, that simple 'reference' biosphere models enable processes that control potential radionuclide impacts to be identified, taking into account climate variability. They help to build understanding and confidence in more complex modelling approaches, quantify the conservatisms involved and remain a valuable tool for nuclear waste disposal licensing procedures. (authors)

  14. Within-culture variations of uniqueness: towards an integrative approach based on social status, gender, life contexts, and interpersonal comparison.

    Science.gov (United States)

    Causse, Elsa; Félonneau, Marie-Line

    2014-01-01

    Research on uniqueness is widely focused on cross-cultural comparisons and tends to postulate a certain form of within-culture homogeneity. Taking the opposite course of this classic posture, we aimed at testing an integrative approach enabling the study of within-culture variations of uniqueness. This approach considered different sources of variation: social status, gender, life contexts, and interpersonal comparison. Four hundred seventy-nine participants completed a measure based on descriptions of "self" and "other." Results showed important variations of uniqueness. An interaction between social status and life contexts revealed the expression of uniqueness in the low-status group. This study highlights the complexity of uniqueness that appears to be related to both cultural ideology and social hierarchy.

  15. Time-History Seismic Analysis of Masonry Buildings: A Comparison between Two Non-Linear Modelling Approaches

    Directory of Open Access Journals (Sweden)

    Michele Betti

    2015-05-01

    Full Text Available The paper presents a comparison between two numerical modelling approaches employed to investigate the seismic behavior of unreinforced masonry buildings with flexible diaphragms. The comparison is performed analyzing a two-story prototype tested on a shaking table at the CNR-ENEA research center of Casaccia (Italy). The first numerical model was built by using the finite element (FE) technique, while the second one was built by a simplified macro-element (ME) approach. Both models were employed to perform non-linear dynamic analyses, integrating the equations of motion by step-by-step procedures. The shaking table tests were simulated to analyze the behavior of the prototype from the initial elastic state until the development of extensive damage. The main results of the analyses are discussed and critically compared in terms of engineering parameters, such as accelerations, displacements and base shears. The effectiveness of both models within the investigated typology of buildings is then evaluated in depth.

  16. High-throughput detection of prostate cancer in histological sections using probabilistic pairwise Markov models.

    Science.gov (United States)

    Monaco, James P; Tomaszewski, John E; Feldman, Michael D; Hagemann, Ian; Moradi, Mehdi; Mousavi, Parvin; Boag, Alexander; Davidson, Chris; Abolmaesumi, Purang; Madabhushi, Anant

    2010-08-01

    In this paper we present a high-throughput system for detecting regions of carcinoma of the prostate (CaP) in HSs from radical prostatectomies (RPs) using probabilistic pairwise Markov models (PPMMs), a novel type of Markov random field (MRF). At diagnostic resolution a digitized HS can contain 80K × 70K pixels - far too many for current automated Gleason grading algorithms to process. However, grading can be separated into two distinct steps: (1) detecting cancerous regions and (2) then grading these regions. The detection step does not require diagnostic resolution and can be performed much more quickly. Thus, we introduce a CaP detection system capable of analyzing an entire digitized whole-mount HS (2 × 1.75 cm²) in under three minutes (on a desktop computer) while achieving a CaP detection sensitivity and specificity of 0.87 and 0.90, respectively. We obtain this high throughput by tailoring the system to analyze the HSs at low resolution (8 µm per pixel). This motivates the following algorithm: (Step 1) glands are segmented, (Step 2) the segmented glands are classified as malignant or benign, and (Step 3) the malignant glands are consolidated into continuous regions. The classification of individual glands leverages two features: gland size and the tendency for proximate glands to share the same class. The latter feature describes a spatial dependency which we model using a Markov prior. Typically, Markov priors are expressed as the product of potential functions. Unfortunately, potential functions are mathematical abstractions, and constructing priors through their selection becomes an ad hoc procedure, resulting in simplistic models such as the Potts. Addressing this problem, we introduce PPMMs which formulate priors in terms of probability density functions, allowing the creation of more sophisticated models. To demonstrate the efficacy of our CaP detection system and assess the advantages of using a PPMM prior instead of the Potts, we alternately
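    Step 2 of the algorithm above, classifying glands while encouraging proximate glands to share a label, can be illustrated with a much simpler Markov-prior scheme. The sketch below uses a Potts-style coupling and iterated conditional modes (ICM) on a toy gland graph; the PPMM of the paper replaces the Potts potential with learned probability densities, so the features, neighbourhood radius and parameters here are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(2)
n_glands = 200
size = rng.lognormal(mean=0.0, sigma=0.5, size=n_glands)   # toy gland-size feature
xy = rng.uniform(0.0, 1.0, size=(n_glands, 2))             # toy gland centroids

# neighbourhood graph: glands closer than a radius are "proximate"
dist = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
neighbours = [np.where((dist[i] < 0.1) & (np.arange(n_glands) != i))[0]
              for i in range(n_glands)]

def log_likelihood(label, s):
    # assumption for illustration: malignant glands (1) tend to be smaller than benign (0)
    mu = 0.8 if label == 0 else -0.3
    return -0.5 * (np.log(s) - mu) ** 2

beta = 0.75                                                 # Potts coupling strength
labels = np.array([int(log_likelihood(1, s) > log_likelihood(0, s)) for s in size])

for _ in range(5):                                          # ICM sweeps
    for i in range(n_glands):
        scores = [log_likelihood(c, size[i]) + beta * np.sum(labels[neighbours[i]] == c)
                  for c in (0, 1)]
        labels[i] = int(np.argmax(scores))

print("glands labelled malignant after smoothing:", int(labels.sum()))
```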

  17. Ranking alternatives based on imprecise multi-criteria data and pairwise overlap dominance relations

    DEFF Research Database (Denmark)

    Franco de los Rios, Camilo Andres; Hougaard, Jens Leth; Nielsen, Kurt

    illustrative example is given for comparison with standard methods like PROMETHEE. The proposed methodology takes into account the risk attitudes of decision makers, organizing the alternatives and ranking them according to their relevance. The whole interactive decision support allows understanding...

  18. Marketing through Social Media : Case: Comparison of Social Media Marketing Approaches of B2C Companies for Company X

    OpenAIRE

    Rantapelkonen Ahlberg, Jaana

    2010-01-01

    Rantapelkonen Ahlberg, Jaana. 2010. Social Media Marketing. Case: Comparison of Social Media Marketing Approaches of B2C Companies for Company X. Master’s Thesis. Kemi-Tornio University of Applied Sciences. Business and Culture. Pages 42 (74). The objective of this thesis is to provide insights on how Company X can use Social Media as a marketing and branding tool in consumer marketing in the Swedish market. More specifically, this study attempts to define what kinds of social media are u...

  19. Which percentile-based approach should be preferred for calculating normalized citation impact values? An empirical comparison of five approaches including a newly developed citation-rank approach (P100)

    NARCIS (Netherlands)

    Bornmann, L.; Leydesdorff, L.; Wang, J.

    2013-01-01

    For comparisons of citation impacts across fields and over time, bibliometricians normalize the observed citation counts with reference to an expected citation value. Percentile-based approaches have been proposed as a non-parametric alternative to parametric central-tendency statistics. Percentiles
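    The basic percentile idea referred to above can be stated in a few lines of code. The sketch below is a generic, hedged illustration of converting a citation count into a percentile rank within a field-and-year reference set; it does not implement P100 or any other specific variant from the paper.

```python
from bisect import bisect_right

def citation_percentile(citations, reference_counts):
    """Percentage of papers in the reference set with strictly fewer citations.

    Assumes integer citation counts; the reference set would be all papers from
    the same field and publication year.
    """
    ranked = sorted(reference_counts)
    below = bisect_right(ranked, citations - 1)
    return 100.0 * below / len(ranked)

# toy reference set for one field-year combination (hypothetical numbers)
reference = [0, 0, 1, 2, 2, 3, 5, 8, 13, 40]
print(citation_percentile(5, reference))   # 60.0 -> outperforms 60% of the reference set
```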

  20. Comparison of different statistical modelling approaches for deriving spatial air temperature patterns in an urban environment

    Science.gov (United States)

    Straub, Annette; Beck, Christoph; Breitner, Susanne; Cyrys, Josef; Geruschkat, Uta; Jacobeit, Jucundus; Kühlbach, Benjamin; Kusch, Thomas; Richter, Katja; Schneider, Alexandra; Umminger, Robin; Wolf, Kathrin

    2017-04-01

    Frequently, spatial variations of air temperature of considerable magnitude occur within urban areas. They correspond to varying land use/land cover characteristics and vary with season, time of day and synoptic conditions. These temperature differences have an impact on human health and comfort directly by inducing thermal stress as well as indirectly by means of affecting air quality. Therefore, knowledge of the spatial patterns of air temperature in cities and the factors causing them is of great importance, e.g. for urban planners. A multitude of studies have shown statistical modelling to be a suitable tool for generating spatial air temperature patterns. This contribution presents a comparison of different statistical modelling approaches for deriving spatial air temperature patterns in the urban environment of Augsburg, Southern Germany. In Augsburg there exists a measurement network for air temperature and humidity currently comprising 48 stations in the city and its rural surroundings (corporately operated by the Institute of Epidemiology II, Helmholtz Zentrum München, German Research Center for Environmental Health and the Institute of Geography, University of Augsburg). Using different datasets for land surface characteristics (Open Street Map, Urban Atlas), area percentages of different types of land cover were calculated for quadratic buffer zones of different size (25, 50, 100, 250, 500 m) around the stations as well as for source regions of advective air flow and used as predictors together with additional variables such as sky view factor, ground level and distance from the city centre. Multiple Linear Regression and Random Forest models for different situations taking into account season, time of day and weather condition were applied utilizing selected subsets of these predictors in order to model spatial distributions of mean hourly and daily air temperature deviations from a rural reference station. Furthermore, the different model setups were
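    The kind of model comparison described above can be sketched generically. The example below fits multiple linear regression and a random forest to synthetic station predictors and temperature deviations and compares them by cross-validation; the data, feature meanings and dimensions are placeholders, not the Augsburg network data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_stations, n_hours = 48, 50
# per-station predictors, e.g. % impervious cover in a buffer, % vegetation, sky view factor
X_station = rng.uniform(0.0, 1.0, size=(n_stations, 3))
X = np.repeat(X_station, n_hours, axis=0)
# synthetic "truth": warmer with impervious cover, cooler with vegetation, mildly non-linear
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 2] ** 2 + rng.normal(0.0, 0.3, len(X))

for name, model in [("MLR", LinearRegression()),
                    ("Random Forest", RandomForestRegressor(n_estimators=200, random_state=0))]:
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean cross-validated R^2 = {scores.mean():.2f}")
```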

  1. An empirical comparison of different approaches for combining multimodal neuroimaging data with support vector machine.

    Science.gov (United States)

    Pettersson-Yeo, William; Benetti, Stefania; Marquand, Andre F; Joules, Richard; Catani, Marco; Williams, Steve C R; Allen, Paul; McGuire, Philip; Mechelli, Andrea

    2014-01-01

    In the pursuit of clinical utility, neuroimaging researchers of psychiatric and neurological illness are increasingly using analyses, such as support vector machine, that allow inference at the single-subject level. Recent studies employing single-modality data, however, suggest that classification accuracies must be improved for such utility to be realized. One possible solution is to integrate different data types to provide a single combined output classification; either by generating a single decision function based on an integrated kernel matrix, or, by creating an ensemble of multiple single modality classifiers and integrating their predictions. Here, we describe four integrative approaches: (1) an un-weighted sum of kernels, (2) multi-kernel learning, (3) prediction averaging, and (4) majority voting, and compare their ability to enhance classification accuracy relative to the best single-modality classification accuracy. We achieve this by integrating structural, functional, and diffusion tensor magnetic resonance imaging data, in order to compare ultra-high risk (n = 19), first episode psychosis (n = 19) and healthy control subjects (n = 23). Our results show that (i) whilst integration can enhance classification accuracy by up to 13%, the frequency of such instances may be limited, (ii) where classification can be enhanced, simple methods may yield greater increases relative to more computationally complex alternatives, and, (iii) the potential for classification enhancement is highly influenced by the specific diagnostic comparison under consideration. In conclusion, our findings suggest that for moderately sized clinical neuroimaging datasets, combining different imaging modalities in a data-driven manner is no "magic bullet" for increasing classification accuracy. However, it remains possible that this conclusion is dependent on the use of neuroimaging modalities that had little, or no, complementary information to offer one another, and that the

  2. A comparison of conventional local approach and the short crack approach to fatigue crack initiation at a notch

    Energy Technology Data Exchange (ETDEWEB)

    Ranganathan, Narayanaswami; Leroy, Rene; Tougui, Abdellah [Laboratoire de Mecanique et Rheologie, Universite Francois Rabelais de Tours, Polytech Tours, Departement Mecanique et Conception de Systemes, Tours (France)

    2009-09-15

    Methods to estimate fatigue crack initiation life at a notch tip are compared. The methods used determine the strain amplitudes at the notch tip using Neuber's or Glinka's approximation. In conventional approaches, equivalent-damage levels are determined, using appropriate strain-life relationships coupled with damage-summation models. In the short-crack approach, a crack-like defect is assumed to exist at the notch tip. It is shown that the short-crack concept can be successfully applied to predict crack-initiation behavior at a notch. Model predictions are compared with carefully designed experiments. It is shown that model predictions are very close to experimentally measured lives under an aircraft-wing loading spectrum. (Abstract Copyright [2009], Wiley Periodicals, Inc.)

  3. A Prospective Randomized Peri- and Post-Operative Comparison of the Minimally Invasive Anterolateral Approach Versus the Lateral Approach

    Science.gov (United States)

    Landgraeber, Stefan; Quitmann, Henning; Güth, Sebastian; Haversath, Marcel; Kowalczyk, Wojciech; Kecskeméthy, Andrés; Heep, Hansjörg; Jäger, Marcus

    2013-01-01

    There is still controversy as to whether minimally invasive total hip arthroplasty enhances the postoperative outcome. The aim of this study was to compare the outcome of patients who underwent total hip replacement through an anterolateral minimally invasive (MIS) or a conventional lateral approach (CON). We performed a randomized, prospective study of 75 patients with primary hip arthritis, who underwent hip replacement through the MIS (n=36) or CON (n=39) approach. The Western Ontario and McMaster Universities Osteoarthritis Index and Harris Hip score (HHS) were evaluated at frequent intervals during the early postoperative follow-up period and then after 3.5 years. Pain sensations were recorded. Serological and radiological analyses were performed. In the MIS group the patients had smaller skin incisions and there was a significantly lower rate of patients with a positive Trendelenburg sign after six weeks postoperatively. After six weeks the HHS was 6.85 points higher in the MIS group (P=0.045). But calculating the mean difference between the baseline and the six weeks HHS we evaluated no significant differences. Blood loss was greater and the duration of surgery was longer in the MIS group. The other parameters, especially after the twelfth week, did not differ significantly. Radiographs showed the inclination of the acetabular component to be significantly higher in the MIS group, but on average it was within the same permitted tolerance range as in the CON group. Both approaches are adequate for hip replacement. Given the data, there appears to be no significant long term advantage to the MIS approach, as described in this study. PMID:24191179

  4. A prospective randomized peri- and post-operative comparison of the minimally invasive anterolateral approach versus the lateral approach

    Directory of Open Access Journals (Sweden)

    Stefan Landgraeber

    2013-07-01

    Full Text Available There is still controversy as to whether minimally invasive total hip arthroplasty enhances the postoperative outcome. The aim of this study was to compare the outcome of patients who underwent total hip replacement through an anterolateral minimally invasive (MIS) or a conventional lateral approach (CON). We performed a randomized, prospective study of 75 patients with primary hip arthritis, who underwent hip replacement through the MIS (n=36) or CON (n=39) approach. The Western Ontario and McMaster Universities Osteoarthritis Index and Harris Hip score (HHS) were evaluated at frequent intervals during the early postoperative follow-up period and then after 3.5 years. Pain sensations were recorded. Serological and radiological analyses were performed. In the MIS group the patients had smaller skin incisions and there was a significantly lower rate of patients with a positive Trendelenburg sign after six weeks postoperatively. After six weeks the HHS was 6.85 points higher in the MIS group (P=0.045). But calculating the mean difference between the baseline and the six weeks HHS we evaluated no significant differences. Blood loss was greater and the duration of surgery was longer in the MIS group. The other parameters, especially after the twelfth week, did not differ significantly. Radiographs showed the inclination of the acetabular component to be significantly higher in the MIS group, but on average it was within the same permitted tolerance range as in the CON group. Both approaches are adequate for hip replacement. Given the data, there appears to be no significant long term advantage to the MIS approach, as described in this study.

  5. Simulation-based comparison of two approaches frequently used for dynamic contrast-enhanced MRI

    International Nuclear Information System (INIS)

    Zwick, Stefan; Brix, Gunnar; Tofts, Paul S.; Strecker, Ralph; Kopp-Schneider, Annette; Laue, Hendrik; Semmler, Wolfhard; Kiessling, Fabian

    2010-01-01

    The purpose was to compare two approaches for the acquisition and analysis of dynamic-contrast-enhanced MRI data with respect to differences in the modelling of the arterial input-function (AIF), the dependency of the model parameters on physiological parameters and their numerical stability. Eight hundred tissue concentration curves were simulated for different combinations of perfusion, permeability, interstitial volume and plasma volume based on two measured AIFs and analysed according to the two commonly used approaches. The transfer constants (Approach 1) Ktrans and (Approach 2) kep were correlated with all tissue parameters. Ktrans showed a stronger dependency on perfusion, and kep on permeability. The volume parameters (Approach 1) ve and (Approach 2) A were mainly influenced by the interstitial and plasma volume. Both approaches allow only rough characterisation of tissue microcirculation and microvasculature. Approach 2 seems to be somewhat more robust than 1, mainly due to the different methods of CA administration. (orig.)

  6. A comparison of two sampling approaches for assessing the urban forest canopy cover from aerial photography.

    Science.gov (United States)

    Ucar Zennure; Pete Bettinger; Krista Merry; Jacek Siry; J.M. Bowker

    2016-01-01

    Two different sampling approaches for estimating urban tree canopy cover were applied to two medium-sized cities in the United States, in conjunction with two freely available remotely sensed imagery products. A random point-based sampling approach, which involved 1000 sample points, was compared against a plot/grid sampling (cluster sampling) approach that involved a...

  7. A Comparison of Jungian, Person-Centered, and Gestalt Approaches to Personal Growth Groups.

    Science.gov (United States)

    Day, Bryon; Matthes, William

    1992-01-01

    Compares Jungian approach to personal growth groups to Person-centered and Gestalt approaches. Notes similarities, though Jungian approach adds dimension of "cognitive map" not found in other two. Notes that cognitive map uses constructs from Jung's theory of individuation process, hypothesizing that integration of these constructs into…

  8. Comparison of mandibular stability after SSRO with surgery-first approach versus conventional ortho-first approach.

    Science.gov (United States)

    Akamatsu, Tadashi; Hanai, Ushio; Miyasaka, Muneo; Muramatsu, Hiroyuki; Yamamoto, Shou

    2016-01-01

    Postoperative mandibular stability in the surgery-first (SF) approach and ortho-first (OF) approach in orthognathic surgery was retrospectively assessed using the lateral cephalo X-P in 38 patients with skeletal Angle Class III malocclusion who underwent sagittal split ramus osteotomy (SSRO). The postoperative mandibular relapse of the two groups observed from T1 (2 weeks after the surgery) to T2 (for the OF group, a year after surgery; for the SF group, the day orthodontic treatment was completed) was compared. The mean (SD) horizontal relapse at pogonion was 0.86 (0.92) mm in the forward direction in the SF group and 0.90 (1.09) mm in the forward direction in the OF group. No significant difference was found in the amount of horizontal movement between the two groups. On the other hand, the mean (SD) vertical relapse at pogonion was 1.59 (2.91) mm in the downward direction in the SF group and 0.14 (1.30) mm in the upward direction in the OF group, showing a significant difference in the amount of movement between the two groups. The degree of completion of the occlusion at T2 in the SF group was compared with that in the OF group by measuring OB, OJ, L1-occlusal plane angle, and interincisal angle. No significant difference was found between the two groups and the post-treatment occlusion was clinically favourable. Although the SF approach has several advantages for patients, the method of operation and fixation should be selected carefully to maintain postoperative mandibular stability.

  9. A comparison of trans-cranial and trans-sphenoidal approaches for vision improvement due to pitutary adenomas

    Directory of Open Access Journals (Sweden)

    Fakhr Tabatabai SA

    1997-07-01

    Full Text Available To improve visual disturbance, optic nerve decompression can be performed via transcranial or trans-sphenoidal approaches. Although the surgical exposure in the transcranial approach is favourable, the optic nerve's presence in the field may make it vulnerable to damage. Of eighty patients with different types of pituitary adenomas, 35 cases with medium-sized (1-3 cm) tumors have been studied in a randomized clinical trial during a three year period, to compare the applicability of these approaches. A short hospital stay with better visual outcome was observed in fifteen trans-sphenoidal cases in comparison to 20 trans-cranial cases, while the preoperative visual status and underlying disorders were similar in both groups. Decompressing the optic apparatus trans-sphenoidally seems beneficial where there are no contraindications for the procedure in medium-sized pituitary adenomas.

  10. Model-Assisted Estimation of Tropical Forest Biomass Change: A Comparison of Approaches

    Directory of Open Access Journals (Sweden)

    Nikolai Knapp

    2018-05-01

    Full Text Available Monitoring of changes in forest biomass requires accurate transfer functions between remote sensing-derived changes in canopy height (ΔH) and the actual changes in aboveground biomass (ΔAGB). Different approaches can be used to accomplish this task: direct approaches link ΔH directly to ΔAGB, while indirect approaches are based on deriving AGB stock estimates for two points in time and calculating the difference. In some studies, direct approaches led to more accurate estimations, while, in others, indirect approaches led to more accurate estimations. It is unknown how each approach performs under different conditions and over the full range of possible changes. Here, we used a forest model (FORMIND) to generate a large dataset (>28,000 ha) of natural and disturbed forest stands over time. Remote sensing of forest height was simulated on these stands to derive canopy height models for each time step. Three approaches for estimating ΔAGB were compared: (i) the direct approach; (ii) the indirect approach; and (iii) an enhanced direct approach (dir+tex), using ΔH in combination with canopy texture. Total prediction accuracies of the three approaches measured as root mean squared errors (RMSE) were RMSEdirect = 18.7 t ha−1, RMSEindirect = 12.6 t ha−1 and RMSEdir+tex = 12.4 t ha−1. Further analyses revealed height-dependent biases in the ΔAGB estimates of the direct approach, which did not occur with the other approaches. Finally, the three approaches were applied on radar-derived (TanDEM-X) canopy height changes on Barro Colorado Island (Panama). The study demonstrates the potential of forest modeling for improving the interpretation of changes observed in remote sensing data and for comparing different methodologies.

  11. Comparison between bottom-up and top-down approaches in the estimation of measurement uncertainty.

    Science.gov (United States)

    Lee, Jun Hyung; Choi, Jee-Hye; Youn, Jae Saeng; Cha, Young Joo; Song, Woonheung; Park, Ae Ja

    2015-06-01

    Measurement uncertainty is a metrological concept to quantify the variability of measurement results. There are two approaches to estimate measurement uncertainty. In this study, we sought to provide practical and detailed examples of the two approaches and compare the bottom-up and top-down approaches to estimating measurement uncertainty. We estimated measurement uncertainty of the concentration of glucose according to CLSI EP29-A guideline. Two different approaches were used. First, we performed a bottom-up approach. We identified the sources of uncertainty and made an uncertainty budget and assessed the measurement functions. We determined the uncertainties of each element and combined them. Second, we performed a top-down approach using internal quality control (IQC) data for 6 months. Then, we estimated and corrected systematic bias using certified reference material of glucose (NIST SRM 965b). The expanded uncertainties at the low glucose concentration (5.57 mmol/L) by the bottom-up approach and top-down approaches were ±0.18 mmol/L and ±0.17 mmol/L, respectively (all k=2). Those at the high glucose concentration (12.77 mmol/L) by the bottom-up and top-down approaches were ±0.34 mmol/L and ±0.36 mmol/L, respectively (all k=2). We presented practical and detailed examples for estimating measurement uncertainty by the two approaches. The uncertainties by the bottom-up approach were quite similar to those by the top-down approach. Thus, we demonstrated that the two approaches were approximately equivalent and interchangeable and concluded that clinical laboratories could determine measurement uncertainty by the simpler top-down approach.
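    The two routes to an expanded uncertainty can be reduced to a short calculation. The sketch below uses invented component values rather than the study's data: the bottom-up route combines independent standard uncertainties in quadrature, the top-down route combines long-term IQC imprecision with the uncertainty of the bias correction, and both are expanded with a coverage factor k = 2.

```python
import math

# bottom-up: hypothetical uncertainty budget for a glucose result (all values in mmol/L)
u_components = {"calibrator": 0.04, "repeatability": 0.06, "volume_temperature": 0.03}
u_bottom_up = math.sqrt(sum(u ** 2 for u in u_components.values()))

# top-down: long-term IQC imprecision plus the uncertainty of the bias correction (CRM)
sd_iqc = 0.07          # 6-month IQC standard deviation, hypothetical
u_bias = 0.03          # standard uncertainty of the bias estimate, hypothetical
u_top_down = math.sqrt(sd_iqc ** 2 + u_bias ** 2)

k = 2                  # coverage factor for approximately 95% coverage
print(f"bottom-up expanded uncertainty: +/-{k * u_bottom_up:.2f} mmol/L")
print(f"top-down expanded uncertainty:  +/-{k * u_top_down:.2f} mmol/L")
```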

  12. Anatomic comparison of the endonasal and transpetrosal approaches for interpeduncular fossa access.

    Science.gov (United States)

    Oyama, Kenichi; Prevedello, Daniel M; Ditzel Filho, Leo F S; Muto, Jun; Gun, Ramazan; Kerr, Edward E; Otto, Bradley A; Carrau, Ricardo L

    2014-01-01

    The interpeduncular cistern, including the retrochiasmatic area, is one of the most challenging regions to approach surgically. Various conventional approaches to this region have been described; however, only the endoscopic endonasal approach via the dorsum sellae and the transpetrosal approach provide ideal exposure with a caudal-cranial view. The authors compared these 2 approaches to clarify their limitations and intrinsic advantages for access to the interpeduncular cistern. Four fresh cadaver heads were studied. An endoscopic endonasal approach via the dorsum sellae with pituitary transposition was performed to expose the interpeduncular cistern. A transpetrosal approach was performed bilaterally, combining a retrolabyrinthine presigmoid and a subtemporal transtentorium approach. Water balloons were used to simulate space-occupying lesions. "Water balloon tumors" (WBTs), inflated to 2 different volumes (0.5 and 1.0 ml), were placed in the interpeduncular cistern to compare visualization using the 2 approaches. The distances between cranial nerve (CN) III and the posterior communicating artery (PCoA) and between CN III and the edge of the tentorium were measured through a transpetrosal approach to determine the width of surgical corridors using 0- to 6-ml WBTs in the interpeduncular cistern (n = 8). Both approaches provided adequate exposure of the interpeduncular cistern. The endoscopic endonasal approach yielded a good visualization of both CN III and the PCoA when a WBT was in the interpeduncular cistern. Visualization of the contralateral anatomical structures was impaired in the transpetrosal approach. The surgical corridor to the interpeduncular cistern via the transpetrosal approach was narrow when the WBT volume was small, but its width increased as the WBT volume increased. There was a statistically significant increase in the maximum distance between CN III and the PCoA (p = 0.047) and between CN III and the tentorium (p = 0.029) when the WBT volume

  13. Comparisons of watershed sulfur budgets in southeast Canada and northeast US: New approaches and implications

    Science.gov (United States)

    Mitchell, M.J.; Lovett, G.; Bailey, S.; Beall, F.; Burns, D.; Buso, D.; Clair, T.A.; Courchesne, F.; Duchesne, L.; Eimers, C.; Fernandez, I.; Houle, D.; Jeffries, D.S.; Likens, G.E.; Moran, M.D.; Rogers, C.; Schwede, D.; Shanley, J.; Weathers, K.C.; Vet, R.

    2011-01-01

    concentrations and deposition predictions with the predictions of two continental-scale air quality models, the Community Multiscale Air Quality (CMAQ) model and A Unified Regional Air-quality Modeling System (AURAMS) that utilize complete inventories of emissions and chemical budgets. The results of this comparison indicated that the predictive relationship provides an accurate representation of SO2 concentrations and S deposition for the region that is generally consistent with these models, and thus provides confidence that our approach could be used to develop accurate watershed S budgets for these 15 sites. Most watersheds showed large net losses of SO42- on an annual basis, and the watershed mass balances were grouped into five categories based on the relative value of mean annual net losses or net gains. The net annual fluxes of SO42- showed a strong relationship with hydrology; the largest net annual negative fluxes were associated with years of greatest precipitation amount and highest discharge. The important role of catchment hydrology on S budgets suggests implications for future predicted climate change as it affects patterns of precipitation and drought. The sensitivity of S budgets is likely to be greatest in watersheds with the greatest wetland area, which are particularly sensitive to drying and wetting cycles. A small number of the watersheds in this analysis were shown to have substantial S sources from mineral weathering, but most showed evidence of an internal source of SO42-, which is likely from the mineralization of organic S stored from decades of increased S deposition. Mobilization of this internal S appears to contribute about 1-6 kg S ha-1 year-1 to stream fluxes at these sites and is affecting the rate and extent of recovery from acidification as S deposition rates have declined in recent years. This internal S source should be considered when developing critical deposition loads that will promote ecosystem recovery from acidification and the depl

  14. Breast and prostate cancer productivity costs: a comparison of the human capital approach and the friction cost approach.

    Science.gov (United States)

    Hanly, Paul; Timmons, Aileen; Walsh, Paul M; Sharp, Linda

    2012-05-01

    Productivity costs constitute a substantial proportion of the total societal costs associated with cancer. We compared the results of applying two different analytical methods--the traditional human capital approach (HCA) and the emerging friction cost approach (FCA)--to estimate breast and prostate cancer productivity costs in Ireland in 2008. Data from a survey of breast and prostate cancer patients were combined with population-level survival estimates and a national wage data set to calculate costs of temporary disability (cancer-related work absence), permanent disability (workforce departure, reduced working hours), and premature mortality. For breast cancer, productivity costs per person using the HCA were € 193,425 and those per person using the FCA were € 8,103; for prostate cancer, the comparable estimates were € 109,154 and € 8,205, respectively. The HCA generated higher costs for younger patients (breast cancer) because of greater lifetime earning potential. In contrast, the FCA resulted in higher productivity costs for older male patients (prostate cancer) commensurate with higher earning capacity over a shorter time period. Reduced working hours postcancer was a key driver of total HCA productivity costs. HCA costs were sensitive to assumptions about discount and growth rates. FCA costs were sensitive to assumptions about the friction period. The magnitude of the estimates obtained in this study illustrates the importance of including productivity costs when considering the economic impact of illness. Vastly different results emerge from the application of the HCA and the FCA, and this finding emphasizes the importance of choosing the study perspective carefully and being explicit about assumptions that underpin the methods. Copyright © 2012 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
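    The gap between the two methods follows directly from how each values lost work time. The toy calculation below, with invented wage and duration figures rather than the Irish estimates, shows why the human capital approach yields costs orders of magnitude larger than the friction cost approach.

```python
# all figures below are invented for illustration (EUR)
annual_wage = 35000.0
discount_rate = 0.04
years_to_retirement = 20      # e.g. a 45-year-old permanently leaving the workforce
friction_months = 3           # assumed time needed to replace the worker

# human capital approach: present value of all earnings lost until retirement
hca = sum(annual_wage / (1 + discount_rate) ** t for t in range(years_to_retirement))

# friction cost approach: wages over the friction period only
fca = annual_wage * friction_months / 12

print(f"HCA productivity cost: EUR {hca:,.0f}")   # hundreds of thousands
print(f"FCA productivity cost: EUR {fca:,.0f}")   # thousands
```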

  15. Multi-country comparisons of energy performance: The index decomposition analysis approach

    International Nuclear Information System (INIS)

    Ang, B.W.; Xu, X.Y.; Su, Bin

    2015-01-01

    Index decomposition analysis (IDA) is a popular tool for studying changes in energy consumption over time in a country or region. This specific application of IDA, which may be called temporal decomposition analysis, has been extended by researchers and analysts to study variations in energy consumption or energy efficiency between countries or regions, i.e. spatial decomposition analysis. In spatial decomposition analysis, the main objective is often to understand the relative contributions of overall activity level, activity structure, and energy intensity in explaining differences in total energy consumption between two countries or regions. We review the literature of spatial decomposition analysis, investigate the methodological issues, and propose a spatial decomposition analysis framework for multi-region comparisons. A key feature of the proposed framework is that it passes the circularity test and provides consistent results for multi-region comparisons. A case study in which 30 regions in China are compared and ranked based on their performance in energy consumption is presented. - Highlights: • We conducted cross-regional comparisons of energy consumption using IDA. • We proposed two criteria for IDA method selection in spatial decomposition analysis. • We proposed a new model for regional comparison that passes the circularity test. • Features of the new model are illustrated using the data of 30 regions in China
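    For two regions, the underlying IDA arithmetic can be shown with a standard LMDI-type decomposition of the identity E = sum_i Q * S_i * I_i into activity, structure and intensity effects. The sketch below uses invented sectoral data; note that the paper's contribution is a multi-region model that passes the circularity test, which this simple two-region example does not implement.

```python
import math

def logmean(a, b):
    return a if a == b else (a - b) / (math.log(a) - math.log(b))

# invented sectoral data for two regions: total activity Q, sector shares S, intensities I
regions = {
    "A": {"Q": 120.0, "S": [0.5, 0.3, 0.2], "I": [2.0, 1.2, 0.8]},
    "B": {"Q": 100.0, "S": [0.4, 0.4, 0.2], "I": [2.4, 1.0, 0.9]},
}
E = {r: [d["Q"] * s * i for s, i in zip(d["S"], d["I"])] for r, d in regions.items()}

effects = {"activity": 0.0, "structure": 0.0, "intensity": 0.0}
for i in range(3):
    w = logmean(E["A"][i], E["B"][i])
    effects["activity"] += w * math.log(regions["A"]["Q"] / regions["B"]["Q"])
    effects["structure"] += w * math.log(regions["A"]["S"][i] / regions["B"]["S"][i])
    effects["intensity"] += w * math.log(regions["A"]["I"][i] / regions["B"]["I"][i])

total_difference = sum(E["A"]) - sum(E["B"])
print(effects)
print("sum of effects:", round(sum(effects.values()), 6),
      "total difference:", round(total_difference, 6))
```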

  16. Are Noncovalent Interactions an Achilles Heel in Chemistry Education? A Comparison of Instructional Approaches

    Science.gov (United States)

    Williams, Leah C.; Underwood, Sonia M.; Klymkowsky, Michael W.; Cooper, Melanie M.

    2015-01-01

    Intermolecular forces (IMFs), or more broadly, noncovalent interactions either within or between molecules, are central to an understanding of a wide range of chemical and biological phenomena. In this study, we present a multiyear, multi-institutional, longitudinal comparison of how students enrolled in traditional general chemistry courses and…

  17. An efficient genetic algorithm for structural RNA pairwise alignment and its application to non-coding RNA discovery in yeast

    Directory of Open Access Journals (Sweden)

    Taneda Akito

    2008-12-01

    Full Text Available Abstract Background Aligning RNA sequences with low sequence identity has been a challenging problem since such a computation essentially needs an algorithm with high complexities for taking structural conservation into account. Although many sophisticated algorithms for the purpose have been proposed to date, further improvement in efficiency is necessary to accelerate its large-scale applications including non-coding RNA (ncRNA) discovery. Results We developed a new genetic algorithm, Cofolga2, for simultaneously computing pairwise RNA sequence alignment and consensus folding, and benchmarked it using BRAliBase 2.1. The benchmark results showed that our new algorithm is accurate and efficient in both time and memory usage. Then, combining with the originally trained SVM, we applied the new algorithm to novel ncRNA discovery where we compared the S. cerevisiae genome with six related genomes in a pairwise manner. By focusing our search on the relatively short regions (50 bp to 2,000 bp) sandwiched by conserved sequences, we successfully predicted 714 intergenic and 1,311 sense or antisense ncRNA candidates, which were found in the pairwise alignments with stable consensus secondary structure and low sequence identity (≤ 50%). By comparing with the previous predictions, we found that > 92% of the candidates are novel candidates. The estimated rate of false positives in the predicted candidates is 51%. Twenty-five percent of the intergenic candidates have support for expression in the cell, i.e. their genomic positions overlap those of the experimentally determined transcripts in the literature. By manual inspection of the results, moreover, we obtained four multiple alignments with low sequence identity which reveal consensus structures shared by three species/sequences. Conclusion The present method gives an efficient tool complementary to sequence-alignment-based ncRNA finders.

  18. An experimental detrending approach to attributing change of pan evaporation in comparison with the traditional partial differential method

    Science.gov (United States)

    Wang, Tingting; Sun, Fubao; Xia, Jun; Liu, Wenbin; Sang, Yanfang

    2017-04-01

    In predicting how droughts and hydrological cycles would change in a warming climate, change of atmospheric evaporative demand measured by pan evaporation (Epan) is one crucial element to be understood. Over the last decade, the derived partial differential (PD) form of the PenPan equation has been the prevailing approach for attributing changes in Epan worldwide. However, the independency among climatic variables required by the PD approach cannot be met using long term observations. Here we designed a series of numerical experiments to attribute changes of Epan over China by detrending each climatic variable, i.e., an experimental detrending approach, to address the inter-correlation among climate variables, and made a comparison with the traditional PD method. The results show that the detrending approach is superior not only for a complicated system with multiple variables and a mixed algorithm, such as the aerodynamic component (Ep,A) and Epan, but also for a simple case such as the radiative component (Ep,R), when compared with the traditional PD method. The major reason for this is the strong and significant inter-correlation of the input meteorological forcing. Very similar attribution results were achieved with the detrending approach and the PD method after eliminating the inter-correlation of the input through a randomization approach. The contribution of Rh and Ta to net radiation, and thus to Ep,R, which is overlooked by the PD method but successfully detected by the detrending approach, partly explains the comparison results. We adopted the control run from the detrending approach and applied it to adjust the PD method. This yielded a marked improvement, showing the adjustment to be an effective way of attributing changes in Epan. Hence, the detrending approach and the adjusted PD method are recommended for attributing changes in hydrological models to better understand and predict the water and energy cycle.
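    The detrending experiment can be expressed generically: the contribution of a forcing variable is the difference between the Epan trend computed from all observed forcings and the trend obtained when that variable alone is detrended. In the hedged sketch below, toy_epan is a made-up stand-in for the PenPan equation and the synthetic forcings are illustrative only.

```python
import numpy as np

def detrend(x):
    t = np.arange(len(x))
    return x - np.polyval(np.polyfit(t, x, 1), t) + x.mean()

def trend(x):
    return np.polyfit(np.arange(len(x)), x, 1)[0]

def toy_epan(ta, rh, wind, rs):
    # NOT the PenPan equation -- just a monotone toy combination for illustration
    return 0.25 * ta + 0.03 * rs + 0.4 * wind - 0.05 * rh

n = 50                                            # years of synthetic forcing
t = np.arange(n)
rng = np.random.default_rng(4)
forcings = {
    "Ta":   15 + 0.03 * t + rng.normal(0, 0.3, n),    # warming trend
    "Rh":   70 - 0.10 * t + rng.normal(0, 1.0, n),    # drying trend
    "Wind": 2.5 - 0.01 * t + rng.normal(0, 0.1, n),   # stilling trend
    "Rs":   200 + rng.normal(0, 5.0, n),              # no trend
}

base_trend = trend(toy_epan(forcings["Ta"], forcings["Rh"], forcings["Wind"], forcings["Rs"]))
for name in forcings:
    runs = {k: (detrend(v) if k == name else v) for k, v in forcings.items()}
    contrib = base_trend - trend(toy_epan(runs["Ta"], runs["Rh"], runs["Wind"], runs["Rs"]))
    print(f"contribution of {name}: {contrib:+.4f} (Epan units per year)")
```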

  19. Comparison of complications in transtrochanteric and anterolateral approaches in primary total hip arthroplasty.

    LENUS (Irish Health Repository)

    Cashman, James P

    2008-11-01

    Three surgical approaches to primary total hip arthroplasty (THA) have been in use since Charnley popularized the transtrochanteric approach. This study was designed to examine the difference in morbidity between the transtrochanteric approach and the anterolateral approach in primary THA. Information on 891 patients who underwent primary THA performed by a single surgeon was collected prospectively between 1998 and 2003 using a modified SF-36 form, preoperatively, intraoperatively, and at 3 months postoperatively. The transtrochanteric group had higher morbidity and more patients who were dissatisfied with their THA. There was a greater range of motion in the anterolateral group.

  20. Comparison of three different surgical approaches for treatment of thoracolumbar burst fracture

    Directory of Open Access Journals (Sweden)

    WU Han

    2013-02-01

    Full Text Available Objective: The main treatment method used for thoracolumbar fractures is open reduction and internal fixation. Commonly there are three surgical approaches: anterior, posterior and paraspinal. We attempt to compare the three approaches based on our clinical data analysis. Methods: A group of 94 patients with Denis type A or B thoracolumbar burst fracture between March 2008 and September 2010 were recruited in this study. These patients were treated by anterior-, posterior- or paraspinal-approach reduction with or without decompression. The fracture was fixed with titanium mesh and Z-plate via the anterior approach (24 patients), or with a screw and rod system via the posterior approach (38 patients) or paraspinal approach (32 patients). Clinical evaluations included operation duration, blood loss, incision length, and preoperative and postoperative Oswestry disability index (ODI). Results: The average operation duration (94.1 min±13.7 min), blood loss (86.7 ml±20.0 ml), length of incision (9.3 mm±0.7 mm) and postoperative ODI (6±0.5) were significantly lower (P<0.05) in the paraspinal approach group than in the traditional posterior approach group (operation duration 94.1 min±13.7 min, blood loss 143.3 ml±28.3 ml, length of incision 15.4 cm±2.1 cm and ODI 12±0.7) and the anterior approach group (operation duration 176.3 min±20.7 min, blood loss 255.1 ml±38.4 ml, length of incision 18.6 cm±2.4 cm and ODI 13±2.4). There was no statistical difference in terms of Cobb angle on radiographs among the three approaches. Conclusion: The anterior approach surgery is convenient for resection of the vertebrae and reconstruction of vertebral height, but it is more complicated and traumatic. Hence it is mostly used for severe Denis type B fracture. The posterior approach is commonly applied to most thoracolumbar fractures and has fewer complications compared with the anterior approach, but it has some shortcomings as well. The paraspinal approach has great advantages

  1. Rainfall Prediction of Indian Peninsula: Comparison of Time Series Based Approach and Predictor Based Approach using Machine Learning Techniques

    Science.gov (United States)

    Dash, Y.; Mishra, S. K.; Panigrahi, B. K.

    2017-12-01

    Prediction of northeast/post monsoon rainfall which occurs during October, November and December (OND) over the Indian peninsula is a challenging task due to the dynamic nature of an uncertain chaotic climate. It is imperative to elucidate this issue by examining the performance of different machine learning (ML) approaches. The prime objective of this research is to compare (a) statistical prediction using historical rainfall observations and global atmosphere-ocean predictors like Sea Surface Temperature (SST) and Sea Level Pressure (SLP) with (b) empirical prediction based on a time series analysis of past rainfall data without using any other predictors. Initially, ML techniques have been applied on SST and SLP data (1948-2014) obtained from the NCEP/NCAR reanalysis monthly means provided by the NOAA ESRL PSD. Later, this study investigated the applicability of ML methods using the OND rainfall time series for 1948-2014 and forecasted up to 2018. The predicted values of the aforementioned methods were verified using observed time series data collected from the Indian Institute of Tropical Meteorology, and the results revealed good performance of the ML algorithms with minimal error scores. Thus, it is found that both statistical and empirical methods are useful for long range climatic projections.

  2. The Comparison of a Thematic versus Regional Approach to Teaching a World Geography Course

    Science.gov (United States)

    Korson, Cadey; Kusek, Weronika

    2016-01-01

    The benefits of a regional or thematic approach to the study and presentation of world geography have long been debated. The goal is not to reimagine these debates or to promote one approach over another; the aim is to explore how world geography courses are currently being taught in American universities. By polling and sharing information about…

  3. Technical Note: A comparison of two empirical approaches to estimate in-stream net nutrient uptake

    Science.gov (United States)

    von Schiller, D.; Bernal, S.; Martí, E.

    2011-04-01

    To establish the relevance of in-stream processes on nutrient export at catchment scale it is important to accurately estimate whole-reach net nutrient uptake rates that consider both uptake and release processes. Two empirical approaches have been used in the literature to estimate these rates: (a) the mass balance approach, which considers changes in ambient nutrient loads corrected by groundwater inputs between two stream locations separated by a certain distance, and (b) the spiralling approach, which is based on the patterns of longitudinal variation in ambient nutrient concentrations along a reach following the nutrient spiralling concept. In this study, we compared the estimates of in-stream net nutrient uptake rates of nitrate (NO3) and ammonium (NH4) and the associated uncertainty obtained with these two approaches at different ambient conditions using a data set of monthly samplings in two contrasting stream reaches during two hydrological years. Overall, the rates calculated with the mass balance approach tended to be higher than those calculated with the spiralling approach only at high ambient nitrogen (N) concentrations. Uncertainty associated with these estimates also differed between both approaches, especially for NH4 due to the general lack of significant longitudinal patterns in concentration. The advantages and disadvantages of each of the approaches are discussed.
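    The two approaches can be contrasted with a small worked example. The hedged sketch below uses invented reach data: the mass balance route differences groundwater-corrected loads between the two ends of the reach, while the spiralling route fits a first-order longitudinal decline in ambient concentration and converts it to an areal uptake rate using one common formulation; the units, steady state and constant groundwater gain are assumptions.

```python
import numpy as np

reach_length = 200.0                     # m
width = 3.0                              # m
Q_up, Q_down = 10.0, 12.0                # discharge, L/s (gain attributed to groundwater)
C_up, C_down, C_gw = 80.0, 60.0, 20.0    # NO3 concentration, ug N/L

# (a) mass balance approach: loads in ug N/s, net uptake per unit streambed area
load_in = Q_up * C_up + (Q_down - Q_up) * C_gw
load_out = Q_down * C_down
U_mass_balance = (load_in - load_out) / (reach_length * width)   # ug N m^-2 s^-1

# (b) spiralling approach: fit ln(C) versus distance at several stations
x = np.array([0.0, 50.0, 100.0, 150.0, 200.0])
C = np.array([80.0, 74.0, 69.0, 64.0, 60.0])
k = -np.polyfit(x, np.log(C), 1)[0]                # longitudinal uptake coefficient, 1/m
U_spiralling = k * Q_down * np.mean(C) / width     # ug N m^-2 s^-1 (one common formulation)

print(f"mass balance: {U_mass_balance:.2f} ug N m-2 s-1")
print(f"spiralling:   {U_spiralling:.2f} ug N m-2 s-1")
```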

  4. Multidimensional Poverty Indices and First Order Dominance Techniques: An Empirical Comparison of Different Approaches

    DEFF Research Database (Denmark)

    Hussain, M. Azhar; Permanyer, Iñaki

    2018-01-01

    techniques (FOD). Our empirical findings suggest that the FOD approach might be a reasonable cost-effective alternative to the United Nations Development Program (UNDP)’s flagship poverty indicator: the Multidimensional Poverty Index (MPI). To the extent that the FOD approach is able to uncover the socio...

  5. Practical Skills Training in Agricultural Education--A Comparison between Traditional and Blended Approaches

    Science.gov (United States)

    Deegan, Donna; Wims, Padraig; Pettit, Tony

    2016-01-01

    Purpose: In this article the use of blended learning multimedia materials as an education tool was compared with the traditional approach for skills training. Design/Methodology/Approach: This study was conducted in Ireland using a pre-test, post-test experimental design. All students were instructed on how to complete two skills using either a…

  6. Comparison of a rational vs. high throughput approach for rapid salt screening and selection.

    Science.gov (United States)

    Collman, Benjamin M; Miller, Jonathan M; Seadeek, Christopher; Stambek, Julie A; Blackburn, Anthony C

    2013-01-01

    In recent years, high throughput (HT) screening has become the most widely used approach for early phase salt screening and selection in a drug discovery/development setting. The purpose of this study was to compare a rational approach for salt screening and selection to those results previously generated using a HT approach. The rational approach involved a much smaller number of initial trials (one salt synthesis attempt per counterion) that were selected based on a few strategic solubility determinations of the free form combined with a theoretical analysis of the ideal solvent solubility conditions for salt formation. Salt screening results for sertraline, tamoxifen, and trazodone using the rational approach were compared to those previously generated by HT screening. The rational approach produced similar results to HT screening, including identification of the commercially chosen salt forms, but with a fraction of the crystallization attempts. Moreover, the rational approach provided enough solid from the very initial crystallization of a salt for more thorough and reliable solid-state characterization and thus rapid decision-making. The crystallization techniques used in the rational approach mimic larger-scale process crystallization, allowing smoother technical transfer of the selected salt to the process chemist.

  7. Comparison of two approaches to the surgical management of cochlear implantation

    NARCIS (Netherlands)

    Postelmans, Job T. F.; Grolman, Wilko; Tange, Rinze A.; Stokroos, Robert J.

    2009-01-01

    OBJECTIVES/HYPOTHESIS: Our study was designed to compare two surgical approaches that are currently employed in cochlear implantation. METHODS: There were 315 patients who were divided into two groups according to the surgical technique used for implantation. The suprameatal approach (SMA) was

  8. Comparison of qualitative and quantitative approach to prostate MR spectroscopy in peripheral zone cancer detection

    International Nuclear Information System (INIS)

    Klijn, Stijn; De Visschere, Pieter J.; De Meerleer, Gert O.; Villeirs, Geert M.

    2012-01-01

    Objective: To compare the diagnostic performance of a qualitative (pattern recognition) and a quantitative (numerical assessment) approach to magnetic resonance spectroscopy (MRS) in the diagnosis of peripheral zone prostate cancer. Methods: 185 patients (131 with histopathologically proven cancer, 54 normal/benign after at least 12 months follow-up) were prospectively evaluated with qualitative MRS using a 4-point scale between 3/2004 and 1/2008, and retrospectively reassessed using a prototype quantitative postprocessing software in April 2008. Based on pathology and follow-up data, diagnostic performance parameters were calculated. Results: The qualitative and quantitative approaches were concordant in 78.9% (146/185) of cases. The difference between the areas under the ROC curve (0.791 versus 0.772, respectively) was not statistically significant. The sensitivity, specificity and accuracy were 55.7%, 94.4% and 67.0% for the qualitative approach, and 55.0%, 83.3% and 63.2% for the quantitative approach. The sensitivity for high grade tumours (Gleason 4 + 3 or higher) was 85.2% (23/27) for both approaches. All cancers missed on either one approach separately (31/31) and 91% of cancers missed on both approaches together (23/27) were of lower grade (Gleason 3 + 4 or lower). Conclusions: Qualitative and quantitative approaches to MRS yield similar diagnostic results. Discordances in tumour detection only occurred in lower grade cancers.

  9. Three approaches to deal with inconsistent decision tables - Comparison of decision tree complexity

    KAUST Repository

    Azad, Mohammad; Chikalov, Igor; Moshkov, Mikhail

    2013-01-01

    In inconsistent decision tables, there are groups of rows with equal values of conditional attributes and different decisions (values of the decision attribute). We study three approaches to deal with such tables. Instead of a group of equal rows, we consider one row given by values of conditional attributes and we attach to this row: (i) the set of all decisions for rows from the group (many-valued decision approach); (ii) the most common decision for rows from the group (most common decision approach); and (iii) the unique code of the set of all decisions for rows from the group (generalized decision approach). We present experimental results and compare the depth, average depth and number of nodes of decision trees constructed by a greedy algorithm in the framework of each of the three approaches. © 2013 Springer-Verlag.
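
    As an illustration of the three transformations described above, the following sketch (hypothetical data and helper names, not code from the paper) collapses a group of rows with equal conditional attributes into a single row under each approach.

        from collections import Counter

        def collapse(group_decisions):
            """Collapse the decisions of a group of rows with equal conditional attributes."""
            many_valued = set(group_decisions)                           # (i) set of all decisions
            most_common = Counter(group_decisions).most_common(1)[0][0]  # (ii) most common decision
            generalized = tuple(sorted(many_valued))                     # (iii) code of the decision set
            return many_valued, most_common, generalized

        # Example: three rows share the same conditional attributes but disagree on the decision.
        print(collapse([1, 2, 2]))   # ({1, 2}, 2, (1, 2))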

  10. Comparison of candidate solar array maximum power utilization approaches. [for spacecraft propulsion

    Science.gov (United States)

    Costogue, E. N.; Lindena, S.

    1976-01-01

    A study was made of five potential approaches that can be utilized to detect the maximum power point of a solar array while sustaining operations at or near maximum power and without endangering stability or causing array voltage collapse. The approaches studied included: (1) dynamic impedance comparator, (2) reference array measurement, (3) onset of solar array voltage collapse detection, (4) parallel tracker, and (5) direct measurement. The study analyzed the feasibility and adaptability of these approaches to a future solar electric propulsion (SEP) mission, and, specifically, to a comet rendezvous mission. Such missions presented the most challenging requirements to a spacecraft power subsystem in terms of power management over large solar intensity ranges of 1.0 to 3.5 AU. The dynamic impedance approach was found to have the highest figure of merit, and the reference array approach followed closely behind. The results are applicable to terrestrial solar power systems as well as to space missions other than SEP.

  11. Comparison of methodological approaches to identify economic activity regularities in transition economy

    Directory of Open Access Journals (Sweden)

    Jitka Poměnková

    2011-01-01

    Full Text Available This paper considers and evaluates methodological approaches to analysing the cyclical character of economic activity in a transition economy. Work in the time domain is taken as the starting point, followed by a frequency-domain approach. Both approaches are discussed from a methodological as well as an application point of view, and their advantages and disadvantages are considered. A time-frequency domain approach is then added and applied to real data, and a recommendation is formulated on the basis of the results obtained. All of the methodological approaches discussed are also assessed in terms of their ability to capture the behaviour of the business cycle during the global economic crisis before and after 2008. The empirical part of the paper uses gross domestic product data for the Czech Republic over 1996/Q1–2010/Q2.

  12. MR imaging of soft tissue alterations after total hip arthroplasty: comparison of classic surgical approaches

    Energy Technology Data Exchange (ETDEWEB)

    Agten, Christoph A.; Sutter, Reto; Pfirrmann, Christian W.A. [Balgrist University Hospital, Radiology, Zurich (Switzerland); University of Zurich, Faculty of Medicine, Zurich (Switzerland); Dora, Claudio [Balgrist University Hospital, Orthopedic Surgery, Zurich (Switzerland); University of Zurich, Faculty of Medicine, Zurich (Switzerland)

    2017-03-15

    To compare soft-tissue changes after total hip arthroplasty with posterior, direct-lateral, anterolateral, or anterior surgical approaches. MRI scans of 120 patients after primary total hip arthroplasty (30 per approach) were included. Each MRI was assessed by two readers regarding identification of surgical access, fatty muscle atrophy (Goutallier classification), tendon quality (0 = normal, 1 = tendinopathy, 2 = partial tear, 3 = avulsion), and fluid collections. Readers were blinded to the surgical approach. Surgical access was correctly identified in all cases. The direct lateral approach showed the highest Goutallier grades and tendon damage for the gluteus minimus muscle (2.07-2.67 and 2.00-2.77; p = 0.017 and p = 0.001 for readers 1 and 2, respectively) and tendon (2.30/1.67; p < 0.0005 for reader 1/2), and the lateral portion of the gluteus medius tendon (2.77/2.20; p < 0.0005 for reader 1/2). The posterior approach showed the highest Goutallier grades and tendon damage for the external rotator muscles (1.97-2.67 and 1.57-2.40; p < 0.0005-0.006 for reader 1/2) and tendons (1.41-2.45 and 1.93-2.76; p < 0.0005 for reader 1/2). The anterolateral and anterior approaches showed less soft-tissue damage. Fluid collections showed no differences between the approaches. MRI is well suited to identify surgical approaches after THA. The anterior and anterolateral approaches showed less soft-tissue damage compared to the posterior and direct lateral approaches. (orig.)

  13. MR imaging of soft tissue alterations after total hip arthroplasty: comparison of classic surgical approaches

    International Nuclear Information System (INIS)

    Agten, Christoph A.; Sutter, Reto; Pfirrmann, Christian W.A.; Dora, Claudio

    2017-01-01

    To compare soft-tissue changes after total hip arthroplasty with posterior, direct-lateral, anterolateral, or anterior surgical approaches. MRI scans of 120 patients after primary total hip arthroplasty (30 per approach) were included. Each MRI was assessed by two readers regarding identification of surgical access, fatty muscle atrophy (Goutallier classification), tendon quality (0 = normal, 1 = tendinopathy, 2 = partial tear, 3 = avulsion), and fluid collections. Readers were blinded to the surgical approach. Surgical access was correctly identified in all cases. The direct lateral approach showed the highest Goutallier grades and tendon damage for the gluteus minimus muscle (2.07-2.67 and 2.00-2.77; p = 0.017 and p = 0.001 for readers 1 and 2, respectively) and tendon (2.30/1.67; p < 0.0005 for reader 1/2), and the lateral portion of the gluteus medius tendon (2.77/2.20; p < 0.0005 for reader 1/2). The posterior approach showed the highest Goutallier grades and tendon damage for the external rotator muscles (1.97-2.67 and 1.57-2.40; p < 0.0005-0.006 for reader 1/2) and tendons (1.41-2.45 and 1.93-2.76; p < 0.0005 for reader 1/2). The anterolateral and anterior approaches showed less soft-tissue damage. Fluid collections showed no differences between the approaches. MRI is well suited to identify surgical approaches after THA. The anterior and anterolateral approaches showed less soft-tissue damage compared to the posterior and direct lateral approaches. (orig.)

  14. Comparison of approaches for mobile document image analysis using server supported smartphones

    Science.gov (United States)

    Ozarslan, Suleyman; Eren, P. Erhan

    2014-03-01

    With the recent advances in mobile technologies, new capabilities are emerging, such as mobile document image analysis. However, mobile phones are still less powerful than servers, and they have some resource limitations. One approach to overcome these limitations is performing resource-intensive processes of the application on remote servers. In mobile document image analysis, the most resource-consuming process is the Optical Character Recognition (OCR) process, which is used to extract text in mobile phone captured images. In this study, our goal is to compare the in-phone and the remote server processing approaches for mobile document image analysis in order to explore their trade-offs. For the in-phone approach, all processes required for mobile document image analysis run on the mobile phone. On the other hand, in the remote server approach, the core OCR process runs on the remote server and other processes run on the mobile phone. Results of the experiments show that the remote server approach is considerably faster than the in-phone approach in terms of OCR time, but adds extra delays such as network delay. Since compression and downscaling of images significantly reduce file sizes and extra delays, the remote server approach overall outperforms the in-phone approach in terms of the selected speed and correct recognition metrics, if the gain in OCR time compensates for the extra delays. According to the results of the experiments, using the most preferable settings, the remote server approach performs better than the in-phone approach in terms of speed and acceptable correct recognition metrics.

  15. Comparison of rule induction, decision trees and formal concept analysis approaches for classification

    Science.gov (United States)

    Kotelnikov, E. V.; Milov, V. R.

    2018-05-01

    Rule-based learning algorithms are more transparent and easier to interpret than neural networks and deep learning algorithms. These properties make them effective for the descriptive tasks of data mining. The choice of an algorithm, however, also depends on its ability to solve predictive tasks. The article compares the quality of binary and multiclass classification based on experiments with six datasets from the UCI Machine Learning Repository. The authors investigate three algorithms: Ripper (rule induction), C4.5 (decision trees), and In-Close (formal concept analysis). The results of the experiments show that In-Close yields the best classification quality in comparison with Ripper and C4.5, although the latter two generate more compact rule sets.

  16. Effects Comparison of Different Resilience Enhancing Strategies for Municipal Water Distribution Network: A Multidimensional Approach

    Directory of Open Access Journals (Sweden)

    Xudong Zhao

    2015-01-01

    Full Text Available The water distribution network (WDN) is critical to city services, economic recovery, public health, and safety. Reconstructing a WDN to improve its resilience to seismic disasters is therefore an important and ongoing issue. Although a considerable body of research has examined the effects of different reconstruction strategies on seismic resistance, it is still hard for decision-makers to choose an optimal resilience-enhancing strategy. Taking ductile pipeline retrofitting and meshed network expansion as examples, we propose a feasible framework to contrast the resilience-enhancing effects of two reconstruction strategies, a unit retrofitting strategy and a network optimization strategy, in the technical and organizational dimensions. We also develop a new performance response function (PRF), based on network equilibrium theory, to compare their effects in an integrated technical and organizational dimension. A case study of the municipal WDN of Lianyungang, China, illustrates the comparison results and the holistic decision-making support they provide.

  17. Comparison of two anoxia models in rainbow trout cells by a 2-DE and MS/MS-based proteome approach

    DEFF Research Database (Denmark)

    Wulff, Tune; Hoffmann, E.K.; Roepstorff, P.

    2008-01-01

    In the literature, a variety of ways have been used to obtain anoxia, and most often results are compared between studies without taking into consideration how anoxia has been obtained. Here, we provide a comprehensive study of two types of anoxia, using a proteomics approach to compare changes...... and protein synthesis. It was also revealed that the level of a number of keratins was down-regulated. This study therefore provides a valuable comparison of two different anoxia models and shows that great care should be taken when comparing the effects of anoxia in studies that have used different types...

  18. Optimization of approximate decision rules relative to number of misclassifications: Comparison of greedy and dynamic programming approaches

    KAUST Repository

    Amin, Talha

    2013-01-01

    In the paper, we present a comparison of dynamic programming and greedy approaches for construction and optimization of approximate decision rules relative to the number of misclassifications. We use an uncertainty measure that is a difference between the number of rows in a decision table T and the number of rows with the most common decision for T. For a nonnegative real number γ, we consider γ-decision rules that localize rows in subtables of T with uncertainty at most γ. Experimental results with decision tables from the UCI Machine Learning Repository are also presented. © 2013 Springer-Verlag.
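
    The uncertainty measure described above can be written compactly; the short sketch below (illustrative only, function names are ours) computes it for a (sub)table represented by its list of decisions and checks the gamma-rule condition.

        from collections import Counter

        def uncertainty(decisions):
            """Number of rows minus the number of rows carrying the most common decision."""
            if not decisions:
                return 0
            return len(decisions) - Counter(decisions).most_common(1)[0][1]

        def is_gamma_rule_subtable(decisions, gamma):
            """A rule localizing rows in this subtable is a gamma-decision rule if uncertainty <= gamma."""
            return uncertainty(decisions) <= gamma

        print(uncertainty([0, 0, 1, 2]))                       # 2
        print(is_gamma_rule_subtable([0, 0, 1, 2], gamma=2))   # True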

  19. An international comparison of models and approaches for the estimation of the radiological exposure of non-human biota

    International Nuclear Information System (INIS)

    Beresford, Nicholas A.; Balonov, Mikhail; Beaugelin-Seiller, Karine; Brown, Justin; Copplestone, David; Hingston, Joanne L.; Horyna, Jan; Hosseini, Ali; Howard, Brenda J.; Kamboj, Sunita; Nedveckaite, Tatjana; Olyslaegers, Geert; Sazykina, Tatiana; Vives i Batlle, Jordi; Yankovich, Tamara L.; Yu, Charley

    2008-01-01

    Over the last decade a number of models and approaches have been developed for the estimation of the exposure of non-human biota to ionising radiations. In some countries these are now being used in regulatory assessments. However, to date there has been no attempt to compare the outputs of the different models used. This paper presents the work of the International Atomic Energy Agency's EMRAS Biota Working Group which compares the predictions of a number of such models in model-model and model-data inter-comparisons

  20. An estimation of the transformation value by means of the estimation function. Market Comparison Approach with abridged data chart

    Directory of Open Access Journals (Sweden)

    Maurizio d’Amato

    2015-06-01

    Full Text Available This essay suggests a re-elaboration of the Market Comparison Approach in order to estimate the value of properties subject to transformation. It focuses on the valuation of a property after a given transformation and aims to determine the land value by means of the extraction method. The outcome, based on trading data and a case study in the province of Bari, may also be applied to the valuation of properties under construction (investment property under construction) by means of the Future Value method.

  1. Space commerce in a global economy - Comparison of international approaches to commercial space

    Science.gov (United States)

    Stone, Barbara A.; Kleber, Peter

    1992-01-01

    A historical perspective, current status, and comparison of national government/commercial space industry relationships in the United States and Europe are presented. It is noted that space technology has been developed and used primarily to meet the needs of civil and military government initiatives. Two future trends of space technology development include new space enterprises, and the national drive to achieve a more competitive global economic position.

  2. A Study on Influencing Factors of Knowledge Management Systems Adoption: Models Comparison Approach

    OpenAIRE

    Mei-Chun Yeh; Ming-Shu Yuan

    2007-01-01

    Using the Linear Structural Relation model (LISREL model) as the analysis method, and the technology acceptance model and the decomposed theory of planned behavior as the research foundation, this study approaches mainly from the angle of behavioral intention to examine the factors influencing the adoption of knowledge management systems by 421 employees, and at the same time to compare the two models mentioned above. According to the research, there is no, in comparison with the technology acceptance model and deco...

  3. Comparing Diagnostic Accuracy of Cognitive Screening Instruments: A Weighted Comparison Approach

    Directory of Open Access Journals (Sweden)

    A.J. Larner

    2013-03-01

    Full Text Available Background/Aims: There are many cognitive screening instruments available to clinicians when assessing patients' cognitive function, but the best way to compare the diagnostic utility of these tests is uncertain. One method is to undertake a weighted comparison which takes into account the difference in sensitivity and specificity of two tests, the relative clinical misclassification costs of true- and false-positive diagnosis, and also disease prevalence. Methods: Data were examined from four pragmatic diagnostic accuracy studies from one clinic which compared the Mini-Mental State Examination (MMSE with the Addenbrooke's Cognitive Examination-Revised (ACE-R, the Montreal Cognitive Assessment (MoCA, the Test Your Memory (TYM test, and the Mini-Mental Parkinson (MMP, respectively. Results: Weighted comparison calculations suggested a net benefit for ACE-R, MoCA, and MMP compared to MMSE, but a net loss for TYM test compared to MMSE. Conclusion: Routine incorporation of weighted comparison or other similar net benefit measures into diagnostic accuracy studies merits consideration to better inform clinicians of the relative value of cognitive screening instruments.

  4. Comparison between goal programming and cointegration approaches in enhanced index tracking

    Science.gov (United States)

    Lam, Weng Siew; Jamaan, Saiful Hafizah Hj.

    2013-04-01

    Index tracking is a popular form of passive fund management in the stock market. Passive management is a buy-and-hold strategy that aims to achieve a rate of return similar to the market return. The index tracking problem is that of reproducing the performance of a stock market index without purchasing all of the stocks that make up the index. This can be done by establishing an optimal portfolio that minimizes risk or tracking error. Improved index tracking (enhanced index tracking) is a dual-objective optimization problem, a trade-off between maximizing the mean return and minimizing the tracking error. Enhanced index tracking aims to generate excess return over the return achieved by the index. The objective of this study is to compare the portfolio compositions and performances obtained with two different approaches to the enhanced index tracking problem, goal programming and cointegration. The results show that the optimal portfolios from both approaches are able to outperform the Malaysian market index, the Kuala Lumpur Composite Index. The two approaches give different optimal portfolio compositions. Moreover, the cointegration approach outperforms the goal programming approach because it gives a higher mean return and a lower risk or tracking error. Therefore, the cointegration approach is more appropriate for investors in Malaysia.
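
    To make the dual objective concrete, one common textbook-style formulation (not necessarily the exact model used in this study) is shown below, with w the portfolio weights, r_{p,t} and r_{I,t} the portfolio and index returns over T periods, and lambda an assumed trade-off parameter.

        % Enhanced index tracking as a trade-off between excess return and tracking error
        \max_{w} \; \lambda \,\bar{r}_p(w) \;-\; (1-\lambda)\,
        \sqrt{\tfrac{1}{T}\sum_{t=1}^{T}\bigl(r_{p,t}(w)-r_{I,t}\bigr)^{2}}
        \quad \text{s.t.} \quad \sum_{i} w_i = 1, \; w_i \ge 0 .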

  5. A comparison between neural response telemetry via cochleostomy or the round window approach in cochlear implantation.

    Science.gov (United States)

    Hamerschmidt, Rogério; Schuch, Luiz Henrique; Rezende, Rodrigo Kopp; Wiemes, Gislaine Richter Minhoto; Oliveira, Adriana Kosma Pires de; Mocellin, Marcos

    2012-01-01

    There are two techniques for cochlear implant (CI) electrode placement: cochleostomy and the round window (RW) approach. This study aims to compare neural response telemetry (NRT) results immediately after surgery to check for possible differences on auditory nerve stimulation between these two techniques. This is a prospective cross-sectional study. Twenty-three patients were enrolled. Six patients underwent surgery by cochleostomy and 17 had it through the RW approach. Mean charge units (MCU) for high frequency sounds: patients submitted to the RW approach had a mean value of 190.4 (± 29.2) while cochleostomy patients averaged 187.8 (± 32.7); p = 0.71. MCU for mid frequency sounds: patients submitted to the RW approach had a mean value of 192.5 (± 22) while cochleostomy patients averaged 178.5 (± 18.5); p = 0.23. MCU for low frequency sounds: patients submitted to the RW approach had a mean value of 183.3 (± 25) while cochleostomy patients averaged 163.8 (± 19.3); p = 0.19. This study showed no differences in the action potential of the distal portion of the auditory nerve in patients with multichannel cochlear implants submitted to surgery by cochleostomy or through the RW approach, using the implant itself to generate stimuli and record responses. Both techniques equally stimulate the cochlear nerve. Therefore, the choice of approach can be made based on the surgeon's own preference and experience.

  6. Comparison of statistical approaches dealing with time-dependent confounding in drug effectiveness studies.

    Science.gov (United States)

    Karim, Mohammad Ehsanul; Petkau, John; Gustafson, Paul; Platt, Robert W; Tremlett, Helen

    2018-06-01

    In longitudinal studies, if the time-dependent covariates are affected by the past treatment, time-dependent confounding may be present. For a time-to-event response, marginal structural Cox models are frequently used to deal with such confounding. To avoid some of the problems of fitting marginal structural Cox model, the sequential Cox approach has been suggested as an alternative. Although the estimation mechanisms are different, both approaches claim to estimate the causal effect of treatment by appropriately adjusting for time-dependent confounding. We carry out simulation studies to assess the suitability of the sequential Cox approach for analyzing time-to-event data in the presence of a time-dependent covariate that may or may not be a time-dependent confounder. Results from these simulations revealed that the sequential Cox approach is not as effective as marginal structural Cox model in addressing the time-dependent confounding. The sequential Cox approach was also found to be inadequate in the presence of a time-dependent covariate. We propose a modified version of the sequential Cox approach that correctly estimates the treatment effect in both of the above scenarios. All approaches are applied to investigate the impact of beta-interferon treatment in delaying disability progression in the British Columbia Multiple Sclerosis cohort (1995-2008).

  7. Preliminary comparison of the endoscopic transnasal vs the sublabial transseptal approach for clinically nonfunctioning pituitary macroadenomas.

    Science.gov (United States)

    Sheehan, M T; Atkinson, J L; Kasperbauer, J L; Erickson, B J; Nippoldt, T B

    1999-07-01

    To assess the advantages and disadvantages of an endoscopic transnasal approach to pituitary surgery for a select group of clinically nonfunctioning macroadenomas and to compare results of this approach with the sublabial transseptal approach at a single institution. We retrospectively reviewed the records of 26 patients with clinically nonfunctioning pituitary macroadenomas approached endoscopically and 44 matched control patients with the same tumors approached sublabially between January 1, 1995, and October 31, 1997. At baseline, the groups were not significantly different for age, sex distribution, number of comorbid conditions, visual field defects, degree of anterior pituitary insufficiency, or preoperative assessment of tumor volume or invasiveness. Mean (SD) operative times were significantly reduced in the endoscopic group vs the sublabial group: 2.7 (0.7) hours vs 3.4 (0.9) hours (P working channel to the sella turcica. For these reasons, the endoscopic approach or its variation is an alternative to the sublabial approach but should be considered only by experienced pituitary neurosurgeons.

  8. A systematic comparison of different approaches of density functional theory for the study of electrical double layers

    International Nuclear Information System (INIS)

    Yang, Guomin; Liu, Longcheng

    2015-01-01

    Based on the best available knowledge of density functional theory (DFT), the reference-fluid perturbation method is here extended to yield different approaches that account well for the cross correlations between the Coulombic interaction and the hard-sphere exclusion in an inhomogeneous ionic hard-sphere fluid. In order to quantitatively evaluate the advantages and disadvantages of different approaches in describing the interfacial properties of electrical double layers, this study makes a systematic comparison against Monte Carlo simulations over a wide range of conditions. The results suggest that the accuracy of the DFT approaches is well correlated to a coupling parameter that describes the coupling strength of electrical double layers by accounting for the steric effect and that can be used to classify the systems into two regimes. In the weak-coupling regime, the approaches based on the bulk-fluid perturbation method are shown to be more accurate than the counterparts based on the reference-fluid perturbation method, whereas they exhibit the opposite behavior in the strong-coupling regime. More importantly, the analysis indicates that, with a suitable choice of the reference fluid, the weighted correlation approximation (WCA) to DFT gives the best account of the coupling effect of the electrostatic-excluded volume correlations. As a result, a piecewise WCA approach can be developed that is robust enough to describe the structural and thermodynamic properties of electrical double layers over both weak- and strong-coupling regimes.

  9. Comparison of spatial association approaches for landscape mapping of soil organic carbon stocks

    Science.gov (United States)

    Miller, B. A.; Koszinski, S.; Wehrhan, M.; Sommer, M.

    2015-03-01

    The distribution of soil organic carbon (SOC) can be variable at small analysis scales, but consideration of its role in regional and global issues demands the mapping of large extents. There are many different strategies for mapping SOC, among which is to model the variables needed to calculate the SOC stock indirectly or to model the SOC stock directly. The purpose of this research is to compare direct and indirect approaches to mapping SOC stocks from rule-based, multiple linear regression models applied at the landscape scale via spatial association. The final products for both strategies are high-resolution maps of SOC stocks (kg m-2), covering an area of 122 km2, with accompanying maps of estimated error. For the direct modelling approach, the estimated error map was based on the internal error estimations from the model rules. For the indirect approach, the estimated error map was produced by spatially combining the error estimates of component models via standard error propagation equations. We compared these two strategies for mapping SOC stocks on the basis of the qualities of the resulting maps as well as the magnitude and distribution of the estimated error. The direct approach produced a map with less spatial variation than the map produced by the indirect approach. The increased spatial variation represented by the indirect approach improved R2 values for the topsoil and subsoil stocks. Although the indirect approach had a lower mean estimated error for the topsoil stock, the mean estimated error for the total SOC stock (topsoil + subsoil) was lower for the direct approach. For these reasons, we recommend the direct approach to modelling SOC stocks be considered a more conservative estimate of the SOC stocks' spatial distribution.
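
    For the indirect approach, the abstract refers to standard error propagation equations. One conventional first-order form, assuming the component models (e.g. SOC concentration c, bulk density rho_b, and horizon thickness d) are independent, is given below only as a generic illustration, not as the exact equation used by the authors.

        % SOC stock as a product of independently modelled components
        S = c \cdot \rho_b \cdot d
        % First-order propagation of the component errors to the stock estimate
        \left(\frac{\sigma_S}{S}\right)^{2} \approx
        \left(\frac{\sigma_c}{c}\right)^{2} +
        \left(\frac{\sigma_{\rho_b}}{\rho_b}\right)^{2} +
        \left(\frac{\sigma_d}{d}\right)^{2}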

  10. Can intermuscular cleavage planes provide proper transverse screw angle? Comparison of two paraspinal approaches.

    Science.gov (United States)

    Cheng, Xiaofei; Ni, Bin; Liu, Qi; Chen, Jinshui; Guan, Huapeng

    2013-01-01

    The goal of this study was to determine which paraspinal approach provided a better transverse screw angle (TSA) for each vertebral level in lower lumbar surgery. Axial computed tomography (CT) images of 100 patients, from L3 to S1, were used to measure the angulation parameters, including transverse pedicle angle (TPA) and transverse cleavage plane angle (TCPA) of entry from the two approaches. The difference value between TCPA and TPA, defined as difference angle (DA), was calculated. Statistical differences of DA obtained by the two approaches and the angulation parameters between sexes, and the correlation between each angulation parameter and age or body mass index (BMI) were analyzed. TPA ranged from about 16° at L3 to 30° at S1. TCPA through the Wiltse's and Weaver's approach ranged from about -10° and 25° at L3 to 12° and 32° at S1, respectively. The absolute values of DA through the Weaver's approach were significantly lower than those through the Wiltse's approach at each level. The angulation parameters showed no significant difference with sex and no significant correlation with age or BMI. In the lower lumbar vertebrae (L3-L5) and S1, pedicle screw placement through the Weaver's approach may more easily yield the preferred TSA consistent with TPA than that through the Wiltse's approach. The reference values obtained in this paper may be applied regardless of sex, age or BMI and the descriptive statistical results may be used as references for applying the two paraspinal approaches.

  11. Comparisons of Multilevel Modeling and Structural Equation Modeling Approaches to Actor-Partner Interdependence Model.

    Science.gov (United States)

    Hong, Sehee; Kim, Soyoung

    2018-01-01

    There are basically two modeling approaches applicable to analyzing an actor-partner interdependence model: the multilevel modeling (hierarchical linear model) and the structural equation modeling. This article explains how to use these two models in analyzing an actor-partner interdependence model and how these two approaches work differently. As an empirical example, marital conflict data were used to analyze an actor-partner interdependence model. The multilevel modeling and the structural equation modeling produced virtually identical estimates for a basic model. However, the structural equation modeling approach allowed more realistic assumptions on measurement errors and factor loadings, rendering better model fit indices.

  12. Laying the Foundations for Democratic Behavior - A Comparison of Two Different Approaches to Democratic Education

    Directory of Open Access Journals (Sweden)

    Viola HUANG

    2014-07-01

    Full Text Available A democracy is a society in which everyone has equal rights and is able to participate in decision-making processes. Consequently, in a democratic society, democratic behavior is essential. This work investigates the question: In what ways and to what extent can alternative models of education support the development of democratic skills in children? To explore this question, the author analyzes and compares two different approaches to democratic education: The Sudbury approach and the democratic free school approach. The study is based on qualitative research, namely participant observation and open-ended interviews, conducted at different Sudbury and democratic free schools in the US.

  13. Multilevel summation methods for efficient evaluation of long-range pairwise interactions in atomistic and coarse-grained molecular simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Bond, Stephen D.

    2014-01-01

    The availability of efficient algorithms for long-range pairwise interactions is central to the success of numerous applications, ranging in scale from atomic-level modeling of materials to astrophysics. This report focuses on the implementation and analysis of the multilevel summation method for approximating long-range pairwise interactions. The computational cost of the multilevel summation method is proportional to the number of particles, N, which is an improvement over FFT-based methods whose cost is asymptotically proportional to N log N. In addition to approximating electrostatic forces, the multilevel summation method can be used to efficiently approximate convolutions with long-range kernels. As an application, we apply the multilevel summation method to a discretized integral equation formulation of the regularized generalized Poisson equation. Numerical results are presented using an implementation of the multilevel summation method in the LAMMPS software package. Preliminary results show that the computational cost of the method scales as expected, but there is still a need for further optimization.

  14. Comparison of Two Probabilistic Fatigue Damage Assessment Approaches Using Prognostic Performance Metrics

    Data.gov (United States)

    National Aeronautics and Space Administration — A general framework for probabilistic prognosis using maximum entropy approach, MRE, is proposed in this paper to include all available information and uncertainties...

  15. A comparison of approaches for finding minimum identifying codes on graphs

    Science.gov (United States)

    Horan, Victoria; Adachi, Steve; Bak, Stanley

    2016-05-01

    In order to formulate mathematical conjectures likely to be true, a number of base cases must be determined. However, many combinatorial problems are NP-hard and the computational complexity makes this research approach difficult using a standard brute force approach on a typical computer. One sample problem explored is that of finding a minimum identifying code. To work around the computational issues, a variety of methods are explored and consist of a parallel computing approach using MATLAB, an adiabatic quantum optimization approach using a D-Wave quantum annealing processor, and lastly using satisfiability modulo theory (SMT) and corresponding SMT solvers. Each of these methods requires the problem to be formulated in a unique manner. In this paper, we address the challenges of computing solutions to this NP-hard problem with respect to each of these methods.

  16. Automatic Diagnosis of Fetal Heart Rate: Comparison of Different Methodological Approaches

    National Research Council Canada - National Science Library

    Magenes, G

    2001-01-01

    .... A Multilayer Perceptron (MLP) neural network and an Adaptive Neuro-Fuzzy Inference System (ANFIS) were compared with classical statistical methods. Both the neural and neuro-fuzzy approaches seem to give better results than any tested statistical classifier.

  17. A Comparison of Three Holistic Approaches to Health: One Health, EcoHealth, and Planetary Health

    Directory of Open Access Journals (Sweden)

    Henrik Lerner

    2017-09-01

    Full Text Available Several holistic and interdisciplinary approaches exist to safeguard health. Three of the most influential concepts at the moment, One Health, EcoHealth, and Planetary Health, are analyzed in this paper, revealing similarities and differences at the theoretical conceptual level. These approaches may appear synonymous, as they all promote the underlying assumption of humans and other animals sharing the same planet and the same environmental challenges, infections and infectious agents as well as other aspects of physical—and possibly mental—health. However, we would like to illuminate the differences between these three concepts or approaches, and how the choice of terms may, deliberately or involuntary, signal the focus, and underlying values of the approaches. In this paper, we have chosen some proposed and well-known suggestions of definitions. In our theoretical analysis, we will focus on at least two areas. These are (1) the value of the potential scientific areas which could be included and (2) core values present within the approach. In the first area, our main concern is whether the approaches are interdisciplinary and whether the core scientific areas are assigned equal importance. For the second area, which is rather wide, we analyze core values such as biodiversity, health, and how one values humans, animals, and ecosystems. One Health has been described as either a narrow approach combining public health and veterinary medicine or as a wide approach as in the wide-spread “umbrella” depiction including both scientific fields, core concepts, and interdisciplinary research areas. In both cases, however, safeguarding the health of vertebrates is usually in focus although ecosystems are also included in the model. The EcoHealth approach seems to have more of a biodiversity focus, with an emphasis on all living creatures, implying that parasites, unicellular organisms, and possibly also viruses have a value and should be protected

  18. A Comparison of Three Holistic Approaches to Health: One Health, EcoHealth, and Planetary Health.

    Science.gov (United States)

    Lerner, Henrik; Berg, Charlotte

    2017-01-01

    Several holistic and interdisciplinary approaches exist to safeguard health. Three of the most influential concepts at the moment, One Health, EcoHealth, and Planetary Health, are analyzed in this paper, revealing similarities and differences at the theoretical conceptual level. These approaches may appear synonymous, as they all promote the underlying assumption of humans and other animals sharing the same planet and the same environmental challenges, infections and infectious agents as well as other aspects of physical-and possibly mental-health. However, we would like to illuminate the differences between these three concepts or approaches, and how the choice of terms may, deliberately or involuntary, signal the focus, and underlying values of the approaches. In this paper, we have chosen some proposed and well-known suggestions of definitions. In our theoretical analysis, we will focus on at least two areas. These are (1) the value of the potential scientific areas which could be included and (2) core values present within the approach. In the first area, our main concern is whether the approaches are interdisciplinary and whether the core scientific areas are assigned equal importance. For the second area, which is rather wide, we analyze core values such as biodiversity, health, and how one values humans, animals, and ecosystems. One Health has been described as either a narrow approach combining public health and veterinary medicine or as a wide approach as in the wide-spread "umbrella" depiction including both scientific fields, core concepts, and interdisciplinary research areas. In both cases, however, safeguarding the health of vertebrates is usually in focus although ecosystems are also included in the model. The EcoHealth approach seems to have more of a biodiversity focus, with an emphasis on all living creatures, implying that parasites, unicellular organisms, and possibly also viruses have a value and should be protected. Planetary Health, on the

  19. A comparison of image restoration approaches applied to three-dimensional confocal and wide-field fluorescence microscopy.

    Science.gov (United States)

    Verveer, P. J; Gemkow, M. J; Jovin, T. M

    1999-01-01

    We have compared different image restoration approaches for fluorescence microscopy. The most widely used algorithms were classified with a Bayesian theory according to the assumed noise model and the type of regularization imposed. We considered both Gaussian and Poisson models for the noise in combination with Tikhonov regularization, entropy regularization, Good's roughness and without regularization (maximum likelihood estimation). Simulations of fluorescence confocal imaging were used to examine the different noise models and regularization approaches using the mean squared error criterion. The assumption of a Gaussian noise model yielded only slightly higher errors than the Poisson model. Good's roughness was the best choice for the regularization. Furthermore, we compared simulated confocal and wide-field data. In general, restored confocal data are superior to restored wide-field data, but given sufficient higher signal level for the wide-field data the restoration result may rival confocal data in quality. Finally, a visual comparison of experimental confocal and wide-field data is presented.
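
    The Bayesian classification referred to above combines a data term from the assumed noise model with a regularization penalty. A generic form of the resulting objective, written here only as an illustration (H is the imaging operator, f the estimate, g the recorded image, lambda the regularization weight), is:

        % Maximum a posteriori restoration: noise-model data term plus regularization
        \hat{f} = \arg\min_{f \ge 0} \; \mathcal{L}(g \mid Hf) + \lambda\, R(f)
        % Gaussian noise:   L = \| g - Hf \|^2        Poisson noise:     L = \sum_i \bigl[ (Hf)_i - g_i \log (Hf)_i \bigr]
        % Tikhonov:         R = \| f \|^2             Good's roughness:  R = \int |\nabla f|^2 / f
        % lambda = 0 corresponds to (unregularized) maximum likelihood estimation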

  20. Laying the Foundations for Democratic Behavior - A Comparison of Two Different Approaches to Democratic Education

    OpenAIRE

    Viola HUANG

    2014-01-01

    A democracy is a society in which everyone has equal rights and is able to participate in decision-making processes. Consequently, in a democratic society, democratic behavior is essential. This work investigates the question: In what ways and to what extent can alternative models of education support the development of democratic skills in children? To explore this question, the author analyzes and compares two different approaches to democratic education: The Sudbury approach and the democr...

  1. Reconsidering Cluster Bias in Multilevel Data: A Monte Carlo Comparison of Free and Constrained Baseline Approaches.

    Science.gov (United States)

    Guenole, Nigel

    2018-01-01

    The test for item level cluster bias examines the improvement in model fit that results from freeing an item's between level residual variance from a baseline model with equal within and between level factor loadings and between level residual variances fixed at zero. A potential problem is that this approach may include a misspecified unrestricted model if any non-invariance is present, but the log-likelihood difference test requires that the unrestricted model is correctly specified. A free baseline approach where the unrestricted model includes only the restrictions needed for model identification should lead to better decision accuracy, but no studies have examined this yet. We ran a Monte Carlo study to investigate this issue. When the referent item is unbiased, compared to the free baseline approach, the constrained baseline approach led to similar true positive (power) rates but much higher false positive (Type I error) rates. The free baseline approach should be preferred when the referent indicator is unbiased. When the referent assumption is violated, the false positive rate was unacceptably high for both free and constrained baseline approaches, and the true positive rate was poor regardless of whether the free or constrained baseline approach was used. Neither the free or constrained baseline approach can be recommended when the referent indicator is biased. We recommend paying close attention to ensuring the referent indicator is unbiased in tests of cluster bias. All Mplus input and output files, R, and short Python scripts used to execute this simulation study are uploaded to an open access repository.
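
    The model-fit improvement mentioned above is typically evaluated with a log-likelihood difference (likelihood ratio) test between the nested models; the generic form below is given as a reminder of that machinery, not as a quotation from the article.

        % Likelihood ratio test between the restricted (baseline) and freed models
        LR = -2\,\bigl(\ell_{\text{restricted}} - \ell_{\text{free}}\bigr)
        \;\sim\; \chi^{2}_{\Delta df}
        \quad \text{under } H_0 \text{ (no cluster bias), with } \Delta df \text{ the number of freed parameters.}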

  2. A Comparison of Approaches for Solving Hard Graph-Theoretic Problems

    Science.gov (United States)

    2015-04-29

    In order to formulate mathematical conjectures likely to be true, a number of base cases must be determined. However, many combinatorial problems are NP-hard and the computational complexity makes this research approach difficult using a standard brute force approach on a

  3. Comparison of various approaches for the treatment of fractures of the mandibular condylar process.

    Science.gov (United States)

    Handschel, Jörg; Rüggeberg, Tim; Depprich, Rita; Schwarz, Frank; Meyer, Ulrich; Kübler, Norbert R; Naujoks, Christian

    2012-12-01

    Fractures of the mandibular condyle process are the most common fractures of the lower jaw. Unfortunately, the type of treatment is still a matter of debate. The aim of this investigation was to compare the outcome of different treatment approaches regarding function and surgical side-effects. 111 fractures of the mandibular condyle representing all types according to the classification of Spiessl and Schroll were included. Both closed reduction (CR) and open reduction with internal fixation (ORIF) including the retromandibular/transparotid, submandibular, preauricular and intraoral approach were performed. The clinical examination included functional and aesthetic aspects at least 1 year after the fracture. The majority of fractures (45%) were classified into Type II and IV according to Spiessl and Schroll followed by fractures without any displacement or dislocation (29.7%). The submandibular approach showed the worst outcome regarding permanent palsy of the facial nerve and hypertrophic scarring. No significant differences between the various approaches were detected in the functional status in any diagnosis group. Inferior condylar neck fractures benefit from ORIF by an intraoral approach whereas in high condylar neck fractures the retromandibular/transparotid approach shows the best results. Fractures of the condylar head were almost all treated by CR and our results cannot contribute to the debate of CR vs. ORIF in this type of fracture. Copyright © 2012 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  4. COMPARISONS BETWEEN AND COMBINATIONS OF DIFFERENT APPROACHES TO ACCELERATE ENGINEERING PROJECTS

    Directory of Open Access Journals (Sweden)

    H. Steyn

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT: In this article, traditional project management methods such as PERT and CPM, as well as fast-tracking and systems approaches, viz. concurrent engineering and critical chain, are reviewed with specific reference to their contribution to reducing the duration of the execution phase of engineering projects. Each of these techniques has some role to play in the acceleration of project execution. Combinations of approaches are evaluated by considering the potential of sets consisting of two different approaches each. While PERT and CPM approaches have been combined for many years in a technique called PERT/CPM, new combinations of approaches are discussed. Certain assumptions inherent to PERT, which are often wrong, are not made by the critical chain approach.

    AFRIKAANSE OPSOMMING: In this article, traditional project management approaches such as PERT and CPM, as well as fast-tracking and systems approaches, namely concurrent engineering and critical chain, are examined with regard to the contribution each can make to accelerating the execution phase of engineering projects. Each of these approaches can make a specific contribution to the acceleration of projects. Combinations, each consisting of two different approaches, are evaluated. While PERT and CPM have already been used in combination for many years, new combinations are also discussed here. Certain assumptions inherent to the PERT approach are often wrong; these assumptions are not made by the critical chain approach.

  5. Enthalpic discrimination of homochiral pairwise interactions: Enantiomers of proline and hydroxyproline in (dimethyl formamide (DMF) + H2O) and (dimethylsulfoxide (DMSO) + H2O) mixtures at 298.15 K

    International Nuclear Information System (INIS)

    Hu, Xin-Gen; Liu, Jia-Min; Guo, Zheng; Liang, Hong-Yu; Jia, Zhao-Peng; Cheng, Wei-Na; Guo, Ai-Di; Zhang, He-Juan

    2013-01-01

    Highlights: • The h_xx values of each α-amino acid decrease gradually with the mass fraction of cosolvent. • The absolute values of h_xx of the L-enantiomers are larger than those of the D-enantiomers in the range w_COS = 0 to 0.30. • The h_xx values of the two proline enantiomers are all positive at each composition of the mixed solvents. • When a hydrophilic hydroxyl group is introduced into the proline enantiomers, the values of h_xx become negative. -- Abstract: Dilution enthalpies of two pairs of α-amino acid enantiomers, namely L-proline vs D-proline and L-hydroxyproline vs D-hydroxyproline, in the water-rich regions of (dimethyl formamide (DMF) + H2O) and (dimethylsulfoxide (DMSO) + H2O) mixtures (mass fractions of cosolvent w_COS = 0 to 0.30) were determined at 298.15 K by isothermal titration calorimetry (ITC). The successive dilution enthalpies obtained in a single ITC run were used to calculate homochiral enthalpic pairwise interaction coefficients (h_xx) at the corresponding compositions of the mixed solvents according to the McMillan–Mayer statistical thermodynamic approach. The sign and magnitude of h_xx were interpreted in terms of solute–solute interactions mediated by solvent and cosolvent molecules, and of the preferential configurations of homochiral pairwise interactions (L–L or D–D pairs) in aqueous solutions. The variation of h_xx with w_COS was considered to depend greatly on the competition between hydrophobic and hydrophilic interactions, as well as on the structural alteration of water caused by the two highly polar aprotic cosolvents (DMF and DMSO). In particular, it was found that when one of the two kinds of interactions (hydrophobic or hydrophilic) preponderates over the other in solution, the enthalpic effect of homochiral pairwise interactions is always remarkable and is characterized by a large absolute value of h_xx, positive or negative, which corresponds respectively to the
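
    For orientation, the McMillan–Mayer treatment expands the excess enthalpy of a solution in powers of molality, and h_xx is the quadratic (pairwise) coefficient; the expansion below is a standard generic form given only for illustration (m is the solute molality, the h coefficients are the enthalpic virial coefficients per kilogram of solvent), not the working equation of this particular study.

        % Virial-type expansion of the excess enthalpy per kilogram of solvent
        H^{E}(m) = h_{xx}\, m^{2} + h_{xxx}\, m^{3} + \cdots
        % h_xx is extracted from dilution enthalpies measured between initial and final molalities m_i and m_f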

  6. Comparison of Posteromedial Versus Posterolateral Approach for Posterior Malleolus Fixation in Trimalleolar Ankle Fractures.

    Science.gov (United States)

    Zhong, Sheng; Shen, Lin; Zhao, Jia-Guo; Chen, Jie; Xie, Jin-Feng; Shi, Qi; Wu, Ying-Hua; Zeng, Xian-Tie

    2017-02-01

    To compare clinical and radiographic outcomes of posterior malleolar fractures (PMF) treated with lag screws from anterior to posterior versus posterior to anterior approach. We retrospectively analyzed 48 patients with trimalleolar fractures who underwent open reduction and internal fixation (ORIF) with either posteromedial (PM) or posterolateral (PL) approaches between January 2012 and December 2014. Fixation of the posterior malleolus was made with anteroposterior screws in 20 patients using the PM approach and posteroanterior screws in 28 patients using the PL approach. The American Orthopedic Foot and Ankle Society (AOFAS) scores and range of motion (ROM) of the ankle were used as the main outcome measurements, and results were evaluated at the 6-month, 12-month and final follow-up. Postoperative radiographs and computed tomography scans were used to evaluate the residual gap/step-off. The degree of arthritis was evaluated on final follow-up using Bargon criteria. Other complications were also recorded to compare the clinical outcomes of the two approaches. The mean duration of follow-up regardless of the approaches was 21.1 months (range, 15-54 months). None of the patients developed delayed union or nonunion. Functional bone healing was obtained in all patients at 10.7 weeks (range, 8-16 weeks). The mean AOFAS scores of the PM group at the postoperative 6-month, 12-month, and final follow-up were 91.4 (range, 82-100), 92.5 (range, 84-100), and 92.9 (range, 86-100), respectively. In the PL group, the mean AOFAS scores were 89.9 (range, 72-100), 91.4 (range, 77-100), and 91.9 (range, 77-100), respectively. At the final follow-up, the median loss of range of motion (ROM) for dorsiflexion and plantarflexion were 0° (0°, 5°) and 0° (0°, 0°), respectively, in both groups. There were no significant differences between the two approaches in AOFAS scores and ROM of the ankle in each period postoperatively (P > 0.05). Two patients in the PL group and 1 in the PM

  7. Linear VSS and Distributed Commitments Based on Secret Sharing and Pairwise Checks

    DEFF Research Database (Denmark)

    Fehr, Serge; Maurer, Ueli M.

    2002-01-01

    ... VSS and DC are the main building blocks for unconditionally secure multi-party computation protocols. This general approach covers all known linear VSS and DC schemes. The main theorem states that the security of a scheme is equivalent to a pure linear-algebra condition on the linear mappings (e.g. described as matrices and vectors) describing the scheme. The security of all known schemes follows as corollaries whose proofs are pure linear-algebra arguments, in contrast to some hybrid arguments used in the literature. Our approach is demonstrated for the CDM DC scheme, which we generalize to be secure...

  8. Comparison of weighting approaches for genetic risk scores in gene-environment interaction studies.

    Science.gov (United States)

    Hüls, Anke; Krämer, Ursula; Carlsten, Christopher; Schikowski, Tamara; Ickstadt, Katja; Schwender, Holger

    2017-12-16

    Weighted genetic risk scores (GRS), defined as weighted sums of risk alleles of single nucleotide polymorphisms (SNPs), are statistically powerful for the detection of gene-environment (GxE) interactions. To assign weights, the gold standard is to use external weights from an independent study. However, appropriate external weights are not always available. In such situations and in the presence of predominant marginal genetic effects, we have shown in a previous study that GRS with internal weights from marginal genetic effects ("GRS-marginal-internal") are a powerful and reliable alternative to single SNP approaches or the use of unweighted GRS. However, this approach might not be appropriate for detecting predominant interactions, i.e. interactions showing an effect stronger than the marginal genetic effect. In this paper, we present a weighting approach for such predominant interactions ("GRS-interaction-training") in which parts of the data are used to estimate the weights from the interaction terms and the remaining data are used to determine the GRS. We conducted a simulation study for the detection of GxE interactions in which we evaluated power, type I error and sign-misspecification. We compared this new weighting approach to the GRS-marginal-internal approach and to GRS with external weights. Our simulation study showed that in the absence of external weights and with predominant interaction effects, the highest power was reached with the GRS-interaction-training approach. If marginal genetic effects were predominant, the GRS-marginal-internal approach was more appropriate. Furthermore, the power to detect interactions reached by the GRS-interaction-training approach was only slightly lower than the power achieved by GRS with external weights. The power of the GRS-interaction-training approach was confirmed in a real data application to the Traffic, Asthma and Genetics (TAG) Study (N = 4465 observations). When appropriate external weights are unavailable, we
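
    In generic notation (not taken from the article), a weighted GRS and the interaction model it enters are typically written as follows, where G_ij counts the risk alleles of SNP j for subject i, w_j is the SNP weight (external, marginal-internal, or interaction-training), and E_i is the environmental exposure.

        % Weighted genetic risk score for subject i
        GRS_i = \sum_{j=1}^{p} w_j \, G_{ij}
        % Gene-environment interaction model (generalized linear model with link g)
        g\bigl(\mathrm{E}[Y_i]\bigr) = \beta_0 + \beta_G \, GRS_i + \beta_E \, E_i + \beta_{GE}\, GRS_i \times E_i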

  9. Comparison of the coracoid and retroclavicular approaches for ultrasound-guided infraclavicular brachial plexus block.

    Science.gov (United States)

    Kavrut Ozturk, Nilgun; Kavakli, Ali Sait

    2017-08-01

    This prospective randomized study compared the coracoid and retroclavicular approaches to ultrasound-guided infraclavicular brachial plexus block (IBPB) in terms of needle tip and shaft visibility and quality of block. We hypothesized that the retroclavicular approach would increase needle tip and shaft visibility and decrease the number of needle passes compared to the coracoid approach. A total of 100 adult patients who received IBPB block for upper limb surgery were randomized into two groups: a coracoid approach group (group C) and a retroclavicular approach group (group R). In group C, the needle was inserted 2 cm medial and 2 cm inferior to the coracoid process and directed from ventral to dorsal. In group R, the needle insertion point was posterior to the clavicle and the needle was advanced from cephalad to caudal. All ultrasound images were digitally stored for analysis. The primary aim of the present study was to compare needle tip and shaft visibility between the coracoid approach and retroclavicular approach in patients undergoing upper limb surgery. The secondary aim was to investigate differences between the two groups in the number of needle passes, sensory and motor block success rates, surgical success rate, block performance time, block performance-related pain, patient satisfaction, use of supplemental local anesthetic and analgesic, and complications. Needle tip visibility and needle shaft visibility were significantly better in group R (p = 0.040, p = 0.032, respectively). Block performance time and anesthesia-related time were significantly shorter in group R (p = 0.022, p = 0.038, respectively). Number of needle passes was significantly lower in group R (p = 0.044). Paresthesia during block performance was significantly higher in group C (p = 0.045). There were no statistically significant differences between the two groups in terms of sensory or motor block success, surgical success, block-related pain, and patient satisfaction

  10. Comparison of the different approaches to generate holograms from data acquired with a Kinect sensor

    Science.gov (United States)

    Kang, Ji-Hoon; Leportier, Thibault; Ju, Byeong-Kwon; Song, Jin Dong; Lee, Kwang-Hoon; Park, Min-Chul

    2017-05-01

    Data of real scenes acquired in real time with a Kinect sensor can be processed with different approaches to generate a hologram. 3D models can be generated from a point cloud or a mesh representation. The advantage of the point cloud approach is that the computation process is well established, since it involves only diffraction and propagation of point sources between parallel planes. On the other hand, the mesh representation makes it possible to reduce the number of elements necessary to represent the object. Then, even though the computation time for the contribution of a single element increases compared to a simple point, the total computation time can be reduced significantly. However, the algorithm is more complex, since propagation of elemental polygons between non-parallel planes has to be implemented. Finally, since a depth map of the scene is acquired at the same time as the intensity image, a depth layer approach can also be adopted. This technique is appropriate for fast computation, since propagation of an optical wavefront from one plane to another can be handled efficiently with the fast Fourier transform. Fast computation with the depth layer approach is convenient for real-time applications, but the point cloud method is more appropriate when high resolution is needed. In this study, since the Kinect can be used to obtain both a point cloud and a depth map, we examine the different approaches that can be adopted for hologram computation and compare their performance.
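
    The depth-layer route mentioned above reduces to propagating each depth slice to the hologram plane with an FFT-based (angular spectrum) transfer function and summing the contributions. A minimal Python sketch of that idea follows; the wavelength, pixel pitch, layer depths and toy scene are assumed values, not parameters from the paper.

        import numpy as np

        wavelength = 633e-9      # m (assumed)
        pitch = 8e-6             # pixel pitch in m (assumed)
        N = 512

        fx = np.fft.fftfreq(N, d=pitch)
        FX, FY = np.meshgrid(fx, fx)

        def angular_spectrum(field, z):
            """Plane-to-plane propagation of a complex field over distance z (FFT-based)."""
            arg = 1.0 / wavelength**2 - FX**2 - FY**2
            kz = 2j * np.pi * z * np.sqrt(np.maximum(arg, 0.0))   # evanescent part dropped
            return np.fft.ifft2(np.fft.fft2(field) * np.exp(kz))

        # toy scene: two depth layers cut out of an intensity image using the depth map
        layer_near = np.zeros((N, N)); layer_near[200:240, 200:240] = 1.0
        layer_far = np.zeros((N, N)); layer_far[300:340, 120:160] = 1.0

        hologram = angular_spectrum(layer_near, 0.10) + angular_spectrum(layer_far, 0.15)
        print(np.abs(hologram).max())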

  11. A comparison of the stochastic and machine learning approaches in hydrologic time series forecasting

    Science.gov (United States)

    Kim, T.; Joo, K.; Seo, J.; Heo, J. H.

    2016-12-01

    Hydrologic time series forecasting is an essential task in water resources management, and it becomes more difficult due to the complexity of the runoff process. Traditional stochastic models such as the ARIMA family have been used as a standard approach in time series modeling and forecasting of hydrological variables. Due to the nonlinearity in hydrologic time series data, machine learning approaches have been studied, with the advantage of discovering relevant features in a nonlinear relation among variables. This study aims to compare the predictability of the traditional stochastic model and the machine learning approach. A seasonal ARIMA model was used as the traditional time series model, and a Random Forest model, an ensemble of decision trees built on multiple predictors, was applied as the machine learning approach. In the application, monthly inflow data from 1986 to 2015 for the Chungju dam in South Korea were used for modeling and forecasting. In order to evaluate the performance of the models, one-step-ahead and multi-step-ahead forecasting were applied. The root mean squared error and mean absolute error of the two models were compared.
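
    A minimal sketch of such a comparison on synthetic monthly data is given below; it assumes statsmodels and scikit-learn are available, and the model orders, lag features and error metrics are illustrative choices rather than those of the study.

        import numpy as np
        from statsmodels.tsa.statespace.sarimax import SARIMAX
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.metrics import mean_absolute_error, mean_squared_error

        rng = np.random.default_rng(1)
        t = np.arange(360)                                   # 30 years of monthly values
        flow = 100 + 40 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 10, t.size)
        train, test = flow[:-24], flow[-24:]

        # stochastic model: seasonal ARIMA
        sarima = SARIMAX(train, order=(1, 0, 1), seasonal_order=(1, 0, 1, 12)).fit(disp=False)
        pred_sarima = sarima.forecast(steps=24)

        # machine learning model: Random Forest on lagged predictors, iterated multi-step
        lags = 12
        X = np.array([train[i:i + lags] for i in range(len(train) - lags)])
        y = train[lags:]
        rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

        history = list(train[-lags:])
        pred_rf = []
        for _ in range(24):
            yhat = rf.predict(np.array(history[-lags:]).reshape(1, -1))[0]
            pred_rf.append(yhat)
            history.append(yhat)

        for name, pred in [("SARIMA", pred_sarima), ("RF", np.array(pred_rf))]:
            rmse = np.sqrt(mean_squared_error(test, pred))
            print(name, "RMSE", round(rmse, 2), "MAE", round(mean_absolute_error(test, pred), 2))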

  12. Delineating Individual Trees from Lidar Data: A Comparison of Vector- and Raster-based Segmentation Approaches

    Directory of Open Access Journals (Sweden)

    Maggi Kelly

    2013-08-01

    Light detection and ranging (lidar) data is increasingly being used for ecosystem monitoring across geographic scales. This work concentrates on delineating individual trees in topographically complex, mixed conifer forest across California's Sierra Nevada. We delineated individual trees using vector data and a 3D lidar point cloud segmentation algorithm, and using raster data with an object-based image analysis (OBIA) of a canopy height model (CHM). The two approaches are compared to each other and to ground reference data. We used high density (9 pulses/m2), discrete lidar data and WorldView-2 imagery to delineate individual trees, and to classify them by species or species types. We also identified a new method to correct artifacts in a high-resolution CHM. Our main focus was to determine the difference between the two types of approaches and to identify the one that produces more realistic results. We compared the delineations via tree detection, tree heights, and the shape of the generated polygons. The tree height agreement was high between the two approaches and the ground data (r2: 0.93–0.96). Tree detection rates increased for more dominant trees (8–100 percent). The two approaches delineated tree boundaries that differed in shape: the lidar approach produced fewer, more complex, and larger polygons that more closely resembled real forest structure.

  13. Template-based protein-protein docking exploiting pairwise interfacial residue restraints

    NARCIS (Netherlands)

    Xue, Li C; Garcia Lopes Maia Rodrigues, João; Dobbs, Drena; Honavar, Vasant; Bonvin, Alexandre M J J

    2016-01-01

    Although many advanced and sophisticated ab initio approaches for modeling protein-protein complexes have been proposed in past decades, template-based modeling (TBM) remains the most accurate and widely used approach, given a reliable template is available. However, there are many different ways to

  14. Comparison of a Traditional Probabilistic Risk Assessment Approach with Advanced Safety Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Curtis L; Mandelli, Diego; Zhegang Ma

    2014-11-01

    As part of the Light Water Reactor Sustainability (LWRS) Program [1], the purpose of the Risk Informed Safety Margin Characterization (RISMC) [2] Pathway research and development (R&D) is to support plant decisions for risk-informed margin management, with the aim of improving the economics and reliability, and sustaining the safety, of current NPPs. In this paper, we describe the RISMC analysis process, illustrating how mechanistic and probabilistic approaches are combined in order to estimate a safety margin. We use the scenario of a “station blackout” (SBO) wherein offsite and onsite power are lost, thereby causing a challenge to plant safety systems. We describe the RISMC approach, illustrate the station blackout modeling, and contrast this with traditional risk analysis modeling for this type of accident scenario. We also describe the approach we are using to represent advanced flooding analysis.

  15. Comparison of governance approaches for the control of antimicrobial resistance: Analysis of three European countries

    Directory of Open Access Journals (Sweden)

    Gabriel Birgand

    2018-02-01

    Policy makers and governments are calling for coordination to address the crisis emerging from the ineffectiveness of current antibiotics and the stagnating pipeline of new ones – antimicrobial resistance (AMR). Wider contextual drivers and mechanisms are contributing to shifts in governance strategies in health care, but are national health system approaches aligned with the strategies required to tackle antimicrobial resistance? This article provides an analysis of governance approaches within healthcare systems, including priority setting, performance monitoring and accountability for AMR prevention, in three European countries: England, France and Germany. Advantages and unresolved issues from these different experiences are reported, concluding that mechanisms are needed to support partnerships between healthcare professionals and patients, with democratized decision-making and accountability via collaboration. But along with this multi-stakeholder approach to governance, a balance between regulation and persuasion is needed.

  16. A comparison of two closely-related approaches to aerodynamic design optimization

    Science.gov (United States)

    Shubin, G. R.; Frank, P. D.

    1991-01-01

    Two related methods for aerodynamic design optimization are compared. The methods, called the implicit gradient approach and the variational (or optimal control) approach, both attempt to obtain gradients necessary for numerical optimization at a cost significantly less than that of the usual black-box approach that employs finite difference gradients. While the two methods are seemingly quite different, they are shown to differ (essentially) in that the order of discretizing the continuous problem, and of applying calculus, is interchanged. Under certain circumstances, the two methods turn out to be identical. We explore the relationship between these methods by applying them to a model problem for duct flow that has many features in common with transonic flow over an airfoil. We find that the gradients computed by the variational method can sometimes be sufficiently inaccurate to cause the optimization to fail.

  17. Comparison between the basic least squares and the Bayesian approach for elastic constants identification

    Science.gov (United States)

    Gogu, C.; Haftka, R.; LeRiche, R.; Molimard, J.; Vautrin, A.; Sankar, B.

    2008-11-01

    The basic formulation of the least squares method, based on the L2 norm of the misfit, is still widely used today for identifying elastic material properties from experimental data. An alternative statistical approach is the Bayesian method. We seek here situations with significant difference between the material properties found by the two methods. For a simple three bar truss example we illustrate three such situations in which the Bayesian approach leads to more accurate results: different magnitude of the measurements, different uncertainty in the measurements and correlation among measurements. When all three effects add up, the Bayesian approach can have a large advantage. We then compared the two methods for identification of elastic constants from plate vibration natural frequencies.
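
    To make the contrast concrete, here is a toy identification of a single elastic modulus E from two noisy measurements with very different magnitudes and uncertainties: basic least squares minimizes the unweighted L2 misfit, while the Bayesian route weights each measurement by its uncertainty and returns a posterior mean. This is a generic Python illustration under assumed numbers, not the truss or plate-vibration example of the paper.

        import numpy as np

        # forward model: two measured quantities depend linearly on E (illustrative)
        g = np.array([1.0, 5.0])                  # sensitivities of different magnitude
        sigma = np.array([0.5, 10.0])             # very different measurement uncertainties
        E_true = 70.0
        rng = np.random.default_rng(2)
        d = g * E_true + rng.normal(0.0, sigma)   # noisy measurements

        # basic least squares (unweighted L2 misfit) ignores the differing uncertainties
        E_ls = (g @ d) / (g @ g)

        # Bayesian estimate with a flat prior: grid posterior using the measurement noise model
        E_grid = np.linspace(40.0, 100.0, 2001)
        misfit = ((d[None, :] - E_grid[:, None] * g[None, :]) / sigma[None, :]) ** 2
        post = np.exp(-0.5 * misfit.sum(axis=1))
        post /= post.sum()
        E_bayes = (E_grid * post).sum()

        print(round(E_ls, 2), round(E_bayes, 2))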

  18. A Monte Carlo Study on Multiple Output Stochastic Frontiers: Comparison of Two Approaches

    DEFF Research Database (Denmark)

    Henningsen, Geraldine; Henningsen, Arne; Jensen, Uwe

    In the estimation of multiple output technologies in a primal approach, the main question is how to handle the multiple outputs. Often an output distance function is used, where the classical approach is to exploit its homogeneity property by selecting one output quantity as the dependent variable, dividing all other output quantities by the selected output quantity, and using these ratios as regressors (OD). Another approach is the stochastic ray production frontier (SR), which transforms the output quantities into their Euclidean distance as the dependent variable and their polar coordinates ... of both specifications for the case of a Translog output distance function with respect to different common statistical problems as well as problems arising as a consequence of zero values in the output quantities. Although our results partly show clear reactions to statistical misspecifications...

  19. Comparison of two approaches for differentiating full-field data in solid mechanics

    International Nuclear Information System (INIS)

    Avril, Stéphane; Feissel, Pierre; Villon, Pierre; Pierron, Fabrice

    2010-01-01

    In this study, the issue of reconstructing the gradients of noisy full-field data is addressed within the framework of solid mechanics. Two approaches are considered: a global one based on finite element approximation (FEA) and a local one based on diffuse approximation (DA). For both approaches, it is proposed to monitor the filtering effect locally in order to adapt the uncertainty to the local signal-to-noise ratio. Both approaches are applied to a case study which is commonly considered difficult in solid mechanics (an open-hole tensile test on a composite laminate). Both DA and FEA are successful in detecting local subsurface damage from the measured noisy displacement fields. Indications are also provided about the relative performance of DA and FEA. It is shown that DA is more robust, but the downside is that it is also more CPU-time consuming.

  20. Comparison of modeling approaches to prioritize chemicals based on estimates of exposure and exposure potential.

    Science.gov (United States)

    Mitchell, Jade; Arnot, Jon A; Jolliet, Olivier; Georgopoulos, Panos G; Isukapalli, Sastry; Dasgupta, Surajit; Pandian, Muhilan; Wambaugh, John; Egeghy, Peter; Cohen Hubal, Elaine A; Vallero, Daniel A

    2013-08-01

    While only limited data are available to characterize the potential toxicity of over 8 million commercially available chemical substances, there is even less information available on the exposure and use-scenarios that are required to link potential toxicity to human and ecological health outcomes. Recent improvements and advances such as high throughput data gathering, high performance computational capabilities, and predictive chemical inherency methodology make this an opportune time to develop an exposure-based prioritization approach that can systematically utilize and link the asymmetrical bodies of knowledge for hazard and exposure. In response to the US EPA's need to develop novel approaches and tools for rapidly prioritizing chemicals, a "Challenge" was issued to several exposure model developers to aid the understanding of current systems in a broader sense and to assist the US EPA's effort to develop an approach comparable to other international efforts. A common set of chemicals were prioritized under each current approach. The results are presented herein along with a comparative analysis of the rankings of the chemicals based on metrics of exposure potential or actual exposure estimates. The analysis illustrates the similarities and differences across the domains of information incorporated in each modeling approach. The overall findings indicate a need to reconcile exposures from diffuse, indirect sources (far-field) with exposures from directly, applied chemicals in consumer products or resulting from the presence of a chemical in a microenvironment like a home or vehicle. Additionally, the exposure scenario, including the mode of entry into the environment (i.e. through air, water or sediment) appears to be an important determinant of the level of agreement between modeling approaches. Copyright © 2013 Elsevier B.V. All rights reserved.

  1. Comparison of modeling approaches to prioritize chemicals based on estimates of exposure and exposure potential

    Science.gov (United States)

    Mitchell, Jade; Arnot, Jon A.; Jolliet, Olivier; Georgopoulos, Panos G.; Isukapalli, Sastry; Dasgupta, Surajit; Pandian, Muhilan; Wambaugh, John; Egeghy, Peter; Cohen Hubal, Elaine A.; Vallero, Daniel A.

    2014-01-01

    While only limited data are available to characterize the potential toxicity of over 8 million commercially available chemical substances, there is even less information available on the exposure and use-scenarios that are required to link potential toxicity to human and ecological health outcomes. Recent improvements and advances such as high throughput data gathering, high performance computational capabilities, and predictive chemical inherency methodology make this an opportune time to develop an exposure-based prioritization approach that can systematically utilize and link the asymmetrical bodies of knowledge for hazard and exposure. In response to the US EPA’s need to develop novel approaches and tools for rapidly prioritizing chemicals, a “Challenge” was issued to several exposure model developers to aid the understanding of current systems in a broader sense and to assist the US EPA’s effort to develop an approach comparable to other international efforts. A common set of chemicals were prioritized under each current approach. The results are presented herein along with a comparative analysis of the rankings of the chemicals based on metrics of exposure potential or actual exposure estimates. The analysis illustrates the similarities and differences across the domains of information incorporated in each modeling approach. The overall findings indicate a need to reconcile exposures from diffuse, indirect sources (far-field) with exposures from directly, applied chemicals in consumer products or resulting from the presence of a chemical in a microenvironment like a home or vehicle. Additionally, the exposure scenario, including the mode of entry into the environment (i.e. through air, water or sediment) appears to be an important determinant of the level of agreement between modeling approaches. PMID:23707726

  2. Comparison of ensemble post-processing approaches, based on empirical and dynamical error modelisation of rainfall-runoff model forecasts

    Science.gov (United States)

    Chardon, J.; Mathevet, T.; Le Lay, M.; Gailhard, J.

    2012-04-01

    In the context of a national energy company (EDF: Electricité de France), hydro-meteorological forecasts are necessary to ensure the safety and security of installations, meet environmental standards and improve water resources management and decision making. Hydrological ensemble forecasts allow a better representation of meteorological and hydrological forecast uncertainties and improve human expertise in hydrological forecasting, which is essential to synthesize the available information coming from different meteorological and hydrological models and from human experience. An operational hydrological ensemble forecasting chain has been developed at EDF since 2008 and has been used since 2010 on more than 30 watersheds in France. This ensemble forecasting chain is characterized by ensemble pre-processing (rainfall and temperature) and post-processing (streamflow), where a large amount of human expertise is solicited. The aim of this paper is to compare two hydrological ensemble post-processing methods developed at EDF in order to improve ensemble forecast reliability (similar to Montanari & Brath, 2004; Schaefli et al., 2007). The aim of the post-processing methods is to dress hydrological ensemble forecasts with hydrological model uncertainties, based on perfect forecasts. The first method (called the empirical approach) is based on a statistical modelling of the empirical error of perfect forecasts, by streamflow sub-samples of quantile class and lead time. The second method (called the dynamical approach) is based on streamflow sub-samples of quantile class, streamflow variation and lead time. On a set of 20 watersheds used for operational forecasts, results show that both approaches are necessary to ensure good post-processing of the hydrological ensemble, allowing a clear improvement in the reliability, skill and sharpness of ensemble forecasts. The comparison of the empirical and dynamical approaches shows the limits of the empirical approach, which is not able to take into account hydrological

  3. A comparison of labeling and label-free mass spectrometry-based proteomics approaches.

    Science.gov (United States)

    Patel, Vibhuti J; Thalassinos, Konstantinos; Slade, Susan E; Connolly, Joanne B; Crombie, Andrew; Murrell, J Colin; Scrivens, James H

    2009-07-01

    The proteome of the recently discovered bacterium Methylocella silvestris has been characterized using three profiling and comparative proteomics approaches. The organism has been grown on two different substrates enabling variations in protein expression to be identified. The results obtained using the experimental approaches have been compared with respect to number of proteins identified, confidence in identification, sequence coverage and agreement of regulated proteins. The sample preparation, instrumental time and sample loading requirements of the differing experiments are compared and discussed. A preliminary screen of the protein regulation results for biological significance has also been performed.

  4. Avian fatalities at wind energy facilities in North America: A comparison of recent approaches

    Science.gov (United States)

    Johnson, Douglas H.; Loss, Scott R.; Smallwood, K. Shawn; Erickson, Wallace P.

    2016-01-01

    Three recent publications have estimated the number of birds killed each year by wind energy facilities at 2012 build-out levels in the United States. The 3 publications differ in scope, methodology, and resulting estimates. We compare and contrast characteristics of the approaches used in the publications. In addition, we describe decisions made in obtaining the estimates that were produced. Despite variation in the 3 approaches, resulting estimates were reasonably similar; about a quarter- to a half-million birds are killed per year by colliding with wind turbines.

  5. Performance comparisons between PCA-EA-LBG and PCA-LBG-EA approaches in VQ codebook generation for image compression

    Science.gov (United States)

    Tsai, Jinn-Tsong; Chou, Ping-Yi; Chou, Jyh-Horng

    2015-11-01

    The aim of this study is to generate vector quantisation (VQ) codebooks by integrating the principal component analysis (PCA) algorithm, the Linde-Buzo-Gray (LBG) algorithm, and evolutionary algorithms (EAs). The EAs include the genetic algorithm (GA), particle swarm optimisation (PSO), honey bee mating optimisation (HBMO), and the firefly algorithm (FF). The study provides performance comparisons between the PCA-EA-LBG and PCA-LBG-EA approaches. The PCA-EA-LBG approaches contain PCA-GA-LBG, PCA-PSO-LBG, PCA-HBMO-LBG, and PCA-FF-LBG, while the PCA-LBG-EA approaches contain PCA-LBG, PCA-LBG-GA, PCA-LBG-PSO, PCA-LBG-HBMO, and PCA-LBG-FF. All training vectors of the test images are grouped according to PCA. The PCA-EA-LBG approaches used the vectors grouped by PCA as initial individuals, and the best solution gained by the EAs was given to LBG to discover a codebook. The PCA-LBG approach uses PCA to select vectors as initial individuals for LBG to find a codebook. The PCA-LBG-EA approaches used the final result of PCA-LBG as an initial individual for the EAs to find a codebook. The search scheme in PCA-EA-LBG first uses global search and then applies local search, while that in PCA-LBG-EA first uses local search and then employs global search. The results verify that PCA-EA-LBG indeed gains superior results compared to PCA-LBG-EA, because PCA-EA-LBG explores a global area to find a solution, and then exploits a better one from the local area around that solution. Furthermore, the proposed PCA-EA-LBG approaches to designing VQ codebooks outperform existing approaches reported in the literature.
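
    The LBG step at the core of all of these variants is essentially a k-means-style refinement of the codebook. A bare-bones Python sketch follows (random initialisation in place of PCA grouping or EA seeding, so it shows only the common LBG building block, not any of the combined approaches):

        import numpy as np

        def lbg(vectors, codebook_size, iters=50, seed=0):
            """Plain Linde-Buzo-Gray refinement: assign each training vector to its
            nearest codeword, then move each codeword to the centroid of its cell."""
            rng = np.random.default_rng(seed)
            codebook = vectors[rng.choice(len(vectors), codebook_size, replace=False)].copy()
            for _ in range(iters):
                d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
                cells = d.argmin(axis=1)
                for k in range(codebook_size):
                    members = vectors[cells == k]
                    if len(members):
                        codebook[k] = members.mean(axis=0)
            return codebook

        # toy training set: 4x4 image blocks flattened to 16-dimensional vectors
        blocks = np.random.default_rng(3).normal(size=(1000, 16))
        print(lbg(blocks, codebook_size=32).shape)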

  6. Approach to Multi-Criteria Group Decision-Making Problems Based on the Best-Worst-Method and ELECTRE Method

    Directory of Open Access Journals (Sweden)

    Xinshang You

    2016-09-01

    This paper proposes a novel approach to cope with multi-criteria group decision-making problems. We give the pairwise comparisons based on the best-worst method (BWM), which reduces the number of comparisons required. Additionally, our comparison results are determined with both positive and negative aspects. In order to deal with the decision matrices effectively, we consider the elimination and choice translating reality (ELECTRE) III method under the intuitionistic multiplicative preference relation environment. The ELECTRE III method is designed for a double-automatic system. Under a certain limitation, without bothering the decision-makers to reevaluate the alternatives, this system can adjust some special elements that have the most influence on the group’s satisfaction degree. Moreover, the proposed method is suitable for both intuitionistic multiplicative preference relations and interval-valued fuzzy preference relations through the transformation formula. An illustrative example is provided to demonstrate the rationality and applicability of the novel method.
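
    In the BWM, weights are derived from only two comparison vectors (best-to-others and others-to-worst) rather than a full pairwise matrix, which is where the reduction in comparisons comes from. A small Python sketch of the commonly cited linear BWM model, solved as a linear program, is given below; the comparison vectors are made-up judgements, and this is not the intuitionistic multiplicative variant used in the paper.

        import numpy as np
        from scipy.optimize import linprog

        def bwm_weights(a_best, a_worst, best, worst):
            """Linear Best-Worst Method: minimise xi subject to
            |w_best - a_best[j] * w_j| <= xi, |w_j - a_worst[j] * w_worst| <= xi,
            sum(w) = 1 and w >= 0."""
            n = len(a_best)
            c = np.zeros(n + 1)
            c[-1] = 1.0                                   # objective: minimise xi
            A_ub, b_ub = [], []
            for j in range(n):
                for sign in (1.0, -1.0):
                    row = np.zeros(n + 1)                 # sign*(w_best - a_best[j]*w_j) - xi <= 0
                    row[best] += sign
                    row[j] -= sign * a_best[j]
                    row[-1] = -1.0
                    A_ub.append(row); b_ub.append(0.0)
                    row = np.zeros(n + 1)                 # sign*(w_j - a_worst[j]*w_worst) - xi <= 0
                    row[j] += sign
                    row[worst] -= sign * a_worst[j]
                    row[-1] = -1.0
                    A_ub.append(row); b_ub.append(0.0)
            A_eq, b_eq = [np.append(np.ones(n), 0.0)], [1.0]
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                          bounds=[(0, None)] * (n + 1))
            return res.x[:n], res.x[-1]

        # four criteria; criterion 0 is best, criterion 3 is worst (illustrative judgements)
        w, xi = bwm_weights(a_best=[1, 2, 4, 8], a_worst=[8, 4, 2, 1], best=0, worst=3)
        print(np.round(w, 3), round(xi, 3))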

  7. Comparison of the Modeling Approach between Membrane Bioreactor and Conventional Activated Sludge Processes

    DEFF Research Database (Denmark)

    Jiang, Tao; Sin, Gürkan; Spanjers, Henri

    2009-01-01

    Activated sludge models (ASM) have been developed and largely applied in conventional activated sludge (CAS) systems. The applicability of ASM to model membrane bioreactors (MBR) and the differences in modeling approaches have not been studied in detail. A laboratory-scale MBR was modeled using ASM...

  8. Continuous Training and Wages: An Empirical Analysis Using a Comparison-Group Approach

    Science.gov (United States)

    Gorlitz, Katja

    2011-01-01

    Using German linked employer-employee data, this paper investigates the short-term impact of on-the-job training on wages. The applied estimation approach was first introduced by Leuven and Oosterbeek (2008). Wages of employees who intended to participate in training but did not do so because of a random event are compared to wages of training…

  9. Risk-based microbiological criteria for Campylobacter in broiler meat: A comparison of two approaches

    DEFF Research Database (Denmark)

    Nauta, Maarten; Andersen, Jens Kirk; Tuominen, Pirkko

    2015-01-01

    Risk-based microbiological criteria can offer a tool to control Campylobacter in the broiler meat production chain. Recently two approaches have been applied to derive such criteria and to analyse their potential impact in terms of human health risk reduction: the risk-based version...

  10. A Comparison of Online and Face-to-Face Approaches to Teaching Introduction to American Government

    Science.gov (United States)

    Bolsen, Toby; Evans, Michael; Fleming, Anna McCaghren

    2016-01-01

    This article reports results from a large study comparing four different approaches to teaching Introduction to American Government: (1) traditional, a paper textbook with 100% face-to-face lecture-style teaching; (2) breakout, a paper textbook with 50% face-to-face lecture-style teaching and 50% face-to-face small-group breakout discussion…

  11. Distinguishing Continuous and Discrete Approaches to Multilevel Mixture IRT Models: A Model Comparison Perspective

    Science.gov (United States)

    Zhu, Xiaoshu

    2013-01-01

    The current study introduced a general modeling framework, multilevel mixture IRT (MMIRT) which detects and describes characteristics of population heterogeneity, while accommodating the hierarchical data structure. In addition to introducing both continuous and discrete approaches to MMIRT, the main focus of the current study was to distinguish…

  12. Learning Biology through Innovative Curricula: A Comparison of Game- and Nongame-Based Approaches

    Science.gov (United States)

    Sadler, Troy D.; Romine, William L.; Menon, Deepika; Ferdig, Richard E.; Annetta, Leonard

    2015-01-01

    This study explored student learning in the context of innovative biotechnology curricula and the effects of gaming as a central element of the learning experience. The quasi-experimentally designed study compared learning outcomes between two curricular approaches: One built around a computer-based game, and the other built around a narrative…

  13. Psychiatric Cultures Compared : Psychiatry and Mental Health Care in the Twentieth Century: Comparisons and Approaches

    NARCIS (Netherlands)

    Gijswijt-Hofstra, Marijke; Oosterhuis, Harry; Vijselaar, Joost; Freeman, Hugh

    2005-01-01

    The history of mental health care in the twentieth century is a relatively uncharted territory. Exemplifying a new emphasis on the comparative approach, this volume offers overviews of various national psychiatric cultures and explores new research subjects. By confronting Dutch psychiatry with

  14. Detecting autologous blood transfusions: a comparison of three passport approaches and four blood markers

    DEFF Research Database (Denmark)

    Mørkeberg, J; Sharpe, K; Belhage, B

    2011-01-01

    Blood passport has been suggested as an indirect tool to detect various kinds of blood manipulations. Autologous blood transfusions are currently undetectable, and the objective of this study was to examine the sensitivities of different blood markers and blood passport approaches in order to det...

  15. A Critical Comparison of Transformation and Deep Approach Theories of Learning

    Science.gov (United States)

    Howie, Peter; Bagnall, Richard

    2015-01-01

    This paper reports a critical comparative analysis of two popular and significant theories of adult learning: the transformation and the deep approach theories of learning. These theories are operative in different educational sectors, are significant, respectively, in each, and they may be seen as both touching on similar concerns with learning…

  16. Comparison of Two Music Training Approaches on Music and Speech Perception in Cochlear Implant Users

    NARCIS (Netherlands)

    Fuller, Christina D; Galvin, John J; Maat, Bert; Başkent, Deniz; Free, Rolien H

    2018-01-01

    In normal-hearing (NH) adults, long-term music training may benefit music and speech perception, even when listening to spectro-temporally degraded signals as experienced by cochlear implant (CI) users. In this study, we compared two different music training approaches in CI users and their effects

  17. A comparison between prescriptive- and performance-based approaches in fire safety design of structures

    DEFF Research Database (Denmark)

    Budny, Iwona; Giuliani, Luisa

    2010-01-01

    methodology of the performance-based fire design approach is considered, with the aid of computer-aided simulations of the main frame of the car park. Nonlinear analyses, with respect to thermally induced effects and with emphasis laid on the collapse modality, are carried out on a frame of the considered...

  18. Comparison between AGC and a tuningless LFC approach based on direct observation of DERs

    DEFF Research Database (Denmark)

    Prostejovsky, Alexander Maria; Marinelli, Mattia

    2017-01-01

    , and the resulting reduction of available inertia. In this paper, we propose a tuningless Load-Frequency Control (LFC) approach able to cope with the changing dynamics of electric power grids. Harnessing the possibilities of modern monitoring and communication means, the so-called Direct Load-Frequency Control (DLFC...

  19. International Students' Motivation and Learning Approach: A Comparison with Local Students

    Science.gov (United States)

    Chue, Kah Loong; Nie, Youyan

    2016-01-01

    Psychological factors contribute to motivation and learning for international students as much as teaching strategies. 254 international students and 144 local students enrolled in a private education institute were surveyed regarding their perception of psychological needs support, their motivation and learning approach. The results from this…

  20. A comparison of two different approaches for mapping potential ozone damage to vegetation. A model study

    International Nuclear Information System (INIS)

    Simpson, D.; Ashmore, M.R.; Emberson, L.; Tuovinen, J.-P.

    2007-01-01

    Two very different types of approaches are currently in use for indicating the risk of ozone damage to vegetation in Europe. One approach is the so-called AOTX (accumulated exposure over threshold of X ppb) index, which is based upon ozone concentrations only. The second type of approach entails an estimate of the amount of ozone entering via the stomates of vegetation, the AFstY approach (accumulated stomatal flux over a threshold of Y nmol m⁻² s⁻¹). The EMEP chemical transport model is used to map these different indicators of ozone damage across Europe, for two illustrative vegetation types, wheat and beech forests. The results show that exceedances of critical levels for either type of indicator are widespread, but that the indicators give very different spatial patterns across Europe. Model simulations for year 2020 scenarios suggest reductions in risks of vegetation damage whichever indicator is used, but suggest that AOT40 is much more sensitive to emission control than AFstY values. - Model calculations of AOT40 and AFstY show very different spatial variations in the risks of ozone damage to vegetation
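
    Both indicators are simple accumulations over hourly values: AOTX sums concentration excesses above X ppb, while AFstY sums stomatal-flux excesses above Y nmol m⁻² s⁻¹. A compact Python sketch of the two accumulations is shown below; the threshold defaults, the omitted daylight/growing-season filtering and the sample numbers are assumptions, not values from the paper.

        import numpy as np

        def aotx(conc_ppb, x=40.0):
            """AOTX in ppb.h: sum of hourly ozone-concentration exceedances of X ppb."""
            return np.clip(np.asarray(conc_ppb) - x, 0.0, None).sum()

        def afsty(flux_nmol_m2_s, y=1.6):
            """AFstY in mmol m-2: accumulated hourly stomatal-flux exceedances of Y nmol m-2 s-1."""
            excess = np.clip(np.asarray(flux_nmol_m2_s) - y, 0.0, None)
            return (excess * 3600.0).sum() * 1e-6   # nmol m-2 s-1 over 3600 s, converted to mmol m-2

        ozone = np.array([35.0, 45.0, 60.0, 52.0])   # hourly concentrations, ppb (daylight hours)
        fst = np.array([1.2, 2.5, 3.1, 2.0])         # hourly mean stomatal flux, nmol m-2 s-1
        print(aotx(ozone), round(afsty(fst), 4))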

  1. A Comparison of Three Approaches to Correct for Direct and Indirect Range Restrictions: A Simulation Study

    Science.gov (United States)

    Pfaffel, Andreas; Schober, Barbara; Spiel, Christiane

    2016-01-01

    A common methodological problem in the evaluation of the predictive validity of selection methods, e.g. in educational and employment selection, is that the correlation between predictor and criterion is biased. Thorndike's (1949) formulas are commonly used to correct for this biased correlation. An alternative approach is to view the selection…
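
    For reference, Thorndike's Case II correction for direct range restriction on the predictor is a one-line formula. A minimal Python sketch follows (notation: r is the correlation in the restricted sample, u is the ratio of the predictor's standard deviation in the unrestricted group to that in the restricted group; the example numbers are made up):

        import math

        def thorndike_case2(r, u):
            """Correct a correlation for direct range restriction on the predictor.
            r: predictor-criterion correlation in the restricted (selected) sample
            u: SD(predictor, unrestricted) / SD(predictor, restricted)"""
            return (r * u) / math.sqrt(1.0 + r * r * (u * u - 1.0))

        print(round(thorndike_case2(r=0.30, u=1.5), 3))   # restricted r = .30 corrected upward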

  2. Post-closure biosphere assessment modelling: comparison of complex and more stylised approaches.

    Science.gov (United States)

    Walke, Russell C; Kirchner, Gerald; Xu, Shulan; Dverstorp, Björn

    2015-10-01

    Geological disposal facilities are the preferred option for high-level radioactive waste, due to their potential to provide isolation from the surface environment (biosphere) on very long timescales. Assessments need to strike a balance between stylised models and more complex approaches that draw more extensively on site-specific information. This paper explores the relative merits of complex versus more stylised biosphere models in the context of a site-specific assessment. The more complex biosphere modelling approach was developed by the Swedish Nuclear Fuel and Waste Management Co (SKB) for the Forsmark candidate site for a spent nuclear fuel repository in Sweden. SKB's approach is built on a landscape development model, whereby radionuclide releases to distinct hydrological basins/sub-catchments (termed 'objects') are represented as they evolve through land rise and climate change. Each of seventeen of these objects is represented with more than 80 site-specific parameters, with about 22 that are time-dependent and result in over 5000 input values per object. The more stylised biosphere models developed for this study represent releases to individual ecosystems without environmental change and include the most plausible transport processes. In the context of regulatory review of the landscape modelling approach adopted in the SR-Site assessment in Sweden, the more stylised representation has helped to build understanding of the more complex modelling approaches by providing bounding results, checking the reasonableness of the more complex modelling, highlighting uncertainties introduced through conceptual assumptions and helping to quantify the conservatisms involved. The more stylised biosphere models are also shown capable of reproducing the results of more complex approaches. A major recommendation is that biosphere assessments need to justify the degree of complexity in modelling approaches as well as simplifying and conservative assumptions. In light of

  3. Comparison of Two Music Training Approaches on Music and Speech Perception in Cochlear Implant Users.

    Science.gov (United States)

    Fuller, Christina D; Galvin, John J; Maat, Bert; Başkent, Deniz; Free, Rolien H

    2018-01-01

    In normal-hearing (NH) adults, long-term music training may benefit music and speech perception, even when listening to spectro-temporally degraded signals as experienced by cochlear implant (CI) users. In this study, we compared two different music training approaches in CI users and their effects on speech and music perception, as it remains unclear which approach to music training might be best. The approaches differed in terms of music exercises and social interaction. For the pitch/timbre group, melodic contour identification (MCI) training was performed using computer software. For the music therapy group, training involved face-to-face group exercises (rhythm perception, musical speech perception, music perception, singing, vocal emotion identification, and music improvisation). For the control group, training involved group nonmusic activities (e.g., writing, cooking, and woodworking). Training consisted of weekly 2-hr sessions over a 6-week period. Speech intelligibility in quiet and noise, vocal emotion identification, MCI, and quality of life (QoL) were measured before and after training. The different training approaches appeared to offer different benefits for music and speech perception. Training effects were observed within-domain (better MCI performance for the pitch/timbre group), with little cross-domain transfer of music training (emotion identification significantly improved for the music therapy group). While training had no significant effect on QoL, the music therapy group reported better perceptual skills across training sessions. These results suggest that more extensive and intensive training approaches that combine pitch training with the social aspects of music therapy may further benefit CI users.

  4. A comparison of direct aspiration versus stent retriever as a first approach ('COMPASS'): protocol.

    Science.gov (United States)

    Turk, Aquilla S; Siddiqui, Adnan H; Mocco, J

    2018-02-20

    Acute ischemic stroke is a potentially devastating condition and leading cause of morbidity and mortality, affecting an estimated 800 000 people per year in the USA. The natural history of untreated or unrevascularized large vessel occlusions in acute stroke patients results in mortality rates approaching 30%, with only 25% achieving good neurologic outcomes at 90 days. Recently, data have demonstrated that early endovascular recanalization of large vessel occlusions results in better outcomes than medical therapy alone. However, the majority of patients in these studies were treated with a stent retriever based approach. The purpose of COMPASS is to evaluate whether patients treated with a direct aspiration first pass (ADAPT) approach have non-inferior functional outcomes to those treated with a stent retriever as the firstline (SRFL) approach. All patients who meet the inclusion and exclusion criteria and consent to participate will be enrolled at participating centers. Treatment will be randomly assigned by a central web based system in a 1:1 manner to treatment with either ADAPT or SRFL thrombectomy. Statistical methodology is prespecified with details available in the statistical analysis plan. The trial recently completed enrollment, and data collection/verification is ongoing. The final results will be made available on completion of enrollment and follow-up. This paper details the design of the COMPASS trial, a randomized, blinded adjudicator, concurrent, controlled trial of patients treated with either ADAPT or SRFL approaches in order to evaluate whether ADAPT results in non-inferior functional outcome. NCT02466893, Results. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  5. A comparison of regional flood frequency analysis approaches in a simulation framework

    Science.gov (United States)

    Ganora, D.; Laio, F.

    2016-07-01

    Regional frequency analysis (RFA) is a well-established methodology to provide an estimate of the flood frequency curve at ungauged (or scarcely gauged) sites. Different RFA approaches exist, depending on the way the information is transferred to the site of interest, but it is not clear in the literature if a specific method systematically outperforms the others. The aim of this study is to provide a framework in which to carry out the intercomparison, by building up a virtual environment based on synthetically generated data. The considered regional approaches include: (i) a unique regional curve for the whole region; (ii) a multiple-region model where homogeneous subregions are determined through cluster analysis; (iii) a Region-of-Influence model which defines a homogeneous subregion for each site; (iv) a spatially smooth estimation procedure where the parameters of the regional model vary continuously in space. Virtual environments are generated considering different patterns of heterogeneity, including step changes and smooth variations. If the region is heterogeneous, with the parent distribution changing continuously within the region, the spatially smooth regional approach outperforms the others, with overall errors 10-50% lower than the other methods. In the case of a step change, the spatially smooth and clustering procedures perform similarly if the heterogeneity is moderate, while clustering procedures work better when the step change is severe. To extend our findings, an extensive sensitivity analysis has been performed to investigate the effect of sample length, number of virtual stations, return period of the predicted quantile, variability of the scale parameter of the parent distribution, number of predictor variables and different parent distributions. Overall, the spatially smooth approach appears to be the most robust approach, as its performance is more stable across different patterns of heterogeneity, especially when short records are

  6. RNA structure alignment by a unit-vector approach.

    Science.gov (United States)

    Capriotti, Emidio; Marti-Renom, Marc A

    2008-08-15

    The recent discovery of tiny RNA molecules such as microRNAs and small interfering RNAs is transforming the view of RNA as a simple information transfer molecule. Similar to proteins, the native three-dimensional structure of RNA determines its biological activity. Therefore, classifying the current structural space is paramount for functionally annotating RNA molecules. The increasing number of RNA structures deposited in the PDB requires more accurate, automatic and benchmarked methods for RNA structure comparison. In this article, we introduce a new algorithm for RNA structure alignment based on a unit-vector approach. The algorithm has been implemented in the SARA program, which results in RNA structure pairwise alignments and their statistical significance. The SARA program has been implemented to be of general applicability, even when no secondary structure can be calculated from the RNA structures. A benchmark against the ARTS program using a set of 1275 non-redundant pairwise structure alignments results in approximately 6% extra alignments with at least 50% structurally superposed nucleotides and base pairs. A first attempt to perform automatic RNA functional annotation based on structure alignments indicates that SARA can correctly assign the deepest SCOR classification to >60% of the query structures. The SARA program is freely available through a World Wide Web server http://sgu.bioinfo.cipf.es/services/SARA/. Supplementary data are available at Bioinformatics online.
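
    The unit-vector idea can be sketched quickly: successive backbone atoms define unit vectors, an optimal rotation between the two unit-vector sets is found with the Kabsch/SVD procedure, and the root-mean-square deviation of the rotated unit vectors is reported. The Python sketch below is a generic illustration of a unit-vector RMS on an already-aligned pair of chains, not the SARA algorithm itself.

        import numpy as np

        def unit_vectors(coords):
            """Unit vectors between consecutive backbone atoms (e.g. C3' atoms)."""
            d = np.diff(coords, axis=0)
            return d / np.linalg.norm(d, axis=1, keepdims=True)

        def unit_vector_rms(coords_a, coords_b):
            """Unit-vector RMS between two equally long, residue-aligned chains."""
            u, v = unit_vectors(coords_a), unit_vectors(coords_b)
            # Kabsch/SVD: rotation R minimising sum ||u_i - R v_i||^2
            U, _, Vt = np.linalg.svd(u.T @ v)
            d = np.sign(np.linalg.det(U @ Vt))
            R = U @ np.diag([1.0, 1.0, d]) @ Vt
            return np.sqrt(np.mean(np.sum((u - v @ R.T) ** 2, axis=1)))

        rng = np.random.default_rng(4)
        chain = np.cumsum(rng.normal(size=(30, 3)), axis=0)          # toy backbone trace
        noisy = chain + 0.05 * rng.normal(size=chain.shape)
        print(round(unit_vector_rms(chain, noisy), 3))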

  7. Performance comparison of novel WNN approach with RBFNN in navigation of autonomous mobile robotic agent

    Directory of Open Access Journals (Sweden)

    Ghosh Saradindu

    2016-01-01

    This paper addresses the performance comparison of a Radial Basis Function Neural Network (RBFNN) with a novel Wavelet Neural Network (WNN) for designing intelligent controllers for path planning of a mobile robot in an unknown environment. In the proposed WNN, different types of activation functions such as Mexican Hat, Gaussian and Morlet wavelet functions are used in the hidden nodes. The neural networks are trained by an intelligent supervised learning technique so that the robot makes a collision-free path in the unknown environment during navigation from different starting points to targets/goals. The efficiency of the two algorithms is compared using MATLAB simulations and an experimental setup with an Arduino Mega 2560 microcontroller, in terms of path length and time taken to reach the target as indicators of the accuracy of the network models.

  8. Predictive Utility of Personality Disorder in Depression: Comparison of Outcomes and Taxonomic Approach.

    Science.gov (United States)

    Newton-Howes, Giles; Mulder, Roger; Ellis, Pete M; Boden, Joseph M; Joyce, Peter

    2017-09-19

    There is debate around the best model for diagnosing personality disorder, both in terms of its relationship to the empirical data and clinical utility. Four randomized controlled trials examining various treatments for depression were analyzed at an individual patient level. Three different approaches to the diagnosis of personality disorder were analyzed in these patients. A total of 578 depressed patients were included in the analysis. Personality disorder, however measured, was of little predictive utility in the short term but added significantly to predictive modelling of medium-term outcomes, accounting for more than twice as much of the variance in social functioning outcome as depression psychopathology. Personality disorder assessment is of predictive utility with longer timeframes and when considering social outcomes as opposed to symptom counts. This utility is sufficiently great that there appears to be value in assessing personality; however, no particular approach outperforms any other.

  9. Comparison of different Kalman filter approaches in deriving time varying connectivity from EEG data.

    Science.gov (United States)

    Ghumare, Eshwar; Schrooten, Maarten; Vandenberghe, Rik; Dupont, Patrick

    2015-08-01

    Kalman filter approaches are widely applied to derive time-varying effective connectivity from electroencephalographic (EEG) data. For multi-trial data, a classical Kalman filter (CKF), designed for the estimation of single-trial data, can be implemented by trial-averaging the data or by averaging single-trial estimates. A general linear Kalman filter (GLKF) provides an extension for multi-trial data. In this work, we studied the performance of the different Kalman filtering approaches for different values of signal-to-noise ratio (SNR), number of trials and number of EEG channels. We used a simulated model from which we calculated scalp recordings. From these recordings, we estimated cortical sources. Multivariate autoregressive model parameters and partial directed coherence were calculated for these estimated sources and compared with the ground truth. The results showed an overall superior performance of the GLKF except for low levels of SNR and low numbers of trials.
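
    The CKF idea can be illustrated on a single channel: the time-varying AR coefficients form the state, the most recent samples form the observation matrix, and a random-walk state model tracks the changes. The univariate Python sketch below (model order, adaptation constant and test signal are arbitrary choices) only illustrates this principle; it is not the multivariate GLKF used for multi-trial data.

        import numpy as np

        def kalman_tvar(y, order=2, uc=1e-3, r=1.0):
            """Track time-varying AR(order) coefficients of signal y with a
            random-walk Kalman filter (state = AR coefficients)."""
            n = len(y)
            a = np.zeros(order)                    # coefficient estimate (state)
            P = np.eye(order)                      # state covariance
            Q = uc * np.eye(order)                 # random-walk process noise
            coeffs = np.zeros((n, order))
            for t in range(order, n):
                H = y[t - order:t][::-1].reshape(1, -1)   # past samples, newest first
                P = P + Q                                 # predict
                S = H @ P @ H.T + r
                K = (P @ H.T) / S                         # Kalman gain
                a = a + (K * (y[t] - H @ a)).ravel()      # update with the innovation
                P = (np.eye(order) - K @ H) @ P
                coeffs[t] = a
            return coeffs

        # toy signal whose AR(1)-like dependence changes halfway through
        rng = np.random.default_rng(5)
        y = np.zeros(400)
        for t in range(1, 400):
            phi = 0.9 if t < 200 else -0.5
            y[t] = phi * y[t - 1] + rng.normal(scale=0.5)
        print(np.round(kalman_tvar(y)[[150, 350]], 2))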

  10. Robotic longitudinal pancreaticojejunostomy for chronic pancreatitis: Comparison of clinical outcomes and cost to the open approach.

    Science.gov (United States)

    Kirks, Russell C; Lorimer, Patrick D; Fruscione, Michael; Cochran, Allyson; Baker, Erin H; Iannitti, David A; Vrochides, Dionisios; Martinie, John B

    2017-09-01

    This study compares clinical and cost outcomes of robot-assisted laparoscopic (RAL) and open longitudinal pancreaticojejunostomy (LPJ) for chronic pancreatitis. Clinical and cost data were retrospectively compared between open and RAL LPJ performed at a single center from 2008-2015. Twenty-six patients underwent LPJ: 19 open and 7 RAL. Two robot-assisted cases converted to open were included in the open group for analysis. Patients undergoing RAL LPJ had less intraoperative blood loss, a shorter surgical length of stay, and lower medication costs. Operation supply cost was higher in the RAL group. No difference in hospitalization cost was found. Versus the open approach, RAL LPJ performed for chronic pancreatitis shortens hospitalization and reduces medication costs; hospitalization costs are equivalent. A higher operative cost for RAL LPJ is mitigated by a shorter hospitalization. Decreased morbidity and healthcare resource economy support use of the robotic approach for LPJ when appropriate. Copyright © 2017 John Wiley & Sons, Ltd.

  11. Alcohol intake and colorectal cancer: a comparison of approaches for including repeated measures of alcohol consumption

    DEFF Research Database (Denmark)

    Thygesen, Lau Caspar; Wu, Kana; Grønbaek, Morten

    2008-01-01

    BACKGROUND: In numerous studies, alcohol intake has been found to be positively associated with colorectal cancer risk. However, the majority of studies included only one exposure measurement, which may bias the results if long-term intake is relevant. METHODS: We compared different approaches for including repeated measures of alcohol intake among 47,432 US men enrolled in the Health Professionals Follow-up Study. Questionnaires including questions on alcohol intake had been completed in 1986, 1990, 1994, and 1998. The outcome was incident colorectal cancer during follow-up from 1986 to 2002. RESULTS: During follow-up, 868 members of the cohort experienced colorectal cancer. Baseline, updated, and cumulative average alcohol intakes were positively associated with colorectal cancer, with only minor differences among the approaches. These results support moderately increased risk for intake >30 g...

  12. Neurobiological studies of risk assessment: A comparison of expected utility and mean-variance approaches

    OpenAIRE

    d'Acremont, M.; Bossaerts, Peter

    2008-01-01

    When modeling valuation under uncertainty, economists generally prefer expected utility because it has an axiomatic foundation, meaning that the resulting choices will satisfy a number of rationality requirements. In expected utility theory, values are computed by multiplying probabilities of each possible state of nature by the payoff in that state and summing the results. The drawback of this approach is that all state probabilities need to be dealt with separately, which becomes extremely ...
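
    The contrast is easy to state numerically: expected utility needs the full state-by-state distribution, whereas the mean-variance criterion compresses it into two moments. A small Python sketch with an assumed CRRA utility, an assumed risk-aversion parameter and a made-up three-state gamble:

        import numpy as np

        probs = np.array([0.5, 0.3, 0.2])          # state probabilities
        payoffs = np.array([120.0, 100.0, 60.0])   # payoff in each state

        def crra(x, gamma=2.0):
            """Constant relative risk aversion utility (assumed functional form)."""
            return x ** (1.0 - gamma) / (1.0 - gamma)

        # expected utility: probability-weighted utility, state by state
        expected_utility = np.sum(probs * crra(payoffs))

        # mean-variance: only the first two moments of the payoff distribution
        mean = np.sum(probs * payoffs)
        variance = np.sum(probs * (payoffs - mean) ** 2)
        risk_aversion = 0.01                        # trade-off parameter (assumed)
        mean_variance_value = mean - 0.5 * risk_aversion * variance

        print(round(expected_utility, 5), round(mean_variance_value, 2))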

  13. Sample preparation with solid phase microextraction and exhaustive extraction approaches: Comparison for challenging cases.

    Science.gov (United States)

    Boyacı, Ezel; Rodríguez-Lafuente, Ángel; Gorynski, Krzysztof; Mirnaghi, Fatemeh; Souza-Silva, Érica A; Hein, Dietmar; Pawliszyn, Janusz

    2015-05-11

    In chemical analysis, sample preparation is frequently considered the bottleneck of the entire analytical method. The success of the final method strongly depends on understanding the entire process of analysis of a particular type of analyte in a sample, namely: the physicochemical properties of the analytes (solubility, volatility, polarity etc.), the environmental conditions, and the matrix components of the sample. Various sample preparation strategies have been developed based on exhaustive or non-exhaustive extraction of analytes from matrices. Undoubtedly, amongst all sample preparation approaches, liquid extraction, including liquid-liquid (LLE) and solid phase extraction (SPE), are the most well-known, widely used, and commonly accepted methods by many international organizations and accredited laboratories. Both methods are well documented and there are many well defined procedures, which make them, at first sight, the methods of choice. However, many challenging tasks, such as complex matrix applications, on-site and in vivo applications, and determination of matrix-bound and free concentrations of analytes, are not easily attainable with these classical approaches for sample preparation. In the last two decades, the introduction of solid phase microextraction (SPME) has brought significant progress in the sample preparation area by facilitating on-site and in vivo applications, time weighted average (TWA) and instantaneous concentration determinations. Recently introduced matrix compatible coatings for SPME facilitate direct extraction from complex matrices and fill the gap in direct sampling from challenging matrices. Following introduction of SPME, numerous other microextraction approaches evolved to address limitations of the above mentioned techniques. There is not a single method that can be considered as a universal solution for sample preparation. This review aims to show the main advantages and limitations of the above mentioned sample

  14. A Comparison of Organizational Structure and Pedagogical Approach: Online versus Face-to-face

    OpenAIRE

    Donovan A. McFarlane

    2011-01-01

    This paper examines online versus face-to-face organizational structure and pedagogy in terms of education and the teaching and learning process. The author distinguishes several important terms related to distance/online/e-learning, virtual learning and brick-and-mortar learning interactions and concepts such as asynchronous and synchronous interactions, etc, before deliberating on perceived differences in organizational structure and pedagogical approaches of virtual and brick-and-mortar sc...

  15. Comparison between endoscopic and microscopic approaches for surgery of pituitary tumours.

    Science.gov (United States)

    Khan, Inamullah; Shamim, Muhammad Shahzad

    2017-11-01

    Surgical techniques for resection of pituitary tumours have come a long way since they were first introduced in the late 18th century. Nowadays, most pituitary surgeries are performed through a trans-nasal trans-sphenoidal approach, using either a microscope or an endoscope. Herein the authors review the literature and compare these two instruments with regard to their outcomes when used for resection of pituitary tumours.

  16. Optimal PID settings for first and second-order processes - Comparison with different controller tuning approaches

    OpenAIRE

    Pappas, Iosif

    2016-01-01

    PID controllers are extensively used in industry. Although many tuning methodologies exist, finding good controller settings is not an easy task and frequently optimization-based design is preferred to satisfy more complex criteria. In this thesis, the focus was to find which tuning approaches, if any, present close to optimal behavior. Pareto-optimal controllers were found for different first and second-order processes with time delay. Performance was quantified in terms of the integrat...

  17. Comparison of numerical approaches to solve a Poincare-covariant Faddeev equation

    International Nuclear Information System (INIS)

    Alkofer, R.; Eichmann, G.; Krassnigg, A.; Schwinzerl, M.

    2006-01-01

    The quark core of baryons can be described with the help of the numerical solution of the Poincare-covariant Faddeev equation. Hereby the elements used, e.g. the quark propagator, are taken from non-perturbative studies of Landau gauge QCD. Different numerical approaches to solving the relativistic three-quark problem in this way are compared, and benchmarked results for the efficiency of different algorithms are presented. (author)

  18. Comparisons of single-stage and two-stage approaches to genomic selection.

    Science.gov (United States)

    Schulz-Streeck, Torben; Ogutu, Joseph O; Piepho, Hans-Peter

    2013-01-01

    Genomic selection (GS) is a method for predicting breeding values of plants or animals using many molecular markers that is commonly implemented in two stages. In plant breeding the first stage usually involves computation of adjusted means for genotypes which are then used to predict genomic breeding values in the second stage. We compared two classical stage-wise approaches, which either ignore or approximate correlations among the means by a diagonal matrix, and a new method, to a single-stage analysis for GS using ridge regression best linear unbiased prediction (RR-BLUP). The new stage-wise method rotates (orthogonalizes) the adjusted means from the first stage before submitting them to the second stage. This makes the errors approximately independently and identically normally distributed, which is a prerequisite for many procedures that are potentially useful for GS such as machine learning methods (e.g. boosting) and regularized regression methods (e.g. lasso). This is illustrated in this paper using componentwise boosting. The componentwise boosting method minimizes squared error loss using least squares and iteratively and automatically selects markers that are most predictive of genomic breeding values. Results are compared with those of RR-BLUP using fivefold cross-validation. The new stage-wise approach with rotated means was slightly more similar to the single-stage analysis than the classical two-stage approaches based on non-rotated means for two unbalanced datasets. This suggests that rotation is a worthwhile pre-processing step in GS for the two-stage approaches for unbalanced datasets. Moreover, the predictive accuracy of stage-wise RR-BLUP was higher (5.0-6.1%) than that of componentwise boosting.
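
    As a rough illustration of the RR-BLUP step discussed above, the following sketch (simulated marker data, arbitrary shrinkage parameter; not the authors' code) shrinks all marker effects with a single ridge penalty and computes genomic estimated breeding values:

        import numpy as np

        # Hypothetical input: n lines genotyped at p SNP markers coded 0/1/2; y holds
        # adjusted phenotypic means, i.e. the first-stage output of a two-stage analysis.
        rng = np.random.default_rng(0)
        n, p = 60, 300
        X = rng.integers(0, 3, size=(n, p)).astype(float)
        true_effects = rng.normal(0.0, 0.2, size=p)
        y = X @ true_effects + rng.normal(0.0, 1.0, size=n)

        def rr_blup(X, y, lam):
            """Ridge-regression BLUP: one common penalty lam shrinks every marker effect."""
            Xc = X - X.mean(axis=0)                      # centre marker scores
            return np.linalg.solve(Xc.T @ Xc + lam * np.eye(X.shape[1]), Xc.T @ (y - y.mean()))

        beta_hat = rr_blup(X, y, lam=100.0)
        gebv = (X - X.mean(axis=0)) @ beta_hat           # genomic estimated breeding values
        print("accuracy vs. simulated truth:", np.corrcoef(gebv, X @ true_effects)[0, 1].round(2))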

  19. Comparison of approaches for the characterization of contamination at rural megasites

    DEFF Research Database (Denmark)

    Rein, Arno; Popp, Steffen; Zacharias, Steffen

    2011-01-01

    the complete area. The DP investigation provided information on the contamination distribution and yielded also important information on hydraulic conditions. Statistical analysis of the results applying indicator kriging revealed that the conventional approach is markedly risky when decision-making relies...... consideration into account, DP-based groundwater screening is recommended to obtain either first or complementary information on the entire site. Based on these data, also locations for a long-term monitoring could be selected if temporal variability is assumed relevant....

  20. Accounting for context in studies of health inequalities: a review and comparison of analytic approaches.

    Science.gov (United States)

    Schempf, Ashley H; Kaufman, Jay S

    2012-10-01

    A common epidemiologic objective is to evaluate the contribution of residential context to individual-level disparities by race or socioeconomic position. We reviewed analytic strategies to account for the total (observed and unobserved factors) contribution of environmental context to health inequalities, including conventional fixed effects (FE) and hybrid FE implemented within a random effects (RE) or a marginal model. To illustrate results and limitations of the various analytic approaches of accounting for the total contextual component of health disparities, we used data on births nested within neighborhoods as an applied example of evaluating neighborhood confounding of racial disparities in gestational age at birth, including both a continuous and a binary outcome. Ordinary and RE models provided disparity estimates that can be substantially biased in the presence of neighborhood confounding. Both FE and hybrid FE models can account for cluster level confounding and provide disparity estimates unconfounded by neighborhood, with the latter having greater flexibility in allowing estimation of neighborhood-level effects and intercept/slope variability when implemented in a RE specification. Given the range of models that can be implemented in a hybrid approach and the frequent goal of accounting for contextual confounding, this approach should be used more often. Published by Elsevier Inc.
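
    The hybrid FE specification mentioned above is commonly implemented by splitting the individual exposure into a cluster mean and a within-cluster deviation inside a random-effects model; the within-cluster coefficient is then unconfounded by neighbourhood. A minimal sketch with simulated (not study) data, assuming the statsmodels package:

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Simulated births nested in neighbourhoods; 'black' is the individual exposure,
        # 'gest_age' the continuous outcome; an unobserved neighbourhood effect confounds the disparity.
        rng = np.random.default_rng(1)
        n_hoods, per_hood = 100, 30
        hood = np.repeat(np.arange(n_hoods), per_hood)
        u = rng.normal(0, 1, n_hoods)                                 # unobserved neighbourhood effect
        black = rng.binomial(1, 1 / (1 + np.exp(-u[hood])))           # exposure correlated with context
        gest_age = 39 - 0.3 * black + 0.8 * u[hood] + rng.normal(0, 1.5, n_hoods * per_hood)
        df = pd.DataFrame({"hood": hood, "black": black, "gest_age": gest_age})

        # Hybrid FE: decompose the exposure into a neighbourhood mean and a within-neighbourhood deviation.
        df["black_mean"] = df.groupby("hood")["black"].transform("mean")
        df["black_within"] = df["black"] - df["black_mean"]

        fit = smf.mixedlm("gest_age ~ black_within + black_mean", df, groups=df["hood"]).fit()
        print(fit.params[["black_within", "black_mean"]])   # within term ~ FE-style, cluster-unconfounded estimate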

  1. Comparison of scientific and engineering approaches to the treatment of mixed wastes

    International Nuclear Information System (INIS)

    Gilbert, K.V.; Bowers, J.S.

    1993-12-01

    This paper discusses two approaches to the treatment of mixed waste. (Mixed waste, defined as radioactive waste that is co-contaminated with hazardous waste as defined in the Resource Conservation and Recovery Act, is presently stored throughout the United States awaiting the establishment of treatment capability.) The first approach employs conventional engineering that focuses on low-risk technology which has been proven in other industries in similar applications and is adaptable for waste treatment use. The term ''low risk'' means that implementation success is relatively certain, and the major uncertainty is the degree of success. Technologies under consideration include centrifugation, evaporation, microfiltration and stabilization. Process offgases are treated with traditional scrubbers and carbon absorption units. For the scientific approach, Lawrence Livermore National Laboratory is in the conceptual design phase of a project to demonstrate alternatives to incineration for destroying organic contaminants in radioactive waste streams. This Mixed Waste Management Facility will use approximately 15000 square feet of an existing facility to demonstrate an integrated waste management system. Robotic and telerobotic systems will be employed for waste segregation, characterization and feed preparation. Waste feeds will be treated using molten salt oxidation, mediated electrochemical oxidation and wet oxidation. Residues, which can be managed as radioactive-only waste, will be immobilized in an organic matrix prior to shipment to an authorized disposal site.

  2. A comparison of bilingual education and generalist teachers' approaches to scientific biliteracy

    Science.gov (United States)

    Garza, Esther

    The purpose of this study was to determine if educators were capitalizing on bilingual learners' use of their biliterate abilities to acquire scientific meaning and discourse that would formulate a scientific biliterate identity. Mixed methods were used to explore teachers' use of biliteracy and Funds of Knowledge (Moll, L., Amanti, C., Neff, D., & Gonzalez, N., 1992; Gonzalez, Moll, & Amanti, 2005) from the students' Latino heritage while conducting science inquiry. The research study explored four constructs that conceptualized scientific biliteracy. The four constructs include science literacy, science biliteracy, reading comprehension strategies and students' cultural backgrounds. A total of 156 4th-5th grade bilingual and general education teachers in South Texas were surveyed using the Teacher Scientific Biliteracy Inventory (TSBI), and five teachers' science lessons were observed. Qualitative findings revealed that a variety of scientific biliteracy instructional strategies were frequently used in both bilingual and general education classrooms. The language used to deliver this instruction varied. A General Linear Model revealed that classroom assignment, bilingual or general education, had a significant effect on a teacher's instructional approach to employ scientific biliteracy. A simple linear regression found that the TSBI accounted for 17% of the variance on 4th grade reading benchmarks. Mixed methods results indicated that teachers were utilizing scientific biliteracy strategies in English, Spanish and/or both languages. Household items and science experimentation at home were encouraged by teachers to incorporate the students' cultural backgrounds. Finally, science inquiry was conducted through a universal approach to science learning versus a multicultural approach to science learning.

  3. A comparison of two surgical approaches to the scapulohumeral joint in dogs.

    Science.gov (United States)

    McLaughlin, R; Roush, J K

    1995-01-01

    Two scapulohumeral arthrotomy techniques were evaluated and compared in 10 normal, young adult greyhounds. A caudolateral approach with craniodorsal retraction of the teres minor muscle (no-tenotomy) and a craniolateral approach with tenotomy of the infraspinatus tendon were each performed unilaterally in 5 dogs. The dogs were evaluated using force plate gait analysis, lameness evaluation, radiography, and goniometry for 5 weeks and then euthanatized. Tenotomy sites and sections of the humeral articular cartilage were collected from shoulder joints that had been operated on and examined microscopically. The same surgical approach was then performed on the contralateral shoulder in the cadavers, and exposure of the humeral articular cartilage was measured using planimetry. Peak vertical force applied to the operated limbs in the tenotomy group was significantly less than preoperative levels on day 3 and significantly less than the no-tenotomy group on days 21 and 28. The peak vertical force applied to the operated limbs in the no-tenotomy group was not significantly different from preoperative levels during the study. Scapulohumeral arthrotomy by tenotomy of the infraspinatus resulted in decreased range-of-motion and joint extension compared with joints operated on without tenotomies, but provided significantly greater exposure to the articular surface. Scapulohumeral arthrotomy with craniodorsal retraction of the teres minor muscle did not significantly alter goniometric measurements compared with unoperated joints. Both techniques resulted in similar subjective lameness scores and caused no gross, microscopic or radiographic evidence of articular cartilage damage.

  4. Comparison of Australian and American orthodontic clinical approaches towards root resorption.

    Science.gov (United States)

    Lim, Elaine; Sameshima, Glenn; Petocz, Peter; Darendeliler, Ali

    2012-11-01

    As part of The Rocky Mountain Travelling Fellowship, a pilot survey was conducted to assess current diagnostic and clinical approaches to the management of orthodontic patients in relation to root resorption. Groups comprising Australians (Sydney, New South Wales) and North Americans (Los Angeles, California), in two stages of their orthodontic careers (post-graduate orthodontic students from the University of Sydney and University of Southern California and qualified practising orthodontists) were asked to complete a questionnaire. The questions examined diagnosis and management approaches related to root resorption used in their clinical practice. Replies demonstrated that there were differences in management depending on operator experience and the country of clinical practice. However, a summarised common approach to orthodontic root resorption comprised (1) the use of an orthopantomogram as a screening diagnostic tool, followed by periapical radiographs for those perceived as 'higher risk' patients, particularly individuals with a history of root resorption; (2) a six monthly radiographic review during treatment; (3) the use of light forces and/or rest periods (discontinuous forces) every two to three months; (4) the extraction of deciduous teeth if permanent successors were erupting ectopically and causing damage to adjacent root structures; and (5) the use of fixed retention after treatment. This project was intended to initiate discussion and form a basis for further investigation into the clinical management of orthodontic root resorption.

  5. Incarcerated inguinal hernia management in children: 'a comparison of the open and laparoscopic approach'.

    Science.gov (United States)

    Mishra, Pankaj Kumar; Burnand, Katherine; Minocha, Ashish; Mathur, Azad B; Kulkarni, Milind S; Tsang, Thomas

    2014-06-01

    To compare the outcomes of management of incarcerated inguinal hernia by open versus laparoscopic approach. This is a retrospective analysis of incarcerated inguinal hernia in a paediatric surgery centre involving four consultants. Manual reduction was attempted in all patients, and failure was managed by emergency surgery. The laparoscopy group had 27 patients. Four patients failed manual reduction and underwent emergency laparoscopic surgery. Three of them had small bowel strangulation which was reduced laparoscopically. The strangulated bowel was dusky in colour initially but changed to normal colour subsequently under vision. The fourth patient required appendectomy for strangulated appendix. One patient had concomitant repair of umbilical hernia and one patient had laparoscopic pyloromyotomy at the same time. One patient had testicular atrophy, one had hydrocoele and one had recurrence of hernia on the asymptomatic side. The open surgery group had 45 patients. Eleven patients had failed manual reduction requiring emergency surgery, of which two required resection and anastomosis of small intestine. One patient in this group had concomitant repair of undescended testis. There was no recurrence in this group, one had testicular atrophy and seven had metachronous hernia. Both open herniotomy and laparoscopic repair offer safe surgery with comparable outcomes for incarcerated inguinal hernia in children. Laparoscopic approach and hernioscopy at the time of open approach appear to show the advantage of repairing the contralateral patent processus vaginalis at the same time and avoiding metachronous inguinal hernia.

  6. Neurobiological studies of risk assessment: a comparison of expected utility and mean-variance approaches.

    Science.gov (United States)

    D'Acremont, Mathieu; Bossaerts, Peter

    2008-12-01

    When modeling valuation under uncertainty, economists generally prefer expected utility because it has an axiomatic foundation, meaning that the resulting choices will satisfy a number of rationality requirements. In expected utility theory, values are computed by multiplying probabilities of each possible state of nature by the payoff in that state and summing the results. The drawback of this approach is that all state probabilities need to be dealt with separately, which becomes extremely cumbersome when it comes to learning. Finance academics and professionals, however, prefer to value risky prospects in terms of a trade-off between expected reward and risk, where the latter is usually measured in terms of reward variance. This mean-variance approach is fast and simple and greatly facilitates learning, but it impedes assigning values to new gambles on the basis of those of known ones. To date, it is unclear whether the human brain computes values in accordance with expected utility theory or with mean-variance analysis. In this article, we discuss the theoretical and empirical arguments that favor one or the other theory. We also propose a new experimental paradigm that could determine whether the human brain follows the expected utility or the mean-variance approach. Behavioral results of implementation of the paradigm are discussed.
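
    To make the contrast concrete, a small worked example (payoffs, probabilities, utility curvature and the risk-aversion weight are all invented for illustration) values the same gamble both ways:

        import numpy as np

        # A simple gamble: three states of nature with probabilities and payoffs.
        p = np.array([0.2, 0.5, 0.3])
        x = np.array([100.0, 40.0, -20.0])

        # Expected-utility valuation: weight the utility of each state payoff by its probability.
        def utility(w, gamma=0.5):
            # concave utility on shifted wealth so the worst payoff stays positive (toy choice)
            return (w + 50.0) ** (1 - gamma) / (1 - gamma)

        expected_utility = np.sum(p * utility(x))

        # Mean-variance valuation: trade expected reward against reward variance.
        mean = np.sum(p * x)
        variance = np.sum(p * (x - mean) ** 2)
        risk_aversion = 0.01
        mean_variance_value = mean - risk_aversion * variance

        print("expected utility:", expected_utility, "| mean-variance value:", mean_variance_value)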

  7. Basic echocardiography for undergraduate students: a comparison of different peer-teaching approaches.

    Science.gov (United States)

    Gradl-Dietsch, G; Menon, A K; Gürsel, A; Götzenich, A; Hatam, N; Aljalloud, A; Schrading, S; Hölzl, F; Knobe, M

    2018-02-01

    The aim of this study was to assess the impact of different teaching interventions in a peer-teaching environment on basic echocardiography skills and to examine the influence of gender on learning outcomes. We randomly assigned 79 s year medical students (55 women, 24 men) to one of four groups: peer teaching (PT), peer teaching using Peyton's four-step approach (PPT), team based learning (TBL) and video-based learning (VBL). All groups received theoretical and practical hands-on training according to the different approaches. Using a pre-post-design we assessed differences in theoretical knowledge [multiple choice (MC) exam], practical skills (Objective Structured Practical Examination, OSPE) and evaluation results with respect to gender. There was a significant gain in theoretical knowledge for all students. There were no relevant differences between the four groups regarding the MC exam and OSPE results. The majority of students achieved good or very good results. Acceptance of the peer-teaching concept was moderate and all students preferred medical experts to peer tutors even though the overall rating of the instructors was fairly good. Students in the Video group would have preferred a different training method. There was no significant effect of gender on evaluation results. Using different peer-teaching concepts proved to be effective in teaching basic echocardiography. Gender does not seem to have an impact on effectiveness of the instructional approach. Qualitative analysis revealed limited acceptance of peer teaching and especially of video-based instruction.

  8. Optimising MR perfusion imaging: comparison of different software-based approaches in acute ischaemic stroke

    Energy Technology Data Exchange (ETDEWEB)

    Schaafs, Lars-Arne [Charite-Universitaetsmedizin, Department of Radiology, Berlin (Germany); Charite-Universitaetsmedizin, Academic Neuroradiology, Department of Neurology and Center for Stroke Research, Berlin (Germany); Porter, David [Fraunhofer Institute for Medical Image Computing MEVIS, Bremen (Germany); Audebert, Heinrich J. [Charite-Universitaetsmedizin, Department of Neurology with Experimental Neurology, Berlin (Germany); Fiebach, Jochen B.; Villringer, Kersten [Charite-Universitaetsmedizin, Academic Neuroradiology, Department of Neurology and Center for Stroke Research, Berlin (Germany)

    2016-11-15

    Perfusion imaging (PI) is susceptible to confounding factors such as motion artefacts as well as delay and dispersion (D/D). We evaluate the influence of different post-processing algorithms on hypoperfusion assessment in PI analysis software packages to improve the clinical accuracy of stroke PI. Fifty patients with acute ischaemic stroke underwent MRI imaging in the first 24 h after onset. Diverging approaches to motion and D/D correction were applied. The calculated MTT and CBF perfusion maps were assessed by volumetry of lesions and tested for agreement with a standard approach and with the final lesion volume (FLV) on day 6 in patients with persisting vessel occlusion. MTT map lesion volumes were significantly smaller throughout the software packages with correction of motion and D/D when compared to the commonly used approach with no correction (p = 0.001-0.022). Volumes on CBF maps did not differ significantly (p = 0.207-0.925). All packages with advanced post-processing algorithms showed a high level of agreement with FLV (ICC = 0.704-0.879). Correction of D/D had a significant influence on estimated lesion volumes and leads to significantly smaller lesion volumes on MTT maps. This may improve patient selection. (orig.)

  9. Optimising MR perfusion imaging: comparison of different software-based approaches in acute ischaemic stroke

    International Nuclear Information System (INIS)

    Schaafs, Lars-Arne; Porter, David; Audebert, Heinrich J.; Fiebach, Jochen B.; Villringer, Kersten

    2016-01-01

    Perfusion imaging (PI) is susceptible to confounding factors such as motion artefacts as well as delay and dispersion (D/D). We evaluate the influence of different post-processing algorithms on hypoperfusion assessment in PI analysis software packages to improve the clinical accuracy of stroke PI. Fifty patients with acute ischaemic stroke underwent MRI imaging in the first 24 h after onset. Diverging approaches to motion and D/D correction were applied. The calculated MTT and CBF perfusion maps were assessed by volumetry of lesions and tested for agreement with a standard approach and with the final lesion volume (FLV) on day 6 in patients with persisting vessel occlusion. MTT map lesion volumes were significantly smaller throughout the software packages with correction of motion and D/D when compared to the commonly used approach with no correction (p = 0.001-0.022). Volumes on CBF maps did not differ significantly (p = 0.207-0.925). All packages with advanced post-processing algorithms showed a high level of agreement with FLV (ICC = 0.704-0.879). Correction of D/D had a significant influence on estimated lesion volumes and leads to significantly smaller lesion volumes on MTT maps. This may improve patient selection. (orig.)

  10. Indicators for assessment of rural electrification-An approach for the comparison of apples and pears

    International Nuclear Information System (INIS)

    Ilskog, Elisabeth

    2008-01-01

    Despite a large number of rural electrification projects being implemented in developing countries, there are few published in-depth evaluations of the effects of these projects on sustainable development. There is also no generally accepted method for the assessment of such effects that includes all relevant aspects of sustainability. An issue of growing importance is whether rural electrification implemented by private entrepreneurs or other non-governmental organisations contributes more effectively to sustainable development than the conventional approach where rural electrification is the responsibility of a government utility. This paper presents a method for sustainability evaluation based on the use of 39 indicators. The proposed indicators cover the five dimensions of sustainability: technical, economical, social/ethical, environmental and institutional sustainability. The paper presents the indicators and gives a detailed example of the procedure to calculate an indicator based on information that can realistically be collected in field studies. It is suggested that this interdisciplinary approach will provide a better basis for the evaluation of projects than previous, more limited approaches. Projects promoted on the basis of information only about prioritised dimensions of sustainability, such as environment, may fail as a result of weaknesses in other dimensions. The proposed method may reduce this risk.

  11. Dynamical pairwise entanglement and two-point correlations in the three-ligand spin-star structure

    Science.gov (United States)

    Motamedifar, M.

    2017-10-01

    We consider the three-ligand spin-star structure through homogeneous Heisenberg interactions (XXX-3LSSS) in the framework of dynamical pairwise entanglement. It is shown that the time evolution of the central qubit "one-particle" state (COPS) brings about the generation of quantum W states at periodical time instants. On the contrary, W states cannot be generated from the time evolution of a ligand "one-particle" state (LOPS). We also investigate the dynamical behavior of two-point quantum correlations as well as the expectation values of the different spin-components for each element in the XXX-3LSSS. It is found that when a W state is generated, the same value of the concurrence between any two arbitrary qubits arises from the xx and yy two-point quantum correlations. In contrast, the zz quantum correlation between any two qubits vanishes at these time instants.

  12. Pairwise NMR experiments for the determination of protein backbone dihedral angle Φ based on cross-correlated spin relaxation

    International Nuclear Information System (INIS)

    Takahashi, Hideo; Shimada, Ichio

    2007-01-01

    Novel cross-correlated spin relaxation (CCR) experiments are described, which measure pairwise CCR rates for obtaining peptide dihedral angles Φ. The experiments utilize intra-HNCA type coherence transfer to refocus two-bond J(NCα) coupling evolution and generate the N(i)-Cα(i) or C'(i-1)-Cα(i) multiple quantum coherences which are required for measuring the desired CCR rates. The contribution from other coherences is also discussed and an appropriate setting of the evolution delays is presented. These CCR experiments were applied to 15N- and 13C-labeled human ubiquitin. The relevant CCR rates showed a high degree of correlation with the Φ angles observed in the X-ray structure. By utilizing these CCR experiments in combination with those previously established for obtaining dihedral angle Ψ, we can determine high resolution structures of peptides that bind weakly to large target molecules.

  13. Plant lock and ant key: pairwise coevolution of an exclusion filter in an ant-plant mutualism.

    Science.gov (United States)

    Brouat, C; Garcia, N; Andary, C; McKey, D

    2001-10-22

    Although observations suggest pairwise coevolution in specific ant-plant symbioses, coevolutionary processes have rarely been demonstrated. We report what is, to the authors' knowledge, the strongest evidence yet for reciprocal adaptation of morphological characters in a species-specific ant-plant mutualism. The plant character is the prostoma, which is a small unlignified organ at the apex of the domatia in which symbiotic ants excavate an entrance hole. Each myrmecophyte in the genus Leonardoxa has evolved a prostoma with a different shape. By performing precise measurements on the prostomata of three related myrmecophytes, on their specific associated ants and on the entrance holes excavated by symbiotic ants at the prostomata, we showed that correspondence of the plant and ant traits forms a morphological and behavioural filter. We have strong evidence for coevolution between the dimensions and shape of the symbiotic ants and the prostoma in one of the three ant-Leonardoxa associations.

  14. Galaxy and Mass Assembly (GAMA): small-scale anisotropic galaxy clustering and the pairwise velocity dispersion of galaxies

    Science.gov (United States)

    Loveday, J.; Christodoulou, L.; Norberg, P.; Peacock, J. A.; Baldry, I. K.; Bland-Hawthorn, J.; Brown, M. J. I.; Colless, M.; Driver, S. P.; Holwerda, B. W.; Hopkins, A. M.; Kafle, P. R.; Liske, J.; Lopez-Sanchez, A. R.; Taylor, E. N.

    2018-03-01

    The galaxy pairwise velocity dispersion (PVD) can provide important tests of non-standard gravity and galaxy formation models. We describe measurements of the PVD of galaxies in the Galaxy and Mass Assembly (GAMA) survey as a function of projected separation and galaxy luminosity. Due to the survey's faint magnitude limit, we are able to measure the PVD to smaller scales (r⊥ = 0.01 h⁻¹ Mpc) than previous work. The measured PVD at projected separations r⊥ ≲ 1 h⁻¹ Mpc increases nearly monotonically with increasing luminosity, from σ12 ≈ 200 km s⁻¹ at Mr = -17 mag to σ12 ≈ 600 km s⁻¹ at Mr ≈ -22 mag. Analysis of the Gonzalez-Perez et al. (2014) GALFORM semi-analytic model yields no such trend of PVD with luminosity: the model overpredicts the PVD for faint galaxies. This is most likely a result of the model placing too many low-luminosity galaxies in massive haloes.

  15. Link removal for the control of stochastically evolving epidemics over networks: a comparison of approaches.

    Science.gov (United States)

    Enns, Eva A; Brandeau, Margaret L

    2015-04-21

    For many communicable diseases, knowledge of the underlying contact network through which the disease spreads is essential to determining appropriate control measures. When behavior change is the primary intervention for disease prevention, it is important to understand how to best modify network connectivity using the limited resources available to control disease spread. We describe and compare four algorithms for selecting a limited number of links to remove from a network: two "preventive" approaches (edge centrality, R0 minimization), where the decision of which links to remove is made prior to any disease outbreak and depends only on the network structure; and two "reactive" approaches (S-I edge centrality, optimal quarantining), where information about the initial disease states of the nodes is incorporated into the decision of which links to remove. We evaluate the performance of these algorithms in minimizing the total number of infections that occur over the course of an acute outbreak of disease. We consider different network structures, including both static and dynamic Erdös-Rényi random networks with varying levels of connectivity, a real-world network of residential hotels connected through injection drug use, and a network exhibiting community structure. We show that reactive approaches outperform preventive approaches in averting infections. Among reactive approaches, removing links in order of S-I edge centrality is favored when the link removal budget is small, while optimal quarantining performs best when the link removal budget is sufficiently large. The budget threshold above which optimal quarantining outperforms the S-I edge centrality algorithm is a function of both network structure (higher for unstructured Erdös-Rényi random networks compared to networks with community structure or the real-world network) and disease infectiousness (lower for highly infectious diseases). We conduct a value-of-information analysis of knowing which
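
    A bare-bones version of the preventive edge-centrality strategy described above can be sketched with networkx (toy network and budget; the epidemic simulation used to score interventions in the paper is not reproduced):

        import networkx as nx

        # Preventive link removal by edge centrality on an Erdos-Renyi network (illustrative only).
        G = nx.erdos_renyi_graph(n=200, p=0.03, seed=42)
        budget = 20                                    # number of links we may remove

        centrality = nx.edge_betweenness_centrality(G)
        to_remove = sorted(centrality, key=centrality.get, reverse=True)[:budget]
        G.remove_edges_from(to_remove)

        # A crude proxy for remaining outbreak potential: size of the largest connected component.
        largest = max(nx.connected_components(G), key=len)
        print("largest component after removal:", len(largest))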

  16. AllerHunter: a SVM-pairwise system for assessment of allergenicity and allergic cross-reactivity in proteins.

    Directory of Open Access Journals (Sweden)

    Hon Cheng Muh

    Full Text Available Allergy is a major health problem in industrialized countries. The number of transgenic food crops is growing rapidly, creating the need for allergenicity assessment before they are introduced into the human food chain. While existing bioinformatic methods have achieved good accuracies for highly conserved sequences, the discrimination of allergens and non-allergens from allergen-like non-allergen sequences remains difficult. We describe AllerHunter, a web-based computational system for the assessment of potential allergenicity and allergic cross-reactivity in proteins. It combines an iterative pairwise sequence similarity encoding scheme with SVM as the discriminating engine. The pairwise vectorization framework allows the system to model essential features in allergens that are involved in cross-reactivity, but not limited to distinct sets of physicochemical properties. The system was rigorously trained and tested using 1,356 known allergen and 13,449 putative non-allergen sequences. Extensive testing was performed for validation of the prediction models. The system is effective for distinguishing allergens and non-allergens from allergen-like non-allergen sequences. Testing results showed that AllerHunter, with a sensitivity of 83.4% and specificity of 96.4% (accuracy = 95.3%, area under the receiver operating characteristic curve AROC = 0.928+/-0.004 and Matthews correlation coefficient MCC = 0.738), performs significantly better than a number of existing methods using an independent dataset of 1443 protein sequences. AllerHunter is available at http://tiger.dbs.nus.edu.sg/AllerHunter.
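
    The pairwise-vectorization idea can be sketched as follows: each query protein is encoded by its similarity to a reference panel of sequences, and an SVM separates allergens from non-allergens in that feature space. The sequences, labels and the difflib similarity below are toy stand-ins, not AllerHunter's alignment-based scoring or training data:

        from difflib import SequenceMatcher
        import numpy as np
        from sklearn.svm import SVC

        # Toy sequences; in the real system these would be protein sequences and the
        # similarity would come from pairwise sequence alignment, not difflib.
        reference_panel = ["MKTAYIAKQR", "MLSDEDFKAV", "GGHMKTAYIA", "PLLLAVVAAT"]
        train_seqs = ["MKTAYIAKQQ", "PLLLAVVAST", "MLSDEDFKAL", "AAAAAGGGGG"]
        train_labels = [1, 0, 1, 0]            # 1 = allergen, 0 = non-allergen (invented)

        def pairwise_encode(seq, panel):
            """Encode a sequence as its vector of similarities to every panel sequence."""
            return [SequenceMatcher(None, seq, ref).ratio() for ref in panel]

        X_train = np.array([pairwise_encode(s, reference_panel) for s in train_seqs])
        clf = SVC(kernel="rbf").fit(X_train, train_labels)

        query = "MKTAYIAKQN"
        score = clf.decision_function([pairwise_encode(query, reference_panel)])[0]
        print("SVM decision score (>0 suggests allergen-like):", score)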

  17. Surface feeding and aggressive behaviour of diploid and triploid brown trout Salmo trutta during allopatric pair-wise matchings.

    Science.gov (United States)

    Preston, A C; Taylor, J F; Adams, C E; Migaud, H

    2014-09-01

    Diploid and triploid brown trout Salmo trutta were acclimated for 6 weeks on two feeding regimes (floating and sinking). Thereafter, aggression and surface feeding response were compared between pairs of all diploid, all triploid and diploid and triploid S. trutta in an experimental stream. In each pair-wise matching, fish of similar size were placed in allopatry and rank was determined by the total number of aggressive interactions recorded. Dominant individuals initiated more aggression than subordinates, spent more time defending a territory and positioned themselves closer to the surface food source (Gammarus pulex), whereas subordinates occupied the peripheries. In cross ploidy trials, diploid S. trutta were more aggressive than triploid, and dominated their sibling when placed in pair-wise matchings. Surface feeding, however, did not differ statistically between ploidy irrespective of feeding regime. Triploids adopted a sneak feeding strategy while diploids expended more time defending a territory. In addition, we also tested whether triploids exhibit a similar social dominance to diploids when placed in allopatry. Although aggression was lower in triploid pairs than in the diploid and triploid pairs, a dominance hierarchy was also observed between individuals of the same ploidy. Dominant triploid fish were more aggressive and consumed more feed items than subordinate individuals. Subordinate fish displayed a darker colour index than dominant fish suggesting increased stress levels. Dominant triploid fish, however, appeared to be more tolerant of subordinate individuals and did not display the same degree of invasive aggression as seen in the diploid and diploid or diploid and triploid matchings. These novel findings suggest that sterile triploid S. trutta feed similarly but are less aggressive than diploid trout. Future studies should determine the habitat choice of triploid S. trutta after release and the interaction between wild fish and triploids during

  18. Synchronization of pairwise-coupled, identical, relaxation oscillators based on metal-insulator phase transition devices: A model study

    Science.gov (United States)

    Parihar, Abhinav; Shukla, Nikhil; Datta, Suman; Raychowdhury, Arijit

    2015-02-01

    Computing with networks of synchronous oscillators has attracted wide-spread attention as novel materials and device topologies have enabled realization of compact, scalable and low-power coupled oscillatory systems. Of particular interest are compact and low-power relaxation oscillators that have been recently demonstrated using MIT (metal-insulator-transition) devices using properties of correlated oxides. Further the computational capability of pairwise coupled relaxation oscillators has also been shown to outperform traditional Boolean digital logic circuits. This paper presents an analysis of the dynamics and synchronization of a system of two such identical coupled relaxation oscillators implemented with MIT devices. We focus on two implementations of the oscillator: (a) a D-D configuration where complementary MIT devices (D) are connected in series to provide oscillations and (b) a D-R configuration where it is composed of a resistor (R) in series with a voltage-triggered state changing MIT device (D). The MIT device acts like a hysteresis resistor with different resistances in the two different states. The synchronization dynamics of such a system has been analyzed with purely charge based coupling using a resistive (RC) and a capacitive (CC) element in parallel. It is shown that in a D-D configuration symmetric, identical and capacitively coupled relaxation oscillator system synchronizes to an anti-phase locking state, whereas when coupled resistively the system locks in phase. Further, we demonstrate that for certain range of values of RC and CC, a bistable system is possible which can have potential applications in associative computing. In D-R configuration, we demonstrate the existence of rich dynamics including non-monotonic flows and complex phase relationship governed by the ratios of the coupling impedance. Finally, the developed theoretical formulations have been shown to explain experimentally measured waveforms of such pairwise coupled

  19. Quantitative Comparison of Ternary Eutectic Phase-Field Simulations with Analytical 3D Jackson-Hunt Approaches

    Science.gov (United States)

    Steinmetz, Philipp; Kellner, Michael; Hötzer, Johannes; Nestler, Britta

    2018-02-01

    For the analytical description of the relationship between undercoolings, lamellar spacings and growth velocities during the directional solidification of ternary eutectics in 2D and 3D, different extensions based on the theory of Jackson and Hunt are reported in the literature. Besides analytical approaches, the phase-field method has been established to study the spatially complex microstructure evolution during the solidification of eutectic alloys. The understanding of the fundamental mechanisms controlling the morphology development in multiphase, multicomponent systems is of high interest. For this purpose, a comparison is made between the analytical extensions and three-dimensional phase-field simulations of directional solidification in an ideal ternary eutectic system. Based on the observed accordance in two-dimensional validation cases, the experimentally reported, inherently three-dimensional chain-like pattern is investigated in extensive simulation studies. The results are quantitatively compared with the analytical results reported in the literature, and with a newly derived approach which uses equal undercoolings. A good accordance of the undercooling-spacing characteristics between simulations and the analytical Jackson-Hunt approaches is found. The results show that the applied phase-field model, which is based on the Grand potential approach, is able to describe the analytically predicted relationship between the undercooling and the lamellar arrangements during the directional solidification of a ternary eutectic system in 3D.
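
    For orientation, the classical binary Jackson-Hunt relation that these ternary extensions generalize links the interface undercooling ΔT, growth velocity v and lamellar spacing λ in the textbook form below, with system-dependent constants K1 and K2; the ternary expressions used in the paper are not reproduced here:

        \Delta T = K_1\, v\, \lambda + \frac{K_2}{\lambda},
        \qquad
        \lambda_{\mathrm{ext}} = \sqrt{\frac{K_2}{K_1 v}},
        \qquad
        \Delta T_{\mathrm{ext}} = 2\sqrt{K_1 K_2 v}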

  20. Transoral endoscopic thyroidectomy vestibular approach (TOETVA) for Graves' disease: a comparison of surgical results with open thyroidectomy.

    Science.gov (United States)

    Jitpratoom, Pornpeera; Ketwong, Khwannara; Sasanakietkul, Thanyawat; Anuwong, Angkoon

    2016-12-01

    Transoral endoscopic thyroidectomy vestibular approach (TOETVA) provides excellent cosmetic results from its potential for scar-free operation. The procedure has been applied successfully for Graves' disease by the authors of this work and compared with the standard open cervical approach to evaluate its safety and outcomes. From January 2014 to November 2016, a total of 97 patients with Graves' disease were reviewed retrospectively. Open thyroidectomy (OT) and TOETVA were performed in 49 patients and 46 patients, respectively. For TOETVA, a three-port technique through the oral vestibule was utilized. The thyroidectomy was done endoscopically using conventional laparoscopic instruments and an ultrasonic device. Patient demographics and surgical variables, including operative time, blood loss, and complications, were investigated and compared. TOETVA was performed successfully in all 45 patients, although conversion to open surgery was deemed necessary in one patient. All patient characteristics for both groups were similar. Operative time was shorter for the OT group compared to the TOETVA group, which totaled 101.97±24.618 and 134.11±31.48 minutes, respectively. TOETVA is safe and feasible for Graves' disease in comparison to the standard open cervical approach. It is considered a viable alternative for patients who have been indicated for surgery, with excellent cosmetic results.

  1. A Cross-Disciplinary Successful Aging Intervention and Evaluation: Comparison of Person-to-Person and Digital-Assisted Approaches

    Directory of Open Access Journals (Sweden)

    Hui-Chuan Hsu

    2018-05-01

    Full Text Available Background: Successful aging has been the paradigm of old-age life. The purpose of this study was to implement and evaluate a cross-disciplinary intervention program using two approaches for community-based older adults in Taichung, Taiwan. Methods: The content of the intervention included successful aging concepts and preparation, physical activity, chronic disease and health management, dietary and nutrition information, cognitive training, emotional awareness and coping skills, family relationship and resilience, legal concepts regarding financial protection, and Internet use. The traditional person-to-person (P2P intervention approach was implemented among participants at urban centers, and the personal-and-digital (P&D intervention approach was implemented among participants at rural centers; before the P&D group received the intervention, participants were assessed as the control group for comparison. Results: Healthy behavior and nutrition improved for the P2P group, although not significantly. Strategies for adapting to old age and reducing ineffective coping were significantly improved in the P2P group. The ability to search for health information improved in the P&D group, and knowledge of finance-related law increased in the P2P group. Conclusion: A continuous, well-designed and evidence-based intervention program is beneficial for improving the health of older adults, or at least delaying its decline.

  2. Capturing pair-wise epistatic effects associated with three agronomic traits in barley.

    Science.gov (United States)

    Xu, Yi; Wu, Yajun; Wu, Jixiang

    2018-04-01

    Genetic association mapping has been widely applied to determine genetic markers favorably associated with a trait of interest and provide information for marker-assisted selection. Many association mapping studies commonly focus on main effects due to intolerable computing intensity. This study aims to select several sets of DNA markers with potential epistasis to maximize genetic variations of some key agronomic traits in barley. By doing so, we integrated an MDR (multifactor dimensionality reduction) method with a forward variable selection approach. This integrated approach was used to determine single nucleotide polymorphism pairs with epistasis effects associated with three agronomic traits: heading date, plant height, and grain yield in barley from the barley Coordinated Agricultural Project. Our results showed that four, seven, and five SNP pairs accounted for 51.06, 45.66 and 40.42% for heading date, plant height, and grain yield, respectively, with epistasis being considered, while the corresponding contributions to these three traits were 45.32, 31.39 and 31.31%, respectively, without epistasis being included. The results suggested that the epistasis model was more effective than the non-epistasis model in this study and may be preferred for other applications.
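
    The flavour of such a pairwise search can be conveyed with a brute-force sketch (simulated 0/1/2 genotypes and a planted interaction; the MDR encoding and forward selection used in the actual study are not reproduced): every SNP pair is scored by the variance explained by a model containing both main effects and their product.

        import numpy as np
        from itertools import combinations
        from sklearn.linear_model import LinearRegression

        # Simulated genotypes for 10 SNPs; the trait depends on an interaction of SNP 2 and SNP 7.
        rng = np.random.default_rng(7)
        n, p = 300, 10
        G = rng.integers(0, 3, size=(n, p)).astype(float)
        y = 0.5 * G[:, 2] * G[:, 7] + rng.normal(0.0, 1.0, n)

        def pair_r2(i, j):
            """R^2 of a model containing SNP i, SNP j and their interaction term."""
            X = np.column_stack([G[:, i], G[:, j], G[:, i] * G[:, j]])
            return LinearRegression().fit(X, y).score(X, y)

        ranked = sorted(combinations(range(p), 2), key=lambda ij: pair_r2(*ij), reverse=True)
        print("top SNP pairs by variance explained:", ranked[:3])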

  3. Analysis of Pairwise Interactions in a Maximum Likelihood Sense to Identify Leaders in a Group

    Directory of Open Access Journals (Sweden)

    Violet Mwaffo

    2017-07-01

    Full Text Available Collective motion in animal groups manifests itself in the form of highly coordinated maneuvers determined by local interactions among individuals. A particularly critical question in understanding the mechanisms behind such interactions is to detect and classify leader–follower relationships within the group. In the technical literature of coupled dynamical systems, several methods have been proposed to reconstruct interaction networks, including linear correlation analysis, transfer entropy, and event synchronization. While these analyses have been helpful in reconstructing network models from neuroscience to public health, rules on the most appropriate method to use for a specific dataset are lacking. Here, we demonstrate the possibility of detecting leaders in a group from raw positional data in a model-free approach that combines multiple methods in a maximum likelihood sense. We test our framework on synthetic data of groups of self-propelled Vicsek particles, where a single agent acts as a leader and both the size of the interaction region and the level of inherent noise are systematically varied. To assess the feasibility of detecting leaders in real-world applications, we study a synthetic dataset of fish shoaling, generated by using a recent data-driven model for social behavior, and an experimental dataset of pharmacologically treated zebrafish. Not only does our approach offer a robust strategy to detect leaders in synthetic data but it also allows for exploring the role of psychoactive compounds on leader–follower relationships.
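
    One ingredient of the framework above, time-lagged linear correlation between candidate leader and follower signals, can be sketched on synthetic headings as follows; transfer entropy, event synchronization and the maximum-likelihood combination are not shown:

        import numpy as np

        # Simulated headings: agent A turns first, agent B copies A with a 5-step delay plus noise.
        rng = np.random.default_rng(3)
        T, delay = 500, 5
        a = np.cumsum(rng.normal(0, 0.1, T))
        b = np.roll(a, delay) + rng.normal(0, 0.05, T)
        b[:delay] = a[:delay]

        def lagged_corr(x, y, lag):
            """Correlation between x(t) and y(t+lag); a positive lag means x leads y."""
            if lag > 0:
                return np.corrcoef(x[:-lag], y[lag:])[0, 1]
            if lag < 0:
                return np.corrcoef(x[-lag:], y[:lag])[0, 1]
            return np.corrcoef(x, y)[0, 1]

        best = max(range(-20, 21), key=lambda L: lagged_corr(a, b, L))
        print("best lag:", best, "-> agent A leads" if best > 0 else "-> agent B leads")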

  4. Monitoring the training intensity and recovery with a psychometrics approach: a gender comparison with young athletes

    Directory of Open Access Journals (Sweden)

    Ramon Cruz

    2017-12-01

    Full Text Available Abstract AIMS The purpose of the present study was to verify whether session RPE differs between female and male athletes during track and field training and whether biological maturity (BM) influences this response. METHODS Seventy-five athletes (13-15 years old) participated in the study, 38 male and 37 female. Five track and field training sessions were prescribed and monitored using session RPE (intensity) and Total Quality Recovery (TQR) (recovery). RESULTS There was no statistical difference between males and females for the 75-m run, long jump and shot put. However, for the 250-m and 1000-m training sessions, females reported higher RPE values than males (3.68 ± 0.79 vs. 3.26 ± 0.56, p < 0.01, and 4.14 ± 0.94 vs. 3.72 ± 0.89, p < 0.05, respectively). Even when controlling for the effect of biological maturity, the same results were observed for the 250-m (F(1,73) = 2.060; p = 0.002) and 1000-m (F(1,73) = 0.997; p = 0.036) sessions. There was no difference in TQR between genders. CONCLUSION The comparison of session RPE between females and males indicated differences for the 250-m and 1000-m training sessions, with females reporting higher session RPE than males. Additionally, there were no differences between genders for recovery parameters, even when controlling for BM.

  5. MR-guided stereotactic neurosurgery-comparison of fiducial-based and anatomical landmark transformation approaches

    International Nuclear Information System (INIS)

    Hunsche, S; Sauner, D; Maarouf, M; Hoevels, M; Luyken, K; Schulte, O; Lackner, K; Sturm, V; Treuer, H

    2004-01-01

    For application in magnetic resonance (MR) guided stereotactic neurosurgery, two methods exist for transforming MR-image coordinates into stereotactic, frame-based coordinates: the direct stereotactic fiducial-based transformation method and the indirect anatomical landmark method. In contrast to direct stereotactic MR transformation, indirect transformation is based on anatomical landmark coregistration of stereotactic computerized tomography and non-stereotactic MR images. In a patient study, both transformation methods have been investigated with visual inspection and mutual information analysis. Comparison was done for our standard imaging protocol, including t2-weighted spin-echo as well as contrast enhanced t1-weighted gradient-echo imaging. For t2-weighted spin-echo imaging, both methods showed almost similar and satisfying performance with a small, but significant advantage for fiducial-based transformation. In contrast, for t1-weighted gradient-echo imaging with more geometric distortions due to field inhomogeneities and gradient nonlinearity than t2-weighted spin-echo imaging, mainly caused by a reduced bandwidth per pixel, anatomical landmark transformation delivered markedly better results. Here, fiducial-based transformation yielded results which are intolerable for stereotactic neurosurgery. Mean Euclidean distances between both transformation methods were 0.96 mm for t2-weighted spin-echo and 1.67 mm for t1-weighted gradient-echo imaging. Maximum deviations were 1.72 mm and 3.06 mm, respectively.

  6. Comparison of Computational Approaches for Rapid Aerodynamic Assessment of Small UAVs

    Science.gov (United States)

    Shafer, Theresa C.; Lynch, C. Eric; Viken, Sally A.; Favaregh, Noah; Zeune, Cale; Williams, Nathan; Dansie, Jonathan

    2014-01-01

    Computational Fluid Dynamic (CFD) methods were used to determine the basic aerodynamic, performance, and stability and control characteristics of the unmanned air vehicle (UAV), Kahu. Accurate and timely prediction of the aerodynamic characteristics of small UAVs is an essential part of military system acquisition and air-worthiness evaluations. The forces and moments of the UAV were predicted using a variety of analytical methods for a range of configurations and conditions. The methods included Navier Stokes (N-S) flow solvers (USM3D, Kestrel and Cobalt) that take days to set up and hours to converge on a single solution; potential flow methods (PMARC, LSAERO, and XFLR5) that take hours to set up and minutes to compute; empirical methods (Datcom) that involve table lookups and produce a solution quickly; and handbook calculations. A preliminary aerodynamic database can be developed very efficiently by using a combination of computational tools. The database can be generated with low-order and empirical methods in linear regions, then replacing or adjusting the data as predictions from higher order methods are obtained. A comparison of results from all the data sources as well as experimental data obtained from a wind-tunnel test will be shown and the methods will be evaluated on their utility during each portion of the flight envelope.

  7. Comparison of NDE standards in the frame of fracture mechanics approach

    International Nuclear Information System (INIS)

    Reale, S.; Capurro, E.; Corvi, A.

    1991-01-01

    The Design and Construction Codes are a set of rules which were brought together because they were the best available when the Codes were issued. A permanent objective must be to complete and improve these rules. This objective can be attained as the result of industrial experiences and by means of research and development activities. Until recently, high risk plants like nuclear plants were designed and built on the basis of the codes and standards of the country where the plant was to be built and operated, and this caused many disadvantages. On the contrary, the use of common codes and standards offers many advantages. A general objective is to compare codes in order to identify the differences in national rules and standards. The acceptance criteria based on nondestructive testing to reject dangerous defects are discussed. In this paper, the standards adopted in France, Germany, Italy and the United Kingdom are taken into consideration, and ultrasonic and radiographic inspections are selected. The methodology of this activity and the results of comparison are reported. (K.I.)

  8. Audio Classification in Speech and Music: A Comparison between a Statistical and a Neural Approach

    Directory of Open Access Journals (Sweden)

    Alessandro Bugatti

    2002-04-01

    Full Text Available We focus attention on the problem of audio classification in speech and music for multimedia applications. In particular, we present a comparison between two different techniques for speech/music discrimination. The first method is based on the zero-crossing rate and Bayesian classification. It is very simple from a computational point of view, and gives good results in the case of pure music or speech. The simulation results show that some performance degradation arises when the music segment also contains some speech superimposed on the music, or strong rhythmic components. To overcome these problems, we propose a second method that uses more features and is based on neural networks (specifically, a multi-layer perceptron). In this case we obtain better performance, at the expense of a limited growth in the computational complexity. In practice, the proposed neural network is simple to implement if a suitable polynomial is used as the activation function, and a real-time implementation is possible even if low-cost embedded systems are used.
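
    The first, low-complexity discriminator described above can be sketched in a few lines: frame-level zero-crossing rate plus a simple decision on its variability. The threshold below is an arbitrary stand-in for the trained Bayesian decision rule, whose parameters are not given in the abstract:

        import numpy as np

        def zero_crossing_rate(frame):
            """Fraction of consecutive samples whose signs differ."""
            signs = np.sign(frame)
            signs[signs == 0] = 1
            return np.mean(signs[:-1] != signs[1:])

        def classify(audio, frame_len=1024, zcr_std_threshold=0.12):
            """Toy speech/music decision: speech tends to show a more variable ZCR across frames."""
            frames = [audio[i:i + frame_len] for i in range(0, len(audio) - frame_len, frame_len)]
            zcr = np.array([zero_crossing_rate(f) for f in frames])
            return "speech" if zcr.std() > zcr_std_threshold else "music"

        # Synthetic check: a steady tone (music-like) versus alternating silence/noise bursts.
        t = np.linspace(0, 2, 32000)
        tone = np.sin(2 * np.pi * 440 * t)
        bursty = np.concatenate([np.random.randn(4000) * (i % 2) for i in range(8)])
        print(classify(tone), classify(bursty))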

  9. A novel model approach for esophageal burns in rats: A comparison of three methods.

    Science.gov (United States)

    Kalkan, Yildiray; Tumkaya, Levent; Akdogan, Remzi Adnan; Yucel, Ahmet Fikret; Tomak, Yakup; Sehitoglu, İbrahim; Pergel, Ahmet; Kurt, Aysel

    2015-07-01

    Corrosive esophageal injury causes serious clinical problems. We aimed to create a new experimental esophageal burn model using a single catheter without a surgical procedure. We conducted the study with two groups of 12 male rats that had fasted for 12 h before the procedure. A modified Foley balloon catheter was inserted into the esophageal lumen. The control group was given 0.9% sodium chloride, while the experimental group was given 37.5% sodium hydroxide with the other part of the catheter. After 60 s, the esophagus was washed with distilled water. The rats were killed and examined histopathologically after 28 days. In comparison with the study group, the control group showed no pathological changes. Basal cell degeneration, dermal edema, and a slight increase in the keratin layer and collagen density of submucosa due to stenosis were all observed in the group subjected to esophageal corrosion. A new burn model can thus, we believe, be created without the involvement of invasive laparoscopic surgery and general anesthesia. The burn in our experiment was formed in both the distal and proximal esophagus, as in other models; it can also be formed optionally in the entire esophagus. © The Author(s) 2013.

  10. MR-guided stereotactic neurosurgery-comparison of fiducial-based and anatomical landmark transformation approaches

    Energy Technology Data Exchange (ETDEWEB)

    Hunsche, S [Department of Stereotaxy and Functional Neurosurgery, University of Cologne, Cologne (Germany); Sauner, D [Institute for Diagnostic and Interventional Radiology, Friedrich-Schiller-University of Jena, Jena (Germany); Maarouf, M [Department of Stereotaxy and Functional Neurosurgery, University of Cologne, Cologne (Germany); Hoevels, M [Department of Stereotaxy and Functional Neurosurgery, University of Cologne, Cologne (Germany); Luyken, K [Department of Stereotaxy and Functional Neurosurgery, University of Cologne, Cologne (Germany); Schulte, O [Department of Radiology, University of Cologne, Cologne (Germany); Lackner, K [Department of Radiology, University of Cologne, Cologne (Germany); Sturm, V [Department of Stereotaxy and Functional Neurosurgery, University of Cologne, Cologne (Germany); Treuer, H [Department of Stereotaxy and Functional Neurosurgery, University of Cologne, Cologne (Germany)

    2004-06-21

    For application in magnetic resonance (MR) guided stereotactic neurosurgery, two methods exist for transforming MR-image coordinates into stereotactic, frame-based coordinates: the direct stereotactic fiducial-based transformation method and the indirect anatomical landmark method. In contrast to direct stereotactic MR transformation, indirect transformation is based on anatomical landmark coregistration of stereotactic computerized tomography and non-stereotactic MR images. In a patient study, both transformation methods have been investigated with visual inspection and mutual information analysis. Comparison was done for our standard imaging protocol, including t2-weighted spin-echo as well as contrast enhanced t1-weighted gradient-echo imaging. For t2-weighted spin-echo imaging, both methods showed almost similar and satisfying performance with a small, but significant advantage for fiducial-based transformation. In contrast, for t1-weighted gradient-echo imaging with more geometric distortions due to field inhomogeneities and gradient nonlinearity than t2-weighted spin-echo imaging, mainly caused by a reduced bandwidth per pixel, anatomical landmark transformation delivered markedly better results. Here, fiducial-based transformation yielded results which are intolerable for stereotactic neurosurgery. Mean Euclidean distances between both transformation methods were 0.96 mm for t2-weighted spin-echo and 1.67 mm for t1-weighted gradient-echo imaging. Maximum deviations were 1.72 mm and 3.06 mm, respectively.

  11. Estimating amplitude ratios in boundary layer stability theory: a comparison between two approaches

    Science.gov (United States)

    Govindarajan, Rama; Narasimha, R.

    2001-07-01

    We first demonstrate that, if the contributions of higher-order mean flow are ignored, the parabolized stability equations (Bertolotti et al. 1992) and the 'full' non-parallel equation of Govindarajan & Narasimha (1995, hereafter GN95) are both equivalent to order R⁻¹ in the local Reynolds number R to Gaster's (1974) equation for the stability of spatially developing boundary layers. It is therefore of some concern that a detailed comparison between Gaster (1974) and GN95 reveals a small difference in the computed amplitude ratios. Although this difference is not significant in practical terms in Blasius flow, it is traced here to the approximation, in Gaster's method, of neglecting the change in eigenfunction shape due to flow non-parallelism. This approximation is not justified in the critical and the wall layers, where the neglected term is respectively O(R⁻²/³) and O(R⁻¹) compared to the largest term. The excellent agreement of GN95 with exact numerical simulations, on the other hand, suggests that the effect of change in eigenfunction is accurately taken into account in that paper.

  12. A Comparison of Hybrid Approaches for Turbofan Engine Gas Path Fault Diagnosis

    Science.gov (United States)

    Lu, Feng; Wang, Yafan; Huang, Jinquan; Wang, Qihang

    2016-09-01

    A hybrid diagnostic method utilizing an Extended Kalman Filter (EKF) and an Adaptive Genetic Algorithm (AGA) is presented for performance degradation estimation and sensor anomaly detection in a turbofan engine. The EKF is used to estimate engine component performance degradation for gas path fault diagnosis. The AGA is introduced into the integrated architecture and applied to sensor bias detection. The contribution of this work is the comparison of Kalman Filter (KF)-AGA algorithms and Neural Network (NN)-AGA algorithms within a unified framework for gas path fault diagnosis. The NN needs to be trained off-line with a large amount of prior fault-mode data, and when a new fault mode occurs, the estimation accuracy of the NN evidently decreases; the application of the Linearized Kalman Filter (LKF) and the EKF is not restricted in such cases. The crossover factor and the mutation factor are adapted to the fitness function at each generation in the AGA, so it takes less time to search for the optimal sensor bias value than the standard Genetic Algorithm (GA). We conclude that, among the algorithms discussed, the hybrid EKF-AGA algorithm is the best choice for gas path fault diagnosis of the turbofan engine.
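    As a rough illustration of the adaptation idea described above, the following sketch runs a toy adaptive genetic algorithm in which the crossover and mutation factors are adjusted from the population fitness at each generation while searching for a single sensor bias value. The residual function, population size and adaptation rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def residual(bias, measured=5.3, predicted=5.0):
    # Toy fitness: squared mismatch between a sensor reading corrected by the
    # candidate bias and the model-predicted value (all values hypothetical).
    return (measured - bias - predicted) ** 2

pop = rng.uniform(-1.0, 1.0, size=20)            # candidate bias values
for generation in range(50):
    fit = -np.array([residual(b) for b in pop])  # higher is better
    spread = max(fit.max() - fit.mean(), 1e-12)
    # Adaptive factors: explore more while the population is still spread out,
    # exploit more once it has converged -- one simple way to adapt pc and pm.
    pc = 0.9 if spread > 1e-3 else 0.6
    pm = 0.10 if spread > 1e-3 else 0.02
    # Selection: keep the better half, then crossover/mutate to refill.
    parents = pop[np.argsort(fit)][-10:]
    children = []
    while len(children) < 10:
        a, b = rng.choice(parents, 2, replace=False)
        child = 0.5 * (a + b) if rng.random() < pc else a
        if rng.random() < pm:
            child += rng.normal(0.0, 0.1)
        children.append(child)
    pop = np.concatenate([parents, children])

best = pop[np.argmin([residual(b) for b in pop])]
print(f"estimated sensor bias ~ {best:.3f}")     # should approach 0.3
```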

  13. A comparison of rule-based and machine learning approaches for classifying patient portal messages.

    Science.gov (United States)

    Cronin, Robert M; Fabbri, Daniel; Denny, Joshua C; Rosenbloom, S Trent; Jackson, Gretchen Purcell

    2017-09-01

    Secure messaging through patient portals is an increasingly popular way for consumers to interact with healthcare providers. The increasing burden of secure messaging can affect clinic staffing and workflows. Manual management of portal messages is costly and time consuming. Automated classification of portal messages could potentially expedite message triage and delivery of care. We developed automated patient portal message classifiers with rule-based and machine learning techniques using bag-of-words and natural language processing (NLP) approaches. To evaluate classifier performance, we used a gold standard of 3253 portal messages manually categorized using a taxonomy of communication types (i.e., main categories of informational, medical, logistical, social, and other communications, and subcategories including prescriptions, appointments, problems, tests, follow-up, contact information, and acknowledgement). We evaluated our classifiers' accuracy in identifying individual communication types within portal messages with the area under the receiver-operator curve (AUC). Portal messages often contain more than one type of communication; to predict all communication types within single messages, we used the Jaccard Index. We also extracted the variables of importance for the random forest classifiers. The best performing approaches to classification for the major communication types were: logistic regression for medical communications (AUC: 0.899); basic (rule-based) classification for informational communications (AUC: 0.842); and random forests for social communications and logistical communications (AUCs: 0.875 and 0.925, respectively). The best performing approach for individual communication subtypes was random forests for Logistical-Contact Information (AUC: 0.963). The Jaccard Indices by approach were: basic classifier, Jaccard Index: 0.674; Naïve Bayes, Jaccard Index: 0.799; random forests, Jaccard Index: 0.859; and logistic regression, Jaccard
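    A small Python sketch of the two evaluation measures mentioned above, using scikit-learn on toy multi-label data; the labels, scores and communication types are invented for illustration, not taken from the study.

```python
import numpy as np
from sklearn.metrics import jaccard_score, roc_auc_score

# Toy multi-label setup: each row is a portal message, each column one
# communication type (informational, medical, logistical, social).
y_true = np.array([[1, 0, 1, 0],
                   [0, 1, 0, 0],
                   [1, 1, 0, 1],
                   [0, 0, 1, 0]])
y_pred = np.array([[1, 0, 0, 0],
                   [0, 1, 0, 0],
                   [1, 1, 0, 0],
                   [0, 1, 1, 0]])

# Per-message Jaccard index (intersection over union of predicted label sets),
# averaged over messages -- the multi-label agreement measure mentioned above.
print("Jaccard index:", jaccard_score(y_true, y_pred, average="samples"))

# AUC for a single communication type, given continuous classifier scores.
scores_medical = np.array([0.2, 0.9, 0.7, 0.4])
print("AUC (medical):", roc_auc_score(y_true[:, 1], scores_medical))
```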

  14. Comparison of marine spatial planning methods in Madagascar demonstrates value of alternative approaches.

    Directory of Open Access Journals (Sweden)

    Thomas F Allnutt

    Full Text Available The Government of Madagascar plans to increase marine protected area coverage by over one million hectares. To assist this process, we compare four methods for marine spatial planning of Madagascar's west coast. Input data for each method were drawn from the same variables: fishing pressure, exposure to climate change, and biodiversity (habitats, species distributions, biological richness, and biodiversity value). The first method compares visual color classifications of the primary variables, the second uses binary combinations of these variables to produce a categorical classification of management actions, the third is a target-based optimization using Marxan, and the fourth is conservation ranking with Zonation. We present results from each method, and compare the latter three approaches for spatial coverage, biodiversity representation, fishing cost and persistence probability. All results included large areas in the north, central, and southern parts of western Madagascar. Achieving 30% representation targets with Marxan required twice the fish catch loss of the categorical method. The categorical classification and Zonation do not consider targets for conservation features. However, when we reduced the Marxan targets to 16.3%, matching the representation level of the "strict protection" class of the categorical result, the methods showed similar catch losses. The management category portfolio has complete coverage, and presents several management recommendations including strict protection. Zonation produces rapid conservation rankings across large, diverse datasets. Marxan is useful for identifying strict protected areas that meet representation targets and minimize exposure probabilities for conservation features at low economic cost. We show that methods based on Zonation and a simple combination of variables can produce results comparable to Marxan for species representation and catch losses, demonstrating the value of comparing alternative approaches.

  15. Quantifying Neural Oscillatory Synchronization: A Comparison between Spectral Coherence and Phase-Locking Value Approaches

    Science.gov (United States)

    Lowet, Eric; Roberts, Mark J.; Bonizzi, Pietro; Karel, Joël; De Weerd, Peter

    2016-01-01

    Synchronization or phase-locking between oscillating neuronal groups is considered to be important for coordination of information among cortical networks. Spectral coherence is a commonly used approach to quantify phase locking between neural signals. We systematically explored the validity of spectral coherence measures for quantifying synchronization among neural oscillators. To that aim, we simulated coupled oscillatory signals that exhibited synchronization dynamics using an abstract phase-oscillator model as well as interacting gamma-generating spiking neural networks. We found that, within a large parameter range, the spectral coherence measure deviated substantially from the expected phase-locking. Moreover, spectral coherence did not converge to the expected value with increasing signal-to-noise ratio. We found that spectral coherence particularly failed when oscillators were in the partially (intermittent) synchronized state, which we expect to be the most likely state for neural synchronization. The failure was due to the fast frequency and amplitude changes induced by synchronization forces. We then investigated whether spectral coherence reflected the information flow among networks measured by transfer entropy (TE) of spike trains. We found that spectral coherence failed to robustly reflect changes in synchrony-mediated information flow between neural networks in many instances. As an alternative approach we explored a phase-locking value (PLV) method based on the reconstruction of the instantaneous phase. As one approach for reconstructing instantaneous phase, we used the Hilbert Transform (HT) preceded by Singular Spectrum Decomposition (SSD) of the signal. PLV estimates have broad applicability as they do not rely on stationarity, and, unlike spectral coherence, they enable more accurate estimations of oscillatory synchronization across a wide range of different synchronization regimes, and better tracking of synchronization-mediated information
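    The following Python sketch illustrates the PLV idea on synthetic narrow-band signals: instantaneous phases are taken from the Hilbert transform, and the PLV is the magnitude of the mean phase-difference phasor. The SSD preprocessing used in the study is omitted here because the toy signals are already narrow-band; all signal parameters are illustrative.

```python
import numpy as np
from scipy.signal import hilbert

# Two noisy 40 Hz oscillations with a partially stable phase relation
# (synthetic stand-ins for the simulated neural signals described above).
fs = 1000
t = np.arange(0, 5, 1 / fs)
phase_noise = np.cumsum(np.random.default_rng(1).normal(0, 0.05, t.size))
x = np.sin(2 * np.pi * 40 * t)
y = np.sin(2 * np.pi * 40 * t + 0.8 + phase_noise)

# Instantaneous phases from the analytic signal (Hilbert transform).
phi_x = np.angle(hilbert(x))
phi_y = np.angle(hilbert(y))

# Phase-locking value: magnitude of the mean unit phasor of the phase difference.
plv = np.abs(np.mean(np.exp(1j * (phi_x - phi_y))))
print(f"PLV = {plv:.3f}")   # 1 = perfect locking, 0 = no consistent relation
```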

  16. Comparison of Standard and Novel Signal Analysis Approaches to Obstructive Sleep Apnoea Classification

    Directory of Open Access Journals (Sweden)

    Aoife eRoebuck

    2015-08-01

    Full Text Available Obstructive sleep apnoea (OSA) is a disorder characterised by repeated pauses in breathing during sleep, which lead to deoxygenation and voiced chokes at the end of each episode. OSA is associated with daytime sleepiness and an increased risk of serious conditions such as cardiovascular disease, diabetes and stroke. Between 2% and 7% of the adult population globally has OSA, but it is estimated that up to 90% of those affected are undiagnosed and untreated. Diagnosis of OSA requires expensive and cumbersome screening. Audio offers a potential non-contact alternative, particularly with the ubiquity of excellent signal processing on every phone. Previous studies have focused on the classification of snoring and apnoeic chokes. However, such approaches require accurate identification of events, which leads to limited accuracy and small study populations. In this work we propose an alternative approach which uses multiscale entropy (MSE) coefficients presented to a classifier to identify disorder in vocal patterns indicative of sleep apnoea. A database of 858 patients was used, the largest reported in this domain. Apnoeic choke, snore, and noise events encoded with speech analysis features were input into a linear classifier. Coefficients of MSE derived from the first 4 hours of each recording were used to train and test a random forest to classify patients as apnoeic or not. Standard speech analysis approaches for event classification achieved an out-of-sample accuracy (Ac) of 76.9% with a sensitivity (Se) of 29.2% and a specificity (Sp) of 88.7%, but high variance. For OSA severity classification, MSE provided an out-of-sample Ac of 79.9%, Se of 66.0% and Sp of 88.8%. Including demographic information improved the MSE-based classification performance to Ac = 80.5%, Se = 69.2%, Sp = 87.9%. These results indicate that audio recordings could be used in screening for OSA, but are generally under-sensitive.
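    A minimal Python sketch of the MSE idea: coarse-grain a signal at increasing scales and compute a naive sample entropy at each scale. The parameters and the toy white-noise input are illustrative and do not reproduce the study's feature pipeline.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Naive O(N^2) sample entropy of a 1-D signal (illustrative only)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def pair_count(length):
        # Count template pairs of the given length whose Chebyshev distance <= r.
        t = np.array([x[i:i + length] for i in range(len(x) - length)])
        d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=2)
        return (np.sum(d <= r) - len(t)) / 2.0   # exclude self-matches

    B, A = pair_count(m), pair_count(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def multiscale_entropy(x, max_scale=5):
    """Coarse-grain the signal at increasing scales, then compute sample entropy."""
    out = []
    for tau in range(1, max_scale + 1):
        n = len(x) // tau
        coarse = np.asarray(x[:n * tau]).reshape(n, tau).mean(axis=1)
        out.append(sample_entropy(coarse))
    return out

# Toy input: white noise, whose sample entropy drops as the scale increases.
rng = np.random.default_rng(0)
print(multiscale_entropy(rng.normal(size=1000), max_scale=4))
```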

  17. A comparison of gap-filling approaches for Landsat-7 satellite data

    KAUST Repository

    Yin, Gaohong

    2017-08-10

    The purpose of this study is to assess the relative performance of four different gap-filling approaches across a range of land-surface conditions, including both homogeneous and heterogeneous areas as well as in scenes with abrupt changes in landscape elements. The techniques considered in this study include: (1) Kriging and co-Kriging; (2) geostatistical neighbourhood similar pixel interpolator (GNSPI); (3) a weighted linear regression (WLR) algorithm; and (4) the direct sampling (DS) method. To examine the impact of image availability and the influence of temporal distance on the selection of input training data (i.e. time separating the training data from the gap-filled target image), input images acquired within the same season (temporally close) as well as in different seasons (temporally far) to the target image were examined, as was the case of using information only within the target image itself. Root mean square error (RMSE), mean spectral angle (MSA), and coefficient of determination (R²) were used as the evaluation metrics to assess the prediction results. In addition, the overall accuracy (OA) and kappa coefficient (κ) were used to assess a land-cover classification based on the gap-filled images. Results show that all of the gap-filling approaches provide satisfactory results for the homogeneous case, with R² > 0.93 for bands 1 and 2 in all cases and R² > 0.80 for bands 3 and 4 in most cases. For the heterogeneous example, GNSPI performs the best, with R² > 0.85 for all tested cases. WLR and GNSPI exhibit equivalent accuracy when a temporally close input image is used (i.e. WLR and GNSPI both have an R² equal to 0.89 for band 1). For the case of abrupt changes in scene elements or in the absence of ancillary data, the DS approach outperforms the other tested methods.
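    A compact Python sketch of the three pixel-level evaluation metrics named above (RMSE, R² and mean spectral angle), applied to toy multi-band reflectance. The aggregation here is simplified relative to the per-band evaluation reported in the study, and the toy data are illustrative.

```python
import numpy as np

def evaluation_metrics(obs, pred):
    """RMSE, coefficient of determination and mean spectral angle between
    observed and gap-filled reflectance (bands along the last axis)."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    rmse = np.sqrt(np.mean((obs - pred) ** 2))
    ss_res = np.sum((obs - pred) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    # Spectral angle per pixel: angle between observed and predicted spectra.
    cos = np.sum(obs * pred, axis=-1) / (
        np.linalg.norm(obs, axis=-1) * np.linalg.norm(pred, axis=-1))
    msa = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))).mean()
    return rmse, r2, msa

# Toy data: 100 pixels x 4 bands of "true" and gap-filled reflectance.
rng = np.random.default_rng(2)
truth = rng.uniform(0.05, 0.4, size=(100, 4))
filled = truth + rng.normal(0, 0.01, size=truth.shape)
print(evaluation_metrics(truth, filled))
```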

  18. Comparison of CTT and Rasch-based approaches for the analysis of longitudinal Patient Reported Outcomes.

    Science.gov (United States)

    Blanchin, Myriam; Hardouin, Jean-Benoit; Le Neel, Tanguy; Kubis, Gildas; Blanchard, Claire; Mirallié, Eric; Sébille, Véronique

    2011-04-15

    Health sciences frequently deal with Patient Reported Outcomes (PRO) data for the evaluation of concepts, in particular health-related quality of life, which cannot be directly measured and are often called latent variables. Two approaches are commonly used for the analysis of such data: Classical Test Theory (CTT) and Item Response Theory (IRT). Longitudinal data are often collected to analyze the evolution of an outcome over time. The most adequate strategy to analyze longitudinal latent variables, which can be either based on CTT or IRT models, remains to be identified. This strategy must take into account the latent characteristic of what PROs are intended to measure as well as the specificity of longitudinal designs. A simple and widely used IRT model is the Rasch model. The purpose of our study was to compare CTT and Rasch-based approaches to analyze longitudinal PRO data regarding type I error, power, and time effect estimation bias. Four methods were compared: the Score and Mixed models (SM) method based on the CTT approach, the Rasch and Mixed models (RM), the Plausible Values (PV), and the Longitudinal Rasch model (LRM) methods all based on the Rasch model. All methods have shown comparable results in terms of type I error, all close to 5 per cent. LRM and SM methods presented comparable power and unbiased time effect estimations, whereas RM and PV methods showed low power and biased time effect estimations. This suggests that RM and PV methods should be avoided to analyze longitudinal latent variables. Copyright © 2010 John Wiley & Sons, Ltd.

  19. Comparison among different downscaling approaches in building water scarcity scenarios in an Alpine basin.

    Science.gov (United States)

    Guyennon, Nicolas; Romano, Emanuele; Mariani, Davide; Bruna Petrangeli, Anna; Portoghese, Ivan

    2014-05-01

    Various downscaling techniques have been developed to bridge the scale gap between global climate models (GCMs) and the finer scales required to assess hydrological impacts of climate change. Although statistical downscaling (SD) has traditionally been seen as an alternative to dynamical downscaling (DD), recent work on statistical downscaling has aimed to combine the benefits of these two approaches. The overall objective of this study is to assess whether a DD processing performed before the SD is able to provide more reliable climate forcing for crop water demand models. The case study presented here focuses on the Maggiore Lake (Alpine region), with a watershed of approximately 4750 km² whose waters are mainly used for irrigation purposes in the Lombardia and Piemonte regions. The fifth-generation ECHAM model from the Max-Planck-Institute for Meteorology was adopted as the GCM. The DD was carried out with the Protheus system (ENEA), while the SD was performed through a monthly quantile-quantile correction of the precipitation data collected in the period 1950-2012 by the 19 rainfall gauges located in the watershed area (some of them not operating continuously during the study period). The relationship between the precipitation regime and the inflow to the reservoir is obtained through a simple multilinear regression model, validated using both precipitation data and inflow measurements to the lake in the period 1996-2012. The same relation has then been applied to the control (20c) and scenario (a1b) simulations downscaled by means of the different downscaling approaches (DD, SD and combined DD-SD). The resulting forcing has been used as input to a daily water balance model taking into account the inflow to the lake, the demand for irrigation and the reservoir management policies. The impact of the different downscaling approaches on the water budget scenarios has been evaluated in terms of occurrence, duration and intensity of water scarcity periods.
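    A minimal sketch of the quantile-quantile correction idea: map each scenario value through the empirical quantiles of the historical model run onto the empirical quantiles of the observations. The gamma-distributed toy data, sample sizes and 100-quantile resolution are assumptions for illustration, not the study's setup (which applied the correction month by month).

```python
import numpy as np

def quantile_mapping(model_hist, obs_hist, model_scenario, n_quantiles=100):
    """Empirical quantile mapping: bias-correct scenario values using the
    historical model and observed distributions."""
    q = np.linspace(0.0, 1.0, n_quantiles)
    model_q = np.quantile(model_hist, q)
    obs_q = np.quantile(obs_hist, q)
    # Position of each scenario value within the model's historical
    # distribution, then read off the corresponding observed value.
    ranks = np.interp(model_scenario, model_q, q)
    return np.interp(ranks, q, obs_q)

# Toy monthly precipitation (mm): a model that is systematically too dry.
rng = np.random.default_rng(3)
obs = rng.gamma(2.0, 40.0, size=600)
model = rng.gamma(2.0, 30.0, size=600)
scenario = rng.gamma(2.0, 32.0, size=240)
corrected = quantile_mapping(model, obs, scenario)
print(f"raw scenario mean: {scenario.mean():.1f} mm, corrected: {corrected.mean():.1f} mm")
```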

  20. Comparison of different deep learning approaches for parotid gland segmentation from CT images

    Science.gov (United States)

    Hänsch, Annika; Schwier, Michael; Gass, Tobias; Morgas, Tomasz; Haas, Benjamin; Klein, Jan; Hahn, Horst K.

    2018-02-01

    The segmentation of target structures and organs at risk is a crucial and very time-consuming step in radiotherapy planning. Good automatic methods can significantly reduce the time clinicians have to spend on this task. Due to its variability in shape and often low contrast to surrounding structures, segmentation of the parotid gland is especially challenging. Motivated by the recent success of deep learning, we study different deep learning approaches for parotid gland segmentation. Particularly, we compare 2D, 2D ensemble and 3D U-Net approaches and find that the 2D U-Net ensemble yields the best results with a mean Dice score of 0.817 on our test data. The ensemble approach reduces false positives without the need for an automatic region of interest detection. We also apply our trained 2D U-Net ensemble to segment the test data of the 2015 MICCAI head and neck auto-segmentation challenge. With a mean Dice score of 0.861, our classifier exceeds the highest mean score in the challenge. This shows that the method generalizes well onto data from independent sites. Since appropriate reference annotations are essential for training but often difficult and expensive to obtain, it is important to know how many samples are needed to properly train a neural network. We evaluate the classifier performance after training with differently sized training sets (50-450) and find that 250 cases (without using extensive data augmentation) are sufficient to obtain good results with the 2D ensemble. Adding more samples does not significantly improve the Dice score of the segmentations.
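    For reference, a short Python sketch of the Dice score and of averaging the probability maps of several (here hypothetical) 2D U-Nets into an ensemble prediction; the random toy volumes stand in for real CT segmentations and are not related to the study's data.

```python
import numpy as np

def dice_score(pred, ref):
    """Dice overlap between a binary prediction and a reference segmentation."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    return 2.0 * np.logical_and(pred, ref).sum() / denom if denom else 1.0

# Toy 3D probability maps from three hypothetical 2D U-Nets applied along
# different slice directions; averaging them forms the 2D ensemble.
rng = np.random.default_rng(4)
reference = np.zeros((32, 32, 32), bool)
reference[10:22, 10:22, 10:22] = True
probs = [np.clip(reference + rng.normal(0, 0.3, reference.shape), 0, 1)
         for _ in range(3)]
ensemble = np.mean(probs, axis=0) > 0.5

print("Dice (single model):", round(dice_score(probs[0] > 0.5, reference), 3))
print("Dice (ensemble):    ", round(dice_score(ensemble, reference), 3))
```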

  1. A comparison of gap-filling approaches for Landsat-7 satellite data

    KAUST Repository

    Yin, Gaohong; Mariethoz, Gregoire; Sun, Ying; McCabe, Matthew

    2017-01-01

    The purpose of this study is to assess the relative performance of four different gap-filling approaches across a range of land-surface conditions, including both homogeneous and heterogeneous areas as well as in scenes with abrupt changes in landscape elements. The techniques considered in this study include: (1) Kriging and co-Kriging; (2) geostatistical neighbourhood similar pixel interpolator (GNSPI); (3) a weighted linear regression (WLR) algorithm; and (4) the direct sampling (DS) method. To examine the impact of image availability and the influence of temporal distance on the selection of input training data (i.e. time separating the training data from the gap-filled target image), input images acquired within the same season (temporally close) as well as in different seasons (temporally far) to the target image were examined, as was the case of using information only within the target image itself. Root mean square error (RMSE), mean spectral angle (MSA), and coefficient of determination (R²) were used as the evaluation metrics to assess the prediction results. In addition, the overall accuracy (OA) and kappa coefficient (κ) were used to assess a land-cover classification based on the gap-filled images. Results show that all of the gap-filling approaches provide satisfactory results for the homogeneous case, with R² > 0.93 for bands 1 and 2 in all cases and R² > 0.80 for bands 3 and 4 in most cases. For the heterogeneous example, GNSPI performs the best, with R² > 0.85 for all tested cases. WLR and GNSPI exhibit equivalent accuracy when a temporally close input image is used (i.e. WLR and GNSPI both have an R² equal to 0.89 for band 1). For the case of abrupt changes in scene elements or in the absence of ancillary data, the DS approach outperforms the other tested methods.

  2. Approaching acquisition path analysis formally. A comparison between AP and nonAP states

    International Nuclear Information System (INIS)

    Listner, Clemens; Canty, Morton J.; Niemeyer, Irmgard; Rezniczek, Arnold; Stein, Gotthard

    2014-01-01

    In the past, the IAEA has planned its activities mainly based on the presence of nuclear material. However, resources should be spent where they are needed most. Therefore, a new risk model was developed to change the inspection system to a comprehensive, objective-driven approach in which the State is considered as a whole, the so-called State-level concept (SLC). Acquisition path analysis (APA) is a key element of the State-level concept. By considering the State's nuclear profile, the APA generates a list of acquisition paths ranked by their attractiveness for the State. Currently, this process is mainly based on expert judgment. However, the IAEA's requirements state that APA must be objective, reproducible, transparent, standardized, documented and, as a result, non-discriminatory. A formal approach fulfilling these requirements was set up by the authors in the past [1]. This methodology is based on a three-step approach. The process starts in the first step with the parametrization of the network. In the second step, the network is analyzed in order to find all acquisition paths for a State. Finally, game theory is used in the third step to model the decisions made by the IAEA and the State. In this paper, an advanced methodology will be presented. Improvements were made in the interface definition between the three stages. Also, the general network model was updated and automatic visualization of acquisition paths was accomplished. Furthermore, a prototype implementation will be shown. The advanced methodology was applied to two test non-nuclear-weapon States under comprehensive safeguards agreements with the IAEA. Both States hold complex fuel cycles with only small technical differences. However, only one State is supposed to have the additional protocol (AP) in force. The example will show how the presence of the AP influences the detection probabilities of illegal behavior. As a consequence, these examples also indicate where to best focus

  3. Risk of brain injury during diagnostic coronary angiography: comparison between right and left radial approach.

    Science.gov (United States)

    Pacchioni, Andrea; Versaci, Francesco; Mugnolo, Antonio; Penzo, Carlo; Nikas, Dimitrios; Saccà, Salvatore; Favero, Luca; Agostoni, Pier Francesco; Garami, Zsolt; Prati, Francesco; Reimers, Bernhard

    2013-09-10

    To assess the incidence of silent cerebral embolization when using the transradial approach for diagnostic coronary angiography (DCA). Compared to other vascular access sites, the right transradial approach (RTA) could reduce the amount of brain emboli by avoiding mechanical trauma to the aortic wall caused by catheters and wire, whereas it increases manipulation of catheters in the ascending aorta and has a higher risk of direct embolization into the right common carotid artery. A recent study showed an increased incidence of microembolic signals (MES) with RTA compared to the femoral approach. However, the left transradial approach (LTA) has never been assessed. Forty patients with suspected coronary artery disease were randomized to DCA via RTA (n=20) or LTA (n=20) with contemporaneous bilateral transcranial Doppler monitoring. MES were detected in all patients, with a significantly higher rate in the RTA group (median 61, interquartile range (IQR) 47-105, vs 48, IQR 31-60, p=0.035). MES generated during procedures needing >2 catheters (n=8) were higher than those detected during procedures performed with 2 catheters (n=32; 102, IQR 70-108, vs 48, IQR 33-60, p=0.001). On multivariate analysis, an increasing number of catheters was the only independent predictor of a high incidence of MES (OR 16.4, 95% CI 1.23-219.9, p=0.034, -2LL=26.7). LTA has a lower risk of brain embolization because of the lower number of catheter exchange maneuvers. Since the degree of brain embolism depends on the magnitude of mechanical manipulation, catheter changes should be minimized to reduce the risk of cerebral embolization. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  4. Comparison of total mercury and methylmercury cycling at five sites using the small watershed approach

    Science.gov (United States)

    Shanley, J.B.; Alisa, Mast M.; Campbell, D.H.; Aiken, G.R.; Krabbenhoft, D.P.; Hunt, R.J.; Walker, J.F.; Schuster, P.F.; Chalmers, A.; Aulenbach, Brent T.; Peters, N.E.; Marvin-DiPasquale, M.; Clow, D.W.; Shafer, M.M.

    2008-01-01

    The small watershed approach is well-suited but underutilized in mercury research. We applied the small watershed approach to investigate total mercury (THg) and methylmercury (MeHg) dynamics in streamwater at the five diverse forested headwater catchments of the US Geological Survey Water, Energy, and Biogeochemical Budgets (WEBB) program. At all sites, baseflow THg was generally less than 1 ng L-1 and MeHg was less than 0.2 ng L-1. THg and MeHg concentrations increased with streamflow, so export was primarily episodic. At three sites, THg and MeHg concentration and export were dominated by the particulate fraction in association with POC at high flows, with maximum THg (MeHg) concentrations of 94 (2.56) ng L-1 at Sleepers River, Vermont; 112 (0.75) ng L-1 at Rio Icacos, Puerto Rico; and 55 (0.80) ng L-1 at Panola Mt., Georgia. At Andrews Creek, Colorado, THg export was also episodic but was dominated by filtered THg, as POC concentrations were low. MeHg typically tracked THg so that each site had a fairly constant MeHg/THg ratio, which ranged from near zero at Andrews to 15% at the low-relief, groundwater-dominated Allequash Creek, Wisconsin. Allequash was the only site with filtered MeHg consistently above detection, and the filtered fraction dominated both THg and MeHg. Relative to inputs in wet deposition, watershed retention of THg (minus any subsequent volatilization) was 96.6% at Allequash, 60% at Sleepers, and 83% at Andrews. Icacos had a net export of THg, possibly due to historic gold mining or frequent disturbance from landslides. Quantification and interpretation of Hg dynamics was facilitated by the small watershed approach with emphasis on event sampling. © 2008 Elsevier Ltd. All rights reserved.

  5. Quantifying Neural Oscillatory Synchronization: A Comparison between Spectral Coherence and Phase-Locking Value Approaches.

    Directory of Open Access Journals (Sweden)

    Eric Lowet

    Full Text Available Synchronization or phase-locking between oscillating neuronal groups is considered to be important for coordination of information among cortical networks. Spectral coherence is a commonly used approach to quantify phase locking between neural signals. We systematically explored the validity of spectral coherence measures for quantifying synchronization among neural oscillators. To that aim, we simulated coupled oscillatory signals that exhibited synchronization dynamics using an abstract phase-oscillator model as well as interacting gamma-generating spiking neural networks. We found that, within a large parameter range, the spectral coherence measure deviated substantially from the expected phase-locking. Moreover, spectral coherence did not converge to the expected value with increasing signal-to-noise ratio. We found that spectral coherence particularly failed when oscillators were in the partially (intermittent) synchronized state, which we expect to be the most likely state for neural synchronization. The failure was due to the fast frequency and amplitude changes induced by synchronization forces. We then investigated whether spectral coherence reflected the information flow among networks measured by transfer entropy (TE) of spike trains. We found that spectral coherence failed to robustly reflect changes in synchrony-mediated information flow between neural networks in many instances. As an alternative approach we explored a phase-locking value (PLV) method based on the reconstruction of the instantaneous phase. As one approach for reconstructing instantaneous phase, we used the Hilbert Transform (HT) preceded by Singular Spectrum Decomposition (SSD) of the signal. PLV estimates have broad applicability as they do not rely on stationarity, and, unlike spectral coherence, they enable more accurate estimations of oscillatory synchronization across a wide range of different synchronization regimes, and better tracking of synchronization

  6. On the application of G(Θ) method and its comparison with De Lorenzi's approach

    International Nuclear Information System (INIS)

    Suo Xiaozheng; Combescure, A.

    1992-01-01

    Based on a continuum mechanics formulation, Destuynder recently proposed a Lagrangian method, called the G(Θ) method, for calculating the energy release rate G. The present paper outlines a detailed finite element application of this method for use with 3-noded DKT shell elements and 20-noded isoparametric 3-D elements. The presentation in this paper allows one to calculate G either as an integral part of a finite element analysis or separately in a post-processing program using the stress and strain as input for the calculations. Example computations are given and compared with De Lorenzi's approach for several elastic as well as elasto-plastic crack problems. (orig.)

  7. Comparison of Numerical Approaches to a Steady-State Landscape Equation

    Science.gov (United States)

    Bachman, S.; Peckham, S.

    2008-12-01

    A mathematical model of an idealized fluvial landscape has been developed, in which a land surface will evolve to preserve dendritic channel networks as the surface is lowered. The physical basis for this model stems from the equations for conservation of mass for water and sediment. These equations relate the divergence of the 2D vector fields describing the unit-width discharge of water and sediment to the excess rainrate and tectonic uplift on the land surface. The 2D flow direction is taken to be opposite to the water-surface gradient vector. These notions are combined with a generalized Manning-type flow resistance formula and a generalized sediment transport law to give a closed mathematical system that can, in principle, be solved for all variables of interest: discharge of water and sediment, land surface height, vertically-averaged flow velocity, water depth, and shear stress. The hydraulic geometry equations (Leopold et al., 1964, 1995) are used to express the width, depth, velocity, and slope of river channels as powers of the mean-annual river discharge. Combined, they give the unit-width discharge of the stream as a power, γ, of the water surface slope. The simplified steady-state model takes into account three components among those listed above: conservation of mass for water, flow opposite the gradient, and a slope-discharge exponent γ = -1 to reflect mature drainage networks. The mathematical representation of this model appears as a second-order hyperbolic partial differential equation (PDE) in which the diffusivity is inversely proportional to the square of the local surface slope. The highly nonlinear nature of this PDE has made it very difficult to solve both analytically and numerically. We present simple analytic solutions to this equation which are used to test the validity of the numerical algorithms. We also present three such numerical approaches which have been used in solving the differential equation. The first is based on a

  8. Enhancing sewage sludge dewaterability by bioleaching approach with comparison to other physical and chemical conditioning methods.

    Science.gov (United States)

    Liu, Fenwu; Zhou, Jun; Wang, Dianzhan; Zhou, Lixiang

    2012-01-01

    The sewage sludge conditioning process is critical for improving sludge dewaterability prior to mechanical dewatering. Traditionally, sludge is conditioned by physical or chemical approaches, mostly with the addition of inorganic or organic chemicals. Here we report that bioleaching, an efficient and economical microbial method for the removal of sludge-borne heavy metals, also plays a significant role in enhancing sludge dewaterability. The effects of bioleaching and of physical or chemical approaches on sludge dewaterability were compared. Conditioning by bioleaching with Acidithiobacillus thiooxidans and Acidithiobacillus ferrooxidans was investigated and compared with the effects of hydrothermal (121 °C for 2 hr), microwave (1050 W for 50 sec), ultrasonic (250 W for 2 min), and chemical conditioning (24% ferric chloride and 68% calcium oxide; dry basis). The results show that the specific resistance to filtration (SRF) or capillary suction time (CST) of sludge is decreased by 93.1% or 74.1%, respectively, after fresh sludge is conditioned by bioleaching, which is similar to chemical conditioning with ferric chloride and calcium oxide but much more effective than the other conditioning approaches, including hydrothermal, microwave, and ultrasonic conditioning. Furthermore, after sludge dewatering, the bioleached sludge filtrate contains the lowest chroma (18 times) and the lowest concentrations of COD (542 mg/L), total N (TN, 300 mg/L), NH4(+)-N (208 mg/L), and total P (TP, 2 mg/L), while the hydrothermal process resulted in the highest chroma (660 times) and the highest concentrations of COD (18,155 mg/L), TN (472 mg/L), NH4(+)-N (381 mg/L), and TP (191 mg/L) among the selected conditioning methods. Moreover, unlike chemical conditioning, sludge bioleaching does not result in a significant reduction of organic matter, TN, and TP in the resulting dewatered sludge cake. Therefore, considering sludge dewaterability and the chemical properties of sludge

  9. A comparison of Lagrangian/Eulerian approaches for tracking the kinematics of high deformation solid motion.

    Energy Technology Data Exchange (ETDEWEB)

    Ames, Thomas L.; Farnsworth, Grant V.; Ketcheson, David Isaac; Robinson, Allen Conrad

    2009-09-01

    The modeling of solids is most naturally placed within a Lagrangian framework because it requires constitutive models which depend on knowledge of the original material orientations and subsequent deformations. Detailed kinematic information is needed to ensure material frame indifference which is captured through the deformation gradient F. Such information can be tracked easily in a Lagrangian code. Unfortunately, not all problems can be easily modeled using Lagrangian concepts due to severe distortions in the underlying motion. Either a Lagrangian/Eulerian or a pure Eulerian modeling framework must be introduced. We discuss and contrast several Lagrangian/Eulerian approaches for keeping track of the details of material kinematics.

  10. Comparison of different approaches to determine the bursting threshold at ANKA

    Energy Technology Data Exchange (ETDEWEB)

    Schoenfeldt, Patrik; Hiller, Nicole; Judin, Vitali; Mueller, Anke-Susanne [Karlsruher Institut fuer Technologie (KIT), Karlsruhe (Germany)

    2013-07-01

    The synchrotron light source ANKA at the Karlsruhe Institute of Technology provides dedicated low-α optics. In this mode, bursting of Coherent Synchrotron Radiation (CSR) is observed for bunch charges above a threshold that depends on the beam parameters. This threshold can be determined by several approaches, e.g. bunch lengthening or changes in the THz radiation spectra. This talk compares different methods and their implementation at the ANKA storage ring, outlining their advantages, disadvantages and limitations, including reliability and the possibility of real-time analysis.

  11. Estimation and Comparison of Underground Economy in Croatia and European Union Countries: Fuzzy Logic Approach

    Directory of Open Access Journals (Sweden)

    Kristina Marsic

    2016-06-01

    The purpose of this paper is to address the estimation of the underground economy in three ways. First, we review existing estimates of the size of the underground economy. Second, we apply a novel calculation method for its estimation: fuzzy logic. Third, we calculate an underground economy index for 25 European Union countries and compare it across countries, with a special focus on the Croatian underground economy index. The results indicate that Croatia has the thirteenth largest underground economy among the measured members of the European Union. This study is the first of its kind to use recent data to measure the size of the underground economy in European Union countries by employing a fuzzy logic approach.

  12. Modeling the Static Force of a Festo Pneumatic Muscle Actuator: A New Approach and a Comparison to Existing Models

    Directory of Open Access Journals (Sweden)

    Mirco Martens

    2017-11-01

    Full Text Available In this paper, a new approach for modeling the static force characteristic of Festo pneumatic muscle actuators (PMAs) will be presented. The model is physically motivated and therefore gives a deeper understanding of the Festo PMA. After introducing the new model, it will be validated through a comparison to a measured force map of a Festo DMSP-10-250 and a DMSP-20-300, respectively. It will be shown that the error between the new model and the measured data is below 4.4% for the DMSP-10-250 and below 2.35% for the DMSP-20-300. In addition, the quality of the presented model will be compared to the quality of existing models by comparing the maximum error. It can be seen that the newly introduced model is closer to the measured force characteristic of a Festo PMA than any existing model.

  13. Comparison of quantum-mechanical and semiclassical approaches for an analysis of spin dynamics in quantum dots

    International Nuclear Information System (INIS)

    Petrov, M. Yu.; Yakovlev, S. V.

    2012-01-01

    Two approaches to the description of the spin dynamics of the electron-nuclear system in quantum dots are compared: the quantum-mechanical approach, based on direct diagonalization of the model Hamiltonian, and the semiclassical approach, based on coupled equations for the precession of the mean electron spin and the mean spin of nuclear spin fluctuations. The comparison was done for a model problem describing periodic optical excitation of the electron-nuclear system. The computation results show that the scattering of parameters related to fluctuations of the nuclear spin system leads to the appearance of an ordered state in the system caused by the periodic excitation, and to the effect of electron-spin mode locking in an external magnetic field. It is concluded that both models can qualitatively describe the mode-locking effect, but they give significantly different quantitative results. This may indicate the limited applicability of the precession model for describing spin dynamics in quantum dots in the presence of optical pumping.

  14. Estimating the transformation value using the appraisal function. The Market Comparison Approach with a reduced data table

    Directory of Open Access Journals (Sweden)

    Maurizio d’Amato

    2015-06-01

    Full Text Available This contribution proposes a reworking of the Market Comparison Approach for determining the value of real estate properties that have undergone a transformation. It focuses on determining the value of a property following a transformation, with the aim of determining the value of a plot of land within the application of the extraction method. The results, based on sales data and referring to a real case in the province of Bari, can also be extended to the appraisal of buildings under construction and to the determination of the value of investment property under construction using the future value method.

  15. A global approach to estimate irrigated areas - a comparison between different data and statistics

    Science.gov (United States)

    Meier, Jonas; Zabel, Florian; Mauser, Wolfram

    2018-02-01

    Agriculture is the largest global consumer of water. Irrigated areas constitute 40% of the total area used for agricultural production (FAO, 2014a). Information on their spatial distribution is highly relevant for regional water management and food security. Spatial information on irrigation is highly important for policy and decision makers, who are facing the transition towards more efficient and sustainable agriculture. However, the mapping of irrigated areas still represents a challenge for land use classifications, and existing global data sets differ strongly in their results. The following study tests an existing irrigation map based on statistics and extends the irrigated area using ancillary data. The approach processes and analyzes multi-temporal normalized difference vegetation index (NDVI) SPOT-VGT data and agricultural suitability data - both at a spatial resolution of 30 arcsec - incrementally in a multiple decision tree. It covers the period from 1999 to 2012. The results globally show an 18% larger irrigated area than existing approaches based on statistical data. The largest differences compared to the official national statistics are found in Asia, and particularly in China and India. The additional areas are mainly identified within already known irrigated regions where irrigation is denser than previously estimated. The validation with global and regional products shows the large divergence of existing data sets with respect to the size and distribution of irrigated areas, caused by the spatial resolution, the considered time period, the input data and the assumptions made.
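    A toy Python sketch of one ingredient of such an approach: computing NDVI from red and near-infrared reflectance and flagging a pixel as potentially irrigated when it is agriculturally suitable and its peak NDVI exceeds a threshold. The threshold, reflectance values and decision rule are illustrative simplifications of the multiple decision tree described above, not the study's actual classifier.

```python
import numpy as np

def ndvi(red, nir):
    """Normalized difference vegetation index from red and NIR reflectance."""
    red, nir = np.asarray(red, float), np.asarray(nir, float)
    return (nir - red) / (nir + red)

# Toy multi-temporal reflectance for two pixels (values hypothetical).
ndvi_series = {
    "pixel_a": ndvi([0.30, 0.28, 0.25, 0.27], [0.35, 0.40, 0.38, 0.36]),
    "pixel_b": ndvi([0.25, 0.12, 0.10, 0.20], [0.35, 0.55, 0.60, 0.40]),
}
suitable_for_agriculture = {"pixel_a": True, "pixel_b": True}

# Crude stand-in for one branch of the decision tree: a suitable pixel whose
# peak growing-season NDVI exceeds a threshold is flagged as irrigated.
for name, series in ndvi_series.items():
    irrigated = suitable_for_agriculture[name] and series.max() > 0.5
    print(name, "-> irrigated" if irrigated else "-> not irrigated")
```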

  16. Comparison of propagation-based phase-contrast tomography approaches for the evaluation of dentin microstructure

    Science.gov (United States)

    Deyhle, Hans; Weitkamp, Timm; Lang, Sabrina; Schulz, Georg; Rack, Alexander; Zanette, Irene; Müller, Bert

    2012-10-01

    The complex hierarchical structure of human tooth hard tissues, enamel and dentin, guarantees function for decades. On the micrometer level the dentin morphology is dominated by the tubules, micrometer-narrow channels extending from the dentin-enamel junction to the pulp chamber. Their structure has been extensively studied, mainly with two-dimensional approaches. Dentin tubules are formed during tooth growth and their orientation is linked to the morphology of the nanometer-sized components, which is of interest, for example, for the development of bio-inspired dental fillings. Therefore, a method has to be identified that can access the three-dimensional organization of the tubules, e.g. density and orientation. Tomographic setups with pixel sizes in the sub-micrometer range allow for the three-dimensional visualization of tooth dentin tubules in both phase and absorption contrast modes. We compare high-resolution tomographic scans reconstructed with propagation-based phase retrieval algorithms, as well as reconstructions without phase retrieval, in terms of spatial and density resolution and the rendering of the dentin microstructure, to determine the approach best suited for dentin tubule imaging. Reasonable results were obtained with a single-distance phase retrieval algorithm and a propagation distance of about 75% of the critical distance d²/λ, where d is the size of the smallest objects identifiable in the specimen and λ is the X-ray wavelength.
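    A back-of-the-envelope Python calculation of the critical distance d²/λ and of the 75% working distance mentioned above, using hypothetical values (1 µm feature size, 20 keV X-rays) rather than the actual beamline parameters of the study:

```python
# Critical propagation distance d^2 / lambda for propagation-based phase contrast.
h_c_keV_nm = 1.23984            # h*c in keV*nm
energy_keV = 20.0               # assumed X-ray energy
wavelength_m = h_c_keV_nm / energy_keV * 1e-9

d = 1.0e-6                      # assumed smallest resolvable feature, ~1 micrometre
critical_distance = d**2 / wavelength_m
print(f"critical distance d^2/lambda : {critical_distance:.3f} m")
print(f"75% of critical distance     : {0.75 * critical_distance:.3f} m")
```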

  17. Comparison of transfemoral vs transbrachial approach for angioplastic reconstruction of chronic total aortoiliac occlusion.

    Science.gov (United States)

    Damera, Sheshagiri Rao; Barik, Ramachandra; Prasad, Akula Siva

    2016-09-01

    Angioplasty of chronic total aortoiliac occlusion using the transfemoral approach is controversial. From March 2014 to December 2015, four consecutive patients (4 males; mean age 58.2±6.8 years; age range 51-65 years) underwent angioplasty and stenting of TASC-D occlusions. In all cases, we failed to cross the occlusion from the femoral approach. On switching to left brachial access, angioplasty was performed successfully in all. There were no procedural access-site complications and no clinical evidence of cerebral thromboembolism. Self-expandable stents were implanted in all patients with adequate pre- and post-dilation. Complete revascularisation was achieved in two cases; in the other two cases, the angioplasty to the left aortoiliac carina was staged. Therefore, it is better to avoid the femoral approach as the initial step for crossing chronic TASC 2007 type D lesions (chronic total aortoiliac occlusion, also called extensive aortoiliac disease), because failure to cross retrogradely due to a subintimal guide-wire course can lead to retrograde aortic dissection. Copyright © 2016 Cardiological Society of India. Published by Elsevier B.V. All rights reserved.

  18. A comparison of two coaching approaches to enhance implementation of a recovery-oriented service model.

    Science.gov (United States)

    Deane, Frank P; Andresen, Retta; Crowe, Trevor P; Oades, Lindsay G; Ciarrochi, Joseph; Williams, Virginia

    2014-09-01

    Moving to recovery-oriented service provision in mental health may entail retraining existing staff, as well as training new staff. This represents a substantial burden on organisations, particularly since transfer of training into practice is often poor. Follow-up supervision and/or coaching have been found to improve the implementation and sustainment of new approaches. We compared the effect of two coaching conditions, skills-based and transformational coaching, on the implementation of a recovery-oriented model following training. Training followed by coaching led to significant sustained improvements in the quality of care planning in accordance with the new model over the 12-month study period. No interaction effect was observed between the two conditions. However, post hoc analyses suggest that transformational coaching warrants further exploration. The results support the provision of supervision in the form of coaching in the implementation of a recovery-oriented service model, and suggest the need to better elucidate the mechanisms within different coaching approaches that might contribute to improved care.

  19. Creating a test blueprint for a progress testing program: A paired-comparisons approach.

    Science.gov (United States)

    von Bergmann, HsingChi; Childs, Ruth A

    2018-03-01

    Creating a new testing program requires the development of a test blueprint that will determine how the items on each test form are distributed across possible content areas and practice domains. To achieve validity, the categories of a blueprint are typically based on the judgments of content experts. How experts' judgments are elicited and combined is important to the quality of the resulting test blueprints. Content experts in dentistry participated in a day-long faculty-wide workshop to discuss, refine, and confirm the categories and their relative weights. After reaching agreement on the categories and their definitions, the experts judged the relative importance of each pair of categories, registering their judgments anonymously using iClicker, an audience response system. Judgments were combined in two ways: a simple calculation that could be performed during the workshop and a multidimensional scaling of the judgments performed later. The content experts were able to produce a set of relative weights using this approach. The multidimensional scaling yielded a three-dimensional model with the potential to provide deeper insights into the basis of the experts' judgments. The approach developed and demonstrated in this study can be applied across academic disciplines to elicit and combine content experts' judgments for the development of test blueprints.
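    As an illustration of the "simple calculation" route, the sketch below derives normalized relative weights from a hypothetical pairwise importance matrix using the geometric mean of each row. The categories, judgments and the choice of the geometric-mean rule are assumptions for illustration, not the study's exact procedure.

```python
import numpy as np

# Hypothetical pairwise importance judgments among four blueprint categories
# (A[i, j] = how many times more important category i is than category j).
categories = ["diagnosis", "treatment planning", "prevention", "communication"]
A = np.array([[1.0, 2.0, 3.0, 4.0],
              [1/2, 1.0, 2.0, 3.0],
              [1/3, 1/2, 1.0, 2.0],
              [1/4, 1/3, 1/2, 1.0]])

# Geometric mean of each row, normalized to sum to one, gives relative weights.
gm = A.prod(axis=1) ** (1.0 / A.shape[0])
weights = gm / gm.sum()

for name, w in zip(categories, weights):
    print(f"{name:20s} {w:.2f}")
```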

  20. [Note on the epistemology of clinical pharmacology: comparison with the approach of Karl Popper].

    Science.gov (United States)

    Boissel, J P

    1999-01-01

    Is clinical pharmacology a science or only an application of science? Karl Popper suggested a method to identify science and to sort it out from other logical activities such as metaphysics, whereby the falsification criterion he proposed can apply to the theory in such a way that the theory could be refuted. The clinical pharmacologist's approach requires the build-up of a therapeutic model on the basis of two other models: the physiopathologic and the pharmacological. The three-model construct is a theory. Is it scientific in the Popperian sense? From the therapeutic model, one can predict the efficacy of a drug, and the corresponding statement is tested by a clinical trial. Whatever the original statement, it is modified into a refutable one because of the use of the statistical approach in clinical trials. Furthermore, the predicate represents a hypothesis of the model validity, which will then be confronted with 'reality' through clinical experiment. As the therapeutic model is refutable, clinical pharmacology is a science in the Popperian sense.

  1. Comparison of machine learned approaches for thyroid nodule characterization from shear wave elastography images

    Science.gov (United States)

    Pereira, Carina; Dighe, Manjiri; Alessio, Adam M.

    2018-02-01

    Various Computer Aided Diagnosis (CAD) systems have been developed that characterize thyroid nodules using the features extracted from the B-mode ultrasound images and Shear Wave Elastography images (SWE). These features, however, are not perfect predictors of malignancy. In other domains, deep learning techniques such as Convolutional Neural Networks (CNNs) have outperformed conventional feature extraction based machine learning approaches. In general, fully trained CNNs require substantial volumes of data, motivating several efforts to use transfer learning with pre-trained CNNs. In this context, we sought to compare the performance of conventional feature extraction, fully trained CNNs, and transfer learning based, pre-trained CNNs for the detection of thyroid malignancy from ultrasound images. We compared these approaches applied to a data set of 964 B-mode and SWE images from 165 patients. The data were divided into 80% training/validation and 20% testing data. The highest accuracies achieved on the testing data for the conventional feature extraction, fully trained CNN, and pre-trained CNN were 0.80, 0.75, and 0.83 respectively. In this application, classification using a pre-trained network yielded the best performance, potentially due to the relatively limited sample size and sub-optimal architecture for the fully trained CNN.
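    A minimal PyTorch/torchvision sketch of the transfer-learning variant described above: a pre-trained ResNet-18 is frozen and only a new two-class head is trained. The architecture choice, hyperparameters and the random tensors standing in for ultrasound images are illustrative; the call assumes a recent torchvision (older versions use pretrained=True instead) and that the ImageNet weights can be downloaded.

```python
import torch
import torch.nn as nn
from torchvision import models

# Pre-trained ResNet-18 used as a fixed feature extractor.
backbone = models.resnet18(weights="IMAGENET1K_V1")
for param in backbone.parameters():
    param.requires_grad = False           # freeze the pre-trained layers

# Replace the final fully connected layer with a 2-class head
# (benign vs malignant); only this layer is trained on the ultrasound images.
backbone.fc = nn.Linear(backbone.fc.in_features, 2)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One toy training step on a random batch standing in for B-mode/SWE images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
loss = criterion(backbone(images), labels)
loss.backward()
optimizer.step()
print("loss:", float(loss))
```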

  2. Comparison of phase-constrained parallel MRI approaches: Analogies and differences.

    Science.gov (United States)

    Blaimer, Martin; Heim, Marius; Neumann, Daniel; Jakob, Peter M; Kannengiesser, Stephan; Breuer, Felix A

    2016-03-01

    Phase-constrained parallel MRI approaches have the potential for significantly improving the image quality of accelerated MRI scans. The purpose of this study was to investigate the properties of two different phase-constrained parallel MRI formulations, namely the standard phase-constrained approach and the virtual conjugate coil (VCC) concept utilizing conjugate k-space symmetry. Both formulations were combined with image-domain algorithms (SENSE) and a mathematical analysis was performed. Furthermore, the VCC concept was combined with k-space algorithms (GRAPPA and ESPIRiT) for image reconstruction. In vivo experiments were conducted to illustrate analogies and differences between the individual methods. Furthermore, a simple method of improving the signal-to-noise ratio by modifying the sampling scheme was implemented. For SENSE, the VCC concept was mathematically equivalent to the standard phase-constrained formulation and therefore yielded identical results. In conjunction with k-space algorithms, the VCC concept provided more robust results when only a limited amount of calibration data were available. Additionally, VCC-GRAPPA reconstructed images provided spatial phase information with full resolution. Although both phase-constrained parallel MRI formulations are very similar conceptually, there exist important differences between image-domain and k-space domain reconstructions regarding the calibration robustness and the availability of high-resolution phase information. © 2015 Wiley Periodicals, Inc.
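    The VCC idea can be sketched in a few lines of NumPy: for each physical coil, a virtual coil is formed by point-mirroring its k-space and taking the complex conjugate, doubling the effective coil count. The array shapes, random toy data and mirroring convention below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def virtual_conjugate_coils(kspace):
    """Append virtual coils built from conjugate k-space symmetry: each virtual
    coil is the complex conjugate of the point-mirrored k-space
    (sample index i -> (N - i) mod N along both phase and read directions)."""
    mirrored = np.roll(np.flip(kspace, axis=(-2, -1)), shift=1, axis=(-2, -1))
    return np.concatenate([kspace, np.conj(mirrored)], axis=0)

# Toy data: 4 physical coils, 64 x 64 k-space samples each.
rng = np.random.default_rng(5)
kspace = rng.normal(size=(4, 64, 64)) + 1j * rng.normal(size=(4, 64, 64))
augmented = virtual_conjugate_coils(kspace)
print(kspace.shape, "->", augmented.shape)   # (4, 64, 64) -> (8, 64, 64)
```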

  3. In with the new, out with the old? Comparison of two approaches for psoas compartment block.

    LENUS (Irish Health Repository)

    Mannion, Stephen

    2012-02-03

    We compared the approaches of Winnie and Capdevila for psoas compartment block (PCB) performed by a single operator in terms of contralateral spread, lumbar plexus blockade, and postoperative analgesic efficacy. Sixty patients underwent PCB (0.4 mL/kg levobupivacaine 0.5%) and subsequent spinal anesthesia for primary joint arthroplasty (hip or knee) in a prospective, double-blind study. Patients were randomly allocated to undergo PCB by using the Capdevila (group C; n = 30) or a modified Winnie (group W; n = 30) approach. Contralateral spread and lumbar plexus blockade were assessed 15, 30, and 45 min after PCB. Contralateral spread (bilateral from T4 to S5) and femoral and lateral cutaneous nerve block were evaluated by sensory testing, and obturator motor block was assessed. Bilateral anesthesia occurred in 10 patients in group C and 12 patients in group W (P = 0.8). Blockade of the femoral, lateral cutaneous, and obturator nerves was 90%, 93%, and 80%, respectively, for group C and 93%, 97%, and 90%, respectively, for group W (P > 0.05). No differences were found in PCB procedure time, pain scores, 24-h morphine consumption, or time to first morphine analgesia.

  4. Comparison of three different approaches in the treatment of chronic low back pain.

    Science.gov (United States)

    Koldaş Doğan, Sebnem; Sonel Tur, Birkan; Kurtaiş, Yeşim; Atay, Mesut Birol

    2008-07-01

    Our aim was to investigate the effects of three therapeutic approaches on pain, spinal mobility, disability, psychological state, and aerobic capacity in chronic low back pain. Sixty patients with chronic low back pain were randomized to three groups: group 1, aerobic exercise + home exercise; group 2, physical therapy (hot pack, ultrasound, TENS) + home exercise; group 3, home exercise only. Spinal mobility, pain severity, disability, and psychological disturbance of the patients were assessed before and after the treatment and at 1-month follow-up. Aerobic capacities of the patients were measured before and after treatment. All of the groups showed a similar decrease in pain after the treatment and at 1-month follow-up, and there was no significant difference between the groups. In group 2, a significant decrease in Beck Depression Inventory scores was observed with treatment. At 1-month follow-up, groups 1 and 2 showed significant decreases in General Health Assessment Questionnaire scores. In group 2, there was also a significant improvement in Roland Morris Disability scores. There were similar improvements in exercise test duration and MET levels in all three groups. All three therapeutic approaches were found to be effective in diminishing pain and thus increasing aerobic capacity in patients with chronic low back pain. On the other hand, physical therapy + home exercise was found to be more effective regarding disability and psychological disturbance.

  5. Isolating DNA from sexual assault cases: a comparison of standard methods with a nuclease-based approach

    Science.gov (United States)

    2012-01-01

    Background Profiling sperm DNA present on vaginal swabs taken from rape victims often contributes to identifying and incarcerating rapists. Large amounts of the victim’s epithelial cells contaminate the sperm present on swabs, however, and complicate this process. The standard method for obtaining relatively pure sperm DNA from a vaginal swab is to digest the epithelial cells with Proteinase K in order to solubilize the victim’s DNA, and to then physically separate the soluble DNA from the intact sperm by pelleting the sperm, removing the victim’s fraction, and repeatedly washing the sperm pellet. An alternative approach that does not require washing steps is to digest with Proteinase K, pellet the sperm, remove the victim’s fraction, and then digest the residual victim’s DNA with a nuclease. Methods The nuclease approach has been commercialized in a product, the Erase Sperm Isolation Kit (PTC Labs, Columbia, MO, USA), and five crime laboratories have tested it on semen-spiked female buccal swabs in a direct comparison with their standard methods. Comparisons have also been performed on timed post-coital vaginal swabs and evidence collected from sexual assault cases. Results For the semen-spiked buccal swabs, Erase outperformed the standard methods in all five laboratories and in most cases was able to provide a clean male profile from buccal swabs spiked with only 1,500 sperm. The vaginal swabs taken after consensual sex and the evidence collected from rape victims showed a similar pattern of Erase providing superior profiles. Conclusions In all samples tested, STR profiles of the male DNA fractions obtained with Erase were as good as or better than those obtained using the standard methods. PMID:23211019

  6. A Voting TOPSIS Approach for Determining the Priorities of Areas Damaged in Disasters

    Directory of Open Access Journals (Sweden)

    Yanjin He

    2018-05-01

    In this paper, we investigate the priority determination problem for areas that have been damaged during disasters. Relief distribution should be planned while considering the priorities of the damaged areas. To determine the priorities of the damaged areas, we first define four criteria and then propose a voting TOPSIS (technique for order of preference by similarity to ideal solution) that utilizes fuzzy pair-wise comparison, data envelopment analysis, and TOPSIS. Since the voting TOPSIS is based on the voting results of multiple experts, it can be applied to urgent situations quickly, regardless of the consistency of comparison, the number of alternatives, and the number of participating experts. The proposed approach is validated using a real-world case, and this case analysis shows that the voting TOPSIS is viable.
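
    The ranking core of TOPSIS is compact enough to sketch. The Python snippet below, using made-up area scores, weights, and criterion directions, ranks damaged areas by closeness to the ideal solution; it covers only the plain TOPSIS step, not the authors' full voting procedure with fuzzy pair-wise comparison and data envelopment analysis.

      # Minimal TOPSIS sketch (not the authors' full voting TOPSIS): ranks
      # damaged areas from a decision matrix and criterion weights.
      # The scores, weights and criterion directions below are hypothetical.
      import numpy as np

      def topsis_rank(X, weights, benefit):
          """X: (areas x criteria) scores; weights sum to 1;
          benefit[j] is True when larger values of criterion j are better."""
          X = np.asarray(X, dtype=float)
          # vector-normalize each criterion column, then apply its weight
          V = weights * X / np.linalg.norm(X, axis=0)
          # ideal and anti-ideal points depend on the criterion direction
          ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
          anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
          d_pos = np.linalg.norm(V - ideal, axis=1)
          d_neg = np.linalg.norm(V - anti, axis=1)
          closeness = d_neg / (d_pos + d_neg)   # higher = closer to ideal
          return np.argsort(-closeness), closeness

      # four damaged areas scored on four criteria (hypothetical numbers)
      X = [[80, 3, 0.7, 5], [60, 5, 0.9, 2], [90, 2, 0.4, 8], [70, 4, 0.6, 4]]
      w = np.array([0.4, 0.2, 0.2, 0.2])              # e.g. aggregated expert votes
      benefit = np.array([True, False, True, False])  # direction of each criterion
      order, score = topsis_rank(X, w, benefit)
      print("priority order of areas:", order, "closeness:", np.round(score, 3))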

  7. Comparison of SMOS and SMAP Soil Moisture Retrieval Approaches Using Tower-based Radiometer Data over a Vineyard Field

    Science.gov (United States)

    Miernecki, Maciej; Wigneron, Jean-Pierre; Lopez-Baeza, Ernesto; Kerr, Yann; DeJeu, Richard; DeLannoy, Gabielle J. M.; Jackson, Tom J.; O'Neill, Peggy E.; Shwank, Mike; Moran, Roberto Fernandez

    2014-01-01

    The objective of this study was to compare several approaches to soil moisture (SM) retrieval using L-band microwave radiometry. The comparison was based on a brightness temperature (TB) data set acquired since 2010 by the L-band radiometer ELBARA-II over a vineyard field at the Valencia Anchor Station (VAS) site. ELBARA-II, provided by the European Space Agency (ESA) within the scientific program of the SMOS (Soil Moisture and Ocean Salinity) mission, measures multiangular TB data at horizontal and vertical polarization over a range of incidence angles (30°-60°). Based on a three-year data set (2010-2012), several SM retrieval approaches developed for spaceborne missions including AMSR-E (Advanced Microwave Scanning Radiometer for EOS), SMAP (Soil Moisture Active Passive) and SMOS were compared. The approaches include: the Single Channel Algorithm (SCA) for horizontal (SCA-H) and vertical (SCA-V) polarizations, the Dual Channel Algorithm (DCA), the Land Parameter Retrieval Model (LPRM) and two simplified approaches based on statistical regressions (referred to as 'Mattar' and 'Saleh'). Time series of the vegetation indices required for three of the algorithms (SCA-H, SCA-V and Mattar) were obtained from MODIS observations. The SM retrievals were evaluated against reference SM values estimated from a multiangular 2-Parameter inversion approach. The results obtained with the current baseline algorithms developed for SMAP (SCA-H and -V) are in very good agreement with the reference SM data set derived from the multi-angular observations (R2 around 0.90, RMSE varying between 0.035 and 0.056 m3/m3 for several retrieval configurations). This result showed that, provided the relationship between vegetation optical depth and a remotely-sensed vegetation index can be calibrated, the SCA algorithms can provide results very close to those obtained from multi-angular observations in this study area. The approaches based on statistical regressions provided similar results and the
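
    For orientation, the following Python sketch illustrates the single-channel idea behind SCA-type retrievals using the standard tau-omega radiative transfer model: given an observed brightness temperature, a vegetation optical depth (for instance derived from a vegetation index) and effective temperatures, it inverts the model for the rough-soil reflectivity. Mapping that reflectivity to volumetric soil moisture would additionally require a dielectric mixing model, which is not shown, and every numerical value below is illustrative rather than taken from the study.

      # Hedged sketch of a single-channel (SCA-style) retrieval step with the
      # tau-omega model: given an observed brightness temperature, vegetation
      # optical depth and effective temperatures, invert for the rough-soil
      # reflectivity. A dielectric mixing model (not shown) would then map that
      # reflectivity to volumetric soil moisture. All numbers are illustrative.
      import numpy as np

      def tau_omega_tb(r_soil, tau, omega, t_soil, t_canopy, theta_deg):
          """Forward tau-omega brightness temperature for one polarization."""
          gamma = np.exp(-tau / np.cos(np.radians(theta_deg)))  # canopy transmissivity
          return (t_soil * (1.0 - r_soil) * gamma
                  + t_canopy * (1.0 - omega) * (1.0 - gamma) * (1.0 + gamma * r_soil))

      def retrieve_soil_reflectivity(tb_obs, tau, omega, t_soil, t_canopy, theta_deg):
          """Brute-force inversion of the forward model for r_soil in [0, 1]."""
          r_grid = np.linspace(0.0, 1.0, 2001)
          tb_sim = tau_omega_tb(r_grid, tau, omega, t_soil, t_canopy, theta_deg)
          return r_grid[np.argmin(np.abs(tb_sim - tb_obs))]

      # illustrative values: tau from a vegetation index, omega ~0.05 at L-band
      r = retrieve_soil_reflectivity(tb_obs=255.0, tau=0.12, omega=0.05,
                                     t_soil=295.0, t_canopy=293.0, theta_deg=40.0)
      print(f"retrieved rough-soil reflectivity: {r:.3f}")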

  8. Bayesian comparison of cost-effectiveness of different clinical approaches to diagnose coronary artery disease

    International Nuclear Information System (INIS)

    Patterson, R.E.; Eng, C.; Horowitz, S.F.; Gorlin, R.; Goldstein, S.R.

    1984-01-01

    The objective of this study was to compare the cost-effectiveness of four clinical policies (policies I to IV) in the diagnosis of the presence or absence of coronary artery disease. A model based on Bayes theorem and published clinical data was constructed to make these comparisons. Effectiveness was defined as either the number of patients with coronary disease diagnosed or as the number of quality-adjusted life years extended by therapy after the diagnosis of coronary disease. The following conclusions arise strictly from analysis of the model and may not necessarily be applicable to all situations. As prevalence of coronary disease in the population increased, it caused a linear increase in cost per patient tested, but a hyperbolic decrease in cost per effect, that is, increased cost-effectiveness. Thus, cost-effectiveness of all policies (I to IV) was poor in populations with a prevalence of disease below 10%. Analysis of the model also indicates that at prevalences less than 80%, exercise thallium scintigraphy alone as a first test (policy II) is a more cost-effective initial test than is exercise electrocardiography alone as a first test (policy I) or exercise electrocardiography first combined with thallium imaging as a second test (policy IV). Exercise electrocardiography before thallium imaging (policy IV) is more cost-effective than exercise electrocardiography alone (policy I) at prevalences less than 80%. Noninvasive exercise testing before angiography (policies I, II and IV) is more cost-effective than using coronary angiography as the first and only test (policy III) at prevalences less than 80%. Above a threshold value of prevalence of 80% (for example, patients with typical angina), proceeding to angiography as the first test (policy III) was more cost-effective than initial noninvasive exercise tests (policies I, II and IV).
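
    The bookkeeping behind such a model follows directly from Bayes' theorem. The Python sketch below computes, for a single noninvasive test whose positives are referred to angiography, the post-test probability of disease and the cost per true case found as a function of prevalence; the sensitivity, specificity and costs are hypothetical placeholders rather than the paper's values, but they reproduce the qualitative behaviour described above, with the cost per effect falling hyperbolically as prevalence rises.

      # Illustrative Bayes-theorem bookkeeping: post-test probability of disease
      # and cost per patient correctly diagnosed for one noninvasive test whose
      # positives go on to angiography. Sensitivity, specificity and costs are
      # hypothetical placeholders, not the values used in the paper.

      def post_test_probability(prevalence, sensitivity, specificity):
          """P(disease | positive test) via Bayes' theorem."""
          p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
          return sensitivity * prevalence / p_positive

      def cost_per_diagnosis(prevalence, sensitivity, specificity, cost_test, cost_angio):
          """Cost per true case found when positives are referred to angiography."""
          p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
          expected_cost = cost_test + p_positive * cost_angio   # per patient tested
          cases_found = sensitivity * prevalence                # per patient tested
          return expected_cost / cases_found

      for prev in (0.05, 0.30, 0.80):
          print(f"prevalence {prev:.2f}: "
                f"post-test prob {post_test_probability(prev, 0.85, 0.90):.2f}, "
                f"cost per case {cost_per_diagnosis(prev, 0.85, 0.90, 200.0, 2000.0):,.0f}")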

  9. Comparison of total mercury and methylmercury cycling at five sites using the small watershed approach

    Energy Technology Data Exchange (ETDEWEB)

    Shanley, James B. [US Geological Survey, PO Box 628, Montpelier, VT 05601 (United States)], E-mail: jshanley@usgs.gov; Alisa Mast, M. [US Geological Survey, MS 415 Denver Federal Center, Denver, CO 80225 (United States)], E-mail: mamast@usgs.gov; Campbell, Donald H. [US Geological Survey, MS 415 Denver Federal Center, Denver, CO 80225 (United States)], E-mail: dhcampbe@usgs.gov; Aiken, George R. [US Geological Survey, 3215 Marine Street, Suite E-127, Boulder, CO 80303 (United States)], E-mail: graiken@usgs.gov; Krabbenhoft, David P. [US Geological Survey, 8505 Research Way, Middleton, WI 53562 (United States)], E-mail: dpkrabbe@usgs.gov; Hunt, Randall J. [US Geological Survey, 8505 Research Way, Middleton, WI 53562 (United States)], E-mail: rjhunt@usgs.gov; Walker, John F. [US Geological Survey, 8505 Research Way, Middleton, WI 53562 (United States)], E-mail: jfwalker@usgs.gov; Schuster, Paul F. [US Geological Survey, 3215 Marine Street, Suite E-127, Boulder, CO 80303 (United States)], E-mail: pschuste@usgs.gov; Chalmers, Ann [US Geological Survey, PO Box 628, Montpelier, VT 05601 (United States)], E-mail: chalmers@usgs.gov; Aulenbach, Brent T. [US Geological Survey, 3039 Amwiler Road, Suite 130, Atlanta, GA 30360 (United States)], E-mail: btaulenb@usgs.gov; Peters, Norman E. [US Geological Survey, 3039 Amwiler Road, Suite 130, Atlanta, GA 30360 (United States)], E-mail: nepeters@usgs.gov; Marvin-DiPasquale, Mark [US Geological Survey, 345 Middlefield Rd., MS 480, Menlo Park, CA 94025 (United States)], E-mail: mmarvin@usgs.gov; Clow, David W. [US Geological Survey, MS 415 Denver Federal Center, Denver, CO 80225 (United States)], E-mail: dwclow@usgs.gov; Shafer, Martin M. [Environmental Chemistry and Technology and Wisconsin State Laboratory of Hygiene, University of Wisconsin, Madison, WI 53706 (United States)], E-mail: mmshafer@wisc.edu

    2008-07-15

    The small watershed approach is well-suited but underutilized in mercury research. We applied the small watershed approach to investigate total mercury (THg) and methylmercury (MeHg) dynamics in streamwater at the five diverse forested headwater catchments of the US Geological Survey Water, Energy, and Biogeochemical Budgets (WEBB) program. At all sites, baseflow THg was generally less than 1 ng L⁻¹ and MeHg was less than 0.2 ng L⁻¹. THg and MeHg concentrations increased with streamflow, so export was primarily episodic. At three sites, THg and MeHg concentration and export were dominated by the particulate fraction in association with POC at high flows, with maximum THg (MeHg) concentrations of 94 (2.56) ng L⁻¹ at Sleepers River, Vermont; 112 (0.75) ng L⁻¹ at Rio Icacos, Puerto Rico; and 55 (0.80) ng L⁻¹ at Panola Mt., Georgia. Filtered (<0.7 µm) THg increased more modestly with flow in association with the hydrophobic acid fraction (HPOA) of DOC, with maximum filtered THg concentrations near 5 ng L⁻¹ at both Sleepers and Icacos. At Andrews Creek, Colorado, THg export was also episodic but was dominated by filtered THg, as POC concentrations were low. MeHg typically tracked THg so that each site had a fairly constant MeHg/THg ratio, which ranged from near zero at Andrews to 15% at the low-relief, groundwater-dominated Allequash Creek, Wisconsin. Allequash was the only site with filtered MeHg consistently above detection, and the filtered fraction dominated both THg and MeHg. Relative to inputs in wet deposition, watershed retention of THg (minus any subsequent volatilization) was 96.6% at Allequash, 60% at Sleepers, and 83% at Andrews. Icacos had a net export of THg, possibly due to historic gold mining or frequent disturbance from landslides. Quantification and interpretation of Hg dynamics was facilitated by the small watershed approach with emphasis on event sampling. - High-flow sampling reveals strong contrasts in total

  10. Comparison of beta-binomial regression model approaches to analyze health-related quality of life data.

    Science.gov (United States)

    Najera-Zuloaga, Josu; Lee, Dae-Jin; Arostegui, Inmaculada

    2017-01-01

    Health-related quality of life has become an increasingly important indicator of health status in clinical trials and epidemiological research. Moreover, the study of the relationship of health-related quality of life with patient and disease characteristics has become one of the primary aims of many health-related quality of life studies. Health-related quality of life scores are usually assumed to be distributed as binomial random variables and are often highly skewed. The use of the beta-binomial distribution in the regression context has been proposed to model such data; however, beta-binomial regression has been performed by means of two different approaches in the literature: (i) the beta-binomial distribution with a logistic link; and (ii) hierarchical generalized linear models. None of the existing literature on the analysis of health-related quality of life survey data has compared both approaches in terms of adequacy and regression parameter interpretation. This paper is motivated by the analysis of a real data application of health-related quality of life outcomes in patients with Chronic Obstructive Pulmonary Disease, where the use of the two approaches yields contradictory results in terms of the significance of covariate effects and, consequently, the interpretation of the most relevant factors in health-related quality of life. We explain the results of both methodologies through a simulation study and address the need for practitioners to apply the proper approach in the analysis of health-related quality of life survey data, providing an R package.
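
    As a concrete illustration of approach (i), the Python sketch below fits a beta-binomial model with a logistic link by maximum likelihood to simulated bounded scores. It is a minimal stand-in, not the R package referred to in the abstract, and the data-generating values are invented.

      # Minimal sketch of approach (i): a beta-binomial model with a logistic
      # link fitted by maximum likelihood. The covariate and scores are simulated;
      # this is not the R package referred to in the abstract.
      import numpy as np
      from scipy.optimize import minimize
      from scipy.special import betaln, expit

      rng = np.random.default_rng(0)
      n_items, x = 40, rng.normal(size=200)              # bounded score out of 40 items
      mu_true = expit(0.5 + 0.8 * x)
      p = rng.beta(mu_true * 5.0, (1 - mu_true) * 5.0)   # true precision parameter = 5
      y = rng.binomial(n_items, p)

      def negloglik(params):
          b0, b1, log_phi = params
          mu = expit(b0 + b1 * x)                        # logistic link on the mean
          phi = np.exp(log_phi)                          # precision (alpha + beta)
          alpha, beta = mu * phi, (1 - mu) * phi
          # beta-binomial log-likelihood up to a constant binomial coefficient
          return -np.sum(betaln(y + alpha, n_items - y + beta) - betaln(alpha, beta))

      fit = minimize(negloglik, x0=np.zeros(3), method="Nelder-Mead")
      print("intercept, slope, log-precision:", np.round(fit.x, 2))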

  11. Testing the Kerr black hole hypothesis: Comparison between the gravitational wave and the iron line approaches

    Energy Technology Data Exchange (ETDEWEB)

    Cárdenas-Avendaño, Alejandro [Center for Field Theory and Particle Physics and Department of Physics, Fudan University, 200433 Shanghai (China); Programa de Matemática, Fundación Universitaria Konrad Lorenz, 110231 Bogotá (Colombia); Jiang, Jiachen [Center for Field Theory and Particle Physics and Department of Physics, Fudan University, 200433 Shanghai (China); Bambi, Cosimo, E-mail: bambi@fudan.edu.cn [Center for Field Theory and Particle Physics and Department of Physics, Fudan University, 200433 Shanghai (China); Theoretical Astrophysics, Eberhard-Karls Universität Tübingen, 72076 Tübingen (Germany)

    2016-09-10

    The recent announcement of the detection of gravitational waves by the LIGO/Virgo Collaboration has opened a new window to test the nature of astrophysical black holes. Konoplya & Zhidenko have shown how the LIGO data of GW150914 can constrain possible deviations from the Kerr metric. In this letter, we compare their constraints with those that can be obtained from accreting black holes by fitting their X-ray reflection spectrum, the so-called iron line method. We simulate observations with eXTP, a next generation X-ray mission, finding constraints much stronger than those obtained by Konoplya & Zhidenko. Our results can at least show that, contrary to what is quite commonly believed, it is not obvious that gravitational waves are the most powerful approach to test strong gravity. In the presence of high quality data and with the systematics under control, the iron line method may provide competitive constraints.

  12. NIF: Impacts of chemical accidents and comparison of chemical/radiological accident approaches

    International Nuclear Information System (INIS)

    Lazaro, M.A.; Policastro, A.J.; Rhodes, M.

    1996-01-01

    The US Department of Energy (DOE) proposes to construct and operate the National Ignition Facility (NIF). The goals of the NIF are to (1) achieve fusion ignition in the laboratory for the first time by using inertial confinement fusion (ICF) technology based on an advanced-design neodymium glass solid-state laser, and (2) conduct high-energy-density experiments in support of national security and civilian applications. The primary focus of this paper is worker and public health and safety issues associated with postulated chemical accidents during the operation of NIF. The key findings from the accident analysis will be presented. Although NIF chemical accidents will be emphasized, the important differences between chemical and radiological accident analysis approaches and the metrics for reporting results will be highlighted. These differences are common to EIS facility and transportation accident assessments.

  13. Electronic health records approaches and challenges: a comparison between Malaysia and four East Asian countries.

    Science.gov (United States)

    Abd Ghani, Mohd Khanapi; Bali, Rajeev K; Naguib, Raouf N G; Marshall, Ian M

    2008-01-01

    An integrated Lifetime Health Record (LHR) is fundamental for achieving seamless and continuous access to patient medical information and for the continuum of care. However, this aim has not yet been fully realised, although efforts are actively progressing around the globe. Every stage of the development of LHR initiatives has presented its own challenges, and the best lessons are often those drawn from someone else's experience. This paper presents an overview of the development approaches undertaken by four East Asian countries in implementing a national Electronic Health Record (EHR) in the public health system. The major challenges elicited from the review, including integration efforts, process reengineering, funding, people, and law and regulation, will be presented, compared, discussed and used as lessons learned for the further development of the Malaysian integrated LHR.

  14. Approaches to death and dying: a cultural comparison of Turkey and the United States.

    Science.gov (United States)

    Beaty, Darla D

    2015-01-01

    Three principles that guide the bioethics movement in the United States and other Western societies apply to approaches to death and dying in both the United States and Turkey. These three principles, autonomy, beneficence, and justice, are reflected in the practices of people in both countries. The issue of autonomy is of greater concern to those in the United States, while in Turkey decisions are made entirely with family and physician involvement. Beneficence and justice can be identified as ethical issues in both countries. Similarities in end-of-life experiences are linked to the faith-based beliefs of Islam and Christianity. Differences in sociocultural influences, such as policies about advance directives in the United States, account for differences in end-of-life decision making. This article examines the spiritual, cultural, legal, and political factors that inform the experience of people in Turkey and in the United States when death is at hand.

  15. Comparison of Two Grid Refinement Approaches for High Resolution Regional Climate Modeling: MPAS vs WRF

    Science.gov (United States)

    Leung, L.; Hagos, S. M.; Rauscher, S.; Ringler, T.

    2012-12-01

    This study compares two grid refinement approaches for high-resolution regional climate modeling: a global variable-resolution model and nesting. The global variable-resolution model, the Model for Prediction Across Scales (MPAS), and the limited-area model, the Weather Research and Forecasting (WRF) model, are compared in an idealized aqua-planet context with a focus on the spatial and temporal characteristics of tropical precipitation simulated by the models using the same physics package from the Community Atmosphere Model (CAM4). For MPAS, simulations have been performed with a quasi-uniform resolution global domain at coarse (1 degree) and high (0.25 degree) resolution, and with a variable-resolution domain in which a high-resolution region at 0.25 degree is configured inside a coarse-resolution global domain at 1 degree. Similarly, WRF has been configured to run on coarse (1 degree) and high (0.25 degree) resolution tropical channel domains as well as a nested domain with a high-resolution region at 0.25 degree nested two-way inside the coarse-resolution (1 degree) tropical channel. The variable-resolution and nested simulations are compared against the high-resolution simulations, which serve as the 'virtual reality'. Both MPAS and WRF simulate 20-day Kelvin waves propagating through the high-resolution domains fairly unaffected by the change in resolution. In addition, both models respond to increased resolution with enhanced precipitation. Grid refinement induces zonal asymmetry in precipitation (heating), accompanied by anomalous zonal Walker-like circulations and standing Rossby wave signals. However, there are important differences between the anomalous patterns in MPAS and WRF due to differences in the grid refinement approaches and the sensitivity of model physics to grid resolution. This study highlights the need for 'scale-aware' parameterizations in variable-resolution and nested regional models.

  16. Neuropsychological assessment of individuals with brain tumor: Comparison of approaches used in the classification of impairment

    Directory of Open Access Journals (Sweden)

    Toni Maree Dwan

    2015-03-01

    Approaches to classifying neuropsychological impairment after brain tumor vary according to testing level (individual tests, domains, or a global index) and source of reference (i.e., norms, controls, or premorbid functioning). This study aimed to compare rates of impairment according to different classification approaches. Participants were 44 individuals (57% female) with a primary brain tumor diagnosis (mean age = 45.6 years) and 44 matched control participants (59% female, mean age = 44.5 years). All participants completed a test battery that assesses premorbid IQ (Wechsler Adult Reading Test), attention/processing speed (Digit Span, Trail Making Test A), memory (Hopkins Verbal Learning Test – Revised, Rey-Osterrieth Complex Figure recall) and executive function (Trail Making Test B, Rey-Osterrieth Complex Figure copy, Controlled Oral Word Association Test). Results indicated that, across the different sources of reference, 86-93% of participants were classified as impaired at a test-specific level, 61-73% were classified as impaired at a domain-specific level, and 32-50% were classified as impaired at a global level. Rates of impairment did not significantly differ according to source of reference (p > .05); however, at the individual participant level, classification based on estimated premorbid IQ was often inconsistent with classification based on the norms or controls. Participants with brain tumor performed significantly poorer than matched controls on tests of neuropsychological functioning, including executive function (p = .001) and memory (p < .05). These results highlight the need to examine individuals’ performance across a multi-faceted neuropsychological test battery to avoid over- or under-estimation of impairment.
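
    One common way such classifications are operationalized is sketched below under assumptions of our own: z-scores against a reference mean and standard deviation (from norms, controls, or a premorbid estimate), an impairment cutoff of z <= -1.5, and domain and global indices formed as mean z-scores. The test names, scores and reference values in the Python sketch are invented for illustration.

      # Hedged sketch: impairment classified from z-scores relative to a reference
      # mean/SD, with an assumed cutoff of z <= -1.5 applied at the test, domain
      # and global levels. Test names, scores and reference values are invented.
      import numpy as np

      CUTOFF = -1.5   # assumed impairment threshold in z-units

      # invented scores, oriented so that higher always means better performance
      scores = {"test_a": 8, "test_b": 35, "test_c": 18,
                "test_d": 11, "test_e": 25, "test_f": 30}
      # reference mean and SD per test (norms, controls, or a premorbid estimate)
      reference = {"test_a": (10, 3), "test_b": (50, 10), "test_c": (25, 5),
                   "test_d": (18, 4), "test_e": (30, 6), "test_f": (32, 4)}
      domains = {"attention_speed": ["test_a", "test_b"],
                 "memory": ["test_c", "test_d"],
                 "executive": ["test_e", "test_f"]}

      z = {t: (scores[t] - m) / sd for t, (m, sd) in reference.items()}
      impaired_tests = [t for t, zi in z.items() if zi <= CUTOFF]
      impaired_domains = [d for d, tests in domains.items()
                          if np.mean([z[t] for t in tests]) <= CUTOFF]
      globally_impaired = np.mean(list(z.values())) <= CUTOFF

      print("impaired tests:   ", impaired_tests)
      print("impaired domains: ", impaired_domains)
      print("globally impaired:", bool(globally_impaired))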

  17. Comparison of Early Outcomes with Three Approaches for Combined Coronary Revascularization and Carotid Endarterectomy

    Directory of Open Access Journals (Sweden)

    Arzu Antal Dönmez

    Objective: This study aims to compare three different surgical approaches for combined coronary and carotid artery stenosis as a single-stage procedure and to assess the effect of operative strategy on mortality and neurological complications. Methods: This retrospective study involves 136 patients who had synchronous coronary artery revascularization and carotid endarterectomy in our institution between January 2002 and December 2012. Patients were divided into 3 groups according to the surgical technique used. Group I included 70 patients who had carotid endarterectomy followed by coronary revascularization with the on-pump technique, group II included 29 patients who had carotid endarterectomy followed by coronary revascularization with the off-pump technique, and group III included 37 patients who had coronary revascularization with the on-pump technique followed by carotid endarterectomy under aortic cross-clamp and systemic hypothermia (22-27ºC). Postoperative outcomes were evaluated. Results: The overall early mortality and stroke rates were both 5.1%. There were 3 (4.3%) deaths in group I, 2 (6.9%) deaths in group II and 2 (5.4%) deaths in group III. Stroke was observed in 5 (7.1%) patients in group I and 2 (6.9%) in group II. Stroke was not observed in group III. No statistically significant difference was observed in mortality or stroke rates among the groups. Conclusion: We identified no significant difference in mortality or neurologic complications among the three approaches to synchronous surgery for coronary and carotid disease. Therefore it is impossible to conclude that a single principle might be adapted into standard practice. Patient-specific risk factors and clinical conditions might be important in determining the surgical technique.

  18. Comparison of Customer Preference for Bulk Material Handling Equipment through Fuzzy-AHP Approach

    Science.gov (United States)

    Sen, Kingshuk; Ghosh, Surojit; Sarkar, Bijan

    2017-06-01

    In the present study, customer perception plays an important role in selecting the most suitable equipment from the available alternatives. The study deals with optimizing the selection criteria for material handling equipment, based on the technical specifications available at the user end. The needs of customers are identified and prioritized, leading to the selection of a number of criteria that directly affect the performance of the equipment. To check the consistency of the selection criteria, an AHP-based methodology is first adopted with the identified criteria and available product categories; the users' judgments are then used to derive the priority scales. These judgments express the relative strength or intensity of the impact of the elements of the hierarchy. Subsequently, all the alternatives are ranked for each identified criterion and the corresponding weighted matrices are constructed. The results are compared with the normalized approximate selling prices of the equipment to determine individual cost-benefit ratios, on the basis of which the equipment is ranked. Under the same conditions, the study is repeated with a fuzzy AHP concept, where a fuzzy linguistic approach reduces the uncertainty in decision making that conventional AHP introduces through its lack of a deterministic treatment of judgments. The priority vectors of the categories and criteria are determined separately and multiplied to obtain a composite score; the averages of the fuzzy weights are then determined and the equipment preferences are ranked.
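
    The crisp AHP step that underlies the fuzzy extension can be sketched compactly: derive a priority vector from a pairwise comparison matrix by the row geometric-mean method and check Saaty's consistency ratio. The comparison matrix in the Python sketch below is a made-up example, not the study's data.

      # Sketch of the conventional (crisp) AHP step behind the study's fuzzy AHP:
      # priorities from a pairwise comparison matrix via row geometric means,
      # plus Saaty's consistency ratio. The matrix is a made-up example.
      import numpy as np

      def ahp_priorities(A):
          A = np.asarray(A, dtype=float)
          n = A.shape[0]
          w = np.prod(A, axis=1) ** (1.0 / n)   # row geometric means
          w /= w.sum()                          # priority vector
          lam_max = np.mean((A @ w) / w)        # approximate principal eigenvalue
          ci = (lam_max - n) / (n - 1)          # consistency index
          ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]   # Saaty's random indices
          return w, ci / ri                     # priorities, consistency ratio

      # hypothetical pairwise judgments for four selection criteria
      A = [[1,     3,     5,     7],
           [1 / 3, 1,     3,     5],
           [1 / 5, 1 / 3, 1,     3],
           [1 / 7, 1 / 5, 1 / 3, 1]]
      w, cr = ahp_priorities(A)
      # CR < 0.10 is the usual rule of thumb for acceptable consistency
      print("priorities:", np.round(w, 3), "CR =", round(cr, 3))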

  19. Pairwise graphical models for structural health monitoring with dense sensor arrays

    Science.gov (United States)

    Mohammadi Ghazi, Reza; Chen, Justin G.; Büyüköztürk, Oral

    2017-09-01

    Through advances in sensor technology and the development of camera-based measurement techniques, it has become affordable to obtain high spatial resolution data from structures. Although measured datasets become more informative as the number of sensors increases, the spatial dependencies between sensor data increase at the same time. Therefore, appropriate data analysis techniques are needed to handle the inference problem in the presence of these dependencies. In this paper, we propose a novel approach that uses graphical models (GMs) to account for the spatial dependencies between sensor measurements in dense sensor networks or arrays, in order to improve damage localization accuracy in structural health monitoring (SHM) applications. Because there are always unobserved damaged states in this application, the available information is insufficient for learning the GMs. To overcome this challenge, we propose an approximated model that uses the mutual information between sensor measurements to learn the GMs. The study is backed by experimental validation of the method on two test structures. The first is a three-story, two-bay steel model structure instrumented with MEMS accelerometers. The second experimental setup consists of a plate structure and a video camera that measures the displacement field of the plate. Our results show that accounting for the spatial dependencies with the proposed algorithm can significantly improve damage localization accuracy.
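
    A key ingredient of the proposed approximation is the mutual information between sensor measurements. The Python sketch below shows a plug-in histogram estimate of mutual information on synthetic signals from a 'nearby' and a 'distant' sensor; the signals are invented and the paper's actual graphical-model learning is more involved.

      # Hedged sketch of the pairwise statistic the proposed approximation relies
      # on: a plug-in histogram estimate of the mutual information between two
      # sensor channels. The signals are synthetic; the paper's graphical-model
      # learning is more involved than this.
      import numpy as np

      def mutual_information(x, y, bins=16):
          """Plug-in mutual information estimate (in nats) from a 2-D histogram."""
          pxy, _, _ = np.histogram2d(x, y, bins=bins)
          pxy /= pxy.sum()
          px = pxy.sum(axis=1, keepdims=True)
          py = pxy.sum(axis=0, keepdims=True)
          nz = pxy > 0
          return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

      rng = np.random.default_rng(1)
      t = np.linspace(0.0, 10.0, 5000)
      s1 = np.sin(2 * np.pi * 1.3 * t) + 0.3 * rng.normal(size=t.size)  # reference sensor
      s2 = 0.8 * s1 + 0.3 * rng.normal(size=t.size)                     # nearby, dependent sensor
      s3 = rng.normal(size=t.size)                                      # distant, independent sensor
      print("MI(s1, s2) =", round(mutual_information(s1, s2), 3))   # relatively large
      print("MI(s1, s3) =", round(mutual_information(s1, s3), 3))   # close to zero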

  20. Can a pairwise contact potential stabilize native protein folds against decoys obtained by threading?

    Science.gov (United States)

    Vendruscolo, M; Najmanovich, R; Domany, E

    2000-02-01

    We present a method to derive contact energy parameters from large sets of proteins. The basic requirement on which our method is based is that, for each protein in the database, the native contact map has lower energy than all of its decoy conformations obtained by threading. Only when this condition is satisfied can one use the proposed energy function for fold identification. Such a set of parameters can be found (by perceptron learning) if Mp, the number of proteins in the database, is not too large. Other aspects that influence the existence of such a solution are the exact definition of contact and the value of the critical distance Rc below which two residues are considered to be in contact. Another important novel feature of our approach is its ability to determine whether an energy function of some suitable proposed form can or cannot be parameterized in a way that satisfies our basic requirement. As a demonstration of this, we determine the region in the (Rc, Mp) plane in which the problem is solvable, i.e., in which we can find a set of contact parameters that simultaneously stabilize all the native conformations. We show that for large enough databases the contact approximation to the energy cannot stabilize all the native folds, even against the decoys obtained by gapless threading.
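
    The perceptron-learning formulation can be made concrete: write the contact energy as a vector w of contact-type parameters, represent each conformation by its vector n of contact-type counts, and require w·(n_decoy - n_native) > 0 for every decoy obtained by threading. The Python sketch below runs the classic perceptron on synthetic count-difference vectors constructed to admit a solution; it illustrates the learning rule only and does not use real protein data.

      # Sketch of the perceptron-learning idea: contact energies form a vector w,
      # each conformation a vector n of contact-type counts, and every constraint
      # reads w . (n_decoy - n_native) > 0. The difference vectors below are
      # synthetic stand-ins built to admit a solution, not real threading decoys.
      import numpy as np

      rng = np.random.default_rng(2)
      n_types = 210                   # 20x20 symmetric amino-acid contact types
      n_constraints = 500             # one constraint per (native, decoy) pair

      # synthetic count differences with a planted separating solution
      w_planted = rng.normal(size=n_types)
      diffs = (rng.poisson(2, size=(n_constraints, n_types))
               - rng.poisson(2, size=(n_constraints, n_types))).astype(float)
      margins = diffs @ w_planted
      keep = np.abs(margins) > 5.0                    # drop near-degenerate constraints
      diffs = diffs[keep] * np.sign(margins[keep])[:, None]

      w = np.zeros(n_types)           # contact energy parameters to be learned
      for sweep in range(500):
          violated = np.flatnonzero(diffs @ w <= 0)   # native not strictly lowest
          if violated.size == 0:
              print(f"all native maps stabilized after {sweep} sweeps")
              break
          for i in violated:                          # classic perceptron updates
              if diffs[i] @ w <= 0:
                  w += diffs[i]
      else:
          print("no separating contact potential found within the sweep limit")
      print("fraction of constraints satisfied:", float(np.mean(diffs @ w > 0)))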