Novel Cluster Validity Index for FCM Algorithm
Jian Yu; Cui-Xia Li
2006-01-01
How to determine an appropriate number of clusters is very important when implementing a specific clustering algorithm such as c-means or fuzzy c-means (FCM). In the literature, most cluster validity indices originate from the partition or from geometrical properties of the data set. In this paper, the authors develop a novel cluster validity index for FCM, based on the optimality test of FCM. Unlike previous cluster validity indices, this novel index is inherent in FCM itself. Comparison experiments show that the stability index can be used as a cluster validity index for fuzzy c-means.
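The abstract does not reproduce the stability index itself, but the general recipe it relies on — run FCM for several candidate cluster counts and score each resulting fuzzy partition with a validity index — can be sketched with a classic index (Bezdek's partition coefficient) standing in for the paper's measure. The data set, initialization, and choice of index below are illustrative assumptions, not the authors' method:

```python
def fcm(data, c, m=2.0, iters=50):
    """Minimal fuzzy c-means on 1-D data; returns memberships and centers."""
    step = max(1, len(data) // c)
    centers = [data[i * step] for i in range(c)]  # spread-out initial centers
    for _ in range(iters):
        # membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        u = [[1.0 / sum(((abs(x - centers[i]) + 1e-12) /
                         (abs(x - centers[j]) + 1e-12)) ** (2 / (m - 1))
                        for j in range(c))
              for i in range(c)] for x in data]
        # center update: mean of the data weighted by memberships^m
        centers = [sum(u[k][i] ** m * x for k, x in enumerate(data)) /
                   sum(u[k][i] ** m for k in range(len(data)))
                   for i in range(c)]
    return u, centers

def partition_coefficient(u):
    """Bezdek's partition coefficient: closer to 1 means a crisper partition."""
    return sum(x * x for row in u for x in row) / len(u)

# Two well-separated 1-D clusters; the crisper partition wins at c = 2.
data = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]
scores = {k: partition_coefficient(fcm(data, k)[0]) for k in (2, 3)}
best_c = max(scores, key=scores.get)
```

The same loop structure applies to any validity index; only `partition_coefficient` would be replaced by the index under study.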
Muraru, Denisa; Spadotto, Veronica; Cecchetto, Antonella; Romeo, Gabriella; Aruta, Patrizia; Ermacora, Davide; Jenei, Csaba; Cucchini, Umberto; Iliceto, Sabino; Badano, Luigi P
2016-11-01
(i) To validate a new software for right ventricular (RV) analysis by 3D echocardiography (3DE) against cardiac magnetic resonance (CMR); (ii) to assess the accuracy of different measurement approaches; and (iii) to explore any benefits vs. the previous software. We prospectively studied, with 3DE and CMR, 47 patients (14-82 years, 28 men) having a wide range of RV end-diastolic volumes (EDV 82-354 mL at CMR) and ejection fractions (EF 34-81%). Multi-beat RV 3DE data sets were independently analysed with the new software using both automated and manual editing options, as well as with the previous software. RV volume reproducibility was tested in 15 random patients. RV volumes and EF measurements by the new software had an excellent accuracy (bias ± SD: -15 ± 24 mL for EDV; 1.4 ± 4.9% for EF) and reproducibility compared with CMR, provided that the RV borders automatically tracked by the software were systematically edited by the operator. The automated analysis option underestimated the EDV, overestimated the ESV, and largely underestimated the EF (bias ± SD: -17 ± 10%). RV volumes measured with the new software using manual editing showed similar accuracy, but lower inter-observer variability and shorter analysis time (3-5 min) in comparison with the previous software. Novel vendor-independent 3DE software enables an accurate, reproducible and faster quantitation of RV volumes and ejection fraction. Rather than optional, systematic verification of border tracking quality and manual editing are mandatory to ensure accurate 3DE measurements. These findings are relevant for echocardiography laboratories aiming to implement 3DE for RV analysis for both research and clinical purposes. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2015.
Aramburu Ander
2010-11-01
Background: Exon arrays provide a way to measure the expression of different isoforms of genes in an organism. Most of the procedures for dealing with these arrays focus on gene expression or on exon expression. Although the only biological analytes that can properly be assigned a concentration are transcripts, there are very few algorithms that focus on them. The reason is that previously developed summarization methods do not work well if applied to transcripts. In addition, gene structure prediction, i.e., the correspondence between probes and novel isoforms, is a field which is still unexplored. Results: We have modified and adapted a previous algorithm to take advantage of the special characteristics of the Affymetrix exon arrays. The structure and concentration of transcripts - some of them possibly unknown - in microarray experiments were predicted using this algorithm. Simulations showed that the suggested modifications improved both the specificity (SP) and the sensitivity (ST) of the predictions. The algorithm was also applied to different real datasets, showing its effectiveness and concordance with PCR-validated results. Conclusions: The proposed algorithm shows a substantial improvement in performance over the previous version. This improvement is mainly due to the exploitation of the redundancy of the Affymetrix exon arrays. An R package of SPACE with the updated algorithms has been developed and is freely available.
David Maldavsky
2013-08-01
The author first presents a complement to a previous test of convergent validity, then a construct validity test, and finally an external validity test of the David Liberman algorithm (DLA). The first part of the paper focuses on a complementary aspect, the differential sensitivity of the DLA (1) in an external comparison (to other methods), and (2) in an internal comparison (between two ways of using the same method, the DLA). The construct validity test presents the concepts underlying the DLA, their operationalization, and some corrections emerging from several empirical studies we carried out. The external validity test examines the possibility of using the investigation of a single case and its relation with the investigation of a more extended sample.
Univariate time series forecasting algorithm validation
Ismail, Suzilah; Zakaria, Rohaiza; Muda, Tuan Zalizam Tuan
2014-12-01
Forecasting is a complex process which requires expert tacit knowledge to produce accurate forecast values. This complexity contributes to the gap between end users and experts. Automating the process with an algorithm can act as a bridge between them. An algorithm is a well-defined rule for solving a problem. In this study a univariate time series forecasting algorithm was developed in JAVA and validated using SPSS and Excel. Two sets of simulated data (yearly and non-yearly), several univariate forecasting techniques (i.e. Moving Average, Decomposition, Exponential Smoothing, Time Series Regression and ARIMA) and a recent forecasting process (data partitioning, several error measures, recursive evaluation, etc.) were employed. The results of the algorithm successfully tally with the results of SPSS and Excel. This algorithm will benefit not just forecasters but also end users who lack in-depth knowledge of the forecasting process.
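As an illustration of the process described above, the following sketch produces one-step-ahead moving-average forecasts on the held-out part of a series and scores them with standard error measures (MAE, RMSE, MAPE). The simulated series and window length are hypothetical, not taken from the study:

```python
import math

def moving_average_forecast(series, window):
    """One-step-ahead moving-average forecasts for t = window .. end."""
    return [sum(series[t - window:t]) / window
            for t in range(window, len(series))]

def error_measures(actual, forecast):
    """Common forecast-accuracy measures over a holdout partition."""
    n = len(actual)
    errors = [a - f for a, f in zip(actual, forecast)]
    return {
        "MAE": sum(abs(e) for e in errors) / n,
        "RMSE": math.sqrt(sum(e * e for e in errors) / n),
        "MAPE": 100 * sum(abs(e / a) for e, a in zip(errors, actual)) / n,
    }

series = [10, 12, 11, 13, 12, 14, 13, 15]   # toy data
fc = moving_average_forecast(series, window=3)
metrics = error_measures(series[3:], fc)     # score only the forecast part
```

The same `error_measures` function can score any of the other techniques mentioned (decomposition, exponential smoothing, ARIMA), which is how results from different tools can be cross-validated against each other.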
Clinical Validation of Adjusted Corneal Power in Patients with Previous Myopic Lasik Surgery
Vicente J. Camps
2015-01-01
Purpose. To clinically validate a new method for estimating the corneal power (Pc) using a variable keratometric index (nkadj) in eyes with previous laser refractive surgery. Setting. University of Alicante and Medimar International Hospital (Oftalmar), Alicante (Spain). Design. Retrospective case series. Methods. This retrospective study comprised 62 eyes of 62 patients that had undergone myopic LASIK surgery. An algorithm for the calculation of nkadj was used for the estimation of the adjusted keratometric corneal power (Pkadj). This value was compared with the classical keratometric corneal power (Pk), the True Net Power (TNP), and the Gaussian corneal power (PcGauss). Likewise, Pkadj was compared with other previously described methods. Results. Differences between PcGauss and Pc values obtained with all methods evaluated were statistically significant (p<0.01). Differences between Pkadj and PcGauss were at the limit of clinical significance (p<0.01; LoA [−0.33, 0.60] D). Differences between Pkadj and TNP were neither statistically nor clinically significant (p=0.319; LoA [−0.50, 0.44] D). Differences between Pkadj and previously described methods were statistically significant (p<0.01), except with PcHaigisL (p=0.09; LoA [−0.37, 0.29] D). Conclusion. The use of the adjusted keratometric index (nkadj) is a valid method to estimate the central corneal power in corneas with previous myopic laser refractive surgery, providing results comparable to PcHaigisL.
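The per-eye algorithm for nkadj is not reproduced in the abstract, but the keratometric formula it adjusts is standard: P = (nk − 1)/r, with r the anterior corneal radius and nk the keratometric index (classically 1.3375). A sketch, in which the adjusted-index value is purely illustrative:

```python
def keratometric_power(r_mm, nk=1.3375):
    """Corneal power in diopters from anterior radius via P = (nk - 1) / r.

    r_mm is the anterior corneal radius in millimetres; nk is the
    keratometric index (1.3375 is the classical fixed value).
    """
    return (nk - 1) / (r_mm / 1000.0)

# Classical fixed index vs. an illustrative adjusted index. The nkadj
# value here is hypothetical; the paper derives it per eye from an
# algorithm not shown in the abstract.
p_classic = keratometric_power(7.8)             # standard keratometry
p_adjusted = keratometric_power(7.8, nk=1.3319) # hypothetical nkadj
```

After myopic LASIK the fixed index overestimates corneal power, which is why a smaller, adjusted index yields a lower (and more accurate) Pkadj for the same measured radius.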
Improvement and Validation of the BOAT Algorithm
Yingchun Liu
2014-04-01
The main objective of this paper is to improve the BOAT classification algorithm and apply it to credit card big-data analysis. The decision tree is a classification method that can be used to extract important data-class models or to predict future data trends. The BOAT algorithm reduces read and write operations on the data, which improves operating efficiency on large data sets, in line with popular big-data analysis. The work presented here further improves the performance of the algorithm, including over distributed data sources. Credit card data from large banking institutions are used as the test data sets. The improved algorithm is compared and analyzed against the original BOAT algorithm and other classical classification algorithms.
Di Candia, Michele; Asfoor, Ahmed Al; Jessop, Zita M.; Kumiponjera, Devor; Hsieh, Frank; Malata, Charles M.
2012-01-01
Presented in part at the following Academic Meetings: 57th Meeting of the Italian Society of Plastic, Reconstructive and Aesthetic Surgery, September 24-27, 2008, Naples, Italy. 45th Congress of the European Society for Surgical Research (ESSR), June 9-12, 2010, Geneva, Switzerland. British Association of Plastic Reconstructive and Aesthetic Surgeons Summer Scientific Meeting, June 30-July 2, 2010, Sheffield Hallam University, Sheffield, UK. Background: Patients with previous multiple abdominal surgeries are often denied abdominal free flap breast reconstruction because of concerns about flap viability and abdominal wall integrity. We therefore studied their flap and donor site outcomes and compared them to patients with no previous abdominal surgery to find out whether this is a valid contraindication to the use of abdominal tissue. Patients and Methods: Twenty patients with multiple previous abdominal operations who underwent abdominal free flap breast reconstruction by a single surgeon (C.M.M., 2000-2009) were identified and retrospectively compared with a cohort of similar patients without previous abdominal surgery (sequential allocation control group, n = 20). Results: The index and control groups were comparable in age, body mass index, comorbidities, previous chemotherapy, and RT exposure. The index patients had a mean age of 54 years (r, 42-63) and an average body mass index of 27.5 kg/m2 (r, 22-38). The main previous surgeries were Caesarean sections (19), hysterectomies (8), and cholecystectomies (6). They underwent immediate (n = 9) or delayed (n = 11) reconstructions either unilaterally (n = 18) or bilaterally (n = 2) and comprising 9 muscle-sparing free transverse rectus abdominis muscle and 13 deep inferior epigastric perforator flaps. All flaps were successful, and there were no significant differences in flap and donor site outcomes between the 2 groups after an average follow up of 26 months (r, 10-36). Conclusion: Multiple previous abdominal
GCOM-W soil moisture and temperature algorithms and validation
Passive microwave remote sensing of soil moisture has matured over the past decade as a result of the Advanced Microwave Scanning Radiometer (AMSR) program of JAXA. This program has resulted in improved algorithms that have been supported by rigorous validation. Access to the products and the valida...
New validation algorithm for data association in SLAM.
Guerra, Edmundo; Munguia, Rodrigo; Bolea, Yolanda; Grau, Antoni
2013-09-01
In this work, a novel data validation algorithm for a single-camera SLAM system is introduced. A 6-degree-of-freedom monocular SLAM method based on the delayed inverse-depth (DI-D) feature initialization is used as a benchmark. This SLAM methodology has been improved with the introduction of the proposed data association batch validation technique, the highest order hypothesis compatibility test, HOHCT. This new algorithm is based on the evaluation of statistically compatible hypotheses, and a search algorithm designed to exploit the characteristics of delayed inverse-depth technique. In order to show the capabilities of the proposed technique, experimental tests have been compared with classical methods. The results of the proposed technique outperformed the results of the classical approaches.
Benchmarking protein classification algorithms via supervised cross-validation.
Kertész-Farkas, Attila; Dhir, Somdutta; Sonego, Paolo; Pacurar, Mircea; Netoteia, Sergiu; Nijveen, Harm; Kuzniar, Arnold; Leunissen, Jack A M; Kocsor, András; Pongor, Sándor
2008-04-24
Development and testing of protein classification algorithms are hampered by the fact that the protein universe is characterized by groups vastly different in the number of members, in average protein size, similarity within group, etc. Datasets based on traditional cross-validation (k-fold, leave-one-out, etc.) may not give reliable estimates on how an algorithm will generalize to novel, distantly related subtypes of the known protein classes. Supervised cross-validation, i.e., selection of test and train sets according to the known subtypes within a database has been successfully used earlier in conjunction with the SCOP database. Our goal was to extend this principle to other databases and to design standardized benchmark datasets for protein classification. Hierarchical classification trees of protein categories provide a simple and general framework for designing supervised cross-validation strategies for protein classification. Benchmark datasets can be designed at various levels of the concept hierarchy using a simple graph-theoretic distance. A combination of supervised and random sampling was selected to construct reduced size model datasets, suitable for algorithm comparison. Over 3000 new classification tasks were added to our recently established protein classification benchmark collection that currently includes protein sequence (including protein domains and entire proteins), protein structure and reading frame DNA sequence data. We carried out an extensive evaluation based on various machine-learning algorithms such as nearest neighbor, support vector machines, artificial neural networks, random forests and logistic regression, used in conjunction with comparison algorithms, BLAST, Smith-Waterman, Needleman-Wunsch, as well as 3D comparison methods DALI and PRIDE. The resulting datasets provide lower, and in our opinion more realistic estimates of the classifier performance than do random cross-validation schemes. A combination of supervised and
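The supervised selection of train and test sets described above can be illustrated with a toy split that holds out entire subtypes, so the test set contains only novel, distantly related subtypes of the known classes. The record structure and subtype labels below are illustrative assumptions, not the benchmark's actual schema:

```python
def supervised_split(records, holdout_subtypes):
    """Supervised cross-validation split: hold out whole subtypes so the
    test set contains only subtypes never seen during training."""
    train = [r for r in records if r["subtype"] not in holdout_subtypes]
    test = [r for r in records if r["subtype"] in holdout_subtypes]
    return train, test

# Hypothetical protein records: each class has several known subtypes.
records = [
    {"id": 1, "cls": "kinase",   "subtype": "A"},
    {"id": 2, "cls": "kinase",   "subtype": "A"},
    {"id": 3, "cls": "kinase",   "subtype": "B"},
    {"id": 4, "cls": "protease", "subtype": "C"},
    {"id": 5, "cls": "protease", "subtype": "D"},
]
# Classes still appear in training, but subtypes B and D are novel at test time.
train, test = supervised_split(records, holdout_subtypes={"B", "D"})
```

Unlike random k-fold splitting, this guarantees no test protein has a close (same-subtype) relative in the training set, which is what yields the lower, more realistic performance estimates the abstract reports.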
Validation of a Bayesian-based isotope identification algorithm
Sullivan, C.J.; Stinnett, J., E-mail: stinnettjacob@gmail.com
2015-06-01
Handheld radio-isotope identifiers (RIIDs) are widely used in Homeland Security and other nuclear safety applications. However, most commercially available devices have serious problems in their ability to correctly identify isotopes. It has been reported that this flaw is largely due to the overly simplistic identification algorithms on-board the RIIDs. This paper reports on the experimental validation of a new isotope identification algorithm using a Bayesian statistics approach to identify the source while allowing for calibration drift and unknown shielding. We present here results on further testing of this algorithm and a study on the observed variation in the gamma peak energies and areas from a wavelet-based peak identification algorithm.
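A heavily simplified sketch of Bayesian isotope identification: score each library isotope by the Gaussian likelihood of the observed peak energies against its known gamma lines, then normalize into a posterior. The line library, peak-width sigma, and flat prior are illustrative assumptions; the paper's algorithm additionally models calibration drift and unknown shielding, which are omitted here:

```python
import math

# Hypothetical library of characteristic gamma lines (keV); real RIID
# libraries are far larger and include branching ratios.
LIBRARY = {
    "Cs-137": [661.7],
    "Co-60": [1173.2, 1332.5],
    "K-40": [1460.8],
}

def posterior(observed_peaks, sigma=5.0):
    """Posterior over isotopes with a flat prior: each observed peak is
    scored against the nearest library line under a Gaussian of width sigma."""
    scores = {}
    for iso, lines in LIBRARY.items():
        log_like = 0.0
        for peak in observed_peaks:
            d = min(abs(peak - e) for e in lines)  # nearest-line residual
            log_like += -0.5 * (d / sigma) ** 2 - math.log(sigma)
        scores[iso] = math.exp(log_like)
    z = sum(scores.values())
    return {iso: s / z for iso, s in scores.items()}

# Two peaks near 1173 and 1332 keV point strongly at Co-60.
post = posterior([1174.0, 1331.0])
best = max(post, key=post.get)
```

Extending this toward the paper's approach would mean adding nuisance parameters (gain/offset drift, attenuation from shielding) and marginalizing over them instead of scoring raw peak positions.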
Spence, S H
1981-01-01
Seventy convicted young male offenders were videotaped during a 5-min standardized interview with a previously unknown adult. In order to determine the social validity of the behavioral components of social interaction for this population, measures of 13 behaviors were obtained from the tapes. These measures were then correlated with ratings of friendliness, social anxiety, social skills performance, and employability made by four independent adult judges from the same tapes. It was found that measures of eye contact and verbal initiations were correlated significantly with all four criterion rating scales. The frequencies of smiling and speech dysfluencies were both significantly correlated with ratings of friendliness and employability. The amount spoken was found to be a significant predictor of social skills performance whereas the frequency of head movements influenced judgments of social anxiety. The latency of response was negatively correlated with social skills and employability ratings and the frequency of question-asking and interruptions correlated significantly with friendliness, social skills, and employability ratings. Finally, the levels of gestures, gross body movements, and attention feedback responses were not found to influence judgments on any of the criterion scales. The implications of the study for selection of targets for social skills training for adolescent male offenders are discussed.
Algorithms for verbal autopsies: a validation study in Kenyan children.
Quigley, M. A.; Armstrong Schellenberg, J. R.; Snow, R. W.
1996-01-01
The verbal autopsy (VA) questionnaire is a widely used method for collecting information on cause-specific mortality where the medical certification of deaths in childhood is incomplete. This paper discusses review by physicians and expert algorithms as approaches to ascribing cause of death from the VA questionnaire and proposes an alternative, data-derived approach. In this validation study, the relatives of 295 children who had died in hospital were interviewed using a VA questionnaire. The children were assigned causes of death using data-derived algorithms obtained under logistic regression and using expert algorithms. For most causes of death, the data-derived algorithms and expert algorithms yielded similar levels of diagnostic accuracy. However, a data-derived algorithm for malaria gave a sensitivity of 71% (95% CI: 58-84%), which was significantly higher than the sensitivity of 47% obtained under an expert algorithm. The need for exploring this and other ways in which the VA technique can be improved is discussed. The implications of less-than-perfect sensitivity and specificity are explored using numerical examples. Misclassification bias should be taken into consideration when planning and evaluating epidemiological studies. PMID:8706229
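The sensitivity figure and its confidence interval follow from a standard binomial calculation against the hospital-certified gold standard. A sketch with hypothetical counts chosen to reproduce values close to those reported (32 true positives out of 45 gold-standard malaria deaths gives roughly 71%, CI 58-84%):

```python
import math

def sensitivity_ci(true_pos, false_neg, z=1.96):
    """Sensitivity (TP / (TP + FN)) with a normal-approximation 95% CI."""
    n = true_pos + false_neg
    p = true_pos / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p, (p - half_width, p + half_width)

# Hypothetical confusion counts for a data-derived malaria algorithm.
sens, (lo, hi) = sensitivity_ci(true_pos=32, false_neg=13)
```

The same function applied to each cause of death makes expert and data-derived algorithms directly comparable, which is the comparison the study performs.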
Solar occultation with SCIAMACHY: algorithm description and first validation
J. Meyer
2005-01-01
This presentation concentrates on solar occultation measurements with the spaceborne spectrometer SCIAMACHY in the UV-Vis wavelength range. Solar occultation measurements provide unique information about the vertical distribution of atmospheric constituents. For retrieval of vertical trace gas concentration profiles, an algorithm has been developed based on the optimal estimation method. The forward model is capable of simulating the extinction signals of different species as they occur in atmospheric transmission spectra obtained from occultation measurements. Furthermore, correction algorithms have been implemented to address shortcomings of the tangent height pre-processing and inhomogeneities of measured solar spectra. First results of O3 and NO2 vertical profile retrievals have been validated with data from ozone sondes and satellite-based occultation instruments. The validation shows very promising results for SCIAMACHY O3 and NO2 values between 15 and 35 km, with errors of the order of 10% and 15%, respectively.
Validation of an algorithm for planar surgical resection reconstruction
Milano, Federico E.; Ritacco, Lucas E.; Farfalli, Germán L.; Aponte-Tinao, Luis A.; González Bernaldo de Quirós, Fernán; Risk, Marcelo
2012-02-01
Surgical planning followed by computer-assisted intraoperative navigation in orthopaedic oncology for tumor resection has given acceptable results in the last few years. However, the accuracy of preoperative planning and navigation is not yet clear. The aim of this study is to validate a method capable of reconstructing the nearly planar surface generated by the cutting saw in the surgical specimen taken off the patient during the resection procedure. This method estimates an angular and offset deviation that serves as a clinically useful resection accuracy measure. The validation process targets the degree to which the automatic estimation is true, taking as a validation criterion the accuracy of the estimation algorithm. For this purpose a manually estimated gold standard (a bronze standard) data set was built by an expert surgeon. The results show that the manual and the automatic methods consistently provide similar measures.
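The angular deviation between a planned cut plane and the reconstructed resection plane can be computed from the two plane normals. A minimal sketch with made-up points (the study's reconstruction works on segmented specimen surfaces, not three hand-picked points):

```python
import math

def plane_normal(p0, p1, p2):
    """Unit normal of the plane through three 3-D points (cross product)."""
    u = [p1[i] - p0[i] for i in range(3)]
    v = [p2[i] - p0[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    mag = math.sqrt(sum(x * x for x in n))
    return [x / mag for x in n]

def angular_deviation_deg(n_planned, n_actual):
    """Angle between planned and reconstructed cut planes, in degrees."""
    dot = abs(sum(a * b for a, b in zip(n_planned, n_actual)))
    return math.degrees(math.acos(min(1.0, dot)))

planned = plane_normal((0, 0, 0), (1, 0, 0), (0, 1, 0))    # the z = 0 plane
actual = plane_normal((0, 0, 0), (1, 0, 0.1), (0, 1, 0))   # slightly tilted cut
deviation = angular_deviation_deg(planned, actual)
```

An offset deviation would be computed analogously as the distance between the two planes along the planned normal.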
Simonett, Joseph M.; Sohrab, Mahsa A.; Pacheco, Jennifer; Armstrong, Loren L.; Rzhetskaya, Margarita; Smith, Maureen; Geoffrey Hayes, M.; Fawzi, Amani A.
2015-01-01
Age-related macular degeneration (AMD), a multifactorial, neurodegenerative disease, is a leading cause of vision loss. With the rapid advancement of DNA sequencing technologies, many AMD-associated genetic polymorphisms have been identified. Currently, the most time-consuming steps of these studies are patient recruitment and phenotyping. In this study, we describe the development of an automated algorithm to identify neovascular (wet) AMD, non-neovascular (dry) AMD and control subjects using electronic medical record (EMR)-based criteria. Positive predictive value (91.7%) and negative predictive value (97.5%) were calculated using expert chart review as the gold standard to assess algorithm performance. We applied the algorithm to an EMR-linked DNA bio-repository to study previously identified AMD-associated single nucleotide polymorphisms (SNPs), using case/control status determined by the algorithm. Risk alleles of three SNPs, rs1061170 (CFH), rs1410996 (CFH), and rs10490924 (ARMS2), were found to be significantly associated with the AMD case/control status as defined by the algorithm. With the rapid growth of EMR-linked DNA biorepositories, patient selection algorithms can greatly increase the efficiency of genetic association studies. We have found that stepwise validation of such an algorithm can result in reliable cohort selection and, when coupled within an EMR-linked DNA biorepository, replicates previously published AMD-associated SNPs. PMID:26255974
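The reported accuracy measures follow the standard definitions against a chart-review gold standard. A sketch with hypothetical counts chosen so the results land on the reported values (the study's actual validation sample sizes are not given in the abstract):

```python
def predictive_values(tp, fp, tn, fn):
    """PPV = TP/(TP+FP); NPV = TN/(TN+FN), with chart review as truth."""
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv

# Hypothetical validation counts: 44/48 algorithm-positive cases confirmed
# by chart review, 78/80 algorithm-negative subjects confirmed disease-free.
ppv, npv = predictive_values(tp=44, fp=4, tn=78, fn=2)
```

A high PPV matters most here: every false-positive "case" dilutes the genetic association signal when the algorithm-selected cohort is fed into the SNP analysis.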
Linear algorithms for phase retrieval in the Fresnel region: validity conditions
Gureyev, T E
2015-01-01
We describe the relationship between different forms of linearized expressions for the spatial distribution of intensity of X-ray projection images obtained in the Fresnel region. We prove that under the natural validity conditions some of the previously published expressions can be simplified without a loss of accuracy. We also introduce modified validity conditions which are likely to be fulfilled in many relevant practical cases, and which lead to a further significant simplification of the expression for the image-plane intensity, permitting simple non-iterative linear algorithms for the phase retrieval.
The semianalytical cloud retrieval algorithm for SCIAMACHY I. The validation
A. A. Kokhanovsky
2006-01-01
A recently developed cloud retrieval algorithm for the SCanning Imaging Absorption spectroMeter for Atmospheric CHartographY (SCIAMACHY) is briefly presented and validated using independent and well-tested cloud retrieval techniques based on the look-up-table approach for MODerate resolution Imaging Spectroradiometer (MODIS) data. The results of the cloud top height retrievals using measurements in the oxygen A-band by an airborne crossed Czerny-Turner spectrograph and the Global Ozone Monitoring Experiment (GOME) instrument are compared with those obtained from airborne dual photography and retrievals using data from the Along Track Scanning Radiometer (ATSR-2), respectively.
KnoE: A Web Mining Tool to Validate Previously Discovered Semantic Correspondences
Jorge Martinez-Gil; José F. Aldana-Montes
2012-01-01
The problem of matching schemas or ontologies consists of providing corresponding entities in two or more knowledge models that belong to the same domain but have been developed separately. Nowadays there are many techniques and tools for addressing this problem; however, the complex nature of the matching problem makes existing solutions for real situations not fully satisfactory. The Google Similarity Distance has appeared recently. Its purpose is to mine knowledge from the Web using the Google search engine in order to semantically compare text expressions. Our work consists of developing a software application for validating results discovered by schema and ontology matching tools using the philosophy behind this distance. Moreover, we are interested in using not only Google, but other popular search engines with this similarity distance. The results reveal three main facts. Firstly, some web search engines can help us to validate semantic correspondences satisfactorily. Secondly, there are significant differences among the web search engines. And thirdly, the best results are obtained when using combinations of the web search engines that we have studied.
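The distance behind the tool is the Normalized Google Distance of Cilibrasi and Vitányi, computed from search-engine hit counts: NGD(x, y) = (max(log f(x), log f(y)) − log f(x, y)) / (log N − min(log f(x), log f(y))). A sketch with mocked hit counts standing in for live search-engine queries:

```python
import math

def ngd(fx, fy, fxy, n):
    """Normalized Google Distance from hit counts fx, fy, joint count fxy,
    and total indexed pages n. Near 0: terms almost always co-occur;
    larger values: semantically less related."""
    lx, ly, lxy = math.log(fx), math.log(fy), math.log(fxy)
    return (max(lx, ly) - lxy) / (math.log(n) - min(lx, ly))

# Hypothetical hit counts (a real validator would query one or more
# search-engine APIs and could combine the resulting distances).
d_related = ngd(fx=10_000, fy=8_000, fxy=6_000, n=10**10)   # frequent co-occurrence
d_unrelated = ngd(fx=10_000, fy=8_000, fxy=5, n=10**10)     # rare co-occurrence
```

A correspondence proposed by a matching tool would then be accepted when the distance between the two entity labels falls below a chosen threshold; combining distances from several engines, as the paper does, simply means aggregating several such values.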
Development of a validation algorithm for 'present on admission' flagging
Cheng Diana
2009-12-01
Background: The use of routine hospital data for understanding patterns of adverse outcomes has been limited in the past by the fact that pre-existing and post-admission conditions have been indistinguishable. The use of a 'Present on Admission' (POA) indicator to distinguish pre-existing or co-morbid conditions from those arising during the episode of care has been advocated in the US for many years as a tool to support quality assurance activities and improve the accuracy of risk adjustment methodologies. The USA, Australia and Canada now all assign a flag to indicate the timing of onset of diagnoses. For quality improvement purposes, it is the 'not-POA' diagnoses (that is, those acquired in hospital) that are of interest. Methods: Our objective was to develop an algorithm for assessing the validity of assignment of 'not-POA' flags. We undertook expert review of the International Classification of Diseases, 10th Revision, Australian Modification (ICD-10-AM) to identify conditions that could not plausibly be hospital-acquired. The resulting computer algorithm was tested against all diagnoses flagged as complications in the Victorian (Australia) Admitted Episodes Dataset, 2005/06. Measures reported include rates of appropriate assignment of the new Australian 'Condition Onset' flag by ICD chapter, and patterns of invalid flagging. Results: Of 18,418 diagnosis codes reviewed, 93.4% (n = 17,195) reflected agreement on flagging status by at least 2 of 3 reviewers (including 64.4% unanimous agreement; Fleiss' Kappa: 0.61). In tests of the new algorithm, 96.14% of all hospital-acquired diagnosis codes flagged were found to be valid in the Victorian records analysed. A lower proportion of individual codes was judged to be acceptably flagged (76.2%), but this reflected a high proportion of codes used. Conclusion: An indicator variable about the timing of occurrence of diagnoses can greatly expand the use of routinely coded data for hospital quality
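A minimal sketch of the validation idea: flag any diagnosis coded as hospital-acquired ('not POA') whose code cannot plausibly arise during an admission. The three-code table below is a hypothetical stand-in; the real algorithm was built from expert review of the full ICD-10-AM tabular list:

```python
# Hypothetical sample of codes judged implausible as hospital-acquired
# (congenital and perinatal conditions cannot begin during an admission).
NEVER_HOSPITAL_ACQUIRED = {"Q21.0", "Z38.0", "P07.3"}

def invalid_not_poa_flags(episodes):
    """Return (episode, code) pairs flagged as hospital-acquired whose
    code cannot plausibly have onset during the episode of care."""
    return [(e["episode"], e["code"]) for e in episodes
            if not e["poa"] and e["code"] in NEVER_HOSPITAL_ACQUIRED]

episodes = [
    {"episode": 1, "code": "Q21.0", "poa": False},  # congenital defect flagged as acquired: invalid
    {"episode": 2, "code": "J95.8", "poa": False},  # plausible procedural complication
    {"episode": 3, "code": "Q21.0", "poa": True},   # valid: present on admission
]
flagged = invalid_not_poa_flags(episodes)
```

Run over a whole admitted-episodes dataset, the proportion of 'not-POA' flags that survive this check is the validity rate the study reports (96.14% for the Victorian data).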
Fogliata, Antonella; Nicolini, Giorgia; Clivio, Alessandro; Vanetti, Eugenio; Cozzi, Luca [Oncology Institute of Southern Switzerland, Medical Physics Unit, Bellinzona (Switzerland); Mancosu, Pietro, E-mail: afc@iosi.ch [Istituto Clinico Humanitas, Radio-Oncology Department, Milan-Rozzano (Italy)
2011-03-21
A new algorithm, Acuros® XB Advanced Dose Calculation, has been introduced by Varian Medical Systems in the Eclipse planning system for photon dose calculation in external radiotherapy. Acuros XB is based on the solution of the linear Boltzmann transport equation (LBTE). The LBTE describes the macroscopic behaviour of radiation particles as they travel through and interact with matter. The implementation of Acuros XB in Eclipse has not been assessed; therefore, it is necessary to perform these pre-clinical validation tests to determine its accuracy. This paper summarizes the results of comparisons of Acuros XB calculations against measurements and calculations performed with a previously validated dose calculation algorithm, the Anisotropic Analytical Algorithm (AAA). The tasks addressed in this paper are limited to the fundamental characterization of Acuros XB in water for simple geometries. Validation was carried out for four different beams: 6 and 15 MV beams from a Varian Clinac 2100 iX, and 6 and 10 MV 'flattening filter free' (FFF) beams from a TrueBeam linear accelerator. The TrueBeam FFF beams have recently been introduced into clinical practice on general-purpose linear accelerators and have not previously been reported on. Results indicate that Acuros XB accurately reproduces measured and calculated (with AAA) data, and only small deviations were observed for all the investigated quantities. In general, the overall degree of accuracy for Acuros XB in simple geometries can be stated to be within 1% for open beams and within 2% for mechanical wedges. The basic validation of the Acuros XB algorithm was therefore considered satisfactory for both conventional photon beams as well as for FFF beams of new-generation linacs such as the Varian TrueBeam.
Leung, K M; Hasan, A G; Rees, K S; Parker, R G; Legorreta, A P
1999-01-01
The objectives of this study were to validate a claims-based algorithm for identification of patients with newly diagnosed carcinoma of the breast and to optimize the algorithm. Claims data from all females aged 21 years or older who enrolled in a large California health maintenance organization during the study period from October 1, 1994 through March 31, 1996 were analyzed. Medical records of the patients identified through the claims-based algorithm were reviewed to determine whether the patients were correctly identified. The initial algorithm had a positive predictive value of 84% which was similar to the previous study. The percentages of correct identification significantly increased with the patient's age at diagnosis. Other patient demographic characteristics and facility characteristics were not related to the accuracy of the identification. Using a classification tree procedure and additional information from the false-positive cases, the initial algorithm was modified for improvement. The best-modified algorithm had a positive predictive value of 92% while only 0.5% (4/837) of the true-positive cases were excluded. The results once again demonstrated that patients with newly diagnosed carcinomas of the breast can be identified using claims data. These databases provide an efficient and effective tool for performing health services studies on large patient populations.
[Validity of the 24-h previous day physical activity recall (PDPAR-24) in Spanish adolescents].
Cancela, José María; Lago, Joaquín; Ouviña, Lara; Ayán, Carlos
2015-04-01
Introduction: Monitoring the level of physical activity performed by adolescents, its determinants, and its susceptibility to change is essential in order to intervene in the obesity epidemic affecting Spanish society. However, the number of validated questionnaires for assessing physical activity in Spanish adolescents is scarce. Objectives: To evaluate the validity of the 24-h Previous Day Physical Activity Recall (PDPAR-24) questionnaire when applied to the Spanish adolescent population. Method: Students aged 14-15 years from two secondary schools in northern Galicia participated in this study. The record provided by an Actigraph GT3X accelerometer was used as the objective criterion of physical activity performed. Subjects were monitored for one day with the accelerometer, and the self-report questionnaire was administered the following day. Results: A total of 79 students (15.16 ± 0.81 years, 39% female) completed the study. Statistically significant positive correlations of medium to large size were observed in both sexes (r=0.50-0.98) for light and moderate physical activity. The observed correlations were higher as the intensity of the physical activity performed increased. Conclusions: The PDPAR-24 self-report questionnaire can be considered a valid tool for assessing the level of physical activity in Spanish adolescents.
Daniel H Rapoport
Automated microscopy is currently the only method for non-invasive, label-free observation of complex multi-cellular processes such as cell migration, the cell cycle, and cell differentiation. Extracting biological information from a time series of micrographs requires each cell to be recognized and followed through sequential microscopic snapshots. Although recent attempts to automate this process have resulted in ever-improving cell detection rates, manual identification of identical cells is still the most reliable technique. However, its tedious and subjective nature has prevented tracking from becoming a standardized tool for the investigation of cell cultures. Here, we present a novel method to accomplish automated cell tracking with a reliability comparable to manual tracking. Previously, automated cell tracking could not rival the reliability of manual tracking because, in contrast to the human way of solving this task, none of the algorithms had an independent quality control mechanism; they lacked validation. Thus, instead of trying to improve the cell detection or tracking rates, we proceeded from the idea of automatically inspecting the tracking results and accepting only those of high trustworthiness, while rejecting all other results. This validation algorithm works independently of the quality of cell detection and tracking through a systematic search for tracking errors. It is based only on very general assumptions about the spatiotemporal contiguity of cell paths. While traditional tracking often aims to yield genealogic information about single cells, the natural outcome of a validated cell tracking algorithm turns out to be a set of complete, but often unconnected, cell paths, i.e. records of cells from mitosis to mitosis. This is a consequence of the fact that the validation algorithm takes complete paths as the unit of rejection/acceptance. The resulting set of complete paths can be used to automatically extract important biological parameters
Moisseiev, Elad; Barequet, Dana; Zunz, Eran; Barak, Adiel; Mardor, Yael; Last, David; Goez, David; Segal, Zvi; Loewenstein, Anat
2015-09-01
To validate and evaluate the accuracy of an algorithm for the identification of nonmetallic intraocular foreign body composition based on computed tomography and magnetic resonance imaging. An algorithm for the identification of 10 nonmetallic materials based on computed tomography and magnetic resonance imaging had been previously determined in an ex vivo porcine model. Materials were classified into 4 groups (plastic, glass, stone, and wood). The algorithm was tested by 40 ophthalmologists, who completed a questionnaire including 10 sets of computed tomography and magnetic resonance images of eyes with intraocular foreign bodies and were asked to use the algorithm to identify their compositions. Rates of exact material identification and group identification were measured. Exact material identification was achieved in 42.75% of the cases, and correct group identification in 65%. Using the algorithm, 6 of the materials were exactly identified by over 50% of the participants, and 7 were correctly classified according to their groups by over 75% of the participants. The algorithm was validated and was found to enable correct identification of nonmetallic intraocular foreign body composition in the majority of cases. This is the first study to report and validate a clinical tool allowing identification of intraocular foreign body composition based on appearance in computed tomography and magnetic resonance imaging, which was previously impossible.
Weller, Daniel; Shiwakoti, Suvash; Bergholz, Peter; Grohn, Yrjo; Wiedmann, Martin
2015-01-01
Technological advancements, particularly in the field of geographic information systems (GIS), have made it possible to predict the likelihood of foodborne pathogen contamination in produce production environments using geospatial models. Yet, few studies have examined the validity and robustness of such models. This study was performed to test and refine the rules associated with a previously developed geospatial model that predicts the prevalence of Listeria monocytogenes in produce farms in New York State (NYS). Produce fields for each of four enrolled produce farms were categorized into areas of high or low predicted L. monocytogenes prevalence using rules based on a field's available water storage (AWS) and its proximity to water, impervious cover, and pastures. Drag swabs (n = 1,056) were collected from plots assigned to each risk category. Logistic regression, which tested the ability of each rule to accurately predict the prevalence of L. monocytogenes, validated the rules based on water and pasture. Samples collected near water (odds ratio [OR], 3.0) and pasture (OR, 2.9) showed a significantly increased likelihood of L. monocytogenes isolation compared to that for samples collected far from water and pasture. Generalized linear mixed models identified additional land cover factors associated with an increased likelihood of L. monocytogenes isolation, such as proximity to wetlands. These findings validated a subset of previously developed rules that predict L. monocytogenes prevalence in produce production environments. This suggests that GIS and geospatial models can be used to accurately predict L. monocytogenes prevalence on farms and can be used prospectively to minimize the risk of preharvest contamination of produce. PMID:26590280
Jung, Yeonjin; Kim, Jhoon; Kim, Woogyung; Boesch, Hartmut; Goo, Tae-Young; Cho, Chunho
2017-04-01
Although several CO2 retrieval algorithms have been developed to improve our understanding of the carbon cycle, limitations in spatial coverage and uncertainties due to aerosols and thin cirrus clouds remain a problem for monitoring CO2 concentration globally. Based on an optimal estimation method, the Yonsei CArbon Retrieval (YCAR) algorithm was developed to retrieve the column-averaged dry-air mole fraction of carbon dioxide (XCO2) from Greenhouse Gases Observing SATellite (GOSAT) measurements with optimized a priori CO2 profiles and aerosol models over East Asia. In previous studies, aerosol optical properties (AOPs) were among the most important factors in CO2 retrievals: because AOPs are assumed to be fixed parameters during the retrieval process, they can result in XCO2 retrieval errors of up to 2.5 ppm. In this study, to reduce the errors caused by inaccurate aerosol optical information, the YCAR algorithm was improved to take into account aerosol optical properties as well as the aerosol vertical distribution simultaneously. CO2 retrievals with the two different aerosol approaches were analyzed using GOSAT spectra and evaluated through comparison with collocated ground-based observations at several Total Carbon Column Observing Network (TCCON) sites. The improved YCAR algorithm has biases of 0.59±0.48 ppm and 2.16±0.87 ppm at the Saga and Tsukuba sites, respectively, with smaller biases and higher correlation coefficients than the GOSAT operational algorithm. In addition, the XCO2 retrievals will be validated at other TCCON sites and an error analysis will be performed. These results reveal that better aerosol information can improve the accuracy of a CO2 retrieval algorithm and provide more useful XCO2 information with reduced uncertainties. This study is expected to provide useful information for estimating carbon sources and sinks.
Validation of the Saskatoon Falls Prevention Consortium's Falls Screening and Referral Algorithm
Lawson, Sara Nicole; Zaluski, Neal; Petrie, Amanda; Arnold, Cathy; Basran, Jenny
2013-01-01
Purpose: To investigate the concurrent validity of the Saskatoon Falls Prevention Consortium's Falls Screening and Referral Algorithm (FSRA). Method: A total of 29 older adults (mean age 77.7 [SD 4.0] y) residing in an independent-living seniors' complex who met inclusion criteria completed a demographic questionnaire and the components of the FSRA and the Berg Balance Scale (BBS). The FSRA consists of the Elderly Fall Screening Test (EFST) and the Multi-factor Falls Questionnaire (MFQ); it is designed to categorize individuals into low, moderate, or high fall-risk categories to determine appropriate management pathways. A predictive model for probability of fall risk, based on previous research, was used to determine the concurrent validity of the FSRA. Results: The FSRA placed 79% of participants into the low-risk category, whereas the predictive model found the probability of fall risk to range from 0.04 to 0.74, with a mean of 0.35 (SD 0.25). No statistically significant correlation was found between the FSRA and the predictive model for probability of fall risk (Spearman's ρ=0.35, p=0.06). Conclusion: The FSRA lacks concurrent validity relative to a previously established model of fall risk and appears to over-categorize individuals into the low-risk group. Further research on the FSRA as an adequate tool to screen community-dwelling older adults for fall risk is recommended. PMID:24381379
Accuracy Validation for Medical Image Registration Algorithms: a Review
Zhe Liu; Xiang Deng; Guang-zhi Wang
2012-01-01
Accuracy validation is essential to the clinical application of medical image registration techniques. Registration validation remains a challenging problem in practice, mainly due to the lack of a 'ground truth'. In this paper, an overview of current validation methods for medical image registration is presented, with detailed discussion of their benefits and drawbacks. Special focus is on non-rigid registration validation. Promising solutions are also discussed.
Hung, Peter W; Paik, David S; Napel, Sandy; Yee, Judy; Jeffrey, R Brooke; Steinauer-Gebauer, Andreas; Min, Juno; Jathavedam, Ashwin; Beaulieu, Christopher F
2002-02-01
Three bowel distention-measuring algorithms for use at computed tomographic (CT) colonography were developed, validated in phantoms, and applied to a human CT colonographic data set. The three algorithms are the cross-sectional area method, the moving spheres method, and the segmental volume method. Each algorithm effectively quantified distention, but accuracy varied between methods. Clinical feasibility was demonstrated. Depending on the desired spatial resolution and accuracy, each algorithm can quantitatively depict colonic diameter in CT colonography.
External validation of a claims-based algorithm for classifying kidney-cancer surgeries
Deapen Dennis
2009-06-01
Abstract Background Unlike other malignancies, there is no literature supporting the accuracy of medical claims data for identifying surgical treatments among patients with kidney cancer. We sought to externally validate a previously published Medicare-claims-based algorithm for classifying surgical treatments among patients with early-stage kidney cancer. To achieve this aim, we compared procedure assignments based on Medicare claims with the type of surgery specified in SEER registry data and clinical operative reports. Methods Using linked SEER-Medicare data, we calculated the agreement between Medicare claims and SEER data for identification of cancer-directed surgery among 6,515 patients diagnosed with early-stage kidney cancer. Next, for a subset of 120 cases, we determined the agreement between the claims algorithm and the medical record. Finally, using the medical record as the reference standard, we calculated the sensitivity, specificity, and positive and negative predictive values of the claims algorithm. Results Among 6,515 cases, Medicare claims and SEER data identified 5,483 (84.1%) and 5,774 (88.6%) patients, respectively, who underwent cancer-directed surgery (observed agreement = 93%, κ = 0.69, 95% CI 0.66-0.71). The two data sources demonstrated 97% agreement for classification of partial versus radical nephrectomy (κ = 0.83, 95% CI 0.81-0.86). We observed 97% agreement between the claims algorithm and clinical operative reports; the positive predictive value of the claims algorithm exceeded 90% for identification of both partial nephrectomy and laparoscopic surgery. Conclusion Medicare claims represent an accurate data source for ascertainment of population-based patterns of surgical care among patients with early-stage kidney cancer.
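The agreement statistics in this abstract can be reproduced from a 2x2 cross-classification of the two data sources. A minimal sketch in Python; the cell counts below are hypothetical, chosen only to be consistent with the reported marginals (5,483 and 5,774 surgery patients out of 6,515), not taken from the study:

```python
def cohens_kappa(table):
    """Cohen's kappa for a square inter-rater agreement table.

    table[i][j] = number of cases classified as category i by one
    data source and category j by the other.
    """
    total = sum(sum(row) for row in table)
    observed = sum(table[i][i] for i in range(len(table))) / total
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    # chance agreement expected from the marginal distributions
    expected = sum(r * c for r, c in zip(row_totals, col_totals)) / total ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical counts: rows = Medicare claims (surgery yes/no),
# columns = SEER (surgery yes/no), matching the 5,483 / 5,774 marginals.
table = [[5400, 83], [374, 658]]
kappa = cohens_kappa(table)
```

With these counts the observed agreement is about 93% and kappa comes out near 0.70, close to the reported κ = 0.69.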
Widdifield, Jessica; Bombardier, Claire; Bernatsky, Sasha; Paterson, J Michael; Green, Diane; Young, Jacqueline; Ivers, Noah; Butt, Debra A; Jaakkimainen, R Liisa; Thorne, J Carter; Tu, Karen
2014-06-23
We have previously validated administrative data algorithms to identify patients with rheumatoid arthritis (RA) using rheumatology clinic records as the reference standard. Here we reassessed the accuracy of the algorithms using primary care records as the reference standard. We performed a retrospective chart abstraction study using a random sample of 7500 adult patients under the care of 83 family physicians contributing to the Electronic Medical Record Administrative data Linked Database (EMRALD) in Ontario, Canada. Using physician-reported diagnoses as the reference standard, we computed and compared the sensitivity, specificity, and predictive values for over 100 administrative data algorithms for RA case ascertainment. We identified 69 patients with RA for a lifetime RA prevalence of 0.9%. All algorithms had excellent specificity (>97%). However, sensitivity varied (75-90%) among physician billing algorithms. Despite the low prevalence of RA, most algorithms had adequate positive predictive value (PPV; 51-83%). The algorithm of "[1 hospitalization RA diagnosis code] or [3 physician RA diagnosis codes with ≥1 by a specialist over 2 years]" had a sensitivity of 78% (95% CI 69-88), specificity of 100% (95% CI 100-100), PPV of 78% (95% CI 69-88) and NPV of 100% (95% CI 100-100). Administrative data algorithms for detecting RA patients achieved a high degree of accuracy amongst the general population. However, results varied slightly from our previous report, which can be attributed to differences in the reference standards with respect to disease prevalence, spectrum of disease, and type of comparator group.
An analytic parton shower. Algorithms, implementation and validation
Schmidt, Sebastian
2012-06-15
The realistic simulation of particle collisions is an indispensable tool for interpreting the data measured at high-energy colliders, for example the now-running Large Hadron Collider at CERN. Collisions at these colliders are usually simulated in the form of exclusive events. This thesis focuses on the perturbative QCD part involved in the simulation of these events, particularly parton showers and the consistent combination of parton showers and matrix elements. We present an existing parton shower algorithm for emissions off final-state partons along with some major improvements. Moreover, we present a new parton shower algorithm for emissions off incoming partons. The aim of these particular algorithms, called analytic parton shower algorithms, is to make it possible to calculate the probabilities for branchings and for whole events after the event has been generated. This allows a reweighting procedure to be applied after the events have been simulated. We give a detailed description of the algorithms, their implementation, and the interfaces to the event generator WHIZARD. Moreover, we discuss the implementation of an MLM-type matching procedure and an interface to the shower and hadronization routines from PYTHIA. Finally, we compare several predictions by our implementation to experimental measurements at LEP, Tevatron, and LHC, as well as to predictions obtained using PYTHIA.
A pathway EM-algorithm for estimating vaccine efficacy with a non-monotone validation set.
Yang, Yang; Halloran, M Elizabeth; Chen, Yanjun; Kenah, Eben
2014-09-01
Here, we consider time-to-event data where individuals can experience two or more types of events that are not distinguishable from one another without further confirmation, perhaps by laboratory test. The event type of primary interest can occur only once. The other types of events can recur. If the type of a portion of the events is identified, this forms a validation set. However, even if a random sample of events is tested, confirmations can be missing nonmonotonically, creating uncertainty about whether an individual is still at risk for the event of interest. For example, in a study to estimate the efficacy of an influenza vaccine, an individual may experience a sequence of symptomatic respiratory illnesses caused by various pathogens over the season. Often only a limited number of these episodes are confirmed in the laboratory to be influenza-related or not. We propose two novel methods to estimate covariate effects in this survival setting, and subsequently vaccine efficacy. The first is a pathway expectation-maximization (EM) algorithm that takes into account all pathways of event types in an individual compatible with that individual's test outcomes. The pathway EM iteratively estimates baseline hazards that are used to weight possible event types. The second method is a non-iterative pathway piecewise validation method that does not estimate the baseline hazards. These methods are compared with a previous, simpler method. Simulation studies suggest that mean squared error is lower in the efficacy estimates when the baseline hazards are estimated, especially at higher hazard rates. We use the pathway EM-algorithm to reevaluate the efficacy of a trivalent live-attenuated influenza vaccine during the 2003-2004 influenza season in Temple-Belton, Texas, and compare our results with a previously published analysis. © 2014, The International Biometric Society.
VDA, a method of choosing a better algorithm with fewer validations.
Francesco Strino
The multitude of bioinformatics algorithms designed for performing a particular computational task presents end-users with the problem of selecting the most appropriate computational tool for analyzing their biological data. The choice of the best available method is often based on expensive experimental validation of the results. We propose an approach to design validation sets for method comparison and performance assessment that are effective in terms of cost and discrimination power. Validation Discriminant Analysis (VDA) is a method for designing a minimal validation dataset to allow reliable comparisons between the performances of different algorithms. Implementation of our VDA approach achieves this reduction by selecting predictions that maximize the minimum Hamming distance between algorithmic predictions in the validation set. We show that VDA can be used to correctly rank algorithms according to their performances. These results are further supported by simulations and by realistic algorithmic comparisons in silico. VDA is a novel, cost-efficient method for minimizing the number of validation experiments necessary for reliable performance estimation and fair comparison between algorithms. Our VDA software is available at http://sourceforge.net/projects/klugerlab/files/VDA/
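The selection criterion described in this abstract (maximizing the minimum Hamming distance between algorithmic predictions) can be sketched with a greedy heuristic. This is a simplified illustration assuming binary predictions, not the actual VDA implementation:

```python
from itertools import combinations

def greedy_vda_subset(preds, k):
    """Greedy sketch of VDA-style validation-set design.

    preds: dict mapping algorithm name -> list of binary predictions
    over the same candidate items. Selects k items so that the minimum
    pairwise Hamming distance between algorithms, restricted to the
    chosen items, is greedily maximized, i.e. the chosen validation
    experiments are maximally discriminative between algorithms.
    """
    algos = list(preds)
    n = len(next(iter(preds.values())))

    def min_pairwise(items):
        # smallest disagreement count over all algorithm pairs
        return min(
            sum(preds[a][i] != preds[b][i] for i in items)
            for a, b in combinations(algos, 2)
        )

    chosen, remaining = [], list(range(n))
    for _ in range(k):
        best = max(remaining, key=lambda i: min_pairwise(chosen + [i]))
        chosen.append(best)
        remaining.remove(best)
    return sorted(chosen)

# Three hypothetical algorithms predicting over five items; item 4 is
# uninformative (all algorithms agree), so a good design avoids it.
preds = {"A": [0, 0, 1, 1, 0], "B": [0, 1, 1, 0, 0], "C": [1, 0, 0, 1, 0]}
chosen = greedy_vda_subset(preds, 2)
```

With these toy predictions, two well-chosen items already force every pair of algorithms to disagree at least once, which is what makes the validation set discriminative.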
Fast algorithms for Quadrature by Expansion I: Globally valid expansions
Rachh, Manas; Klöckner, Andreas; O'Neil, Michael
2017-09-01
The use of integral equation methods for the efficient numerical solution of PDE boundary value problems requires two main tools: quadrature rules for the evaluation of layer potential integral operators with singular kernels, and fast algorithms for solving the resulting dense linear systems. Classically, these tools were developed separately. In this work, we present a unified numerical scheme based on coupling Quadrature by Expansion, a recent quadrature method, to a customized Fast Multipole Method (FMM) for the Helmholtz equation in two dimensions. The method allows the evaluation of layer potentials in linear-time complexity, anywhere in space, with a uniform, user-chosen level of accuracy as a black-box computational method. Providing this capability requires geometric and algorithmic considerations beyond the needs of standard FMMs as well as careful consideration of the accuracy of multipole translations. We illustrate the speed and accuracy of our method with various numerical examples.
Wognum, S; Heethuis, S E; Rosario, T; Hoogeman, M S; Bel, A
2014-07-01
The spatial accuracy of deformable image registration (DIR) is important in the implementation of image guided adaptive radiotherapy techniques for cancer in the pelvic region. Validation of algorithms is best performed on phantoms with fiducial markers undergoing controlled large deformations. Excised porcine bladders, exhibiting filling and voiding behavior similar to that of human bladders, provide such an environment. The aim of this study was to determine the spatial accuracy of different DIR algorithms on CT images of ex vivo porcine bladders with radiopaque fiducial markers applied to the outer surface, for a range of bladder volumes, using various accuracy metrics. Five excised porcine bladders with a grid of 30-40 radiopaque fiducial markers attached to the outer wall were suspended inside a water-filled phantom. The bladder was filled with a controlled amount of water with added contrast medium for a range of filling volumes (100-400 ml in steps of 50 ml) using a Luer lock syringe, and CT scans were acquired at each filling volume. DIR was performed for each data set, with the 100 ml bladder as the reference image. Six intensity-based algorithms (optical flow or demons-based) implemented in the MATLAB platform DIRART, a b-spline algorithm implemented in the commercial software package VelocityAI, and a structure-based algorithm (Symmetric Thin Plate Spline Robust Point Matching) were validated, using adequate parameter settings according to previously published values. The resulting deformation vector field from each registration was applied to the contoured bladder structures and to the marker coordinates for spatial error calculation. The quality of the algorithms was assessed by comparing the different error metrics across the different algorithms, and by comparing the effect of deformation magnitude (bladder volume difference) per algorithm, using the Independent Samples Kruskal-Wallis test. The authors found good structure accuracy without dependency on
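The marker-based spatial error calculation mentioned in this abstract is conventionally a target registration error: the deformation vector field maps each fiducial marker, and the mapped coordinates are compared with the marker's true position in the target image. A generic sketch (marker correspondence assumed known; not the authors' exact code):

```python
import math

def target_registration_error(mapped, reference):
    """Mean 3-D Euclidean distance between DIR-mapped fiducial markers
    and their true positions in the target image. Each element of
    `mapped` and `reference` is an (x, y, z) tuple; pairing between
    the two lists is assumed known.
    """
    dists = [math.dist(p, q) for p, q in zip(mapped, reference)]
    return sum(dists) / len(dists)

# Two markers: one off by 1 unit along z, one registered perfectly.
err = target_registration_error(
    [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)],
    [(0.0, 0.0, 1.0), (1.0, 1.0, 1.0)],
)
```

A per-marker distance list (rather than the mean alone) is what allows the kind of per-algorithm, per-volume statistical comparison described above.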
Greiff, G; Pleym, H; Stenseth, R; Wahba, A; Videm, V
2015-07-01
Severe post-operative bleeding in cardiac surgery is associated with increased morbidity and mortality. We hypothesized that variation in genetic susceptibility contributes to post-operative bleeding in addition to clinical factors. We included 1036 adults undergoing cardiac surgery with cardiopulmonary bypass. Two different endpoints for excessive post-operative bleeding were used, either defined as blood loss exceeding 2 ml/kg/h the first 4 h post-operatively or a composite including bleeding, transfusions, and reoperations. Twenty-two single nucleotide polymorphisms (SNPs) central in the coagulation and fibrinolysis systems or in platelet membrane receptors were genotyped, focusing on replication of earlier non-replicated findings and exploration of potential novel associations. Using logistic regression, significant SNPs were added to a model with only clinical variables to evaluate whether the genetic variables provided additional information. Univariate tests identified rs1799809 (located in the promoter region of the PROC gene), rs27646 and rs1062535 (in the ITGA2 gene), rs630014 (in the ABO gene), and rs6048 (in the F9 gene) as significantly associated with excessive post-operative bleeding (P after adjustment with clinical variables, showing almost unchanged odds ratios except for rs1799809 (P = 0.06). Addition of the genetic covariates to a logistic regression model with clinical variables significantly improved the model (P bleeding after cardiac surgery, of which two validated previously published associations. Addition of genetic information to models with only clinical variables improved the models. Our results indicate that common genetic variations significantly influence post-operative bleeding after cardiac surgery. © 2015 The Acta Anaesthesiologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.
Validation of space/ground antenna control algorithms using a computer-aided design tool
Gantenbein, Rex E.
1995-01-01
The validation of the algorithms for controlling the space-to-ground antenna subsystem for Space Station Alpha is an important step in assuring reliable communications. These algorithms have been developed and tested using a simulation environment based on a computer-aided design tool that can provide a time-based execution framework with variable environmental parameters. Our work this summer has involved the exploration of this environment and the documentation of the procedures used to validate these algorithms. We have installed a variety of tools in a laboratory of the Tracking and Communications division for reproducing the simulation experiments carried out on these algorithms to verify that they do meet their requirements for controlling the antenna systems. In this report, we describe the processes used in these simulations and our work in validating the tests used.
Validating Machine Learning Algorithms for Twitter Data Against Established Measures of Suicidality.
Braithwaite, Scott R; Giraud-Carrier, Christophe; West, Josh; Barnes, Michael D; Hanson, Carl Lee
2016-05-16
One of the leading causes of death in the United States (US) is suicide, and new methods of assessment are needed to track its risk in real time. Our objective is to validate the use of machine learning algorithms for Twitter data against empirically validated measures of suicidality in the US population. Using a machine learning algorithm, the Twitter feeds of 135 Mechanical Turk (MTurk) participants were compared with validated, self-report measures of suicide risk. Our findings show that people who are at high suicide risk can be easily differentiated from those who are not by machine learning algorithms, which accurately identified clinically significant suicidality in 92% of cases (sensitivity: 53%, specificity: 97%, positive predictive value: 75%, negative predictive value: 93%). Machine learning algorithms are efficient in differentiating people who are at suicide risk from those who are not. Evidence for suicidality can be measured in nonclinical populations using social media data.
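The four reported accuracy figures are mutually consistent under Bayes' rule once a prevalence is fixed. A quick check; the prevalence of roughly 14.5% is an assumption inferred to make the numbers agree, not a figure from the study:

```python
def predictive_values(sensitivity, specificity, prevalence):
    """PPV and NPV from sensitivity/specificity via Bayes' rule."""
    tp = sensitivity * prevalence              # true-positive mass
    fp = (1 - specificity) * (1 - prevalence)  # false-positive mass
    tn = specificity * (1 - prevalence)        # true-negative mass
    fn = (1 - sensitivity) * prevalence        # false-negative mass
    return tp / (tp + fp), tn / (tn + fn)

# Reported operating point: sensitivity 53%, specificity 97%.
# A prevalence near 14.5% reproduces the paper's PPV of 75% and
# NPV of ~93%; the prevalence itself is assumed, not reported.
ppv, npv = predictive_values(0.53, 0.97, 0.145)
```

This kind of back-calculation is a useful sanity check when reading classifier results: PPV and NPV are prevalence-dependent, so the same model would show very different predictive values in a clinical versus a general population.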
An L1 smoothing spline algorithm with cross validation
Bosworth, Ken W.; Lall, Upmanu
1993-08-01
We propose an algorithm for the computation of L1 (LAD) smoothing splines in the spaces W_M(D). We assume one is given data of the form y_i = f(t_i) + ε_i, i = 1,...,N, with {t_i}_{i=1}^N ⊂ D, where the ε_i are errors with E(ε_i) = 0 and f is assumed to be in W_M. The LAD smoothing spline, for fixed smoothing parameter λ ≥ 0, is defined as the solution, s_λ, of the optimization problem: minimize (1/N) ∑_{i=1}^N |y_i - g(t_i)| + λ J_M(g), where J_M(g) is the seminorm consisting of the sum of the squared L2 norms of the Mth partial derivatives of g. Such an LAD smoothing spline, s_λ, would be expected to give robust smoothed estimates of f in situations where the ε_i are from a distribution with heavy tails. The solution to such a problem is a "thin plate spline" of known form. An algorithm for computing s_λ is given which is based on considering a sequence of quadratic programming problems whose structure is guided by the optimality conditions for the above convex minimization problem, and which are solved readily, if a good initial point is available. The "data driven" selection of the smoothing parameter is achieved by minimizing a CV(λ) score. The combined LAD-CV smoothing spline algorithm is a continuation scheme in λ ↘ 0 taken on the above SQPs parametrized in λ, with the optimal smoothing parameter taken to be that value of λ at which the CV(λ) score first begins to increase. The feasibility of constructing the LAD-CV smoothing spline is illustrated by an application to a problem in environmental data interpretation.
Scherer, Moritz; Cordes, Jonas; Younsi, Alexander; Sahin, Yasemin-Aylin; Götz, Michael; Möhlenbruch, Markus; Stock, Christian; Bösel, Julian; Unterberg, Andreas; Maier-Hein, Klaus; Orakcioglu, Berk
2016-11-01
ABC/2 is still widely accepted for volume estimation in spontaneous intracerebral hemorrhage (ICH) despite known limitations, which potentially accounts for controversial outcome-study results. The aim of this study was to establish and validate an automatic segmentation algorithm, allowing for quick and accurate quantification of ICH. A segmentation algorithm implementing first- and second-order statistics, texture, and threshold features was trained on manual segmentations with a random-forest methodology. Quantitative data of the algorithm, manual segmentations, and ABC/2 were evaluated for agreement in a study sample (n=28) and validated in an independent sample not used for algorithm training (n=30). ABC/2 volumes were significantly larger than either manual or algorithm values, whereas no significant differences were found between the latter. Agreement with manual segmentation was high for the algorithm (concordance correlation coefficient 0.95 [lower 95% confidence interval 0.91]) and superior to that of ABC/2 (concordance correlation coefficient 0.77 [95% confidence interval 0.64]). Validation confirmed agreement in an independent sample (algorithm concordance correlation coefficient 0.99 [95% confidence interval 0.98]; ABC/2 concordance correlation coefficient 0.82 [95% confidence interval 0.72]). The algorithm was closer to the respective manual segmentations than ABC/2 in 52/58 cases (89.7%). An automatic segmentation algorithm for volumetric analysis of spontaneous ICH was developed and validated in this study. Algorithm measurements showed strong agreement with manual segmentations, whereas ABC/2 exhibited its limitations, yielding inaccurate overestimations of ICH volume. The refined, yet time-efficient, quantification of ICH by the algorithm may facilitate evaluation of clot volume as an outcome predictor and trigger for surgical interventions in the clinical setting. © 2016 American Heart Association, Inc.
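For context, ABC/2 is a bedside ellipsoid approximation, and its tendency to overestimate irregular clots follows from that shape assumption. The standard formula, sketched here for reference (this is the generic method, not the study's segmentation code):

```python
import math

def abc_over_2(a_cm, b_cm, c_cm):
    """ABC/2 estimate of hemorrhage volume in mL.

    a: largest clot diameter on axial CT (cm)
    b: largest diameter perpendicular to a on the same slice (cm)
    c: craniocaudal extent, i.e. slice count times slice thickness (cm)
    Approximates the ellipsoid volume 4/3*pi*(a/2)*(b/2)*(c/2), with
    pi/6 (about 0.524) rounded down to 1/2.
    """
    return a_cm * b_cm * c_cm / 2.0

vol = abc_over_2(4.0, 3.0, 2.0)        # ABC/2 estimate for a 4 x 3 x 2 cm clot
exact = math.pi / 6 * 4.0 * 3.0 * 2.0  # exact volume if the clot were a true ellipsoid
```

For a perfect ellipsoid ABC/2 slightly underestimates (1/2 vs pi/6); the systematic overestimation reported above arises because real hemorrhages are irregular and smaller than their bounding ellipsoid.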
Validation and incremental value of the hybrid algorithm for CTO PCI.
Pershad, Ashish; Eddin, Moneer; Girotra, Sudhakar; Cotugno, Richard; Daniels, David; Lombardi, William
2014-10-01
To evaluate the outcomes and benefits of using the hybrid algorithm for chronic total occlusion (CTO) percutaneous coronary intervention (PCI). The hybrid algorithm harmonizes antegrade and retrograde techniques for performing CTO PCI. It has the potential to increase success rates and improve efficiency for CTO PCI. No previous data have analyzed the impact of this algorithm on CTO PCI success rates and procedural efficiency. A retrospective analysis of contemporary CTO PCI performed at two high-volume centers after adoption of the hybrid technique was compared with previously published CTO outcomes in a well-matched group of patients and lesion subsets. After adoption of the hybrid algorithm, technical success was significantly higher in the post-hybrid-algorithm group, 189/198 (95.4%), than in the pre-algorithm group, 367/462 (79.4%), supporting the hybrid algorithm as an effective approach to CTO PCI. © 2014 Wiley Periodicals, Inc.
Validation of the Eclipse AAA algorithm at extended SSD.
Hussain, Amjad; Villarreal-Barajas, Eduardo; Brown, Derek; Dunscombe, Peter
2010-06-08
The accuracy of dose calculations at extended SSD is of significant importance in the dosimetric planning of total body irradiation (TBI). In a first step toward the implementation of electronic, multi-leaf collimator compensation for dose inhomogeneities and surface contour in TBI, we evaluated the ability of the Eclipse AAA algorithm to accurately predict dose distributions in water at extended SSD. For this purpose, the Eclipse AAA algorithm was commissioned with machine-specific beam data for a 6 MV photon beam at standard SSD (100 cm). The model was then used for dose distribution calculations at extended SSD (179.5 cm). Two sets of measurements were acquired for a 6 MV beam (from a Varian linear accelerator) in a water tank at extended SSD: i) open beam for 5 x 5, 10 x 10, 20 x 20, and 40 x 40 cm2 field sizes (defined at 179.5 cm SSD), and ii) identical field sizes but with a 1.3 cm thick acrylic spoiler placed 10 cm above the water surface. Dose profiles were acquired at 5 cm, 10 cm, and 20 cm depths. Dose distributions for the two setups were calculated using the AAA algorithm in Eclipse. Confidence limits for comparisons between measured and calculated absolute depth dose curves and normalized dose profiles were determined as suggested by Venselaar et al. The confidence limits were within 2% and 2 mm for both setups. Extended SSD calculations were also performed using Eclipse AAA commissioned with Varian Golden Beam data at standard SSD. No significant difference between the custom-commissioned and Golden Beam Eclipse AAA was observed. In conclusion, Eclipse AAA commissioned at standard SSD can be used to accurately predict dose distributions in water at extended SSD for 6 MV open beams.
Using linked electronic data to validate algorithms for health outcomes in administrative databases.
Lee, Wan-Ju; Lee, Todd A; Pickard, Alan Simon; Shoaibi, Azadeh; Schumock, Glen T
2015-08-01
The validity of algorithms used to identify health outcomes in claims-based and administrative data is critical to the reliability of findings from observational studies. The traditional approach to algorithm validation, using medical charts, is expensive and time-consuming. An alternative method is to link the claims data to an external, electronic data source that contains information allowing confirmation of the event of interest. In this paper, we describe this external linkage validation method and delineate important considerations to assess the feasibility and appropriateness of validating health outcomes using this approach. This framework can help investigators decide whether to pursue an external linkage validation method for identifying health outcomes in administrative/claims data.
Hayley Evers-King
2017-08-01
Particulate Organic Carbon (POC) plays a vital role in the ocean carbon cycle. Though relatively small compared with other carbon pools, the POC pool is responsible for large fluxes and is linked to many important ocean biogeochemical processes. The satellite ocean-color signal is influenced by particle composition, size, and concentration, and provides a way to observe variability in the POC pool at a range of temporal and spatial scales. Providing accurate estimates of POC concentration from satellite ocean-color data requires algorithms that are well validated, with uncertainties characterized. Here, a number of algorithms to derive POC using different optical variables are applied to merged satellite ocean-color data provided by the Ocean Color Climate Change Initiative (OC-CCI) and validated against the largest database of in situ POC measurements currently available. The results of this validation exercise indicate satisfactory levels of performance from several algorithms (the highest performance was observed from the algorithms of Loisel et al., 2002, and Stramski et al., 2008) and uncertainties that are within the requirements of the user community. Estimates of the standing stock of POC can be made by applying these algorithms, yielding an estimated mixed-layer integrated global stock of between 0.77 and 1.3 Pg C. Performance of the algorithms varies regionally, suggesting that blending of region-specific algorithms may provide the best way forward for generating global POC products.
Assessing the external validity of algorithms to estimate EQ-5D-3L from the WOMAC.
Kiadaliri, Aliasghar A; Englund, Martin
2016-10-04
The use of mapping algorithms has been suggested as a solution for predicting health utilities when no preference-based measure is included in a study. However, the validity and predictive performance of these algorithms are highly variable, and hence assessing their accuracy and validity before using them in a new setting is important. The aim of the current study was to assess the predictive accuracy of three mapping algorithms for estimating the EQ-5D-3L from the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) among Swedish people with knee disorders. Two of these algorithms were developed using ordinary least squares (OLS) models and one was developed using a mixture model. Data from 1078 subjects (mean [SD] age 69.4 [7.2] years) with frequent knee pain and/or knee osteoarthritis from the Malmö Osteoarthritis study in Sweden were used. The algorithms' performance was assessed using mean error, mean absolute error, and root mean squared error. Two types of prediction were estimated for the mixture model: weighted average (WA) and conditional on estimated component (CEC). The overall mean was overpredicted by one OLS model and underpredicted by the two other algorithms. All algorithms suffered from overprediction for severe health states and underprediction for mild health states, to a lesser extent for the mixture model. While the mixture model outperformed the OLS models at the extremes of the EQ-5D-3L distribution, it underperformed around the center of the distribution. While the algorithm based on the mixture model reflected the distribution of EQ-5D-3L data more accurately than the OLS models, all algorithms suffered from systematic bias. This calls for caution in applying these mapping algorithms in a new setting, particularly in samples with milder knee problems than the original sample. Assessing the impact of the choice of these algorithms on cost-effectiveness studies through sensitivity analysis is recommended.
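The three performance metrics named above (mean error, mean absolute error, root mean squared error) can be sketched as follows; the observed and mapped EQ-5D-3L utilities are invented for illustration.

```python
# Accuracy metrics for a mapping algorithm: signed bias (ME), average
# magnitude of error (MAE), and RMSE, which penalizes large errors more.

def mean_error(obs, pred):
    return sum(p - o for o, p in zip(obs, pred)) / len(obs)

def mean_abs_error(obs, pred):
    return sum(abs(p - o) for o, p in zip(obs, pred)) / len(obs)

def rmse(obs, pred):
    return (sum((p - o) ** 2 for o, p in zip(obs, pred)) / len(obs)) ** 0.5

observed  = [0.80, 0.62, 0.91, 0.45, 0.73]   # hypothetical observed utilities
predicted = [0.78, 0.70, 0.85, 0.55, 0.74]   # hypothetical mapped utilities

print(round(mean_error(observed, predicted), 4))
print(round(mean_abs_error(observed, predicted), 4))
print(round(rmse(observed, predicted), 4))
```

A near-zero mean error with a large MAE is exactly the pattern of systematic over/underprediction at the two ends of the distribution that the abstract describes.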
A Global Grassland Drought Index (GDI Product: Algorithm and Validation
Binbin He
2015-09-01
Existing drought indices have been widely used to monitor meteorological and agricultural drought; however, few focus on drought monitoring for grassland regions. This study presents a new drought index, the Grassland Drought Index (GDI), for monitoring drought conditions in global grassland regions. These regions are vital for the environment and human society but susceptible to drought. The GDI was constructed based on three measures of water content: precipitation, soil moisture (SM), and canopy water content (CWC). The precipitation information was extracted from available precipitation datasets, SM was estimated by downscaling existing soil moisture data to a 1 km resolution, and CWC was retrieved based on the PROSAIL (PROSPECT + SAIL) model. Each variable was scaled from 0 to 1 for each pixel based on its absolute minimum and maximum values over time, and these scaled variables were combined with selected weights to construct the GDI. According to validation at the regional scale, the GDI correlated with the Standardized Precipitation Index (SPI) to some extent and captured most of the drought area identified by the United States Drought Monitor (USDM) maps. In addition, the global GDI product at 1 km spatial resolution substantially agreed with the global Standardized Precipitation Evapotranspiration Index (SPEI) product throughout the period 2005–2010, and it provided detailed and accurate information about the location and duration of drought in an evaluation against known drought events.
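The GDI construction described above (per-pixel min-max scaling over time, then a weighted combination) can be sketched as below. The weights and the single-pixel time series are hypothetical; the paper's actual weights are not given here.

```python
# Minimal GDI sketch: scale each water-content variable to [0, 1] over time,
# then combine with weights. Low GDI values indicate drier conditions.

def scale(series):
    lo, hi = min(series), max(series)
    return [(v - lo) / (hi - lo) for v in series]

def gdi(precip, sm, cwc, w=(0.4, 0.3, 0.3)):  # assumed weights, sum to 1
    p, s, c = scale(precip), scale(sm), scale(cwc)
    return [w[0] * pi + w[1] * si + w[2] * ci for pi, si, ci in zip(p, s, c)]

# One pixel's monthly time series (hypothetical values)
precip = [30.0, 12.0, 5.0, 45.0, 20.0]   # mm
sm     = [0.25, 0.18, 0.10, 0.30, 0.22]  # volumetric fraction
cwc    = [1.1, 0.8, 0.4, 1.3, 0.9]       # kg/m^2

index = gdi(precip, sm, cwc)
print([round(v, 2) for v in index])
```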
Wright, J M; Mattu, G S; Perry Jr, T L; Gelferc, M E; Strange, K D; Zorn, A; Chen, Y
2001-06-01
To test the accuracy of a new algorithm for the BPM-100, an automated oscillometric blood pressure (BP) monitor, using stored data from an independently conducted validation trial comparing the BPM-100(Beta) with a mercury sphygmomanometer. Raw pulse wave and cuff pressure data were stored electronically, using software embedded in the BPM-100(Beta), during the validation trial. The 391 sets of measurements were separated objectively into two subsets. A subset of 136 measurements was used to develop a new algorithm to enhance the accuracy of the device when reading higher systolic pressures. The larger subset of 255 measurements (three readings for 85 subjects) was used as test data to validate the accuracy of the new algorithm. Differences between the new-algorithm BPM-100 and the reference (mean of two observers) were determined and expressed as the mean difference +/- SD, plus the percentage of measurements within 5, 10, and 15 mmHg. The mean difference between the BPM-100 and reference systolic BP was -0.16 +/- 5.13 mmHg, with 73.7% of readings within 5 mmHg; the mean difference between the BPM-100 and reference diastolic BP was -1.41 +/- 4.67 mmHg, with 78.4% within 5 mmHg. These results allow the BPM-100(Beta) to pass the AAMI standard and achieve an 'A' grade under the BHS protocol. This study illustrates a new method for developing and testing a change in an algorithm for an oscillometric BP monitor utilizing collected and stored electronic data, and demonstrates that the new algorithm meets the AAMI standard and BHS protocol.
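The reported statistics (mean difference +/- SD, and percentage of readings within 5, 10, and 15 mmHg) can be computed as sketched below. The paired readings are fabricated for illustration.

```python
# Device-vs-reference validation statistics in the style of the AAMI/BHS
# criteria: mean signed difference, sample SD, and within-threshold rates.

def validation_stats(device, reference):
    diffs = [d - r for d, r in zip(device, reference)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = (sum((x - mean) ** 2 for x in diffs) / (n - 1)) ** 0.5
    within = {t: 100.0 * sum(abs(x) <= t for x in diffs) / n
              for t in (5, 10, 15)}
    return mean, sd, within

device    = [122, 135, 118, 141, 128, 150]   # hypothetical device SBP (mmHg)
reference = [120, 138, 119, 136, 130, 149]   # hypothetical observer mean (mmHg)

mean, sd, within = validation_stats(device, reference)
print(round(mean, 2), round(sd, 2), within)
```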
Development and validation of an algorithm to identify planned readmissions from claims data
Horwitz, Leora I.; Grady, Jacqueline N.; Cohen, Dorothy; Lin, Zhenqiu; Volpe, Mark; Ngo, Chi; Masica, Andrew L.; Long, Theodore; Wang, Jessica; Keenan, Megan; Montague, Julia; Suter, Lisa G.; Ross, Joseph S.; Drye, Elizabeth E.; Krumholz, Harlan M.; Bernheim, Susannah M.
2017-01-01
Background: It is desirable not to include planned readmissions in readmission measures because they represent deliberate, scheduled care. Objectives: To develop an algorithm to identify planned readmissions, describe its performance characteristics, and identify improvements. Design: Consensus-driven algorithm development and chart review validation study at 7 acute care hospitals in 2 health systems. Patients: For development, all discharges qualifying for the publicly reported hospital-wide readmission measure. For validation, all qualifying same-hospital readmissions that were characterized by the algorithm as planned, and a random sample of same-hospital readmissions that were characterized as unplanned. Measurements: We calculated weighted sensitivity and specificity, and positive and negative predictive values, of the algorithm (version 2.1) compared with gold-standard chart review. Results: In consultation with 27 experts, we developed an algorithm that characterizes 7.8% of readmissions as planned. For validation we reviewed 634 readmissions. The weighted sensitivity of the algorithm was 45.1% overall; 50.9% in large teaching centers and 40.2% in smaller community hospitals. The weighted specificity was 95.9%, positive predictive value was 51.6%, and negative predictive value was 94.7%. We identified 4 minor changes to improve algorithm performance. The revised algorithm had a weighted sensitivity of 49.8% (57.1% at large hospitals), weighted specificity of 96.5%, positive predictive value of 58.7%, and negative predictive value of 94.5%. Positive predictive value was poor for the two most common potentially planned procedures: diagnostic cardiac catheterization (25%) and procedures involving cardiac devices (33%). Conclusions: An administrative claims-based algorithm to identify planned readmissions is feasible and can facilitate public reporting of primarily unplanned readmissions. PMID:26149225
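The four performance measures reported in this abstract come from a 2x2 confusion matrix of algorithm label versus chart review. A minimal (unweighted) sketch, with invented counts rather than the study's data:

```python
# Sensitivity, specificity, PPV, and NPV from confusion-matrix counts of
# algorithm-labeled "planned" vs. gold-standard chart review.

def diagnostics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),   # planned readmissions found
        "specificity": tn / (tn + fp),   # unplanned correctly left alone
        "ppv": tp / (tp + fp),           # algorithm-planned truly planned
        "npv": tn / (tn + fn),           # algorithm-unplanned truly unplanned
    }

# Hypothetical counts for illustration only
stats = diagnostics(tp=45, fp=42, fn=55, tn=492)
for name, value in stats.items():
    print(name, round(value, 3))
```

Note the pattern the study reports: with many true negatives, specificity and NPV stay high even while sensitivity and PPV are modest.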
Cloud detection algorithm comparison and validation for operational Landsat data products
Foga, Steven Curtis; Scaramuzza, Pat; Guo, Song; Zhu, Zhe; Dilley, Ronald; Beckmann, Tim; Schmidt, Gail L.; Dwyer, John L.; Hughes, MJ; Laue, Brady
2017-01-01
Clouds are a pervasive and unavoidable issue in satellite-borne optical imagery. Accurate, well-documented, and automated cloud detection algorithms are necessary to effectively leverage large collections of remotely sensed data. The Landsat project is uniquely suited for comparative validation of cloud assessment algorithms because the modular architecture of the Landsat ground system allows for quick evaluation of new code, and because Landsat has the most comprehensive manual truth masks of any current satellite data archive. Currently, the Landsat Level-1 Product Generation System (LPGS) uses separate algorithms for determining clouds, cirrus clouds, and snow and/or ice probability on a per-pixel basis. With more bands onboard the Landsat 8 Operational Land Imager (OLI)/Thermal Infrared Sensor (TIRS) satellite, and a greater number of cloud masking algorithms, the U.S. Geological Survey (USGS) is replacing the current cloud masking workflow with a more robust algorithm that is capable of working across multiple Landsat sensors with minimal modification. Because of the inherent error from stray light and intermittent data availability of TIRS, these algorithms need to operate both with and without thermal data. In this study, we created a workflow to evaluate cloud and cloud shadow masking algorithms using cloud validation masks manually derived from both Landsat 7 Enhanced Thematic Mapper Plus (ETM+) and Landsat 8 OLI/TIRS data. We created a new validation dataset consisting of 96 Landsat 8 scenes, representing different biomes and proportions of cloud cover. We evaluated algorithm performance by overall accuracy, omission error, and commission error for both cloud and cloud shadow. We found that CFMask, C code based on the Function of Mask (Fmask) algorithm, and its confidence bands have the best overall accuracy among the many algorithms tested using our validation data. The Artificial Thermal-Automated Cloud Cover Algorithm (AT-ACCA) is the most accurate
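The three evaluation measures named above (overall accuracy, omission error, commission error) can be sketched per class as follows, using toy masks rather than Landsat data.

```python
# Per-class cloud-mask scores: overall accuracy across all pixels,
# omission error (true cloud pixels missed), and commission error
# (detected cloud pixels that are not cloud).

def mask_scores(truth, pred):
    tp = sum(t and p for t, p in zip(truth, pred))
    fp = sum((not t) and p for t, p in zip(truth, pred))
    fn = sum(t and (not p) for t, p in zip(truth, pred))
    tn = sum((not t) and (not p) for t, p in zip(truth, pred))
    overall = (tp + tn) / len(truth)
    omission = fn / (tp + fn)        # fraction of true cloud pixels missed
    commission = fp / (tp + fp)      # fraction of detections that are false
    return overall, omission, commission

truth = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]   # toy manual truth mask (1 = cloud)
pred  = [1, 1, 0, 0, 1, 0, 1, 0, 1, 0]   # toy algorithm mask

print(mask_scores(truth, pred))
```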
Convergent validity of the ASAM Patient Placement Criteria using a standardized computer algorithm.
Staines, Graham; Kosanke, Nicole; Magura, Stephen; Bali, Priti; Foote, Jeffrey; Deluca, Alexander
2003-01-01
The study examined the convergent validity of the ASAM Patient Placement Criteria (PPC) by comparing Level of Care (LOC) recommendations produced by two alternative methods: a computer-driven algorithm and a "standard" clinical assessment. A cohort of 248 applicants for alcoholism treatment was evaluated at a multi-modality treatment center. The two methods disagreed (58% of cases) more often than they agreed (42%). The algorithm recommended a more intense LOC than the clinician protocol in 81% of the discrepant cases. Four categories of disagreement accounted for 97% of the discrepant cases. Several major sources of disagreement were identified and examined in detail: clinicians' reasoned departures from the PPC rules, conservatism in algorithm LOC recommendations, and measurement overlap between two specific dimensions. For the ASAM PPC and its associated algorithm to be embraced by treatment programs, the observed differences in LOC recommendations between the algorithm and "standard" clinical assessment should be resolved.
Ming, W Q; Chen, J H
2013-11-01
Three different types of multislice algorithms, namely the conventional multislice (CMS) algorithm, the propagator-corrected multislice (PCMS) algorithm, and the fully-corrected multislice (FCMS) algorithm, have been evaluated with respect to accelerating voltage in transmission electron microscopy. Detailed numerical calculations have been performed to test their validity. The results show that the three algorithms are equivalent for accelerating voltages above 100 kV. However, below 100 kV, the CMS algorithm introduces significant errors, not only for higher-order Laue zone (HOLZ) reflections but also for zero-order Laue zone (ZOLZ) reflections. The differences between the PCMS and FCMS algorithms are negligible and mainly appear in HOLZ reflections. Nonetheless, when the accelerating voltage is further lowered to 20 kV or below, the PCMS algorithm also yields results deviating from the FCMS results. The present study demonstrates that the propagation of the electron wave from one slice to the next is cross-correlated with the crystal potential in a complex manner, such that when the accelerating voltage is lowered to 10 kV, the accuracy of the algorithms depends on the scattering power of the specimen. © 2013 Elsevier B.V. All rights reserved.
Chan, S L; Tham, M Y; Tan, S H; Loke, C; Foo, Bpq; Fan, Y; Ang, P S; Brunham, L R; Sung, C
2017-05-01
The purpose of this study was to develop and validate sensitive algorithms to detect hospitalized statin-induced myopathy (SIM) cases from electronic medical records (EMRs). We developed four algorithms on a training set of 31,211 patient records from a large tertiary hospital. We determined the performance of these algorithms against manually curated records. The best algorithm used a combination of elevated creatine kinase (>4× the upper limit of normal (ULN)), discharge summary, diagnosis, and absence of statin in discharge medications. This algorithm achieved a positive predictive value of 52-71% and a sensitivity of 72-78% on two validation sets of >30,000 records each. Using this algorithm, the incidence of SIM was estimated at 0.18%. This algorithm captured three times more rhabdomyolysis cases than spontaneous reports (95% vs. 30% of manually curated gold standard cases). Our results show the potential power of utilizing data and text mining of EMRs to enhance pharmacovigilance activities. © 2016 American Society for Clinical Pharmacology and Therapeutics.
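The best-performing rule described above combined an elevated creatine kinase threshold with discharge information. A minimal sketch of just the structured part of that rule is below; the record fields, the ULN value, and the omission of the free-text discharge summary and diagnosis criteria are all simplifying assumptions, not the study's actual schema.

```python
# Rule sketch: flag possible statin-induced myopathy (SIM) when CK exceeds
# 4x the upper limit of normal AND no statin appears in discharge meds
# (suggesting the statin was stopped during admission).

CK_ULN = 200  # assumed upper limit of normal for creatine kinase (U/L)

def possible_sim(record):
    ck_elevated = record["ck"] > 4 * CK_ULN
    statin_absent = not any("statin" in drug.lower()
                            for drug in record["discharge_meds"])
    return ck_elevated and statin_absent

records = [
    {"ck": 1500, "discharge_meds": ["aspirin", "metoprolol"]},    # flagged
    {"ck": 1500, "discharge_meds": ["atorvastatin", "aspirin"]},  # statin kept
    {"ck": 300,  "discharge_meds": ["aspirin"]},                  # CK too low
]
print([possible_sim(r) for r in records])
```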
Singh, Sheldon M; Webster, Lauren; Calzavara, Andrew; Wijeysundera, Harindra C
2017-06-01
Administrative database research can provide insight into the real-world effectiveness of invasive electrophysiology procedures. However, no validated algorithm to identify these procedures within administrative data currently exists. To develop and validate algorithms to identify atrial fibrillation (AF), atrial flutter (AFL), and supraventricular tachycardia (SVT) catheter ablation procedures, and diagnostic electrophysiology studies (EPS), within administrative data. Algorithms consisting of physician procedural billing codes and their associated most-responsible hospital diagnosis codes were used to identify potential AF, AFL, and SVT catheter ablation procedures and diagnostic EPS within large administrative databases in Ontario, Canada. The potential procedures were then limited to those performed between October 1, 2011 and March 31, 2013 at a single large regional cardiac center (Sunnybrook Health Sciences Center) in Ontario, Canada. These procedures were compared with a gold-standard cohort of patients known to have undergone invasive electrophysiology procedures during the same period at the same institution. The sensitivity, specificity, and positive and negative predictive values of each algorithm were determined. Algorithms specific to each of AF, AFL, and SVT ablation were associated with moderate sensitivity (75%-86%), high specificity (95%-98%), and high positive (95%-98%) and negative (99%) predictive values. The best algorithm to identify diagnostic EPS was less optimal, with a sensitivity of 61% and a positive predictive value of 88%. Algorithms using a combination of physician procedural billing codes and the accompanying most-responsible hospital diagnosis may identify catheter ablation procedures within administrative data with a high degree of accuracy. Diagnostic EPS may be identified with reduced accuracy.
Gok, Gokhan; Mosna, Zbysek; Arikan, Feza; Arikan, Orhan; Erdem, Esra
2016-07-01
for various purposes including calculation of actual height and generation of ionograms. In this study, the performance of the electron density reconstruction algorithm of the IONOLAB group and the standard electron density profile algorithms of ionosondes are compared with IONOLAB-RAY wave propagation simulation at near-vertical incidence. The electron density reconstruction and parameter extraction algorithms of ionosondes are validated against the IONOLAB-RAY results for both quiet and disturbed ionospheric states in Central Europe, using ionosonde stations such as Pruhonice and Juliusruh. It is observed that the IONOLAB ionosonde parameter extraction and electron density reconstruction algorithm performs significantly better than the standard algorithms, especially for disturbed ionospheric conditions. IONOLAB-RAY provides an efficient and reliable tool to investigate and validate ionosonde electron density reconstruction algorithms, especially in determination of the reflection height (true height) of signals and critical parameters of the ionosphere. This study is supported by TUBITAK 114E541, 115E915, and joint TUBITAK 114E092 and AS CR 14/001 projects.
Validation of the GCOM-W SCA and JAXA soil moisture algorithms
Satellite-based remote sensing of soil moisture has matured over the past decade as a result of the Global Change Observation Mission-Water (GCOM-W) program of JAXA. This program has resulted in improved algorithms that have been supported by rigorous validation. Access to the products and the valida...
Mays, Darren; Gerfen, Elissa; Mosher, Revonda B.; Shad, Aziza T.; Tercyak, Kenneth P.
2012-01-01
Objective: To assess the construct validity of a milk consumption Stages of Change (SOC) algorithm among adolescent survivors of childhood cancer ages 11 to 21 years (n = 75). Methods: Baseline data from a randomized controlled trial designed to evaluate a health behavior intervention were analyzed. Assessments included a milk consumption SOC…
An evaluation method based on absolute difference to validate the performance of SBNUC algorithms
Jin, Minglei; Jin, Weiqi; Li, Yiyang; Li, Shuo
2016-09-01
Scene-based non-uniformity correction (SBNUC) algorithms are an important part of infrared image processing; however, SBNUC algorithms usually cause two defects: (1) ghosting artifacts and (2) over-correction. In this paper, we use an absolute-difference method based on the guided image filter (AD-GF) to validate the performance of SBNUC algorithms. We obtain a self-separation source by processing the input image with an improved guided image filter, and use this source to obtain the spatial high-frequency parts of the input image and the corrected image. Finally, we use the absolute difference between the two spatial high-frequency parts as the evaluation result. Based on experimental results, the AD-GF method has good robustness and can validate the performance of SBNUC algorithms even when ghosting artifacts or over-correction occur. The AD-GF method can also measure how SBNUC algorithms perform in the time domain, making it an effective evaluation method for SBNUC algorithms.
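The evaluation idea can be sketched in 1-D: filter the signal to get a smooth base, subtract to isolate the high-frequency part, and compare the input and corrected high-frequency parts by mean absolute difference. A plain moving average stands in for the guided image filter here (an assumption; the paper's filter is edge-preserving), and the scan lines are toy values.

```python
# AD-style score: distance between the high-frequency content of the raw
# and corrected signals. A low score means the correction preserved detail.

def smooth(signal, radius=1):
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def high_freq(signal):
    return [v - b for v, b in zip(signal, smooth(signal))]

def ad_score(raw, corrected):
    hf_raw, hf_cor = high_freq(raw), high_freq(corrected)
    return sum(abs(a - b) for a, b in zip(hf_raw, hf_cor)) / len(raw)

raw       = [10.0, 12.0, 9.0, 14.0, 11.0, 13.0]   # toy uncorrected scan line
corrected = [10.5, 11.5, 9.5, 13.5, 11.2, 12.8]   # toy SBNUC-corrected line

print(round(ad_score(raw, corrected), 4))
```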
Sullivan, J. T.; McGee, T. J.; Leblanc, T.; Sumnicht, G. K.; Twigg, L. W.
2015-10-01
The main purpose of the NASA Goddard Space Flight Center TROPospheric OZone DIfferential Absorption Lidar (GSFC TROPOZ DIAL) is to measure the vertical distribution of tropospheric ozone for science investigations. Because of the important health and climate impacts of tropospheric ozone, it is imperative to quantify background photochemical ozone concentrations and ozone layers aloft, especially during air quality episodes. For these reasons, this paper addresses the procedures necessary to validate the TROPOZ retrieval algorithm and confirm that it properly represents ozone concentrations; a following paper will focus on a systematic uncertainty analysis. The methodology begins by simulating synthetic lidar returns from actual TROPOZ lidar return signals in combination with a known ozone profile. From these synthetic signals, it is possible to explicitly determine retrieval algorithm biases from the known profile. This was performed systematically to identify any areas needing refinement for a new operational version of the TROPOZ retrieval algorithm. One immediate outcome of this exercise was the discovery, and subsequent correction, of a bin registration error in the correction for detector saturation within the original retrieval. Another noticeable outcome was that the vertical smoothing in the retrieval algorithm was upgraded from a constant to a variable vertical resolution to yield a statistical uncertainty of <10%. This new, optimized vertical-resolution scheme retains the ability to resolve fluctuations in the known ozone profile while allowing near-field signals to be more appropriately smoothed. With these revisions to the previous TROPOZ retrieval, the optimized TROPOZ retrieval algorithm (TROPOZopt) has been effective in retrieving ozone nearly 200 m closer to the surface. Also, as compared to the
Adjusting for COPD severity in database research: developing and validating an algorithm
Goossens LMA
2011-12-01
Lucas MA Goossens,1 Christine L Baker,2 Brigitta U Monz,3 Kelly H Zou,2 Maureen PMH Rutten-van Mölken1. 1Institute for Medical Technology Assessment, Erasmus University, Rotterdam, The Netherlands; 2Pfizer Inc, New York City, NY, USA; 3Boehringer Ingelheim International GmbH, Ingelheim am Rhein, Germany. Purpose: When comparing chronic obstructive pulmonary disease (COPD) interventions in database research, it is important to adjust for severity. Global Initiative for Chronic Obstructive Lung Disease (GOLD) guidelines grade severity according to lung function. Most databases lack data on lung function. Previous database research has approximated COPD severity using demographics and healthcare utilization. This study aims to derive an algorithm for COPD severity using baseline data from a large respiratory trial (UPLIFT). Methods: Partial proportional odds logit models were developed for the probabilities of being in GOLD stages II, III, and IV. Concordance between predicted and observed stage was assessed using kappa statistics. Models were estimated in a random selection of two-thirds of patients and validated in the remainder. The analysis was repeated in a subsample with a balanced distribution across severity stages. Univariate associations of COPD severity with the covariates were tested as well. Results: More severe COPD was associated with being male and younger, having quit smoking, lower BMI, osteoporosis, hospitalizations, use of certain medications, and use of oxygen. After adjusting for these variables, co-morbidities and previous healthcare resource use (e.g., emergency room visits, hospitalizations, and inhaled corticosteroids, xanthines, or mucolytics) were no longer independently associated with COPD severity, although they were in univariate tests. Concordance was poor (kappa = 0.151) and only slightly better in the balanced sample (kappa = 0.215). Conclusion: COPD severity cannot be reliably predicted from demographics and healthcare use. This limitation should be
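The concordance measure quoted above is Cohen's kappa between predicted and observed GOLD stage: agreement corrected for the agreement expected by chance. A sketch with invented stage labels:

```python
# Cohen's kappa: (observed agreement - chance agreement) / (1 - chance
# agreement). Values near 0 indicate little agreement beyond chance.

def cohen_kappa(a, b):
    n = len(a)
    labels = sorted(set(a) | set(b))
    po = sum(x == y for x, y in zip(a, b)) / n                      # observed
    pe = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)   # chance
    return (po - pe) / (1 - pe)

observed  = ["II", "II", "III", "IV", "III", "II", "IV", "III"]  # hypothetical
predicted = ["II", "III", "III", "III", "II", "II", "IV", "II"]  # hypothetical

print(round(cohen_kappa(observed, predicted), 3))
```

With half the toy labels matching, kappa lands near 0.22, illustrating how a seemingly fair raw agreement can still be "poor" once chance agreement is removed, as in the study's kappa of 0.151.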
L. Frank
2013-01-01
Iridology is defined as a photographic science that identifies pathological and functional changes within organs via biomicroscopic iris assessment for aberrant lines, spots, and discolourations. According to iridology, the iris does not reflect changes during anaesthesia, due to the drugs' inhibitory effects on nerve impulses, and in cases of organ removal it reflects the pre-surgical condition. The profession of Homoeopathy is frequently associated with iridology, and in a recent survey (2009) investigating the perceptions of Masters of Technology graduates in Homoeopathy at the University of Johannesburg, iridology was highly regarded as a potential additional skill for assessing the health status of the patient. This study investigated the reliability of iridology in the diagnosis of previous acute appendicitis, as evidenced by appendectomy. A total of 60 participants took part in the study. Thirty of the 60 participants had had an appendectomy due to acute appendicitis, and 30 had no prior history of appendicitis. Each participant's right iris was documented by photography with a non-mydriatic retinal camera reset for photographing the iris. The photographs were then randomized by an external person, and no identifying data were made available to the three raters. The raters included the researcher, who had little experience in iridology, and two highly experienced practising iridologists. Data were obtained from the analyses of the photographs, wherein the presence or absence of lesions (implying acute appendicitis) was indicated by the raters. None of the three raters was able to show a significant success rate in correctly identifying the people with a previous history of acute appendicitis and resultant appendectomies.
Wang, He; Dong, Lei; O'Daniel, Jennifer; Mohan, Radhe; Garden, Adam S.; Kian Ang, K.; Kuban, Deborah A.; Bonnen, Mark; Chang, Joe Y.; Cheung, Rex
2005-06-01
A greyscale-based fully automatic deformable image registration algorithm, originally known as the 'demons' algorithm, was implemented for CT image-guided radiotherapy. We accelerated the algorithm by introducing an 'active force' along with an adaptive force strength adjustment during the iterative process. These improvements led to a 40% speed improvement over the original algorithm and a high tolerance of large organ deformations. We used three methods to evaluate the accuracy of the algorithm. First, we created a set of mathematical transformations for a series of patient's CT images. This provides a 'ground truth' solution for quantitatively validating the deformable image registration algorithm. Second, we used a physically deformable pelvic phantom, which can measure deformed objects under different conditions. The results of these two tests allowed us to quantify the accuracy of the deformable registration. Validation results showed that more than 96% of the voxels were within 2 mm of their intended shifts for a prostate and a head-and-neck patient case. The mean errors and standard deviations were 0.5 mm ± 1.5 mm and 0.2 mm ± 0.6 mm, respectively. Using the deformable pelvis phantom, the result showed a tracking accuracy of better than 1.5 mm for 23 seeds implanted in a phantom prostate that was deformed by inflation of a rectal balloon. Third, physician-drawn contours outlining the tumour volumes and certain anatomical structures in the original CT images were deformed along with the CT images acquired during subsequent treatments or during a different respiratory phase for a lung cancer case. Visual inspection of the positions and shapes of these deformed contours agreed well with human judgment. Together, these results suggest that the accelerated demons algorithm has significant potential for delineating and tracking doses in targets and critical structures during CT-guided radiotherapy.
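The first validation method above compares recovered voxel shifts against a known mathematical transformation, reporting the fraction of voxels within 2 mm and the mean error +/- SD. A minimal sketch, with fabricated displacement vectors in millimetres:

```python
# Registration validation: Euclidean error between ground-truth and
# recovered 3-D voxel displacements, summarized as mean, SD, and the
# percentage of voxels within a 2 mm tolerance.

def displacement_errors(truth, recovered):
    errs = [((tx - rx) ** 2 + (ty - ry) ** 2 + (tz - rz) ** 2) ** 0.5
            for (tx, ty, tz), (rx, ry, rz) in zip(truth, recovered)]
    n = len(errs)
    mean = sum(errs) / n
    sd = (sum((e - mean) ** 2 for e in errs) / n) ** 0.5
    within_2mm = 100.0 * sum(e <= 2.0 for e in errs) / n
    return mean, sd, within_2mm

# Hypothetical known shifts vs. shifts recovered by the registration (mm)
truth     = [(1.0, 0.0, 0.0), (2.0, 1.0, 0.5), (0.0, 3.0, 1.0), (1.5, 1.5, 0.0)]
recovered = [(1.2, 0.1, 0.0), (1.8, 1.1, 0.4), (0.3, 2.7, 1.2), (1.4, 1.6, 0.1)]

mean, sd, pct = displacement_errors(truth, recovered)
print(round(mean, 3), round(sd, 3), pct)
```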
Validation of an Algorithm to Estimate Gestational Age in Electronic Health Plan Databases
Li, Qian; Andrade, Susan E.; Cooper, William O.; Davis, Robert L.; Dublin, Sascha; Hammad, Tarek A.; Pawloski, Pamala A.; Pinheiro, Simone P.; Raebel, Marsha A.; Scott, Pamela E.; Smith, David H.; Dashevsky, Inna; Haffenreffer, Katie; Johnson, Karin E.; Toh, Sengwee
2013-01-01
Purpose To validate an algorithm that uses delivery date and diagnosis codes to define gestational age at birth in electronic health plan databases. Methods Using data from 225,384 live born deliveries among women aged 15–45 years in 2001–2007 within 8 of the 11 health plans participating in the Medication Exposure in Pregnancy Risk Evaluation Program, we compared 1) the algorithm-derived gestational age versus the “gold-standard” gestational age obtained from the infant birth certificate files; and 2) the prenatal exposure status of two antidepressants (fluoxetine and sertraline) and two antibiotics (amoxicillin and azithromycin) as determined by the algorithm-derived versus the gold-standard gestational age. Results The mean algorithm-derived gestational age at birth was lower than the mean obtained from the birth certificate files among singleton deliveries (267.9 versus 273.5 days) but not among multiple-gestation deliveries (253.9 versus 252.6 days). The algorithm-derived prenatal exposure to the antidepressants had a sensitivity and a positive predictive value (PPV) of ≥95%, and a specificity and a negative predictive value (NPV) of almost 100%. Sensitivity and PPV were both ≥90%, and specificity and NPV were both >99% for the antibiotics. Conclusions A gestational age algorithm based upon electronic health plan data correctly classified medication exposure status in most live born deliveries, but misclassification may be higher for drugs typically used for short durations. PMID:23335117
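The sensitivity, specificity, PPV and NPV reported for exposure classification follow from a standard 2×2 comparison of algorithm-derived versus gold-standard exposure status. A minimal sketch, with hypothetical counts:

```python
def classification_metrics(tp, fp, fn, tn):
    """Validity measures from a 2x2 confusion table (true/false positives/negatives)."""
    return {
        "sensitivity": tp / (tp + fn),   # exposed correctly flagged
        "specificity": tn / (tn + fp),   # unexposed correctly cleared
        "ppv": tp / (tp + fp),           # flagged deliveries truly exposed
        "npv": tn / (tn + fn),           # cleared deliveries truly unexposed
    }

# Hypothetical counts for one drug:
m = classification_metrics(tp=95, fp=5, fn=5, tn=895)
```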
Simulation of MR angiography imaging for validation of cerebral arteries segmentation algorithms.
Klepaczko, Artur; Szczypiński, Piotr; Deistung, Andreas; Reichenbach, Jürgen R; Materka, Andrzej
2016-12-01
Accurate vessel segmentation of magnetic resonance angiography (MRA) images is essential for computer-aided diagnosis of cerebrovascular diseases such as stenosis or aneurysm. The ability of a segmentation algorithm to correctly reproduce the geometry of the arterial system should be expressed quantitatively and observer-independently to ensure the objectivity of the evaluation. This paper introduces a methodology for validating vessel segmentation algorithms using a custom-designed MRA simulation framework. For this purpose, a realistic reference model of an intracranial arterial tree was developed based on a real Time-of-Flight (TOF) MRA data set. With this specific geometry, blood flow was simulated and a series of TOF images was synthesized using various acquisition protocol parameters and signal-to-noise ratios. The synthesized arterial tree was then reconstructed using a level-set segmentation algorithm available in the Vascular Modeling Toolkit (VMTK). Moreover, to demonstrate the versatility of the proposed methodology, validation was also performed for two alternative techniques: a multi-scale vessel enhancement filter and the Chan-Vese variant of the level-set approach, as implemented in the Insight Segmentation and Registration Toolkit (ITK). The segmentation results were compared against the reference model. The accuracy in determining the vessel centerline courses was very high for each tested segmentation algorithm (mean error rate = 5.6% when using VMTK). However, the estimated radii exhibited deviations from ground-truth values, with mean error rates ranging from 7% up to 79%, depending on the vessel size, image acquisition and segmentation method. We demonstrated the practical application of the designed MRA simulator as a reliable tool for quantitative validation of MRA image processing algorithms that provides objective, reproducible results and is observer-independent. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Han, Cong; Jin, Peng; Li, Meilin; Wang, Lei; Zheng, Yonghua
2017-08-23
Wounding induces the accumulation of phenolic compounds in carrot. This study uses physiological and transcriptomic analyses to validate previous findings relating primary metabolism to secondary metabolites in wounded carrots. Our data confirmed that increased wounding intensity strengthened the accumulation of phenolics, accompanied by enhanced respiration, loss of fructose and glucose, and an increase in energy status in carrots. In addition, transcriptomic evaluation of shredded carrots indicated that respiratory metabolism, sugar metabolism, energy metabolism, and phenolic biosynthesis-related pathways, such as the "citrate cycle (TCA cycle)", "oxidative phosphorylation" and "phenylpropanoid biosynthesis", were activated by wounding. Also, the differentially expressed genes (DEGs) involved in the conversion of sugars to phenolics were extensively up-regulated after wounding. Thus, the physiological and transcriptomic data validate previous findings that wounding accelerates the primary metabolism of carrot, including respiratory metabolism, sugar metabolism, and energy metabolism, to meet the demand for the production of phenolic antioxidants.
Real-World Validation of Three Tipover Algorithms for Mobile Robots
2010-05-01
Roan, Philip R.; Burmeister, Aaron; Rahimi, Amin; Holz, Kevin; Hooper, David
Tipover can result in bending or breaking parts of the robot, requiring expensive repairs. Mobile robots are given critical tasks and sent … vehicle is remotely or autonomously operated, as is often the case with small mobile robots. These robots are likely to tip over because they encounter
Lu, Jianfeng
2016-01-01
In the spirit of the fewest switches surface hopping, the frozen Gaussian approximation with surface hopping (FGA-SH) method samples a path integral representation of the non-adiabatic dynamics in the semiclassical regime. An improved sampling scheme is developed in this work for FGA-SH based on birth and death branching processes. The algorithm is validated for the standard test examples of non-adiabatic dynamics.
Simulation and validation of land surface temperature algorithms for MODIS and AATSR data
J. M. Galve
2007-01-01
A database of global, cloud-free atmospheric radiosounding profiles was compiled with the aim of simulating radiometric measurements from satellite-borne sensors in the thermal infrared. The objective of the simulation was to use Terra/Moderate Resolution Imaging Spectroradiometer (MODIS) and Envisat/Advanced Along Track Scanning Radiometer (AATSR) data to generate split-window (SW) and dual-angle (DA) algorithms for the retrieval of land surface temperature (LST). The database contains 382 radiosounding profiles acquired over land surfaces, with an almost uniform distribution of precipitable water between 0 and 5.5 cm. Radiative transfer calculations were performed with the MODTRAN 4 code for six different viewing angles between 0° and 65°. The resulting radiance spectra were integrated with the response filter functions of MODIS bands 31 and 32 and the AATSR channels at 11 and 12 μm. Using the simulation database, SW algorithms adapted for MODIS and AATSR data, and DA algorithms for AATSR data, were developed. Both types of algorithms are quadratic in the brightness temperature difference and depend explicitly on the land surface emissivity. These SW and DA algorithms were validated with actual ground measurements of LST collected concurrently with MODIS and AATSR observations in a large, flat and thermally homogeneous area of rice crops located close to the city of Valencia, Spain. The results showed no bias and a standard deviation of around ±0.5 K for the SW algorithms at nadir for both sensors; the SW algorithm applied to the forward view resulted in a bias of 0.5 K and a standard deviation of ±0.8 K. The least accurate results were obtained with the DA algorithms, with a bias close to -2.0 K and a standard deviation of almost ±1.0 K.
ZHAO Qian-lu; ZHOU Yong; WANG Yi-long; DONG Ke-hui; WANG Yong-jun
2010-01-01
Background: Vascular cognitive impairment (VCI) is considered to be the most common pattern of cognitive impairment. We aimed to devise a diagnostic algorithm for VCI and to evaluate the reliability and validity of our proposed criteria. Methods: We based our new algorithm on previous literature, a Delphi consensus method, and preliminary testing. First, 100 successive patients with cerebrovascular disease (CVD) in hospital underwent a structured medical examination. Twenty-five case vignettes fulfilling the proposed diagnostic criteria for probable or possible VCI were divided into three subtype categories: vascular cognitive impairment, no dementia (VCIND); vascular dementia (VaD); or mixed VCI/Alzheimer's disease (AD). Inter-rater reliability was assessed using a Fleiss kappa analysis. Convergent validity was evaluated by correlation coefficients (r) between the proposed key points for each subtype and the currently accepted criteria. Forty-five patients with probable VCI were examined to determine the accuracy of identification for each subtype. Results: The proposed criteria showed clinical diagnostic validity for VCI, and were able to define probable, possible and definite VCI, three VCI subtypes, and vascular causes. There was good consensus between experts (Cronbach's α=0.96 for both rounds). Significant moderate to good item-total correlations were found for the two questionnaires (50-r range, 0.40-0.97 and 0.41-0.99, respectively). Significant slight and moderate inter-rater reliability was obtained for VCI (k=0.13) and the three VCI subtypes (k=0.45). Furthermore, good convergent validity was observed in a comparison of significant correlations between criteria: good (4-r range, 0.75-0.92) to perfect (3-r=1.00) validity for the VCIND subtype, and moderate to good validity for the VaD subtype (1-r=0.46; 5-r range, 0.76-0.92) and for the mixed VCI/AD subtype (r=0.92 and 1.00; 4-r range, 0.47-0.70). Importantly, the area under receiver operating characteristic
Yepes, Pablo P.; Eley, John G.; Liu, Amy; Mirkovic, Dragan; Randeniya, Sharmalee; Titt, Uwe; Mohan, Radhe
2016-04-01
Monte Carlo (MC) methods are acknowledged as the most accurate technique for calculating dose distributions. However, owing to their lengthy calculation times, they are difficult to use in the clinic or for large retrospective studies. Track-repeating algorithms, based on MC-generated particle track data in water, accelerate dose calculations substantially while essentially preserving the accuracy of MC. In this study, we present the validation of an efficient dose calculation algorithm for intensity-modulated proton therapy, the fast dose calculator (FDC), based on a track-repeating technique. We validated the FDC algorithm for 23 patients, comprising 7 brain, 6 head-and-neck, 5 lung, 1 spine, 1 pelvis and 3 prostate cases. For validation, we compared FDC-generated dose distributions with those from a full-fledged Monte Carlo code based on GEANT4 (G4). We compared dose-volume histograms and 3D gamma-indices, and analyzed a series of dosimetric indices. More than 99% of the voxels in the voxelized phantoms describing the patients had a gamma-index smaller than unity for the 2%/2 mm criterion. In addition, the difference relative to the prescribed dose between the dosimetric indices calculated with FDC and G4 was less than 1%. FDC reduces the calculation time from 5 ms per proton to around 5 μs.
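The 2%/2 mm gamma-index criterion used in the FDC-G4 comparison combines a dose-difference tolerance with a distance-to-agreement tolerance. A brute-force 1-D sketch of the metric (illustrative only; the study's analysis is 3-D, and the function and parameter names here are hypothetical):

```python
import numpy as np

def gamma_pass_rate(ref, eval_dose, spacing_mm, dd=0.02, dta_mm=2.0):
    """Global 1-D gamma analysis, brute force.
    dd: dose-difference criterion as a fraction of the reference maximum;
    dta_mm: distance-to-agreement criterion in millimetres."""
    ref = np.asarray(ref, float)
    ev = np.asarray(eval_dose, float)
    x = np.arange(len(ref)) * spacing_mm
    d_norm = dd * ref.max()
    gammas = []
    for i in range(len(ref)):
        # Minimise the combined dose/distance metric over all evaluated points.
        dist2 = ((x - x[i]) / dta_mm) ** 2
        dose2 = ((ev - ref[i]) / d_norm) ** 2
        gammas.append(np.sqrt(np.min(dist2 + dose2)))
    return np.mean(np.array(gammas) <= 1.0)   # fraction of points with gamma <= 1
```

A voxel passes when some nearby evaluated point agrees within the combined tolerance; the abstract's ">99% of voxels with gamma-index smaller than unity" is exactly this pass rate.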
Williams, C. R.
2012-12-01
The NASA Global Precipitation Measurement (GPM) mission raindrop size distribution (DSD) Working Group is composed of NASA PMM Science Team members and is charged to "investigate the correlations between DSD parameters using Ground Validation (GV) data sets that support, or guide, the assumptions used in satellite retrieval algorithms." Correlations between DSD parameters can be used to constrain the unknowns and reduce the degrees of freedom in under-constrained satellite algorithms. Over the past two years, the GPM DSD Working Group has analyzed GV data and has found correlations between the mass-weighted mean raindrop diameter (Dm) and the mass distribution standard deviation (Sm) that follow a power-law relationship. This Dm-Sm power-law relationship appears to be robust and has been observed in surface disdrometer and vertically pointing radar observations. One benefit of a Dm-Sm power-law relationship is that a three-parameter DSD can be modeled with just two parameters: Dm and Nw, which determines the DSD amplitude. In order to incorporate observed DSD correlations into satellite algorithms, the GPM DSD Working Group is developing scattering and integral tables that can be used by satellite algorithms. Scattering tables describe the interaction of electromagnetic waves with individual particles to generate cross sections of backscattering, extinction, and scattering; they are independent of the distribution of particles. Integral tables combine scattering-table outputs with DSD parameters and DSD correlations to generate integrated normalized reflectivity, attenuation, scattering, emission, and asymmetry coefficients; they contain both frequency-dependent scattering properties and cloud microphysics. The GPM DSD Working Group has developed scattering tables for raindrops at both Dual-frequency Precipitation Radar (DPR) frequencies and at all GMI radiometer frequencies below 100 GHz. Scattering tables include Mie and T-matrix scattering with H- and V
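A Dm-Sm power law of the form Sm = a·Dm^b is conventionally fitted by linear least squares in log-log space; a sketch under that assumption (not the Working Group's exact fitting procedure, and the coefficients below are synthetic):

```python
import numpy as np

def fit_power_law(dm, sm):
    """Fit sm = a * dm**b by least squares on log(sm) = log(a) + b*log(dm)."""
    b, log_a = np.polyfit(np.log(dm), np.log(sm), 1)
    return np.exp(log_a), b

# Synthetic disdrometer-like data following a known power law:
dm = np.linspace(0.5, 3.0, 50)        # mass-weighted mean diameter (mm)
sm = 0.3 * dm ** 1.5                  # mass-spectrum standard deviation (mm)
a, b = fit_power_law(dm, sm)
```

With such a relation, the third DSD parameter is implied by Dm, so a gamma-type DSD can be described by (Dm, Nw) alone, as the abstract notes.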
Validation of Statistical Sampling Algorithms in Visual Sample Plan (VSP): Summary Report
Nuffer, Lisa L; Sego, Landon H.; Wilson, John E.; Hassig, Nancy L.; Pulsipher, Brent A.; Matzke, Brett D.
2009-02-18
The U.S. Department of Homeland Security, Office of Technology Development (OTD) contracted with a set of U.S. Department of Energy national laboratories, including the Pacific Northwest National Laboratory (PNNL), to write a Remediation Guidance for Major Airports After a Chemical Attack. The report identifies key activities and issues that should be considered by a typical major airport following an incident involving release of a toxic chemical agent. Four experimental tasks were identified that would require further research in order to supplement the Remediation Guidance. One of the tasks, Task 4, OTD Chemical Remediation Statistical Sampling Design Validation, dealt with statistical sampling algorithm validation. This report documents the results of the sampling design validation conducted for Task 4. In 2005, the Government Accountability Office (GAO) performed a review of the past U.S. responses to Anthrax terrorist cases. Part of the motivation for this PNNL report was a major GAO finding that there was a lack of validated sampling strategies in the U.S. response to Anthrax cases. The report (GAO 2005) recommended that probability-based methods be used for sampling design in order to address confidence in the results, particularly when all sample results showed no remaining contamination. The GAO also expressed a desire that the methods be validated, which is the main purpose of this PNNL report. The objective of this study was to validate probability-based statistical sampling designs and the algorithms pertinent to within-building sampling that allow the user to prescribe or evaluate confidence levels of conclusions based on data collected as guided by the statistical sampling designs. Specifically, the designs found in the Visual Sample Plan (VSP) software were evaluated. VSP was used to calculate the number of samples and the sample location for a variety of sampling plans applied to an actual release site. Most of the sampling designs validated are
Hammoudi, Nadjib; Duprey, Matthieu; Régnier, Philippe; Achkar, Marc; Boubrit, Lila; Preud'homme, Gisèle; Healy-Brucker, Aude; Vignalou, Jean-Baptiste; Pousset, Françoise; Komajda, Michel; Isnard, Richard
2014-02-01
Management of increased referrals for transthoracic echocardiography (TTE) examinations is a challenge. Patients with normal TTE examinations take less time to examine than those with heart abnormalities. A reliable method for assessing the pretest probability of a normal TTE may optimize management of requests. We aimed to establish and validate, based on requests for examinations, a simple algorithm for defining the pretest probability of a normal TTE. In a retrospective phase, factors associated with normality were investigated and an algorithm was designed. In a prospective phase, patients were classified in accordance with the algorithm as being at high or low probability of having a normal TTE. In the retrospective phase, 42% of 618 examinations were normal. In multivariable analysis, age and absence of cardiac history were associated with normality. Low pretest probability of a normal TTE was defined by known cardiac history or, in case of doubt about cardiac history, by age > 70 years. In the prospective phase, the prevalences of normality were 72% and 25% in the high (n=167) and low (n=241) pretest probability groups, respectively. The mean duration of normal examinations was significantly shorter than that of abnormal examinations (13.8 ± 9.2 min vs 17.6 ± 11.1 min; P=0.0003). A simple algorithm can classify patients referred for TTE as being at high or low pretest probability of having a normal examination. This algorithm might help to optimize management of requests in routine practice. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
Improvements to a five-phase ABS algorithm for experimental validation
Gerard, Mathieu; Pasillas-Lépine, William; de Vries, Edwin; Verhaegen, Michel
2012-10-01
The anti-lock braking system (ABS) is the most important active safety system for passenger cars. Unfortunately, the literature is not really precise about its description, stability and performance. This research improves a five-phase hybrid ABS control algorithm based on wheel deceleration [W. Pasillas-Lépine, Hybrid modeling and limit cycle analysis for a class of five-phase anti-lock brake algorithms, Veh. Syst. Dyn. 44 (2006), pp. 173-188] and validates it on a tyre-in-the-loop laboratory facility. Five relevant effects are modelled so that the simulation matches the reality: oscillations in measurements, wheel acceleration reconstruction, brake pressure dynamics, brake efficiency changes and tyre relaxation. The time delays in measurement and actuation have been identified as the main difficulty for the initial algorithm to work in practice. Three methods are proposed in order to deal with these delays. It is verified that the ABS limit cycles encircle the optimal braking point, without assuming any tyre parameter being a priori known. The ABS algorithm is compared with the commercial algorithm developed by Bosch.
2014-09-26
Validation of the Algorithm for Depot TCTO Labor Costs for the Component Support Cost System (D160B)
Einhorn, S. J. (Information Spectrum Inc., Arlington, VA); Contract No. F33600-82-C-0543
1984-04-12
… "tasks," including a user survey. This report provides the verification and validation of the algorithm called "Depot TCTO Labor Costs." The costs of
Walden, K; Bélanger, L M; Biering-Sørensen, F
2016-01-01
STUDY DESIGN: Validation study. OBJECTIVES: To describe the development and validation of a computerized application of the international standards for neurological classification of spinal cord injury (ISNCSCI). SETTING: Data from acute and rehabilitation care. METHODS: The Rick Hansen Institute …-validation of the algorithm in phase five using 108 new RHSCIR cases did not identify the need for any further changes, as all discrepancies were due to clinician errors. The web-based application and the algorithm code are freely available at www.isncscialgorithm.com. CONCLUSION: The RHI-ISNCSCI Algorithm provides … by funding from Health Canada and Western Economic Diversification Canada.
Kalter, Henry D; Perin, Jamie; Black, Robert E
2016-06-01
Physician assessment historically has been the most common method of analyzing verbal autopsy (VA) data. Recently, the World Health Organization endorsed two automated methods, Tariff 2.0 and InterVA-4, which promise greater objectivity and lower cost. A disadvantage of the Tariff method is that it requires a training data set from a prior validation study, while InterVA relies on clinically specified conditional probabilities. We undertook to validate the hierarchical expert algorithm analysis of VA data, an automated, intuitive, deterministic method that does not require a training data set. Using Population Health Metrics Research Consortium study hospital source data, we compared the primary causes of 1629 neonatal and 1456 1-59 month-old child deaths from VA expert algorithms arranged in a hierarchy to their reference standard causes. The expert algorithms were held constant, while five prior and one new "compromise" neonatal hierarchy, and three former child hierarchies were tested. For each comparison, the reference standard data were resampled 1000 times within the range of cause-specific mortality fractions (CSMF) for one of three approximated community scenarios in the 2013 WHO global causes of death, plus one random mortality cause proportions scenario. We utilized CSMF accuracy to assess overall population-level validity, and the absolute difference between VA and reference standard CSMFs to examine particular causes. Chance-corrected concordance (CCC) and Cohen's kappa were used to evaluate individual-level cause assignment. Overall CSMF accuracy for the best-performing expert algorithm hierarchy was 0.80 (range 0.57-0.96) for neonatal deaths and 0.76 (0.50-0.97) for child deaths. Performance for particular causes of death varied, with fairly flat estimated CSMF over a range of reference values for several causes. Performance at the individual diagnosis level was also less favorable than that for overall CSMF (neonatal: best CCC = 0.23, range 0
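CSMF accuracy, the population-level metric used above, is conventionally defined (Murray et al.) as one minus the summed absolute CSMF error, normalized by its maximum possible value given the true fractions. A sketch with hypothetical cause fractions:

```python
import numpy as np

def csmf_accuracy(true_csmf, pred_csmf):
    """Chance-adjusted cause-specific mortality fraction accuracy.
    1.0 = perfect agreement; 0.0 = worst possible assignment."""
    true_csmf = np.asarray(true_csmf, float)
    pred_csmf = np.asarray(pred_csmf, float)
    err = np.abs(pred_csmf - true_csmf).sum()
    # Denominator is the largest attainable total absolute error.
    return 1.0 - err / (2.0 * (1.0 - true_csmf.min()))

# Hypothetical three-cause example:
acc = csmf_accuracy([0.5, 0.3, 0.2], [0.45, 0.35, 0.2])
```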
Dobkin, Bruce H.; Xu, Xiaoyu; Batalin, Maxim; Thomas, Seth; Kaiser, William
2015-01-01
Background and Purpose Outcome measures of mobility for large stroke trials are limited to timed walks for short distances in a laboratory, step counters and ordinal scales of disability and quality of life. Continuous monitoring and outcome measurements of the type and quantity of activity in the community would provide direct data about daily performance, including compliance with exercise and skills practice during routine care and clinical trials. Methods Twelve adults with impaired ambulation from hemiparetic stroke and 6 healthy controls wore triaxial accelerometers on their ankles. Walking speed for repeated outdoor walks was determined by machine-learning algorithms and compared to a stopwatch calculation of speed for distances not known to the algorithm. The reliability of recognizing walking, exercise, and cycling by the algorithms was compared to activity logs. Results A high correlation was found between stopwatch-measured outdoor walking speed and algorithm-calculated speed (Pearson coefficient, 0.98; P=0.001) and for repeated measures of algorithm-derived walking speed (P=0.01). Bouts of walking >5 steps, variations in walking speed, cycling, stair climbing, and leg exercises were correctly identified during a day in the community. Compared to healthy subjects, those with stroke were, as expected, more sedentary and slower, and their gait revealed high paretic-to-unaffected leg swing ratios. Conclusions Test–retest reliability and concurrent and construct validity are high for activity pattern-recognition Bayesian algorithms developed from inertial sensors. This ratio scale data can provide real-world monitoring and outcome measurements of lower extremity activities and walking speed for stroke and rehabilitation studies. PMID:21636815
Moraes, Renato; Allard, Fran; Patla, Aftab E
2007-10-01
The goal of this study was to validate dynamic stability and forward progression determinants for the alternate foot placement selection algorithm. Participants were asked to walk on level ground and avoid stepping, when present, on a virtual white planar obstacle. They had a one-step duration to select an alternate foot placement, with the task performed under two conditions: free (participants chose the alternate foot placement that was appropriate) and forced (a green arrow projected over the white planar obstacle cued the alternate foot placement). To validate the dynamic stability determinant, the distance between the extrapolated center of mass (COM) position, which incorporates the dynamics of the body, and the limits of the base of support was calculated in both anteroposterior (AP) and mediolateral (ML) directions in the double support phase. To address the second determinant, COM deviation from straight ahead was measured between adaptive and subsequent steps. The results of this study showed that long and lateral choices were dominant in the free condition, and these adjustments did not compromise stability in both adaptive and subsequent steps compared with the short and medial adjustments, which were infrequent and adversely affected stability. Therefore stability is critical when selecting an alternate foot placement in a cluttered terrain. In addition, changes in the plane of progression resulted in small deviations of COM from the endpoint goal. Forward progression of COM was maintained even for foot placement changes in the frontal plane, validating this determinant as part of the selection algorithm.
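The "extrapolated center of mass" in the dynamic stability determinant is commonly computed with the Hof et al. inverted-pendulum construct, XcoM = x + v/ω0 with ω0 = sqrt(g/l); a sketch under that assumption (the leg length and numbers below are hypothetical, not the study's data):

```python
import math

def extrapolated_com(com_pos, com_vel, leg_length, g=9.81):
    """Extrapolated centre of mass: XcoM = x + v / omega0 (Hof construct)."""
    omega0 = math.sqrt(g / leg_length)   # pendulum eigenfrequency (rad/s)
    return com_pos + com_vel / omega0

def margin_of_stability(bos_boundary, com_pos, com_vel, leg_length):
    """Signed distance from XcoM to the base-of-support boundary (m).
    Positive values indicate a dynamically stable configuration."""
    return bos_boundary - extrapolated_com(com_pos, com_vel, leg_length)

# Hypothetical AP example: COM 0.10 m behind the boundary at 0.25 m, moving at 0.30 m/s.
mos = margin_of_stability(0.25, 0.10, 0.30, leg_length=1.0)
```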
National Aeronautics and Space Administration — SSCI proposes to develop and test a framework referred to as the ADVANCE (Algorithm Design and Validation for Adaptive Nonlinear Control Enhancement), within which...
Automation of RELAP5 input calibration and code validation using genetic algorithm
Phung, Viet-Anh, E-mail: vaphung@kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden); Kööp, Kaspar, E-mail: kaspar@safety.sci.kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden); Grishchenko, Dmitry, E-mail: dmitry@safety.sci.kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden); Vorobyev, Yury, E-mail: yura3510@gmail.com [National Research Center “Kurchatov Institute”, Kurchatov square 1, Moscow 123182 (Russian Federation); Kudinov, Pavel, E-mail: pavel@safety.sci.kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden)
2016-04-15
Highlights: • Automated input calibration and code validation using a genetic algorithm is presented. • Predictions generally overlap experiments for individual system response quantities (SRQs). • It was not possible to simultaneously predict the experimental maximum flow rate and oscillation period. • Simultaneous consideration of multiple SRQs is important for code validation. - Abstract: Validation of system thermal-hydraulic codes is an important step in applying the codes to reactor safety analysis. The goal of the validation process is to determine how well a code can represent physical reality. This is achieved by comparing predicted and experimental system response quantities (SRQs), taking into account experimental and modelling uncertainties. Parameters which are required for the code input but not measured directly in the experiment can become an important source of uncertainty in the code validation process. Quantification of such parameters is often called input calibration. Calibration and uncertainty quantification may become challenging tasks when the number of calibrated input parameters and SRQs is large and the dependencies between them are complex. If only engineering judgment is employed in the process, the outcome can be prone to so-called "user effects". The goal of this work is to develop an automated approach to input calibration and RELAP5 code validation against data on two-phase natural circulation flow instability. Multiple SRQs are used in both calibration and validation. In the input calibration, we used a genetic algorithm (GA), a heuristic global optimization method, in order to minimize the discrepancy between experimental and simulation data by identifying optimal combinations of uncertain input parameters in the calibration process. We demonstrate the importance of the proper selection of SRQs and the respective normalization and weighting factors in the fitness function. In the code validation, we used maximum flow rate as the
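A real-coded genetic algorithm of the general kind used for such input calibration can be sketched as follows. This is a generic GA minimizing a discrepancy (fitness) function, not the authors' RELAP5 coupling; population size, operators and rates are illustrative:

```python
import random

def genetic_calibration(fitness, bounds, pop_size=40, generations=60,
                        mut_rate=0.2, seed=1):
    """Minimal real-coded GA minimizing `fitness` over box `bounds` (sketch)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    best = min(pop, key=fitness)
    for _ in range(generations):
        new_pop = [best[:]]                              # elitism
        while len(new_pop) < pop_size:
            # Tournament selection of two parents.
            p1 = min(rng.sample(pop, 3), key=fitness)
            p2 = min(rng.sample(pop, 3), key=fitness)
            # Arithmetic crossover followed by bounded Gaussian mutation.
            child = [(a + b) / 2 for a, b in zip(p1, p2)]
            for i, (lo, hi) in enumerate(bounds):
                if rng.random() < mut_rate:
                    child[i] += rng.gauss(0, 0.1 * (hi - lo))
                    child[i] = min(max(child[i], lo), hi)
            new_pop.append(child)
        pop = new_pop
        best = min(pop, key=fitness)
    return best
```

In a calibration setting, `fitness` would be the (weighted, normalized) discrepancy between simulated and experimental SRQs for a candidate set of input parameters.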
Ecodriver. D23.1: Report on test scenarios for validation of on-line vehicle algorithms
Seewald, P.; Ivens, T.W.T.; Spronkmans, S.
2014-01-01
This deliverable provides a description of the test scenarios that will be used for validation of WP22's on-line vehicle algorithms. These algorithms consist of the two modules VE³ (Vehicle Energy and Environment Estimator) and RSG (Reference Signal Generator) and will be tested using the Matlab/Simuli
J. T. Sullivan
2015-04-01
The main purpose of the NASA Goddard Space Flight Center TROPospheric OZone DIfferential Absorption Lidar (GSFC TROPOZ DIAL) is to measure the vertical distribution of tropospheric ozone for science investigations. Because of the important health and climate impacts of tropospheric ozone, it is imperative to quantify background photochemical and aloft ozone concentrations, especially during air quality episodes. To better characterize tropospheric ozone, the Tropospheric Ozone Lidar Network (TOLNet) has recently been developed, which currently consists of five different ozone DIAL instruments, including the TROPOZ. This paper addresses the necessary procedures to validate the TROPOZ retrieval algorithm and develops a primary standard for retrieval consistency and optimization within TOLNet. This paper focuses on ensuring that the TROPOZ and future TOLNet algorithms properly quantify ozone concentrations; a following paper will focus on defining a systematic uncertainty analysis standard for all TOLNet instruments. Although this paper is used to optimize the TROPOZ retrieval, the methodology presented may be extended and applied to most other DIAL instruments, even if the atmospheric product of interest is not tropospheric ozone (e.g., temperature or water vapor). The analysis begins by computing synthetic lidar returns from actual TROPOZ lidar return signals in combination with a known ozone profile. From these synthetic signals, it is possible to explicitly determine retrieval algorithm biases from the known profile, thereby identifying any areas that may need refinement for a new operational version of the TROPOZ retrieval algorithm. A new vertical resolution scheme is presented, upgraded from a constant to a variable vertical resolution, in order to yield a statistical uncertainty of … has been effective in retrieving nearly 200 m lower to the surface. Also, as compared to the previous version of the
Houchin, J. S.
2014-09-01
A common problem in the off-line validation of calibration algorithms and algorithm coefficients is being able to run science data through exactly the same software used for on-line calibration of that data. The Joint Polar Satellite System (JPSS) program solved part of this problem by making the Algorithm Development Library (ADL) available, which allows the operational algorithm code to be compiled and run on a desktop Linux workstation using flat-file input and output. However, this solved only part of the problem, as the toolkit and methods to initiate the processing of data through the algorithms were geared specifically toward the algorithm developer, not the calibration analyst. In algorithm development mode, a limited number of sets of test data are staged for the algorithm once, and then run through the algorithm over and over as the software is developed and debugged. In calibration analyst mode, we are continually running new data sets through the algorithm, which requires significant effort to stage each of those data sets for the algorithm without additional tools. AeroADL solves this second problem by providing a set of scripts that wrap the ADL tools, providing efficient means to stage and process an input data set, to override static calibration coefficient look-up tables (LUTs) with experimental versions of those tables, and to manage a library containing multiple versions of each of the static LUT files in such a way that the correct set of LUTs required for each algorithm is automatically provided to the algorithm without analyst effort. Using AeroADL, The Aerospace Corporation's analyst team has demonstrated the ability to quickly and efficiently perform analysis tasks for both the VIIRS and OMPS sensors with minimal training on the software tools.
Gupta, Pooja; Durani, Susheel
2015-11-01
Polypeptides have the potential to be designed as drugs or inhibitors against desired targets. In polypeptides, every chiral α-amino acid can occur as either the l or the d enantiomer, and either form can be used as a design monomer. Among the various possibilities, the use of stereochemistry as a design tool has the potential to determine both the functional specificity and the metabolic stability of the designed polypeptides. Polypeptides with mixed l,d amino acids are a class of peptidomimetics: attractive drug-like molecules that are also less susceptible to proteolytic activity. Therefore, in this study, a three-step algorithm is proposed to design polypeptides against desired drug targets. First, all possible configurational isomers of the mixed l,d polyleucine (Ac-Leu8-NHMe) structure were randomly modeled with simulated annealing molecular dynamics, and the resultant library of discrete folds was scored against HIV protease as a model target. The best-scored folds of mixed l,d structures were then inverse optimized for sequences in situ, and the resultant sequences were validated as inhibitors for conformational integrity using molecular dynamics. This study presents and validates an algorithm to design polypeptides of mixed l,d structures as drugs/inhibitors by inverse fitting them as molecular ligands against the desired target.
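The first step of the pipeline, enumerating configurational isomers, is easy to make concrete: with eight chiral residues in Ac-Leu8-NHMe, there are 2^8 = 256 l/d configurations to model. A minimal sketch:

```python
from itertools import product

# Eight chiral residues, each either L or D: 2**8 = 256 configurational
# isomers, each of which would be modeled and scored in the pipeline above.
isomers = ["".join(cfg) for cfg in product("LD", repeat=8)]
print(len(isomers), isomers[0], isomers[-1])
```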
GLASS Daytime All-Wave Net Radiation Product: Algorithm Development and Preliminary Validation
Bo Jiang
2016-03-01
Mapping surface all-wave net radiation (Rn) is critically needed for various applications. Several existing Rn products from numerical models and satellite observations have coarse spatial resolutions, and their accuracies may not meet the requirements of land applications. In this study, we develop the Global LAnd Surface Satellite (GLASS) daytime Rn product at a 5 km spatial resolution. Its algorithm for converting shortwave radiation to all-wave net radiation using the Multivariate Adaptive Regression Splines (MARS) model was chosen after comparison with three other algorithms. Validation of the GLASS Rn product against high-quality in situ measurements in the United States shows a coefficient of determination of 0.879, an average root mean square error of 31.61 W m−2, and an average bias of −17.59 W m−2. We also compare our product/algorithm with another satellite product (CERES-SYN) and two reanalysis products (MERRA and JRA55), and find that the accuracy of the much higher spatial resolution GLASS Rn product is satisfactory. The GLASS Rn product from 2000 to the present is operational and freely available to the public.
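MARS builds piecewise-linear models from hinge basis functions max(0, x − t). As a schematic stand-in (fixed knots rather than MARS's adaptive knot search, and synthetic data rather than the GLASS training set), such a basis can be fitted by ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "shortwave -> all-wave net radiation" relation (illustrative only).
sw = rng.uniform(0.0, 1000.0, 400)                     # shortwave, W m^-2
rn = 0.6 * sw - 0.1 * np.maximum(sw - 500.0, 0.0) - 40.0
rn += rng.normal(0.0, 5.0, sw.size)                    # measurement noise

def hinge_design(x, knots):
    """MARS-style basis: intercept, x, and hinge terms max(0, x - t)."""
    cols = [np.ones_like(x), x] + [np.maximum(x - t, 0.0) for t in knots]
    return np.column_stack(cols)

knots = [250.0, 500.0, 750.0]                          # fixed, not adaptive
X = hinge_design(sw, knots)
coef, *_ = np.linalg.lstsq(X, rn, rcond=None)          # least-squares fit

pred = X @ coef
rmse = float(np.sqrt(np.mean((pred - rn) ** 2)))
print("RMSE:", rmse)
```

Real MARS additionally searches knot locations and prunes terms by generalized cross-validation; the fixed-knot fit above only shows the basis the model is built from.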
Lin, Yuan; Samei, Ehsan
2016-07-01
Dynamic perfusion imaging can provide the morphologic details of the scanned organs as well as the dynamic information of blood perfusion. However, due to the polyenergetic property of the x-ray spectra, the beam hardening effect results in undesirable artifacts and inaccurate CT values. To address this problem, this study proposes a segmentation-free polyenergetic dynamic perfusion imaging algorithm (pDP) to provide superior perfusion imaging. A dynamic perfusion exam usually comprises two phases, i.e., a precontrast phase and a postcontrast phase. In the precontrast phase, the attenuation properties of diverse base materials (e.g., in a thorax perfusion exam, base materials can include lung, fat, breast, soft tissue, bone, and metal implants) can be incorporated to reconstruct artifact-free precontrast images. If patient motion is negligible or can be corrected by registration, the precontrast images can then be employed as a priori information to derive linearized iodine projections from the postcontrast images. With the linearized iodine projections, iodine perfusion maps can be reconstructed directly without the influence of various confounding factors, such as iodine location, patient size, x-ray spectrum, and background tissue type. A series of simulations was conducted on a dynamic iodine calibration phantom and a dynamic anthropomorphic thorax phantom to validate the proposed algorithm. The simulations with the dynamic iodine calibration phantom showed that the proposed algorithm could effectively eliminate the beam hardening effect and enable quantitative iodine map reconstruction across various influential factors. The error range of the iodine concentration factors ([Formula: see text]) was reduced from [Formula: see text] for filtered back-projection (FBP) to [Formula: see text] for pDP. The quantitative results of the simulations with the dynamic anthropomorphic thorax phantom indicated that the maximum error of iodine concentrations can be reduced from
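The linearization idea, mapping polyenergetic projections back to the line integrals a monoenergetic beam would produce, can be shown with a toy two-energy spectrum and a classic polynomial water correction. The spectrum weights and attenuation coefficients below are invented for illustration; pDP's actual material-specific, prior-image-based procedure is more elaborate than this single-material sketch.

```python
import numpy as np

# Two-component x-ray "spectrum": relative fluences and water attenuation
# coefficients (per cm) at the two energies -- illustrative values only.
w = np.array([0.6, 0.4])
mu = np.array([0.30, 0.15])

t = np.linspace(0.0, 40.0, 200)     # water thickness, cm

# Polyenergetic measurement: p = -ln( sum_k w_k * exp(-mu_k * t) ).
p_poly = -np.log(w[0] * np.exp(-mu[0] * t) + w[1] * np.exp(-mu[1] * t))

# Ideal monoenergetic projection at the effective attenuation coefficient.
mu_eff = w @ mu
p_mono = mu_eff * t

# Calibrate a polynomial that maps measured (polyenergetic) projections to
# the linear ones -- the correction step that removes beam-hardening cupping.
c = np.polyfit(p_poly, p_mono, 4)
p_lin = np.polyval(c, p_poly)

err_before = float(np.max(np.abs(p_poly - p_mono)))
err_after = float(np.max(np.abs(p_lin - p_mono)))
print("max deviation before/after linearization:", err_before, err_after)
```

The uncorrected projections fall well below the linear ideal at large thicknesses (the beam "hardens"); the calibrated polynomial restores linearity to a small residual.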
Ladner, Travis R; Greenberg, Jacob K; Guerrero, Nicole; Olsen, Margaret A; Shannon, Chevis N; Yarbrough, Chester K; Piccirillo, Jay F; Anderson, Richard C E; Feldstein, Neil A; Wellons, John C; Smyth, Matthew D; Park, Tae Sung; Limbrick, David D
2016-05-01
OBJECTIVE Administrative billing data may facilitate large-scale assessments of treatment outcomes for pediatric Chiari malformation Type I (CM-I). Validated International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) code algorithms for identifying CM-I surgery are critical prerequisites for such studies but are currently only available for adults. The objective of this study was to validate two ICD-9-CM code algorithms using hospital billing data to identify pediatric patients undergoing CM-I decompression surgery. METHODS The authors retrospectively analyzed the validity of two ICD-9-CM code algorithms for identifying pediatric CM-I decompression surgery performed at 3 academic medical centers between 2001 and 2013. Algorithm 1 included any discharge diagnosis code of 348.4 (CM-I), as well as a procedure code of 01.24 (cranial decompression) or 03.09 (spinal decompression or laminectomy). Algorithm 2 restricted this group to the subset of patients with a primary discharge diagnosis of 348.4. The positive predictive value (PPV) and sensitivity of each algorithm were calculated. RESULTS Among 625 first-time admissions identified by Algorithm 1, the overall PPV for CM-I decompression was 92%. Among the 581 admissions identified by Algorithm 2, the PPV was 97%. The PPV for Algorithm 1 was lower in one center (84%) compared with the other centers (93%-94%), whereas the PPV of Algorithm 2 remained high (96%-98%) across all subgroups. The sensitivity of Algorithms 1 (91%) and 2 (89%) was very good and remained so across subgroups (82%-97%). CONCLUSIONS An ICD-9-CM algorithm requiring a primary diagnosis of CM-I has excellent PPV and very good sensitivity for identifying CM-I decompression surgery in pediatric patients. These results establish a basis for utilizing administrative billing data to assess pediatric CM-I treatment outcomes.
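The validation metrics reported here reduce to simple ratios over the confusion counts. A sketch with illustrative counts (not the study's raw data) chosen to land near Algorithm 1's reported 92% PPV and 91% sensitivity:

```python
def ppv_sensitivity(tp, fp, fn):
    """Positive predictive value and sensitivity from validation counts."""
    return tp / (tp + fp), tp / (tp + fn)

# Hypothetical counts: of 625 admissions flagged by the algorithm, 575 are
# true decompressions (TP), 50 are not (FP), and 57 true cases were missed (FN).
ppv, sens = ppv_sensitivity(tp=575, fp=50, fn=57)
print(f"PPV = {ppv:.2f}, sensitivity = {sens:.2f}")
```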
Taylor, Thomas E.; O'Dell, Christopher W.; Frankenberg, Christian; Partain, Philip T.; Cronk, Heather Q.; Savtchenko, Andrey; Nelson, Robert R.; Rosenthal, Emily J.; Chang, Albert Y.; Fisher, Brenden; Osterman, Gregory B.; Pollock, Randy H.; Crisp, David; Eldering, Annmarie; Gunson, Michael R.
2016-03-01
The objective of the National Aeronautics and Space Administration's (NASA) Orbiting Carbon Observatory-2 (OCO-2) mission is to retrieve the column-averaged carbon dioxide (CO2) dry air mole fraction (XCO2) from satellite measurements of reflected sunlight in the near-infrared. These estimates can be biased by clouds and aerosols, i.e., contamination, within the instrument's field of view. Screening of the most contaminated soundings minimizes unnecessary calls to the computationally expensive Level 2 (L2) XCO2 retrieval algorithm. Hence, robust cloud screening methods have been an important focus of the OCO-2 algorithm development team. Two distinct, computationally inexpensive cloud screening algorithms have been developed for this application. The A-Band Preprocessor (ABP) retrieves the surface pressure using measurements in the 0.76 µm O2 A band, neglecting scattering by clouds and aerosols, which introduce photon path-length differences that can cause large deviations between the expected and retrieved surface pressure. The Iterative Maximum A Posteriori (IMAP) Differential Optical Absorption Spectroscopy (DOAS) Preprocessor (IDP) retrieves independent estimates of the CO2 and H2O column abundances using observations taken at 1.61 µm (weak CO2 band) and 2.06 µm (strong CO2 band), while neglecting atmospheric scattering. The CO2 and H2O column abundances retrieved in these two spectral regions differ significantly in the presence of cloud and scattering aerosols. The combination of these two algorithms, which are sensitive to different features in the spectra, provides the basis for cloud screening of the OCO-2 data set. To validate the OCO-2 cloud screening approach, collocated measurements from NASA's Moderate Resolution Imaging Spectrometer (MODIS), aboard the Aqua platform, were compared to results from the two OCO-2 cloud screening algorithms. With tuning of algorithmic threshold parameters that allows for processing of ≃ 20-25 % of all OCO-2 soundings
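Combining the two preprocessors amounts to a pair of threshold tests: flag a sounding as contaminated if the ABP surface-pressure deviation is large or the IDP band-ratio consistency fails. The thresholds and interface below are illustrative only, not the operational OCO-2 values:

```python
def passes_cloud_screen(dp_surface_hpa, co2_ratio, h2o_ratio,
                        dp_max=25.0, ratio_tol=0.04):
    """Schematic two-test cloud screen (thresholds are illustrative).

    ABP-style test: retrieved-minus-prior surface pressure must be small,
    since cloud scattering shortens/lengthens photon paths.
    IDP-style test: weak-band / strong-band column ratios for CO2 and H2O
    must be near 1, since scattering perturbs the two bands differently.
    """
    abp_clear = abs(dp_surface_hpa) <= dp_max
    idp_clear = (abs(co2_ratio - 1.0) <= ratio_tol and
                 abs(h2o_ratio - 1.0) <= ratio_tol)
    return abp_clear and idp_clear

print(passes_cloud_screen(5.0, 1.01, 0.99))    # clear-sky-like sounding
print(passes_cloud_screen(60.0, 1.01, 0.99))   # large pressure deviation
```

Because the two tests respond to different spectral signatures, requiring both to pass screens a broader range of contamination than either alone.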
Zaini Paulo A
2010-05-01
Background From shotgun libraries used for the genomic sequencing of the phytopathogenic bacterium Xanthomonas axonopodis pv. citri (XAC), clones representative of the largest possible number of coding sequences (CDSs) were selected to create a DNA microarray platform on glass slides (XACarray). The creation of the XACarray established a tool capable of providing data for the analysis of global genome expression in this organism. Findings The inserts from the selected clones were amplified by PCR with the universal oligonucleotide primers M13R and M13F. The products obtained were purified and fixed in duplicate on glass slides specific for use in DNA microarrays. The number of spots on the microarray totaled 6,144, including 768 positive controls and 624 negative controls per slide. Validation of the platform was performed through hybridization of total DNA probes from XAC labeled with the different fluorophores Cy3 and Cy5. In this validation assay, 86% of all PCR products fixed on the glass slides were confirmed to present a hybridization signal greater than twice the standard deviation of the global median signal-to-noise ratio. Conclusions Our validation of the XACarray platform using DNA-DNA hybridization revealed that it can be used to evaluate the expression of 2,365 individual CDSs from all major functional categories, which corresponds to 52.7% of the annotated CDSs of the XAC genome. As a proof of concept, we used this platform in a previous work to verify the absence of genomic regions that could not be detected by sequencing in related strains of Xanthomonas.
GOCI Yonsei Aerosol Retrieval (YAER) algorithm and validation during DRAGON-NE Asia 2012 campaign
Choi, M.; Kim, J.; Lee, J.; Kim, M.; Park, Y. Je; Jeong, U.; Kim, W.; Holben, B.; Eck, T. F.; Lim, J. H.; Song, C. K.
2015-09-01
The Geostationary Ocean Color Imager (GOCI) onboard the Communication, Ocean, and Meteorology Satellites (COMS) is the first multi-channel ocean color imager in geostationary orbit. Hourly GOCI top-of-atmosphere radiance has been available for the retrieval of aerosol optical properties over East Asia since March 2011. This study presents improvements to the GOCI Yonsei Aerosol Retrieval (YAER) algorithm over ocean and land together with validation results during the DRAGON-NE Asia 2012 campaign. Optical properties of aerosol are retrieved from the GOCI YAER algorithm including aerosol optical depth (AOD) at 550 nm, fine-mode fraction (FMF) at 550 nm, single scattering albedo (SSA) at 440 nm, Angstrom exponent (AE) between 440 and 860 nm, and aerosol type from selected aerosol models in calculating AOD. Assumed aerosol models are compiled from global Aerosol Robotic Networks (AERONET) inversion data, and categorized according to AOD, FMF, and SSA. Nonsphericity is considered, and unified aerosol models are used over land and ocean. Different assumptions for surface reflectance are applied over ocean and land. Surface reflectance over the ocean varies with geometry and wind speed, while surface reflectance over land is obtained from the 1-3 % darkest pixels in a 6 km × 6 km area during 30 days. In the East China Sea and Yellow Sea, significant area is covered persistently by turbid waters, for which the land algorithm is used for aerosol retrieval. To detect turbid water pixels, TOA reflectance difference at 660 nm is used. GOCI YAER products are validated using other aerosol products from AERONET and the MODIS Collection 6 aerosol data from "Dark Target (DT)" and "Deep Blue (DB)" algorithms during the DRAGON-NE Asia 2012 campaign from March to May 2012. Comparison of AOD from GOCI and AERONET gives a Pearson correlation coefficient of 0.885 and a linear regression equation with GOCI AOD =1.086 × AERONET AOD - 0.041. GOCI and MODIS AODs are more highly correlated
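The validation statistics quoted, a Pearson correlation coefficient and a linear regression of GOCI AOD on AERONET AOD, are straightforward to compute; here on synthetic data with an assumed slope and noise level, not the campaign measurements:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic AERONET "truth" and a satellite-like estimate with slope bias
# and random noise (all parameters invented for illustration).
aeronet = rng.gamma(2.0, 0.15, 300)                 # plausible AOD values
goci = 1.05 * aeronet - 0.02 + rng.normal(0, 0.05, aeronet.size)

r = float(np.corrcoef(goci, aeronet)[0, 1])         # Pearson correlation
slope, intercept = np.polyfit(aeronet, goci, 1)     # linear regression fit

print(f"r = {r:.3f}, GOCI AOD = {slope:.3f} x AERONET AOD + {intercept:+.3f}")
```

The fitted slope and intercept recover the assumed bias terms to within sampling noise, which is exactly how the reported regression equation (slope 1.086, offset −0.041) should be read.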
Vikraman, S; Ramu, M; Karrthick, Kp; Rajesh, T; Senniandavar, V; Sambasivaselli, R; Maragathaveni, S; Dhivya, N; Tejinder, K [Medanta The Medicity Hospital, Gurgaon, Haryana (India); Manigandan, D [Fortis Hospital, Mohali, Punjab (India); Muthukumaran, M [Apollo Speciality Hospital, Chennai, Tamilnadu (India)
2015-06-15
Purpose: The purpose of this study was to validate COMPASS 3D dosimetry as a routine pre-treatment verification tool against the commercially available CMS Monaco and Oncentra Masterplan planning systems. Methods: Twenty patients with esophageal cancer were selected for this study. All underwent radical VMAT treatment on an Elekta linac, and plans were generated in Monaco v5.0 with the Monte Carlo (MC) dose calculation algorithm. COMPASS 3D dosimetry uses an advanced collapsed cone convolution (CCC) dose calculation algorithm. To validate the CCC algorithm in COMPASS, the DICOM RT plans generated using the Monaco MC algorithm were transferred to the Oncentra Masterplan v4.3 TPS, where only final dose calculations were performed using the CCC algorithm, without optimization. The MC algorithm is known to be accurate, and some difference between MC and CCC results is expected; hence, the CCC implementation in COMPASS should be validated against another commercially available CCC algorithm. To use CCC as a pre-treatment verification tool with reference to MC-generated treatment plans, CCC in OMP and CCC in COMPASS were validated using dose-volume-based indices such as D98 and D95 for target volumes, as well as OAR doses. Results: The point doses for open beams were within 1% with reference to the Monaco MC algorithm. Comparisons of CCC (OMP) vs. CCC (COMPASS) showed mean differences of 1.82% ± 1.12 SD and 1.65% ± 0.67 SD for D98 and D95, respectively, for target coverage. A maximum point dose difference of −2.15% ± 0.60 SD was observed in the target volume. A mean lung dose difference of −2.68% ± 1.67 SD was noticed between OMP and COMPASS. The maximum point dose difference for the spinal cord was −1.82% ± 0.287 SD. Conclusion: In this study, the accuracy of the CCC algorithm in COMPASS 3D dosimetry was validated by comparison with the CCC algorithm in the OMP TPS. Dose calculation in COMPASS agrees to within 2% with commercially available TPS algorithms.
Bitter, Ingmar; Brown, John E.; Brickman, Daniel; Summers, Ronald M.
2004-04-01
The presented method significantly reduces the time necessary to validate a computed tomographic colonography (CTC) computer aided detection (CAD) algorithm for colonic polyps applied to a large patient database. As the algorithm is being developed on Windows PCs and our target, a Beowulf cluster, runs on Linux PCs, we made the application dual-platform compatible using a single source code tree. To maintain, share, and deploy source code, we used CVS (concurrent versions system) software. We built the libraries from their sources for each operating system. Next, we made the CTC CAD algorithm dual-platform compatible and validated that both Windows and Linux produced the same results. Eliminating system dependencies was mostly achieved using the Qt programming library, which encapsulates most of the system-dependent functionality in order to present the same interface on either platform. Finally, we wrote scripts to execute the CTC CAD algorithm in parallel. Running hundreds of simultaneous copies of the CTC CAD algorithm on a Beowulf cluster computing network enables execution in less than four hours on our entire collection of over 2400 CT scans, as compared to a month on a single PC. As a consequence, our complete patient database can be processed daily, boosting research productivity. Large-scale validation of a computer aided polyp detection algorithm for CT colonography using cluster computing significantly improves the round-trip time of algorithm improvement and revalidation.
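The farm-out pattern, one independent CAD run per scan distributed over many workers, can be sketched with a thread pool standing in for cluster nodes (the worker body and flagging rule are placeholders, not the actual CAD algorithm):

```python
from concurrent.futures import ThreadPoolExecutor

def run_cad(scan_id):
    """Stand-in for one CTC CAD run on a single CT scan (hypothetical).

    A real worker would load the scan, run polyp detection, and write
    results; here we just pretend every 7th scan yields a finding.
    """
    return scan_id, scan_id % 7 == 0

# Fan the per-scan jobs out to a worker pool. On a Beowulf cluster each
# worker would be a node process; threads illustrate the same pattern,
# which works because the per-scan runs are fully independent.
scans = range(100)
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(run_cad, scans))
print(sum(results.values()), "of", len(results), "scans flagged")
```

Because no scan depends on another, throughput scales roughly linearly with worker count, which is what turns a month of serial processing into hours.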
Molecular phylogenetic trees - On the validity of the Goodman-Moore augmentation algorithm
Holmquist, R.
1979-01-01
A response is made to the reply of Nei and Tateno (1979) to the letter of Holmquist (1978) supporting the validity of the augmentation algorithm of Moore (1977) in reconstructions of nucleotide substitutions by means of the maximum parsimony principle. It is argued that the overestimation of the augmented numbers of nucleotide substitutions (augmented distances) found by Tateno and Nei (1978) is due to an unrepresentative data sample and that it is only necessary that evolution be stochastically uniform in different regions of the phylogenetic network for the augmentation method to be useful. The importance of the average value of the true distance over all links is explained, and the relative variances of the true and augmented distances are calculated to be almost identical. The effects of topological changes in the phylogenetic tree on the augmented distance and the question of the correctness of ancestral sequences inferred by the method of parsimony are also clarified.
Algorithms for the remote sensing of the Baltic ecosystem (DESAMBEM). Part 2: Empirical validation
Bogdan Woźniak
2008-12-01
This paper is the second of two articles on the methodology of the remote sensing of the Baltic ecosystem. In Part 1 the authors presented the set of DESAMBEM algorithms for determining the major parameters of this ecosystem on the basis of satellite data (see Woźniak et al. 2008, this issue). That article discussed in detail the mathematical apparatus of the algorithms. Part 2 presents the effects of the practical application of the algorithms and their validation, the latter based on satellite maps of selected Baltic ecosystem parameters: the distributions of the sea surface temperature, the Photosynthetically Available Radiation (PAR) at the sea surface, the surface concentrations of chlorophyll a, and the total primary production of organic matter. Particular emphasis was laid on analysing the precision of estimates of these and other parameters of the Baltic ecosystem, determined by remote sensing methods. The errors in these estimates turned out to be relatively small; hence, the set of DESAMBEM algorithms should in the future be utilised as the foundation for the effective satellite monitoring of the state and functioning of the Baltic ecosystem.
Empirical validation of the S-Score algorithm in the analysis of gene expression data
Archer Kellie J
2006-03-01
Background Current methods of analyzing Affymetrix GeneChip® microarray data require the estimation of probe set expression summaries, followed by application of statistical tests to determine which genes are differentially expressed. The S-Score algorithm described by Zhang and colleagues is an alternative method that allows tests of hypotheses directly from probe-level data. It is based on an error model in which the detected signal is proportional to the probe pair signal for highly expressed genes, but approaches a background level (rather than 0) for genes with low levels of expression. This model is used to calculate relative changes in probe pair intensities that convert probe signals into multiple measurements with equalized errors, which are summed over a probe set to form the S-Score. Assuming no expression differences between chips, the S-Score follows a standard normal distribution, allowing direct tests of hypotheses. Using spike-in and dilution datasets, we validated the S-Score method against comparisons of gene expression utilizing the more recently developed methods RMA, dChip, and MAS5. Results The S-Score showed excellent sensitivity and specificity in detecting low-level gene expression changes. Rank ordering of S-Score values more accurately reflected known fold-change values compared to other algorithms. Conclusion The S-Score method, utilizing probe-level data directly, offers significant advantages over comparisons using only probe set expression summaries.
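The key distributional claim, that error-equalized probe differences summed over a probe set are standard normal under the null, can be checked with a schematic S-Score on simulated null data (this is the normalization idea only, not Zhang and colleagues' full error model):

```python
import numpy as np

rng = np.random.default_rng(2)

def s_score(diff, err):
    """Schematic S-Score: error-equalized probe differences summed over
    the probe set, scaled so the null distribution is standard normal."""
    z = diff / err                       # equalize errors probe by probe
    return z.sum() / np.sqrt(z.size)     # N(0, 1) under "no change"

# Null case: 20 probe pairs per set, no expression difference between chips.
scores = np.array([s_score(rng.normal(0.0, 3.0, 20), 3.0)
                   for _ in range(2000)])
print("mean, std under the null:", scores.mean(), scores.std())
```

Because the null scores are (approximately) N(0, 1), a two-sided p-value can be read directly off the normal distribution without estimating expression summaries first.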
Physics and Algorithm Enhancements for a Validated MCNP/X Monte Carlo Simulation Tool, Phase VII
McKinney, Gregg W [Los Alamos National Laboratory
2012-07-17
Currently the US lacks an end-to-end (i.e., source-to-detector) radiation transport simulation code with predictive capability for the broad range of DHS nuclear material detection applications. For example, gaps in the physics, along with inadequate analysis algorithms, make it difficult for Monte Carlo simulations to provide a comprehensive evaluation, design, and optimization of proposed interrogation systems. With the development and implementation of several key physics and algorithm enhancements, along with needed improvements in evaluated data and benchmark measurements, the MCNP/X Monte Carlo codes will provide designers, operators, and systems analysts with a validated tool for developing state-of-the-art active and passive detection systems. This project is currently in its seventh year (Phase VII). This presentation will review thirty enhancements that have been implemented in MCNPX over the last 3 years and were included in the 2011 release of version 2.7.0. These improvements include 12 physics enhancements, 4 source enhancements, 8 tally enhancements, and 6 other enhancements. Examples and results will be provided for each of these features. The presentation will also discuss the eight enhancements that will be migrated into MCNP6 over the upcoming year.
Cota, Glenn F.
2001-01-01
The overall goal of this effort is to acquire a large bio-optical database, encompassing most environmental variability in the Arctic, to develop algorithms for phytoplankton biomass and production and other optically active constituents. A large suite of bio-optical and biogeochemical observations has been collected in a variety of high-latitude ecosystems at different seasons. The Ocean Research Consortium of the Arctic (ORCA) is a collaborative effort between G.F. Cota of Old Dominion University (ODU), W.G. Harrison and T. Platt of the Bedford Institute of Oceanography (BIO), S. Sathyendranath of Dalhousie University, and S. Saitoh of Hokkaido University. ORCA has now conducted 12 cruises and collected over 500 in-water optical profiles plus a variety of ancillary data. Observational suites typically include apparent optical properties (AOPs), inherent optical properties (IOPs), and a variety of ancillary observations including sun photometry, biogeochemical profiles, and productivity measurements. All quality-assured data have been submitted to NASA's SeaWiFS Bio-Optical Archive and Storage System (SeaBASS) data archive. Our algorithm development efforts address most of the potential bio-optical data products for the Sea-viewing Wide Field-of-view Sensor (SeaWiFS), Moderate Resolution Imaging Spectroradiometer (MODIS), and GLI, and provide validation for specific areas of concern, i.e., high latitudes and coastal waters.
J. Bak
2014-02-01
The accuracy of total ozone computed from the Smithsonian Astrophysical Observatory (SAO) optimal estimation (OE) ozone profile algorithm (SOE) applied to the Ozone Monitoring Instrument (OMI) is assessed through comparisons with ground-based Brewer spectrometer measurements from 2005 to 2008. We also make comparisons with the three OMI operational ozone products, derived from the NASA Total Ozone Mapping Spectrometer (TOMS), KNMI Differential Optical Absorption Spectroscopy (DOAS), and KNMI OE (KOE) algorithms. Excellent agreement is observed between SAO and Brewer, with a mean difference of less than ±1% at most individual stations. The KNMI OE algorithm systematically overestimates Brewer total ozone by 2% at low/mid latitudes and 5% at high latitudes, while the TOMS and DOAS algorithms underestimate it by ~1.65% on average. Standard deviations of ~1.8% are found for both SOE and TOMS, but DOAS and KOE have scatters of 2.2% and 2.6%, respectively. The stability of the SOE algorithm shows insignificant dependence on viewing geometry, cloud parameters, and total ozone column. In comparison, the KOE differences from Brewer values are significantly correlated with solar and viewing zenith angles, with a significant deviation depending on cloud parameters and total ozone amount. The TOMS algorithm exhibits similar stability to SOE with respect to viewing geometry and total column ozone, but stronger cloud parameter dependence. The dependence of DOAS on the algorithmic variables is marginal compared to KOE, but distinct compared to the SOE and TOMS algorithms. Comparisons of all four OMI products with Brewer show no apparent long-term drift but a seasonally affected feature, especially for KOE and TOMS. The substantial differences in KOE vs. SOE algorithm performance cannot be sufficiently explained by the use of soft calibration (in SOE) and the use of a different a priori error covariance matrix, but other algorithm details cause larger fitting
Amsuess, Sebastian; Goebel, Peter; Graimann, Bernhard; Farina, Dario
2015-09-01
Functional replacement of upper limbs by means of dexterous prosthetic devices remains a technological challenge. While the mechanical design of prosthetic hands has advanced rapidly, the human-machine interfacing and the control strategies needed for the activation of multiple degrees of freedom are not reliable enough for restoring hand function successfully. Machine learning methods capable of inferring the user intent from EMG signals generated by the activation of the remnant muscles are regarded as a promising solution to this problem. However, the lack of robustness of the current methods impedes their routine clinical application. In this study, we propose a novel algorithm for controlling multiple degrees of freedom sequentially, inherently proportionally and with high robustness, allowing a good level of prosthetic hand function. The control algorithm is based on the spatial linear combinations of amplitude-related EMG signal features. The weighting coefficients in this combination are derived from the optimization criterion of the common spatial patterns filters which allow for maximal discriminability between movements. An important component of the study is the validation of the method which was performed on both able-bodied and amputee subjects who used physical prostheses with customized sockets and performed three standardized functional tests mimicking daily-life activities of varying difficulty. Moreover, the new method was compared in the same conditions with one clinical/industrial and one academic state-of-the-art method. The novel algorithm outperformed significantly the state-of-the-art techniques in both subject groups for tests that required the activation of more than one degree of freedom. Because of the evaluation in real time control on both able-bodied subjects and final users (amputees) wearing physical prostheses, the results obtained allow for the direct extrapolation of the benefits of the proposed method for the end users. In
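The core of the method is the CSP criterion: spatial filters derived from the generalized eigenproblem of the two class covariances, maximizing the variance ratio between movements. A compact whitening-based sketch on synthetic multichannel data (not EMG, and without the paper's proportional-control layer):

```python
import numpy as np

rng = np.random.default_rng(3)

def csp_filters(x1, x2):
    """Common spatial patterns via the generalized eigenproblem
    C1 w = lambda (C1 + C2) w. Columns of the returned W maximize the
    variance ratio between the two classes (whitening-based solution)."""
    c1 = np.cov(x1)
    c2 = np.cov(x2)
    # Whiten with the composite covariance, then eigendecompose class 1.
    d, u = np.linalg.eigh(c1 + c2)
    p = u @ np.diag(d ** -0.5) @ u.T          # symmetric whitening matrix
    lam, v = np.linalg.eigh(p @ c1 @ p.T)     # eigenvalues sorted ascending
    return p.T @ v                            # spatial filters W

# Two synthetic 4-channel "movement classes" with different channel variances.
a = np.diag([3.0, 1.0, 1.0, 0.5]) @ rng.normal(size=(4, 5000))
b = np.diag([0.5, 1.0, 1.0, 3.0]) @ rng.normal(size=(4, 5000))
w = csp_filters(a, b)

top = w[:, -1]                                # filter favoring class "a"
ratio = (top @ np.cov(a) @ top) / (top @ np.cov(b) @ top)
print("variance ratio for the top CSP filter:", ratio)
```

The extreme-eigenvalue filters give the most discriminative projections; in the paper these serve as weighting coefficients for amplitude-related EMG features rather than raw channel data.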
Matthews Fiona
2009-07-01
Background Little evidence is available to determine which patients should undergo repeat biopsy after an initial benign extended core biopsy (ECB). Attempts have been made to reduce the frequency of negative repeat biopsies using PSA kinetics, density, free-to-total ratios, and Kattan's nomogram to identify men more likely to harbour cancer, but no single tool accurately predicts biopsy outcome. The objective of this study was to develop a predictive nomogram to identify men more likely to have cancer diagnosed on repeat prostate biopsy. Methods Patients with a previous benign ECB undergoing repeat biopsy were identified from a database. The association between age, volume, stage, previous histology, PSA kinetics, and positive repeat biopsy was analysed. Variables were entered stepwise into logistic regression models. A risk score giving the probability of a positive repeat biopsy was estimated. The performance of this score was assessed using receiver operating characteristic (ROC) analysis. Results 110 repeat biopsies were performed in this period. Cancer was detected in 31% of repeat biopsies at Hospital 1 and 30% at Hospital 2. The most accurate predictive model combined age, PSA, PSA velocity, free-to-total PSA ratio, prostate volume, and digital rectal examination (DRE) findings. The risk model performed well in an independent sample: the area under the curve (AUCROC) was 0.818 (95% CI 0.707 to 0.929) for the risk model and 0.696 (95% CI 0.472 to 0.921) for the validation model. It was calculated that using a threshold risk score of > 0.2 to identify high-risk individuals would reduce repeat biopsies by 39% while identifying 90% of the men with prostate cancer. Conclusion An accurate multi-variable predictive tool to determine the risk of positive repeat prostate biopsy is presented. This can be used by urologists in an outpatient setting to aid decision-making for men with prior benign histology for whom a repeat biopsy is being considered.
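The two ingredients of the analysis, a logistic risk score and its ROC assessment, can be sketched with synthetic data; the predictors and coefficients below are invented for illustration, not the fitted nomogram:

```python
import numpy as np

rng = np.random.default_rng(4)

def risk_score(x, coef, intercept):
    """Logistic-model risk: probability of a positive repeat biopsy."""
    return 1.0 / (1.0 + np.exp(-(x @ coef + intercept)))

def auc(scores, labels):
    """Area under the ROC curve via the rank (Mann-Whitney) statistic."""
    order = np.argsort(scores)
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Synthetic cohort with two standardized predictors (e.g. PSA velocity and
# free-to-total ratio); coefficients are invented, not the paper's model.
n = 500
x = rng.normal(size=(n, 2))
true_logit = 1.5 * x[:, 0] - 1.0 * x[:, 1] - 0.8
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(int)

p = risk_score(x, np.array([1.5, -1.0]), -0.8)
print("AUC:", auc(p, y))
```

Applying a probability threshold such as 0.2 to the risk score then yields the biopsy-reduction versus cancers-detected trade-off reported in the study.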
Matthijs Lipperts
2017-10-01
Conclusion: Activity monitoring of orthopaedic patients by counting and timing a large set of relevant daily life events is feasible in a user- and patient-friendly way and at high clinical validity using a generic three-dimensional accelerometer and algorithms based on empirical and physical methods. The algorithms performed well for healthy individuals as well as patients recovering after total joint replacement in a challenging validation set-up. With such a simple and transparent method real-life activity parameters can be collected in orthopaedic practice for diagnostics, treatments, outcome assessment, or biofeedback.
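As a rough illustration of the event-counting idea, a threshold rule on an acceleration-magnitude trace might look like the sketch below; the validated algorithms combine empirical and physical rules that are considerably richer than this:

```python
def count_events(signal, threshold):
    """Count rising-edge threshold crossings in an acceleration-magnitude
    trace (values in g). Each crossing is treated as one activity event."""
    events = 0
    above = False
    for value in signal:
        if value >= threshold and not above:
            events += 1
            above = True
        elif value < threshold:
            above = False
    return events

# Toy magnitude trace from a 3D accelerometer (|a| in g).
trace = [1.0, 1.3, 1.1, 0.9, 1.4, 1.5, 0.8, 1.25]
n_events = count_events(trace, threshold=1.2)  # 3 rising crossings here
```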
Chan, An-Wen; Fung, Kinwah; Tran, Jennifer M; Kitchen, Jessica; Austin, Peter C; Weinstock, Martin A; Rochon, Paula A
2016-10-01
Keratinocyte carcinoma (nonmelanoma skin cancer) accounts for substantial burden in terms of high incidence and health care costs but is excluded by most cancer registries in North America. Administrative health insurance claims databases offer an opportunity to identify these cancers using diagnosis and procedural codes submitted for reimbursement purposes. To apply recursive partitioning to derive and validate a claims-based algorithm for identifying keratinocyte carcinoma with high sensitivity and specificity. Retrospective study using population-based administrative databases linked to 602 371 pathology episodes from a community laboratory for adults residing in Ontario, Canada, from January 1, 1992, to December 31, 2009. The final analysis was completed in January 2016. We used recursive partitioning (classification trees) to derive an algorithm based on health insurance claims. The performance of the derived algorithm was compared with 5 prespecified algorithms and validated using an independent academic hospital clinic data set of 2082 patients seen in May and June 2011. Sensitivity, specificity, positive predictive value, and negative predictive value using the histopathological diagnosis as the criterion standard. We aimed to achieve maximal specificity, while maintaining greater than 80% sensitivity. Among 602 371 pathology episodes, 131 562 (21.8%) had a diagnosis of keratinocyte carcinoma. Our final derived algorithm outperformed the 5 simple prespecified algorithms and performed well in both community and hospital data sets in terms of sensitivity (82.6% and 84.9%, respectively), specificity (93.0% and 99.0%, respectively), positive predictive value (76.7% and 69.2%, respectively), and negative predictive value (95.0% and 99.6%, respectively). Algorithm performance did not vary substantially during the 18-year period. This algorithm offers a reliable mechanism for ascertaining keratinocyte carcinoma for epidemiological research in the absence of
Park, Sang Hyuk; An, Dongheui; Chang, You Jin; Kim, Hyun Jung; Kim, Kyung Min; Koo, Tai Yeon; Kim, Sollip; Lee, Woochang; Yang, Won Seok; Hong, Sang-Bum; Chun, Sail; Min, Won-Ki
2011-03-01
Arterial blood gas analysis (ABGA) is a useful test that estimates the acid-base status of patients. However, numerically reported test results make rapid interpretation difficult. To overcome this problem, we have developed an algorithm that automatically interprets ABGA results, and assessed the validity of this algorithm for applications in clinical laboratory services. The algorithm was developed based on well-established guidelines using three test results (pH, PaCO₂ and [HCO₃⁻]) as variables. Ninety-nine ABGA test results were analysed by the algorithm. The algorithm's interpretations and the interpretations of two representative web-based ABGA interpretation programs were compared with those of two experienced clinicians. The concordance rates between the interpretations of each of the two clinicians and the algorithm were 91.9% and 97.0%, respectively. The web-based programs could not issue definitive interpretations in 15.2% and 25.3% of cases, respectively, but the algorithm issued definitive interpretations in all cases. Of the 10 cases that invoked disagreement among interpretations by the algorithm and the two clinicians, half were interpreted as compensated acid-base disorders by the algorithm but were assessed as normal by at least one of the two clinicians. In no case did the algorithm indicate a normal condition that the clinicians assessed as an abnormal condition. The interpretations of the algorithm showed a higher concordance rate with those of experienced clinicians than did two web-based programs. The algorithm sensitively detected acid-base disorders. The algorithm may be adopted by the clinical laboratory services to provide rapid and definitive interpretations of test results.
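The structure of such a rule-based interpreter can be sketched as follows. The cut-offs are the widely taught reference ranges (pH 7.35-7.45, PaCO₂ 35-45 mmHg, HCO₃⁻ 22-26 mmol/L), and the compensation logic is deliberately simplified; the published algorithm's exact decision rules are more detailed:

```python
def interpret_abg(ph, paco2, hco3):
    """Simplified acid-base interpretation from pH, PaCO2 (mmHg) and
    HCO3- (mmol/L), using common textbook reference ranges."""
    if 7.35 <= ph <= 7.45 and 35 <= paco2 <= 45 and 22 <= hco3 <= 26:
        return "normal"
    if ph < 7.35:
        if paco2 > 45:
            return "respiratory acidosis"
        if hco3 < 22:
            return "metabolic acidosis"
        return "mixed/indeterminate acidosis"
    if ph > 7.45:
        if paco2 < 35:
            return "respiratory alkalosis"
        if hco3 > 26:
            return "metabolic alkalosis"
        return "mixed/indeterminate alkalosis"
    # pH in range but PaCO2/HCO3 abnormal: possible compensated disorder.
    return "compensated disorder"
```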
Choi, Myungje; Kim, Jhoon; Lee, Jaehwa; Kim, Mijin; Park, Young-Je; Jeong, Ukkyo; Kim, Woogyung; Hong, Hyunkee; Holben, Brent; Eck, Thomas F.; Song, Chul H.; Lim, Jae-Hyun; Song, Chang-Keun
2016-04-01
The Geostationary Ocean Color Imager (GOCI) onboard the Communication, Ocean, and Meteorological Satellite (COMS) is the first multi-channel ocean color imager in geostationary orbit. Hourly GOCI top-of-atmosphere radiance has been available for the retrieval of aerosol optical properties over East Asia since March 2011. This study presents improvements made to the GOCI Yonsei Aerosol Retrieval (YAER) algorithm together with validation results during the Distributed Regional Aerosol Gridded Observation Networks - Northeast Asia 2012 (DRAGON-NE Asia 2012) campaign. The evaluation during the spring season over East Asia is important because of high aerosol concentrations and diverse types of Asian dust and haze. Optical properties of aerosol are retrieved from the GOCI YAER algorithm, including aerosol optical depth (AOD) at 550 nm, fine-mode fraction (FMF) at 550 nm, single-scattering albedo (SSA) at 440 nm, Ångström exponent (AE) between 440 and 860 nm, and aerosol type. The aerosol models are created based on a global analysis of the Aerosol Robotic Network (AERONET) inversion data, and cover a broad range of size distribution and absorptivity, including nonspherical dust properties. The Cox-Munk ocean bidirectional reflectance distribution function (BRDF) model is used over ocean, and an improved minimum reflectance technique is used over land. Because turbid water is persistent over the Yellow Sea, the land algorithm is used for such cases. The aerosol products are evaluated against AERONET observations and MODIS Collection 6 aerosol products retrieved from the Dark Target (DT) and Deep Blue (DB) algorithms during the DRAGON-NE Asia 2012 campaign conducted from March to May 2012. Comparison of AOD from GOCI and AERONET resulted in a Pearson correlation coefficient of 0.881 and a linear regression equation of GOCI AOD = 1.083 × AERONET AOD - 0.042. The correlation between GOCI and MODIS AODs is higher over ocean than land. GOCI AOD shows better
Juneja, Prabhjot; Evans, Philp M; Harris, Emma J
2013-08-01
Validation is required to ensure automated segmentation algorithms are suitable for radiotherapy target definition. In the absence of true segmentation, algorithmic segmentation is validated against expert outlining of the region of interest. Multiple experts are used to overcome inter-expert variability. Several approaches have been studied in the literature, but the most appropriate approach to combine the information from multiple expert outlines, to give a single metric for validation, is unclear. None considers a metric that can be tailored to case-specific requirements in radiotherapy planning. Validation index (VI), a new validation metric which uses experts' level of agreement, was developed. A control parameter was introduced for the validation of segmentations required for different radiotherapy scenarios: for targets close to organs-at-risk and for difficult-to-discern targets, where large variation between experts is expected. VI was evaluated using two simulated idealized cases and data from two clinical studies. VI was compared with the commonly used Dice similarity coefficient (DSCpair-wise) and found to be more sensitive than the DSCpair-wise to changes in agreement between experts. VI was shown to be adaptable to specific radiotherapy planning scenarios.
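For reference, the pairwise Dice similarity coefficient that VI is compared against is straightforward to compute from two binary masks; a minimal version:

```python
def dice(a, b):
    """Dice similarity coefficient between two binary masks given as
    flat sequences of 0/1: 2|A ∩ B| / (|A| + |B|)."""
    inter = sum(1 for x, y in zip(a, b) if x and y)
    size = sum(a) + sum(b)
    # Two empty masks are conventionally treated as perfect agreement.
    return 2.0 * inter / size if size else 1.0
```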
Further validation of the hybrid algorithm for CTO PCI; difficult lesions, same success.
Basir, Mir B; Karatasakis, Aris; Alqarqaz, Mohammad; Danek, Barbara; Rangan, Bavana V; Brilakis, Emmanouil S; Kim, Henry; O'Neill, William W; Alaswad, Khaldoon
To evaluate the success rates and outcome of the hybrid algorithm for chronic total occlusion (CTO) percutaneous coronary intervention (PCI) by a single operator in two different clinical settings. We compared 279 consecutive CTO PCIs performed by a single, high-volume operator using the hybrid algorithm in two different clinical settings. Data were collected through the PROGRESS CTO Registry. We compared 145 interventions performed in a community program (cohort A) with 134 interventions performed in a referral center (cohort B). Patients in cohort B had more complex lesions with higher J-CTO (3.0 vs. 3.41; pCTO (1.5 vs. 1.8, P=0.003) scores, more moderate to severe tortuosity (38% vs. 64%; pCTO PCI attempts (15% vs. 35%; p=0.001). Both technical (95% vs. 91%; p=0.266) and procedural (94% vs. 88%; p=0.088) success rates were similar between the two cohorts despite significantly different lesion complexity. Overall major adverse cardiovascular events were higher in cohort B (1.4% vs. 7.8%; p=0.012) without any significant difference in mortality (0.7% vs. 2.3%, p=0.351). In spite of higher lesion complexity in the setting of a quaternary-care referral center, use of the hybrid algorithm for CTO PCI enabled similarly high technical and procedural success rates as compared with those previously achieved by the same operator in a community-based program, at the expense of a higher rate of MACE. Copyright © 2017 Elsevier Inc. All rights reserved.
Roussel, Marc R; Zhu, Rui
2006-12-08
The quantitative modeling of gene transcription and translation requires a treatment of two key features: stochastic fluctuations due to the limited copy numbers of key molecules (genes, RNA polymerases, ribosomes), and delayed output due to the time required for biopolymer synthesis. Recently proposed algorithms allow for efficient simulations of such systems. However, it is critical to know whether the results of delay stochastic simulations agree with those from more detailed models of the transcription and translation processes. We present a generalization of previous delay stochastic simulation algorithms which allows both for multiple delays and for distributions of delay times. We show that delay stochastic simulations closely approximate simulations of a detailed transcription model except when two-body effects (e.g. collisions between polymerases on a template strand) are important. Finally, we study a delay stochastic model of prokaryotic transcription and translation which reproduces observations from a recent experimental study in which a single gene was expressed under the control of a repressed lac promoter in E. coli cells. This demonstrates our ability to quantitatively model gene expression using these new methods.
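The delayed-update bookkeeping at the heart of such algorithms can be illustrated with a toy single-delay birth-death model: initiated syntheses are queued and only counted after a fixed delay (a stand-in for biopolymer elongation time). This is a sketch of the general idea only, not the paper's multi-delay, distributed-delay generalization, and the rejection-style handling of interrupted steps is one of several published variants:

```python
import heapq
import random

def delayed_birth_death(k_birth, k_death, delay, t_end, seed=1):
    """Toy delay SSA: 'birth' reactions initiated now complete after a
    fixed delay; completed products decay with first-order kinetics.
    Returns the number of completed products at t_end."""
    rng = random.Random(seed)
    t, n = 0.0, 0          # current time, completed product count
    pending = []           # min-heap of completion times of in-progress syntheses
    while t < t_end:
        a = k_birth + k_death * n              # total propensity
        dt = rng.expovariate(a) if a > 0 else float("inf")
        # If a queued delayed reaction completes before the tentative next
        # event, apply it first and discard the tentative step.
        if pending and pending[0] <= min(t + dt, t_end):
            t = heapq.heappop(pending)
            n += 1
            continue
        t += dt
        if t >= t_end:
            break
        if rng.random() < k_birth / a:
            heapq.heappush(pending, t + delay)  # schedule delayed completion
        else:
            n -= 1                              # instantaneous decay
    return n
```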
Deformable mesh registration for the validation of automatic target localization algorithms
Robertson, Scott; Weiss, Elisabeth; Hugo, Geoffrey D.
2013-01-01
Purpose: To evaluate deformable mesh registration (DMR) as a tool for validating automatic target registration algorithms used during image-guided radiation therapy. Methods: DMR was implemented in a hierarchical model, with rigid, affine, and B-spline transforms optimized in succession to register a pair of surface meshes. The gross tumor volumes (primary tumor and involved lymph nodes) were contoured by a physician on weekly CT scans in a cohort of lung cancer patients and converted to surface meshes. The meshes from weekly CT images were registered to the mesh from the planning CT, and the resulting registered meshes were compared with the delineated surfaces. Known deformations were also applied to the meshes, followed by mesh registration to recover the known deformation. Mesh registration accuracy was assessed at the mesh surface by computing the symmetric surface distance (SSD) between vertices of each registered mesh pair. Mesh registration quality in regions within 5 mm of the mesh surface was evaluated with respect to a high quality deformable image registration. Results: For 18 patients presenting with a total of 19 primary lung tumors and 24 lymph node targets, the SSD averaged 1.3 ± 0.5 and 0.8 ± 0.2 mm, respectively. Vertex registration errors (VRE) relative to the applied known deformation were 0.8 ± 0.7 and 0.2 ± 0.3 mm for the primary tumor and lymph nodes, respectively. Inside the mesh surface, corresponding average VRE ranged from 0.6 to 0.9 and 0.2 to 0.9 mm, respectively. Outside the mesh surface, average VRE ranged from 0.7 to 1.8 and 0.2 to 1.4 mm. The magnitude of errors generally increased with increasing distance away from the mesh. Conclusions: Provided that delineated surfaces are available, deformable mesh registration is an accurate and reliable method for obtaining a reference registration to validate automatic target registration algorithms for image-guided radiation therapy, specifically in regions on or near the target surfaces
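The symmetric surface distance used here can be approximated over mesh vertex sets with a brute-force nearest-neighbour search (adequate for small meshes; practical implementations use spatial indexing):

```python
import math

def symmetric_surface_distance(pts_a, pts_b):
    """Mean symmetric surface distance between two surfaces, approximated
    by their mesh vertices: average of the two directed mean
    nearest-neighbour distances."""
    def mean_nn(src, dst):
        return sum(min(math.dist(p, q) for q in dst) for p in src) / len(src)
    return 0.5 * (mean_nn(pts_a, pts_b) + mean_nn(pts_b, pts_a))
```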
Validation of the TES algorithm for emissivity determination using field measurements
Schmugge, T.; Ogawa, K.; French, A.; Ritchie, J.; Rango, A.
2009-04-01
Knowledge of the surface emissivity is important for determining the radiation balance at the land surface. This is especially true for arid regions with sparse vegetation, where the emissivity of the exposed soils and rocks is highly variable. The multispectral thermal infrared data obtained from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) on NASA's Terra satellite have been shown to be of good quality and provide a unique new tool for studying the emissivity of the land surface. ASTER has 5 channels in the 8 to 12 micrometer waveband with 90 m spatial resolution; when the data are combined with the Temperature Emissivity Separation (TES) algorithm, the surface emissivity over this wavelength region can be determined along with surface temperature. To overcome the problem of having too many unknowns, i.e. 5 emissivities and the surface temperature, TES makes use of an empirical relation between the minimum emissivity and the range of values for the 5 ASTER channels. The TES algorithm was validated using measurements with a multispectral thermal infrared field radiometer (CIMEL 312) which has essentially the same 5 bands as ASTER. The measurements were made on several soils in the Jornada Experimental Range (JER) and the White Sands National Monument in southern New Mexico, USA. The JER is a long-term ecological research (LTER) site located at the northern end of the Chihuahuan desert. The site is typical of desert grassland where the main vegetation components are grass and shrubs. At the White Sands National Monument, dunes of gypsum sand cover about 700 km2 (275 square miles). Since gypsum has a unique emissivity spectrum with a pronounced minimum at the 8.6 micrometer wavelength, it is a good target for satellite observations of emissivity. The observed emissivity spectra for these sites in New Mexico show good agreement (<0.02) with values calculated from the laboratory spectra for the soil samples when the difference of physical
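The empirical constraint at the core of TES relates the minimum band emissivity to the spectral contrast (the max-min difference, MMD) of the normalized band emissivities. A sketch using the regression coefficients commonly cited for ASTER's five thermal infrared bands; treat the exact coefficient values as an assumption here:

```python
def tes_min_emissivity(band_emissivities):
    """Estimate the minimum emissivity from the spectral contrast (MMD)
    of the band emissivities, per the TES empirical relation. The
    coefficients (0.994, 0.687, 0.737) are those commonly cited for
    ASTER; they are illustrative in this sketch."""
    mmd = max(band_emissivities) - min(band_emissivities)
    return 0.994 - 0.687 * mmd ** 0.737
```

A flat (gray-body) spectrum yields the near-unity limit, while high-contrast spectra such as gypsum's are assigned a much lower minimum emissivity.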
Barbara Gasse
2017-06-01
Amelogenesis imperfecta (AI) designates a group of genetic diseases characterized by a large range of enamel disorders causing important social and health problems. These defects can result from mutations in enamel matrix proteins or protease encoding genes. A range of mutations in the enamel cleavage enzyme matrix metalloproteinase-20 gene (MMP20) produce enamel defects of varying severity. To address how various alterations produce a range of AI phenotypes, we performed a targeted analysis to find MMP20 mutations in French patients diagnosed with non-syndromic AI. Genomic DNA was isolated from saliva, and MMP20 exons and exon-intron boundaries were sequenced. We identified several homozygous or heterozygous mutations, putatively involved in the AI phenotypes. To validate missense mutations and predict sensitive positions in the MMP20 sequence, we evolutionarily compared 75 sequences extracted from the public databases using the Datamonkey webserver. These sequences were representative of mammalian lineages, covering more than 150 million years of evolution. This analysis allowed us to find 324 sensitive positions (out of the 483 MMP20 residues), pinpoint functionally important domains, and build an evolutionary chart of important conserved MMP20 regions. This is an efficient tool to identify new and previously identified mutations. We thus identified six functional MMP20 mutations in unrelated families, finding two novel mutated sites. The genotypes and phenotypes of these six mutations are described and compared. To date, 13 MMP20 mutations causing AI have been reported, making these genotypes and associated hypomature enamel phenotypes the most frequent in AI.
Tony Antoniou
OBJECTIVE: We sought to validate a case-finding algorithm for human immunodeficiency virus (HIV) infection using administrative health databases in Ontario, Canada. METHODS: We constructed 48 case-finding algorithms using combinations of physician billing claims, hospital and emergency room separations and prescription drug claims. We determined the test characteristics of each algorithm over various time frames for identifying HIV infection, using data abstracted from the charts of 2,040 randomly selected patients receiving care at two medical practices in Toronto, Ontario as the reference standard. RESULTS: With the exception of algorithms using only a single physician claim, the specificity of all algorithms exceeded 99%. An algorithm consisting of three physician claims over a three-year period had a sensitivity and specificity of 96.2% (95% CI 95.2%-97.9%) and 99.6% (95% CI 99.1%-99.8%), respectively. Application of the algorithm to the province of Ontario identified 12,179 HIV-infected patients in care for the period spanning April 1, 2007 to March 31, 2009. CONCLUSIONS: Case-finding algorithms generated from administrative data can accurately identify adults living with HIV. A relatively simple "3 claims in 3 years" definition can be used for assembling a population-based cohort and facilitating future research examining trends in health service use and outcomes among HIV-infected adults in Ontario.
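The "3 claims in 3 years" definition is easy to operationalize against a claims table; a sketch, assuming the input dates have already been filtered to HIV-related physician billing codes:

```python
from datetime import date, timedelta

def meets_case_definition(claim_dates, n_claims=3, window_years=3):
    """'3 claims in 3 years' case definition: True if any window of
    window_years (approximated as 365.25-day years) contains at least
    n_claims physician billing claims."""
    dates = sorted(claim_dates)
    window = timedelta(days=round(365.25 * window_years))
    for i in range(len(dates) - n_claims + 1):
        if dates[i + n_claims - 1] - dates[i] <= window:
            return True
    return False
```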
Validation of diagnostic algorithms for syndromic management of sexually transmitted diseases
王千秋; 杨凭; 钟铭英; 王广聚
2003-01-01
Objectives To validate our revised syndromic algorithms for the management of sexually transmitted diseases and determine their sensitivity, specificity, positive predictive value and cost-effectiveness. Methods Patients with either urethral discharge, vaginal discharge or genital ulcer were selected during their first visits to three urban sexually transmitted disease clinics in Fujian Province, China. They were managed syndromically according to our revised flowcharts. The etiology of the syndromes was detected by laboratory testing. The data were analyzed using EPI INFO V6.0 software. Results A total of 736 patients were enrolled into the study. In male patients with urethral discharge, the sensitivities for gonococcal and chlamydial infections were 96.7% and 100%, respectively, using the syndromic approach. The total positive predictive value was 73%. In female patients with vaginal discharge, the sensitivity was 90.8%, specificity 46.9%, positive predictive value 50.9%, and negative predictive value 89.3% for the diagnosis of gonorrhea and/or chlamydial infection by the syndromic approach. In patients with genital ulcer, the sensitivities were 78.3% and 75.8%, specificities 83.6% and 42.9%, and positive predictive values 60.0% and 41.0% for the diagnosis of syphilis and genital herpes, respectively, using the syndromic approach. Cost-effectiveness analysis indicated that the average cost of treatment for a patient with urethral discharge was RMB 46.03 yuan using syndromic management, in comparison with RMB 149.19 yuan by etiological management. Conclusions The syndromic management of urethral discharge was relatively effective and suited clinical application. The specificity and positive predictive value for syndromic management of vaginal discharge are not satisfactory. The revised flowchart of genital ulcer syndrome could be suitable for use in clinical settings. Further validation and revision are needed for syndromic approaches of
Koshak, W. J.; Blakeslee, R. J.; Bailey, J. C.
2000-01-01
A linear algebraic solution is provided for the problem of retrieving the location and time of occurrence of lightning ground strikes from an Advanced Lightning Direction Finder (ALDF) network. The ALDF network measures field strength, magnetic bearing, and arrival time of lightning radio emissions. Solutions for the plane (i.e., no earth curvature) are provided that implement all of these measurements. The accuracy of the retrieval method is tested using computer-simulated datasets, and the relative influence of bearing and arrival time data on the outcome of the final solution is formally demonstrated. The algorithm is sufficiently accurate to validate NASA's Optical Transient Detector and Lightning Imaging Sensor. A quadratic planar solution that is useful when only three arrival time measurements are available is also introduced. The algebra of the quadratic roots is examined in detail to clarify what portions of the analysis region lead to fundamental ambiguities in source location. Complex root results are shown to be associated with the presence of measurement errors when the lightning source lies near an outer sensor baseline of the ALDF network. For arbitrary noncollinear network geometries and in the absence of measurement errors, it is shown that the two quadratic roots are equivalent (no source location ambiguity) on the outer sensor baselines. The accuracy of the quadratic planar method is tested with computer-generated datasets, and the results are generally better than those obtained from the three-station linear planar method when bearing errors are about 2 deg.
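The linear-algebraic character of the retrieval can be seen in a simplified arrival-time-only version: squaring the range equations and differencing against a reference station yields equations that are linear in the source coordinates and emission time. The sketch below uses exactly four planar stations and a unit propagation speed; the actual ALDF solution also incorporates bearing and field-strength measurements:

```python
import math

def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    n = 3
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def locate_source(stations, times, c=1.0):
    """Planar source retrieval from arrival times at exactly four
    stations. Differencing the squared range equations against the
    first station gives three equations linear in (x, y, emission time)."""
    (x0, y0), t0 = stations[0], times[0]
    A, b = [], []
    for (xi, yi), ti in zip(stations[1:], times[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0), -2 * c * c * (ti - t0)])
        b.append((xi * xi + yi * yi - x0 * x0 - y0 * y0)
                 - c * c * (ti * ti - t0 * t0))
    return solve3(A, b)  # [x, y, emission time]

# Demo: simulate arrival times from a known source and recover it.
stations = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
times = [math.dist((3.0, 4.0), s) for s in stations]  # c = 1, emitted at t = 0
x, y, t_emit = locate_source(stations, times)
```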
Niu, Lili; Qian, Ming; Wan, Kun; Yu, Wentao; Jin, Qiaofeng; Ling, Tao; Gao, Shen; Zheng, Hairong
2010-04-01
This paper presents a new algorithm for ultrasonic particle image velocimetry (Echo PIV) for improving the flow velocity measurement accuracy and efficiency in regions with high velocity gradients. The conventional Echo PIV algorithm has been modified by incorporating a multiple iterative algorithm, sub-pixel method, filter and interpolation method, and spurious vector elimination algorithm. The new algorithm's performance is assessed by analyzing simulated images with known displacements, and ultrasonic B-mode images of in vitro laminar pipe flow, rotational flow and in vivo rat carotid arterial flow. Results for the simulated images show that the new algorithm produces much smaller bias from the known displacements. For laminar flow, the new algorithm results in 1.1% deviation from the analytically derived value, versus 8.8% for the conventional algorithm. The vector quality evaluation for the rotational flow imaging shows that the new algorithm produces better velocity vectors. For in vivo rat carotid arterial flow imaging, the results from the new algorithm deviate from the Doppler-measured peak velocities by 6.6% on average, compared with 15% for the conventional algorithm. The new Echo PIV algorithm is able to effectively improve the measurement accuracy in imaging flow fields with high velocity gradients.
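The "sub-pixel method" referred to above is typically a three-point parabolic fit around the integer correlation peak; a minimal one-dimensional version:

```python
def subpixel_peak(c_minus, c_peak, c_plus):
    """Sub-pixel offset of a correlation maximum from a three-point
    parabolic fit through the peak sample and its two neighbours.
    Returns a value in (-0.5, 0.5) to add to the integer peak index."""
    denom = c_minus - 2.0 * c_peak + c_plus
    return 0.0 if denom == 0 else 0.5 * (c_minus - c_plus) / denom
```

For a true parabola sampled at integer positions, the fit recovers the peak location exactly; applied along each axis of a 2D correlation map it refines the displacement estimate below one pixel.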
Crow, W. T.; Wagner, W.
2009-12-01
Applying basic data assimilation techniques to the evaluation of remote-sensing products can clarify the impact of sensor design issues on the value of retrievals for hydrologic applications. For instance, the impact of incidence angle on the accuracy of radar surface soil moisture retrievals is largely unknown due to discrepancies in theoretical backscatter models as well as limitations in the availability of sufficiently extensive ground-based soil moisture observations for validation purposes. In this presentation we will describe and apply a data assimilation evaluation technique for scatterometer-based surface soil moisture retrievals that does not require ground-based soil moisture observations to examine the sensitivity of retrieval skill to variations in incidence angle. Past results with the approach have shown that it is capable of detecting relative variations in the correlation between anomalies in remotely sensed surface soil moisture retrievals and ground-truth soil moisture measurements. Application of the evaluation approach to the TU-Wien WARP5.0 European Remote Sensing (ERS) satellite soil moisture data set over two regional-scale (~1000 km) domains in the Southern United States indicates a relative reduction in anomaly correlation-based skill of between 20% and 30% when moving between the lowest and highest (>50 degrees) incidence angle ranges. These changes in anomaly-based correlation provide a useful proxy for relative variations in the value of estimates for data assimilation applications and can therefore be used to inform the design of appropriate retrieval algorithms. For example, the observed sensitivity of correlation-based skill with incidence angle is in approximate agreement with soil moisture retrieval uncertainty predictions made using the WARP5.0 backscatter model. However, the coupling of a bare soil backscatter model with the so-called "vegetation water cloud" model is shown to generally over-estimate the impact of incidence angle on retrieval skill.
An extended validation test for data input into parameterized retrieval algorithms
Schaale, Michael; Schroeder, Thomas
2013-05-01
The retrieval of environmental data from multi-spectral remotely sensed data is very often based on the (partial) inversion of extensive radiative transfer simulations (RTS). The inversion can be utilized in different ways, e.g. through the usage of polynomials or artificial neural networks. The inversion algorithms (IA) usually contain numerous parameters, which have to be adapted by regression schemes in a training phase with the help of the RTS data. The subsequent processing of real remotely sensed data by an adapted IA requires a validity test (VT) of the input data (usually a vector consisting of TOA radiances, environmental and geometric data) before inputting them into the IA. This test ensures that these or similar data were included in the training phase of the IA and thus helps to avoid unpredictable extrapolation effects. In standard procedures these "out-of-scope" data are identified by a simple convexity test (CT). CT means that each element of the input vector is tested to lie between the minimum and maximum values of the corresponding element used in the training data set. This assumption is rather crude as it assumes a homogeneously filled data space. But in general the data are not distributed homogeneously and thus a CT is an incomplete and unsatisfactory check. This paper proposes a solution to the problem sketched above by the development and implementation of an enhanced VT (eVT), which is based on a density map of the data space. The density map itself is approximated by an extended neuronal vector quantization method. The newly developed eVT algorithm is tested with known distributions of artificial data. Although the eVT is not limited to a specific retrieval/inversion scheme it is finally applied to an existing retrieval scheme for coastal water constituents from satellite data (MERIS) acquired over coastal regions in Europe (here: FUB/WeW water processor for VISAT-BEAM). A comparison against the data filtered by a simple CT further
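The simple convexity test (CT) that the eVT improves upon is just a per-element min-max check against the training data's bounding box, which is what makes it blind to holes in an inhomogeneous data distribution:

```python
def convexity_test(x, mins, maxs):
    """Simple convexity test (CT): accept an input vector only if every
    element lies within the training set's per-element min-max range.
    Implicitly assumes the bounding box is homogeneously filled."""
    return all(lo <= v <= hi for v, lo, hi in zip(x, mins, maxs))
```

A density-based check such as the eVT would additionally reject vectors that fall inside this box but far from any training sample.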
Validation of the pulse decomposition analysis algorithm using central arterial blood pressure.
Baruch, Martin C; Kalantari, Kambiz; Gerdt, David W; Adkins, Charles M
2014-07-08
There is a significant need for continuous noninvasive blood pressure (cNIBP) monitoring, especially for anesthetized surgery and ICU recovery. cNIBP systems could lower costs and expand the use of continuous blood pressure monitoring, lowering risk and improving outcomes. The test system examined here is the CareTaker® and a pulse contour analysis algorithm, Pulse Decomposition Analysis (PDA). PDA's premise is that the peripheral arterial pressure pulse is a superposition of five individual component pressure pulses that are due to the left ventricular ejection and reflections and re-reflections from only two reflection sites within the central arteries. The hypothesis examined here is that the model's principal parameters P2P1 and T13 can be correlated with, respectively, systolic and pulse pressures. Central arterial blood pressures of patients (38 m/25 f, mean age: 62.7 y, SD: 11.5 y, mean height: 172.3 cm, SD: 9.7 cm, mean weight: 86.8 kg, SD: 20.1 kg) undergoing cardiac catheterization were monitored using central line catheters while the PDA parameters were extracted from the arterial pulse signal obtained non-invasively using the CareTaker system. Qualitative validation of the model was achieved with the direct observation of the five component pressure pulses in the central arteries using central line catheters. Statistically significant correlations between P2P1 and systole and T13 and pulse pressure were established (systole: R square: 0.92 (p pressures obtained through the conversion of PDA parameters to blood pressures of non-invasively obtained pulse signatures with catheter-obtained blood pressures fell within the trend guidelines of the Association for the Advancement of Medical Instrumentation SP-10 standard (standard deviation: 8 mmHg; systole: 5.87 mmHg, diastole: 5.69 mmHg). The results indicate that arterial blood pressure can be accurately measured and tracked noninvasively and continuously using the CareTaker system and the PDA algorithm. The
Jane Tufvesson
2015-01-01
Introduction. Manual delineation of the left ventricle is the clinical standard for quantification of cardiovascular magnetic resonance images, despite being time consuming and observer dependent. Previous automatic methods generally do not account for one major contributor to stroke volume, the long-axis motion. Therefore, the aim of this study was to develop and validate an automatic algorithm for time-resolved segmentation covering the whole left ventricle, including basal slices affected by long-axis motion. Methods. Ninety subjects imaged with a cine balanced steady state free precession sequence were included in the study (training set n=40, test set n=50). Manual delineation was the reference standard, and second observer analysis was performed in a subset (n=25). The automatic algorithm uses a deformable model with expectation-maximization, followed by automatic removal of papillary muscles and detection of the outflow tract. Results. The mean differences between automatic segmentation and manual delineation were EDV −11 mL, ESV 1 mL, EF −3%, and LVM 4 g in the test set. Conclusions. The automatic LV segmentation algorithm reached accuracy comparable to interobserver variability for manual delineation, thereby bringing automatic segmentation one step closer to clinical routine. The algorithm and all images with manual delineations are available for benchmarking.
A 30+ Year AVHRR LAI and FAPAR Climate Data Record: Algorithm Description and Validation
Martin Claverie
2016-03-01
In land surface models, which are used to evaluate the role of vegetation in the context of global climate change and variability, LAI and FAPAR play a key role, specifically with respect to the carbon and water cycles. The AVHRR-based LAI/FAPAR dataset offers daily temporal resolution, an improvement over previous products. This climate data record is based on a carefully calibrated and corrected land surface reflectance dataset to provide a high-quality, consistent time series suitable for climate studies. It spans from mid-1981 to the present. Further, this operational dataset is available in near real-time, allowing use for monitoring purposes. The algorithm relies on artificial neural networks calibrated using the MODIS LAI/FAPAR dataset. Evaluation based on cross-comparison with MODIS products and in situ data shows the dataset is consistent and reliable, with overall uncertainties of 1.03 and 0.15 for LAI and FAPAR, respectively. However, a clear saturation effect is observed in the broadleaf forest biomes with high LAI (>4.5) and FAPAR (>0.8) values.
Reisner, A. T.; Khitrov, M. Y.; Chen, L.; Blood, A.; Wilkins, K.; Doyle, W.; Wilcox, S.; Denison, T.; Reifman, J.
2013-01-01
Background: Advanced decision-support capabilities for prehospital trauma care may prove effective at improving patient care. Such functionality would be possible if an analysis platform were connected to a transport vital-signs monitor. In practice, there are technical challenges to implementing such a system. Not only must each individual component be reliable, but, in addition, the connectivity between components must be reliable. Objective: We describe the development, validation, and deployment of the Automated Processing of Physiologic Registry for Assessment of Injury Severity (APPRAISE) platform, intended to serve as a test bed to help evaluate the performance of decision-support algorithms in a prehospital environment. Methods: We describe the hardware selected, the software implemented, and the procedures used for laboratory and field testing. Results: The APPRAISE platform met performance goals in both laboratory testing (using a vital-sign data simulator) and initial field testing. After its field testing, the platform has been in use on Boston MedFlight air ambulances since February of 2010. Conclusion: These experiences may prove informative to other technology developers and to healthcare stakeholders seeking to invest in connected electronic systems for prehospital as well as in-hospital use. Our experiences illustrate two sets of important questions: are the individual components reliable (e.g., physical integrity, power, core functionality, and end-user interaction), and is the connectivity between components reliable (e.g., communication protocols and the metadata necessary for data interpretation)? While all potential operational issues cannot be fully anticipated and eliminated during development, thoughtful design and phased testing steps can reduce, if not eliminate, technical surprises. PMID:24155791
Oliveira, N; Magder, L S; Blitzer, M G; Baschat, A A
2014-09-01
To evaluate the performance of published first-trimester prediction algorithms for pre-eclampsia (PE) in a prospectively enrolled cohort of women. A MEDLINE search identified first-trimester screening-prediction algorithms for early-onset and late-onset PE. These algorithms were applied to this population to calculate predicted probabilities for PE. The performance of the prediction algorithms was compared with that in the original publication and evaluated for factors explaining differences in prediction. Six early and two late PE prediction algorithms were applicable to 871-2962 women, depending on the variables required. The prevalence of early PE was 1.0-1.2% and of late PE was 4.1-5.0% in these patient subsets. One early PE prediction algorithm performed better than in the original publication (80% detection rate (DR) of early PE for a 10% false-positive rate (FPR)); the remaining five prediction algorithms underperformed (29-53% DR). Prediction algorithms for late PE also underperformed (18-31% DR, 10% FPR). Applying screening cut-offs based on the highest Youden index probability scores correctly detected 40-80% of women developing early PE and 71-82% who developed late PE. Exclusion of patients on first-trimester aspirin resulted in DRs of 40-83% and 65-82% for early and late PE, respectively. First-trimester prediction algorithms for PE share a high negative predictive value if applied to an external population but underperform in their ability to correctly identify women who develop PE. Further research is required to determine the factors responsible for the suboptimal external validity. Copyright © 2014 ISUOG. Published by John Wiley & Sons Ltd.
2014-09-26
Validation of the algorithm for "Base TCTO Labor Cost" for the Component Support Cost System (D160B). Contract No. F33600-82-C-0543, 15 August 1983. ...user survey. This report provides the verification and validation of the algorithm called "Base TCTO Labor Costs." The costs of direct labor performed...inspections of equipment or installation of new equipment." (Reference 1121). The CSCS algorithm for Base TCTO Labor Cost calculates and presents TCTO labor...
Yunjun Yao
2014-01-01
Satellite-based vegetation indices (VIs) and Apparent Thermal Inertia (ATI) derived from temperature change provide valuable information for estimating evapotranspiration (LE) and detecting the onset and severity of drought. The modified satellite-based Priestley-Taylor (MS-PT) algorithm that we developed earlier, coupling both VI and ATI, is validated based on observed data from 40 flux towers distributed across the world on all continents. The validation results illustrate that the daily LE can be estimated with the Root Mean Square Error (RMSE) varying from 10.7 W/m2 to 87.6 W/m2, and with the square of the correlation coefficient (R2) from 0.41 to 0.89 (p < 0.01). Compared with the Priestley-Taylor-based LE (PT-JPL) algorithm, the MS-PT algorithm improves the LE estimates at most flux tower sites. Importantly, the MS-PT algorithm is also satisfactory in reproducing the inter-annual variability at flux tower sites with at least five years of data. The R2 between measured and predicted annual LE anomalies is 0.42 (p = 0.02). The MS-PT algorithm is then applied to detect the variations of long-term terrestrial LE over the Three-North Shelter Forest Region of China and to monitor global land surface drought. The MS-PT algorithm described here demonstrates the ability to map regional terrestrial LE and identify global soil moisture stress, without requiring precipitation information.
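For context, the Priestley-Taylor core that MS-PT modifies computes LE from the available energy and the α coefficient. This is only the textbook form (the MS-PT partitioning with VI and ATI is not reproduced here), with illustrative mid-day input values.

```python
def priestley_taylor_le(rn, g, delta, gamma, alpha=1.26):
    """Classic Priestley-Taylor latent heat flux LE (W/m2).
    rn: net radiation (W/m2), g: soil heat flux (W/m2),
    delta: slope of the saturation vapour pressure curve (kPa/K),
    gamma: psychrometric constant (kPa/K)."""
    return alpha * delta / (delta + gamma) * (rn - g)

# Typical mid-day values: Rn = 400, G = 50 W/m2, delta = 0.145, gamma = 0.066 kPa/K
le = priestley_taylor_le(400.0, 50.0, 0.145, 0.066)
```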
Hawkins, Paul C D; Skillman, A Geoffrey; Warren, Gregory L; Ellingson, Benjamin A; Stahl, Matthew T
2010-04-26
Here, we present the algorithm and validation for OMEGA, a systematic, knowledge-based conformer generator. The algorithm consists of three phases: assembly of an initial 3D structure from a library of fragments; exhaustive enumeration of all rotatable torsions using values drawn from a knowledge-based list of angles, thereby generating a large set of conformations; and sampling of this set by geometric and energy criteria. Validation of conformer generators like OMEGA has often been undertaken by comparing computed conformer sets to experimental molecular conformations from crystallography, usually from the Protein Data Bank (PDB). Such an approach is fraught with difficulty due to the systematic problems with small molecule structures in the PDB. Methods are presented to identify a diverse set of small molecule structures from co-complexes in the PDB that has maximal reliability. A challenging set of 197 high quality, carefully selected ligand structures from well-solved models was obtained using these methods. This set will provide a sound basis for comparison and validation of conformer generators in the future. Validation results from this set are compared to the results using structures of a set of druglike molecules extracted from the Cambridge Structural Database (CSD). OMEGA is found to perform very well in reproducing the crystallographic conformations from both these data sets using two complementary metrics of success.
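The enumerate-then-sample idea above can be sketched as a Cartesian product over knowledge-based torsion values followed by an energy-window filter. The torsion lists and toy energy function below are illustrative, not OMEGA's actual fragment library or scoring.

```python
from itertools import product

def enumerate_conformers(torsion_choices, energy, window):
    """Exhaustively combine candidate torsion angles for each rotatable
    bond, then keep only conformations within `window` of the best one."""
    scored = sorted((energy(c), c) for c in product(*torsion_choices))
    best = scored[0][0]
    return [c for e, c in scored if e - best <= window]

# Two rotatable bonds, three knowledge-based angles each; a toy energy
# that favours angles near 60 degrees.
torsions = [[60, 180, 300], [60, 180, 300]]
kept = enumerate_conformers(torsions, lambda c: sum(abs(a - 60) for a in c), 120.0)
```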
Polverino, Pierpaolo; Esposito, Angelo; Pianese, Cesare; Ludwig, Bastian; Iwanschitz, Boris; Mai, Andreas
2016-02-01
In the current energetic scenario, Solid Oxide Fuel Cells (SOFCs) exhibit appealing features which make them suitable for environmentally friendly power production, especially for stationary applications. An example is represented by micro-combined heat and power (μ-CHP) generation units based on SOFC stacks, which are able to produce electric and thermal power with high efficiency and low pollutant and greenhouse gas emissions. However, the main limitations to their diffusion into the mass market consist of high maintenance and production costs and short lifetime. To improve these aspects, current research focuses on the development of robust and generalizable diagnostic techniques, aimed at detecting and isolating faults within the entire system (i.e. SOFC stack and balance of plant). Coupled with appropriate recovery strategies, diagnosis can prevent undesired system shutdowns during faulty conditions, with a consequent lifetime increase and maintenance cost reduction. This paper deals with the on-line experimental validation of a model-based diagnostic algorithm applied to a pre-commercial SOFC system. The proposed algorithm exploits a Fault Signature Matrix based on a Fault Tree Analysis and improved through fault simulations. The algorithm is characterized on the considered system and is validated by means of experimental induction of faulty states in controlled conditions.
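Fault isolation with a Fault Signature Matrix can be sketched as matching an observed binary symptom vector against per-fault signatures. The fault names and signatures below are invented for illustration and do not come from the paper.

```python
# Rows: faults; columns: binary symptoms (e.g. deviations of monitored
# temperatures, voltages, or flows). All entries are illustrative.
FSM = {
    "reformer_degradation": (1, 0, 1),
    "air_leakage":          (0, 1, 1),
    "stack_degradation":    (1, 1, 0),
}

def isolate(symptoms):
    """Return every fault whose signature matches the observed symptoms."""
    return [fault for fault, sig in FSM.items() if sig == tuple(symptoms)]
```

A symptom vector matching no row (or several rows) signals that the matrix does not uniquely isolate the fault, which is exactly what fault simulations are used to refine.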
Martelotto, Luciano G; Ng, Charlotte Ky; De Filippo, Maria R; Zhang, Yan; Piscuoglio, Salvatore; Lim, Raymond S; Shen, Ronglai; Norton, Larry; Reis-Filho, Jorge S; Weigelt, Britta
2014-10-28
Massively parallel sequencing studies have led to the identification of a large number of mutations present in a minority of cancers of a given site. Hence, methods to identify the likely pathogenic mutations that are worth exploring experimentally and clinically are required. We sought to compare the performance of 15 mutation effect prediction algorithms and their agreement. As a hypothesis-generating aim, we sought to define whether combinations of prediction algorithms would improve the functional effect predictions of specific mutations. Literature and database mining of single nucleotide variants (SNVs) affecting 15 cancer genes was performed to identify mutations supported by functional evidence or hereditary disease association to be classified either as non-neutral (n = 849) or neutral (n = 140) with respect to their impact on protein function. These SNVs were employed to test the performance of 15 mutation effect prediction algorithms. The accuracy of the prediction algorithms varies considerably. Although all algorithms perform consistently well in terms of positive predictive value, their negative predictive value varies substantially. Cancer-specific mutation effect predictors display no-to-almost perfect agreement in their predictions of these SNVs, whereas the non-cancer-specific predictors showed no-to-moderate agreement. Combinations of predictors modestly improve accuracy and significantly improve negative predictive values. The information provided by mutation effect predictors is not equivalent. No algorithm is able to predict with sufficient accuracy which SNVs should be taken forward for experimental or clinical testing. Combining algorithms aggregates orthogonal information and may result in improvements in the negative predictive value of mutation effect predictions.
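A minimal sketch of combining predictors and scoring the negative predictive value discussed above; majority voting is one simple combination scheme, standing in for whatever combinations the study evaluated.

```python
def majority_call(calls):
    """Combine binary calls (1 = non-neutral) from several predictors
    by strict majority vote."""
    return int(sum(calls) * 2 > len(calls))

def negative_predictive_value(calls, truth):
    """NPV = TN / (TN + FN) over paired predicted/true labels
    (1 = non-neutral, 0 = neutral)."""
    tn = sum(1 for c, t in zip(calls, truth) if c == 0 and t == 0)
    fn = sum(1 for c, t in zip(calls, truth) if c == 0 and t == 1)
    return tn / (tn + fn)
```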
2013-01-01
Background Overtreatment of catheter-associated bacteriuria is a quality and safety problem, despite the availability of evidence-based guidelines. Little is known about how guidelines-based knowledge is integrated into clinicians’ mental models for diagnosing catheter-associated urinary tract infection (CA-UTI). The objectives of this research were to better understand clinicians’ mental models for CA-UTI, and to develop and validate an algorithm to improve diagnostic accuracy for CA-UTI. Methods We conducted two phases of this research project. In phase one, 10 clinicians assessed and diagnosed four patient cases of catheter associated bacteriuria (n= 40 total cases). We assessed the clinical cues used when diagnosing these cases to determine if the mental models were IDSA guideline compliant. In phase two, we developed a diagnostic algorithm derived from the IDSA guidelines. IDSA guideline authors and non-expert clinicians evaluated the algorithm for content and face validity. In order to determine if diagnostic accuracy improved using the algorithm, we had experts and non-experts diagnose 71 cases of bacteriuria. Results Only 21 (53%) diagnoses made by clinicians without the algorithm were guidelines-concordant with fair inter-rater reliability between clinicians (Fleiss’ kappa = 0.35, 95% Confidence Intervals (CIs) = 0.21 and 0.50). Evidence suggests that clinicians’ mental models are inappropriately constructed in that clinicians endorsed guidelines-discordant cues as influential in their decision-making: pyuria, systemic leukocytosis, organism type and number, weakness, and elderly or frail patient. Using the algorithm, inter-rater reliability between the expert and each non-expert was substantial (Cohen’s kappa = 0.72, 95% CIs = 0.52 and 0.93 between the expert and non-expert #1 and 0.80, 95% CIs = 0.61 and 0.99 between the expert and non-expert #2). Conclusions Diagnostic errors occur when clinicians’ mental models for catheter
Validation and application of modeling algorithms for the design of molecularly imprinted polymers.
Liu, Bing; Ou, Lulu; Zhang, Fuyuan; Zhang, Zhijun; Li, Hongying; Zhu, Mengyu; Wang, Shuo
2014-12-01
In this study, four different semiempirical algorithms, modified neglect of diatomic overlap, a reparameterization of Austin Model 1, complete neglect of differential overlap, and typed neglect of differential overlap, have been applied for the energy optimization of template, monomer, and template-monomer complexes of imprinted polymers. For phosmet-, estrone-, and metolcarb-imprinted polymers, the binding energies of template-monomer complexes were calculated, and the docking configurations were assessed at different molar ratios of template to monomer. It was found that two of the algorithms were not suitable for calculating the binding energy in the template-monomer complex system. For the other algorithms, the obtained optimum molar ratios of template and monomers were consistent with the experimental results. Therefore, two algorithms were selected and applied for the preparation of enrofloxacin-imprinted polymers. Meanwhile, using different molar ratios of template and monomer, we prepared imprinted polymers and nonimprinted polymers, and evaluated the adsorption of the template. It was verified that the experimental results were in good agreement with the modeling results. As a result, the semiempirical algorithms proved feasible for guiding the design and preparation of imprinted polymers.
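The selection step above can be sketched as computing a binding energy for each candidate template:monomer ratio and choosing the most negative one. The energies below are hypothetical numbers, not values from the paper.

```python
def binding_energy(e_complex, e_template, e_monomer, n):
    """Binding energy of a template-(monomer)n complex: the energy of the
    complex minus the energies of its isolated parts (more negative =
    stronger binding)."""
    return e_complex - (e_template + n * e_monomer)

# Hypothetical semiempirical energies (kJ/mol) for 1:1, 1:2 and 1:3 ratios
energies = {1: binding_energy(-165.3, -40.0, -100.0, 1),
            2: binding_energy(-281.8, -40.0, -100.0, 2),
            3: binding_energy(-379.2, -40.0, -100.0, 3)}
best_ratio = min(energies, key=energies.get)
```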
Validation of the New Algorithm for Rain Rate Retrieval from AMSR2 Data Using TMI Rain Rate Product
Elizaveta Zabolotskikh
2015-01-01
A new algorithm is derived for rain rate (RR) estimation from Advanced Microwave Scanning Radiometer 2 (AMSR2) measurements taken at 6.9, 7.3, and 10.65 GHz. The algorithm is based on the numerical simulation of brightness temperatures (TBs) for the AMSR2 lower-frequency channels, using a simplified radiation transfer model. Simultaneous meteorological and hydrological observations, supplemented with modeled values of cloud liquid water content and rain rate, are used for the calculation of an ensemble of AMSR2 TBs and RRs. Ice clouds are not taken into account. AMSR2 brightness temperature differences at C- and X-band channels are then used as inputs to train a neural network (NN) function for RR retrieval. Validation is performed against Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI) RR products. For collocated AMSR2-TMI measurements, obtained within 10-min intervals, errors are about 1 mm/h. The new algorithm is applicable for RR estimation up to 20 mm/h. For RR above 10 mm/h the algorithm significantly underestimates TMI RR.
Brenton, Ashley; Richeimer, Steven; Sharma, Maneesh; Lee, Chee; Kantorovich, Svetlana; Blanchard, John; Meshkin, Brian
2017-05-01
Background: Opioid abuse in chronic pain patients is a major public health issue, with rapidly increasing addiction rates and deaths from unintentional overdose more than quadrupling since 1999. Purpose: This study seeks to determine the predictability of aberrant behavior to opioids using a comprehensive scoring algorithm incorporating phenotypic risk factors and neuroscience-associated single-nucleotide polymorphisms (SNPs). Patients and methods: The Proove Opioid Risk (POR) algorithm determines the predictability of aberrant behavior to opioids using a comprehensive scoring algorithm incorporating phenotypic risk factors and neuroscience-associated SNPs. In a validation study with 258 subjects with diagnosed opioid use disorder (OUD) and 650 controls who reported using opioids, the POR successfully categorized patients at high and moderate risk of opioid misuse or abuse with 95.7% sensitivity. Regardless of changes in the prevalence of opioid misuse or abuse, the sensitivity of the POR remained >95%. Conclusion: The POR correctly stratifies patients into low-, moderate-, and high-risk categories to appropriately identify patients in need of additional guidance, monitoring, or treatment changes. Keywords: opioid use disorder, addiction, personalized medicine, pharmacogenetics, genetic testing, predictive algorithm
Sys, K; Boon, N; Verstraete, W
2004-06-01
A flexible, extendable tool for the optimization of (micro)biological processes and protocols using evolutionary algorithms was developed. It has been tested on three different theoretical optimization problems: two two-dimensional problems, one with three maxima and one with five maxima, and a river autopurification optimization problem with boundary conditions. For each problem, different evolutionary parameter settings were used for the optimization. For each combination of evolutionary parameters, 15 generations were run 20 times. In all cases, the evolutionary algorithm gave rise to valuable results. Generally, the algorithms were able to detect the more stable submaximum even when less stable higher maxima existed; the former is, from a practical point of view, generally more desirable. The most important factors influencing the convergence process were the parameter value randomization rate and distribution. The developed software, described in this work, is available for free.
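The evolutionary loop described above (selection, variation, repeated over generations) can be sketched in a few lines. This is a generic real-valued evolutionary algorithm with binary tournament selection and Gaussian mutation, not the tool from the abstract; population size and generation count mirror the 15-generation runs mentioned only loosely.

```python
import random

def evolve(fitness, bounds, pop_size=30, generations=15, mut_sd=0.2, seed=1):
    """Minimal evolutionary algorithm: binary tournament selection plus
    Gaussian mutation, with children clipped to the search bounds."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        # Build the next generation: pick two parents, keep the fitter,
        # mutate it, and clip to the bounds.
        pop = [min(hi, max(lo,
                   max(rng.choice(pop), rng.choice(pop), key=fitness)
                   + rng.gauss(0.0, mut_sd)))
               for _ in range(pop_size)]
    return max(pop, key=fitness)

# One-dimensional test landscape with its global maximum at x = 2
best = evolve(lambda x: -(x - 2.0) ** 2, (-10.0, 10.0))
```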
Rihab Hmida
2016-08-01
In this paper, we present a new stereo vision-based system and its efficient hardware implementation for real-time underwater environment exploration through 3D sparse reconstruction based on a number of feature points. The details of the proposed underwater 3D shape reconstruction algorithm are presented. The main concepts and advantages are discussed, and a comparison with existing systems is performed. In order to meet real-time video constraints, a hardware implementation of the algorithm is performed using Xilinx System Generator. The pipelined stereo vision system has been implemented using Field Programmable Gate Array (FPGA) technology. Both timing constraints and mathematical operation precision have been evaluated in order to validate the proposed hardware implementation of our system. Experimental results show that the proposed system achieves high accuracy and execution-time performance.
2010-08-01
Validation of the Geostatistical Temporal-Spatial Algorithm (GTS) for Optimization of Long-Term Monitoring (LTM) of Groundwater at Military and Government Sites. ... The primary objective of this ESTCP project was to demonstrate and validate use of the Geostatistical Temporal-Spatial (GTS) groundwater...
Nieto Solana, Hector; Sandholt, Inge; Aguado, Inmaculada
2011-01-01
Air temperature can be estimated from remote sensing by combining information in thermal infrared and optical wavelengths. The empirical TVX algorithm is based on an estimated linear relationship between observed Land Surface Temperature (LST) and a Spectral Vegetation Index (NDVI). Air temperature... variation in NDVI of the effective full cover has not been subject to investigation. The present study proposes a novel methodology to estimate NDVImax that uses observed air temperature to calibrate the NDVImax for each vegetation type. To assess the validity of this methodology, we have compared...
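The TVX extrapolation step can be sketched as a line fit of LST against NDVI over a window of pixels, evaluated at NDVImax; at full vegetation cover the canopy temperature is assumed to equal air temperature. The pixel values and NDVImax below are illustrative.

```python
def tvx_air_temperature(lst, ndvi, ndvi_max):
    """Fit LST = a*NDVI + b over a window of pixels, then extrapolate
    to NDVImax to estimate air temperature."""
    n = len(lst)
    m_n, m_t = sum(ndvi) / n, sum(lst) / n
    a = (sum((x - m_n) * (y - m_t) for x, y in zip(ndvi, lst))
         / sum((x - m_n) ** 2 for x in ndvi))
    b = m_t - a * m_n
    return a * ndvi_max + b

# Illustrative window: LST (K) falls as vegetation cover rises
t_air = tvx_air_temperature([310.0, 305.0, 300.0], [0.2, 0.4, 0.6], 0.9)
```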
Implementing, Adapting, and Validating an Evidence-Based Algorithm for Hip Fracture Surgery
Ban, I.; Palm, H.; Birkelund, Lasse;
2014-01-01
Reoperations are common after surgical treatment of hip fractures but may be reduced by optimal choice of implant based on fracture classification. We hypothesized that implementing a surgical treatment algorithm was possible in our hospital and would result in a reduced reoperation rate....
Emanuele Gandola
2016-09-01
The estimation and quantification of potentially toxic cyanobacteria in lakes and reservoirs are often used as a proxy of risk for water intended for human consumption and recreational activities. Here, we present data sets collected from three volcanic Italian lakes (Albano, Vico, Nemi) that present filamentous cyanobacteria strains in different environments. The presented data sets were used to estimate abundance and morphometric characteristics of potentially toxic cyanobacteria, comparing manual vs. automated estimation performed by ACQUA ("ACQUA: Automated Cyanobacterial Quantification Algorithm for toxic filamentous genera using spline curves, pattern recognition and machine learning", Gandola et al., 2016 [1]). This strategy was used to assess the algorithm performance and to set up the denoising algorithm. Abundance and total length estimations were used for software development; to this aim we evaluated the efficiency of the statistical tools and mathematical algorithms described here. Image convolution with the Sobel filter was chosen to denoise input images from background signals; spline curves and the least squares method were then used to parameterize detected filaments and to recombine crossing and interrupted sections, aimed at performing precise abundance estimations and morphometric measurements.
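The Sobel convolution mentioned above can be sketched directly; this is the standard 3x3 Sobel gradient magnitude on a grayscale image given as a list of rows, not the ACQUA implementation itself.

```python
def sobel_magnitude(img):
    """Sobel edge response of a grayscale image (list of rows).
    Smooth background yields near-zero response, so thresholding the
    output suppresses it before filament detection. Border pixels are
    left at zero."""
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            gx = sum(kx[a][b] * img[i - 1 + a][j - 1 + b]
                     for a in range(3) for b in range(3))
            gy = sum(ky[a][b] * img[i - 1 + a][j - 1 + b]
                     for a in range(3) for b in range(3))
            out[i][j] = (gx * gx + gy * gy) ** 0.5
    return out

# A vertical step edge produces a strong response along the step
edges = sobel_magnitude([[0, 0, 1, 1]] * 4)
```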
C. Keim
2009-05-01
This paper presents a first statistical validation of tropospheric ozone products derived from measurements of the satellite instrument IASI. Since the end of 2006, IASI (Infrared Atmospheric Sounding Interferometer), aboard the polar orbiter Metop-A, has measured infrared spectra of the Earth's atmosphere in nadir geometry. This validation covers the northern mid-latitudes and the period from July 2007 to August 2008. The comparison of the ozone products with the vertical ozone concentration profiles from balloon sondes leads to estimates of the systematic and random errors in the IASI ozone products. The intercomparison of the retrieval results from four different sources (including the EUMETSAT ozone products) shows systematic differences due to the methods and algorithms used. On average the tropospheric columns have a small bias of less than 2 Dobson Units (DU) when compared to the sonde-measured columns. The comparison of the still pre-operational EUMETSAT columns shows higher mean differences of about 5 DU.
Gulshan, Varun; Peng, Lily; Coram, Marc; Stumpe, Martin C; Wu, Derek; Narayanaswamy, Arunachalam; Venugopalan, Subhashini; Widner, Kasumi; Madams, Tom; Cuadros, Jorge; Kim, Ramasamy; Raman, Rajiv; Nelson, Philip C; Mega, Jessica L; Webster, Dale R
2016-12-13
Deep learning is a family of computational methods that allow an algorithm to program itself by learning from a large set of examples that demonstrate the desired behavior, removing the need to specify rules explicitly. Application of these methods to medical imaging requires further assessment and validation. To apply deep learning to create an algorithm for automated detection of diabetic retinopathy and diabetic macular edema in retinal fundus photographs. A specific type of neural network optimized for image classification called a deep convolutional neural network was trained using a retrospective development data set of 128 175 retinal images, which were graded 3 to 7 times for diabetic retinopathy, diabetic macular edema, and image gradability by a panel of 54 US licensed ophthalmologists and ophthalmology senior residents between May and December 2015. The resultant algorithm was validated in January and February 2016 using 2 separate data sets, both graded by at least 7 US board-certified ophthalmologists with high intragrader consistency. Deep learning-trained algorithm. The sensitivity and specificity of the algorithm for detecting referable diabetic retinopathy (RDR), defined as moderate and worse diabetic retinopathy, referable diabetic macular edema, or both, were generated based on the reference standard of the majority decision of the ophthalmologist panel. The algorithm was evaluated at 2 operating points selected from the development set, one selected for high specificity and another for high sensitivity. The EyePACS-1 data set consisted of 9963 images from 4997 patients (mean age, 54.4 years; 62.2% women; prevalence of RDR, 683/8878 fully gradable images [7.8%]); the Messidor-2 data set had 1748 images from 874 patients (mean age, 57.6 years; 42.6% women; prevalence of RDR, 254/1745 fully gradable images [14.6%]). For detecting RDR, the algorithm had an area under the receiver operating curve of 0.991 (95% CI, 0.988-0.993) for EyePACS-1 and 0
Tang, Bo-Hui; Wu, Hua-; Li, Zhao-Liang; Nerry, Françoise
2012-07-30
This work addressed the validation of the MODIS-derived bidirectional reflectivity retrieval algorithm in the mid-infrared (MIR) channel, proposed by Tang and Li [Int. J. Remote Sens. 29, 4907 (2008)], with ground-measured data, which were collected from a field campaign that took place in June 2004 at the ONERA (Office National d'Etudes et de Recherches Aérospatiales) center of Fauga-Mauzac, on the PIRRENE (Programme Interdisciplinaire de Recherche sur la Radiométrie en Environnement Extérieur) experiment site [Opt. Express 15, 12464 (2007)]. The leaving-surface spectral radiances measured by a BOMEM (MR250 Series) Fourier transform interferometer were used to calculate the ground brightness temperatures with the combination of the inversion of the Planck function and the spectral response functions of MODIS channels 22 and 23, and then to estimate the ground brightness temperature without the contribution of the solar direct beam and the bidirectional reflectivity by using Tang and Li's proposed algorithm. On the other hand, the simultaneously measured atmospheric profiles were used to obtain the atmospheric parameters and then to calculate the ground brightness temperature without the contribution of the solar direct beam, based on the atmospheric radiative transfer equation in the MIR region. Comparison of these two kinds of brightness temperature obtained by two different methods indicated that the Root Mean Square Error (RMSE) between the brightness temperatures estimated respectively using Tang and Li's algorithm and the atmospheric radiative transfer equation is 1.94 K. In addition, comparison of the hemispherical-directional reflectances derived by Tang and Li's algorithm with those obtained from the field measurements showed that the RMSE is 0.011, which indicates that Tang and Li's algorithm is able to retrieve the bidirectional reflectivity in the MIR channel from MODIS data.
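The brightness-temperature step, inverting the Planck function at a single wavelength, can be sketched as below. The paper instead integrates over the MODIS channel 22/23 spectral response functions; this monochromatic version ignores that weighting.

```python
import math

# Physical constants (SI)
H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
K = 1.380649e-23     # Boltzmann constant, J/K

def planck_radiance(t, lam):
    """Spectral radiance (W m-2 sr-1 m-1) of a blackbody at temperature
    t (K) and wavelength lam (m)."""
    return 2 * H * C ** 2 / lam ** 5 / (math.exp(H * C / (lam * K * t)) - 1)

def brightness_temperature(radiance, lam):
    """Invert the Planck function: the temperature (K) whose blackbody
    radiance at wavelength lam (m) equals `radiance`."""
    return H * C / (lam * K) / math.log(1 + 2 * H * C ** 2 / (lam ** 5 * radiance))

# Round trip near the MIR channels (4 um), T = 300 K
t = brightness_temperature(planck_radiance(300.0, 4e-6), 4e-6)
```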
Jorge, L. S.; Bonifacio, D. A. B.; DeWitt, Don; Miyaoka, R. S.
2016-12-01
Continuous scintillator-based detectors have been considered a competitive and cheaper alternative to highly pixelated discrete-crystal positron emission tomography (PET) detectors, despite the need for algorithms to estimate the 3D gamma interaction position. In this work, we report on the implementation of a positioning algorithm to estimate the 3D interaction position in a continuous crystal PET detector using a Field Programmable Gate Array (FPGA). The evaluated method is the Statistics-Based Processing (SBP) technique, which requires light response function and event position characterization. An algorithm has been implemented using the Verilog language and evaluated using a data acquisition board that contains an Altera Stratix III FPGA. The 3D SBP algorithm was previously successfully implemented on a Stratix II FPGA using simulated data and a different module design. In this work, improvements were made to the FPGA coding of the 3D positioning algorithm, reducing the total memory usage to around 34%. Further, the algorithm was evaluated using experimental data from a continuous miniature crystal element (cMiCE) detector module. Using our new implementation, the average FWHM (Full Width at Half Maximum) for the whole block is 1.71±0.01 mm, 1.70±0.01 mm and 1.632±0.005 mm for the x, y and z directions, respectively. Using a pipelined architecture, the FPGA is able to process 245,000 events per second for interactions inside the central area of the detector, which represents 64% of the total block area. The weighted average of the event rate by regional area (corner, border and central regions) is about 198,000 events per second. This event rate is greater than the maximum expected coincidence rate for any given detector module in future PET systems using the cMiCE detector design.
Two-way nesting in split-explicit ocean models: Algorithms, implementation and validation
Debreu, Laurent; Marchesiello, Patrick; Penven, Pierrick; Cambon, Gildas
2012-06-01
A full two-way nesting approach for split-explicit, free surface ocean models is presented. It is novel in three main respects: the treatment of grid refinement at the fast mode (barotropic) level; the use of scale selective update schemes; and the conservation of both volume and tracer contents via refluxing. An idealized application to vortex propagation on a β plane shows agreement between nested and high resolution solutions. A realistic application to the California Current System then confirms these results in a complex configuration. The selected algorithm is now part of ROMS_AGRIF. It is fully consistent with ROMS parallel capabilities on both shared and distributed memory architectures. The nesting implementation supports several nesting levels and several grids at any particular level. This operational capability, combined with the inner qualities of our two-way nesting algorithm and the generally high-order accuracy of ROMS numerics, allows for realistic simulation of coastal and ocean dynamics at multiple, interacting scales.
Srinivasan, Sangeetha; Shetty, Sharan; Natarajan, Viswanathan; Sharma, Tarun; Raman, Rajiv
2016-01-01
Purpose: (i) To develop a simplified algorithm to identify and refer diabetic retinopathy (DR) from single-field retinal images, specifically sight-threatening diabetic retinopathy, for appropriate care; (ii) to determine the agreement and diagnostic accuracy of the algorithm as a pilot study among optometrists versus the "gold standard" (retinal specialist grading). Methods: The severity of DR was scored based on colour photographs using a colour-coded algorithm, which included the lesions of DR and the number of quadrants involved. A total of 99 participants underwent training followed by evaluation, and their data were analyzed. Fifty posterior pole 45 degree retinal images with all stages of DR were presented. Kappa scores (κ), areas under the receiver operating characteristic curves (AUCs), sensitivity and specificity were determined, with further comparison between working optometrists and optometry students. Results: Mean age of the participants was 22 years (range: 19-43 years), 87% being women. Participants correctly identified 91.5% of images that required immediate referral (κ = 0.696), 62.5% of images requiring review after 6 months (κ = 0.462), and 51.2% of those requiring review after 1 year (κ = 0.532). The sensitivity and specificity of the optometrists were 91% and 78% for immediate referral, 62% and 84% for review after 6 months, and 51% and 95% for review after 1 year, respectively. The AUC was highest (0.855) for immediate referral, second highest (0.824) for review after 1 year, and 0.727 for review after 6 months. Optometry students performed better than the working optometrists for all grades of referral. Conclusions: The diabetic retinopathy algorithm assessed in this work is a simple and fairly accurate method for appropriate referral based on single-field 45 degree posterior pole retinal images. PMID:27661981
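Agreement statistics like the kappa scores reported above can be computed as follows. This is Cohen's kappa for two raters; Fleiss' kappa for more than two raters is analogous but not shown. The label sequences are illustrative.

```python
def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters: (p_o - p_e) / (1 - p_e), where p_o
    is the observed agreement and p_e the chance agreement expected
    from each rater's label frequencies."""
    n = len(r1)
    p_o = sum(1 for a, b in zip(r1, r2) if a == b) / n
    p_e = sum((r1.count(c) / n) * (r2.count(c) / n) for c in set(r1) | set(r2))
    return (p_o - p_e) / (1 - p_e)

# Illustrative referral grades from two graders (0 = no referral, 1 = refer)
kappa = cohens_kappa([1, 1, 0, 0, 1, 0], [1, 1, 0, 0, 0, 0])
```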
Validation of Three-Dimensional Ray-Tracing Algorithm for Indoor Wireless Propagations
Majdi Salem; Mahamod Ismail; Norbahiah Misran
2011-01-01
A 3D ray-tracing simulator has been developed for indoor wireless networks. The simulator uses geometrical optics (GO) to propagate electromagnetic waves inside buildings. The prediction technique takes into account multiple reflections and transmissions of the propagated waves. An interpolation prediction method (IPM) has been proposed to predict the propagated signal and to make the ray-tracing algorithm faster, more accurate, and simpler. The measurements have been achieved by using a s...
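The image-source construction at the heart of GO ray tracing can be sketched in two dimensions: a reflected path is found by mirroring the transmitter across the reflecting wall and drawing a straight line to the receiver. The geometry, frequency, and reflection coefficient below are illustrative assumptions, not values from the simulator described above.

```python
import numpy as np

# Minimal 2D sketch of the image-source construction used in GO ray tracing.
tx = np.array([2.0, 1.0])   # transmitter position (m)
rx = np.array([5.0, 4.0])   # receiver position (m)
# Perfectly flat wall along x = 0: mirror the transmitter to get its image.
img = np.array([-tx[0], tx[1]])

d_direct = np.linalg.norm(rx - tx)       # line-of-sight path length
d_reflected = np.linalg.norm(rx - img)   # unfolded single-reflection path length

# Two-ray field sum (narrowband, unit source amplitude)
wavelength = 0.125                       # ~2.4 GHz
k = 2*np.pi/wavelength
gamma = -0.7                             # assumed wall reflection coefficient
field = np.exp(-1j*k*d_direct)/d_direct + gamma*np.exp(-1j*k*d_reflected)/d_reflected
power_db = 20*np.log10(np.abs(field))    # received field strength in dB
```

Multiple reflections generalize this by mirroring the image sources recursively, one mirroring per reflection order.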
A novel algorithm for validating peptide identification from a shotgun proteomics search engine.
Jian, Ling; Niu, Xinnan; Xia, Zhonghang; Samir, Parimal; Sumanasekera, Chiranthani; Mu, Zheng; Jennings, Jennifer L; Hoek, Kristen L; Allos, Tara; Howard, Leigh M; Edwards, Kathryn M; Weil, P Anthony; Link, Andrew J
2013-03-01
Liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS) has revolutionized the proteomics analysis of complexes, cells, and tissues. In a typical proteomic analysis, the tandem mass spectra from an LC-MS/MS experiment are assigned to a peptide by a search engine that compares the experimental MS/MS peptide data to theoretical peptide sequences in a protein database. The peptide spectrum matches are then used to infer a list of identified proteins in the original sample. However, search engines often fail to distinguish between correct and incorrect peptide assignments. In this study, we designed and implemented a novel algorithm called De-Noise to reduce the number of incorrect peptide matches and maximize the number of correct peptides at a fixed false discovery rate, using a minimal number of scoring outputs from the SEQUEST search engine. The novel algorithm uses a three-step process: data cleaning, data refining through an SVM-based decision function, and a final data refining step based on proteolytic peptide patterns. Using proteomics data generated on different types of mass spectrometers, we optimized the De-Noise algorithm on the basis of the resolution and mass accuracy of the mass spectrometer employed in the LC-MS/MS experiment. Our results demonstrate that De-Noise improves peptide identification compared to other methods used to process the peptide sequence matches assigned by SEQUEST. Because De-Noise uses a limited number of scoring attributes, it can be easily implemented with other search engines.
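The three-stage filtering idea can be sketched on toy peptide-spectrum matches (PSMs). The feature names, thresholds, and the linear stand-in for the trained SVM decision function below are illustrative assumptions, not the published De-Noise parameters.

```python
# Toy PSMs with SEQUEST-style attributes: cross-correlation (xcorr) and delta Cn.
psms = [
    {"pep": "K.LVNELTEFAK.T", "charge": 2, "xcorr": 3.1, "dcn": 0.35},
    {"pep": "K.QTALVELLK.H",  "charge": 2, "xcorr": 1.2, "dcn": 0.05},
    {"pep": "R.FKDLGEEHFK.G", "charge": 3, "xcorr": 3.6, "dcn": 0.22},
    {"pep": "S.AAAAAAAAA.A",  "charge": 2, "xcorr": 2.9, "dcn": 0.30},
]

def clean(p):                      # step 1: data cleaning, drop obviously poor matches
    return p["dcn"] >= 0.1

def decision(p):                   # step 2: stand-in for the SVM decision function
    w_x, w_d, b = 1.0, 4.0, -3.5   # assumed weights, not trained values
    return w_x*p["xcorr"] + w_d*p["dcn"] + b > 0

def tryptic(p):                    # step 3: proteolytic (tryptic) peptide pattern
    prev, seq = p["pep"].split(".")[0], p["pep"].split(".")[1]
    return prev in ("K", "R") and seq[-1] in ("K", "R")

accepted = [p for p in psms if clean(p) and decision(p) and tryptic(p)]
```

A real implementation would train the step-2 decision function on target/decoy labels and calibrate the cut-offs to the desired false discovery rate.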
Mannino, Antonio; Novak, Michael G.; Hooker, Stanford B.; Hyde, Kimberly; Aurin, Dick
2014-01-01
An extensive set of field measurements has been collected throughout the continental margin of the northeastern U.S. from 2004 to 2011 to develop and validate ocean color satellite algorithms for the retrieval of the absorption coefficient of chromophoric dissolved organic matter (aCDOM) and CDOM spectral slopes for the 275:295 nm and 300:600 nm spectral ranges (S275:295 and S300:600). Remote sensing reflectance (Rrs) measurements computed from in-water radiometry profiles along with aCDOM() data are applied to develop several types of algorithms for the SeaWiFS and MODIS-Aqua ocean color satellite sensors, which involve least squares linear regression of aCDOM() with (1) Rrs band ratios, (2) quasi-analytical algorithm-based (QAA-based) products of total absorption coefficients, (3) multiple Rrs bands within a multiple linear regression (MLR) analysis, and (4) the diffuse attenuation coefficient (Kd). The relative errors (mean absolute percent difference; MAPD) for the MLR retrievals of aCDOM(275), aCDOM(355), aCDOM(380), aCDOM(412) and aCDOM(443) for our study region range from 20.4-23.9 for MODIS-Aqua and 27.3-30 for SeaWiFS. Because of the narrower range of CDOM spectral slope values, the MAPD for the MLR S275:295 and QAA-based S300:600 algorithms are much lower: 9.9 and 8.3 for SeaWiFS, respectively, and 8.7 and 6.3 for MODIS-Aqua, respectively. Seasonal and spatial MODIS-Aqua and SeaWiFS distributions of aCDOM, S275:295 and S300:600 processed with these algorithms are consistent with field measurements and the processes that impact CDOM levels along the continental shelf of the northeastern U.S. Several satellite data processing factors correlate with higher uncertainty in satellite retrievals of aCDOM, S275:295 and S300:600 within the coastal ocean, including solar zenith angle, sensor viewing angle, and atmospheric products applied for atmospheric corrections. Algorithms that include ultraviolet Rrs bands provide a better fit to field measurements than
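The MLR approach described above can be sketched as a log-linear regression of aCDOM on several Rrs bands, scored with the MAPD metric the abstract uses. All numbers below are synthetic illustration data, not the field measurements.

```python
import numpy as np

# Synthetic matchup data: Rrs at three bands and the aCDOM values they imply.
rng = np.random.default_rng(0)
n = 60
rrs = rng.uniform(0.001, 0.01, size=(n, 3))      # Rrs at three bands (sr^-1)
true_coef = np.array([-120.0, 40.0, 15.0])       # assumed generating coefficients
acdom = np.exp(rrs @ true_coef) * 0.5            # synthetic aCDOM(412), m^-1

# Fit the multiple linear regression: log(aCDOM) = b0 + b1*Rrs1 + b2*Rrs2 + b3*Rrs3
X = np.column_stack([np.ones(n), rrs])
b, *_ = np.linalg.lstsq(X, np.log(acdom), rcond=None)
pred = np.exp(X @ b)

# Mean absolute percent difference, the accuracy metric quoted in the abstract.
mapd = 100*np.mean(np.abs(pred - acdom)/acdom)
```

On real matchups the MAPD is dominated by measurement noise and model mismatch; here the synthetic data are exactly log-linear, so the fit is essentially perfect.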
Jules R. Dim
2013-01-01
Potential improvements of aerosol algorithms for future climate-oriented satellites, such as the coming Global Change Observation Mission Climate/Second generation Global Imager (GCOM-C/SGLI), are discussed based on a validation study of three years (2008–2010) of daily aerosol properties, that is, the aerosol optical thickness (AOT) and the Ångström exponent (AE), retrieved from two MODIS algorithms. The ground-truth data used for this validation study are aerosol measurements from 3 SKYNET ground sites. The results obtained show a good agreement between the ground-truth AOT and that of one of the satellite algorithms, and a systematic overestimation (around 0.2) by the other satellite algorithm. The examination of the AE shows a clear underestimation (by around 0.2–0.3) by both satellite algorithms. The uncertainties explaining these ground-satellite discrepancies are examined: cloud contamination affects the aerosol properties (AOT and AE) of the two satellite algorithms differently, due to the retrieval scale differences between these algorithms. The deviation of the real part of the refractive index values assumed by the satellite algorithms from that of the ground tends to decrease the accuracy of the AOT of both satellite algorithms. The asymmetry factor (AF) of the ground tends to increase the AE ground-satellite discrepancies as well.
SBUV version 8.6 Retrieval Algorithm: Error Analysis and Validation Technique
Kramarova, N. A.; Bhartia, P. K.; Frith, P. K.; McPeters, S. M.; Labow, R. D.; Taylor, G.; Fisher, S.; DeLand, M.
2012-01-01
The SBUV version 8.6 algorithm was used to reprocess data from the Back Scattered Ultra Violet (BUV), the Solar Back Scattered Ultra Violet (SBUV) and a number of SBUV/2 instruments, which span a 41-year period from 1970 to 2011 (except a 5-year gap in the 1970s) [see Bhartia et al., 2012]. In the new version, the Daumont et al. [1992] ozone cross sections were used, and new ozone [McPeters et al., 2007] and cloud [Joiner and Bhartia, 1995] climatologies were implemented. The algorithm uses the Optimum Estimation technique [Rodgers, 2000] to retrieve ozone profiles as layer amounts (partial columns, DU) on 21 pressure layers. The corresponding total ozone values are calculated by summing the ozone columns of the individual layers. The algorithm is optimized to accurately retrieve monthly zonal mean (mzm) profiles rather than individual profiles, since it uses a monthly zonal mean ozone climatology as the a priori. Thus, the SBUV version 8.6 ozone dataset is better suited for long-term trend analysis and monitoring ozone changes than for studying short-term ozone variability. Here we discuss some characteristics of the SBUV algorithm and sources of error in the SBUV profile and total ozone retrievals. For the first time, the Averaging Kernels, smoothing errors and weighting functions (or Jacobians) are included in the SBUV metadata. The Averaging Kernels (AK) represent the sensitivity of the retrieved profile to the true state and contain valuable information about the retrieval algorithm, such as Vertical Resolution, Degrees of Freedom for Signal (DFS) and Retrieval Efficiency [Rodgers, 2000]. Analysis of the AK for mzm ozone profiles shows that the total number of DFS for ozone profiles varies from 4.4 to 5.5 out of the 6-9 wavelengths used for retrieval. The number of wavelengths in turn depends on solar zenith angle. Between 25 and 0.5 hPa, where the SBUV vertical resolution is highest, the DFS for individual layers are about 0.5.
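The DFS diagnostic quoted above follows directly from the averaging kernel matrix of an optimal estimation retrieval: the total DFS is the trace of A, and the per-layer DFS are its diagonal entries [Rodgers, 2000]. The sketch below uses the standard formulas with illustrative dimensions and covariances, not the SBUV values.

```python
import numpy as np

# Linear optimal estimation diagnostics:
#   gain  G = (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1
#   AK    A = G K,  DFS = trace(A)
n_layers, n_wavelengths = 21, 8
rng = np.random.default_rng(1)
K = rng.normal(size=(n_wavelengths, n_layers))   # Jacobian (weighting functions), assumed
Se_inv = np.eye(n_wavelengths) / 0.01            # inverse measurement error covariance
Sa_inv = np.eye(n_layers) / 1.0                  # inverse a priori covariance

G = np.linalg.solve(K.T @ Se_inv @ K + Sa_inv, K.T @ Se_inv)  # gain matrix
A = G @ K                                        # averaging kernel matrix
dfs = np.trace(A)                                # total DFS; per-layer DFS = np.diag(A)
```

The total DFS is bounded by the number of measurements, which is why the abstract's 4.4-5.5 sits below the 6-9 wavelengths used.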
Bosmans, H.; Verbeeck, R.; Vandermeulen, D.; Suetens, P.; Wilms, G.; Maaly, M.; Marchal, G.; Baert, A.L. [Louvain Univ. (Belgium)
1995-12-01
The objective of this study was to validate a new post-processing algorithm for improved maximum intensity projections (MIPs) of intracranial MR angiography acquisitions. The core of the post-processing procedure is a new brain segmentation algorithm. Two seed areas, background and brain, are automatically detected. A 3D region grower then grows both regions towards each other, preferentially towards white regions. In this way, the skin gets included in the final 'background region' whereas cortical blood vessels and all brain tissues are included in the 'brain region'. The latter region is then used for the MIP. The algorithm runs in less than 30 minutes on a full dataset on a Unix workstation. Images from different acquisition strategies, including multiple overlapping thin slab acquisition, magnetization transfer (MT) MRA, Gd-DTPA enhanced MRA, normal and high resolution acquisitions, and acquisitions from mid-field and high-field systems, were filtered. A series of contrast-enhanced MRA acquisitions obtained with identical parameters was filtered to study the robustness of the filter parameters. In all cases, only minimal manual interaction was necessary to segment the brain. The quality of the MIP was significantly improved, especially in post Gd-DTPA acquisitions or using MT, due to the absence of high intensity signals from skin, sinuses and eyes that otherwise superimpose on the angiograms. It is concluded that the filter is a robust technique to improve the quality of MR angiograms.
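The competitive region-growing idea, two seeds expanding toward each other while brighter (whiter) voxels are claimed first, can be sketched in 2D with a priority queue. The image and seed positions below are a toy illustration, not MRA data.

```python
import heapq
import numpy as np

# Toy grayscale image: bright borders, a dark valley between two regions.
img = np.array([
    [9, 9, 8, 1, 2, 7, 8, 9],
    [9, 8, 7, 1, 1, 6, 8, 9],
    [8, 7, 6, 2, 1, 5, 7, 8],
], dtype=float)
labels = np.zeros(img.shape, dtype=int)          # 0 = unclaimed
seeds = {1: (0, 0), 2: (0, 7)}                   # 1 = "background", 2 = "brain" (assumed)

heap = []
for lab, (r, c) in seeds.items():
    labels[r, c] = lab
    heapq.heappush(heap, (-img[r, c], r, c, lab))  # brighter pixels popped first

while heap:
    _, r, c, lab = heapq.heappop(heap)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        rr, cc = r + dr, c + dc
        if 0 <= rr < img.shape[0] and 0 <= cc < img.shape[1] and labels[rr, cc] == 0:
            labels[rr, cc] = lab                 # claim the neighbour for this region
            heapq.heappush(heap, (-img[rr, cc], rr, cc, lab))
```

Because bright pixels are claimed first, each region sweeps up its own bright structures before the fronts meet in the dark gap, mimicking the preferential growth towards white regions described above.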
Macher, H.; Landes, T.; Grussenmeyer, P.
2016-06-01
Laser scanners are widely used for the modelling of existing buildings and particularly in the creation process of as-built BIM (Building Information Modelling). However, the generation of as-built BIM from point clouds involves mainly manual steps and is consequently time consuming and error-prone. Along the path to automation, a three-step segmentation approach has been developed. This approach is composed of two phases: a segmentation into sub-spaces, namely floors and rooms, and a plane segmentation combined with the identification of building elements. In order to assess and validate the developed approach, different case studies are considered. Indeed, it is essential to apply algorithms to several datasets and not to develop algorithms with a unique dataset, which could influence the development with its particularities. Indoor point clouds of different types of buildings will be used as input for the developed algorithms, ranging from an individual house of almost one hundred square meters to larger buildings of several thousand square meters. The datasets provide various space configurations and present numerous different occluding objects such as desks, computer equipment, home furnishings and even wine barrels. For each dataset, the results will be illustrated. The analysis of the results will provide an insight into the transferability of the developed approach for the indoor modelling of several types of buildings.
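A common way to realize the first segmentation phase, splitting an indoor point cloud into storeys, is to look for peaks in the histogram of point heights, since floor and ceiling slabs concentrate many points at nearly constant z. The data and threshold below are synthetic assumptions, not the paper's method in detail.

```python
import numpy as np

# Synthetic indoor cloud: three dense horizontal slabs plus scattered wall/furniture points.
rng = np.random.default_rng(2)
floor_z = [0.0, 3.0, 6.0]                            # slab elevations (m), assumed
pts_z = np.concatenate(
    [rng.normal(z, 0.02, 5000) for z in floor_z]     # dense slab returns
    + [rng.uniform(0, 6.5, 3000)]                    # walls and furniture in between
)

# Histogram of heights; slab bins stand far above the average bin count.
counts, edges = np.histogram(pts_z, bins=np.arange(-0.5, 7.0, 0.1))
threshold = 3 * counts.mean()
peaks = [0.5*(edges[i] + edges[i+1])
         for i in range(len(counts))
         if counts[i] > threshold]

# Merge adjacent peak bins into storey slabs.
slabs = []
for z in sorted(peaks):
    if not slabs or z - slabs[-1][-1] > 0.15:
        slabs.append([z])
    else:
        slabs[-1].append(z)
n_storeys = len(slabs)
```

Room segmentation within each storey can then proceed analogously on the horizontal point distribution, before the plane segmentation phase identifies walls, floors and ceilings as building elements.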
Liu, Yi; Yin, Zengshan; Yang, Zhongdong; Zheng, Yuquan; Yan, Changxiang; Tian, Xiangjun; Yang, Dongxu
2016-04-01
After 5 years of development, the Chinese carbon dioxide observation satellite (TanSat), the first scientific experimental CO2 satellite of China, has stepped into the pre-launch phase. The characteristics of the pre-launch carbon dioxide spectrometer have been optimized during laboratory testing and calibration. Radiometric calibration shows an SNR of 440 (O2A 0.76 um band), 300 (CO2 1.61 um band) and 180 (CO2 2.06 um band) on average under typical radiance conditions. The instrument line shape was calibrated automatically using a well-designed testing system with laser control and recording. After a series of tests and calibrations in the laboratory, the instrumental performance meets the design requirements. TanSat will be launched in August 2016. Optimal estimation theory is employed in the TanSat XCO2 retrieval algorithm in a full-physics approach, with simulation of the radiative transfer in the atmosphere. Gas absorption, aerosol and cirrus scattering, and surface reflectance, together with wavelength dispersion, have been considered in the inversion to better correct the interference errors in XCO2. In order to simulate the radiative transfer precisely and efficiently, we developed a fast vector radiative transfer simulation method. Application of the TanSat algorithm to GOSAT observations (ATANGO) is an appropriate way to evaluate the performance of the algorithm. Validated against TCCON measurements, the ATANGO product achieves a 1.5 ppm precision. A Chinese carbon cycle data-assimilation system, Tan-Tracker, has been developed based on the atmospheric chemical transport model GEOS-Chem. Tan-Tracker is a dual-pass data-assimilation system in which both CO2 concentrations and CO2 fluxes are simultaneously assimilated from atmospheric observations. A validation network has been established around China to support the series of Chinese CO2 satellites, which includes 3 IFS-125HR and 4 Optical Spectrum Analyzers, etc.
A KAM theory for conformally symplectic systems: Efficient algorithms and their validation
Calleja, Renato C.; Celletti, Alessandra; de la Llave, Rafael
We present a KAM theory for some dissipative systems (geometrically, these are conformally symplectic systems, i.e. systems that transform a symplectic form into a multiple of itself). For systems with n degrees of freedom depending on n parameters we show that it is possible to find solutions with a fixed n-dimensional (Diophantine) frequency by adjusting the parameters. We do not assume that the system is close to integrable, but we present the results in an a-posteriori format. Our unknowns are a parameterization of the quasi-periodic solution and some parameters in the system. We formulate an invariance equation that expresses that the system with the parameters leaves invariant the solution given by the embedding. We show that if there is a sufficiently approximate solution of the invariance equation, which also satisfies some non-degeneracy conditions, then there is a true solution nearby. The smallness assumptions above can be understood either in Sobolev or in analytic norms. The a-posteriori format has several consequences: A) smooth dependence on the parameters, including the singular limit of zero dissipation; B) estimates on the measure of parameters covered by quasi-periodic solutions; C) convergence of perturbative expansions in dissipative analytic systems; D) bootstrap of regularity (i.e. that all tori which are smooth enough are analytic if the map is analytic); E) a numerically efficient criterion for the breakdown of the quasi-periodic solutions. The proof is based on an iterative quadratically convergent method. The iterative step takes advantage of some geometric identities; these identities also lead to an efficient algorithm. If we discretize the parameterization with N terms, a modified Newton step requires O(N) storage and O(N log N) operations. The a-posteriori theorems allow one to be confident in the numerical results even very close to breakdown. The algorithm does not require that the system is close to integrable, so that a
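The a-posteriori viewpoint can be illustrated numerically: given an approximate torus, one measures the residual of the invariance equation f_mu(K(theta)) = K(theta + omega), and the FFT evaluates the shifted embedding in O(N log N), as in the algorithm described above. The sketch below uses the dissipative (conformally symplectic) standard map as a test case; all parameter values are assumptions for illustration.

```python
import numpy as np

lam = 0.9                                  # conformal (dissipation) factor, assumed
omega = 2*np.pi*(np.sqrt(5) - 1)/2         # golden-mean frequency
N = 256
theta = 2*np.pi*np.arange(N)/N

def shift(g, w):
    """Trigonometric interpolation of g(theta + w) from equispaced samples (O(N log N))."""
    k = np.fft.fftfreq(N, d=1.0/N)
    return np.real(np.fft.ifft(np.fft.fft(g) * np.exp(1j*k*w)))

def invariance_error(eps):
    mu = (1 - lam)*omega                   # drift parameter adjusted so eps = 0 is exact
    u = np.zeros(N)                        # trivial approximate parameterization:
    v = np.full(N, omega)                  #   K(theta) = (theta, omega)
    x, y = theta + u, v
    y1 = lam*y + mu + eps*np.sin(x)        # dissipative standard map step
    x1 = x + y1
    ex = x1 - (theta + omega + shift(u, omega))   # residual in the angle component
    ey = y1 - shift(v, omega)                     # residual in the action component
    return max(np.abs(ex).max(), np.abs(ey).max())
```

For eps = 0 the residual vanishes to machine precision; for small eps the residual of the unperturbed torus is of size eps, and a Newton iteration on (K, mu) would drive it quadratically to zero.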
Calderer, Antoni [Univ. of Minnesota, Minneapolis, MN (United States); Yang, Xiaolei [Stony Brook Univ., NY (United States); Angelidis, Dionysios [Univ. of Minnesota, Minneapolis, MN (United States); Feist, Chris [Univ. of Minnesota, Minneapolis, MN (United States); Guala, Michele [Univ. of Minnesota, Minneapolis, MN (United States); Ruehl, Kelley [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Guo, Xin [Univ. of Minnesota, Minneapolis, MN (United States); Boomsma, Aaron [Univ. of Minnesota, Minneapolis, MN (United States); Shen, Lian [Univ. of Minnesota, Minneapolis, MN (United States); Sotiropoulos, Fotis [Stony Brook Univ., NY (United States)
2015-10-30
The present project involves the development of modeling and analysis design tools for assessing offshore wind turbine technologies. The computational tools developed herein are able to resolve the effects of the coupled interaction of atmospheric turbulence and ocean waves on aerodynamic performance and structural stability and reliability of offshore wind turbines and farms. Laboratory scale experiments have been carried out to derive data sets for validating the computational models.
Rainieri, Carlo; Fabbrocino, Giovanni
2015-08-01
In the last few decades large research efforts have been devoted to the development of methods for automated detection of damage and degradation phenomena at an early stage. Modal-based damage detection techniques are well-established methods, whose effectiveness for Level 1 (existence) and Level 2 (location) damage detection is demonstrated by several studies. The indirect estimation of tensile loads in cables and tie-rods is another attractive application of vibration measurements. It provides interesting opportunities for cheap and fast quality checks in the construction phase, as well as for safety evaluations and structural maintenance over the structure lifespan. However, the lack of automated modal identification and tracking procedures has long been a relevant drawback to the extensive application of the above-mentioned techniques in engineering practice. An increasing number of field applications of modal-based structural health and performance assessment are appearing after the development of several automated output-only modal identification procedures in the last few years. Nevertheless, additional efforts are still needed to enhance the robustness of automated modal identification algorithms, control the computational effort and improve the reliability of modal parameter estimates (in particular, damping). This paper deals with an original algorithm for automated output-only modal parameter estimation. Particular emphasis is given to the extensive validation of the algorithm based on simulated and real datasets in view of continuous monitoring applications. The results point out that the algorithm is fairly robust and demonstrate its ability to provide accurate and precise estimates of the modal parameters, including damping ratios. As a result, it has been used to develop systems for vibration-based estimation of tensile loads in cables and tie-rods. Promising results have been achieved for non-destructive testing as well as continuous
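One elementary ingredient of modal parameter estimation, recovering a damping ratio from a single-mode free decay via the logarithmic decrement, can be sketched on a synthetic signal. The frequency, damping and sampling values are illustrative assumptions; the paper's algorithm operates on output-only multi-mode data and is far more involved.

```python
import numpy as np

# Synthetic free decay of one structural mode.
fn, zeta, fs = 5.0, 0.02, 2000.0            # natural freq (Hz), damping ratio, sampling (Hz)
wn = 2*np.pi*fn
wd = wn*np.sqrt(1 - zeta**2)                # damped natural frequency
t = np.arange(0, 4.0, 1/fs)
x = np.exp(-zeta*wn*t)*np.cos(wd*t)

# Pick successive local maxima of the decay.
peaks = [i for i in range(1, len(x) - 1) if x[i] > x[i-1] and x[i] > x[i+1]]

# Logarithmic decrement between consecutive peaks gives the damping ratio.
delta = np.log(x[peaks[0]]/x[peaks[1]])
zeta_est = delta/np.sqrt(4*np.pi**2 + delta**2)
```

Noise and closely spaced modes make this naive estimator unreliable in practice, which is exactly why the robustness of automated identification procedures, especially for damping, is emphasized above.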
Wright, Nicole C; Curtis, Jeffrey R; Arora, Tarun; Smith, Wilson K; Kilgore, Meredith L; Saag, Kenneth G; Safford, Monika M; Delzell, Elizabeth S
2015-01-01
Validation of claims-based algorithms to identify serious hypersensitivity reactions and osteonecrosis of the jaw has not been performed in large osteoporosis populations. The objective of this project is to estimate the positive predictive value of the claims-based algorithms in older women with osteoporosis enrolled in Medicare. Using the 2006-2008 Medicare 5% sample data, we identified potential hypersensitivity and osteonecrosis of the jaw cases based on ICD-9 diagnosis codes. Potential hypersensitivity cases had a 995.0, 995.2, or 995.3 diagnosis code on emergency department or inpatient claims. Potential osteonecrosis of the jaw cases had ≥1 inpatient or outpatient physician claim with a 522.7, 526.4, 526.5, or 733.45 diagnosis code or ≥2 claims of any type with a 526.9 diagnosis code. All retrieved records were redacted and reviewed by experts to determine case status: confirmed, not confirmed, or insufficient information. We calculated the positive predictive value as the number of confirmed cases divided by the total number of retrieved records with sufficient information. We requested 412 potential hypersensitivity and 304 potential osteonecrosis of the jaw records and received 174 (42%) and 84 (28%) records respectively. Of 84 potential osteonecrosis of the jaw cases, 6 were confirmed, resulting in a positive predictive value (95% CI) of 7.1% (2.7, 14.9). Of 174 retrieved potential hypersensitivity records, 95 were confirmed. After exclusion of 25 records with insufficient information for case determination, the overall positive predictive value (95% CI) for hypersensitivity reactions was 76.0% (67.5, 83.2). In a random sample of Medicare data, a claims-based algorithm to identify serious hypersensitivity reactions performed well. An algorithm for osteonecrosis of the jaw did not, partly due to the inclusion of diagnosis codes that are not specific for osteonecrosis of the jaw.
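The validation arithmetic, a positive predictive value with a 95% confidence interval, can be sketched as below. The evaluable denominator of 125 is an assumption chosen to reproduce the reported 76.0% point estimate from the 95 confirmed cases, and a Wilson score interval is used since the paper's own interval method is not stated; its bounds therefore differ slightly from the published (67.5, 83.2).

```python
import math

confirmed, evaluable = 95, 125          # evaluable count is assumed (see lead-in)
p = confirmed / evaluable               # positive predictive value

# Wilson score interval for a binomial proportion.
z = 1.96                                # normal quantile for a 95% interval
center = (p + z*z/(2*evaluable)) / (1 + z*z/evaluable)
half = z*math.sqrt(p*(1 - p)/evaluable + z*z/(4*evaluable**2)) / (1 + z*z/evaluable)
lo, hi = center - half, center + half   # ≈ (0.68, 0.83)
```

The same calculation applied to 6 confirmed of 84 osteonecrosis records reproduces the contrast the abstract draws: a point estimate so low that the claims algorithm is unusable for that outcome.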
Nieto Solana, Hector; Sandholt, Inge; Aguado, Inmaculada;
2011-01-01
Air temperature can be estimated from remote sensing by combining information in thermal infrared and optical wavelengths. The empirical TVX algorithm is based on an estimated linear relationship between observed Land Surface Temperature (LST) and a Spectral Vegetation Index (NDVI). Air temperature...... the accuracy of estimates using the new NDVImax and the previous NDVImax that have been proposed in literature with MSG-SEVIRI images in Spain during the year 2005. In addition, a spatio-temporal assessment of residuals has been performed to evaluate the accuracy of retrievals in terms of daily and seasonal...
The 183-WSL Fast Rain Rate Retrieval Algorithm. Part II: Validation Using Ground Radar Measurements
Laviola, Sante; Levizzani, Vincenzo
2014-01-01
The Water vapour Strong Lines at 183 GHz (183-WSL) algorithm is a method for the retrieval of rain rates and precipitation type classification (convective/stratiform) that makes use of the water vapor absorption lines centered at 183.31 GHz of the Advanced Microwave Sounding Unit module B (AMSU-B) and of the Microwave Humidity Sounder (MHS) flying on the NOAA-15-18 and NOAA-19/Metop-A satellite series, respectively. The characteristics of this algorithm were described in Part I of this paper, together with comparisons against analogous precipitation products. The focus of Part II is the analysis of the performance of the 183-WSL technique based on surface radar measurements. The ground truth dataset consists of 2.5 years of rainfall intensity fields from the NIMROD European radar network, which covers North-Western Europe. The investigation of the 183-WSL retrieval performance is based on a twofold approach: 1) the dichotomous statistic is used to evaluate the capabilities of the method to identify rain and no-rain clouds; 2) the accuracy statistic is applied to quantify the errors in the estimation of rain rates. The results reveal that the 183-WSL technique shows good skill in the detection of rain/no-rain areas and in the quantification of rain rate intensities. The categorical analysis shows annual values of the POD, FAR and HK indices varying in the ranges 0.80-0.82, 0.33-0.36 and 0.39-0.46, respectively. The RMSE value is 2.8 millimeters per hour for the whole period, despite an overestimation in the retrieved rain rates. Of note is the distribution of the 183-WSL monthly mean rain rate with respect to radar: the seasonal fluctuations of the average rainfall measured by radar are reproduced by the 183-WSL. However, the retrieval method appears to suffer under winter seasonal conditions, especially when the soil is partially frozen and the surface emissivity drastically changes. This fact is verified by observing the discrepancy distribution diagrams where the 183-WSL
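The dichotomous verification scores quoted above come from a 2x2 contingency table of satellite rain detections against radar. The sketch below computes them from illustrative counts (not the NIMROD statistics).

```python
# 2x2 contingency table: satellite detection vs. radar ground truth (toy counts).
hits, false_alarms, misses, correct_negatives = 820, 410, 190, 3580

pod = hits / (hits + misses)                              # probability of detection
far = false_alarms / (hits + false_alarms)                # false alarm ratio
pofd = false_alarms / (false_alarms + correct_negatives)  # probability of false detection
hk = pod - pofd                                           # Hanssen-Kuipers discriminant
```

POD and FAR measure the detection and false-alarm behaviour separately, while HK rewards discriminating rain from no-rain beyond chance; the accuracy statistic (e.g. RMSE of rain rate) is then computed only over the jointly detected rain pixels.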
Shah, R.G.; Salafia, C.M.; Girardi, T.; Conrad, L.; Keaty, K.
2015-01-01
Variability in placental chorionic surface vessel networks (PCSVNs) may mark developmental and functional changes in fetal health. Here we report a protocol of manually tracing PCSVNs from digital 2D images of post-delivery placentas and its validation by a shape matching method to compare the similarity between paint-injected and unmanipulated (uninjected and deflated vessels) tracings of PCSVNs. We show that tracings of unmanipulated vessels produce networks that are very comparable to the networks obtained by tracing paint-injected PCSVNs. We suggest that manual tracings of unmanipulated PCSVNs can extract features of PCSVN growth and structure that may impact fetal wellbeing. PMID:26100723
Flexible job-shop scheduling based on genetic algorithm and simulation validation
Zhou Erming
2017-01-01
This paper selects the flexible job-shop scheduling problem as its research object and constructs a mathematical model aimed at minimizing the maximum makespan. Taking the transmission reverse gear production line of a transmission corporation as an example, a genetic algorithm is applied to the flexible job-shop scheduling problem to obtain the specific optimal scheduling results with MATLAB. DELMIA/QUEST, based on 3D discrete event simulation, is applied to construct the physical model of the production workshop. On the basis of the optimal scheduling results, the logical links of the physical model of the production workshop are established and the appropriate process parameters are imported to run a virtual simulation of the production workshop. Finally, analysis of the simulated results shows that the scheduling results are effective and reasonable.
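The genetic-algorithm core can be sketched on the machine-assignment aspect of flexible job-shop scheduling: each gene assigns one operation to a machine and fitness is the makespan (the largest machine load). Problem sizes, operation times and GA settings below are assumptions for illustration, not the production-line data.

```python
import random

random.seed(42)
proc_time = [4, 7, 2, 5, 8, 3, 6, 4, 5, 2]   # processing time of each operation (assumed)
n_machines = 3

def makespan(assign):
    """Makespan = load of the most heavily loaded machine."""
    load = [0]*n_machines
    for op, m in enumerate(assign):
        load[m] += proc_time[op]
    return max(load)

def evolve(pop_size=30, generations=60, p_mut=0.2):
    pop = [[random.randrange(n_machines) for _ in proc_time] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=makespan)                             # elitist selection
        survivors = pop[:pop_size//2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(proc_time))      # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < p_mut:                    # random-reset mutation
                child[random.randrange(len(child))] = random.randrange(n_machines)
            children.append(child)
        pop = survivors + children
    return min(pop, key=makespan)

best = evolve()
```

A full flexible job-shop chromosome also encodes operation sequencing and precedence constraints; this sketch keeps only the assignment part to show the selection/crossover/mutation loop.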
E. Biffi
2010-01-01
Neurons cultured in vitro on MicroElectrode Array (MEA) devices connect to each other, forming a network. To study electrophysiological activity and long-term plasticity effects, long-period recordings and spike sorting methods are needed. Therefore, on-line and real-time analysis, optimization of memory use and improvement of the data transmission rate become necessary. We developed an algorithm for amplitude-threshold spike detection, whose performance was verified with (a) statistical analysis on both simulated and real signals and (b) Big O notation. Moreover, we developed a PCA-hierarchical classifier, evaluated on simulated and real signals. Finally, we proposed a spike detection hardware design on FPGA, whose feasibility was verified in terms of number of CLBs, memory occupation and timing requirements; once realized, it will be able to execute on-line detection and real-time waveform analysis, reducing data storage problems.
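A minimal amplitude-threshold spike detector of the kind described above flags threshold crossings and enforces a refractory window so each spike is counted once. The signal, spike amplitude and threshold multiple below are synthetic assumptions, not the paper's parameters.

```python
import numpy as np

fs = 10_000                                  # sampling rate (Hz), assumed
rng = np.random.default_rng(3)
signal = rng.normal(0, 1.0, 2*fs)            # 2 s of background noise
spike_times = [1234, 5678, 11111, 17000]     # known spike sample indices (synthetic)
for i in spike_times:
    signal[i] -= 10.0                        # extracellular spikes appear as negative peaks

threshold = -5*np.std(signal)                # amplitude threshold at 5 SD (assumed)
refractory = int(0.001*fs)                   # 1 ms dead time after each detection

detected, last = [], -refractory
for i in np.flatnonzero(signal < threshold):
    if i - last >= refractory:               # skip samples inside the refractory window
        detected.append(int(i))
        last = i
```

An on-line FPGA version replaces the array scan with a streaming comparator plus a counter for the dead time, which is why the method maps so naturally onto hardware.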
Ben A Lopman
2006-08-01
BACKGROUND: Vital registration and cause-of-death reporting are incomplete in the countries in which the HIV epidemic is most severe. A reliable tool that is independent of HIV status is needed for measuring the frequency of AIDS deaths and, ultimately, the impact of antiretroviral therapy on mortality. METHODS AND FINDINGS: A verbal autopsy questionnaire was administered to caregivers of 381 adults of known HIV status who died between 1998 and 2003 in Manicaland, eastern Zimbabwe. Individuals who were HIV positive and did not die in an accident or during childbirth (74%; n = 282) were considered to have died of AIDS in the gold standard. Verbal autopsies were randomly allocated to a training dataset (n = 279) to generate classification criteria or a test dataset (n = 102) to verify the criteria. A rule-based algorithm created to minimise false positives had a specificity of 66% and a sensitivity of 76%. Eight predictors (weight loss, wasting, jaundice, herpes zoster, presence of abscesses or sores, oral candidiasis, acute respiratory tract infections, and vaginal tumours) were included in the algorithm. In the test dataset of verbal autopsies, 69% of deaths were correctly classified as AIDS/non-AIDS, and it was not necessary to invoke a differential diagnosis of tuberculosis. Presence of any one of these criteria gave a post-test probability of AIDS death of 0.84. CONCLUSIONS: Analysis of verbal autopsy data in this rural Zimbabwean population revealed a distinct pattern of signs and symptoms associated with AIDS mortality. Using these signs and symptoms, demographic surveillance data on AIDS deaths may allow for the estimation of AIDS mortality and even HIV prevalence.
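The rule-based algorithm behaves like a diagnostic test, so a post-test probability can be recovered approximately from the quoted sensitivity, specificity and AIDS-death prevalence by Bayes' rule. The paper's 0.84 figure comes from test-set counts, so the value computed here is close but not identical.

```python
# Quoted operating characteristics of the verbal autopsy algorithm.
sens, spec, prev = 0.76, 0.66, 0.74     # sensitivity, specificity, prevalence of AIDS death

# Bayes' rule: probability of AIDS death given a positive / negative classification.
ppv = sens*prev / (sens*prev + (1 - spec)*(1 - prev))   # post-test probability, ~0.86
npv = spec*(1 - prev) / (spec*(1 - prev) + (1 - sens)*prev)
```

Because prevalence enters directly, the same rule applied in a population with fewer AIDS deaths would yield a much lower post-test probability, which is the main caveat when transporting such algorithms between surveillance sites.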
Rani Sharma, Anu; Kharol, Shailesh Kumar; Kvs, Badarinath; Roy, P. S.
In Earth observation, the atmosphere has a non-negligible influence on the visible and infrared radiation, strong enough to modify the reflected electromagnetic signal and the at-target reflectance. Scattering of solar irradiance by atmospheric molecules and aerosol generates path radiance, which increases the apparent surface reflectance over dark surfaces, while absorption by aerosols and other molecules in the atmosphere causes loss of brightness in the scene as recorded by the satellite sensor. In order to derive precise surface reflectance from satellite image data, it is essential to apply an atmospheric correction that removes the effects of molecular and aerosol scattering. In the present study, we implemented a fast atmospheric correction algorithm for IRS-P6 AWiFS satellite data which can effectively retrieve surface reflectance under different atmospheric and surface conditions. The algorithm is based on MODIS climatology products and a simplified use of the Second Simulation of Satellite Signal in the Solar Spectrum (6S) radiative transfer code, which is used to generate look-up tables (LUTs). The algorithm requires information on aerosol optical depth for correcting the satellite dataset. The proposed method is simple and easy to implement for estimating surface reflectance from the at-sensor recorded signal on a per-pixel basis. The atmospheric correction algorithm has been tested on different IRS-P6 AWiFS false colour composites (FCC) covering the ICRISAT Farm, Patancheru, Hyderabad, India under varying atmospheric conditions. Ground measurements of surface reflectance representing different land use/land cover types, i.e., red soil and chickpea, groundnut, and pigeon pea crops, were conducted to validate the algorithm, and a very good match was found between ground-measured surface reflectance and atmospherically corrected reflectance for all spectral bands. Further, we aggregated all datasets together and compared the retrieved AWiFS reflectance with
Shahi, Naveen R.; Thapliyal, Pradeep K.; Sharma, Rashmi; Pal, Pradip K.; Sarkar, Abhijit
2011-09-01
This paper presents the development of a methodology to estimate the net surface shortwave radiation (SWR) over tropical oceans using half-hourly geostationary satellite estimates of outgoing longwave radiation (OLR). The collocated data set of SWR measured at 13 buoy locations over the Indian Ocean and a Meteosat-derived OLR for the period of 2002-2009 have been used to derive an empirical relationship. The information from the solar zenith angle that determines the amount of solar radiation received at a particular location is used to normalize the SWR to nadir observation in order to make the empirical relationship location independent. As the relationship between SWR and OLR is valid mostly over the warm-pool regions, the present study restricts SWR estimation in the tropical Indian Ocean domain (30°E-110°E, 30°S-30°N). The SWR estimates are validated with an independent collocated data set and subsequently compared with the SWR estimates from the Global Energy and Water Cycle Experiment-Surface Radiation Budget V3.0 (GEWEX-SRB), International Satellite Cloud Climatology Project-Flux Data (ISCCP-FD), and National Centers for Environmental Prediction (NCEP) reanalysis for the year 2007. The present algorithm provides significantly better accuracy of SWR estimates, with a root-mean-square error of 27.3 W m-2 as compared with the values of 32.7, 37.5, and 59.6 W m-2 obtained from GEWEX-SRB, ISCCP-FD, and NCEP, respectively. The present algorithm also provides consistently better SWR compared with other available products under different sky conditions and seasons over Indian Ocean warm-pool regions.
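The abstract does not give the empirical SWR-OLR relationship in closed form. The sketch below therefore assumes the simplest possible realization, a linear fit of solar-zenith-angle-normalized SWR against OLR, purely to illustrate the normalization-to-nadir idea described above; the linear form and function names are assumptions.

```python
import numpy as np

def fit_swr_from_olr(olr, swr, cos_sza):
    """Fit SWR normalized to nadir (divided by the cosine of the solar
    zenith angle) as a linear function of OLR; returns (slope, intercept).
    A linear form is an illustrative assumption, not the paper's model."""
    swr_nadir = swr / cos_sza
    slope, intercept = np.polyfit(olr, swr_nadir, 1)
    return slope, intercept

def estimate_swr(olr, cos_sza, slope, intercept):
    """Apply the fitted nadir relationship, then de-normalize back to
    the local solar geometry."""
    return (slope * olr + intercept) * cos_sza
```

Dividing out the zenith-angle dependence before fitting is what makes the empirical relationship location independent, as the abstract notes.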
Minh H. Pham
2017-09-01
Introduction: Inertial measurement units (IMUs) positioned on various body locations allow detailed gait analysis even under unconstrained conditions. From a medical perspective, the assessment of vulnerable populations is of particular relevance, especially in the daily-life environment. Gait analysis algorithms need thorough validation, as many chronic diseases show specific and even unique gait patterns. The aim of this study was therefore to validate an acceleration-based step detection algorithm for patients with Parkinson's disease (PD) and older adults in both a lab-based and a home-like environment. Methods: In this prospective observational study, data were captured from a single 6-degrees-of-freedom IMU (APDM; 3DOF accelerometer and 3DOF gyroscope) worn on the lower back. Detection of heel strike (HS) and toe off (TO) on a treadmill was validated against an optoelectronic system (Vicon) in 11 PD patients and 12 older adults. A second independent validation study in the home-like environment was performed against video observation (20 PD patients and 12 older adults) and included step counting during turning and non-turning, defined with a previously published algorithm. Results: A continuous wavelet transform (CWT)-based algorithm was developed for step detection, with very high agreement with the optoelectronic system. HS detection in PD patients/older adults, respectively, reached 99/99% accuracy. Similar results were obtained for TO (99/100%). In HS detection, Bland-Altman plots showed a mean difference of 0.002 s [95% confidence interval (CI) -0.09 to 0.10] between the algorithm and the optoelectronic system. The Bland-Altman plot for TO detection showed a mean difference of 0.00 s (95% CI -0.12 to 0.12). In the home-like assessment, the algorithm for detection of the occurrence of steps during turning reached 90% (PD patients)/90% (older adults) sensitivity, 83/88% specificity, and 88/89% accuracy. The detection of steps during non-turning phases
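A toy version of wavelet-based step detection can be sketched by correlating the acceleration trace with a single Ricker (Mexican hat) wavelet scale and picking local maxima. This is a minimal illustration of the CWT idea, not the authors' algorithm: the single fixed scale, the relative threshold, and the peak-picking rule are assumptions.

```python
import numpy as np

def ricker(points, a):
    """Ricker (Mexican hat) wavelet, a common mother wavelet for
    CWT-style event detection."""
    t = np.arange(points) - (points - 1) / 2.0
    A = 2 / (np.sqrt(3 * a) * np.pi ** 0.25)
    return A * (1 - (t / a) ** 2) * np.exp(-(t ** 2) / (2 * a ** 2))

def detect_steps(accel, width=10, rel_threshold=0.5):
    """Correlate the signal with one wavelet scale and report local
    maxima above a fraction of the global maximum as candidate heel
    strikes (illustrative parameters)."""
    w = ricker(10 * width + 1, width)        # odd length keeps alignment
    response = np.convolve(accel, w, mode="same")
    thr = rel_threshold * response.max()
    return [i for i in range(1, len(response) - 1)
            if response[i] > thr
            and response[i] >= response[i - 1]
            and response[i] > response[i + 1]]
```

A full CWT scans many scales at once; restricting to one scale keeps the sketch short while preserving the matched-filter intuition.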
Validation of genetic algorithm-based optimal sampling for ocean data assimilation
Heaney, Kevin D.; Lermusiaux, Pierre F. J.; Duda, Timothy F.; Haley, Patrick J.
2016-08-01
Regional ocean models are capable of forecasting conditions for usefully long intervals of time (days) provided that initial and ongoing conditions can be measured. In resource-limited circumstances, the placement of sensors in optimal locations is essential. Here, a nonlinear optimization approach to determine optimal adaptive sampling that uses the genetic algorithm (GA) method is presented. The method determines sampling strategies that minimize a user-defined physics-based cost function. The method is evaluated using identical twin experiments, comparing hindcasts from an ensemble of simulations that assimilate data selected using the GA adaptive sampling and other methods. For skill metrics, we employ the reduction of the ensemble root mean square error (RMSE) between the "true" data-assimilative ocean simulation and the different ensembles of data-assimilative hindcasts. A five-glider optimal sampling study is set up for a 400 km × 400 km domain in the Middle Atlantic Bight region, along the New Jersey shelf-break. Results are compared for several ocean and atmospheric forcing conditions.
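A generic GA subset-selection loop for sensor placement, of the kind described above, can be sketched as follows. The cost function stands in for the paper's physics-based cost (e.g., ensemble RMSE reduction); the population size, single-site mutation, and elitism fraction are illustrative assumptions.

```python
import random

def ga_select(cost, n_candidates, k, pop_size=20, gens=60, seed=1):
    """Evolve subsets of k sensor locations (out of n_candidates) that
    minimize a user-supplied cost function; elitism guarantees the best
    subset never degrades between generations."""
    rng = random.Random(seed)
    def rand_ind():
        return sorted(rng.sample(range(n_candidates), k))
    pop = [list(range(k))] + [rand_ind() for _ in range(pop_size - 1)]
    for _ in range(gens):
        pop.sort(key=cost)
        elite = pop[: pop_size // 4]               # keep best quarter
        children = []
        while len(elite) + len(children) < pop_size:
            parent = list(rng.choice(elite))
            j = rng.randrange(k)                   # mutate one site
            choices = [c for c in range(n_candidates) if c not in parent]
            parent[j] = rng.choice(choices)
            children.append(sorted(parent))
        pop = elite + children
    return min(pop, key=cost)
```

In the study's setting, evaluating `cost` means running data-assimilative hindcast ensembles, so each GA generation is expensive; that is why nonlinear optimization over a modest number of candidate glider tracks is attractive.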
Inversion model validation of ground emissivity. Contribution to the development of SMOS algorithm
Demontoux, François; Ruffié, Gilles; Wigneron, Jean Pierre; Grant, Jennifer; Hernandez, Daniel Medina
2007-01-01
SMOS (Soil Moisture and Ocean Salinity) is the second 'Earth Explorer' mission developed within the 'Living Planet' programme of the European Space Agency (ESA). This satellite, carrying the first 1.4 GHz 2D interferometric radiometer, will carry out the first planetary-scale cartography of soil moisture and ocean salinity. Forests are relatively opaque, and the retrieval of moisture beneath them remains problematic. The effect of the vegetation can be corrected thanks to a simple radiative model. Nevertheless, simulations show that the effect of the litter on the emissivity of a litter + soil system is not negligible. Our objective is to highlight the effects of this layer on the total multilayer system. This will make it possible to arrive at a simple analytical formulation of a litter model that can be integrated into the SMOS calculation algorithm. Radiometer measurements, coupled with dielectric characterization of samples in the laboratory, can enable us to characterize...
Validation of GPM Ka-Radar Algorithm Using a Ground-based Ka-Radar System
Nakamura, Kenji; Kaneko, Yuki; Nakagawa, Katsuhiro; Furukawa, Kinji; Suzuki, Kenji
2016-04-01
GPM, led by the Japan Aerospace Exploration Agency (JAXA) and the US National Aeronautics and Space Administration (NASA), aims to observe global precipitation. The core satellite is equipped with a microwave radiometer (GMI) and a dual-frequency radar (DPR), the first spaceborne Ku/Ka-band dual-wavelength radar dedicated to precipitation measurement. In the DPR algorithm, measured radar reflectivity is converted to effective radar reflectivity by estimating the rain attenuation. Here, the scattering/attenuation characteristics of Ka-band radio waves are crucial, particularly for wet snow. A melting-layer observation using a dual Ka-band radar system developed by JAXA was conducted along the slope of Mt. Zao in Yamagata Prefecture, Japan. The dual Ka-band radar system consists of two nearly identical Ka-band FM-CW radars, and the precipitation systems between the two radars were observed from opposite directions. From this experiment, equivalent radar reflectivity (Ze) and specific attenuation (k) were obtained. The experiments were conducted over two winter seasons. During the data analyses, it was found that the k estimate fluctuates easily because it is based on a double-difference calculation. With substantial temporal and spatial averaging, a k-Ze relationship was obtained for melting layers. One of the results is that the height of the peak of k appears slightly higher than that of Ze. The results are compared with in-situ precipitation particle measurements.
Wildenschild, D.; Porter, M. L.
2009-04-01
Significant strides have been made in recent years in imaging fluid flow in porous media using x-ray computerized microtomography (CMT) with 1-20 micron resolution; however, difficulties remain in combining representative sample sizes with optimal image resolution and data quality, and in precise quantification of the variables of interest. Tomographic imaging was for many years focused on volume rendering and the more qualitative analyses necessary for rapid assessment of the state of a patient's health. In recent years, many highly quantitative CMT-based studies of fluid flow processes in porous media have been reported; however, many of these analyses are made difficult by the complexities in processing the resulting grey-scale data into reliable applicable information such as pore network structures, phase saturations, interfacial areas, and curvatures. Yet, relatively few rigorous tests of these analysis tools have been reported so far. The work presented here was designed to evaluate the effect of image resolution and quality, as well as the validity of segmentation and surface generation algorithms, as they were applied to CMT images of (1) a high-precision glass bead pack and (2) gas-fluid configurations in a number of glass capillary tubes. Interfacial areas calculated with various algorithms were compared to actual interfacial geometries, and we found very good agreement between actual and measured surface and interfacial areas. (The test images used are available for download at http://cbee.oregonstate.edu/research/multiphase_data/index.html.)
Won, Jihye; Park, Kwan-Dong
2015-04-01
Real-time PPP-RTK positioning algorithms were developed for the purpose of obtaining precise coordinates of moving platforms. In this implementation, corrections for the satellite orbit and satellite clock were taken from the IGS-RTS products, while the ionospheric delay was removed through the ionosphere-free combination and the tropospheric delay was either handled using the Global Pressure and Temperature (GPT) model or estimated as a stochastic parameter. To improve the convergence speed, all the available GPS and GLONASS measurements were used and the Extended Kalman Filter parameters were optimized. To validate our algorithms, we collected GPS and GLONASS data from a geodetic-quality receiver installed on the roof of a moving vehicle in an open-sky environment and used IGS final products of satellite orbits and clock offsets. The horizontal positioning error fell below 10 cm within 5 minutes, and the error stayed below 10 cm even after the vehicle started moving. When the IGS-RTS products and the GPT model were used instead of the IGS precise products, the positioning accuracy of the moving vehicle was maintained at better than 20 cm once convergence was achieved at around 6 minutes.
Amaritsakul, Yongyut; Chao, Ching-Kong; Lin, Jinn
2013-01-01
Short-segment instrumentation for spine fractures is associated with relatively high failure rates. Failure of the spinal pedicle screws, including breakage and loosening, may jeopardize the fixation integrity and lead to treatment failure. Two important design objectives, bending strength and pullout strength, may conflict with each other and warrant a multiobjective optimization study. In the present study, using three-dimensional finite element (FE) analytical results based on an L25 orthogonal array, bending and pullout objective functions were developed by an artificial neural network (ANN) algorithm, and the trade-off solutions known as Pareto optima were explored by a genetic algorithm (GA). The results showed that the knee solutions of the Pareto fronts with both high bending and pullout strength ranged from 92% to 94% of their respective maxima. In mechanical validation, the results of the mathematical analyses were closely related to those of the experimental tests, with correlation coefficients of -0.91 for bending and 0.93 for pullout. The optimized design had significantly higher fatigue life (P < 0.01) and comparable pullout strength as compared with commercial screws. This multiobjective optimization study of spinal pedicle screws using a hybrid of ANN and GA could achieve an ideal design with high bending and pullout performance simultaneously.
Chander, S.; Ganguly, D.
2016-05-01
Water level was retrieved, using the AltiKa radar altimeter onboard the SARAL satellite, over the Ukai reservoir using retrieval algorithms modified specifically for inland water bodies. The methodology was based on waveform classification, waveform retracking, and dedicated inland range correction algorithms. The 40 Hz waveforms were classified using linear discriminant analysis (LDA) and a Bayesian classifier. Waveforms were retracked using the Brown, Threshold, and Offset Centre of Gravity methods. Retracking algorithms were applied to the full waveform and to sub-waveforms (only one leading edge) to estimate the improvement in the retrieved range. ECMWF operational and ERA reanalysis pressure fields and global ionosphere maps were used to estimate the range corrections precisely. Microwave and optical images were used to estimate the extent of the water body and the altimeter track location. Four GPS field trips were conducted, on the same day as the SARAL pass, using two dual-frequency GPS receivers. One GPS was mounted close to the dam in static mode, and the other was used on a moving vehicle within the reservoir in kinematic mode. The tide gauge dataset was provided by the flood cell of the Ukai dam authority for the period 1972-2015. The altimeter-retrieved water levels were then validated against the GPS survey and the in-situ tide gauge dataset. With a good selection of the virtual station (based on waveform classification and backscattering coefficient), the Ice-2 retracker and the sub-waveform retracker both perform well, with overall RMSE better than 15 cm. The results support the use of the AltiKa dataset, owing to its smaller footprint and the sharp trailing edge of the Ka-band waveform, for more accurate water level information over inland water bodies.
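A common threshold retracker, one of the family of methods named above (not necessarily the exact variant used in the study), locates the gate where the leading edge of the waveform first crosses a fraction of the peak power, interpolating between gates:

```python
import numpy as np

def threshold_retrack(waveform, threshold=0.5):
    """Return the (fractional) gate at which waveform power first
    crosses `threshold` times the peak, interpolating linearly
    between the two bracketing gates."""
    w = np.asarray(waveform, dtype=float)
    level = threshold * w.max()
    i = np.nonzero(w >= level)[0][0]   # first gate at/above the level
    if i == 0:
        return 0.0
    return (i - 1) + (level - w[i - 1]) / (w[i] - w[i - 1])
```

The retrieved fractional gate maps to a range correction relative to the nominal tracking gate; restricting the retracker to a sub-waveform containing only one leading edge, as the study does, avoids contamination from land returns in the trailing edge.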
Validation of the Saskatoon Falls Prevention Consortium's Falls Screening and Referral Algorithm.
Lawson, Sara Nicole; Zaluski, Neal; Petrie, Amanda; Arnold, Cathy; Basran, Jenny; Dal Bello-Haas, Vanina
2013-01-01
Objective: To examine the concurrent validity of the Saskatoon Falls Prevention Consortium's Falls Screening and Referral Algorithm (FSRA). Method: Twenty-nine older adults (mean age [SD] 77.7 [4.0] years) living in an independent-living seniors' residence met the inclusion criteria; they completed a demographic questionnaire and selected components of the FSRA and the Berg Balance Scale (BBS). The FSRA comprises the Elderly Fall Screening Test (EFST) and the Multi-Factor Falls Questionnaire (MFQ) and is designed to classify individuals into three categories of fall risk (high, moderate, or low) to establish appropriate management approaches. A predictive model of fall-risk probability based on a previous study was used to establish the concurrent validity of the FSRA. Results: In total, 79% of participants were classified into the FSRA's low-risk category, whereas the predictive model estimated their probability of falling at between 0.04 and 0.74, with a mean of 0.35 (SD=0.25). No statistically significant correlation was found between the FSRA and the predictive model of fall-risk probability (Spearman's ρ=0.35, p=0.06). Conclusion: The FSRA lacks concurrent validity when compared with a previously established fall-risk model and appears to over-classify individuals into the low-risk category. Further study of the FSRA as an appropriate screening tool for community-dwelling older adults is recommended.
SU-E-T-347: Validation of the Condensed History Algorithm of Geant4 Using the Fano Test
Lee, H; Mathis, M; Sawakuchi, G [The University of Texas MD Anderson Cancer Center, Houston, TX (United States)]
2014-06-01
Purpose: To validate the condensed history algorithm and physics of the Geant4 Monte Carlo toolkit for simulations of ionization chambers (ICs). This study is the first step to validate Geant4 for calculations of photon beam quality correction factors under the presence of a strong magnetic field for magnetic resonance guided linac system applications. Methods: The electron transport and boundary crossing algorithms of Geant4 version 9.6.p02 were tested under Fano conditions using the Geant4 example/application FanoCavity. User-defined parameters of the condensed history and multiple scattering algorithms were investigated under Fano test conditions for three scattering models (physics lists): G4UrbanMscModel95 (PhysListEmStandard-option3), G4GoudsmitSaundersonMsc (PhysListEmStandard-GS), and G4WentzelVIModel/G4CoulombScattering (PhysListEmStandard-WVI). Simulations were conducted using monoenergetic photon beams, ranging from 0.5 to 7 MeV and emphasizing energies from 0.8 to 3 MeV. Results: The GS and WVI physics lists provided consistent Fano test results (within ±0.5%) for maximum step sizes under 0.01 mm at 1.25 MeV, with improved performance at 3 MeV (within ±0.25%). The option3 physics list provided consistent Fano test results (within ±0.5%) for maximum step sizes above 1 mm. Optimal parameters for the option3 physics list were 10 km maximum step size with default values for other user-defined parameters: 0.2 dRoverRange, 0.01 mm final range, 0.04 range factor, 2.5 geometrical factor, and 1 skin. Simulations using the option3 physics list were ∼70 – 100 times faster compared to GS and WVI under optimal parameters. Conclusion: This work indicated that the option3 physics list passes the Fano test within ±0.5% when using a maximum step size of 10 km for energies suitable for IC calculations in a 6 MV spectrum without extensive computational times. Optimal user-defined parameters using the option3 physics list will be used in future IC simulations to
Eaves, Nick A.; Zhang, Qingan; Liu, Fengshan; Guo, Hongsheng; Dworkin, Seth B.; Thomson, Murray J.
2016-10-01
Mitigation of soot emissions from combustion devices is a global concern. For example, recent EURO 6 regulations for vehicles have placed stringent limits on soot emissions. In order to allow design engineers to achieve the goal of reduced soot emissions, they must have the tools to do so. Due to the complex nature of soot formation, which includes growth and oxidation, detailed numerical models are required to gain fundamental insights into the mechanisms of soot formation. A detailed description of the CoFlame FORTRAN code, which models sooting laminar coflow diffusion flames, is given. The code solves axial and radial velocity, temperature, species conservation, and soot aggregate and primary particle number density equations. The sectional particle dynamics model includes nucleation, PAH condensation and HACA surface growth, surface oxidation, coagulation, fragmentation, particle diffusion, and thermophoresis. The code utilizes a distributed-memory parallelization scheme with strip-domain decomposition. The CoFlame code, which has been refined in terms of coding structure, is publicly released to the research community along with this paper. CoFlame is validated against experimental data for reattachment length in an axi-symmetric pipe with a sudden expansion, and for ethylene-air and methane-air diffusion flames across multiple soot morphological parameters and gas-phase species. Finally, the parallel performance and computational costs of the code are investigated.
Sreih, Antoine G; Annapureddy, Narender; Springer, Jason; Casey, George; Byram, Kevin; Cruz, Andy; Estephan, Maya; Frangiosa, Vince; George, Michael D; Liu, Mei; Parker, Adam; Sangani, Sapna; Sharim, Rebecca; Merkel, Peter A
2016-12-01
The aim of this study was to develop and validate case-finding algorithms for granulomatosis with polyangiitis (Wegener's, GPA), microscopic polyangiitis (MPA), and eosinophilic GPA (Churg-Strauss, EGPA). Two hundred fifty patients per disease were randomly selected from two large healthcare systems using the International Classification of Diseases version 9 (ICD9) codes for GPA/EGPA (446.4) and MPA (446.0). Sixteen case-finding algorithms were constructed using a combination of ICD9 code, encounter type (inpatient or outpatient), physician specialty, use of immunosuppressive medications, and the anti-neutrophil cytoplasmic antibody type. Algorithms with the highest average positive predictive value (PPV) were validated in a third healthcare system. An algorithm excluding patients with eosinophilia or asthma and including the encounter type and physician specialty had the highest PPV for GPA (92.4%). An algorithm including patients with eosinophilia and asthma and the physician specialty had the highest PPV for EGPA (100%). An algorithm including patients with one of the diagnoses (alveolar hemorrhage, interstitial lung disease, glomerulonephritis, and acute or chronic kidney disease), encounter type, physician specialty, and immunosuppressive medications had the highest PPV for MPA (76.2%). When validated in a third healthcare system, these algorithms had high PPV (85.9% for GPA, 85.7% for EGPA, and 61.5% for MPA). Adding the anti-neutrophil cytoplasmic antibody type increased the PPV to 94.4%, 100%, and 81.2% for GPA, EGPA, and MPA, respectively. Case-finding algorithms accurately identify patients with GPA, EGPA, and MPA in administrative databases. These algorithms can be used to assemble population-based cohorts and facilitate future research in epidemiology, drug safety, and comparative effectiveness. Copyright © 2016 John Wiley & Sons, Ltd.
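The shape of such a case-finding algorithm and its PPV evaluation can be sketched as below. The record field names and the specialty value are hypothetical stand-ins; the filter mirrors the GPA-style rule described above (ICD9 446.4, specialty requirement, eosinophilia/asthma exclusion), not the paper's exact implementation.

```python
def apply_algorithm(patients, require_specialty=True,
                    exclude_eos_asthma=True):
    """Flag patients matching a GPA-style case-finding rule built from
    ICD9 code, physician specialty, and eosinophilia/asthma exclusions.
    Field names here are illustrative, not from the source paper."""
    flagged = []
    for p in patients:
        if p["icd9"] != "446.4":
            continue
        if require_specialty and p["specialty"] != "rheumatology":
            continue
        if exclude_eos_asthma and (p["eosinophilia"] or p["asthma"]):
            continue
        flagged.append(p)
    return flagged

def ppv(flagged, truth_key="confirmed_gpa"):
    """Positive predictive value of the flagged set against chart review."""
    if not flagged:
        return 0.0
    return sum(p[truth_key] for p in flagged) / len(flagged)
```

Validating the rule in a third, independent healthcare system, as the authors did, checks that the PPV is not an artifact of coding practices in the systems used to build it.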
Lüthgens, K; Abele, H; Alkier, R; Hoopmann, M; Kagan, K O
2011-08-01
Validation of the performance of the new algorithm of the FMF London for screening for trisomy 21 using a combination of maternal age, fetal nuchal translucency (NT) and maternal serum free β-hCG and PAPP-A. Between 2002 and 2007, NT was measured prospectively in 39,004 pregnancies in the context of routinely performed first trimester screening in Germany. Individual trisomy 21 risks were calculated by a combination of NT, maternal age, free β-hCG, and PAPP-A using the FMF algorithm in force at the time of investigation. In this study we recalculated the trisomy 21 risks applying the new algorithm of the FMF UK that includes the new mixture model for the NT measurement. 38,751 singleton pregnancies could be included in the study of which 109 (0.3 %) had a trisomy 21. Only 35 % of the NT measurements of euploids were above the median and 25 % of the NT measurements were below the 5th percentile of the FMF UK. For sonographers that were qualified according to level II or III of the German DEGUM system, the median NT of fetuses with trisomy 21 was 0.9 mm above the median of the FMF UK and only 0.5 mm above the median for all other sonographers. Despite the limited performance of the NT measurement, the overall detection rate for a trisomy 21 was 90.8 % when combining the NT with maternal age, PAPP-A and free β-hCG. The overall false-positive rate for a trisomy 21 was 6.5 % at a cut-off value of 1:300. In this study we were able to show that the use of the new risk algorithm of the FMF UK leads to a trisomy 21 detection rate of about 90 % at a 5 % false-positive rate in a German collective despite a significant underestimation of the NT. © Georg Thieme Verlag KG Stuttgart · New York.
R. Stübi
2009-12-01
This paper presents a first statistical validation of tropospheric ozone products derived from measurements of the IASI satellite instrument. Since the end of 2006, IASI (Infrared Atmospheric Sounding Interferometer) aboard the polar orbiter Metop-A has measured infrared spectra of the Earth's atmosphere in nadir geometry. This validation covers the northern mid-latitudes and the period from July 2007 to August 2008. Retrieval results from four different sources are presented: three are scientific products (LATMOS, LISA, LPMAA) and the fourth is the pre-operational product distributed by EUMETSAT (version 4.2). The different products are derived from different algorithms with different approaches. These differences and their implications for the retrieved products are discussed. In order to evaluate the quality and the performance of each product, comparisons with the vertical ozone concentration profiles measured by balloon sondes are performed and lead to estimates of the systematic and random errors in the IASI ozone products (profiles and partial columns). A first comparison is performed on the given profiles; a second comparison takes into account the altitude-dependent sensitivity of the retrievals. Tropospheric columnar amounts are compared to the sonde for a lower tropospheric column (surface to about 6 km) and a "total" tropospheric column (surface to about 11 km). On average both tropospheric columns have small biases for the scientific products, less than 2 Dobson Units (DU) for the lower troposphere and less than 1 DU for the total troposphere. The comparison of the still pre-operational EUMETSAT columns shows higher mean differences of about 5 DU.
Minnis, P.; Sun-Mack, S.; Bedka, K. M.; Yost, C. R.; Trepte, Q. Z.; Smith, W. L., Jr.; Painemal, D.; Chen, Y.; Palikonda, R.; Dong, X.; Xi, B.
2016-01-01
Validation is a key component of remote sensing that can take many different forms. The NASA LaRC Satellite ClOud and Radiative Property retrieval System (SatCORPS) is applied to many different imager datasets, including those from the geostationary satellites Meteosat, Himawari-8, INSAT-3D, GOES, and MTSAT, as well as from the low-Earth orbiting satellite imagers MODIS, AVHRR, and VIIRS. While each of these imagers has a similar set of channels with wavelengths near 0.65, 3.7, 11, and 12 micrometers, many differences among them can lead to discrepancies in the retrievals. These differences include spatial resolution, spectral response functions, viewing conditions, and calibrations, among others. Even when analyzed with nearly identical algorithms, it is necessary, because of those discrepancies, to validate the results from each imager separately in order to assess the uncertainties in the individual parameters. This paper presents comparisons of various SatCORPS-retrieved cloud parameters with independent measurements and retrievals from a variety of instruments. These include surface- and space-based lidar and radar data from CALIPSO and CloudSat, respectively, to assess cloud fraction, height, base, optical depth, and ice water path; satellite and surface microwave radiometers to evaluate cloud liquid water path; surface-based radiometers to evaluate optical depth and effective particle size; and airborne in-situ data to evaluate ice water content, effective particle size, and other parameters. The results of these comparisons are contrasted, and the factors influencing the differences are discussed.
M ESWARAN; S ATHUL; P NIRAJ; G R REDDY; M R RAMESH
2017-04-01
Wind-induced and earthquake-induced vibrations of structures such as super-tall towers and bridges can be efficaciously controlled by tuned liquid dampers (TLDs). This work presents a numerical simulation procedure to study the performance of the TLD-structure system through a sigma (σ)-transformation-based fluid-structure coupled solver. For this, a 'C'-based computational code has been developed. The structural equations, which are coupled with the fluid equations in order to achieve the transfer of sloshing forces to the structure for damping, are solved by the fourth-order Runge-Kutta method, while the fluid equations are solved using a finite-difference-based sigma-transformed algorithm. Different iterative and error schemes are used to optimize the code for a faster convergence rate and higher accuracy. For validation, a few experiments are conducted with a three-storey structure using a TLD arrangement. The present numerical results for the response of TLD-equipped structures match well with the experimental results. The minimum displacement of the structure is observed when the resonance condition of the coupled system is achieved through proper tuning of the TLDs. Since real-life excitations are random in nature, the performance of TLDs under random excitation has also been studied, using the Bretschneider spectrum to generate the random input wave.
Koshak, W. J.; Blakeslee, R. J.; Bailey, J. C.
1997-01-01
A linear algebraic solution is provided for the problem of retrieving the location and time of occurrence of lightning ground strikes from an Advanced Lightning Direction Finder (ALDF) network. The ALDF network measures field strength, magnetic bearing, and arrival time of lightning radio emissions, and solutions for the plane (i.e., no Earth curvature) are provided that implement all of these measurements. The accuracy of the retrieval method is tested using computer-simulated data sets, and the relative influence of bearing and arrival time data on the outcome of the final solution is formally demonstrated. The algorithm is sufficiently accurate to validate NASA's Optical Transient Detector (OTD) and Lightning Imaging Sensor (LIS). We also introduce a quadratic planar solution that is useful when only three arrival time measurements are available. The algebra of the quadratic root results is examined in detail to clarify what portions of the analysis region lead to fundamental ambiguities in source location. Complex root results are shown to be associated with the presence of measurement errors when the lightning source lies near an outer sensor baseline of the ALDF network. For arbitrary noncollinear network geometries and in the absence of measurement errors, it is shown that the two quadratic roots are equivalent (no source location ambiguity) on the outer sensor baselines. The accuracy of the quadratic planar method is tested with computer-generated data sets, and the results are generally better than those obtained from the three-station linear planar method when bearing errors are about 2 degrees.
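One standard way to obtain a linear system from arrival times alone is to difference squared ranges against a reference station, which makes the source position and emission time appear linearly. This sketch illustrates that arrival-time-only idea; the paper's full formulation also incorporates bearing measurements, which are omitted here.

```python
import numpy as np

def locate_tdoa(stations, times, c=3.0e8):
    """Linear (plane-geometry) solution for source position (x, y) and
    emission time t0 from arrival times at >= 4 stations.  Using
    d_i = c*(t_i - t0) and differencing squared ranges against the
    first station yields equations linear in x, y, and t0."""
    (x1, y1), t1 = stations[0], times[0]
    A, b = [], []
    for (xi, yi), ti in zip(stations[1:], times[1:]):
        A.append([-2 * (xi - x1), -2 * (yi - y1), 2 * c**2 * (ti - t1)])
        b.append(c**2 * (ti**2 - t1**2) - (xi**2 + yi**2) + (x1**2 + y1**2))
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol  # x, y, t0
```

With exactly four stations this is a square 3x3 system; with more, least squares averages down the timing noise.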
Kurtz, S.E.; Fields, D.E.
1983-10-01
The KSTEST code presented here is designed to perform the Kolmogorov-Smirnov one-sample test. The code may be used as a stand-alone program or the principal subroutines may be excerpted and used to service other programs. The Kolmogorov-Smirnov one-sample test is a nonparametric goodness-of-fit test. A number of codes to perform this test are in existence, but they suffer from the inability to provide meaningful results in the case of small sample sizes (number of values less than or equal to 80). The KSTEST code overcomes this inadequacy by using two distinct algorithms. If the sample size is greater than 80, an asymptotic series developed by Smirnov is evaluated. If the sample size is 80 or less, a table of values generated by Birnbaum is referenced. Valid results can be obtained from KSTEST when the sample contains from 3 to 300 data points. The program was developed on a Digital Equipment Corporation PDP-10 computer using the FORTRAN-10 language. The code size is approximately 450 card images and the typical CPU execution time is 0.19 s.
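A minimal sketch of the two ingredients this abstract describes, the one-sample D statistic and Smirnov's asymptotic series, might look as follows. The finite-n correction factor in the p-value is a common convention and an assumption here; the actual KSTEST code is FORTRAN and consults Birnbaum's tables rather than the series when the sample size is 80 or less:

```python
import math
import random

def ks_one_sample(data, cdf):
    """Kolmogorov-Smirnov one-sample statistic D against a hypothesized CDF."""
    xs = sorted(data)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        # Largest gap between the empirical CDF (just before and just after
        # the jump at x) and the hypothesized CDF.
        d = max(d, (i + 1) / n - f, f - i / n)
    return d

def ks_pvalue_asymptotic(d, n):
    # Smirnov's asymptotic series with a common finite-n correction
    # (an assumption; KSTEST uses Birnbaum's tables for n <= 80).
    lam = (math.sqrt(n) + 0.12 + 0.11 / math.sqrt(n)) * d
    s = 2.0 * sum((-1) ** (j - 1) * math.exp(-2.0 * j * j * lam * lam)
                  for j in range(1, 101))
    return max(0.0, min(1.0, s))

random.seed(1)
sample = [random.random() for _ in range(200)]   # sample drawn from U(0,1)
d = ks_one_sample(sample, lambda x: min(max(x, 0.0), 1.0))
p = ks_pvalue_asymptotic(d, len(sample))
print("D = %.4f, p = %.4f" % (d, p))
```

Since the sample really is uniform, the test should (usually) not reject: D stays small and the p-value large.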
Camarlinghi, Niccolò
2013-09-01
Lung cancer is one of the main public health issues in developed countries. It typically manifests as non-calcified pulmonary nodules that can be detected by reading lung Computed Tomography (CT) images. To assist radiologists in reading these images, researchers began developing Computer Aided Detection (CAD) methods capable of detecting lung nodules a decade ago. In this work, a CAD composed of two subprocedures is presented: one devoted to the identification of parenchymal nodules, and one devoted to the identification of nodules attached to the pleural surface. Both are upgrades of two methods previously presented as the Voxel Based Neural Approach (VBNA) CAD. The novelty of this paper consists in the massive training using the public research database of the Lung Image Database Consortium (LIDC) and in the implementation of new features for classification with respect to the original VBNA method. Finally, the proposed CAD is blindly validated on the ANODE09 dataset. The result of the validation is a score of 0.393, which corresponds to the average sensitivity of the CAD computed at seven predefined false-positive rates: 1/8, 1/4, 1/2, 1, 2, 4, and 8 FP/CT.
Trevino, Luis; Johnson, Stephen B.; Patterson, Jonathan; Teare, David
2015-01-01
The engineering development of the National Aeronautics and Space Administration's (NASA) new Space Launch System (SLS) requires cross-discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on-orbit operations. The nominal and off-nominal characteristics of SLS's elements and subsystems must be understood and matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large and complex systems engineering challenge, which is being addressed in part by focusing on the specific subsystems involved in the handling of off-nominal missions and fault tolerance with response management. Using traditional model-based system and software engineering design principles from the Unified Modeling Language (UML) and Systems Modeling Language (SysML), the Mission and Fault Management (M&FM) algorithms for the vehicle are crafted and vetted in Integrated Development Teams (IDTs) composed of multiple development disciplines such as Systems Engineering (SE), Flight Software (FSW), Safety and Mission Assurance (S&MA) and the major subsystems and vehicle elements such as Main Propulsion Systems (MPS), boosters, avionics, Guidance, Navigation, and Control (GNC), Thrust Vector Control (TVC), and liquid engines. These model-based algorithms and their development lifecycle from inception through FSW certification are an important focus of SLS's development effort to further ensure reliable detection of and response to off-nominal vehicle states during all phases of vehicle operation from pre-launch through end of flight. To test and validate these M&FM algorithms, a dedicated test-bed was developed for full Vehicle Management End-to-End Testing (VMET). For addressing fault management (FM
Hasini Banneheke
2016-01-01
Background: Trichomoniasis is a sexually transmitted parasitic infection. The World Health Organization (WHO) advocated flow charts for curable sexually transmitted infections (STIs) to improve care. In this study, an attempt was made to evaluate the validity and reliability of the WHO syndromic algorithm for vaginal discharge against the trichomonas immunochromatographic test (ICT), a test with high validity, reliability, and feasibility. Objectives: The objective was to evaluate the validity and reliability of the "WHO syndromic algorithm for vaginal discharge" against the "trichomonas ICT" as a screening tool for trichomonas infection among women of reproductive age in the Western Province, Sri Lanka. Materials and Methods: This cross-sectional study was conducted in sexually transmitted disease clinics, well woman clinics, gynecology clinics, and institutional health clinics in the Western Province, Sri Lanka. We enrolled 100 women in the age group of 15-45 years using the stratified random sampling method. They were interviewed and examined, and specimens were collected to identify trichomoniasis by culture and ICT. Two-stage analyses were done to evaluate the performance of the WHO algorithm against the trichomonas ICT. Results: In the two-stage analysis, the specificity of the syndromic algorithm improved from 80.9% to 94.4% while the false-positive rate was reduced from 19.1% to 5.6%. The net specificity was 98.7% while the false-positive rate was 1.3%. Conclusion: The validity and reliability of the WHO syndromic algorithm as a diagnostic tool for trichomoniasis can be improved by adding the trichomonas ICT.
Trevino, Luis; Johnson, Stephen B.; Patterson, Jonathan; Teare, David
2015-01-01
The development of the Space Launch System (SLS) launch vehicle requires cross-discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on-orbit operations. The characteristics of these systems must be matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large, complex systems engineering challenge, being addressed in part by focusing on the specific subsystems' handling of off-nominal missions and fault tolerance. Using traditional model-based system and software engineering design principles from the Unified Modeling Language (UML), the Mission and Fault Management (M&FM) algorithms are crafted and vetted in specialized Integrated Development Teams composed of multiple development disciplines. NASA has also formed an M&FM team for addressing fault management early in the development lifecycle. This team has developed a dedicated Vehicle Management End-to-End Testbed (VMET) that integrates specific M&FM algorithms, specialized nominal and off-nominal test cases, and vendor-supplied physics-based launch vehicle subsystem models. The flexibility of VMET enables thorough testing of the M&FM algorithms by providing configurable suites of both nominal and off-nominal test cases to validate the algorithms utilizing actual subsystem models. The intent is to validate the algorithms and substantiate them with performance baselines for each of the vehicle subsystems in an independent platform exterior to flight software test processes. In any software development process there is inherent risk in the interpretation and implementation of concepts into software through requirements and test processes. Risk reduction is addressed by working with other organizations such as S
An Algorithm for Determining the Validity of Arguments
张会凌
2001-01-01
Based on the principle of the truth table, this paper gives a simplified algorithm for determining the validity of a logical argument. The algorithm can be implemented directly on a computer and reduces the amount of work compared with judging validity manually using a combined truth table.
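The underlying truth-table decision procedure (without the paper's simplifications) can be sketched as: an argument is valid iff no truth assignment makes every premise true and the conclusion false. The premise/conclusion encodings below are illustrative examples:

```python
from itertools import product

def is_valid(premises, conclusion, variables):
    """An argument is valid iff every truth assignment that makes all
    premises true also makes the conclusion true (truth-table method)."""
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False          # found a counterexample row
    return True

# Example: modus ponens -- from P and P -> Q, infer Q.
premises = [lambda e: e["P"], lambda e: (not e["P"]) or e["Q"]]
conclusion = lambda e: e["Q"]
print(is_valid(premises, conclusion, ["P", "Q"]))        # prints True

# Affirming the consequent -- from Q and P -> Q, infer P -- is invalid.
bad_premises = [lambda e: e["Q"], lambda e: (not e["P"]) or e["Q"]]
print(is_valid(bad_premises, lambda e: e["P"], ["P", "Q"]))   # prints False
```

The brute-force enumeration is exponential in the number of variables; the paper's contribution is precisely to cut down this work, which the sketch does not attempt.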
Chul Hyoung Lyoo; Paolo Zanotti-Fregonara; Zoghbi, Sami S.; Jeih-San Liow; Rong Xu; Pike, Victor W.; Zarate, Carlos A.; Masahiro Fujita; Innis, Robert B.
2014-01-01
Image-derived input function (IDIF) obtained by manually drawing carotid arteries (manual-IDIF) can be reliably used in [(11)C](R)-rolipram positron emission tomography (PET) scans. However, manual-IDIF is time consuming and subject to inter- and intra-operator variability. To overcome this limitation, we developed a fully automated technique for deriving IDIF with a supervised clustering algorithm (SVCA). To validate this technique, 25 healthy controls and 26 patients with moderate to severe...
Gorham, James D; Ranson, Matthew S; Smith, Janebeth C; Gorham, Beverly J; Muirhead, Kristen-Ashley
2012-12-01
State-of-the-art, genome-wide assessment of mouse genetic background uses single nucleotide polymorphism (SNP) PCR. As SNP analysis can use multiplex testing, it is amenable to high-throughput analysis and is the preferred method for shared resource facilities that offer genetic background assessment of mouse genomes. However, a typical individual SNP query yields only two alleles (A vs. B), limiting the application of this methodology to distinguishing contributions from no more than two inbred mouse strains. By contrast, simple sequence length polymorphism (SSLP) analysis yields multiple alleles but is not amenable to high-throughput testing. We sought to devise a SNP-based technique to identify donor strain origins when three distinct mouse strains potentially contribute to the genetic makeup of an individual mouse. A computational approach was used to devise a three-strain analysis (3SA) algorithm that would permit identification of three genetic backgrounds while still using a binary-output SNP platform. A panel of 15 mosaic mice with contributions from BALB/c, C57Bl/6, and DBA/2 genetic backgrounds was bred and analyzed using a genome-wide SNP panel using 1449 markers. The 3SA algorithm was applied and then validated using SSLP. The 3SA algorithm assigned 85% of 1449 SNPs as informative for the C57Bl/6, BALB/c, or DBA/2 backgrounds, respectively. Testing the panel of 15 F2 mice, the 3SA algorithm predicted donor strain origins genome-wide. Donor strain origins predicted by the 3SA algorithm correlated perfectly with results from individual SSLP markers located on five different chromosomes (n=70 tests). We have established and validated an analysis algorithm based on binary SNP data that can successfully identify the donor strain origins of chromosomal regions in mice that are bred from three distinct inbred mouse strains.
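The core idea of the 3SA algorithm described above can be sketched as follows. The marker names and allele assignments are hypothetical, and the selection rule (a SNP is informative for the strain whose allele differs from the other two) is a simplified reading of the abstract, not the published algorithm:

```python
# Hypothetical A/B SNP calls; a SNP is informative for the strain whose
# allele is the "odd one out" among the three reference strains.
REF = {  # marker -> (C57Bl/6, BALB/c, DBA/2) alleles, illustrative only
    "rs1": ("A", "B", "B"),
    "rs2": ("B", "A", "B"),
    "rs3": ("B", "B", "A"),
    "rs4": ("A", "A", "A"),   # uninformative: all three strains identical
}
STRAINS = ("C57Bl/6", "BALB/c", "DBA/2")

def informative_for(alleles):
    """Return the strain this SNP can identify, or None."""
    for i, strain in enumerate(STRAINS):
        others = [a for j, a in enumerate(alleles) if j != i]
        if alleles[i] != others[0] and others[0] == others[1]:
            return strain
    return None

def assign_origin(marker, call):
    strain = informative_for(REF[marker])
    if strain is None:
        return None
    odd_allele = REF[marker][STRAINS.index(strain)]
    # A call matching the odd allele points to that strain; otherwise the
    # region comes from one of the other two (ambiguous at this SNP alone).
    return strain if call == odd_allele else "not " + strain

print(assign_origin("rs1", "A"))   # region consistent with C57Bl/6
print(assign_origin("rs4", "A"))   # uninformative SNP
```

With a genome-wide panel, combining many such per-SNP calls along a chromosome is what lets a binary-output platform resolve three donor strains.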
Mcebisi Mkhwanazi
2015-11-01
The Surface Energy Balance Algorithm for Land (SEBAL) is one of the remote sensing (RS) models increasingly being used to determine evapotranspiration (ET). SEBAL is widely used mainly because it requires minimal weather data and no prior knowledge of surface characteristics. However, it has been observed to underestimate ET under advective conditions because it disregards advection as another source of energy available for evaporation. A modified SEBAL model was therefore developed in this study. An advection component, absent in the original SEBAL, was introduced such that the energy available for evapotranspiration was the sum of net radiation and advected heat energy. The improved SEBAL model was termed SEBAL-Advection, or SEBAL-A. An important aspect of the improved model is the estimation of advected energy using minimal weather data. While other RS models require hourly weather data to account for advection (e.g., METRIC), SEBAL-A only requires daily averages of limited weather data, making it appropriate even in areas where weather data at short time steps may not be available. In this study, the original SEBAL model was first evaluated under advective and non-advective conditions near Rocky Ford in southeastern Colorado, a semi-arid area where afternoon advection is a common occurrence. The SEBAL model was found to incur large errors when there was advection (indicated by higher wind speed and warm, dry air). SEBAL-A was then developed and validated in the same area under standard surface conditions, described as healthy alfalfa 40–60 cm tall without water stress. ET values estimated using the original and modified SEBAL were compared to ET values measured with a large weighing lysimeter. When the SEBAL ET was compared to SEBAL-A ET values, the latter showed improved performance, with the ET Mean Bias Error (MBE) reduced from −17
Krishnamoorthy ES
2008-06-01
Abstract Background The criterion for dementia implicit in DSM-IV is widely used in research but not fully operationalised. The 10/66 Dementia Research Group sought to do this using assessments from their one phase dementia diagnostic research interview, and to validate the resulting algorithm in a population-based study in Cuba. Methods The criterion was operationalised as a computerised algorithm, applying clinical principles, based upon the 10/66 cognitive tests, clinical interview and informant reports; the Community Screening Instrument for Dementia, the CERAD 10 word list learning and animal naming tests, the Geriatric Mental State, and the History and Aetiology Schedule – Dementia Diagnosis and Subtype. This was validated in Cuba against a local clinician DSM-IV diagnosis and the 10/66 dementia diagnosis (originally calibrated probabilistically against clinician DSM-IV diagnoses in the 10/66 pilot study). Results The DSM-IV sub-criteria were plausibly distributed among clinically diagnosed dementia cases and controls. The clinician diagnoses agreed better with 10/66 dementia diagnosis than with the more conservative computerized DSM-IV algorithm. The DSM-IV algorithm was particularly likely to miss less severe dementia cases. Those with a 10/66 dementia diagnosis who did not meet the DSM-IV criterion were less cognitively and functionally impaired compared with the DSM-IV-confirmed cases, but still grossly impaired compared with those free of dementia. Conclusion The DSM-IV criterion, strictly applied, defines a narrow category of unambiguous dementia characterized by marked impairment. It may be specific but incompletely sensitive to clinically relevant cases. The 10/66 dementia diagnosis defines a broader category that may be more sensitive, identifying genuine cases beyond those defined by our DSM-IV algorithm, with relevance to the estimation of the population burden of this disorder.
Prince, Martin J; de Rodriguez, Juan Llibre; Noriega, L; Lopez, A; Acosta, Daisy; Albanese, Emiliano; Arizaga, Raul; Copeland, John RM; Dewey, Michael; Ferri, Cleusa P; Guerra, Mariella; Huang, Yueqin; Jacob, KS; Krishnamoorthy, ES; McKeigue, Paul; Sousa, Renata; Stewart, Robert J; Salas, Aquiles; Sosa, Ana Luisa; Uwakwa, Richard
2008-01-01
Background The criterion for dementia implicit in DSM-IV is widely used in research but not fully operationalised. The 10/66 Dementia Research Group sought to do this using assessments from their one phase dementia diagnostic research interview, and to validate the resulting algorithm in a population-based study in Cuba. Methods The criterion was operationalised as a computerised algorithm, applying clinical principles, based upon the 10/66 cognitive tests, clinical interview and informant reports; the Community Screening Instrument for Dementia, the CERAD 10 word list learning and animal naming tests, the Geriatric Mental State, and the History and Aetiology Schedule – Dementia Diagnosis and Subtype. This was validated in Cuba against a local clinician DSM-IV diagnosis and the 10/66 dementia diagnosis (originally calibrated probabilistically against clinician DSM-IV diagnoses in the 10/66 pilot study). Results The DSM-IV sub-criteria were plausibly distributed among clinically diagnosed dementia cases and controls. The clinician diagnoses agreed better with 10/66 dementia diagnosis than with the more conservative computerized DSM-IV algorithm. The DSM-IV algorithm was particularly likely to miss less severe dementia cases. Those with a 10/66 dementia diagnosis who did not meet the DSM-IV criterion were less cognitively and functionally impaired compared with the DSM-IV-confirmed cases, but still grossly impaired compared with those free of dementia. Conclusion The DSM-IV criterion, strictly applied, defines a narrow category of unambiguous dementia characterized by marked impairment. It may be specific but incompletely sensitive to clinically relevant cases. The 10/66 dementia diagnosis defines a broader category that may be more sensitive, identifying genuine cases beyond those defined by our DSM-IV algorithm, with relevance to the estimation of the population burden of this disorder. PMID:18577205
Barsky, Sanford; Gentchev, Lynda; Basu, Amitabha; Jimenez, Rafael; Boussaid, Kamel; Gholap, Abhi
2009-11-01
While tissue microarrays (TMAs) are a form of high-throughput screening, they presently still require manual construction and interpretation. Because of predicted increasing demand for TMAs, we investigated whether their construction could be automated. We created both epithelial recognition algorithms (ERAs) and field of view (FOV) algorithms that could analyze virtual slides and select the areas of highest cancer cell density in the tissue block for coring (algorithmic TMA) and compared these to the cores manually selected (manual TMA) from the same tissue blocks. We also constructed TMAs with TMAker, a robot guided by these algorithms (robotic TMA). We compared each of these TMAs to each other. Our imaging algorithms produced a grid of hundreds of FOVs, identified cancer cells in a stroma background and calculated the epithelial percentage (cancer cell density) in each FOV. Those with the highest percentages guided core selection and TMA construction. Algorithmic TMA and robotic TMA were overall approximately 50% greater in cancer cell density compared with manual TMA. These observations held for breast, colon, and lung cancer TMAs. Our digital image algorithms were effective in automating TMA construction.
Pruss, Dmitry; Morris, Brian; Hughes, Elisha; Eggington, Julie M; Esterling, Lisa; Robinson, Brandon S; van Kan, Aric; Fernandes, Priscilla H; Roa, Benjamin B; Gutin, Alexander; Wenstrup, Richard J; Bowles, Karla R
2014-08-01
BRCA1 and BRCA2 sequencing analysis detects variants of uncertain clinical significance in approximately 2% of patients undergoing clinical diagnostic testing in our laboratory. The reclassification of these variants into either a pathogenic or benign clinical interpretation is critical for improved patient management. We developed a statistical variant reclassification tool based on the premise that probands with disease-causing mutations are expected to have more severe personal and family histories than those having benign variants. The algorithm was validated using simulated variants based on approximately 145,000 probands, as well as 286 BRCA1 and 303 BRCA2 true variants. Positive and negative predictive values of ≥99% were obtained for each gene. Although the history weighting algorithm was not designed to detect alleles of lower penetrance, analysis of the hypomorphic mutations c.5096G>A (p.Arg1699Gln; BRCA1) and c.7878G>C (p.Trp2626Cys; BRCA2) indicated that the history weighting algorithm is able to identify some lower penetrance alleles. The history weighting algorithm is a powerful tool that accurately assigns actionable clinical classifications to variants of uncertain clinical significance. While being developed for reclassification of BRCA1 and BRCA2 variants, the history weighting algorithm is expected to be applicable to other cancer- and non-cancer-related genes.
Bohdan Nosyk
OBJECTIVE: To define a population-level cohort of individuals infected with the human immunodeficiency virus (HIV) in the province of British Columbia from available registries and administrative datasets using a validated case-finding algorithm. METHODS: Individuals were identified for possible cohort inclusion from the BC Centre for Excellence in HIV/AIDS (CfE) drug treatment program (antiretroviral therapy) and laboratory testing datasets (plasma viral load (pVL) and CD4 diagnostic test results), the BC Centre for Disease Control (CDC) provincial HIV surveillance database (positive HIV tests), as well as databases held by the BC Ministry of Health (MoH): the Discharge Abstract Database (hospitalizations), the Medical Services Plan (physician billing), and PharmaNet (additional HIV-related medications). A validated case-finding algorithm was applied to distinguish true HIV cases from those likely to have been misclassified. The sensitivity of the algorithms was assessed as the proportion of confirmed cases (those with records in the CfE, CDC, and MoH databases) positively identified by each algorithm. A priori hypotheses were generated and tested to verify excluded cases. RESULTS: A total of 25,673 individuals were identified as having at least one HIV-related health record. Among 9,454 unconfirmed cases, the selected case-finding algorithm identified 849 individuals believed to be HIV-positive. The sensitivity of this algorithm among confirmed cases was 88%. Those excluded from the cohort were more likely to be female (44.4% vs. 22.5%; p<0.01), had a lower mortality rate (2.18 per 100 person-years (100PY) vs. 3.14/100PY; p<0.01), and had lower median rates of health service utilization (days of medications dispensed: 9745/100PY vs. 10266/100PY; p<0.01; days of inpatient care: 29/100PY vs. 98/100PY; p<0.01; physician billings: 602/100PY vs. 2,056/100PY; p<0.01). CONCLUSIONS: The application of validated case-finding algorithms and subsequent
Shaheen Abdel
2007-10-01
Abstract Background Acetaminophen overdose is the most common cause of acute liver failure (ALF). Our objective was to develop coding algorithms using administrative data for identifying patients with acetaminophen overdose and hepatic complications. Methods Patients hospitalized for acetaminophen overdose were identified using population-based administrative data (1995–2004). Coding algorithms for acetaminophen overdose, hepatotoxicity (alanine aminotransferase >1,000 U/L), and ALF (encephalopathy and international normalized ratio >1.5) were derived using chart abstraction data as the reference and logistic regression analyses. Results Of 1,776 potential acetaminophen overdose cases, the charts of 181 patients were reviewed; 139 (77%) had confirmed acetaminophen overdose. An algorithm including codes 965.4 (ICD-9-CM) and T39.1 (ICD-10) was highly accurate (sensitivity 90% [95% confidence interval 84–94%], specificity 83% [69–93%], positive predictive value 95% [89–98%], negative predictive value 71% [57–83%], c-statistic 0.87 [0.80–0.93]). Algorithms for hepatotoxicity (including codes for hepatic necrosis, toxic hepatitis, and encephalopathy) and ALF (hepatic necrosis and encephalopathy) were also highly predictive (c-statistics = 0.88). The accuracy of the algorithms was not affected by age, gender, or ICD coding system, but the acetaminophen overdose algorithm varied between hospitals (c-statistics 0.84–0.98; P = 0.003). Conclusion Administrative databases can be used to identify patients with acetaminophen overdose and hepatic complications. If externally validated, these algorithms will facilitate investigations of the epidemiology and outcomes of acetaminophen overdose.
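The validation statistics reported above (sensitivity, specificity, positive and negative predictive values) follow directly from a 2x2 table of algorithm results against chart review. A sketch, with hypothetical cell counts chosen only to be consistent with 139 confirmed cases among 181 reviewed charts:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV from a 2x2 validation table."""
    return {
        "sensitivity": tp / (tp + fn),   # algorithm-positive among true cases
        "specificity": tn / (tn + fp),   # algorithm-negative among non-cases
        "ppv": tp / (tp + fp),           # true cases among algorithm-positives
        "npv": tn / (tn + fn),           # non-cases among algorithm-negatives
    }

# Illustrative counts only (not the study's actual 2x2 table).
m = diagnostic_metrics(tp=125, fp=7, fn=14, tn=35)
for name, value in m.items():
    print("%s: %.1f%%" % (name, 100 * value))
```

Confidence intervals and c-statistics, as reported in the abstract, would require the per-patient predicted probabilities rather than the collapsed table.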
Freedson, Patty S; Lyden, Kate; Kozey-Keadle, Sarah; Staudenmayer, John
2011-12-01
Previous work from our laboratory provided a "proof of concept" for use of artificial neural networks (nnets) to estimate metabolic equivalents (METs) and identify activity type from accelerometer data (Staudenmayer J, Pober D, Crouter S, Bassett D, Freedson P, J Appl Physiol 107: 1300-1307, 2009). The purpose of this study was to develop new nnets based on a larger, more diverse, training data set and apply these nnet prediction models to an independent sample to evaluate the robustness and flexibility of this machine-learning modeling technique. The nnet training data set (University of Massachusetts) included 277 participants who each completed 11 activities. The independent validation sample (n = 65) (University of Tennessee) completed one of three activity routines. Criterion measures were 1) measured METs assessed using open-circuit indirect calorimetry; and 2) observed activity to identify activity type. The nnet input variables included five accelerometer count distribution features and the lag-1 autocorrelation. The bias and root mean square errors for the nnet MET model trained on University of Massachusetts data and applied to University of Tennessee data were +0.32 and 1.90 METs, respectively. Seventy-seven percent of the activities were correctly classified as sedentary/light, moderate, or vigorous intensity. For activity type, household and locomotion activities were correctly classified by the nnet 98.1 and 89.5% of the time, respectively, and sport was correctly classified 23.7% of the time. This machine-learning technique operates reasonably well when applied to an independent sample. We propose the creation of an open-access activity dictionary, including accelerometer data from a broad array of activities, leading to further improvements in prediction accuracy for METs, activity intensity, and activity type.
Postley, John E; Luo, Yanting; Wong, Nathan D; Gardin, Julius M
2015-11-15
Atherosclerotic cardiovascular disease (ASCVD) events are the leading cause of death in the United States and globally. Traditional global risk algorithms may miss 50% of patients who experience ASCVD events. Noninvasive ultrasound evaluation of the carotid and femoral arteries can identify subjects at high risk for ASCVD events. We examined the ability of different global risk algorithms to identify subjects with femoral and/or carotid plaques found by ultrasound. The study population consisted of 1,464 asymptomatic adults (39.8% women) aged 23 to 87 years without previous evidence of ASCVD who had ultrasound evaluation of the carotid and femoral arteries. Three ASCVD risk algorithms (10-year Framingham Risk Score [FRS], 30-year FRS, and lifetime risk) were compared for the 939 subjects who met the algorithm age criteria. The frequency of femoral plaque as the only plaque was 18.3% in the total group and 14.8% in the risk algorithm groups (n = 939), with no significant difference between genders. Those identified as high risk by the lifetime risk algorithm included the most men and women with either femoral or carotid plaques (59% and 55%, respectively), but this algorithm had lower specificity because the proportion of high-risk subjects who actually had plaques was lower (50% and 35%) than for those classified as high risk by the FRS algorithms. In conclusion, ultrasound evaluation of the carotid and femoral arteries can identify subjects at risk of ASCVD events who are missed by traditional risk-predicting algorithms. The large proportion of subjects with femoral plaque only supports including both femoral and carotid arteries in ultrasound evaluation.
Caroline A. Rickards; Nisarg Vyas; Kathy L. Ryan; Kevin R. Ward; David Andre; Gennifer M. Hurst; Chelsea R. Barrera; Victor A. Convertino
2014-01-01
.... The purpose of this study was to test the hypothesis that low-level physiological signals can be used to develop a machine-learning algorithm for tracking changes in central blood volume that will...
Gongliang Yu
2014-04-01
Satellite remote sensing is a highly useful tool for monitoring chlorophyll-a (Chl-a) concentration in water bodies. Remote sensing algorithms based on near-infrared-red (NIR-red) wavelengths have demonstrated great potential for retrieving Chl-a in inland waters. This study tested the performance of a recently developed NIR-red based algorithm, SAMO-LUT (Semi-Analytical Model Optimizing and Look-Up Tables), using an extensive dataset collected from five Asian lakes. Results demonstrated that Chl-a retrieved by the SAMO-LUT algorithm was strongly correlated with measured Chl-a (R2 = 0.94), and the root-mean-square error (RMSE) and normalized root-mean-square error (NRMS) were 8.9 mg∙m−3 and 72.6%, respectively. However, the SAMO-LUT algorithm yielded large errors for sites where Chl-a was less than 10 mg∙m−3 (RMSE = 1.8 mg∙m−3 and NRMS = 217.9%). This was because differences in water-leaving radiances at the NIR-red wavelengths used in the SAMO-LUT (i.e., 665 nm, 705 nm and 754 nm) were too small due to low concentrations of water constituents. Using a blue-green algorithm (OC4E) instead of the SAMO-LUT for waters with low constituent concentrations would have reduced the RMSE and NRMS to 1.0 mg∙m−3 and 16.0%, respectively. This indicates that (1) the NIR-red algorithm does not work well when water constituent concentrations are relatively low; (2) different algorithms should be used in light of water constituent concentrations; and thus (3) it is necessary to develop a classification method for selecting the appropriate algorithm.
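The error measures reported in this abstract can be reproduced from paired retrieved and measured values. The sketch below assumes NRMS is the RMSE normalized by the mean observed value, one common convention that the abstract does not spell out, and uses hypothetical Chl-a values:

```python
import math

def rmse(pred, obs):
    """Root-mean-square error between predictions and observations."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def nrms(pred, obs):
    # Normalized RMSE as a percentage of the mean observation
    # (an assumed convention; other normalizers, e.g. the range, also exist).
    return 100.0 * rmse(pred, obs) / (sum(obs) / len(obs))

# Hypothetical retrieved vs. measured Chl-a values in mg/m^3.
measured  = [5.0, 12.0, 30.0, 55.0, 80.0]
retrieved = [6.5, 10.0, 33.0, 50.0, 86.0]
print("RMSE = %.2f mg/m^3, NRMS = %.1f%%" % (rmse(retrieved, measured),
                                             nrms(retrieved, measured)))
```

The normalized form is what lets the abstract flag a small absolute RMSE (1.8 mg∙m−3) as a large relative error (217.9%) at low-Chl-a sites.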
A Distributed Firewall Rules Validity Detection Algorithm
汤昂昂; 陈永波; 姬东鸿
2015-01-01
This paper defines a structured query sentence to formalize the query process and proposes a distributed firewall rule validity detection algorithm based on logical operations over semi-isomorphic firewall decision diagrams (SFDDs). The algorithm preserves the consistency, completeness, and compactness of the original rules, eliminates anomalies among the rules of individual firewalls, and keeps the semantics of the original rules unchanged. Simulation results demonstrate that the algorithm detects rule validity quickly and effectively.
Levine, Adam C; Glavis-Bloom, Justin; Modi, Payal; Nasrin, Sabiha; Atika, Bita; Rege, Soham; Robertson, Sarah; Schmid, Christopher H; Alam, Nur H
2016-01-01
Summary Background Dehydration due to diarrhoea is a leading cause of child death worldwide, yet no clinical tools for assessing dehydration have been validated in resource-limited settings. The Dehydration: Assessing Kids Accurately (DHAKA) score was derived for assessing dehydration in children with diarrhoea in a low-income country setting. In this study, we aimed to externally validate the DHAKA score in a new population of children and compare its accuracy and reliability to the current Integrated Management of Childhood Illness (IMCI) algorithm. Methods DHAKA was a prospective cohort study done in children younger than 60 months presenting to the International Centre for Diarrhoeal Disease Research, Bangladesh, with acute diarrhoea (defined by WHO as three or more loose stools per day for less than 14 days). Local nurses assessed children and classified their dehydration status using both the DHAKA score and the IMCI algorithm. Serial weights were obtained and dehydration status was established by percentage weight change with rehydration. We did regression analyses to validate the DHAKA score and compared the accuracy and reliability of the DHAKA score and IMCI algorithm with receiver operator characteristic (ROC) curves and the weighted κ statistic. This study was registered with ClinicalTrials.gov, number NCT02007733. Findings Between March 22, 2015, and May 15, 2015, 496 patients were included in our primary analyses. On the basis of our criterion standard, 242 (49%) of 496 children had no dehydration, 184 (37%) of 496 had some dehydration, and 70 (14%) of 496 had severe dehydration. In multivariable regression analyses, each 1-point increase in the DHAKA score predicted an increase of 0·6% in the percentage dehydration of the child and increased the odds of both some and severe dehydration by a factor of 1·4. Both the accuracy and reliability of the DHAKA score were significantly greater than those of the IMCI algorithm. Interpretation The DHAKA score
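The weighted κ statistic used above to compare the reliability of the two scores can be computed from paired ordinal classifications. The sketch below uses linear weights and hypothetical ratings for ten children (0 = none, 1 = some, 2 = severe dehydration); the study's actual weighting scheme and data are not given in the abstract:

```python
def weighted_kappa(r1, r2, k):
    """Weighted Cohen's kappa for two raters over ordinal categories 0..k-1,
    with linear disagreement weights."""
    n = len(r1)
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(r1, r2):
        obs[a][b] += 1.0 / n                     # observed joint proportions
    p1 = [sum(obs[i][j] for j in range(k)) for i in range(k)]   # rater 1 marginals
    p2 = [sum(obs[i][j] for i in range(k)) for j in range(k)]   # rater 2 marginals
    w = lambda i, j: 1.0 - abs(i - j) / (k - 1)  # linear agreement weights
    po = sum(w(i, j) * obs[i][j] for i in range(k) for j in range(k))
    pe = sum(w(i, j) * p1[i] * p2[j] for i in range(k) for j in range(k))
    return (po - pe) / (1.0 - pe)

# Hypothetical classifications by the two scoring systems, illustrative only.
dhaka = [0, 0, 1, 1, 2, 0, 1, 2, 2, 0]
imci  = [0, 1, 1, 1, 2, 0, 0, 2, 2, 0]
print("linear weighted kappa: %.3f" % weighted_kappa(dhaka, imci, 3))
```

Linear weights give partial credit for near-misses (none vs. some) while fully penalizing none vs. severe, which suits ordered dehydration categories.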
Cippitelli, Enea; Gasparrini, Samuele; Spinsante, Susanna; Gambi, Ennio
2015-01-14
The Microsoft Kinect sensor has gained attention as a tool for gait analysis for several years. Despite the many advantages the sensor provides, however, the lack of a native capability to extract joints from the side view of a human body still limits the adoption of the device in a number of relevant applications. This paper presents an algorithm to locate and estimate the trajectories of up to six joints extracted from the side depth view of a human body captured by the Kinect device. The algorithm is then applied to extract data that can be exploited to provide an objective score for the "Get Up and Go Test", which is typically adopted for gait analysis in rehabilitation settings. Starting from the depth-data stream provided by the Microsoft Kinect sensor, the proposed algorithm relies on anthropometric models only to locate and identify the positions of the joints. Unlike machine-learning approaches, this solution avoids complex computations, which usually require significant resources. The reliability of the joint positions output by the algorithm is evaluated by comparison to a marker-based system. Tests show that the trajectories extracted by the proposed algorithm adhere to the reference curves better than the ones obtained from the skeleton generated by the native applications provided within the Microsoft Kinect (Microsoft Corporation, Redmond, WA, USA, 2013) and OpenNI (OpenNI organization, Tel Aviv, Israel, 2013) Software Development Kits.
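Accuracy against a marker-based reference, as evaluated here, typically reduces to a per-joint trajectory error. A minimal sketch with hypothetical frame data; the paper's actual adherence metric may differ:

```python
import math

def trajectory_rmse(estimated, reference):
    """Root-mean-square error between an estimated joint trajectory and a
    marker-based reference, both lists of (x, y, z) positions per frame."""
    assert len(estimated) == len(reference)
    sq = [sum((a - b) ** 2 for a, b in zip(p, q))
          for p, q in zip(estimated, reference)]
    return math.sqrt(sum(sq) / len(sq))

# Two frames, 1-unit error in z on the second frame.
err = trajectory_rmse([(0, 0, 0), (1, 0, 0)], [(0, 0, 0), (1, 0, 1)])
```

A lower RMSE for the proposed algorithm than for the SDK skeletons would correspond to the "adheres better to the reference curves" claim above.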
Barbier, Paolo; Alimento, Marina; Berna, Giovanni; Celeste, Fabrizio; Gentile, Francesco; Mantero, Antonio; Montericcio, Vincenzo; Muratori, Manuela
2007-05-01
Large files produced by standard compression algorithms slow down the spread of digital and tele-echocardiography. We validated high-grade compression of echocardiographic video with the new Moving Picture Experts Group (MPEG)-4 algorithms in a multicenter study. Seven expert cardiologists blindly scored (5-point scale) 165 uncompressed and compressed 2-dimensional and color Doppler video clips, based on combined diagnostic content and image quality (uncompressed files as references). One digital video and 3 MPEG-4 algorithms (WM9, MV2, and DivX) were used, the latter at 3 compression levels (0%, 35%, and 60%). Compressed file sizes decreased from 12-83 MB to 0.03-2.3 MB (reduction ratios of 1:26 to 1:1051). The mean SD of differences was 0.81 for intraobserver variability (uncompressed and digital video files). Compared with uncompressed files, only the DivX mean score at 35% (P = .04) and 60% (P = .001) compression was significantly reduced. In subcategory analysis, these differences were still significant for gray-scale and fundamental imaging but not for color or second harmonic tissue imaging. Original image quality, session sequence, compression grade, and bitrate were all independent determinants of the mean score. Our study supports the use of MPEG-4 algorithms to greatly reduce echocardiographic file sizes, thus facilitating archiving and transmission. Quality evaluation studies should account for the many independent variables that affect image quality grading.
Bircher, Simone; Skou, Niels; Kerr, Yann H.
2013-01-01
The Soil Moisture and Ocean Salinity (SMOS) satellite with a passive L-band radiometer monitors surface soil moisture. In addition to soil moisture, vegetation optical thickness tau(NAD) is retrieved (L2 product) from brightness temperatures (T-B, L1C product) using an algorithm based on the L-band Microwave Emission of the Biosphere (L-MEB) model with initial guesses on the two parameters (derived from ECMWF products and ECOCLIMAP Leaf Area Index, respectively) and other auxiliary input. This paper presents the validation work carried out in the Skjern River Catchment, Denmark. L1C/L2 data and the most sensitive algorithm parameters were analyzed by means of network and airborne campaign data collected within one SMOS pixel (44 km diameter). The SMOS retrieval is based on the prevailing low vegetation class. For the L1C comparison, T-B's were calculated from in situ soil moisture using L-MEB.
CMS Collaboration
2015-01-01
The Mean-Timer algorithm is used for the local reconstruction within the CMS Drift Tubes (DT) for muons that appear to be out-of-time (OOT) or lack measured hits in one of the two space projections. Compared to a standard linear fit, this method improves the spatial resolution for OOT muons. It also allows a precise time measurement that can be used to tag OOT muons, in order either to reject them (e.g. as a result of OOT pile-up) or to select them for exotic physics analyses. The algorithm was initially developed and tuned on simulation. We present here the first performance results obtained on 2012 data.
Pokhrel, Damodar; Murphy, Martin J.; Todor, Dorin A.; Weiss, Elisabeth; Williamson, Jeffrey F. [Department of Radiation Oncology, School of Medicine, Virginia Commonwealth University, Richmond, Virginia 23298 (United States)
2010-09-15
Purpose: To experimentally validate a new algorithm for reconstructing the 3D positions of implanted brachytherapy seeds from postoperatively acquired 2D cone-beam CT (CBCT) projection images. Methods: The iterative forward projection matching (IFPM) algorithm finds the 3D seed geometry that minimizes the sum of the squared intensity differences between computed projections of an initial estimate of the seed configuration and radiographic projections of the implant. In-house machined phantoms, containing arrays of 12 and 72 seeds, respectively, are used to validate this method. Also, four ¹⁰³Pd postimplant patients are scanned using an ACUITY digital simulator. Three to ten x-ray images are selected from the CBCT projection set and processed to create binary seed-only images. To quantify IFPM accuracy, the reconstructed seed positions are forward projected and overlaid on the measured seed images to find the nearest-neighbor distance between measured and computed seed positions for each image pair. Also, the estimated 3D seed coordinates are compared to known seed positions in the phantom and clinically obtained VariSeed planning coordinates for the patient data. Results: For the phantom study, the seed localization error is (0.58 ± 0.33) mm. For all four patient cases, the mean registration error is better than 1 mm when compared against the measured seed projections. IFPM converges in 20-28 iterations, with a computation time of about 1.9-2.8 min/iteration on a 1 GHz processor. Conclusions: The IFPM algorithm avoids the need to match corresponding seeds in each projection as required by standard back-projection methods. The authors' results demonstrate ~1 mm accuracy in reconstructing the 3D positions of brachytherapy seeds from the measured 2D projections. This algorithm also successfully localizes overlapping, clustered, and highly migrated seeds in the implant.
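The nearest-neighbor registration error quoted above can be sketched as follows; the 2D seed coordinates are hypothetical, and this brute-force search stands in for whatever matching step the authors actually implemented:

```python
import math

def mean_nn_distance(measured, computed):
    """Mean distance (same units as input, e.g. mm) from each measured 2D
    seed position to its nearest forward-projected seed position."""
    return sum(min(math.dist(p, q) for q in computed)
               for p in measured) / len(measured)

# Hypothetical projection pair: two seeds, each slightly misplaced.
err = mean_nn_distance([(0.0, 0.0), (10.0, 0.0)],
                       [(0.5, 0.0), (10.0, 1.0)])
```

Averaging this quantity over all image pairs gives a per-case registration error of the kind reported (better than 1 mm).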
Castro, F Javier Sanchez; Pollo, Claudio; Meuli, Reto; Maeder, Philippe; Cuisenaire, Olivier; Cuadra, Meritxell Bach; Villemure, Jean-Guy; Thiran, Jean-Philippe
2006-11-01
Validation of image registration algorithms is a difficult and open-ended problem, usually application-dependent. In this paper, we focus on deep brain stimulation (DBS) targeting for the treatment of movement disorders like Parkinson's disease and essential tremor. DBS involves implantation of an electrode deep inside the brain to electrically stimulate specific areas, shutting down the disease's symptoms. The subthalamic nucleus (STN) has turned out to be the optimal target for this kind of surgery. Unfortunately, the STN is in general not clearly distinguishable in common medical imaging modalities. The usual techniques to infer its location are the use of anatomical atlases and visible surrounding landmarks. Surgeons have to adjust the electrode intraoperatively using electrophysiological recordings and macrostimulation tests. We constructed a ground truth derived from specific patients whose STNs are clearly visible on magnetic resonance (MR) T2-weighted images. A patient is chosen as the atlas for both the right and left sides. Then, by registering each patient with the atlas using different methods, several estimates of the STN location are obtained. Two studies are conducted using our proposed validation scheme: first, a comparison between different atlas-based and nonrigid registration algorithms, with an evaluation of their performance and usability for locating the STN automatically; second, a study of which visible surrounding structures influence the STN location. The two studies are cross-validated against each other and against expert variability. Using this scheme, we evaluated the experts' ability against the estimation error of the tested algorithms and demonstrated that automatic STN targeting is possible and as accurate as the expert-driven techniques currently used. We also show which structures have to be taken into account to accurately estimate the STN location.
Trevino, Luis; Patterson, Jonathan; Teare, David; Johnson, Stephen
2015-01-01
…integrates specific M&FM algorithms, specialized nominal and off-nominal test cases, and vendor-supplied physics-based launch vehicle subsystem models. Additionally, the team has developed processes for implementing and validating these algorithms for concept validation and risk reduction for the SLS program. The flexibility of the Vehicle Management End-to-end Testbed (VMET) enables thorough testing of the M&FM algorithms by providing configurable suites of both nominal and off-nominal test cases to validate the developed algorithms utilizing actual subsystem models such as MPS. The intent of VMET is to validate the M&FM algorithms and substantiate them with performance baselines for each of the target vehicle subsystems in an independent platform exterior to the flight software development infrastructure and its related testing entities. In any software development process there is inherent risk in the interpretation and implementation of concepts into software through requirements and test cases into flight software, compounded with potential human errors throughout the development lifecycle. Risk reduction is addressed by the M&FM analysis group working with other organizations such as S&MA, Structures and Environments, GNC, Orion, the Crew Office, Flight Operations, and Ground Operations by assessing performance of the M&FM algorithms in terms of their ability to reduce Loss of Mission and Loss of Crew probabilities. In addition, through state machine and diagnostic modeling, analysis efforts investigate a broader suite of failure effects and associated detection and responses that can be tested in VMET to ensure that failures can be detected, and confirm that responses do not create additional risks or cause undesired states through interactive dynamic effects with other algorithms and systems. VMET further contributes to risk reduction by prototyping and exercising the M&FM algorithms early in their implementation and without any inherent hindrances such as meeting FSW
Flockerzi, Dietrich; Heineken, Wolfram
2006-12-01
It is claimed by Rhodes, Morari, and Wiggins [Chaos 9, 108-123 (1999)] that the projection algorithm of Maas and Pope [Combust. Flame 88, 239-264 (1992)] identifies the slow invariant manifold of a system of ordinary differential equations with time-scale separation. A transformation to Fenichel normal form serves as a tool to prove this statement. Furthermore, Rhodes, Morari, and Wiggins [Chaos 9, 108-123 (1999)] conjectured that away from a slow manifold, the criterion of Maas and Pope will never be fulfilled. We present two examples that refute the assertions of Rhodes, Morari, and Wiggins. In the first example, the algorithm of Maas and Pope leads to a manifold that is not invariant but close to a slow invariant manifold. The claim of Rhodes, Morari, and Wiggins that the Maas and Pope projection algorithm is invariant under a coordinate transformation to Fenichel normal form is shown to be not correct in this case. In the second example, the projection algorithm of Maas and Pope leads to a manifold that lies in a region where no slow manifold exists at all. This rejects the conjecture of Rhodes, Morari, and Wiggins mentioned above.
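For readers outside the field: the time-scale separation at issue has the standard singularly perturbed form. An illustrative generic system (not one of the paper's counterexamples):

```latex
\dot{y} = g(y, z), \qquad
\varepsilon\,\dot{z} = -z + h(y), \qquad 0 < \varepsilon \ll 1
```

By Fenichel theory, the slow invariant manifold of such a system is an \(O(\varepsilon)\) perturbation of the critical manifold \(z = h(y)\); the disputed question is whether the Maas-Pope projection criterion singles out exactly this object.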
Kuschenerus, Mieke; Cullen, Robert
2016-08-01
To ensure the reliability and precision of wave height estimates for future satellite altimetry missions such as Sentinel-6, reliable parameter retrieval algorithms that can extract significant wave heights up to 20 m have to be established, and the retrieval methods need to be validated extensively over a wide range of possible significant wave heights. Although current missions require wave height retrievals up to 20 m, there is little evidence of systematic validation of parameter retrieval methods for sea states with wave heights above 10 m. This paper provides a definition of a set of simulated sea states with significant wave heights up to 20 m that allow simulation of radar altimeter response echoes for extreme sea states in SAR and low-resolution mode. The simulated radar responses are used to derive significant wave height estimates, which can be compared with the initial models, allowing precision estimates of the applied parameter retrieval methods. We thus establish a validation method for significant wave height retrieval in sea states with high significant wave heights, allowing improved understanding and planning of validation for future satellite altimetry missions.
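The link between a simulated sea state and its significant wave height is the spectral definition Hm0 = 4√m0, where m0 is the variance of the sea-surface elevation. A sketch under the simplifying assumption of a Gaussian sea surface (the authors' waveform simulator is surely more sophisticated):

```python
import math
import random

def significant_wave_height(elevation):
    """Spectral significant wave height Hm0 = 4*sqrt(m0), with the zeroth
    spectral moment m0 estimated as the variance of the elevation record."""
    n = len(elevation)
    mean = sum(elevation) / n
    m0 = sum((e - mean) ** 2 for e in elevation) / n
    return 4.0 * math.sqrt(m0)

random.seed(1)
# Synthetic sea surface with a 5 m elevation standard deviation:
# an extreme sea state with Hm0 close to the 20 m validation target.
eta = [random.gauss(0.0, 5.0) for _ in range(200_000)]
hs = significant_wave_height(eta)
```

Generating elevation records of known variance and checking the retrieved Hm0 against 4σ is one way to sanity-check a retrieval chain before testing it on full altimeter echoes.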
2012-09-30
…measure wind speed and direction (Jochen Horstman, NURC), identify ocean surface fronts, develop wave-breaking detection software, develop ocean… 5. Provided X-band radar data, from both FLIP and Sproul, to Jochen Horstman at NURC for use in wind retrieval algorithm development. 6. Completed processing of SIO MET buoy data for sea-surface atmospheric conditions. Provided data to Jochen Horstman at NURC. 7. Helped define "grand…
Johnson, Robin R; Popovic, Djordje P; Olmstead, Richard E; Stikic, Maja; Levendowski, Daniel J; Berka, Chris
2011-05-01
A great deal of research over the last century has focused on drowsiness/alertness detection, as fatigue-related physical and cognitive impairments pose a serious risk to public health and safety. Available drowsiness/alertness detection solutions are unsatisfactory for a number of reasons: (1) lack of generalizability, (2) failure to address individual variability in generalized models, and/or (3) lack of a portable, un-tethered application. The current study aimed to address these issues, and determine if an individualized electroencephalography (EEG) based algorithm could be defined to track performance decrements associated with sleep loss, as this is the first step in developing a field deployable drowsiness/alertness detection system. The results indicated that an EEG-based algorithm, individualized using a series of brief "identification" tasks, was able to effectively track performance decrements associated with sleep deprivation. Future development will address the need for the algorithm to predict performance decrements due to sleep loss, and provide field applicability.
Moura, Eduardo S., E-mail: emoura@wisc.edu [Department of Medical Physics, University of Wisconsin–Madison, Madison, Wisconsin 53705 and Instituto de Pesquisas Energéticas e Nucleares—IPEN-CNEN/SP, São Paulo 05508-000 (Brazil); Micka, John A.; Hammer, Cliff G.; Culberson, Wesley S.; DeWerd, Larry A. [Department of Medical Physics, University of Wisconsin–Madison, Madison, Wisconsin 53705 (United States); Rostelato, Maria Elisa C. M.; Zeituni, Carlos A. [Instituto de Pesquisas Energéticas e Nucleares—IPEN-CNEN/SP, São Paulo 05508-000 (Brazil)
2015-04-15
Purpose: This work presents the development of a phantom to verify the treatment planning system (TPS) algorithms used for high-dose-rate (HDR) brachytherapy. It is designed to measure the relative dose in heterogeneous media. The experimental details used, simulation methods, and comparisons with a commercial TPS are also provided. Methods: To simulate heterogeneous conditions, four materials were used: Virtual Water™ (VM), BR50/50™, cork, and aluminum. The materials were arranged in 11 heterogeneity configurations. Three dosimeters were used to measure the relative response from an HDR ¹⁹²Ir source: TLD-100™, Gafchromic® EBT3 film, and an Exradin™ A1SL ionization chamber. To compare the results from the experimental measurements, the various configurations were modeled in the PENELOPE/penEasy Monte Carlo code. Images of each setup geometry were acquired from a CT scanner and imported into BrachyVision™ TPS software, which includes the grid-based Boltzmann solver Acuros™. The results of the measurements performed in the heterogeneous setups were normalized to the dose values measured in the homogeneous Virtual Water™ setup, and the respective differences due to the heterogeneities were considered. Additionally, dose values calculated based on the American Association of Physicists in Medicine Task Group 43 formalism were compared to dose values calculated with the Acuros™ algorithm in the phantom. Calculated doses were compared at the same points where measurements were performed. Results: Differences in the relative response as high as 11.5% from the homogeneous setup were found when the heterogeneous materials were inserted into the experimental phantom. The aluminum and cork materials produced larger differences than the plastic materials, with the BR50/50™ material producing results similar to the Virtual Water™ results. Our experimental methods agree with the PENELOPE/penEasy simulations for most setups and dosimeters.
Kotrri, Gynter; Fusch, Gerhard; Kwan, Celia; Choi, Dasol; Choi, Arum; Al Kafi, Nisreen; Rochow, Niels; Fusch, Christoph
2016-02-26
Commercial infrared (IR) milk analyzers are being increasingly used in research settings for the macronutrient measurement of breast milk (BM) prior to its target fortification. These devices, however, may not provide reliable measurement if not properly calibrated. In the current study, we tested a correction algorithm for a Near-IR milk analyzer (Unity SpectraStar, Brookfield, CT, USA) for fat and protein measurements, and examined the effect of pasteurization on the IR matrix and the stability of fat, protein, and lactose. Measurement values generated through Near-IR analysis were compared against those obtained through chemical reference methods to test the correction algorithm for the Near-IR milk analyzer. Macronutrient levels were compared between unpasteurized and pasteurized milk samples to determine the effect of pasteurization on macronutrient stability. The correction algorithm generated for our device was found to be valid for unpasteurized and pasteurized BM. Pasteurization had no effect on the macronutrient levels and the IR matrix of BM. These results show that fat and protein content can be accurately measured and monitored for unpasteurized and pasteurized BM. Of additional importance is the implication that donated human milk, generally low in protein content, has the potential to be target fortified.
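A correction algorithm of the kind tested here is, at its simplest, a least-squares linear map from device readings to reference chemistry values. A sketch with made-up fat readings (g/dL); the actual SpectraStar correction may be more elaborate:

```python
def fit_linear_correction(device, reference):
    """Ordinary least-squares slope/intercept mapping device readings to
    reference (chemical) values: corrected = a * reading + b."""
    n = len(device)
    mx = sum(device) / n
    my = sum(reference) / n
    sxx = sum((x - mx) ** 2 for x in device)
    sxy = sum((x - mx) * (y - my) for x, y in zip(device, reference))
    a = sxy / sxx
    b = my - a * mx
    return a, b

# Hypothetical calibration set: analyzer reads about half the true fat
# content; the fit recovers slope 2 and intercept 0.1.
a, b = fit_linear_correction([1.0, 2.0, 3.0, 4.0], [2.1, 4.1, 6.1, 8.1])
```

Validating such a correction on both unpasteurized and pasteurized samples, as the study does, checks that one calibration serves both matrices.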
Rekhtman, Natasha; Ang, Daphne C; Sima, Camelia S; Travis, William D; Moreira, Andre L
2011-10-01
Immunohistochemistry is increasingly utilized to differentiate lung adenocarcinoma and squamous cell carcinoma. However, detailed analysis of coexpression profiles of commonly used markers in large series of whole-tissue sections is lacking. Furthermore, the optimal diagnostic algorithm, particularly the minimal-marker combination, is not firmly established. We therefore studied whole-tissue sections of resected adenocarcinoma and squamous cell carcinoma (n=315) with markers commonly used to identify adenocarcinoma (TTF-1) and squamous cell carcinoma (p63, CK5/6, 34βE12), and prospectively validated the devised algorithm in morphologically unclassifiable small biopsy/cytology specimens (n=38). Analysis of whole-tissue sections showed that squamous cell carcinoma had a highly consistent immunoprofile (TTF-1-negative and p63/CK5/6/34βE12-diffuse) with only rare variation. In contrast, adenocarcinoma showed significant immunoheterogeneity for all 'squamous markers' (p63 (32%), CK5/6 (18%), 34βE12 (82%)) and TTF-1 (89%). As a single marker, only diffuse TTF-1 was specific for adenocarcinoma, whereas none of the 'squamous markers,' even if diffuse, were entirely specific for squamous cell carcinoma. In contrast, coexpression profiles of TTF-1/p63 had only minimal overlap between adenocarcinoma and squamous cell carcinoma, and there was no overlap if CK5/6 was added as a third marker. An algorithm was devised in which TTF-1/p63 were used as the first-line panel, and CK5/6 was added for rare indeterminate cases. Prospective validation of this algorithm in small specimens showed 100% accuracy of adenocarcinoma vs squamous cell carcinoma prediction as determined by subsequent resection. In conclusion, although reactivity for 'squamous markers' is common in lung adenocarcinoma, a two-marker panel of TTF-1/p63 is sufficient for subtyping the majority of tumors as adenocarcinoma vs squamous cell carcinoma, and the addition of CK5/6 is needed in only a small subset of cases.
Gammelager H
2013-08-01
Henrik Gammelager,1 Claus Sværke,1 Sven Erik Noerholt,2 Bjarne Neumann-Jensen,3 Fei Xue,4 Cathy Critchlow,4 Johan Bergdahl,5 Ylva Trolle Lagerros,5 Helle Kieler,5 Grethe S Tell,6 Vera Ehrenstein1. 1Department of Clinical Epidemiology, Aarhus University Hospital, Aarhus, Denmark; 2Department of Oral and Maxillofacial Surgery, Aarhus University Hospital, Aarhus, Denmark; 3Department of Oral and Maxillofacial Surgery, Aalborg University Hospital, Aalborg, Denmark; 4Center for Observational Research, Amgen Inc., Thousand Oaks, CA, USA; 5Center for Pharmacoepidemiology, Department of Medicine Solna, Karolinska Institutet, Stockholm, Sweden; 6Department of Global Public Health and Primary Care, University of Bergen, Bergen, Norway. Background: Osteonecrosis of the jaw (ONJ) is an adverse effect of drugs that suppress bone turnover, for example, drugs used for the treatment of postmenopausal osteoporosis. The Danish National Registry of Patients (DNRP) is potentially valuable for monitoring ONJ and its prognosis; however, no specific code for ONJ exists in the International Classification of Diseases 10th revision (ICD-10), which is currently used in Denmark. Our aim was to estimate the positive predictive value (PPV) of an algorithm to capture ONJ cases in the DNRP among women with postmenopausal osteoporosis. Methods: We conducted this cross-sectional validation study in the Central and North Denmark Regions, with approximately 1.8 million inhabitants. In total, 54,956 women with postmenopausal osteoporosis were identified from June 1, 2005 through May 31, 2010. To identify women potentially suffering from ONJ, we applied an algorithm based on ICD-10 codes in the DNRP originating from hospital-based departments of oral and maxillofacial surgery (DOMS). ONJ was adjudicated by chart review and defined by the presence of exposed maxillofacial bone for 8 weeks or more, in the absence of a recorded history of craniofacial radiation therapy. We estimated the PPV
Schoorel, E. N. C.; Melman, S.; van Kuijk, S. M. J.; Grobman, W. A.; Kwee, A.; Mol, B. W. J.; Nijhuis, J. G.; Smits, L. J. M.; Aardenburg, R.; de Boer, K.; Delemarre, F. M. C.; van Dooren, I. M.; Franssen, M. T. M.; Kleiverda, G.; Kaplan, M.; Kuppens, S. M. I.; Lim, F. T. H.; Sikkema, J. M.; Smid-Koopman, E.; Visser, H.; Vrouenraets, F. P. J. M.; Woiski, M.; Hermens, R. P. M. G.; Scheepers, H. C. J.
2014-01-01
Objective: To externally validate two models from the USA (entry-to-care [ETC] and close-to-delivery [CTD]) that predict successful intended vaginal birth after caesarean (VBAC) for the Dutch population. Design: A nationwide registration-based cohort study. Setting: Seventeen hospitals in the Netherlands.
Cross, D S; McCarty, C A; Hytopoulos, E; Beggs, M; Nolan, N; Harrington, D S; Hastie, T; Tibshirani, R; Tracy, R P; Psaty, B M; McClelland, R; Tsao, P S; Quertermous, T
2012-11-01
Many coronary heart disease (CHD) events occur in individuals classified as intermediate risk by commonly used assessment tools. Over half the individuals presenting with a severe cardiac event, such as myocardial infarction (MI), have at most one risk factor as included in the widely used Framingham risk assessment. Individuals classified as intermediate risk, who are actually at high risk, may not receive guideline-recommended treatments. A clinically useful method for accurately predicting 5-year CHD risk among intermediate-risk patients remains an unmet medical need. This study sought to develop a CHD Risk Assessment (CHDRA) model that improves 5-year risk stratification among intermediate-risk individuals. Assay panels for biomarkers associated with atherosclerosis biology (inflammation, angiogenesis, apoptosis, chemotaxis, etc.) were optimized for measuring baseline serum samples from 1084 initially CHD-free Marshfield Clinic Personalized Medicine Research Project (PMRP) individuals. A multivariable Cox regression model was fit using the most powerful risk predictors within the clinical and protein variables identified by repeated cross-validation. The resulting CHDRA algorithm was validated in a Multi-Ethnic Study of Atherosclerosis (MESA) case-cohort sample. A CHDRA algorithm of age, sex, diabetes, and family history of MI, combined with serum levels of seven biomarkers (CTACK, Eotaxin, Fas Ligand, HGF, IL-16, MCP-3, and sFas), yielded a clinical net reclassification index of 42.7% (p …). Limitations included the CHD definition used with the MESA samples and the inability to include PMRP fatal CHD events. A novel risk score of serum protein levels plus clinical risk factors, developed and validated in independent cohorts, demonstrated clinical utility for assessing the true risk of CHD events in intermediate-risk patients. Improved accuracy in cardiovascular risk classification could lead to improved preventive care and fewer deaths.
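The net reclassification index (NRI) reported above rewards upward risk-category moves for cases and downward moves for non-cases. A minimal categorical-NRI sketch on toy data (not the PMRP/MESA cohorts):

```python
def net_reclassification_index(old_cat, new_cat, event):
    """Categorical NRI: (net upward moves among events) plus
    (net downward moves among non-events), as proportions.
    Categories are ordered integers; event is a 0/1 outcome flag."""
    up_e = down_e = n_e = 0
    up_ne = down_ne = n_ne = 0
    for old, new, ev in zip(old_cat, new_cat, event):
        if ev:
            n_e += 1
            up_e += new > old
            down_e += new < old
        else:
            n_ne += 1
            up_ne += new > old
            down_ne += new < old
    return (up_e - down_e) / n_e + (down_ne - up_ne) / n_ne

# Toy data: one of two events moves up; one of two non-events moves down.
nri = net_reclassification_index([0, 0, 1, 1], [1, 0, 0, 1], [1, 1, 0, 0])
```

A positive NRI, like the 42.7% reported, means the new model moves risk categories in the clinically correct direction on balance.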
Gopishankar, N; Bisht, R K [All India Institute of Medical Sciences, New Delhi (India)
2014-06-01
Purpose: To perform dosimetric evaluation of the convolution algorithm in Gamma Knife (Perfexion model) using a solid acrylic anthropomorphic phantom. Methods: An in-house developed acrylic phantom with an ion chamber insert was used for this purpose. The middle insert was designed to fit the ion chamber from the top (head) as well as from the bottom (neck) of the phantom; hence, measurements were made at two different positions simultaneously. A Leksell frame fixed to the phantom simulated patient treatment. Prior to the dosimetric study, Hounsfield units and the electron density of the acrylic material were incorporated into the calibration curve in the TPS for the convolution algorithm calculation. A CT scan of the phantom with the ion chamber (PTW Freiburg, 0.125 cc) was obtained with the following scanning parameters: tube voltage 110 kV, slice thickness 1 mm, and FOV 240 mm. Three separate single-shot plans were generated in the LGP TPS (version 10.1) with 16 mm, 8 mm, and 4 mm collimators, respectively, for both ion chamber positions. Both TMR10 and convolution-algorithm-based planning (CABP) were used for dose calculation. A dose of 6 Gy at the 100% isodose was prescribed at the centre of the ion chamber visible in the CT scan. The phantom with the ion chamber was positioned on the treatment couch for dose delivery. Results: The ion-chamber-measured dose was 5.98 Gy for the 16 mm collimator shot plan, a deviation of less than 1% for the convolution algorithm, whereas with TMR10 the measured dose was 5.6 Gy. For the 8 mm and 4 mm collimator plans, only 3.86 Gy and 2.18 Gy, respectively, were delivered in the TPS-calculated time for CABP. Conclusion: CABP is expected to predict the time for dose delivery accurately for all collimators, but significant variation in the measured dose was observed for the 8 mm and 4 mm collimators, which may be due to a collimator size effect. Metal artifacts caused by the pins and frame on the CT scan may also have a role in misinterpreting the CABP results. The study carried out requires further investigation.
Sung, Sheng-Feng; Hsieh, Cheng-Yang; Lin, Huey-Juan; Chen, Yu-Wei; Yang, Yea-Huei Kao; Li, Chung-Yi
2016-07-15
Stroke patients have a high risk for recurrence, which is positively correlated with the number of risk factors. The assessment of risk factors is essential in both stroke outcomes research and the surveillance of stroke burden. However, methods for assessment of risk factors using claims data are not well developed. We enrolled 6469 patients with acute ischemic stroke, transient ischemic attack, or intracerebral hemorrhage from hospital-based stroke registries, which were linked with Taiwan's National Health Insurance (NHI) claims database. We developed algorithms using diagnosis codes and prescription data to identify stroke risk factors including hypertension, diabetes, hyperlipidemia, atrial fibrillation (AF), and coronary artery disease (CAD) in the claims database using registry data as reference standard. We estimated the kappa statistics to quantify the agreement of information on the risk factors between claims and registry data. The prevalence of risk factors in the registries was hypertension 77.0%, diabetes 39.1%, hyperlipidemia 55.6%, AF 10.1%, and CAD 10.9%. The highest kappa statistics were 0.552 (95% confidence interval 0.528-0.577) for hypertension, 0.861 (0.836-0.885) for diabetes, 0.572 (0.549-0.596) for hyperlipidemia, 0.687 (0.663-0.712) for AF, and 0.480 (0.455-0.504) for CAD. Algorithms based on diagnosis codes alone could achieve moderate to high agreement in identifying the selected risk factors, whereas prescription data helped improve identification of hyperlipidemia. We tested various claims-based algorithms to ascertain important risk factors in stroke patients. These validated algorithms are useful for assessing stroke risk factors in future studies using Taiwan's NHI claims data. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Akbari, H.; Rainer, L.; Heinemeier, K.; Huang, J.; Franconi, E.
1993-01-01
The Southern California Edison Company (SCE) has conducted an extensive metering project in which electricity end use in 53 commercial buildings in Southern California has been measured. The building types monitored include offices, retail stores, groceries, restaurants, and warehouses. One year (June 1989 through May 1990) of the SCE measured hourly end-use data are reviewed in this report. Annual whole-building and end-use energy use intensities (EUIs) and monthly load shapes (LSs) have been calculated for the different building types based on the monitored data. This report compares the monitored buildings' EUIs and LSs to EUIs and LSs determined using whole-building load data and the End-Use Disaggregation Algorithm (EDA). Two sets of EDA determined EUIs and LSs are compared to the monitored data values. The data sets represent: (1) average buildings in the SCE service territory and (2) specific buildings that were monitored.
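The EUI and load-shape quantities compared in this report can be computed directly from hourly records. A sketch; the (month, hour-of-day, kWh) input format and the ft² normalization are illustrative assumptions, not the EDA's actual interface:

```python
from collections import defaultdict

def eui_and_load_shapes(hourly_kwh, floor_area_ft2):
    """Annual energy-use intensity (kWh/ft^2) and monthly load shapes
    (mean kWh per hour-of-day, keyed by (month, hour)) from a list of
    (month, hour, kwh) records covering one year."""
    total = sum(kwh for _, _, kwh in hourly_kwh)
    eui = total / floor_area_ft2
    buckets = defaultdict(list)
    for month, hour, kwh in hourly_kwh:
        buckets[(month, hour)].append(kwh)
    shapes = {key: sum(vals) / len(vals) for key, vals in buckets.items()}
    return eui, shapes

# Tiny illustrative record set: two January midnight hours and one 1 a.m. hour.
eui, shapes = eui_and_load_shapes([(1, 0, 2.0), (1, 0, 4.0), (1, 1, 6.0)], 10.0)
```

Computing these from both the monitored end-use channels and the EDA's disaggregated whole-building data gives the two sets being compared in the report.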
Research on a randomized real-valued negative selection algorithm
[No author listed]
2006-01-01
A real-valued negative selection algorithm with a sound mathematical foundation is presented to address some drawbacks of previous approaches. Specifically, it can produce a good estimate of the optimal number of detectors needed to cover the non-self space, and the maximization of the non-self coverage is done through an optimization algorithm with proven convergence properties. Experiments are performed to validate the assumptions made while designing the algorithm and to evaluate its performance.
S. Payan
2007-12-01
The ENVISAT validation programme for the atmospheric instruments MIPAS, SCIAMACHY and GOMOS is based on a number of balloon-borne, aircraft and ground-based correlative measurements. In particular, the activities of validation scientists were coordinated by ESA within the ENVISAT Stratospheric Aircraft and Balloon Campaign (ESABC). As part of a series of similar papers on other species [this issue], and in parallel to the contributions of the individual validation teams, the present paper provides a synthesis of comparisons between MIPAS CH4 and N2O profiles produced by the current ESA operational software (Instrument Processing Facility version 4.61, IPF v4.61) and correlative measurements obtained from balloon and aircraft experiments as well as from satellite sensors and ground-based instruments. The MIPAS-E CH4 values show a positive bias of about 10% in the lower stratosphere. For N2O, no systematic deviation with respect to the validation experiments could be identified. The MIPAS data version 4.61 used here still exhibits some unphysical oscillations in individual CH4 and N2O profiles caused by the processing algorithm (with almost no regularization). Taking these problems into account, the MIPAS CH4 and N2O profiles behave as expected from the internal error estimation of IPF v4.61.
Carles, Montserrat; Fechter, Tobias; Nemer, Ursula; Nanko, Norbert; Mix, Michael; Nestle, Ursula; Schaefer, Andrea
2015-12-01
PET/CT plays an important role in radiotherapy planning for lung tumors. Several segmentation algorithms have been proposed for PET tumor segmentation. However, most of them do not take into account respiratory motion and are not well validated. The aim of this work was to evaluate a semi-automated contrast-oriented algorithm (COA) for PET tumor segmentation adapted to retrospectively gated (4D) images. The evaluation involved a wide set of 4D-PET/CT acquisitions of dynamic experimental phantoms and lung cancer patients. In addition, segmentation accuracy of 4D-COA was compared with four other state-of-the-art algorithms. In phantom evaluation, the physical properties of the objects defined the gold standard. In clinical evaluation, the ground truth was estimated by the STAPLE (Simultaneous Truth and Performance Level Estimation) consensus of three manual PET contours by experts. Algorithm evaluation with phantoms resulted in: (i) no statistically significant diameter differences for different targets and movements (Δφ = 0.3 ± 1.6 mm); (ii) reproducibility for heterogeneous and irregular targets independent of user initial interaction; and (iii) good segmentation agreement for irregular targets compared to manual CT delineation in terms of Dice Similarity Coefficient (DSC = 0.66 ± 0.04), Positive Predictive Value (PPV = 0.81 ± 0.06) and Sensitivity (Sen. = 0.49 ± 0.05). In clinical evaluation, the segmented volume was in reasonable agreement with the consensus volume (difference in volume (%Vol) = 40 ± 30, DSC = 0.71 ± 0.07 and PPV = 0.90 ± 0.13). High accuracy in target tracking position (ΔME) was obtained for experimental and clinical data (ΔME_exp = 0 ± 3 mm; ΔME_clin = 0.3 ± 1.4 mm). In the comparison with other lung segmentation methods, 4D-COA has shown the highest volume accuracy in both experimental and clinical data. In conclusion, the accuracy in volume
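The overlap metrics quoted above (DSC, PPV, sensitivity) can be sketched for binary segmentation masks; the tiny masks below are illustrative only, not study data.

```python
import numpy as np

def overlap_metrics(seg, ref):
    """Dice, PPV and sensitivity of a binary segmentation vs. a reference."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    tp = np.logical_and(seg, ref).sum()        # true-positive voxels
    dice = 2.0 * tp / (seg.sum() + ref.sum())  # Dice Similarity Coefficient
    ppv = tp / seg.sum()                       # Positive Predictive Value
    sen = tp / ref.sum()                       # Sensitivity (recall)
    return dice, ppv, sen

# Tiny illustrative masks (1 = inside the contour):
seg = np.array([[1, 1, 0],
                [0, 1, 0]])
ref = np.array([[1, 0, 0],
                [0, 1, 1]])
dice, ppv, sen = overlap_metrics(seg, ref)     # each 2/3 here
```

Note that DSC penalizes both over- and under-segmentation, while PPV and sensitivity separate the two error directions, which is why the paper reports all three.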
GPU based cloud system for high-performance arrhythmia detection with parallel k-NN algorithm.
Tae Joon Jun; Hyun Ji Park; Hyuk Yoo; Young-Hak Kim; Daeyoung Kim
2016-08-01
In this paper, we propose a GPU-based cloud system for high-performance arrhythmia detection. The Pan-Tompkins algorithm is used for QRS detection, and we optimized the beat classification algorithm with K-Nearest Neighbor (K-NN). To support high-performance beat classification on the system, we parallelized the beat classification algorithm with CUDA so that it executes on virtualized GPU devices in the cloud system. The MIT-BIH Arrhythmia database is used for validation of the algorithm. The system achieved a detection rate of about 93.5%, which is comparable to previous research, while our algorithm shows 2.5 times faster execution than a CPU-only detection algorithm.
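A minimal serial sketch of the k-NN beat classification step described above; the paper's system parallelizes the distance computation with CUDA, and the feature vectors and labels below are hypothetical.

```python
import numpy as np

def knn_classify(train_x, train_y, query, k=3):
    # Euclidean distance from the query beat to every training beat;
    # this embarrassingly parallel step is what maps naturally to GPU threads.
    d = np.linalg.norm(train_x - query, axis=1)
    nearest = train_y[np.argsort(d)[:k]]          # labels of the k closest beats
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]              # majority vote

# Hypothetical 2-D beat features (e.g., derived around QRS fiducials):
train_x = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.1, 0.9]])
train_y = np.array(["N", "N", "V", "V"])          # normal vs. ventricular beats
print(knn_classify(train_x, train_y, np.array([0.05, 0.1])))  # prints N
```

In a CUDA version each thread would compute one (or a tile of) query-to-training distances, with the vote done after a parallel selection of the k smallest.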
Kasaven, C P; McIntyre, G T; Mossey, P A
2017-01-01
Our objective was to assess the accuracy of virtual and printed 3-dimensional models derived from cone-beam computed tomographic (CT) scans to measure the volume of alveolar clefts before bone grafting. Fifteen subjects with unilateral cleft lip and palate had i-CAT cone-beam CT scans recorded at 0.2 mm voxel size and sectioned transversely into slices 0.2 mm thick using i-CAT Vision. Volumes of alveolar clefts were calculated using first a validated algorithm; secondly, commercially available virtual 3-dimensional model software; and finally 3-dimensional printed models, which were scanned with microCT and analysed using 3-dimensional software. For inter-observer reliability, a two-way mixed model intraclass correlation coefficient (ICC) was used to evaluate the reproducibility of identification of the cranial and caudal limits of the clefts among three observers. We used a Friedman test to assess the significance of differences among the methods, and probabilities of less than 0.05 were accepted as significant. Inter-observer reliability was almost perfect (ICC=0.987). There were no significant differences among the three methods. Virtual and printed 3-dimensional models were as precise as the validated computer algorithm in the calculation of volumes of the alveolar cleft before bone grafting, but virtual 3-dimensional models were the most accurate with the smallest 95% CI and, subject to further investigation, could be a useful adjunct in clinical practice. Copyright © 2016 The British Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
Rhodes, Carl; Morari, Manfred; Wiggins, Stephen
2006-12-01
Flockerzi and Heineken [Chaos 16, 048101 (2006)] present two examples with the goal of elucidating issues related to the Maas and Pope method for identifying low-dimensional "slow" manifolds in systems with a time-scale separation. The goal of their first example is to show that the claim of Rhodes et al. [Chaos 9, 108-123 (1999)] that the Maas and Pope algorithm identifies the slow invariant manifold when there is finite time-scale separation is incorrect. We show that their arguments result from an incomplete understanding of the situation and that, in fact, their example supports, and is completely consistent with, the result in Rhodes et al. Their second example claims to be a counterexample to a conjecture in Rhodes et al. that away from the slow manifold the criterion of Maas and Pope [Combust. Flame 88, 239-264 (1992)] will never be fulfilled. While this conjecture may indeed be false, we argue that it is not clear that the example presented by Flockerzi and Heineken is indeed a counterexample.
Calderer, Antoni [Univ. of Minnesota, Minneapolis, MN (United States); Yang, Xiaolei [Stony Brook Univ., NY (United States); Feist, Christ [Univ. of Minnesota, Minneapolis, MN (United States); Guala, Michele [Univ. of Minnesota, Minneapolis, MN (United States); Angelidis, Dionysios [Univ. of Minnesota, Minneapolis, MN (United States); Ruehl, Kelley [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Guo, Xin [Univ. of Minnesota, Minneapolis, MN (United States); Boomsma, Aaron [Univ. of Minnesota, Minneapolis, MN (United States); Shen, Lian [Univ. of Minnesota, Minneapolis, MN (United States); Sotiropoulos, Fotis [Stony Brook Univ., NY (United States)
2015-10-30
The present project involves the development of modeling and analysis design tools for assessing offshore wind turbine technologies. The computational tools developed herein are able to resolve the effects of the coupled interaction of atmospheric turbulence and ocean waves on the aerodynamic performance and the structural stability and reliability of offshore wind turbines and farms. Laboratory-scale experiments have been carried out to derive data sets for validating the computational models. Subtask 1.1 Turbine Scale Model: A novel computational framework for simulating the coupled interaction of complex floating structures with large-scale ocean waves and atmospheric turbulent winds has been developed. This framework is based on a domain decomposition approach coupling a large-scale far-field domain, where realistic wind and wave conditions representative of offshore environments are developed, with a near-field domain, where wind-wave-body interactions can be investigated. The method applied in the near-field domain is based on a fluid-structure interaction (FSI) approach combining the curvilinear immersed boundary (CURVIB) method with a two-phase flow level set formulation, and is capable of solving free surface flows interacting non-linearly with floating wind turbines. For coupling the far-field and near-field domains, a wave generation method for incorporating complex wave fields into Navier-Stokes solvers has been proposed. The wave generation method was validated for a variety of wave cases, including a broadband spectrum. The computational framework has been further validated for wave-body interactions by replicating an experiment in which a floating wind turbine model was subjected to different sinusoidal wave forces (task 3). Finally, the full capabilities of the framework have been demonstrated by carrying out large eddy simulation (LES) of a floating wind turbine interacting with realistic ocean wind and wave conditions. Subtask 1.2 Farm Scale Model: Several actuator
Queirós, Sandro; Barbosa, Daniel; Engvall, Jan; Ebbers, Tino; Nagel, Eike; Sarvari, Sebastian I; Claus, Piet; Fonseca, Jaime C; Vilaça, João L; D'hooge, Jan
2016-10-01
Quantitative analysis of cine cardiac magnetic resonance (CMR) images for the assessment of global left ventricular morphology and function remains a routine task in clinical cardiology practice. To date, this process requires user interaction and therefore prolongs the examination (i.e. cost) and introduces observer variability. In this study, we sought to validate the feasibility, accuracy, and time efficiency of a novel framework for automatic quantification of left ventricular global function in a clinical setting. Analyses of 318 CMR studies, acquired at the enrolment of patients in a multi-centre imaging trial (DOPPLER-CIP), were performed automatically, as well as manually. For comparative purposes, intra- and inter-observer variability was also assessed in a subset of patients. The extracted morphological and functional parameters were compared between both analyses, and time efficiency was evaluated. The automatic analysis was feasible in 95% of the cases (302/318) and showed a good agreement with manually derived reference measurements, with small biases and narrow limits of agreement particularly for end-diastolic volume (-4.08 ± 8.98 mL), end-systolic volume (1.18 ± 9.74 mL), and ejection fraction (-1.53 ± 4.93%). These results were comparable with the agreement between two independent observers. A complete automatic analysis took 5.61 ± 1.22 s, which is nearly 150 times faster than manual contouring (14 ± 2 min, P cine CMR images. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2015. For permissions please email: journals.permissions@oup.com.
Marneffe, Alice; Suppa, Mariano; Miyamoto, Makiko; Del Marmol, Véronique; Boone, Marc
2016-09-01
Actinic keratoses (AKs) commonly arise on sun-damaged skin. Visible lesions are often associated with subclinical lesions on surrounding skin, giving rise to field cancerization. To avoid multiple biopsies to diagnose subclinical/early invasive lesions, there is increasing interest in non-invasive diagnostic tools, such as high-definition optical coherence tomography (HD-OCT). We previously developed an HD-OCT-based diagnostic algorithm for the discrimination of AK from squamous cell carcinoma (SCC) and normal skin. The aim of this study was to test the applicability of HD-OCT for non-invasive discrimination of AK from SCC and normal skin using this algorithm. Three-dimensional (3D) HD-OCT images of histopathologically proven AKs and SCCs and images of normal skin were collected. All images were shown in a random sequence to three independent observers with different levels of experience with HD-OCT, blinded to the clinical and histopathological data. Observers classified each image as AK, SCC or normal skin based on the diagnostic algorithm. A total of 106 (38 AKs, 16 SCCs and 52 normal skin sites) HD-OCT images from 71 patients were included. Sensitivity and specificity for the most experienced observer were 81.6% and 92.6% for AK diagnosis and 93.8% and 98.9% for SCC diagnosis. A moderate interobserver agreement was demonstrated. HD-OCT represents a promising technology for the non-invasive diagnosis of AKs. Thanks to its high potential in discriminating SCC from AK, HD-OCT could be used as a relevant tool for second-level examination, increasing diagnostic confidence and sparing patients unnecessary excisions.
Cluster Validity Indexes for FCM Clustering Algorithm
朴尚哲; 超木日力格; 于剑
2015-01-01
The clustering quality of the fuzzy C-means (FCM) clustering algorithm is affected by several factors, such as the initial setting of cluster centroids, the number of clusters, and the fuzzy index. In this paper, a comparative study of five recently published clustering validity measures is presented under different application conditions, e.g., different data dimensions, different cluster numbers, and different fuzzy indices. The experimental results show that the validity index based on the ratio of within-class compactness to between-class separation is robust to data dimension and noise, whereas the membership-based validity index is suitable only for low-dimensional datasets. These results help researchers select a suitable fuzzy clustering validity index for different application environments.
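The Xie-Beni index is a well-known instance of a validity index built from the ratio of within-class compactness to between-class separation (lower is better); whether it is among the five indices compared here is not stated, so treat this as a generic sketch with made-up data.

```python
import numpy as np

def xie_beni(X, U, V, m=2.0):
    """X: (n, d) data, U: (c, n) fuzzy memberships, V: (c, d) centroids."""
    c, n = U.shape
    compact = sum((U[i] ** m * np.sum((X - V[i]) ** 2, axis=1)).sum()
                  for i in range(c))            # within-class compactness
    sep = min(np.sum((V[i] - V[j]) ** 2)        # between-class separation:
              for i in range(c)                 # squared distance between the
              for j in range(c) if i != j)      # two closest centroids
    return compact / (n * sep)                  # smaller = better partition

# Two well-separated hypothetical clusters:
X = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 0.0], [10.0, 1.0]])
V = np.array([[0.0, 0.5], [10.0, 0.5]])
U = np.array([[0.9, 0.9, 0.1, 0.1],
              [0.1, 0.1, 0.9, 0.9]])
xb = xie_beni(X, U, V)                          # small value: good clustering
```

Evaluating such an index over a range of cluster counts and picking the minimizer is the usual way it is used to choose the number of clusters for FCM.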
Merentie, Mari; Lipponen, Jukka A; Hedman, Marja; Hedman, Antti; Hartikainen, Juha; Huusko, Jenni; Lottonen-Raikaslehto, Line; Parviainen, Viktor; Laidinen, Svetlana; Karjalainen, Pasi A; Ylä-Herttuala, Seppo
2015-12-01
Mouse models are extremely important in studying cardiac pathologies and related electrophysiology, but very few mouse ECG analysis programs are readily available. Therefore, a mouse ECG analysis algorithm was developed and validated. Surface ECG (lead II) was acquired during transthoracic echocardiography from C57Bl/6J mice under isoflurane anesthesia. The effect of aging was studied in young (2-3 months), middle-aged (14 months) and old (20-24 months) mice. The ECG changes associated with pharmacological interventions and common cardiac pathologies, that is, acute myocardial infarction (AMI) and progressive left ventricular hypertrophy (LVH), were studied. The ECG raw data were analyzed with an in-house ECG analysis program modified specially for mouse ECG. Aging led to increases in P-wave duration, atrioventricular conduction time (PQ interval), and intraventricular conduction time (QRS complex width), while the R-wave amplitude decreased. In addition, the prevalence of arrhythmias increased with aging. The anticholinergic atropine shortened the PQ time, while the beta blocker metoprolol and the calcium-channel blocker verapamil prolonged the PQ interval and decreased heart rate. The ECG changes after AMI included early JT elevation, development of Q waves, decreased R-wave amplitude, and later changes in the JT/T segment. In the progressive LVH model, QRS complex width was increased at the 2-week and especially the 4-week timepoints, and repolarization abnormalities were also seen. Aging, drugs, AMI, and LVH led to similar ECG changes in mice as seen in humans, which could be reliably detected with this new algorithm. The developed method will be very useful for studies on cardiovascular diseases in mice.
Yamina BOUGHARI
2017-06-01
In this paper, the Cessna Citation X clearance criteria were evaluated for a new flight controller. The flight control laws were optimized and designed for the Cessna Citation X flight envelope by combining the Differential Evolution algorithm, the Linear Quadratic Regulator method, and the Proportional Integral controller in previous research presented in Part 1. The optimal controllers were used to achieve satisfactory aircraft dynamics and safe flight operations with respect to the augmentation systems' handling qualities and design requirements. Furthermore, the number of controllers used to control the aircraft over its flight envelope was optimized using Linear Fractional Representation features. To validate the controller over the whole aircraft flight envelope, the linear stability, eigenvalue, and handling qualities criteria, in addition to nonlinear analysis criteria, were investigated in this research to assess the business aircraft for flight control clearance and certification. The optimized gains provide very good stability margins: the eigenvalue analysis shows that the aircraft has high stability, very good flying qualities of the linear aircraft models are ensured over the entire flight envelope, and robustness is demonstrated with respect to uncertainties due to mass and center of gravity variations.
Grilli, Stéphan T.; Guérin, Charles-Antoine; Shelby, Michael; Grilli, Annette R.; Moran, Patrick; Grosdidier, Samuel; Insua, Tania L.
2017-08-01
In past work, tsunami detection algorithms (TDAs) have been proposed, and successfully applied to offline tsunami detection, based on analyzing tsunami currents inverted from high-frequency (HF) radar Doppler spectra. With this method, however, the detection of small and short-lived tsunami currents in the most distant radar ranges is challenging due to conflicting requirements on the Doppler spectra integration time and resolution. To circumvent this issue, in Part I of this work, we proposed an alternative TDA, referred to as the time correlation (TC) TDA, that does not require inverting currents, but instead detects changes in patterns of correlations of radar signal time series measured in pairs of cells located along the main directions of tsunami propagation (predicted by geometric optics theory); such correlations can be maximized when one signal is time-shifted by the pre-computed long wave propagation time. We initially validated the TC-TDA based on numerical simulations of idealized tsunamis in a simplified geometry. Here, we further develop, extend, and apply the TC algorithm to more realistic tsunami case studies. These are performed in the area west of Vancouver Island, BC, where Ocean Networks Canada recently deployed a HF radar (in Tofino, BC) to detect tsunamis from far- and near-field sources, up to a 110 km range. Two case studies are considered, both simulated using long wave models: (1) a far-field seismic tsunami, and (2) a near-field landslide tsunami. Pending the availability of radar data, a radar signal simulator is parameterized for the Tofino HF radar characteristics, in particular its signal-to-noise ratio with range, and combined with the simulated tsunami currents to produce realistic time series of backscattered radar signal from a dense grid of cells. Numerical experiments show that the arrival of a tsunami causes a clear change in radar signal correlation patterns, even at the most distant ranges beyond the continental shelf, thus making an
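The core of the time-correlation idea can be sketched with synthetic series: correlate the signal in a far (offshore) cell with the signal in a near cell shifted by the precomputed long-wave travel time. The signals, noise level, and lag below are made up, not radar data.

```python
import numpy as np

def shifted_correlation(s_far, s_near, lag):
    """Normalized correlation of s_far(t) with s_near(t + lag)."""
    a = s_far[:-lag] - s_far[:-lag].mean()
    b = s_near[lag:] - s_near[lag:].mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
n, lag = 600, 50                     # lag = assumed travel time, in samples
u = np.sin(np.linspace(0.0, 30.0, n + lag))      # underlying long-wave signal
s_far = u[lag:] + 0.1 * rng.standard_normal(n)   # offshore cell sees it first
s_near = u[:n] + 0.1 * rng.standard_normal(n)    # nearshore cell, delayed
rho = shifted_correlation(s_far, s_near, lag)    # high at the correct lag
```

A detector would monitor this correlation for pairs of cells along the predicted propagation directions and flag a sustained jump above the background level.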
Mosimann, Beatrice; Pfiffner, Chantal; Amylidi-Mohr, Sofia; Risch, Lorenz; Surbek, Daniel; Raio, Luigi
2017-09-05
Preeclampsia (PE) is associated with severe maternal and fetal morbidity in the acute presentation and there is increasing evidence that it is also an important risk factor for cardiovascular disease later in life. Therefore, preventive strategies are of utmost importance. The Fetal Medicine Foundation (FMF) London recently developed a first trimester screening algorithm for placenta-related pregnancy complications, in particular early onset preeclampsia (eoPE) requiring delivery before 34 weeks, and preterm small for gestational age (pSGA), with a birth weight <5th percentile and delivery before 37 weeks of gestation, based on maternal history and characteristics, and biochemical and biophysical parameters. The aim of this study was to test the performance of this algorithm in our setting and to perform an external validation of the screening algorithm. Between September 2013 and April 2016, all consecutive women with singleton pregnancies who agreed to this screening were included in the study. The proposed cut-offs of ≥1:200 for eoPE, and ≥1:150 for pSGA were applied. Risk calculations were performed with Viewpoint® program (GE, Mountainview, CA, USA) and statistical analysis with GraphPad version 5.0 for Windows. 1372 women agreed to PE screening; the 1129 with complete data and a live birth were included in this study. Nineteen (1.68%) developed PE: 14 (1.24%) at term (tPE) and 5 (0.44%) preterm (pPE, <37 weeks), including 2 (0.18%) with eoPE. Overall, 97/1129 (8.6%) screened positive for eoPE, including both pregnancies that resulted in eoPE and 4/5 (80%) that resulted in pPE. Forty-nine of 1110 (4.41%) pregnancies without PE resulted in SGA, 3 (0.27%) of them in pSGA. A total of 210/1110 (18.9%) non-PE pregnancies screened positive for pSGA, including 2/3 (66.7%) of the pSGA deliveries and 18/46 (39.1%) of term SGA infants. Our results show that first trimester PE screening in our population performs well and according to expectations, whereas screening
Fink, Wolfgang; Tarbell, Mark A
2009-12-01
While artificial vision prostheses are quickly becoming a reality, actual testing time with visual prosthesis carriers is at a premium. Moreover, it is helpful to have a more realistic functional approximation of a blind subject. Instead of a normal subject with a healthy retina looking at a low-resolution (pixelated) image on a computer monitor or head-mounted display, a more realistic approximation is achieved by employing a subject-independent mobile robotic platform that uses a pixelated view as its sole visual input for navigation purposes. We introduce CYCLOPS: an all-wheel-drive, remote-controllable mobile robotic platform that serves as a testbed for real-time image processing and autonomous navigation systems for the purpose of enhancing the visual experience afforded by visual prosthesis carriers. Complete with wireless Internet connectivity and a fully articulated digital camera with wireless video link, CYCLOPS supports both interactive tele-commanding via joystick and autonomous self-commanding. Due to its onboard computing capabilities and extended battery life, CYCLOPS can perform complex and numerically intensive calculations, such as image processing and autonomous navigation algorithms, in addition to interfacing with additional sensors. Its Internet connectivity renders CYCLOPS a worldwide-accessible testbed for researchers in the field of artificial vision systems. CYCLOPS enables subject-independent evaluation and validation of image processing and autonomous navigation systems with respect to the utility and efficiency of supporting and enhancing visual prostheses, while potentially reducing to a necessary minimum the need for valuable testing time with actual visual prosthesis carriers.
Aircraft Derived Data Validation Algorithms
2012-08-06
to be equipped with Flight Management Systems (FMSs) that use sophisticated digital computers to assist pilots, allowing them to fly more fuel... Some basic data is prepared, including calculations of aircraft position projected onto a three-dimensional Cartesian coordinate system.
Contact Modelling in Resistance Welding, Part II: Experimental Validation
Song, Quanfeng; Zhang, Wenqi; Bay, Niels
2006-01-01
Contact algorithms in resistance welding presented in the previous paper are experimentally validated in the present paper. In order to verify the mechanical contact algorithm, two types of experiments, i.e. sandwich upsetting of circular, cylindrical specimens and compression tests of discs with a solid ring projection towards a flat ring, are carried out at room temperature. The complete algorithm, involving not only the mechanical model but also the thermal and electrical models, is validated by projection welding experiments. The experimental results are in satisfactory agreement...
On the Hopcroft's minimization algorithm
Paun, Andrei
2007-01-01
We show that the absolute worst-case time complexity of Hopcroft's minimization algorithm applied to unary languages is reached only for de Bruijn words. A previous paper by Berstel and Carton gave the example of de Bruijn words as a language that requires O(n log n) steps when the splitting sets are carefully chosen and processed in FIFO order. We refine this result by showing that the Berstel/Carton example is in fact the absolute worst case for unary languages. We also show that a LIFO implementation does not reach this worst-case time complexity for unary languages. Lastly, we show that the same result also holds for cover automata and the modification of Hopcroft's algorithm used in the minimization of cover automata.
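For reference, a compact sketch of Hopcroft's partition-refinement algorithm; the worklist here is an unordered set, whereas the FIFO vs. LIFO processing order analyzed above concerns exactly how this worklist is serviced. The unary DFA at the end is a made-up example, not a de Bruijn construction.

```python
# Sketch of Hopcroft's DFA minimization by partition refinement.
# `delta` must be a complete transition function: (state, symbol) -> state.
def hopcroft_minimize(states, alphabet, delta, finals):
    finals = frozenset(finals)
    nonfinals = frozenset(states) - finals
    P = {finals, nonfinals} - {frozenset()}   # current partition
    W = set(P)                                # worklist of splitter blocks
    while W:
        A = W.pop()
        for c in alphabet:
            # X = states that reach the splitter A on symbol c
            X = {s for s in states if delta[(s, c)] in A}
            for Y in list(P):
                inter, diff = Y & X, Y - X
                if inter and diff:            # Y is split by X
                    P.remove(Y)
                    P.update((inter, diff))
                    if Y in W:
                        W.remove(Y)
                        W.update((inter, diff))
                    else:                     # enqueue only the smaller half
                        W.add(inter if len(inter) <= len(diff) else diff)
    return P

# Unary DFA counting input length mod 4, accepting even lengths:
P = hopcroft_minimize(range(4), "a",
                      {(s, "a"): (s + 1) % 4 for s in range(4)}, {0, 2})
```

The "smaller half" rule in the else-branch is what yields the O(n log n) bound; the worst-case analyses discussed above study when that bound is actually attained.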
Heinrich, Josué Miguel; Niizawa, Ignacio; Botta, Fausto Adrián; Trombert, Alejandro Raúl; Irazoqui, Horacio Antonio
2012-01-01
In a previous study, we developed a methodology to assess the intrinsic optical properties governing the radiation field in algae suspensions. With these properties at our disposal, a Monte Carlo simulation program is developed and used in this study as a predictive, autonomous program applied to the simulation of experiments that reproduce the illumination conditions commonly found in large-scale production of microalgae, especially in open ponds such as raceway ponds. The simulation module is validated by comparing the results of experimental measurements made on artificially illuminated algal suspensions with those predicted by the Monte Carlo program. The experiment deals with a situation that resembles that of an open pond or a raceway pond, except that, for convenience, the experimental arrangement appears as if those reactors were turned upside down. It serves the purpose of assessing to what extent scattering phenomena are important for the prediction of the spatial distribution of the radiant energy density. The simulation module developed can be applied to compute the local energy density inside photobioreactors with the goal of optimizing their design and operating conditions.
Lyoo, Chul Hyoung; Zanotti-Fregonara, Paolo; Zoghbi, Sami S; Liow, Jeih-San; Xu, Rong; Pike, Victor W; Zarate, Carlos A; Fujita, Masahiro; Innis, Robert B
2014-01-01
Image-derived input function (IDIF) obtained by manually drawing carotid arteries (manual-IDIF) can be reliably used in [(11)C](R)-rolipram positron emission tomography (PET) scans. However, manual-IDIF is time consuming and subject to inter- and intra-operator variability. To overcome this limitation, we developed a fully automated technique for deriving IDIF with a supervised clustering algorithm (SVCA). To validate this technique, 25 healthy controls and 26 patients with moderate to severe major depressive disorder (MDD) underwent T1-weighted brain magnetic resonance imaging (MRI) and a 90-minute [(11)C](R)-rolipram PET scan. For each subject, the metabolite-corrected input function was measured from the radial artery. SVCA templates were obtained from 10 additional healthy subjects who underwent the same MRI and PET procedures. Cluster-IDIF was obtained as follows: 1) template mask images were created for carotid and surrounding tissue; 2) a parametric image of weights for blood was created using SVCA; 3) mask images were inversely normalized to the individual PET image; 4) carotid and surrounding tissue time activity curves (TACs) were obtained from weighted and unweighted averages of each voxel activity in each mask, respectively; 5) partial volume effects and radiometabolites were corrected using individual arterial data at four points. Logan distribution volume (V_T/f_P) values obtained by cluster-IDIF were similar to reference results obtained using arterial data, as well as those obtained using manual-IDIF; 39 of 51 subjects had a V_T/f_P error of 10%. With automatic voxel selection, cluster-IDIF curves were less noisy than manual-IDIF and free of operator-related variability. Cluster-IDIF showed a widespread decrease of about 20% in [(11)C](R)-rolipram binding in the MDD group. Taken together, the results suggest that cluster-IDIF is a good alternative to a full arterial input function for estimating Logan V_T/f_P in [(11)C](R)-rolipram PET clinical scans. This
Lazaro, D
2003-10-01
Monte Carlo simulations are currently considered in nuclear medical imaging as a powerful tool to design and optimize detection systems, and also to assess reconstruction algorithms and correction methods for degrading physical effects. Among the many simulators available, none is considered a standard in nuclear medical imaging: this fact motivated the development of a new generic Monte Carlo simulation platform (GATE), based on GEANT4 and dedicated to SPECT/PET (single photon emission computed tomography / positron emission tomography) applications. During this thesis, we participated in the development of the GATE platform within an international collaboration. GATE was validated in SPECT by modeling two gamma cameras with different geometries, one dedicated to small-animal imaging and the other used in a clinical context (Philips AXIS), and by comparing the results of GATE simulations with experimental data. The simulation results accurately reproduce the measured performances of both gamma cameras. The GATE platform was then used to develop a new 3-dimensional reconstruction method, F3DMC (fully 3-dimensional Monte Carlo), which consists in computing with Monte Carlo simulation the transition matrix used in an iterative reconstruction algorithm (in this case, ML-EM), including within the transition matrix the main physical effects degrading the image formation process. The results obtained with F3DMC were compared to those obtained with three more conventional methods (FBP, MLEM, MLEMC) for different phantoms. The results of this study show that F3DMC improves reconstruction efficiency, spatial resolution and signal-to-noise ratio with satisfactory quantification of the images. These results should be confirmed by clinical experiments, and they open the door to a unified reconstruction method that could be applied in SPECT but also in PET. (author)
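The ML-EM update at the core of F3DMC can be sketched generically. The toy 3-detector, 2-pixel transition matrix below is invented for illustration; in F3DMC it would be computed by Monte Carlo simulation.

```python
# Minimal ML-EM sketch on a toy system: A[d][p] = P(detector d | emission in pixel p).
A = [[0.8, 0.2],
     [0.5, 0.5],
     [0.1, 0.9]]
true_x = [4.0, 6.0]
# Noiseless projection data y = A @ true_x
y = [sum(A[d][p] * true_x[p] for p in range(2)) for d in range(3)]

x = [1.0, 1.0]                                              # uniform initial estimate
sens = [sum(A[d][p] for d in range(3)) for p in range(2)]   # sensitivity, A^T 1
for _ in range(200):
    proj = [sum(A[d][p] * x[p] for p in range(2)) for d in range(3)]
    ratio = [y[d] / proj[d] for d in range(3)]
    # multiplicative ML-EM update: x <- x * A^T(y / Ax) / A^T 1
    x = [x[p] * sum(A[d][p] * ratio[d] for d in range(3)) / sens[p]
         for p in range(2)]

print([round(v, 3) for v in x])   # converges toward the true emissions [4.0, 6.0]
```

With consistent noiseless data the iteration recovers the true pixel values; with real noisy data it converges to a maximum-likelihood estimate instead.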
Novel algorithm for management of acute epididymitis.
Hongo, Hiroshi; Kikuchi, Eiji; Matsumoto, Kazuhiro; Yazawa, Satoshi; Kanao, Kent; Kosaka, Takeo; Mizuno, Ryuichi; Miyajima, Akira; Saito, Shiro; Oya, Mototsugu
2017-01-01
To identify predictive factors for the severity of epididymitis and to develop an algorithm guiding decisions on how to manage patients with this disease. A retrospective study was carried out on 160 epididymitis patients at Keio University Hospital. We classified cases into severe and non-severe groups, and compared clinical findings at the first visit. Based on statistical analyses, we developed an algorithm for predicting severe cases. We validated the algorithm by applying it to an external cohort of 96 patients at Tokyo Medical Center. The efficacy of the algorithm was investigated by a decision curve analysis. A total of 19 patients (11.9%) had severe epididymitis. Patient characteristics including older age, previous history of diabetes mellitus and fever, as well as laboratory data including a higher white blood cell count, C-reactive protein level and blood urea nitrogen level were independently associated with severity. A predictive algorithm was created with the ability to classify epididymitis cases into three risk groups. In the Keio University Hospital cohort, 100%, 23.5%, and 3.4% of cases in the high-, intermediate-, and low-risk groups, respectively, became severe. The specificity of the algorithm for predicting severe epididymitis proved to be 100% in the Keio University Hospital cohort and 98.8% in the Tokyo Medical Center cohort. The decision curve analysis also showed the high efficacy of the algorithm. This algorithm might aid in decision-making for the clinical management of acute epididymitis. © 2016 The Japanese Urological Association.
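A point-based three-tier stratification of the kind the paper derives could look like the following sketch. The predictors match those reported (age, diabetes, fever, white blood cell count, C-reactive protein, blood urea nitrogen), but every cut-off and the point scheme are hypothetical, not the published algorithm.

```python
# Hypothetical three-tier risk classifier; all thresholds are invented for illustration.
def risk_group(age, crp, wbc, bun, diabetes, fever):
    points = 0
    points += age >= 65          # older age (hypothetical cut-off)
    points += crp >= 10.0        # C-reactive protein, mg/dL (hypothetical)
    points += wbc >= 12000       # white blood cells per uL (hypothetical)
    points += bun >= 20.0        # blood urea nitrogen, mg/dL (hypothetical)
    points += diabetes           # history of diabetes mellitus
    points += fever
    if points >= 4:
        return "high"
    if points >= 2:
        return "intermediate"
    return "low"

print(risk_group(age=72, crp=15.0, wbc=14000, bun=25.0, diabetes=True, fever=True))
```

The published algorithm classified 100%, 23.5% and 3.4% of high-, intermediate- and low-risk cases as severe; a sketch like this only illustrates the decision structure.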
No Previous Public Services Required
Taylor, Kelley R.
2009-01-01
In 2007, the Supreme Court heard a case that involved the question of whether a school district could be required to reimburse parents who unilaterally placed their child in private school when the child had not previously received special education and related services in a public institution ("Board of Education v. Tom F."). The…
Vergés, Alvaro; Steinley, Douglas; Trull, Timothy J.; Sher, Kenneth J.
2010-01-01
The validity of the abuse/dependence distinction within alcohol use disorders (AUDs) has been increasingly questioned on psychometric and conceptual grounds. Two types of findings are often cited as support for the validity of this distinction: (1) dependence is more persistent than abuse, and (2) dependence is more highly comorbid with other Axis I and Axis II disorders than is abuse. Using data from the National Epidemiologic Survey of Alcohol and Related Conditions (NESARC), we examine...
A new incremental updating algorithm for association rules
WANG Zuo-cheng; XUE Li-xia
2007-01-01
Incremental data mining is an attractive goal for many kinds of mining in large databases or data warehouses. A new incremental updating algorithm, the rule growing algorithm (RGA), is presented for efficient maintenance of discovered association rules when new transaction data are added to a transaction database. RGA uses previously discovered association rules as seed rules. With RGA, whether the seed rules are strong or not can be confirmed without scanning the whole transaction database in most cases. If the distribution of items in the transaction database is not uniform, the inflexion of the robustness curve comes very quickly and RGA achieves great efficiency, saving much I/O time. Experiments validate the algorithm, and the test results show that it is efficient.
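The incremental bookkeeping idea, updating the support counts of previously discovered (seed) rules using only the newly added transactions so the original database need not be rescanned, can be sketched as follows. The data structures and the update rule are assumed for illustration, not taken from the RGA paper.

```python
# Sketch of incremental support maintenance for seed itemsets (assumed bookkeeping).
def update_support(seed_counts, old_n, new_transactions, min_support):
    n = old_n + len(new_transactions)
    for itemset in seed_counts:
        # count only the NEW transactions; old counts are reused, not rescanned
        seed_counts[itemset] += sum(
            1 for t in new_transactions if set(itemset) <= t)
    # a seed rule stays "strong" if its updated relative support holds up
    return {i: c / n for i, c in seed_counts.items() if c / n >= min_support}

old = {("bread", "milk"): 60, ("beer", "chips"): 25}   # counts from 100 old transactions
new = [{"bread", "milk", "eggs"}, {"beer"}, {"bread", "milk"}]
kept = update_support(old, 100, new, min_support=0.5)
print(kept)
```

Only the three new transactions are scanned; ("bread", "milk") stays strong at 62/103 support while ("beer", "chips") falls below the threshold and is pruned.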
Kostsov, Vladimir; Ionov, Dmitry; Biryukov, Egor; Zaitsev, Nikita
2017-04-01
A built-in operational regression algorithm (REA) for liquid water path (LWP) retrieval, supplied by the manufacturer of the RPG-HATPRO microwave radiometer, has been compared to a so-called physical algorithm (PHA) based on the inversion of the radiative transfer equation. The comparison has been performed for different scenarios of microwave observations by the RPG-HATPRO instrument, which has been operating at St. Petersburg University since June 2012. The data for the scenarios were collected within the period December 2012 - December 2014. Estimates of bias and random error for both REA and PHA have been obtained. Special attention has been paid to the quality of LWP retrievals during and after rain events detected by the built-in rain sensor, and the time period after a rain event during which the retrieval quality must be considered insufficient has been estimated.
Herzfeld, Ute C.; Trantow, Thomas M.; Harding, David; Dabney, Philip W.
2017-01-01
Glacial acceleration is a main source of uncertainty in sea-level-change assessment. Measurement of ice-surface heights with a spatial and temporal resolution that not only allows elevation-change calculation, but also captures ice-surface morphology and its changes, is required to aid in investigations of the geophysical processes associated with glacial acceleration. The Advanced Topographic Laser Altimeter System aboard NASA's future ICESat-2 mission (launch 2017) will implement multibeam micropulse photon-counting lidar altimetry aimed at measuring ice-surface heights at 0.7-m along-track spacing. The instrument is designed to resolve spatial and temporal variability of rapidly changing glaciers and ice sheets and the Arctic sea ice. The new technology requires the development of a new mathematical algorithm for the retrieval of height information. We introduce the density-dimension algorithm (DDA), which utilizes the radial basis function to calculate a weighted density as a form of data aggregation in the photon cloud and considers density an additional dimension as an aid in auto-adaptive threshold determination. The auto-adaptive capability of the algorithm is necessary to separate returns from noise and signal photons under changing environmental conditions. The algorithm is evaluated using data collected with an ICESat-2 simulator instrument, the Slope Imaging Multi-polarization Photon-counting Lidar, over the heavily crevassed Giesecke Bræer in Northwestern Greenland in summer 2015. Results demonstrate that ICESat-2 may be expected to provide ice-surface height measurements over crevassed glaciers and other complex ice surfaces. The DDA is generally applicable for the analysis of airborne and spaceborne micropulse photon-counting lidar data over complex and simple surfaces.
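The density idea behind the DDA can be illustrated on a toy photon cloud: each photon receives a Gaussian radial-basis-weighted neighbour count, and a threshold on that density separates dense signal photons from sparse background noise. The kernel width and the fixed threshold below are invented for illustration; the actual DDA determines its threshold auto-adaptively.

```python
import math

# Gaussian-RBF-weighted density for each photon (toy sketch of the DDA density step).
def rbf_density(points, sigma=1.0):
    dens = []
    for x, y in points:
        d = sum(math.exp(-((x - u) ** 2 + (y - v) ** 2) / (2 * sigma ** 2))
                for u, v in points)
        dens.append(d)
    return dens

signal = [(i * 0.1, 0.0) for i in range(30)]    # dense along-track "surface" returns
noise = [(1.0, 5.0), (2.5, -4.0), (0.3, 8.0)]   # isolated background photons
pts = signal + noise
dens = rbf_density(pts)
threshold = 5.0                 # fixed toy threshold; the real DDA is auto-adaptive
labels = [d > threshold for d in dens]
print(sum(labels), "of", len(pts), "photons classified as signal")   # 30 of 33
```

Dense surface photons accumulate density well above any isolated noise photon, which is what makes a density threshold an effective signal/noise separator.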
Learning Bayesian networks using genetic algorithm
Chen Fei; Wang Xiufeng; Rao Yimei
2007-01-01
A new method to evaluate the fitness of Bayesian networks according to the observed data is provided. The main advantage of this criterion is that it is suitable for both the complete and incomplete cases, while the others are not; moreover, it greatly facilitates the computation. In order to reduce the search space, the notion of equivalence class proposed by David Chickering is adopted. Instead of using that method directly, the novel criterion, variable ordering, and equivalence classes are combined; moreover, the proposed method avoids some problems caused by the previous one. Then a genetic algorithm, which allows the global convergence that most Bayesian network search methods lack, is applied to search for a good model in this space. To speed up convergence, the genetic algorithm is combined with a greedy algorithm. Finally, simulation shows the validity of the proposed approach.
K. S. Olsen
2015-10-01
Motivated by the initial selection of a high-resolution solar occultation Fourier transform spectrometer (FTS) to fly to Mars on the ExoMars Trace Gas Orbiter, we have been developing algorithms for retrieving volume mixing ratio vertical profiles of trace gases, the primary component of which is a new algorithm and software for retrieving vertical profiles of temperature and pressure from the spectra. In contrast to Earth-observing instruments, which can rely on accurate meteorological models, a priori information, and spacecraft position, Mars retrievals require a method with minimal reliance on such data. The temperature and pressure retrieval algorithms developed for this work were evaluated using Earth-observing spectra from the Atmospheric Chemistry Experiment (ACE) FTS, a solar occultation instrument in orbit since 2003 and the basis for the instrument selected for the Mars mission. ACE-FTS makes multiple measurements during an occultation, separated in altitude by 1.5-5 km, and we analyze 10 CO2 vibration-rotation bands at each altitude, each with a different usable altitude range. We describe the algorithms and present results of their application and their comparison to the ACE-FTS data products. The Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) provides vertical profiles of temperature up to 40 km with high vertical resolution. Using six satellites and GPS radio occultation, COSMIC's data product has excellent temporal and spatial coverage, allowing us to find coincident measurements with ACE under very tight criteria: less than 1.5 h and 150 km. We present an inter-comparison of temperature profiles retrieved from ACE-FTS using our algorithm, that of the ACE Science Team (v3.5), and from COSMIC. When our retrievals are compared to ACE-FTS v3.5, we find mean differences between −5 and +2 K, and that our retrieved profiles have no seasonal or zonal biases, but do have a warm bias in the stratosphere and
Hoang Duc, Albert K., E-mail: albert.hoangduc.ucl@gmail.com; McClelland, Jamie; Modat, Marc; Cardoso, M. Jorge; Mendelson, Alex F. [Center for Medical Image Computing, University College London, London WC1E 6BT (United Kingdom); Eminowicz, Gemma; Mendes, Ruheena; Wong, Swee-Ling; D’Souza, Derek [Radiotherapy Department, University College London Hospitals, 235 Euston Road, London NW1 2BU (United Kingdom); Veiga, Catarina [Department of Medical Physics and Bioengineering, University College London, London WC1E 6BT (United Kingdom); Kadir, Timor [Mirada Medical UK, Oxford Center for Innovation, New Road, Oxford OX1 1BY (United Kingdom); Ourselin, Sebastien [Centre for Medical Image Computing, University College London, London WC1E 6BT (United Kingdom)
2015-09-15
Purpose: The aim of this study was to assess whether clinically acceptable segmentations of organs at risk (OARs) in head and neck cancer can be obtained automatically and efficiently using the novel “similarity and truth estimation for propagated segmentations” (STEPS) compared to the traditional “simultaneous truth and performance level estimation” (STAPLE) algorithm. Methods: First, 6 OARs were contoured by 2 radiation oncologists in a dataset of 100 patients with head and neck cancer on planning computed tomography images. Each image in the dataset was then automatically segmented with STAPLE and STEPS using those manual contours. Dice similarity coefficient (DSC) was then used to compare the accuracy of these automatic methods. Second, in a blind experiment, three separate and distinct trained physicians graded manual and automatic segmentations into one of the following three grades: clinically acceptable as determined by universal delineation guidelines (grade A), reasonably acceptable for clinical practice upon manual editing (grade B), and not acceptable (grade C). Finally, STEPS segmentations graded B were selected and one of the physicians manually edited them to grade A. Editing time was recorded. Results: Significant improvements in DSC can be seen when using the STEPS algorithm on large structures such as the brainstem, spinal canal, and left/right parotid compared to the STAPLE algorithm (all p < 0.001). In addition, across all three trained physicians, manual and STEPS segmentation grades were not significantly different for the brainstem, spinal canal, parotid (right/left), and optic chiasm (all p > 0.100). In contrast, STEPS segmentation grades were lower for the eyes (p < 0.001). Across all OARs and all physicians, STEPS produced segmentations graded as well as manual contouring at a rate of 83%, giving a lower bound on this rate of 80% with 95% confidence. Reduction in manual interaction time was on average 61% and 93% when automatic
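The Dice similarity coefficient used to compare automatic and manual contours is DSC = 2|A ∩ B| / (|A| + |B|) over voxel sets; a minimal sketch with toy 2D masks:

```python
# Dice similarity coefficient on voxel sets (toy masks).
def dice(a, b):
    if not a and not b:
        return 1.0          # convention: two empty contours agree perfectly
    return 2 * len(a & b) / (len(a) + len(b))

manual = {(x, y) for x in range(10) for y in range(10)}      # 10x10 manual contour
auto = {(x, y) for x in range(1, 11) for y in range(10)}     # same contour shifted by 1 voxel
print(round(dice(manual, auto), 3))  # 0.9
```

A one-voxel shift of a 10x10 contour leaves 90 of 100 voxels overlapping, giving DSC = 180/200 = 0.9; identical contours give 1.0.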
Downing Harriet
2012-07-01
Abstract Background: Urinary tract infection (UTI) is common in children, and may cause serious illness and recurrent symptoms. However, obtaining a urine sample from young children in primary care is challenging and not feasible for large numbers. Evidence regarding the predictive value of symptoms, signs and urinalysis for UTI in young children is urgently needed to help primary care clinicians better identify children who should be investigated for UTI. This paper describes the protocol for the Diagnosis of Urinary Tract infection in Young children (DUTY) study. The overall study aim is to derive and validate a cost-effective clinical algorithm for the diagnosis of UTI in children presenting to primary care acutely unwell. Methods/design: DUTY is a multicentre, prospective, diagnostic observational study aiming to recruit at least 7,000 children aged under five years being assessed in primary care for any acute, non-traumatic illness of ≤ 28 days duration. Urine samples will be obtained from eligible consented children, and data collected on medical history and presenting symptoms and signs. Urine samples will be dipstick tested in general practice and sent for microbiological analysis. All children with culture-positive urine and a random sample of children with urine culture results in other, non-positive categories will be followed up to record symptom duration and healthcare resource use. A diagnostic algorithm will be constructed and validated and an economic evaluation conducted. The primary outcome will be a validated diagnostic algorithm using a reference standard of a pure/predominant growth of at least >10^3, but usually >10^5, CFU/mL of one, but no more than two, uropathogens. We will use logistic regression to identify the clinical predictors (i.e. demographic, medical history, presenting signs and symptoms and urine dipstick analysis results) most strongly associated with a positive urine culture result. We will
Entropy Message Passing Algorithm
Ilic, Velimir M; Branimir, Todorovic T
2009-01-01
Message passing over a factor graph can be considered a generalization of many well-known algorithms for efficient marginalization of a multivariate function. A specific instance of the algorithm is obtained by choosing an appropriate commutative semiring for the range of the function to be marginalized. Examples include the Viterbi algorithm, obtained on the max-product semiring, and the forward-backward algorithm, obtained on the sum-product semiring. In this paper, the Entropy Message Passing algorithm (EMP) is developed. It operates over the entropy semiring, previously introduced in automata theory. It is shown how EMP extends the use of message passing over factor graphs to probabilistic model algorithms such as the expectation-maximization algorithm, gradient methods, and computation of model entropy, unifying the work of different authors.
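The semiring abstraction can be made concrete on a tiny two-variable chain: the same forward pass yields the total probability mass on the sum-product semiring and the Viterbi path score on the max-product semiring. The factors are toy values; the entropy semiring of the paper carries pairs (p, −p log p) with its own addition and multiplication, but plugs into the identical message-passing skeleton.

```python
# Semiring-generic forward pass on a chain p(x1, x2) = f(x1) * g(x1, x2).
def forward(f, g, add, mul):
    # message from x1 into x2, then marginalize x2 out with the semiring's "add"
    msg = {x2: add(mul(f[x1], g[(x1, x2)]) for x1 in f) for x2 in (0, 1)}
    return add(msg.values())

f = {0: 0.6, 1: 0.4}
g = {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.8}

total = forward(f, g, sum, lambda a, b: a * b)   # sum-product: total mass (1.0 here)
best = forward(f, g, max, lambda a, b: a * b)    # max-product: best path score (0.54 here)
print(total, best)
```

Swapping only the `add` operation turns marginalization into maximization; that substitution is exactly what distinguishes forward-backward from Viterbi, and EMP makes the same move with the entropy semiring.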
Validation and Analysis of a Water Column Correction Algorithm at Sanya Bay
杨超宇; 杨顶田; 叶海彬; 曹文熙
2011-01-01
Water column correction is a key difficulty in optical shallow-water remote sensing. To improve the accuracy of benthic remote sensing at Sanya Bay, where optical properties are complex, two approaches were used to remove the water-column signal, one based on simulating the underwater light field in idealized water and the other on Christian's model, in an attempt to extract bottom spectral reflectance from the below-surface remote sensing reflectance. The two models produced markedly different results: the correlations between their simulated and in situ measured values were 0.93 and 0.26, respectively. The failure of Christian's model inside the bay is attributed to the difference between optically shallow and optically deep water at Sanya Bay. The optical properties inside the bay are complex and vary greatly with location; even adjacent stations within optically shallow water have substantially different attenuation coefficients, so Christian's model can hardly exploit its advantages in this area. Conversely, the bottom reflectance calculated by the algorithm based on simulating the underwater light field in idealized water agreed well with in situ measured bottom reflectance, although the calculated reflectance was lower than the measured values between 400 and 500 nm. It is therefore reasonable to extract bottom information with this water column correction algorithm in local bay areas where optical properties are complex.
Nouizi, F.; Erkol, H.; Luk, A.; Marks, M.; Unlu, M. B.; Gulsen, G.
2016-10-01
We previously introduced photo-magnetic imaging (PMI), an imaging technique that illuminates the medium under investigation with near-infrared light and measures the induced temperature increase using magnetic resonance thermometry (MRT). Using a multiphysics solver combining photon migration and heat diffusion, PMI models the spatiotemporal distribution of temperature variation and recovers high-resolution optical absorption images from these temperature maps. In this paper, we present a new fast non-iterative reconstruction algorithm for PMI. This new algorithm uses analytic methods during the resolution of the forward problem and the assembly of the sensitivity matrix. We validate our new analytic-based algorithm against the first-generation finite element method (FEM) based reconstruction algorithm previously developed by our team. The validation is performed using first synthetic data and afterwards real MRT-measured temperature maps. Our new method accelerates the reconstruction process 30-fold compared to a single iteration of the FEM-based algorithm.
闫欢欢; 李晓静; 张兴赢; 王维和; 陈良富; 张美根; 徐晋
2016-01-01
Meter for Atmospheric CHartographY (SCIAMACHY), and Ozone Monitoring Instrument (OMI) have high SO2 monitoring capability. The OMI, which was launched on the EOS/Aura platform in July 2004, has the same hyperspectral measurements as GOME and SCIAMACHY, but offers improved spatial resolution at nadir (13 × 24 km2) and daily global coverage for short-lifetime SO2. For the OMI operational SO2 planetary boundary layer (PBL) retrieval, the previous band residual difference (BRD) algorithm has been replaced by a principal component analysis (PCA) algorithm, which effectively reduces the systematic biases in SO2 column retrievals. However, there are few studies on the evaluation and validation of PCA SO2 retrievals over China, and long-term comparisons with BRD SO2 retrievals also need to be conducted. In this study, the accuracies of PCA and BRD SO2 retrievals are validated using ground-based multi-axis differential optical absorption spectroscopy (MAX-DOAS) located in Beijing, and the regional atmospheric modeling system / community multi-scale air quality (RAMS-CMAQ) modeling system, which can simulate the vertical distribution of atmospheric SO2. Moreover, BRD and PCA SO2 retrievals over oceanic areas, eastern China and the Reunion volcanic eruption are compared to find the long-term trend and spatiotemporal differences between SO2 columns. Finally, the uncertainties of SO2 retrieval caused by measurement errors, band selection and input parameter errors in the radiative transfer model are analysed to understand the limitations of the BRD and PCA algorithms. Results show that both PCA and BRD SO2 retrievals over Beijing are lower than ground-based MAX-DOAS measurements of SO2. PCA and BRD SO2 retrievals over eastern China are lower than the simulated SO2 columns from RAMS-CMAQ in winter 2008, but in July and August BRD SO2 columns are higher than RAMS-CMAQ simulations.
The values of SO2 columns from BRD over China are more consistent with those from ground-based MAX-DOAS and
Akbari, H.; Rainer, L.; Heinemeier, K.; Huang, J.; Franconi, E.
1993-01-01
The Southern California Edison Company (SCE) has conducted an extensive metering project in which electricity end use in 53 commercial buildings in Southern California has been measured. The building types monitored include offices, retail stores, groceries, restaurants, and warehouses. One year (June 1989 through May 1990) of the SCE measured hourly end-use data are reviewed in this report. Annual whole-building and end-use energy use intensities (EUIs) and monthly load shapes (LSs) have been calculated for the different building types based on the monitored data. This report compares the monitored buildings' EUIs and LSs to EUIs and LSs determined using whole-building load data and the End-Use Disaggregation Algorithm (EDA). Two sets of EDA-determined EUIs and LSs are compared to the monitored data values. The data sets represent: (1) average buildings in the SCE service territory and (2) specific buildings that were monitored.
Bascil, M Serdar; Temurtas, Feyzullah
2011-06-01
In this study, a hepatitis disease diagnosis study was realized using a neural network structure. For this purpose, a multilayer neural network was used, with the Levenberg-Marquardt algorithm as the training algorithm for updating the network weights. The results were compared with those of previous studies on hepatitis disease diagnosis that used the same UCI machine learning database. We obtained a classification accuracy of 91.87% via tenfold cross validation.
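The tenfold cross-validation bookkeeping behind such an accuracy figure can be sketched as follows; a nearest-centroid classifier stands in for the neural network, and the data are synthetic, so only the fold mechanics are illustrated.

```python
# k-fold cross-validation accuracy: train on k-1 folds, score on the held-out fold.
def kfold_accuracy(X, y, k, fit, predict):
    hits = 0
    folds = [list(range(i, len(X), k)) for i in range(k)]   # interleaved folds
    for fold in folds:
        train = [i for i in range(len(X)) if i not in fold]
        model = fit([X[i] for i in train], [y[i] for i in train])
        hits += sum(predict(model, X[i]) == y[i] for i in fold)
    return hits / len(X)

def fit(X, y):                    # per-class mean (centroid) as a stand-in classifier
    cents = {}
    for c in set(y):
        pts = [x for x, label in zip(X, y) if label == c]
        cents[c] = [sum(col) / len(pts) for col in zip(*pts)]
    return cents

def predict(cents, x):            # assign to the nearest centroid
    return min(cents, key=lambda c: sum((a - b) ** 2 for a, b in zip(x, cents[c])))

# Two well-separated synthetic classes, so every fold classifies perfectly.
X = [[i, i] for i in range(20)] + [[i + 100, i + 100] for i in range(20)]
y = [0] * 20 + [1] * 20
print(kfold_accuracy(X, y, 10, fit, predict))  # 1.0
```

Every sample is scored exactly once while held out of training, which is what makes the resulting accuracy an honest estimate of generalization.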
Empirical Testing of an Algorithm for Defining Somatization in Children
Eisman, Howard D.; Fogel, Joshua; Lazarovich, Regina; Pustilnik, Inna
2007-01-01
Introduction: A previous article proposed an algorithm for defining somatization in children by classifying them into three categories: well, medically ill, and somatizer; the authors suggested further empirical validation of the algorithm (Postilnik et al., 2006). We use the Child Behavior Checklist (CBCL) to provide this empirical validation. Method: Parents of children seen in pediatric clinics completed the CBCL (n=126). The physicians of these children completed specially designed questionnaires. The sample comprised 62 boys and 64 girls (age range 2 to 15 years). Classification categories included: well (n=53), medically ill (n=55), and somatizer (n=18). Analysis of variance (ANOVA) was used for statistical comparisons. Discriminant function analysis was conducted with the CBCL subscales. Results: There were significant differences between the classification categories for the somatic complaints subscale. The algorithm proposed by Postilnik et al. (2006) shows promise for classification of children and adolescents with somatic symptoms. PMID:18421368
Jethva, Hiren; Torres, Omar; Remer, Lorraine; Redemann, Jens; Livingston, John; Dunagan, Stephen; Shinozuka, Yohei; Kacenelenbogen, Meloe; Segal Rosenheimer, Michal; Spurr, Rob
2016-10-01
We present the validation analysis of above-cloud aerosol optical depth (ACAOD) retrieved from the "color ratio" method applied to MODIS cloudy-sky reflectance measurements using the limited direct measurements made by NASA's airborne Ames Airborne Tracking Sunphotometer (AATS) and Spectrometer for Sky-Scanning, Sun-Tracking Atmospheric Research (4STAR) sensors. A thorough search of the airborne database collection revealed a total of five significant events in which an airborne sun photometer, coincident with the MODIS overpass, observed partially absorbing aerosols emitted from agricultural biomass burning, dust, and wildfires over a low-level cloud deck during the SAFARI-2000, ACE-ASIA 2001, and SEAC4RS 2013 campaigns, respectively. The co-located satellite-airborne matchups revealed a good agreement (root-mean-square difference < 0.1), with most matchups falling within the estimated uncertainties associated with the MODIS retrievals (about −10 to +50%). The co-retrieved cloud optical depth was comparable to that of the MODIS operational cloud product for ACE-ASIA and SEAC4RS, but higher by 30-50% for the SAFARI-2000 case study. The reason for this discrepancy could be attributed to the distinct aerosol optical properties encountered during the respective campaigns. A brief discussion on the sources of uncertainty in the satellite-based ACAOD retrieval and co-location procedure is presented. Field experiments dedicated to making direct measurements of aerosols above cloud are needed for the extensive validation of satellite-based retrievals.
Sato, S; Miyabe, Y; Nakata, M; Tsuruta, Y; Nakamura, M; Mizowaki, T; Hiraoka, M
2012-06-01
To evaluate the dosimetric accuracy of the Acuros XB dose calculation algorithm for a 4 MV photon beam. A 4 MV beam (Clinac-6EX) and the AAA and Acuros XB algorithms (pre-release version 11.0.03) were used in this study. The differences of the AAA (EAAA) and Acuros XB (EAXB) calculations from measurement were evaluated for depth doses to 25 cm depth and dose profiles within water and slab phantoms (water, lung and bone equivalent). In addition, clinical cases, including three whole-breast plans and three head and neck IMRT plans, were evaluated. First the AAA plans were calculated, then the Acuros XB plans were recalculated with dose-to-medium using identical beam setup and monitor units as in the AAA plans. In the water phantom study, EAAA and EAXB were up to 2.2% and 1.5% in the depth doses for the open field (field size = 4-40 cm square), respectively. Under heterogeneity conditions, EAAA and EAXB were less than 4.4% and 2.2% in the lung region, and less than 12.5% and 6.3% in the bone region, respectively. In the re-buildup region after passing through the lung phantom, the AAA overestimated the doses by about 10%, whereas Acuros XB agreed with measurement within 3%. Dose profiles with Acuros XB were in better agreement with measurement than those with AAA. In the clinical cases, doses in the skin surface region with Acuros XB were higher than with AAA by at least 10%, and dose differences over 5% appeared in heterogeneous regions. However, DVH shapes of each organ were similar between AAA and Acuros XB within 2%. In the phantom study, Acuros XB showed better agreement with measurement than AAA, especially in heterogeneous and re-buildup regions. In the clinical cases, there were large differences between Acuros XB and AAA in the surface region. Evaluation agreement of non-clinical versions of Acuros XB with Varian Medical Systems. © 2012 American Association of Physicists in Medicine.
DNA Coding Based Knowledge Discovery Algorithm
LI Ji-yun; GENG Zhao-feng; SHAO Shi-huang
2002-01-01
A novel DNA-coding-based knowledge discovery algorithm was proposed, and an example verifying its validity was given. It is shown that this algorithm can efficiently discover new simplified rules from the original rule set.
Kumar, Pankaj; Al-Shafai, Mashael; Al Muftah, Wadha Ahmed; Chalhoub, Nader; Elsaid, Mahmoud F; Aleem, Alice Abdel; Suhre, Karsten
2014-10-22
With the diminishing costs of next-generation sequencing (NGS), whole genome analysis has become a standard tool for identifying the genetic causes of inherited diseases. Commercial NGS service providers in general not only provide raw genomic reads, but also deliver SNP calls to their clients. However, the question arises for the user whether to use the SNP data as is, or to process the raw sequencing data further through more sophisticated SNP calling pipelines with more advanced algorithms. Here we report a detailed comparison of SNPs called using the popular GATK multiple-sample calling protocol to SNPs delivered as part of a 40x whole genome sequencing project by Illumina Inc of 171 human genomes of Arab descent (108 unrelated Qatari genomes, 19 trios, and 2 families with rare diseases), and compare them to variants provided by the Illumina CASAVA pipeline. GATK multi-sample calling identifies more variants than the CASAVA pipeline. The additional variants from GATK are robust in terms of Mendelian consistency but weak in terms of statistical parameters such as the Ts/Tv ratio. However, these additional variants do not make a difference in detecting the causative variants for the studied phenotypes. Both pipelines, GATK multi-sample calling and Illumina CASAVA single-sample calling, have highly similar performance in SNP calling at the level of putatively causative variants.
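The Mendelian-consistency check used to vet trio calls can be sketched for a single biallelic site, with genotypes represented as allele pairs. This is an illustrative sketch of the inheritance rule, not the GATK implementation.

```python
# A child genotype at a site is Mendelian-consistent if one allele can come
# from each parent (genotypes as unordered allele pairs).
def mendelian_consistent(child, father, mother):
    a, b = child
    return (a in father and b in mother) or (b in father and a in mother)

# A/G child of an A/A father and G/G mother is consistent...
assert mendelian_consistent(("A", "G"), ("A", "A"), ("G", "G"))
# ...while a G/G child of two A/A parents is a Mendelian violation.
assert not mendelian_consistent(("G", "G"), ("A", "A"), ("A", "A"))
print("trio checks passed")
```

Counting such violations across the 19 trios is one way the robustness of the additional GATK variants could be scored.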
Ferreyra, M; Salinas Aranda, F; Dodat, D; Sansogne, R; Arbiser, S [Vidt Centro Medico, Ciudad Autonoma de Buenos Aires (Argentina)]
2016-06-15
Purpose: To use end-to-end testing to validate a 6 MV high dose rate photon beam, configured for Eclipse AAA algorithm using Golden Beam Data (GBD), for SBRT treatments using RapidArc. Methods: Beam data was configured for Varian Eclipse AAA algorithm using the GBD provided by the vendor. Transverse and diagonals dose profiles, PDDs and output factors down to a field size of 2×2 cm2 were measured on a Varian Trilogy Linac and compared with GBD library using 2% 2mm 1D gamma analysis. The MLC transmission factor and dosimetric leaf gap were determined to characterize the MLC in Eclipse. Mechanical and dosimetric tests were performed combining different gantry rotation speeds, dose rates and leaf speeds to evaluate the delivery system performance according to VMAT accuracy requirements. An end-to-end test was implemented planning several SBRT RapidArc treatments on a CIRS 002LFC IMRT Thorax Phantom. The CT scanner calibration curve was acquired and loaded in Eclipse. PTW 31013 ionization chamber was used with Keithley 35617EBS electrometer for absolute point dose measurements in water and lung equivalent inserts. TPS calculated planar dose distributions were compared to those measured using EPID and MapCheck, as an independent verification method. Results were evaluated with gamma criteria of 2% dose difference and 2mm DTA for 95% of points. Results: GBD set vs. measured data passed 2% 2mm 1D gamma analysis even for small fields. Machine performance tests show results are independent of machine delivery configuration, as expected. Absolute point dosimetry comparison resulted within 4% for the worst case scenario in lung. Over 97% of the points evaluated in dose distributions passed gamma index analysis. Conclusion: Eclipse AAA algorithm configuration of the 6 MV high dose rate photon beam using GBD proved efficient. End-to-end test dose calculation results indicate it can be used clinically for SBRT using RapidArc.
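The 2%/2 mm gamma evaluation used throughout can be sketched for 1D profiles as follows; this is a generic global-gamma sketch, not the analysis software used in the study (distances here in cm, so 2 mm = 0.2):

```python
import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dose_tol=0.02, dist_tol=0.2):
    """Global 1D gamma index: for each reference point, the minimum over
    all evaluated points of sqrt((dose diff / dose criterion)^2 +
    (distance / DTA)^2). dose_tol is a fraction of the maximum reference
    dose (2%); dist_tol is the DTA in the units of x (0.2 cm = 2 mm).
    A point passes when gamma <= 1."""
    norm = dose_tol * np.max(d_ref)
    gamma = np.empty(len(x_ref))
    for i in range(len(x_ref)):
        dose_term = (d_eval - d_ref[i]) / norm
        dist_term = (x_eval - x_ref[i]) / dist_tol
        gamma[i] = np.sqrt(dose_term**2 + dist_term**2).min()
    return gamma

# pass rate, e.g. the "95% of points" criterion:
# np.mean(gamma_1d(x, d_meas, x, d_calc) <= 1.0)
```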
Kleinberg, Jon
2006-01-01
Algorithm Design introduces algorithms by looking at the real-world problems that motivate them. The book teaches students a range of design and analysis techniques for problems that arise in computing applications. The text encourages an understanding of the algorithm design process and an appreciation of the role of algorithms in the broader field of computer science.
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
Self-Organizing Tree Using Cluster Validity
Sasaki, Yasue; Suzuki, Yukinori; Miyamoto, Takayuki; Maeda, Junji
Self-organizing tree (S-TREE) models solve clustering problems by imposing tree-structured constraints on the solution. S-TREE has a self-organizing capacity and performs better than previous tree-structured algorithms. It carries out pruning to reduce the effect of bad leaf nodes when the tree reaches a predetermined maximum size (U). However, it is difficult to determine U beforehand because it is problem-dependent. U limits tree growth and can also prevent self-organization of the tree, which may produce an unnatural clustering. In this paper, we propose a pruning algorithm that does not require U. The algorithm prunes extra nodes based on a significance level of cluster validity and allows the S-TREE to grow through self-organization. The performance of the new algorithm was examined in vector quantization experiments. The results show that this algorithm forms natural leaf nodes without setting a limit on the growth of the S-TREE.
A Cooperative Optimization Algorithm Inspired by Chaos–Order Transition
Fangzhen Ge
2015-01-01
The growing complexity of optimization problems in distributed systems (DSs) has motivated computer scientists to strive for efficient approaches. This paper presents a novel cooperative algorithm inspired by the chaos–order transition in a chaotic ant swarm (CAS). This work analyzes the basic dynamic characteristics of a DS in light of a networked multiagent system at the microlevel, and models a mapping from the state set to the self-organization mechanism set under the guidance of system theory at the macrolevel. A cooperative optimization algorithm (COA) for DSs based on the chaos–order transition of CAS is then devised. To verify the validity of the proposed model and algorithm, we solve a locality-based task allocation problem in a networked multiagent system using COA. Simulations show that our algorithm is feasible and effective compared with previous task allocation approaches, thereby illustrating that our design ideas are correct.
Cross validation based robust-SL0 algorithm for target parameter extraction
贺亚鹏; 庄珊娜; 张燕洪; 朱晓华
2012-01-01
Utilizing the spatial sparsity of radar targets, a compressive sensing based pseudo-random step frequency radar (CS-PRSFR) is studied. First, the CS-PRSFR target echo is analyzed and a target parameter extraction model is constructed. To address the inapplicability of traditional sparse signal reconstruction algorithms when the noise statistics are unknown, a cross-validation based robust SL0 (CV-RSL0) algorithm for extracting target parameters is proposed. Owing to the strong incoherence of its sensing matrix, the CS-PRSFR can achieve higher joint range-velocity resolution. The proposed algorithm requires no prior information about the noise statistics, and its target parameter extraction performance rapidly approaches the lower bound of the best estimator as the signal-to-noise ratio improves. Simulation results demonstrate the correctness and efficiency of this method.
Chen, Jun; Quan, Wenting; Cui, Tingwei
2015-01-01
In this study, two sample semi-analytical algorithms and one new unified multi-band semi-analytical algorithm (UMSA) for estimating chlorophyll-a (Chla) concentration were constructed by specifying optimal wavelengths. The three semi-analytical algorithms, namely the three-band semi-analytical algorithm (TSA), the four-band semi-analytical algorithm (FSA), and the UMSA algorithm, were calibrated and validated against a dataset collected in the Yellow River Estuary between September 1 and 10, 2009. Comparing the accuracy of the TSA, FSA, and UMSA algorithms showed that the UMSA algorithm outperformed the other two. Using the UMSA algorithm to retrieve Chla concentration in the Yellow River Estuary reduced the NRMSE (normalized root mean square error) by 25.54% compared with the FSA algorithm and by 29.66% compared with the TSA algorithm, a very significant improvement upon previous methods. Additionally, the study revealed that the TSA and FSA algorithms are merely specific forms of the UMSA algorithm. Owing to this general form, if the same bands were used for both the TSA and UMSA algorithms, or for the FSA and UMSA algorithms, the UMSA algorithm would theoretically produce results at least as good as those of the TSA and FSA algorithms. Thus, good results may also be expected if the UMSA algorithm were applied to predict Chla concentration for the datasets of Gitelson et al. (2008) and Le et al. (2009).
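For illustration, the widely used three-band reflectance form on which TSA-type algorithms are commonly built can be written as (1/Rrs(λ1) − 1/Rrs(λ2))·Rrs(λ3); whether this paper's TSA uses exactly this form, and any calibration coefficients, are assumptions here:

```python
def three_band_index(r1, r2, r3):
    """Gitelson-type three-band index from remote-sensing reflectances
    Rrs at three wavelengths. Chla is then estimated from a regression
    fit, e.g. chla = a * index + b with site-calibrated a, b
    (hypothetical coefficients, not the paper's)."""
    return (1.0 / r1 - 1.0 / r2) * r3
```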
Asch, T.
2005-12-15
The Pierre Auger Observatory analyses air shower events of ultra high energy cosmic rays. For the first time, the two detector techniques measuring Cherenkov and fluorescence light have been combined to detect primary particles with energies >10{sup 19}eV. The raw data rate, as measured by the telescope's electronics, is on the order of 9 Gigabytes per second. A multi-level trigger system, which systematically reduces the data over several levels of increasing complexity without rejecting important shower events, is necessary. The different trigger levels are realised in hardware as well as in software. A new ansatz for the first software trigger and its functionality is developed and discussed. The trigger is based on previously unused information from the readout electronics. The resulting trigger level is more efficient and rejects sheet lightning better than the present trigger level. Thus, the trigger rate to the next trigger level is decreased and the DAQ system is relieved. Regularly performed calibration methods are essential for an experiment, and the results of the different calibration methods have to be consistent with each other. The single electron resolution of the photomultiplier tubes plays an important role in this context. The single electron resolution is a geometry- and material-dependent factor and was previously known only from Monte Carlo simulations. Its experimental validation through direct measurement, and its importance, are discussed. The measurement was possible with small modifications of the configuration. The measured single electron resolution agrees, within its error, with the value known from Monte Carlo simulations. The low statistical error of 4% indicates a low manufacturing tolerance, so that the resolution can be assumed constant for the type of photomultiplier tubes used. (orig.)
OPTIMIZED STRAPDOWN CONING CORRECTION ALGORITHM
黄磊; 刘建业; 曾庆化
2013-01-01
Traditional coning algorithms are based on the first-order coning correction reference model. Usually they reduce the algorithm error about the coning axis (z) by increasing the number of samples in one iteration interval. But increasing the sample number requires faster sensor output rates, so these algorithms are often limited in practical use. Moreover, the noncommutativity error of rotation usually exists on all three axes, and increasing the sample number has little positive effect on reducing the algorithm errors of the orthogonal axes (x, y). Considering that the errors of the orthogonal axes cannot be neglected in high-precision applications, a coning algorithm with an additional second-order coning correction term is developed to further improve coning algorithm performance. Compared with the traditional algorithms, the new second-order coning algorithm can effectively reduce the algorithm error without increasing the sample number. Theoretical analyses validate that in a low-frequency coning environment the new algorithm performs better than the traditional time-series and frequency-series coning algorithms, while in a maneuver environment it has the same order of accuracy as the traditional time-series and frequency-series algorithms. Finally, the practical feasibility of the new coning algorithm is demonstrated by digital simulations and practical turntable tests.
Joux, Antoine
2009-01-01
Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program. Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applications.
Validation in Fusion Research: Towards Guidelines and Best Practices
Terry, P W; Leboeuf, J -N; McKee, G R; Mikkelsen, D R; Nevins, W M; Newman, D E; Stotler, D P
2008-01-01
Because experiment/model comparisons in magnetic confinement fusion have not yet satisfied the requirements for validation as understood broadly, a set of approaches to validating mathematical models and numerical algorithms are recommended as good practices. Previously identified procedures, such as verification, qualification, and analysis of error and uncertainty, remain important. However, particular challenges intrinsic to fusion plasmas and physical measurement therein lead to identification of new or less familiar concepts that are also critical in validation. These include the primacy hierarchy, which tracks the integration of measurable quantities, and sensitivity analysis, which assesses how model output is apportioned to different sources of variation. The use of validation metrics for individual measurements is extended to multiple measurements, with provisions for the primacy hierarchy and sensitivity. This composite validation metric is essential for quantitatively evaluating comparisons with ex...
Hougardy, Stefan
2016-01-01
Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.
Chang, Ming; Wong, Audrey J S; Raugi, Dana N; Smith, Robert A; Seilie, Annette M; Ortega, Jose P; Bogusz, Kyle M; Sall, Fatima; Ba, Selly; Seydi, Moussa; Gottlieb, Geoffrey S; Coombs, Robert W
2017-01-01
The 2014 CDC 4th generation HIV screening algorithm includes an orthogonal immunoassay to confirm and discriminate HIV-1 and HIV-2 antibodies. Additional nucleic acid testing (NAT) is recommended to resolve indeterminate or undifferentiated HIV seroreactivity. HIV-2 NAT requires a second-line assay to detect HIV-2 total nucleic acid (TNA) in patients' blood cells, as a third of untreated patients have undetectable plasma HIV-2 RNA. To validate a qualitative HIV-2 TNA assay using peripheral blood mononuclear cells (PBMC) from HIV-2-infected Senegalese study participants. We evaluated the assay precision, sensitivity, specificity, and diagnostic performance of an HIV-2 TNA assay. Matched plasma and PBMC samples were collected from 25 HIV-1, 30 HIV-2, 8 HIV-1/-2 dual-seropositive and 25 HIV seronegative individuals. Diagnostic performance was evaluated by comparing the outcome of the TNA assay to the results obtained by the 4th generation HIV screening and confirmatory immunoassays. All PBMC from 30 HIV-2 seropositive participants tested positive for HIV-2 TNA including 23 patients with undetectable plasma RNA. Of the 30 matched plasma specimens, one was HIV non-reactive. Samples from 50 non-HIV-2 infected individuals were confirmed as non-reactive for HIV-2 Ab and negative for HIV-2 TNA. The agreement between HIV-2 TNA and the combined immunoassay results was 98.8% (79/80). Furthermore, HIV-2 TNA was detected in 7 of 8 PBMC specimens from HIV-1/HIV-2 dual-seropositive participants. Our TNA assay detected HIV-2 DNA/RNA in PBMC from serologically HIV-2 reactive, HIV indeterminate or HIV undifferentiated individuals with undetectable plasma RNA, and is suitable for confirming HIV-2 infection in the HIV testing algorithm. Copyright © 2016 Elsevier B.V. All rights reserved.
Abrams, D.; Williams, C.
1999-01-01
This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases for which all known classical algorithms require exponential time.
Tel, G.
1993-01-01
We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of distributed algorithms.
Transformer Fault Diagnosis Based on C-SVC and Cross-validation Algorithm
张艳; 吴玲
2012-01-01
A novel method for power transformer fault diagnosis based on C-SVC (support vector classification with an optimized penalty parameter C) combined with a cross-validation algorithm is presented, which can monitor and detect latent transformer faults timely and accurately. The training and testing sets of the C-SVC algorithm are built from data on the volume fractions of the dissolved gases (hydrogen, methane, ethane, ethylene and acetylene) produced by transformer faults. By automatically optimizing the penalty parameter C and the kernel function parameter γ on the training set, the optimal support vector machine model is obtained, with which the data in the testing set can be classified to determine the fault type. Analysis of practical fault diagnosis examples shows that the method is feasible and efficient, with high fault diagnosis accuracy.
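The C-SVC plus cross-validation scheme boils down to scoring each candidate (C, γ) pair by k-fold accuracy and keeping the best pair. A minimal, library-free sketch of the k-fold scoring skeleton, with a nearest-centroid classifier standing in for the C-SVC (the real method trains a C-SVC per parameter pair):

```python
import numpy as np

def kfold_cross_val(X, y, train_fn, predict_fn, k=5, seed=0):
    """Mean k-fold classification accuracy: the scoring step of the
    cross-validated (C, gamma) grid search described above. Any
    classifier can be plugged in via train_fn/predict_fn."""
    folds = np.array_split(np.random.default_rng(seed).permutation(len(X)), k)
    scores = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        model = train_fn(X[train], y[train])
        scores.append(float(np.mean(predict_fn(model, X[test]) == y[test])))
    return sum(scores) / k

# Nearest-centroid stand-in for the C-SVC classifier (illustrative only;
# the paper would train a C-SVC for each candidate (C, gamma) pair and
# keep the pair with the best cross-validated accuracy).
def train_nc(X, y):
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict_nc(model, X):
    classes = sorted(model)
    dists = np.stack([np.linalg.norm(X - model[c], axis=1) for c in classes])
    return np.array(classes)[dists.argmin(axis=0)]
```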
Enhanced clinical pharmacy service targeting tools: risk-predictive algorithms.
El Hajji, Feras W D; Scullin, Claire; Scott, Michael G; McElnay, James C
2015-04-01
This study aimed to determine the value of using a mix of clinical pharmacy data and routine hospital admission spell data in the development of predictive algorithms. Exploration of risk factors in hospitalized patients, together with the targeting strategies devised, will enable the prioritization of clinical pharmacy services to optimize patient outcomes. Predictive algorithms were developed through a number of detailed steps using a 75% sample of integrated medicines management (IMM) patients, and validated using the remaining 25%. IMM patients receive targeted clinical pharmacy input throughout their hospital stay. The algorithms were applied to the validation sample, and a predicted risk probability was generated for each patient from the coefficients. Risk thresholds for the algorithms were determined by identifying the cut-off points of the risk scores at which the algorithms had the highest discriminative performance. Clinical pharmacy staffing levels were obtained from the pharmacy department staffing database. The numbers of previous emergency admissions and of admission medicines, together with age-adjusted co-morbidity and diuretic receipt, formed the 12-month post-discharge mortality and/or readmission risk algorithm. Age-adjusted co-morbidity proved to be the best index for predicting mortality. Increased clinical pharmacy staffing at ward level was correlated with a reduction in the risk-adjusted mortality index (RAMI). The algorithms created were valid in predicting the risk of in-hospital and post-discharge mortality and the risk of hospital readmission 3, 6 and 12 months post-discharge. The provision of ward-based clinical pharmacy services is a key component in reducing RAMI and enabling the full benefits of pharmacy input to patient care to be realized. © 2014 John Wiley & Sons, Ltd.
Islam, T.; Hulley, G. C.; Malakar, N.; Hook, S. J.
2015-12-01
Land Surface Temperature and Emissivity (LST&E) data are acknowledged as critical Environmental Data Records (EDRs) by the NASA Earth Science Division. The current operational LST EDR for the recently launched Suomi National Polar-orbiting Partnership's (NPP) Visible Infrared Imaging Radiometer Suite (VIIRS) payload utilizes a split-window algorithm that relies on previously-generated fixed emissivity dependent coefficients and does not produce a dynamically varying and multi-spectral land surface emissivity product. Furthermore, this algorithm deviates from its MODIS counterpart (MOD11) resulting in a discontinuity in the MODIS/VIIRS LST time series. This study presents an alternative physics based algorithm for generation of the NASA VIIRS LST&E EDR in order to provide continuity with its MODIS counterpart algorithm (MOD21). The algorithm, known as temperature emissivity separation (TES) algorithm, uses a fast radiative transfer model - Radiative Transfer for (A)TOVS (RTTOV) in combination with an emissivity calibration model to isolate the surface radiance contribution retrieving temperature and emissivity. Further, a new water-vapor scaling (WVS) method is developed and implemented to improve the atmospheric correction process within the TES system. An independent assessment of the VIIRS LST&E outputs is performed against in situ LST measurements and laboratory measured emissivity spectra samples over dedicated validation sites in the Southwest USA. Emissivity retrievals are also validated with the latest ASTER Global Emissivity Database Version 4 (GEDv4). An overview and current status of the algorithm as well as the validation results will be discussed.
A comparison between physicians and computer algorithms for form CMS-2728 data reporting.
Malas, Mohammed Said; Wish, Jay; Moorthi, Ranjani; Grannis, Shaun; Dexter, Paul; Duke, Jon; Moe, Sharon
2017-01-01
The CMS-2728 form (Medical Evidence Report) assesses 23 comorbidities chosen to reflect poor outcomes and increased mortality risk. Previous studies questioned the validity of physician reporting on form CMS-2728. We hypothesize that reporting of comorbidities by computer algorithms identifies more comorbidities than physician completion and is therefore more reflective of the underlying disease burden. We collected data from CMS-2728 forms for all 296 patients who had an incident ESRD diagnosis and received chronic dialysis from 2005 through 2014 at Indiana University outpatient dialysis centers. We analyzed patients' data from electronic medical records systems that collated information from multiple health care sources. Previously utilized algorithms or natural language processing were used to extract data on 10 comorbidities for a period of up to 10 years prior to ESRD incidence. These algorithms incorporate billing codes, prescriptions, and other relevant elements. We compared the presence or unchecked status of these comorbidities on the forms to their presence or absence according to the algorithms. The computer algorithms reported more comorbidities than physician completion of the forms. This remained true when the data span was decreased to one year and only a single health center source was used. The algorithms' determinations were well accepted by a physician panel. Importantly, algorithm use significantly increased the expected deaths and lowered the standardized mortality ratios. Using computer algorithms showed superior identification of comorbidities for form CMS-2728 and altered standardized mortality ratios. Adapting similar algorithms in available EMR systems may offer a more thorough evaluation of comorbidities and improve quality reporting. © 2016 International Society for Hemodialysis.
Solving chemical dynamic optimization problems with ranking-based differential evolution algorithms
Xu Chen; Wenli Du; Feng Qian
2016-01-01
Dynamic optimization problems (DOPs) described by differential equations are often encountered in chemical engineering. Deterministic techniques based on mathematical programming become invalid when the models are non-differentiable or no explicit mathematical description exists. Recently, evolutionary algorithms have been gaining popularity for DOPs, as they can serve as robust alternatives when deterministic techniques are invalid. In this article, a technique named the ranking-based mutation operator (RMO) is presented to enhance previous differential evolution (DE) algorithms for solving DOPs using control vector parameterization. In the RMO, better individuals have higher probabilities of producing offspring, which helps enhance the performance of DE algorithms. Three DE-RMO algorithms are designed by incorporating the RMO. The three DE-RMO algorithms and their three original DE algorithms are applied to four constrained DOPs from the literature. Our simulation results indicate that the DE-RMO algorithms exhibit better performance than the previous non-ranking DE algorithms and four other evolutionary algorithms.
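The core idea, giving better individuals higher selection probability, can be sketched with a rank-proportional selection rule; this illustrates the principle, not the paper's exact operator:

```python
import random

def rank_select(population, costs, rng=random):
    """Ranking-based selection in the RMO spirit: individuals with lower
    cost (better, minimisation) get higher selection probability, here
    proportional to rank (best gets weight len(population), worst gets 1).
    A sketch of the idea, not the paper's exact operator."""
    # sort indices from worst (highest cost) to best (lowest cost)
    order = sorted(range(len(population)), key=lambda i: costs[i], reverse=True)
    weights = [0] * len(population)
    for rank, i in enumerate(order, start=1):  # worst gets weight 1
        weights[i] = rank
    return population[rng.choices(range(len(population)), weights=weights)[0]]
```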
Hay, Alastair D; Birnie, Kate; Busby, John; Delaney, Brendan; Downing, Harriet; Dudley, Jan; Durbaba, Stevo; Fletcher, Margaret; Harman, Kim; Hollingworth, William; Hood, Kerenza; Howe, Robin; Lawton, Michael; Lisles, Catherine; Little, Paul; MacGowan, Alasdair; O'Brien, Kathryn; Pickles, Timothy; Rumsby, Kate; Sterne, Jonathan Ac; Thomas-Jones, Emma; van der Voort, Judith; Waldron, Cherry-Ann; Whiting, Penny; Wootton, Mandy; Butler, Christopher C
2016-01-01
BACKGROUND It is not clear which young children presenting acutely unwell to primary care should be investigated for urinary tract infection (UTI) and whether or not dipstick testing should be used to inform antibiotic treatment. OBJECTIVES To develop algorithms to accurately identify pre-school children in whom urine should be obtained; assess whether or not dipstick urinalysis provides additional diagnostic information; and model algorithm cost-effectiveness. DESIGN Multicentre, prospective diagnostic cohort study. SETTING AND PARTICIPANTS Children < 5 years old presenting to primary care with an acute illness and/or new urinary symptoms. METHODS One hundred and seven clinical characteristics (index tests) were recorded from the child's past medical history, symptoms, physical examination signs and urine dipstick test. Prior to dipstick results clinician opinion of UTI likelihood ('clinical diagnosis') and urine sampling and treatment intentions ('clinical judgement') were recorded. All index tests were measured blind to the reference standard, defined as a pure or predominant uropathogen cultured at ≥ 10(5) colony-forming units (CFU)/ml in a single research laboratory. Urine was collected by clean catch (preferred) or nappy pad. Index tests were sequentially evaluated in two groups, stratified by urine collection method: parent-reported symptoms with clinician-reported signs, and urine dipstick results. Diagnostic accuracy was quantified using area under receiver operating characteristic curve (AUROC) with 95% confidence interval (CI) and bootstrap-validated AUROC, and compared with the 'clinician diagnosis' AUROC. Decision-analytic models were used to identify optimal urine sampling strategy compared with 'clinical judgement'. RESULTS A total of 7163 children were recruited, of whom 50% were female and 49% were < 2 years old. Culture results were available for 5017 (70%); 2740 children provided clean-catch samples, 94% of whom were ≥ 2 years old
DNA Microarray Data Analysis: A Novel Biclustering Algorithm Approach
Tchagang, Alain B.; Tewfik, Ahmed H.
2006-12-01
Biclustering algorithms refer to a distinct class of clustering algorithms that perform simultaneous row-column clustering. Biclustering problems arise in DNA microarray data analysis, collaborative filtering, market research, information retrieval, text mining, electoral trends, exchange analysis, and so forth. When dealing with DNA microarray experimental data for example, the goal of biclustering algorithms is to find submatrices, that is, subgroups of genes and subgroups of conditions, where the genes exhibit highly correlated activities for every condition. In this study, we develop novel biclustering algorithms using basic linear algebra and arithmetic tools. The proposed biclustering algorithms can be used to search for all biclusters with constant values, biclusters with constant values on rows, biclusters with constant values on columns, and biclusters with coherent values from a set of data in a timely manner and without solving any optimization problem. We also show how one of the proposed biclustering algorithms can be adapted to identify biclusters with coherent evolution. The algorithms developed in this study discover all valid biclusters of each type, while almost all previous biclustering approaches will miss some.
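The four bicluster structure types named above (constant, constant rows, constant columns, additively coherent) can each be recognized with elementary linear-algebra checks. A sketch operating on an already-extracted submatrix (the search over submatrices, the hard part of the paper's algorithms, is omitted):

```python
import numpy as np

def bicluster_type(M, tol=1e-9):
    """Classify a candidate bicluster (submatrix) by structure type.
    np.ptp is the max-minus-min range; the additive check removes row
    and column effects and tests whether the residual
    M[i,j] - rowmean_i - colmean_j + mean is (numerically) zero."""
    M = np.asarray(M, dtype=float)
    if np.ptp(M) <= tol:
        return "constant"
    if np.all(np.ptp(M, axis=1) <= tol):
        return "constant rows"       # each row holds a single value
    if np.all(np.ptp(M, axis=0) <= tol):
        return "constant columns"    # each column holds a single value
    resid = M - M.mean(axis=1, keepdims=True) - M.mean(axis=0, keepdims=True) + M.mean()
    if np.ptp(resid) <= tol:
        return "coherent (additive)" # M[i,j] = mu + a_i + b_j
    return "unstructured"
```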
77 FR 70176 - Previous Participation Certification
2012-11-23
... URBAN DEVELOPMENT Previous Participation Certification AGENCY: Office of the Chief Information Officer... digital submission of all data and certifications is available via HUD's secure Internet systems. However...: Previous Participation Certification. OMB Approval Number: 2502-0118. Form Numbers: HUD-2530 ....
New recursive algorithm for matrix inversion
Cao Jianshu; Wang Xuegang
2008-01-01
To reduce the computational complexity of matrix inversion, which constitutes the majority of the processing in many practical applications, two numerically efficient recursive algorithms (called algorithms Ⅰ and Ⅱ, respectively) are presented. Algorithm Ⅰ calculates the inverse of a matrix whose leading principal minors are all nonzero. Algorithm Ⅱ, by which the inverse of an arbitrary nonsingular matrix can be evaluated, is derived by improving algorithm Ⅰ. The implementation of either algorithm involves matrix-vector multiplications and vector outer products. These operations are computationally fast and highly parallelizable. MATLAB simulations show that both recursive algorithms are valid.
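A recursion of this kind can be sketched with the classical bordering (Schur-complement) construction, which grows the inverse of the leading principal submatrices one row and column at a time and uses exactly matrix-vector products and one outer product per step; whether this matches the paper's algorithm Ⅰ in detail is an assumption:

```python
import numpy as np

def recursive_inverse(M):
    """Invert M by extending the inverse of each leading principal
    submatrix (valid when all leading principal minors are nonzero).
    With A^-1 known, u = A^-1 b, v = c A^-1 and the scalar Schur
    complement s = d - c A^-1 b, the bordered inverse is
    [[A^-1 + u v^T / s, -u/s], [-v^T/s, 1/s]]."""
    M = np.asarray(M, dtype=float)
    inv = np.array([[1.0 / M[0, 0]]])
    for k in range(1, M.shape[0]):
        b = M[:k, k]          # new column
        c = M[k, :k]          # new row
        d = M[k, k]
        u = inv @ b           # matrix-vector product
        v = c @ inv           # vector-matrix product
        s = d - c @ u         # scalar Schur complement, must be nonzero
        top_left = inv + np.outer(u, v) / s
        inv = np.block([[top_left, -(u / s)[:, None]],
                        [-(v / s)[None, :], np.array([[1.0 / s]])]])
    return inv
```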
Building Better Nurse Scheduling Algorithms
Aickelin, Uwe
2008-01-01
The aim of this research is twofold: Firstly, to model and solve a complex nurse scheduling problem with an integer programming formulation and evolutionary algorithms. Secondly, to detail a novel statistical method of comparing and hence building better scheduling algorithms by identifying successful algorithm modifications. The comparison method captures the results of algorithms in a single figure that can then be compared using traditional statistical techniques. Thus, the proposed method of comparing algorithms is an objective procedure designed to assist in the process of improving an algorithm. This is achieved even when some results are non-numeric or missing due to infeasibility. The final algorithm outperforms all previous evolutionary algorithms, which relied on human expertise for modification.
Interaction Enhanced Imperialist Competitive Algorithms
Meng-Shiou Li
2012-10-01
Imperialist Competitive Algorithm (ICA) is a new population-based evolutionary algorithm. It divides its population of solutions into several sub-populations, and then searches for the optimal solution through two operations: assimilation and competition. The assimilation operation moves each non-best solution (called a colony) in a sub-population toward the best solution (called the imperialist) in the same sub-population. The competition operation removes a colony from the weakest sub-population and adds it to another sub-population. Previous work on ICA focuses mostly on improving the assimilation operation or replacing it with more powerful meta-heuristics; none focuses on improving the competition operation. Since the competition operation simply moves a colony (i.e., an inferior solution) from one sub-population to another, it incurs weak interaction among the sub-populations. This work proposes Interaction Enhanced ICA, which strengthens the interaction among the imperialists of all sub-populations. Its performance is validated on a set of benchmark functions for global optimization. The results indicate that the performance of Interaction Enhanced ICA is superior to that of ICA and its existing variants.
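The two baseline ICA operations described above can be sketched as follows on a simple sphere benchmark. This is a minimal illustration of standard ICA's assimilation and competition steps (the weak-interaction baseline the paper improves on), with parameter names and the greedy imperialist update chosen here as assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
sphere = lambda x: np.sum(x**2, axis=-1)     # benchmark cost function

def assimilate(empire, beta=2.0):
    """Assimilation: each colony moves a random fraction (up to beta) of the
    way toward the empire's imperialist (its best solution)."""
    step = beta * rng.random((len(empire["colonies"]), 1))
    empire["colonies"] += step * (empire["imp"] - empire["colonies"])

def compete(empires):
    """Competition: take one colony from the weakest empire (highest total
    cost) and hand it to another, randomly chosen empire."""
    totals = [sphere(e["imp"]) +
              (sphere(e["colonies"]).mean() if len(e["colonies"]) else 0.0)
              for e in empires]
    weakest = int(np.argmax(totals))
    if len(empires[weakest]["colonies"]) == 0:
        return
    worst = int(np.argmax(sphere(empires[weakest]["colonies"])))
    colony = empires[weakest]["colonies"][worst]
    empires[weakest]["colonies"] = np.delete(empires[weakest]["colonies"], worst, axis=0)
    target = rng.choice([i for i in range(len(empires)) if i != weakest])
    empires[target]["colonies"] = np.vstack([empires[target]["colonies"], colony])

empires = [{"imp": rng.normal(size=2), "colonies": rng.normal(size=(5, 2))}
           for _ in range(3)]
initial_best = min(sphere(e["imp"]) for e in empires)
for _ in range(50):
    for e in empires:
        assimilate(e)
        if len(e["colonies"]):                # promote a colony that beats the imperialist
            b = int(np.argmin(sphere(e["colonies"])))
            if sphere(e["colonies"][b]) < sphere(e["imp"]):
                e["imp"], e["colonies"][b] = e["colonies"][b].copy(), e["imp"]
    compete(empires)
final_best = min(sphere(e["imp"]) for e in empires)
```

Note how `compete` only relocates a single inferior colony per step, which is exactly the weak inter-empire interaction the proposed variant strengthens.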
Kim, Ye-seul; Park, Hye-suk; Lee, Haeng-Hwa; Choi, Young-Wook; Choi, Jae-Gu; Kim, Hak Hee; Kim, Hee-Joung
2016-02-01
Digital breast tomosynthesis (DBT) is a recently developed system for three-dimensional imaging that offers the potential to reduce the false positives of mammography by preventing tissue overlap. Many qualitative evaluations of digital breast tomosynthesis were previously performed by using a phantom with an unrealistic model and with heterogeneous background and noise, which is not representative of real breasts. The purpose of the present work was to compare reconstruction algorithms for DBT by using various breast phantoms; validation was also performed by using patient images. DBT was performed by using a prototype unit that was optimized for very low exposures and rapid readout. Three algorithms were compared: a back-projection (BP) algorithm, a filtered BP (FBP) algorithm, and an iterative expectation maximization (EM) algorithm. To compare the algorithms, three types of breast phantoms (homogeneous background phantom, heterogeneous background phantom, and anthropomorphic breast phantom) were evaluated, and clinical images were also reconstructed by using the different reconstruction algorithms. The in-plane image quality was evaluated based on the line profile and the contrast-to-noise ratio (CNR), and out-of-plane artifacts were evaluated by means of the artifact spread function (ASF). Parenchymal texture features of contrast and homogeneity were computed based on reconstructed images of an anthropomorphic breast phantom. The clinical images were studied to validate the effect of reconstruction algorithms. The results showed that the CNRs of masses reconstructed by using the EM algorithm were slightly higher than those obtained by using the BP algorithm, whereas the FBP algorithm yielded much lower CNR due to its high fluctuations of background noise. The FBP algorithm provides the best conspicuity for larger calcifications by enhancing their contrast and sharpness more than the other algorithms; however, in the case of small-size and low
Validation of Core Temperature Estimation Algorithm
2016-01-20
thermoregulation that can affect heart rate. These factors include diet, caffeine, sleep, and psychological stress. Of the 47,549 data points that ... nonphysiological measurements were removed from the analysis. After removing poor-quality data, the total amount of data available for each subject and day
Validation of Data Association for Monocular SLAM
Edmundo Guerra
2013-01-01
Simultaneous Localization and Mapping (SLAM) is a multidisciplinary problem with ramifications within several fields. One of the key aspects of its popularity and success is the data fusion produced by SLAM techniques, providing strong and robust sensory systems even with simple devices, such as webcams in monocular SLAM. This work studies a novel batch validation algorithm, the highest order hypothesis compatibility test (HOHCT), against one of the most popular approaches, joint compatibility branch and bound (JCBB). The HOHCT approach was developed to improve the performance of delayed inverse-depth initialization monocular SLAM, a previously developed monocular SLAM algorithm based on parallax estimation. Both HOHCT and JCBB are extensively tested and compared within a delayed inverse-depth initialization monocular SLAM framework, showing the strengths and costs of this proposal.
Model-based Bayesian signal extraction algorithm for peripheral nerves
Eggers, Thomas E.; Dweiri, Yazan M.; McCallum, Grant A.; Durand, Dominique M.
2017-10-01
Objective. Multi-channel cuff electrodes have recently been investigated for extracting fascicular-level motor commands from mixed neural recordings. Such signals could provide volitional, intuitive control over a robotic prosthesis for amputee patients. Recent work has demonstrated success in extracting these signals in acute and chronic preparations using spatial filtering techniques. These extracted signals, however, had low signal-to-noise ratios, which limited their utility to binary classification. In this work a new algorithm is proposed that combines previous source localization approaches to create a model-based method operating in real time. Approach. To validate this algorithm, a saline benchtop setup was created to allow the precise placement of artificial sources within a cuff and interference sources outside the cuff. The artificial source was taken from five seconds of chronic neural activity to replicate realistic recordings. The proposed algorithm, hybrid Bayesian signal extraction (HBSE), is then compared to previous algorithms, beamforming and a Bayesian spatial filtering method, on this test data. An example chronic neural recording is also analyzed with all three algorithms. Main results. The proposed algorithm improved the signal-to-noise and signal-to-interference ratios of extracted test signals two- to three-fold, and increased the correlation coefficient between the original and recovered signals by 10-20%. These improvements translated to the chronic recording example and increased the calculated bit rate between the recovered signals and the recorded motor activity. Significance. HBSE significantly outperforms previous algorithms in extracting realistic neural signals, even in the presence of external noise sources. These results demonstrate the feasibility of extracting dynamic motor signals from a multi-fascicled intact nerve trunk, which in turn could extract motor command signals from an amputee for the end goal of
Nonlinear system identification and control using state transition algorithm
Yang, Chunhua; Gui, Weihua
2012-01-01
This paper presents a novel optimization method named the state transition algorithm (STA) to solve identification and control problems for nonlinear systems. In the proposed algorithm, a solution to the optimization problem is considered as a state, and updating a solution equates to a state transition, which makes the STA easy to understand and convenient to implement. First, the STA is applied to identify the optimal parameters of the estimated system with a previously known structure. With the accurate estimated model, an off-line PID controller is then designed optimally, also using the STA. Experimental results demonstrate the validity of the methodology, and comparison of the STA with other optimization algorithms confirms that the STA is a promising alternative for system identification and control due to its stronger search ability, faster convergence speed and more stable performance.
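The state-transition idea (current solution as a state; candidate solutions generated by random state-transition operators; greedy acceptance) can be sketched as below on a stand-in quadratic identification error. The operator forms follow the published STA only loosely, and the parameter names are assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)
cost = lambda x: np.sum(x**2)                # stand-in for an identification error

def sta_minimize(x0, iters=300, alpha=1.0, gamma=1.0, samples=20):
    """Sketch of the state-transition scheme: generate candidate states from
    the current state via random rotation-like and expansion-like transforms,
    and keep the best candidate only if it improves the cost (greedy)."""
    x = np.asarray(x0, dtype=float)
    n = len(x)
    for _ in range(iters):
        cands = []
        for _ in range(samples):
            R = rng.uniform(-1, 1, (n, n))                    # rotation-like transform
            cands.append(x + alpha * R @ x / (n * np.linalg.norm(x) + 1e-12))
            cands.append(x + gamma * rng.normal(size=n) * x)  # expansion-like transform
        best = min(cands, key=cost)
        if cost(best) < cost(x):              # state transition only when improving
            x = best
    return x

x_hat = sta_minimize([3.0, -2.0])
```

The greedy acceptance guarantees the cost of the current state is non-increasing across transitions, which is the property the abstract's stability claims rest on.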
El Bitar, Ziad [Ecole Doctorale des Sciences Fondamentales, Universite Blaise Pascal, U.F.R de Recherches Scientifiques et Techniques, 34, avenue Carnot - BP 185, 63006 Clermont-Ferrand Cedex (France); Laboratoire de Physique Corpusculaire, CNRS/IN2P3, 63177 Aubiere (France)
2006-12-15
Although time consuming, Monte Carlo simulations remain an efficient tool for assessing correction methods for degrading physical effects in medical imaging. We optimized and validated a reconstruction method named F3DMC (Fully 3D Monte Carlo), in which the physical effects degrading the image formation process are modelled using Monte Carlo methods and integrated within the system matrix. We used the Monte Carlo simulation toolbox GATE, and validated GATE in SPECT by modelling the gamma camera (Philips AXIS) used in clinical routine. Thresholding, filtering by principal component analysis, and targeted reconstruction (functional regions, hybrid regions) were used to improve the precision of the system matrix and to reduce both the number of simulated photons and the computation time required. The EGEE grid infrastructure was used to deploy the GATE simulations in order to reduce their computation time. Results obtained with F3DMC were compared with the reconstruction methods (FBP, ML-EM, MLEMC) for a simulated phantom, and with the OSEM-C method for a real phantom. The results show that the F3DMC method and its variants improve the restoration of activity ratios and the signal-to-noise ratio. Using the EGEE grid, a significant speed-up factor of about 300 was obtained. These results should be confirmed by studies on complex phantoms and patients, and open the door to a unified reconstruction method that could be used in SPECT and also in PET. (author)
Detection of algorithmic trading
Bogoev, Dimitar; Karam, Arzé
2017-10-01
We develop a new approach to reflect the behavior of algorithmic traders. Specifically, we provide an analytical and tractable way to infer patterns of quote volatility and price momentum consistent with different types of strategies employed by algorithmic traders, and we propose two ratios to quantify these patterns. Quote volatility ratio is based on the rate of oscillation of the best ask and best bid quotes over an extremely short period of time; whereas price momentum ratio is based on identifying patterns of rapid upward or downward movement in prices. The two ratios are evaluated across several asset classes. We further run a two-stage Artificial Neural Network experiment on the quote volatility ratio; the first stage is used to detect the quote volatility patterns resulting from algorithmic activity, while the second is used to validate the quality of signal detection provided by our measure.
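A quote-oscillation measure of the kind described above can be illustrated as follows. The paper's exact definition is not given in the abstract, so the function name, the direction-change counting rule, and the normalization are all assumptions made for illustration:

```python
def quote_volatility_ratio(best_quotes):
    """Hypothetical oscillation measure: the fraction of interior points in a
    short window of best-quote observations at which the quote reverses
    direction (an up-move immediately followed by a down-move, or vice versa)."""
    flips = sum(
        1 for i in range(1, len(best_quotes) - 1)
        if (best_quotes[i] - best_quotes[i - 1]) * (best_quotes[i + 1] - best_quotes[i]) < 0
    )
    return flips / max(len(best_quotes) - 2, 1)

# A rapidly oscillating ask quote scores high; a trending one scores zero.
oscillating = quote_volatility_ratio([10, 11, 10, 11, 10, 11])
trending = quote_volatility_ratio([1, 2, 3, 4, 5, 6])
```

Applied over extremely short windows of best bid/ask updates, a high ratio flags the rapid quote flickering associated with some algorithmic strategies.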
Neural Network-Based Hyperspectral Algorithms
2016-06-07
Neural Network-Based Hyperspectral Algorithms. Walter F. Smith, Jr. and Juanita Sandidge, Naval Research Laboratory, Code 7340, Bldg 1105, Stennis Space... our effort is development of robust numerical inversion algorithms, which will retrieve inherent optical properties of the water column as well as ... validate the resulting inversion algorithms with in-situ data and provide estimates of the error bounds associated with the inversion algorithm. APPROACH
Two Proposed Algorithms for Re-Entrant Flow Shop Problem
Abe, Kazumi; Ida, Kenichi
For the re-entrant flow shop scheduling problem, we propose several algorithms to obtain a better TAT (turn-around time) with a genetic search method. One is an operation that searches for a solution by shifting the start timing within limited areas of each lot. Another is an operation that searches for a solution by shifting left and choosing the machine that starts earliest. These algorithms are effective on benchmarks including those proposed by Taji et al. In the first step, a candidate solution is chosen probabilistically by local search. The second step searches for a solution that shifts the start timing within limited areas of each lot, builds the Gantt chart, chooses the machine and obtains the results. The third step searches for a solution that again shifts left, builds the Gantt chart, chooses the machine and obtains the results. The proposed algorithms outperform local search methods by Taji et al., such as swap, move, swap-2 neighborhood and FIFO (first in, first out). The first algorithm produced the best result in an experimental test when the interval time was short; the second algorithm produced the best result of all solutions. The results show that the proposed algorithms are effective in cutting interval time and achieve better TAT than previous methods.
Han, K.R.; Bleumer, I.; Pantuck, A.J.; Kim, H.L.; Dorey, F.J.; Janzen, N.K.; Zisman, A.; Dinney, C.P.; Wood, C.G.; Swanson, D.A.; Said, J.W.; Figlin, R.A.; Mulders, P.F.A.; Belldegrun, A.S.
2003-01-01
PURPOSE: Outcome prediction for patients with renal cell carcinoma is based on a combination of factors. In this study a previously published clinical outcome algorithm based on 1997 T stage, Fuhrman grade and performance score is validated using an international database. MATERIALS AND METHODS: A t
A compensatory algorithm for the slow-down effect on constant-time-separation approaches
Abbott, Terence S.
1991-01-01
In seeking methods to improve airport capacity, the question arose as to whether an electronic display could provide information enabling the pilot to be responsible for self-separation under instrument conditions, allowing the practical implementation of reduced-separation, multiple glide path approaches. A time-based, closed loop algorithm was developed and simulator validated for in-trail (one aircraft behind the other) approach and landing. The algorithm was designed to reduce the effects of approach speed reduction prior to landing for the trailing aircraft, as well as the dispersion of the interarrival times. The operational task for the validation was an instrument approach to landing while following a single lead aircraft on the same approach path. The desired landing separation was 60 seconds for these approaches. An open loop algorithm, previously developed, was used as a basis for comparison. The results showed that, relative to the open loop algorithm, the closed loop algorithm could theoretically provide a 6 percent increase in runway throughput. The use of the closed loop algorithm did not affect path tracking performance, and pilot comments indicated that its guidance would be acceptable from an operational standpoint. From these results, it is concluded that by using a time-based, closed loop spacing algorithm, precise interarrival time intervals may be achievable with operationally acceptable pilot workload.
THE ALGORITHMS OF AN INTEGER PARTITIONING WITH ITS APPLICATIONS
曹立明; 周强
1994-01-01
In the light of ideas from Artificial Intelligence (AI), three algorithms for integer partitioning are given in this paper: a generate-and-test algorithm, and two heuristic algorithms for forward partition and backward partition. PROLOG is used to describe the algorithms; it is reasonable, direct and simple, and as a way of describing algorithms it is a new and valid attempt. Finally, some interesting applications of the algorithms presented in the paper are given.
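The generate-and-test style of partition enumeration translates naturally from PROLOG's backtracking into a recursive generator. The sketch below enumerates partitions as non-increasing tuples, constraining each recursive call by the largest part chosen so far:

```python
def partitions(n, max_part=None):
    """Generate all partitions of n as non-increasing tuples, in the
    generate-and-test spirit of a backtracking PROLOG program: pick the
    largest part first, then recursively partition the remainder with
    parts no larger than it."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest
```

For example, `partitions(6)` yields the 11 partitions of 6, from `(6,)` down to `(1, 1, 1, 1, 1, 1)`.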
Hromkovic, Juraj
2009-01-01
Explores the science of computing. This book starts with the development of computer science, algorithms and programming, and then explains and shows how to exploit the concepts of infinity, computability, computational complexity, nondeterminism and randomness.
An Algorithm for Learning the Essential Graph
Noble, John M
2010-01-01
This article presents an algorithm for learning the essential graph of a Bayesian network. The basis of the algorithm is the Maximum Minimum Parents and Children algorithm developed by previous authors, with three substantial modifications. The MMPC algorithm is the first stage of the Maximum Minimum Hill Climbing algorithm for learning the directed acyclic graph of a Bayesian network, introduced by previous authors. The MMHC algorithm runs in two phases; firstly, the MMPC algorithm to locate the skeleton and secondly an edge orientation phase. The computationally expensive part is the edge orientation phase. The first modification introduced to the MMPC algorithm, which requires little additional computational cost, is to obtain the immoralities and hence the essential graph. This renders the edge orientation phase, the computationally expensive part, unnecessary, since the entire Markov structure that can be derived from data is present in the essential graph. Secondly, the MMPC algorithm can accept indepen...
An Efficient Pattern Matching Algorithm
Sleit, Azzam; Almobaideen, Wesam; Baarah, Aladdin H.; Abusitta, Adel H.
In this study, we present an efficient algorithm for pattern matching based on a combination of hashing and search trees. The proposed solution is classified as an offline algorithm. Although this study demonstrates the merits of the technique for text matching, it can be utilized for various forms of digital data, including images, audio and video. The performance superiority of the proposed solution is validated analytically and experimentally.
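The abstract does not specify the authors' data structures, but the general idea of an offline index that combines hashing with ordered lookup can be sketched as follows (hashed k-grams mapped to sorted position lists, with candidate positions verified against the full pattern):

```python
from collections import defaultdict

K = 4  # k-gram length used by the index (an illustrative choice)

def build_index(text, k=K):
    """Offline preprocessing: hash every k-gram of the text and record the
    sorted list of positions where it occurs."""
    index = defaultdict(list)
    for i in range(len(text) - k + 1):
        index[hash(text[i:i + k])].append(i)   # positions appended in sorted order
    return index

def find(text, pattern, index, k=K):
    """Look up the pattern's leading k-gram, then verify each candidate
    position against the full pattern (guards against hash collisions)."""
    if len(pattern) < k:                       # short patterns: fall back to a scan
        return [i for i in range(len(text) - len(pattern) + 1)
                if text.startswith(pattern, i)]
    hits = index.get(hash(pattern[:k]), [])
    return [i for i in hits if text.startswith(pattern, i)]
```

Because the index is built once and queried many times, the preprocessing cost is amortized across queries, which is the usual payoff of an offline matcher.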
A new algorithm for attitude-independent magnetometer calibration
Alonso, Roberto; Shuster, Malcolm D.
1994-01-01
A new algorithm is developed for inflight magnetometer bias determination without knowledge of the attitude. This algorithm combines the fast convergence of a heuristic algorithm currently in use with the correct treatment of the statistics and without discarding data. The algorithm performance is examined using simulated data and compared with previous algorithms.
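The attitude-independence exploited by such algorithms rests on the fact that the magnitude of the geomagnetic field is known from a model regardless of spacecraft orientation, so |B_k - b|^2 = |H_k|^2 for the true bias b. This is not the authors' algorithm, but a minimal "centering-style" sketch of that measurement equation, which becomes linear in (b, |b|^2):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated truth: a fixed bias added to field vectors of known magnitude
# (magnitudes and noise level are illustrative assumptions).
true_bias = np.array([30.0, -10.0, 20.0])
H = rng.normal(size=(200, 3))
H *= 400.0 / np.linalg.norm(H, axis=1, keepdims=True)    # |H_k| known from a field model
B = H + true_bias + rng.normal(scale=0.5, size=H.shape)  # biased, noisy measurements

# Attitude-independent equation: |B_k - b|^2 = |H_k|^2, i.e.
#   2 B_k . b - |b|^2 = |B_k|^2 - |H_k|^2,
# which is linear in the unknowns (b, |b|^2) and solvable by least squares.
A = np.hstack([2.0 * B, -np.ones((len(B), 1))])
y = np.sum(B**2, axis=1) - np.sum(H**2, axis=1)
sol, *_ = np.linalg.lstsq(A, y, rcond=None)
est_bias = sol[:3]
```

The statistically correct treatment in the paper refines exactly this kind of estimate by accounting for the non-Gaussian noise that the squaring introduces.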
Benchmarking monthly homogenization algorithms
V. K. C. Venema
2011-08-01
The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets, modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added.
Participants provided 25 separate homogenized contributions as part of the blind study, as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics, including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series and the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve
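The benchmark's break-insertion scheme described above (breakpoints arriving as a Poisson process, with normally distributed break sizes applied to all subsequent values) can be sketched as follows; the rate and break standard deviation below are illustrative assumptions, not the study's actual settings:

```python
import numpy as np

rng = np.random.default_rng(3)

def insert_breaks(series, rate=5 / 100, break_sd=0.8):
    """Add random break-type inhomogeneities to a homogeneous series: the
    number of breaks is Poisson (expected rate per time step), and each
    break shifts every subsequent value by a N(0, break_sd) offset."""
    n_breaks = rng.poisson(rate * len(series))
    positions = np.sort(rng.integers(1, len(series), size=n_breaks))
    corrupted = series.copy()
    for p in positions:
        corrupted[p:] += rng.normal(scale=break_sd)   # step change from p onward
    return corrupted, positions

homogeneous = rng.normal(size=1200)            # synthetic monthly anomalies (100 years)
corrupted, breaks = insert_breaks(homogeneous)
```

A homogenization algorithm under test receives only `corrupted`; its skill is then scored against the withheld `homogeneous` truth, as in the HOME benchmark.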
Subsequent pregnancy outcome after previous foetal death
Nijkamp, J. W.; Korteweg, F. J.; Holm, J. P.; Timmer, A.; Erwich, J. J. H. M.; van Pampus, M. G.
2013-01-01
Objective: A history of foetal death is a risk factor for complications and foetal death in subsequent pregnancies as most previous risk factors remain present and an underlying cause of death may recur. The purpose of this study was to evaluate subsequent pregnancy outcome after foetal death and to
Algorithms for Global Positioning
Borre, Kai; Strang, Gilbert
The emergence of satellite technology has changed the lives of millions of people. In particular, GPS has brought an unprecedented level of accuracy to the field of geodesy. This text is a guide to the algorithms and mathematical principles that account for the success of GPS technology. At the heart of the matter are the positioning algorithms on which GPS technology relies, the discussion of which will affirm the mathematical contents of the previous chapters. Numerous ready-to-use MATLAB codes are included for the reader. This comprehensive guide will be invaluable for engineers...
Functional validation and comparison framework for EIT lung imaging.
Bartłomiej Grychtol
INTRODUCTION: Electrical impedance tomography (EIT) is an emerging clinical tool for monitoring ventilation distribution in mechanically ventilated patients, for which many image reconstruction algorithms have been suggested. We propose an experimental framework to assess such algorithms with respect to their ability to correctly represent well-defined physiological changes. We defined a set of clinically relevant ventilation conditions and induced them experimentally in 8 pigs by controlling three ventilator settings (tidal volume, positive end-expiratory pressure and the fraction of inspired oxygen). In this way, large and discrete shifts in global and regional lung air content were elicited. METHODS: We use the framework to compare twelve 2D EIT reconstruction algorithms, including backprojection (the original and still most frequently used algorithm), GREIT (a more recent consensus algorithm for lung imaging), truncated singular value decomposition (TSVD), several variants of the one-step Gauss-Newton approach and two iterative algorithms. We consider the effects of using a 3D finite element model, assuming non-uniform background conductivity, noise modeling, reconstructing for electrode movement, total variation (TV) reconstruction, robust error norms, smoothing priors, and using difference vs. normalized difference data. RESULTS AND CONCLUSIONS: Our results indicate that, while variation in the appearance of images reconstructed from the same data is not negligible, clinically relevant parameters do not vary considerably among the advanced algorithms. Among the analysed algorithms, several advanced algorithms perform well, while some others are significantly worse. Given its vintage and ad-hoc formulation, backprojection works surprisingly well, supporting the validity of previous studies in lung EIT.
Quantum Central Processing Unit and Quantum Algorithm
王安民
2002-01-01
Based on a scalable and universal quantum network, the quantum central processing unit proposed in our previous paper [Chin. Phys. Lett. 18 (2001) 166], the whole quantum network for the known quantum algorithms, including the quantum Fourier transform, Shor's algorithm and Grover's algorithm, is obtained in a unified way.
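Of the algorithms named above, the quantum Fourier transform is the easiest to state concretely: on n qubits it is the unitary N x N DFT matrix (N = 2^n) with entries omega^{jk}/sqrt(N), omega = e^{2*pi*i/N}. A small numerical sketch of that matrix (as a classical check of unitarity, not a circuit decomposition):

```python
import numpy as np

def qft_matrix(n_qubits):
    """Unitary matrix of the quantum Fourier transform on n_qubits qubits:
    entry (j, k) is omega**(j*k) / sqrt(N) with omega the N-th root of unity."""
    N = 2 ** n_qubits
    omega = np.exp(2j * np.pi / N)
    return np.array([[omega ** (j * k) for k in range(N)]
                     for j in range(N)]) / np.sqrt(N)
```

The network constructions in the paper decompose exactly this unitary into Hadamard and controlled-phase gates.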
Induced vaginal birth after previous caesarean section
Akylbek Tussupkaliyev; Andrey Gayday; Bibigul Karimsakova; Saule Bermagambetova; Lunara Uteniyazova; Guldana Iztleuova; Gulkhanym Kusherbayeva; Meruyert Konakbayeva; Assylzada Merekeyeva; Zamira Imangaliyeva
2016-01-01
Introduction: The rate of operative birth by Caesarean section is constantly rising; in Kazakhstan, it reaches 27 per cent. Research data confirm that the percentage of successful vaginal births after a previous Caesarean section is 50-70 per cent. How safe the induction of vaginal birth after Caesarean (VBAC) is remains unclear. Methodology: The studied techniques of labour induction were amniotomy of the foetal bladder with the vulsellum ramus, intravaginal administra...
Optimal Fungal Space Searching Algorithms.
Asenova, Elitsa; Lin, Hsin-Yu; Fu, Eileen; Nicolau, Dan V; Nicolau, Dan V
2016-10-01
Previous experiments have shown that fungi use an efficient natural algorithm for searching the space available for their growth in micro-confined networks, e.g., mazes. This natural "master" algorithm, which comprises two "slave" sub-algorithms, i.e., collision-induced branching and directional memory, has been shown to be more efficient than alternatives with one, the other, or both sub-algorithms turned off. In contrast, the present contribution compares the performance of the fungal natural algorithm against several standard artificial homologues. It was found that the space-searching fungal algorithm consistently outperforms uninformed algorithms, such as Depth-First Search (DFS). Furthermore, while the natural algorithm is inferior to informed ones, such as A*, this under-performance does not increase appreciably with the size of the maze. These findings suggest that a systematic effort of harvesting the natural space-searching algorithms used by microorganisms is warranted and possibly overdue. These natural algorithms, if efficient, can be reverse-engineered for graph and tree search strategies.
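The two artificial baselines named above differ only in how the frontier is ordered: DFS pops the most recently discovered cell, while A* pops the cell with the lowest cost-plus-heuristic estimate. A minimal grid-maze sketch of both (counting cell expansions as the efficiency measure; maze representation is an assumption):

```python
import heapq

def neighbors(cell, walls, n):
    r, c = cell
    for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
        if 0 <= nr < n and 0 <= nc < n and (nr, nc) not in walls:
            yield nr, nc

def dfs_expansions(start, goal, walls, n):
    """Uninformed depth-first search; returns the number of cells expanded."""
    stack, seen, count = [start], {start}, 0
    while stack:
        cell = stack.pop()
        count += 1
        if cell == goal:
            return count
        for nxt in neighbors(cell, walls, n):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return count

def astar_expansions(start, goal, walls, n):
    """A* with a Manhattan-distance heuristic; returns cells expanded."""
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])
    frontier = [(h(start), 0, start)]
    best = {start: 0}
    count = 0
    while frontier:
        _, g, cell = heapq.heappop(frontier)
        count += 1
        if cell == goal:
            return count
        for nxt in neighbors(cell, walls, n):
            if g + 1 < best.get(nxt, 1 << 30):   # relax only on strictly better cost
                best[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt))
    return count

dfs_n = dfs_expansions((0, 0), (9, 9), set(), 10)
astar_n = astar_expansions((0, 0), (9, 9), set(), 10)
```

Comparing expansion counts across maze sizes is one way to reproduce the paper's scaling observation.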
Can experimental data in humans verify the finite element-based bone remodeling algorithm?
Wong, C.; Gehrchen, P.M.; Kiaer, T.
2008-01-01
STUDY DESIGN: A finite element analysis-based bone remodeling study in humans was conducted in the lumbar spine operated on with pedicle screws. Bone remodeling results were compared to prospective experimental bone mineral content data of patients operated on with pedicle screws. OBJECTIVE: The validity of 2 bone remodeling algorithms was evaluated by comparing against prospective bone mineral content measurements. Also, the potential stress shielding effect was examined using the 2 bone remodeling algorithms and the experimental bone mineral data. SUMMARY OF BACKGROUND DATA: In previous studies in the human spine, the bone remodeling algorithms have neither been evaluated experimentally nor been examined by comparing to unsystematic experimental data. METHODS: The site-specific and nonsite-specific iterative bone remodeling algorithms were applied to a finite element model of the lumbar spine...
B. Dils
2013-10-01
Column-averaged dry-air mole fractions of carbon dioxide and methane have been retrieved from spectra acquired by the TANSO-FTS and SCIAMACHY instruments on board GOSAT and ENVISAT using a range of European retrieval algorithms. These retrievals have been compared with data from ground-based high-resolution Fourier Transform Spectrometers (FTS) of the Total Carbon Column Observing Network (TCCON). The participating algorithms are the Weighting Function Modified Differential Optical Absorption Spectroscopy (WFMD) algorithm (University of Bremen); the Bremen Optimal Estimation DOAS (BESD) algorithm (University of Bremen); the Iterative Maximum A Posteriori DOAS (IMAP) algorithm (Jet Propulsion Laboratory (JPL) and Netherlands Institute for Space Research (SRON)); the proxy and full-physics versions of SRON's RemoTeC algorithm (SRPR and SRFP, respectively); and the proxy and full-physics versions of the University of Leicester's adaptation of the OCO (Orbiting Carbon Observatory) algorithm (OCPR and OCFP, respectively). The goal of this algorithm inter-comparison was to identify strengths and weaknesses of the various so-called Round Robin data sets generated with the various algorithms, so as to determine which of the competing algorithms would proceed to the next round of the European Space Agency's (ESA) Greenhouse Gas Climate Change Initiative (GHG-CCI) project: the generation of the so-called Climate Research Data Package (CRDP), the first version of the Essential Climate Variable (ECV) "Greenhouse Gases" (GHG). For CO2, all algorithms reach the precision requirements for inverse modelling. For CH4, the precision of both SCIAMACHY products (50.2 ppb for IMAP and 76.4 ppb for WFMD) fails to meet the requirement, while the GOSAT XCH4 precision ranges between 18.1 and 14.0 ppb. Looking at the SRA, all GOSAT algorithm products reach the < 10 ppm threshold (values ranging between 5.4 and 6.2 ppb). For SCIAMACHY, IMAP and WFMD have an SRA of 17.2 ppb and 10.5 ppb, respectively.
Hu, T C
2002-01-01
Newly enlarged, updated second edition of a valuable, widely used text presents algorithms for shortest paths, maximum flows, dynamic programming and backtracking. Also discussed are binary trees, heuristic and near optimums, matrix multiplication, and NP-complete problems. 153 black-and-white illus. 23 tables. New to this edition: Chapter 9
VIPR: A probabilistic algorithm for analysis of microbial detection microarrays
Holbrook Michael R
2010-07-01
Background: All infectious disease-oriented clinical diagnostic assays in use today focus on detecting the presence of a single, well-defined target agent or a set of agents. In recent years, microarray-based diagnostics have been developed that greatly facilitate the highly parallel detection of multiple microbes that may be present in a given clinical specimen. While several algorithms have been described for interpretation of diagnostic microarrays, none of the existing approaches is capable of incorporating training data generated from positive control samples to improve performance. Results: To specifically address this issue we have developed a novel interpretive algorithm, VIPR (Viral Identification using a PRobabilistic algorithm), which uses Bayesian inference to capitalize on empirical training data to optimize detection sensitivity. To illustrate this approach, we have focused on the detection of viruses that cause hemorrhagic fever (HF) using a custom HF-virus microarray. VIPR was used to analyze 110 empirical microarray hybridizations generated from 33 distinct virus species. An accuracy of 94% was achieved as measured by leave-one-out cross-validation. Conclusions: VIPR outperformed previously described algorithms for this dataset. The VIPR algorithm has the potential to be broadly applicable in clinical diagnostic settings, wherein positive controls are typically readily available for generation of training data.
Partitional clustering algorithms
2015-01-01
This book summarizes the state-of-the-art in partitional clustering. Clustering, the unsupervised classification of patterns into groups, is one of the most important tasks in exploratory data analysis. Primary goals of clustering include gaining insight into, classifying, and compressing data. Clustering has a long and rich history that spans a variety of scientific disciplines including anthropology, biology, medicine, psychology, statistics, mathematics, engineering, and computer science. As a result, numerous clustering algorithms have been proposed since the early 1950s. Among these algorithms, partitional (nonhierarchical) ones have found many applications, especially in engineering and computer science. This book provides coverage of consensus clustering, constrained clustering, large scale and/or high dimensional clustering, cluster validity, cluster visualization, and applications of clustering. Examines clustering as it applies to large and/or high-dimensional data sets commonly encountered in reali...
Bat Algorithm for Multi-objective Optimisation
Yang, Xin-She
2012-01-01
Engineering optimization is typically multiobjective and multidisciplinary with complex constraints, and the solution of such complex problems requires efficient optimization algorithms. Recently, Xin-She Yang proposed a bat-inspired algorithm for solving nonlinear, global optimisation problems. In this paper, we extend this algorithm to solve multiobjective optimisation problems. The proposed multiobjective bat algorithm (MOBA) is first validated against a subset of test functions, and then applied to solve multiobjective design problems such as welded beam design. Simulation results suggest that the proposed algorithm works efficiently.
The Geometry of Algorithms with Orthogonality Constraints
Edelman, A; Smith, S T; Edelman, Alan; Smith, Steven T.
1998-01-01
In this paper we develop new Newton and conjugate gradient algorithms on the Grassmann and Stiefel manifolds. These manifolds represent the constraints that arise in such areas as the symmetric eigenvalue problem, nonlinear eigenvalue problems, electronic structures computations, and signal processing. In addition to the new algorithms, we show how the geometrical framework gives penetrating new insights allowing us to create, understand, and compare algorithms. The theory proposed here provides a taxonomy for numerical linear algebra algorithms that offers a top-level mathematical view of previously unrelated algorithms. It is our hope that developers of new algorithms and perturbation theories will benefit from the theory, methods, and examples in this paper.
Five modified boundary scan adaptive test generation algorithms
Niu Chunping; Ren Zheping; Yao Zongzhong
2006-01-01
To study the diagnostic problem of Wire-OR (W-O) interconnect faults on PCBs (printed circuit boards), five modified boundary-scan adaptive algorithms for interconnect test are put forward. These algorithms apply the global-diagnosis sequence algorithm in place of the equal-weight algorithm of the primary test, so the test time is shortened without changing the fault diagnostic capability. Descriptions of the five modified adaptive test algorithms are presented, and a capability comparison between the modified algorithms and the original algorithm is made to prove their validity.
Anna Bourmistrova
2011-02-01
The autodriver algorithm is an intelligent method to eliminate the need for steering by a driver on a well-defined road. The proposed method performs best on a four-wheel-steering (4WS) vehicle, though it is also applicable to two-wheel-steering (TWS) vehicles. The algorithm is based on making the vehicle's actual center of rotation coincide with the road's center of curvature, by adjusting the kinematic center of rotation. The road's center of curvature is assumed to be prior information for a given road, while the dynamic center of rotation is the output of the dynamic equations of motion of the vehicle, using steering angle and velocity measurements as inputs. We use the kinematic condition of steering to set the steering angles in such a way that the kinematic center of rotation of the vehicle sits at a desired point. At low speeds the ideal and actual paths of the vehicle are very close. With increasing forward speed, the road and tire characteristics, along with the motion dynamics of the vehicle, cause the vehicle to turn about time-varying points. By adjusting the steering angles, our algorithm controls the dynamic turning center of the vehicle so that it coincides with the road's curvature center, hence keeping the vehicle on a given road autonomously. The position and orientation errors are used as feedback signals in a closed-loop control to adjust the steering angles. The application of the presented autodriver algorithm demonstrates reliable performance under different driving conditions.
Cataract surgery in previously vitrectomized eyes.
Akinci, A; Batman, C; Zilelioglu, O
2008-05-01
To evaluate the results of extracapsular cataract extraction (ECCE) and phacoemulsification (PHACO) performed in previously vitrectomized eyes. In this retrospective study, 56 vitrectomized eyes that had ECCE and 60 vitrectomized eyes that had PHACO were included in the study group, while 65 eyes that had PHACO formed the control group. The evaluated parameters were the incidence of intra-operative and postoperative complications (IPC) and visual outcomes. Chi-squared, independent-samples and paired-samples tests were used for comparing the results. Deep anterior chamber (AC) was significantly more common in the PHACO group of vitrectomized eyes (PGVE) and was observed in eyes that had undergone extensive vitreous removal (p < 0.05); it did not differ between the ECCE group and the PGVE (p > 0.05). Some of the intra-operative conditions such as posterior synechiae, primary posterior capsular opacification (PCO) and postoperative complications such as retinal detachment (RD) and PCO were significantly more common in vitrectomized eyes than in the controls (p < 0.05), but did not differ between the ECCE group and the PGVE (p > 0.05). Deep AC is more common in eyes with extensive vitreous removal during PHACO than ECCE. Decreasing the bottle height is advised in this case. Except for this, the results of ECCE and PHACO are similar in previously vitrectomized eyes. Posterior synechiae, primary and postoperative PCO and RD are more common in vitrectomized eyes than in the controls.
Efficient Algorithm for Rectangular Spiral Search
Brugarolas, Paul; Breckenridge, William
2008-01-01
An algorithm generates grid coordinates for a computationally efficient spiral search pattern covering an uncertain rectangular area spanned by a coordinate grid. The algorithm does not require that the grid be fixed; the algorithm can search indefinitely, expanding the grid and spiral, as needed, until the target of the search is found. The algorithm also does not require memory of coordinates of previous points on the spiral to generate the current point on the spiral.
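The abstract describes a memoryless expanding-spiral generator; the exact NASA algorithm is not given here, but the general shape of such a search can be sketched with a standard square-spiral coordinate generator. All names are our own, and unlike the paper's method this simple version does carry loop state rather than computing each point independently.

```python
from itertools import islice

def square_spiral():
    # yield grid coordinates spiraling outward from the origin:
    # legs of length 1, 1, 2, 2, 3, 3, ... turning 90 degrees after each leg
    x, y = 0, 0
    yield (x, y)
    dx, dy = 1, 0                    # first leg moves in +x
    step = 1
    while True:
        for _ in range(2):           # two legs share each step length
            for _ in range(step):
                x, y = x + dx, y + dy
                yield (x, y)
            dx, dy = -dy, dx         # rotate the direction 90 degrees
        step += 1

def first_points(k):
    # convenience wrapper: the first k spiral coordinates
    return list(islice(square_spiral(), k))
```

Because the generator is unbounded and never revisits a cell, the search can expand indefinitely until the target is found, matching the open-ended behaviour described above.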
A Deterministic and Polynomial Modified Perceptron Algorithm
Olof Barr
2006-01-01
We construct a modified perceptron algorithm that is deterministic, polynomial and also as fast as previously known algorithms. The algorithm runs in time O(mn³ log n log(1/ρ)), where m is the number of examples, n the number of dimensions and ρ is approximately the size of the margin. We also construct a non-deterministic modified perceptron algorithm running in time O(mn² log n log(1/ρ)).
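For contrast with the modified algorithm above, the classic Rosenblatt perceptron update on which such modifications build can be sketched as follows. This is the textbook algorithm, not the paper's deterministic polynomial variant, and the names are our own.

```python
def perceptron(samples, labels, max_epochs=1000):
    # labels are in {-1, +1}; on linearly separable data the loop terminates
    # with a weight vector w satisfying sign(w . x) == label for every sample
    dim = len(samples[0])
    w = [0.0] * dim
    for _ in range(max_epochs):
        mistakes = 0
        for x, y in zip(samples, labels):
            if y * sum(wi * xi for wi, xi in zip(w, x)) <= 0:   # misclassified
                w = [wi + y * xi for wi, xi in zip(w, x)]       # nudge w toward y*x
                mistakes += 1
        if mistakes == 0:
            break                     # converged: a full pass with no errors
    return w
```

The classic analysis bounds the number of updates in terms of 1/ρ²; the modified algorithms above improve that margin dependence to log(1/ρ).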
Genetic algorithms for route discovery.
Gelenbe, Erol; Liu, Peixiang; Lainé, Jeremy
2006-12-01
Packet routing in networks requires knowledge about available paths, which can be either acquired dynamically while the traffic is being forwarded, or statically (in advance) based on prior information of a network's topology. This paper describes an experimental investigation of path discovery using genetic algorithms (GAs). We start with the quality-of-service (QoS)-driven routing protocol called "cognitive packet network" (CPN), which uses smart packets (SPs) to dynamically select routes in a distributed autonomic manner based on a user's QoS requirements. We extend it by introducing a GA at the source routers, which modifies and filters the paths discovered by the CPN. The GA can combine the paths that were previously discovered to create new untested but valid source-to-destination paths, which are then selected on the basis of their "fitness." We present an implementation of this approach, where the GA runs in background mode so as not to overload the ingress routers. Measurements conducted on a network test bed indicate that when the background-traffic load of the network is light to medium, the GA can result in improved QoS. When the background-traffic load is high, it appears that the use of the GA may be detrimental to the QoS experienced by users as compared to CPN routing because the GA uses less timely state information in its decision making.
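The key GA operation described above, combining previously discovered paths into new untested but valid routes, amounts to splicing two paths at a shared intermediate node. A minimal sketch follows (our own naming; the CPN/GA implementation details are not in the abstract):

```python
import random

def crossover_paths(p1, p2):
    # splice two source->destination paths at a shared intermediate node;
    # returns a new path, or None if no such node exists
    common = set(p1[1:-1]) & set(p2[1:-1])
    if not common:
        return None
    node = random.choice(sorted(common))
    child = p1[:p1.index(node)] + p2[p2.index(node):]
    # remove any loop introduced by the splice
    out = []
    for n in child:
        if n in out:
            out = out[:out.index(n) + 1]    # cut back to the first visit
        else:
            out.append(n)
    return out
```

Because every hop in the child (including those surviving loop removal) is taken from one of its parents, each offspring is a valid source-to-destination path whose fitness can then be evaluated.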
GRB Flares: A New Detection Algorithm, Previously Undetected Flares, and Implications on GRB Physics
Swenson, C A
2013-01-01
Flares in GRB light curves have been observed since shortly after the discovery of the first GRB afterglow. However, it was not until the launch of the Swift satellite that it was realized how common flares are, appearing in nearly 50% of all X-ray afterglows as observed by the XRT instrument. The majority of these observed X-ray flares are easily distinguishable by eye and have been measured to have up to as much fluence as the original prompt emission. Through studying large numbers of these X-ray flares it has been determined that they likely result from a distinct emission source different than that powering the GRB afterglow. These findings could be confirmed if similar results were found using flares in other energy ranges. However, until now, the UVOT instrument on Swift seemed to have observed far fewer flares in the UV/optical than were seen in the X-ray. This was primarily due to poor sampling and data being spread across multiple filters, but a new optimal co-addition and normalization of the UVOT ...
Obinutuzumab for previously untreated chronic lymphocytic leukemia.
Abraham, Jame; Stegner, Mark
2014-04-01
Obinutuzumab was approved by the Food and Drug Administration in late 2013 for use in combination with chlorambucil for the treatment of patients with previously untreated chronic lymphocytic leukemia (CLL). The approval was based on results of an open-label phase 3 trial that showed improved progression-free survival (PFS) with the combination of obinutuzumab plus chlorambucil compared with chlorambucil alone. Obinutuzumab is a monoclonal antibody that targets CD20 antigen expressed on the surface of pre B- and mature B-lymphocytes. After binding to CD20, obinutuzumab mediates B-cell lysis by engaging immune effector cells, directly activating intracellular death signaling pathways, and activating the complement cascade. Immune effector cell activities include antibody-dependent cellular cytotoxicity and antibody-dependent cellular phagocytosis.
Can previous learning alter future plasticity mechanisms?
Crestani, Ana Paula; Quillfeldt, Jorge Alberto
2016-02-01
The dynamic processes related to mnemonic plasticity have been extensively researched in the last decades. More recently, studies have attracted attention because they show an unusual plasticity mechanism that is independent of the receptor most usually related to first-time learning, that is, memory acquisition: the NMDA receptor. An interesting feature of this type of learning is that a previous experience may cause modifications in the plasticity mechanism of a subsequent learning, suggesting that prior experience in a very similar task triggers a memory acquisition process that does not depend on NMDARs. The intracellular molecular cascades necessary to assist the learning process seem to depend on the activation of hippocampal CP-AMPARs. Moreover, most of these studies were performed on hippocampus-dependent tasks, even though other brain areas, such as the basolateral amygdala, also display NMDAR-independent learning.
Books average previous decade of economic misery.
Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios
2014-01-01
For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
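The economic side of the correlation is simple to compute; a minimal sketch follows (the literary misery index itself requires the books corpus, so only the economic index and its trailing moving average are shown, with our own function names):

```python
def misery_index(inflation, unemployment):
    # annual economic misery index: inflation rate plus unemployment rate
    return [i + u for i, u in zip(inflation, unemployment)]

def trailing_mean(series, window=11):
    # moving average over the previous `window` years (current year inclusive);
    # the abstract reports the best goodness of fit at an 11-year window
    return [sum(series[t - window + 1:t + 1]) / window
            for t in range(window - 1, len(series))]
```

The resulting smoothed series is what would then be correlated against a per-year literary misery measure.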
Validation and Classification of Web Services using Equalization Validation Classification
ALAMELU MUTHUKRISHNAN
2012-12-01
In the business process world, web services provide managed middleware to connect huge numbers of services. A web service transaction is a mechanism to compose services with their desired quality parameters. If enormous numbers of transactions occur, the provider cannot acquire accurate data at the correct time, so it is necessary to reduce the overburden of web service transactions. In order to reduce the excess of transactions from customers to providers, this paper proposes a new method called Equalization Validation Classification. This method introduces a new weight-reducing algorithm called the Efficient Trim Down (ETD) algorithm to reduce the overburden of the incoming client requests. When this proposed algorithm is compared with the decision tree algorithms (J48, Random Tree, Random Forest, AD Tree), it produces better accuracy and validation than the existing algorithms. The proposed trimming method was analysed against the decision tree algorithms, and the implementation results show that the ETD algorithm provides better performance in terms of improved accuracy with effective validation. Therefore, the proposed method provides a good gateway to reduce the overburden of client requests in web services, analysing the requests arriving from a vast number of clients and preventing illegitimate requests to save the service provider time.
Induced vaginal birth after previous caesarean section
Akylbek Tussupkaliyev
2016-11-01
Introduction The rate of operative birth by Caesarean section is constantly rising. In Kazakhstan, it reaches 27 per cent. Research data confirm that the percentage of successful vaginal births after a previous Caesarean section is 50–70 per cent. How safe the induction of vaginal birth after Caesarean (VBAC) is remains unclear. Methodology The studied techniques of labour induction were amniotomy of the foetal bladder with the vulsellum ramus, intravaginal administration of prostaglandin E1 (Misoprostol), and intravenous infusion of Oxytocin-Richter. The readiness of the birth canal was assessed by Bishop's score; the labour course was assessed by a partogram. The effectiveness of the labour induction techniques was assessed by the number of administered doses, the time of onset of regular labour, the course of labour and the postpartum period, the presence of complications, and the course of the early neonatal period, which implied the assessment of the child's condition as described in the newborn development record. The foetus was assessed by medical ultrasound and by antenatal and intranatal cardiotocography (CTG). Obtained results were analysed with SAS statistical processing software. Results The overall percentage of successful births with intravaginal administration of Misoprostol was 93 per cent (83 cases). This percentage was higher than in the amniotomy group (relative risk (RR) 11.7) and was similar to the oxytocin group (RR 0.83). Amniotomy was effective in 54 per cent (39 cases), when it induced regular labour. Intravenous oxytocin infusion was effective in 94 per cent (89 cases). This percentage was higher than that with amniotomy (RR 12.5). Conclusions The success of vaginal delivery after a previous Caesarean section can be achieved in almost 70 per cent of cases. At that, labour induction does not decrease this indicator, which remains within population boundaries.
Pholsiri, Chalongrath; English, James; Seberino, Charles; Lim, Yi-Je
2010-01-01
The Excavator Design Validation tool verifies excavator designs by automatically generating control systems and modeling their performance in an accurate simulation of their expected environment. Part of this software design includes interfacing with human operations that can be included in simulation-based studies and validation. This is essential for assessing productivity, versatility, and reliability. This software combines automatic control system generation from CAD (computer-aided design) models, rapid validation of complex mechanism designs, and detailed models of the environment including soil, dust, temperature, remote supervision, and communication latency to create a system of high value. Unique algorithms have been created for controlling and simulating complex robotic mechanisms automatically from just a CAD description. These algorithms are implemented as a commercial cross-platform C++ software toolkit that is configurable using the Extensible Markup Language (XML). The algorithms work with virtually any mobile robotic mechanisms using module descriptions that adhere to the XML standard. In addition, high-fidelity, real-time physics-based simulation algorithms have also been developed that include models of internal forces and the forces produced when a mechanism interacts with the outside world. This capability is combined with an innovative organization for simulation algorithms, new regolith simulation methods, and a unique control and study architecture to make powerful tools with the potential to transform the way NASA verifies and compares excavator designs. Energid's Actin software has been leveraged for this design validation. The architecture includes parametric and Monte Carlo studies tailored for validation of excavator designs and their control by remote human operators. It also includes the ability to interface with third-party software and human-input devices. Two types of simulation models have been adapted: high-fidelity discrete
In vitro culture of previously uncultured oral bacterial phylotypes.
Thompson, Hayley; Rybalka, Alexandra; Moazzez, Rebecca; Dewhirst, Floyd E; Wade, William G
2015-12-01
Around a third of oral bacteria cannot be grown using conventional bacteriological culture media. Community profiling targeting 16S rRNA and shotgun metagenomics methods have proved valuable in revealing the complexity of the oral bacterial community. Studies investigating the role of oral bacteria in health and disease require phenotypic characterizations that are possible only with live cultures. The aim of this study was to develop novel culture media and use an in vitro biofilm model to culture previously uncultured oral bacteria. Subgingival plaque samples collected from subjects with periodontitis were cultured on complex mucin-containing agar plates supplemented with proteose peptone (PPA), beef extract (BEA), or Gelysate (GA) as well as on fastidious anaerobe agar plus 5% horse blood (FAA). In vitro biofilms inoculated with the subgingival plaque samples and proteose peptone broth (PPB) as the growth medium were established using the Calgary biofilm device. Specific PCR primers were designed and validated for the previously uncultivated oral taxa Bacteroidetes bacteria HOT 365 and HOT 281, Lachnospiraceae bacteria HOT 100 and HOT 500, and Clostridiales bacterium HOT 093. All agar media were able to support the growth of 10 reference strains of oral bacteria. One previously uncultivated phylotype, Actinomyces sp. HOT 525, was cultivated on FAA. Of 93 previously uncultivated phylotypes found in the inocula, 26 were detected in in vitro-cultivated biofilms. Lachnospiraceae bacterium HOT 500 was successfully cultured from biofilm material harvested from PPA plates in coculture with Parvimonas micra or Veillonella dispar/parvula after colony hybridization-directed enrichment. The establishment of in vitro biofilms from oral inocula enables the cultivation of previously uncultured oral bacteria and provides source material for isolation in coculture.
Parallelization of TMVA Machine Learning Algorithms
Hajili, Mammad
2017-01-01
This report reflects my work on parallelization of TMVA machine learning algorithms integrated into the ROOT Data Analysis Framework during a summer internship at CERN. The report consists of four important parts: the data set used in training and validation, the algorithms that multiprocessing was applied to, the parallelization techniques, and the resulting changes in execution time as the number of workers varies.
Carrara, Marta; Carozzi, Luca; Moss, Travis J; de Pasquale, Marco; Cerutti, Sergio; Lake, Douglas E; Moorman, J Randall; Ferrario, Manuela
2015-01-01
Identification of atrial fibrillation (AF) is a clinical imperative. Heartbeat interval time series are increasingly available from personal monitors, allowing new opportunity for AF diagnosis. Previously, we devised numerical algorithms for identification of normal sinus rhythm (NSR), AF, and SR with frequent ectopy using dynamical measures of heart rate. Here, we wished to validate them in the canonical MIT-BIH ECG databases. We tested algorithms on the NSR, AF and arrhythmia databases. When the databases were combined, the positive predictive value of the new algorithms exceeded 95% for NSR and AF, and was 40% for SR with ectopy. Further, dynamical measures did not distinguish atrial from ventricular ectopy. Inspection of individual 24-hour records showed good correlation of observed and predicted rhythms. Heart rate dynamical measures are effective ingredients in numerical algorithms to classify cardiac rhythm from the heartbeat interval time series alone. Copyright © 2015 Elsevier Inc. All rights reserved.
A Hybrid Immigrants Scheme for Genetic Algorithms in Dynamic Environments
Shengxiang Yang; Renato Tinós
2007-01-01
Dynamic optimization problems are a kind of optimization problems that involve changes over time. They pose a serious challenge to traditional optimization methods as well as conventional genetic algorithms since the goal is no longer to search for the optimal solution(s) of a fixed problem but to track the moving optimum over time. Dynamic optimization problems have attracted a growing interest from the genetic algorithm community in recent years. Several approaches have been developed to enhance the performance of genetic algorithms in dynamic environments. One approach is to maintain the diversity of the population via random immigrants. This paper proposes a hybrid immigrants scheme that combines the concepts of elitism, dualism and random immigrants for genetic algorithms to address dynamic optimization problems. In this hybrid scheme, the best individual, i.e., the elite, from the previous generation and its dual individual are retrieved as the bases to create immigrants via traditional mutation scheme. These elitism-based and dualism-based immigrants together with some random immigrants are substituted into the current population, replacing the worst individuals in the population. These three kinds of immigrants aim to address environmental changes of slight, medium and significant degrees respectively and hence efficiently adapt genetic algorithms to dynamic environments that are subject to different severities of changes. Based on a series of systematically constructed dynamic test problems, experiments are carried out to investigate the performance of genetic algorithms with the hybrid immigrants scheme and traditional random immigrants scheme. Experimental results validate the efficiency of the proposed hybrid immigrants scheme for improving the performance of genetic algorithms in dynamic environments.
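The immigrant-generation step described above can be sketched for binary genomes. This is a schematic of the idea only, with our own parameter names and split ratios; the paper's exact replacement ratio and mutation scheme may differ.

```python
import random

def dual(genome):
    # dualism: the bitwise complement of a binary genome
    return [1 - g for g in genome]

def mutate(genome, pm):
    # traditional bit-flip mutation with per-gene probability pm
    return [1 - g if random.random() < pm else g for g in genome]

def hybrid_immigrants(pop, fitness, n_imm=6, pm=0.1):
    # replace the n_imm worst individuals with elitism-based, dualism-based and
    # random immigrants, aimed at slight, medium and significant environmental
    # changes respectively (assumes fitness is maximized)
    ranked = sorted(pop, key=fitness, reverse=True)
    elite = ranked[0]
    k = n_imm // 3
    immigrants = [mutate(elite, pm) for _ in range(k)]           # elitism-based
    immigrants += [mutate(dual(elite), pm) for _ in range(k)]    # dualism-based
    immigrants += [[random.randint(0, 1) for _ in elite]
                   for _ in range(n_imm - 2 * k)]                # random
    return ranked[:len(pop) - n_imm] + immigrants
```

The surviving individuals plus the three kinds of immigrants form the next population, keeping the population size constant.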
Markham, Annette
This paper takes an actor network theory approach to explore some of the ways that algorithms co-construct identity and relational meaning in contemporary use of social media. Based on intensive interviews with participants as well as activity logging and data tracking, the author presents a richly layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also contributes an innovative method for blending actor network theory with symbolic interaction to grapple with the complexity of everyday sensemaking practices within networked global information flows.
Control algorithm for multiscale flow simulations of water
Kotsalis, Evangelos M.; Walther, Jens H.; Kaxiras, Efthimios; Koumoutsakos, Petros
2009-04-01
We present a multiscale algorithm to couple atomistic water models with continuum incompressible flow simulations via a Schwarz domain decomposition approach. The coupling introduces an inhomogeneity in the description of the atomistic domain and prevents the use of periodic boundary conditions. The use of a mass conserving specular wall results in turn to spurious oscillations in the density profile of the atomistic description of water. These oscillations can be eliminated by using an external boundary force that effectively accounts for the virial component of the pressure. In this Rapid Communication, we extend a control algorithm, previously introduced for monatomic molecules, to the case of atomistic water and demonstrate the effectiveness of this approach. The proposed computational method is validated for the cases of equilibrium and Couette flow of water.
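The Schwarz domain decomposition coupling at the heart of this method can be illustrated on a model problem. The sketch below is the classical 1D alternating Schwarz iteration for u'' = 0, not the paper's atomistic-continuum coupling; it only shows how two overlapping subdomains exchange Dirichlet boundary data until they agree. All names and grid parameters are our own.

```python
def schwarz_1d(n=101, overlap=20, iters=60):
    # Alternating Schwarz for u'' = 0 on [0, 1] with u(0) = 0, u(1) = 1,
    # split into two overlapping subdomains that exchange boundary values
    # each sweep. The exact global solution is the straight line u(x) = x.
    u = [0.0] * n
    u[-1] = 1.0
    m1 = n // 2 - overlap // 2      # left edge of the right subdomain
    m2 = n // 2 + overlap // 2      # right edge of the left subdomain
    for _ in range(iters):
        # u'' = 0 is solved exactly on each subdomain by linear interpolation
        left_bc = u[m2 - 1]         # Dirichlet datum taken from the right domain
        for i in range(m2):
            u[i] = left_bc * i / (m2 - 1)
        right_bc = u[m1]            # Dirichlet datum taken from the left domain
        for i in range(m1, n):
            u[i] = right_bc + (1.0 - right_bc) * (i - m1) / (n - 1 - m1)
    return u
```

With sufficient overlap the iteration contracts geometrically, and both subdomain solutions converge to the global solution u(x) = x; in the paper the same exchange couples an atomistic water domain to a continuum flow solver instead of two identical continuum subdomains.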
Previous gastric bypass surgery complicating total thyroidectomy.
Alfonso, Bianca; Jacobson, Adam S; Alon, Eran E; Via, Michael A
2015-03-01
Hypocalcemia is a well-known complication of total thyroidectomy. Patients who have previously undergone gastric bypass surgery may be at increased risk of hypocalcemia due to gastrointestinal malabsorption, secondary hyperparathyroidism, and an underlying vitamin D deficiency. We present the case of a 58-year-old woman who underwent a total thyroidectomy for the follicular variant of papillary thyroid carcinoma. Her history included Roux-en-Y gastric bypass surgery. Following the thyroid surgery, she developed postoperative hypocalcemia that required large doses of oral calcium carbonate (7.5 g/day), oral calcitriol (up to 4 μg/day), intravenous calcium gluconate (2.0 g/day), calcium citrate (2.0 g/day), and ergocalciferol (50,000 IU/day). Her serum calcium levels remained normal on this regimen after hospital discharge despite persistent hypoparathyroidism. Bariatric surgery patients who undergo thyroid surgery require aggressive supplementation to maintain normal serum calcium levels. Preoperative supplementation with calcium and vitamin D is strongly recommended.
Sebacinales everywhere: previously overlooked ubiquitous fungal endophytes.
Weiss, Michael; Sýkorová, Zuzana; Garnica, Sigisfredo; Riess, Kai; Martos, Florent; Krause, Cornelia; Oberwinkler, Franz; Bauer, Robert; Redecker, Dirk
2011-02-15
Inconspicuous basidiomycetes from the order Sebacinales are known to be involved in a puzzling variety of mutualistic plant-fungal symbioses (mycorrhizae), which presumably involve transport of mineral nutrients. Recently a few members of this fungal order not fitting this definition and commonly referred to as 'endophytes' have raised considerable interest by their ability to enhance plant growth and to increase resistance of their host plants against abiotic stress factors and fungal pathogens. Using DNA-based detection and electron microscopy, we show that Sebacinales are not only extremely versatile in their mycorrhizal associations, but are also almost universally present as symptomless endophytes. They occurred in field specimens of bryophytes, pteridophytes and all families of herbaceous angiosperms we investigated, including liverworts, wheat, maize, and the non-mycorrhizal model plant Arabidopsis thaliana. They were present in all habitats we studied on four continents. We even detected these fungi in herbarium specimens originating from pioneering field trips to North Africa in the 1830s/40s. No geographical or host patterns were detected. Our data suggest that the multitude of mycorrhizal interactions in Sebacinales may have arisen from an ancestral endophytic habit by specialization. Considering their proven beneficial influence on plant growth and their ubiquity, endophytic Sebacinales may be a previously unrecognized universal hidden force in plant ecosystems.
Surgery of intracranial aneurysms previously treated endovascularly.
Tirakotai, Wuttipong; Sure, Ulrich; Yin, Yuhua; Benes, Ludwig; Schulte, Dirk Michael; Bien, Siegfried; Bertalanffy, Helmut
2007-11-01
To perform a retrospective study on patients who underwent aneurysm surgery following endovascular treatment. We performed a retrospective study on eight patients who underwent aneurysm surgery following endovascular treatment (or attempted treatment) with Guglielmi detachable coils (GDCs). The indications for surgery, surgical techniques and clinical outcomes were analyzed. The indications for surgical treatment after GDC coiling of an aneurysm were classified into three groups. First group: surgery of incompletely coiled aneurysms (n=4). Second group: surgery for mass effect on the neural structures due to coil compaction or rebleeding (n=2). Third group: surgery for vascular complications after the endovascular procedure due to parent artery occlusion or thrombus propagation from the aneurysm (n=2). Aneurysm obliteration was performed in all cases, confirmed by postoperative angiography. Six patients had an excellent outcome and returned to their professions. Patients' visual acuity improved. One individual experienced right hemiparesis (grade IV/V) and hemihypesthesia. Microsurgical clipping is rarely necessary for previously coiled aneurysms. Surgical treatment is uncommonly required when an acute complication arises during endovascular treatment, or when there is a dynamic change of a residual aneurysm configuration over time that is considered to be insecure.
[Electronic cigarettes - effects on health. Previous reports].
Napierała, Marta; Kulza, Maksymilian; Wachowiak, Anna; Jabłecka, Katarzyna; Florek, Ewa
2014-01-01
Electronic cigarettes (e-cigarettes) have recently become very popular on the tobacco products market. These products are considered to be potentially less harmful compared with traditional tobacco products. However, current reports indicate that the producers' statements regarding the composition of the e-liquids are not always sufficient, and consumers often do not have reliable information on the quality of the product they use. This paper contains a review of previous reports on the composition of e-cigarettes and their impact on health. Most of the observed health effects were related to symptoms of the respiratory tract, mouth, throat, neurological complications and sensory organs. Particularly hazardous effects of e-cigarettes were: pneumonia, congestive heart failure, confusion, convulsions, hypotension, aspiration pneumonia, second-degree facial burns, blindness, chest pain and rapid heartbeat. In the literature there is no information on passive exposure to the aerosols released during e-cigarette smoking. Furthermore, information regarding the long-term use of these products is also not available.
Sebacinales everywhere: previously overlooked ubiquitous fungal endophytes.
Michael Weiss
Inconspicuous basidiomycetes from the order Sebacinales are known to be involved in a puzzling variety of mutualistic plant-fungal symbioses (mycorrhizae), which presumably involve transport of mineral nutrients. Recently, a few members of this fungal order not fitting this definition and commonly referred to as 'endophytes' have raised considerable interest through their ability to enhance plant growth and to increase the resistance of their host plants against abiotic stress factors and fungal pathogens. Using DNA-based detection and electron microscopy, we show that Sebacinales are not only extremely versatile in their mycorrhizal associations, but are also almost universally present as symptomless endophytes. They occurred in field specimens of bryophytes, pteridophytes and all families of herbaceous angiosperms we investigated, including liverworts, wheat, maize, and the non-mycorrhizal model plant Arabidopsis thaliana. They were present in all habitats we studied on four continents. We even detected these fungi in herbarium specimens originating from pioneering field trips to North Africa in the 1830s/40s. No geographical or host patterns were detected. Our data suggest that the multitude of mycorrhizal interactions in Sebacinales may have arisen from an ancestral endophytic habit by specialization. Considering their proven beneficial influence on plant growth and their ubiquity, endophytic Sebacinales may be a previously unrecognized universal hidden force in plant ecosystems.
A previously undescribed pathway for pyrimidine catabolism.
Loh, Kevin D; Gyaneshwar, Prasad; Markenscoff Papadimitriou, Eirene; Fong, Rebecca; Kim, Kwang-Seo; Parales, Rebecca; Zhou, Zhongrui; Inwood, William; Kustu, Sydney
2006-03-28
The b1012 operon of Escherichia coli K-12, which is composed of seven unidentified ORFs, is one of the most highly expressed operons under control of nitrogen regulatory protein C. Examination of strains with lesions in this operon on Biolog Phenotype MicroArray (PM3) plates and subsequent growth tests indicated that they failed to use uridine or uracil as the sole nitrogen source and that the parental strain could use them at room temperature but not at 37 degrees C. A strain carrying an ntrB(Con) mutation, which elevates transcription of genes under nitrogen regulatory protein C control, could also grow on thymidine as the sole nitrogen source, whereas strains with lesions in the b1012 operon could not. Growth-yield experiments indicated that both nitrogens of uridine and thymidine were available. Studies with [(14)C]uridine indicated that a three-carbon waste product from the pyrimidine ring was excreted. After trimethylsilylation and gas chromatography, the waste product was identified by mass spectrometry as 3-hydroxypropionic acid. In agreement with this finding, 2-methyl-3-hydroxypropionic acid was released from thymidine. Both the number of available nitrogens and the waste products distinguished the pathway encoded by the b1012 operon from pyrimidine catabolic pathways described previously. We propose that the genes of this operon be named rutA-G for pyrimidine utilization. The product of the divergently transcribed gene, b1013, is a tetracycline repressor family regulator that controls transcription of the b1012 operon negatively.
A study on the application of topic models to motif finding algorithms.
Basha Gutierrez, Josep; Nakai, Kenta
2016-12-22
Topic models are statistical algorithms which try to discover the structure of a set of documents according to the abstract topics contained in them. Here we apply this approach to discovering the structure of the transcription factor binding sites (TFBS) contained in a set of biological sequences, a fundamental problem in molecular biology research for the understanding of transcriptional regulation. We present two methods that make use of topic models for motif finding. First, we developed an algorithm in which a set of biological sequences is treated as a collection of text documents, with the k-mers contained in them as words, in order to build a correlated topic model (CTM) and iteratively reduce its perplexity. We also used the perplexity measurement of CTMs to improve our previous algorithm based on a genetic algorithm and several statistical coefficients. The algorithms were tested with 56 data sets from four different species and compared to 14 other methods by the use of several coefficients, both at the nucleotide and site level. The results of our first approach showed a performance comparable to the other methods studied, especially at the site level and in sensitivity scores, where it scored better than any of the 14 existing tools. In the case of our previous algorithm, the new approach with the addition of the perplexity measurement clearly outperformed all of the other methods in sensitivity, both at the nucleotide and site level, and in overall performance at the site level. The statistics obtained show that the performance of a motif finding method based on the use of a CTM is satisfying enough to conclude that the application of topic models is a valid method for developing motif finding algorithms. Moreover, the addition of topic models to a previously developed method dramatically increased its performance, suggesting that this combined algorithm can be a useful tool to successfully predict motifs in different kinds of sets of DNA sequences.
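The core representation the abstract describes, sequences as "documents" and their overlapping k-mers as "words", can be sketched in a few lines. The sequences and the bag-of-words construction below are toy assumptions; the paper's actual CTM fitting and perplexity loop are not reproduced here.

```python
# Sketch of the document/word analogy (assumed toy data): each biological
# sequence is a "document" and its overlapping k-mers are its "words".
def kmers(seq, k):
    """Overlapping k-mers of a sequence, the 'words' fed to a topic model."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

# Toy corpus: two sequences sharing the motif "TATAAT"
corpus = ["GCGTATAATGC", "TTATAATCCG"]
docs = [kmers(s, 6) for s in corpus]
vocab = sorted(set(w for d in docs for w in d))

# Bag-of-words count matrix, the input a CTM/LDA implementation would consume
counts = [[d.count(w) for w in vocab] for d in docs]
```

A topic model fitted on `counts` would then group co-occurring k-mers, with shared motifs such as "TATAAT" expected to concentrate in a topic.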
An Improved Fuzzy Based Missing Value Estimation in DNA Microarray Validated by Gene Ranking
Sujay Saha
2016-01-01
Most gene expression data analysis algorithms require the entire gene expression matrix without any missing values. Hence, it is necessary to devise methods which impute missing data values accurately. A number of imputation algorithms exist to estimate those missing values. This work starts with a microarray dataset containing multiple missing values. We first apply a modified version of the existing fuzzy-theory-based method LRFDVImpute to impute multiple missing values of time series gene expression data, and then validate the result of imputation by a genetic algorithm (GA) based gene ranking methodology along with some regular statistical validation techniques, such as the RMSE method. Gene ranking, to the best of our knowledge, has not yet been used to validate the result of missing value estimation. Firstly, the proposed method has been tested on the very popular Spellman dataset, and results show that error margins have been drastically reduced compared to some previous works, which indirectly validates the statistical significance of the proposed method. It was then applied to four other 2-class benchmark datasets, namely the Colorectal Cancer tumours dataset (GDS4382), the Breast Cancer dataset (GSE349-350), the Prostate Cancer dataset, and DLBCL-FL (Leukaemia), for both missing value estimation and gene ranking, and the results show that the proposed method can reach 100% classification accuracy with very few dominant genes, which indirectly validates the biological significance of the proposed method.
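The RMSE validation step the abstract mentions is easy to make concrete: hide known entries, impute them, and measure the error. LRFDVImpute itself is not publicly reproduced here, so the sketch below uses a column-mean imputer as a stand-in on a toy matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
X_true = rng.normal(size=(20, 8))      # toy expression matrix (genes x samples)

# Hide ~10% of entries to simulate missing values
mask = rng.random(X_true.shape) < 0.10
X = X_true.copy()
X[mask] = np.nan

# Stand-in imputer: per-column mean; LRFDVImpute (or any imputer) would go here
col_mean = np.nanmean(X, axis=0)
X_imp = np.where(np.isnan(X), col_mean, X)

# RMSE over the artificially hidden entries validates the imputation
rmse = np.sqrt(np.mean((X_imp[mask] - X_true[mask]) ** 2))
```

The paper's contribution is to add a second, biological check (GA-based gene ranking and classification accuracy) on top of this purely statistical one.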
Casanova, Henri; Robert, Yves
2008-01-01
""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi
Gustavson, Fred G.; Reid, John K.; Wasniewski, Jerzy
2007-01-01
We present subroutines for the Cholesky factorization of a positive-definite symmetric matrix and for solving corresponding sets of linear equations. They exploit cache memory by using the block hybrid format proposed by the authors in a companion article. The matrix is packed into n(n + 1)/2 real variables, and the speed is usually better than that of the LAPACK algorithm that uses full storage (n² variables). Included are subroutines for rearranging a matrix whose upper or lower-triangular part is packed by columns to this format and for the inverse rearrangement. Also included is a kernel…
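The n(n + 1)/2 packed layout and the rearrangement subroutines the abstract mentions can be sketched directly; the block hybrid format itself is more involved, so this is only the plain column-packed lower-triangular storage with its index formula.

```python
import numpy as np

def pack_lower(A):
    """Pack the lower triangle of A by columns into n(n+1)/2 elements."""
    n = A.shape[0]
    return np.concatenate([A[j:, j] for j in range(n)])

def packed_index(i, j, n):
    """Index of A[i, j] (i >= j) in the column-packed lower triangle."""
    return j * n - j * (j - 1) // 2 + (i - j)

def unpack_lower(p, n):
    """Inverse rearrangement: packed vector back to a full lower triangle."""
    A = np.zeros((n, n))
    for j in range(n):
        for i in range(j, n):
            A[i, j] = p[packed_index(i, j, n)]
    return A

n = 4
A = np.tril(np.arange(1.0, n * n + 1).reshape(n, n))  # toy lower triangle
p = pack_lower(A)
```

The index formula reflects that column j stores n - j entries, so column j starts at offset j*n - j*(j-1)/2.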
NBA-Palm: prediction of palmitoylation site implemented in Naïve Bayes algorithm.
Xue, Yu; Chen, Hu; Jin, Changjiang; Sun, Zhirong; Yao, Xuebiao
2006-10-17
Protein palmitoylation, an essential and reversible post-translational modification (PTM), has been implicated in cellular dynamics and plasticity. Although numerous experimental studies have been performed to explore the molecular mechanisms underlying palmitoylation processes, the intrinsic feature of substrate specificity has remained elusive. Thus, computational approaches for palmitoylation prediction are highly desirable for further experimental design. In this work, we present NBA-Palm, a novel computational method based on the Naïve Bayes algorithm for prediction of palmitoylation sites. The training data is curated from scientific literature (PubMed) and includes 245 palmitoylated sites from 105 distinct proteins after redundancy elimination. The proper window length for a potential palmitoylated peptide is optimized as six. To evaluate the prediction performance of NBA-Palm, 3-fold cross-validation, 8-fold cross-validation and Jack-Knife validation have been carried out. Prediction accuracies reach 85.79% for 3-fold cross-validation, 86.72% for 8-fold cross-validation and 86.74% for Jack-Knife validation. Two more algorithms, RBF network and support vector machine (SVM), also have been employed and compared with NBA-Palm. Taken together, our analyses demonstrate that NBA-Palm is a useful computational program that provides insights for further experimentation. The accuracy of NBA-Palm is comparable with our previously described tool CSS-Palm. The NBA-Palm is freely accessible from: http://www.bioinfo.tsinghua.edu.cn/NBA-Palm.
NBA-Palm: prediction of palmitoylation site implemented in Naïve Bayes algorithm
Jin Changjiang
2006-10-01
Abstract Background Protein palmitoylation, an essential and reversible post-translational modification (PTM), has been implicated in cellular dynamics and plasticity. Although numerous experimental studies have been performed to explore the molecular mechanisms underlying palmitoylation processes, the intrinsic feature of substrate specificity has remained elusive. Thus, computational approaches for palmitoylation prediction are highly desirable for further experimental design. Results In this work, we present NBA-Palm, a novel computational method based on the Naïve Bayes algorithm for prediction of palmitoylation sites. The training data is curated from scientific literature (PubMed) and includes 245 palmitoylated sites from 105 distinct proteins after redundancy elimination. The proper window length for a potential palmitoylated peptide is optimized as six. To evaluate the prediction performance of NBA-Palm, 3-fold cross-validation, 8-fold cross-validation and Jack-Knife validation have been carried out. Prediction accuracies reach 85.79% for 3-fold cross-validation, 86.72% for 8-fold cross-validation and 86.74% for Jack-Knife validation. Two more algorithms, RBF network and support vector machine (SVM), also have been employed and compared with NBA-Palm. Conclusion Taken together, our analyses demonstrate that NBA-Palm is a useful computational program that provides insights for further experimentation. The accuracy of NBA-Palm is comparable with our previously described tool CSS-Palm. The NBA-Palm is freely accessible from: http://www.bioinfo.tsinghua.edu.cn/NBA-Palm.
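The classifier family behind NBA-Palm, Naïve Bayes over fixed-length residue windows (length six, as optimized in the abstract), can be illustrated with a minimal position-specific model. The training windows, alphabet and smoothing below are toy assumptions; the curated NBA-Palm dataset and code are not reproduced here.

```python
import math
from collections import defaultdict

def train_nb(pos, neg, alphabet, alpha=1.0):
    """Per-position log-likelihood ratios with Laplace smoothing."""
    L = len(pos[0])  # window length (the abstract optimizes this to six)
    llr = []
    for i in range(L):
        cp, cn = defaultdict(int), defaultdict(int)
        for s in pos:
            cp[s[i]] += 1
        for s in neg:
            cn[s[i]] += 1
        table = {}
        for a in alphabet:
            p = (cp[a] + alpha) / (len(pos) + alpha * len(alphabet))
            q = (cn[a] + alpha) / (len(neg) + alpha * len(alphabet))
            table[a] = math.log(p / q)
        llr.append(table)
    return llr

def nb_score(llr, window):
    """Positive score suggests a palmitoylation-like window (toy model only)."""
    return sum(t[a] for t, a in zip(llr, window))

# Hypothetical six-residue windows; real training data would come from curated sites
pos = ["CCAKLC", "CCGKLC", "CCAKMC"]
neg = ["ADEFGH", "MDEFGH", "ADEYGH"]
llr = train_nb(pos, neg, alphabet="ACDEFGHKLMY")
```

Cross-validation, as in the paper, would repeat this training on held-out splits and average the accuracies.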
Validating a UAV artificial intelligence control system using an autonomous test case generator
Straub, Jeremy; Huber, Justin
2013-05-01
The validation of safety-critical applications, such as autonomous UAV operations in an environment which may include human actors, is an ill-posed problem. To build confidence in the autonomous control technology, numerous scenarios must be considered. This paper expands upon previous work, related to autonomous testing of robotic control algorithms in a two-dimensional plane, to evaluate the suitability of similar techniques for validating artificial intelligence control in three dimensions, where a minimum level of airspeed must be maintained. The results of human-conducted testing are compared to this automated testing in terms of error detection, speed, and testing cost.
Reducing Validity in Epistemic ATL to Validity in Epistemic CTL
Dimitar P. Guelev
2013-03-01
We propose a validity preserving translation from a subset of epistemic Alternating-time Temporal Logic (ATL) to epistemic Computation Tree Logic (CTL). The considered subset of epistemic ATL is known to have the finite model property and decidable model-checking. This entails the decidability of validity, but the implied algorithm is unfeasible. Reducing the validity problem to that in a corresponding system of CTL makes the techniques for automated deduction for that logic available for the handling of the apparently more complex system of ATL.
Enghauser, Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2016-02-01
The goal of the Domestic Nuclear Detection Office (DNDO) Algorithm Improvement Program (AIP) is to facilitate gamma-radiation detector nuclide identification algorithm development, improvement, and validation. Accordingly, scoring criteria have been developed to objectively assess the performance of nuclide identification algorithms. In addition, a Microsoft Excel spreadsheet application for automated nuclide identification scoring has been developed. This report provides an overview of the equations, nuclide weighting factors, nuclide equivalencies, and configuration weighting factors used by the application for scoring nuclide identification algorithm performance. Furthermore, this report presents a general overview of the nuclide identification algorithm scoring application including illustrative examples.
Validation of maritime spectral features
Hubler, Matthew J.
In 1991 a research team led by Klaus Hasselmann developed a general technique to build synthetic aperture radar (SAR) spectra from scans of the ocean surface; however, these techniques were verified on older equipment. The algorithms compute a SAR spectrum from an ocean spectrum, invert a SAR spectrum back to an ocean spectrum, and determine the threshold of the azimuthal cutoff. Originally designed for platforms that have since fulfilled their missions, the question remains as to whether the algorithms are valid with newer systems such as TerraSAR-X, operated by the German Aerospace Centre (DLR). One of the larger differences that may skew data analysis by these algorithms is that TerraSAR-X has much finer resolution, pixels being on the scale of 5-10 meters (or less), while older satellites returned images with pixel scaling on the order of kilometers. The finer pixel scaling allows more detail to be recovered and analyzed; specifically, the individual waves on the ocean surface become visible. To that end, algorithms developed for older satellites will be employed on data collected from TerraSAR-X and compared to ground truth data in order to assess the compatibility of existing algorithms. During the course of the validation, several sets of code, written in Matlab, will be employed and discussed, each providing a different approach and more focused results. In aggregate, a clearer picture will emerge describing the accuracy that older algorithms have with newer machinery. The imagery data, being satellite borne, comes with individual collection geometry that needs to be addressed in the processing as well, currently through parsing the accompanying metadata. The determination that these algorithms indeed work with newer systems and the validation of an azimuthal cutoff demonstrate that little fine tuning of older algorithms is needed at these higher resolutions. While the Hasselmann algorithms become cumbersome to use, a new approach to the algorithms yields useful…
SMOS derived sea ice thickness: algorithm baseline, product specifications and initial verification
X. Tian-Kunze
2013-12-01
Following the launch of ESA's Soil Moisture and Ocean Salinity (SMOS) mission, it has been shown that brightness temperatures at a low microwave frequency of 1.4 GHz (L-band) are sensitive to sea ice properties. In a first demonstration study, sea ice thickness was derived using a semi-empirical algorithm with constant tie-points. Here we introduce a novel iterative retrieval algorithm that is based on a sea ice thermodynamic model and a three-layer radiative transfer model, which explicitly takes variations of ice temperature and ice salinity into account. In addition, ice thickness variations within a SMOS footprint are considered through a statistical thickness distribution function derived from high-resolution ice thickness measurements from NASA's Operation IceBridge campaign. This new algorithm has been used for the continuous operational production of a SMOS-based sea ice thickness data set from 2010 onwards. This data set is compared and validated with estimates from assimilation systems, remote sensing data, and airborne electromagnetic sounding data. The comparisons show that the new retrieval algorithm has considerably better agreement with the validation data and delivers a more realistic Arctic-wide ice thickness distribution than the algorithm used in the previous study.
Mathematical algorithms for approximate reasoning
Murphy, John H.; Chay, Seung C.; Downs, Mary M.
1988-01-01
…from the conclusion. These algorithms allow one to reason accurately with uncertain data. The above environment can replicate state-of-the-art expert system environments, which provides continuity between current expert systems, which cannot be validated or verified, and future expert systems, which should be both validated and verified.
Validation of community robustness
Carissimo, Annamaria; Defeis, Italia
2016-01-01
The large amount of work on community detection and its applications leaves unaddressed one important question: the statistical validation of the results. In this paper we present a methodology able to clearly detect if the community structure found by some algorithms is statistically significant or is a result of chance, merely due to edge positions in the network. Given a community detection method and a network of interest, our proposal examines the stability of the partition recovered against random perturbations of the original graph structure. To address this issue, we specify a perturbation strategy and a null model to build a set of procedures based on a special measure of clustering distance, namely Variation of Information, using tools set up for functional data analysis. The procedures determine whether the obtained clustering departs significantly from the null model. This strongly supports the robustness against perturbation of the algorithm used to identify the community structure. We show the r...
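The clustering distance the abstract builds on, Variation of Information, can be computed directly from two partitions of the same node set. The two example labelings below are hypothetical; the paper's perturbation strategy and null model are beyond this sketch.

```python
from collections import Counter
from math import log

def variation_of_information(a, b):
    """VI(a, b) = H(a) + H(b) - 2 I(a; b) for two labelings of the same nodes."""
    n = len(a)
    pa, pb = Counter(a), Counter(b)
    joint = Counter(zip(a, b))
    vi = 0.0
    for (x, y), nxy in joint.items():
        pxy = nxy / n
        # each term equals pxy * (log(px/pxy) + log(py/pxy)) >= 0
        vi += pxy * (log(pa[x] / n / pxy) + log(pb[y] / n / pxy))
    return vi

part1 = [0, 0, 0, 1, 1, 1]   # community labels found on the original graph
part2 = [0, 0, 1, 1, 1, 1]   # labels found after a random perturbation
```

VI is zero exactly when the two partitions coincide, which is why a distribution of VI values under random perturbations can serve as a stability test.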
An Improved Heuristic Algorithm of Attribute Reduction in Rough Set
Shunxiang Wu; Maoqing Li; Wenting Huang; Sifeng Liu
2004-01-01
This paper introduces the background of rough set theory, then proposes a new algorithm for finding an optimal reduction, and compares the original algorithm with the improved one in experiments on nine standard data sets from the UL database to demonstrate the validity of the improved heuristic algorithm.
A FUZZY CLOPE ALGORITHM AND ITS OPTIMAL PARAMETER CHOICE
Anonymous
2006-01-01
Among the available clustering algorithms in data mining, the CLOPE algorithm has attracted much attention for its high speed and good performance. However, the proper choice of some parameters in the CLOPE algorithm directly affects the validity of the clustering results, which is still an open issue. For this purpose, this paper proposes a fuzzy CLOPE algorithm and presents a method for the optimal parameter choice by defining a modified partition fuzzy degree as a clustering validity function. The experimental results with a real data set illustrate the effectiveness of the proposed fuzzy CLOPE algorithm and of the optimal parameter choice method based on the modified partition fuzzy degree.
Rates of induced abortion in Denmark according to age, previous births and previous abortions
Marie-Louise H. Hansen
2009-11-01
Background: Whereas the effects of various socio-demographic determinants on a woman's risk of having an abortion are relatively well-documented, less attention has been given to the effect of previous abortions and births. Objective: To study the effect of previous abortions and births on Danish women's risk of an abortion, in addition to a number of demographic and personal characteristics. Data and methods: From the Fertility of Women and Couples Dataset we obtained data on the number of live births and induced abortions by year (1981-2001), age (16-39), county of residence and marital status. Logistic regression analysis was used to estimate the influence of the explanatory variables on the probability of having an abortion in a relevant year. Main findings and conclusion: A woman's risk of having an abortion increases with the number of previous births and previous abortions. Some interactions were found in the way a woman's risk of abortion varies with calendar year, age and parity. The risk of an abortion for women with no children decreases over time, while the risk of an abortion for women with children increases. Furthermore, the risk of an abortion decreases with age, but relatively more so for women with children compared to childless women. Trends for teenagers are discussed in a separate section.
Sasaki, Makoto; Kudo, Kohsuke; Uwano, Ikuko; Goodwin, Jonathan; Higuchi, Satomi; Ito, Kenji; Yamashita, Fumio [Iwate Medical University, Division of Ultrahigh Field MRI, Institute for Biomedical Sciences, Yahaba (Japan); Boutelier, Timothe; Pautot, Fabrice [Olea Medical, Department of Research and Innovation, La Ciotat (France); Christensen, Soren [University of Melbourne, Department of Neurology and Radiology, Royal Melbourne Hospital, Victoria (Australia)
2013-10-15
A new deconvolution algorithm, the Bayesian estimation algorithm, was reported to improve the precision of parametric maps created using perfusion computed tomography. However, it remains unclear whether quantitative values generated by this method are more accurate than those generated using optimized deconvolution algorithms of other software packages. Hence, we compared the accuracy of the Bayesian and deconvolution algorithms by using a digital phantom. The digital phantom data, in which concentration-time curves reflecting various known values for cerebral blood flow (CBF), cerebral blood volume (CBV), mean transit time (MTT), and tracer delays were embedded, were analyzed using the Bayesian estimation algorithm as well as delay-insensitive singular value decomposition (SVD) algorithms of two software packages that were the best benchmarks in a previous cross-validation study. Correlation and agreement of quantitative values of these algorithms with true values were examined. CBF, CBV, and MTT values estimated by all the algorithms showed strong correlations with the true values (r = 0.91-0.92, 0.97-0.99, and 0.91-0.96, respectively). In addition, the values generated by the Bayesian estimation algorithm for all of these parameters showed good agreement with the true values [intraclass correlation coefficient (ICC) = 0.90, 0.99, and 0.96, respectively], while MTT values from the SVD algorithms were suboptimal (ICC = 0.81-0.82). Quantitative analysis using a digital phantom revealed that the Bayesian estimation algorithm yielded CBF, CBV, and MTT maps strongly correlated with the true values and MTT maps with better agreement than those produced by delay-insensitive SVD algorithms. (orig.)
Passalia, Claudio; Alfano, Orlando M. [INTEC - Instituto de Desarrollo Tecnologico para la Industria Quimica, CONICET - UNL, Gueemes 3450, 3000 Santa Fe (Argentina); FICH - Departamento de Medio Ambiente, Facultad de Ingenieria y Ciencias Hidricas, Universidad Nacional del Litoral, Ciudad Universitaria, 3000 Santa Fe (Argentina); Brandi, Rodolfo J., E-mail: rbrandi@santafe-conicet.gov.ar [INTEC - Instituto de Desarrollo Tecnologico para la Industria Quimica, CONICET - UNL, Gueemes 3450, 3000 Santa Fe (Argentina); FICH - Departamento de Medio Ambiente, Facultad de Ingenieria y Ciencias Hidricas, Universidad Nacional del Litoral, Ciudad Universitaria, 3000 Santa Fe (Argentina)
2012-04-15
Highlights: ► Indoor pollution control via photocatalytic reactors. ► Scaling-up methodology based on previously determined mechanistic kinetics. ► Radiation interchange model between catalytic walls using configuration factors. ► Modeling and experimental validation of a complex geometry photocatalytic reactor. - Abstract: A methodology for modeling photocatalytic reactors for their application in indoor air pollution control is carried out. The methodology implies, firstly, the determination of intrinsic reaction kinetics for the removal of formaldehyde. This is achieved by means of a simple geometry, continuous reactor operating under kinetic control regime and steady state. The kinetic parameters were estimated from experimental data by means of a nonlinear optimization algorithm. The second step was the application of the obtained kinetic parameters to a very different photoreactor configuration. In this case, the reactor is a corrugated wall type using nanosize TiO₂ as catalyst irradiated by UV lamps that provided a spatially uniform radiation field. The radiative transfer within the reactor was modeled through a superficial emission model for the lamps, the ray tracing method and the computation of view factors. The velocity and concentration fields were evaluated by means of a commercial CFD tool (Fluent 12) where the radiation model was introduced externally. The results of the model were compared experimentally in a corrugated wall, bench scale reactor constructed in the laboratory. The overall pollutant conversion showed good agreement between model predictions and experiments, with a root mean square error less than 4%.
Extrapolation of acenocoumarol pharmacogenetic algorithms.
Jiménez-Varo, Enrique; Cañadas-Garre, Marisa; Garcés-Robles, Víctor; Gutiérrez-Pimentel, María José; Calleja-Hernández, Miguel Ángel
2015-11-01
Acenocoumarol (ACN) has a narrow therapeutic range that is especially difficult to control at the start of its administration. Various pharmacogenetic-guided dosing algorithms have been developed, but further work on their external validation is required. The aim of this study was to evaluate the extrapolation of pharmacogenetic algorithms for ACN as an alternative to the development of a specific algorithm for a given population. The predictive performance, deviation, accuracy, and clinical significance of five pharmacogenetic algorithms (EU-PACT, Borobia, Rathore, Markatos, Krishna Kumar) were compared in 189 stable ACN patients representing all indications for anticoagulant treatment. The correlation between the dose predictions of the five pharmacogenetic models ranged from 7.7 to 70.6%, and the percentage of patients with a correct prediction (deviation ≤20% from the actual ACN dose) ranged from 5.9 to 40.7%. The EU-PACT and Borobia pharmacogenetic dosing algorithms were the most accurate in our setting and evidenced the best clinical performance. Among the five models studied, the EU-PACT and Borobia pharmacogenetic dosing algorithms demonstrated the best potential for extrapolation. Copyright © 2015 Elsevier Inc. All rights reserved.
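The accuracy criterion used above, the percentage of patients whose predicted dose deviates at most 20% from the actual dose, is simple to state in code. The dose vectors below are hypothetical illustration values, not data from the study.

```python
def pct_within_tolerance(predicted, actual, tol=0.20):
    """Percentage of patients whose predicted dose is within tol of the actual dose."""
    hits = sum(abs(p - a) <= tol * a for p, a in zip(predicted, actual))
    return 100.0 * hits / len(actual)

# Hypothetical weekly doses (mg): some algorithm's predictions vs. actual doses
predicted = [14.0, 21.0, 10.0, 28.0]
actual = [14.0, 28.0, 11.0, 18.0]
```

Running the function on the toy vectors classifies the first and third patients as correct predictions and the other two as deviations beyond 20%.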
Boosting of Image Denoising Algorithms
Romano, Yaniv; Elad, Michael
2015-01-01
In this paper we propose a generic recursive algorithm for improving image denoising methods. Given the initial denoised image, we suggest repeating the following "SOS" procedure: (i) (S)trengthen the signal by adding the previous denoised image to the degraded input image, (ii) (O)perate the denoising method on the strengthened image, and (iii) (S)ubtract the previous denoised image from the restored signal-strengthened outcome. The convergence of this process is studied for the K-SVD image ...
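The three-step "SOS" recursion described above reduces to x_{k+1} = f(y + x_k) - x_k for a denoiser f and noisy input y. The sketch below uses a moving-average filter as a stand-in denoiser on a toy 1-D signal; the paper itself targets stronger image denoisers such as K-SVD.

```python
import numpy as np

def smooth(v):
    """Stand-in denoiser f: a 5-tap moving average (the paper uses e.g. K-SVD)."""
    return np.convolve(v, np.ones(5) / 5.0, mode="same")

def sos_boost(y, denoise, iters=5):
    """SOS: (S)trengthen y + x, (O)perate f(.), (S)ubtract the previous estimate."""
    x = denoise(y)                 # initial denoised estimate
    for _ in range(iters):
        x = denoise(y + x) - x     # one SOS round
    return x

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 4 * np.pi, 200))
y = clean + 0.3 * rng.normal(size=200)
x_sos = sos_boost(y, smooth)
```

For a linear denoiser the recursion can be analyzed in closed form; the point of the paper is that it also boosts strongly nonlinear, state-of-the-art denoisers.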
Evolutionary Graph Drawing Algorithms
Huang Jing-wei; Wei Wen-fang
2003-01-01
In this paper, graph drawing algorithms based on genetic algorithms are designed for general undirected graphs and directed graphs. As shown, graph drawing algorithms designed with genetic algorithms have the following advantages: the frames of the algorithms are unified, the method is simple, and different algorithms may be obtained by designing different objective functions, thereby enhancing the reuse of the algorithms. Also, aesthetics or constraints may be added to satisfy different requirements.
A novel algorithm for satellite data transmission
Anonymous
2009-01-01
For remote sensing satellite data transmission, a novel algorithm is proposed in this paper. It integrates different types of feature descriptors into multistage recognizers. In the first level, a dynamic clustering algorithm is used. In the second level, an improved support vector machines algorithm demonstrates its validity. In the third level, a shape matrix similarity comparison algorithm shows its excellent performance. The single child recognizers are connected in series, but they are independent of each other. Objects which are not recognized correctly by the lower level recognizers are then passed to the higher level recognizers. Experimental results show that the multistage recognition algorithm greatly improves accuracy with higher level feature descriptors and higher level recognizers. The algorithm may offer a new methodology for high speed satellite data transmission.
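The serial cascade described above, where objects rejected at one level fall through to the next, more powerful recognizer, can be sketched generically. The two threshold-based recognizers below are stand-ins for the paper's clustering, SVM and shape-matrix stages.

```python
def cascade(recognizers, items):
    """Run recognizers in series; items rejected (None) fall through to the next level."""
    labels, pending = {}, list(items)
    for recognize in recognizers:
        still_unknown = []
        for x in pending:
            label = recognize(x)
            if label is None:
                still_unknown.append(x)   # pass to the higher-level recognizer
            else:
                labels[x] = label
        pending = still_unknown
    return labels, pending

# Stand-in recognizers of increasing power (hypothetical; the paper uses
# dynamic clustering, an improved SVM, and shape-matrix comparison)
level1 = lambda x: "small" if x < 10 else None
level2 = lambda x: "medium" if x < 100 else None
labels, unknown = cascade([level1, level2], [3, 42, 1000])
```

Because each stage only sees what earlier stages rejected, the expensive high-level recognizers run on as few objects as possible.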
Relative Pose Estimation Algorithm with Gyroscope Sensor
Shanshan Wei
2016-01-01
This paper proposes a novel vision and inertial fusion algorithm, S2fM (Simplified Structure from Motion), for camera relative pose estimation. Different from existing algorithms, our algorithm estimates the rotation parameter and translation parameter separately. S2fM employs gyroscopes to estimate the camera rotation parameter, which is later fused with the image data to estimate the camera translation parameter. Our contributions are in two aspects. (1) Under the circumstance that no inertial sensor can estimate the translation parameter accurately enough, we propose a translation estimation algorithm that fuses gyroscope sensor and image data. (2) Our S2fM algorithm is efficient and suitable for smart devices. Experimental results validate the efficiency of the proposed S2fM algorithm.
A new cloud algorithm for GOME data
Grzegorski, M.; Beierle, S.; Friedeburg, C.; Hollwedel, J.; Khokhar, F.; Kühl, S.; Platt, U.; Wenig, M.; Wilms-Grabe, W.; Wagner, T.
2003-04-01
The Global Ozone Monitoring Experiment (GOME) on the ERS-2 satellite allows the measurement of many tropospheric trace gases (e.g. NO_2, SO_2, BrO, HCHO, H_2O) using the DOAS technique. Cloud algorithms are essential for the accurate retrieval of the tropospheric vertical column density of these trace gases. A new algorithm using PMD data is presented. The results are validated through comparison with other algorithms (e.g. FRESCO, CRUSA). Problems found in existing algorithms, such as overestimated cloud fractions over desert regions and negative values over oceans, are significantly improved with the new algorithm. Other possible error sources, such as the systematic intensity decrease across the subpixels, also influence the calculation of the cloud fractions. The new algorithm attempts to correct this effect.
A Class of Coning Algorithms Based on a Half-Compressed Structure
Chuanye Tang
2014-08-01
Aiming to advance the coning algorithm performance of strapdown inertial navigation systems, a new half-compressed coning correction structure is presented. The half-compressed algorithm structure is analytically proven to be equivalent to the traditional compressed structure under coning environments. The half-compressed algorithm coefficients can be configured directly from traditional compressed algorithm coefficients. A type of algorithm error model is defined for coning algorithm performance evaluation under maneuver conditions. Like previous uncompressed algorithms, the half-compressed algorithm has improved maneuver accuracy and retains coning accuracy compared with its corresponding compressed algorithm. Compared with prior uncompressed algorithms, the formula for the new algorithm coefficients is simpler.
Fontana, W.
1990-12-13
In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed-size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.
Optimal Multistage Algorithm for Adjoint Computation
Aupy, Guillaume; Herrmann, Julien; Hovland, Paul; Robert, Yves
2016-01-01
We reexamine the work of Stumm and Walther on multistage algorithms for adjoint computation. We provide an optimal algorithm for this problem when there are two levels of checkpoints, in memory and on disk. Previously, optimal algorithms for adjoint computations were known only for a single level of checkpoints with no writing and reading costs; a well-known example is the binomial checkpointing algorithm of Griewank and Walther. Stumm and Walther extended that binomial checkpointing algorithm to the case of two levels of checkpoints, but they did not provide any optimality results. We bridge the gap by designing the first optimal algorithm in this context. We experimentally compare our optimal algorithm with that of Stumm and Walther to assess the difference in performance.
Microgenetic optimization algorithm for optimal wavefront shaping
Anderson, Benjamin R; Gunawidjaja, Ray; Eilers, Hergen
2015-01-01
One of the main limitations of utilizing optimal wavefront shaping in imaging and authentication applications is the slow speed of the optimization algorithms currently being used. To address this problem we develop a micro-genetic optimization algorithm (μGA) for optimal wavefront shaping. We test the abilities of the μGA and make comparisons to previous algorithms (iterative and simple-genetic) by using each algorithm to optimize transmission through an opaque medium. From our experiments we find that the μGA is faster than both the iterative and simple-genetic algorithms and that both genetic algorithms are more resistant to noise and sample decoherence than the iterative algorithm.
Juliette Richetin
Since the development of D scores for the Implicit Association Test (IAT), few studies have examined whether there is a better scoring method. In this contribution, we tested the effect of four relevant parameters for IAT data: the treatment of extreme latencies, the error treatment, the method for computing the IAT difference, and the distinction between practice and test critical trials. For some options of these parameters, we included robust statistical methods that can provide viable alternative metrics to existing scoring algorithms, especially given the specificity of reaction time data. We thus elaborated 420 algorithms resulting from the combination of all the different options and tested the main effect of the four parameters with robust statistical analyses, as well as their interaction with the type of IAT (i.e., with or without a built-in penalty included in the IAT procedure). From the results, we can offer some recommendations. A treatment of extreme latencies is preferable, but only if it consists in replacing rather than eliminating them. Errors contain important information and should not be discarded. The D score still seems to be a good way to compute the difference, although the G score could be a good alternative; finally, it seems better not to compute the IAT difference separately for practice and test critical trials. From these recommendations, we propose to improve the traditional D scores with small yet effective modifications.
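The recommended treatment of extreme latencies (replacing rather than eliminating them) can be illustrated with a minimal D-score-style computation. The boundary values and function below are hypothetical illustrations, not the exact algorithms tested in the paper.

```python
import statistics

def d_score(compatible, incompatible, low=400, high=10000):
    """Sketch of a D-score-style IAT metric: the mean latency difference
    between incompatible and compatible blocks, divided by the pooled
    standard deviation. Latencies (ms) outside [low, high] are replaced
    with the boundary value, i.e. replaced rather than eliminated, per
    the paper's recommendation (boundaries here are assumptions)."""
    clip = lambda xs: [min(max(x, low), high) for x in xs]
    compatible, incompatible = clip(compatible), clip(incompatible)
    pooled_sd = statistics.stdev(compatible + incompatible)
    return (statistics.mean(incompatible)
            - statistics.mean(compatible)) / pooled_sd
```

A positive score indicates slower responding in the incompatible block, the usual direction of an implicit association effect.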
Solving Hitchcock's transportation problem by a genetic algorithm
CHEN Hai-feng; CHO Joong Rae; LEE Jeong-Tae
2004-01-01
Genetic algorithms (GAs) employ the evolutionary process of Darwin's natural selection theory to find solutions of optimization problems. In this paper, an implementation of a genetic algorithm is put forward to solve a classical transportation problem, namely the Hitchcock Transportation Problem (HTP), and the GA is improved to search for all optimal solutions and identify them automatically. The algorithm is coded in C++ and validated by numerical examples. The computational results show that the algorithm is efficient for solving the Hitchcock transportation problem.
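A minimal GA for a transportation problem can be sketched as follows. This is an illustrative permutation-encoded variant with greedy decoding, swap mutation, and binary tournament selection; it is not the authors' C++ implementation, and all parameter values are assumptions.

```python
import random

def decode(perm, supply, demand, cost):
    """Fill tableau cells greedily in the order given by perm,
    respecting remaining supply and demand; returns total cost."""
    s, d = supply[:], demand[:]
    total = 0
    for i, j in perm:
        q = min(s[i], d[j])
        s[i] -= q
        d[j] -= q
        total += q * cost[i][j]
    return total

def ga_transport(supply, demand, cost, pop=30, gens=100, seed=0):
    """Permutation-encoded GA for a balanced transportation problem
    (assumes total supply equals total demand)."""
    rng = random.Random(seed)
    cells = [(i, j) for i in range(len(supply)) for j in range(len(demand))]
    fitness = lambda p: decode(p, supply, demand, cost)
    def rand_perm():
        p = cells[:]
        rng.shuffle(p)
        return p
    population = [rand_perm() for _ in range(pop)]
    best = min(population, key=fitness)
    for _ in range(gens):
        nxt = []
        for _ in range(pop):
            a, b = rng.sample(population, 2)        # binary tournament
            child = min(a, b, key=fitness)[:]
            i, j = rng.randrange(len(child)), rng.randrange(len(child))
            child[i], child[j] = child[j], child[i]  # swap mutation
            nxt.append(child)
        population = nxt
        best = min(population + [best], key=fitness)
    return best, fitness(best)
```

Because the greedy decoder always produces a feasible shipment plan, the GA only has to search over cell orderings rather than over allocation matrices.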
A NEW ALGORITHM OF THE NONLINEAR ADAPTIVE INTERPOLATION
Shi Lingfeng; Guo Baolong
2006-01-01
The paper presents a new algorithm of Nonlinear Adaptive Interpolation (NLAI). NLAI is based on both the gradients and the curvature of the signals within the predicted subsection. It is characterized as an adaptive nonlinear interpolation method that extracts the characteristics of the signals. Experimental research verifies the validity of the algorithm using echoes of Ground Penetrating Radar (GPR). A comparison of this algorithm with other traditional algorithms demonstrates that it is feasible.
A Clustal Alignment Improver Using Evolutionary Algorithms
Thomsen, Rene; Fogel, Gary B.; Krink, Thimo
2002-01-01
Multiple sequence alignment (MSA) is a crucial task in bioinformatics. In this paper we extended previous work with evolutionary algorithms (EAs) by using MSA solutions obtained from the well-known Clustal V algorithm as a candidate solution seed for the initial EA population. Our results clearly show...
Kalman plus weights: a time scale algorithm
Greenhall, C. A.
2001-01-01
KPW is a time scale algorithm that combines Kalman filtering with the basic time scale equation (BTSE). A single Kalman filter that estimates all clocks simultaneously is used to generate the BTSE frequency estimates, while the BTSE weights are inversely proportional to the white FM variances of the clocks. Results from simulated clock ensembles are compared to previous simulation results from other algorithms.
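The BTSE combination step described above can be sketched as a weighted average of each clock's deviation from its prediction, with weights inversely proportional to the white-FM variances. This is a minimal illustration, not the KPW code; the Kalman filter that would produce the predictions is omitted, and the function name is an assumption.

```python
def btse_offset(readings, predictions, variances):
    """Basic time scale equation (sketch): the ensemble correction is
    a weighted average of each clock's deviation from its predicted
    reading, with weights inversely proportional to the clocks'
    white-FM variances, normalized to sum to one."""
    w = [1.0 / v for v in variances]
    total = sum(w)
    w = [wi / total for wi in w]
    return sum(wi * (x - xh)
               for wi, x, xh in zip(w, readings, predictions))
```

A clock with a large white-FM variance thus contributes almost nothing to the ensemble, which is the intended stabilizing behavior.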
Identifying Primary Spontaneous Pneumothorax from Administrative Databases: A Validation Study
Eric Frechette
2016-01-01
Introduction. Primary spontaneous pneumothorax (PSP) is a disorder commonly encountered in healthy young individuals. There is no differentiation between PSP and secondary pneumothorax (SP) in the current version of the International Classification of Diseases (ICD-10). This complicates the conduct of epidemiological studies on the subject. Objective. To validate the accuracy of an algorithm that identifies cases of PSP from administrative databases. Methods. The charts of 150 patients who consulted the emergency room (ER) with a recorded main diagnosis of pneumothorax were reviewed to determine the type of pneumothorax that occurred. The corresponding hospital administrative data collected during previous hospitalizations and ER visits were processed through the proposed algorithm. The results were compared over two different age groups. Results. There were 144 cases of pneumothorax correctly coded (96%). The results obtained from the PSP algorithm demonstrated a significantly higher sensitivity (97% versus 81%, p=0.038) and positive predictive value (87% versus 46%, p<0.001) in patients under 40 years of age than in older patients. Conclusions. The proposed algorithm is adequate to identify cases of PSP from administrative databases in the age group classically associated with the disease. This makes possible its utilization in large population-based studies.
Fatigue evaluation algorithms: Review
Passipoularidis, V.A.; Broendsted, P.
2009-11-15
A progressive damage fatigue simulator for variable amplitude loads named FADAS is discussed in this work. FADAS (Fatigue Damage Simulator) performs ply-by-ply stress analysis using classical lamination theory and implements adequate stiffness discount tactics based on the failure criterion of Puck to model the degradation caused by failure events at the ply level. Residual strength is incorporated as the fatigue damage accumulation metric. Once the typical fatigue and static properties of the constitutive ply are determined, the performance of an arbitrary lay-up under uniaxial and/or multiaxial load time series can be simulated. The predictions are validated against fatigue life data both from repeated block tests at a single stress ratio and against spectral fatigue using the WISPER, WISPERX and NEW WISPER load sequences on a Glass/Epoxy multidirectional laminate typical of wind turbine rotor blade construction. Two versions of the algorithm, one using single-step and the other using incremental application of each load cycle (in case of ply failure), are implemented and compared. Simulation results confirm the ability of the algorithm to take load sequence effects into account. In general, FADAS performs well in predicting life under both spectral and block loading fatigue.
Austin, T M; Ovtchinnikov, S; Werner, G R; Bellantoni, L
2010-01-01
The recently developed frequency extraction algorithm [G.R. Werner and J.R. Cary, J. Comp. Phys. 227, 5200 (2008)] that enables a simple FDTD algorithm to be transformed into an efficient eigenmode solver is applied to a realistic accelerator cavity modeled with embedded boundaries and Richardson extrapolation. Previously, the frequency extraction method was shown to be capable of distinguishing M degenerate modes by running M different simulations, and to permit mode extraction with minimal post-processing effort that only requires solving a small eigenvalue problem. Realistic calculations for an accelerator cavity are presented in this work to establish the validity of the method for realistic modeling scenarios and to illustrate the complexities of the computational validation process. The method is found to be able to extract the frequencies with error that is less than a part in 10^5. The corrected experimental and computed values differ by about one part in 10^$, which is accounted for (in largest part)...
Van Uytven, Eric, E-mail: eric.vanuytven@cancercare.mb.ca; Van Beek, Timothy [Medical Physics Department, CancerCare Manitoba, 675 McDermot Avenue, Winnipeg, Manitoba R3E 0V9 (Canada); McCowan, Peter M. [Department of Physics and Astronomy, University of Manitoba, Winnipeg, Manitoba R3T 2N2, Canada and Medical Physics Department, CancerCare Manitoba, 675 McDermot Avenue, Winnipeg, Manitoba R3E 0V9 (Canada); Chytyk-Praznik, Krista [Medical Physics Department, Nova Scotia Cancer Centre, 5820 University Avenue, Halifax, Nova Scotia B3H 1V7 (Canada); Greer, Peter B. [School of Mathematical and Physical Sciences, University of Newcastle, Newcastle, NSW 2308 (Australia); Department of Radiation Oncology, Calvary Mater Newcastle Hospital, Newcastle, NSW 2298 (Australia); McCurdy, Boyd M. C. [Department of Physics and Astronomy, University of Manitoba, Winnipeg, Manitoba R3T 2N2 (Canada); Medical Physics Department, CancerCare Manitoba, 675 McDermot Avenue, Winnipeg, Manitoba R3E 0V9 (Canada); Department of Radiology, University of Manitoba, 820 Sherbrook Street, Winnipeg, Manitoba R3A 1R9 (Canada)
2015-12-15
Purpose: Radiation treatments are trending toward delivering higher doses per fraction under stereotactic radiosurgery and hypofractionated treatment regimens. There is a need for accurate 3D in vivo patient dose verification using electronic portal imaging device (EPID) measurements. This work presents a model-based technique to compute full three-dimensional patient dose reconstructed from on-treatment EPID portal images (i.e., transmission images). Methods: EPID dose is converted to incident fluence entering the patient using a series of steps which include converting measured EPID dose to fluence at the detector plane and then back-projecting the primary source component of the EPID fluence upstream of the patient. Incident fluence is then recombined with predicted extra-focal fluence and used to calculate 3D patient dose via a collapsed-cone convolution method. This method is implemented in an iterative manner, although in practice it provides accurate results in a single iteration. The robustness of the dose reconstruction technique is demonstrated with several simple slab phantom and nine anthropomorphic phantom cases. Prostate, head and neck, and lung treatments are all included as well as a range of delivery techniques including VMAT and dynamic intensity modulated radiation therapy (IMRT). Results: Results indicate that the patient dose reconstruction algorithm compares well with treatment planning system computed doses for controlled test situations. For simple phantom and square field tests, agreement was excellent with a 2%/2 mm 3D chi pass rate ≥98.9%. On anthropomorphic phantoms, the 2%/2 mm 3D chi pass rates ranged from 79.9% to 99.9% in the planning target volume (PTV) region and 96.5% to 100% in the low dose region (>20% of prescription, excluding PTV and skin build-up region). Conclusions: An algorithm to reconstruct delivered patient 3D doses from EPID exit dosimetry measurements was presented. The method was applied to phantom and patient
Influence of Previous Knowledge in Torrance Tests of Creative Thinking
María Aranguren
2015-07-01
The aim of this work is to analyze the influence of study field, expertise, and participation in recreational activities on Torrance Tests of Creative Thinking (TTCT, 1974) performance. Several hypotheses were postulated to explore the possible effects of previous knowledge on university students' TTCT verbal and TTCT figural outcomes. Participants in this study included 418 students from five study fields: Psychology; Philosophy and Literature; Music; Engineering; and Journalism and Advertising (Communication Sciences). The results of this research seem to indicate that study field, expertise, and participation in recreational activities have no influence on either of the TTCT tests. Instead, the findings seem to suggest some kind of interaction between certain skills needed to succeed in specific study fields and performance on creativity tests such as the TTCT. These results imply that the TTCT is a useful and valid instrument to measure creativity and that some cognitive processes involved in innovative thinking can be promoted using different intervention programs in schools and universities regardless of the students' study field.
Grandin, Robert; Gray, Tim
2017-02-01
The Center for NDE (CNDE) at Iowa State University has a long history of developing physics models for NDE and packaging these models into simulation tools which make the modeling capabilities accessible to CNDE's industrial sponsors. Recent work at CNDE has led to the development of a new ultrasonic simulation package, UTSim2, which aims to continue this tradition of supporting industrial application of CNDE models. In order to meet this goal, UTSim2 has been designed as an extensible software package which can support previously developed physics models as well as future models yet to be developed. Initial work has focused on the implementation of a Gauss-Hermite beam model, a paraxial approximation, which is implemented as part of the Thompson-Gray measurement model. This paper presents recent validation results, including comparisons against both previously validated model output and newly performed experiments.
Liu, Dong-sheng; Fan, Shu-jiang
2014-01-01
In order to offer mobile customers better service, we should first classify mobile users. To address the limitations of previous classification methods, this paper puts forward a modified decision tree algorithm for mobile user classification, which introduces a genetic algorithm to optimize the results of the decision tree algorithm. We also take context information as a classification attribute for the mobile user, and we classify the context into public context and private context classes. Then we analyze the processes and operators of the algorithm. Finally, we conduct an experiment on mobile users with the algorithm; we can classify mobile users into Basic service, E-service, Plus service, and Total service classes, and we can also derive some rules about the mobile users. Compared to the C4.5 decision tree algorithm and the SVM algorithm, the algorithm proposed in this paper has higher accuracy and greater simplicity.
Seismogeochemical algorithms for earthquake prediction: an overview
F. Quattrocchi
1997-06-01
While the literature abounds with case histories related to geochemical precursory phenomena, only a few studies on definite seismogeochemical algorithms have been published so far. Currently available theoretical algorithms are based on obsolete views of fluid migration processes that do not take into account the possibility of rapid and long-distance gas migration from the focal zone. Empirical algorithms are often based on a limited number of data and need validation in several geostructural environments. The algorithms of Sardarov (1981) and Rikitake (1987), for Rn and other geochemical elements, suggest that a definite relationship exists between geochemical parameters and seismic events. Their validation must be based on the verification of the independence (maintained by the former author) or dependence (maintained by the latter) of the precursor time on the seismic data.
Hippisley-Cox, Julia; Coupland, Carol
2014-07-28
To develop and validate risk algorithms (QBleed) for estimating the absolute risk of upper gastrointestinal and intracranial bleed for patients with and without anticoagulation aged 21-99 years in primary care. Open cohort study using routinely collected data from general practice linked to hospital episode statistics data and mortality data during the five year study period between 1 January 2008 and 1 October 2013. 565 general practices in England contributing to the national QResearch database to develop the algorithm and 188 different QResearch practices to validate the algorithm. All 753 general practices had data linked to hospital episode statistics and mortality data at individual patient level. Gastrointestinal bleed and intracranial bleed recorded on either the linked mortality data or the linked hospital records. We studied 4.4 million patients in the derivation cohort with 16.4 million person years of follow-up. During follow-up, 21,641 patients had an incident upper gastrointestinal bleed and 9040 had an intracranial bleed. For the validation cohort, we identified 1.4 million patients contributing over 4.9 million person years of follow-up. During follow-up, 6600 patients had an incident gastrointestinal bleed and 2820 had an intracranial bleed. We excluded patients without a valid Townsend score for deprivation and those prescribed anticoagulants in the 180 days before study entry. Candidate variables recorded on the general practice computer system before entry to the cohort, including personal variables (age, sex, Townsend deprivation score, ethnicity), lifestyle variables (smoking, alcohol intake), chronic diseases, prescribed drugs, clinical values (body mass index, systolic blood pressure), and laboratory test results (haemoglobin, platelets). We also included previous bleed recorded before entry to the study. The final QBleed algorithms incorporated 21 variables. When applied to the validation cohort, the algorithms in women explained 40% of the
A Hybrid Demon Algorithm for the Two-Dimensional Orthogonal Strip Packing Problem
Bili Chen
2015-01-01
This paper develops a hybrid demon algorithm for the two-dimensional orthogonal strip packing problem. The algorithm combines a placement procedure based on an improved heuristic, local search, and a demon algorithm that involves setting only one parameter. The hybrid algorithm is tested on a wide set of benchmark instances taken from the literature and compared with other well-known algorithms. The computational results validate the quality of the solutions and the effectiveness of the proposed algorithm.
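As background, a basic placement procedure for 2D orthogonal strip packing can be sketched as a first-fit shelf heuristic. This simple baseline is an illustration of the problem setting only, not the paper's improved heuristic or its demon algorithm; the function and its sorting rule are assumptions.

```python
def shelf_pack(rects, strip_width):
    """First-fit shelf heuristic for 2D orthogonal strip packing
    (illustrative baseline): rectangles, sorted by decreasing height,
    are placed left to right on shelves; a new shelf opens when the
    current one is full. Returns the total strip height used.
    Assumes every rectangle fits within the strip width."""
    rects = sorted(rects, key=lambda r: -r[1])
    shelf_y, shelf_h, x = 0, 0, 0
    for w, h in rects:
        if x + w > strip_width:   # current shelf full: open a new one
            shelf_y += shelf_h
            x, shelf_h = 0, 0
        x += w
        shelf_h = max(shelf_h, h)
    return shelf_y + shelf_h
```

Metaheuristics such as the paper's hybrid demon algorithm then search over placement orders to beat what a fixed heuristic like this achieves.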
Born approximation, scattering, and algorithm
Martinez, Alex; Hu, Mengqi; Gu, Haicheng; Qiao, Zhijun
2015-05-01
In the past few decades, many imaging algorithms were designed under the assumption of no multiple scattering. Recently, we discussed an algorithm for removing high-order scattering components from collected data. This paper is a continuation of our previous work. First, we investigate the current state of multiple scattering in SAR. Then, we revise our method and test it. Given an estimate of our target reflectivity, we compute the multiple-scattering effects in the target region for various frequencies. Furthermore, we propagate this energy through free space towards our antenna and remove it from the collected data.
Algorithm Animation with Galant.
Stallmann, Matthias F
2017-01-01
Although surveys suggest positive student attitudes toward the use of algorithm animations, it is not clear that they improve learning outcomes. The Graph Algorithm Animation Tool, or Galant, challenges and motivates students to engage more deeply with algorithm concepts, without distracting them with programming language details or GUIs. Even though Galant is specifically designed for graph algorithms, it has also been used to animate other algorithms, most notably sorting algorithms.
Adaptive cuckoo search algorithm for unconstrained optimization.
Ong, Pauline
2014-01-01
Modification of the intensification and diversification approaches in the recently developed cuckoo search algorithm (CSA) is performed. The alteration involves the implementation of adaptive step size adjustment strategy, and thus enabling faster convergence to the global optimal solutions. The feasibility of the proposed algorithm is validated against benchmark optimization functions, where the obtained results demonstrate a marked improvement over the standard CSA, in all the cases.
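The idea of an adaptive step size can be illustrated with a simplified cuckoo-search-style optimizer in which the step decays with the iteration count. The decay schedule, parameter values, and the omission of Lévy flights are all assumptions for illustration; this is not the published update rule.

```python
import math
import random

def adaptive_cuckoo(f, dim, bounds, n_nests=15, iters=300, pa=0.25, seed=1):
    """Simplified cuckoo-search-style minimizer with an adaptive step:
    the perturbation scale decays exponentially with iteration count,
    mimicking a faster-convergence step adjustment (sketch only)."""
    rng = random.Random(seed)
    lo, hi = bounds
    nests = [[rng.uniform(lo, hi) for _ in range(dim)]
             for _ in range(n_nests)]
    best = min(nests, key=f)[:]
    for t in range(1, iters + 1):
        step = (hi - lo) * math.exp(-4.0 * t / iters)  # adaptive decay
        for i, nest in enumerate(nests):
            cand = [min(max(x + step * rng.gauss(0, 1), lo), hi)
                    for x in nest]
            if f(cand) < f(nest):           # greedy replacement
                nests[i] = cand
        nests.sort(key=f)                   # abandon a fraction pa of
        for i in range(int(pa * n_nests)):  # the worst nests
            nests[-(i + 1)] = [rng.uniform(lo, hi) for _ in range(dim)]
        if f(nests[0]) < f(best):
            best = nests[0][:]
    return best, f(best)

# simple benchmark objective
sphere = lambda x: sum(xi * xi for xi in x)
```

On a smooth benchmark such as the sphere function, shrinking the step over time lets early iterations explore and late iterations refine, which is the intuition behind the adaptive strategy.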
Fast Algorithm of Multivariable Generalized Predictive Control
Jin, Yuanyu; Pang, Zhonghua; Cui, Hong
2005-01-01
To avoid the shortcoming of traditional (previous) generalized predictive control (GPC) algorithms, namely their large computational burden, a fast algorithm for multivariable generalized predictive control is presented in which only the current control actions are computed exactly on line, while the rest (the future control actions) are computed approximately off line. The algorithm is simple and can be used in arbitrary-dimension input, arbitrary-dimension output (ADIADO) linear systems. Because it does not need to solve the Diophantine equation and reduces the dimension of the inverse matrix, it largely decreases the computational burden. Finally, simulation results show that the presented algorithm is effective and practicable.
Large-scale prediction of microRNA-disease associations by combinatorial prioritization algorithm
Yu, Hua; Chen, Xiaojun; Lu, Lu
2017-03-01
Identification of the associations between microRNA molecules and human diseases from large-scale heterogeneous biological data is an important step for understanding the pathogenesis of diseases at the microRNA level. However, experimental verification of microRNA-disease associations is expensive and time-consuming. To overcome the drawbacks of conventional experimental methods, we present a combinatorial prioritization algorithm to predict microRNA-disease associations. Importantly, our method can be used to predict microRNAs (diseases) associated with diseases (microRNAs) that have no known associated microRNAs (diseases). The predictive performance of our proposed approach was evaluated and verified by internal cross-validations and external independent validations based on standard association datasets. The results demonstrate that our proposed method achieves impressive performance for predicting microRNA-disease associations, with an Area Under the receiver operating characteristic Curve (AUC) of 86.93%, which indeed outperforms previous prediction methods. In particular, we observed that an ensemble-based method integrating the predictions of multiple algorithms gives more reliable and robust predictions than any single algorithm, with the AUC score improved to 92.26%. We applied our combinatorial prioritization algorithm to lung neoplasms and breast neoplasms and revealed their top 30 microRNA candidates, which are consistent with the published literature and databases.
Issues Challenges and Tools of Clustering Algorithms
Parul Agarwal
2011-05-01
Clustering is an unsupervised technique of Data Mining. It means grouping similar objects together and separating the dissimilar ones. Each object in the data set is assigned a class label in the clustering process using a distance measure. This paper captures the problems that are faced in practice when clustering algorithms are implemented. It also considers the most extensively used tools, which are readily available and provide support functions that ease programming. Once algorithms have been implemented, they also need to be tested for validity. Several validation indexes exist for testing performance and accuracy, which are also discussed here.
Algorithms for intravenous insulin delivery.
Braithwaite, Susan S; Clement, Stephen
2008-08-01
This review aims to classify algorithms for intravenous insulin infusion according to design. Essential input data include the current blood glucose (BG(current)), the previous blood glucose (BG(previous)), the test time of BG(current) (test time(current)), the test time of BG(previous) (test time(previous)), and the previous insulin infusion rate (IR(previous)). Output data consist of the next insulin infusion rate (IR(next)) and next test time. The classification differentiates between "IR" and "MR" algorithm types, both defined as a rule for assigning an insulin infusion rate (IR), having a glycemic target. Both types are capable of assigning the IR for the next iteration of the algorithm (IR(next)) as an increasing function of BG(current), IR(previous), and rate-of-change of BG with respect to time, each treated as an independent variable. Algorithms of the IR type directly seek to define IR(next) as an incremental adjustment to IR(previous). At test time(current), under an IR algorithm the differences in values of IR(next) that might be assigned depending upon the value of BG(current) are not necessarily continuously dependent upon, proportionate to, or commensurate with either the IR(previous) or the rate-of-change of BG. Algorithms of the MR type create a family of IR functions of BG differing according to maintenance rate (MR), each being an iso-MR curve. The change of IR(next) with respect to BG(current) is a strictly increasing function of MR. At test time(current), algorithms of the MR type use IR(previous) and the rate-of-change of BG to define the MR, multiplier, or column assignment, which will be used for patient assignment to the right iso-MR curve and as precedent for IR(next). Bolus insulin therapy is especially effective when used in proportion to carbohydrate load to cover anticipated incremental transitory enteral or parenteral carbohydrate exposure. Specific distinguishing algorithm design features and choice of parameters may be important to
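An IR-type rule as classified above, where IR(next) is an incremental adjustment to IR(previous) driven by BG(current) and the rate of change of BG, might be sketched as follows. The constants, target, and function name are hypothetical illustrations of the algorithm class only; this is not a clinical protocol and must not be used for dosing.

```python
def ir_next(bg_current, bg_previous, minutes_elapsed, ir_previous,
            target=120.0, k_prop=0.01, k_rate=0.02):
    """Illustrative IR-type update (hypothetical constants, NOT a
    clinical protocol): adjust the previous infusion rate (units/h)
    by terms proportional to the deviation of BG (mg/dL) from target
    and to the BG rate of change, clamped at zero."""
    rate_of_change = (bg_current - bg_previous) / minutes_elapsed  # mg/dL/min
    adjustment = (k_prop * (bg_current - target)
                  + k_rate * rate_of_change * 60)
    return max(0.0, ir_previous + adjustment)
```

An MR-type algorithm would instead select among a family of such curves indexed by the maintenance rate, rather than incrementing the previous rate directly.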
ALICE; ATLAS; CMS; LHCb; Golling, Tobias
2008-09-06
The four experiments, ALICE, ATLAS, CMS and LHCb, are currently under construction at CERN. They will study the products of proton-proton collisions at the Large Hadron Collider. All experiments are equipped with sophisticated tracking systems, unprecedented in size and complexity. Full exploitation of both the inner detector and the muon system requires an accurate alignment of all detector elements. Alignment information is deduced from dedicated hardware alignment systems and from the reconstruction of charged particles. However, the system is degenerate, which means the data are insufficient to constrain all alignment degrees of freedom, so the techniques are prone to converging on wrong geometries. This deficiency necessitates validation and monitoring of the alignment. An exhaustive discussion of means of validation is the subject of this document, including examples and plans from all four LHC experiments, as well as other high energy experiments.
An Active Learning Algorithm for Control of Epidural Electrostimulation.
Desautels, Thomas A; Choe, Jaehoon; Gad, Parag; Nandra, Mandheerej S; Roy, Roland R; Zhong, Hui; Tai, Yu-Chong; Edgerton, V Reggie; Burdick, Joel W
2015-10-01
Epidural electrostimulation has shown promise for spinal cord injury therapy. However, finding effective stimuli on the multi-electrode stimulating arrays employed requires a laborious manual search of a vast space for each patient. Widespread clinical application of these techniques would be greatly facilitated by an autonomous, algorithmic system which chooses stimuli to simultaneously deliver effective therapy and explore this space. We propose a method based on GP-BUCB, a Gaussian process bandit algorithm. In n = 4 spinally transected rats, we implant epidural electrode arrays and examine the algorithm's performance in selecting bipolar stimuli to elicit specified muscle responses. These responses are compared with temporally interleaved intra-animal stimulus selections by a human expert. GP-BUCB successfully controlled the spinal electrostimulation preparation in 37 testing sessions, selecting 670 stimuli. These sessions included sustained autonomous operations (ten-session duration). Delivered performance with respect to the specified metric was as good as or better than that of the human expert. Despite receiving no information as to anatomically likely locations of effective stimuli, GP-BUCB also consistently discovered such a pattern. Further, GP-BUCB was able to extrapolate from previous sessions' results to make predictions about performance in new testing sessions, while remaining sufficiently flexible to capture temporal variability. These results provide validation for applying automated stimulus selection methods to the problem of spinal cord injury therapy.
Weber, Paula D.; Rudeen, David Keith; Lord, David L.
2014-08-01
SANSMIC is solution mining software that was developed and utilized by SNL in its role as geotechnical advisor to the US DOE SPR for planning purposes. Three SANSMIC leach modes - withdrawal, direct, and reverse leach - have been revalidated with multiple test cases for each mode. The withdrawal mode was validated using high quality data from recent leach activity while the direct and reverse modes utilized data from historical cavern completion reports. Withdrawal results compared very well with observed data, including the location and size of shelves due to string breaks, with relative leached volume differences ranging from 6 - 10% and relative radius differences from 1.5 - 3%. Profile comparisons for the direct mode were very good with relative leached volume differences ranging from 6 - 12% and relative radius differences from 5 - 7%. First, second, and third reverse configurations were simulated in order to validate SANSMIC over a range of relative hanging string and OBI locations. The first-reverse was simulated reasonably well with relative leached volume differences ranging from 1 - 9% and relative radius differences from 5 - 12%. The second-reverse mode showed the largest discrepancies in leach profile. Leached volume differences ranged from 8 - 12% and relative radius differences from 1 - 10%. In the third-reverse, relative leached volume differences ranged from 10 - 13% and relative radius differences were ~4%. Comparisons to historical reports were quite good, indicating that SANSMIC is essentially the same as documented and validated in the early 1980s.
An Algorithm for Successive Identification of Reflections
Hansen, Kim Vejlby; Larsen, Jan
1994-01-01
A new algorithm for successive identification of seismic reflections is proposed. Generally, the algorithm can be viewed as a curve matching method for images with specific structure. However, in the paper, the algorithm works on seismic signals assembled to constitute an image in which the investigated reflections produce curves. In numerical examples, the authors work on signals assembled in CMP gathers. The key idea of the algorithm is to estimate the reflection curve parameters and the reflection coefficients along these curves by combining the multipulse technique and the generalized Radon transform. The algorithm stops the reflection estimation when the actual estimated reflection is insignificant. The reflection validation procedure ensures that the estimated reflections follow the shape of the investigated reflection curves. The algorithm is successfully used in two numerical examples. One is based…
A Learning Algorithm for Multimodal Grammar Inference.
D'Ulizia, A; Ferri, F; Grifoni, P
2011-12-01
The high costs of developing and maintaining multimodal grammars for integrating and understanding input in multimodal interfaces motivate the investigation of novel algorithmic solutions for automating grammar generation and updating. Many algorithms for context-free grammar inference have been developed in the natural language processing literature. An extension of these algorithms toward the inference of multimodal grammars is necessary for multimodal input processing. In this paper, we propose a novel grammar inference mechanism that allows us to learn a multimodal grammar from positive samples of multimodal sentences. The algorithm first generates a multimodal grammar able to parse the positive sample sentences and afterward applies two learning operators and the minimum description length metric to improve the grammar description and avoid the over-generalization problem. The experimental results highlight the acceptable performance of the algorithm proposed in this paper, since it has a very high probability of parsing valid sentences.
The ethics of algorithms: Mapping the debate
Brent Daniel Mittelstadt
2016-11-01
In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences affecting individuals as well as groups and whole societies. This paper makes three contributions to clarify the ethical importance of algorithmic mediation. It provides a prescriptive map to organise the debate. It reviews the current discussion of ethical aspects of algorithms. And it assesses the available literature in order to identify areas requiring further work to develop the ethics of algorithms.
A new algorithm for hip fracture surgery
Palm, Henrik; Krasheninnikoff, Michael; Holck, Kim
2012-01-01
Background and purpose: Treatment of hip fracture patients is controversial. We implemented a new operative and supervision algorithm (the Hvidovre algorithm) for surgical treatment of all hip fractures, primarily based on our own previously published results. Methods: 2,000 consecutive patients over 50 years of age who were admitted and operated on because of a hip fracture were prospectively included; 1,000 of these patients were included after implementation of the algorithm. Demographic parameters, hospital treatment, and reoperations within the first postoperative year were assessed from patient records. Hospitalization caused by reoperations was reduced from 24% of total hospitalization before the algorithm was introduced to 18% after it was introduced. Interpretation: It is possible to implement an algorithm for treatment of all hip fracture patients in a large teaching hospital. In our case, the Hvidovre algorithm both raised…
Paone, G; De Angelis, G.; Portalone, L.; Greco, S.; Giosué, S.; Taglienti, A.; Bisetti, A; Ameglio, F
1997-01-01
By means of a mathematical score previously generated by discriminant analysis on 90 lung cancer patients, a new and larger group of 261 subjects [209 with non-small-cell lung cancer (NSCLC) and 52 with small-cell lung cancer (SCLC)] was analysed to confirm the ability of the method to distinguish between these two types of cancers. The score, which included the serum neuron-specific enolase (NSE) and CYFRA-21.1 levels, permitted correct classification of 93% of the patients. When the misclas...
A Disjoint Set Algorithm for the Watershed Transform
Meijster, Arnold; Roerdink, Jos B.T.M.; Theodoridis, S; Pitas, I; Stouraitis, A; Kalouptsidis, N
1998-01-01
In this paper the implementation of a watershed transform based on Tarjan’s Union-Find algorithm is described. The algorithm computes the watershed as defined previously. The algorithm consists of two stages. In the first stage the image to be segmented is transformed into a lower complete image,
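The Union-Find (disjoint-set) structure named in the abstract above is standard; a minimal sketch with path compression and union by rank follows (the pixel-labeling example is an illustrative assumption, not the paper's watershed code).

```python
class DisjointSet:
    """Tarjan-style union-find with path compression and union by rank."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        # walk to the root, halving the path as we go (path compression)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.rank[ra] < self.rank[rb]:   # attach the shallower tree
            ra, rb = rb, ra
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1

# merge pixels 0-1 and 2-3, then bridge them via 1-2:
# all four end up in one catchment basin, pixel 4 stays separate
ds = DisjointSet(5)
ds.union(0, 1); ds.union(2, 3); ds.union(1, 2)
print(ds.find(0) == ds.find(3))  # True
print(ds.find(4) == ds.find(0))  # False
```

In the watershed setting, each merge corresponds to assigning a pixel to the catchment basin of a neighboring minimum in the lower-complete image.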
Skiena, Steven S
2008-01-01
Explaining how to design algorithms and analyze their efficacy and efficiency, this book covers combinatorial algorithm technology, stressing design over analysis. It presents instruction on methods for designing and analyzing computer algorithms, and it contains a catalog of algorithmic resources, implementations, and a bibliography.
Hamiltonian Algorithm Sound Synthesis
大矢, 健一
2013-01-01
Hamiltonian Algorithm (HA) is an algorithm for searching for solutions in optimization problems. This paper introduces a sound synthesis technique using the Hamiltonian Algorithm and shows a simple example. "Hamiltonian Algorithm Sound Synthesis" uses the phase transition effect in HA. Because of this transition effect, totally new waveforms are produced.
Bucher, Taina
2017-01-01
…of algorithms affect people's use of these platforms, if at all? To help answer these questions, this article examines people's personal stories about the Facebook algorithm through tweets and interviews with 25 ordinary users. To understand the spaces where people and algorithms meet, this article develops the notion of the algorithmic imaginary. It is argued that the algorithmic imaginary – ways of thinking about what algorithms are, what they should be and how they function – is not just productive of different moods and sensations but plays a generative role in moulding the Facebook algorithm itself…
Geist, G.A. [Oak Ridge National Lab., TN (United States). Computer Science and Mathematics Div.; Howell, G.W. [Florida Inst. of Tech., Melbourne, FL (United States). Dept. of Applied Mathematics; Watkins, D.S. [Washington State Univ., Pullman, WA (United States). Dept. of Pure and Applied Mathematics
1997-11-01
The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.
Validation of tsunami inundation model TUNA-RP using OAR-PMEL-135 benchmark problem set
Koh, H. L.; Teh, S. Y.; Tan, W. K.; Kh'ng, X. Y.
2017-05-01
A standard set of benchmark problems, known as OAR-PMEL-135, is developed by the US National Tsunami Hazard Mitigation Program for tsunami inundation model validation. Any tsunami inundation model must be tested for its accuracy and capability using this standard set of benchmark problems before it can be gainfully used for inundation simulation. The authors have previously developed an in-house tsunami inundation model known as TUNA-RP. This inundation model solves the two-dimensional nonlinear shallow water equations coupled with a wet-dry moving boundary algorithm. This paper presents the validation of TUNA-RP against the solutions provided in the OAR-PMEL-135 benchmark problem set. This benchmark validation testing shows that TUNA-RP can indeed perform inundation simulation with accuracy consistent with that in the tested benchmark problem set.
An Immunohistochemical Algorithm for Ovarian Carcinoma Typing
Rahimi, Kurosh; Rambau, Peter F.; Naugler, Christopher; Le Page, Cécile; Meunier, Liliane; de Ladurantaye, Manon; Lee, Sandra; Leung, Samuel; Goode, Ellen L.; Ramus, Susan J.; Carlson, Joseph W.; Li, Xiaodong; Ewanowich, Carol A.; Kelemen, Linda E.; Vanderhyden, Barbara; Provencher, Diane; Huntsman, David; Lee, Cheng-Han; Gilks, C. Blake; Mes Masson, Anne-Marie
2016-01-01
There are 5 major histotypes of ovarian carcinomas. Diagnostic typing criteria have evolved over time, and past cohorts may be misclassified by current standards. Our objective was to reclassify the recently assembled Canadian Ovarian Experimental Unified Resource and the Alberta Ovarian Tumor Type cohorts using immunohistochemical (IHC) biomarkers and to develop an IHC algorithm for ovarian carcinoma histotyping. A total of 1626 ovarian carcinoma samples from the Canadian Ovarian Experimental Unified Resource and the Alberta Ovarian Tumor Type were subjected to a reclassification by comparing the original with the predicted histotype. Histotype prediction was derived from a nominal logistic regression modeling using a previously reclassified cohort (N=784) with the binary input of 8 IHC markers. Cases with discordant original or predicted histotypes were subjected to arbitration. After reclassification, 1762 cases from all cohorts were subjected to prediction models (χ2 Automatic Interaction Detection, recursive partitioning, and nominal logistic regression) with a variable IHC marker input. The histologic type was confirmed in 1521/1626 (93.5%) cases of the Canadian Ovarian Experimental Unified Resource and the Alberta Ovarian Tumor Type cohorts. The highest misclassification occurred in the endometrioid type, where most of the changes involved reclassification from endometrioid to high-grade serous carcinoma, which was additionally supported by mutational data and outcome. Using the reclassified histotype as the endpoint, a 4-marker prediction model correctly classified 88%, a 6-marker 91%, and an 8-marker 93% of the 1762 cases. This study provides statistically validated, inexpensive IHC algorithms, which have versatile applications in research, clinical practice, and clinical trials. PMID:26974996
Algorithmically specialized parallel computers
Snyder, Lawrence; Gannon, Dennis B
1985-01-01
Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer. This book discusses algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include a model for analyzing generalized inter-processor communication, a pipelined architecture for search tree maintenance, and a specialized computer organization for raster…
Clonal Selection Based Memetic Algorithm for Job Shop Scheduling Problems
Jin-hui Yang; Liang Sun; Heow Pueh Lee; Yun Qian; Yan-chun Liang
2008-01-01
A clonal selection based memetic algorithm is proposed for solving job shop scheduling problems in this paper. In the proposed algorithm, the clonal selection and the local search mechanism are designed to enhance exploration and exploitation. In the clonal selection mechanism, clonal selection, hypermutation and receptor edit theories are presented to construct an evolutionary searching mechanism which is used for exploration. In the local search mechanism, a simulated annealing local search algorithm based on Nowicki and Smutnicki's neighborhood is presented to exploit local optima. The proposed algorithm is examined using some well-known benchmark problems. Numerical results validate the effectiveness of the proposed algorithm.
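The simulated-annealing local search named in the abstract above follows the standard Metropolis acceptance rule; below is a generic sketch (the toy single-machine scheduling cost and swap neighborhood are illustrative assumptions, not the paper's job-shop neighborhood of Nowicki and Smutnicki).

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, T0=1.0, alpha=0.995, steps=5000, seed=0):
    """Generic SA: accept worse moves with probability exp(-delta/T),
    geometrically cooling the temperature T each step."""
    rnd = random.Random(seed)
    x, cx, T = x0, cost(x0), T0
    best, cbest = x, cx
    for _ in range(steps):
        y = neighbor(x, rnd)
        cy = cost(y)
        if cy <= cx or rnd.random() < math.exp((cx - cy) / T):
            x, cx = y, cy
            if cx < cbest:
                best, cbest = x, cx
        T *= alpha
    return best, cbest

# toy schedule: order 5 jobs to minimize the sum of completion times
times = [4, 2, 7, 1, 5]
def cost(perm):
    t = total = 0
    for j in perm:
        t += times[j]
        total += t
    return total
def neighbor(perm, rnd):
    p = list(perm)
    i, j = rnd.sample(range(len(p)), 2)  # swap two positions
    p[i], p[j] = p[j], p[i]
    return p

best, c = simulated_annealing(cost, neighbor, list(range(5)))
print(best, c)
```

For this objective, shortest-processing-time-first order is provably optimal, so the search should settle on the sorted-by-time permutation (cost 42).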
Diagonally loaded SMI algorithm based on inverse matrix recursion
Cao Jianshu; Wang Xuegang
2007-01-01
The derivation of a diagonally loaded sample-matrix inversion (LSMI) algorithm on the basis of inverse matrix recursion (the LSMI-IMR algorithm) is conducted by reconstructing the recursive formulation of the covariance matrix. In the new algorithm, diagonal loading is achieved by setting the initial inverse matrix, without any additional computation. In addition, a corresponding improved recursive algorithm with low computational complexity is presented. This eliminates the complex multiplications of the scalar coefficient and updating matrix, resulting in significant computational savings. Simulations show that the LSMI-IMR algorithm is valid.
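The abstract does not give the recursion itself; a plausible sketch of the underlying idea, assuming real-valued snapshots, is to fold the loading term delta*I into the initial inverse P0 = I/delta and then apply rank-one Sherman-Morrison updates per snapshot:

```python
import numpy as np

def lsmi_imr(snapshots, delta=1e-2):
    """Recursively build the inverse of the diagonally loaded sample
    covariance via the Sherman-Morrison identity. The loading enters only
    through the initial inverse P = (1/delta) * I, so no extra arithmetic
    is spent on an explicit loading term."""
    n = snapshots.shape[1]
    P = np.eye(n) / delta                           # encodes R0 = delta * I
    for x in snapshots:                             # one rank-1 update per snapshot
        x = x[:, None]
        Px = P @ x
        P = P - (Px @ Px.T) / (1.0 + float(x.T @ Px))
    return P

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 4))                    # 50 snapshots, 4 sensors
P = lsmi_imr(X, delta=0.1)
R = 0.1 * np.eye(4) + X.T @ X                       # loaded sample covariance
print(np.allclose(P, np.linalg.inv(R)))             # True
```

For complex array data the transposes would become conjugate transposes; the structure of the recursion is unchanged.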
Faster Algorithms on Branch and Clique Decompositions
Bodlaender, Hans L.; van Leeuwen, Erik Jan; van Rooij, Johan M. M.; Vatshelle, Martin
We combine two techniques recently introduced to obtain faster dynamic programming algorithms for optimization problems on graph decompositions. The unification of generalized fast subset convolution and fast matrix multiplication yields significant improvements to the running time of previous algorithms for several optimization problems. As an example, we give an O*(3^{(ω/2)k}) time algorithm for Minimum Dominating Set on graphs of branchwidth k, improving on the previous O*(4^k) algorithm. Here ω is the exponent in the running time of the best matrix multiplication algorithm (currently ω < 2.376). For graphs of cliquewidth k, we improve from O*(8^k) to O*(4^k). We also obtain an algorithm for counting the number of perfect matchings of a graph, given a branch decomposition of width k, that runs in time O*(2^{(ω/2)k}). Generalizing these approaches, we obtain faster algorithms for all so-called [ρ,σ]-domination problems on branch decompositions if ρ and σ are finite or cofinite. The algorithms presented in this paper either attain or are very close to natural lower bounds for these problems.
Improvements on EMG-based handwriting recognition with DTW algorithm.
Li, Chengzhang; Ma, Zheren; Yao, Lin; Zhang, Dingguo
2013-01-01
Previous works have shown that the Dynamic Time Warping (DTW) algorithm is a proper method of feature extraction for electromyography (EMG)-based handwriting recognition. In this paper, several modifications are proposed to improve the classification process and enhance recognition accuracy. A two-phase template-making approach is introduced to generate templates with more salient features, and a modified Mahalanobis Distance (mMD) approach is used to replace Euclidean Distance (ED) in order to minimize the interclass variance. To validate the effectiveness of these modifications, experiments were conducted in which four subjects wrote lowercase letters at a normal speed and four-channel EMG signals from the forearms were recorded. Results of offline analysis show that the improvements increased the average recognition accuracy by 9.20%.
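The DTW core referenced above is the classic dynamic program; a minimal 1-D sketch (illustrative only, with absolute-difference local cost rather than the paper's multi-channel EMG features):

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance
    between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible predecessors
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# a time-warped copy of the same shape scores far closer than a different shape
t = np.linspace(0, 2 * np.pi, 60)
sig = np.sin(t)
warped = np.sin(t ** 1.1 / t[-1] ** 0.1)   # nonlinear time distortion of sig
other = np.cos(t)
print(dtw_distance(sig, warped) < dtw_distance(sig, other))  # True
```

This invariance to time distortion is what makes DTW attractive for handwriting signals, where stroke timing varies between repetitions of the same letter.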
Validating MEDIQUAL Constructs
Lee, Sang-Gun; Min, Jae H.
In this paper, we validate the MEDIQUAL constructs across different media users in help desk service. In previous research, only two end-user constructs were used: assurance and responsiveness. In this paper, we extend the MEDIQUAL constructs to include reliability, empathy, assurance, tangibles, and responsiveness, based on the SERVQUAL theory. The results suggest that: 1) the five MEDIQUAL constructs are validated through factor analysis; that is, measures of the same construct have relatively high correlations across different methods, while measures of constructs expected to differ have low correlations; and 2) the five MEDIQUAL constructs have a statistically significant effect on media users' satisfaction in help desk service by regression analysis.
Application of a New Fuzzy Clustering Algorithm in Intrusion Detection
Anonymous
2008-01-01
This paper presents a new Section Set Adaptive FCM algorithm. The algorithm addresses previously identified shortcomings of local optimality, unreliable classification, and the need to fix the number of clusters in advance. It improves on the architecture of the FCM algorithm and enhances the analysis of effective clustering. During the clustering process, it may adjust the number of clusters dynamically. Finally, it uses the section-set method to decrease classification time. Experiments show that the algorithm can improve the dependability of clustering and the correctness of classification.
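The baseline FCM iteration that this entry (and the stability index in the first abstract) builds on alternates two closed-form updates; a plain sketch, without the paper's section-set or adaptive-cluster-count extensions:

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means: alternate the centroid and membership updates
    that minimize sum_{i,k} u_ik^m * ||x_k - v_i||^2."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                                    # columns sum to 1
    for _ in range(iters):
        V = (U**m @ X) / (U**m).sum(axis=1, keepdims=True)      # centroids
        d = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2) + 1e-12
        W = d ** (-2.0 / (m - 1.0))
        U = W / W.sum(axis=0)                             # membership update
    return U, V

# two well-separated blobs: memberships should split them cleanly
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.2, (20, 2)), rng.normal(5, 0.2, (20, 2))])
U, V = fcm(X, c=2)
labels = U.argmax(axis=0)
print(labels[:20].std() == 0 and labels[0] != labels[-1])  # True
```

Cluster-validity indices such as the one in the first abstract are then evaluated on the converged membership matrix U for each candidate number of clusters c.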
An Algorithm for the Convolution of Legendre Series
Hale, Nicholas
2014-01-01
An O(N^2) algorithm for the convolution of compactly supported Legendre series is described. The algorithm is derived from the convolution theorem for Legendre polynomials and the recurrence relation satisfied by spherical Bessel functions. Combining with previous work yields an O(N^2) algorithm for the convolution of Chebyshev series. Numerical results are presented to demonstrate the improved efficiency over the existing algorithm. © 2014 Society for Industrial and Applied Mathematics.
Margolis, C Z
1983-02-04
The clinical algorithm (flow chart) is a text format that is specially suited for representing a sequence of clinical decisions, for teaching clinical decision making, and for guiding patient care. A representative clinical algorithm is described in detail; five steps for writing an algorithm and seven steps for writing a set of algorithms are outlined. Five clinical education and patient care uses of algorithms are then discussed, including a map for teaching clinical decision making and protocol charts for guiding step-by-step care of specific problems. Clinical algorithms are compared as to their clinical usefulness with decision analysis. Three objections to clinical algorithms are answered, including the one that they restrict thinking. It is concluded that methods should be sought for writing clinical algorithms that represent expert consensus. A clinical algorithm could then be written for any area of medical decision making that can be standardized. Medical practice could then be taught more effectively, monitored accurately, and understood better.
Intuitionistic Fuzzy Possibilistic C Means Clustering Algorithms
Arindam Chaudhuri
2015-01-01
Intuitionistic fuzzy sets (IFSs) provide a mathematical framework based on fuzzy sets to describe vagueness in data. It finds interesting and promising applications in different domains. Here, we develop an intuitionistic fuzzy possibilistic C means (IFPCM) algorithm to cluster IFSs by hybridizing concepts of FPCM, IFSs, and distance measures. IFPCM resolves inherent problems encountered with information regarding membership values of objects to each cluster by generalizing membership and nonmembership with hesitancy degree. The algorithm is extended for clustering interval valued intuitionistic fuzzy sets (IVIFSs), leading to interval valued intuitionistic fuzzy possibilistic C means (IVIFPCM). The clustering algorithm has membership and nonmembership degrees as intervals. Information regarding membership and typicality degrees of samples to all clusters is given by the algorithm. The experiments are performed on both real and simulated datasets. It generates valuable information and produces overlapped clusters with different membership degrees. It takes into account the inherent uncertainty in information captured by IFSs. Some advantages of the algorithms are simplicity, flexibility, and low computational complexity. The algorithm is evaluated through cluster validity measures. The clustering accuracy of the algorithm is investigated on classification datasets with labeled patterns. The algorithm maintains appreciable performance compared to other methods in terms of pureness ratio.
Tian-qi WU; Min YAO; Jian-hua YANG
2016-01-01
By adopting the distributed problem-solving strategy, swarm intelligence algorithms have been successfully applied to many optimization problems that are difficult to deal with using traditional methods. At present, there are many well-implemented algorithms, such as particle swarm optimization, genetic algorithm, artificial bee colony algorithm, and ant colony optimization. These algorithms have already shown favorable performances. However, with the objects becoming increasingly complex, it is becoming gradually more difficult for these algorithms to meet human’s demand in terms of accuracy and time. Designing a new algorithm to seek better solutions for optimization problems is becoming increasingly essential. Dolphins have many noteworthy biological characteristics and living habits such as echolocation, information exchanges, cooperation, and division of labor. Combining these biological characteristics and living habits with swarm intelligence and bringing them into optimization prob-lems, we propose a brand new algorithm named the ‘dolphin swarm algorithm’ in this paper. We also provide the definitions of the algorithm and specific descriptions of the four pivotal phases in the algorithm, which are the search phase, call phase, reception phase, and predation phase. Ten benchmark functions with different properties are tested using the dolphin swarm algorithm, particle swarm optimization, genetic algorithm, and artificial bee colony algorithm. The convergence rates and benchmark func-tion results of these four algorithms are compared to testify the effect of the dolphin swarm algorithm. The results show that in most cases, the dolphin swarm algorithm performs better. The dolphin swarm algorithm possesses some great features, such as first-slow-then-fast convergence, periodic convergence, local-optimum-free, and no specific demand on benchmark functions. Moreover, the dolphin swarm algorithm is particularly appropriate to optimization problems, with more
New focused crawling algorithm
Su Guiyang; Li Jianhua; Ma Yinghua; Li Shenghong; Song Juping
2005-01-01
Focused crawling is a new research direction for search engines. It restricts information retrieval to, and provides search services in, a specific topic area. The focused crawling search algorithm is a key technique of a focused crawler and directly affects search quality. This paper first introduces several traditional topic-specific crawling algorithms; then an inverse-link-based topic-specific crawling algorithm is put forward. A comparison experiment proves that this algorithm has good performance in recall, clearly better than the traditional Breadth-First and Shark-Search algorithms. The experiment also proves that this algorithm has good precision.
Symplectic algebraic dynamics algorithm
2007-01-01
Based on the algebraic dynamics solution of ordinary differential equations and its integration, the symplectic algebraic dynamics algorithm s_n is designed, which preserves the local symplectic geometric structure of a Hamiltonian system and possesses the same precision as the naive algebraic dynamics algorithm n. Computer experiments for the 4th-order algorithms are made for five test models and the numerical results are compared with the conventional symplectic geometric algorithm, indicating that s_n has higher precision, that the algorithm-induced phase shift of the conventional symplectic geometric algorithm can be reduced, and that the dynamical fidelity can be improved by one order of magnitude.
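The abstract does not specify the s_n scheme, but the defining property it cites, preservation of the local symplectic structure of a Hamiltonian system, is easy to demonstrate with the standard second-order symplectic integrator (Stormer-Verlet leapfrog), shown here as an illustrative stand-in:

```python
def leapfrog(q, p, grad_V, dt, steps):
    """Stormer-Verlet: a 2nd-order symplectic integrator for H = p^2/2 + V(q).
    Being symplectic, its energy error stays bounded instead of drifting."""
    for _ in range(steps):
        p = p - 0.5 * dt * grad_V(q)   # half kick
        q = q + dt * p                 # drift
        p = p - 0.5 * dt * grad_V(q)   # half kick
    return q, p

# harmonic oscillator V(q) = q^2 / 2, initial energy E = 0.5
q, p = 1.0, 0.0
q, p = leapfrog(q, p, lambda q: q, dt=0.05, steps=10000)
E = 0.5 * p**2 + 0.5 * q**2
print(abs(E - 0.5) < 1e-3)  # True: energy preserved to O(dt^2) over 10^4 steps
```

A non-symplectic scheme such as explicit Euler would show secular energy growth on the same test, which is the qualitative behavior the paper's fidelity comparison measures.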
Adaptive cockroach swarm algorithm
Obagbuwa, Ibidun C.; Abidoye, Ademola P.
2017-07-01
An adaptive cockroach swarm optimization (ACSO) algorithm is proposed in this paper to strengthen the existing cockroach swarm optimization (CSO) algorithm. The ruthless component of CSO algorithm is modified by the employment of blend crossover predator-prey evolution method which helps algorithm prevent any possible population collapse, maintain population diversity and create adaptive search in each iteration. The performance of the proposed algorithm on 16 global optimization benchmark function problems was evaluated and compared with the existing CSO, cuckoo search, differential evolution, particle swarm optimization and artificial bee colony algorithms.
Decoherence in Search Algorithms
Abal, G; Marquezino, F L; Oliveira, A C; Portugal, R
2009-01-01
Recently several quantum search algorithms based on quantum walks were proposed. Those algorithms differ from Grover's algorithm in many aspects. The goal is to find a marked vertex in a graph faster than classical algorithms. Since the implementation of those new algorithms in quantum computers or in other quantum devices is error-prone, it is important to analyze their robustness under decoherence. In this work we analyze the impact of decoherence on quantum search algorithms implemented on two-dimensional grids and on hypercubes.
22 CFR 40.91 - Certain aliens previously removed.
2010-04-01
22 CFR, IMMIGRANTS UNDER THE IMMIGRATION AND NATIONALITY ACT, AS AMENDED, Aliens Previously Removed, § 40.91 Certain aliens previously removed. (a) 5-year bar. An alien who has been found inadmissible, whether as a result…
余靓平; 宋洪涛; 曾志勇; 王齐敏; 邱罕凡
2012-01-01
Objective: To assess whether the three existing pharmacogenetics-based warfarin dosing algorithms appropriately predict the actual maintenance dose in Han Chinese mechanical heart valve replacement patients (n = 130). Methods: The patients' CYP2C9 and VKORC1 genetic polymorphisms were detected by PCR-RFLP. The CYP2C9 and VKORC1 genotypes and other patient information were used to calculate predicted doses. Accuracy of the models was assessed using the absolute value of the difference between the predicted and actual dose, calculated on both an absolute and a percentage basis. Actual weekly dose was also regressed on predicted weekly dose, from which we obtained R2 values. Clinical accuracy of the predictions was assessed by computing the proportion of patients in whom the predicted dose was 20% or more below the actual dose (underdosed), within 20% of the actual dose (ideally dosed), or 20% or more above the actual dose (overdosed). Results: The average absolute error was smallest for the predictions made by the Wen model (3.74 mg/wk), followed by the Ohno model (4.07 mg/wk) and the IWPC model (5.05 mg/wk). R2 was 40.2% for the Wen model, 38.2% for the Ohno model, and 26.7% for the IWPC model. When comparing the percentage of patients for whom the predicted doses were ideal, the Wen model performed best (50.0%) in the low-dose group (≤21 mg/wk), whereas the Ohno model performed best (85.29%) in the middle-dose group (21-49 mg/wk), followed by the Wen model. Conclusion: The best accuracy was achieved by the Wen model and the best clinical accuracy by the Ohno model for predicting the actual maintenance dose in Han Chinese mechanical heart valve replacement patients.
FINITE DEFORMATION ELASTO-PLASTIC THEORY AND CONSISTENT ALGORITHM
Liu Xuejun; Li Mingrui; Huang Wenbin
2001-01-01
By using the logarithmic strain, the finite deformation plastic theory corresponding to the infinitesimal plastic theory is established. A plastic consistent algorithm with first-order accuracy for the finite element method (FEM) is developed. Numerical examples are presented to illustrate the validity of the theory and the effectiveness of the algorithm.
New MPPT algorithm based on hybrid dynamical theory
Elmetennani, Shahrazed
2014-11-01
This paper presents a new maximum power point tracking algorithm based on the hybrid dynamical theory. A multicell converter has been considered as an adaptation stage for the photovoltaic chain. The proposed algorithm is a hybrid automaton switching between eight different operating modes, which has been validated by simulation tests under different working conditions. © 2014 IEEE.
Genetic algorithms for protein threading.
Yadgari, J; Amir, A; Unger, R
1998-01-01
Despite many years of effort, a direct prediction of protein structure from sequence is still not possible. As a result, in the last few years researchers have started to address the "inverse folding problem": identifying and aligning a sequence to the fold with which it is most compatible, a process known as "threading". In two meetings in which protein folding predictions were objectively evaluated, it became clear that threading as a concept promises a real breakthrough, but that much improvement is still needed in the technique itself. Threading is an NP-hard problem, and thus no general polynomial solution can be expected. Still, a practical approach with demonstrated ability to find optimal solutions in many cases, and acceptable solutions in other cases, is needed. We applied the technique of Genetic Algorithms in order to significantly improve the ability of threading algorithms to find the optimal alignment of a sequence to a structure, i.e. the alignment with the minimum free energy. A major advance reported here is the design of a representation of the threading alignment as a string of fixed length. With this representation, validation of alignments and genetic operators are effectively implemented. Appropriate data structures and parameters have been selected. It is shown that Genetic Algorithm threading is effective and is able to find the optimal alignment in a few test cases. Furthermore, the described algorithm is shown to perform well even without pre-definition of core elements. Existing threading methods are dependent on such constraints to make their calculations feasible, but the concept of core elements is inherently arbitrary and should be avoided if possible. While a rigorous proof is hard to provide yet, we present indications that Genetic Algorithm threading is indeed capable of consistently finding good solutions of full alignments in search spaces of size up to 10^70.
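The fixed-length string representation emphasized above is what makes standard genetic operators applicable; a minimal GA sketch over fixed-length strings follows (the gene-match fitness is a toy stand-in for the paper's free-energy alignment score):

```python
import random

def genetic_search(fitness, length, alphabet, pop=60, gens=120, pmut=0.1, seed=0):
    """Minimal GA over fixed-length strings: tournament selection,
    one-point crossover, and per-gene mutation."""
    rnd = random.Random(seed)
    P = [[rnd.choice(alphabet) for _ in range(length)] for _ in range(pop)]
    for _ in range(gens):
        def pick():
            a, b = rnd.sample(P, 2)              # binary tournament
            return a if fitness(a) >= fitness(b) else b
        children = []
        while len(children) < pop:
            x, y = pick(), pick()
            cut = rnd.randrange(1, length)
            child = x[:cut] + y[cut:]            # one-point crossover
            child = [rnd.choice(alphabet) if rnd.random() < pmut else g
                     for g in child]             # per-gene mutation
            children.append(child)
        P = children
    return max(P, key=fitness)

# toy stand-in for an alignment score: count genes matching a hidden target
target = [2, 0, 3, 1, 2, 3, 0, 1]
best = genetic_search(lambda s: sum(g == t for g, t in zip(s, target)),
                      length=8, alphabet=[0, 1, 2, 3])
print(sum(g == t for g, t in zip(best, target)))  # near-perfect match count
```

In the threading setting, each position of the string would encode where a structural element aligns in the sequence, and fitness would be the negative free energy of that alignment.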
Gui, Shupeng; Rice, Andrew P; Chen, Rui; Wu, Liang; Liu, Ji; Miao, Hongyu
2017-01-31
Gene regulatory interactions are of fundamental importance to various biological functions and processes. However, only a few previous computational studies have claimed success in revealing genome-wide regulatory landscapes from temporal gene expression data, especially for complex eukaryotes like humans. Moreover, recent work suggests that these methods still suffer from the curse of dimensionality if the network size increases to 100 or higher. Here we present a novel scalable algorithm for identifying genome-wide gene regulatory network (GRN) structures, and we have verified the algorithm's performance by extensive simulation studies based on the DREAM challenge benchmark data. The highlight of our method is that its superior performance does not degenerate even for a network size on the order of 10^4, and it is thus readily applicable to large-scale complex networks. Such a breakthrough is achieved by considering both prior biological knowledge and multiple topological properties (i.e., sparsity and hub gene structure) of complex networks in the regularized formulation. We also validate and illustrate the application of our algorithm in practice using the time-course gene expression data from a study on human respiratory epithelial cells in response to influenza A virus (IAV) infection, as well as the ChIP-seq data from ENCODE on transcription factor (TF) and target gene interactions. An interesting finding, owing to the proposed algorithm, is that the biggest hub structures (e.g., top ten) in the GRN all center on transcription factors in the context of epithelial cell infection by IAV. The proposed algorithm is the first scalable method for large complex network structure identification. The GRN structure identified by our algorithm could reveal possible biological links and help researchers to choose which gene functions to investigate in a biological event. The algorithm described in this article is implemented in MATLAB®, and the source code is available.
Walking Algorithm of Humanoid Robot on Uneven Terrain with Terrain Estimation
Jiang Yi
2016-02-01
Humanoid robots are expected to achieve stable walking on uneven terrains. In this paper, a control algorithm for humanoid robots walking on previously unknown terrains with terrain estimation is proposed, which requires only minimal modification to the original walking gait. The swing foot trajectory is redesigned to ensure that the foot lands at the desired horizontal position under various terrain heights. A compliant terrain adaptation method is applied to the landing foot to achieve firm contact with the ground. Then a terrain estimation method that takes into account the deformations of the linkages is applied, providing the target for the subsequent correction and adjustment. The algorithm was validated through walking experiments on uneven terrains with the full-size humanoid robot Kong.
An improved real-time endovascular guidewire position simulation using shortest path algorithm.
Qiu, Jianpeng; Qu, Zhiyi; Qiu, Haiquan; Zhang, Xiaomin
2016-09-01
In this study, we propose a new graph-theoretical method to simulate guidewire paths inside the carotid artery. The minimum-energy guidewire path can be obtained by applying a shortest path algorithm, such as Dijkstra's algorithm for graphs, based on the principle of minimal total energy. Experiments on three phantoms were conducted for validation: the simulated and real guidewires overlap completely for the first and second phantoms, while 95% of the third phantom overlaps completely and the remaining 5% closely coincides. Compared with previous results under the same conditions, our method achieves 87% and 80% improvements for the first and third phantoms, respectively. Furthermore, a 91% improvement was obtained for the second phantom under the condition of reduced graph construction complexity.
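For illustration, the minimum-energy path selection described in the abstract above can be sketched with Dijkstra's algorithm on a weighted graph; the node names and edge energies below are invented for the example, not taken from the paper:

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm: graph maps node -> list of (neighbor, edge_energy)."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        if u == goal:
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    # Reconstruct the minimum-energy path by walking predecessors backwards.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

# Toy graph: three candidate guidewire positions with bending "energies".
graph = {"A": [("B", 1.0), ("C", 4.0)], "B": [("C", 1.5)], "C": []}
path, energy = shortest_path(graph, "A", "C")
```

In the paper's setting, nodes would be candidate guidewire positions and edge weights the associated bending and contact energies.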
Algorithm for Generating Train Calendar Texts
Karel Greiner
2013-04-01
The article describes an approach to generating train calendar texts for the needs of compiling the annual timetable in the conditions of the Czech Republic. Based on an analysis of the types of calendar texts that appear in various print outputs, a heuristic algorithm was designed to generate a text from a set of calendar days. The algorithm is part of an application that also provides a tool to define the text of the calendar by using a mask of sub-periods and the calendars to be displayed in them. The algorithm was tested on real timetable data. In most cases, the algorithm shows the same or better results than the previously used tools; in several cases, however, a better result can be obtained by the user. The described algorithm for generating calendar texts is part of a program that is used for compiling the timetable for trains in the Czech Republic.
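As a rough illustration of the task (not the paper's actual heuristic), a calendar text can be built by detecting the dominant weekday pattern over the period and then listing the exceptions; all names and rules below are invented for the sketch:

```python
from datetime import date, timedelta

WEEKDAY_NAMES = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]

def calendar_text(run_days, start, end):
    """Heuristic text for a train calendar: describe the weekday pattern,
    then list the exception dates (illustrative, not the paper's rules)."""
    period = []
    d = start
    while d <= end:
        period.append(d)
        d += timedelta(days=1)
    run = set(run_days)
    # Weekdays on which the train runs on more than half of the occurrences.
    pattern = {wd for wd in range(7)
               if sum(1 for d in period if d.weekday() == wd and d in run)
                  > sum(1 for d in period if d.weekday() == wd) / 2}
    exceptions = [d for d in period if (d.weekday() in pattern) != (d in run)]
    if pattern == set(range(7)) and not exceptions:
        return "daily"
    text = "runs " + ", ".join(WEEKDAY_NAMES[wd] for wd in sorted(pattern))
    if exceptions:
        text += "; except " + ", ".join(d.isoformat() for d in exceptions)
    return text
```

For example, a train running Monday through Friday of one week yields `"runs Mon, Tue, Wed, Thu, Fri"`.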
Fast chromosome karyotyping by auction algorithm.
Wu, Xiaolin; Dumitrescu, Sorina; Biyani, Pravesh; Wu, Qiang
2005-01-01
We consider the problem of automated classification of human chromosomes or karyotyping and study discrete optimisation algorithms to solve the problem as one of joint maximum likelihood classification. We demonstrate that the auction algorithm offers a simpler and more efficient solution for chromosome karyotyping than the previously known transportation algorithm, while still guaranteeing global optimality. This improvement in algorithm efficiency is made possible by first casting chromosome karyotyping into a problem of optimal assignment and then exploiting the sparsity of the assignment problem due to the inherent properties of chromosome data. Furthermore, the auction algorithm also works when the chromosome data in a cell are incomplete due to the exclusion of overlapped or severely bent chromosomes, as often encountered in routine quality data.
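The optimal-assignment formulation mentioned above can be illustrated with a minimal Bertsekas-style auction; the benefit matrix below is invented and the implementation is a generic dense sketch, not the paper's sparse karyotyping variant:

```python
def auction_assignment(benefit, eps=0.01):
    """Bertsekas-style auction for the assignment problem (maximisation).
    benefit[i][j] is the value of assigning person i to object j."""
    n = len(benefit)
    prices = [0.0] * n
    owner = [None] * n          # owner[j] = person currently holding object j
    assigned = [None] * n       # assigned[i] = object held by person i
    unassigned = list(range(n))
    while unassigned:
        i = unassigned.pop()
        # Best and second-best net values (benefit minus price) for bidder i.
        values = [benefit[i][j] - prices[j] for j in range(n)]
        j_best = max(range(n), key=lambda j: values[j])
        best = values[j_best]
        second = max(v for j, v in enumerate(values) if j != j_best) if n > 1 else best
        # The bid raises the price by the value margin plus eps (ensures progress).
        prices[j_best] += best - second + eps
        if owner[j_best] is not None:
            assigned[owner[j_best]] = None
            unassigned.append(owner[j_best])
        owner[j_best] = i
        assigned[i] = j_best
    return assigned
```

With `eps` small enough (below 1/n times the benefit granularity), the result is an optimal assignment.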
Software For Genetic Algorithms
Wang, Lui; Bayer, Steve E.
1992-01-01
SPLICER computer program is genetic-algorithm software tool used to solve search and optimization problems. Provides underlying framework and structure for building genetic-algorithm application program. Written in Think C.
Deriving and validating a risk estimation tool for screening asymptomatic chlamydia and gonorrhea.
Falasinnu, Titilola; Gilbert, Mark; Gustafson, Paul; Shoveller, Jean
2014-12-01
There has been considerable interest in the development of innovative service delivery modules for prioritizing resources in sexual health delivery in response to dwindling fiscal resources and rising infection rates. This study aims to derive and validate a risk scoring algorithm to accurately identify asymptomatic patients at increased risk for chlamydia and/or gonorrhea infection. We examined the electronic records of patient visits at sexual health clinics in Vancouver, Canada. We derived risk scores from regression coefficients of multivariable logistic regression model using visits between 2000 and 2006. We evaluated the model's discrimination, calibration, and screening performance. Temporal validation was assessed in visits from 2007 to 2012. The prevalence of infection was 1.8% (n = 10,437) and 2.1% (n = 14,956) in the derivation and validation data sets, respectively. The final model included younger age, nonwhite ethnicity, multiple sexual partners, and previous infection and showed reasonable performance in the derivation (area under the receiver operating characteristic curve = 0.74; Hosmer-Lemeshow P = 0.91) and validation (area under the receiver operating characteristic curve = 0.64; Hosmer-Lemeshow P = 0.36) data sets. A risk score cutoff point of at least 6 detected 91% and 83% of cases by screening 68% and 68% of the derivation and validation populations, respectively. These findings support the use of the algorithm for individualized risk assessment and have important implications for reducing unnecessary screening and saving costs. Specifically, we anticipate that the algorithm has potential uses in alternative settings such as Internet-based testing contexts by facilitating personalized test recommendations, stimulating health care-seeking behavior, and aiding risk communication by increasing sexually transmitted infection risk perception through the creation of tailored risk messages to different groups.
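The idea of turning regression coefficients into an additive risk score can be sketched as follows; the coefficients, point scale, and cutoff here are purely illustrative, not the study's fitted values:

```python
# Illustrative log-odds coefficients; NOT the study's fitted values.
coeffs = {"age_under_25": 0.9, "nonwhite": 0.5,
          "multiple_partners": 0.7, "previous_infection": 1.1}

def risk_points(coeffs, base=0.3):
    """Round each coefficient to integer points relative to a base unit."""
    return {k: round(v / base) for k, v in coeffs.items()}

def risk_score(patient, points):
    """Sum the points for the risk factors present in this patient."""
    return sum(points[k] for k, present in patient.items() if present)

points = risk_points(coeffs)
patient = {"age_under_25": True, "nonwhite": False,
           "multiple_partners": True, "previous_infection": True}
score = risk_score(patient, points)
screen = score >= 6   # cutoff analogous in spirit to the paper's "at least 6"
```

Scores above the chosen cutoff would trigger a screening recommendation; in practice the point scale is calibrated so the cutoff reaches the desired sensitivity.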
Progressive geometric algorithms
Sander P.A. Alewijnse
2015-01-01
Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms for two geometric problems: computing the convex hull of a planar point set, and finding popular places in a set of trajectories.
Approximate iterative algorithms
Almudevar, Anthony Louis
2014-01-01
Iterative algorithms often rely on approximate evaluation techniques, which may include statistical estimation, computer simulation or functional approximation. This volume presents methods for the study of approximate iterative algorithms, providing tools for the derivation of error bounds and convergence rates, and for the optimal design of such algorithms. Techniques of functional analysis are used to derive analytical relationships between approximation methods and convergence properties for general classes of algorithms. This work provides the necessary background in functional analysis.
Borbely, Eva
2007-01-01
A quantum algorithm is a set of instructions for a quantum computer; however, unlike algorithms in classical computer science, their results cannot be guaranteed. A quantum system can undergo two types of operation, measurement and quantum state transformation; the transformations themselves must be unitary (reversible). Most quantum algorithms involve a series of quantum state transformations followed by a measurement. Currently very few quantum algorithms are known and no general design methodology e...
Competing Sudakov Veto Algorithms
Kleiss, Ronald
2016-01-01
We present a way to analyze the distribution produced by a Monte Carlo algorithm. We perform these analyses on several versions of the Sudakov veto algorithm, adding a cutoff, a second variable and competition between emission channels. The analysis allows us to prove that multiple, seemingly different competition algorithms, including those that are currently implemented in most parton showers, lead to the same result. Finally, we test their performance and show that there are significantly faster alternatives to the commonly used algorithms.
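A minimal version of the basic veto algorithm (before the cutoff, second-variable, and competition extensions studied in the paper) might look like this; `f` is the true splitting kernel and `g` an overestimate whose primitive `G` has a known inverse `G_inv` (all names are illustrative):

```python
import math
import random

def veto_sample(t_start, t_min, f, g, G_inv, G, rng):
    """Draw one emission scale with the Sudakov veto algorithm.
    Proposals are generated under the overestimate g >= f and accepted
    with probability f(t)/g(t); rejected proposals restart from t."""
    t = t_start
    while True:
        u = 1.0 - rng.random()            # uniform in (0, 1], avoids log(0)
        t = G_inv(G(t) + math.log(u))     # next proposal under the overestimate
        if t < t_min:
            return None                   # evolution ended without emission
        if rng.random() < f(t) / g(t):
            return t                      # accept; otherwise veto and continue
```

For example, with f(t) = a/t and g(t) = b/t (b ≥ a), G(t) = b·ln t and G_inv(x) = exp(x/b), so each proposal is a multiplicative step down in t.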
Autonomous Star Tracker Algorithms
Betto, Maurizio; Jørgensen, John Leif; Kilsgaard, Søren
1998-01-01
Proposal, in response to an ESA R.f.P., to design algorithms for autonomous star tracker operations. The proposal also included the development of a star tracker breadboard to test the algorithms' performance.
Can the square law be validated
Hartley, D.S. III.
1989-03-01
This paper addresses the question of validating the homogeneous Lanchestrian square law of attrition by the use of historical data. The available data and some analysis techniques are examined. The result is that the homogeneous Lanchester square law cannot be regarded as a proven attrition algorithm for warfare; however, the square law cannot be regarded as disproved either. To validate the square law or any other proposed attrition law, data on more battles are required. 21 refs., 31 figs., 11 tabs.
Information criterion based fast PCA adaptive algorithm
Li Jiawen; Li Congxin
2007-01-01
The novel information criterion (NIC) algorithm can find the principal subspace quickly, but it is not an actual principal component analysis (PCA) algorithm and hence cannot find the orthonormal eigen-space corresponding to the principal components of the input vector. This defect limits its application in practice. By weighting the neural network's output of NIC, a modified novel information criterion (MNIC) algorithm is presented. MNIC extracts the principal components and corresponding eigenvectors in a parallel online learning program, and overcomes the NIC's defect. It is proved to have a single global optimum and a nonquadratic convergence rate, which is superior to conventional online PCA algorithms such as Oja and LMSER. The relationship among Oja, LMSER, and MNIC is exhibited. Simulations confirm the validity of MNIC and show that it converges to the optimum quickly.
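For context, the classical Oja rule that the abstract compares against can be sketched in a few lines; the data, learning rate, and epoch count below are illustrative:

```python
import numpy as np

def oja_pca(X, lr=0.005, epochs=50, seed=0):
    """Oja's online rule: w converges to the principal eigenvector of the
    input covariance (the online PCA baseline mentioned in the abstract)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in X:
            y = w @ x
            w += lr * y * (x - y * w)   # Hebbian term with normalisation decay
    return w / np.linalg.norm(w)

# Data strongly stretched along the direction (1, 1).
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2)) @ np.array([[2.0, 1.8], [1.8, 2.0]])
w = oja_pca(X)
```

After training, `w` points (up to sign) along the dominant axis (1, 1)/√2 of the data.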
AN IMPROVED ALGORITHM FOR DPIV CORRELATION ANALYSIS
WU Long-hua
2007-01-01
In a Digital Particle Image Velocimetry (DPIV) system, the correlation of digital images is normally used to acquire the displacement information of particles and give estimates of the flow field. The accuracy and robustness of the correlation algorithm directly affect the validity of the analysis result. In this article, an improved algorithm for correlation analysis is proposed which can be used to optimize the selection of the correlation window, analysis area and search path. This algorithm not only greatly reduces the amount of calculation, but also effectively improves the accuracy and reliability of the correlation analysis. The algorithm was demonstrated to be accurate and efficient in the measurement of the velocity field in a flocculation pool.
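The core correlation step of a DPIV analysis, estimating displacement from the cross-correlation peak of two interrogation windows, can be sketched as follows (a generic FFT-based version, not the article's optimized algorithm):

```python
import numpy as np

def displacement(later_win, earlier_win):
    """Estimate particle displacement between two interrogation windows as
    the location of the circular cross-correlation peak (computed via FFT)."""
    a = later_win - later_win.mean()
    b = earlier_win - earlier_win.mean()
    corr = np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped FFT indices to signed shifts.
    shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shifts)
```

Shifting a random window by (2, 3) pixels and correlating it against the original recovers exactly that displacement.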
Duan, Qian-Qian; Yang, Gen-Ke; Pan, Chang-Chun
2014-01-01
A hybrid optimization algorithm combining the finite state method (FSM) and a genetic algorithm (GA) is proposed to solve the crude oil scheduling problem. The FSM and GA are combined to take advantage of each method and compensate for the deficiencies of the individual methods. In the proposed algorithm, the finite state method makes up for the weakness of the GA, which has poor local search ability. The heuristic returned by the FSM can guide the GA towards good solutions; the idea behind this is that promising substructures or partial solutions can be generated using the FSM. Furthermore, the FSM can guarantee that the entire solution space is uniformly covered. Therefore, the combination of the two algorithms has better global performance than the existing GA or FSM operated individually. Finally, a real-life crude oil scheduling problem from the literature is used for simulation. The experimental results validate that the proposed method outperforms the state-of-the-art GA method.
Husfeldt, Thore
2015-01-01
This chapter presents an introduction to graph colouring algorithms. The focus is on vertex-colouring algorithms that work for general classes of graphs with worst-case performance guarantees in a sequential model of computation. The presentation aims to demonstrate the breadth of available techniques and is organized by algorithmic paradigm.
Optimal Mixing Evolutionary Algorithms
Thierens, D.; Bosman, P.A.N.; Krasnogor, N.
2011-01-01
A key search mechanism in Evolutionary Algorithms is the mixing or juxtaposing of partial solutions present in the parent solutions. In this paper we look at the efficiency of mixing in genetic algorithms (GAs) and estimation-of-distribution algorithms (EDAs). We compute the mixing probabilities of
Implementation of Parallel Algorithms
1991-09-30
Lecture Notes in Computer Science, Warwick, England, July 16-20... Lecture Notes in Computer Science, Springer-Verlag, Bangalore, India, December 1990. J. Reif, J. Canny, and A. Page, "An Exact Algorithm for Kinodynamic... Parallel Algorithms and its Impact on Computational Geometry, in Optimal Algorithms, H. Djidjev editor, Springer-Verlag Lecture Notes in Computer Science
Semiclassical Shor's Algorithm
Giorda, P; Sen, S; Sen, S; Giorda, Paolo; Iorio, Alfredo; Sen, Samik; Sen, Siddhartha
2003-01-01
We propose a semiclassical version of Shor's quantum algorithm to factorize integer numbers, based on spin-1/2 SU(2) generalized coherent states. Surprisingly, we find numerical evidence that the algorithm's success probability is not too severely modified by our semiclassical approximation. This suggests that it is worth pursuing practical implementations of the algorithm on semiclassical devices.
Nature-inspired optimization algorithms
Yang, Xin-She
2014-01-01
Nature-Inspired Optimization Algorithms provides a systematic introduction to all major nature-inspired algorithms for optimization. The book's unified approach, balancing algorithm introduction, theoretical background and practical implementation, complements extensive literature with well-chosen case studies to illustrate how these algorithms work. Topics include particle swarm optimization, ant and bee algorithms, simulated annealing, cuckoo search, firefly algorithm, bat algorithm, flower algorithm, harmony search, algorithm analysis, constraint handling, hybrid methods, parameter tuning
On quantum algorithms for noncommutative hidden subgroups
Ettinger, M. [Los Alamos National Lab., NM (United States); Hoeyer, P. [Odense Univ. (Denmark)
1998-12-01
Quantum algorithms for factoring and discrete logarithm have previously been generalized to finding hidden subgroups of finite Abelian groups. This paper explores the possibility of extending this general viewpoint to finding hidden subgroups of noncommutative groups. The authors present a quantum algorithm for the special case of dihedral groups which determines the hidden subgroup in a linear number of calls to the input function. They also explore the difficulties of developing an algorithm to process the data to explicitly calculate a generating set for the subgroup. A general framework for the noncommutative hidden subgroup problem is discussed and they indicate future research directions.
Some multigrid algorithms for SIMD machines
Dendy, J.E. Jr. [Los Alamos National Lab., NM (United States)
1996-12-31
Previously a semicoarsening multigrid algorithm suitable for use on SIMD architectures was investigated. Through the use of new software tools, the performance of this algorithm has been considerably improved. The method has also been extended to three space dimensions. The method performs well for strongly anisotropic problems and for problems with coefficients jumping by orders of magnitude across internal interfaces. The parallel efficiency of this method is analyzed, and its actual performance on the CM-5 is compared with its performance on the CRAY-YMP. A standard coarsening multigrid algorithm is also considered, and we compare its performance on these two platforms as well.
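As a toy analogue of the multigrid method discussed (standard coarsening in 1D rather than the paper's semicoarsening SIMD variant), one two-grid correction cycle for the Poisson equation might look like this; grid sizes and smoothing parameters are illustrative:

```python
import numpy as np

def poisson(n, h):
    """1D Poisson matrix (homogeneous Dirichlet boundaries), n interior points."""
    return (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

def smooth(u, f, A, w=2/3, sweeps=3):
    """A few sweeps of weighted Jacobi to damp high-frequency error."""
    for _ in range(sweeps):
        u = u + w * (f - A @ u) / np.diag(A)
    return u

def two_grid(u, f, n):
    """One two-grid cycle for A u = f (n odd): smooth, restrict the residual,
    solve the coarse problem exactly, interpolate and correct, smooth again."""
    h = 1.0 / (n + 1)
    A = poisson(n, h)
    u = smooth(u, f, A)                                        # pre-smoothing
    r = f - A @ u
    rc = 0.25 * r[:-2:2] + 0.5 * r[1:-1:2] + 0.25 * r[2::2]    # full weighting
    ec = np.linalg.solve(poisson((n - 1) // 2, 2 * h), rc)     # coarse solve
    ecp = np.concatenate([[0.0], ec, [0.0]])
    e = np.empty(n)
    e[1:-1:2] = ec                                             # coincident points
    e[::2] = 0.5 * (ecp[:-1] + ecp[1:])                        # linear interpolation
    return smooth(u + e, f, A)                                 # post-smoothing
```

A single cycle starting from a zero guess already reduces the error by a large factor, which is the hallmark of multigrid efficiency.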
Exponential algorithmic speedup by quantum walk
Childs, A M; Deotto, E; Farhi, E; Gutmann, S; Spielman, D A; Childs, Andrew M.; Cleve, Richard; Deotto, Enrico; Farhi, Edward; Gutmann, Sam; Spielman, Daniel A.
2002-01-01
We construct an oracular problem that can be solved exponentially faster on a quantum computer than on a classical computer. The quantum algorithm is based on a continuous time quantum walk, and thus employs a different technique from previous quantum algorithms based on quantum Fourier transforms. We show how to implement the quantum walk efficiently in our oracular setting. We then show how this quantum walk can be used to solve our problem by rapidly traversing a graph. Finally, we prove that no classical algorithm can solve this problem with high probability in subexponential time.
Mehlsen, Jesper; Wiinberg, Niels; Joergensen, Bjarne S
2010-01-01
The presence of peripheral arterial disease (PAD) in patients with other manifestations of cardiovascular disease identifies a population at increased risk of complications both during acute coronary events and on a long-term basis, and possibly a population in whom secondary prevention of cardiovascular events should be addressed aggressively. The present study was aimed at providing a valid estimate of the prevalence of PAD in patients attending their general practitioner who had previously suffered a cardio- or cerebrovascular event.
Composite multiobjective optimization beamforming based on genetic algorithms
Shi Jing; Meng Weixiao; Zhang Naitong; Wang Zheng
2006-01-01
All the parameters of beamforming are usually optimized simultaneously when implementing the optimization of an antenna array pattern with multiple objectives and parameters by genetic algorithms (GAs). This paper first analyzes the performance of the fitness functions of previous algorithms. It shows that the original algorithms make the fitness functions too complex, leading to a large amount of calculation, and also make the selection of parameter weights very sensitive because many parameters are optimized simultaneously. This paper proposes a composite beamforming algorithm, which divides the antenna array into two parts corresponding to the optimization of different objective parameters. The new algorithm substitutes two simpler functions for the previous complex fitness function. Both theoretical analysis and simulation results show that this method simplifies the selection of weighting parameters and reduces the complexity of calculation. Furthermore, the algorithm achieves better performance in lowering side lobes and interference in comparison with conventional beamforming algorithms, at the cost of slightly widening the main lobe.
Generalization of Selection Test Validity.
Colbert, G. A.; Taylor, L. R.
1978-01-01
This is part three of a three-part series concerned with the empirical development of homogeneous families of insurance company jobs based on data from the Position Analysis Questionnaire (PAQ). This part involves validity generalizations within the job families which resulted from the previous research. (Editor/RK)
Algorithms for Quantum Computers
Smith, Jamie
2010-01-01
This paper surveys the field of quantum computer algorithms. It gives a taste of both the breadth and the depth of the known algorithms for quantum computers, focusing on some of the more recent results. It begins with a brief review of quantum Fourier transform based algorithms, followed by quantum searching and some of its early generalizations. It continues with a more in-depth description of two more recent developments: algorithms developed in the quantum walk paradigm, followed by tensor network evaluation algorithms (which include approximating the Tutte polynomial).
Akl, Selim G
1985-01-01
Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms, such as linear arrays, mesh-connected computers, cube-connected computers. Another example where algorithm can be applied is on the shared-memory SIMD (single instruction stream multiple data stream) computers in which the whole sequence to be sorted can fit in the
Rover-based visual target tracking validation and mission infusion
Kim, Won S.; Steele, Robert D.; Ansar, Adnan I.; Ali, Khaled; Nesnas, Issa
2005-01-01
The Mars Exploration Rovers (MER'03), Spirit and Opportunity, represent the state of the art in rover operations on Mars. This paper presents validation experiments of different visual tracking algorithms using the rover's navigation camera.
Last-passage Monte Carlo algorithm for mutual capacitance.
Hwang, Chi-Ok; Given, James A
2006-08-01
We develop and test the last-passage diffusion algorithm, a charge-based Monte Carlo algorithm, for the mutual capacitance of a system of conductors. The first-passage algorithm is highly efficient because it is charge based and incorporates importance sampling; it averages over the properties of Brownian paths that initiate outside the conductor and terminate on its surface. However, this algorithm does not seem to generalize to mutual capacitance problems. The last-passage algorithm, in a sense, is the time reversal of the first-passage algorithm; it involves averages over particles that initiate on an absorbing surface, leave that surface, and diffuse away to infinity. To validate this algorithm, we calculate the mutual capacitance matrix of the circular-disk parallel-plate capacitor and compare with the known numerical results. Good agreement is obtained.
Modified Clipped LMS Algorithm
Lotfizad Mojtaba
2005-01-01
A new algorithm is proposed for updating the weights of an adaptive filter. The proposed algorithm is a modification of an existing method, namely, the clipped LMS, and uses a three-level quantization scheme that involves threshold clipping of the input signals in the filter weight update formula. Mathematical analysis shows the convergence of the filter weights to the optimum Wiener filter weights. Also, it can be proved that the proposed modified clipped LMS (MCLMS) algorithm has better tracking than the LMS algorithm. In addition, this algorithm has reduced computational complexity relative to the unmodified one. By using a suitable threshold, it is possible to increase the tracking capability of the MCLMS algorithm compared to the LMS algorithm, but this causes slower convergence. Computer simulations confirm the mathematical analysis presented.
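A minimal sketch of the clipped-LMS idea, with a three-level quantization of the input in the weight update, might read as follows (step size, threshold, and filter length are illustrative, not the paper's analysis):

```python
import numpy as np

def quantize3(x, threshold):
    """Three-level clipping: -1, 0, or +1 depending on the threshold."""
    return np.where(np.abs(x) <= threshold, 0.0, np.sign(x))

def mclms(x, d, taps=4, mu=0.01, threshold=0.1):
    """Clipped-LMS-style adaptive filter: the input vector in the update
    term is replaced by its three-level quantized version."""
    w = np.zeros(taps)
    y = np.zeros(len(x))
    for n in range(taps, len(x)):
        u = x[n - taps:n][::-1]            # most recent samples first
        y[n] = w @ u
        e = d[n] - y[n]
        w += mu * e * quantize3(u, threshold)
    return w, y
```

Identifying an unknown FIR channel from its input/output shows the error power falling as the weights converge.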
Group Leaders Optimization Algorithm
Daskin, Anmer
2010-01-01
The complexity of global optimization algorithms makes their implementation difficult and leads the algorithms to require more computer resources for the optimization process. The ability to explore the whole solution space without increasing the complexity of the algorithms is of great importance, not only to obtain reliable results but also to make the implementation of these algorithms more convenient for higher-dimensional, complex real-world problems in science and engineering. In this paper, we present a new global optimization algorithm in which the influence of leaders in social groups is used as an inspiration for an evolutionary technique, designed into a group architecture similar to that of Cooperative Coevolutionary Algorithms. We present the implementation method and experimental results for single- and multidimensional optimization test problems and a scientific real-world problem: the energies and geometric structures of Lennard-Jones clusters.
McGeachy, P; Madamesila, J; Beauchamp, A; Khan, R
2015-01-01
An open source optimizer that generates seed distributions for low-dose-rate prostate brachytherapy was designed, tested, and validated. The optimizer was a simple genetic algorithm (SGA) that, given a set of prostate and urethra contours, determines the optimal seed distribution in terms of coverage of the prostate with the prescribed dose while avoiding hotspots within the urethra. The algorithm was validated in a retrospective study on 45 previously contoured low-dose-rate prostate brachytherapy patients. Dosimetric indices were evaluated to ensure solutions adhered to clinical standards. The SGA performance was further benchmarked by comparing solutions obtained from a commercial optimizer (inverse planning simulated annealing [IPSA]) with the same cohort of 45 patients. Clinically acceptable target coverage by the prescribed dose (V100) was obtained for both SGA and IPSA, with a mean ± standard deviation of 98 ± 2% and 99.5 ± 0.5%, respectively. For the prostate D90, SGA and IPSA yielded 177 ± 8 Gy and 186 ± 7 Gy, respectively, which were both clinically acceptable. Both algorithms yielded a reasonable dose to the rectum. An open source SGA was validated that provides a research tool for the brachytherapy community. Copyright © 2015 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
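A bare-bones simple genetic algorithm of the kind described, shown here on a toy OneMax objective rather than seed placement (all parameters are illustrative), could be sketched as:

```python
import random

def sga(fitness, n_bits, pop_size=40, gens=60, p_mut=0.02, seed=7):
    """Plain simple genetic algorithm: tournament selection, one-point
    crossover, bit-flip mutation (generic sketch, not the planning system)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        def pick():
            a, b = rng.sample(pop, 2)       # binary tournament
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_bits)  # one-point crossover
            child = p1[:cut] + p2[cut:]
            nxt.append([bit ^ (rng.random() < p_mut) for bit in child])
        pop = nxt
    return max(pop, key=fitness)

# Toy objective: maximise the number of ones (OneMax).
best = sga(sum, 20)
```

In the brachytherapy setting, the bit string would encode candidate seed positions and the fitness would score dose coverage and urethral sparing.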
A novel algorithm for Bluetooth ECG.
Pandya, Utpal T; Desai, Uday B
2012-11-01
In wireless transmission of ECG, data latency becomes significant when battery power level and data transmission distance are not maintained. In applications like home monitoring or personalized care, a novel filtering strategy is required to overcome the joint effect of these wireless transmission issues and other ECG measurement noises. Here, a novel algorithm, identified as the peak rejection adaptive sampling modified moving average (PRASMMA) algorithm for wireless ECG, is introduced. This algorithm first removes errors in the bit pattern of received data, if they occurred in wireless transmission, and then removes baseline drift. Afterward, a modified moving average is applied everywhere except in the region of each QRS complex. The algorithm also sets its filtering parameters according to the sampling rate selected for signal acquisition. To demonstrate the work, a prototyped Bluetooth-based ECG module is used to capture ECG at different sampling rates and in different patient positions. This module transmits ECG wirelessly to Bluetooth-enabled devices, where the PRASMMA algorithm is applied to the captured ECG. The performance of the PRASMMA algorithm is compared with moving average and S-Golay algorithms both visually and numerically. The results show that the PRASMMA algorithm can significantly improve ECG reconstruction by efficiently removing noise, and its use can be extended to any parameters where peaks are important for diagnostic purposes.
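The "moving average except near peaks" idea can be sketched as follows (a simplified illustration of the principle, not the PRASMMA implementation; the mask would come from a QRS detector):

```python
import numpy as np

def masked_moving_average(signal, peak_mask, width=5):
    """Moving average applied everywhere except flagged peak regions,
    mirroring the idea of smoothing ECG while preserving QRS complexes."""
    out = signal.astype(float).copy()
    half = width // 2
    for n in range(len(signal)):
        if peak_mask[n]:
            continue                      # leave QRS samples untouched
        lo, hi = max(0, n - half), min(len(signal), n + half + 1)
        out[n] = signal[lo:hi].mean()     # local average elsewhere
    return out
```

Samples inside the mask pass through unchanged, so sharp diagnostic peaks are not flattened by the smoother.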
A fast meteor detection algorithm
Gural, P.
2016-01-01
A low-latency meteor detection algorithm for use with fast steering mirrors was previously developed to track and telescopically follow meteors in real time (Gural, 2007). It has been rewritten as a generic clustering and tracking software module for meteor detection that meets the demanding throughput requirements of a Raspberry Pi while also maintaining a high probability of detection. The software interface is generalized to work with various forms of front-end video pre-processing and provides a rich product set of parameterized line detection metrics. The discussion includes the Maximum Temporal Pixel (MTP) compression technique as a fast thresholding option for feeding the detection module, the detection algorithm trade-offs for maximum processing throughput, details on the clustering and tracking methodology, processing products, performance metrics, and a general interface description.
28 CFR 10.5 - Incorporation of papers previously filed.
2010-07-01
28 Judicial Administration. CARRYING ON ACTIVITIES WITHIN THE UNITED STATES, Registration Statement, § 10.5 Incorporation of papers previously filed. Papers and documents already filed with the Attorney General pursuant to the said act...
2 CFR 1.215 - Relationship to previous issuances.
2010-01-01
2 Grants and Agreements. ABOUT TITLE 2 OF THE CODE OF FEDERAL REGULATIONS AND SUBTITLE A, Introduction to Subtitle A, § 1.215 Relationship to previous issuances. Although some of the guidance was...
49 CFR 236.1031 - Previously approved PTC systems.
2010-10-01
49 Transportation. Train Control Systems, § 236.1031 Previously approved PTC systems. (a) Any PTC system fully implemented and operational prior to March 16, 2010, may receive PTC System Certification if the applicable PTC...
AN IMPROVED FAST BLIND DECONVOLUTION ALGORITHM BASED ON DECORRELATION AND BLOCK MATRIX
Yang Jun'an; He Xuefan; Tan Ying
2008-01-01
In order to alleviate the shortcomings of most blind deconvolution algorithms, this paper proposes an improved fast algorithm for blind deconvolution based on a decorrelation technique and a broadband block matrix. Although the original algorithm overcomes shortcomings of current blind deconvolution algorithms, it has the constraint that the number of source signals must be less than the number of channels. The improved algorithm removes this constraint by using the decorrelation technique. Besides, the improved algorithm increases the separation speed by improving the computation of the output signal matrix. Simulation results demonstrate the validity and fast separation of the improved algorithm.
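The decorrelation step can be illustrated generically; this is a standard second-order whitening sketch (removing correlations between observed channels before further separation), not the paper's algorithm:

```python
import numpy as np

def decorrelate(X):
    """Whitening transform: remove second-order correlations between the
    observed channels (rows of X) so their covariance becomes the identity."""
    Xc = X - X.mean(axis=1, keepdims=True)
    cov = Xc @ Xc.T / Xc.shape[1]
    vals, vecs = np.linalg.eigh(cov)
    W = vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T   # cov^(-1/2)
    return W @ Xc
```

Mixing two independent signals and then whitening the mixture yields channels with (sample) identity covariance, which higher-order separation stages can then rotate.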
Piretzidis, Dimitrios; Sra, Gurveer; Karantaidis, George; Sideris, Michael G.
2017-04-01
A new method for identifying correlated errors in Gravity Recovery and Climate Experiment (GRACE) monthly harmonic coefficients has been developed and tested. Correlated errors are present in the differences between monthly GRACE solutions and can be suppressed using a de-correlation filter. In principle, the de-correlation filter should be applied only to coefficient series with correlated errors, to avoid losing useful geophysical information. In previous studies, two main methods of applying the de-correlation filter have been used. In the first, the filter is applied from a specific minimum order up to the maximum order of the monthly solution examined. In the second, the filter is applied only to specific coefficient series, selected by statistical testing. The method proposed in the present study exploits the capabilities of supervised machine learning algorithms such as neural networks and support vector machines (SVMs). The pattern of correlated errors can be described by several numerical and geometric features of the harmonic coefficient series. The features of extreme cases of both correlated and uncorrelated coefficients are extracted and used to train the machine learning algorithms. The trained algorithms are then used to identify correlated errors and provide the probability that a coefficient series is correlated. For the SVM algorithms, an extensive study is performed with various kernel functions in order to find the optimal training model for prediction. The selection of the optimal training model is based on the classification accuracy of the trained SVM algorithm on the same samples used for training. Results show excellent performance of all algorithms, with a classification accuracy of 97%-100% on a pre-selected set of training samples, both in the validation stage of the training procedure and in the subsequent use of
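The feature-extraction stage can be sketched with one hypothetical feature: correlated GRACE errors tend to link coefficients of the same parity of degree, producing an alternating pattern along a coefficient series, so a high lag-2 autocorrelation is one plausible discriminating feature to feed a classifier. The actual features and training data of the study are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def lag_autocorr(series, lag=2):
    """Sample autocorrelation at the given lag; a strongly positive value
    at lag 2 is one plausible signature of the same-parity (even/odd
    degree) correlation seen in GRACE coefficient differences."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

# Synthetic examples: an alternating "correlated error" series vs. noise.
n = np.arange(200)
correlated = (-1.0) ** n * (1.0 + 0.05 * rng.standard_normal(200))
noise = rng.standard_normal(200)
f_corr = lag_autocorr(correlated)
f_noise = lag_autocorr(noise)
```

A vector of such features per coefficient series would then be passed to an SVM or neural-network classifier as the abstract describes.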
Actuator Placement Via Genetic Algorithm for Aircraft Morphing
Crossley, William A.; Cook, Andrea M.
2001-01-01
This research continued work that began under the support of NASA Grant NAG1-2119. The focus of this effort was to continue investigations of Genetic Algorithm (GA) approaches that could be used to solve an actuator placement problem by treating this as a discrete optimization problem. In these efforts, the actuators are assumed to be "smart" devices that change the aerodynamic shape of an aircraft wing to alter the flow past the wing, and, as a result, provide aerodynamic moments that could provide flight control. The earlier work investigated issues for the problem statement, developed the appropriate actuator modeling, recognized the importance of symmetry for this problem, modified the aerodynamic analysis routine for more efficient use with the genetic algorithm, and began a problem size study to measure the impact of increasing problem complexity. The research discussed in this final summary further investigated the problem statement to provide a "combined moment" problem statement to simultaneously address roll, pitch and yaw. Investigations of problem size using this new problem statement provided insight into performance of the GA as the number of possible actuator locations increased. Where previous investigations utilized a simple wing model to develop the GA approach for actuator placement, this research culminated with application of the GA approach to a high-altitude unmanned aerial vehicle concept to demonstrate that the approach is valid for an aircraft configuration.
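The discrete-optimization framing can be sketched with a toy GA: a bitstring says which candidate actuator sites are on, and fitness rewards matching a target combined (roll, pitch, yaw) moment. The site moments, target, and GA parameters below are invented for illustration; the study's aerodynamic model is not reproduced.

```python
import random

random.seed(0)

# Hypothetical setup: 12 candidate actuator sites, each contributing a
# fixed (roll, pitch, yaw) moment when switched on.  The GA searches the
# 2^12 on/off combinations for a subset whose summed moments best match
# a target "combined moment" command.
SITES = [(random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(-1, 1))
         for _ in range(12)]
TARGET = (0.8, -0.3, 0.5)

def fitness(bits):
    m = [sum(SITES[i][k] for i, b in enumerate(bits) if b) for k in range(3)]
    return -sum((m[k] - TARGET[k]) ** 2 for k in range(3))  # higher is better

def evolve(pop_size=40, gens=60, pmut=0.05):
    pop = [[random.randint(0, 1) for _ in range(12)] for _ in range(pop_size)]
    for _ in range(gens):
        nxt = [max(pop, key=fitness)]                    # elitism
        while len(nxt) < pop_size:
            a, b = random.sample(pop, 2)                 # tournament selection
            p1 = a if fitness(a) > fitness(b) else b
            a, b = random.sample(pop, 2)
            p2 = a if fitness(a) > fitness(b) else b
            cut = random.randrange(1, 12)                # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [1 - g if random.random() < pmut else g for g in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve()
```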
Pardo, Juan; Zamora-Martínez, Francisco; Botella-Rocamora, Paloma
2015-04-21
Time series forecasting is an important predictive methodology that can be applied to a wide range of problems. In particular, forecasting the indoor temperature permits improved utilization of the HVAC (Heating, Ventilating and Air Conditioning) systems in a home and thus better energy efficiency. To that end, the paper describes how to implement an Artificial Neural Network (ANN) algorithm on a low-cost system-on-chip to develop an autonomous intelligent wireless sensor network. The present paper uses a Wireless Sensor Network (WSN) to monitor and forecast the indoor temperature in a smart home, based on low-resource, low-cost microcontroller technology such as the 8051 MCU. An on-line learning approach, based on the Back-Propagation (BP) algorithm for ANNs, has been developed for real-time time series learning. It trains the model with each new datum that arrives at the system, without saving enormous quantities of data to create a historical database as usual, i.e., without previous knowledge. To validate the approach, a simulation study using a Bayesian baseline model was carried out for comparison against a database from a real application, in order to assess performance and accuracy. The core of the paper is a new algorithm, based on the BP one, which is described in detail; the challenge was how to implement a computationally demanding algorithm on a simple architecture with very few hardware resources.
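The on-line learning idea can be sketched as follows: a tiny one-hidden-layer network is updated by back-propagation with every new sample, keeping no history beyond a short sliding window. The network size, learning rate, and synthetic "indoor temperature" series below are assumptions; the paper's microcontroller implementation is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# One-hidden-layer net trained on-line: each new sample triggers exactly
# one back-propagation step; no historical database is kept.
WIN, HID, LR = 6, 8, 0.05
W1 = rng.standard_normal((HID, WIN)) * 0.3
b1 = np.zeros(HID)
W2 = rng.standard_normal(HID) * 0.3
b2 = 0.0

def forward(x):
    h = np.tanh(W1 @ x + b1)
    return h, W2 @ h + b2

# Synthetic daily temperature cycle, normalized with fixed scaling.
series = 21.0 + 2.0 * np.sin(np.arange(3000) * 2 * np.pi / 96)
z = (series - 21.0) / 2.0
errs = []
for t in range(WIN, len(z)):
    x, y = z[t - WIN:t], z[t]                 # sliding window -> next value
    h, yhat = forward(x)
    e = yhat - y
    errs.append(float(e * e))
    # single-sample back-propagation update (pure on-line learning)
    gh = e * W2 * (1 - h * h)
    W2 -= LR * e * h
    b2 -= LR * e
    W1 -= LR * np.outer(gh, x)
    b1 -= LR * gh

early = float(np.mean(errs[:200]))
late = float(np.mean(errs[-200:]))
```

The squared prediction error should shrink as samples stream in, which is the behavior the on-line scheme relies on.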
Ioannou, I.; Zhou, J.; Gilerson, A.; Gross, B.; Moshary, F.; Ahmed, S.
2009-09-01
Our previous studies showed that the Fluorescence Line Height (FLH) product, which uses three NIR bands at 667, 678, and 746 nm on the MODerate-resolution Imaging Spectroradiometer (MODIS) sensor, and similar bands on the MERIS sensor, is not reliable in coastal waters because of a peak in the elastic reflectance spectra which occurs due to the confluence of chlorophyll and water absorption spectra and which overlaps spectrally with the chlorophyll fluorescence. This combination of two overlapping peaks makes fluorescence signal retrieval inaccurate. As a consequence, the present FLH algorithm significantly underestimates fluorescence magnitudes in coastal waters. To overcome this problem, we introduce a new and more accurate approach for the retrieval of FLH in turbid waters by the MODIS sensor, which exploits the correlation between the blue-green and red band reflectance ratios. We show that by making use of the combined remote sensing reflectances (Rrs) at 488 nm, 547 nm, 667 nm, and 678 nm we can retrieve fluorescence accurately in case 2 waters, even for low fluorescence quantum yield when fluorescence magnitudes are low. The derivation and validation of our algorithm were performed using extensive synthetic datasets which cover a large variability of parameters typical of coastal waters: CDOM absorption at 400 nm of 0-2 m^-1, mineral concentration of 0-5 g/m^3, and chlorophyll concentration of 0.5-100 mg/m^3. In addition, we applied the proposed algorithm to MODIS satellite data and compared it with the traditional FLH algorithm.
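For reference, the classical FLH baseline computation that the paper improves upon can be sketched directly: the signal at 678 nm minus a linear baseline interpolated between the 667 nm and 746 nm bands. The paper's blue-green ratio correction is not reproduced here; the numeric Rrs values below are made up for illustration.

```python
def flh(r667, r678, r746):
    """Classical MODIS fluorescence line height: the value at 678 nm minus
    a linear baseline drawn between the 667 nm and 746 nm bands.  This is
    the standard product the abstract says is biased in turbid water."""
    baseline = r667 + (r746 - r667) * (678.0 - 667.0) / (746.0 - 667.0)
    return r678 - baseline

# A flat spectrum gives zero line height; a 678 nm peak shows up directly.
flat = flh(0.004, 0.004, 0.004)
peaky = flh(0.004, 0.0052, 0.004)
```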
Filtered-X Affine Projection Algorithms for Active Noise Control Using Volterra Filters
Sicuranza Giovanni L
2004-01-01
We consider the use of adaptive Volterra filters, implemented in the form of multichannel filter banks, as nonlinear active noise controllers. In particular, we discuss the derivation of filtered-X affine projection algorithms for homogeneous quadratic filters. According to the multichannel approach, it is then easy to pass from these algorithms to those of a generic Volterra filter. It is shown in the paper that the AP technique offers better convergence and tracking capabilities than the classical LMS and NLMS algorithms usually applied in nonlinear active noise controllers, with a limited complexity increase. This paper extends in two ways the content of a previous contribution published in Proc. IEEE-EURASIP Workshop on Nonlinear Signal and Image Processing (NSIP '03), Grado, Italy, June 2003. First of all, a general adaptation algorithm valid for any order of affine projections is presented. Secondly, a more complete set of experiments is reported. In particular, the effects of using multichannel filter banks with a reduced number of channels are investigated and relevant results are shown.
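The affine projection (AP) update that replaces LMS/NLMS can be sketched in its basic system-identification form: project the weight correction onto the span of the last K regressors. The filtered-X path and the Volterra filter bank of the paper are omitted; parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Affine projection identifying an unknown FIR system: the update solves a
# small K x K regularized system each step instead of the rank-1 LMS step.
L, K, MU, DELTA = 8, 4, 0.5, 1e-6     # taps, projection order, step, reg.
h_true = rng.standard_normal(L)        # unknown system (illustrative)
w = np.zeros(L)

x = rng.standard_normal(4000)
errs = []
for n in range(L + K, len(x)):
    # last K regressor vectors (newest first in each row) and desired outputs
    U = np.array([x[n - k - L + 1:n - k + 1][::-1] for k in range(K)])
    d = U @ h_true                     # noiseless desired signal
    e = d - U @ w
    errs.append(float(e[0] ** 2))
    w += MU * U.T @ np.linalg.solve(U @ U.T + DELTA * np.eye(K), e)

final_err = float(np.mean(errs[-100:]))
```

With projection order K = 1 this reduces to NLMS; larger K buys the faster convergence the abstract reports, at the cost of the K x K solve.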
A VLSI architecture for simplified arithmetic Fourier transform algorithm
Reed, Irving S.; Shih, Ming-Tang; Truong, T. K.; Hendon, E.; Tufts, D. W.
1992-01-01
The arithmetic Fourier transform (AFT) is a number-theoretic approach to Fourier analysis which has been shown to perform competitively with the classical FFT in terms of accuracy, complexity, and speed. Theorems developed in a previous paper for the AFT algorithm are used here to derive the original AFT algorithm which Bruns found in 1903. This is shown to yield an algorithm of less complexity and of improved performance over certain recent AFT algorithms. A VLSI architecture is suggested for this simplified AFT algorithm. This architecture uses a butterfly structure which reduces the number of additions by 25 percent compared with the direct method.
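The number-theoretic core of the AFT can be sketched as follows: Bruns-type averages S(k) (the mean of f over k equispaced samples of one period) satisfy S(k) = sum of the cosine coefficients at multiples of k, so Moebius inversion recovers each coefficient using only averages and sign flips. This is an illustrative sketch of that principle; the paper's simplified algorithm and VLSI butterfly are not shown.

```python
import math

def mobius(n):
    """Moebius function mu(n): 0 if n has a squared prime factor,
    otherwise (-1)^(number of distinct prime factors)."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0               # squared prime factor
            result = -result
        p += 1
    if n > 1:
        result = -result
    return result

# Zero-mean even test signal: f(t) = cos(2*pi*t) + 0.5*cos(2*pi*3*t).
f = lambda t: math.cos(2 * math.pi * t) + 0.5 * math.cos(6 * math.pi * t)

# Bruns averages S(k) = mean of f at k equispaced points of one period.
KMAX = 32
S = [None] + [sum(f(j / k) for j in range(k)) / k for k in range(1, KMAX + 1)]

def aft_coeff(n):
    """a_n = sum_m mu(m) * S(m*n): Moebius inversion of S(k) = sum_j a_{jk}."""
    return sum(mobius(m) * S[m * n] for m in range(1, KMAX // n + 1))

coeffs = [aft_coeff(n) for n in range(1, 5)]   # recovers a_1 .. a_4
```

Note that no multiplications by twiddle factors appear anywhere: only averaging and signed additions, which is what makes the AFT attractive for VLSI.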
An Extended Algorithm of Flexibility Analysis in Chemical Engineering Processes
[No author listed]
2001-01-01
An extended algorithm of flexibility analysis for chemical processes, based on the active constraint strategy and incorporating a local adjusting method for the flexibility region, is proposed; it fully exploits the flexibility region of process system operation. The hyperrectangular flexibility region determined by the extended algorithm is larger than that calculated by previous algorithms. The limitation of the proposed algorithm due to imperfect convexity, and a corresponding verification measure, are also discussed. Both numerical and actual chemical process examples are presented to demonstrate the effectiveness of the new algorithm.
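The hyperrectangular flexibility region idea can be sketched with a vertex check: under a convexity assumption, the largest scaled box of uncertain inputs that stays feasible is found by testing only the box's vertices, bisecting on the scale factor. The constraints, nominal point, and deviations below are invented for illustration; the paper's active constraint strategy and local adjustment are not reproduced.

```python
from itertools import product

# Toy feasibility test: three linear process constraints g_i(x) <= 0.
def feasible(x1, x2):
    return (x1 + x2 <= 10.0) and (x1 - 2 * x2 <= 2.0) and (-x1 <= 0.0)

NOMINAL, DEV = (4.0, 3.0), (1.0, 1.0)     # nominal point, expected deviations

def box_feasible(delta):
    """All vertices of [nominal - delta*dev, nominal + delta*dev] feasible?
    Sufficient for the whole box when constraints are convex in x."""
    corners = product(*[(n - delta * d, n + delta * d)
                        for n, d in zip(NOMINAL, DEV)])
    return all(feasible(*c) for c in corners)

lo, hi = 0.0, 16.0
for _ in range(50):                        # bisection on the scale factor
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if box_feasible(mid) else (lo, mid)
flexibility_index = lo                     # largest feasible box scale
```

For these constraints the binding one is x1 - 2*x2 <= 2 at the corner (4 + delta, 3 - delta), giving an index of 4/3; the imperfect-convexity caveat in the abstract is exactly the case where this vertex check is no longer sufficient.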
An Improved Particle Swarm Optimization Algorithm Based on Ensemble Technique
SHI Yan; HUANG Cong-ming
2006-01-01
An improved particle swarm optimization (PSO) algorithm based on ensemble technique is presented. The algorithm combines some previous best positions (pbest) of the particles to get an ensemble position (Epbest), which is used to replace the global best position (gbest). It is compared with the standard PSO algorithm invented by Kennedy and Eberhart and some improved PSO algorithms based on three different benchmark functions. The simulation results show that the improved PSO based on ensemble technique can get better solutions than the standard PSO and some other improved algorithms under all test cases.
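The ensemble idea can be sketched as a PSO variant where the global best (gbest) attractor is replaced by an ensemble position (Epbest). The abstract does not specify the combination rule, so averaging the k best personal bests is an assumption, as are the swarm parameters and the sphere benchmark.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    """Benchmark objective, minimized at the origin."""
    return float(np.sum(x * x))

DIM, NP, K, ITERS = 5, 20, 5, 200
W, C1, C2 = 0.7, 1.5, 1.5

pos = rng.uniform(-5, 5, (NP, DIM))
vel = np.zeros((NP, DIM))
pbest = pos.copy()
pbest_val = np.array([sphere(p) for p in pos])
init_best = float(pbest_val.min())

for _ in range(ITERS):
    order = np.argsort(pbest_val)
    epbest = pbest[order[:K]].mean(axis=0)   # ensemble position (assumption)
    r1, r2 = rng.random((NP, DIM)), rng.random((NP, DIM))
    # standard velocity update with Epbest in place of gbest
    vel = W * vel + C1 * r1 * (pbest - pos) + C2 * r2 * (epbest - pos)
    pos = pos + vel
    val = np.array([sphere(p) for p in pos])
    improved = val < pbest_val
    pbest[improved] = pos[improved]
    pbest_val[improved] = val[improved]

best = float(pbest_val.min())
```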
A Contraction-based Ratio-cut Partitioning Algorithm
Youssef Saab
2002-01-01
Partitioning is a fundamental problem in the design of VLSI circuits. In recent years, ratio-cut partitioning has received attention due to its tendency to partition circuits into their natural clusters. Node contraction has also been shown to enhance the performance of iterative partitioning algorithms. This paper describes a new simple ratio-cut partitioning algorithm using node contraction. This new algorithm combines iterative improvement with progressive cluster formation. Under suitably mild assumptions, the new algorithm runs in linear time. It is also shown that the new algorithm compares favorably with previous approaches.
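The ratio-cut objective itself is easy to state: for a 2-way partition (A, B), minimize cut(A, B) / (|A| * |B|), which favors balanced cuts along natural clusters. Below is the cost function plus a single greedy improvement pass; this is a much-simplified stand-in for the paper's contraction-based algorithm, with a made-up example graph.

```python
def ratio_cut(edges, part, n):
    """Ratio-cut cost of a 0/1 partition vector over n nodes."""
    cut = sum(1 for u, v in edges if part[u] != part[v])
    a = sum(part)
    b = n - a
    return cut / (a * b) if a and b else float("inf")

def greedy_pass(edges, part, n):
    """One sweep of single-node moves, keeping each move that lowers cost."""
    for u in range(n):
        part[u] ^= 1                       # tentatively move node u
        moved_cost = ratio_cut(edges, part, n)
        part[u] ^= 1                       # undo
        if moved_cost < ratio_cut(edges, part, n):
            part[u] ^= 1                   # keep the improving move
    return part

# Two 3-cliques joined by one bridge edge: the natural clusters.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
part = [0, 1, 0, 1, 0, 1]                  # deliberately bad starting cut
part = greedy_pass(edges, part, 6)
best_cost = ratio_cut(edges, part, 6)
```

On this graph the sweep recovers the two cliques with cost 1/9 (one cut edge, clusters of size 3 and 3); the |A|*|B| denominator is what penalizes the degenerate one-node cuts that plain min-cut would prefer.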