Omidikia, Nematollah; Kompany-Zareh, Mohsen
2013-01-01
Employment of Uninformative Variable Elimination (UVE) as a robust variable selection method is reported in this study. Each regression coefficient represents the contribution of the corresponding variable in the established model, but in the presence of uninformative variables as well as collinearity, the reliability of the regression coefficient's magnitude is suspect. The Successive Projection Algorithm (SPA) and Gram-Schmidt Orthogonalization (GSO) were implemented as pre-selection techniques for removing collinearity and redundancy among variables in the model. Uninformative variable elimination...
Soares, Sófacles Figueredo Carreiro; Galvão, Roberto Kawakami Harrop; Araújo, Mário César Ugulino; da Silva, Edvan Cirino; Pereira, Claudete Fernandes; de Andrade, Stéfani Iury Evangelista; Leite, Flaviano Carvalho
2011-03-09
This work proposes a modification to the successive projections algorithm (SPA) aimed at selecting spectral variables for multiple linear regression (MLR) in the presence of unknown interferents not included in the calibration data set. The modified algorithm favours the selection of variables in which the effect of the interferent is less pronounced. The proposed procedure can be regarded as an adaptive modelling technique, because the spectral features of the samples to be analyzed are considered in the variable selection process. The advantages of this new approach are demonstrated in two analytical problems, namely (1) ultraviolet-visible spectrometric determination of tartrazine, allura red and sunset yellow in aqueous solutions under the interference of erythrosine, and (2) near-infrared spectrometric determination of ethanol in gasoline under the interference of toluene. In these case studies, the performance of conventional MLR-SPA models is substantially degraded by the presence of the interferent. This problem is circumvented by applying the proposed Adaptive MLR-SPA approach, which results in prediction errors smaller than those obtained by three other multivariate calibration techniques, namely stepwise regression, full-spectrum partial-least-squares (PLS) and PLS with variables selected by a genetic algorithm. An inspection of the variable selection results reveals that the Adaptive approach successfully avoids spectral regions in which the interference is more intense. Copyright © 2011 Elsevier B.V. All rights reserved.
Cascade Error Projection Learning Algorithm
Duong, T. A.; Stubberud, A. R.; Daud, T.
1995-01-01
A detailed mathematical analysis is presented for a new learning algorithm termed cascade error projection (CEP) and a general learning framework. This framework can be used to obtain the cascade correlation learning algorithm by choosing a particular set of parameters.
An improved affine projection algorithm for active noise cancellation
Zhang, Congyan; Wang, Mingjiang; Han, Yufei; Sun, Yunzhuo
2017-08-01
The affine projection algorithm is a data-reuse algorithm with a good convergence rate compared to other traditional adaptive filtering algorithms. Two factors affect its performance: the step size and the projection order. In this paper, we propose a new variable step size affine projection algorithm (VSS-APA). It dynamically changes the step size according to certain rules, so that it achieves a smaller steady-state error and faster convergence. Simulation results show that its performance is superior to the traditional affine projection algorithm and that, in active noise control (ANC) applications, the new algorithm obtains very good results.
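The paper's variable-step rule is not reproduced here; as background, the fixed-step affine projection update it builds on can be sketched in numpy for system identification (the filter length, projection order K, step size, and regularization below are illustrative choices, not taken from the paper):

```python
import numpy as np

def apa_identify(x, d, num_taps=4, K=2, mu=0.5, eps=1e-6):
    """Affine projection algorithm (APA) for FIR system identification.

    x: input signal, d: desired signal, K: projection order (number of
    reused input vectors), mu: step size, eps: regularization constant.
    """
    w = np.zeros(num_taps)
    for n in range(num_taps + K - 1, len(x)):
        # Stack the K most recent input vectors as rows of X (K x num_taps).
        X = np.array([x[n - k - num_taps + 1:n - k + 1][::-1] for k in range(K)])
        dk = np.array([d[n - k] for k in range(K)])
        e = dk - X @ w                                    # a-priori errors
        # APA update: error projected through the K x K correlation matrix.
        w += mu * X.T @ np.linalg.solve(X @ X.T + eps * np.eye(K), e)
    return w

# Toy check: recover a known 4-tap system driven by white noise.
rng = np.random.default_rng(0)
h = np.array([0.5, -0.3, 0.2, 0.1])
x = rng.standard_normal(2000)
d = np.convolve(x, h)[:len(x)]
w = apa_identify(x, d)
```

For K = 1 this reduces to normalized LMS; larger K reuses more past input vectors, which is the source of the faster convergence the abstract mentions.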
Route planning algorithms: Planific@ Project
Gonzalo Martín Ortega
2009-12-01
Full Text Available Planific@ is a route planning project for the city of Madrid (Spain). Its main aim is to develop an intelligent system capable of routing people from one place in the city to any other using public transport. To do this, it is necessary to take into account such things as time, traffic, user preferences, etc. Before beginning to design the project, it is necessary to make a comprehensive study of the main known route planning algorithms suitable for use in this project.
Cascade Error Projection: A New Learning Algorithm
Duong, T. A.; Stubberud, A. R.; Daud, T.; Thakoor, A. P.
1995-01-01
A new neural network architecture and a hardware-implementable learning algorithm are proposed. The algorithm, called cascade error projection (CEP), handles lack of precision and circuit noise better than existing algorithms.
Chambolle's Projection Algorithm for Total Variation Denoising
Joan Duran
2013-12-01
Full Text Available Denoising is the problem of removing the inherent noise from an image. The standard noise model is additive white Gaussian noise, where the observed image f is related to the underlying true image u by the degradation model f=u+n, and n is supposed to be at each pixel independently and identically distributed as a zero-mean Gaussian random variable. Since this is an ill-posed problem, Rudin, Osher and Fatemi introduced the total variation as a regularizing term. It has proved to be quite efficient for regularizing images without smoothing the boundaries of the objects. This paper focuses on the simple description of the theory and on the implementation of Chambolle's projection algorithm for minimizing the total variation of a grayscale image. Furthermore, we adapt the algorithm to the vectorial total variation for color images. The implementation is described in detail and its parameters are analyzed and varied to come up with a reliable implementation.
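For the grayscale case, Chambolle's fixed-point iteration on the dual variable p can be sketched in numpy (the regularization weight lam, step tau ≤ 1/8, and iteration count are illustrative choices, not values from the paper):

```python
import numpy as np

def grad(u):
    # Forward differences with Neumann boundary conditions.
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    # Discrete divergence, the negative adjoint of grad.
    d = np.zeros_like(px)
    d[0, :] += px[0, :]
    d[1:-1, :] += px[1:-1, :] - px[:-2, :]
    d[-1, :] += -px[-2, :]
    d[:, 0] += py[:, 0]
    d[:, 1:-1] += py[:, 1:-1] - py[:, :-2]
    d[:, -1] += -py[:, -2]
    return d

def chambolle_tv(f, lam=0.15, tau=0.125, n_iter=300):
    """TV denoising via Chambolle's projection: iterate on the dual field p,
    then recover u = f - lam * div(p)."""
    px = np.zeros_like(f)
    py = np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = grad(div(px, py) - f / lam)
        norm = 1.0 + tau * np.sqrt(gx ** 2 + gy ** 2)
        px = (px + tau * gx) / norm
        py = (py + tau * gy) / norm
    return f - lam * div(px, py)

# Toy example: a piecewise-constant image with additive Gaussian noise.
clean = np.zeros((32, 32))
clean[8:24, 8:24] = 1.0
rng = np.random.default_rng(1)
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
denoised = chambolle_tv(noisy, lam=0.15)
```

The iteration keeps each dual vector inside the unit ball, which is exactly the projection onto the convex constraint set that gives the algorithm its name.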
Set-Membership Proportionate Affine Projection Algorithms
Stefan Werner
2007-01-01
Full Text Available Proportionate adaptive filters can improve the convergence speed for the identification of sparse systems as compared to their conventional counterparts. In this paper, the idea of proportionate adaptation is combined with the framework of set-membership filtering (SMF) in an attempt to derive novel computationally efficient algorithms. The resulting algorithms attain attractively faster convergence for both sparse and dispersive channels while decreasing the average computational complexity due to the data-discerning feature of the SMF approach. In addition, we propose a rule that allows us to automatically adjust the number of past data pairs employed in the update. This leads to a set-membership proportionate affine projection algorithm (SM-PAPA) having a variable data-reuse factor, allowing a significant reduction in the overall complexity when compared with a fixed data-reuse factor. Reduced-complexity implementations of the proposed algorithms are also considered that reduce the dimensions of the matrix inversions involved in the update. Simulations show good results in terms of reduced number of updates, speed of convergence, and final mean-squared error.
The Texas Medication Algorithm Project (TMAP) schizophrenia algorithms.
Miller, A L; Chiles, J A; Chiles, J K; Crismon, M L; Rush, A J; Shon, S P
1999-10-01
In the Texas Medication Algorithm Project (TMAP), detailed guidelines for medication management of schizophrenia and related disorders, bipolar disorders, and major depressive disorders have been developed and implemented. This article describes the algorithms developed for medication treatment of schizophrenia and related disorders. The guidelines recommend a sequence of medications and discuss dosing, duration, and switch-over tactics. They also specify response criteria at each stage of the algorithm for both positive and negative symptoms. The rationale and evidence for each aspect of the algorithms are presented.
UV Reconstruction Algorithm And Diurnal Cycle Variability
Curylo, Aleksander; Litynska, Zenobia; Krzyscin, Janusz; Bogdanska, Barbara
2009-03-01
UV reconstruction is a method for estimating surface UV from available actinometrical and aerological measurements. UV reconstruction is necessary for the study of long-term UV change: a typical series of UV measurements is not longer than 15 years, which is too short for trend estimation. The essential problem in the reconstruction algorithm is a good parameterization of clouds. In our previous algorithm we used an empirical relation between the Cloud Modification Factor (CMF) in global radiation and the CMF in UV. The CMF is defined as the ratio between measured and modelled irradiances. Clear-sky irradiance was calculated with a solar radiative transfer model. In the proposed algorithm, the time variability of global radiation during the diurnal cycle is used as an additional source of information. For elaborating an improved reconstruction algorithm, relevant data from Legionowo [52.4 N, 21.0 E, 96 m a.s.l.], Poland were collected with the following instruments: NILU-UV multi-channel radiometer, Kipp & Zonen pyranometer, and radiosondes for profiles of ozone, humidity and temperature. The proposed algorithm has been used for reconstruction of UV at four Polish sites: Mikolajki, Kolobrzeg, Warszawa-Bielany and Zakopane since the early 1960s. Krzyscin's reconstruction of total ozone has been used in the calculations.
A numeric comparison of variable selection algorithms for supervised learning
Palombo, G.; Narsky, I.
2009-01-01
Datasets in modern High Energy Physics (HEP) experiments are often described by dozens or even hundreds of input variables. Reducing a full variable set to a subset that most completely represents information about the data is therefore an important task in the analysis of HEP data. We compare various variable selection algorithms for supervised learning using several datasets, such as the imaging gamma-ray Cherenkov telescope (MAGIC) data found at the UCI repository. We use classifiers and variable selection methods implemented in the statistical package StatPatternRecognition (SPR), a free open-source C++ package developed in the HEP community (http://sourceforge.net/projects/statpatrec/). For each dataset, we select a powerful classifier and estimate its learning accuracy on variable subsets obtained by various selection algorithms. When possible, we also estimate the CPU time needed for the variable subset selection. The results of this analysis are compared with those published previously for these datasets using other statistical packages such as R and Weka. We show that the most accurate, yet slowest, method is a wrapper algorithm known as generalized sequential forward selection ('Add N Remove R') implemented in SPR.
[Orthogonal Vector Projection Algorithm for Spectral Unmixing].
Song, Mei-ping; Xu, Xing-wei; Chang, Chein-I; An, Ju-bai; Yao, Li
2015-12-01
Spectral unmixing is an important part of hyperspectral technology and is essential for material abundance analysis in hyperspectral imagery. Most linear unmixing algorithms require matrix multiplication and matrix inversion or determinant computation. These are difficult to program and especially hard to realize in hardware, and the computational cost of the algorithms increases significantly as the number of endmembers grows. Here, based on the traditional Orthogonal Subspace Projection algorithm, a new method called Orthogonal Vector Projection is proposed using the orthogonality principle. It simplifies the process by avoiding matrix multiplication and inversion. It first computes the final orthogonal vector via the Gram-Schmidt process for each endmember spectrum. These orthogonal vectors are then used as projection vectors for the pixel signature. The unconstrained abundance is obtained directly by projecting the signature onto the projection vectors and computing the ratio of the projected vector length to the orthogonal vector length. Compared to the Orthogonal Subspace Projection and Least Squares Error algorithms, this method needs no matrix inversion, which is computationally costly and hard to implement in hardware. It completes the orthogonalization process by repeated vector operations, making it easy to apply in both parallel computation and hardware. The soundness of the algorithm is established through its relationship with the Orthogonal Subspace Projection and Least Squares Error algorithms, and its computational complexity, the lowest of the three, is also compared with theirs. Finally, experimental results on synthetic and real images are provided, giving further evidence of the method's effectiveness.
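The projection-ratio idea described above can be sketched as follows; this is an illustrative reading of the abstract, not the authors' code. For each endmember, Gram-Schmidt removes the span of the interfering endmembers, and the abundance is the ratio of two inner products, with no matrix inversion:

```python
import numpy as np

def ovp_abundances(r, M):
    """Unconstrained abundances by orthogonal vector projection.

    r: pixel spectrum, shape (bands,); M: endmember matrix, shape (bands, p).
    For each endmember i, Gram-Schmidt its spectrum against the other
    endmembers, then take the ratio of the pixel's and the endmember's
    projections onto the resulting orthogonal vector.
    """
    bands, p = M.shape
    a = np.zeros(p)
    for i in range(p):
        # Orthonormalize the interfering endmembers (Gram-Schmidt).
        basis = []
        for j in range(p):
            if j == i:
                continue
            u = M[:, j].copy()
            for b in basis:
                u -= (u @ b) * b
            norm = np.linalg.norm(u)
            if norm > 1e-12:
                basis.append(u / norm)
        # Remove the interferers' span from endmember i.
        v = M[:, i].copy()
        for b in basis:
            v -= (v @ b) * b
        # Abundance as a ratio of projections onto v.
        a[i] = (r @ v) / (M[:, i] @ v)
    return a

# Noiseless check: a mixed pixel with known abundances is recovered exactly.
rng = np.random.default_rng(0)
M = rng.random((20, 3))
a_true = np.array([0.5, 0.3, 0.2])
a_est = ovp_abundances(M @ a_true, M)
```

Because v is orthogonal to every other endmember, their contributions cancel in the numerator, which is why the ratio isolates a single abundance.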
Kim, Hyungjin; Park, Chang Min; Lee, Myunghee; Park, Sang Joon; Song, Yong Sub; Lee, Jong Hyuk; Hwang, Eui Jin; Goo, Jin Mo
2016-01-01
To identify the impact of reconstruction algorithms on CT radiomic features of pulmonary tumors and to reveal and compare the intra-reader, inter-reader, and inter-reconstruction algorithm variability of each feature. Forty-two patients (M:F = 19:23; mean age, 60.43±10.56 years) with 42 pulmonary tumors (22.56±8.51 mm) underwent contrast-enhanced CT scans, which were reconstructed with filtered back projection and a commercial iterative reconstruction algorithm (levels 3 and 5). Two readers independently segmented the whole tumor volume. Fifteen radiomic features were extracted and compared among reconstruction algorithms. Intra- and inter-reader variability and inter-reconstruction algorithm variability were calculated using coefficients of variation (CVs) and then compared. Among the 15 features, 5 first-order tumor intensity features and 4 gray level co-occurrence matrix (GLCM)-based features showed significant differences (p < 0.05) among reconstruction algorithms. As for the variability, effective diameter, sphericity, entropy, and GLCM entropy were the most robust features (CV ≤ 5%). Inter-reader variability was larger than intra-reader or inter-reconstruction algorithm variability in 9 features. However, for entropy, homogeneity, and 4 GLCM-based features, inter-reconstruction algorithm variability was significantly greater than inter-reader variability (p < 0.05). Inter-reconstruction algorithm variability was greater than inter-reader variability for entropy, homogeneity, and GLCM-based features.
Filter Pattern Search Algorithms for Mixed Variable Constrained Optimization Problems
Abramson, Mark A; Audet, Charles; Dennis, Jr, J. E
2004-01-01
.... This class combines and extends the Audet-Dennis Generalized Pattern Search (GPS) algorithms for bound constrained mixed variable optimization, and their GPS-filter algorithms for general nonlinear constraints...
A Framework for Categorizing Important Project Variables
Parsons, Vickie S.
2003-01-01
While substantial research has led to theories concerning the variables that affect project success, no universal set of such variables has been acknowledged as the standard. The identification of a specific set of controllable variables is needed to minimize project failure. Much has been hypothesized about the need to match project controls and management processes to individual projects in order to increase the chance for success. However, an accepted taxonomy for facilitating this matching process does not exist. This paper surveyed existing literature on classification of project variables. After an analysis of those proposals, a simplified categorization is offered to encourage further research.
Variable depth recursion algorithm for leaf sequencing
Siochi, R. Alfredo C.
2007-01-01
The processes of extraction and sweep are basic segmentation steps used in leaf sequencing algorithms. A modified version of a commercial leaf sequencer changed the way extracts are selected and expanded the search space, but the modification maintained the basic search paradigm of evaluating multiple solutions, each consisting of up to 12 extracts and a sweep sequence. While it generated the best solutions compared to other published algorithms, it used more computation time. A new, faster algorithm selects one extract at a time but calls itself as an evaluation function a user-specified number of times, after which it uses the bidirectional sweeping window algorithm as the final evaluation function. To achieve performance comparable to that of the modified commercial leaf sequencer, 2-3 calls were needed, and in all test cases there were only slight improvements beyond two calls. For the 13 clinical test maps, computation speeds improved by a factor between 12 and 43, depending on the constraints, namely the ability to interdigitate and the avoidance of tongue-and-groove underdose. The new algorithm was compared to the original and modified versions of the commercial leaf sequencer. It was also compared to other published algorithms for 1400 random 15×15 test maps with 3-16 intensity levels. In every case the new algorithm provided the best solution.
A Partitioning and Bounded Variable Algorithm for Linear Programming
Sheskin, Theodore J.
2006-01-01
An interesting new partitioning and bounded variable algorithm (PBVA) is proposed for solving linear programming problems. The PBVA is a variant of the simplex algorithm which uses a modified form of the simplex method followed by the dual simplex method for bounded variables. In contrast to the two-phase method and the big M method, the PBVA does…
Risk variables in evaluation of transport projects
Vařbuchta, Petr; Kovářová, Hana; Hromádka, Vít; Vítková, Eva
2017-09-01
Given the constantly increasing demands on the assessment of investment projects, especially of large-scale transport projects and important European projects with wide impacts, there is an ever-growing focus on risk management, whether to find mitigations, create corrective measures, or implement them in assessment, especially in the context of cost-benefit analysis. Project assessment often incorporates certain risk variables that can capture negative impacts on project outputs within the assessment framework. Transportation infrastructure projects in particular place much emphasis on the influence of risk variables. However, assessments of transportation projects in the Czech Republic currently use only a few risk variables, which recur in most projects. This leads to certain limitations in assessing the impact of risk variables. This paper aims to specify new risk variables and a process for applying them to an already executed project assessment. Differences between the original and adapted assessments are then evaluated based on the changes generated by the new risk variables.
Modified Projection Algorithms for Solving the Split Equality Problems
Qiao-Li Dong
2014-01-01
proposed a CQ algorithm for solving it. In this paper, we propose a modification for the CQ algorithm, which computes the stepsize adaptively and performs an additional projection step onto two half-spaces in each iteration. We further propose a relaxation scheme for the self-adaptive projection algorithm by using projections onto half-spaces instead of those onto the original convex sets, which is much more practical. Weak convergence results for both algorithms are analyzed.
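The practical appeal of the relaxation mentioned above is that a projection onto a half-space has a closed form, unlike projections onto general convex sets. A minimal sketch (illustrative, not the paper's full scheme):

```python
import numpy as np

def project_halfspace(x, a, b):
    """Orthogonal projection of x onto the half-space {z : a·z <= b}.

    If x already satisfies the constraint it is returned unchanged;
    otherwise it is moved along a onto the bounding hyperplane.
    """
    violation = a @ x - b
    if violation <= 0:
        return x.copy()
    return x - (violation / (a @ a)) * a

# Example: project the point (3, 0) onto {z : z1 + z2 <= 1}.
p = project_halfspace(np.array([3.0, 0.0]), np.array([1.0, 1.0]), 1.0)
```

Replacing each convex set by a containing half-space turns every iteration of a CQ-type method into a few inner products, which is why the relaxed variant is described as much more practical.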
Semiconvergence and Relaxation Parameters for Projected SIRT Algorithms
Elfving, Tommy; Hansen, Per Christian; Nikazad, Touraj
2012-01-01
We give a detailed study of the semiconvergence behavior of projected nonstationary simultaneous iterative reconstruction technique (SIRT) algorithms, including the projected Landweber algorithm. We also consider the use of a relaxation parameter strategy, proposed recently for the standard algorithms, for controlling the semiconvergence of the projected algorithms. We demonstrate the semiconvergence and the performance of our strategies by examples taken from tomographic imaging. © 2012 Society for Industrial and Applied Mathematics.
Dynamic Vehicle Routing Using an Improved Variable Neighborhood Search Algorithm
Yingcheng Xu
2013-01-01
Full Text Available In order to effectively solve the dynamic vehicle routing problem with time windows, a mathematical model is established and an improved variable neighborhood search algorithm is proposed. In the algorithm, customer allocation and route planning for the initial solution are completed by a clustering method. Hybrid insert and exchange operators are used in the shaking process, a subsequent optimization process is applied to improve the solution space, and the best-improvement strategy is adopted, allowing the algorithm to achieve a better balance between solution quality and running time. The idea of simulated annealing is introduced to control the acceptance of new solutions, and the influences of arrival time, geographical distribution, and time window range on route selection are analyzed. In the experiments, the proposed algorithm is applied to DVRP instances of different sizes. Comparison with other algorithms shows that the algorithm is effective and feasible.
Cascade Error Projection: An Efficient Hardware Learning Algorithm
Duong, T. A.
1995-01-01
A new learning algorithm termed cascade error projection (CEP) is presented. CEP is an adaptation of the constructive architecture from cascade correlation and of the dynamical step size of A/D conversion from the cascade back-propagation algorithm.
Sparse Nonlinear Electromagnetic Imaging Accelerated With Projected Steepest Descent Algorithm
Desmal, Abdulla; Bagci, Hakan
2017-01-01
steepest descent algorithm. The algorithm uses a projection operator to enforce the sparsity constraint by thresholding the solution at every iteration. Thresholding level and iteration step are selected carefully to increase the efficiency without
Optimization Shape of Variable Capacitance Micromotor Using Differential Evolution Algorithm
A. Ketabi
2010-01-01
Full Text Available A new method for optimum shape design of a variable capacitance micromotor (VCM) using Differential Evolution (DE), a stochastic search algorithm, is presented. In this optimization exercise, the objective function aims to maximize the torque value and minimize the torque ripple, where the geometric parameters are considered to be the variables. The optimization process is carried out using a combination of the DE algorithm and FEM analysis. Fitness values are calculated by FEM analysis using COMSOL 3.4, and the DE algorithm is realized in MATLAB 7.4. The proposed method is applied to a VCM with 8 poles at the stator and 6 poles at the rotor. The results show that the micromotor optimized with the DE algorithm had a higher torque value and lower torque ripple, indicating the validity of this methodology for VCM design.
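The DE search loop itself is generic and can be sketched independently of the FEM objective; the following is a standard DE/rand/1/bin scheme on a toy function (population size, F, CR, and the test function are illustrative, not the paper's settings):

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           n_gen=100, seed=0):
    """Minimize f over box bounds with the DE/rand/1/bin strategy."""
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    fit = np.array([f(x) for x in pop])
    for _ in range(n_gen):
        for i in range(pop_size):
            # Pick three distinct individuals other than i.
            idx = rng.choice([j for j in range(pop_size) if j != i],
                             3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)   # DE/rand/1 mutation
            cross = rng.random(dim) < CR                # binomial crossover
            cross[rng.integers(dim)] = True             # force one gene over
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial < fit[i]:                        # greedy selection
                pop[i], fit[i] = trial, f_trial
    best = fit.argmin()
    return pop[best], fit[best]

# Toy run: minimize the 3-D sphere function over [-5, 5]^3.
best_x, best_f = differential_evolution(
    lambda v: float(np.sum(v * v)), [(-5.0, 5.0)] * 3)
```

In the paper's setting, `f` would be the FEM-evaluated torque objective rather than an analytic function; the loop is unchanged.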
Improvement of the cost-benefit analysis algorithm for high-rise construction projects
Gafurov Andrey
2018-01-01
Full Text Available The specific nature of high-rise investment projects entailing long-term construction, high risks, etc. implies a need to improve the standard algorithm of cost-benefit analysis. An improved algorithm is described in the article. For development of the improved algorithm of cost-benefit analysis for high-rise construction projects, the following methods were used: weighted average cost of capital, dynamic cost-benefit analysis of investment projects, risk mapping, scenario analysis, sensitivity analysis of critical ratios, etc. This comprehensive approach helped to adapt the original algorithm to feasibility objectives in high-rise construction. The authors put together the algorithm of cost-benefit analysis for high-rise construction projects on the basis of risk mapping and sensitivity analysis of critical ratios. The suggested project risk management algorithms greatly expand the standard algorithm of cost-benefit analysis in investment projects, namely: the “Project analysis scenario” flowchart, improving quality and reliability of forecasting reports in investment projects; the main stages of cash flow adjustment based on risk mapping for better cost-benefit project analysis provided the broad range of risks in high-rise construction; analysis of dynamic cost-benefit values considering project sensitivity to crucial variables, improving flexibility in implementation of high-rise projects.
An accurate projection algorithm for array processor based SPECT systems
King, M.A.; Schwinger, R.B.; Cool, S.L.
1985-01-01
A data re-projection algorithm has been developed for use in single photon emission computed tomography (SPECT) on an array processor based computer system. The algorithm makes use of an accurate representation of pixel activity (a uniform square pixel model of the intensity distribution) and is performed rapidly due to the efficient handling of an array-based algorithm and the Fast Fourier Transform (FFT) on parallel processing hardware. The algorithm uses a pixel-driven nearest neighbour projection operation onto an array of subdivided projection bins. This result is then convolved with the projected uniform square pixel distribution before being compressed to the original bin size. This distribution varies with projection angle and is explicitly calculated. The FFT combined with a frequency-space multiplication is used instead of a spatial convolution for more rapid execution. The new algorithm was tested against other commonly used projection algorithms by comparing the accuracy of projections of a simulated transverse section of the abdomen against analytically determined projections of that transverse section. The new algorithm was found to yield comparable or better standard error while being easier and more efficient to implement on parallel hardware. Applications of the algorithm include iterative reconstruction and attenuation correction schemes and evaluation of regions of interest in dynamic and gated SPECT.
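The pixel-driven nearest-neighbour step can be illustrated in a few lines; this sketch omits the subdivided bins, square-pixel convolution, and FFT stages of the full algorithm and just drops each pixel's activity into the nearest detector bin:

```python
import numpy as np

def nearest_neighbor_projection(img, theta, n_bins=None):
    """Pixel-driven projection at angle theta (radians): each pixel's
    activity is accumulated into the nearest projection bin."""
    H, W = img.shape
    if n_bins is None:
        n_bins = W
    cy, cx = (H - 1) / 2.0, (W - 1) / 2.0
    proj = np.zeros(n_bins)
    for i in range(H):
        for j in range(W):
            # Signed distance of the pixel center along the detector axis.
            t = (j - cx) * np.cos(theta) + (i - cy) * np.sin(theta)
            b = int(round(t + (n_bins - 1) / 2.0))
            if 0 <= b < n_bins:
                proj[b] += img[i, j]
    return proj

# Sanity check: at 0 and 90 degrees the projection is a column/row sum.
img = np.arange(16.0).reshape(4, 4)
p0 = nearest_neighbor_projection(img, 0.0)
p90 = nearest_neighbor_projection(img, np.pi / 2)
```

In the paper's scheme, projecting into subdivided bins and then convolving with the projected square-pixel profile corrects the blockiness this nearest-neighbour step introduces.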
Variable selection in Logistic regression model with genetic algorithm.
Zhang, Zhongheng; Trevino, Victor; Hoseini, Sayed Shahabuddin; Belciug, Smaranda; Boopathi, Arumugam Manivanna; Zhang, Ping; Gorunescu, Florin; Subha, Velappan; Dai, Songshi
2018-02-01
Variable or feature selection is one of the most important steps in model specification. Especially in medical decision making, the direct use of a medical database without a prior analysis and preprocessing step is often counterproductive. Variable selection is the method of choosing the most relevant attributes from the database in order to build robust learning models and thus improve the performance of the models used in the decision process. In biomedical research, the purpose of variable selection is to select clinically important and statistically significant variables, while excluding unrelated or noise variables. A variety of methods exist for variable selection, but none of them is without limitations. For example, the widely used stepwise approach adds the best variable in each cycle, generally producing an acceptable set of variables; nevertheless, it is commonly trapped in local optima. The best-subset approach can systematically search the entire covariate pattern space, but the solution pool can be extremely large with tens to hundreds of variables, as is the case in today's clinical data. Genetic algorithms (GA) are heuristic optimization approaches and can be used for variable selection in multivariable regression models. This tutorial paper aims to provide a step-by-step approach to the use of GA in variable selection. The R code provided in the text can be extended and adapted to other data analysis needs.
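The tutorial's R code is not reproduced here; a compact Python sketch of the same idea follows. A GA evolves binary inclusion masks, scoring each mask by a logistic-model log-likelihood with an AIC-style size penalty (the synthetic data, penalty, and GA settings are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: only features 0 and 3 actually drive the outcome.
n, p = 200, 10
X = rng.standard_normal((n, p))
logits = 3.0 * X[:, 0] - 3.0 * X[:, 3]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(float)

def fitness(mask):
    """Fit a logistic model on the masked features by gradient ascent,
    return log-likelihood minus an AIC-style penalty on model size."""
    if mask.sum() == 0:
        return -np.inf
    Xs = np.column_stack([np.ones(n), X[:, mask.astype(bool)]])
    w = np.zeros(Xs.shape[1])
    for _ in range(300):
        prob = 1.0 / (1.0 + np.exp(-Xs @ w))
        w += 0.5 * Xs.T @ (y - prob) / n            # gradient ascent step
    prob = np.clip(1.0 / (1.0 + np.exp(-Xs @ w)), 1e-9, 1 - 1e-9)
    ll = np.sum(y * np.log(prob) + (1 - y) * np.log(1 - prob))
    return ll - 2.0 * mask.sum()                    # penalize extra variables

def ga_select(pop_size=20, n_gen=30, p_mut=0.05):
    pop = (rng.random((pop_size, p)) < 0.5).astype(int)
    for _ in range(n_gen):
        fit = np.array([fitness(m) for m in pop])
        new = [pop[fit.argmax()].copy()]            # elitism
        while len(new) < pop_size:
            i, j = rng.integers(pop_size, size=2)   # tournament selection
            pa = pop[i] if fit[i] >= fit[j] else pop[j]
            i, j = rng.integers(pop_size, size=2)
            pb = pop[i] if fit[i] >= fit[j] else pop[j]
            cut = rng.integers(1, p)                # one-point crossover
            child = np.concatenate([pa[:cut], pb[cut:]])
            flip = rng.random(p) < p_mut            # bit-flip mutation
            child = np.where(flip, 1 - child, child)
            new.append(child)
        pop = np.array(new)
    fit = np.array([fitness(m) for m in pop])
    return pop[fit.argmax()]

best_mask = ga_select()
```

On this toy problem the GA reliably keeps the two informative features; whether noise features survive depends on the size penalty, mirroring the paper's point that selection criteria matter.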
Constrained variable projection method for blind deconvolution
Cornelio, A; Piccolomini, E Loli; Nagy, J G
2012-01-01
This paper is focused on the solution of the blind deconvolution problem, here modeled as a separable nonlinear least squares problem. The well known ill-posedness, both on recovering the blurring operator and the true image, makes the problem really difficult to handle. We show that, by imposing appropriate constraints on the variables and with well chosen regularization parameters, it is possible to obtain an objective function that is fairly well behaved. Hence, the resulting nonlinear minimization problem can be effectively solved by classical methods, such as the Gauss-Newton algorithm.
Quasi Gradient Projection Algorithm for Sparse Reconstruction in Compressed Sensing
Xin Meng
2014-02-01
Full Text Available Compressed sensing is a novel signal sampling theory for signals that are sparse or compressible. Existing recovery algorithms based on gradient projection either need prior knowledge or recover the signal poorly. In this paper, a new algorithm based on gradient projection is proposed, referred to as Quasi Gradient Projection. The algorithm uses a quasi-gradient direction and two step-size schemes along this direction, and it does not need any prior knowledge of the original signal. Simulation results demonstrate that the presented algorithm can recover the signal more accurately than GPSR, which also needs no prior knowledge, while having lower computational complexity.
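The quasi-gradient variant itself is not reproduced here; as a baseline for the family of gradient-projection recovery methods the abstract compares against, a standard iterative shrinkage-thresholding (ISTA) sketch is shown below, with illustrative regularization weight and iteration count:

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=500):
    """Sparse recovery of x from y = A x: a gradient step on
    0.5*||Ax - y||^2 followed by soft-thresholding, the projection-like
    step that enforces sparsity."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

# Toy recovery: a 5-sparse, length-100 signal from 50 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
A /= np.linalg.norm(A, axis=0)             # unit-norm columns
x_true = np.zeros(100)
x_true[[3, 20, 41, 77, 90]] = [1.0, -1.0, 1.0, 1.0, -1.0]
x_hat = ista(A, A @ x_true)
```

Methods such as GPSR and the paper's algorithm differ in how the descent direction and step sizes are chosen, but they share this gradient-then-sparsify structure.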
Variable Step Size Maximum Correntropy Criteria Based Adaptive Filtering Algorithm
S. Radhika
2016-04-01
Full Text Available Maximum correntropy criterion (MCC) based adaptive filters are found to be robust against impulsive interference. This paper proposes a novel MCC-based adaptive filter with variable step size in order to obtain improved performance in terms of both convergence rate and steady-state error, with robustness against impulsive interference. The optimal variable step size is obtained by minimizing the Mean Square Deviation (MSD) error from one iteration to the next. Simulation results for a highly impulsive system identification scenario show that the proposed algorithm has faster convergence and lower steady-state error than conventional MCC-based adaptive filters.
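The variable-step rule is specific to the paper; the fixed-step MCC filter it improves on can be sketched as follows (illustrative filter length, step size, and kernel width). The Gaussian kernel on the error is what provides the robustness the abstract describes:

```python
import numpy as np

def mcc_lms(x, d, num_taps=4, mu=0.05, sigma=1.0):
    """LMS-type adaptation under the maximum correntropy criterion: the
    factor exp(-e^2 / (2 sigma^2)) scales down updates caused by large
    (impulsive) errors, making the filter robust to outliers."""
    w = np.zeros(num_taps)
    for n in range(num_taps - 1, len(x)):
        u = x[n - num_taps + 1:n + 1][::-1]   # most recent input vector
        e = d[n] - u @ w
        w += mu * np.exp(-e * e / (2.0 * sigma * sigma)) * e * u
    return w

# System identification with occasional large impulses in the noise.
rng = np.random.default_rng(0)
h = np.array([0.5, -0.3, 0.2, 0.1])
x = rng.standard_normal(3000)
noise = 0.01 * rng.standard_normal(3000)
noise[rng.random(3000) < 0.01] += 50.0        # ~1% impulsive outliers
d = np.convolve(x, h)[:len(x)] + noise
w = mcc_lms(x, d)
```

A plain LMS filter would be thrown far off by the 50-amplitude impulses; here the kernel weight is effectively zero at those samples, so the estimate stays near the true system.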
Fast image matching algorithm based on projection characteristics
Zhou, Lijuan; Yue, Xiaobo; Zhou, Lijun
2011-06-01
Based on an analysis of the traditional template matching algorithm, this paper identifies the key factors restricting matching speed and puts forward a new fast matching algorithm based on projection. By projecting the grayscale image, the algorithm converts the two-dimensional information of the image into one dimension and then matches and identifies through one-dimensional correlation; moreover, because normalization is applied, it still matches correctly when the image brightness or signal amplitude increases proportionally. Experimental results show that the proposed projection-based image registration method greatly improves matching speed while preserving matching accuracy.
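The core idea, matching row and column projections with normalized 1-D correlation instead of full 2-D correlation, can be sketched as below. This illustrates the principle only; the paper's speed gain comes from additional optimizations not shown here:

```python
import numpy as np

def project_match(image, template):
    """Locate a template by comparing horizontal and vertical projections
    (row and column sums) with normalized correlation."""
    th, tw = template.shape
    tp_rows = template.sum(axis=1)            # vertical projection
    tp_cols = template.sum(axis=0)            # horizontal projection

    def ncc(a, b):
        # Normalized correlation: invariant to proportional amplitude scaling.
        a = a - a.mean()
        b = b - b.mean()
        den = np.linalg.norm(a) * np.linalg.norm(b)
        return (a @ b) / den if den > 0 else 0.0

    H, W = image.shape
    best_score, best_pos = -np.inf, None
    for i in range(H - th + 1):
        for j in range(W - tw + 1):
            win = image[i:i + th, j:j + tw]
            score = ncc(win.sum(axis=1), tp_rows) + ncc(win.sum(axis=0), tp_cols)
            if score > best_score:
                best_score, best_pos = score, (i, j)
    return best_pos

# Toy check: an 8x8 patch cut from the image is located exactly.
rng = np.random.default_rng(0)
image = rng.random((30, 30))
template = image[10:18, 7:15].copy()
loc = project_match(image, template)
```

Each candidate position is scored with two length-8 correlations instead of a 64-element 2-D correlation, which is the dimensionality reduction the abstract describes.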
US Climate Variability and Predictability Project
Patterson, Mike [University Corporation for Atmospheric Research (UCAR), Boulder, CO (United States)
2017-11-14
The US CLIVAR Project Office administers the US CLIVAR Program with its mission to advance understanding and prediction of climate variability and change across timescales with an emphasis on the role of the ocean and its interaction with other elements of the Earth system. The Project Office promotes and facilitates scientific collaboration within the US and international climate and Earth science communities, addressing priority topics from subseasonal to centennial climate variability and change; the global energy imbalance; the ocean’s role in climate, water, and carbon cycles; climate and weather extremes; and polar climate changes. This project provides essential one-year support of the Project Office, enabling the participation of US scientists in the meetings of the US CLIVAR bodies that guide scientific planning and implementation, including the scientific steering committee that establishes program goals and evaluates progress of activities to address them, the science team of funded investigators studying the ocean overturning circulation in the Atlantic, and two working groups tackling the priority research topics of Arctic change influence on midlatitude climate and weather extremes and the decadal-scale widening of the tropical belt.
Identification of chaotic systems with hidden variables (modified Bock's algorithm)
Bezruchko, Boris P.; Smirnov, Dmitry A.; Sysoev, Ilya V.
2006-01-01
We address the problem of estimating parameters of chaotic dynamical systems from a time series in a situation when some of the state variables are not observed and/or the data are very noisy. Using specially developed quantitative criteria, we compare the performance of the original multiple shooting approach (Bock's algorithm) and its modified version. The latter is shown to be significantly superior for long chaotic time series. In particular, it allows accurate estimates to be obtained for much worse starting guesses of the estimated parameters.
Preconditioned alternating projection algorithms for maximum a posteriori ECT reconstruction
Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng
2012-01-01
We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators arising from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization of the solution via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We theoretically prove the convergence of PAPA. In numerical experiments, the performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with that of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner significantly outperforms EM-TV in all aspects, including convergence speed, noise in the reconstructed images, and image quality. It also outperforms nested EM-TV in convergence speed while providing comparable image quality. (paper)
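PAPA itself involves proximity operators for the TV-norm and an EM-preconditioner; as background only, the bare alternating-projection idea it builds on can be illustrated on two toy convex sets (a hyperplane and a ball, both chosen arbitrarily here), where alternating the two projections converges to a point in their intersection.

```python
import numpy as np

a, b = np.array([1.0, 1.0]), 2.0     # hyperplane: a·x = b
c, r = np.array([0.0, 0.0]), 1.5     # ball: ||x - c|| <= r (intersects the plane)

def proj_plane(x):
    # orthogonal projection onto the hyperplane
    return x - (a @ x - b) / (a @ a) * a

def proj_ball(x):
    # projection onto the ball: pull points outside back to the surface
    d = np.linalg.norm(x - c)
    return x if d <= r else c + r * (x - c) / d

x = np.array([5.0, -3.0])            # arbitrary starting point
for _ in range(1000):
    x = proj_plane(proj_ball(x))     # alternate the two projections
```

The final iterate satisfies both constraints; in PAPA the two "projections" are proximity operators of the TV term and the data constraint rather than these simple geometric sets.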
Cost Forecasting of Substation Projects Based on Cuckoo Search Algorithm and Support Vector Machines
Dongxiao Niu
2018-01-01
Full Text Available Accurate prediction of substation project cost is helpful to improve investment management and sustainability, and it is directly related to the economy of substation projects. Ensemble Empirical Mode Decomposition (EEMD) can decompose variables with non-stationary sequence signals into components with significant regularity and periodicity, which helps improve the accuracy of the prediction model. Adding Gaussian perturbation to the traditional Cuckoo Search (CS) algorithm improves its search vigor and precision. Thus, the parameters and kernel functions of the Support Vector Machines (SVM) model are optimized. Comparison of the prediction results with other models shows that this model has higher prediction accuracy.
Computational performance of a projection and rescaling algorithm
Pena, Javier; Soheili, Negar
2018-01-01
This paper documents a computational implementation of a {\\em projection and rescaling algorithm} for finding most interior solutions to the pair of feasibility problems \\[ \\text{find} \\; x\\in L\\cap\\mathbb{R}^n_{+} \\;\\;\\;\\; \\text{ and } \\; \\;\\;\\;\\; \\text{find} \\; \\hat x\\in L^\\perp\\cap\\mathbb{R}^n_{+}, \\] where $L$ denotes a linear subspace in $\\mathbb{R}^n$ and $L^\\perp$ denotes its orthogonal complement. The projection and rescaling algorithm is a recently developed method that combines a {\\...
Limitations on continuous variable quantum algorithms with Fourier transforms
Adcock, Mark R A; Hoeyer, Peter; Sanders, Barry C
2009-01-01
We study quantum algorithms implemented within a single harmonic oscillator, or equivalently within a single mode of the electromagnetic field. Logical states correspond to functions of the canonical position, and the Fourier transform to canonical momentum serves as the analogue of the Hadamard transform for this implementation. This continuous variable version of quantum information processing has widespread appeal because of advanced quantum optics technology that can create, manipulate and read Gaussian states of light. We show that, contrary to a previous claim, this implementation of quantum information processing has limitations due to a position-momentum trade-off of the Fourier transform, analogous to the famous time-bandwidth theorem of signal processing.
Preventive maintenance scheduling by variable dimension evolutionary algorithms
Limbourg, Philipp; Kochs, Hans-Dieter
2006-01-01
Black box optimization strategies have proven to be useful tools for solving complex maintenance optimization problems. There has been a considerable amount of research on the right choice of optimization strategies for finding optimal preventive maintenance schedules. Much less attention has been paid to the representation of the schedule given to the algorithm. Either the search space is represented as a binary string, leading to a highly complex combinatorial problem, or maintenance operations are defined by regular intervals, which may restrict the search space to suboptimal solutions. An adequate representation, however, is vitally important for result quality. This work presents several nonstandard input representations and compares them to the standard binary representation. An evolutionary algorithm with extensions to handle variable-length genomes is used for the comparison. The results demonstrate that two new representations perform better than the binary representation scheme. A second analysis shows that performance may be increased even further using modified genetic operators. Thus, the choice of alternative representations leads to better results in the same amount of time and without any loss of accuracy.
Aidin Delgoshaei
2016-09-01
Full Text Available Purpose: Resource over-allocation is a major concern for project engineers when scheduling project activities. The over-allocation drawback frequently appears after a project has been scheduled in practice, rendering the schedule useless, and modifying an over-allocated schedule is complicated and requires considerable effort and time. In this paper, a new and fast-tracking method is proposed for scheduling large-scale projects, which can help project engineers schedule projects rapidly and with more confidence. Design/methodology/approach: In this article, a forward approach for maximizing net present value (NPV) in the multi-mode resource-constrained project scheduling problem with discounted positive cash flows (MRCPSP-DCF) is proposed. The progress payment method is used and all resources are considered pre-emptible. The proposed approach maximizes NPV using unscheduled resources through the resource calendar in forward mode, and a Genetic Algorithm is applied to solve the resulting problem. Findings: The findings show that the proposed method is an effective way to maximize NPV in MRCPSP-DCF problems while activity splitting is allowed. The proposed algorithm is very fast and can schedule experimental cases with 1000 variables and 100 resources in a few seconds. The results are compared with the branch-and-bound method and a simulated annealing algorithm, and the proposed genetic algorithm is found to provide results of better quality. The algorithm is then applied to scheduling a hospital project in practice. Originality/value: The method can be used alone or as a macro in Microsoft Office Project® Software to schedule MRCPSP-DCF problems or to modify resource over-allocated activities after scheduling a project. This can help project engineers schedule project activities rapidly and with more accuracy in practice.
Multiobjective genetic algorithm approaches to project scheduling under risk
Kılıç, Murat; Kilic, Murat
2003-01-01
In this thesis, project scheduling under risk is chosen as the topic of research. Project scheduling under risk is defined as a biobjective decision problem and is formulated as a 0-1 integer mathematical programming model. In this biobjective formulation, one of the objectives is taken as the expected makespan minimization and the other is taken as the expected cost minimization. As the solution approach to this biobjective formulation, a genetic algorithm (GA) is chosen. After carefully invest...
Theory of affine projection algorithms for adaptive filtering
Ozeki, Kazuhiko
2016-01-01
This book focuses on theoretical aspects of the affine projection algorithm (APA) for adaptive filtering. The APA is a natural generalization of the classical, normalized least-mean-squares (NLMS) algorithm. The book first explains how the APA evolved from the NLMS algorithm, where an affine projection view is emphasized. By looking at those adaptation algorithms from such a geometrical point of view, we can find many of the important properties of the APA, e.g., the improvement of the convergence rate over the NLMS algorithm especially for correlated input signals. After the birth of the APA in the mid-1980s, similar algorithms were put forward by other researchers independently from different perspectives. This book shows that they are variants of the APA, forming a family of APAs. Then it surveys research on the convergence behavior of the APA, where statistical analyses play important roles. It also reviews developments of techniques to reduce the computational complexity of the APA, which are important f...
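A minimal sketch of the basic APA recursion discussed above (projection order `P`, step size `mu`, and regularization `delta` are illustrative choices), applied to identifying a short FIR system driven by correlated AR(1) input, the setting where the book notes APA improves on NLMS:

```python
import numpy as np

def apa(x, d, order=4, P=2, mu=0.5, delta=1e-4):
    # affine projection algorithm: each update uses the P most recent
    # regressor vectors, improving convergence for correlated inputs
    w = np.zeros(order)
    for n in range(order + P - 2, len(x)):
        # columns of X are the P latest regressor vectors (newest sample first)
        X = np.column_stack([x[n - p - order + 1:n - p + 1][::-1] for p in range(P)])
        e = d[n - np.arange(P)] - X.T @ w
        w += mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(P), e)
    return w

rng = np.random.default_rng(3)
v = rng.standard_normal(4000)
x = np.empty_like(v)
x[0] = v[0]
for n in range(1, len(v)):
    x[n] = 0.9 * x[n - 1] + v[n]          # AR(1): strongly correlated input
w_true = np.array([1.0, -0.5, 0.25, 0.1]) # unknown FIR system
d = np.convolve(x, w_true)[:len(x)]
w_hat = apa(x, d)
```

With P = 1 this reduces to NLMS, which matches the book's description of the APA as a generalization of the NLMS algorithm.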
A Turn-Projected State-Based Conflict Resolution Algorithm
Butler, Ricky W.; Lewis, Timothy A.
2013-01-01
State-based conflict detection and resolution (CD&R) algorithms detect conflicts and resolve them on the basis of current state information without the use of additional intent information from aircraft flight plans. Therefore, the prediction of the trajectory of aircraft is based solely upon the position and velocity vectors of the traffic aircraft. Most CD&R algorithms project the traffic state using only the current state vectors. However, the past state vectors can be used to make a better prediction of the future trajectory of the traffic aircraft. This paper explores the idea of using past state vectors to detect traffic turns and resolve conflicts caused by these turns using a non-linear projection of the traffic state. A new algorithm based on this idea is presented and validated using a fast-time simulator developed for this study.
Approximated affine projection algorithm for feedback cancellation in hearing aids.
Lee, Sangmin; Kim, In-Young; Park, Young-Cheol
2007-09-01
We propose an approximated affine projection (AP) algorithm for feedback cancellation in hearing aids. It is based on the conventional approach using the Gauss-Seidel (GS) iteration, but provides more stable convergence behaviour even with small step sizes. In the proposed algorithm, a residue of the weighted error vector, instead of the current error sample, is used to provide stable convergence. A new learning rate control scheme is also applied to the proposed algorithm to prevent signal cancellation and system instability. The new scheme determines step size in proportion to the prediction factor of the input, so that adaptation is inhibited whenever tone-like signals are present in the input. Simulation results verified the efficiency of the proposed algorithm.
Integrated variable projection approach (IVAPA) for parallel magnetic resonance imaging.
Zhang, Qiao; Sheng, Jinhua
2012-10-01
Parallel magnetic resonance imaging (pMRI) is a fast method which requires algorithms for reconstructing the image from a small number of measured k-space lines. The accurate estimation of the coil sensitivity functions is still a challenging problem in parallel imaging. The joint estimation of the coil sensitivity functions and the desired image has recently been proposed to improve the situation by iteratively optimizing both the coil sensitivity functions and the image reconstruction; it regards both the coil sensitivities and the desired images as unknowns to be solved for jointly. In this paper, we propose an integrated variable projection approach (IVAPA) for pMRI, which integrates two individual processing steps (coil sensitivity estimation and image reconstruction) into a single processing step to improve the accuracy of the coil sensitivity estimation using the variable projection approach. The method is demonstrated to give an optimal solution with considerably reduced artifacts for high reduction factors and a low number of auto-calibration signal (ACS) lines, and our implementation has a fast convergence rate. The performance of the proposed method is evaluated using a set of in vivo experiment data. Copyright © 2012 Elsevier Ltd. All rights reserved.
Combinatorial Optimization in Project Selection Using Genetic Algorithm
Dewi, Sari; Sawaluddin
2018-01-01
This paper discusses the problem of project selection in the presence of two objective functions, maximizing profit and minimizing cost, subject to limited resource availability and available time, so that resources must be allocated to each project. These resources are human resources, machine resources, and raw material resources, and allocations must not exceed the predetermined budget. The problem can thus be formulated mathematically as a multi-objective function with constraints to be satisfied. To assist the project selection process, a multi-objective combinatorial optimization approach is used to obtain an optimal solution for the selection of the right projects. A multi-objective genetic algorithm is then described as one such combinatorial optimization method to simplify the project selection process in a large scope.
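As a toy illustration of the bitstring encoding such a genetic algorithm typically uses (the paper's exact operators and multi-objective handling are not specified here), the following sketch folds the budget limit into a penalty and maximizes profit over a four-project instance; all numbers are made up:

```python
import numpy as np

profit = np.array([10, 6, 5, 4])   # profit per candidate project
cost   = np.array([6, 4, 3, 2])    # cost per candidate project
budget = 9

def fitness(bits):
    # maximize profit; selections exceeding the budget are penalized
    p, c = bits @ profit, bits @ cost
    return p if c <= budget else p - 100 * (c - budget)

rng = np.random.default_rng(4)
pop = rng.integers(0, 2, size=(40, 4))     # each row: one selection bitstring
for _ in range(60):
    f = np.array([fitness(ind) for ind in pop])
    elite = pop[np.argmax(f)].copy()
    # tournament selection
    i, j = rng.integers(0, len(pop), (2, len(pop)))
    parents = np.where((f[i] > f[j])[:, None], pop[i], pop[j])
    # single-point crossover with the neighboring parent
    cut = rng.integers(1, 4, len(pop))
    mask = np.arange(4) < cut[:, None]
    children = np.where(mask, parents, np.roll(parents, 1, axis=0))
    # bit-flip mutation
    flip = rng.random(children.shape) < 0.05
    pop = np.where(flip, 1 - children, children)
    pop[0] = elite                          # elitism keeps the best found

best = pop[np.argmax([fitness(ind) for ind in pop])]
```

On this instance the best feasible selection has profit 15 at cost 9; a true multi-objective GA would instead evolve a Pareto front over (profit, cost) pairs rather than a single penalized score.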
A Variable Neighborhood Search Algorithm for the Leather Nesting Problem
Cláudio Alves
2012-01-01
Full Text Available The leather nesting problem is a cutting and packing optimization problem that consists in finding the best layout for a set of irregular pieces within a natural leather hide with an irregular surface and contour. In this paper, we address a real application of this problem related to the production of car seats in the automotive industry. The high quality requirements imposed on these products combined with the heterogeneity of the leather hides make the problem very complex to solve in practice. Very few results are reported in the literature for the leather nesting problem. Furthermore, the majority of the approaches impose some additional constraints to the layouts related to the particular application that is considered. In this paper, we describe a variable neighborhood search algorithm for the general leather nesting problem. To evaluate the performance of our approaches, we conducted an extensive set of computational experiments on real instances. The results of these experiments are reported at the end of the paper.
Projective block Lanczos algorithm for dense, Hermitian eigensystems
Webster, F.; Lo, G.C.
1996-01-01
Projection operators are used to effect "deflation by restriction" and it is argued that this is an optimal Lanczos algorithm for memory minimization. Algorithmic optimization is constrained to dense, Hermitian eigensystems where a significant number of the extreme eigenvectors must be obtained reliably and completely. The defining constraints are operator algebra without a matrix representation and semi-orthogonalization without storage of Krylov vectors. Other semi-orthogonalization strategies for Lanczos algorithms and conjugate gradient techniques are evaluated within these constraints. Large-scale, sparse, complex numerical experiments are performed on clusters of magnetic dipoles, a quantum many-body system that is not block-diagonalizable. Plane-wave density functional theory of beryllium clusters provides examples of dense complex eigensystems. Use of preconditioners and spectral transformations is evaluated in a preprocessor prior to a high-accuracy self-consistent field calculation. 25 refs., 3 figs., 5 tabs
Sparse Nonlinear Electromagnetic Imaging Accelerated With Projected Steepest Descent Algorithm
Desmal, Abdulla
2017-04-03
An efficient electromagnetic inversion scheme for imaging sparse 3-D domains is proposed. The scheme achieves its efficiency and accuracy by integrating two concepts. First, the nonlinear optimization problem is constrained using L₀ or L₁-norm of the solution as the penalty term to alleviate the ill-posedness of the inverse problem. The resulting Tikhonov minimization problem is solved using nonlinear Landweber iterations (NLW). Second, the efficiency of the NLW is significantly increased using a steepest descent algorithm. The algorithm uses a projection operator to enforce the sparsity constraint by thresholding the solution at every iteration. Thresholding level and iteration step are selected carefully to increase the efficiency without sacrificing the convergence of the algorithm. Numerical results demonstrate the efficiency and accuracy of the proposed imaging scheme in reconstructing sparse 3-D dielectric profiles.
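The projection-by-thresholding step described above can be sketched, in simplified form, as iterative hard thresholding on a linear problem (the paper addresses nonlinear electromagnetic inversion with Landweber iterations; this toy keeps only the keep-the-k-largest projection idea, and all problem sizes are illustrative):

```python
import numpy as np

def iht(A, y, k, iters=300):
    # projected steepest descent: after each gradient step, keep only the
    # k largest-magnitude entries (projection onto the k-sparse set)
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x + step * A.T @ (y - A @ x)       # steepest-descent step
        keep = np.argsort(np.abs(g))[-k:]      # indices of k largest entries
        x = np.zeros_like(x)
        x[keep] = g[keep]                      # hard-thresholding projection
    return x

rng = np.random.default_rng(5)
A = rng.standard_normal((50, 120)) / np.sqrt(50)
x_true = np.zeros(120)
x_true[[3, 44, 90]] = [2.0, -1.0, 1.5]         # sparse "profile" to reconstruct
y = A @ x_true
x_hat = iht(A, y, k=3)
```

The thresholding level (here, a fixed sparsity k) and the step size play the same roles the abstract highlights: they trade efficiency against convergence.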
Liu, Jianming; Grant, Steven L.; Benesty, Jacob
2015-12-01
A new reweighted proportionate affine projection algorithm (RPAPA) with memory and row action projection (MRAP) is proposed in this paper. The reweighted PAPA is derived from a family of sparseness measures, which demonstrate performance similar to mu-law and the l0-norm PAPA but with lower computational complexity. The sparseness of the channel is taken into account to improve the performance for dispersive system identification. Meanwhile, the memory of the filter's coefficients is combined with row action projections (RAP) to significantly reduce computational complexity. Simulation results demonstrate that the proposed RPAPA MRAP algorithm outperforms both the affine projection algorithm (APA) and PAPA, and has performance similar to l0 PAPA and mu-law PAPA, in terms of convergence speed and tracking ability. Meanwhile, the proposed RPAPA MRAP has much lower computational complexity than PAPA, mu-law PAPA, and l0 PAPA, which makes it very appealing for real-time implementation.
Abejuela, Harmony Raylen; Osser, David N
2016-01-01
This revision of previous algorithms for the pharmacotherapy of generalized anxiety disorder was developed by the Psychopharmacology Algorithm Project at the Harvard South Shore Program. Algorithms from 1999 and 2010 and associated references were reevaluated. Newer studies and reviews published from 2008-14 were obtained from PubMed and analyzed with a focus on their potential to justify changes in the recommendations. Exceptions to the main algorithm for special patient populations, such as women of childbearing potential, pregnant women, the elderly, and those with common medical and psychiatric comorbidities, were considered. Selective serotonin reuptake inhibitors (SSRIs) are still the basic first-line medication. Early alternatives include duloxetine, buspirone, hydroxyzine, pregabalin, or bupropion, in that order. If response is inadequate, then the second recommendation is to try a different SSRI. Additional alternatives now include benzodiazepines, venlafaxine, kava, and agomelatine. If the response to the second SSRI is unsatisfactory, then the recommendation is to try a serotonin-norepinephrine reuptake inhibitor (SNRI). Other alternatives to SSRIs and SNRIs for treatment-resistant or treatment-intolerant patients include tricyclic antidepressants, second-generation antipsychotics, and valproate. This revision of the GAD algorithm responds to issues raised by new treatments under development (such as pregabalin) and organizes the evidence systematically for practical clinical application.
Research on calibration algorithm in laser scanning projection system
Li, Li Juan; Qu, Song; Hou, Mao Sheng
2017-10-01
Laser scanning projection technology can project an image defined by an existing CAD digital model onto the working surface, in the form of a laser harness profile, at a 1:1 ratio. Through laser harness contours with high positioning quality, technical staff can carry out operations with high precision. In a typical projection process, in order to determine the relative positional relationship between the laser projection instrument and the target, it is necessary to place several fixed reference points on the projection target and perform calibration of the projection. This position relationship is the transformation from the projection coordinate system to the global coordinate system. The entire projection work is divided into two steps: in the first step, the projector's six position parameters are calculated, that is, the projector is calibrated. In the second step, the deflection angle is calculated from the known projector position parameters and known coordinate points, and the actual model is then projected. Typically, calibration requires six reference points to reduce the possibility of divergence of the nonlinear equations, but the solution is very complex and may still diverge. In this paper, distance measurement is combined with the calculation so that the position parameters of the projector can be solved using the coordinate values of three reference points and the distance of at least one reference point to the projector. The addition of the distance measurement increases the stability of the solution of the nonlinear system and avoids the divergence caused by a reference point lying directly under the projector. Through actual analysis and calculation, the Taylor expansion method combined with the least squares method is used to obtain the solution of the system. Finally, the simulation experiment is
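A simplified planar analogue of the calibration step, assuming only distances to known reference points (the real problem solves for six pose parameters, and the reference geometry below is invented): Taylor-linearize the range equations around the current estimate and solve the resulting least-squares problem at each iteration, i.e. Gauss-Newton.

```python
import numpy as np

def locate(refs, dists, x0, iters=20):
    # solve ||x - refs_i|| = dists_i by first-order Taylor expansion
    # around the current estimate plus least squares (Gauss-Newton)
    x = np.asarray(x0, float)
    for _ in range(iters):
        diff = x - refs
        r = np.linalg.norm(diff, axis=1)        # predicted ranges
        J = diff / r[:, None]                   # Jacobian d r_i / d x
        dx, *_ = np.linalg.lstsq(J, dists - r, rcond=None)
        x = x + dx
    return x

refs = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 8.0]])  # known reference points
x_true = np.array([4.0, 3.0])                           # unknown position
dists = np.linalg.norm(x_true - refs, axis=1)           # measured distances
x_hat = locate(refs, dists, x0=[1.0, 1.0])
```

As in the paper, adding range measurements to the system stabilizes the linearized solve; poor reference geometry (e.g. nearly collinear points) would make the Jacobian ill-conditioned.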
The Texas medication algorithm project: clinical results for schizophrenia.
Miller, Alexander L; Crismon, M Lynn; Rush, A John; Chiles, John; Kashner, T Michael; Toprac, Marcia; Carmody, Thomas; Biggs, Melanie; Shores-Wilson, Kathy; Chiles, Judith; Witte, Brad; Bow-Thomas, Christine; Velligan, Dawn I; Trivedi, Madhukar; Suppes, Trisha; Shon, Steven
2004-01-01
In the Texas Medication Algorithm Project (TMAP), patients were given algorithm-guided treatment (ALGO) or treatment as usual (TAU). The ALGO intervention included a clinical coordinator to assist the physicians and administer a patient and family education program. The primary comparison in the schizophrenia module of TMAP was between patients seen in clinics in which ALGO was used (n = 165) and patients seen in clinics in which no algorithms were used (n = 144). A third group of patients, seen in clinics using an algorithm for bipolar or major depressive disorder but not for schizophrenia, was also studied (n = 156). The ALGO group had modestly greater improvement in symptoms (Brief Psychiatric Rating Scale) during the first quarter of treatment. The TAU group caught up by the end of 12 months. Cognitive functions were more improved in ALGO than in TAU at 3 months, and this difference was greater at 9 months (the final cognitive assessment). In secondary comparisons of ALGO with the second TAU group, the greater improvement in cognitive functioning was again noted, but the initial symptom difference was not significant.
Mohammad, Othman; Osser, David N
2014-01-01
This new algorithm for the pharmacotherapy of acute mania was developed by the Psychopharmacology Algorithm Project at the Harvard South Shore Program. The authors conducted a literature search in PubMed and reviewed key studies, other algorithms and guidelines, and their references. Treatments were prioritized considering three main considerations: (1) effectiveness in treating the current episode, (2) preventing potential relapses to depression, and (3) minimizing side effects over the short and long term. The algorithm presupposes that clinicians have made an accurate diagnosis, decided how to manage contributing medical causes (including substance misuse), discontinued antidepressants, and considered the patient's childbearing potential. We propose different algorithms for mixed and nonmixed mania. Patients with mixed mania may be treated first with a second-generation antipsychotic, of which the first choice is quetiapine because of its greater efficacy for depressive symptoms and episodes in bipolar disorder. Valproate and then either lithium or carbamazepine may be added. For nonmixed mania, lithium is the first-line recommendation. A second-generation antipsychotic can be added. Again, quetiapine is favored, but if quetiapine is unacceptable, risperidone is the next choice. Olanzapine is not considered a first-line treatment due to its long-term side effects, but it could be second-line. If the patient, whether mixed or nonmixed, is still refractory to the above medications, then depending on what has already been tried, consider carbamazepine, haloperidol, olanzapine, risperidone, and valproate first tier; aripiprazole, asenapine, and ziprasidone second tier; and clozapine third tier (because of its weaker evidence base and greater side effects). Electroconvulsive therapy may be considered at any point in the algorithm if the patient has a history of positive response or is intolerant of medications.
S. Selvi
2015-07-01
Full Text Available Grid computing solves high-performance and high-throughput computing problems by sharing resources ranging from personal computers to supercomputers distributed around the world. As grid environments facilitate distributed computation, the scheduling of grid jobs has become an important issue. In this paper, an investigation of implementing the Multiobjective Variable Neighborhood Search (MVNS) algorithm for scheduling independent jobs on a computational grid is carried out. The performance of the proposed algorithm has been evaluated against the Min–Min algorithm, Simulated Annealing (SA) and the Greedy Randomized Adaptive Search Procedure (GRASP) algorithm. Simulation results show that the MVNS algorithm generally performs better than the other metaheuristic methods.
Liu, Ke; Chen, Xiaojing; Li, Limin; Chen, Huiling; Ruan, Xiukai; Liu, Wenbin
2015-02-09
The successive projections algorithm (SPA) is widely used to select variables for multiple linear regression (MLR) modeling. However, SPA used only once may not capture all the useful information of the full spectra, because the number of selected variables cannot exceed the number of calibration samples in the SPA algorithm. Therefore, the SPA-MLR method risks the loss of useful information. To make full use of the useful information in the spectra, a new method named "consensus SPA-MLR" (C-SPA-MLR) is proposed herein. This method is the combination of a consensus strategy and the SPA-MLR method. In the C-SPA-MLR method, SPA-MLR is used to construct member models with different subsets of variables, which are selected from the remaining variables iteratively. A consensus prediction is obtained by combining the predictions of the member models. The proposed method is evaluated by analyzing the near infrared (NIR) spectra of corn and diesel. The results of the C-SPA-MLR method showed better prediction performance compared with the SPA-MLR and full-spectra PLS methods. Moreover, these results could serve as a reference for combining the consensus strategy with other variable selection methods when analyzing NIR spectra and other spectroscopic data. Copyright © 2014 Elsevier B.V. All rights reserved.
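A minimal sketch of the underlying SPA selection step (simplified: real SPA implementations also vary the starting column and validate candidate subsets against calibration data): starting from one column, repeatedly project the remaining columns onto the subspace orthogonal to the last selection and pick the column with the largest residual norm, which keeps the selected variables minimally collinear.

```python
import numpy as np

def spa(X, k, start=0):
    # successive projections: greedily pick the column with the largest
    # norm in the subspace orthogonal to the columns already selected
    Xp = X.astype(float).copy()
    selected = [start]
    for _ in range(k - 1):
        v = Xp[:, selected[-1]]
        Xp = Xp - np.outer(v, v @ Xp) / (v @ v)   # project out direction v
        norms = np.linalg.norm(Xp, axis=0)
        norms[selected] = -1.0                     # exclude chosen columns
        selected.append(int(np.argmax(norms)))
    return selected

# column 1 is collinear with column 0 (a scaled copy); SPA skips it
X = np.array([[1.0, 2.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.5]])
```

Here `spa(X, 2)` selects columns 0 and 2: after projecting out column 0, the collinear column 1 has zero residual norm, which is exactly the collinearity-avoidance property the abstract relies on.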
Azimi, Ehsan; Behrad, Alireza; Ghaznavi-Ghoushchi, Mohammad Bagher; Shanbehzadeh, Jamshid
2016-11-01
The projective model is an important mapping function for the calculation of global transformation between two images. However, its hardware implementation is challenging because of a large number of coefficients with different required precisions for fixed point representation. A VLSI hardware architecture is proposed for the calculation of a global projective model between input and reference images and refining false matches using random sample consensus (RANSAC) algorithm. To make the hardware implementation feasible, it is proved that the calculation of the projective model can be divided into four submodels comprising two translations, an affine model and a simpler projective mapping. This approach makes the hardware implementation feasible and considerably reduces the required number of bits for fixed point representation of model coefficients and intermediate variables. The proposed hardware architecture for the calculation of a global projective model using the RANSAC algorithm was implemented using Verilog hardware description language and the functionality of the design was validated through several experiments. The proposed architecture was synthesized by using an application-specific integrated circuit digital design flow utilizing 180-nm CMOS technology as well as a Virtex-6 field programmable gate array. Experimental results confirm the efficiency of the proposed hardware architecture in comparison with software implementation.
Low-Energy Real-Time OS Using Voltage Scheduling Algorithm for Variable Voltage Processors
Okuma, Takanori; Yasuura, Hiroto
2001-01-01
This paper presents a real-time OS based on μITRON using a proposed voltage scheduling algorithm for variable-voltage processors, which can vary their supply voltage dynamically. The proposed voltage scheduling algorithm assigns a voltage level to each task dynamically in order to minimize energy consumption under timing constraints. Using the presented real-time OS, running tasks at low supply voltage leads to drastic energy reduction. In addition, the presented voltage scheduling algorithm is ...
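As a toy illustration of why lowering the voltage saves energy under a deadline (using the standard CMOS approximations E ∝ V² · cycles and f ∝ V; the voltage/frequency levels below are hypothetical, and the paper's algorithm assigns levels dynamically across many tasks):

```python
def schedule_voltage(cycles, deadline, levels):
    # for a single task, pick the lowest voltage level (each level is a
    # (voltage, frequency) pair) whose execution time still meets the
    # deadline; energy model: E ∝ V^2 * cycles (CMOS switching energy)
    for volt, freq in sorted(levels):          # try low voltage first
        if cycles / freq <= deadline:
            return volt, freq, volt**2 * cycles
    volt, freq = max(levels)                   # fall back to maximum speed
    return volt, freq, volt**2 * cycles

levels = [(1.0, 100e6), (1.8, 250e6), (3.3, 500e6)]  # hypothetical (V, Hz) pairs
# 20e6 cycles, 0.1 s deadline: 100 MHz is too slow (0.2 s), so the scheduler
# settles on 1.8 V / 250 MHz, far cheaper than running at 3.3 V
v, f, e = schedule_voltage(20e6, 0.1, levels)
```

The key point, as in the abstract, is that the slowest voltage meeting the timing constraint minimizes energy, since switching energy grows quadratically with supply voltage.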
Weissman, Alexander
2013-01-01
Convergence of the expectation-maximization (EM) algorithm to a global optimum of the marginal log likelihood function for unconstrained latent variable models with categorical indicators is presented. The sufficient conditions under which global convergence of the EM algorithm is attainable are provided in an information-theoretic context by…
Homaifar, Abdollah; Esterline, Albert; Kimiaghalam, Bahram
2005-01-01
The Hybrid Projected Gradient-Evolutionary Search Algorithm (HPGES) algorithm uses a specially designed evolutionary-based global search strategy to efficiently create candidate solutions in the solution space...
Ishioka, R.; Wang, S.-Y.; Zhang, Z.-W.; Lehner, M. J.; Cook, K. H.; King, S.-K.; Lee, T.; Marshall, S. L.; Schwamb, M. E.; Wang, J.-H.; Wen, C.-Y. [Institute of Astronomy and Astrophysics, Academia Sinica, 11F of Astronomy-Mathematics Building, National Taiwan University, No. 1, Sec. 4, Roosevelt Road, Taipei 10617, Taiwan (China); Alcock, C.; Protopapas, P. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Axelrod, T. [Steward Observatory, 933 North Cherry Avenue, Room N204, Tucson, AZ 85721 (United States); Bianco, F. B. [Center for Cosmology and Particle Physics, New York University, 4 Washington Place, New York, NY 10003 (United States); Byun, Y.-I. [Department of Astronomy and University Observatory, Yonsei University, 134 Shinchon, Seoul 120-749 (Korea, Republic of); Chen, W. P.; Ngeow, C.-C. [Institute of Astronomy, National Central University, No. 300, Jhongda Road, Jhongli City, Taoyuan County 320, Taiwan (China); Kim, D.-W. [Max Planck Institute for Astronomy, Königstuhl 17, D-69117 Heidelberg (Germany); Rice, J. A., E-mail: ishioka@asiaa.sinica.edu.tw [Department of Statistics, University of California Berkeley, 367 Evans Hall, Berkeley, CA 94720 (United States)
2014-04-01
The Taiwanese-American Occultation Survey project is designed for the detection of stellar occultations by small-size Kuiper Belt Objects, and it has monitored selected fields along the ecliptic plane by using four telescopes with a 3 deg² field of view on the sky since 2005. We have analyzed data accumulated during 2005-2012 to detect variable stars. Sixteen fields with observations of more than 100 epochs were examined. We recovered 85 variables among a total of 158 known variable stars in these 16 fields. Most of the unrecovered variables are located in the fields observed less frequently. We also detected 58 variable stars which are not listed in the International Variable Star Index of the American Association of Variable Star Observers. These variable stars are classified as 3 RR Lyrae, 4 Cepheid, 1 δ Scuti, 5 Mira, 15 semi-regular, and 27 eclipsing binaries based on the periodicity and the profile of the light curves.
Simultaneous and semi-alternating projection algorithms for solving split equality problems.
Dong, Qiao-Li; Jiang, Dan
2018-01-01
In this article, we first introduce two simultaneous projection algorithms for solving the split equality problem by using a new choice of the stepsize, and then propose two semi-alternating projection algorithms. The weak convergence of the proposed algorithms is analyzed under standard conditions. As applications, we extend the results to solve the split feasibility problem. Finally, a numerical example is presented to illustrate the efficiency and advantage of the proposed algorithms.
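The classical building block behind these methods is alternating projection onto convex sets. The sketch below shows the textbook scheme for two hyperplanes (not the paper's semi-alternating algorithm or its new stepsize choice): iterating the two projections drives the point to the intersection.

```python
import numpy as np

def project_hyperplane(x, a, b):
    """Euclidean projection of x onto the hyperplane {z : a.z = b}."""
    return x - (a @ x - b) / (a @ a) * a

# Two lines in R^2; alternating projections converge to their intersection
a1, b1 = np.array([1.0, 1.0]), 1.0    # x + y = 1
a2, b2 = np.array([2.0, -1.0]), 0.0   # y = 2x
x = np.array([5.0, -3.0])
for _ in range(100):
    x = project_hyperplane(project_hyperplane(x, a1, b1), a2, b2)
```

The iterate converges geometrically to the intersection point (1/3, 2/3); the split equality and split feasibility problems generalize this picture to sets linked by linear operators.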
Lee Tae-Hoon
2016-12-01
In many cases, an X̄ control chart based on a performance variable is used in industrial fields. Typically, the control chart monitors the measurements of the performance variable itself. However, if the performance variable is too costly or impossible to measure, and a less expensive surrogate variable is available, the process may be controlled more efficiently using surrogate variables. In this paper, we present a model for the economic statistical design of a VSI (variable sampling interval) X̄ control chart using a surrogate variable that is linearly correlated with the performance variable. We derive the total average profit model from an economic viewpoint, apply the model to a Very High Temperature Reactor (VHTR) nuclear fuel measurement system, and derive the optimal result using genetic algorithms. Compared with the control chart based on a performance variable, the proposed model gives a larger expected net income per unit of time in the long run if the correlation between the performance variable and the surrogate variable is relatively high. The proposed model was confined to the sample-mean control chart under the assumption that a single assignable cause occurs according to a Poisson process. However, the model may also be extended to other types of control charts using single or multiple assignable cause assumptions, such as the VSS (variable sample size) X̄ control chart, EWMA, and CUSUM charts.
Impact of internal variability on projections of Sahel precipitation change
Monerie, Paul-Arthur; Sanchez-Gomez, Emilia; Pohl, Benjamin; Robson, Jon; Dong, Buwen
2017-11-01
The impact of the increase of greenhouse gases on Sahelian precipitation is very uncertain in both its spatial pattern and magnitude. In particular, the relative importance of internal variability versus external forcings depends on the time horizon considered in the climate projection. In this study we address the respective roles of the internal climate variability versus external forcings on Sahelian precipitation by using the data from the CESM Large Ensemble Project, which consists of a 40 member ensemble performed with the CESM1-CAM5 coupled model for the period 1920-2100. We show that CESM1-CAM5 is able to simulate the mean and interannual variability of Sahel precipitation, and is representative of a CMIP5 ensemble of simulations (i.e. it simulates the same pattern of precipitation change along with equivalent magnitude and seasonal cycle changes as the CMIP5 ensemble mean). However, CESM1-CAM5 underestimates the long-term decadal variability in Sahel precipitation. For short-term (2010-2049) and mid-term (2030-2069) projections the simulated internal variability component is able to obscure the projected impact of the external forcing. For long-term (2060-2099) projections external forcing induced change becomes stronger than simulated internal variability. Precipitation changes are found to be more robust over the central Sahel than over the western Sahel, where climate change effects struggle to emerge. Ten (thirty) members are needed to separate the 10 year averaged forced response from climate internal variability response in the western Sahel for a long-term (short-term) horizon. Over the central Sahel two members (ten members) are needed for a long-term (short-term) horizon.
Zhang, Ling; Cai, Yunlong; Li, Chunguang; de Lamare, Rodrigo C.
2017-12-01
In this work, we present low-complexity variable forgetting factor (VFF) techniques for diffusion recursive least squares (DRLS) algorithms. In particular, we propose low-complexity VFF-DRLS algorithms for distributed parameter and spectrum estimation in sensor networks. The proposed algorithms adjust the forgetting factor automatically according to the a posteriori error signal. We develop detailed analyses of the mean and mean-square performance of the proposed algorithms and derive mathematical expressions for the mean square deviation (MSD) and the excess mean square error (EMSE). The simulation results show that the proposed low-complexity VFF-DRLS algorithms achieve performance superior to the existing DRLS algorithm with a fixed forgetting factor when applied to distributed parameter and spectrum estimation. The simulation results also demonstrate a good match with the proposed analytical expressions.
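A single-node RLS filter with an error-driven forgetting factor illustrates the mechanism. The VFF rule below (shrink λ when the error is large, so adaptation speeds up) is a generic heuristic, not the paper's low-complexity update, and the diffusion step across sensor nodes is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([0.5, -1.0, 2.0])   # hypothetical parameters to estimate

n = len(w_true)
w = np.zeros(n)
P = np.eye(n) * 100.0                 # inverse-correlation estimate
lam_min, lam_max = 0.90, 0.999
for _ in range(500):
    x = rng.standard_normal(n)
    d = w_true @ x + 0.01 * rng.standard_normal()
    e = d - w @ x                                          # a priori error
    # Variable forgetting factor: large errors -> faster forgetting
    lam = float(np.clip(lam_max - 0.5 * e * e, lam_min, lam_max))
    k = P @ x / (lam + x @ P @ x)                          # RLS gain
    w = w + k * e
    P = (P - np.outer(k, x) @ P) / lam
```

After 500 samples the estimate w sits close to w_true; with a fixed λ one must instead trade tracking speed against steady-state misadjustment by hand.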
An Ensemble Successive Project Algorithm for Liquor Detection Using Near Infrared Sensor.
Qu, Fangfang; Ren, Dong; Wang, Jihua; Zhang, Zhong; Lu, Na; Meng, Lei
2016-01-11
Spectral analysis based on near-infrared (NIR) sensors is a powerful tool for complex information processing and high-precision recognition, and it has been widely applied to quality analysis and online inspection of agricultural products. This paper proposes a new method to address the instability of small sample sizes in the successive projections algorithm (SPA) as well as the lack of association between the selected variables and the analyte. The proposed method is an evaluated bootstrap ensemble SPA method (EBSPA) based on a variable evaluation index (EI) for variable selection, and it is applied to the quantitative prediction of alcohol concentrations in liquor using an NIR sensor. In the experiment, the proposed EBSPA is tested with three kinds of modeling methods, and EBSPA combined with partial least squares is compared with other state-of-the-art variable selection methods. The results show that the proposed method overcomes the defects of SPA and has the best generalization performance and stability. Furthermore, the physical meaning of the variables selected from the near-infrared sensor data is clear, which can effectively reduce the number of variables and improve prediction accuracy.
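The SPA core that EBSPA builds on can be sketched in a few lines: starting from an initial wavelength, repeatedly pick the candidate column with the largest norm after projecting out the already-selected columns, so the chosen variables have minimal collinearity. This is a minimal illustration, not the authors' bootstrap-ensemble version.

```python
import numpy as np

def spa(X, k0, n_select):
    """Minimal successive projections algorithm (SPA): greedily select
    columns with the largest norm in the orthogonal complement of the
    columns already selected."""
    Xp = X.astype(float).copy()
    selected = [k0]
    for _ in range(n_select - 1):
        xk = Xp[:, selected[-1]]
        # Deflate every column against the most recently selected one
        Xp = Xp - np.outer(xk, xk @ Xp) / (xk @ xk)
        norms = np.linalg.norm(Xp, axis=0)
        norms[selected] = -1.0          # exclude already-selected columns
        selected.append(int(np.argmax(norms)))
    return selected

# Column 2 is collinear with column 0, so SPA should skip it
u = np.array([1.0, 0.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0, 0.0])
X = np.column_stack([u, v, 2 * u])
picked = spa(X, k0=0, n_select=2)
```

Starting from column 0, the deflation zeroes out the collinear column 2, so the algorithm picks the orthogonal column 1 instead.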
A segmentation algorithm based on image projection for complex text layout
Zhu, Wangsheng; Chen, Qin; Wei, Chuanyi; Li, Ziyang
2017-10-01
Segmentation is an important part of layout analysis. Considering the efficiency of the top-down approach and the particularities of the target documents, a projection-based layout segmentation algorithm is proposed. The algorithm first partitions the text image into several columns, then scans each column with a projection profile, dividing the text image into several sub-regions through multiple projections. The experimental results show that this method inherits the fast computation of projection itself, avoids the effect of arc-shaped image distortion on page segmentation, and can accurately segment text images with complex layouts.
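The projection-profile step at the heart of such segmenters is simple: sum the ink pixels along one axis and cut wherever the profile drops to zero. A minimal sketch on a toy binary image:

```python
import numpy as np

def split_by_projection(img, axis=1):
    """Split a binary image (1 = ink) into bands along the other axis
    wherever the projection profile along `axis` is zero.
    Returns (start, end) index pairs for each band."""
    profile = img.sum(axis=axis)
    bands, start = [], None
    for i, value in enumerate(profile):
        if value > 0 and start is None:
            start = i                     # band begins
        elif value == 0 and start is not None:
            bands.append((start, i))      # band ends at the blank run
            start = None
    if start is not None:
        bands.append((start, len(profile)))
    return bands

# Two text lines separated by one blank row
img = np.array([[1, 1, 0],
                [0, 0, 0],
                [1, 0, 1],
                [1, 1, 1]])
lines = split_by_projection(img, axis=1)
```

Running the same routine with axis=0 splits columns; applying it alternately per column gives the multi-pass scheme the abstract describes.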
Shen-yan Chen
2015-01-01
This paper presents an Improved Genetic Algorithm with Two-Level Approximation (IGATA) to minimize truss weight by simultaneously optimizing size, shape, and topology variables. On the basis of a previously presented truss sizing/topology optimization method based on two-level approximation and a genetic algorithm (GA), a new method for adding shape variables is presented, in which the nodal positions correspond to a set of coordinate lists. A uniform optimization model including size/shape/topology variables is established. First, a first-level approximate problem is constructed to transform the original implicit problem into an explicit one. To solve this explicit problem, which involves size/shape/topology variables, the GA is used to optimize individuals which include discrete topology variables and shape variables. When calculating the fitness value of each member of the current generation, a second-level approximation method is used to optimize the continuous size variables. With the introduction of shape variables, the original optimization algorithm was improved in its individual coding strategy as well as in its GA execution techniques. Meanwhile, the update strategy for the first-level approximation problem was also improved. The results of numerical examples show that the proposed method is effective in dealing with the three kinds of design variables simultaneously, and the required computational cost for structural analysis is quite small.
Unifying parameter estimation and the Deutsch-Jozsa algorithm for continuous variables
Zwierz, Marcin; Perez-Delgado, Carlos A.; Kok, Pieter
2010-01-01
We reveal a close relationship between quantum metrology and the Deutsch-Jozsa algorithm on continuous-variable quantum systems. We develop a general procedure, characterized by two parameters, that unifies parameter estimation and the Deutsch-Jozsa algorithm. Depending on which parameter we keep constant, the procedure implements either the parameter-estimation protocol or the Deutsch-Jozsa algorithm. The parameter-estimation part of the procedure attains the Heisenberg limit and is therefore optimal. Due to the use of approximate normalizable continuous-variable eigenstates, the Deutsch-Jozsa algorithm is probabilistic. The procedure estimates a value of an unknown parameter and solves the Deutsch-Jozsa problem without the use of any entanglement.
Yue Wu
2017-01-01
Firefly Algorithm (FA for short) is inspired by the social behavior of fireflies and their phenomenon of bioluminescent communication. Based on the fundamentals of FA, two improved strategies are proposed to conduct size and topology optimization for trusses with discrete design variables. First, the development of structural topology optimization methods and the basic principles of the standard FA are introduced in detail. Then, in order to apply the algorithm to optimization problems with discrete variables, the initial positions of the fireflies and the position-updating formula are discretized. By embedding random weights and enhancing the attractiveness, the performance of the algorithm is improved, and thus an Improved Firefly Algorithm (IFA for short) is proposed. Furthermore, using size variables that are capable of encoding topology variables, size and topology optimization for trusses with discrete variables is formulated based on the Ground Structure Approach. The essential techniques of variable elastic modulus technology and geometric construction analysis are applied in the structural analysis process. Subsequently, an optimization method for the size and topological design of trusses based on the IFA is introduced. Finally, two numerical examples are shown to verify the feasibility and efficiency of the proposed method through comparison with different deterministic methods.
C. Fernandez-Lozano
2013-01-01
Given the background of the use of neural networks in problems of apple juice classification, this paper aims at implementing a newly developed method in the field of machine learning: Support Vector Machines (SVM). A hybrid model that combines genetic algorithms and support vector machines is suggested in such a way that, when using the SVM as the fitness function of the Genetic Algorithm (GA), the most representative variables for a specific classification problem can be selected.
The continuous-variable Deutsch–Jozsa algorithm using realistic quantum systems
Wagner, Rob C; Kendon, Viv M
2012-01-01
This paper is a study of the continuous-variable Deutsch–Jozsa algorithm. First, we review an existing version of the algorithm for qunat states (Pati and Braunstein 2002 arXiv:0207108v1), and then, we present a realistic version of the Deutsch–Jozsa algorithm for continuous variables, which can be implemented in a physical quantum system given the appropriate oracle. Under these conditions, we have a probabilistic algorithm for deciding the function with a very high success rate with a single call to the oracle. Finally, we look at the effects of errors in both of these continuous-variable algorithms and how they affect the chances of success. We find that the algorithm is generally robust for errors in initialization and the oracle, but less so for errors in the measurement apparatus and the Fourier transform. This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical devoted to ‘Coherent states: mathematical and physical aspects’. (paper)
A chaos wolf optimization algorithm with self-adaptive variable step-size
Yong Zhu
2017-10-01
To explore the problem of parameter optimization for complex nonlinear functions, a chaos wolf optimization algorithm (CWOA) with self-adaptive variable step-size was proposed. The algorithm was based on the swarm intelligence of the wolf pack, which fully simulated the predation behavior and prey distribution of wolves. It possessed three intelligent behaviors: migration, summons, and siege. The competition rule of "winner-take-all" and the update mechanism of "survival of the fittest" were also characteristics of the algorithm. Moreover, it combined the strategies of self-adaptive variable step-size search and chaos optimization. The CWOA was utilized in parameter optimization of twelve typical and complex nonlinear functions, and the obtained results were compared with many existing algorithms, including the classical genetic algorithm, the particle swarm optimization algorithm, and the leader wolf pack search algorithm. The investigation results indicate that the CWOA possesses preferable optimization ability, with advantages in optimization accuracy and convergence rate. Furthermore, it demonstrates high robustness and global searching ability.
Korean Medication Algorithm Project for Bipolar Disorder: third revision.
Woo, Young Sup; Lee, Jung Goo; Jeong, Jong-Hyun; Kim, Moon-Doo; Sohn, Inki; Shim, Se-Hoon; Jon, Duk-In; Seo, Jeong Seok; Shin, Young-Chul; Min, Kyung Joon; Yoon, Bo-Hyun; Bahk, Won-Myong
2015-01-01
To constitute the third revision of the guidelines for the treatment of bipolar disorder issued by the Korean Medication Algorithm Project for Bipolar Disorder (KMAP-BP 2014). A 56-item questionnaire was used to obtain the consensus of experts regarding pharmacological treatment strategies for the various phases of bipolar disorder and for special populations. The review committee included 110 Korean psychiatrists and 38 experts for child and adolescent psychiatry. Of the committee members, 64 general psychiatrists and 23 child and adolescent psychiatrists responded to the survey. The treatment of choice (TOC) for euphoric, mixed, and psychotic mania was the combination of a mood stabilizer (MS) and an atypical antipsychotic (AAP); the TOC for acute mild depression was monotherapy with MS or AAP; and the TOC for moderate or severe depression was MS plus AAP/antidepressant. The first-line maintenance treatment following mania or depression was MS monotherapy or MS plus AAP; the first-line treatment after mania was AAP monotherapy; and the first-line treatment after depression was lamotrigine (LTG) monotherapy, LTG plus MS/AAP, or MS plus AAP plus LTG. The first-line treatment strategy for mania in children and adolescents was MS plus AAP or AAP monotherapy. For geriatric bipolar patients, the TOC for mania was AAP/MS monotherapy, and the TOC for depression was AAP plus MS or AAP monotherapy. The expert consensus in the KMAP-BP 2014 differed from that in previous publications; most notably, the preference for AAP was increased in the treatment of acute mania, depression, and maintenance treatment. There was increased expert preference for the use of AAP and LTG. The major limitation of the present study is that it was based on the consensus of Korean experts rather than on experimental evidence.
Increasing Prediction the Original Final Year Project of Student Using Genetic Algorithm
Saragih, Rijois Iboy Erwin; Turnip, Mardi; Sitanggang, Delima; Aritonang, Mendarissan; Harianja, Eva
2018-04-01
The final year project is very important for the graduation of a student. Unfortunately, many students are not serious about their final projects, and many ask someone else to do the work for them. In this paper, an application of genetic algorithms to predict the originality of a student's final year project is proposed. In the simulation, data on final projects from the last 5 years were collected. The genetic algorithm has several operators, namely population, selection, crossover, and mutation. The results suggest that the genetic algorithm predicts better than other comparable models; experimental prediction results, at 70%, were more accurate than those of the previous research.
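The four operators named above (population, selection, crossover, mutation) can be shown on a toy maximization problem. This sketch solves OneMax rather than the authors' originality-prediction task, whose fitness function is not specified here.

```python
import random

random.seed(42)

def ga_onemax(n_bits=20, pop_size=30, generations=60, p_mut=0.02):
    """Toy GA: population, tournament selection, one-point crossover,
    and bit-flip mutation, maximizing the number of 1s (OneMax)."""
    fitness = lambda ind: sum(ind)
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def pick():                       # tournament selection, size 3
            return max(random.sample(pop, 3), key=fitness)
        nxt = []
        while len(nxt) < pop_size:
            a, b = pick(), pick()
            cut = random.randrange(1, n_bits)          # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < p_mut else g
                     for g in child]                   # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = ga_onemax()
```

Swapping in a fitness function that scores similarity between a submitted project and a historical corpus would turn the same skeleton into a predictor of the kind the abstract describes.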
Projective-Dual Method for Solving Systems of Linear Equations with Nonnegative Variables
Ganin, B. V.; Golikov, A. I.; Evtushenko, Yu. G.
2018-02-01
In order to solve an underdetermined system of linear equations with nonnegative variables, the projection of a given point onto its solution set is sought. The dual of this problem, the unconstrained maximization of a piecewise-quadratic function, is solved by Newton's method. The problem of unconstrained optimization dual to the regularized problem of finding the projection onto the solution set of the system is considered. A connection between duality theory, Newton's method, and some known algorithms for projecting onto a standard simplex is shown. Using the example of the constraints of the transport linear programming problem, the possibility of increasing the efficiency of calculating the generalized Hessian matrix is demonstrated. Some examples of numerical calculations using MATLAB are presented.
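One of the "known algorithms of projecting onto a standard simplex" that the abstract alludes to is the classic sort-and-threshold method; a minimal version (a standard algorithm, not necessarily the paper's) is:

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of v onto {w : w >= 0, sum(w) = 1} by the
    sort-and-threshold method: find the shift theta so that
    max(v - theta, 0) sums to one."""
    u = np.sort(v)[::-1]                  # sort descending
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u - (css - 1.0) / idx > 0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)

w = project_to_simplex(np.array([0.9, 0.8, -0.5]))
```

A point already on the simplex is returned unchanged (theta comes out zero), and negative coordinates are clipped away by the threshold.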
A chaos-based image encryption algorithm with variable control parameters
Wang Yong; Wong, K.-W.; Liao Xiaofeng; Xiang Tao; Chen Guanrong
2009-01-01
In recent years, a number of image encryption algorithms based on the permutation-diffusion structure have been proposed. However, the control parameters used in the permutation stage are usually fixed in the whole encryption process, which favors attacks. In this paper, a chaos-based image encryption algorithm with variable control parameters is proposed. The control parameters used in the permutation stage and the keystream employed in the diffusion stage are generated from two chaotic maps related to the plain-image. As a result, the algorithm can effectively resist all known attacks against permutation-diffusion architectures. Theoretical analyses and computer simulations both confirm that the new algorithm possesses high security and fast encryption speed for practical image encryption.
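The permutation-diffusion structure can be sketched with a logistic-map keystream. For brevity this toy uses fixed control parameters, whereas the paper's point is precisely that they should be derived from the plain-image; the map, byte quantization, and parameter values here are illustrative choices.

```python
import numpy as np

def chaotic_sequence(x0, r, n):
    """Logistic-map orbit x <- r*x*(1-x), the source of both stages."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return xs

def encrypt(data: bytes, x0=0.31, r=3.99) -> bytes:
    xs = chaotic_sequence(x0, r, 2 * len(data))
    perm = np.argsort(xs[: len(data)])              # permutation stage
    ks = (xs[len(data):] * 256).astype(np.uint8)    # diffusion keystream
    arr = np.frombuffer(data, dtype=np.uint8)[perm]
    return (arr ^ ks).tobytes()

def decrypt(data: bytes, x0=0.31, r=3.99) -> bytes:
    xs = chaotic_sequence(x0, r, 2 * len(data))
    perm = np.argsort(xs[: len(data)])
    ks = (xs[len(data):] * 256).astype(np.uint8)
    arr = np.frombuffer(data, dtype=np.uint8) ^ ks  # undo diffusion
    inv = np.empty_like(perm)
    inv[perm] = np.arange(len(perm))                # invert permutation
    return arr[inv].tobytes()

cipher = encrypt(b"attack at dawn")
plain = decrypt(cipher)
```

Making x0 and r depend on a digest of the plain-image, as the paper proposes, is what defeats chosen-plaintext attacks on this architecture.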
Sánchez-Oro, J.; Duarte, A.; Salcedo-Sanz, S.
2016-01-01
Highlights:
• The total energy demand in Spain is estimated with a Variable Neighborhood Search algorithm.
• Socio-economic variables are used, and a one-year-ahead prediction horizon is considered.
• Improvement of the prediction with an Extreme Learning Machine network is considered.
• Experiments are carried out on real data for the case of Spain.
Abstract: Energy demand prediction is an important problem whose solution is evaluated by policy makers in order to take key decisions affecting the economy of a country. A number of previous approaches to improve the quality of this estimation have been proposed in the last decade, the majority of them applying different machine learning techniques. In this paper, the performance of a robust hybrid approach, composed of a Variable Neighborhood Search algorithm and a new class of neural network called Extreme Learning Machine, is discussed. The Variable Neighborhood Search algorithm is focused on obtaining the most relevant features among the set of initial ones, by including an exponential prediction model. While previous approaches consider the number of macroeconomic variables used for prediction to be a parameter of the algorithm (i.e., fixed a priori), the proposed Variable Neighborhood Search method optimizes both the number of variables and the best ones. After this first step of feature selection, an Extreme Learning Machine network is applied to obtain the final energy demand prediction. Experiments on a real case of energy demand estimation in Spain show the excellent performance of the proposed approach. In particular, the whole method obtains an estimation of the energy demand with an error lower than 2%, even when considering the crisis years, which are a real challenge.
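The feature-selection role of Variable Neighborhood Search can be sketched on synthetic data. The shake/local-search loop below is the textbook VNS skeleton (not the paper's exponential prediction model, and a least-squares cost stands in for the Extreme Learning Machine); feature indices and the per-feature penalty are made up for the example.

```python
import random
import numpy as np

rng = np.random.default_rng(1)
random.seed(1)

# Synthetic data: only features 0 and 3 drive the target
X = rng.standard_normal((200, 6))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.01 * rng.standard_normal(200)

def cost(mask):
    """Least-squares residual on the selected features, plus a small
    penalty per feature so smaller subsets win ties."""
    idx = [i for i, m in enumerate(mask) if m]
    if not idx:
        return float(np.sum(y * y))
    coef, *_ = np.linalg.lstsq(X[:, idx], y, rcond=None)
    return float(np.sum((X[:, idx] @ coef - y) ** 2)) + 0.1 * len(idx)

def vns(n_features=6, k_max=3, iters=30):
    """Basic VNS: shake k bits, improve by single-bit-flip local search,
    restart at the smallest neighborhood on improvement."""
    best = [random.randint(0, 1) for _ in range(n_features)]
    for _ in range(iters):
        k = 1
        while k <= k_max:
            cand = best[:]
            for i in random.sample(range(n_features), k):   # shake
                cand[i] = 1 - cand[i]
            improved = True
            while improved:                                  # local search
                improved = False
                for i in range(n_features):
                    trial = cand[:]
                    trial[i] = 1 - trial[i]
                    if cost(trial) < cost(cand):
                        cand, improved = trial, True
            if cost(cand) < cost(best):
                best, k = cand, 1                            # recenter
            else:
                k += 1                                       # widen
    return best

mask = vns()
```

On this data the search settles on exactly the informative features, mirroring how the paper's VNS chooses both how many macroeconomic variables to use and which ones.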
Iman Yousefi
2015-01-01
This paper presents parameter estimation of a Permanent Magnet Synchronous Motor (PMSM) using a combinatorial algorithm. A nonlinear fourth-order state-space model of the PMSM is selected. This model is rewritten in linear regression form without linearization. Noise is imposed on the system in order to provide realistic conditions, and then the combined Orthogonal Projection Algorithm and Recursive Least Squares (OPA&RLS) method is applied in the linear regression form. Results of this method are compared to those of the Orthogonal Projection Algorithm (OPA) and Recursive Least Squares (RLS) methods to validate the feasibility of the proposed approach. Simulation results validate the efficacy of the proposed algorithm.
AUTOCLASSIFICATION OF THE VARIABLE 3XMM SOURCES USING THE RANDOM FOREST MACHINE LEARNING ALGORITHM
Farrell, Sean A.; Murphy, Tara; Lo, Kitty K.
2015-01-01
In the current era of large surveys and massive data sets, autoclassification of astrophysical sources using intelligent algorithms is becoming increasingly important. In this paper we present the catalog of variable sources in the Third XMM-Newton Serendipitous Source catalog (3XMM) autoclassified using the Random Forest machine learning algorithm. We used a sample of manually classified variable sources from the second data release of the XMM-Newton catalogs (2XMMi-DR2) to train the classifier, obtaining an accuracy of ∼92%. We also evaluated the effectiveness of identifying spurious detections using a sample of spurious sources, achieving an accuracy of ∼95%. Manual investigation of a random sample of classified sources confirmed these accuracy levels and showed that the Random Forest machine learning algorithm is highly effective at automatically classifying 3XMM sources. Here we present the catalog of classified 3XMM variable sources. We also present three previously unidentified unusual sources that were flagged as outlier sources by the algorithm: a new candidate supergiant fast X-ray transient, a 400 s X-ray pulsar, and an eclipsing 5 hr binary system coincident with a known Cepheid.
Hegde, Veena; Deekshit, Ravishankar; Satyanarayana, P. S.
2011-12-01
The electrocardiogram (ECG) is widely used for diagnosis of heart diseases. Good quality of ECG is utilized by physicians for interpretation and identification of physiological and pathological phenomena. However, in real situations, ECG recordings are often corrupted by artifacts or noise. Noise severely limits the utility of the recorded ECG and thus needs to be removed, for better clinical evaluation. In the present paper a new noise cancellation technique is proposed for removal of random noise like muscle artifact from ECG signal. A transform domain robust variable step size Griffiths' LMS algorithm (TVGLMS) is proposed for noise cancellation. For the TVGLMS, the robust variable step size has been achieved by using the Griffiths' gradient which uses cross-correlation between the desired signal contaminated with observation or random noise and the input. The algorithm is discrete cosine transform (DCT) based and uses symmetric property of the signal to represent the signal in frequency domain with lesser number of frequency coefficients when compared to that of discrete Fourier transform (DFT). The algorithm is implemented for adaptive line enhancer (ALE) filter which extracts the ECG signal in a noisy environment using LMS filter adaptation. The proposed algorithm is found to have better convergence error/misadjustment when compared to that of ordinary transform domain LMS (TLMS) algorithm, both in the presence of white/colored observation noise. The reduction in convergence error achieved by the new algorithm with desired signal decomposition is found to be lower than that obtained without decomposition. The experimental results indicate that the proposed method is better than traditional adaptive filter using LMS algorithm in the aspects of retaining geometrical characteristics of ECG signal.
Simulated Annealing Genetic Algorithm Based Schedule Risk Management of IT Outsourcing Project
Fuqiang Lu
2017-01-01
IT outsourcing is an effective way for many enterprises to enhance their core competitiveness, but the schedule risk of an IT outsourcing project may cause enormous economic losses. In this paper, Distributed Decision Making (DDM) theory and principal-agent theory are used to build a model for schedule risk management of IT outsourcing projects. In addition, a hybrid algorithm combining simulated annealing (SA) and a genetic algorithm (GA) is designed, namely the simulated annealing genetic algorithm (SAGA). The effect of the proposed model on the schedule risk management problem is analyzed in a simulation experiment. The simulation results for the three algorithms GA, SA, and SAGA show that SAGA is superior to the other two in terms of stability and convergence. Consequently, this paper provides a scientific, quantitative proposal for decision makers who need to manage the schedule risk of IT outsourcing projects.
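The SA-within-GA hybrid can be sketched generically: offspring produced by crossover and mutation are accepted against their better parent under a Metropolis criterion with a cooling temperature. This is a minimal sketch on an arbitrary continuous objective; the paper's schedule-risk model, solution encoding and parameter values are not reproduced here.

```python
import math, random

def saga_minimize(f, dim, pop=20, gens=60, t0=1.0, cooling=0.95, seed=1):
    """Generic SA+GA hybrid sketch: GA variation operators,
    SA-style acceptance of offspring, geometric cooling."""
    rng = random.Random(seed)
    P = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
    t = t0
    best = min(P, key=f)
    for _ in range(gens):
        Q = []
        for _ in range(pop):
            a, b = rng.sample(P, 2)
            cut = rng.randrange(dim)                        # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g + rng.gauss(0, 0.3) for g in child]  # Gaussian mutation
            parent = a if f(a) <= f(b) else b
            d = f(child) - f(parent)
            # simulated-annealing acceptance of the offspring
            if d < 0 or rng.random() < math.exp(-d / max(t, 1e-9)):
                Q.append(child)
            else:
                Q.append(parent)
        P = Q
        t *= cooling                                        # cool the schedule
        best = min(P + [best], key=f)
    return best, f(best)
```

As the temperature decays, acceptance becomes effectively greedy, which is the usual rationale for SAGA-style hybrids: early diversification from SA, late convergence from GA selection pressure.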
WANG, Qingrong; ZHU, Changfeng
2017-06-01
The integration of distributed heterogeneous data sources is a key issue in big data applications. In this paper, the strategy of variable precision is introduced into the concept lattice, and a one-to-one mapping between the variable precision concept lattice and the ontology concept lattice is constructed. A local ontology is produced by building the variable precision concept lattice for each subsystem, and a distributed generation algorithm for variable precision concept lattices over ontology-based heterogeneous databases is proposed, drawing on the special relationship between concept lattices and ontology construction. Finally, taking the main concept lattice generated from the existing heterogeneous database as the standard, a case study is carried out to test the feasibility and validity of the algorithm, and the differences between the main concept lattice and the standard concept lattice are compared. The analysis results show that the algorithm can automatically carry out the construction of distributed concept lattices over heterogeneous data sources.
Redundancy allocation of series-parallel systems using a variable neighborhood search algorithm
Liang, Y.-C. [Department of Industrial Engineering and Management, Yuan Ze University, No 135 Yuan-Tung Road, Chung-Li, Taoyuan County, Taiwan 320 (China)]. E-mail: ycliang@saturn.yzu.edu.tw; Chen, Y.-C. [Department of Industrial Engineering and Management, Yuan Ze University, No 135 Yuan-Tung Road, Chung-Li, Taoyuan County, Taiwan 320 (China)]. E-mail: s927523@mail.yzu.edu.tw
2007-03-15
This paper applies a meta-heuristic algorithm, variable neighborhood search (VNS), to the redundancy allocation problem (RAP). The RAP, an NP-hard problem, has attracted much prior research, generally in a restricted form where each subsystem must consist of identical components. Newer meta-heuristic methods overcome this limitation and offer a practical way to solve large instances of the relaxed RAP, where different components can be used in parallel. The authors' previously published work has shown promise for the variable neighborhood descent (VND) method, the simplest of the VNS variants, on the RAP. The VNS method itself has not previously been used in reliability design, yet it fits combinatorial problems with natural neighborhood structures, as is the case for the RAP. The authors therefore extended their work to develop a VNS algorithm for the RAP and tested it on a set of well-known benchmark problems from the literature. Results on 33 test instances, ranging from lightly to severely constrained conditions, show that VNS improves on the performance of VND and provides competitive solution quality at economical computational expense in comparison with the best-known heuristics, including ant colony optimization, genetic algorithms, and tabu search.
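The basic VNS loop the abstract describes, local search plus "shaking" in systematically larger neighborhoods, can be sketched on a generic binary encoding. This is a stand-in for a RAP solution vector; the paper's reliability objective and constraints are not reproduced here.

```python
import random

def vns_min(f, n, k_max=4, iters=200, seed=0):
    """Skeleton of basic VNS minimizing f over binary vectors of length n."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]

    def local_search(s):
        improved = True
        while improved:                       # repeated 1-flip descent
            improved = False
            for i in range(n):
                t = s[:]
                t[i] ^= 1
                if f(t) < f(s):
                    s, improved = t, True
        return s

    x = local_search(x)
    for _ in range(iters):
        k = 1
        while k <= k_max:
            y = x[:]
            for i in rng.sample(range(n), k):  # shake: flip k random bits
                y[i] ^= 1
            y = local_search(y)
            if f(y) < f(x):
                x, k = y, 1                    # improvement: restart at k = 1
            else:
                k += 1                         # escalate neighborhood size
    return x, f(x)
```

The defining VNS move is visible in the inner loop: on improvement the search returns to the smallest neighborhood, otherwise it escalates to a larger one, which is what distinguishes VNS from plain VND.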
Fast alternating projected gradient descent algorithms for recovering spectrally sparse signals
Cho, Myung; Cai, Jian-Feng; Liu, Suhui; Eldar, Yonina C.; Xu, Weiyu
2016-01-01
We propose fast algorithms that speed up or improve the performance of recovering spectrally sparse signals from underdetermined measurements. Our algorithms are based on a non-convex approach using alternating projected gradient descent for structured matrix recovery. We apply this approach to two formulations of structured matrix recovery: Hankel and Toeplitz mosaic structured matrices, and Hankel structured matrices. Our methods provide better recovery performance and faster signal recovery than existing algorithms, including atomic norm minimization.
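A simpler relative of the alternating projected gradient idea, which conveys the flavor of Hankel structured matrix recovery, is Cadzow's method: alternate a projection onto low-rank matrices (truncated SVD) with a projection onto Hankel structure (averaging anti-diagonals). The sketch below uses that simplification and is not the authors' algorithm:

```python
import numpy as np

def hankel_project(H):
    """Project onto Hankel structure by averaging each anti-diagonal."""
    m, n = H.shape
    out = np.empty_like(H, dtype=float)
    for s in range(m + n - 1):
        idx = [(i, s - i) for i in range(max(0, s - n + 1), min(m, s + 1))]
        avg = np.mean([H[i, j] for i, j in idx])
        for i, j in idx:
            out[i, j] = avg
    return out

def low_rank_project(H, r):
    """Project onto rank-r matrices via truncated SVD."""
    U, sv, Vt = np.linalg.svd(H, full_matrices=False)
    return (U[:, :r] * sv[:r]) @ Vt[:r]

def cadzow(H, r, iters=50):
    """Alternating projections between the rank-r and Hankel sets."""
    for _ in range(iters):
        H = hankel_project(low_rank_project(H, r))
    return H
```

A noisy Hankel matrix built from a single real sinusoid (rank 2) is driven back toward the intersection of the two sets, which is the denoising effect the structured-recovery formulations exploit.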
The JPSS Ground Project Algorithm Verification, Test and Evaluation System
Vicente, G. A.; Jain, P.; Chander, G.; Nguyen, V. T.; Dixon, V.
2016-12-01
The Government Resource for Algorithm Verification, Independent Test, and Evaluation (GRAVITE) is an operational system that provides services to the Suomi National Polar-orbiting Partnership (S-NPP) mission. It is also a unique environment for Calibration/Validation (Cal/Val) and Data Quality Assessment (DQA) of Joint Polar Satellite System (JPSS) mission data products. GRAVITE provides fast, direct access to the data and products created by the Interface Data Processing Segment (IDPS), the NASA/NOAA operational system that converts Raw Data Records (RDRs) generated by sensors on the S-NPP into calibrated, geolocated Sensor Data Records (SDRs) and generates Mission Unique Products (MUPs). It also facilitates algorithm investigation, integration, checkout and tuning; instrument and product calibration and data quality support; monitoring; and data/product distribution. GRAVITE is the portal for the latest S-NPP and JPSS baselined Processing Coefficient Tables (PCTs) and Look-Up Tables (LUTs) and hosts a number of DQA offline tools that take advantage of its proximity to the near-real-time data flows. It also contains a set of automated and ad hoc Cal/Val tools used for algorithm analysis and updates, including an instance of the IDPS called the GRAVITE Algorithm Development Area (G-ADA), which runs the latest installation of the IDPS algorithms on identical software and hardware platforms. Two other important GRAVITE components are the Investigator-led Processing System (IPS) and the Investigator Computing Facility (ICF). The IPS is a dedicated environment where authorized users run automated scripts called Product Generation Executables (PGEs) to support Cal/Val and data quality assurance offline. This data-rich, data-driven service holds its own distribution system and allows operators to retrieve science data products. The ICF is a workspace where users can share computing applications and resources and have full access to libraries and
GENERAL ALGORITHMIC SCHEMA OF THE PROCESS OF THE CHILL AUXILIARIES PROJECTION
A. N. Chichko
2006-01-01
A general algorithmic diagram systematizing the existing approaches to the design process is offered, and the foundation of a computer system for the design of chill mold tooling is laid.
Volkov transform generalized projection algorithm for attosecond pulse characterization
Keathley, P D; Bhardwaj, S; Moses, J; Laurent, G; Kärtner, F X
2016-01-01
An algorithm for characterizing attosecond extreme ultraviolet pulses that is not bandwidth-limited, requires no interpolation of the experimental data, and makes no approximations beyond the strong-field approximation is introduced. This approach fully incorporates the dipole transition matrix element into the retrieval process. Unlike attosecond retrieval methods such as phase retrieval by omega oscillation filtering (PROOF), or improved PROOF, it simultaneously retrieves both the attosecond and infrared (IR) pulses, without placing fundamental restrictions on the IR pulse duration, intensity or bandwidth. The new algorithm is validated both numerically and experimentally, and is also found to have practical advantages. These include an increased robustness to noise, and relaxed requirements for the size of the experimental dataset and the intensity of the streaking pulse.
Anisotropic conductivity imaging with MREIT using equipotential projection algorithm
Degirmenci, Evren [Department of Electrical and Electronics Engineering, Mersin University, Mersin (Turkey); Eyueboglu, B Murat [Department of Electrical and Electronics Engineering, Middle East Technical University, 06531, Ankara (Turkey)
2007-12-21
Magnetic resonance electrical impedance tomography (MREIT) combines magnetic flux or current density measurements obtained by magnetic resonance imaging (MRI) with surface potential measurements to reconstruct images of true conductivity with high spatial resolution. Most biological tissues have anisotropic conductivity; therefore, anisotropy should be taken into account in conductivity image reconstruction. Almost all MREIT reconstruction algorithms proposed to date assume an isotropic conductivity distribution. In this study, a novel MREIT image reconstruction algorithm is proposed to image anisotropic conductivity. Relative anisotropic conductivity values are reconstructed iteratively, using only current density measurements without any potential measurements. To obtain true conductivity values, a single potential or conductivity measurement is sufficient to determine the scaling factor. The proposed technique is evaluated on simulated data for isotropic and anisotropic conductivity distributions, with and without measurement noise. Simulation results show that images of both anisotropic and isotropic conductivity distributions can be reconstructed successfully.
Muñoz, Gonzalo; Espinoza, Daniel; Goycoolea, Marcos; Moreno, Eduardo; Queyranne, Maurice; Rivera, Orlando
2016-01-01
We study a Lagrangian decomposition algorithm recently proposed by Dan Bienstock and Mark Zuckerberg for solving the LP relaxation of a class of open pit mine project scheduling problems. In this study we show that the Bienstock-Zuckerberg (BZ) algorithm can be used to solve LP relaxations corresponding to a much broader class of scheduling problems, including the well-known Resource Constrained Project Scheduling Problem (RCPSP), and multi-modal variants of the RCPSP that consider batch proc...
A note on the von Neumann alternating projections algorithm
Kopecká, Eva; Reich, S.
2004-01-01
Vol. 5, No. 3 (2004), pp. 379-386, ISSN 1345-4773. Institutional research plan: CEZ:AV0Z1019905. Keywords: alternating orthogonal projections; Hilbert space; nearest point mapping. Subject RIV: BA - General Mathematics
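The von Neumann scheme this note concerns can be illustrated with two one-dimensional subspaces of R^2, where each full alternation contracts the iterate toward the intersection (here, the origin). A minimal sketch:

```python
import numpy as np

def alternating_projections(x0, P1, P2, iters=500):
    """von Neumann alternating projections: apply P1 then P2 repeatedly."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = P2(P1(x))
    return x

def proj_onto_line(v):
    """Orthogonal projection onto the line spanned by v."""
    v = np.asarray(v, dtype=float) / np.linalg.norm(v)
    return lambda x: v * (v @ x)
```

For two lines through the origin at angle 45 degrees, each alternation shrinks the iterate's norm by cos^2(45 degrees) = 1/2, so the iterates converge to the intersection point, which is the content of the classical convergence theorem for subspaces.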
Modelling and control algorithms of the cross conveyors line with multiengine variable speed drives
Cheremushkina, M. S.; Baburin, S. V.
2017-02-01
The paper deals with the practical problem of developing a control algorithm that meets the technical requirements of mine belt conveyors and enables energy and resource savings while taking into account the random nature of traffic. The most effective approach to these tasks is to build control systems that use variable speed drives with asynchronous motors. The authors designed a mathematical model of the system 'variable speed multiengine drive - conveyor - conveyor control system' that takes into account the dynamic processes occurring in the elements of the transport system and provides an assessment of the energy efficiency of the developed algorithms, which allows the dynamic overload in the belt to be reduced to 15-20%.
An algorithm for the design and tuning of RF accelerating structures with variable cell lengths
Lal, Shankar; Pant, K. K.
2018-05-01
An algorithm is proposed for the design of a π-mode standing wave buncher structure with variable cell lengths. It employs a two-parameter, multi-step approach to design the structure for the desired resonant frequency and field flatness. The algorithm, along with analytical scaling laws for the design of the RF power coupling slot, makes it possible to accurately design the structure using a freely available electromagnetic code such as SUPERFISH. To compensate for machining errors, a tuning method has been devised to achieve the desired RF parameters for the structure; it has been qualified by the successful tuning of a 7-cell buncher to a π-mode frequency of 2856 MHz with the desired field flatness. The algorithm and tuning method have demonstrated the feasibility of developing an S-band accelerating structure with desired RF parameters at a relatively relaxed machining tolerance of ∼25 μm. This paper discusses the algorithm for the design and tuning of an RF accelerating structure with variable cell lengths.
Human activity and climate variability project: annual report 2001
Harle, K.J.; Heijnis, H.; Henderson-Sellers, A.; Sharmeen, S.; Zahorowski, W.
2002-01-01
Knowledge of the state of the Australian environment, including natural climate variability, prior to colonial settlement is vital if we are to define and understand the impact of over two hundred years of post-industrial human activity on our landscape. ANSTO, in conjunction with university partners, is leading a major research effort to provide natural archives of human activity and climate variability over the last 500 years in Australia, utilising a variety of techniques, including lead-210 and radiocarbon dating and analyses of proxy indicators (such as microfossils) as well as direct evidence (such as trace elements) of human activity and climate variability. The other major project objectives were to contribute to the understanding of the impact of human-induced and natural aerosols in the East Asian region on climate, through analysis and sourcing of fine particles and characterisation of air samples using radon concentrations, and to contribute to the improvement of land surface parameterisation schemes and investigate the potential of stable isotopes to improve global climate models and thus our understanding of future climate.
Yongyi Shou
2014-01-01
A multiagent evolutionary algorithm is proposed to solve the resource-constrained project portfolio selection and scheduling problem. The proposed algorithm has a two-level structure. In the upper level, a set of agents make decisions to select appropriate project portfolios; each agent selects its portfolio independently. Neighborhood competition and self-learning operators are designed to improve an agent's energy, that is, its portfolio profit. In the lower level, the selected projects are scheduled simultaneously and completion times are computed to estimate the expected portfolio profit; a priority rule-based heuristic is used by each agent to solve the multiproject scheduling problem. A set of instances was generated systematically from the widely used Patterson set. Computational experiments confirmed that the proposed evolutionary algorithm is effective for the resource-constrained project portfolio selection and scheduling problem.
Variable threshold algorithm for division of labor analyzed as a dynamical system.
Castillo-Cagigal, Manuel; Matallanas, Eduardo; Navarro, Iñaki; Caamaño-Martín, Estefanía; Monasterio-Huelin, Félix; Gutiérrez, Álvaro
2014-12-01
Division of labor is a widely studied aspect of colony behavior of social insects. Division of labor models indicate how individuals distribute themselves in order to perform different tasks simultaneously. However, models that study division of labor from a dynamical system point of view cannot be found in the literature. In this paper, we define a division of labor model as a discrete-time dynamical system, in order to study the equilibrium points and their properties related to convergence and stability. By making use of this analytical model, an adaptive algorithm based on division of labor can be designed to satisfy dynamic criteria. In this way, we have designed and tested an algorithm that varies the response thresholds in order to modify the dynamic behavior of the system. This behavior modification allows the system to adapt to specific environmental and collective situations, making the algorithm a good candidate for distributed control applications. The variable threshold algorithm is based on specialization mechanisms. It is able to achieve an asymptotically stable behavior of the system in different environments and independently of the number of individuals. The algorithm has been successfully tested under several initial conditions and number of individuals.
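The response-threshold mechanism described above can be sketched as a discrete-time simulation. The engagement probability s^2/(s^2 + theta^2) is the standard fixed-threshold form from the division-of-labor literature; the stimulus dynamics and adaptation constants below are illustrative assumptions, not the paper's model.

```python
import random

def simulate(n_agents=10, steps=500, delta=0.1, seed=3):
    """Variable response-threshold sketch: an agent engages a task with
    probability s^2 / (s^2 + theta^2); performing the task lowers its
    threshold, idling raises it, producing specialization over time."""
    rng = random.Random(seed)
    theta = [rng.uniform(1.0, 10.0) for _ in range(n_agents)]
    s = 0.0                                    # shared task stimulus
    active_history = []
    for _ in range(steps):
        s += 1.0                               # task demand grows each tick
        active = 0
        for i in range(n_agents):
            p = s * s / (s * s + theta[i] ** 2)
            if rng.random() < p:
                active += 1
                s = max(0.0, s - 0.5)          # work reduces the stimulus
                theta[i] = max(0.1, theta[i] - delta)   # specialize
            else:
                theta[i] = min(20.0, theta[i] + delta)  # de-specialize
        active_history.append(active)
    return theta, active_history
```

Varying the thresholds this way is the adaptation mechanism the abstract refers to: agents whose thresholds drift down become reliable workers for the task, while the rest disengage, and the collective response stabilizes independently of group size.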
Muller, Laurent Flindt
2009-01-01
We present an application of an Adaptive Large Neighborhood Search (ALNS) algorithm to the Resource-Constrained Project Scheduling Problem (RCPSP). The ALNS framework was first proposed by Pisinger and Røpke [19] and can be described as a large neighborhood search algorithm with an adaptive layer, where a set of destroy/repair neighborhoods compete to modify the current solution in each iteration of the algorithm. Experiments are performed on the well-known J30, J60 and J120 benchmark instances, which show that the proposed algorithm is competitive and confirms the strength of the ALNS framework...
SDART: An algorithm for discrete tomography from noisy projections
F. Bleichrodt (Folkert); F. Tabak (Frank); K.J. Batenburg (Joost)
2014-01-01
Computed tomography is a noninvasive technique for reconstructing an object from projection data. If the object consists of only a few materials, discrete tomography allows us to use prior knowledge of the gray values corresponding to these materials to improve the accuracy of the
Another note on the von Neumann alternating projections algorithm
Kopecká, Eva; Reich, S.
2010-01-01
Vol. 11, No. 3 (2010), pp. 455-460, ISSN 1345-4773. Institutional research plan: CEZ:AV0Z10190503. Keywords: projection; iteration; Hilbert space. Subject RIV: BA - General Mathematics. Impact factor: 0.738, year: 2010. http://www.ybook.co.jp/online/jncae/vol11/p455.html
The GHG-CCI Project to Deliver the Essential Climate Variable Greenhouse Gases: Current status
Buchwitz, M.; Boesch, H.; Reuter, M.
2012-04-01
The GHG-CCI project (http://www.esa-ghg-cci.org) is one of several projects of ESA's Climate Change Initiative (CCI), which will deliver various Essential Climate Variables (ECVs). The goal of GHG-CCI is to deliver global satellite-derived data sets of the two most important anthropogenic greenhouse gases (GHGs), carbon dioxide (CO2) and methane (CH4), suitable for obtaining information on regional CO2 and CH4 surface sources and sinks as needed for better climate prediction. The GHG-CCI core ECV data products are column-averaged mole fractions of CO2 and CH4, XCO2 and XCH4, retrieved from SCIAMACHY on ENVISAT and TANSO on GOSAT. Other satellite instruments, most notably IASI, MIPAS, and ACE-FTS, will be used to provide constraints in upper layers. Which of the advanced algorithms under development will be the best for a given data product still needs to be determined. For each of the four GHG-CCI core data products - XCO2 and XCH4 from SCIAMACHY and GOSAT - several algorithms are being further developed, and the corresponding data products are inter-compared to identify which is the most appropriate. This includes comparisons with corresponding data products generated elsewhere, most notably the operational GOSAT data products generated at NIES and the NASA/ACOS GOSAT XCO2 product. This activity, the so-called "Round Robin exercise", will be performed in the first two years of the project. At the end of the two-year Round Robin phase (end of August 2012), a decision will be made as to which of the algorithms performs best. The selected algorithms will be used to generate the first version of the GHG ECV. In the last six months of this three-year project, the resulting data products will be validated and made available to all interested users. In the presentation, an overview of the project will be given, focusing on the latest results.
The Impact of Organization, Project and Governance Variables on Software Quality and Project Success
Abbas, Noura; Gravell, Andy; Wills, Gary
2010-01-01
In this paper we present statistically tested evidence about how quality and success rate correlate with variables reflecting the organization and aspects of its projects' governance, namely retrospectives and metrics. The results presented in this paper are based on the Agile Projects Governance Survey, which collected 129 responses. This paper discusses the deep analysis of this survey, and the main findings suggest that when applying agile software development, the quality of software i...
Projection pursuit water quality evaluation model based on chicken swarm algorithm
Hu, Zhe
2018-03-01
In view of the uncertainty and ambiguity of each index in water quality evaluation, and in order to resolve the incompatibility of evaluation results across individual water quality indexes, a projection pursuit model based on the chicken swarm algorithm is proposed. A projection index function that reflects the water quality condition is constructed; the chicken swarm algorithm (CSA) is introduced to optimize the projection index function and seek its best projection direction; and the best projection value is obtained to realize the water quality evaluation. Comparison of this method with other methods shows that it is reasonable and feasible for providing a decision-making basis for water pollution control in the basin.
Yingning Qiu
2016-07-01
Although Permanent Magnet Synchronous Generator (PMSG) wind turbines (WTs) mitigate gearbox impacts, they require high reliability of generators and converters. Statistical analysis shows that the failure rates of direct-drive PMSG wind turbines' generators and inverters are high. Intelligent fault diagnosis algorithms to detect inverter faults are a premise for condition monitoring systems aimed at improving wind turbines' reliability and availability. The influences of random wind speed and diversified control strategies pose challenges for developing intelligent fault diagnosis algorithms for converters. This paper studies open-circuit fault features of wind turbine converters in variable wind speed situations through systematic simulation and experiment. A new fault diagnosis algorithm named Wind Speed Based Normalized Current Trajectory is proposed and used to accurately detect and locate faulted IGBTs in the circuit arms. It is compared to direct current monitoring and current vector trajectory pattern approaches. The results show that the proposed method has advantages in the accuracy of fault diagnosis and superior anti-noise capability in variable wind speed situations. The impact of the control strategy is also identified. Experimental results demonstrate its applicability to practical WT condition monitoring systems, which are used to improve wind turbine reliability and reduce maintenance cost.
Stanimirović Ivan
2009-01-01
We introduce a heuristic method for the single resource-constrained project scheduling problem, based on the dynamic programming solution of the knapsack problem. This method schedules projects with one type of resource in the non-preemptive case: once started, an activity is not interrupted and runs to completion. We compare our implementation of this method with the well-known heuristic scheduling method called Minimum Slack First (also known as the Gray-Kidd algorithm), as well as with Microsoft Project.
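The building block such a method rests on is the classic 0/1 knapsack dynamic program; at each decision point the heuristic can pick a value-maximizing subset of eligible activities that fits the resource capacity. A sketch of that building block (the scheduling wrapper that feeds activities into it is omitted):

```python
def knapsack(capacity, weights, values):
    """Classic 0/1 knapsack DP with item recovery.
    dp[i][c] = best value using the first i items within capacity c."""
    n = len(weights)
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(capacity + 1):
            dp[i][c] = dp[i - 1][c]                      # skip item i-1
            if weights[i - 1] <= c:                      # or take it
                dp[i][c] = max(dp[i][c],
                               dp[i - 1][c - weights[i - 1]] + values[i - 1])
    # backtrack to recover the chosen item set
    chosen, c = [], capacity
    for i in range(n, 0, -1):
        if dp[i][c] != dp[i - 1][c]:
            chosen.append(i - 1)
            c -= weights[i - 1]
    return dp[n][capacity], sorted(chosen)
```

For example, with capacity 10, weights [5, 4, 6, 3] and values [10, 40, 30, 50], the optimum takes items 1 and 3 for a total value of 90.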
Resource Allocation in a Repetitive Project Scheduling Using Genetic Algorithm
Samuel, Biju; Mathew, Jeeno
2018-03-01
Resource allocation is the procedure of assigning the available resources in an economical and productive way. It involves scheduling the available resources and the required activities while considering both resource availability and the total project completion time. Resource provisioning and allocation addresses that issue by permitting service providers to manage the resources for every individual resource request. A probabilistic selection procedure has been developed in order to ensure varied selection of chromosomes.
Gang Qin
2015-01-01
The acceleration performance of an electric vehicle (EV), which affects many aspects of EV performance such as start-up, overtaking, driving safety, and ride comfort, has become an increasingly popular topic in recent research. An improved variable gain PID control algorithm to improve the acceleration performance is proposed in this paper. Simulation results with Matlab/Simulink demonstrate the effectiveness of the proposed algorithm through the control performance of motor velocity, motor torque, and three-phase motor current. Moreover, the proposed controller is validated by comparison with other PID controllers. Furthermore, an AC induction motor experimental setup is constructed to verify the effect of the proposed controller.
Wang, Chun; Ji, Zhicheng; Wang, Yan
2017-07-01
In this paper, the multi-objective flexible job shop scheduling problem (MOFJSP) is studied with the objectives of minimizing makespan, total workload and critical workload. A variable neighborhood evolutionary algorithm (VNEA) is proposed to obtain a set of Pareto optimal solutions. First, two novel crowding operators in terms of the decision space and objective space are proposed and used in mating selection and environmental selection, respectively. Then, two well-designed neighborhood structures are used in local search, which consider the problem characteristics and ensure fast convergence. Finally, extensive comparison is carried out with state-of-the-art methods specially presented for solving the MOFJSP on well-known benchmark instances. The results show that the proposed VNEA is more effective than the other algorithms in solving the MOFJSP.
Generalized algorithm for X-ray projections generation in cone-beam tomography
Qin Zhongyuan; Mu Xuanqin; Wang Ping; Cai Yuanlong; Hou Chuanjian
2002-01-01
In order to remove random factors in the measurement and thereby support subsequent 3D reconstruction, a general approach is presented for obtaining X-ray projections in cone-beam tomography. The phantom is first discretized into a cubic volume through an inverse transformation, and a generalized projection procedure is then applied to the digitized result without regard to what the phantom actually is. In the second step, line integrals are calculated to obtain the projection of each X-ray by accumulating tri-linear interpolations. To handle projection angles, a rotation matrix is applied to the X-ray source and the detector plane, so projections at arbitrary angles can be obtained. In this approach the algorithm is easy to extend, and irregular objects can also be processed. The algorithm is implemented in Visual C++ and experiments are performed using different models. Satisfactory results are obtained, providing good preparation for the subsequent reconstruction.
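The per-ray computation described, accumulating tri-linearly interpolated volume samples along the source-to-detector segment, can be sketched as follows (uniform sampling along the segment; the rotation of source and detector plane is left out):

```python
import numpy as np

def ray_integral(vol, src, dst, n_samples=200):
    """Line-integral sketch: sum trilinearly interpolated samples of a
    voxel volume along the segment src -> dst, weighted by step length."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    ts = np.linspace(0.0, 1.0, n_samples)
    step = np.linalg.norm(dst - src) / (n_samples - 1)
    total = 0.0
    for t in ts:
        p = src + t * (dst - src)
        i0 = np.floor(p).astype(int)          # base voxel corner
        f = p - i0                            # fractional position in cell
        if np.any(i0 < 0) or np.any(i0 + 1 >= vol.shape):
            continue                          # sample falls outside the volume
        acc = 0.0
        for dx in (0, 1):                     # trilinear interpolation over
            for dy in (0, 1):                 # the 8 surrounding voxels
                for dz in (0, 1):
                    w = ((1 - f[0]) if dx == 0 else f[0]) * \
                        ((1 - f[1]) if dy == 0 else f[1]) * \
                        ((1 - f[2]) if dz == 0 else f[2])
                    acc += w * vol[i0[0] + dx, i0[1] + dy, i0[2] + dz]
        total += acc * step
    return total
```

On a uniform volume of ones, the integral should approximate the geometric path length of the segment through the volume, which is a simple sanity check for a projector of this kind.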
An optimized outlier detection algorithm for jury-based grading of engineering design projects
Thompson, Mary Kathryn; Espensen, Christina; Clemmensen, Line Katrine Harder
2016-01-01
This work characterizes and optimizes an outlier detection algorithm to identify potentially invalid scores produced by jury members while grading engineering design projects. The paper describes the original algorithm and the associated adjudication process in detail. The impact of the various... (the base rule and the three additional conditions) play a role in the algorithm's performance and should be included in the algorithm. Because there is significant interaction between the base rule and the additional conditions, many acceptable combinations that balance the FPR and FNR can be found, but no true optimum seems to exist. The performance of the best optimizations and the original algorithm are similar. Therefore, it should be possible to choose new coefficient values for jury populations in other cultures and contexts logically and empirically, without a full optimization, as long...
Stolzmann, Paul [Massachusetts General Hospital and Harvard Medical School, Cardiac MR PET CT Program, Boston, MA (United States); University Hospital Zurich, Institute of Diagnostic and Interventional Radiology, Zurich (Switzerland); Schlett, Christopher L.; Maurovich-Horvat, Pal; Scheffel, Hans; Engel, Leif-Christopher; Karolyi, Mihaly; Hoffmann, Udo [Massachusetts General Hospital and Harvard Medical School, Cardiac MR PET CT Program, Boston, MA (United States); Maehara, Akiko; Ma, Shixin; Mintz, Gary S. [Columbia University Medical Center, Cardiovascular Research Foundation, New York, NY (United States)
2012-10-15
To systematically assess the inter-technique and inter-/intra-reader variability of coronary CT angiography (CTA) in measuring plaque burden compared with intravascular ultrasound (IVUS), and to determine whether iterative reconstruction algorithms affect variability. IVUS and CTA data were acquired from nine human coronary arteries ex vivo. CT images were reconstructed using filtered back projection (FBPR) and iterative reconstruction algorithms: adaptive-statistical (ASIR) and model-based (MBIR). After co-registration of 284 cross-sections between IVUS and CTA, two readers manually delineated the cross-sectional plaque area in all images, presented in random order. Average plaque burden by IVUS was 63.7 ± 10.7% and correlated significantly with all CTA measurements (r = 0.45-0.52; P < 0.001), while CTA overestimated the burden by 10 ± 10%. There were no significant differences among FBPR, ASIR and MBIR (P > 0.05). Increased overestimation was associated with smaller plaques, eccentricity and calcification (P < 0.001). Reproducibility of plaque burden from the CTA and IVUS datasets was excellent, with low mean intra-/inter-reader variability of <1%/<4% for CTA and <0.5%/<1% for IVUS, respectively (P < 0.05), and no significant difference between CT reconstruction algorithms (P > 0.05). In ex vivo coronary arteries, plaque burden by coronary CTA had extremely low inter-/intra-reader variability and correlated significantly with IVUS measurements. Accuracy as well as reader reliability were independent of the CT image reconstruction algorithm.
Stevendaal, U. van; Schlomka, J.-P.; Harding, A.; Grass, M.
2003-01-01
Coherent scatter computed tomography (CSCT) is a reconstructive x-ray imaging technique that yields the spatially resolved coherent-scatter form factor of the investigated object. Reconstruction from coherently scattered x-rays is commonly done using algebraic reconstruction techniques (ART). In this paper, we propose an alternative approach based on filtered back-projection. For the first time, a three-dimensional (3D) filtered back-projection technique using curved 3D back-projection lines is applied to two-dimensional coherent scatter projection data. The proposed algorithm is tested with simulated projection data as well as with projection data acquired with a demonstrator setup similar to a multi-line CT scanner geometry. While yielding image quality comparable to that of ART reconstruction, the modified 3D filtered back-projection algorithm is about two orders of magnitude faster. In contrast to iterative reconstruction schemes, it has the advantage that subfield-of-view reconstruction becomes feasible. This allows a selective reconstruction of the coherent-scatter form factor for a region of interest. The proposed modified 3D filtered back-projection algorithm is a powerful reconstruction technique to be implemented in a CSCT scanning system. This method gives coherent scatter CT the potential to become a competitive modality for medical imaging or nondestructive testing.
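The filtered back-projection pipeline described above (filter each projection, then smear it back along its projection lines) can be sketched in miniature for the ordinary 2D parallel-beam case; the curved 3D back-projection lines of CSCT generalize this idea. Everything below (grid size, angles, the point phantom) is invented for illustration, not taken from the paper.

```python
import math

# Minimal 2D parallel-beam FBP sketch (pure Python, illustrative only):
# filter each projection with a discrete ramp (Ram-Lak) kernel, then
# back-project the filtered values across the image grid.
N = 21                        # image is N x N, coordinates -10..10
src = (3, -2)                 # point source location (x, y)
angles = [k * math.pi / 36 for k in range(36)]   # 36 views over 180 deg
nbins = 31                    # detector bins, t = -15..15

def project(x, y):
    """Radon transform of a unit point source: one hit bin per angle."""
    sino = []
    for a in angles:
        row = [0.0] * nbins
        b = int(round(x * math.cos(a) + y * math.sin(a))) + nbins // 2
        row[b] = 1.0
        sino.append(row)
    return sino

def ramp(n):
    """Discrete Ram-Lak filter kernel (unit detector spacing)."""
    if n == 0:
        return 0.25
    return 0.0 if n % 2 == 0 else -1.0 / (math.pi * n) ** 2

def fbp(sino):
    # 1) convolve each projection row with the ramp kernel
    filt = [[sum(row[j] * ramp(i - j) for j in range(nbins))
             for i in range(nbins)] for row in sino]
    # 2) smear (back-project) each filtered row across the image
    img = [[0.0] * N for _ in range(N)]
    for a, row in zip(angles, filt):
        for iy in range(N):
            for ix in range(N):
                x, y = ix - N // 2, iy - N // 2
                b = int(round(x * math.cos(a) + y * math.sin(a))) + nbins // 2
                if 0 <= b < nbins:
                    img[iy][ix] += row[b]
    return img

img = fbp(project(*src))
peak = max((img[iy][ix], ix - N // 2, iy - N // 2)
           for iy in range(N) for ix in range(N))
print(peak[1], peak[2])   # brightest pixel recovers the source position
```

The negative side lobes of the ramp kernel are what cancel the characteristic 1/r blur of plain (unfiltered) back-projection.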
Algorithm for evaluating the effectiveness of a high-rise development project based on current yield
Soboleva, Elena
2018-03-01
The article addresses the operational evaluation of development-project efficiency in high-rise construction under the current economic conditions in Russia. The author touches on the following issues: problems of implementing development projects, the influence of the quality of operational evaluation of high-rise construction projects on overall efficiency, the influence of the project's external environment on the effectiveness of project activities under crisis conditions, and the quality of project management. The article proposes an algorithm and a methodological approach to quality management of developer-project efficiency based on operational evaluation of current yield. The methodology for calculating the current efficiency of a development project for high-rise construction has been updated accordingly.
Neural network algorithm for image reconstruction using the grid friendly projections
Cierniak, R.
2011-01-01
Full text: The paper describes an original approach to the reconstruction problem using a recurrent neural network. In particular, the 'grid-friendly' angles of the performed projections are selected according to the discrete Radon transform (DRT) concept to decrease the number of projections required. The methodology of the approach is consistent with analytical reconstruction algorithms. The reconstruction problem is reformulated as an optimization problem, which is solved using a method based on the maximum-likelihood methodology. The reconstruction algorithm is then adapted to the more practical case of discrete fan-beam projections. Computer simulation results show that the neural-network reconstruction algorithm improves on conventional methods in reconstructed image quality. (author)
Multichannel Filtered-X Error Coded Affine Projection-Like Algorithm with Evolving Order
J. G. Avalos
2017-01-01
Affine projection (AP) algorithms are commonly used to implement active noise control (ANC) systems because they provide fast convergence. However, their high computational complexity can restrict their use in certain practical applications. The Error Coded Affine Projection-Like (ECAP-L) algorithm has been proposed to reduce the computational burden while maintaining the speed of AP, but no version of this algorithm has been derived for active noise control, for which the adaptive structures are very different from those of other configurations. In this paper, we introduce a version of the ECAP-L for single-channel and multichannel ANC systems. The proposed algorithm is implemented using the conventional filtered-x scheme, which incurs a lower computational cost than the modified filtered-x structure, especially for multichannel systems. Furthermore, we present an evolutionary method that dynamically decreases the projection order in order to reduce the dimensions of the matrix used in the algorithm's computations. Experimental results demonstrate that the proposed algorithm yields a convergence speed and a final residual error similar to those of AP algorithms. Moreover, it achieves meaningful computational savings, leading to simpler hardware implementation of real-time ANC applications.
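The core order-K affine projection update that the ECAP-L family approximates can be sketched for a plain system-identification setting; the paper's filtered-x ANC structure wraps a secondary-path filter around this same update. The unknown system, step size and projection order below are arbitrary choices for the demo.

```python
import random

# Sketch of the affine projection (AP) update of order K:
#   w <- w + mu * X^T (X X^T + delta*I)^(-1) e
# applied to identifying an unknown FIR system from noiseless data.
random.seed(0)
L, K, mu, delta = 4, 3, 0.5, 1e-4
w_true = [0.5, -0.3, 0.2, 0.1]     # unknown system (assumed for the demo)
w = [0.0] * L                      # adaptive filter weights

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def solve(A, b):
    """Tiny Gaussian elimination with partial pivoting (K x K system)."""
    n = len(b)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

x_hist = [0.0] * (L + K)
for _ in range(2000):
    x_hist = [random.gauss(0, 1)] + x_hist[:-1]
    X = [x_hist[k:k + L] for k in range(K)]          # K latest regressors
    e = [dot(w_true, xk) - dot(w, xk) for xk in X]   # a-priori errors
    G = [[dot(X[i], X[j]) + (delta if i == j else 0.0) for j in range(K)]
         for i in range(K)]                          # regularized Gram matrix
    a = solve(G, e)
    for i in range(L):                               # w += mu * X^T a
        w[i] += mu * sum(a[k] * X[k][i] for k in range(K))

print([round(v, 3) for v in w])    # approaches w_true
```

The K x K Gram-matrix solve is the expensive part; reducing K on the fly, as the evolving-order method above does, shrinks exactly this system.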
Decoding using back-project algorithm from coded image in ICF
Jiang shaoen; Liu Zhongli; Zheng Zhijian; Tang Daoyuan
1999-01-01
The principle of coded imaging and its decoding in inertial confinement fusion (ICF) is briefly described. The authors take a ring-aperture microscope as an example and use a back-projection (BP) algorithm to decode the coded image. A decoding program was implemented for numerical simulation. Simulations of two models show that the BP algorithm is accurate and the reconstruction quality is good, indicating that the BP algorithm is applicable to decoding coded images in ICF experiments.
Human activity and climate variability project - annual report 2002
Chambers, S.; Harle, K.J.; Sharmeen, S.; Zahorowski, W.; Cohen, D.; Heijnis, H.; Henderson-Sellers, A.
2002-01-01
Work is well underway on identifying the spatial and temporal extent, direction and range of trace element transport across Tasmania through analysis of lake sediments; A follow up investigation of sedimentation and pollution in the Nattai River catchment following the devastating 2001 bushfires in the region has been completed; The project has been extended to include investigations of evidence of human impacts in the highly sensitive and ecologically important Great Lakes of coastal NSW. This has involved the expansion of our collaboration to include Geoscience Australia; Contributions have been made to the IGBP HITE project. Further contributions will be made as the evidence gathered is drawn together and interpreted; Over the coming year, focus will be placed on completion of the investigation of the extent of aerial transport of trace elements across Tasmania over the last 200 years as well as evidence for human activity and impacts on the Great Lakes region of NSW. Further investigation of potential climate signals from sites in northern Australia will also be made. The first 12 months of data for all ACE-Asia radon and fine particle sites is now available with preliminary analyses performed; The seasonal variability of background radon concentration at each of the radon monitoring sites has been characterised for the available data; Major components related to industrial pollution and soil sources in China have been identified and quantified; Regional and seasonal variations and trends in aerosol constituents have been measured and compared across more than 2.8 Mk² of sampling area; The Hok Tsui and Kosan detectors were visited for general maintenance and recalibration; A grant application to the APN has been submitted in support of regional inventory analyses based on radon time series; Progress on the processing and interpretation of radon data was presented at the Cape Grim Science Meeting (6-7 February 2002) and the 7th Biennial SPERA Conference on
A Particle Swarm Optimization Algorithm with Variable Random Functions and Mutation
ZHOU Xiao-Jun; YANG Chun-Hua; GUI Wei-Hua; DONG Tian-Xue
2014-01-01
The convergence analysis of the standard particle swarm optimization (PSO) has shown that changing the random functions, the personal best and the group best has the potential to improve the performance of the PSO. In this paper, a novel strategy with variable random functions and polynomial mutation is introduced into the PSO, called particle swarm optimization with variable random functions and mutation (PSO-RM). Random functions are adjusted with the density of the population so as to manipulate the weights of the cognition and social parts. Mutation is executed on both the personal best particle and the group best particle to explore new areas. Experimental results demonstrate the effectiveness of the strategy.
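A minimal PSO loop with a crude keep-if-better mutation of the group best conveys the flavor of the approach; the paper's density-driven adjustment of the random functions and its polynomial mutation of both personal and group bests are not reproduced here. Parameters and the sphere test function are illustrative choices.

```python
import random

# Minimal particle swarm optimization sketch with a simple Gaussian
# mutation on the group best (kept only if it improves the objective).
random.seed(1)

def sphere(x):
    return sum(v * v for v in x)

dim, n, iters = 3, 20, 200
w_in, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration weights
pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
vel = [[0.0] * dim for _ in range(n)]
pbest = [p[:] for p in pos]           # personal bests
gbest = min(pbest, key=sphere)[:]     # group best

for _ in range(iters):
    for i in range(n):
        for d in range(dim):
            r1, r2 = random.random(), random.random()  # the random functions
            vel[i][d] = (w_in * vel[i][d]
                         + c1 * r1 * (pbest[i][d] - pos[i][d])
                         + c2 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if sphere(pos[i]) < sphere(pbest[i]):
            pbest[i] = pos[i][:]
            if sphere(pbest[i]) < sphere(gbest):
                gbest = pbest[i][:]
    # mutation: perturb one coordinate of the group best, keep if better
    cand = gbest[:]
    cand[random.randrange(dim)] += random.gauss(0, 0.1)
    if sphere(cand) < sphere(gbest):
        gbest = cand

print(sphere(gbest))   # near the optimum of the sphere function
```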
Model and Algorithm for Substantiating Solutions for Organization of High-Rise Construction Project
Anisimov, Vladimir; Anisimov, Evgeniy; Chernysh, Anatoliy
2018-01-01
In this paper, models and an algorithm are developed for forming an optimal plan for organizing the material and logistical processes of a high-rise construction project and their financial support. The model represents the optimization procedure as a non-linear discrete programming problem, which consists in minimizing the execution time of a set of interrelated works by a limited number of partially interchangeable performers while limiting the total cost of performing the work. The proposed model and algorithm are the basis for creating specific organization-management methodologies for high-rise construction projects.
An Algorithm for the Weighted Earliness-Tardiness Unconstrained Project Scheduling Problem
Afshar Nadjafi, Behrouz; Shadrokh, Shahram
This research considers a project scheduling problem with the objective of minimizing weighted earliness-tardiness penalty costs, taking into account a deadline for the project and precedence relations among the activities. An exact recursive method has been proposed for solving the basic form of this problem. We present a new depth-first branch-and-bound algorithm for an extended form of the problem, in which the time value of money is taken into account by discounting the cash flows. The algorithm is extended with two bounding rules in order to reduce the size of the branch-and-bound tree. Finally, some test problems are solved and computational results are reported.
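The objective the branch-and-bound algorithm minimizes, including the discounted variant obtained by discounting the cash flows, can be written down directly; the activity finish times, due dates and weights below are invented for illustration.

```python
# Weighted earliness-tardiness cost of a (given) schedule, with an
# optional discount rate for the time-value-of-money variant.
acts = {            # finish time, due date, earliness weight, tardiness weight
    "A": (3, 4, 2.0, 5.0),   # finishes 1 period early -> earliness penalty
    "B": (6, 5, 1.0, 4.0),   # finishes 1 period late  -> tardiness penalty
}

def et_cost(acts, rate=0.0):
    total = 0.0
    for f, due, we, wt in acts.values():
        pen = we * max(due - f, 0) + wt * max(f - due, 0)
        total += pen / (1 + rate) ** f     # rate=0 gives the plain cost
    return total

print(et_cost(acts))            # 2*1 + 4*1 = 6
print(et_cost(acts, 0.05))      # discounted variant, strictly smaller
```

A branch-and-bound search would enumerate feasible start-time assignments (respecting precedence and the deadline) and prune branches whose lower bound on this cost exceeds the incumbent.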
Pirbhulal, Sandeep; Zhang, Heye; Mukhopadhyay, Subhas Chandra; Li, Chunyue; Wang, Yumei; Li, Guanglin; Wu, Wanqing; Zhang, Yuan-Ting
2015-06-26
Body Sensor Network (BSN) is a network of several associated sensor nodes on, inside or around the human body to monitor vital signals, such as electroencephalogram (EEG), photoplethysmography (PPG) and electrocardiogram (ECG). Each sensor node in a BSN delivers important information; therefore, it is essential to provide data confidentiality and security. Existing approaches to securing BSNs are based on complex cryptographic key generation procedures, which not only demand high resource utilization and computation time, but also consume large amounts of energy, power and memory during data transmission. It is therefore desirable to put forward an energy-efficient and computationally simple authentication technique for BSN. In this paper, a novel biometric-based algorithm is proposed, which utilizes Heart Rate Variability (HRV) in a simple key generation process to secure BSN. The proposed algorithm is compared with three data authentication techniques, namely Physiological Signal based Key Agreement (PSKA), Data Encryption Standard (DES) and Rivest Shamir Adleman (RSA). Simulation is performed in Matlab and results suggest that the proposed algorithm is quite efficient in terms of transmission time, average remaining energy and total power consumption.
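One common way to turn heart-timing variability into key material, quantizing inter-pulse intervals and keeping their low-order bits, can be sketched as follows. This only illustrates the general notion; the paper's actual HRV-based scheme may differ, and the interval values are made up.

```python
# Hedged sketch: two sensor nodes on the same body observe (nearly) the
# same inter-pulse intervals (IPIs, in ms), so quantizing the intervals'
# least-significant bits gives both nodes the same shared key material.
ipis = [812, 795, 830, 801, 776, 825, 790, 808]   # made-up intervals in ms

def key_bits(ipis, bits_per_ipi=4):
    """Concatenate the low-order bits of each interval into one integer."""
    key = 0
    for ipi in ipis:
        key = (key << bits_per_ipi) | (ipi & ((1 << bits_per_ipi) - 1))
    return key

k = key_bits(ipis)
print(format(k, "032b"))   # 8 intervals x 4 bits = 32 key bits
```

The appeal for BSNs is exactly what the abstract argues: no modular exponentiation or key-exchange protocol, just arithmetic on measurements both nodes already have.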
Zhang, Chunwei; Zhao, Hong; Gu, Feifei; Ma, Yueyang
2015-01-01
A phase unwrapping algorithm specially designed for phase-shifting fringe projection profilometry (FPP) is proposed. It combines a revised dual-frequency fringe projection algorithm with a proposed fringe-background-based quality-guided phase unwrapping algorithm (FB-QGPUA). The phase demodulated from the high-frequency fringe patterns is partially unwrapped using that demodulated from the low-frequency ones. FB-QGPUA is then adopted to further unwrap the partially unwrapped phase. Influences of phase error on the measurement are investigated, and a strategy for selecting the fringe pitch is given. Experiments demonstrate that the proposed method is robust and efficient. (paper)
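The dual-frequency step can be shown on a single pixel: the unwrapped low-frequency phase, scaled by the frequency ratio, predicts the fringe order of the wrapped high-frequency phase. The numbers below are synthetic, not from the paper.

```python
import math

# Dual-frequency (temporal) phase unwrapping at one pixel: use the
# low-frequency phase to pick the 2*pi fringe order of the high-frequency one.
def wrap(p):
    """Wrap a phase into [-pi, pi)."""
    return (p + math.pi) % (2 * math.pi) - math.pi

ratio = 8                      # f_high / f_low
true_low = 2.3                 # low-frequency phase, already unwrapped
true_high = ratio * true_low   # what we want to recover

wrapped_high = wrap(true_high)              # what the camera actually gives
order = round((ratio * true_low - wrapped_high) / (2 * math.pi))
unwrapped_high = wrapped_high + 2 * math.pi * order
print(unwrapped_high)          # matches ratio * true_low
```

This is why the low-frequency phase error matters: if it shifts the prediction by more than half a fringe, `order` is off by one, which is the failure mode the quality-guided pass is there to contain.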
Genetic algorithm for project time-cost optimization in fuzzy environment
Khan Md. Ariful Haque
2012-12-01
Purpose: The aim of this research is to develop a more realistic approach to the project time-cost optimization problem under uncertain conditions, with fuzzy time periods. Design/methodology/approach: Deterministic models for time-cost optimization are inefficient when various uncertainty factors must be considered. To make such problems realistic, triangular fuzzy numbers and the concept of the α-cut method from fuzzy logic theory are employed to model the problem. Because of the NP-hard nature of the project scheduling problem, a Genetic Algorithm (GA) has been used as a searching tool. Finally, Dev-C++ 4.9.9.2 has been used to code this solver. Findings: The solution has been performed under different combinations of GA parameters, and after result analysis the optimum values of those parameters have been found for the best solution. Research limitations/implications: To demonstrate the application of the developed algorithm, a project for a new product (a pre-paid electric meter, launched under government finance) has been chosen as a real case. The algorithm is developed under some assumptions. Practical implications: The proposed model leads decision makers to choose the desired solution under different risk levels. Originality/value: Reports reveal that project optimization problems have never been solved under multiple uncertainty conditions. Here, the function has been optimized using the Genetic Algorithm search technique, with varied levels of risk and fuzzy time periods.
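The two fuzzy ingredients the abstract names, triangular fuzzy numbers and the α-cut, reduce to a small interval computation; the duration values below are illustrative, not from the case study.

```python
# A triangular fuzzy duration (a, m, b) and its alpha-cut: the interval of
# values whose membership is at least alpha. alpha acts as the risk level
# the decision maker chooses.
def alpha_cut(tri, alpha):
    a, m, b = tri
    return (a + alpha * (m - a), b - alpha * (b - m))

dur = (4.0, 6.0, 9.0)          # optimistic / most likely / pessimistic days
print(alpha_cut(dur, 0.0))     # full support: every possible duration
print(alpha_cut(dur, 1.0))     # crisp core: only the most likely value
print(alpha_cut(dur, 0.5))     # intermediate risk level
```

A GA over schedules can then evaluate each candidate's time-cost objective on these intervals, tightening or loosening them by the chosen α.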
Masoumeh Soflaei
2014-01-01
One of the most important obstacles to reliable communications in shallow water channels is intersymbol interference (ISI), which is due to scattering from the surface and reflection from the bottom. Using adaptive equalizers at the receiver is one of the best-known ways of overcoming this problem. In this paper, we apply the family of selective regressor affine projection algorithms (SR-APA) and the family of selective partial update APA (SPU-APA), which have low computational complexity, an important factor in adaptive equalizer performance. We use experimental data from the Strait of Hormuz to examine the efficiency of the proposed methods over a shallow water channel. We observe that the steady-state mean square error (MSE) of SR-APA and SPU-APA decreases by 5.8 dB and 5.5 dB, respectively, in comparison with the least mean square (LMS) algorithm. The SPU-APA and SR-APA families also have better convergence speed than LMS-type algorithms.
Computational issues in alternating projection algorithms for fixed-order control design
Beran, Eric Bengt; Grigoriadis, K.
1997-01-01
Alternating projection algorithms have been introduced recently to solve fixed-order controller design problems described by linear matrix inequalities and non-convex coupling rank constraints. In this work, extensive numerical experimentation using proposed benchmark fixed-order control design examples is used to indicate the computational efficiency of the method. These results indicate that the proposed alternating projections are effective in obtaining low-order controllers for small and medium order problems.
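The alternating-projection iteration itself is simple to sketch on two convex sets in the plane; the paper's sets (LMI-feasible sets coupled with a non-convex rank constraint) are far harder, but the iteration has the same shape. The sets below are toy choices.

```python
import math

# Alternating projections onto two toy convex sets: the line y = x and a
# closed disc. Project onto one set, then the other, repeatedly; when the
# sets intersect, the iterates converge into the intersection.
def proj_line(p):                       # orthogonal projection onto y = x
    t = (p[0] + p[1]) / 2.0
    return (t, t)

def proj_disc(p, c=(2.0, 2.0), r=1.0):  # projection onto a closed disc
    dx, dy = p[0] - c[0], p[1] - c[1]
    d = math.hypot(dx, dy)
    if d <= r:
        return p                        # already inside
    return (c[0] + r * dx / d, c[1] + r * dy / d)

p = (0.0, 5.0)
for _ in range(100):
    p = proj_line(proj_disc(p))

print(p)   # lands in the intersection of the line and the disc
```

The non-convex rank constraint in the fixed-order problem breaks the convergence guarantee this convex picture enjoys, which is why the paper's contribution is empirical benchmarking rather than a proof.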
An Auxiliary Variable Method for Markov Chain Monte Carlo Algorithms in High Dimension
Yosra Marnissi
2018-02-01
In this paper, we are interested in Bayesian inverse problems where either the data fidelity term or the prior distribution is Gaussian or driven from a hierarchical Gaussian model. Generally, Markov chain Monte Carlo (MCMC) algorithms allow us to generate sets of samples that are employed to infer some relevant parameters of the underlying distributions. However, when the parameter space is high-dimensional, the performance of stochastic sampling algorithms is very sensitive to existing dependencies between parameters. In particular, this problem arises when one aims to sample from a high-dimensional Gaussian distribution whose covariance matrix does not present a simple structure. Another challenge is the design of Metropolis-Hastings proposals that make use of information about the local geometry of the target density in order to speed up the convergence and improve mixing properties in the parameter space, while not being too computationally expensive. These two contexts are mainly related to the presence of two heterogeneous sources of dependencies stemming either from the prior or the likelihood in the sense that the related covariance matrices cannot be diagonalized in the same basis. In this work, we address these two issues. Our contribution consists of adding auxiliary variables to the model in order to dissociate the two sources of dependencies. In the new augmented space, only one source of correlation remains directly related to the target parameters, the other sources of correlations being captured by the auxiliary variables. Experiments are conducted on two practical image restoration problems, namely the recovery of multichannel blurred images embedded in Gaussian noise and the recovery of signal corrupted by a mixed Gaussian noise. Experimental results indicate that adding the proposed auxiliary variables makes the sampling problem simpler since the new conditional distribution no longer contains highly heterogeneous
Soares, Antonio Henrique Germano; Farah, Breno Quintella; Cucato, Gabriel Grizzo; Bastos-Filho, Carmelo José Albanez; Christofaro, Diego Giulliano Destro; Vanderlei, Luiz Carlos Marques; Lima, Aluísio Henrique Rodrigues de Andrade; Ritti-Dias, Raphael Mendes
2016-01-01
To analyze whether the algorithm used for heart rate variability assessment (fast Fourier transform versus autoregressive methods) influenced its association with cardiovascular risk factors in male adolescents. This cross-sectional study included 1,152 male adolescents (aged 14 to 19 years). The low-frequency and high-frequency components (absolute values and normalized units), the low-frequency/high-frequency ratio and the total power of the heart rate variability parameters were obtained using the fast Fourier transform and autoregressive methods while the adolescents were resting in a supine position. All heart rate variability parameters calculated by the two methods differed statistically, but these differences were not clinically significant.
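The FFT route to the LF/HF indices, a periodogram of an evenly resampled RR series integrated over the standard frequency bands, can be sketched as follows; an AR method would instead fit an autoregressive model and integrate its model spectrum, which is why the two sets of indices differ numerically. The RR series below is synthetic.

```python
import math

# Periodogram band powers of a synthetic RR (tachogram) series resampled
# at 4 Hz, with a strong 0.10 Hz (LF) and weaker 0.30 Hz (HF) oscillation.
fs, nsec = 4.0, 64
n = int(fs * nsec)
rr = [0.8 + 0.05 * math.sin(2 * math.pi * 0.10 * t / fs)
          + 0.02 * math.sin(2 * math.pi * 0.30 * t / fs) for t in range(n)]
mean = sum(rr) / n
x = [v - mean for v in rr]            # remove the DC component

def band_power(x, fs, lo, hi):
    """Sum the DFT periodogram over frequency bins in [lo, hi) Hz."""
    n = len(x)
    p = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if lo <= f < hi:
            re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            p += (re * re + im * im) / n
    return p

lf = band_power(x, fs, 0.04, 0.15)    # standard LF band
hf = band_power(x, fs, 0.15, 0.40)    # standard HF band
print(lf > hf)                        # the stronger LF rhythm dominates
```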
Dhou, S; Williams, C [Brigham and Women’s Hospital / Harvard Medical School, Boston, MA (United States); Ionascu, D [William Beaumont Hospital, Royal Oak, MI (United States); Lewis, J [University of California at Los Angeles, Los Angeles, CA (United States)
2016-06-15
Purpose: To study the variability of patient-specific motion models derived from 4-dimensional CT (4DCT) images using different deformable image registration (DIR) algorithms for lung cancer stereotactic body radiotherapy (SBRT) patients. Methods: Motion models are derived by 1) applying DIR between each 4DCT image and a reference image, resulting in a set of displacement vector fields (DVFs), and 2) performing principal component analysis (PCA) on the DVFs, resulting in a motion model (a set of eigenvectors capturing the variations in the DVFs). Three DIR algorithms were used: 1) Demons, 2) Horn-Schunck, and 3) iterative optical flow. The motion models derived were compared using patient 4DCT scans. Results: Motion models were derived and the variations were evaluated according to three criteria: 1) the average root mean square (RMS) difference, which measures the absolute difference between the components of the eigenvectors, 2) the dot product between the eigenvectors, which measures the angular difference between the eigenvectors in space, and 3) the Euclidean Model Norm (EMN), which is calculated by summing the dot products of an eigenvector with the first three eigenvectors from the reference motion model in quadrature. EMN measures how well an eigenvector can be reconstructed using another motion model derived using a different DIR algorithm. Results showed that, compared to a reference motion model (derived using the Demons algorithm), the eigenvectors of the motion model derived using the iterative optical flow algorithm had smaller RMS, larger dot product, and larger EMN values than those of the motion model derived using the Horn-Schunck algorithm. Conclusion: The study showed that motion models vary depending on which DIR algorithms were used to derive them. The choice of a DIR algorithm may affect the accuracy of the resulting model, and it is important to assess the suitability of the algorithm chosen for a particular application. This project was supported
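Steps 1)-2) amount to PCA on flattened DVFs; a toy version using power iteration for the leading eigenvector might look like this (real DVFs have millions of components, so one would use an SVD library instead). The motion direction and coefficients below are invented.

```python
import math, random

# PCA motion-model sketch: treat each displacement vector field (DVF) as a
# flattened vector, mean-center, form the covariance, and extract the
# leading eigenvector (dominant motion mode) by power iteration.
random.seed(2)

v = [0.6, 0.8, 0.0, 0.0]              # dominant motion direction (unit norm)
dvfs = [[c * vi + random.gauss(0, 0.01) for vi in v]
        for c in (-1.0, -0.3, 0.4, 1.2)]   # four "phases" along that mode

d = len(v)
mean = [sum(col) / len(dvfs) for col in zip(*dvfs)]
X = [[x - m for x, m in zip(row, mean)] for row in dvfs]
cov = [[sum(r[i] * r[j] for r in X) / len(X) for j in range(d)]
       for i in range(d)]

e = [1.0] * d                          # power iteration for eigenvector 1
for _ in range(200):
    e = [sum(cov[i][j] * e[j] for j in range(d)) for i in range(d)]
    nrm = math.sqrt(sum(c * c for c in e))
    e = [c / nrm for c in e]

print([round(c, 2) for c in e])        # aligns (up to sign) with v
```

Comparing such eigenvectors across DIR algorithms, via RMS difference, dot products and EMN, is exactly what the abstract's three criteria do.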
An accelerated threshold-based back-projection algorithm for Compton camera image reconstruction
Mundy, Daniel W.; Herman, Michael G.
2011-01-01
Purpose: Compton camera imaging (CCI) systems are currently under investigation for radiotherapy dose reconstruction and verification. The ability of such a system to provide real-time images during dose delivery will be limited by the computational speed of the image reconstruction algorithm. In this work, the authors present a fast and simple method by which to generate an initial back-projected image from acquired CCI data, suitable for use in a filtered back-projection algorithm or as a starting point for iterative reconstruction algorithms, and compare its performance to the current state of the art. Methods: Each detector event in a CCI system describes a conical surface that includes the true point of origin of the detected photon. Numerical image reconstruction algorithms require, as a first step, the back-projection of each of these conical surfaces into an image space. The algorithm presented here first generates a solution matrix for each slice of the image space by solving the intersection of the conical surface with the image plane. Each element of the solution matrix is proportional to the distance of the corresponding voxel from the true intersection curve. A threshold function was developed to extract those pixels sufficiently close to the true intersection to generate a binary intersection curve. This process is repeated for each image plane for each CCI detector event, resulting in a three-dimensional back-projection image. The performance of this algorithm was tested against a marching algorithm known for speed and accuracy. Results: The threshold-based algorithm was found to be approximately four times faster than the current state of the art with minimal deficit to image quality, arising from the fact that a generically applicable threshold function cannot provide perfect results in all situations. The algorithm fails to extract a complete intersection curve in image slices near the detector surface for detector event cones having axes nearly
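The threshold step can be illustrated for the simplest geometry, a cone whose axis is perpendicular to the image plane, where the cone-plane intersection is a circle: each solution-matrix entry is a pixel's distance to that curve, and thresholding extracts the binary intersection curve. Sizes and angles below are arbitrary.

```python
import math

# One detector event, idealized: cone apex at the origin, axis normal to
# the image plane z = z0, half-angle `half_angle`. The cone intersects the
# plane in a circle of radius r; thresholding each pixel's distance to
# that circle yields the binary intersection ring to be accumulated.
N, z0, half_angle = 41, 10.0, 0.5
r = z0 * math.tan(half_angle)        # radius of the intersection circle

thresh = 0.5                         # keep pixels within half a pixel
ring = [[abs(math.hypot(ix - N // 2, iy - N // 2) - r) <= thresh
         for ix in range(N)] for iy in range(N)]

on = sum(sum(row) for row in ring)
print(on)        # pixels marked as part of the intersection curve
```

Repeating this per image slice and per event, and summing the rings, gives the back-projected volume the abstract describes; the speed advantage comes from the distance-and-threshold test being cheap per voxel.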
Numerical algorithm for laser treatment of powder layer with variable thickness
Soboleva, Polina; Knyazeva, Anna
2017-12-01
A two-dimensional model of laser treatment of a powder layer on a substrate is proposed in this paper. The model takes into account the shrinkage of the powder layer due to the laser treatment. Three simplified variants of the model were studied. First, the influence of the optical properties of the powder layer on the maximal temperature was investigated. Second, a two-dimensional model for a given thickness of the powder layer was studied, for which a practically uniform temperature distribution across a thin powder layer was demonstrated. Then, a numerical algorithm was developed to calculate the temperature field for a domain of variable size. The impact of the optical properties of the powder material on the character of the temperature distribution was investigated numerically.
Cohen, Julien G; Kim, Hyungjin; Park, Su Bin; van Ginneken, Bram; Ferretti, Gilbert R; Lee, Chang Hyun; Goo, Jin Mo; Park, Chang Min
2017-08-01
To evaluate the differences between filtered back projection (FBP) and model-based iterative reconstruction (MBIR) algorithms on semi-automatic measurements in subsolid nodules (SSNs). Unenhanced CT scans of 73 SSNs obtained using the same protocol and reconstructed with both FBP and MBIR algorithms were evaluated by two radiologists. Diameter, mean attenuation, mass and volume of whole nodules and their solid components were measured. Intra- and interobserver variability and differences between FBP and MBIR were then evaluated using Bland-Altman method and Wilcoxon tests. Longest diameter, volume and mass of nodules and those of their solid components were significantly higher using MBIR (p algorithms with respect to the diameter, volume and mass of nodules and their solid components. There were no significant differences in intra- or interobserver variability between FBP and MBIR (p > 0.05). Semi-automatic measurements of SSNs significantly differed between FBP and MBIR; however, the differences were within the range of measurement variability. • Intra- and interobserver reproducibility of measurements did not differ between FBP and MBIR. • Differences in SSNs' semi-automatic measurement induced by reconstruction algorithms were not clinically significant. • Semi-automatic measurement may be conducted regardless of reconstruction algorithm. • SSNs' semi-automated classification agreement (pure vs. part-solid) did not significantly differ between algorithms.
Suppes, T; Swann, A C; Dennehy, E B; Habermacher, E D; Mason, M; Crismon, M L; Toprac, M G; Rush, A J; Shon, S P; Altshuler, K Z
2001-06-01
Use of treatment guidelines for treatment of major psychiatric illnesses has increased in recent years. The Texas Medication Algorithm Project (TMAP) was developed to study the feasibility and process of developing and implementing guidelines for bipolar disorder, major depressive disorder, and schizophrenia in the public mental health system of Texas. This article describes the consensus process used to develop the first set of TMAP algorithms for the Bipolar Disorder Module (Phase 1) and the trial testing the feasibility of their implementation in inpatient and outpatient psychiatric settings across Texas (Phase 2). The feasibility trial answered core questions regarding implementation of treatment guidelines for bipolar disorder. A total of 69 patients were treated with the original algorithms for bipolar disorder developed in Phase 1 of TMAP. Results indicate that physicians accepted the guidelines, followed recommendations to see patients at certain intervals, and utilized sequenced treatment steps differentially over the course of treatment. While improvements in clinical symptoms (24-item Brief Psychiatric Rating Scale) were observed over the course of enrollment in the trial, these conclusions are limited by the fact that physician volunteers were utilized for both treatment and ratings, and there was no control group. Results from Phases 1 and 2 indicate that it is possible to develop and implement a treatment guideline for patients with a history of mania in public mental health clinics in Texas. TMAP Phase 3, a recently completed larger and controlled trial assessing the clinical and economic impact of treatment guidelines and patient and family education in the public mental health system of Texas, improves upon this methodology.
Yi Han
2013-01-01
Full Text Available This paper presents a shuffled frog leaping algorithm (SFLA) for the single-mode resource-constrained project scheduling problem where activities can be divided into equal units and interrupted during processing. Each activity consumes 0–3 types of resources which are renewable and temporarily not available due to resource vacations in each period. The presence of scarce resources and precedence relations between activities makes project scheduling a difficult and important task in project management. A recently popular metaheuristic, the shuffled frog leaping algorithm, which is inspired by the predatory habits of frog groups in a small pond, is adopted to investigate the project makespan improvement on the Patterson benchmark sets, which are composed of different small and medium size projects. Computational results demonstrate the effectiveness and efficiency of SFLA in reducing project makespan and minimizing activity splitting number within an average CPU runtime of 0.521 seconds. This paper exposes all the scheduling sequences for each project and shows that some of the 23 best known solutions have been improved.
2014-01-01
We propose a smoothed l0-norm constrained affine projection algorithm (SL0-APA) to improve the convergence speed and the steady-state error of the affine projection algorithm (APA) for sparse channel estimation. The proposed algorithm ensures improved performance in terms of convergence speed and steady-state error by incorporating a smoothed l0-norm (SL0) penalty on the coefficients into the standard APA cost function, which gives rise to a zero attractor that promotes the sparsity of the channel taps in the channel estimation and hence accelerates the convergence speed and reduces the steady-state error when the channel is sparse. The simulation results demonstrate that our proposed SL0-APA is superior to the standard APA and its sparsity-aware algorithms in terms of both the convergence speed and the steady-state behavior in a designated sparse channel. Furthermore, SL0-APA is shown to have smaller steady-state error than the previously proposed sparsity-aware algorithms when the number of nonzero taps in the sparse channel increases. PMID:24790588
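The zero-attractor idea described in the abstract can be sketched as follows; this is a minimal illustration assuming a Gaussian-kernel smoothed l0 penalty and hypothetical step sizes, not the authors' exact cost function.

```python
import numpy as np

rng = np.random.default_rng(0)

def sl0_apa_step(w, U, d, mu=0.5, delta=1e-3, rho=5e-4, beta=0.05):
    """One affine-projection update followed by a smoothed-l0 zero attractor.

    U: (L, K) matrix of the K most recent input vectors, d: (K,) desired samples.
    The attractor is the (assumed) gradient of sum(1 - exp(-w**2 / (2*beta**2))),
    scaled by rho; it pulls near-zero taps toward zero and leaves large taps alone.
    """
    e = d - U.T @ w                                           # a-priori error vector
    w = w + mu * U @ np.linalg.solve(U.T @ U + delta * np.eye(U.shape[1]), e)
    w = w - rho * (w / beta**2) * np.exp(-w**2 / (2 * beta**2))
    return w

# toy sparse channel: 32 taps, 3 nonzero, noiseless observations
L, K, N = 32, 4, 4000
h = np.zeros(L); h[[3, 11, 20]] = [1.0, -0.6, 0.3]
x = rng.standard_normal(N)
w = np.zeros(L)
for n in range(L + K, N):
    U = np.stack([x[n-k-L+1:n-k+1][::-1] for k in range(K)], axis=1)  # (L, K) regressors
    d = U.T @ h                                                       # desired outputs
    w = sl0_apa_step(w, U, d)

mse = float(np.mean((w - h)**2))
```

Because the attractor term vanishes for taps much larger than `beta`, the true nonzero taps converge essentially unbiased while the zero taps are pinned to zero.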
Bolte, H.; Jahnke, T.; Schaefer, F.K.W.; Wenke, R.; Hoffmann, B.; Freitag-Wolf, S.; Dicken, V.; Kuhnigk, J.M.; Lohmann, J.; Voss, S.; Knoess, N.
2007-01-01
Objective: The aim of this study was to investigate the interobserver variability of CT-based diameter and volumetric measurements of artificial pulmonary nodules. A special interest was the consideration of different measurement methods, observer experience and training levels. Materials and methods: For this purpose 46 artificial small solid nodules were examined in a dedicated ex-vivo chest phantom with multislice-spiral CT (20 mAs, 120 kV, collimation 16 mm x 0.75 mm, table feed 15 mm, reconstructed slice thickness 1 mm, reconstruction increment 0.7 mm, intermediate reconstruction kernel). Two observer groups of different radiologic experience (0 and more than 5 years of training, 3 observers each) analysed all lesions with digital callipers and 2 volumetry software packages (click-point dependent and robust volumetry) in a semi-automatic and a manually corrected mode. For data analysis the variation coefficient (VC) was calculated in per cent for each group, and a Wilcoxon test was used for analytic statistics. Results: Click-point robust volumetry showed the smallest interobserver variability, with a VC of <0.01% in both groups. Between experienced and inexperienced observers, interobserver variability was significantly different for diameter measurements (p = 0.023) but not for semi-automatic and manually corrected volumetry. A significant training effect was revealed for diameter measurements (p = 0.003) and semi-automatic measurements of click-point dependent volumetry (p = 0.007) in the inexperienced observer group. Conclusions: Compared to diameter measurements, volumetry achieves a significantly smaller interobserver variance, and advanced volumetry algorithms are independent of observer experience.
Random projections and the optimization of an algorithm for phase retrieval
Elser, Veit
2003-01-01
Iterative phase retrieval algorithms typically employ projections onto constraint subspaces to recover the unknown phases in the Fourier transform of an image, or, in the case of x-ray crystallography, the electron density of a molecule. For a general class of algorithms, where the basic iteration is specified by the difference map, solutions are associated with fixed points of the map, the attractive character of which determines the effectiveness of the algorithm. The behaviour of the difference map near fixed points is controlled by the relative orientation of the tangent spaces of the two constraint subspaces employed by the map. Since the dimensionalities involved are always large in practical applications, it is appropriate to use random matrix theory ideas to analyse the average-case convergence at fixed points. Optimal values of the γ parameters of the difference map are found which differ somewhat from the values previously obtained on the assumption of orthogonal tangent spaces.
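The difference map underlying this analysis can be illustrated on a toy problem with two affine constraint sets in the plane (with β = 1 the map takes the familiar Douglas-Rachford form); the constraint lines below are hypothetical stand-ins for the Fourier-magnitude and support constraints of real phase retrieval.

```python
import numpy as np

def proj_A(p):
    """Projection onto constraint set A: the line y = 0."""
    return np.array([p[0], 0.0])

def proj_B(p):
    """Projection onto constraint set B: the line y = x - 1."""
    u = np.array([1.0, 1.0]) / np.sqrt(2.0)   # unit direction of the line
    base = np.array([1.0, 0.0])               # a point on the line
    return base + ((p - base) @ u) * u

def difference_map(x, iters=100):
    """Difference map with beta = 1: x <- x + P_A(2*P_B(x) - x) - P_B(x).
    At a fixed point, a solution (a point in A intersect B) is read off as P_B(x)."""
    for _ in range(iters):
        x = x + proj_A(2.0 * proj_B(x) - x) - proj_B(x)
    return proj_B(x)

sol = difference_map(np.array([3.0, 2.0]))    # converges to the intersection (1, 0)
```

The linear convergence rate here is set by the angle between the two constraint lines, which is the toy analogue of the tangent-space orientation analysed in the abstract.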
Schüller, Anton; Schweitzer, Marc
2017-01-01
The contributions gathered here provide an overview of current research projects and selected software products of the Fraunhofer Institute for Algorithms and Scientific Computing SCAI. They show the wide range of challenges that scientific computing currently faces, the solutions it offers, and its important role in developing applications for industry. Given the exciting field of applied collaborative research and development it discusses, the book will appeal to scientists, practitioners, and students alike. The Fraunhofer Institute for Algorithms and Scientific Computing SCAI combines excellent research and application-oriented development to provide added value for our partners. SCAI develops numerical techniques, parallel algorithms and specialized software tools to support and optimize industrial simulations. Moreover, it implements custom software solutions for production and logistics, and offers calculations on high-performance computers. Its services and products are based on state-of-the-art metho...
A new approach for modelling variability in residential construction projects
Mehrdad Arashpour
2013-06-01
Full Text Available The construction industry is plagued by long cycle times caused by variability in the supply chain. Variations or undesirable situations are the result of factors such as non-standard practices, work site accidents, inclement weather conditions and faults in design. This paper uses a new approach for modelling variability in construction by linking relative variability indicators to processes. Mass homebuilding sector was chosen as the scope of the analysis because data is readily available. Numerous simulation experiments were designed by varying size of capacity buffers in front of trade contractors, availability of trade contractors, and level of variability in homebuilding processes. The measurements were shown to lead to an accurate determination of relationships between these factors and production parameters. The variability indicator was found to dramatically affect the tangible performance measures such as home completion rates. This study provides for future analysis of the production homebuilding sector, which may lead to improvements in performance and a faster product delivery to homebuyers.
Junlong Zhu
2017-01-01
Full Text Available We consider a distributed constrained optimization problem over a time-varying network, where each agent only knows its own cost functions and its own constraint set. However, the local constraint set may not be known in advance, or may consist of a huge number of components in some applications. To deal with such cases, we propose a distributed stochastic subgradient algorithm over time-varying networks, in which each agent's estimate is projected onto its constraint set using a random projection technique, and information exchange between agents is implemented via an asynchronous broadcast communication protocol. We show that our proposed algorithm converges with probability 1 for a suitably chosen learning rate. For a constant learning rate, we obtain an error bound, defined as the expected distance between the estimates of the agents and the optimal solution. We also establish an asymptotic upper bound on the gap between the global objective function value at the average of the estimates and the optimal value.
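A minimal sketch of the consensus-plus-projected-subgradient structure is given below; it assumes a fixed complete graph with synchronous averaging (instead of the paper's time-varying asynchronous broadcast), simple quadratic local costs, and a shared box constraint, all of which are illustrative simplifications.

```python
import numpy as np

rng = np.random.default_rng(1)

m, d, T = 5, 3, 3000
targets = rng.uniform(0.2, 0.8, size=(m, d))   # agent i privately holds a_i
lo, hi = 0.0, 1.0                              # shared box constraint [0, 1]^d

# doubly stochastic mixing matrix for a fixed complete graph (a stand-in for
# the paper's time-varying asynchronous broadcast protocol)
W = np.full((m, m), 1.0 / m)

x = np.zeros((m, d))                           # agents' estimates
for t in range(1, T + 1):
    step = 1.0 / np.sqrt(t)                    # diminishing learning rate
    mixed = W @ x                              # consensus / information exchange
    grads = 2.0 * (mixed - targets)            # gradient of f_i(x) = ||x - a_i||^2
    x = np.clip(mixed - step * grads, lo, hi)  # projection onto the constraint set

x_star = targets.mean(axis=0)                  # optimum of sum_i f_i over the box
err_avg = float(np.abs(x.mean(axis=0) - x_star).max())
err_worst = float(np.abs(x - x_star).max())
```

The average of the estimates converges much faster than any individual estimate, matching the abstract's distinction between per-agent error bounds and the bound at the average of the estimates.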
Sparse Adaptive Channel Estimation Based on lp-Norm-Penalized Affine Projection Algorithm
Yingsong Li
2014-01-01
Full Text Available We propose an lp-norm-penalized affine projection algorithm (LP-APA) for broadband multipath adaptive channel estimation. The proposed LP-APA is realized by incorporating an lp-norm into the cost function of the conventional affine projection algorithm (APA) to exploit the sparsity property of the broadband wireless multipath channel, by which the convergence speed and steady-state performance of the APA are significantly improved. The implementation of the LP-APA is equivalent to adding a zero attractor to its iterations. The simulation results, which are obtained from a sparse channel estimation, demonstrate that the proposed LP-APA can efficiently improve channel estimation performance in terms of both convergence speed and steady-state performance when the channel is exactly sparse.
Multiple R&D projects scheduling optimization with improved particle swarm algorithm.
Liu, Mengqi; Shan, Miyuan; Wu, Juan
2014-01-01
For most enterprises, in order to win the initiative in the fierce competition of the market, a key step is to improve their R&D ability to meet the various demands of customers in a more timely and less costly manner. This paper discusses the features of multiple R&D environments in large make-to-order enterprises under constrained human resources and budget, and puts forward a multi-project scheduling model for a certain period. Furthermore, we make some improvements to the existing particle swarm algorithm and apply the one developed here to the resource-constrained multi-project scheduling model in a simulation experiment. Simultaneously, the feasibility of the model and the validity of the algorithm are proved in the experiment.
Pengfei Sun
Full Text Available Pose estimation aims at measuring the position and orientation of a calibrated camera using known image features. The pinhole model is the dominant camera model in this field. However, the imaging precision of this model is not accurate enough for an advanced pose estimation algorithm. In this paper, a new camera model, called the incident ray tracking model, is introduced. More importantly, an advanced pose estimation algorithm based on the perspective ray in the new camera model is proposed. The perspective ray, determined by two positioning points, is an abstract mathematical equivalent of the incident ray. In the proposed pose estimation algorithm, called perspective-ray-based scaled orthographic projection with iteration (PRSOI), an approximate ray-based projection is calculated by a linear system and refined by iteration. Experiments on the PRSOI have been conducted, and the results demonstrate that it achieves high accuracy in six degrees of freedom (DOF) motion, outperforming three other state-of-the-art algorithms in terms of accuracy in the contrast experiment.
Filtered-X Affine Projection Algorithms for Active Noise Control Using Volterra Filters
Sicuranza Giovanni L
2004-01-01
Full Text Available We consider the use of adaptive Volterra filters, implemented in the form of multichannel filter banks, as nonlinear active noise controllers. In particular, we discuss the derivation of filtered-X affine projection algorithms for homogeneous quadratic filters. According to the multichannel approach, it is then easy to pass from these algorithms to those of a generic Volterra filter. It is shown in the paper that the AP technique offers better convergence and tracking capabilities than the classical LMS and NLMS algorithms usually applied in nonlinear active noise controllers, with a limited complexity increase. This paper extends in two ways the content of a previous contribution published in Proc. IEEE-EURASIP Workshop on Nonlinear Signal and Image Processing (NSIP '03), Grado, Italy, June 2003. First of all, a general adaptation algorithm valid for any order of affine projections is presented. Secondly, a more complete set of experiments is reported. In particular, the effects of using multichannel filter banks with a reduced number of channels are investigated and relevant results are shown.
A homotopy algorithm for digital optimal projection control GASD-HADOC
Collins, Emmanuel G., Jr.; Richter, Stephen; Davis, Lawrence D.
1993-01-01
The linear-quadratic-gaussian (LQG) compensator was developed to facilitate the design of control laws for multi-input, multi-output (MIMO) systems. The compensator is computed by solving two algebraic equations for which standard closed-loop solutions exist. Unfortunately, the minimal dimension of an LQG compensator is almost always equal to the dimension of the plant and can thus often violate practical implementation constraints on controller order. This deficiency is especially highlighted when considering control design for high-order systems such as flexible space structures. This deficiency motivated the development of techniques that enable the design of optimal controllers whose dimension is less than that of the design plant. One such technique is a homotopy approach based on the optimal projection equations that characterize the necessary conditions for optimal reduced-order control. Homotopy algorithms have global convergence properties and hence do not require that the initializing reduced-order controller be close to the optimal reduced-order controller to guarantee convergence. However, the homotopy algorithm previously developed for solving the optimal projection equations has sublinear convergence properties; the convergence slows at higher authority levels and may fail. A new homotopy algorithm for synthesizing optimal reduced-order controllers for discrete-time systems is described. Unlike the previous homotopy approach, the new algorithm is a gradient-based, parameter optimization formulation and was implemented in MATLAB. The results reported may offer the foundation for a reliable approach to optimal, reduced-order controller design.
Brester Christina
2017-12-01
Full Text Available Background and Purpose: In every organization, project management raises many different decision-making problems, a large proportion of which can be efficiently solved using specific decision-making support systems. Yet such kinds of problems are always a challenge since there is no time-efficient or computationally efficient algorithm to solve them as a result of their complexity. In this study, we consider the problem of optimal financial investment. In our solution, we take into account the following organizational resource and project characteristics: profits, costs and risks.
Lavergne, T.; Dybkjær, Gorm; Girard-Ardhuin, Fanny
The Sea Ice Essential Climate Variable (ECV) as defined by GCOS comprises sea ice concentration, thickness, and drift. Now in its second phase, the ESA CCI Sea Ice project is conducting the necessary research efforts to address sea ice drift. Accurate estimates of sea ice drift direction an... in the final product. This contribution reviews the motivation for the work, the plans for sea ice drift algorithm intercomparison and selection, and early results from our activity.
The algorithm for duration acceleration of repetitive projects considering the learning effect
Chen, Hongtao; Wang, Keke; Du, Yang; Wang, Liwan
2018-03-01
Repetitive project optimization problems are common in project scheduling. The Repetitive Scheduling Method (RSM) has many irreplaceable advantages in the field of repetitive projects. As the same or similar work is repeated, the proficiency of workers correspondingly rises from low to high, and workers gain experience and improve the efficiency of operations. This is the learning effect. The learning effect is one of the important factors affecting the optimization results in repetitive project scheduling. This paper analyzes the influence of the learning effect on the controlling path in RSM from two aspects: one is that the learning effect changes the controlling path, the other is that the learning effect does not change the controlling path. This paper proposes corresponding methods to accelerate duration for different types of critical activities and proposes an algorithm for duration acceleration based on the learning effect in RSM. The paper uses a graphical method to identify activity types and considers the impacts of the learning effect on duration. The method meets the requirement of duration while ensuring the lowest acceleration cost. A concrete bridge construction project is given to verify the effectiveness of the method. The results of this study will help project managers understand the impacts of the learning effect on repetitive projects, and use the learning effect to optimize project scheduling.
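A common way to quantify such a learning effect is the Wright learning-curve model, in which every doubling of repetitions multiplies the unit duration by a fixed rate; this is a standard assumption used here for illustration and may differ from the paper's exact formulation.

```python
import math

def unit_duration(t1, n, rate=0.9):
    """Wright learning curve: duration of the n-th repetition of an activity.
    Each doubling of the repetition count multiplies the unit duration by `rate`."""
    b = math.log(rate, 2)              # learning exponent (negative for rate < 1)
    return t1 * n ** b

def activity_total(t1, units, rate=0.9):
    """Total duration of a repetitive activity across all of its units."""
    return sum(unit_duration(t1, n, rate) for n in range(1, units + 1))

# e.g. a 10-day task repeated over 8 identical bridge segments on a 90% curve
total = activity_total(10.0, 8)        # noticeably less than the naive 80 days
```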
Development of an image reconstruction algorithm for a few number of projection data
Vieira, Wilson S.; Brandao, Luiz E.; Braz, Delson
2007-01-01
An image reconstruction algorithm was developed for specific cases of radiotracer applications in industry (rotating cylindrical mixers) involving a very small number of projection data. The algorithm was planned for imaging radioactive isotope distributions around the center of circular planes. The method consists of adapting the original expectation maximization (EM) algorithm to solve the ill-posed emission tomography inverse problem in order to reconstruct transversal 2D images of an object with only four projections. To achieve this aim, counts of photons emitted by selected radioactive sources in the plane, after they had been simulated using the commercial software MICROSHIELD 5.05, constitute the projections, and a computational code (SPECTEM) was developed to generate activity vectors or images related to those sources. SPECTEM is flexible enough to support simultaneous changes of the detectors' geometry, the medium under investigation and the properties of the gamma radiation. Since the code correctly followed the proposed method, good results were obtained, and they encouraged us to continue to the next step of the research: the validation of SPECTEM using experimental data to check its real performance. We expect this code to improve radiotracer methodology considerably, making the diagnosis of failures in industrial processes easier. (author)
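The EM adaptation described above can be illustrated with the standard MLEM multiplicative update on a toy underdetermined system; the system matrix and source distribution below are hypothetical and unrelated to SPECTEM.

```python
import numpy as np

rng = np.random.default_rng(2)

def mlem(A, y, iters=1000):
    """Multiplicative MLEM update for emission tomography:
    x <- x * A^T(y / (A x)) / (A^T 1); every iterate stays non-negative."""
    x = np.ones(A.shape[1])                   # flat non-negative start
    sens = A.sum(axis=0)                      # per-pixel sensitivity (column sums)
    for _ in range(iters):
        ratio = y / np.maximum(A @ x, 1e-12)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

# toy setup: a 16-pixel slice seen by only 8 projection rays (underdetermined,
# loosely mimicking the four-projection geometry of the abstract)
A = rng.uniform(0.0, 1.0, size=(8, 16))       # hypothetical geometric weights
x_true = np.zeros(16); x_true[[2, 9]] = [5.0, 3.0]
y = A @ x_true                                # simulated noiseless counts
x_hat = mlem(A, y)
residual = float(np.abs(A @ x_hat - y).max() / y.max())
```

With so few projections the solution is not unique, but MLEM still converges to a non-negative image consistent with the measured counts.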
Hermite-Padé projection to thermal radiative and variable ...
The combined effect of variable thermal conductivity and radiative heat transfer on steady flow of a conducting optically thin viscous fluid through a channel with sliding wall and non-uniform wall temperatures under the influence of an externally applied homogeneous magnetic field are analyzed in the present study.
Jose M. Gonzalez-Cava
2018-01-01
Full Text Available One of the main challenges in medicine is to guarantee an appropriate drug supply according to the real needs of patients. Closed-loop strategies have been widely used to develop automatic solutions based on feedback variables. However, when the variable of interest cannot be directly measured or there is a lack of knowledge about the underlying process, the problem becomes difficult to solve. In this research, a novel algorithm to approach this problem is presented. The main objective of this study is to provide a new general algorithm capable of determining the influence of a certain clinical variable on the decision making process for drug supply, and then defining an automatic system able to guide the process considering this information. Thus, this new technique will provide a way to validate a given physiological signal as a feedback variable for drug titration. In addition, the result of the algorithm, in terms of fuzzy rules and membership functions, will define a fuzzy-based decision system for the drug delivery process. The method proposed is based on a Fuzzy Inference System whose structure is obtained through a decision tree algorithm. A four-step methodology is then developed: data collection, preprocessing, Fuzzy Inference System generation, and the validation of results. To test this methodology, the analgesia control scenario was analysed. Specifically, the viability of the Analgesia Nociception Index (ANI) as a guiding variable for the analgesic process during surgical interventions was studied. Real data was obtained from fifteen patients undergoing cholecystectomy surgery.
A New Bi-Directional Projection Model Based on Pythagorean Uncertain Linguistic Variable
Huidong Wang
2018-04-01
Full Text Available To solve multi-attribute decision making (MADM) problems with Pythagorean uncertain linguistic variables, an extended bi-directional projection method is proposed. First, we utilize the linguistic scale function to convert the uncertain linguistic variables and subsequently provide a new projection model. Then, to depict the bi-directional projection method, the formative vectors of alternatives and ideal alternatives are defined. Furthermore, a comparative analysis with the projection model is conducted to show the superiority of the bi-directional projection method. Finally, an example of a graduate's job choice is given to demonstrate the effectiveness and feasibility of the proposed method.
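For intuition, the scalar projection at the core of such models, and one simple symmetric "bi-directional" closeness built from it, can be sketched numerically; the closeness formula below is an illustrative assumption, not the paper's Pythagorean uncertain linguistic model.

```python
import numpy as np

def projection(a, b):
    """Scalar projection of vector a onto vector b: (a . b) / |b|."""
    return float(a @ b) / float(np.linalg.norm(b))

def bidirectional_closeness(a, ideal):
    """Illustrative symmetric closeness: penalize the gap between each scalar
    projection and the length of the vector being projected onto. This exact
    formula is an assumption for illustration, not the paper's model."""
    d1 = abs(projection(a, ideal) - np.linalg.norm(ideal))
    d2 = abs(projection(ideal, a) - np.linalg.norm(a))
    return 1.0 / (1.0 + d1 + d2)

# hypothetical crisp attribute scores (stand-ins for defuzzified linguistic values)
ideal = np.array([0.9, 0.8, 0.95])
alternatives = {
    "A1": np.array([0.85, 0.75, 0.90]),
    "A2": np.array([0.60, 0.40, 0.70]),
}
scores = {name: bidirectional_closeness(v, ideal) for name, v in alternatives.items()}
best = max(scores, key=scores.get)            # the alternative nearest the ideal
```

Projecting in both directions penalizes alternatives that merely point toward the ideal without matching its magnitude, which is the intuition behind bi-directional projection measures.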
Lei Wang
2017-01-01
Full Text Available In real-world manufacturing systems, production scheduling systems are often implemented under random or dynamic events like machine failure, unexpected processing times, stochastic arrival of urgent orders, cancellation of orders, and so on. These dynamic events will lead the initial scheduling scheme to be non-optimal and/or infeasible. Hence, appropriate dynamic rescheduling approaches are needed to overcome the dynamic events. In this paper, we propose a dynamic rescheduling method based on a variable interval rescheduling strategy (VIRS) to deal with the dynamic flexible job shop scheduling problem considering machine failure, urgent job arrival, and job damage as disruptions. On the other hand, an improved genetic algorithm (GA) is proposed for minimizing makespan. In our improved GA, a mixed random initialization of the population, combining machine initialization and operation initialization with random initialization, is designed for generating a high-quality initial population. In addition, an elitist strategy (ES) and an improved population diversity strategy (IPDS) are used to avoid falling into a local optimal solution. Experimental results for static and several dynamic events in the FJSP show that our method is feasible and effective.
Sun, Benyuan; Yue, Shihong; Cui, Ziqiang; Wang, Huaxiang
2015-12-01
As an advanced measurement technique that is non-radiant, non-intrusive, rapid in response, and low in cost, the electrical tomography (ET) technique has developed rapidly in recent decades. The ET imaging algorithm plays an important role in the ET imaging process. Linear back projection (LBP) is the most widely used ET algorithm due to its advantages of a dynamic imaging process, real-time response, and easy realization. But the LBP algorithm is of low spatial resolution due to the natural 'soft field' effect and 'ill-posed solution' problems; thus its applicable range is greatly limited. In this paper, an original data decomposition method is proposed, in which every ET measurement is decomposed into two independent new data based on the positive and negative sensing areas of the measurement. Consequently, the number of total measurements is extended to twice the number of the original data, effectively reducing the 'ill-posed solution' problem. On the other hand, an index to measure the 'soft field' effect is proposed. The index shows that the decomposed data can distinguish between different contributions of various units (pixels) for any ET measurement, and can efficiently reduce the 'soft field' effect of the ET imaging process. In light of the data decomposition method, a new linear back projection algorithm is proposed to improve the spatial resolution of the ET image. A series of simulations and experiments are applied to validate the proposed algorithm in terms of real-time performance and the improvement of spatial resolution.
Precise Aperture-Dependent Motion Compensation with Frequency Domain Fast Back-Projection Algorithm
Man Zhang
2017-10-01
Precise azimuth-variant motion compensation (MOCO) is an essential and difficult task for high-resolution synthetic aperture radar (SAR) imagery. In conventional post-filtering approaches, residual azimuth-variant motion errors are generally compensated through a set of spatial post-filters, where the coarse-focused image is segmented into overlapped blocks according to the azimuth-dependent residual errors. However, the robustness of image-domain post-filtering approaches, such as the precise topography- and aperture-dependent motion compensation algorithm (PTA), declines when strong motion errors are present in the coarse-focused image. In that case, capturing the complete motion blurring function within each image block requires enlarging both the block size and the overlapped part, inevitably degrading efficiency and robustness. Herein, a frequency domain fast back-projection algorithm (FDFBPA) is introduced to deal with strong azimuth-variant motion errors. FDFBPA handles the azimuth-variant motion errors based on a precise azimuth spectrum expression in the azimuth wavenumber domain. First, a wavenumber-domain sub-aperture processing strategy is introduced to accelerate computation. The azimuth wavenumber spectrum is then partitioned into a set of wavenumber blocks, and each block is formed into a coarse-resolution sub-aperture image via the back-projection integral. The sub-aperture images are then fused in the azimuth wavenumber domain to obtain a full-resolution image. Moreover, the chirp-Z transform (CZT) is introduced to implement the sub-aperture back-projection integral, increasing the efficiency of the algorithm. By dispensing with the image-domain post-filtering strategy, the robustness of the proposed algorithm is improved. Both simulation and real-measured data experiments demonstrate the effectiveness and superiority of the proposal.
Umam, M. I. H.; Santosa, B.
2018-04-01
Combinatorial optimization has frequently been used to solve problems in science, engineering, and commercial applications. One combinatorial problem in the field of transportation is finding the shortest travel route from an initial point of departure to a point of destination, while minimizing travel cost and travel time. When the distance from one (initial) node to another (destination) node equals the distance of the return trip, the problem is known as the Traveling Salesman Problem (TSP); otherwise it is called an Asymmetric Traveling Salesman Problem (ATSP). One of the most recent optimization techniques is Symbiotic Organisms Search (SOS). This paper discusses how to hybridize the SOS algorithm with variable neighborhood search (SOS-VNS) so that it can be applied to the ATSP. The proposed mechanism adds variable neighborhood search as a local search to generate a better initial solution, and then modifies the parasitism phase by adapting a mutation mechanism. After modification, the performance of the SOS-VNS algorithm is evaluated on several data sets, and the results are compared with the best known solutions and with other algorithms such as PSO and the original SOS. The SOS-VNS algorithm shows better results in terms of convergence, divergence, and computing time.
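The local-search component hybridized into SOS can be illustrated with one simple neighborhood. This is a hedged sketch, not the paper's SOS-VNS: it uses only a single "relocate one city" neighborhood descent on an asymmetric toy instance, whereas the actual method cycles through several neighborhoods inside the SOS phases.

```python
# Sketch of a neighborhood-descent local search for the ATSP.
# The distance matrix is asymmetric (dist[a][b] != dist[b][a]);
# the instance below is a toy example for illustration.

def tour_cost(dist, tour):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def relocate_descent(dist, tour):
    """Repeatedly move a single city to its best position until no improvement."""
    best = list(tour)
    improved = True
    while improved:
        improved = False
        for i in range(len(best)):
            for j in range(len(best)):
                if i == j:
                    continue
                cand = best[:i] + best[i + 1:]
                cand.insert(j, best[i])
                if tour_cost(dist, cand) < tour_cost(dist, best):
                    best, improved = cand, True
    return best

D = [[0, 2, 9, 10],
     [1, 0, 6, 4],
     [15, 7, 0, 8],
     [6, 3, 12, 0]]
t = relocate_descent(D, [0, 1, 2, 3])
print(t, tour_cost(D, t))
```

A full VNS would switch to a different neighborhood (e.g. segment reversal or swap) whenever this descent stalls, which is the "variable" part of the scheme.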
Petersen, T. C.; Ringer, S. P.
2010-03-01
Upon discerning the mere shape of an imaged object, as portrayed by projected perimeters, the full three-dimensional scattering density may not be of particular interest. In this situation considerable simplifications to the reconstruction problem are possible, allowing calculations based upon geometric principles. Here we describe and provide an algorithm which reconstructs the three-dimensional morphology of specimens from tilt series of images for application to electron tomography. Our algorithm uses a differential approach to infer the intersection of projected tangent lines with surfaces which define boundaries between regions of different scattering densities within and around the perimeters of specimens. Details of the algorithm implementation are given and explained using reconstruction calculations from simulations, which are built into the code. An experimental application of the algorithm to a nano-sized aluminium tip is also presented to demonstrate practical analysis for a real specimen. Program summary: Program title: STOMO version 1.0. Catalogue identifier: AEFS_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFS_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 2988. No. of bytes in distributed program, including test data, etc.: 191 605. Distribution format: tar.gz. Programming language: C/C++. Computer: PC. Operating system: Windows XP. RAM: depends upon the size of experimental data as input, ranging from 200 Mb to 1.5 Gb. Supplementary material: sample output files, for the test run provided, are available. Classification: 7.4, 14. External routines: Dev-C++ (http://www.bloodshed.net/devcpp.html). Nature of problem: electron tomography of specimens for which conventional back projection may fail and/or data for which there is a limited angular
Genetic algorithm parameters tuning for resource-constrained project scheduling problem
Tian, Xingke; Yuan, Shengrui
2018-04-01
The Resource-Constrained Project Scheduling Problem (RCPSP) is an important class of scheduling problem. To achieve a given optimization goal, such as the shortest duration, the smallest cost, or resource balance, the start and finish of all tasks must be arranged while satisfying the project's timing and resource constraints. In theory the problem is NP-hard, and many model variants exist. Many combinatorial optimization problems, such as job shop scheduling and flow shop scheduling, are special cases of the RCPSP. The genetic algorithm (GA) has been applied to the classical RCPSP with remarkable results, and many scholars have studied improved genetic algorithms that solve the RCPSP more efficiently and accurately. However, these studies do not optimize the main parameters of the genetic algorithm; generally, the empirical method is used, which cannot guarantee optimal parameters. In this paper, we address this blind selection of parameters in the process of solving the RCPSP: we perform sampling analysis, establish a surrogate model, and ultimately solve for the optimal parameters.
A HYBRID HEURISTIC ALGORITHM FOR SOLVING THE RESOURCE CONSTRAINED PROJECT SCHEDULING PROBLEM (RCPSP)
Juan Carlos Rivera
The Resource Constrained Project Scheduling Problem (RCPSP) is a problem of great interest for the scientific community because it belongs to the class of NP-hard problems and no methods are known that can solve it exactly in polynomial time. For this reason heuristic methods are used to solve it efficiently, although there is no guarantee that an optimal solution will be obtained. This research presents a hybrid heuristic search algorithm that solves the RCPSP efficiently by combining elements of the Greedy Randomized Adaptive Search Procedure (GRASP), Scatter Search, and Justification. The efficiency gained is measured by accounting for the presence of the new elements, Justification and Scatter Search, added to the base GRASP algorithm. The algorithms are evaluated on three benchmark sets of instances from the online library PSPLIB: 480 instances with 30 activities, 480 with 60, and 600 with 120 activities, respectively. The solutions obtained by the developed algorithm for the instances with 30, 60, and 120 activities are compared with results obtained by other researchers internationally, among which it obtains a prominent place, according to Chen (2011).
Sentiment analysis enhancement with target variable in Kumar’s Algorithm
Arman, A. A.; Kawi, A. B.; Hurriyati, R.
2016-04-01
Sentiment analysis (also known as opinion mining) refers to the use of text analysis and computational linguistics to identify and extract subjective information from source materials. Sentiment analysis is widely applied to discussions in social media for many purposes, ranging from marketing and customer service to gauging public opinion on public policy. One popular algorithm for sentiment analysis is the Kumar algorithm, developed by Kumar and Sebastian. The Kumar algorithm can compute the sentiment score of a statement, sentence, or tweet, but it cannot determine the object or target to which the sentiment relates. This research proposes a solution to that challenge by adding a component representing the object or target to the existing Kumar algorithm. The result is a modified algorithm that can produce a sentiment score with respect to a given object or target.
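The idea of attaching a target to a lexicon-based score can be sketched very simply. This is a hedged illustration in the spirit of the extension, not the actual modified Kumar algorithm: the lexicon, tokenizer, and proximity window below are all assumptions.

```python
# Illustrative target-aware sentiment scoring: only lexicon words within a
# small window around the target token contribute to the score.
# LEXICON and the window size are toy assumptions, not from the paper.

LEXICON = {"good": 1, "great": 2, "bad": -1, "terrible": -2, "slow": -1}

def target_sentiment(text, target, window=2):
    """Sum lexicon scores of words within `window` tokens of the target."""
    tokens = text.lower().split()
    score = 0
    for i, tok in enumerate(tokens):
        if tok == target.lower():
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            score += sum(LEXICON.get(t, 0) for t in tokens[lo:hi])
    return score

text = "the camera is great but the battery is terrible"
print(target_sentiment(text, "camera"))
print(target_sentiment(text, "battery"))
```

On the sample sentence the two targets receive opposite scores, which is exactly the distinction a target-free sentence score cannot make.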
Im, Piljae [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Munk, Jeffrey D [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Gehl, Anthony C [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2015-06-01
A research project “Evaluation of Variable Refrigerant Flow (VRF) Systems Performance and the Enhanced Control Algorithm on Oak Ridge National Laboratory’s (ORNL’s) Flexible Research Platform” was performed to (1) install and validate the performance of Samsung VRF systems compared with the baseline rooftop unit (RTU) variable-air-volume (VAV) system and (2) evaluate the enhanced control algorithm for the VRF system on the two-story flexible research platform (FRP) in Oak Ridge, Tennessee. Based on the VRF system designed by Samsung and ORNL, the system was installed from February 18 through April 15, 2014. The final commissioning and system optimization were completed on June 2, 2014, and the initial test of system operation was started the following day, June 3, 2014. In addition, the enhanced control algorithm was implemented and updated on June 18. After a series of additional commissioning actions, the energy performance data from the RTU and the VRF system were monitored from July 7, 2014, through February 28, 2015. Data monitoring and analysis were performed for the cooling season and heating season separately, and a calibrated simulation model was developed and used to estimate the energy performance of the RTU and VRF systems. This final report includes discussion of the design and installation of the VRF system, the data monitoring and analysis plan, the cooling-season and heating-season data analyses, and the building energy modeling study.
Cabaret, S; Coppier, H; Rachid, A; Barillère, R; CERN. Geneva. IT Department
2007-01-01
The GCS (Gas Control System) project team at CERN uses a model-driven approach with a framework, UNICOS (UNified Industrial COntrol System), based on PLC (Programmable Logic Controller) and SCADA (Supervisory Control And Data Acquisition) technologies. The first UNICOS versions provided only a PID (Proportional Integral Derivative) controller, whereas the gas systems required more advanced control strategies. The MultiController is a new UNICOS object which provides the following advanced control algorithms: Smith Predictor, PFC (Predictive Function Control), RST* and GPC (Global Predictive Control). Its design is based on a monolithic entity with a global structure definition which is able to capture the desired set of parameters of any specific control algorithm supported by the object. The SCADA system, PVSS, supervises the MultiController operation. The PVSS interface provides users with a supervision faceplate; in particular it links any MultiController with recipes: the GCS experts are ab...
N. M. Okasha
2016-04-01
In this paper, an approach for conducting Reliability-Based Design Optimization (RBDO) of truss structures with linked discrete design variables is proposed. The sections of the truss members are selected from the AISC standard tables, and thus the design variables that represent the properties of each section are linked. Latin hypercube sampling is used in the evaluation of the structural reliability. The improved firefly algorithm is used for the optimization solution process. It was found that, in order to use the improved firefly algorithm to efficiently solve reliability-based design optimization problems with linked discrete design variables, it needs to be modified as proposed in this paper to accelerate its convergence.
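The Latin hypercube sampling used for the reliability evaluation can be sketched with the standard library alone. This is a minimal generic LHS, under the assumption of independent uniform variables on [0, 1); the paper's reliability analysis would map these samples through the actual variable distributions.

```python
import random

# Minimal Latin hypercube sampling sketch: each of the d dimensions is
# split into n equal strata, and each stratum is sampled exactly once,
# in an independently shuffled order per dimension.

def latin_hypercube(n, d, rng=random.Random(0)):
    samples = [[0.0] * d for _ in range(n)]
    for k in range(d):
        perm = list(range(n))
        rng.shuffle(perm)
        for i in range(n):
            samples[i][k] = (perm[i] + rng.random()) / n  # point inside stratum i
    return samples

for p in latin_hypercube(5, 2):
    print(p)
```

Compared with plain Monte Carlo, the stratification guarantees coverage of every marginal interval with only n samples, which is why LHS is popular for expensive reliability evaluations.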
Viet Tra
2017-12-01
This paper presents a novel method for diagnosing incipient bearing defects under variable operating speeds using convolutional neural networks (CNNs) trained via the stochastic diagonal Levenberg-Marquardt (S-DLM) algorithm. The CNNs take the spectral energy maps (SEMs) of the acoustic emission (AE) signals as inputs and automatically learn the optimal features, which yield the best discriminative models for diagnosing incipient bearing defects under variable operating speeds. The SEMs are two-dimensional maps that show the distribution of energy across different bands of the AE spectrum. It is hypothesized that varying a bearing's speed does not alter the overall shape of the AE spectrum; rather, it may only scale and translate it. Thus, at different speeds, the same defect would yield SEMs that are scaled and shifted versions of each other. This hypothesis is confirmed by the experimental results, where CNNs trained using the S-DLM algorithm yield significantly better diagnostic performance under variable operating speeds compared to existing methods. In this work, the performance of different training algorithms is also evaluated to select the best training algorithm for the CNNs. The proposed method is used to diagnose both single and compound defects at six different operating speeds.
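The banded-energy idea behind one row of an SEM can be sketched directly. This is a hedged illustration under assumptions: it takes an already-computed magnitude spectrum as input, and the band count and toy spectrum are invented for the example; the paper's SEMs are full two-dimensional maps built from real AE spectra.

```python
# Sketch of building one spectral-energy row: squared magnitudes of a
# spectrum are summed over a fixed number of contiguous frequency bands.

def spectral_energy_bands(spectrum, n_bands):
    """Split |X(f)| values into n_bands contiguous bands and sum energy per band."""
    n = len(spectrum)
    edges = [round(i * n / n_bands) for i in range(n_bands + 1)]
    return [sum(x * x for x in spectrum[edges[b]:edges[b + 1]])
            for b in range(n_bands)]

mag = [0.0, 1.0, 2.0, 0.5, 0.5, 3.0, 1.0, 0.0]  # toy magnitude spectrum
print(spectral_energy_bands(mag, 4))
```

Because each band aggregates a range of frequencies, a small scale or shift of the spectrum moves energy mostly within or between adjacent bands, which is consistent with the scale/translate hypothesis stated in the abstract.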
A novel algorithm for incompressible flow using only a coarse grid projection
Lentine, Michael
2010-07-26
Large scale fluid simulation can be difficult using existing techniques due to the high computational cost of using large grids. We present a novel technique for simulating detailed fluids quickly. Our technique coarsens the Eulerian fluid grid during the pressure solve, allowing for a fast implicit update but still maintaining the resolution obtained with a large grid. This allows our simulations to run at a fraction of the cost of existing techniques while still providing the fine scale structure and details obtained with a full projection. Our algorithm scales well to very large grids and large numbers of processors, allowing for high fidelity simulations that would otherwise be intractable. © 2010 ACM.
Improvement of image quality of holographic projection on tilted plane using iterative algorithm
Pang, Hui; Cao, Axiu; Wang, Jiazhou; Zhang, Man; Deng, Qiling
2017-12-01
Holographic image projection onto a tilted plane has important application prospects. In this paper, we propose a method to compute a phase-only hologram that can reconstruct a clear image on a tilted plane. By adding a constant phase to the target image on the inclined plane, the corresponding light field distribution on the plane parallel to the hologram plane is derived through a tilted diffraction calculation. The phase distribution of the hologram is then obtained by an iterative algorithm with amplitude and phase constraints. Simulations and an optical experiment are performed to show the effectiveness of the proposed method.
Fast cross-projection algorithm for reconstruction of seeds in prostate brachytherapy
Narayanan, Sreeram; Cho, Paul S.; Marks, Robert J. II
2002-01-01
A fast method of seed matching and reconstruction in prostate brachytherapy is proposed. Previous approaches have required all seeds to be matched with all other seeds in the other projections. The fast cross-projection algorithm for the reconstruction of seeds (Fast-CARS) allows a given seed to be matched against only a subset of seeds in the other projections. This subset lies in a proximal region centered about the projection, onto the other projection planes, of the line connecting the seed to its source. The proposed technique permits a significant reduction in computational overhead, as measured by the required number of matching tests. The number of multiplications and additions is also vastly reduced, at no trade-off in accuracy. Because of its speed, Fast-CARS can be used in applications requiring real-time performance, such as intraoperative dosimetry of prostate brachytherapy. Furthermore, the proposed method makes practical the use of a larger number of views, as opposed to previous techniques limited to a maximum of three views.
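The pruning geometry behind the subset selection can be sketched in 2D. This is a hedged illustration of the idea only: the projected source-to-seed line is given directly as two points, and the candidate coordinates and tolerance are invented; the real algorithm works with calibrated projection geometry in 3D.

```python
import math

# Geometric sketch of the Fast-CARS pruning idea: a seed in one projection
# is matched only against seeds lying near the projection of its
# source-to-seed line onto the other projection plane. Here that projected
# line is given by two 2D points, and candidates are pruned by
# perpendicular distance.

def dist_to_line(p, a, b):
    """Perpendicular distance from point p to the infinite line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((by - ay) * (px - ax) - (bx - ax) * (py - ay))
    return num / math.hypot(bx - ax, by - ay)

def prune_candidates(seeds, line_a, line_b, eps):
    return [s for s in seeds if dist_to_line(s, line_a, line_b) <= eps]

seeds_b = [(0.0, 0.1), (1.0, 2.5), (2.0, -0.05), (3.0, 1.2)]
near = prune_candidates(seeds_b, (0.0, 0.0), (1.0, 0.0), 0.2)
print(near)
```

Only the candidates near the epipolar-style line survive, so the number of full matching tests drops from all pairs to the few seeds inside the proximal band.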
An optimization algorithm for simulation-based planning of low-income housing projects
Mohamed M. Marzouk
2010-10-01
Construction of low-income housing projects is a repetitive process and is associated with uncertainties that arise from the unavailability of resources. Government agencies and/or contractors have to select a construction system that meets low-income housing project constraints, including project conditions and technical, financial, and time constraints. This research presents a framework, using computer simulation, which aids government authorities and contractors in the planning of low-income housing projects. The proposed framework estimates the time and cost required for the construction of low-income housing using pre-cast hollow core slabs with hollow-block bearing walls. Five main components constitute the proposed framework: a network builder module, a construction alternative selection module, a simulation module, an optimization module, and a reporting module. The optimization module, which utilizes a genetic algorithm, enables defining the different options and parameter ranges associated with low-income housing projects that influence the duration and total cost of the pre-cast hollow core with hollow-block bearing walls method. A computer prototype, named LIHouse_Sim, was developed in MS Visual Basic 6.0 as a proof of concept for the proposed framework. A numerical example is presented to demonstrate the use of the developed framework and to illustrate its essential features.
de Almeida, Valber Elias; de Araújo Gomes, Adriano; de Sousa Fernandes, David Douglas; Goicoechea, Héctor Casimiro; Galvão, Roberto Kawakami Harrop; Araújo, Mario Cesar Ugulino
2018-05-01
This paper proposes a new variable selection method for nonlinear multivariate calibration, combining the Successive Projections Algorithm for interval selection (iSPA) with the Kernel Partial Least Squares (Kernel-PLS) modelling technique. The proposed iSPA-Kernel-PLS algorithm is employed in a case study involving a Vis-NIR spectrometric dataset with complex nonlinear features. The analytical problem consists of determining Brix and sucrose content in samples from a sugar production system, on the basis of transflectance spectra. As compared to full-spectrum Kernel-PLS, the iSPA-Kernel-PLS models involve a smaller number of variables and display statistically significant superiority in terms of accuracy and/or bias in the predictions. Published by Elsevier B.V.
Cohen, Julien G. [Seoul National University College of Medicine, Department of Radiology, Seoul (Korea, Republic of); Seoul National University Medical Research Center, Institute of Radiation Medicine, Seoul (Korea, Republic of); Centre Hospitalier Universitaire de Grenoble, Clinique Universitaire de Radiologie et Imagerie Medicale (CURIM), Universite Grenoble Alpes, Grenoble Cedex 9 (France); Kim, Hyungjin; Park, Su Bin [Seoul National University College of Medicine, Department of Radiology, Seoul (Korea, Republic of); Seoul National University Medical Research Center, Institute of Radiation Medicine, Seoul (Korea, Republic of); Ginneken, Bram van [Radboud University Nijmegen Medical Center, Department of Radiology and Nuclear Medicine, Nijmegen (Netherlands); Ferretti, Gilbert R. [Centre Hospitalier Universitaire de Grenoble, Clinique Universitaire de Radiologie et Imagerie Medicale (CURIM), Universite Grenoble Alpes, Grenoble Cedex 9 (France); Institut A Bonniot, INSERM U 823, La Tronche (France); Lee, Chang Hyun [Seoul National University College of Medicine, Department of Radiology, Seoul (Korea, Republic of); Goo, Jin Mo; Park, Chang Min [Seoul National University College of Medicine, Department of Radiology, Seoul (Korea, Republic of); Seoul National University Medical Research Center, Institute of Radiation Medicine, Seoul (Korea, Republic of); Seoul National University College of Medicine, Cancer Research Institute, Seoul (Korea, Republic of)
2017-08-15
To evaluate the differences between filtered back projection (FBP) and model-based iterative reconstruction (MBIR) algorithms on semi-automatic measurements in subsolid nodules (SSNs). Unenhanced CT scans of 73 SSNs obtained using the same protocol and reconstructed with both FBP and MBIR algorithms were evaluated by two radiologists. Diameter, mean attenuation, mass and volume of whole nodules and of their solid components were measured. Intra- and interobserver variability and differences between FBP and MBIR were then evaluated using the Bland-Altman method and Wilcoxon tests. Longest diameter, volume and mass of nodules and of their solid components were significantly higher using MBIR (p < 0.05), with mean differences of 1.1% (limits of agreement, -6.4 to 8.5%), 3.2% (-20.9 to 27.3%) and 2.9% (-16.9 to 22.7%) and 3.2% (-20.5 to 27%), 6.3% (-51.9 to 64.6%), 6.6% (-50.1 to 63.3%), respectively. The limits of agreement between FBP and MBIR were within the range of intra- and interobserver variability for both algorithms with respect to the diameter, volume and mass of nodules and their solid components. There were no significant differences in intra- or interobserver variability between FBP and MBIR (p > 0.05). Semi-automatic measurements of SSNs significantly differed between FBP and MBIR; however, the differences were within the range of measurement variability. (orig.)
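The limits-of-agreement figures quoted above come from a standard Bland-Altman computation, which can be sketched in a few lines. The sample values below are illustrative placeholders, not data from the study.

```python
import statistics

# Minimal Bland-Altman sketch: mean difference and 95% limits of agreement
# between two measurement methods (e.g. FBP vs MBIR volumes).

def bland_altman(x, y):
    """Return (mean difference, lower LoA, upper LoA) for paired measurements."""
    diffs = [b - a for a, b in zip(x, y)]
    mean_diff = statistics.mean(diffs)
    sd = statistics.stdev(diffs)           # sample standard deviation
    return mean_diff, mean_diff - 1.96 * sd, mean_diff + 1.96 * sd

fbp  = [100.0, 150.0, 210.0, 90.0]   # toy volumes, method 1
mbir = [104.0, 153.0, 215.0, 93.0]   # toy volumes, method 2
print(bland_altman(fbp, mbir))
```

In the study this interval is then compared against the intra- and interobserver variability intervals; agreement is judged acceptable when the method difference stays within the observer variability.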
Alexandr Victorovich Budylskiy
2014-06-01
This article considers a multicriteria optimization approach that uses a modified genetic algorithm to solve the project-scheduling problem under duration and cost constraints. The work surveys the available options for solving this problem and justifies the multicriteria optimization approach. The study describes the Pareto principles used in the modified genetic algorithm. We formulate the mathematical model of the project-scheduling problem and introduce the modified genetic algorithm, the ranking strategies, and the elitism approaches. The article includes an example.
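The Pareto principles mentioned in the abstract reduce, at their core, to a dominance test over the (duration, cost) objective pair. A minimal sketch, with toy schedule objectives invented for illustration:

```python
# Pareto dominance for minimization over (duration, cost):
# a dominates b if it is no worse in both objectives and strictly
# better in at least one.

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep solutions not dominated by any other solution."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

schedules = [(10, 500), (12, 450), (11, 480), (13, 470), (10, 520)]
print(pareto_front(schedules))
```

In a Pareto-ranked GA, this front receives rank 1, the front of the remainder rank 2, and so on; the ranking then drives selection instead of a single scalar fitness.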
Projection decomposition algorithm for dual-energy computed tomography via deep neural network.
Xu, Yifu; Yan, Bin; Chen, Jian; Zeng, Lei; Li, Lei
2018-03-15
Dual-energy computed tomography (DECT) has been widely used to improve the identification of substances from different spectral information. Decomposition of mixed test samples into two materials relies on a well-calibrated material decomposition function. This work aims to establish and validate a data-driven algorithm for estimating the decomposition function. A deep neural network (DNN) consisting of two sub-nets is proposed to solve the projection decomposition problem. The compressing sub-net, essentially a stacked auto-encoder (SAE), learns a compact representation of the energy spectrum. The decomposing sub-net, with a two-layer structure, fits the nonlinear transform between energy projections and basis material thicknesses. The proposed DNN not only delivers images with lower standard deviation and higher quality on both simulated and real data, but also yields the best performance in cases mixed with photon noise. Moreover, the DNN costs only 0.4 s to generate a decomposition solution at a 360 × 512 scale, which is about 200 times faster than the competing algorithms. The DNN model is applicable to decomposition tasks with different dual energies. Experimental results demonstrated the strong function-fitting ability of the DNN. Thus, the deep learning paradigm provides a promising approach to solving the nonlinear problems in DECT.
Analytical algorithm for the generation of polygonal projection data for tomographic reconstruction
Davis, G.R.
1996-01-01
Tomographic reconstruction algorithms and filters can be tested using a mathematical phantom, that is, a computer program which takes numerical data as its input and outputs derived projection data. The input data are usually in the form of pixel "densities" over a regular grid, or the positions and dimensions of simple geometrical objects. The former technique allows a greater variety of objects to be simulated, but is less suitable when very small (relative to the ray spacing) features are to be simulated. The second technique is normally used to simulate biological specimens, typically a human skull modelled as a number of ellipses. This is not suitable for simulating non-biological specimens with features such as straight edges and fine cracks. We have therefore devised an algorithm for simulating objects described as a series of polygons. These polygons, or parts of them, may be smaller than the ray spacing, and there is no limit, except that imposed by computing resources, on the complexity, number or superposition of the polygons. A simple test of such a phantom, reconstructed using the filtered back-projection method, revealed reconstruction artefacts not normally seen with "biological" phantoms. (orig.)
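The core geometric step of a polygonal phantom is computing, for each ray, how much of it lies inside a uniform polygon; the projection value is then the density times that chord length. A minimal sketch under simplifying assumptions (horizontal rays, a simple polygon given as a vertex list; the analytical algorithm in the paper is more general):

```python
# Sketch of the polygonal-phantom projection idea: the projection of a
# horizontal ray y = c through a uniform polygon equals density times the
# total chord length cut by the polygon boundary.

def chord_length(polygon, y):
    """Total length of the horizontal line y inside the polygon."""
    xs = []
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        if (y1 <= y < y2) or (y2 <= y < y1):      # edge crosses the ray
            t = (y - y1) / (y2 - y1)
            xs.append(x1 + t * (x2 - x1))
    xs.sort()
    # Crossings alternate inside/outside, so pair them up.
    return sum(xs[i + 1] - xs[i] for i in range(0, len(xs) - 1, 2))

square = [(0.0, 0.0), (2.0, 0.0), (2.0, 1.0), (0.0, 1.0)]
print(chord_length(square, 0.5))
```

Because the chord length is exact for arbitrarily thin polygons, features far smaller than the ray spacing contribute correctly to the projection data, which is the property the abstract emphasizes.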
Predicting patchy particle crystals: variable box shape simulations and evolutionary algorithms.
Bianchi, Emanuela; Doppelbauer, Günther; Filion, Laura; Dijkstra, Marjolein; Kahl, Gerhard
2012-06-07
We consider several patchy particle models that have been proposed in the literature, and we investigate their candidate crystal structures in a systematic way. We compare two different algorithms for predicting crystal structures: (i) an approach based on Monte Carlo simulations in the isobaric-isothermal ensemble and (ii) an optimization technique based on ideas from evolutionary algorithms. We show that the two methods are equally successful and provide consistent results on crystalline phases of patchy particle systems.
Environmental setting for biological variability at PTEPBN project of West Kalimantan
Suwadji, E.; Endrawanto
1995-01-01
Biological variability data are needed when preparing an environmental evaluation study for an environmental impact assessment. The activity was carried out at the PTEPBN project to determine and predict the environmental setting of the outgoing and ongoing project, as well as of project operations after construction. Methods to determine the environmental setting with respect to biological variability are proposed. Based on the observations of the terrestrial and aquatic flora and fauna, it can be concluded that the terrestrial flora rated fair to good, the terrestrial fauna fair to good, and the aquatic flora and fauna good. (author). 8 refs, 7 tabs, 1 fig
New algorithm using only one variable measurement applied to a maximum power point tracker
Salas, V.; Olias, E.; Lazaro, A.; Barrado, A. [University Carlos III de Madrid (Spain). Dept. of Electronic Technology
2005-05-01
A novel algorithm is proposed for seeking the maximum power point of a photovoltaic (PV) array at any temperature and solar irradiation level, needing only the PV current value. Satisfactory theoretical and experimental results are presented, obtained when the algorithm was included in a 100 W 24 V PV buck converter prototype using an inexpensive microcontroller. The load of the system was a battery and a resistance. The main advantage of this new maximum power point tracking (MPPT) scheme, compared with others, is that it uses only the measurement of the photovoltaic current, I_PV. (author)
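A current-only tracking loop can be sketched as a perturb-and-observe scheme. This is a hedged illustration, not the paper's algorithm: `read_current` and the quadratic duty-to-current plant are hypothetical stand-ins, and the rationale (with a battery clamping the output voltage, maximizing output current approximates maximizing power) is an assumption made for the sketch.

```python
# Perturb-and-observe MPPT sketch sensing only current: perturb the duty
# cycle, keep the direction if current rose, reverse it if current fell.

def mppt_current_only(read_current, duty=0.5, step=0.02, iters=50):
    last_i = read_current(duty)
    direction = 1
    for _ in range(iters):
        duty = min(0.95, max(0.05, duty + direction * step))
        i_now = read_current(duty)
        if i_now < last_i:            # current dropped: reverse perturbation
            direction = -direction
        last_i = i_now
    return duty

# Hypothetical plant: output current peaks at duty = 0.7.
peak = lambda d: 5.0 - 40.0 * (d - 0.7) ** 2
print(mppt_current_only(peak))
```

The loop settles into a small oscillation around the maximum, which is the characteristic behavior (and known drawback) of perturb-and-observe trackers.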
The variable refractive index correction algorithm based on a stereo light microscope
Pei, W; Zhu, Y Y
2010-01-01
Refraction occurs at least twice, on the top and bottom surfaces of the plastic plate covering the microchannel in a microfluidic chip. This refraction, together with the nonlinear model of a stereo light microscope (SLM), may severely affect measurement accuracy. In this paper, we study the correlation between the optical paths of the SLM and present an algorithm to correct for the refractive index based on the SLM. Our algorithm quantifies the influence of the cover plate and the double optical paths on the measurement accuracy, and realizes non-destructive, non-contact, and precise 3D measurement of a transparent, closed container.
Ren, Zhong; Liu, Guodong; Huang, Zhen
2012-11-01
Image reconstruction is a key step in medical imaging (MI), and the performance of the reconstruction algorithm determines the quality and resolution of the reconstructed image. Although other algorithms exist, filtered back-projection (FBP) is still the classical and most commonly used algorithm in clinical MI. In the FBP algorithm, filtering of the original projection data is a key step for suppressing artifacts in the reconstructed image. Simple use of classical filters such as the Shepp-Logan (SL) and Ram-Lak (RL) filters has drawbacks and limitations in practice, especially for projection data polluted by non-stationary random noise. Therefore, improved wavelet denoising combined with a parallel-beam FBP algorithm is used in this paper to enhance the quality of the reconstructed image. In the experiments, the reconstruction results of the improved wavelet denoising were compared with those of other methods (direct FBP, mean filtering combined with FBP, and median filtering combined with FBP). To determine the optimum reconstruction, different algorithms and different wavelet bases combined with three filters were tested. Experimental results show that the reconstruction quality of the improved FBP algorithm is better than that of the others. Comparing the results of the different algorithms using two evaluation criteria, mean-square error (MSE) and peak signal-to-noise ratio (PSNR), the reconstruction of the improved FBP based on db2 and a Hanning filter at decomposition scale 2 was best: its MSE was lower and its PSNR higher than the others. Therefore, this improved FBP algorithm has potential value in medical imaging.
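At the heart of the wavelet denoising step is a thresholding rule applied to the detail coefficients before the inverse transform. A minimal sketch of the standard soft-thresholding rule, with an illustrative threshold and toy coefficients (the paper's method additionally chooses the wavelet basis, e.g. db2, and the decomposition scale):

```python
# Soft-thresholding rule used in wavelet denoising: detail coefficients
# with magnitude below the threshold are zeroed; larger ones are shrunk
# toward zero by the threshold amount.

def soft_threshold(coeffs, t):
    return [0.0 if abs(c) <= t else (c - t if c > 0 else c + t)
            for c in coeffs]

detail = [0.1, -0.4, 2.0, -3.0, 0.05]   # toy detail coefficients
print(soft_threshold(detail, 0.5))
```

Small coefficients, which are dominated by noise, are removed, while large coefficients carrying image structure survive (slightly shrunk); the denoised projections are then filtered and back-projected as usual.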
Predicting patchy particle crystals: variable box shape simulations and evolutionary algorithms
Bianchi, E.; Doppelbauer, G.; Filion, L.C.; Dijkstra, M.; Kahl, G.
2012-01-01
We consider several patchy particle models that have been proposed in literature and we investigate their candidate crystal structures in a systematic way. We compare two different algorithms for predicting crystal structures: (i) an approach based on Monte Carlo simulations in the
A variable-depth search algorithm for recursive bi-partitioning of signal flow graphs
de Kock, E.A.; Aarts, E.H.L.; Essink, G.; Jansen, R.E.J.; Korst, J.H.M.
1995-01-01
We discuss the use of local search techniques for mapping video algorithms onto programmable high-performance video signal processors. The mapping problem is very complex due to many constraints that need to be satisfied in order to obtain a feasible solution. The complexity is reduced by
Infinite projected entangled-pair state algorithm for ruby and triangle-honeycomb lattices
Jahromi, Saeed S.; Orús, Román; Kargarian, Mehdi; Langari, Abdollah
2018-03-01
The infinite projected entangled-pair state (iPEPS) algorithm is one of the most efficient techniques for studying the ground-state properties of two-dimensional quantum lattice Hamiltonians in the thermodynamic limit. Here, we show how the algorithm can be adapted to explore nearest-neighbor local Hamiltonians on the ruby and triangle-honeycomb lattices, using the corner transfer matrix (CTM) renormalization group for 2D tensor network contraction. Additionally, we show how the CTM method can be used to calculate the ground-state fidelity per lattice site and the boundary density operator and entanglement entropy (EE) on an infinite cylinder. As a benchmark, we apply the iPEPS method to the ruby model with anisotropic interactions and explore the ground-state properties of the system. We further extract the phase diagram of the model in different regimes of the couplings by measuring two-point correlators, ground-state fidelity, and EE on an infinite cylinder. Our phase diagram is in agreement with previous studies of the model by exact diagonalization.
Osser, David N; Roudsari, Mohsen Jalali; Manschreck, Theo
2013-01-01
This article is an update of the algorithm for schizophrenia from the Psychopharmacology Algorithm Project at the Harvard South Shore Program. A literature review was conducted focusing on new data since the last published version (1999-2001). The first-line treatment recommendation for new-onset schizophrenia is amisulpride, aripiprazole, risperidone, or ziprasidone for four to six weeks. In some settings the trial could be shorter, considering that evidence of clear improvement with antipsychotics usually occurs within the first two weeks. If the trial of the first antipsychotic cannot be completed due to intolerance, try another until one of the four is tolerated and given an adequate trial. There should be evidence of bioavailability. If the response to this adequate trial is unsatisfactory, try a second monotherapy. If the response to this second adequate trial is also unsatisfactory, and if at least one of the first two trials was with risperidone, olanzapine, or a first-generation (typical) antipsychotic, then clozapine is recommended for the third trial. If neither trial was with any of these three options, a third trial prior to clozapine should occur, using one of those three. If the response to monotherapy with clozapine (with dose adjusted by using plasma levels) is unsatisfactory, consider adding risperidone, lamotrigine, or ECT. Beyond that point, there is little solid evidence to support further psychopharmacological treatment choices, though we do review possible options.
Pokhrel, Damodar; Murphy, Martin J.; Todor, Dorin A.; Weiss, Elisabeth; Williamson, Jeffrey F. [Department of Radiation Oncology, School of Medicine, Virginia Commonwealth University, Richmond, Virginia 23298 (United States)
2010-09-15
Purpose: To experimentally validate a new algorithm for reconstructing the 3D positions of implanted brachytherapy seeds from postoperatively acquired 2D cone-beam CT (CBCT) projection images. Methods: The iterative forward projection matching (IFPM) algorithm finds the 3D seed geometry that minimizes the sum of the squared intensity differences between computed projections of an initial estimate of the seed configuration and radiographic projections of the implant. In-house machined phantoms, containing arrays of 12 and 72 seeds, respectively, are used to validate this method. Also, four 103Pd postimplant patients are scanned using an ACUITY digital simulator. Three to ten x-ray images are selected from the CBCT projection set and processed to create binary seed-only images. To quantify IFPM accuracy, the reconstructed seed positions are forward projected and overlaid on the measured seed images to find the nearest-neighbor distance between measured and computed seed positions for each image pair. Also, the estimated 3D seed coordinates are compared to the known seed positions in the phantom and to clinically obtained VariSeed planning coordinates for the patient data. Results: For the phantom study, the seed localization error is (0.58 ± 0.33) mm. For all four patient cases, the mean registration error is better than 1 mm when compared against the measured seed projections. IFPM converges in 20-28 iterations, with a computation time of about 1.9-2.8 min/iteration on a 1 GHz processor. Conclusions: The IFPM algorithm avoids the need to match corresponding seeds in each projection as required by standard back-projection methods. The authors' results demonstrate ~1 mm accuracy in reconstructing the 3D positions of brachytherapy seeds from the measured 2D projections. This algorithm also successfully localizes overlapping, clustered, and highly migrated seeds in the implant.
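The nearest-neighbor registration metric used to quantify IFPM accuracy can be sketched compactly (a simplified 2D illustration, not the authors' implementation; names are hypothetical):

```python
import numpy as np

def nn_registration_error(measured, computed):
    # For each measured seed position, find the distance to the nearest
    # computed position; return the mean over all seeds as a
    # registration-error proxy.
    m = np.asarray(measured, dtype=float)
    c = np.asarray(computed, dtype=float)
    d = np.linalg.norm(m[:, None, :] - c[None, :, :], axis=2)  # pairwise distances
    return float(d.min(axis=1).mean())
```

Because each measured seed is matched to its nearest projection rather than to a labeled partner, this metric sidesteps the explicit seed-correspondence problem the abstract mentions.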
Abraham Dandoussou
2017-05-01
Crystalline silicon photovoltaic modules are widely used as power supply sources in tropical areas where weather conditions change abruptly, and many MPPT algorithms have been implemented to improve their performance. On the other hand, it is well known that these power sources are nonlinear dipoles, so their intrinsic parameters may vary with irradiance and temperature. In this paper, the most widely used MPPT algorithms, i.e. Perturb and Observe (P&O), Incremental Conductance (INC) and Hill-Climbing (HC), are implemented using a Matlab®/Simulink® model of a crystalline silicon photovoltaic module whose intrinsic parameters were extracted by fitting the I(V) characteristic to experimental points. Comparing the simulation results, it is clear that the variable-step-size INC algorithm is more reliable than both the HC and P&O algorithms for this near-to-real Simulink® model of photovoltaic modules. With a 60 Wp photovoltaic module, the daily maximum power reaches 50.76 W, against 34.40 W when the photovoltaic parameters are fixed. Meanwhile, the daily average energy is 263 Wh/day against 195 Wh/day.
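One update of a variable-step INC loop can be sketched as follows (a simplified single-step version under assumed scaling constant `k`; the paper's Simulink implementation is not reproduced here):

```python
def inc_mppt_step(v, i, v_prev, i_prev, v_ref, k=0.05):
    # Variable-step incremental conductance: the reference voltage moves
    # by a step scaled with |dP/dV|, so steps shrink near the maximum
    # power point. dP/dV > 0 is equivalent to dI/dV > -I/V.
    dv, di = v - v_prev, i - i_prev
    dp = v * i - v_prev * i_prev
    step = k * abs(dp / dv) if dv != 0 else 0.0
    if dv == 0:
        # Voltage unchanged: react to the current change alone.
        if di > 0:
            v_ref += k
        elif di < 0:
            v_ref -= k
    elif di / dv > -i / v:      # left of the MPP: raise the voltage
        v_ref += step
    elif di / dv < -i / v:      # right of the MPP: lower the voltage
        v_ref -= step
    return v_ref
```

The |dP/dV|-scaled step is what distinguishes the variable-step variant from fixed-step INC: large corrections far from the maximum power point, vanishing corrections at it.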
Haverila, Matti
2010-01-01
We present an exploratory investigation of how managers conceptualize and perceive 'marketplace' variables in successful and unsuccessful New Product Development (NPD) projects, and explore the role that marketplace variables play in differentiating between successful and unsuccessful NPD outcomes. Limitations and future research directions are also discussed. Our findings indicate that managers perceive the marketplace in multiple ways during the NPD process and that differences exist in metric equivalence across successful and unsuccessful NPD projects.
MERIS burned area algorithm in the framework of the ESA Fire CCI Project
Oliva, P.; Calado, T.; Gonzalez, F.
2012-04-01
The Fire-CCI project aims at generating long and reliable time series of burned area (BA) maps based on existing information provided by European satellite sensors. In this context, a BA algorithm is currently being developed using the Medium Resolution Imaging Spectrometer (MERIS) sensor. The algorithm is being tested over a series of ten study sites with an area of 500×500 km² each, for the period 2003 to 2009. The study sites are located in Canada, Colombia, Brazil, Portugal, Angola, South Africa, Kazakhstan, Borneo, Russia and Australia and include a variety of vegetation types characterized by different fire regimes. The algorithm has to take into account several limiting aspects that range from the MERIS sensor characteristics (e.g. the lack of SWIR bands) to the noise present in the data. In addition, the lack of data in some areas, caused either by cloud contamination or because the sensor does not acquire full-resolution data over the study area, poses a limitation that is difficult to overcome. To address these drawbacks, the design of the BA algorithm is based on the analysis of maximum composites of spectral indices characterized by low values of temporal standard deviation in space and associated with MODIS hot spots. Accordingly, for each study site and year, composites of maximum BAI values are computed, and the corresponding Julian day of the maximum value and the number of observations in the period are registered per pixel. The temporal standard deviation is then computed for pixels with more than 10 observations, using spatial windows of 3×3 pixels. To classify the BAI values as burned or non-burned, statistics are extracted using the MODIS hot spots. A pixel is finally classified as burned if it satisfies the following conditions: (i) it is associated with hot spots; (ii) the BAI maximum is higher than a certain threshold; and (iii) the standard deviation of the Julian day is less than a given number of days.
Quantum Monte Carlo algorithms for electronic structure at the petascale; the endstation project.
Kim, J; Ceperley, D M; Purwanto, W; Walter, E J; Krakauer, H; Zhang, S W; Kent, P.R. C; Hennig, R G; Umrigar, C; Bajdich, M; Kolorenc, J; Mitas, L
2008-10-01
Over the past two decades, continuum quantum Monte Carlo (QMC) has proved to be an invaluable tool for predicting the properties of matter from fundamental principles. By solving the Schrödinger equation through a stochastic projection, it achieves the greatest accuracy and reliability of methods available for physical systems containing more than a few quantum particles. QMC enjoys scaling favorable compared to quantum chemical methods, with a computational effort that grows with the second or third power of system size. This accuracy and scalability has enabled scientific discovery across a broad spectrum of disciplines. The current methods perform very efficiently at the terascale. The quantum Monte Carlo Endstation project is a collaborative effort among researchers in the field to develop a new generation of algorithms, and their efficient implementations, that will take advantage of the upcoming petaflop architectures. Some aspects of these developments are discussed here. These tools will expand the accuracy, efficiency and range of QMC applicability and enable us to tackle challenges which are currently out of reach. The methods will be applied to several important problems including electronic and structural properties of water, transition metal oxides, nanosystems and ultracold atoms.
Leng Shuai; Zhuang Tingliang; Nett, Brian E; Chen Guanghong
2005-01-01
In this paper, we present a new algorithm designed for a specific data truncation problem in fan-beam CT. We consider a scanning configuration in which the fan-beam projection data are acquired from an asymmetrically positioned half-sized detector. Namely, the asymmetric detector covers only one half of the scanning field of view, so the acquired fan-beam projection data are truncated at every view angle. If an explicit data rebinning process is not invoked, this data acquisition configuration will wreak havoc on many known fan-beam image reconstruction schemes, including the standard filtered backprojection (FBP) algorithm and the super-short-scan FBP reconstruction algorithms. However, we demonstrate that a recently developed fan-beam image reconstruction algorithm, which reconstructs an image via filtering a backprojection image of differentiated projection data (FBPD), survives the above fan-beam data truncation problem. Namely, we may exactly reconstruct the whole image object using the truncated data acquired in a full-scan mode (2π angular range). We may also exactly reconstruct a small region of interest (ROI) using the truncated projection data acquired in a short-scan mode (less than 2π angular range). The most important characteristic of the proposed reconstruction scheme is that an explicit data rebinning process is not introduced. Numerical simulations were conducted to validate the new reconstruction algorithm.
Matti J. Haverila
2010-12-01
Our findings indicate that managers perceive the marketplace in multiple ways during the NPD process and also that differences exist in metric equivalence across successful and unsuccessful NPD projects. Also, although half of the marketplace variables are positively related to NPD success, managers in Finnish technology companies appear to attach higher relative importance to market attractiveness rather than market competitiveness variables. Marketplace variables appear to be less important than in the Korean and Chinese samples, and much more important than in the Canadian sample in the Mishra et al. study (1996), and similarly much more important than in the Cooper study (1979b).
Big Data, Algorithmic Regulation, and the History of the Cybersyn Project in Chile, 1971–1973
Katharina Loeber
2018-04-01
We are living in a data-driven society. Big Data and the Internet of Things are popular terms. Governments, universities and the private sector make great investments in collecting and storing data and in extracting new knowledge from these data banks. Technological enthusiasm runs throughout political discourses. "Algorithmic regulation" is defined as a form of data-driven governance. Big Data promises brand new opportunities in scientific research. At the same time, political criticism of data storage grows because of a lack of privacy protection and the centralization of data in the hands of governments and corporations. Calls for data-driven dynamic regulation have existed in the past. In Chile, cybernetic development led to the creation of Cybersyn, a computer system created to manage the socialist economy under the Allende government from 1971 to 1973. This contribution presents the Cybersyn project, created by Stafford Beer, who proposed a "liberty machine" in which expert knowledge would be grounded in data-guided policy. The paper focuses on the human-technological complex in society. The first section discusses whether the political and social environment can completely change attempts at algorithmic regulation; I deal specifically with the development of technological knowledge in Chile, a postcolonial state, and the relationship between citizens and data storage in a socialist state. In the second section, I examine which measures can lessen the danger data storage poses to privacy in a democratic society. Lastly, I discuss how much data-driven governance is required for democracy and political participation, and present a second case study: digital participatory budgeting (DPB) in Brazil.
Climate related diseases. Current regional variability and projections to the year 2100
Błażejczyk Krzysztof
2018-03-01
The health of individuals and societies depends on different factors, including the atmospheric conditions that influence humans in direct and indirect ways. The paper presents the regional variability of some climate related diseases (CRD) in Poland: salmonellosis intoxications, Lyme borreliosis, skin cancers (morbidity and mortality), influenza, overcooling deaths, as well as respiratory and circulatory mortality. The research consisted of two stages: (1) statistical modelling based on past data and (2) projections of CRD for three SRES scenarios of climate change (A1B, A2, B1) to the year 2100. Several simple and multiple regression models were found for the relationships between climate variables and CRD. The models were applied to project future levels of CRD. By the end of the 21st century we must expect increases in circulatory mortality, Lyme borreliosis infections, and skin cancer morbidity and mortality, and projected decreases in respiratory mortality, overcooling deaths and influenza infections.
von Davier, Matthias
2016-01-01
This report presents results on a parallel implementation of the expectation-maximization (EM) algorithm for multidimensional latent variable models. The developments presented here are based on code that parallelizes both the E step and the M step of the parallel-E parallel-M algorithm. Examples presented in this report include item response…
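The E step parallelizes naturally because responsibilities are computed independently per observation. A minimal sketch for a two-component 1-D Gaussian mixture (not the report's item-response models; the threading choice and names are illustrative assumptions):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def e_step_chunk(x, mu, sigma, pi):
    # Responsibilities for one chunk of data (two-component 1-D mixture).
    like = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
    w = like * pi
    return w / w.sum(axis=1, keepdims=True)

def parallel_e_step(x, mu, sigma, pi, workers=4):
    # Split the observations, compute responsibilities per chunk
    # concurrently, and stack the results back together.
    chunks = np.array_split(x, workers)
    with ThreadPoolExecutor(max_workers=workers) as ex:
        parts = list(ex.map(lambda c: e_step_chunk(c, mu, sigma, pi), chunks))
    return np.vstack(parts)
```

The M step can be parallelized the same way by accumulating per-chunk sufficient statistics, which is the "parallel-E parallel-M" pattern the report describes.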
Senol Emir
2016-04-01
In a data set, an outlier refers to a data point that is considerably different from the others. Detecting outliers provides useful application-specific insights and leads to choosing the right prediction models. Outlier detection (also known as anomaly detection or novelty detection) has been studied in statistics and machine learning for a long time, and is an essential preprocessing step of the data mining process. In this study, the outlier detection step in the data mining process is applied to identify the top 20 outlier firms. Three outlier detection algorithms are utilized using fundamental analysis variables of firms listed in Borsa Istanbul for the 2011-2014 period. The results of each algorithm are presented and compared. Findings show that 15 different firms are identified by the three different outlier detection methods. KCHOL and SAHOL have the greatest number of appearances (12 observations) among these firms. By investigating the results, it is concluded that each of the three algorithms produces a different outlier firm list due to differences in their approaches to outlier detection.
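The study's three algorithms are not named in this abstract; as one simple illustration of ranking the top-k outlier firms from a feature matrix, a robust modified-z-score approach could look like this (names and the 0.6745 MAD scaling are standard conventions, not taken from the study):

```python
import numpy as np

def top_k_outliers(X, k=3):
    # Rank rows (firms) by a robust outlier score: the largest absolute
    # modified z-score across features, using median and MAD so that the
    # outliers themselves do not distort the baseline.
    X = np.asarray(X, dtype=float)
    med = np.median(X, axis=0)
    mad = np.median(np.abs(X - med), axis=0)
    mad[mad == 0] = 1e-9                      # guard against constant columns
    score = np.max(0.6745 * np.abs(X - med) / mad, axis=1)
    order = np.argsort(score)[::-1]
    return order[:k], score[order[:k]]
```

Different scoring rules (distance-based, density-based, model-based) would rank firms differently, which is consistent with the study's finding that the three methods produce different outlier lists.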
Cheng, Jun-Hu; Jin, Huali; Liu, Zhiwei
2018-01-01
The feasibility of developing a multispectral imaging method using important wavelengths from hyperspectral images selected by genetic algorithm (GA), successive projection algorithm (SPA) and regression coefficient (RC) methods for modeling and predicting protein content in peanut kernels was investigated for the first time. A partial least squares regression (PLSR) calibration model was established between the spectral data from the selected optimal wavelengths and the reference measured protein content, which ranged from 23.46% to 28.43%. The RC-PLSR model established using eight key wavelengths (1153, 1567, 1972, 2143, 2288, 2339, 2389 and 2446 nm) showed the best predictive results, with a coefficient of determination of prediction (R2P) of 0.901, root mean square error of prediction (RMSEP) of 0.108, and residual predictive deviation (RPD) of 2.32. Based on the best model and image processing algorithms, distribution maps of protein content were generated. The overall results of this study indicated that developing a rapid, online multispectral imaging system using the feature wavelengths and PLSR analysis is feasible for determination of the protein content in peanut kernels.
Gan, L.; Yang, F.; Shi, Y. F.; He, H. L.
2017-11-01
Many applications, such as the rapidly developing electric vehicles, demand to know how much continuous and instantaneous power a battery can provide. Given their large-scale application, lithium-ion batteries are taken as our research object. Many experiments are designed to obtain the lithium-ion battery parameters and to ensure the relevance and reliability of the estimation. To evaluate the continuous and instantaneous load capability of a battery, called state-of-function (SOF), this paper proposes a fuzzy logic algorithm based on battery state-of-charge (SOC), state-of-health (SOH) and C-rate parameters. Simulation and experimental results indicate that the proposed approach is suitable for battery SOF estimation.
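The flavor of such a fuzzy SOF estimator can be sketched with hand-coded memberships. Note the membership functions, rule base and output levels below are invented for illustration only; the paper's actual rule base (which also includes C-rate) is not reproduced here:

```python
def tri(x, a, b, c):
    # Triangular membership function peaking at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_sof(soc, soh):
    # Toy two-input fuzzy estimate of state-of-function in [0, 1]:
    # rules fire on low/high memberships (min as the fuzzy AND) and are
    # combined by a weighted average, a crude centroid defuzzification.
    low_soc, high_soc = tri(soc, -0.6, 0.0, 0.6), tri(soc, 0.4, 1.0, 1.6)
    low_soh, high_soh = tri(soh, -0.6, 0.0, 0.6), tri(soh, 0.4, 1.0, 1.6)
    rules = [
        (min(high_soc, high_soh), 1.0),   # charged and healthy -> high SOF
        (min(high_soc, low_soh), 0.5),
        (min(low_soc, high_soh), 0.4),
        (min(low_soc, low_soh), 0.0),     # empty and degraded -> low SOF
    ]
    num = sum(w * z for w, z in rules)
    den = sum(w for w, z in rules)
    return num / den if den else 0.0
```

A real implementation would add C-rate as a third input and tune the membership shapes against the battery experiments described in the abstract.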
Horvath, Sarah; Myers, Sam; Ahlers, Johnathon; Barnes, Jason W.
2017-10-01
Stellar seismic activity produces variations in brightness that introduce oscillations into transit light curves, which can create challenges for traditional fitting models. These oscillations disrupt baseline stellar flux values and potentially mask transits. We develop a model that removes these oscillations from transit light curves by minimizing the significance of each oscillation in frequency space. By removing stellar variability, we prepare each light curve for traditional fitting techniques. We apply our model to the δ Scuti star KOI-976 and demonstrate that our variability-subtraction routine successfully allows for measuring bulk system characteristics using traditional light-curve fitting. These results open a new window for characterizing bulk system parameters of planets orbiting seismically active stars.
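The authors minimize each oscillation's significance in frequency space; a crude single-frequency version of the idea (not their routine, which handles many modes and preserves the transit signal) can be sketched as:

```python
import numpy as np

def remove_dominant_oscillation(flux):
    # Suppress the strongest periodic component in a light curve by
    # zeroing its rfft bin, then inverting the transform.
    f = np.fft.rfft(flux - flux.mean())
    k = np.argmax(np.abs(f[1:])) + 1   # skip the DC bin
    f[k] = 0.0
    return np.fft.irfft(f, n=len(flux)) + flux.mean()
```

Iterating this over successive dominant peaks, while testing each for statistical significance, is the general shape of a frequency-space variability-subtraction routine.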
Mohammad Hossein Sadeghi
2013-08-01
In this paper, two different sub-problems are considered in solving a resource-constrained project scheduling problem (RCPSP): (i) assignment of modes to tasks and (ii) scheduling of these tasks so as to minimize the makespan of the project. A modified electromagnetism-like algorithm deals with the first problem by creating an assignment of modes to activities. This list is used to generate a project schedule. When a new assignment is made, it is necessary to fix all mode-dependent requirements of the project activities and to generate a random schedule with the serial SGS method. A local search then optimizes the sequence of the activities. This paper also proposes a new penalty function for solutions that are infeasible with respect to non-renewable resources. The performance of the proposed algorithm has been compared with the best algorithms published so far, on the basis of CPU time and number-of-generated-schedules stopping criteria. The reported results indicate excellent performance of the algorithm.
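The serial SGS (schedule-generation scheme) mentioned above can be sketched for a single renewable resource (a simplified deterministic version; the paper's randomized, multi-mode variant is not reproduced, and all names are illustrative):

```python
def serial_sgs(durations, demands, preds, capacity):
    # Serial schedule-generation scheme: take activities in a
    # precedence-feasible order and start each at the earliest time
    # where all predecessors are finished and the resource profile
    # stays within capacity.
    horizon = sum(durations)
    usage = [0] * (horizon + max(durations) + 1)
    start, scheduled = {}, set()
    while len(scheduled) < len(durations):
        # pick the lowest-index eligible activity (all preds scheduled)
        j = next(a for a in range(len(durations))
                 if a not in scheduled and all(p in scheduled for p in preds[a]))
        t = max((start[p] + durations[p] for p in preds[j]), default=0)
        while any(usage[t + u] + demands[j] > capacity for u in range(durations[j])):
            t += 1
        for u in range(durations[j]):
            usage[t + u] += demands[j]
        start[j] = t
        scheduled.add(j)
    return start
```

In the paper's setting the eligible activity is drawn according to a (randomized, locally searched) priority list rather than by index, and mode-dependent durations and demands come from the electromagnetism-like mode assignment.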
Xiaodong Zhuge; Palenstijn, Willem Jan; Batenburg, Kees Joost
2016-01-01
In this paper, we present a novel iterative reconstruction algorithm for discrete tomography (DT) named total variation regularized discrete algebraic reconstruction technique (TVR-DART) with automated gray value estimation. This algorithm is more robust and automated than the original DART algorithm, and is aimed at imaging of objects consisting of only a few different material compositions, each corresponding to a different gray value in the reconstruction. By exploiting two types of prior knowledge of the scanned object simultaneously, TVR-DART solves the discrete reconstruction problem within an optimization framework inspired by compressive sensing to steer the current reconstruction toward a solution with the specified number of discrete gray values. The gray values and the thresholds are estimated as the reconstruction improves through iterations. Extensive experiments on simulated data, experimental μCT, and electron tomography data sets show that TVR-DART is capable of providing more accurate reconstructions than existing algorithms under noisy conditions from a small number of projection images and/or from a small angular range. Furthermore, the new algorithm requires less effort on parameter tuning compared with the original DART algorithm. With TVR-DART, we aim to provide the tomography community with an easy-to-use and robust algorithm for DT.
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
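The basic concepts named above (selection, crossover, mutation) fit in a few lines. A bare-bones sketch on bit strings (all parameters are illustrative defaults, not from the project's tool):

```python
import random

def ga_maximize(fitness, n_bits=16, pop_size=30, gens=60, p_mut=0.02, seed=1):
    # Minimal genetic algorithm: tournament selection, one-point
    # crossover, bit-flip mutation, with the best individual retained.
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(gens):
        nxt = []
        while len(nxt) < pop_size:
            # two parents via size-3 tournaments ("survival of the fittest")
            a, b = (max(rng.sample(pop, 3), key=fitness) for _ in range(2))
            cut = rng.randrange(1, n_bits)
            child = a[:cut] + b[cut:]
            child = [g ^ 1 if rng.random() < p_mut else g for g in child]
            nxt.append(child)
        pop = nxt
        best = max(pop + [best], key=fitness)
    return best, fitness(best)
```

With `fitness=sum` this is the classic one-max problem; real applications substitute a domain-specific fitness function, which is precisely what a general-purpose GA tool lets users plug in.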
Shang, Ce; Chaloupka, Frank J; Fong, Geoffrey T; Thompson, Mary; O'Connor, Richard J
2015-07-01
Recent studies have shown that more opportunities exist for tax avoidance when the cigarette excise tax structure departs from a uniform specific structure. However, the association between tax structure and cigarette price variability has not been thoroughly studied in the existing literature. This study examines how cigarette tax structure is associated with price variability. The variability of self-reported prices is measured using ratios of the differences between higher and lower prices to the median price, such as the IQR-to-median ratio. We used survey data taken from the International Tobacco Control Policy Evaluation (ITC) Project in 17 countries to conduct the analysis. Cigarette prices were derived using individual purchase information and aggregated to price variability measures for each surveyed country and wave. The effect of tax structures on price variability was estimated using Generalised Estimating Equations after adjusting for year and country attributes. Our study provides empirical evidence of a relationship between tax structure and cigarette price variability. We find that, compared to the specific uniform tax structure, mixed uniform and tiered (specific, ad valorem or mixed) structures are associated with greater price variability (p≤0.01). Moreover, while a greater share of the specific component in total excise taxes is associated with lower price variability (p≤0.05), a tiered tax structure is associated with greater price variability (p≤0.01). The results suggest that a uniform and specific tax structure is the most effective tax structure for reducing tobacco consumption and prevalence by limiting price variability and decreasing opportunities for tax avoidance.
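The IQR-to-median ratio named above is a one-line computation (a sketch; the study's exact percentile conventions are not specified in the abstract):

```python
import numpy as np

def iqr_to_median(prices):
    # Price-variability measure: (75th - 25th percentile) / median.
    p = np.asarray(prices, dtype=float)
    q1, med, q3 = np.percentile(p, [25, 50, 75])
    return (q3 - q1) / med
```

Dividing the IQR by the median makes the measure unit-free, so price spreads can be compared across countries with different currencies and price levels.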
Saha, Ashirbani; Harowicz, Michael R; Mazurowski, Maciej A
2018-04-16
To review features used in MRI radiomics of breast cancer and to study their inter-reader stability, we implemented 529 algorithmic features that can be extracted from tumor and fibroglandular tissue (FGT) in breast MRIs. The features were identified based on a review of the existing literature with consideration of their usage, prognostic ability, and uniqueness, and the set was then extended so that it comprehensively describes breast cancer imaging characteristics. The features were classified into 10 groups based on the type of data used to extract them and the type of calculation being performed. For the assessment of inter-reader variability, 4 fellowship-trained readers annotated tumors on pre-operative dynamic contrast-enhanced MRIs for 50 breast cancer patients. Based on the annotations, an algorithm automatically segmented the image and extracted all features, resulting in one set of features for each reader. For a given feature, the inter-reader stability was defined as the intra-class correlation coefficient (ICC) computed using the feature values obtained through all readers for all cases. The average inter-reader stability for all features was 0.8474 (95% CI: 0.8068-0.8858). The mean inter-reader stability was lower for tumor-based features (0.6348, 95% CI: 0.5391-0.7257) than for FGT-based features (0.9984, 95% CI: 0.9970-0.9992). The feature group with the highest inter-reader stability quantifies breast and FGT volume; the group with the lowest quantifies variations in tumor enhancement. Breast MRI radiomics features thus vary widely in their stability in the presence of inter-reader variability, and appropriate measures need to be taken to reduce this variability in tumor-based radiomics. This article is protected by copyright. All rights reserved.
Modified Pressure-Correction Projection Methods: Open Boundary and Variable Time Stepping
Bonito, Andrea
2014-10-31
© Springer International Publishing Switzerland 2015. In this paper, we design and study two modifications of the first order standard pressure increment projection scheme for the Stokes system. The first scheme improves the existing schemes in the case of open boundary condition by modifying the pressure increment boundary condition, thereby minimizing the pressure boundary layer and recovering the optimal first order decay. The second scheme allows for variable time stepping. It turns out that the straightforward modification to variable time stepping leads to unstable schemes. The proposed scheme is not only stable but also exhibits the optimal first order decay. Numerical computations illustrating the theoretical estimates are provided for both new schemes.
Hua Zhang
2016-09-01
The estimation of spatially variable actual evapotranspiration (AET) is a critical challenge to regional water resources management. We propose a new remote sensing method, the Triangle Algorithm with Variable Edges (TAVE), to generate daily AET estimates based on satellite-derived land surface temperature and the vegetation index NDVI. TAVE captures heterogeneity in AET across elevation zones and permits variability in determining local values of the wet and dry end-member classes (known as edges). Compared to traditional triangle methods, TAVE introduces three unique features: (i) the discretization of the domain as overlapping elevation zones; (ii) a variable wet edge that is a function of elevation zone; and (iii) variable values of a combined-effect parameter (accounting for aerodynamic and surface resistance, vapor pressure gradient, and soil moisture availability) along both wet and dry edges. With these features, TAVE effectively addresses the combined influence of terrain and water stress on AET estimates in semi-arid environments. We demonstrate the effectiveness of this method in one of the driest countries in the world, Jordan, and compare it to a traditional triangle method (TA) and a global AET product (MOD16) over different land use types. In irrigated agricultural lands, TAVE matched the results of the single crop coefficient model (−3%), in contrast to substantial overestimation by TA (+234%) and underestimation by MOD16 (−50%). In forested (non-irrigated), water-consuming regions, TA and MOD16 produced AET average deviations 15.5 times and −3.5 times those based on TAVE. As TAVE has a simple structure and low data requirements, it provides an efficient means to satisfy the increasing need for evapotranspiration estimation in data-scarce semi-arid regions. This study constitutes a much needed step towards the satellite-based quantification of agricultural water consumption in Jordan.
Methanol from TES global observations: retrieval algorithm and seasonal and spatial variability
K. E. Cady-Pereira
2012-09-01
We present a detailed description of the TES methanol (CH_{3}OH) retrieval algorithm, along with initial global results showing the seasonal and spatial distribution of methanol in the lower troposphere. The full development of the TES methanol retrieval is described, including microwindow selection, error analysis, and the utilization of a priori and initial guess information provided by the GEOS-Chem chemical transport model. Retrieval simulations and a sensitivity analysis using the developed retrieval strategy show that TES: (i) generally provides less than 1.0 piece of information, (ii) is sensitive in the lower troposphere with peak sensitivity typically occurring between ~900–700 hPa (~1–3 km) at a vertical resolution of ~5 km, (iii) has a limit of detectability between 0.5 and 1.0 ppbv Representative Volume Mixing Ratio (RVMR) depending on the atmospheric conditions, corresponding roughly to a profile with a maximum concentration of at least 1 to 2 ppbv, and (iv) in a simulation environment has a mean bias of 0.16 ppbv with a standard deviation of 0.34 ppbv. Applying the newly derived TES retrieval globally and comparing the results with corresponding GEOS-Chem output, we find generally consistent large-scale patterns between the two. However, TES often reveals higher methanol concentrations than simulated in the Northern Hemisphere spring, summer and fall. In the Southern Hemisphere, the TES methanol observations indicate a model overestimate over the bulk of South America from December through July, and a model underestimate during the biomass burning season.
Vecharynski, Eugene; Yang, Chao; Pask, John E.
2015-01-01
We present an iterative algorithm for computing an invariant subspace associated with the algebraically smallest eigenvalues of a large sparse or structured Hermitian matrix A. We are interested in the case in which the dimension of the invariant subspace is large (e.g., over several hundreds or thousands) even though it may still be small relative to the dimension of A. These problems arise from, for example, density functional theory (DFT) based electronic structure calculations for complex materials. The key feature of our algorithm is that it performs fewer Rayleigh–Ritz calculations compared to existing algorithms such as the locally optimal block preconditioned conjugate gradient or the Davidson algorithm. It is a block algorithm, and hence can take advantage of efficient BLAS3 operations and be implemented with multiple levels of concurrency. We discuss a number of practical issues that must be addressed in order to implement the algorithm efficiently on a high-performance computer.
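The Rayleigh–Ritz projection at the heart of such eigensolvers is compact enough to sketch. Below is a textbook (unpreconditioned, unaccelerated) subspace iteration for the algebraically smallest eigenvalues, using a crude spectral shift; it illustrates the general technique only, not the authors' algorithm.

```python
import numpy as np

def rayleigh_ritz(A, V):
    """One Rayleigh-Ritz projection: extract approximate eigenpairs
    of a symmetric matrix A from the span of the columns of V."""
    Q, _ = np.linalg.qr(V)          # orthonormal basis of the subspace
    H = Q.T @ A @ Q                 # small projected matrix
    theta, S = np.linalg.eigh(H)    # Ritz values (ascending) and vectors
    return theta, Q @ S             # Ritz pairs of A

def smallest_invariant_subspace(A, k, iters=200, seed=0):
    """Basic subspace iteration for the k algebraically smallest
    eigenvalues: shift so the target end of the spectrum dominates,
    then alternate multiplication with Rayleigh-Ritz extraction."""
    n = A.shape[0]
    sigma = np.linalg.norm(A, 1)    # crude upper bound on |lambda|
    B = sigma * np.eye(n) - A       # smallest of A -> largest of B
    V = np.random.default_rng(seed).standard_normal((n, k))
    for _ in range(iters):
        theta, V = rayleigh_ritz(B, B @ V)
    # undo the shift; reverse so eigenvalues of A come out ascending
    return sigma - theta[::-1], V[:, ::-1]
```

One Rayleigh–Ritz call per iteration is exactly the cost the paper's method reduces; preconditioning and blocking strategies change the constants, not this basic structure.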
Workshop on algorithms for macromolecular modeling. Final project report, June 1, 1994--May 31, 1995
Leimkuhler, B.; Hermans, J.; Skeel, R.D.
1995-07-01
A workshop was held on algorithms and parallel implementations for macromolecular dynamics, protein folding, and structural refinement. This document contains abstracts and brief reports from that workshop.
A finite state projection algorithm for the stationary solution of the chemical master equation
Gupta, Ankit; Mikelson, Jan; Khammash, Mustafa
2017-10-01
The chemical master equation (CME) is frequently used in systems biology to quantify the effects of stochastic fluctuations that arise due to biomolecular species with low copy numbers. The CME is a system of ordinary differential equations that describes the evolution of probability density for each population vector in the state-space of the stochastic reaction dynamics. For many examples of interest, this state-space is infinite, making it difficult to obtain exact solutions of the CME. To deal with this problem, the Finite State Projection (FSP) algorithm was developed by Munsky and Khammash [J. Chem. Phys. 124(4), 044104 (2006)], to provide approximate solutions to the CME by truncating the state-space. The FSP works well for finite time-periods but it cannot be used for estimating the stationary solutions of CMEs, which are often of interest in systems biology. The aim of this paper is to develop a version of FSP which we refer to as the stationary FSP (sFSP) that allows one to obtain accurate approximations of the stationary solutions of a CME by solving a finite linear-algebraic system that yields the stationary distribution of a continuous-time Markov chain over the truncated state-space. We derive bounds for the approximation error incurred by sFSP and we establish that under certain stability conditions, these errors can be made arbitrarily small by appropriately expanding the truncated state-space. We provide several examples to illustrate our sFSP method and demonstrate its efficiency in estimating the stationary distributions. In particular, we show that using a quantized tensor-train implementation of our sFSP method, problems admitting more than 100 × 10⁶ states can be efficiently solved.
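The core idea, truncating the state-space and solving one linear system for the stationary distribution, can be sketched for a birth-death process. Note the hard truncation below is a simplification: sFSP proper redirects the outgoing probability flux to a designated state.

```python
import numpy as np

def stationary_truncated_ctmc(birth, death, n_max):
    """Stationary distribution of a birth-death CTMC truncated to
    the finite state-space {0, ..., n_max}: assemble the generator
    Q and solve pi Q = 0 together with sum(pi) = 1 as one linear
    system. (sFSP proper redirects truncated outflow to a
    designated state rather than simply dropping it.)"""
    n = n_max + 1
    Q = np.zeros((n, n))
    for i in range(n):
        if i < n_max:
            Q[i, i + 1] = birth(i)   # reaction i -> i+1
        if i > 0:
            Q[i, i - 1] = death(i)   # reaction i -> i-1
        Q[i, i] = -Q[i].sum()        # generator rows sum to zero
    # Replace one redundant balance equation with normalization.
    M = np.vstack([Q.T[:-1], np.ones(n)])
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(M, b)
```

For a constitutive birth process with linear degradation (birth rate λ, death rate iμ), the exact stationary law is Poisson(λ/μ), which the truncated solve reproduces once n_max covers the bulk of the mass.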
Advanced algorithms for ionosphere modelling in GNSS applications within AUDITOR project
Goss, Andreas; Erdogan, Eren; Schmidt, Michael; Garcia-Rigo, Alberto; Hernandez-Pajares, Manuel; Lyu, Haixia; Nohutcu, Metin
2017-04-01
The H2020 project AUDITOR of the European Union started on January 1st 2016, with the participation of several European institutions and universities. The goal of the project is the implementation of a novel precise positioning technique, based on augmentation data, in a customized GNSS receiver. Therefore, more sophisticated ionospheric models have to be developed and implemented to increase the accuracy in real time on the user side. Since the service should be available to the public, we use public data from GNSS networks (e.g. IGS, EUREF). The contributions of DGFI-TUM and UPC focus on the development of high-accuracy GNSS algorithms to provide enhanced ionospheric corrections. This includes two major issues: 1. The existing mapping function to convert the slant total electron content (STEC) measurable by GNSS into the vertical total electron content (VTEC) is based on a so-called single layer model (SLM), where all electrons are concentrated on an infinitesimally thin layer at a fixed height (between 350 and 450 kilometers). This quantity is called the effective ionospheric height (EIH). An improvement of the mapping function shall be achieved by estimating more realistic numerical values for the EIH by means of a voxel-based tomographic model (TOMION). 2. The ionospheric observations are distributed rather unevenly over the globe and within specific regions. This inhomogeneous distribution is handled by data-adaptive B-spline approaches, with polynomial and trigonometric functions used for the latitude and longitude representations to provide high-resolution VTEC maps for global and regional purposes. A Kalman filter is used as a sequential estimator. The unknown parameters of the filter state vector are composed of the B-spline coefficients as well as the satellite and receiver DCBs. The resulting high-accuracy ionosphere products will be disseminated to the users via downlink from a dedicated server to a receiver site. In this context, an appropriate
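The sequential estimation step described above is a standard linear Kalman filter; a minimal predict/update cycle with a random-walk state model (F = I) might look as follows. The matrices here are placeholders, not the AUDITOR implementation.

```python
import numpy as np

def kalman_step(x, P, z, H, R, Q):
    """One predict/update cycle of a linear Kalman filter with a
    random-walk state model (F = I): the state (e.g. B-spline VTEC
    coefficients plus DCBs) is carried over, its uncertainty is
    inflated by the process noise Q, and it is then corrected by
    observations z with observation matrix H and noise covariance R."""
    P = P + Q                                 # predict (random walk)
    S = H @ P @ H.T + R                       # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x = x + K @ (z - H @ x)                   # state update
    P = (np.eye(len(x)) - K @ H) @ P          # covariance update
    return x, P
```

Each new batch of STEC observations would supply a fresh H and z; the filter then tracks the slowly varying coefficients without re-solving the full history.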
Qiang Yu
Texture enhancement is one of the most important techniques in digital image processing and plays an essential role in medical imaging, since textures carry discriminating information. Most image texture enhancement techniques use classical integral-order differential mask operators or fractional differential mask operators with a fixed fractional order. These masks can produce excessive enhancement of low spatial frequency content, insufficient enhancement of large spatial frequency content, and retention of high spatial frequency noise. To improve upon existing approaches to texture enhancement, we derive an improved Variable Order Fractional Centered Difference (VOFCD) scheme which dynamically adjusts the fractional differential order instead of fixing it. The new VOFCD technique is based on the second order Riesz fractional differential operator using a Lagrange 3-point interpolation formula, for both grey scale and colour image enhancement. We then use this method to enhance photographs and a set of medical images related to patients with stroke and Parkinson's disease. The experiments show that our improved fractional differential mask has a higher signal-to-noise ratio than the other fractional differential mask operators. Based on the corresponding quantitative analysis we conclude that the new method offers superior texture enhancement over existing methods.
Aungkulanon, P.; Luangpaiboon, P.
2010-10-01
Modern engineering problems are large and complicated. Effective solution procedures for such problems can be categorised into exact optimisation and meta-heuristic algorithms. When the best decision-variable levels cannot be determined exactly from the sets of available alternatives, meta-heuristics offer experience-based techniques that rapidly aid problem solving, learning and discovery, in the hope of obtaining a more efficient or more robust procedure. All meta-heuristics provide auxiliary procedures in terms of their own toolbox functions, and it has been shown that their effectiveness depends almost exclusively on these auxiliary functions. In fact, an auxiliary procedure from one meta-heuristic can be implemented in others. The well-known meta-heuristics of the harmony search algorithm (HSA) and the shuffled frog-leaping algorithm (SFLA) are compared with their hybridisations. HSA produces a near-optimal solution by analogy with the perfect state of harmony reached in the improvisation process of musicians. The SFLA, a population-based meta-heuristic, is a cooperative search metaphor inspired by natural memetics; it includes elements of local search and global information exchange. This study presents solution procedures for constrained and unconstrained problems with single- and multi-peak surfaces, including a curved ridge surface. Both meta-heuristics are modified via the variable neighbourhood search method (VNSM) philosophy, including a modified simplex method (MSM). The basic idea is the change of neighbourhoods during the search for a better solution: the hybridisations proceed by a descent method to a local minimum and then explore, systematically or at random, increasingly distant neighbourhoods of this local solution. The results show that the variant of HSA with VNSM and MSM performs better in terms of the mean and variance of the design points and yields.
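A minimal harmony search, the HSA half of the comparison, can be sketched as follows. The parameter values (memory size, HMCR, PAR, bandwidth) are illustrative defaults, not those used in the study.

```python
import random

def harmony_search(f, bounds, hms=20, hmcr=0.9, par=0.3,
                   bw=0.05, iters=2000, seed=42):
    """Minimal harmony search minimizing f over box bounds.

    Each new 'harmony' draws every variable either from the harmony
    memory (probability hmcr, with a pitch adjustment of relative
    width bw at probability par) or uniformly at random; it replaces
    the worst memory member whenever it improves on it."""
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds]
              for _ in range(hms)]
    scores = [f(h) for h in memory]
    for _ in range(iters):
        new = []
        for j, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:
                v = rng.choice(memory)[j]          # memory consideration
                if rng.random() < par:
                    v += rng.uniform(-bw, bw) * (hi - lo)  # pitch adjust
            else:
                v = rng.uniform(lo, hi)            # random selection
            new.append(min(max(v, lo), hi))
        s = f(new)
        worst = max(range(hms), key=scores.__getitem__)
        if s < scores[worst]:
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=scores.__getitem__)
    return memory[best], scores[best]
```

The hybridisations studied above would replace or augment the pitch-adjustment step with VNSM/MSM neighbourhood moves; the memory-update skeleton stays the same.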
P. Hashemi
2018-01-01
Construction sites are accident-prone locations, and safety management therefore plays an important role in these workplaces. This study presents an adaptive algorithm for performance assessment of project management with respect to resilience engineering and job security in a large construction site, where the required data are collected using questionnaires. The presented algorithm is composed of radial basis function (RBF) networks, multi-layer perceptron artificial neural networks (ANN-MLP), and statistical tests. The results indicate that preparedness, fault-tolerance, and flexibility are the most effective factors on overall efficiency. Moreover, job security and resilience engineering have similar statistical impacts on overall system efficiency. The results are verified and validated by the proposed algorithm.
Dodge, C.T.; Rong, J. [MD Anderson Cancer Center, Houston, TX (United States); Dodge, C.W. [Methodist Hospital, Houston, TX (United States)
2014-06-15
Purpose: To determine how filtered back-projection (FBP), adaptive statistical (ASiR), and model based (MBIR) iterative reconstruction algorithms affect the measured modulation transfer functions (MTFs) of variable-contrast targets over a wide range of clinically applicable dose levels. Methods: The Catphan 600 CTP401 module, surrounded by an oval, fat-equivalent ring to mimic patient size/shape, was scanned on a GE HD750 CT scanner at 1, 2, 3, 6, 12 and 24 mGy CTDIvol levels with typical patient scan parameters: 120kVp, 0.8s, 40mm beam width, large SFOV, 2.5mm thickness, 0.984 pitch. The images were reconstructed using GE's Standard kernel with FBP; 20%, 40% and 70% ASiR; and MBIR. A task-based MTF (MTFtask) was computed for six cylindrical targets: 2 low-contrast (Polystyrene, LDPE), 2 medium-contrast (Delrin, PMP), and 2 high-contrast (Teflon, air). MTFtask was used to compare the performance of reconstruction algorithms with decreasing CTDIvol from 24mGy, which is currently used in the clinic. Results: For the air target and 75% dose savings (6 mGy), MBIR MTFtask at 5 lp/cm measured 0.24, compared to 0.20 for 70% ASiR and 0.11 for FBP. Overall, for both high-contrast targets, MBIR MTFtask improved with increasing CTDIvol and consistently outperformed ASiR and FBP near the system's Nyquist frequency. Conversely, for Polystyrene at 6 mGy, MBIR (0.10) and 70% ASiR (0.07) MTFtask was lower than for FBP (0.18). For medium and low-contrast targets, FBP remains the best overall algorithm for improved resolution at low CTDIvol (1–6 mGy) levels, whereas MBIR is comparable at higher dose levels (12–24 mGy). Conclusion: MBIR improved the MTF of small, high-contrast targets compared to FBP and ASiR at doses of 50%–12.5% of those currently used in the clinic. However, for imaging low- and medium-contrast targets, FBP performed the best across all dose levels. For assessing MTF from different reconstruction algorithms, task-based MTF measurements are necessary.
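The MTF computation underlying such comparisons can be sketched from an edge spread function. This is the generic ESF-to-MTF recipe (differentiate, Fourier transform, normalize), not the task-based estimator used in the study.

```python
import numpy as np

def mtf_from_esf(esf, dx=1.0):
    """MTF from a (noise-free, oversampled) edge spread function:
    differentiate to obtain the line spread function, take the
    Fourier transform magnitude, and normalize to unity at zero
    frequency. Returns (spatial frequencies, MTF)."""
    lsf = np.gradient(esf, dx)            # ESF -> LSF
    otf = np.fft.rfft(lsf)                # LSF -> OTF
    mtf = np.abs(otf) / np.abs(otf[0])    # normalize: MTF(0) = 1
    freqs = np.fft.rfftfreq(len(lsf), d=dx)
    return freqs, mtf
```

For a Gaussian blur of width sigma, the analytic MTF is exp(-2 (pi f sigma)^2), which makes a convenient check.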
Alfonso Antón
2009-01-01
Alfonso Antón1,2,3, Marta Castany1,2, Marta Pazos-Lopez1,2, Ruben Cuadrado3, Ana Flores3, Miguel Castilla1. 1Hospital de la Esperanza-Hospital del Mar (IMAS), Barcelona, Spain; 2Institut Català de la Retina (ICR), Glaucoma Department, Barcelona, Spain; 3Instituto Universitario de Oftalmobiología Aplicada (IOBA), Universidad de Valladolid, Valladolid, Spain. Purpose: To assess the reproducibility of retinal nerve fiber layer (RNFL) measurements and the variability of the probabilistic classification algorithm in normal, hypertensive and glaucomatous eyes using Stratus optical coherence tomography (OCT). Methods: Forty-nine eyes (13 normal, 17 ocular hypertensive [OHT] and 19 glaucomatous) of 49 subjects were included in this study. RNFL was determined with Stratus OCT using the standard protocol RNFL thickness 3.4. Three different images of each eye were taken consecutively during the same session. To evaluate OCT reproducibility, the coefficient of variation (COV) and intraclass correlation coefficient (ICC) were calculated for the average thickness (AvgT), superior average thickness (Savg), and inferior average thickness (Iavg) parameters. The variability of the results of the probabilistic classification algorithm, based on the OCT normative database, was also analyzed. The percentage of eyes with changes in the assigned category was calculated for each group. Results: The 50th percentile of COV was 2.96%, 4.00%, and 4.31% for AvgT, Savg, and Iavg, respectively. The glaucoma group presented the largest COV for all three parameters (3.87%, 5.55%, 7.82%). ICCs were greater than 0.75 for almost all measures (except the inferior thickness parameter in the normal group; ICC = 0.64, 95% CI 0.334–0.857). Regarding the probabilistic classification algorithm for the three parameters (AvgT, Savg, Iavg), the percentage of eyes without color-code category changes among the three images was as follows: normal group, 100%, 84.6% and 92%; OHT group, 89.5%, 52.7%, 79%; and
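The two reproducibility statistics can be computed directly from the repeated scans. The ICC below is the one-way random-effects form ICC(1,1), an assumption on my part, since the abstract does not state which ICC variant was used.

```python
import numpy as np

def cov_percent(reps):
    """Coefficient of variation (%) across repeated measurements
    of one eye: 100 * sample SD / mean."""
    reps = np.asarray(reps, float)
    return 100.0 * reps.std(ddof=1) / reps.mean()

def icc_oneway(data):
    """One-way random-effects ICC(1,1) for an (eyes x repeats)
    array, via the usual ANOVA mean squares."""
    data = np.asarray(data, float)
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)
    msb = k * ((row_means - grand) ** 2).sum() / (n - 1)   # between eyes
    msw = ((data - row_means[:, None]) ** 2).sum() / (n * (k - 1))  # within
    return (msb - msw) / (msb + (k - 1) * msw)
```

Small within-eye scatter relative to between-eye differences drives the ICC toward 1, matching the >0.75 values reported above.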
Grier, C. J.; Brandt, W. N.; Trump, J. R.; Schneider, D. P. [Department of Astronomy and Astrophysics and Institute for Gravitation and the Cosmos, The Pennsylvania State University, 525 Davey Laboratory, University Park, PA 16802 (United States); Hall, P. B. [Department of Physics and Astronomy, York University, Toronto, ON M3J 1P3 (Canada); Shen, Yue [Carnegie Observatories, 813 Santa Barbara Street, Pasadena, CA 91101 (United States); Vivek, M.; Dawson, K. S. [Department of Physics and Astronomy, University of Utah, Salt Lake City, UT 84112 (United States); Ak, N. Filiz [Faculty of Sciences, Department of Astronomy and Space Sciences, Erciyes University, 38039 Kayseri (Turkey); Chen, Yuguang [Department of Astronomy, School of Physics, Peking University, Beijing 100871 (China); Denney, K. D.; Kochanek, C. S.; Peterson, B. M. [Department of Astronomy, The Ohio State University, 140 West 18th Avenue, Columbus, OH 43210 (United States); Green, Paul J. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Jiang, Linhua [Kavli Institute for Astronomy and Astrophysics, Peking University, Beijing 100871 (China); McGreer, Ian D. [Steward Observatory, The University of Arizona, 933 North Cherry Avenue, Tucson, AZ 85721-0065 (United States); Pâris, I. [INAF-Osservatorio Astronomico di Trieste, Via G. B. Tiepolo 11, I-34131 Trieste (Italy); Tao, Charling [Centre de Physique des Particules de Marseille, Aix-Marseille Universite, CNRS /IN2P3, 163, avenue de Luminy, Case 902, F-13288 Marseille Cedex 09 (France); Wood-Vasey, W. M. [PITT PACC, Department of Physics and Astronomy, University of Pittsburgh, 3941 O’Hara Street, Pittsburgh, PA 15260 (United States); Bizyaev, Dmitry, E-mail: grier@psu.edu [Apache Point Observatory and New Mexico State University, P.O. Box 59, Sunspot, NM, 88349-0059 (United States); and others
2015-06-10
We report the discovery of rapid variations of a high-velocity C iv broad absorption line trough in the quasar SDSS J141007.74+541203.3. This object was intensively observed in 2014 as a part of the Sloan Digital Sky Survey Reverberation Mapping Project, during which 32 epochs of spectroscopy were obtained with the Baryon Oscillation Spectroscopic Survey spectrograph. We observe significant (>4σ) variability in the equivalent width (EW) of the broad (∼4000 km s⁻¹ wide) C iv trough on rest-frame timescales as short as 1.20 days (∼29 hr), the shortest broad absorption line variability timescale yet reported. The EW varied by ∼10% on these short timescales, and by about a factor of two over the duration of the campaign. We evaluate several potential causes of the variability, concluding that the most likely cause is a rapid response to changes in the incident ionizing continuum. If the outflow is at a radius where the recombination rate is higher than the ionization rate, the timescale of variability places a lower limit on the density of the absorbing gas of n_e ≳ 3.9 × 10⁵ cm⁻³. The broad absorption line variability characteristics of this quasar are consistent with those observed in previous studies of quasars, indicating that such short-term variability may in fact be common and thus can be used to learn about outflow characteristics and contributions to quasar/host-galaxy feedback scenarios.
Final report on LDRD project: Simulation/optimization tools for system variability analysis
R. L. Bierbaum; R. F. Billau; J. E. Campbell; K. D. Marx; R. J. Sikorski; B. M. Thompson; S. D. Wix
1999-10-01
This work was conducted during FY98 (Proposal Number 98-0036) and FY99 (Proposal Number 99-0818) under the auspices of the Sandia National Laboratories Laboratory-Directed Research and Development (LDRD) program. Electrical simulation typically treats a single data point in the very large input space of component properties. For electrical simulation to reach its full potential as a design tool, it must be able to address the unavoidable variability and uncertainty in component properties. Component variability is strongly related to the design margin (and reliability) of the end product. During the course of this project, both tools and methodologies were developed to enable analysis of variability in the context of electrical simulation tools. Two avenues to link relevant tools were also developed, and the resultant toolset was applied to a major component.
Winter Arctic sea ice growth: current variability and projections for the coming decades
Petty, A.; Boisvert, L.; Webster, M.; Holland, M. M.; Bailey, D. A.; Kurtz, N. T.; Markus, T.
2017-12-01
Arctic sea ice increases in both extent and thickness during the cold winter months (October to May). Winter sea ice growth is an important factor controlling ocean ventilation and winter water/deep water formation, as well as determining the state and vulnerability of the sea ice pack before the melt season begins. Key questions for the Arctic community thus include: (i) what is the current magnitude and variability of winter Arctic sea ice growth, and (ii) how might this change in a warming Arctic climate? To address (i), our current best estimate of pan-Arctic sea ice thickness, and thus volume, comes from satellite altimetry observations, e.g. from ESA's CryoSat-2 satellite. A significant source of uncertainty in these data comes from poor knowledge of the overlying snow depth. Here we present new estimates of winter sea ice thickness from CryoSat-2 using snow depths from a simple snow model forced by reanalyses and satellite-derived ice drift estimates, combined with snow depth estimates from NASA's Operation IceBridge. To address (ii), we use data from the Community Earth System Model's Large Ensemble Project to explore sea ice volume and growth variability, and how this variability might change over the coming decades. We compare and contrast the model simulations to observations and the PIOMAS ice-ocean model (over recent years/decades). The combination of model and observational analysis provides novel insight into Arctic sea ice volume variability.
Analysis of electrical circuits with variable load regime parameters projective geometry method
Penin, A
2015-01-01
This book introduces electric circuits with variable loads and voltage regulators. It makes it possible to define invariant relationships for various regime parameters and circuit sections and to prove the concepts characterizing these circuits. Generalized equivalent circuits are introduced. Projective geometry is used for the interpretation of changes in operating regime parameters. Expressions for normalized regime parameters and their changes are presented. Convenient formulas for the calculation of currents are given. Parallel voltage sources and the cascade connection of multi-port networks are d
CRITICAL RADAR: TOOL AND METHODOLOGY FOR EVALUATING CURRENT PROJECTS USING MULTIPLE VARIABLES
André M. Ferrari
2017-06-01
Many resources are invested in measuring project indicators without, however, gaining a clear view of which projects deserve the right attention at the right time. This paper proposes the use of statistics, through the analysis of multiple variables and their interrelationships, to provide a better basis for a critical assessment methodology for current projects used in a multinational mining company. The contribution of the research is to report the methodology, called Critical Radar, which is based on a graphical tool of simple operationalization that can support decision making in complex environments and offers great flexibility across different market scenarios and possible changes in company guidelines. The tool has great potential to help evaluate current projects due to its flexible use in different business areas; high degree of freedom for improvement; use of a known market tool in its development; ease of viewing results through charts and notes; and the user's freedom to use any existing indicators in the company, provided some statistical data-quality characteristics are met.
Nuclear reactors project optimization based on neural network and genetic algorithm
Pereira, Claudio M.N.A.; Schirru, Roberto; Martinez, Aquilino S.
1997-01-01
This work presents a prototype of a system for nuclear reactor core design optimization based on genetic algorithms and artificial neural networks. A neural network is modeled and trained to predict the flux and the neutron multiplication factor values based on the enrichment, lattice pitch and cladding thickness, with an average error of less than 2%. The values predicted by the neural network are used by a genetic algorithm in its heuristic search, guided by an objective function that rewards high flux values and penalizes multiplication factors far from the required value. By associating this quick prediction, which may substitute for the reactor physics calculation code, with the global optimization capacity of the genetic algorithm, a quick and effective system for nuclear reactor core design optimization was obtained. (author). 11 refs., 8 figs., 3 tabs.
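The optimization loop, a GA searching over design variables scored by a fast surrogate, can be sketched as below. The surrogate formulas for flux and multiplication factor are purely hypothetical stand-ins for the trained neural network, and the penalty weight is illustrative.

```python
import random

def genetic_search(objective, bounds, pop=30, gens=60, mut=0.1, seed=7):
    """Tiny real-coded GA maximizing the objective: elitism, 3-way
    tournament selection, midpoint (blend) crossover, Gaussian
    mutation, with clipping to the box bounds."""
    rng = random.Random(seed)

    def clip(v, j):
        lo, hi = bounds[j]
        return min(max(v, lo), hi)

    popl = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(popl, key=objective, reverse=True)
        nxt = scored[:2]                                   # elitism
        while len(nxt) < pop:
            a = max(rng.sample(scored, 3), key=objective)  # tournament
            b = max(rng.sample(scored, 3), key=objective)
            child = [clip(0.5 * (x + y) + rng.gauss(0, mut), j)
                     for j, (x, y) in enumerate(zip(a, b))]
            nxt.append(child)
        popl = nxt
    return max(popl, key=objective)

# Hypothetical surrogate standing in for the trained neural network:
# reward "flux" and penalize a "multiplication factor" away from 1.
def fitness(x):
    enrich, pitch = x
    flux = enrich * (2.0 - abs(pitch - 1.2))                # stand-in
    k_eff = 0.8 + 0.25 * enrich - 0.1 * (pitch - 1.2) ** 2  # stand-in
    return flux - 50.0 * abs(k_eff - 1.0)
```

Because each fitness call is just a surrogate evaluation rather than a transport calculation, the GA can afford thousands of evaluations, which is the point of the coupling described above.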
Casanueva, Ana; Kotlarski, Sven; Liniger, Mark A.
2017-04-01
Future climate change is likely to have important impacts on many socio-economic sectors. In particular, higher summer temperatures or more prolonged heat waves may cause health problems and productivity losses related to heat stress, especially affecting people exposed to such conditions (e.g. working in outdoor settings or in non-acclimatized workplaces). Heat stress on the body under work load, and the consequent productivity loss, can be described through heat stress indices that are based on multiple meteorological parameters such as temperature, humidity, wind and radiation. Exploring the changes of these variables under a warmer climate is of prime importance for the Impacts, Adaptation and Vulnerability communities. In particular, the H2020 project HEAT-SHIELD aims at analyzing the impact of climate change on heat stress in strategic industries in Europe (manufacturing, construction, transportation, tourism and agriculture) within an inter-sectoral framework (climate scientists, biometeorologists, physiologists and stakeholders). In the present work we explore present and future heat stress over Europe using an ensemble of state-of-the-art RCMs from the EURO-CORDEX initiative. Since RCMs cannot be directly used in impact studies due to their partly substantial biases, a standard bias correction method (empirical quantile mapping) is applied to correct the individual variables, which are then used to derive heat stress indices. The objectives of this study are twofold: (1) to test the ability of the separately bias-corrected variables to reproduce the main characteristics of heat stress indices under present climate conditions, and (2) to explore climate change projections of heat stress indices. We use the wet bulb globe temperature (WBGT) as the primary heat stress index, considering two different versions for indoor (or in the shade, based on temperature and humidity conditions) and outdoor settings (the latter also including wind and radiation). The WBGT
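The indoor/outdoor distinction above comes down to the standard WBGT weightings of ISO 7243. A minimal sketch, assuming the natural wet-bulb and globe temperatures are already available; deriving those from RCM output (humidity, wind, radiation) is the substantive step the study addresses.

```python
def wbgt_indoor(t_nwb, t_globe):
    """Indoor / shaded WBGT (deg C), ISO 7243 weighting:
    0.7 * natural wet-bulb + 0.3 * globe temperature."""
    return 0.7 * t_nwb + 0.3 * t_globe

def wbgt_outdoor(t_nwb, t_globe, t_air):
    """Outdoor WBGT (deg C) with solar load, ISO 7243 weighting:
    0.7 * natural wet-bulb + 0.2 * globe + 0.1 * dry-bulb air."""
    return 0.7 * t_nwb + 0.2 * t_globe + 0.1 * t_air
```

The heavy 0.7 weight on the wet-bulb term is why humidity changes dominate projected heat-stress trends even where temperature changes are modest.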
Rohman, Muhamad Nur; Hidayat, Mas Irfan P.; Purniawan, Agung
2018-04-01
Neural networks (NN) have been widely used for fatigue life prediction. For polymeric-base composites, an NN model must be developed that copes with the limited fatigue data available and can predict fatigue life under varying stress amplitudes at different stress ratios. In the present paper, a Multilayer-Perceptron (MLP) neural network model is developed, and a Genetic Algorithm is employed to optimize the weights of the NN for fatigue life prediction of polymeric-base composite materials under variable amplitude loading. Simulation results for two different composite systems, E-glass fabrics/epoxy (layup [(±45)/(0)2]S) and E-glass/polyester (layup [90/0/±45/0]S), show that an NN model trained with fatigue data from only two stress ratios, representing limited fatigue data, can predict another four and seven stress ratios, respectively, with high accuracy of fatigue life prediction. The accuracy of the NN predictions was quantified by the small value of the mean square error (MSE). When 33% of the total fatigue data were used for training, the NN model produced high accuracy for all stress ratios. Even with less fatigue data during training (22% of the total fatigue data), the NN model still produced a high coefficient of determination between the predicted results and those obtained by experiment.
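As a rough illustration of the approach (not the authors' implementation), the sketch below evolves the weights of a tiny one-hidden-layer perceptron with a genetic algorithm to fit made-up S-N-style data; the data points, network size and GA settings are all assumptions:

```python
import math
import random

random.seed(0)

# Hypothetical S-N-style data: (normalized stress amplitude, log10 cycles to failure)
DATA = [(0.9, 3.0), (0.8, 3.8), (0.7, 4.7), (0.6, 5.5), (0.5, 6.4), (0.4, 7.2)]

H = 4                 # hidden units
NW = 3 * H + 1        # (weight, bias) per hidden unit + H output weights + 1 bias

def forward(w, x):
    """1-H-1 MLP: tanh hidden layer, linear output."""
    hidden = [math.tanh(w[2 * i] * x + w[2 * i + 1]) for i in range(H)]
    return sum(w[2 * H + i] * h for i, h in enumerate(hidden)) + w[3 * H]

def mse(w):
    return sum((forward(w, x) - y) ** 2 for x, y in DATA) / len(DATA)

def evolve(pop_size=60, generations=200, p_mut=0.2, sigma=0.3):
    pop = [[random.uniform(-2, 2) for _ in range(NW)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=mse)
        elite = pop[: pop_size // 4]                  # selection: keep best quarter
        while len(elite) < pop_size:
            a, b = random.sample(elite[: pop_size // 4], 2)
            cut = random.randrange(1, NW)             # one-point crossover
            child = [g + random.gauss(0.0, sigma) if random.random() < p_mut else g
                     for g in a[:cut] + b[cut:]]      # Gaussian mutation
            elite.append(child)
        pop = elite
    return min(pop, key=mse)

best = evolve()
print("training MSE: %.4f" % mse(best))
```

Elitism guarantees the best fitness never degrades between generations, which is why such a simple scheme is adequate for a low-dimensional weight vector like this one.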
Garfin, G. M.; Eischeid, J. K.; Cole, K. L.; Ironside, K.; Cobb, N. S.
2008-12-01
The most striking aspect of projections of future precipitation is steadily decreasing May-June precipitation during the twenty-first century. Though absolute precipitation during this season is small, declining moisture during the arid pre-monsoon will likely decrease soil moisture and increase drought stress, consequently increasing vegetation susceptibility to insect outbreaks and disease. Summer precipitation projections show considerable multi-decade variability, but no substantial trends. Winter precipitation shows little interannual variability and no strong trends. By 2090, annual precipitation is projected to decline by 1-5% across much of the region, with greater declines in the southern part of the domain and increases of 1-5% in the northwestern and northeastern parts of the domain. As part of a National Institute for Climate Change Research project, these projected changes will be input into a USDA-FS vegetation response model in order to estimate species-specific responses to projected climate changes. We expect increasing temperatures, declining annual precipitation, and extreme declines in pre-monsoon season precipitation to generate significant redistribution of some plant species in the Southern Colorado Plateau.
A prolongation-projection algorithm for computing the finite real variety of an ideal
J.B. Lasserre; M. Laurent (Monique); P. Rostalski
2009-01-01
We provide a real algebraic symbolic-numeric algorithm for computing the real variety $V_R(I)$ of an ideal $I$, assuming it is finite while $V_C(I)$ may not be. Our approach uses sets of linear functionals on $R[X]$, vanishing on a given set of polynomials generating $I$ and their
A prolongation-projection algorithm for computing the finite real variety of an ideal
J.B. Lasserre; M. Laurent (Monique); P. Rostalski
2008-01-01
We provide a real algebraic symbolic-numeric algorithm for computing the real variety $V_R(I)$ of an ideal $I$, assuming it is finite while $V_C(I)$ may not be. Our approach uses sets of linear functionals on $R[X]$, vanishing on a given set of polynomials generating $I$ and their
Anon.
1987-01-01
Signal validation in the context of this project is the process of combining information from multiple plant sensors to produce highly reliable information about plant conditions. High information reliability is achieved by the use of redundant sources of information and by the inherent detection, identification, and isolation of faulty signals. The signal validation methodology that was developed in previous EPRI-sponsored projects has been enhanced and applied toward validation of critical safety-related SPDS signals in the Northeast Utilities Millstone 3 Westinghouse PWR plant and the Millstone 2 Combustion Engineering PWR plant. The designs were implemented in FORTRAN software and tested off-line using recorded plant sensor data, RETRAN-generated simulation data, and data to exercise software logic branches and the integration of software modules. Designs and software modules have been developed for 15 variables to support six PWR SPDS critical safety functions as required by a utility advisory group attached to the project. The signal validation process automates a task currently performed by plant operators and does so with consistent, verified logic regardless of operator stress and training level. The methodology uses a simple structure of generic software blocks and a modular implementation, and it performs effectively within the processor and memory constraints of modern plant process computers. The ability to detect and isolate sensor failures with greater sensitivity, robustness, and coverage of common-cause failures should ultimately lead to improved plant availability, efficiency, and productivity.
Suppes, Trisha; Rush, A John; Dennehy, Ellen B; Crismon, M Lynn; Kashner, T Michael; Toprac, Marcia G; Carmody, Thomas J; Brown, E Sherwood; Biggs, Melanie M; Shores-Wilson, Kathy; Witte, Bradley P; Trivedi, Madhukar H; Miller, Alexander L; Altshuler, Kenneth Z; Shon, Steven P
2003-04-01
The Texas Medication Algorithm Project (TMAP) assessed the clinical and economic impact of algorithm-driven treatment (ALGO) as compared with treatment-as-usual (TAU) in patients served in public mental health centers. This report presents clinical outcomes in patients with a history of mania (BD), including bipolar I and schizoaffective disorder, bipolar type, during 12 months of treatment beginning March 1998 and ending with the final active patient visit in April 2000. Patients were diagnosed with bipolar I disorder or schizoaffective disorder, bipolar type, according to DSM-IV criteria. ALGO was comprised of a medication algorithm and manual to guide treatment decisions. Physicians and clinical coordinators received training and expert consultation throughout the project. ALGO also provided a disorder-specific patient and family education package. TAU clinics had no exposure to the medication algorithms. Quarterly outcome evaluations were obtained by independent raters. Hierarchical linear modeling, based on a declining effects model, was used to assess clinical outcome of ALGO versus TAU. ALGO and TAU patients showed significant initial decreases in symptoms (p =.03 and p <.001, respectively) measured by the 24-item Brief Psychiatric Rating Scale (BPRS-24) at the 3-month assessment interval, with significantly greater effects for the ALGO group. Limited catch-up by TAU was observed over the remaining 3 quarters. Differences were also observed in measures of mania and psychosis but not in depression, side-effect burden, or functioning. For patients with a history of mania, relative to TAU, the ALGO intervention package was associated with greater initial and sustained improvement on the primary clinical outcome measure, the BPRS-24, and the secondary outcome measure, the Clinician-Administered Rating Scale for Mania (CARS-M). Further research is planned to clarify which elements of the ALGO package contributed to this between-group difference.
The Impact of Variable Wind Shear Coefficients on Risk Reduction of Wind Energy Projects.
Corscadden, Kenneth W; Thomson, Allan; Yoonesi, Behrang; McNutt, Josiah
2016-01-01
Estimation of wind speed at proposed hub heights is typically achieved using a wind shear exponent or wind shear coefficient (WSC), which describes the variation in wind speed as a function of height. The WSC is subject to temporal variation at low and high frequencies, ranging from diurnal and seasonal variations to disturbance caused by weather patterns; however, in many cases it is assumed that the WSC remains constant. This assumption creates significant error in resource assessment, increasing uncertainty in projects and potentially significantly impacting the ability to control grid-connected wind generators. This paper contributes to the body of knowledge relating to the evaluation and assessment of wind speed, with particular emphasis on the development of techniques to improve the accuracy of estimated wind speed above measurement height. It presents an evaluation of a variable wind shear coefficient (VWSC) methodology based on a distribution of wind shear coefficients implemented in real time. The results indicate that a VWSC provides a more accurate estimate of wind speed at hub height, ranging from a 41% to a 4% reduction in root mean squared error (RMSE) between predicted and actual wind speeds at heights ranging from 33% to 100% above the highest actual wind measurement.
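The underlying power-law model can be sketched as follows; the measurement heights and speeds are invented for illustration, and a variable-WSC scheme would simply recompute the exponent from recent measurements at each time step instead of fixing it:

```python
import math

def shear_exponent(v1, v2, h1, h2):
    """Wind shear exponent alpha solving v2/v1 = (h2/h1)**alpha (power law)."""
    return math.log(v2 / v1) / math.log(h2 / h1)

def extrapolate(v_ref, h_ref, h_target, alpha):
    """Power-law extrapolation of wind speed to an unmeasured height."""
    return v_ref * (h_target / h_ref) ** alpha

# Hypothetical anemometer readings: 5.0 m/s at 40 m and 5.5 m/s at 60 m
alpha = shear_exponent(5.0, 5.5, 40.0, 60.0)
v_hub = extrapolate(5.5, 60.0, 80.0, alpha)   # estimate at an 80 m hub
print(round(alpha, 3), round(v_hub, 2))        # -> 0.235 5.88
```

With a constant assumed exponent (e.g. the common 1/7 ≈ 0.143), the same extrapolation would give a noticeably lower hub-height estimate, which is the kind of error the paper's variable-coefficient approach targets.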
THE LICK AGN MONITORING PROJECT: PHOTOMETRIC LIGHT CURVES AND OPTICAL VARIABILITY CHARACTERISTICS
Walsh, Jonelle L.; Bentz, Misty C.; Barth, Aaron J.; Minezaki, Takeo; Sakata, Yu; Yoshii, Yuzuru; Baliber, Nairn; Bennert, Vardha Nicola; Street, Rachel A.; Treu, Tommaso; Li Weidong; Filippenko, Alexei V.; Stern, Daniel; Brown, Timothy M.; Canalizo, Gabriela; Gates, Elinor L.; Greene, Jenny E.; Malkan, Matthew A.; Woo, Jong-Hak
2009-01-01
The Lick AGN Monitoring Project targeted 13 nearby Seyfert 1 galaxies with the intent of measuring the masses of their central black holes using reverberation mapping. The sample includes 12 galaxies selected to have black holes with masses roughly in the range 10^6-10^7 M_sun, as well as the well-studied active galactic nucleus (AGN) NGC 5548. In conjunction with a spectroscopic monitoring campaign, we obtained broadband B and V images on most nights from 2008 February through 2008 May. The imaging observations were carried out by four telescopes: the 0.76 m Katzman Automatic Imaging Telescope, the 2 m Multicolor Active Galactic Nuclei Monitoring telescope, the Palomar 60 inch (1.5 m) telescope, and the 0.80 m Tenagra II telescope. Having well-sampled light curves over the course of a few months is useful for obtaining the broad-line reverberation lag and black hole mass, and also allows us to examine the characteristics of the continuum variability. In this paper, we discuss the observational methods and the photometric measurements, and present the AGN continuum light curves. We measure various variability characteristics of each of the light curves. We do not detect any evidence for a time lag between the B- and V-band variations, and we do not find significant color variations for the AGNs in our sample.
NERI PROJECT 99-119. TASK 2. DATA-DRIVEN PREDICTION OF PROCESS VARIABLES. FINAL REPORT
Upadhyaya, B.R.
2003-04-10
This report describes the detailed results for Task 2 of DOE-NERI project number 99-119, entitled "Automatic Development of Highly Reliable Control Architecture for Future Nuclear Power Plants". This project is a collaborative effort between the Oak Ridge National Laboratory (ORNL), the University of Tennessee, Knoxville (UTK), and the North Carolina State University (NCSU). UTK is the lead organization for Task 2 under contract number DE-FG03-99SF21906. Under Task 2 we completed the development of data-driven models for the characterization of sub-system dynamics for predicting state variables, control functions, and expected control actions. We have also developed the Principal Component Analysis (PCA) approach for mapping system measurements, and a nonlinear system modeling approach called the Group Method of Data Handling (GMDH) with rational functions, which includes temporal data information for transient characterization. The majority of the results are presented in detailed reports for Phases 1 through 3 of our research, which are attached to this report.
I. De Smedt
2018-04-01
On board the Copernicus Sentinel-5 Precursor (S5P) platform, the TROPOspheric Monitoring Instrument (TROPOMI) is a double-channel, nadir-viewing grating spectrometer measuring solar back-scattered earthshine radiances in the ultraviolet, visible, near-infrared, and shortwave infrared with global daily coverage. In the ultraviolet range, its spectral resolution and radiometric performance are equivalent to those of its predecessor OMI, but its horizontal resolution at true nadir is improved by an order of magnitude. This paper introduces the formaldehyde (HCHO) tropospheric vertical column retrieval algorithm implemented in the S5P operational processor and comprehensively describes its various retrieval steps. Furthermore, algorithmic improvements developed in the framework of the EU FP7 project QA4ECV are described for future updates of the processor. Detailed error estimates are discussed in the light of Copernicus user requirements, and needs for validation are highlighted. Finally, verification results based on the application of the algorithm to OMI measurements are presented, demonstrating the performance expected for TROPOMI.
Bosmans, H; Verbeeck, R; Vandermeulen, D; Suetens, P; Wilms, G; Maaly, M; Marchal, G; Baert, A L [Louvain Univ. (Belgium)]
1995-12-01
The objective of this study was to validate a new post processing algorithm for improved maximum intensity projections (mip) of intracranial MR angiography acquisitions. The core of the post processing procedure is a new brain segmentation algorithm. Two seed areas, background and brain, are automatically detected. A 3D region grower then grows both regions towards each other, preferentially towards white regions. In this way, the skin gets included into the final 'background region' whereas cortical blood vessels and all brain tissues are included in the 'brain region'. The latter region is then used for mip. The algorithm runs in less than 30 minutes on a full dataset on a Unix workstation. Images from different acquisition strategies including multiple overlapping thin slab acquisition, magnetization transfer (MT) MRA, Gd-DTPA enhanced MRA, normal and high resolution acquisitions and acquisitions from mid field and high field systems were filtered. A series of contrast enhanced MRA acquisitions obtained with identical parameters was filtered to study the robustness of the filter parameters. In all cases, only a minimal manual interaction was necessary to segment the brain. The quality of the mip was significantly improved, especially in post Gd-DTPA acquisitions or using MT, due to the absence of high intensity signals of skin, sinuses and eyes that otherwise superimpose on the angiograms. It is concluded that the filter is a robust technique to improve the quality of MR angiograms.
Bosmans, H.; Verbeeck, R.; Vandermeulen, D.; Suetens, P.; Wilms, G.; Maaly, M.; Marchal, G.; Baert, A.L.
1995-01-01
The objective of this study was to validate a new post processing algorithm for improved maximum intensity projections (mip) of intracranial MR angiography acquisitions. The core of the post processing procedure is a new brain segmentation algorithm. Two seed areas, background and brain, are automatically detected. A 3D region grower then grows both regions towards each other and this preferentially towards white regions. In this way, the skin gets included into the final 'background region' whereas cortical blood vessels and all brain tissues are included in the 'brain region'. The latter region is then used for mip. The algorithm runs less than 30 minutes on a full dataset on a Unix workstation. Images from different acquisition strategies including multiple overlapping thin slab acquisition, magnetization transfer (MT) MRA, Gd-DTPA enhanced MRA, normal and high resolution acquisitions and acquisitions from mid field and high field systems were filtered. A series of contrast enhanced MRA acquisitions obtained with identical parameters was filtered to study the robustness of the filter parameters. In all cases, only a minimal manual interaction was necessary to segment the brain. The quality of the mip was significantly improved, especially in post Gd-DTPA acquisitions or using MT, due to the absence of high intensity signals of skin, sinuses and eyes that otherwise superimpose on the angiograms. It is concluded that the filter is a robust technique to improve the quality of MR angiograms
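The benefit of projecting only over the segmented 'brain region' can be illustrated with a toy volume; the geometry and intensity values below are invented, not taken from the study:

```python
# Toy maximum intensity projection (MIP): a synthetic head volume where bright
# "skin" surrounds dimmer "brain" tissue containing one bright "vessel".
# Restricting the MIP to a brain mask keeps the vessel from being hidden.

N = 16
CENTER, R2 = N // 2, 36          # spherical "brain" of radius 6 voxels

def intensity(x, y, z):
    r2 = (x - CENTER) ** 2 + (y - CENTER) ** 2 + (z - CENTER) ** 2
    if r2 > R2:
        return 200.0             # skin / sinuses: bright, outside the brain
    if x == CENTER and y == CENTER and 4 <= z < 12:
        return 150.0             # intracranial vessel
    return 50.0                  # brain tissue

def mip(masked):
    """MIP along z; if masked, only voxels inside the brain sphere contribute."""
    image = [[0.0] * N for _ in range(N)]
    for x in range(N):
        for y in range(N):
            for z in range(N):
                r2 = (x - CENTER) ** 2 + (y - CENTER) ** 2 + (z - CENTER) ** 2
                if masked and r2 > R2:
                    continue
                image[x][y] = max(image[x][y], intensity(x, y, z))
    return image

plain, brain_only = mip(False), mip(True)
# Without the mask the skin (200) buries the vessel (150) in the projection
print(plain[CENTER][CENTER], brain_only[CENTER][CENTER])   # -> 200.0 150.0
```

The central pixel of the unmasked projection takes its maximum from a skin voxel at the front or back of the column, exactly the superimposition effect the segmentation step removes.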
Climate variability of heat wave and projection of warming scenario in Taiwan
Lin, C. Y.; Chien, Y. Y.; Su, C. J.
2017-12-01
This study examined the climate variability of heat waves (HW) according to air temperature and relative humidity to determine trends of variation and stress thresholds in three major cities of Taiwan, Taipei (TP), Taichung (TC) and Kaohsiung (KH), during the past four decades (1971-2010). According to the data available, the wet-bulb globe temperature (WBGT) heat stress for the three studied cities was also calculated for the past (2003-2012) and simulated under the projected warming scenario for the end of this century (2075-2099) using ECHAM5/MPIOM-WRF (ECW) dynamic downscaling at 5-km resolution. Analysis showed that the past decade (2001-2010) saw an increase not only in the number of HW days in all three cities but also in the duration of each HW event in TP and KH. Simulation results revealed that ECW captures well the characteristics of data distribution in these three cities during 2003-2012. Under the A1B projection, ECW yielded higher WBGT in all three cities for 2075-2099. The WBGT in TP indicated that the heat stress for 50% of the days in July and August by 2075-2099 will be at the danger level (WBGT ≥ 31 °C). Even the median WBGT in TC and KH (30.91 °C and 30.88 °C, respectively) is close to 31 °C. Hence, the heat stress in all three cities will either exceed or approach the danger level by the end of this century. Such a projection under the global warming trend would necessitate adaptation and mitigation, and the huge impact of dangerous heat stress on public health merits urgent attention in Taiwan.
Texas Medication Algorithm Project, phase 3 (TMAP-3): rationale and study design.
Rush, A John; Crismon, M Lynn; Kashner, T Michael; Toprac, Marcia G; Carmody, Thomas J; Trivedi, Madhukar H; Suppes, Trisha; Miller, Alexander L; Biggs, Melanie M; Shores-Wilson, Kathy; Witte, Bradley P; Shon, Steven P; Rago, William V; Altshuler, Kenneth Z
2003-04-01
Medication treatment algorithms may improve clinical outcomes, uniformity of treatment, quality of care, and efficiency. However, such benefits have never been evaluated for patients with severe, persistent mental illnesses. This study compared clinical and economic outcomes of an algorithm-driven disease management program (ALGO) with treatment-as-usual (TAU) for adults with DSM-IV schizophrenia (SCZ), bipolar disorder (BD), and major depressive disorder (MDD) treated in public mental health outpatient clinics in Texas. The disorder-specific intervention ALGO included a consensually derived and feasibility-tested medication algorithm, a patient/family educational program, ongoing physician training and consultation, a uniform medical documentation system with routine assessment of symptoms and side effects at each clinic visit to guide ALGO implementation, and prompting by on-site clinical coordinators. A total of 19 clinics from 7 local authorities were matched by authority and urban status, such that 4 clinics each offered ALGO for only 1 disorder (SCZ, BD, or MDD). The remaining 7 TAU clinics offered no ALGO and thus served as controls (TAUnonALGO). To determine if ALGO for one disorder impacted care for another disorder within the same clinic ("culture effect"), additional TAU subjects were selected from 4 of the ALGO clinics offering ALGO for another disorder (TAUinALGO). Patient entry occurred over 13 months, beginning March 1998 and concluding with the final active patient visit in April 2000. Research outcomes assessed at baseline and periodically for at least 1 year included (1) symptoms, (2) functioning, (3) cognitive functioning (for SCZ), (4) medication side effects, (5) patient satisfaction, (6) physician satisfaction, (7) quality of life, (8) frequency of contacts with criminal justice and state welfare system, (9) mental health and medical service utilization and cost, and (10) alcohol and substance abuse and supplemental substance use information
Michel, D.
2015-10-20
The WACMOS-ET project has compiled a forcing data set covering the period 2005–2007 that aims to maximize the exploitation of European Earth Observation data sets for evapotranspiration (ET) estimation. The data set was used to run 4 established ET algorithms: the Priestley–Taylor Jet Propulsion Laboratory model (PT-JPL), the Penman–Monteith algorithm from the MODIS evaporation product (PM-MOD), the Surface Energy Balance System (SEBS) and the Global Land Evaporation Amsterdam Model (GLEAM). In addition, in situ meteorological data from 24 FLUXNET towers were used to force the models, with results from both forcing sets compared to tower-based flux observations. Model performance was assessed across several time scales using both sub-daily and daily forcings. The PT-JPL model and GLEAM provide the best performance for both satellite- and tower-based forcing as well as for the considered temporal resolutions. Simulations using PM-MOD mostly underestimated ET, while SEBS was characterized by a systematic overestimation. In general, all four algorithms produce the best results in wet and moderately wet climate regimes. In dry regimes, the correlation and the absolute agreement with the reference tower ET observations were consistently lower. While ET derived with in situ forcing data agrees best with the tower measurements (R^{2} = 0.67), the agreement of the satellite-based ET estimates is only marginally lower (R^{2} = 0.58). Results also show similar model performance at daily and sub-daily (3-hourly) resolutions. Overall, our validation experiments against in situ measurements indicate that there is no single best-performing algorithm across all biome and forcing types. An extension of the evaluation to a larger selection of 85 towers (model inputs re-sampled to a common grid to facilitate global estimates) confirmed the original findings.
Annamalai, H. [Univ. of Hawaii, Honolulu, HI (United States)
2014-09-15
The overall goal of this project is to assess the ability of the CMIP3/5 models to simulate the Indian-Ocean monsoon systems. The PI, along with post-docs, investigated research issues ranging from synoptic systems to long-term trends over the Asian monsoon region. The PI applied diagnostic tools such as moist static energy (MSE) to isolate: the moist and radiative processes responsible for extended monsoon breaks over South Asia, precursors in the ENSO-monsoon association, reasons for the drying tendency over South Asia, and the possible effect of tropical Indian Ocean climate anomalies on certain aspects of ENSO characteristics. By diagnosing various observations and coupled model simulations, we developed working hypotheses and tested them by carrying out sensitivity experiments with both linear and nonlinear models. Possible physical and dynamical reasons for model sensitivities were deduced. On the teleconnection front, the ability of CMIP5 models to represent the monsoon-desert mechanism was examined recently. Furthermore, we have applied a suite of diagnostics and performed an in-depth analysis of CMIP5 integrations to isolate the possible reasons for the ENSO-monsoon linkage or lack thereof. The PI has collaborated with Dr. K.R. Sperber of PCMDI and other CLIVAR Asian-Australian monsoon panel members in understanding the ability of CMIP3/5 models to capture the monsoon and its spectrum of variability. The objective and process-based diagnostics aided in selecting the models that best represent the present-day monsoon and its variability, which are then employed for future projections. Two major highlights were an invitation to write a review on the present understanding of monsoons in a changing climate in Nature Climate Change, and the identification of an east-west shift in observed monsoon rainfall (more rainfall over the tropical western Pacific and a drying tendency over South Asia) in the last six decades, attributing that shift to SST rise over the tropical
Huang, Q; Zeng, G L; You, J; Gullberg, G T
2005-01-01
In this paper, Novikov's inversion formula of the attenuated two-dimensional (2D) Radon transform is applied to the reconstruction of attenuated fan-beam projections acquired with equal detector spacing and of attenuated cone-beam projections acquired with a flat planar detector and circular trajectory. The derivation of the fan-beam algorithm is obtained by transformation from parallel-beam coordinates to fan-beam coordinates. The cone-beam reconstruction algorithm is an extension of the fan-beam reconstruction algorithm using the Feldkamp-Davis-Kress (FDK) method. Computer simulations indicate that the algorithm is efficient and is accurate in reconstructing slices close to the central slice of the cone-beam orbit plane. When the attenuation map is set to zero the implementation is equivalent to the FDK method. Reconstructed images are also shown for noise-corrupted projections.
Sperber, K. R.; Palmer, T. N.
1996-11-01
The interannual variability of rainfall over the Indian subcontinent, the African Sahel, and the Nordeste region of Brazil have been evaluated in 32 models for the period 1979-88 as part of the Atmospheric Model Intercomparison Project (AMIP). The interannual variations of Nordeste rainfall are the most readily captured, owing to the intimate link with Pacific and Atlantic sea surface temperatures. The precipitation variations over India and the Sahel are less well simulated. Additionally, an Indian monsoon wind shear index was calculated for each model. Evaluation of the interannual variability of a wind shear index over the summer monsoon region indicates that the models exhibit greater fidelity in capturing the large-scale dynamic fluctuations than the regional-scale rainfall variations. A rainfall/SST teleconnection quality control was used to objectively stratify model performance. Skill scores improved for those models that qualitatively simulated the observed rainfall/El Niño-Southern Oscillation SST correlation pattern. This subset of models also had a rainfall climatology that was in better agreement with observations, indicating a link between systematic model error and the ability to simulate interannual variations. A suite of six European Centre for Medium-Range Weather Forecasts (ECMWF) AMIP runs (differing only in their initial conditions) have also been examined. As observed, all-India rainfall was enhanced in 1988 relative to 1987 in each of these realizations. All-India rainfall variability during other years showed little or no predictability, possibly due to internal chaotic dynamics associated with intraseasonal monsoon fluctuations and/or unpredictable land surface process interactions. The interannual variations of Nordeste rainfall were best represented. The State University of New York at Albany/National Center for Atmospheric Research Genesis model was run in five initial condition realizations. In this model, the Nordeste rainfall
M. K. Sharbatdar
2016-11-01
Appropriate planning and scheduling for reaching project goals in the most economical way is a fundamental issue of project management. In each project, the project manager must determine the activities required to implement the project and select the best option for carrying out each activity, such that the lowest final cost and time of the project are achieved. Given the number of activities and the selection options for each, the selection usually has no single unique solution; instead it consists of a set of solutions that are not preferred to each other, known as Pareto solutions. On the other hand, in some actual projects there are activities whose implementation options depend on the implementation of a prerequisite activity and which cannot be carried out using all the implementation options; in some cases even the implementation or non-implementation of some activities depends on the prerequisite activity's implementation. Such projects can be described as conditional projects. Much research has been conducted on acquiring the Pareto solution set using different methods and algorithms, but none of this work has considered the time-cost optimization of conditional projects. Thus, in the present study the concept of a conditional network is defined along with some practical examples; an appropriate way to illustrate these networks and a suitable time-cost formulation of them are then presented. Finally, for some instances of conditional activity networks, conditional project time-cost optimization is conducted multi-objectively using known meta-heuristic algorithms such as the multi-objective genetic algorithm, the multi-objective particle swarm algorithm and the multi-objective charged system search algorithm.
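A minimal sketch of the conditional time-cost idea: activity options are conditioned on the prerequisite's chosen option, all feasible schedules are enumerated, and the non-dominated (Pareto) set is extracted. The activity names, durations and costs are invented for illustration:

```python
# Two-activity conditional network: the options available for activity B depend
# on which option was chosen for its prerequisite A. Tuples: (name, duration, cost)
MODES_A = [("A1", 5, 100), ("A2", 3, 180)]
MODES_B = {"A1": [("B1", 4, 120), ("B2", 2, 200)],   # only reachable after A1
           "A2": [("B3", 6, 90)]}                     # only reachable after A2

def schedules():
    for name_a, d_a, c_a in MODES_A:
        for name_b, d_b, c_b in MODES_B[name_a]:
            yield (name_a, name_b), d_a + d_b, c_a + c_b   # serial chain A -> B

def pareto_front(sols):
    """Keep solutions not dominated in (time, cost): a solution is dominated if
    another is at least as good in both objectives and differs in at least one."""
    sols = list(sols)
    def dominated(s):
        return any(o[1] <= s[1] and o[2] <= s[2] and (o[1], o[2]) != (s[1], s[2])
                   for o in sols)
    return sorted((s for s in sols if not dominated(s)), key=lambda s: s[1])

for names, t, c in pareto_front(schedules()):
    print(names, t, c)
```

Here the A2 branch is dominated outright, so the front consists of the fast-but-costly and slow-but-cheap A1 schedules; meta-heuristics become necessary only when the conditional network is too large to enumerate.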
Pokhrel, Damodar; Murphy, Martin J.; Todor, Dorin A.; Weiss, Elisabeth; Williamson, Jeffrey F.
2011-01-01
Purpose: To present a novel method for reconstructing the 3D pose (position and orientation) of radio-opaque applicators of known but arbitrary shape from a small set of 2D x-ray projections in support of intraoperative brachytherapy planning. Methods: The generalized iterative forward projection matching (gIFPM) algorithm finds the six degree-of-freedom pose of an arbitrary rigid object by minimizing the sum-of-squared-intensity differences (SSQD) between the computed and experimentally acquired autosegmented projection of the objects. Starting with an initial estimate of the object's pose, gIFPM iteratively refines the pose parameters (3D position and three Euler angles) until the SSQD converges. The object, here specialized to a Fletcher-Weeks intracavitary brachytherapy (ICB) applicator, is represented by a fine mesh of discrete points derived from complex combinatorial geometric models of the actual applicators. Three pairs of computed and measured projection images with known imaging geometry are used. Projection images of an intrauterine tandem and colpostats were acquired from an ACUITY cone-beam CT digital simulator. An image postprocessing step was performed to create blurred binary applicators only images. To quantify gIFPM accuracy, the reconstructed 3D pose of the applicator model was forward projected and overlaid with the measured images and empirically calculated the nearest-neighbor applicator positional difference for each image pair. Results: In the numerical simulations, the tandem and colpostats positions (x,y,z) and orientations (α,β,γ) were estimated with accuracies of 0.6 mm and 2 deg., respectively. For experimentally acquired images of actual applicators, the residual 2D registration error was less than 1.8 mm for each image pair, corresponding to about 1 mm positioning accuracy at isocenter, with a total computation time of less than 1.5 min on a 1 GHz processor. Conclusions: This work describes a novel, accurate, fast, and completely
Pokhrel, Damodar; Murphy, Martin J.; Todor, Dorin A.; Weiss, Elisabeth; Williamson, Jeffrey F. [Department of Radiation Oncology, School of Medicine, Virginia Commonwealth University, Richmond, Virginia 23298 (United States)
2011-02-15
Purpose: To present a novel method for reconstructing the 3D pose (position and orientation) of radio-opaque applicators of known but arbitrary shape from a small set of 2D x-ray projections in support of intraoperative brachytherapy planning. Methods: The generalized iterative forward projection matching (gIFPM) algorithm finds the six degree-of-freedom pose of an arbitrary rigid object by minimizing the sum-of-squared-intensity differences (SSQD) between the computed and experimentally acquired autosegmented projection of the objects. Starting with an initial estimate of the object's pose, gIFPM iteratively refines the pose parameters (3D position and three Euler angles) until the SSQD converges. The object, here specialized to a Fletcher-Weeks intracavitary brachytherapy (ICB) applicator, is represented by a fine mesh of discrete points derived from complex combinatorial geometric models of the actual applicators. Three pairs of computed and measured projection images with known imaging geometry are used. Projection images of an intrauterine tandem and colpostats were acquired from an ACUITY cone-beam CT digital simulator. An image postprocessing step was performed to create blurred binary applicators only images. To quantify gIFPM accuracy, the reconstructed 3D pose of the applicator model was forward projected and overlaid with the measured images and empirically calculated the nearest-neighbor applicator positional difference for each image pair. Results: In the numerical simulations, the tandem and colpostats positions (x,y,z) and orientations (α, β, γ) were estimated with accuracies of 0.6 mm and 2 deg., respectively. For experimentally acquired images of actual applicators, the residual 2D registration error was less than 1.8 mm for each image pair, corresponding to about 1 mm positioning accuracy at isocenter, with a total computation time of less than 1.5 min on a 1 GHz processor. Conclusions: This work describes a novel, accurate
Pokhrel, Damodar; Murphy, Martin J; Todor, Dorin A; Weiss, Elisabeth; Williamson, Jeffrey F
2011-02-01
To present a novel method for reconstructing the 3D pose (position and orientation) of radio-opaque applicators of known but arbitrary shape from a small set of 2D x-ray projections in support of intraoperative brachytherapy planning. The generalized iterative forward projection matching (gIFPM) algorithm finds the six degree-of-freedom pose of an arbitrary rigid object by minimizing the sum-of-squared-intensity differences (SSQD) between the computed and experimentally acquired autosegmented projections of the object. Starting with an initial estimate of the object's pose, gIFPM iteratively refines the pose parameters (3D position and three Euler angles) until the SSQD converges. The object, here specialized to a Fletcher-Weeks intracavitary brachytherapy (ICB) applicator, is represented by a fine mesh of discrete points derived from complex combinatorial geometric models of the actual applicators. Three pairs of computed and measured projection images with known imaging geometry are used. Projection images of an intrauterine tandem and colpostats were acquired from an ACUITY cone-beam CT digital simulator. An image postprocessing step was performed to create blurred, binary, applicator-only images. To quantify gIFPM accuracy, the reconstructed 3D pose of the applicator model was forward projected and overlaid on the measured images, and the nearest-neighbor applicator positional difference was empirically calculated for each image pair. In the numerical simulations, the tandem and colpostats positions (x, y, z) and orientations (alpha, beta, gamma) were estimated with accuracies of 0.6 mm and 2 degrees, respectively. For experimentally acquired images of actual applicators, the residual 2D registration error was less than 1.8 mm for each image pair, corresponding to about 1 mm positioning accuracy at isocenter, with a total computation time of less than 1.5 min on a 1 GHz processor. This work describes a novel, accurate, fast, and completely automatic method to
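The objective that gIFPM minimizes can be sketched directly: forward-project the discrete point model under a candidate pose and accumulate squared intensity differences against the measured images. A minimal sketch follows; the pinhole geometry, detector size, toy rasterizer, and Z-Y-X Euler convention are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def euler_matrix(a, b, g):
    """Rotation from three Euler angles, Z-Y-X convention (an assumption;
    the abstract does not state which convention gIFPM uses)."""
    ca, sa = np.cos(a), np.sin(a)
    cb, sb = np.cos(b), np.sin(b)
    cg, sg = np.cos(g), np.sin(g)
    Rz = np.array([[ca, -sa, 0], [sa, ca, 0], [0, 0, 1]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rx = np.array([[1, 0, 0], [0, cg, -sg], [0, sg, cg]])
    return Rz @ Ry @ Rx

def render(points, pose, sdd=1000.0, size=64, pix=1.0):
    """Rigidly transform a point-cloud applicator model by a 6-DOF pose
    (tx, ty, tz, alpha, beta, gamma), project it through an idealized
    pinhole geometry, and rasterize a binary projection image."""
    t = np.asarray(pose[:3], dtype=float)
    R = euler_matrix(*pose[3:])
    p = points @ R.T + t
    u = sdd * p[:, 0] / (sdd + p[:, 2])   # perspective divide
    v = sdd * p[:, 1] / (sdd + p[:, 2])
    img = np.zeros((size, size))
    iu = np.clip((u / pix + size // 2).astype(int), 0, size - 1)
    iv = np.clip((v / pix + size // 2).astype(int), 0, size - 1)
    img[iv, iu] = 1.0
    return img

def ssqd(points, pose, measured_imgs):
    """Sum-of-squared-intensity differences between computed and measured
    projections; gIFPM iteratively refines `pose` until this converges."""
    return sum(np.sum((render(points, pose) - m) ** 2) for m in measured_imgs)
```

In the full algorithm this objective would be handed to a six-parameter optimizer rather than evaluated at a single pose.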
Latent Dirichlet Allocation (LDA) Model and kNN Algorithm to Classify Research Project Selection
Safi’ie, M. A.; Utami, E.; Fatta, H. A.
2018-03-01
Universitas Sebelas Maret has a teaching staff of more than 1500 people, one of whose tasks is to carry out research. On the other hand, funding support for research and community service is limited, so submissions of research and community-service (P2M) proposals need to be evaluated. At the selection stage, research proposal documents are collected as unstructured data, and the volume of stored data is very large. Extracting the information contained in these documents requires text mining technology, which is applied to gain knowledge from the documents by automating information extraction. In this article we apply Latent Dirichlet Allocation (LDA) to the documents as a model in the feature extraction process, to obtain terms that represent each document. We then use the k-Nearest Neighbour (kNN) algorithm to classify the documents based on these terms.
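As a sketch of the classification step, suppose LDA has already been fitted (for example with a topic-modelling library) so that each proposal document is summarized by a topic-proportion vector; a toy kNN vote over such vectors, with hypothetical labels, might look like:

```python
import math
from collections import Counter

def cosine(u, v):
    """Cosine similarity between two topic-proportion vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def knn_classify(query, labeled_docs, k=3):
    """Majority vote among the k training documents whose LDA topic
    vectors are most similar to the query document's vector."""
    ranked = sorted(labeled_docs, key=lambda d: cosine(query, d[0]), reverse=True)
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]
```

The labels ("fund"/"reject") and two-topic vectors below are invented for illustration; a real run would use the LDA posterior over however many topics the model was trained with.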
Variables that impact the implementation of project-based learning in high school science
Cunningham, Kellie
Wagner and colleagues (2006) state the mediocrity of teaching and instructional leadership is the central problem that must be addressed if we are to improve student achievement. Educational reform efforts have been initiated to improve student performance and to hold teachers and school leaders accountable for student achievement (Wagner et al., 2006). Specifically, in the area of science, goals for improving student learning have led reformers to establish standards for what students should know and be able to do, as well as what instructional methods should be used. Key concepts and principles have been identified for student learning. Additionally, reformers recommend student-centered, inquiry-based practices that promote a deep understanding of how science is embedded in the everyday world. These new approaches to science education emphasize inquiry as an essential element for student learning (Schneider, Krajcik, Marx, & Soloway, 2002). Project-based learning (PBL) is an inquiry-based instructional approach that addresses these recommendations for science education reform. The objective of this research was to study the implementation of project-based learning (PBL) in an urban school undergoing reform efforts and identify the variables that positively or negatively impacted the PBL implementation process and its outcomes. This study responded to the need to change how science is taught by focusing on the implementation of project-based learning as an instructional approach to improve student achievement in science and identify the role of both school leaders and teachers in the creation of a school environment that supports project-based learning. A case study design using a mixed-method approach was used in this study. Data were collected through individual interviews with the school principal, science instructional coach, and PBL facilitator. A survey, classroom observations and interviews involving three high school science teachers teaching grades 9
Han, Yishi; Luo, Zhixiao; Wang, Jianhua; Min, Zhixuan; Qin, Xinyu; Sun, Yunlong
2014-09-01
In general, context-based adaptive variable length coding (CAVLC) decoding in the H.264/AVC standard requires frequent access to the unstructured variable length coding tables (VLCTs), which consumes a large number of memory accesses. Heavy memory access causes high power consumption and time delays, which are serious problems for applications in portable multimedia devices. We propose a method for high-efficiency CAVLC decoding that uses a program in place of all the VLCTs. The decoded codeword from the VLCTs can be obtained without any table look-up or memory access. The experimental results show that the proposed algorithm achieves 100% memory access saving and 40% decoding time saving without degrading video quality. Additionally, the proposed algorithm shows better performance than conventional CAVLC decoding approaches, such as table look-up by sequential search, table look-up by binary search, Moon's method, and Kim's method.
Flay, B R; Miller, T Q; Hedeker, D; Siddiqui, O; Britton, C F; Brannon, B R; Johnson, C A; Hansen, W B; Sussman, S; Dent, C
1995-01-01
This paper presents the student outcomes of a large-scale, social-influences-based, school and media-based tobacco use prevention and cessation project in Southern California. The study provided an experimental comparison of classroom delivery with television delivery and the combination of the two in a 2 x 2 plus 1 design. Schools were randomly assigned to conditions. Control groups included "treatment as usual" and an "attention control" with the same outcome expectancies as the treatment conditions. Students were surveyed twice in grade 7 and once in each of grades 8 and 9. The interventions occurred during grade 7. We observed significant effects on mediating variables such as knowledge and prevalence estimates, and coping effort. The knowledge and prevalence estimates effects decayed partially but remained significant up to a 2-year follow-up. The coping effort effect did not persist at follow-ups. There were significant main effects of both classroom training and TV programming on knowledge and prevalence estimates and significant interactions of classroom and TV programming on knowledge (negative), disapproval of parental smoking, and coping effort. There were no consistent program effects on refusal/self-efficacy, smoking intentions, or behavior. Previous reports demonstrated successful development and pilot testing of program components and measures and high acceptance of the program by students and parents. The lack of behavioral effects may have been the result of imperfect program implementation or low base rates of intentions and behavior.
Heidari, Morteza; Zargari Khuzani, Abolfazl; Hollingsworth, Alan B.; Danala, Gopichandh; Mirniaharikandehei, Seyedehnafiseh; Qiu, Yuchen; Liu, Hong; Zheng, Bin
2018-02-01
In order to automatically identify a set of effective mammographic image features and build an optimal breast cancer risk stratification model, this study aims to investigate advantages of applying a machine learning approach embedded with a locally preserving projection (LPP) based feature combination and regeneration algorithm to predict short-term breast cancer risk. A dataset involving negative mammograms acquired from 500 women was assembled. This dataset was divided into two age-matched classes of 250 high risk cases in which cancer was detected in the next subsequent mammography screening and 250 low risk cases, which remained negative. First, a computer-aided image processing scheme was applied to segment fibro-glandular tissue depicted on mammograms and initially compute 44 features related to the bilateral asymmetry of mammographic tissue density distribution between left and right breasts. Next, a multi-feature fusion based machine learning classifier was built to predict the risk of cancer detection in the next mammography screening. A leave-one-case-out (LOCO) cross-validation method was applied to train and test the machine learning classifier embedded with an LPP algorithm, which generated a new operational vector with 4 features using a maximal variance approach in each LOCO process. Results showed a 9.7% increase in risk prediction accuracy when using this LPP-embedded machine learning approach. An increased trend of adjusted odds ratios was also detected in which odds ratios increased from 1.0 to 11.2. This study demonstrated that applying the LPP algorithm effectively reduced feature dimensionality, and yielded higher and potentially more robust performance in predicting short-term breast cancer risk.
Han, Wenhua; Shen, Xiaohui; Xu, Jun; Wang, Ping; Tian, Guiyun; Wu, Zhengyang
2014-09-04
Magnetic flux leakage (MFL) inspection is one of the most important and sensitive nondestructive testing approaches. For online MFL inspection of a long-range railway track or oil pipeline, a fast and effective defect profile estimating method based on a multi-power affine projection algorithm (MAPA) is proposed, in which the depth of a sampling point is related not only to the MFL signals before it but also to the ones after it, and all of the sampling points related to one point appear as series or multi-power terms. Defect profile estimation has two steps: regulating a weight vector in an MAPA filter and estimating a defect profile with the MAPA filter. Both simulation and experimental data are used to test the performance of the proposed method. The results demonstrate that the proposed method exhibits high speed while keeping the estimated profiles close to the desired ones in a noisy environment, thereby meeting the demand of accurate online inspection.
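The affine projection family that MAPA extends can be illustrated with the textbook single-power affine projection algorithm (APA), which updates an adaptive FIR filter by projecting the correction onto the span of the most recent input regressors. This is the standard baseline, not the paper's multi-power variant:

```python
import numpy as np

def apa_identify(x, d, taps=4, order=3, mu=0.5, delta=1e-4):
    """Standard affine projection algorithm for FIR system identification:
    x      - input signal, d - desired (observed) signal
    taps   - filter length, order - number of stacked recent regressors
    mu     - step size, delta - regularization for the matrix inverse."""
    w = np.zeros(taps)
    X = np.zeros((order, taps))   # stacked recent regressors, newest row first
    dvec = np.zeros(order)        # matching desired samples
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]            # current regressor
        X = np.vstack([u, X[:-1]])
        dvec = np.concatenate([[d[n]], dvec[:-1]])
        e = dvec - X @ w                           # a priori errors
        # weight update projected onto the span of the stacked regressors
        w = w + mu * X.T @ np.linalg.solve(X @ X.T + delta * np.eye(order), e)
    return w
```

MAPA additionally ties each sampling point to MFL signals both before and after it; that bidirectional, multi-power structure is omitted in this baseline sketch.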
Heidari, Morteza; Zargari Khuzani, Abolfazl; Danala, Gopichandh; Mirniaharikandehei, Seyedehnafiseh; Qian, Wei; Zheng, Bin
2018-03-01
Both conventional and deep machine learning have been used to develop decision-support tools applied in medical imaging informatics. In order to take advantage of both conventional and deep learning approaches, this study aims to investigate the feasibility of applying a locally preserving projection (LPP) based feature regeneration algorithm to build a new machine learning classifier model to predict short-term breast cancer risk. First, a computer-aided image processing scheme was used to segment and quantify breast fibro-glandular tissue volume. Next, 44 initially computed image features related to the bilateral mammographic tissue density asymmetry were extracted. Then, an LPP-based feature combination method was applied to regenerate a new operational feature vector using a maximal variance approach. Last, a k-nearest neighbor (KNN) algorithm based machine learning classifier using the LPP-generated new feature vectors was developed to predict breast cancer risk. A testing dataset involving negative mammograms acquired from 500 women was used. Among them, 250 were positive and 250 remained negative in the next subsequent mammography screening. Applied to this dataset, the LPP-generated feature vector reduced the number of features from 44 to 4. Using a leave-one-case-out validation method, the area under the ROC curve produced by the KNN classifier significantly increased from 0.62 to 0.68 in predicting breast cancer detected in the next subsequent mammography screening.
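A hedged sketch of the pipeline's last two stages, substituting plain PCA for the LPP step (both select directions by a maximal-variance criterion, but LPP also preserves local neighborhood structure, which this stand-in ignores):

```python
import numpy as np

def maximal_variance_projection(X, k=4):
    """Project an n-by-p feature matrix onto the k directions of maximal
    variance (plain PCA, used here as a simplified stand-in for the
    LPP-based feature regeneration step)."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    W = vecs[:, np.argsort(vals)[::-1][:k]]   # top-k eigenvectors
    return Xc @ W, W

def knn_predict(train_X, train_y, query, k=5):
    """k-nearest-neighbor majority vote (binary 0/1 labels) in the
    reduced feature space."""
    dist = np.linalg.norm(train_X - query, axis=1)
    nearest = train_y[np.argsort(dist)[:k]]
    return int(round(nearest.mean()))
```

In the study the reduction is 44 features down to 4; the synthetic 44-dimensional data used to exercise this sketch is invented for illustration.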
Linn, S
One of the most often used measures of multiple injuries is the injury severity score (ISS). Determination of the ISS is based on the abbreviated injury scale (AIS). This paper suggests a new algorithm to sort the AIS scores for each case and calculate the ISS. The program takes the unsorted AIS levels for each case and rearranges them in descending order. The first three sorted AIS scores, representing the three most severe injuries of a person, are then used to calculate the ISS. This algorithm should be useful for analyses of clusters of injuries, especially when many patients have multiple injuries.
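The described computation is compact enough to state directly. This sketch follows the abstract (sort descending, square the top three) and omits the body-region grouping and AIS=6 special cases of the full ISS definition:

```python
def iss(ais_scores):
    """Injury Severity Score from unsorted AIS severities: rearrange in
    descending order, then sum the squares of the three highest."""
    top3 = sorted(ais_scores, reverse=True)[:3]
    return sum(s * s for s in top3)
```

For a patient with fewer than three recorded injuries, the sum simply runs over what is available.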
Liu, Yunlong; Wang, Aiping; Guo, Lei; Wang, Hong
2017-07-09
This paper presents an error-entropy minimization tracking control algorithm for a class of dynamic stochastic systems. The system is represented by a set of time-varying discrete nonlinear equations with non-Gaussian stochastic input, where the statistical properties of the stochastic input are unknown. By using Parzen windowing with a Gaussian kernel to estimate the probability densities of the errors, recursive algorithms are then proposed to design the controller such that the tracking error can be minimized. The performance of the error-entropy minimization criterion is compared with mean-square-error minimization in the simulation results.
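The entropy criterion can be sketched as follows: with a Parzen (Gaussian-kernel) density estimate of the tracking-error sample, Renyi's quadratic entropy has a closed-form estimator, the negative log of the mean pairwise kernel (the "information potential"). The bandwidth and the choice of Renyi order 2 are assumptions of this sketch, not details given in the abstract:

```python
import numpy as np

def renyi_quadratic_entropy(errors, sigma=0.5):
    """Parzen-window estimate of Renyi's quadratic entropy of an error
    sample: H2 = -log V, where the information potential V is the mean of
    Gaussian kernels over all sample pairs. The pairwise kernel variance
    is 2*sigma^2 (convolution of two sigma-width kernels)."""
    e = np.asarray(errors, dtype=float)
    diff = e[:, None] - e[None, :]
    two_s2 = 2.0 * (2.0 * sigma ** 2)
    V = np.mean(np.exp(-diff ** 2 / two_s2) / np.sqrt(np.pi * two_s2))
    return -np.log(V)
```

A controller tuned under this criterion pushes the value down, concentrating the error distribution rather than merely shrinking its variance.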
Long, Craig S.; Fujiwara, Masatomo; Davis, Sean; Mitchell, Daniel M.; Wright, Corwin J.
2017-12-01
Two of the most basic parameters generated from a reanalysis are temperature and winds. Temperatures in the reanalyses are derived from conventional (surface and balloon), aircraft, and satellite observations. Winds are observed by conventional systems, cloud tracked, and derived from height fields, which are in turn derived from the vertical temperature structure. In this paper we evaluate as part of the SPARC Reanalysis Intercomparison Project (S-RIP) the temperature and wind structure of all the recent and past reanalyses. This evaluation is mainly among the reanalyses themselves, but comparisons against independent observations, such as HIRDLS and COSMIC temperatures, are also presented. This evaluation uses monthly mean and 2.5° zonal mean data sets and spans the satellite era from 1979-2014. There is very good agreement in temperature seasonally and latitudinally among the more recent reanalyses (CFSR, MERRA, ERA-Interim, JRA-55, and MERRA-2) between the surface and 10 hPa. At lower pressures there is increased variance among these reanalyses that changes with season and latitude. This variance also changes during the time span of these reanalyses with greater variance during the TOVS period (1979-1998) and less variance afterward in the ATOVS period (1999-2014). There is a distinct change in the temperature structure in the middle and upper stratosphere during this transition from TOVS to ATOVS systems. Zonal winds are in greater agreement than temperatures and this agreement extends to lower pressures than the temperatures. Older reanalyses (NCEP/NCAR, NCEP/DOE, ERA-40, JRA-25) have larger temperature and zonal wind disagreement from the more recent reanalyses. All reanalyses to date have issues analysing the quasi-biennial oscillation (QBO) winds. Comparisons with Singapore QBO winds show disagreement in the amplitude of the westerly and easterly anomalies. The disagreement with Singapore winds improves with the transition from TOVS to ATOVS observations
Tessier, Francois [Argonne National Lab. (ANL), Argonne, IL (United States)]; Vishwanath, Venkatram [Argonne National Lab. (ANL), Argonne, IL (United States)]
2017-11-28
Reading and writing data efficiently from different tiers of storage is necessary for most scientific simulations to achieve good performance at scale. Many software solutions have been developed to decrease the I/O bottleneck. One well-known strategy, in the context of collective I/O operations, is the two-phase I/O scheme. This strategy consists of selecting a subset of processes to aggregate contiguous pieces of data before performing reads/writes. In our previous work, we implemented the two-phase I/O scheme with an MPI-based topology-aware algorithm. Our algorithm showed very good performance at scale compared to standard I/O libraries such as POSIX I/O and MPI I/O. However, the algorithm had several limitations hindering a satisfying reproducibility of our experiments. In this paper, we extend our work by (1) identifying the obstacles we face in reproducing our experiments and (2) discovering solutions that reduce the unpredictability of our results.
Altarelli, Fabrizio; Monasson, Remi; Zamponi, Francesco
2007-01-01
For large clause-to-variable ratios, typical K-SAT instances drawn from the uniform distribution have no solution. We argue, based on statistical mechanics calculations using the replica and cavity methods, that rare satisfiable instances from the uniform distribution are very similar to typical instances drawn from the so-called planted distribution, where instances are chosen uniformly among the ones that admit a given solution. It then follows, from a recent article by Feige, Mossel and Vilenchik (2006 Complete convergence of message passing algorithms for some satisfiability problems Proc. Random 2006 pp 339-50), that these rare instances can be easily recognized (in O(log N) time and with probability close to 1) by a simple message-passing algorithm.
Ghobadi, Kimia; Ghaffari, Hamid R; Aleman, Dionne M; Jaffray, David A; Ruschin, Mark
2012-06-01
The purpose of this work is to develop a framework for the inverse problem for radiosurgery treatment planning on the Gamma Knife® Perfexion™ (PFX) for intracranial targets. The approach taken in the present study consists of two parts. First, a hybrid grassfire and sphere-packing algorithm is used to obtain shot positions (isocenters) based on the geometry of the target to be treated. For the selected isocenters, a sector duration optimization (SDO) model is used to optimize the duration of radiation delivery from each collimator size from each individual source bank. The SDO model is solved using a projected gradient algorithm. This approach has been retrospectively tested on seven manually planned clinical cases (comprising 11 lesions) including acoustic neuromas and brain metastases. In terms of conformity and organ-at-risk (OAR) sparing, the quality of plans achieved with the inverse planning approach was, on average, improved compared to the manually generated plans. The mean difference in conformity index between inverse and forward plans was -0.12 (range: -0.27 to +0.03) and +0.08 (range: 0.00-0.17) for classic and Paddick definitions, respectively, favoring the inverse plans. The mean difference in volume receiving the prescribed dose (V100) between forward and inverse plans was 0.2% (range: -2.4% to +2.0%). After plan renormalization for equivalent coverage (i.e., V100), the mean difference in dose to 1 mm³ of brainstem between forward and inverse plans was -0.24 Gy (range: -2.40 to +2.02 Gy) favoring the inverse plans. Beam-on time varied with the number of isocenters but for the most optimal plans was on average 33 min longer than manual plans (range: -17 to +91 min) when normalized to a calibration dose rate of 3.5 Gy/min. In terms of algorithm performance, the isocenter selection for all the presented plans was performed in less than 3 s, while the SDO was performed in an average of 215 min. PFX inverse planning can be performed using
Ghobadi, Kimia; Ghaffari, Hamid R.; Aleman, Dionne M.; Jaffray, David A.; Ruschin, Mark [Department of Mechanical and Industrial Engineering, University of Toronto, 5 King's College Road, Toronto, Ontario M5S 3G8 (Canada); Department of Radiation Oncology, University of Toronto, Radiation Medicine Program, Princess Margaret Hospital, 610 University Avenue, Toronto, Ontario M5G 2M9 (Canada)]
2012-06-15
Purpose: The purpose of this work is to develop a framework for the inverse problem for radiosurgery treatment planning on the Gamma Knife® Perfexion™ (PFX) for intracranial targets. Methods: The approach taken in the present study consists of two parts. First, a hybrid grassfire and sphere-packing algorithm is used to obtain shot positions (isocenters) based on the geometry of the target to be treated. For the selected isocenters, a sector duration optimization (SDO) model is used to optimize the duration of radiation delivery from each collimator size from each individual source bank. The SDO model is solved using a projected gradient algorithm. This approach has been retrospectively tested on seven manually planned clinical cases (comprising 11 lesions) including acoustic neuromas and brain metastases. Results: In terms of conformity and organ-at-risk (OAR) sparing, the quality of plans achieved with the inverse planning approach was, on average, improved compared to the manually generated plans. The mean difference in conformity index between inverse and forward plans was -0.12 (range: -0.27 to +0.03) and +0.08 (range: 0.00-0.17) for classic and Paddick definitions, respectively, favoring the inverse plans. The mean difference in volume receiving the prescribed dose (V100) between forward and inverse plans was 0.2% (range: -2.4% to +2.0%). After plan renormalization for equivalent coverage (i.e., V100), the mean difference in dose to 1 mm³ of brainstem between forward and inverse plans was -0.24 Gy (range: -2.40 to +2.02 Gy) favoring the inverse plans. Beam-on time varied with the number of isocenters but for the most optimal plans was on average 33 min longer than manual plans (range: -17 to +91 min) when normalized to a calibration dose rate of 3.5 Gy/min. In terms of algorithm performance, the isocenter selection for all the presented plans was performed in less than 3 s, while the SDO was performed in an
Ghobadi, Kimia; Ghaffari, Hamid R.; Aleman, Dionne M.; Jaffray, David A.; Ruschin, Mark
2012-01-01
Purpose: The purpose of this work is to develop a framework for the inverse problem for radiosurgery treatment planning on the Gamma Knife® Perfexion™ (PFX) for intracranial targets. Methods: The approach taken in the present study consists of two parts. First, a hybrid grassfire and sphere-packing algorithm is used to obtain shot positions (isocenters) based on the geometry of the target to be treated. For the selected isocenters, a sector duration optimization (SDO) model is used to optimize the duration of radiation delivery from each collimator size from each individual source bank. The SDO model is solved using a projected gradient algorithm. This approach has been retrospectively tested on seven manually planned clinical cases (comprising 11 lesions) including acoustic neuromas and brain metastases. Results: In terms of conformity and organ-at-risk (OAR) sparing, the quality of plans achieved with the inverse planning approach was, on average, improved compared to the manually generated plans. The mean difference in conformity index between inverse and forward plans was −0.12 (range: −0.27 to +0.03) and +0.08 (range: 0.00–0.17) for classic and Paddick definitions, respectively, favoring the inverse plans. The mean difference in volume receiving the prescribed dose (V100) between forward and inverse plans was 0.2% (range: −2.4% to +2.0%). After plan renormalization for equivalent coverage (i.e., V100), the mean difference in dose to 1 mm³ of brainstem between forward and inverse plans was −0.24 Gy (range: −2.40 to +2.02 Gy) favoring the inverse plans. Beam-on time varied with the number of isocenters but for the most optimal plans was on average 33 min longer than manual plans (range: −17 to +91 min) when normalized to a calibration dose rate of 3.5 Gy/min. In terms of algorithm performance, the isocenter selection for all the presented plans was performed in less than 3 s, while the SDO was performed in an average of 215 min.
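The projected gradient idea used to solve the SDO model can be illustrated on a toy nonnegative least-squares problem: delivery durations cannot be negative, so each gradient step is followed by projection onto the nonnegative orthant. The actual SDO objective and constraints are not reproduced here:

```python
import numpy as np

def projected_gradient_nnls(A, b, iters=500, step=None):
    """Minimize (1/2)||A t - b||^2 subject to t >= 0 by projected gradient
    descent, a toy analogue of solving for nonnegative sector durations."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # <= 1/L, L the gradient Lipschitz constant
    t = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ t - b)                 # gradient of the quadratic
        t = np.maximum(t - step * grad, 0.0)     # project onto t >= 0
    return t
```

The 2x2 system in the test is invented for illustration; its unconstrained optimum already satisfies the constraint, so the iterates converge to it.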
Krysa, Zbigniew; Pactwa, Katarzyna; Wozniak, Justyna; Dudek, Michal
2017-12-01
Geological variability is one of the main factors influencing the viability of mining investment projects and the technical risk of geological projects. To date, analyses of the economic viability of new extraction fields for the KGHM Polska Miedź S.A. underground copper mine at the Fore-Sudetic Monocline have assumed a constant, averaged content of useful elements. The research presented in this article verifies the value of production from copper and silver ore, for the same economic background, using variable cash flows that result from the local variability of useful-element content. Furthermore, the economic model of the ore is examined for a significant difference between the model value estimated using a linear correlation between useful-element content and mine-face height, and an approach in which the correlation of model parameters is based on the copula best matching an information-capacity criterion. Using a copula allows the simulation to account for multi-variable dependencies simultaneously, giving a better reflection of the dependency structure than linear correlation does. Calculation results of the economic model used for deposit valuation indicate that modelling the correlation between copper and silver with a copula generates a wider range of possible project values than modelling based on linear correlation. The average deposit value remains unchanged.
Melli, S. Ali; Wahid, Khan A.; Babyn, Paul; Cooper, David M. L.; Gopi, Varun P.
2016-12-01
Synchrotron X-ray Micro Computed Tomography (Micro-CT) is an imaging technique which is increasingly used for non-invasive in vivo preclinical imaging. However, it often requires a large number of projections from many different angles to reconstruct high-quality images leading to significantly high radiation doses and long scan times. To utilize this imaging technique further for in vivo imaging, we need to design reconstruction algorithms that reduce the radiation dose and scan time without reduction of reconstructed image quality. This research is focused on using a combination of gradient-based Douglas-Rachford splitting and discrete wavelet packet shrinkage image denoising methods to design an algorithm for reconstruction of large-scale reduced-view synchrotron Micro-CT images with acceptable quality metrics. These quality metrics are computed by comparing the reconstructed images with a high-dose reference image reconstructed from 1800 equally spaced projections spanning 180°. Visual and quantitative-based performance assessment of a synthetic head phantom and a femoral cortical bone sample imaged in the biomedical imaging and therapy bending magnet beamline at the Canadian Light Source demonstrates that the proposed algorithm is superior to the existing reconstruction algorithms. Using the proposed reconstruction algorithm to reduce the number of projections in synchrotron Micro-CT is an effective way to reduce the overall radiation dose and scan time which improves in vivo imaging protocols.
Variable stars classification based on photometric data from the "Pi of the Sky" project
Majczyna, A.; Nalezyty, M.; Siudek, M.; Malek, K.; Barnacka, A.; Mankiewicz, L.; Żarnecki, A. F.
2009-06-01
We present the first few steps in the creation of the second edition of the variable stars catalogue, based on the "Pi of the Sky" data collected during the two years 2006-2007. We have selected ~3000 variable star candidates from about 1.5 million objects.
Shang, Ce; Chaloupka, Frank J.; Fong, Geoffrey T; Thompson, Mary; O’Connor, Richard J
2015-01-01
Background: Recent studies have shown that more opportunities exist for tax avoidance when cigarette excise tax structure departs from a uniform specific structure. However, the association between tax structure and cigarette price variability has not been thoroughly studied in the existing literature. Objective: To examine how cigarette tax structure is associated with price variability. The variability of self-reported prices is measured using the ratios of differences between higher and lower prices to the median price, such as the IQR-to-median ratio. Methods: We used survey data taken from the International Tobacco Control Policy Evaluation (ITC) Project in 17 countries to conduct the analysis. Cigarette prices were derived using individual purchase information and aggregated to price variability measures for each surveyed country and wave. The effect of tax structures on price variability was estimated using Generalised Estimating Equations after adjusting for year and country attributes. Findings: Our study provides empirical evidence of a relationship between tax structure and cigarette price variability. We find that, compared to the specific uniform tax structure, mixed uniform and tiered (specific, ad valorem or mixed) structures are associated with greater price variability (p≤0.01). Moreover, while a greater share of the specific component in total excise taxes is associated with lower price variability (p≤0.05), a tiered tax structure is associated with greater price variability (p≤0.01). The results suggest that a uniform and specific tax structure is the most effective tax structure for reducing tobacco consumption and prevalence by limiting price variability and decreasing opportunities for tax avoidance. PMID:25855641
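The IQR-to-median ratio named above is straightforward to compute from a sample of self-reported prices; the per-country, per-wave aggregation used in the study is omitted from this sketch:

```python
import numpy as np

def iqr_to_median(prices):
    """Price-variability measure: the gap between the 75th and 25th
    percentiles of self-reported prices, divided by the median price."""
    q1, med, q3 = np.percentile(prices, [25, 50, 75])
    return (q3 - q1) / med
```

Intuitively, a tight price distribution (as under a uniform specific tax) yields a small ratio, while tiered structures spread prices out and raise it; the toy price lists in the test are invented to show only that ordering.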
Kim, Dae-Won; Protopapas, Pavlos; Alcock, Charles; Trichas, Markos; Byun, Yong-Ik; Khardon, Roni
2011-01-01
We present a new quasi-stellar object (QSO) selection algorithm using a Support Vector Machine, a supervised classification method, on a set of extracted time series features including period, amplitude, color, and autocorrelation value. We train a model that separates QSOs from variable stars, non-variable stars, and microlensing events using 58 known QSOs, 1629 variable stars, and 4288 non-variables in the MAssive Compact Halo Object (MACHO) database as a training set. To estimate the efficiency and the accuracy of the model, we perform a cross-validation test using the training set. The test shows that the model correctly identifies ∼80% of known QSOs with a 25% false-positive rate. The majority of the false positives are Be stars. We applied the trained model to the MACHO Large Magellanic Cloud (LMC) data set, which consists of 40 million light curves, and found 1620 QSO candidates. During the selection none of the 33,242 known MACHO variables were misclassified as QSO candidates. In order to estimate the true false-positive rate, we crossmatched the candidates with astronomical catalogs including the Spitzer Surveying the Agents of a Galaxy's Evolution LMC catalog and a few X-ray catalogs. The results further suggest that the majority of the candidates, more than 70%, are QSOs.
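Two of the extracted time-series features mentioned in the abstract, amplitude and autocorrelation, can be sketched for a single light curve. Period and color are omitted, and the definitions below are common conventions, not necessarily the ones used on the MACHO data:

```python
import numpy as np

def lightcurve_features(mag):
    """Amplitude (half the peak-to-peak range of the magnitude series)
    and lag-1 autocorrelation (QSO variability is stochastic but strongly
    correlated between adjacent samples)."""
    m = np.asarray(mag, dtype=float)
    amplitude = 0.5 * (m.max() - m.min())
    d = m - m.mean()
    denom = np.sum(d * d)
    autocorr = float(np.sum(d[:-1] * d[1:]) / denom) if denom else 0.0
    return amplitude, autocorr
```

Feature vectors of this kind (plus period and color) are what the Support Vector Machine is trained on to separate QSOs from variable stars and microlensing events.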
Project Sekwa: A variable stability, blended-wing-body, research UAV
Broughton, BA
2008-10-01
...of flying wing and Blended-Wing-Body (BWB) platforms. The main objective of the project was to investigate the advantages and pitfalls of relaxing the longitudinal stability criteria on a Blended-Wing-Body UAV. The project was also aimed at expanding...
Shieh, Chun-Chien; Kipritidis, John; O’Brien, Ricky T.; Keall, Paul J.; Kuncic, Zdenka
2014-01-01
Purpose: Respiratory signal, binning method, and reconstruction algorithm are three major controllable factors affecting image quality in thoracic 4D cone-beam CT (4D-CBCT), which is widely used in image guided radiotherapy (IGRT). Previous studies have investigated each of these factors individually, but no integrated sensitivity analysis has been performed. In addition, projection angular spacing is also a key factor in reconstruction, but how it affects image quality is not obvious. An investigation of the impacts of these four factors on image quality can help determine the most effective strategy in improving 4D-CBCT for IGRT. Methods: Fourteen 4D-CBCT patient projection datasets with various respiratory motion features were reconstructed with the following controllable factors: (i) respiratory signal (real-time position management, projection image intensity analysis, or fiducial marker tracking), (ii) binning method (phase, displacement, or equal-projection-density displacement binning), and (iii) reconstruction algorithm [Feldkamp–Davis–Kress (FDK), McKinnon–Bates (MKB), or adaptive-steepest-descent projection-onto-convex-sets (ASD-POCS)]. The image quality was quantified using signal-to-noise ratio (SNR), contrast-to-noise ratio, and edge-response width in order to assess noise/streaking and blur. The SNR values were also analyzed with respect to the maximum, mean, and root-mean-squared-error (RMSE) projection angular spacing to investigate how projection angular spacing affects image quality. Results: The choice of respiratory signals was found to have no significant impact on image quality. Displacement-based binning was found to be less prone to motion artifacts compared to phase binning in more than half of the cases, but was shown to suffer from large interbin image quality variation and large projection angular gaps. Both MKB and ASD-POCS resulted in noticeably improved image quality almost 100% of the time relative to FDK. In addition, SNR
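The image-quality metrics quantified above can be computed from regions of interest in a reconstructed volume. Definitions of SNR and CNR vary across 4D-CBCT studies, so the following is one common convention, not necessarily the authors' exact formulation (edge-response width, which needs a profile fit, is omitted):

```python
import numpy as np

def snr(roi):
    """Signal-to-noise ratio of a nominally uniform region of interest:
    mean intensity over its standard deviation."""
    roi = np.asarray(roi, dtype=float)
    return roi.mean() / roi.std()

def cnr(roi_a, roi_b):
    """Contrast-to-noise ratio between two regions of interest, using the
    pooled standard deviation of the two regions as the noise estimate."""
    a = np.asarray(roi_a, dtype=float)
    b = np.asarray(roi_b, dtype=float)
    return abs(a.mean() - b.mean()) / np.sqrt(0.5 * (a.var() + b.var()))
```

Higher SNR/CNR after reconstruction with MKB or ASD-POCS, relative to FDK, is the kind of improvement the study reports.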
Shieh, Chun-Chien [Radiation Physics Laboratory, Sydney Medical School, University of Sydney, NSW 2006, Australia and Institute of Medical Physics, School of Physics, University of Sydney, NSW 2006 (Australia); Kipritidis, John; O’Brien, Ricky T.; Keall, Paul J., E-mail: paul.keall@sydney.edu.au [Radiation Physics Laboratory, Sydney Medical School, University of Sydney, NSW 2006 (Australia); Kuncic, Zdenka [Institute of Medical Physics, School of Physics, University of Sydney, NSW 2006 (Australia)
2014-04-15
Wen-Xiang Wu
2014-01-01
Full Text Available The cost-based system optimum problem in networks with continuously distributed value of time is formulated in a path-based form, which cannot be solved by the Frank-Wolfe algorithm. In light of the orders-of-magnitude improvement in the availability of computer memory in recent years, path-based algorithms have been regarded as a viable approach for traffic assignment problems with reasonably large network sizes. We develop a path-based gradient projection algorithm for solving the cost-based system optimum model, based on the Goldstein-Levitin-Polyak method, which has been successfully applied to solve standard user equilibrium and system optimum problems. The Sioux Falls network is used to verify the effectiveness of the algorithm.
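A gradient projection step of the Goldstein-Levitin-Polyak kind keeps path flows nonnegative while conserving the origin-destination demand. A minimal sketch, illustrative only and not the authors' implementation (the function names, the Euclidean simplex projection, and the toy step size are assumptions):

```python
def project_to_simplex(f, demand):
    """Euclidean projection of path flows f onto {f >= 0, sum(f) = demand}."""
    u = sorted(f, reverse=True)
    css = 0.0
    rho = 0
    for i, ui in enumerate(u, start=1):
        css += ui
        if ui + (demand - css) / i > 0:
            rho = i
    tau = (demand - sum(u[:rho])) / rho  # shift that restores feasibility
    return [max(0.0, fi + tau) for fi in f]

def gradient_projection_step(f, grad, step, demand):
    """One projected-gradient step: move along -grad, project back onto
    the feasible set of path flows for a single origin-destination pair."""
    moved = [fi - step * gi for fi, gi in zip(f, grad)]
    return project_to_simplex(moved, demand)
```

Path costs (and hence the gradient) would come from evaluating link cost functions along each path; here the projection is the part specific to the path-based formulation.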
Choo, Ji Yung; Goo, Jin Mo; Park, Chang Min; Park, Sang Joon; Lee, Chang Hyun; Shim, Mi-Suk
2014-01-01
To evaluate filtered back projection (FBP) and two iterative reconstruction (IR) algorithms and their effects on the quantitative analysis of lung parenchyma and airway measurements on computed tomography (CT) images. Low-dose chest CT scans obtained in 281 adult patients were reconstructed using three algorithms: FBP, adaptive statistical IR (ASIR) and model-based IR (MBIR). Measurements from each dataset were compared: total lung volume, emphysema index (EI), airway measurements of the lumen and wall area, and average wall thickness. The accuracy of the airway measurements of each algorithm was also evaluated using an airway phantom. EI using a threshold of -950 HU was significantly different among the three algorithms, in decreasing order of FBP (2.30 %), ASIR (1.49 %) and MBIR (1.20 %) (P < 0.01). Wall thickness was also significantly different among the three algorithms, with FBP (2.09 mm) demonstrating thicker walls than ASIR (2.00 mm) and MBIR (1.88 mm) (P < 0.01). Airway phantom analysis revealed that MBIR gave the most accurate airway measurements. The three algorithms presented different EIs and wall thicknesses, decreasing in the order of FBP, ASIR and MBIR. Thus, care should be taken in selecting the appropriate IR algorithm for quantitative analysis of the lung. (orig.)
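The emphysema index used in this comparison is the percentage of lung voxels with attenuation below the -950 HU threshold. A minimal sketch of that computation (the function name and the plain-list input are assumptions, not from the paper):

```python
def emphysema_index(lung_hu, threshold=-950):
    """Percentage of lung voxels with attenuation below threshold (in HU).

    lung_hu: iterable of Hounsfield-unit values for segmented lung voxels.
    """
    lung_hu = list(lung_hu)
    if not lung_hu:
        raise ValueError("no lung voxels supplied")
    low = sum(1 for v in lung_hu if v < threshold)
    return 100.0 * low / len(lung_hu)
```

Because IR algorithms reduce noise, fewer voxels dip below -950 HU, which is consistent with the lower EI reported for ASIR and MBIR than for FBP.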
Choo, Ji Yung [Seoul National University Medical Research Center, Department of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul (Korea, Republic of); Korea University Ansan Hospital, Ansan-si, Department of Radiology, Gyeonggi-do (Korea, Republic of); Goo, Jin Mo; Park, Chang Min; Park, Sang Joon [Seoul National University Medical Research Center, Department of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul (Korea, Republic of); Seoul National University, Cancer Research Institute, Seoul (Korea, Republic of); Lee, Chang Hyun; Shim, Mi-Suk [Seoul National University Medical Research Center, Department of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul (Korea, Republic of)
2014-04-15
Öhrmalm, Christina; Jobs, Magnus; Eriksson, Ronnie; Golbob, Sultan; Elfaitouri, Amal; Benachenhou, Farid; Strømme, Maria; Blomberg, Jonas
2010-01-01
One of the main problems in nucleic acid-based techniques for detection of infectious agents, such as influenza viruses, is nucleic acid sequence variation. DNA probes, 70 nt long, some including the nucleotide analog deoxyribose-inosine (dInosine), were analyzed for hybridization tolerance to different amounts and distributions of mismatching bases, e.g. synonymous mutations, in target DNA. Microsphere-linked 70-mer probes were hybridized in 3M TMAC buffer to biotinylated single-stranded (ss) DNA for subsequent analysis in a Luminex® system. Mismatches that interrupted contiguous matching stretches of 6 nt or longer had a strong impact on hybridization: a contiguous matching stretch contributes more than the same number of matching nucleotides separated by mismatches into several shorter regions. dInosine, but not 5-nitroindole, substitutions at mismatching positions stabilized hybridization remarkably well, comparably to N (4-fold) wobbles in the same positions. In contrast to shorter probes, 70-nt probes with judiciously placed dInosine substitutions and/or wobble positions were remarkably mismatch tolerant, with preserved specificity. An algorithm, NucZip, was constructed to model the nucleation and zipping phases of hybridization, integrating both local and distant binding contributions. It predicted hybridization more accurately than previous algorithms, and has the potential to guide the design of variation-tolerant yet specific probes. PMID:20864443
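The importance of contiguous matching stretches, with dInosine counting as a universal match, can be sketched as follows (illustrative only; the position-by-position alignment convention and the names are assumptions, not the NucZip algorithm itself):

```python
def longest_match_stretch(probe, target, universal="I"):
    """Length of the longest contiguous run of matching bases between a
    probe and an equally long target sequence; positions holding the
    universal analog (e.g. dInosine, written 'I') count as matches."""
    if len(probe) != len(target):
        raise ValueError("probe and target must align position by position")
    best = run = 0
    for p, t in zip(probe, target):
        if p == t or p == universal:
            run += 1
            best = max(best, run)
        else:
            run = 0
    return best
```

Under this metric, placing dInosine at a mismatching position restores the contiguous stretch that the mismatch would otherwise break, mirroring the stabilization the study reports.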
Li, Si; Xu, Yuesheng, E-mail: yxu06@syr.edu [Guangdong Provincial Key Laboratory of Computational Science, School of Mathematics and Computational Sciences, Sun Yat-sen University, Guangzhou 510275 (China); Zhang, Jiahan; Lipson, Edward [Department of Physics, Syracuse University, Syracuse, New York 13244 (United States); Krol, Andrzej; Feiglin, David [Department of Radiology, SUNY Upstate Medical University, Syracuse, New York 13210 (United States); Schmidtlein, C. Ross [Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York 10065 (United States); Vogelsang, Levon [Carestream Health, Rochester, New York 14608 (United States); Shen, Lixin [Guangdong Provincial Key Laboratory of Computational Science, School of Mathematics and Computational Sciences, Sun Yat-sen University, Guangzhou 510275, China and Department of Mathematics, Syracuse University, Syracuse, New York 13244 (United States)
2015-08-15
Purpose: The authors have recently developed a preconditioned alternating projection algorithm (PAPA) with total variation (TV) regularizer for solving the penalized-likelihood optimization model for single-photon emission computed tomography (SPECT) reconstruction. This algorithm belongs to a novel class of fixed-point proximity methods. The goal of this work is to investigate how PAPA performs while dealing with realistic noisy SPECT data, to compare its performance with more conventional methods, and to address issues with TV artifacts by proposing a novel form of the algorithm invoking high-order TV regularization, denoted as HOTV-PAPA, which has been explored and studied extensively in the present work. Methods: Using Monte Carlo methods, the authors simulate noisy SPECT data from two water cylinders; one contains lumpy “warm” background and “hot” lesions of various sizes with Gaussian activity distribution, and the other is a reference cylinder without hot lesions. The authors study the performance of HOTV-PAPA and compare it with PAPA using first-order TV regularization (TV-PAPA), the Panin–Zeng–Gullberg one-step-late method with TV regularization (TV-OSL), and an expectation–maximization algorithm with Gaussian postfilter (GPF-EM). The authors select penalty-weights (hyperparameters) by qualitatively balancing the trade-off between resolution and image noise separately for TV-PAPA and TV-OSL. However, the authors arrived at the same penalty-weight value for both of them. The authors set the first penalty-weight in HOTV-PAPA equal to the optimal penalty-weight found for TV-PAPA. The second penalty-weight needed for HOTV-PAPA is tuned by balancing resolution and the severity of staircase artifacts. The authors adjust the Gaussian postfilter to approximately match the local point spread function of GPF-EM and HOTV-PAPA. The authors examine hot lesion detectability, study local spatial resolution, analyze background noise properties, estimate mean
Bradley J Tomasek
Full Text Available As weather patterns become more volatile and extreme, risks introduced by weather variability will become more critical to agricultural production. The availability of days suitable for field work is driven by soil temperature and moisture, both of which may be altered by climate change. We projected changes in Illinois season length, spring field workability, and summer drought risk under three different emissions scenarios (B1, A1B, and A2 down to the crop district scale. Across all scenarios, thermal time units increased in parallel with a longer frost-free season. An increase in late March and Early April field workability was consistent across scenarios, but a decline in overall April through May workable days was observed for many cases. In addition, summer drought metrics were projected to increase for most scenarios. These results highlight how the spatial and temporal variability in climate change may present unique challenges to mitigation and adaptation efforts.
Altino José Mentzingen de Moraes
2015-02-01
building a solution that complies with the expectations that Stakeholders hold for the Project Result (which, as already said, are recorded in the Discipline of Scope in accordance with the Planning Stage) and should meet the intended compliance requirements of the Product Delivery (which, as already said, are recorded in the Discipline of Quality in accordance with the Validation Stage). In the execution of the Validation Stage of Product Delivery, structured in the Discipline of Quality, which occurs immediately before the built solution is placed into the Production Operational Phase, the expected quality is parameterized by the degree of compliance with the requirements (Essential and/or Desirable) specified in the Planning Stage of the Project Result, structured in the Discipline of Scope. Independently of the nature of the Project (whether it delivers a solution for Civil Construction, Software Development, Product Implementation or other areas), certain Exogenous Variables may be perceived that interfere with the successful completion of the Validation Stage of Product Delivery structured in the Discipline of Quality. The perception of these Exogenous Variables, which can interfere with the Deadlines and Costs initially planned for the Validation Stage of Product Delivery, is the result of the author's accumulated experience over a professional career of more than 40 (forty) years in projects of various types, in addition to his Technical Certifications in Project Management (PMP© - Project Management Professional/PMI© - Project Management Institute) and System Testing (CTFL© - Certified Tester Foundation Level/ISTQB© - International Software Testing Qualifications Board).
As a result of these assessments and surveys, the author identified and classified the perceived Exogenous Variables into 2 (two) Aspects, namely the Circumstantial Aspects of the Project and the Specific
Jiang, Hui; Liu, Guohai; Mei, Congli; Yu, Shuang; Xiao, Xiahong; Ding, Yuhan
2012-11-01
The feasibility of rapid determination of process variables (i.e. pH and moisture content) in solid-state fermentation (SSF) of wheat straw using Fourier transform near infrared (FT-NIR) spectroscopy was studied. The synergy interval partial least squares (siPLS) algorithm was implemented to calibrate the regression model. The number of PLS factors and the number of subintervals were optimized simultaneously by cross-validation. The performance of the prediction model was evaluated according to the root mean square error of cross-validation (RMSECV), the root mean square error of prediction (RMSEP) and the correlation coefficient (R). The measurement results of the optimal models were as follows: RMSECV = 0.0776, Rc = 0.9777, RMSEP = 0.0963, and Rp = 0.9686 for the pH model; RMSECV = 1.3544% w/w, Rc = 0.8871, RMSEP = 1.4946% w/w, and Rp = 0.8684 for the moisture content model. Finally, compared with classic PLS and iPLS models, the siPLS model showed superior performance. The overall results demonstrate that FT-NIR spectroscopy combined with the siPLS algorithm can be used to measure process variables in solid-state fermentation of wheat straw, and that NIR spectroscopy has the potential to be utilized in the SSF industry.
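The figures of merit quoted above are standard and straightforward to compute from reference and predicted values; a minimal sketch (function names are assumptions):

```python
import math

def rmsep(y_true, y_pred):
    """Root mean square error of prediction (RMSEP); the same formula over
    cross-validation residuals gives RMSECV."""
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

def corr(y_true, y_pred):
    """Pearson correlation coefficient R between reference and predicted
    values (reported as Rc for calibration, Rp for prediction)."""
    n = len(y_true)
    mt, mp = sum(y_true) / n, sum(y_pred) / n
    cov = sum((t - mt) * (p - mp) for t, p in zip(y_true, y_pred))
    st = math.sqrt(sum((t - mt) ** 2 for t in y_true))
    sp = math.sqrt(sum((p - mp) ** 2 for p in y_pred))
    return cov / (st * sp)
```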
Wenjuan Li
2015-11-01
Full Text Available The leaf area index (LAI) and the fraction of photosynthetically active radiation absorbed by green vegetation (FAPAR) are essential climate variables in surface process models. FCOVER is also important for separating vegetation and soil in energy balance processes. Currently, several LAI, FAPAR and FCOVER satellite products are derived at moderate to coarse spatial resolution. The launch of Sentinel-2 in 2015 will provide data at decametric resolution with a high revisit frequency, allowing canopy functioning to be quantified at local to regional scales. The aim of this study is thus to evaluate the performance of a neural-network-based algorithm for deriving LAI, FAPAR and FCOVER products at decametric spatial resolution and high temporal sampling. The algorithm is generic, i.e., it is applied without any knowledge of the landcover. A time series of high spatial resolution SPOT4_HRVIR (16 scenes) and Landsat 8 (18 scenes) images acquired in 2013 over a site in southwestern France was used to generate the LAI, FAPAR and FCOVER products. For each sensor and each biophysical variable, a neural network was first trained over PROSPECT+SAIL radiative transfer model simulations of top-of-canopy reflectance data for the green, red, near-infrared and short-wave infrared bands. Our results show a good spatial and temporal consistency between the variables derived from both sensors: almost half the pixels show an absolute difference between SPOT and Landsat estimates lower than 0.5 units for LAI, and 0.05 units for FAPAR and FCOVER. Finally, measurements with downward-looking digital hemispherical cameras were collected over the main land cover types to validate the accuracy of the products. Results show that the derived products are strongly correlated with the field measurements (R2 > 0.79), corresponding to RMSE = 0.49 for LAI, RMSE = 0.10 (RMSE = 0.12) for black-sky (white-sky) FAPAR and RMSE = 0.15 for FCOVER. It is concluded that the proposed generic algorithm provides a good
Atashkari, K.; Nariman-Zadeh, N.; Goelcue, M.; Khalkhali, A.; Jamali, A.
2007-01-01
The main reason for the efficiency decrease at part-load conditions in four-stroke spark-ignition (SI) engines is the flow restriction at the cross-sectional area of the intake system. Traditionally, valve timing has been designed to optimize operation at high engine speed and wide-open-throttle conditions. Several investigations have demonstrated that engine performance at part-load conditions can be improved if the valve timing is variable. Controlling valve timing can be used to improve the torque and power curve as well as to reduce fuel consumption and emissions. In this paper, a group method of data handling (GMDH) type neural network and evolutionary algorithms (EAs) are first used for modelling the effects of intake valve timing (Vt) and engine speed (N) of a spark-ignition engine on both developed engine torque (T) and fuel consumption (Fc), using experimentally obtained training and test data. Using the obtained polynomial neural network models, a multi-objective EA (the non-dominated sorting genetic algorithm, NSGA-II) with a new diversity-preserving mechanism is then used for Pareto-based optimization of the variable valve-timing engine, considering the two conflicting objectives of torque (T) and fuel consumption (Fc). The comparison results demonstrate the superiority of the GMDH-type models over feedforward neural network models in terms of the statistical measures on the training data and testing data and the number of hidden neurons. Further, it is shown that some interesting and important relationships, serving as useful optimal design principles for the performance of the variable valve-timing four-stroke spark-ignition engine, can be discovered by Pareto-based multi-objective optimization of the polynomial models. Such optimal principles would not have been obtained without the use of both GMDH-type neural network modelling and the multi-objective Pareto optimization approach.
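The Pareto filtering at the heart of NSGA-II reduces to a dominance test between (torque, fuel consumption) pairs, with torque maximized and fuel minimized. A minimal sketch of that filter only (the full NSGA-II adds non-dominated sorting into fronts, crowding distance, and genetic operators):

```python
def dominates(a, b):
    """a dominates b when a has torque >= b's and fuel <= b's, and is
    strictly better in at least one objective.
    Points are (torque, fuel_consumption)."""
    return a[0] >= b[0] and a[1] <= b[1] and (a[0] > b[0] or a[1] < b[1])

def pareto_front(points):
    """Non-dominated subset of candidate engine operating designs."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

Each point surviving the filter represents a valve-timing/speed design where torque cannot be raised without paying more fuel, which is exactly the trade-off curve the study inspects for design principles.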
Barnard, L.; Scott, C. J.; Owens, M.; Lockwood, M.; Crothers, S. R.; Davies, J. A.; Harrison, R. A.
2015-10-01
Observations from the Heliospheric Imager (HI) instruments aboard the twin STEREO spacecraft have enabled the compilation of several catalogues of coronal mass ejections (CMEs), each characterizing the propagation of CMEs through the inner heliosphere. Three such catalogues are the Rutherford Appleton Laboratory (RAL)-HI event list, the Solar Stormwatch CME catalogue, and, presented here, the J-tracker catalogue. Each catalogue uses a different method to characterize the location of CME fronts in the HI images: manual identification by an expert, the statistical reduction of the manual identifications of many citizen scientists, and an automated algorithm. We provide a quantitative comparison of the differences between these catalogues and techniques, using 51 CMEs common to each catalogue. The time-elongation profiles of these CME fronts are compared, as are the estimates of the CME kinematics derived from application of three widely used single-spacecraft-fitting techniques. The J-tracker and RAL-HI profiles are most similar, while the Solar Stormwatch profiles display a small systematic offset. Evidence is presented that these differences arise because the RAL-HI and J-tracker profiles follow the sunward edge of CME density enhancements, while Solar Stormwatch profiles track closer to the antisunward (leading) edge. We demonstrate that the method used to produce the time-elongation profile typically introduces more variability into the kinematic estimates than differences between the various single-spacecraft-fitting techniques. This has implications for the repeatability and robustness of these types of analyses, arguably especially so in the context of space weather forecasting, where it could make the results strongly dependent on the methods used by the forecaster.
Barriers to Investment in Utility-scale Variable Renewable Electricity (VRE) Projects
Hu, J.; Harmsen, R.; Crijns-Graus, W.; Worrell, E.
To effectively mitigate climate change, variable renewable electricity (VRE) is expected to substitute a great share of current fossil-fired electricity generation. However, VRE investments can be obstructed by many barriers, endangering the amount of investments needed in order to be consistent
Projecting county pulpwood production with historical production and macro-economic variables
Consuelo Brandeis; Dayton M. Lambert
2014-01-01
We explored forecasting of county roundwood pulpwood production with county-vector autoregressive (CVAR) and spatial panel vector autoregressive (SPVAR) methods. The analysis used timber products output data for the state of Florida, together with a set of macro-economic variables. Overall, we found the SPVAR specification produced forecasts with lower error rates...
Adli, Mazda; Wiethoff, Katja; Baghai, Thomas C; Fisher, Robert; Seemüller, Florian; Laakmann, Gregor; Brieger, Peter; Cordes, Joachim; Malevani, Jaroslav; Laux, Gerd; Hauth, Iris; Möller, Hans-Jürgen; Kronmüller, Klaus-Thomas; Smolka, Michael N; Schlattmann, Peter; Berger, Maximilian; Ricken, Roland; Stamm, Thomas J; Heinz, Andreas; Bauer, Michael
2017-09-01
Treatment algorithms are considered key to improving outcomes by enhancing the quality of care. This is the first randomized controlled study to evaluate the clinical effect of algorithm-guided treatment in inpatients with major depressive disorder. Inpatients, aged 18 to 70 years, with major depressive disorder from 10 German psychiatric departments were randomized to 5 different treatment arms (from 2000 to 2005), 3 of which were standardized stepwise drug treatment algorithms (ALGO). The fourth arm proposed medications and provided less specific recommendations based on a computerized documentation and expert system (CDES); the fifth arm received treatment as usual (TAU). ALGO included 3 different second-step strategies: lithium augmentation (ALGO LA), antidepressant dose-escalation (ALGO DE), and switch to a different antidepressant (ALGO SW). Time to remission (21-item Hamilton Depression Rating Scale ≤9) was the primary outcome. Time to remission was significantly shorter for ALGO DE (n=91) compared with both TAU (n=84) (HR=1.67; P=.014) and CDES (n=79) (HR=1.59; P=.031), and for ALGO SW (n=89) compared with both TAU (HR=1.64; P=.018) and CDES (HR=1.56; P=.038). For both ALGO LA (n=86) and ALGO DE, fewer antidepressant medications were needed to achieve remission than for CDES or TAU. In conclusion, algorithm-guided treatment is associated with shorter times and fewer medication changes to achieve remission in depressed inpatients than treatment as usual or computerized medication choice guidance. © The Author 2017. Published by Oxford University Press on behalf of CINP.
Guo, Pi; Zeng, Fangfang; Hu, Xiaomin; Zhang, Dingmei; Zhu, Shuming; Deng, Yu; Hao, Yuantao
2015-01-01
Objectives In epidemiological studies, it is important to identify independent associations between collective exposures and a health outcome. The current stepwise selection technique ignores stochastic errors and suffers from a lack of stability. The alternative LASSO-penalized regression model can be applied to detect significant predictors from a pool of candidate variables. However, this technique is prone to false positives and tends to create excessive biases. It remains challenging to develop robust variable selection methods and enhance predictability. Material and methods Two improved algorithms, denoted the two-stage hybrid and bootstrap ranking procedures, both using a LASSO-type penalty, were developed for epidemiological association analysis. The performance of the proposed procedures and of other methods, including the conventional LASSO, Bolasso, stepwise and stability selection models, was evaluated using intensive simulations. In addition, the methods were compared in an empirical analysis based on large-scale survey data of hepatitis B infection-relevant factors among Guangdong residents. Results The proposed procedures produced comparable or less biased selection results than conventional variable selection models. Overall, the two newly proposed procedures were stable across various simulation scenarios, demonstrating a higher power and a lower false positive rate during variable selection than the compared methods. In the empirical analysis, the proposed procedures yielded a sparse set of hepatitis B infection-relevant factors, gave the best predictive performance and were able to select a more stringent set of factors. The individual history of hepatitis B vaccination and the family and individual history of hepatitis B infection were associated with hepatitis B infection in the studied residents according to the proposed procedures. Conclusions The newly proposed procedures improve the identification of
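The bootstrap ranking idea can be sketched generically: resample the data, run a base selector on each resample, and rank variables by how often they are picked. The selector below is a placeholder argument, not the paper's LASSO-type fit:

```python
import random

def bootstrap_ranking(rows, selector, n_boot=100, seed=0):
    """Rank variable indices by selection frequency across bootstrap
    resamples of the data rows.

    selector: callable mapping a list of rows to a set of selected
    variable indices (in the paper this is a LASSO-penalized fit; any
    stand-in works for illustration).
    """
    rng = random.Random(seed)
    counts = {}
    n = len(rows)
    for _ in range(n_boot):
        sample = [rows[rng.randrange(n)] for _ in range(n)]
        for idx in selector(sample):
            counts[idx] = counts.get(idx, 0) + 1
    # higher selection frequency = more stable, more credible association
    return sorted(counts, key=lambda k: -counts[k])
```

Variables that survive most resamples are far less likely to be the false positives that a single LASSO fit is prone to.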
Kwiatkowski, Lester; Aumont, Olivier; Bopp, Laurent; Ciais, Philippe
2018-04-01
Ocean biogeochemical models are integral components of Earth system models used to project the evolution of the ocean carbon sink, as well as potential changes in the physical and chemical environment of marine ecosystems. In such models the stoichiometry of phytoplankton C:N:P is typically fixed at the Redfield ratio. The observed stoichiometry of phytoplankton, however, has been shown to considerably vary from Redfield values due to plasticity in the expression of phytoplankton cell structures with different elemental compositions. The intrinsic structure of fixed C:N:P models therefore has the potential to bias projections of the marine response to climate change. We assess the importance of variable stoichiometry on 21st century projections of net primary production, food quality, and ocean carbon uptake using the recently developed Pelagic Interactions Scheme for Carbon and Ecosystem Studies Quota (PISCES-QUOTA) ocean biogeochemistry model. The model simulates variable phytoplankton C:N:P stoichiometry and was run under historical and business-as-usual scenario forcing from 1850 to 2100. PISCES-QUOTA projects similar 21st century global net primary production decline (7.7%) to current generation fixed stoichiometry models. Global phytoplankton N and P content or food quality is projected to decline by 1.2% and 6.4% over the 21st century, respectively. The largest reductions in food quality are in the oligotrophic subtropical gyres and Arctic Ocean where declines by the end of the century can exceed 20%. Using the change in the carbon export efficiency in PISCES-QUOTA, we estimate that fixed stoichiometry models may be underestimating 21st century cumulative ocean carbon uptake by 0.5-3.5% (2.0-15.1 PgC).
Michel, D.; Jiménez, C.; Miralles, Diego G.; Jung, M.; Hirschi, M.; Ershadi, A.; Martens, B.; McCabe, Matthew; Fisher, J. B.; Mu, Q.; Seneviratne, S. I.; Wood, E. F.; Fernández-Prieto, D.
2015-01-01
algorithms: the Priestley–Taylor Jet Propulsion Laboratory model (PT-JPL), the Penman–Monteith algorithm from the MODIS evaporation product (PM-MOD), the Surface Energy Balance System (SEBS) and the Global Land Evaporation Amsterdam Model (GLEAM). In addition
Y. Fei
2014-09-01
Full Text Available In this study, a hydrological modelling framework was introduced to assess climate change impacts on future river flow in the West River basin, China, especially on streamflow variability and extremes. The modelling framework includes a delta-change method with the quantile-mapping technique to construct future climate forcings on the basis of observed meteorological data and downscaled climate model outputs. This method is able to retain the signals of extreme weather events, as projected by climate models, in the constructed future forcing scenarios. Fed with the historical and future forcing data, a large-scale hydrologic model (the Variable Infiltration Capacity model, VIC) was executed for streamflow simulations and projections at daily time scales. A bootstrapping resampling approach was used as an indirect alternative to test the equality of the means, standard deviations and coefficients of variation of the baseline and future streamflow time series, and to assess future changes in flood return levels. The West River basin case study confirms that the introduced modelling framework is an efficient and effective tool for quantifying streamflow variability and extremes in response to future climate change.
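A delta-change construction with quantile mapping perturbs each observed quantile by the model-projected change at the same quantile, which is what preserves extreme-event signals in the constructed forcings. A minimal sketch under simple empirical-quantile assumptions (the names and the crude quantile estimator are illustrative, not the paper's exact procedure):

```python
def quantile_delta(obs, gcm_hist, gcm_fut, p):
    """Delta-change with quantile mapping: shift the observed value at
    probability p by the climate-model-projected change at that same
    quantile, so changes in the distribution tails carry through."""
    def quantile(xs, q):
        # crude empirical quantile: nearest order statistic
        s = sorted(xs)
        i = min(int(q * len(s)), len(s) - 1)
        return s[i]
    delta = quantile(gcm_fut, p) - quantile(gcm_hist, p)
    return quantile(obs, p) + delta
```

Applying a single mean delta instead would shift the whole distribution uniformly; mapping per quantile lets the model amplify wet extremes more than median flows, as projected.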
Yang, Qin; Zou, Hong-Yan; Zhang, Yan; Tang, Li-Juan; Shen, Guo-Li; Jiang, Jian-Hui; Yu, Ru-Qin
2016-01-15
Most proteins localize to more than one organelle in a cell. Unmixing the localization patterns of proteins is critical for understanding protein functions and other vital cellular processes. Herein, a non-linear machine learning technique is proposed for the first time for protein pattern unmixing. Variable-weighted support vector machine (VW-SVM) is a demonstrated robust modeling technique with flexible and rational variable selection. Optimized by a global stochastic optimization technique, the particle swarm optimization (PSO) algorithm, VW-SVM becomes an adaptive parameter-free method for automated unmixing of protein subcellular patterns. Results obtained by pattern unmixing of a set of fluorescence microscope images of cells indicate that VW-SVM optimized by PSO is able to extract useful pattern features by optimally rescaling each variable for non-linear SVM modeling, consequently leading to improved performance in multiplex protein pattern unmixing compared with conventional SVM and other existing pattern unmixing methods. Copyright © 2015 Elsevier B.V. All rights reserved.
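The global stochastic optimizer used in the study, PSO, can be sketched in a few lines. The full VW-SVM is not reproduced here; the sketch below shows a generic gbest particle swarm minimizing a loss over a per-variable weight vector, with hypothetical swarm parameters (inertia, acceleration constants, bounds) that are not taken from the paper.

```python
import numpy as np

def pso_minimize(loss, dim, n_particles=20, iters=100, seed=0,
                 w=0.7, c1=1.5, c2=1.5, bounds=(0.0, 2.0)):
    """Minimal particle swarm optimizer (global-best topology).

    In the paper, PSO tunes the per-variable weights of a VW-SVM; here it
    minimizes a generic loss over a weight vector of length `dim`.
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))      # positions = weight vectors
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([loss(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()             # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([loss(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, float(loss(g))

# toy example: recover the weights minimizing a quadratic "validation loss"
target = np.array([1.0, 0.5, 1.5])
best_w, best_f = pso_minimize(lambda p: float(np.sum((p - target) ** 2)), dim=3)
```

In the VW-SVM setting the loss would be a cross-validated classification error of an SVM trained on the weight-rescaled variables, which makes the procedure parameter-free from the user's point of view.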
B. Langford
2017-12-01
µg m−2 h−1); Bosco Fontana, Italy (1610 ± 420 µg m−2 h−1); Castelporziano, Italy (121 ± 15 µg m−2 h−1); Ispra, Italy (7590 ± 1070 µg m−2 h−1); and the Observatoire de Haute Provence, France (7990 ± 1010 µg m−2 h−1). Ecosystem-scale isoprene emission potentials were then extrapolated to the leaf-level and compared to previous leaf-level measurements for Quercus robur and Quercus pubescens, two species thought to account for 50 % of the total European isoprene budget. The literature values agreed closely with emission potentials calculated using the G93 algorithm, which were 85 ± 75 and 78 ± 25 µg g−1 h−1 for Q. robur and Q. pubescens, respectively. By contrast, emission potentials calculated using the G06 algorithm, the same algorithm used in a previous study to derive the European budget, were significantly lower, which we attribute to the influence of past light and temperature conditions. Adopting these new G06-specific emission potentials for Q. robur (55 ± 24 µg g−1 h−1) and Q. pubescens (47 ± 16 µg g−1 h−1) reduced the projected European budget by ∼ 17 %. Our findings demonstrate that calculated isoprene emission potentials vary considerably depending upon the specific approach used in their calculation. We therefore recommend that the community now adopt a standardised approach to the way in which micrometeorological flux measurements are corrected and used to derive emission potentials of isoprene and other biogenic volatile organic compounds.
Langford, Ben; Cash, James; Acton, W. Joe F.; Valach, Amy C.; Hewitt, C. Nicholas; Fares, Silvano; Goded, Ignacio; Gruening, Carsten; House, Emily; Kalogridis, Athina-Cerise; Gros, Valérie; Schafers, Richard; Thomas, Rick; Broadmeadow, Mark; Nemitz, Eiko
2017-12-01
420 µg m-2 h-1); Castelporziano, Italy (121 ± 15 µg m-2 h-1); Ispra, Italy (7590 ± 1070 µg m-2 h-1); and the Observatoire de Haute Provence, France (7990 ± 1010 µg m-2 h-1). Ecosystem-scale isoprene emission potentials were then extrapolated to the leaf-level and compared to previous leaf-level measurements for Quercus robur and Quercus pubescens, two species thought to account for 50 % of the total European isoprene budget. The literature values agreed closely with emission potentials calculated using the G93 algorithm, which were 85 ± 75 and 78 ± 25 µg g-1 h-1 for Q. robur and Q. pubescens, respectively. By contrast, emission potentials calculated using the G06 algorithm, the same algorithm used in a previous study to derive the European budget, were significantly lower, which we attribute to the influence of past light and temperature conditions. Adopting these new G06-specific emission potentials for Q. robur (55 ± 24 µg g-1 h-1) and Q. pubescens (47 ± 16 µg g-1 h-1) reduced the projected European budget by ˜ 17 %. Our findings demonstrate that calculated isoprene emission potentials vary considerably depending upon the specific approach used in their calculation. We therefore recommend that the community now adopt a standardised approach to the way in which micrometeorological flux measurements are corrected and used to derive emission potentials of isoprene and other biogenic volatile organic compounds.
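For reference, the G93 algorithm mentioned in this record corrects a standard emission potential for light (PPFD) and leaf temperature. The sketch below uses the commonly cited constants of the G93 formulation (Guenther et al., 1993), which are an assumption here since the abstract does not state them:

```python
import math

# Commonly cited G93 constants (assumed, not stated in this abstract)
ALPHA, C_L1 = 0.0027, 1.066          # light-response parameters
C_T1, C_T2 = 95000.0, 230000.0       # J mol-1
T_S, T_M, R = 303.0, 314.0, 8.314    # standard temp (K), optimum (K), gas constant

def g93_emission(e_s, ppfd, t_leaf):
    """Isoprene emission = standard potential * C_L * C_T (G93 form).

    e_s    : emission potential at 303 K and PPFD = 1000 umol m-2 s-1
    ppfd   : photosynthetic photon flux density (umol m-2 s-1)
    t_leaf : leaf temperature (K)
    """
    c_l = ALPHA * C_L1 * ppfd / math.sqrt(1.0 + ALPHA**2 * ppfd**2)
    c_t = (math.exp(C_T1 * (t_leaf - T_S) / (R * T_S * t_leaf))
           / (1.0 + math.exp(C_T2 * (t_leaf - T_M) / (R * T_S * t_leaf))))
    return e_s * c_l * c_t
```

Inverting this relation against measured fluxes is how an ecosystem-scale emission potential is derived, which is why the choice of algorithm (G93 vs. G06, with its additional past light and temperature dependence) changes the resulting potentials so strongly.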
Dubrovský, Martin; Hayes, M.; Duce, P.; Trnka, Miroslav; Svoboda, M.; Zara, P.
2014-01-01
Roč. 14, č. 5 (2014), s. 1907-1919 ISSN 1436-3798 R&D Projects: GA MŠk(CZ) EE2.3.20.0248; GA MŠk(CZ) EE2.4.31.0056 Institutional support: RVO:67179843 Keywords : climate change * mediterranean * global climate models * temperature * precipitation * drought * Palmer drought severity index * weather generator Subject RIV: DG - Atmosphere Sciences, Meteorology Impact factor: 2.628, year: 2014
The Polar Crust Project- BSC Diversity and Variability in the Arctic and Antarctica
Williams, Laura; Borchhardt, Nadine; Komisc-Buchmann, Karin; Becker, Burkhard; Karsten, Ulf; Büdel, Burkhard
2015-04-01
The Polar Crust Project is a newly funded DFG initiative that aims to provide a precise evaluation of the biodiversity of eukaryotic green microalgae and cyanobacteria in Biological Soil Crusts (BSC) isolated from the Antarctic Peninsula and Arctic Svalbard. This project will include a thorough investigation into the composition of BSC in the Polar regions; this is especially important for Svalbard due to the severe lack of any previous research on such communities in this area. During our expedition to Spitsbergen, Svalbard in August 2014 we were particularly surprised to find that the coverage of BSC is extremely high and is certainly the dominant vegetation type around Ny-Ålesund. Because of this discovery the project has now been extended to include long-term measurements of CO2 gas exchange, in order to obtain exact seasonal carbon fixation rates and thereby determine how the BSC contributes to the ecosystem's carbon balance. The research areas of Spitsbergen were centred around 2 localities: Ny-Ålesund is a research town, home to the AWIPEV station, on the Brøgger peninsula. Longyearbyen, which is the largest settlement on the island, is found in the valley Longyeardalen on the shore of Adventfjorden. Areas where BSC is the prevalent vegetation type were identified, 6 around Ny-Ålesund and 4 around Longyearbyen, and vegetation surveys were conducted. This entailed 625 single point measurements at each site, identifying the crust or other cover type at each point: for example, green algal lichen, cyanobacterial crust, higher plant, or open soil. Samples were also taken at every location in order to study the green algal and cyanobacterial diversity. The vegetation survey will allow us to get a good overview of the BSC composition at the different sites. In January 2015 an expedition to the Antarctic Peninsula took place; there the sampling method was repeated, so that the BSC composition of both Polar Regions can be described and compared. Here, we wish to introduce the Polar
Kuttig, Jan; Steiding, Christian; Hupfer, Martin; Karolczak, Marek; Kolditz, Daniel
2015-01-01
In this study we compared various defect pixel correction methods for reducing artifact appearance within projection images used for computed tomography (CT) reconstructions. Defect pixel correction algorithms were examined with respect to their artifact behaviour within planar projection images as well as in volumetric CT reconstructions. We investigated four algorithms: nearest neighbour, linear and adaptive linear interpolation, and a frequency-selective spectral-domain approach. To characterise the quality of each algorithm in planar image data, we inserted line defects of varying widths and orientations into images. The structure preservation of each algorithm was analysed by corrupting and correcting the image of a slit phantom pattern and by evaluating its line spread function (LSF). The noise preservation was assessed by interpolating corrupted flat images and estimating the noise power spectrum (NPS) of the interpolated region. For the volumetric investigations, we examined the structure and noise preservation within a structured aluminium foam, a mid-contrast cone-beam phantom and a homogeneous polyurethane (PUR) cylinder. The frequency-selective algorithm showed the best structure and noise preservation for planar data of the correction methods tested. For volumetric data it still showed the best noise preservation, whereas its structure preservation was outperformed by the linear interpolation. The frequency-selective spectral-domain approach is therefore recommended for correcting line defects in planar image data, but its abilities within high-contrast volumes are restricted. In that case, the application of a simple linear interpolation might be the better choice to correct line defects within projection images used for CT. (paper)
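Of the four correction algorithms compared, the linear interpolation of line defects is the simplest to illustrate. A minimal row-wise sketch, assuming vertical defect columns (the study also considered varying widths and orientations, and its frequency-selective method is not reproduced here):

```python
import numpy as np

def correct_line_defect(image, bad_cols):
    """Replace defective detector columns by row-wise linear interpolation.

    Each bad pixel is filled from its nearest good neighbours in the same
    row, which is the 'linear interpolation' baseline compared in the study.
    """
    img = np.asarray(image, dtype=float).copy()
    cols = np.arange(img.shape[1])
    good = np.setdiff1d(cols, bad_cols)          # columns with valid data
    for r in range(img.shape[0]):
        img[r, bad_cols] = np.interp(bad_cols, good, img[r, good])
    return img
```

The trade-off the study quantifies via the LSF and NPS is visible even in this sketch: interpolation restores smooth structures exactly but low-pass filters both fine structure and noise inside the corrected stripe.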
Karagali, Ioanna
of the vertical extent of diurnal signals. Drifting buoys provide measurements close to the surface but are not always available. Moored buoys are generally not able to resolve the daily SST signal, which strongly weakens with depth within the upper water column. For such reasons, the General Ocean Turbulence......, atmospheric and oceanic modelling, bio-chemical processes and oceanic CO2 studies. The diurnal variability of SST, driven by the coincident occurrence of low enough wind and solar heating, is currently not properly understood. Atmospheric, oceanic and climate models are currently not adequately resolving...... the daily SST variability, resulting in biases of the total heat budget estimates and therefore, diminished model accuracy. The ESA STSE funded project SSTDV:R.EX.-IM.A.M. aimed at characterising the regional extent of diurnal SST signals and their impact on atmospheric modelling. This study will briefly...
Mahowald, Natalie [Cornell Univ., Ithaca, NY (United States)
2016-11-29
Soils in natural and managed ecosystems and wetlands are well known sources of methane, nitrous oxides, and reactive nitrogen gases, but the magnitudes of gas flux to the atmosphere are still poorly constrained. Thus, the reasons for the large increases in atmospheric concentrations of methane and nitrous oxide since the preindustrial time period are not well understood. The low atmospheric concentrations of methane and nitrous oxide, despite being more potent greenhouse gases than carbon dioxide, complicate empirical studies to provide explanations. In addition to climate concerns, the emissions of reactive nitrogen gases from soils are important to the changing nitrogen balance in the earth system, subject to human management, and may change substantially in the future. Thus improved modeling of the emission fluxes of these species from the land surface is important. Currently, there are emission modules for methane and some nitrogen species in the Community Earth System Model’s Community Land Model (CLM-ME/N); however, there are large uncertainties and problems in the simulations, resulting in coarse estimates. In this proposal, we seek to improve these emission modules by combining state-of-the-art process modules for emissions, available data, and new optimization methods. In earth science problems, we often have substantial data and knowledge of processes in disparate systems, and thus we need to combine data and a general process level understanding into a model for projections of future climate that are as accurate as possible. The best methodologies for optimization of parameters in earth system models are still being developed. In this proposal we will develop and apply surrogate algorithms that a) were especially developed for computationally expensive simulations like CLM-ME/N models; b) were (in the earlier surrogate optimization Stochastic RBF) demonstrated to perform very well on computationally expensive complex partial differential equations in
Brzhechko, Danyyl
2016-01-01
A jet is a spray of particles, usually produced by the hadronization of a quark or gluon in a particle physics or heavy ion experiment. Reconstructed particles are clustered into jets using one of the available jet clustering algorithms (kT, anti-kT, etc.), which adopt different metrics to decide whether two given particles belong to the same jet or not. Jets can also originate from the decay of high-momentum heavy particles, such as boosted vector bosons. When these particles decay to quarks, the overlap of the hadronization products of each quark results in a single massive jet, different from the ordinary jets from quarks and gluons. These special jets can be identified using substructure algorithms. In this study, we consider the performance of a commonly used substructure variable, N-subjettiness, compared with two variants of an alternative approach based on the momentum flow around the jet axis. I focused on high-energy collisions in a hypothetical future circular collider (FCC) colliding protons at a center-of-mass energy 1...
Jianzhong Zhou
2018-04-01
Full Text Available With the fast development of artificial intelligence techniques, data-driven modeling approaches are becoming hotspots in both academic research and engineering practice. This paper proposes a novel data-driven T-S fuzzy model to precisely describe the complicated dynamic behaviors of a pumped storage generator motor (PSGM). For the premise fuzzy partition of the proposed T-S fuzzy model, a novel variable-length tree-seed algorithm based competitive agglomeration (VTSA-CA) algorithm is presented to determine the optimal number of clusters automatically and improve the fuzzy clustering performance. In addition, to improve the modeling accuracy for the PSGM, the input and output formats in the T-S fuzzy model are selected by an economical parameter controlled auto-regressive (CAR) model derived from a high-order transfer function of the PSGM, considering the distributed components in the water diversion system of the power plant. The effectiveness and superiority of the T-S fuzzy model for the PSGM under different working conditions are validated by performing comparative studies with both practical data and the conventional mechanistic model.
Shieh, Chun-Chien; Kipritidis, John; O’Brien, Ricky T.; Kuncic, Zdenka; Keall, Paul J.
2014-01-01
Purpose: Respiratory signal, binning method, and reconstruction algorithm are three major controllable factors affecting image quality in thoracic 4D cone-beam CT (4D-CBCT), which is widely used in image guided radiotherapy (IGRT). Previous studies have investigated each of these factors individually, but no integrated sensitivity analysis has been performed. In addition, projection angular spacing is also a key factor in reconstruction, but how it affects image quality is not obvious. An inv...
Duclos, D.; Lonnoy, J.; Guillerm, Q.; Jurie, F.; Herbin, S.; D'Angelo, E.
2008-04-01
The last five years have seen a renewal of Automatic Target Recognition applications, mainly because of the latest advances in machine learning techniques. In this context, large collections of image datasets are essential for training algorithms as well as for their evaluation. Indeed, the recent proliferation of recognition algorithms, generally applied to slightly different problems, makes their comparison through clean evaluation campaigns necessary. The ROBIN project tries to fulfil these two needs by putting unclassified datasets, ground truths, competitions and metrics for the evaluation of ATR algorithms at the disposal of the scientific community. The scope of this project includes single- and multi-class generic target detection and generic target recognition, in military and security contexts. To our knowledge, it is the first time that a database of this importance (several hundred thousand visible and infrared hand-annotated images) has been publicly released. Funded by the French Ministry of Defence (DGA) and by the French Ministry of Research, ROBIN is one of the ten Techno-vision projects. Techno-vision is a large and ambitious government initiative for building evaluation means for computer vision technologies, for various application contexts. ROBIN's consortium includes major companies and research centres involved in Computer Vision R&D in the field of defence: Bertin Technologies, CNES, ECA, DGA, EADS, INRIA, ONERA, MBDA, SAGEM, THALES. This paper, which first gives an overview of the whole project, is focused on one of ROBIN's key competitions, the SAGEM Defence Security database. This dataset contains more than eight hundred ground and aerial infrared images of six different vehicles in cluttered scenes including distracters. Two different sets of data are available for each target. The first set includes different views of each vehicle at close range in a "simple" background, and can be used to train algorithms. The second set
Dubrovský, Martin; Hayes, M.; Duce, P.; Trnka, M.; Svoboda, M.; Zara, P.
2014-01-01
Roč. 14, č. 5 (2014), s. 1907-1919 ISSN 1436-3798 R&D Projects: GA AV ČR IAA300420806; GA MŠk LD12029 Institutional support: RVO:68378289 Keywords : mediterranean * climate change * global climate models * temperature * precipitation * drought * Palmer drought severity index * weather generator Subject RIV: DG - Atmosphere Sciences, Meteorology Impact factor: 2.628, year: 2014 http://link.springer.com/article/10.1007%2Fs10113-013-0562-z/fulltext.html
Farivar, Faezeh; Aliyari Shoorehdeli, Mahdi; Nekoui, Mohammad Ali; Teshnehlab, Mohammad
2012-01-01
Highlights: ► A systematic procedure for GPS of unknown heavy chaotic gyroscope systems. ► Proposed methods are based on Lyapunov stability theory. ► Without calculating Lyapunov exponents and eigenvalues of the Jacobian matrix. ► Extensible to a variety of chaotic systems. ► Useful for practical applications in the future. - Abstract: This paper proposes chaos control and generalized projective synchronization methods for heavy symmetric gyroscope systems via Gaussian radial basis adaptive variable structure control. Because of the nonlinear terms of the gyroscope system, the system exhibits chaotic motions. The extreme sensitivity to initial states of a system operating in chaotic mode can be very destructive because of unpredictable behavior. In order to improve the performance of a dynamic system or avoid chaotic phenomena, it is necessary to control a chaotic system with a periodic motion beneficial for working under a particular condition. As chaotic signals are usually broadband and noise-like, synchronized chaotic systems can be used as cipher generators for secure communication. This paper presents chaos synchronization of two identical chaotic motions of symmetric gyroscopes. The switching surfaces are adopted to ensure the stability of the error dynamics in variable structure control. Using the neural variable structure control technique, control laws are established which guarantee the chaos control and the generalized projective synchronization of unknown gyroscope systems. In the neural variable structure control, Gaussian radial basis functions are utilized to estimate the system dynamic functions on-line. The adaptation laws of the on-line estimator are derived in the sense of a Lyapunov function; thus, the unknown gyro systems can be guaranteed to be asymptotically stable and the proposed method can achieve the control objectives. Numerical simulations are presented to
Vallot, Dorothée; Applegate, Patrick; Pettersson, Rickard
2013-04-01
Projecting future climate and ice sheet development requires sophisticated models and extensive field observations. Given the present state of our knowledge, it is very difficult to say with certainty what will happen. Despite the ongoing increase in atmospheric greenhouse gas concentrations, the possibility that a new ice sheet might form over Scandinavia in the far distant future cannot be excluded. The growth of a new Scandinavian Ice Sheet would have important consequences for buried nuclear waste repositories. The Greenland Analogue Project (GAP), initiated by the Swedish Nuclear Fuel and Waste Management Company (SKB), is working to assess the effects of a possible future ice sheet on groundwater flow by studying a constrained domain in Western Greenland with field measurements (including deep bedrock drilling in front of the ice sheet) combined with numerical modeling. To address the needs of the GAP project, we interpolated results from an ensemble of ice sheet model runs to the smaller and more finely resolved modeling domain used in the GAP project's hydrologic modeling. Three runs were chosen, with three fairly different positive degree-day factors, from among those that reproduced the modern ice margin at the borehole position. The interpolated results describe changes in hydrologically-relevant variables over two time periods, 115 ka to 80 ka and 20 ka to 1 ka. In the first of these time periods, the ice margin advances over the model domain; in the second, it retreats. The spatially- and temporally-dependent variables that we treated include the ice thickness, basal melting rate, surface mass balance, basal temperature, basal thermal regime (frozen or thawed), surface temperature, and basal water pressure. The melt flux is also calculated.
Hu, A.; Bates, S. C.
2017-12-01
Observations indicate that the global mean surface temperature is rising, as is the global mean sea level. Sea level rise (SLR) can impose significant impacts on island and coastal communities, especially when SLR is compounded with storm surges. Here, by analyzing results from two sets of ensemble simulations from the Community Earth System Model version 1, we investigate how the potential SLR benefit of mitigating future emissions from a business-as-usual scenario to a mild-mitigation scenario over the 21st century would be affected by internal climate variability. Results show that there is almost no SLR benefit in the near term because of the large SLR variability induced by internal ocean dynamics. However, toward the end of the 21st century, the benefit can be as much as a 26±1% reduction of the global mean SLR due to seawater thermal expansion. Regionally, the benefits of this mitigation for both near and long terms are heterogeneous, varying from an 11±5% SLR reduction in Melbourne, Australia to a 35±6% reduction in London. The processes contributing to these regional differences are the coupling of the wind-driven ocean circulation with the decadal-scale sea surface temperature mode in the Pacific and Southern Oceans, and the changes of the thermohaline circulation and the mid-latitude air-sea coupling in the Atlantic.
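The quoted percentages (e.g., the 26±1% global reduction) are relative differences between the two scenario ensembles. A minimal sketch of that bookkeeping, with illustrative numbers rather than the CESM1 output:

```python
import numpy as np

def mitigation_benefit(slr_bau, slr_mit):
    """Percent SLR reduction under mitigation, with ensemble spread.

    slr_bau, slr_mit: 1-D arrays of end-of-century SLR (metres), one value
    per matched ensemble member pair. Illustrative sketch only; the paper's
    ensemble pairing and uncertainty estimate may differ.
    """
    slr_bau, slr_mit = np.asarray(slr_bau), np.asarray(slr_mit)
    pct = 100.0 * (slr_bau - slr_mit) / slr_bau   # member-by-member reduction
    return pct.mean(), pct.std(ddof=1)
```

The large internal variability noted in the abstract enters through the member-to-member spread of `pct`: in the near term the spread swamps the mean, so no benefit is detectable, while by 2100 the mean reduction stands clear of it.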
The ISLAnds Project. III. Variable Stars in Six Andromeda Dwarf Spheroidal Galaxies
Martínez-Vázquez, Clara E.; Monelli, Matteo; Bernard, Edouard J.; Gallart, Carme; Stetson, Peter B.; Skillman, Evan D.; Bono, Giuseppe; Cassisi, Santi; Fiorentino, Giuliana; McQuinn, Kristen B. W.; Cole, Andrew A.; McConnachie, Alan W.; Martin, Nicolas F.; Dolphin, Andrew E.; Boylan-Kolchin, Michael; Aparicio, Antonio; Hidalgo, Sebastian L.; Weisz, Daniel R.
2017-12-01
We present a census of variable stars in six M31 dwarf spheroidal satellites observed with the Hubble Space Telescope. We detect 870 RR Lyrae (RRL) stars in the fields of And I (296), II (251), III (111), XV (117), XVI (8), and XXVIII (87). We also detect a total of 15 Anomalous Cepheids, three eclipsing binaries, and seven field RRL stars compatible with being members of the M31 halo or the Giant Stellar Stream. We derive robust and homogeneous distances to the six galaxies using different methods based on the properties of the RRL stars. Working with the up-to-date set of Period-Wesenheit (I, B-I) relations published by Marconi et al., we obtain distance moduli of μ 0 = [24.49, 24.16, 24.36, 24.42, 23.70, 24.43] mag (respectively), with systematic uncertainties of 0.08 mag and statistical uncertainties <0.11 mag. We have considered an enlarged sample of 16 M31 satellites with published variability studies, and compared their pulsational observables (e.g., periods and amplitudes) with those of 15 Milky Way satellites for which similar data are available. The properties of the (strictly old) RRL in both satellite systems do not show any significant difference. In particular, we found a strikingly similar correlation between the mean period distribution of the fundamental RRL pulsators (RRab) and the mean metallicities of the galaxies. This indicates that the old RRL progenitors were similar at the early stage in the two environments, suggesting very similar characteristics for the earliest stages of evolution of both satellite systems. Based on observations made with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with programs 13028 and 13739.
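The distance moduli μ0 quoted above convert to physical distances through the standard relation μ0 = 5 log10(d / 10 pc). A one-line helper (not part of the paper) makes the correspondence concrete:

```python
def modulus_to_distance_kpc(mu):
    """Distance in kpc from a true distance modulus: mu = 5 * log10(d / 10 pc)."""
    return 10 ** (mu / 5.0 + 1.0) / 1000.0

# e.g. And I at mu_0 = 24.49 mag lies at roughly 790 kpc
```

The systematic (0.08 mag) and statistical (<0.11 mag) uncertainties on μ0 then propagate multiplicatively into the distance, since a fixed modulus error corresponds to a fixed fractional distance error.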
An Object-Based Approach to Evaluation of Climate Variability Projections and Predictions
Ammann, C. M.; Brown, B.; Kalb, C. P.; Bullock, R.
2017-12-01
Evaluations of the performance of earth system model predictions and projections are of critical importance to enhance usefulness of these products. Such evaluations need to address specific concerns depending on the system and decisions of interest; hence, evaluation tools must be tailored to inform about specific issues. Traditional approaches that summarize grid-based comparisons of analyses and models, or between current and future climate, often do not reveal important information about the models' performance (e.g., spatial or temporal displacements; the reason behind a poor score) and are unable to accommodate these specific information needs. For example, summary statistics such as the correlation coefficient or the mean-squared error provide minimal information to developers, users, and decision makers regarding what is "right" and "wrong" with a model. New spatial and temporal-spatial object-based tools from the field of weather forecast verification (where comparisons typically focus on much finer temporal and spatial scales) have been adapted to more completely answer some of the important earth system model evaluation questions. In particular, the Method for Object-based Diagnostic Evaluation (MODE) tool and its temporal (three-dimensional) extension (MODE-TD) have been adapted for these evaluations. More specifically, these tools can be used to address spatial and temporal displacements in projections of El Nino-related precipitation and/or temperature anomalies, ITCZ-associated precipitation areas, atmospheric rivers, seasonal sea-ice extent, and other features of interest. Examples of several applications of these tools in a climate context will be presented, using output of the CESM large ensemble. In general, these tools provide diagnostic information about model performance - accounting for spatial, temporal, and intensity differences - that cannot be achieved using traditional (scalar) model comparison approaches. Thus, they can provide more
Seaby, L. P.; Tague, C. L.; Hope, A. S.
2006-12-01
The Mediterranean type environments (MTEs) of California are characterized by a distinct wet and dry season and high variability in inter-annual climate. Water limitation in MTEs makes eco-hydrological processes highly sensitive to both climate variability and frequent fire disturbance. This research modeled post-fire eco-hydrologic behavior under historical and moderate and extreme scenarios of future climate in a semi-arid chaparral dominated southern California MTE. We used a physically-based, spatially-distributed, eco-hydrological model (RHESSys - Regional Hydro-Ecologic Simulation System), to capture linkages between water and vegetation response to the combined effects of fire and historic and future climate variability. We found post-fire eco-hydrologic behavior to be strongly influenced by the episodic nature of MTE climate, which intensifies under projected climate change. Higher rates of post-fire net primary productivity were found under moderate climate change, while more extreme climate change produced water stressed conditions which were less favorable for vegetation productivity. Precipitation variability in the historic record follows the El Niño Southern Oscillation (ENSO) and the Pacific Decadal Oscillation (PDO), and these inter-annual climate characteristics intensify under climate change. Inter-annual variation in streamflow follows these precipitation patterns. Post-fire streamflow and carbon cycling trajectories are strongly dependent on climate characteristics during the first 5 years following fire, and historic intra-climate variability during this period tends to overwhelm longer term trends and variation that might be attributable to climate change. Results have implications for water resource availability, vegetation type conversion from shrubs to grassland, and changes in ecosystem structure and function.
Small, Richard [National Center for Atmospheric Research, Boulder, CO (United States); Bryan, Frank [National Center for Atmospheric Research, Boulder, CO (United States); Tribbia, Joseph [National Center for Atmospheric Research, Boulder, CO (United States); Park, Sungsu [National Center for Atmospheric Research, Boulder, CO (United States); Dennis, John [National Center for Atmospheric Research, Boulder, CO (United States); Saravanan, R. [National Center for Atmospheric Research, Boulder, CO (United States); Schneider, Niklas [National Center for Atmospheric Research, Boulder, CO (United States); Kwon, Young-Oh [National Center for Atmospheric Research, Boulder, CO (United States)
2015-06-11
This project aims to improve long-term global climate simulations by resolving ocean mesoscale activity and the corresponding response in the atmosphere. The main computational objectives are: i) to perform and assess Community Earth System Model (CESM) simulations with the new Community Atmospheric Model (CAM) spectral element dynamical core; ii) to use static mesh refinement to focus on oceanic fronts; iii) to develop a new Earth System Modeling tool to investigate the atmospheric response to fronts by selectively filtering surface flux fields in the CESM coupler. The climate research objectives are 1) to improve the coupling of ocean fronts and the atmospheric boundary layer via investigations of the dependency on model resolution and stability functions; 2) to understand and simulate the ensuing tropospheric response that has recently been documented in observations; and 3) to investigate the relationship of ocean frontal variability to low-frequency climate variability and the accompanying storm tracks and extremes in high resolution simulations. This is a collaborative multi-institution project consisting of computational scientists, climate scientists and climate model developers. It specifically aims at DOE objectives of advancing the simulation and predictive capability of climate models through improvements in resolution and physical process representation.
Sheng, Xianjie; Lin, Duanmu
2016-01-01
Highlights: • The mathematical model of the economic frictional factor based on a DVFSP DHS is established. • Influence factors of the economic frictional factor are analyzed. • Energy saving in a DVFSP district heating system is presented and analyzed. - Abstract: Optimization of the district heating (DH) piping network is of vital importance to the economics of the whole DH system. The application of distributed variable frequency speed pumps (DVFSP) in the district heating network has been considered a technology improvement with the potential to save energy compared to the conventional central circulating pump (CCCP) district heating system (DHS). The economic frictional factor is a common design parameter used in DH pipe network design. In this paper, the mathematical model of the economic frictional factor based on the DVFSP DHS is established, and its influence factors are analyzed, providing a reference for engineering designs of the system. Based on the analysis results, the energy efficiency of the DH system with the DVFSP is compared with that of the DH system with the conventional central circulating pump (CCCP), using a case based on a district heating network in Dalian, China. The results of the case study show an average electrical energy saving of 49.41% relative to the DH system with the conventional central circulating pump in the primary network.
Variable resolution pattern generation for the Associative Memory of the ATLAS FTK project
Annovi, A; The ATLAS collaboration; Faulkner, G; Giannetti, P; Jiang, Z; Luongo, C; Pandini, C; Shochet, M; Tompkins, L; Volpi, G
2013-01-01
The Associative Memory (AM) chip is a special device that finds coincidence patterns, or simply "patterns", among the incoming data in up to 8 parallel streams. The latest AM chip has been designed to receive silicon clusters generated in 8 layers of the ATLAS silicon detector and to perform parallel track pattern matching at high rate; it will be the core of the FTK project. Data going through each of the busses are compared with a bank of patterns, and the AM chip looks for matches in each line, like a commercial content-addressable memory (CAM). The high density of hits expected in the ATLAS inner detector from 2015 poses a challenge to the AM chip's capability to reject random coincidences, requiring either an extremely large number of high-precision patterns, with increasing cost and system complexity, or more flexible solutions. For this reason, in the most recent prototype of the AM chip, ternary cells have been added to the logic, allowing "don't care" (DC) bits in the match. Hav...
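The ternary-match idea can be sketched in a few lines. This is a conceptual illustration of "don't care" bits in a pattern match, with made-up 8-bit patterns and masks; it is not the actual FTK pattern format or chip logic.

```python
# Ternary pattern matching with "don't care" (DC) bits: a hit matches a
# pattern if all bits NOT covered by the DC mask agree.

def matches(hit: int, pattern: int, dc_mask: int) -> bool:
    """True if hit equals pattern on every bit outside dc_mask."""
    return (hit ^ pattern) & ~dc_mask == 0

# 8-bit example: the two lowest bits of this pattern are "don't care"
pattern = 0b10110100
dc_mask = 0b00000011

print(matches(0b10110111, pattern, dc_mask))  # True: differs only in DC bits
print(matches(0b10100100, pattern, dc_mask))  # False: differs in a cared bit
```

Widening the DC mask trades pattern precision for bank size, which is exactly the flexibility the ternary cells provide.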
Variable exchange between a stream and an aquifer in the Rio Grande Project Area
Sheng, Z.; Abudu, S.; Michelsen, A.; King, P.
2016-12-01
Both surface water and groundwater in the Rio Grande Project area in southern New Mexico and Far West Texas have been stressed by natural conditions such as droughts and by human activities, including urban development and agricultural irrigation. In some areas the pumping stress in the aquifer becomes so great that it depletes the river flow, especially during the irrigation season, typically from March through October. Understanding the relationship between surface water and groundwater therefore becomes more important in regional water-resources planning and management. In this area, stream flows are highly regulated by the upstream reservoirs during the irrigation season and greatly influenced by return flows during the non-irrigation season. During a drought, additional groundwater pumping to supplement surface-water shortages further complicates the surface water-groundwater interaction. In this paper the authors use observation data and results of numerical models (MODFLOW) to characterize and quantify hydrological exchange fluxes between groundwater in the aquifers and surface water, as well as the impacts of groundwater pumping. The interaction shows an interesting seasonal variation (irrigation vs. non-irrigation) as well as the impact of drought. Groundwater has been pumped for both municipal supplies and agricultural irrigation, which has imposed stresses on both stream flows and aquifer storage. The results clearly show that historic groundwater pumping has caused some reaches of the river to change from gaining to losing streams. Beyond the exchange between surface water and groundwater in the shallow aquifer, groundwater pumping in a deep aquifer can also enhance the exchanges between different aquifers through leaky confining layers. In the earlier history of pumping, pumping from the shallow aquifer was compensated by simple depletion of surface water, while the deep aquifer tended to draw on aquifer storage. With continued pumping, the cumulative
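The gaining-versus-losing distinction follows directly from the head-dependent leakage term that MODFLOW's river-type packages use. The conductance and head values below are invented for illustration.

```python
# Stream-aquifer exchange as a head-dependent leakage term,
# Q = C * (h_stream - h_aquifer), with C the streambed conductance.
# Positive Q: losing stream recharging the aquifer; negative Q: gaining stream.

def exchange_flux(c_bed, h_stream, h_aquifer):
    """Exchange flux in m^3/d for conductance in m^2/d and heads in m."""
    return c_bed * (h_stream - h_aquifer)

# Irrigation-season pumping lowers the aquifer head from 101.0 m to 98.5 m:
print(exchange_flux(50.0, 100.0, 101.0))  # -50.0 m^3/d: gaining stream
print(exchange_flux(50.0, 100.0, 98.5))   # 75.0 m^3/d: now a losing stream
```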
Luis C. J. Moreira
2010-12-01
In the context of water-resource scarcity, the rational use of water for irrigation is necessary, implying precise estimation of the actual evapotranspiration (ET). With recent progress in remote-sensing technologies, regional algorithms estimating evapotranspiration from satellite observations have been developed. This work aimed at applying the SEBAL algorithm (Surface Energy Balance Algorithms for Land) to three Landsat-5 images from the second semester of 2006. These images cover irrigated areas, dense native forest and Caatinga vegetation in three regions of the state of Ceará, Brazil (Baixo Acaraú, Chapada do Apodi and Chapada do Araripe). The SEBAL algorithm calculates the hourly evapotranspiration from the latent heat flux, estimated as the residual of the surface energy balance. The ET values obtained in the three regions exceeded 0.60 mm h-1 in irrigated areas and areas of dense native vegetation; less dense native vegetation showed hourly ET rates of 0.35 to 0.60 mm h-1, with nearly null values in degraded areas. Analysis of the hourly evapotranspiration means by Tukey's test at 5% probability revealed significant local as well as regional variability within the state of Ceará.
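The final SEBAL step, hourly ET from the energy-balance residual, can be sketched directly. The flux values below are made-up illustrative numbers, not results from the study; the latent heat of vaporization is a typical constant value.

```python
# Hourly ET from the latent heat flux LE = Rn - G - H (residual of the
# surface energy balance), converted from W/m^2 to mm of water per hour.

LAMBDA = 2.45e6  # latent heat of vaporization, J/kg (typical value)
RHO_W = 1000.0   # density of water, kg/m^3

def hourly_et(rn, g, h):
    """ET in mm/h from net radiation rn, soil heat flux g and
    sensible heat flux h, all in W/m^2."""
    le = rn - g - h                                  # latent heat flux, W/m^2
    return 3600.0 * le / (LAMBDA * RHO_W) * 1000.0   # m/h -> mm/h

# e.g. Rn = 600, G = 80, H = 150 W/m^2 gives LE = 370 W/m^2
print(round(hourly_et(600, 80, 150), 2))  # -> 0.54 mm/h
```

A residual of about 370 W/m² maps to roughly 0.54 mm/h, consistent in magnitude with the >0.60 mm/h reported for irrigated and densely vegetated areas.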
Bare, Kimberly; Drain, Jerri; Timko-Progar, Monica; Stallings, Bobbie; Smith, Kimberly; Ward, Naomi; Wright, Sandra
Many nurses have limited experience with ostomy management. We sought to provide a standardized approach to ostomy education and management to support nurses in the early identification of stomal and peristomal complications and pouching problems, to provide standardized solutions for managing ostomy care in general, and to improve utilization of formulary products. This article describes the development and testing of an ostomy algorithm tool.
Hybrid-optimization algorithm for the management of a conjunctive-use project and well field design
Chiu, Yung-Chia; Nishikawa, Tracy; Martin, Peter
2012-01-01
Hi-Desert Water District (HDWD), the primary water-management agency in the Warren Groundwater Basin, California, plans to construct a waste water treatment plant to reduce future septic-tank effluent from reaching the groundwater system. The treated waste water will be reclaimed by recharging the groundwater basin via recharge ponds as part of a larger conjunctive-use strategy. HDWD wishes to identify the least-cost conjunctive-use strategies for managing imported surface water, reclaimed water, and local groundwater. As formulated, the mixed-integer nonlinear programming (MINLP) groundwater-management problem seeks to minimize water-delivery costs subject to constraints including potential locations of the new pumping wells, California State regulations, groundwater-level constraints, water-supply demand, available imported water, and pump/recharge capacities. In this study, a hybrid-optimization algorithm, which couples a genetic algorithm and successive-linear programming, is developed to solve the MINLP problem. The algorithm was tested by comparing results to the enumerative solution for a simplified version of the HDWD groundwater-management problem. The results indicate that the hybrid-optimization algorithm can identify the global optimum. The hybrid-optimization algorithm is then applied to solve a complex groundwater-management problem. Sensitivity analyses were also performed to assess the impact of varying the new recharge pond orientation, varying the mixing ratio of reclaimed water and pumped water, and varying the amount of imported water available. The developed conjunctive management model can provide HDWD water managers with information that will improve their ability to manage their surface water, reclaimed water, and groundwater resources.
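The genetic-algorithm half of such a hybrid scheme can be sketched minimally. The cost function, constraint, and candidate-site list below are invented stand-ins; in the actual hybrid algorithm, the continuous pump/recharge rates for each candidate well design would additionally be refined by successive linear programming.

```python
import random

# Minimal integer-coded GA: genes select candidate well locations, a large
# penalty enforces a feasibility constraint, and fitness is a toy cost.

random.seed(0)
SITES = list(range(10))          # candidate well locations (illustrative)

def cost(wells):                 # toy delivery-cost surrogate
    return sum(w + 1 for w in wells)

def violates(wells):             # toy constraint: sites must be distinct
    return len(set(wells)) < len(wells)

def fitness(wells):
    return cost(wells) + (1000 if violates(wells) else 0)

def ga(pop_size=30, genes=3, gens=50):
    pop = [[random.choice(SITES) for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, genes)      # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:             # mutation
                child[random.randrange(genes)] = random.choice(SITES)
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = ga()
print(sorted(best), fitness(best))  # global optimum here is sites {0,1,2}, cost 6
```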
Olaciregui-Ruiz, Igor; Rozendaal, Roel; van Oers, René F M; Mijnheer, Ben; Mans, Anton
2017-05-01
At our institute, a transit back-projection algorithm is used clinically to reconstruct in vivo patient and in-phantom 3D dose distributions using EPID measurements behind a patient or a polystyrene slab phantom, respectively. In this study, an extension to this algorithm is presented whereby in air EPID measurements are used in combination with CT data to reconstruct 'virtual' 3D dose distributions. By combining virtual and in vivo patient verification data for the same treatment, patient-related errors can be separated from machine, planning and model errors. The virtual back-projection algorithm is described and verified against the transit algorithm with measurements made behind a slab phantom, against dose measurements made with an ionization chamber and with the OCTAVIUS 4D system, as well as against TPS patient data. Virtual and in vivo patient dose verification results are also compared. Virtual dose reconstructions agree within 1% with ionization chamber measurements. The average γ-pass rate values (3% global dose / 3 mm) in the 3D dose comparison with the OCTAVIUS 4D system and the TPS patient data are 98.5±1.9%(1SD) and 97.1±2.9%(1SD), respectively. For virtual patient dose reconstructions, the differences with the TPS in median dose to the PTV remain within 4%. Virtual patient dose reconstruction makes pre-treatment verification based on deviations of DVH parameters feasible and eliminates the need for phantom positioning and re-planning. Virtual patient dose reconstructions have additional value in the inspection of in vivo deviations, particularly in situations where CBCT data is not available (or not conclusive). Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Hernandez Reyes, B; Rodriguez Perez, E; Sosa Aquino, M [Universidad de Guanajuato, Leon, Guanajuato (Mexico)
2016-06-15
Purpose: To implement a back-projection algorithm for 2D dose reconstruction for in vivo dosimetry in radiation therapy using an Electronic Portal Imaging Device (EPID) based on amorphous silicon. Methods: An EPID system was used to determine the dose-response function, pixel sensitivity map, exponential scatter kernels and beam-hardening correction for the back-projection algorithm. All measurements were done with a 6 MV beam. A 2D dose reconstruction for an irradiated water phantom (30×30×30 cm³) was done to verify the algorithm implementation. A gamma-index evaluation between the 2D reconstructed dose and that calculated with a treatment planning system (TPS) was performed. Results: A linear fit was found for the dose-response function. The pixel sensitivity map has radial symmetry and was calculated from a profile of the pixel sensitivity variation. The parameters for the scatter kernels were determined only for a 6 MV beam. The primary dose was estimated by applying the scatter kernel within the EPID and the scatter kernel within the patient. The beam-hardening coefficient is σ_BH = 3.788×10⁻⁴ cm² and the effective linear attenuation coefficient is µ_AC = 0.06084 cm⁻¹. 95% of the evaluated points had γ values no larger than unity, with gamma criteria of ΔD = 3% and Δd = 3 mm, within the 50% isodose surface. Conclusion: The use of EPID systems proved to be a fast tool for in vivo dosimetry, but the implementation is more complex than that elaborated for pre-treatment dose verification; therefore, a simpler method should be investigated. The accuracy of this method could be improved by modifying the algorithm in order to compare lower isodose curves.
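The gamma-index criterion quoted here (ΔD = 3%, Δd = 3 mm) can be sketched as a brute-force 2D evaluation. The dose planes below are synthetic, and a clinical implementation would be far more optimized; this only illustrates the metric itself.

```python
import numpy as np

# Brute-force 2D gamma index: for each reference point, search the evaluated
# plane for the minimum combined dose-difference / distance-to-agreement.

def gamma_2d(ref, evl, spacing_mm, dd=0.03, dta_mm=3.0):
    ny, nx = ref.shape
    ys, xs = np.mgrid[0:ny, 0:nx] * spacing_mm
    gamma = np.empty_like(ref)
    norm = ref.max()  # global dose normalization
    for i in range(ny):
        for j in range(nx):
            dist2 = (ys - i * spacing_mm) ** 2 + (xs - j * spacing_mm) ** 2
            dose2 = ((evl - ref[i, j]) / (dd * norm)) ** 2
            gamma[i, j] = np.sqrt(np.min(dist2 / dta_mm ** 2 + dose2))
    return gamma

ref = np.outer(np.hanning(21), np.hanning(21)) * 2.0  # toy dose plane, Gy
evl = ref * 1.02                                      # evaluated dose, +2%
g = gamma_2d(ref, evl, spacing_mm=1.0)
print(f"pass rate: {100.0 * np.mean(g <= 1):.1f}%")   # 2% error passes 3%/3mm
```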
Gustavson, Fred G.; Reid, John K.; Wasniewski, Jerzy
2007-01-01
We present subroutines for the Cholesky factorization of a positive-definite symmetric matrix and for solving corresponding sets of linear equations. They exploit cache memory by using the block hybrid format proposed by the authors in a companion article. The matrix is packed into n(n + 1)/2 real variables, and the speed is usually better than that of the LAPACK algorithm that uses full storage (n² variables). Included are subroutines for rearranging a matrix whose upper- or lower-triangular part is packed by columns to this format and for the inverse rearrangement. Also included is a kernel...
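The n(n+1)/2 packed storage the abstract builds on can be illustrated with a plain packed Cholesky factorization. This is the simple column-packed algorithm, not the cache-blocked "block hybrid" format of the paper.

```python
import math

# Cholesky factorization (A = L L^T) working in place on a lower-triangular
# matrix packed by columns into n(n+1)/2 entries.

def idx(i, j, n):
    """Index of A[i][j] (i >= j) in column-packed lower-triangular storage."""
    return i + j * n - j * (j + 1) // 2

def packed_cholesky(ap, n):
    """Left-looking packed Cholesky: overwrite ap with L."""
    for j in range(n):
        for k in range(j):                      # subtract prior columns
            ljk = ap[idx(j, k, n)]
            for i in range(j, n):
                ap[idx(i, j, n)] -= ap[idx(i, k, n)] * ljk
        d = math.sqrt(ap[idx(j, j, n)])         # scale the column
        for i in range(j, n):
            ap[idx(i, j, n)] /= d
    return ap

# 3x3 SPD example: A = [[4,2,2],[2,5,3],[2,3,6]], packed by columns
ap = [4.0, 2.0, 2.0, 5.0, 3.0, 6.0]
print(packed_cholesky(ap, 3))  # -> [2.0, 1.0, 1.0, 2.0, 1.0, 2.0]
```

The result is L = [[2,0,0],[1,2,0],[1,1,2]] in the same packed layout; the cache-friendly speedups in the paper come from reorganizing exactly this storage into blocks.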
Kim, Milim; Lee, Jeong Min; Son, Hyo Shin; Han, Joon Koo; Choi, Byung Ihn; Yoon, Jeong Hee; Choi, Jin Woo
2014-01-01
To evaluate the impact of the adaptive iterative dose reduction (AIDR) three-dimensional (3D) algorithm in CT on noise reduction and image quality compared to the filtered back projection (FBP) algorithm, and to compare the effectiveness of AIDR 3D on noise reduction according to the body habitus using phantoms of different sizes. Three different-sized phantoms with diameters of 24 cm, 30 cm, and 40 cm were built up using the American College of Radiology CT accreditation phantom and layers of pork belly fat. Each phantom was scanned eight times using different mAs. Images were reconstructed using FBP and three different strengths of AIDR 3D. The image noise, the contrast-to-noise ratio (CNR) and the signal-to-noise ratio (SNR) of the phantom were assessed. Two radiologists assessed the image quality of the 4 image sets in consensus. The effectiveness of AIDR 3D on noise reduction compared with FBP was also compared according to the phantom sizes. Adaptive iterative dose reduction 3D significantly reduced the image noise compared with FBP and enhanced the SNR and CNR (p < 0.05) with improved image quality (p < 0.05). When a stronger reconstruction algorithm was used, a greater increase of SNR and CNR as well as noise reduction was achieved (p < 0.05). The noise reduction effect of AIDR 3D was significantly greater in the 40-cm phantom than in the 24-cm or 30-cm phantoms (p < 0.05). The AIDR 3D algorithm is effective in reducing image noise and improving the image-quality parameters compared with the FBP algorithm, and its effectiveness may increase as the phantom size increases.
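The noise, SNR, and CNR figures of merit in the phantom study can be sketched from two regions of interest. The synthetic "image" values and ROI placement below are invented for illustration.

```python
import numpy as np

# Noise = SD in a uniform background ROI; SNR = insert mean / noise;
# CNR = (insert mean - background mean) / noise.

rng = np.random.default_rng(0)
insert_hu, background_hu, sigma = 120.0, 40.0, 10.0
roi_insert = rng.normal(insert_hu, sigma, (50, 50))  # ROI in a contrast insert
roi_bg = rng.normal(background_hu, sigma, (50, 50))  # uniform background ROI

noise = roi_bg.std()
snr = roi_insert.mean() / noise
cnr = (roi_insert.mean() - roi_bg.mean()) / noise

print(f"noise={noise:.1f} HU  SNR={snr:.1f}  CNR={cnr:.1f}")
```

An iterative reconstruction that halves `sigma` would double both SNR and CNR, which is the effect the study quantifies per phantom size.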
Dzib, Sergio A.; Rodriguez-Garza, Carolina B.; Rodriguez, Luis F.; Kurtz, Stan E.; Loinard, Laurent; Zapata, Luis A.; Lizano, Susana, E-mail: s.dzib@crya.unam.mx [Centro de Radiostronomia y Astrofisica, Universidad Nacional Autonoma de Mexico, Morelia 58089 (Mexico)
2013-08-01
We present new Karl G. Jansky Very Large Array (VLA) observations of the compact (∼0.''05), time-variable radio source projected near the center of the ultracompact H II region W3(OH). The analysis of our new data as well as of VLA archival observations confirms the variability of the source on timescales of years and, for a given epoch, indicates a spectral index of α = 1.3 ± 0.3 (S_ν ∝ ν^α). This spectral index and the brightness temperature of the source (∼6500 K) suggest that we are most likely detecting partially optically thick free-free radiation. The radio source is probably associated with the ionizing star of W3(OH), but an interpretation in terms of an ionized stellar wind fails because the detected flux densities are orders of magnitude larger than expected. We discuss several scenarios and tentatively propose that the radio emission could arise in a static ionized atmosphere around a fossil photoevaporated disk.
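A spectral index like the α = 1.3 quoted here follows from flux densities at two frequencies. The flux and frequency values below are hypothetical, chosen only to land near that value; they are not the paper's measurements.

```python
import math

# Spectral index alpha in S_nu ∝ nu^alpha from two flux-density measurements:
# alpha = log(S2/S1) / log(nu2/nu1).

def spectral_index(s1, nu1, s2, nu2):
    return math.log(s2 / s1) / math.log(nu2 / nu1)

# e.g. 0.30 mJy at 8.4 GHz and 0.75 mJy at 17 GHz (hypothetical numbers)
alpha = spectral_index(0.30, 8.4, 0.75, 17.0)
print(round(alpha, 2))  # -> 1.3
```

Values between the optically thin (α ≈ −0.1) and fully optically thick (α = 2) free-free limits, as here, are the signature of partially optically thick emission.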
Syuan-Yi Chen
2016-01-01
This study developed an integrated energy-management/gear-shifting strategy using a bacterial foraging algorithm (BFA) in an engine/motor hybrid powertrain with an electric continuously variable transmission. A control-oriented vehicle model was constructed on the Matlab/Simulink platform for further integration with the developed control strategies. A baseline control strategy with four modes was developed for comparison with the proposed BFA. The BFA was used with five bacterial populations to search for the optimal gear ratio and power-split ratio minimizing the cost: the equivalent fuel consumption. Three main procedures were followed: chemotaxis, reproduction, and elimination-dispersal. After the vehicle model was integrated with the vehicle control unit running the BFA, two driving patterns, the New European Driving Cycle and the Federal Test Procedure, were used to evaluate the improvement in energy consumption and equivalent fuel consumption compared with the baseline. The results show improvements of 18.35% and 21.77% (optimal energy management and integrated optimization, respectively) for the first driving cycle, and of 8.76% and 13.81% for the second. Real-time platform designs and vehicle integration for a dynamometer test will be investigated in the future.
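The chemotaxis procedure at the heart of a BFA can be sketched on a toy two-variable cost, standing in for the gear ratio and power-split ratio. Reproduction and elimination-dispersal are omitted for brevity, and the cost function is invented; this is not the paper's vehicle model.

```python
import random

# Chemotaxis step of a bacterial foraging algorithm: each bacterium tumbles
# in a random direction and swims only if the move lowers the cost.

random.seed(1)

def cost(x):                 # toy "equivalent fuel consumption" surface
    g, s = x                 # stand-ins for gear ratio and power-split ratio
    return (g - 2.0) ** 2 + (s - 0.6) ** 2

def chemotaxis(pop, steps=200, step_size=0.1):
    for _ in range(steps):
        for b in pop:
            d = [random.uniform(-1, 1) for _ in b]         # tumble direction
            norm = sum(v * v for v in d) ** 0.5
            trial = [xi + step_size * di / norm for xi, di in zip(b, d)]
            if cost(trial) < cost(b):                      # greedy swim
                b[:] = trial
    return min(pop, key=cost)

pop = [[random.uniform(0, 4), random.uniform(0, 1)] for _ in range(5)]
best = chemotaxis(pop)
print([round(v, 2) for v in best], round(cost(best), 4))
```

The population converges toward the cost minimum at (2.0, 0.6); in the actual study this search runs per driving-cycle step over the powertrain model.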
Van der Linden, Philippe; Hardy, Jean-François
2016-12-01
Preoperative anaemia is associated with increased postoperative morbidity and mortality. Patient blood management (PBM) is advocated to improve patient outcomes. NATA, the 'Network for the advancement of patient blood management, haemostasis and thrombosis', initiated a benchmark project with the aim of providing the basis for educational strategies to implement optimal PBM in participating centres. Prospective, observational study with online data collection in 11 secondary and tertiary care institutions interested in developing PBM. Ten European centres (Austria, Spain, England, Denmark, Belgium, Netherlands, Romania, Greece, France, and Germany) and one Canadian centre participated between January 2010 and June 2011. A total of 2470 patients undergoing total hip (THR) or knee replacement, or coronary artery bypass grafting (CABG), were registered in the study. Data from 2431 records were included in the final analysis. Primary outcome measures were the incidence and volume of red blood cells (RBC) transfused. Logistic regression analysis identified variables independently associated with RBC transfusions. The incidence of transfusion was significantly different between centres for THR (range 7 to 95%), total knee replacement (range 3 to 100%) and CABG (range 20 to 95%). The volume of RBC transfused was significantly different between centres for THR and CABG. The incidence of preoperative anaemia ranged between 3 and 40% and its treatment between 0 and 40%, the latter not being related to the former. Patient characteristics, evolution of haemoglobin concentrations and blood losses were also different between centres. Variables independently associated with RBC transfusion were preoperative haemoglobin concentration, lost volume of RBC and female sex. Implementation of PBM remains extremely variable across centres. The relative importance of factors explaining RBC transfusion differs across institutions, some being patient related whereas others are related to
2016-05-01
… 0.5 × 10⁻⁸. Our algorithm is implemented in C++ on a 1.7 GHz Intel Core i7-4650U CPU. The linear algebra packages BLAS [40] and LAPACK [41] are used to … subproblems. Our approach is expected to have wide applications in continuous dynamic games, control theory problems, and elsewhere. … differential dynamic games, control theory problems, and dynamical systems coming from the physical world, e.g. [11]. An important application is to
1995-01-01
A standard assumption when evaluating the migration of plumes in ground water is that the impacted ground water has the same density as the native ground water. Thus density is assumed to be constant, and does not influence plume migration. This assumption is valid only for water with relatively low total dissolved solids (TDS) or a low difference in TDS between water introduced from milling processes and native ground water. Analyses in the literature suggest that relatively minor density differences can significantly affect plume migration. Density differences as small as 0.3 percent are known to cause noticeable effects on the plume migration path. The primary effect of density on plume migration is deeper migration than would be expected in the arid environments typically present at Uranium Mill Tailings Remedial Action (UMTRA) Project sites, where little or no natural recharge is available to drive the plume into the aquifer. It is also possible that at some UMTRA Project sites, a synergistic effect occurred during milling operations, where the mounding created by tailings drainage (which created a downward vertical gradient) and the density contrast between the process water and native ground water acted together, driving constituents deeper into the aquifer than either process would alone. Numerical experiments were performed with the U.S. Geological Survey saturated-unsaturated transport (SUTRA) model. This is a finite-element model capable of simulating the effects of variable fluid density on ground water flow and solute transport. The simulated aquifer parameters generally are representative of the Shiprock, New Mexico, UMTRA Project site where some of the highest TDS water from processing has been observed
Hambye, Anne-Sophie; Vervaet, Ann; Dobbeleir, Andre
2004-01-01
Several software packages are commercially available for quantification of left ventricular ejection fraction (LVEF) and volumes from myocardial gated single-photon emission computed tomography (SPECT), all of which display a high reproducibility. However, their accuracy has been questioned in patients with a small heart. This study aimed to evaluate the performance of different software packages and the influence of modifications in acquisition or reconstruction parameters on LVEF and volume measurements, depending on heart size. In 31 patients referred for gated SPECT, 64² and 128² matrix acquisitions were consecutively obtained. After reconstruction by filtered back-projection (Butterworth, 0.4, 0.5 or 0.6 cycles/cm cut-off, order 6), LVEF and volumes were computed with different software [three versions of Quantitative Gated SPECT (QGS), the Emory Cardiac Toolbox (ECT) and the Stanford University (SU-Segami) Medical School algorithm] and processing workstations. Depending upon their end-systolic volume (ESV), patients were classified into two groups: group I (ESV > 30 ml, n = 14) and group II (ESV ≤ 30 ml). Increasing the matrix size from 64² to 128² was associated with significantly larger volumes as well as lower LVEF values. Increasing the filter cut-off frequency had the same effect. With SU-Segami, a larger matrix was associated with larger end-diastolic volumes and smaller ESVs, resulting in a highly significant increase in LVEF. Increasing the filter sharpness, on the other hand, had no influence on LVEF, though the measured volumes were significantly larger. (orig.)
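The headline quantity follows from the two volumes the software estimates, and a tiny sketch (with invented volumes) also shows why small hearts are problematic: an ESV underestimate inflates the LVEF sharply when the volumes are small.

```python
# LVEF from end-diastolic and end-systolic volumes: 100 * (EDV - ESV) / EDV.

def lvef(edv_ml, esv_ml):
    """Left ventricular ejection fraction in percent."""
    return 100.0 * (edv_ml - esv_ml) / edv_ml

print(lvef(120.0, 48.0))  # -> 60.0

# Small heart: underestimating ESV by 10 ml moves the EF from 66.7 to 83.3
print(round(lvef(60.0, 20.0), 1), round(lvef(60.0, 10.0), 1))  # 66.7 83.3
```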
Gh. Assadipour
2012-01-01
The trade-off between time, cost, and quality is one of the important problems of project management. This problem assumes that all project activities can be executed in different modes of cost, time, and quality. A manager should thus select each activity's mode such that the project meets its deadline with the minimum possible cost and the maximum achievable quality. As the problem is NP-hard and the objectives conflict with each other, a multi-objective meta-heuristic called CellDE, a hybrid cellular genetic algorithm, is implemented as the optimisation method. The proposed algorithm provides project managers with a set of non-dominated or Pareto-optimal solutions, enabling them to choose the best one according to their preferences. A set of problems of different sizes is generated and solved using the proposed algorithm. Three metrics are employed for evaluating the performance of the algorithm, appraising the diversity and convergence of the achieved Pareto fronts. Finally, a comparison is made between CellDE and another meta-heuristic available in the literature. The results show the superiority of CellDE.
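The Pareto-dominance filter underlying the set of non-dominated solutions can be sketched directly. The candidate (time, cost, 1 − quality) tuples below are made up; all three objectives are treated as minimized.

```python
# A solution dominates another if it is no worse in every objective and
# strictly better in at least one; the Pareto front is the non-dominated set.

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

# (time, cost, 1 - quality) for five hypothetical mode assignments
candidates = [(10, 5, 0.2), (8, 7, 0.3), (10, 6, 0.2), (12, 4, 0.1), (9, 9, 0.4)]
print(pareto_front(candidates))  # -> [(10, 5, 0.2), (8, 7, 0.3), (12, 4, 0.1)]
```

CellDE's output is such a front; the manager then picks one trade-off from it according to preference.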
Rachmawati, D.; Budiman, M. A.; Atika, F.
2018-03-01
Data security is becoming one of the most significant challenges in the digital world. Retrieval of data by unauthorized parties harms the owner of the data, and PDF documents are also susceptible to such security breaches. To solve this problem, a method is needed to protect the data, such as cryptography. In cryptography, several algorithms can encode data; one of them is the Two Square Cipher, a symmetric algorithm. In this research, the Two Square Cipher has been extended to a 16 × 16 key in order to accommodate a wider range of plaintexts. For further security enhancement, it is combined with the VMPC algorithm, which is also a symmetric algorithm. The combination of the two algorithms is called super-encryption. The data can then be stored on a mobile phone, allowing users to secure data flexibly and access it anywhere. The PDF document security application in this research was built for the Android platform. This study also calculates the complexity of the algorithms and the processing time. Based on the test results, the complexity is θ(n) for the Two Square Cipher and θ(n) for the VMPC algorithm, so the complexity of the super-encryption is also θ(n). The VMPC algorithm's processing time is shorter than the Two Square Cipher's, and the processing time is directly proportional to the length of the plaintext and passwords.
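The super-encryption idea, applying one symmetric cipher and then a second, inverting in reverse order, can be sketched with trivial stand-in ciphers. The two "ciphers" below (a byte shift and a keystream XOR) are NOT the actual Two Square and VMPC algorithms; like the originals, though, each pass is O(n) in the plaintext length, so the composition is too.

```python
# Super-encryption: cipher2(cipher1(plaintext)); decryption reverses the order.

def sub_encrypt(data: bytes, shift: int) -> bytes:   # stand-in for cipher 1
    return bytes((b + shift) % 256 for b in data)

def sub_decrypt(data: bytes, shift: int) -> bytes:
    return bytes((b - shift) % 256 for b in data)

def stream_xor(data: bytes, key: bytes) -> bytes:    # stand-in for cipher 2
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def super_encrypt(data, shift, key):
    return stream_xor(sub_encrypt(data, shift), key)

def super_decrypt(data, shift, key):
    return sub_decrypt(stream_xor(data, key), shift)

msg = b"PDF document payload"
ct = super_encrypt(msg, 7, b"secret")
assert super_decrypt(ct, 7, b"secret") == msg
print(ct.hex())
```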
Improved multivariate polynomial factoring algorithm
Wang, P.S.
1978-01-01
A new algorithm for factoring multivariate polynomials over the integers, based on an algorithm by Wang and Rothschild, is described. The new algorithm has improved strategies for dealing with the known problems of the original algorithm, namely the leading-coefficient problem, the bad-zero problem and the occurrence of extraneous factors. It includes an algorithm for correctly predetermining the leading coefficients of the factors. A new and efficient p-adic algorithm named EEZ is described; basically, it is a linearly convergent variable-by-variable parallel construction. The improved algorithm is generally faster and requires less storage than the original algorithm. Machine examples with comparative timings are included.
Sakaguchi, Toshimasa; Fujigaki, Motoharu; Murata, Yorinobu
2015-03-01
An accurate, wide-range shape measurement method is required in industry. The same technique can be used for shape measurement of the human body in the garment industry, and compact 3D shape measurement equipment is also required for embedding in inspection systems. Shape measurement by a phase-shifting method can achieve high spatial resolution because the coordinates are obtained pixel by pixel. A key device for developing compact equipment is the grating projector. The authors developed a linear LED projector and proposed a light source stepping method (LSSM) that uses it. With this method, shape measurement equipment can be produced at low cost and in a compact form, without any mechanical phase-shifting system. It also enables 3D shape measurement in a very short time by switching the light sources quickly. A phase unwrapping method is necessary to widen the measurement range of a phase-shifting method while keeping constant accuracy. A common approach uses two different grating pitches and is simple; it is, however, difficult to apply this conventional phase unwrapping algorithm to the LSSM. The authors therefore developed an expanded unwrapping algorithm for the LSSM. In this paper, this algorithm for expanding the measurement range of 3D shape measurement, using two pitches of projected grating with the LSSM, is evaluated.
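As background to the phase-shifting measurement described above, the standard four-step phase calculation (not the authors' LSSM-specific unwrapping) can be sketched in a few lines; the `intensities` helper that synthesizes fringe data is hypothetical:

```python
import math

def phase_from_four_steps(i0, i1, i2, i3):
    # Standard 4-step phase-shifting formula for shifts of 0, pi/2, pi, 3pi/2:
    # phi = atan2(I3 - I1, I0 - I2), wrapped to (-pi, pi].
    return math.atan2(i3 - i1, i0 - i2)

def intensities(phi, a=1.0, b=0.5):
    # Hypothetical synthetic fringe intensities I_k = a + b*cos(phi + k*pi/2),
    # with background a and modulation b.
    return [a + b * math.cos(phi + k * math.pi / 2) for k in range(4)]
```

Because I3 - I1 = 2b·sin(phi) and I0 - I2 = 2b·cos(phi), the background and modulation cancel and the wrapped phase is recovered exactly; an unwrapping step (such as the two-pitch approach mentioned above) is still needed to remove the 2π ambiguity.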
V. E. Marley
2015-01-01
Full Text Available Summary. The concept of algorithmic models arose from the algorithmic approach, in which the simulated object or phenomenon is represented as a process governed by the strict rules of an algorithm describing the operation of the object. An algorithmic model is a formalized description of a subject specialist's scenario for the simulated process, whose structure matches the structure of the causal and temporal relationships between events of the process being modeled, together with all the information necessary for its software implementation. Algorithmic networks are used to represent the structure of algorithmic models. They are normally defined as loaded finite directed graphs whose vertices correspond to operators and whose arcs correspond to the variables bound by those operators. The language of algorithmic networks is expressive: the algorithms it can display cover, in effect, the class of all random algorithms. Existing modeling-automation systems based on algorithmic networks mainly use operators working with real numbers. Although this reduces their generality, it is sufficient for modeling a wide class of problems related to the economy, the environment, transport and technical processes. The task of modeling the execution of schedules and network diagrams is relevant and useful. There are many systems for computing network graphs; however, monitoring in them is based on analyzing gaps and deadlines in the graphs, with no analysis that predicts the execution of a schedule. The library described here is designed to build such predictive models: given source data, it produces a set of projections from which one can be chosen and adopted as a new plan.
Goodrich, J. P.; Cayan, D. R.
2017-12-01
California's Central Valley (CV) relies heavily on diverted surface water and groundwater pumping to supply irrigated agriculture. However, understanding the spatiotemporal character of water availability in the CV is difficult because of the number of individual farms and local, state, and federal agencies involved in using and managing water. Here we use the Central Valley Hydrologic Model (CVHM), developed by the USGS, to understand the relationships between climatic variability, surface water inputs, and resulting groundwater use over the historical period 1970-2013. We analyzed monthly surface water diversion data from >500 CV locations. Principal components analyses were applied to drivers constructed from meteorological data, surface reservoir storage, ET, land use cover, and upstream inflows, to feed multiple regressions and identify the factors most important in predicting surface water diversions. Two thirds of the diversion locations (about 80% of total diverted water) can be predicted to within 15%. Along with monthly inputs, representations of cumulative precipitation over the previous 3 to 36 months can explain an additional 10% of variance, depending on location, compared to results that excluded this information. Diversions in the southern CV are highly sensitive to inter-annual variability in precipitation (R2 = 0.8), whereby more surface water is used during wet years. Until recently, this was not the case in the northern and mid-CV, where diversions were relatively constant annually, suggesting relative insensitivity to drought. This contrast has important implications for drought response in southern regions (e.g. the Tulare Basin), where extended dry conditions can severely limit surface water supplies and lead to excess groundwater pumping, storage loss, and subsidence. In addition to furthering our understanding of spatiotemporal variability in diversions, our ability to predict these water balance components allows us to update CVHM predictions before
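The abstract's pipeline of principal components feeding multiple regressions can be illustrated with a generic principal-component-regression sketch; the function and the synthetic test data are hypothetical and not the authors' actual CVHM workflow:

```python
import numpy as np

def pca_regression(X, y, n_comp):
    """Illustrative principal-component regression: standardize predictors,
    project onto the leading principal components, then fit ordinary least
    squares on the component scores."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)           # standardize drivers
    _, _, Vt = np.linalg.svd(Xs, full_matrices=False)   # PCs = rows of Vt
    Z = Xs @ Vt[:n_comp].T                              # component scores
    coef, *_ = np.linalg.lstsq(Z, y - y.mean(), rcond=None)
    return Z @ coef + y.mean()                          # fitted values
```

Using orthogonal component scores instead of the raw drivers sidesteps the collinearity among meteorological and reservoir inputs that would otherwise destabilize the regression coefficients.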
Maldonado Puente, Bryan Patricio
2014-01-01
The inner detector of the ATLAS experiment has two types of silicon detectors used for tracking: the Pixel Detector and the SCT (semiconductor tracker). Once a proton-proton collision occurs, the resulting particles pass through these detectors and are recorded as hits on the detector surfaces. A medium- to high-energy particle passes through seven different surfaces of the two detectors, leaving seven hits, while lower-energy particles can leave many more hits as they circle through the detector. For a typical event under the expected operational conditions, there are on average 30 000 hits recorded by the sensors. Only high-energy particles are of interest for physics analysis and are taken into account for path reconstruction; thus, a filtering process helps to discard the low-energy particles produced in the collision. The following report presents a solution for increasing the speed of the filtering process in the path reconstruction algorithm.
Sayer, A. M.; Hsu, N. C.; Lee, J.; Bettenhausen, C.; Kim, W. V.; Smirnov, A.
2018-01-01
The Suomi National Polar-Orbiting Partnership (S-NPP) satellite, launched in late 2011, carries the Visible Infrared Imaging Radiometer Suite (VIIRS) and several other instruments. VIIRS has similar characteristics to prior satellite sensors used for aerosol optical depth (AOD) retrieval, allowing the continuation of space-based aerosol data records. The Deep Blue algorithm has previously been applied to retrieve AOD from Sea-viewing Wide Field-of-view Sensor (SeaWiFS) and Moderate Resolution Imaging Spectroradiometer (MODIS) measurements over land. The SeaWiFS Deep Blue data set also included a SeaWiFS Ocean Aerosol Retrieval (SOAR) algorithm to cover water surfaces. As part of NASA's VIIRS data processing, Deep Blue is being applied to VIIRS data over land, and SOAR has been adapted from SeaWiFS to VIIRS for use over water surfaces. This study describes SOAR as applied in version 1 of NASA's S-NPP VIIRS Deep Blue data product suite. Several advances have been made since the SeaWiFS application, as well as changes to make use of the broader spectral range of VIIRS. A preliminary validation against Maritime Aerosol Network (MAN) measurements suggests a typical uncertainty on retrieved 550 nm AOD of order ±(0.03+10%), comparable to existing SeaWiFS/MODIS aerosol data products. Retrieved Ångström exponent and fine-mode AOD fraction are also well correlated with MAN data, with small biases and uncertainty similar to or better than SeaWiFS/MODIS products.
Antonello, M
2013-01-01
Liquid Argon Time Projection Chamber (LAr TPC) detectors offer charged particle imaging capability with remarkable spatial resolution. Precise event reconstruction procedures are critical in order to fully exploit the potential of this technology. In this paper we present a new, general approach of three-dimensional reconstruction for the LAr TPC with a practical application to track reconstruction. The efficiency of the method is evaluated on a sample of simulated tracks. We present also the application of the method to the analysis of real data tracks collected during the ICARUS T600 detector operation with the CNGS neutrino beam.
M. Antonello
2013-01-01
Liquid Argon Time Projection Chamber (LAr TPC) detectors offer charged particle imaging capability with remarkable spatial resolution. Precise event reconstruction procedures are critical in order to fully exploit the potential of this technology. In this paper we present a new, general approach to 3D reconstruction for the LAr TPC with a practical application to track reconstruction. The efficiency of the method is evaluated on a sample of simulated tracks. We also present the application of the method to the analysis of stopping particle tracks collected during the ICARUS T600 detector operation with the CNGS neutrino beam.
Wang, Haipeng; Xu, Feng; Jin, Ya-Qiu; Ouchi, Kazuo
An inversion method for bridge height over water by polarimetric synthetic aperture radar (SAR) is developed. A geometric ray description illustrating the scattering mechanism of a bridge over a water surface is identified by polarimetric image analysis. Using a mapping and projecting algorithm, a polarimetric SAR image of a bridge model is first simulated; it shows that scattering from a bridge over water can be identified by three strip lines corresponding to single-, double-, and triple-order scattering, respectively. A set of polarimetric parameters based on de-orientation theory is applied to the analysis of the three types of scattering, and a thinning-clustering algorithm and the Hough transform are then employed to locate the image positions of these strip lines. These lines are used to invert the bridge height. Fully polarimetric image data from the airborne Pi-SAR at X-band are applied to inversion of the height and width of the Naruto Bridge in Japan. Based on the same principle, the approach is also applied to spaceborne ALOS-PALSAR single-polarization data of the Eastern Ocean Bridge in China. The results show the good feasibility of bridge height inversion.
Mobashsher, Ahmed Toaha; Mahmoud, A.; Abbosh, A. M.
2016-02-01
Intracranial hemorrhage is a medical emergency that requires rapid detection and medication in order to keep any brain damage to a minimum. Here, an effective wideband microwave head imaging system for on-the-spot detection of intracranial hemorrhage is presented. The operation of the system relies on the dielectric contrast between healthy brain tissues and a hemorrhage, which causes strong microwave scattering. The system uses a compact sensing antenna, which has ultra-wideband operation with directional radiation, and a portable, compact microwave transceiver for signal transmission and data acquisition. The collected data are processed to create a clear image of the brain using an improved back projection algorithm based on a novel effective head permittivity model. The system is verified in realistic simulation and experimental environments using anatomically and electrically realistic human head phantoms. Quantitative and qualitative comparisons between the images from the proposed and existing algorithms demonstrate significant improvements in detection and localization accuracy. The radiation and thermal safety of the system are examined and verified. Initial human tests were conducted on healthy subjects with different head sizes. The reconstructed images were statistically analyzed, and the absence of false positive results indicates the efficacy of the proposed system for future preclinical trials.
Karagali, Ioanna; Hasager, Charlotte Bay; Høyer, Jacob L.
2013-01-01
This study presents some preliminary results of the ESA Support To Science Element (STSE) funded project on the Diurnal Variability of the Sea Surface Temperature, regarding its Regional Extent and Implications in Atmospheric Modelling (SSTDV:R.EX.–IM.A.M.). Comparisons of SEVIRI SST with AATSR...
Tavakoli, Reza; Srinivasan, Sanjay; Wheeler, Mary
2015-04-01
The application of ensemble-based algorithms for history matching reservoir models has been steadily increasing over the past decade. However, the majority of implementations in reservoir engineering have dealt only with production history matching. During geologic sequestration, the injection of large quantities of CO2 into the subsurface may alter the stress/strain field, which in turn can lead to surface uplift or subsidence. Therefore, it is essential to couple multiphase flow and geomechanical response in order to predict and quantify the uncertainty of CO2 plume movement for long-term, large-scale CO2 sequestration projects. In this work, we simulate and estimate the properties of a reservoir that is being used to store CO2 as part of the In Salah Capture and Storage project in Algeria. The CO2 is separated from produced natural gas and is re-injected into the downdip aquifer portion of the field from three long horizontal wells. The field observation data include ground surface deformations (uplift) measured using satellite-based radar (InSAR), injection well locations and CO2 injection rate histories provided by the operators. We implement variations of the ensemble Kalman filter and ensemble smoother algorithms for assimilating both injection rate data and geomechanical observations (surface uplift) into the reservoir model. The preliminary estimates of horizontal permeability and of material properties such as Young's modulus and Poisson's ratio are consistent with available measurements and previous studies of this field. Moreover, the existence of high-permeability channels (fractures) within the reservoir, especially in the regions around the injection wells, is confirmed. These estimation results can be used to accurately and efficiently predict and quantify the uncertainty in the movement of the CO2 plume.
Ping, J.; Tavakoli, R.; Min, B.; Srinivasan, S.; Wheeler, M. F.
2015-12-01
Optimal management of subsurface processes requires the characterization of the uncertainty in reservoir description and reservoir performance prediction. The application of ensemble-based algorithms for history matching reservoir models has been steadily increasing over the past decade. However, the majority of implementations in reservoir engineering have dealt only with production history matching. During geologic sequestration, the injection of large quantities of CO2 into the subsurface may alter the stress/strain field, which in turn can lead to surface uplift or subsidence. Therefore, it is essential to couple multiphase flow and geomechanical response in order to predict and quantify the uncertainty of CO2 plume movement for long-term, large-scale CO2 sequestration projects. In this work, we simulate and estimate the properties of a reservoir that is being used to store CO2 as part of the In Salah Capture and Storage project in Algeria. The CO2 is separated from produced natural gas and is re-injected into the downdip aquifer portion of the field from three long horizontal wells. The field observation data include ground surface deformations (uplift) measured using satellite-based radar (InSAR), injection well locations and CO2 injection rate histories provided by the operators. We implement ensemble-based algorithms for assimilating both injection rate data and geomechanical observations (surface uplift) into the reservoir model. The preliminary estimates of horizontal permeability and of material properties such as Young's modulus and Poisson's ratio are consistent with available measurements and previous studies of this field. Moreover, the existence of high-permeability channels/fractures within the reservoir, especially in the regions around the injection wells, is confirmed. These estimation results can be used to accurately and efficiently predict and monitor the movement of the CO2 plume.
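The ensemble Kalman filter analysis step at the heart of such ensemble-based history matching can be sketched as below. This is the textbook stochastic (perturbed-observation) form with a generic linear observation operator, not the coupled flow-geomechanics implementation of the study:

```python
import numpy as np

def enkf_update(X, d, H, R, rng):
    """One EnKF analysis step (stochastic, perturbed-observation form).
    X: (n_state, n_ens) forecast ensemble; d: (n_obs,) observations;
    H: (n_obs, n_state) linear observation operator; R: (n_obs, n_obs)
    observation-error covariance; rng: numpy Generator for perturbations."""
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)           # ensemble anomalies
    HA = H @ A                                      # anomalies in obs space
    P_hh = HA @ HA.T / (n_ens - 1) + R              # innovation covariance
    K = (A @ HA.T / (n_ens - 1)) @ np.linalg.inv(P_hh)  # Kalman gain
    # perturb the observations once per ensemble member
    D = d[:, None] + rng.multivariate_normal(np.zeros(len(d)), R, size=n_ens).T
    return X + K @ (D - H @ X)                      # updated ensemble
```

In the paper's setting the state vector would hold permeability and mechanical properties, and H would map them (through the coupled simulator) to injection rates and InSAR uplift; here H is kept linear purely for illustration.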
Rasmussen, Troels A.; Merritt, Timothy R.
2017-01-01
CNC cutting machines have become essential tools for designers and architects, enabling rapid prototyping, model-building and production of high-quality components. Designers often cut from new materials, discarding the irregularly shaped remains. We introduce ProjecTables, a visual augmented reality system for interactive packing of model parts onto sheet materials. ProjecTables enables designers to (re)use scrap materials for CNC cutting that would previously have been thrown away, while supporting aesthetic choices related to wood grain, avoiding surface blemishes, and other relevant material properties. We conducted evaluations of ProjecTables with design students from the Aarhus School of Architecture, demonstrating that participants could quickly and easily place and orient model parts, reducing material waste. Contextual interviews and ideation sessions led to a deeper...
Michel, D.; Jiménez, C.; Miralles, Diego G.; Jung, M.; Hirschi, M.; Ershadi, Ali; Martens, B.; McCabe, Matthew; Fisher, J. B.; Mu, Q.; Seneviratne, S. I.; Wood, E. F.; Fernández-Prieto, D.
2016-01-01
The WAter Cycle Multi-mission Observation Strategy – EvapoTranspiration (WACMOS-ET) project has compiled a forcing data set covering the period 2005–2007 that aims to maximize the exploitation of European Earth Observations data sets for evapotranspiration (ET) estimation. The data set was used to run four established ET algorithms: the Priestley–Taylor Jet Propulsion Laboratory model (PT-JPL), the Penman–Monteith algorithm from the MODerate resolution Imaging Spectroradiometer (MODIS) evaporation product (PM-MOD), the Surface Energy Balance System (SEBS) and the Global Land Evaporation Amsterdam Model (GLEAM). In addition, in situ meteorological data from 24 FLUXNET towers were used to force the models, with results from both forcing sets compared to tower-based flux observations. Model performance was assessed on several timescales using both sub-daily and daily forcings. The PT-JPL model and GLEAM provide the best performance for both satellite- and tower-based forcing as well as for the considered temporal resolutions. Simulations using the PM-MOD were mostly underestimated, while the SEBS performance was characterized by a systematic overestimation. In general, all four algorithms produce the best results in wet and moderately wet climate regimes. In dry regimes, the correlation and the absolute agreement with the reference tower ET observations were consistently lower. While ET derived with in situ forcing data agrees best with the tower measurements (R^{2} = 0.67), the agreement of the satellite-based ET estimates is only marginally lower (R^{2} = 0.58). Results also show similar model performance at daily and sub-daily (3-hourly) resolutions. Overall, our validation experiments against in situ measurements indicate that there is no single best-performing algorithm across all biome and forcing types. An extension of the evaluation to a larger selection of 85 towers (model inputs resampled to a
Michel, D.
2016-02-23
The WAter Cycle Multi-mission Observation Strategy – EvapoTranspiration (WACMOS-ET) project has compiled a forcing data set covering the period 2005–2007 that aims to maximize the exploitation of European Earth Observations data sets for evapotranspiration (ET) estimation. The data set was used to run four established ET algorithms: the Priestley–Taylor Jet Propulsion Laboratory model (PT-JPL), the Penman–Monteith algorithm from the MODerate resolution Imaging Spectroradiometer (MODIS) evaporation product (PM-MOD), the Surface Energy Balance System (SEBS) and the Global Land Evaporation Amsterdam Model (GLEAM). In addition, in situ meteorological data from 24 FLUXNET towers were used to force the models, with results from both forcing sets compared to tower-based flux observations. Model performance was assessed on several timescales using both sub-daily and daily forcings. The PT-JPL model and GLEAM provide the best performance for both satellite- and tower-based forcing as well as for the considered temporal resolutions. Simulations using the PM-MOD were mostly underestimated, while the SEBS performance was characterized by a systematic overestimation. In general, all four algorithms produce the best results in wet and moderately wet climate regimes. In dry regimes, the correlation and the absolute agreement with the reference tower ET observations were consistently lower. While ET derived with in situ forcing data agrees best with the tower measurements (R^{2} = 0.67), the agreement of the satellite-based ET estimates is only marginally lower (R^{2} = 0.58). Results also show similar model performance at daily and sub-daily (3-hourly) resolutions. Overall, our validation experiments against in situ measurements indicate that there is no single best-performing algorithm across all biome and forcing types. An extension of the evaluation to a larger selection of 85 towers (model inputs resampled to a
Anas Altaleb
2017-03-01
The aim of this work is to synthesize 8×8 substitution boxes (S-boxes) for block ciphers. The confusion-creating potential of an S-box depends on its construction technique. In the first step, we apply the algebraic action of the projective general linear group PGL(2, GF(2^8)) on the Galois field GF(2^8). In the second step, we use permutations of the symmetric group S_256 to construct a new kind of S-box. To explain the proposed extension scheme, we give an example and construct one new S-box. The strength of the extended S-box is computed, and an insight is given into calculating its confusion-creating potency. To analyze the security of the S-box, some popular algebraic and statistical attacks are performed as well. The proposed S-box is analyzed by the bit independence criterion, linear approximation probability test, non-linearity test, strict avalanche criterion, differential approximation probability test, and majority logic criterion. A comparison of the proposed S-box with existing S-boxes shows that the analyses of the extended S-box are comparatively better.
Dongxiao Niu
2018-01-01
The electric power industry is of great significance in promoting social and economic development and improving people's living standards. Power grid construction is a necessary part of infrastructure construction, and its sustainability plays an important role in economic development, environmental protection and social progress. In order to effectively evaluate the sustainability of power grid construction projects, in this paper we first identified 17 criteria across four dimensions (economy, technology, society and environment) to establish the evaluation criteria system. After that, grey incidence analysis was used to modify the traditional Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS), making it possible to evaluate the sustainability of electric power construction projects from the perspectives of both similarity and nearness. Then, in order to simplify expert scoring and computation, a model using the Modified Fly Optimization Algorithm (MFOA) to optimize a Least Squares Support Vector Machine (LSSVM) was established on the basis of the evaluation results of the improved TOPSIS. Finally, a numerical example was given to demonstrate the effectiveness of the proposed model.
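The unmodified TOPSIS procedure that the paper takes as its starting point (before the grey-incidence modification) can be sketched as follows; the matrix layout and variable names are illustrative:

```python
import numpy as np

def topsis(M, weights, benefit):
    """Classic TOPSIS. M: (alternatives, criteria) decision matrix;
    weights: criterion weights; benefit[j] is True when a higher value is
    better for criterion j. Returns relative-closeness scores in [0, 1]."""
    V = M / np.linalg.norm(M, axis=0)              # vector-normalize columns
    V = V * weights                                # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))   # best point
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))    # worst point
    d_pos = np.linalg.norm(V - ideal, axis=1)      # distance to ideal
    d_neg = np.linalg.norm(V - anti, axis=1)       # distance to anti-ideal
    return d_neg / (d_pos + d_neg)                 # closer to ideal => ~1
```

The paper's modification replaces the Euclidean distances with grey incidence grades so that both the similarity and the nearness of an alternative to the ideal solution are captured; the ranking step itself is unchanged.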
Frampton, Dan; Gallo Cassarino, Tiziano; Raffle, Jade; Hubb, Jonathan; Ferns, R. Bridget; Waters, Laura; Tong, C. Y. William; Kozlakidis, Zisis; Hayward, Andrew; Kellam, Paul; Pillay, Deenan; Clark, Duncan; Nastouli, Eleni; Leigh Brown, Andrew J.
2018-01-01
Background & methods: The ICONIC project has developed an automated high-throughput pipeline to generate HIV nearly full-length genomes (NFLG, i.e. from gag to nef) from next-generation sequencing (NGS) data. The pipeline was applied to 420 HIV samples collected at University College London Hospitals NHS Trust and Barts Health NHS Trust (London) and sequenced using an Illumina MiSeq at the Wellcome Trust Sanger Institute (Cambridge). Consensus genomes were generated and subtyped using COMET, and unique recombinants were studied with jpHMM and SimPlot. Maximum-likelihood phylogenetic trees were constructed using RAxML, and transmission networks were identified using the Cluster Picker. Results: The pipeline generated sequences of at least 1 kb in length (median = 7.46 kb, IQR = 4.01 kb) for 375 of the 420 samples (89%), with 174 (46.4%) being NFLG. A total of 365 sequences (169 of them NFLG) corresponded to unique subjects and were included in the downstream analyses. The most frequent HIV subtypes were B (n = 149, 40.8%) and C (n = 77, 21.1%) and the circulating recombinant form CRF02_AG (n = 32, 8.8%). We found 14 different CRFs (n = 66, 18.1%) and multiple URFs (n = 32, 8.8%) that involved recombination between 12 different subtypes/CRFs. The most frequent URFs were B/CRF01_AE (4 cases) and A1/D, B/C, and B/CRF02_AG (3 cases each). Most URFs (19/26, 73%) lacked breakpoints in the PR+RT pol region, rendering them undetectable if only that region had been sequenced. Twelve (37.5%) of the URFs could have emerged within the UK, whereas the rest were probably imported from sub-Saharan Africa, South East Asia and South America. For 2 URFs we found highly similar pol sequences circulating in the UK. We detected 31 phylogenetic clusters using the full dataset: 25 pairs (mostly subtypes B and C), 4 triplets and 2 quadruplets. Some of these were not consistent across different genes due to inter- and intra-subtype recombination. Clusters involved 70 sequences, 19.2% of the dataset. Conclusions
Mahnke, Martina; Uprichard, Emma
2014-01-01
Imagine sailing across the ocean. The sun is shining, vastness all around you. And suddenly [BOOM] you've hit an invisible wall. Welcome to the Truman Show! Ever since Eli Pariser published his thoughts on a potential filter bubble, this movie scenario seems to have become reality, with slight changes: it's not the ocean but the internet we're talking about, and it's not a TV show producer but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to 'tame the algorithmic tiger'. While this is a valuable and often inspiring approach, we...
Bernard, Edouard J.; Monelli, Matteo; Gallart, Carme; Drozdovsky, Igor; Stetson, Peter B.; Aparicio, Antonio; Cassisi, Santi; Mayer, Lucio; Cole, Andrew A.; Hidalgo, Sebastian L.; Skillman, Evan D.; Tolstoy, Eline
2009-01-01
We present the first study of the variable star populations in the isolated dwarf spheroidal galaxies (dSphs) Cetus and Tucana. Based on Hubble Space Telescope images obtained with the Advanced Camera for Surveys in the F475W and F814W bands, we identified 180 and 371 variables in Cetus and Tucana,
The mathematics of some tomography algorithms used at JET
Ingesson, L
2000-03-01
Mathematical details are given of various tomographic reconstruction algorithms that are in use at JET. These algorithms include constrained optimization (CO) with local basis functions, the Cormack method, methods with natural basis functions and the iterative projection-space reconstruction method. Topics discussed include: derivation of the matrix equation for constrained optimization, variable grid size, basis functions, line integrals, derivative matrices, smoothness matrices, analytical expression of the CO solution, sparse matrix storage, projection-space coordinates, the Cormack method in elliptical coordinates, interpolative generalized natural basis functions and some details of the implementation of the filtered backprojection method. (author)
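Of the reconstruction methods listed, the filtered backprojection step can be sketched compactly. This is a generic parallel-beam version with a Ram-Lak (ramp) filter applied in the Fourier domain, not the JET implementation described in the paper:

```python
import numpy as np

def filtered_backprojection(sinogram, angles_deg, size):
    """Parallel-beam FBP sketch. sinogram: (n_angles, n_detectors);
    returns a (size, size) reconstruction on a grid centred at size/2."""
    n_det = sinogram.shape[1]
    # ramp (Ram-Lak) filter each projection in the Fourier domain
    ramp = np.abs(np.fft.fftfreq(n_det))
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
    # backproject each filtered projection across the image grid
    xs = np.arange(size) - size / 2.0
    X, Y = np.meshgrid(xs, xs)
    recon = np.zeros((size, size))
    for proj, theta in zip(filtered, np.deg2rad(angles_deg)):
        t = X * np.cos(theta) + Y * np.sin(theta) + n_det / 2.0
        recon += np.interp(t.ravel(), np.arange(n_det), proj).reshape(size, size)
    return recon * np.pi / len(angles_deg)
```

The ramp filter compensates for the 1/r blurring that plain backprojection produces; in tomographic diagnostics with sparse views, the constrained-optimization methods discussed in the paper are typically preferred over FBP.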
Burger, Irene A; Wurnig, Moritz C; Becker, Anton S; Kenkel, David; Delso, Gaspar; Veit-Haibach, Patrick; Boss, Andreas
2015-01-01
It was the aim of this study to implement an algorithm modifying Dixon-based MR imaging datasets for attenuation correction in hybrid PET/MR imaging with a multiacquisition variable resonance image combination (MAVRIC) sequence, in order to reduce metal artifacts. After ethics approval, data were acquired from 8 oncologic patients with dental implants in a trimodality setup with PET/CT and MR imaging. The protocol included a whole-body 3-dimensional dual gradient-echo sequence (Dixon) used for MR imaging-based PET attenuation correction and a high-resolution MAVRIC sequence applied in the oral area compromised by the dental implants. An algorithm was implemented that corrects the Dixon-based μ maps using the MAVRIC in areas of Dixon signal voids. The artifact size of the corrected μ maps was compared with that of the uncorrected MR imaging μ maps. The algorithm was robust in all patients. There was a significant reduction in mean artifact size of 70.5% between uncorrected and corrected μ maps, from 697 ± 589 mm(2) to 202 ± 119 mm(2) (P = 0.016). The proposed algorithm could improve MR imaging-based attenuation correction in critical areas when standard attenuation correction is hampered by metal artifacts, using a MAVRIC. © 2015 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
Mohamed Redha Rezoug
2018-02-01
Photovoltaic pumping is considered the most widely used application of photovoltaic energy in isolated sites. The technology is progressing slowly toward allowing the photovoltaic system to operate at its maximum power. This work introduces a modified perturb and observe (P&O) algorithm that overcomes the limitations of the conventional P&O algorithm and improves its overall performance under abrupt weather changes. The most significant restriction of the conventional P&O algorithm is the difficulty of choosing the step of the reference voltage, which requires a good compromise between a swift dynamic response and stability in the steady state. To adjust the reference-voltage step according to the location of the operating point relative to the maximum power point (MPP), a fuzzy logic controller (FLC) block adapted to the P&O algorithm is used. This improves the tracking pace and eliminates steady-state oscillation. The suggested method was evaluated by simulation using MATLAB/SimPowerSystems blocks and compared to the classical P&O under different irradiation levels. The results obtained show the effectiveness of the proposed technique and its capacity for practical and efficient tracking of the maximum power.
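The conventional fixed-step P&O rule whose limitations the paper addresses can be sketched as a small tracker class; the quadratic PV-panel model in the usage below is hypothetical, and the paper's fuzzy-logic step adaptation is not included:

```python
class PerturbObserve:
    """Conventional fixed-step P&O MPPT tracker (illustrative sketch).
    Rule: if the last perturbation of the reference voltage increased the
    measured power, keep perturbing in the same direction; else reverse."""

    def __init__(self, v_ref=17.0, step=0.2):
        self.v_ref, self.step = v_ref, step
        self.prev_p, self.prev_v = 0.0, 0.0

    def update(self, v, i):
        p = v * i
        if p != self.prev_p:
            # same sign of dP and dV => climbing toward the MPP: step up;
            # opposite signs => past the MPP: step down
            if (p > self.prev_p) == (v > self.prev_v):
                self.v_ref += self.step
            else:
                self.v_ref -= self.step
        self.prev_p, self.prev_v = p, v
        return self.v_ref
```

With a fixed step, the operating point ends up oscillating around the MPP by one step width in steady state, and a large step reacts faster but oscillates more: exactly the trade-off the paper's fuzzy controller adapts away.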
Liu Qing; Zhu Jiamin; Hong Bihai
2008-01-01
A modified variable-coefficient projective Riccati equation method is proposed and applied to a (2 + 1)-dimensional simplified and generalized Broer-Kaup system. It is shown that the method presented by Huang and Zhang [Huang DJ, Zhang HQ. Chaos, Solitons and Fractals 2005; 23:601] is a special case of our method. The results obtained in the paper include many new formal solutions besides the all solutions found by Huang and Zhang
Moghtadaei, Motahareh; Hashemi Golpayegani, Mohammad Reza; Malekzadeh, Reza
2013-02-07
Identification of squamous dysplasia and esophageal squamous cell carcinoma (ESCC) is of great importance in the prevention of cancer incidence. Computer-aided algorithms can be very useful for identifying people at higher risk of squamous dysplasia and ESCC. Such methods can limit clinical screenings to people at higher risk. Different regression methods have been used to predict ESCC and dysplasia. In this paper, a Fuzzy Neural Network (FNN) model is selected for ESCC and dysplasia prediction. The inputs to the classifier are the risk factors. Since the relation between risk factors in the tumor system has a complex nonlinear behavior, compared with most ordinary data, the cost function of its model can have more local optima. This highlights the need for global optimization methods. The method proposed in this paper is a Chaotic Optimization Algorithm (COA) followed by the common Error Back Propagation (EBP) local method. Since the model has many parameters, we use a strategy to reduce the dependency among parameters caused by the chaotic series generator. This dependency was not considered in previous COA methods. The algorithm is compared with the logistic regression model, among the most successful recent methods of ESCC and dysplasia prediction. The results represent a more precise prediction with smaller mean and variance of error. Copyright © 2012 Elsevier Ltd. All rights reserved.
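A minimal sketch of the chaotic-search ingredient of a COA, assuming the commonly used logistic-map generator x_{k+1} = 4·x_k·(1 − x_k). One trajectory is run per parameter with distinct seeds, a crude nod to the inter-parameter dependency the authors address; their specific decorrelation strategy and the EBP refinement stage are not reproduced here.

```python
import numpy as np

def chaotic_search(f, bounds, n_iter=500):
    """Minimize f over a box using candidates generated by the
    logistic map x_{k+1} = 4 x_k (1 - x_k), one chaotic trajectory
    per dimension (distinct seeds keep the trajectories different)."""
    lo, hi = (np.asarray(b, dtype=float) for b in bounds)
    x = np.linspace(0.12, 0.87, lo.size)   # distinct chaotic seeds
    best, best_val = None, float("inf")
    for _ in range(n_iter):
        x = 4.0 * x * (1.0 - x)            # chaotic update, stays in (0, 1)
        cand = lo + x * (hi - lo)          # map onto the search box
        val = f(cand)
        if val < best_val:
            best, best_val = cand.copy(), val
    return best, best_val
```

In a full COA the best chaotic candidate would seed a local optimizer (here, EBP training of the FNN) rather than be returned directly.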
C. S. Long
2017-12-01
Two of the most basic parameters generated from a reanalysis are temperature and winds. Temperatures in the reanalyses are derived from conventional (surface and balloon), aircraft, and satellite observations. Winds are observed by conventional systems, cloud tracked, and derived from height fields, which are in turn derived from the vertical temperature structure. In this paper we evaluate, as part of the SPARC Reanalysis Intercomparison Project (S-RIP), the temperature and wind structure of all the recent and past reanalyses. This evaluation is mainly among the reanalyses themselves, but comparisons against independent observations, such as HIRDLS and COSMIC temperatures, are also presented. This evaluation uses monthly mean and 2.5° zonal mean data sets and spans the satellite era from 1979 to 2014. There is very good agreement in temperature seasonally and latitudinally among the more recent reanalyses (CFSR, MERRA, ERA-Interim, JRA-55, and MERRA-2) between the surface and 10 hPa. At lower pressures there is increased variance among these reanalyses that changes with season and latitude. This variance also changes during the time span of these reanalyses, with greater variance during the TOVS period (1979–1998) and less variance afterward in the ATOVS period (1999–2014). There is a distinct change in the temperature structure in the middle and upper stratosphere during this transition from TOVS to ATOVS systems. Zonal winds are in greater agreement than temperatures, and this agreement extends to lower pressures than the temperatures. Older reanalyses (NCEP/NCAR, NCEP/DOE, ERA-40, JRA-25) have larger temperature and zonal wind disagreement from the more recent reanalyses. All reanalyses to date have issues analysing the quasi-biennial oscillation (QBO) winds. Comparisons with Singapore QBO winds show disagreement in the amplitude of the westerly and easterly anomalies. The disagreement with Singapore winds improves with the transition from
Javadi, Atefeh; van Loon, Jacco Th.; Mirtorabi, Mohammad Taghi
2011-02-01
We have conducted a near-infrared monitoring campaign at the UK Infrared Telescope (UKIRT), of the Local Group spiral galaxy M33 (Triangulum). The main aim was to identify stars in the very final stage of their evolution, and for which the luminosity is more directly related to the birth mass than the more numerous less-evolved giant stars that continue to increase in luminosity. The most extensive data set was obtained in the K band with the UIST instrument for the central 4 × 4 arcmin² (1 kpc²) - this contains the nuclear star cluster and inner disc. These data, taken during the period 2003-2007, were complemented by J- and H-band images. Photometry was obtained for 18 398 stars in this region; of these, 812 stars were found to be variable, most of which are asymptotic giant branch (AGB) stars. Our data were matched to optical catalogues of variable stars and carbon stars and to mid-infrared photometry from the Spitzer Space Telescope. In this first of a series of papers, we present the methodology of the variability survey and the photometric catalogue - which is made publicly available at the Centre de Données astronomiques de Strasbourg - and discuss the properties of the variable stars. The most dusty AGB stars had not been previously identified in optical variability surveys, and our survey is also more complete for these types of stars than the Spitzer survey.
Romberger, Jeff [SBW Consulting, Inc., Bellevue, WA (United States)
2017-06-21
An adjustable-speed drive (ASD) includes all devices that vary the speed of a rotating load, including those that vary the motor speed and linkage devices that allow constant motor speed while varying the load speed. The Variable Frequency Drive Evaluation Protocol presented here addresses evaluation issues for variable-frequency drives (VFDs) installed on commercial and industrial motor-driven centrifugal fans and pumps for which torque varies with speed. Constant torque load applications, such as those for positive displacement pumps, are not covered by this protocol.
De Götzen , Amalia; Mion , Luca; Tache , Olivier
2007-01-01
We call sound algorithms the categories of algorithms that deal with digital sound signals. Sound algorithms appeared in the very infancy of computing. Sound algorithms present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.
Bernard, Edouard J.; Monelli, Matteo; Gallart, Carme
2009-01-01
We present the first study of the variable star populations in the isolated dwarf spheroidal galaxies (dSphs) Cetus and Tucana. Based on Hubble Space Telescope images obtained with the Advanced Camera for Surveys in the F475W and F814W bands, we identified 180 and 371 variables in Cetus and Tucana, respectively. The vast majority are RR Lyrae stars. In Cetus, we also found three anomalous Cepheids (ACs), four candidate binaries and one candidate long-period variable (LPV), while six ACs and seven LPV candidates were found in Tucana. Of the RR Lyrae stars, 147 were identified as fundamental mode (RRab) and only eight as first-overtone mode (RRc) in Cetus, with mean periods of 0.614 and 0.363 day, respectively. In Tucana, we found 216 RRab and 82 RRc, giving mean periods of 0.604 and 0.353 day. These values place both galaxies in the so-called Oosterhoff Gap, as is generally the case for dSphs. We calculated the distance modulus to both galaxies using different approaches based on the properties of RRab and RRc, namely, the luminosity-metallicity and period-luminosity-metallicity relations, and found values in excellent agreement with previous estimates using independent methods: (m - M)_0 = 24.46 ± 0.12 for Cetus and (m - M)_0 = 24.74 ± 0.12 for Tucana, corresponding to 780 ± 40 kpc and 890 ± 50 kpc. We also found numerous RR Lyrae variables pulsating in both modes simultaneously (RRd): 17 in Cetus and 60 in Tucana. Tucana is, after Fornax, the second dSph in which such a large fraction of RRd (∼17%) has been observed. We provide the photometry and pulsation parameters for all the variables, and compare the latter with values from the literature for well studied dSphs of the Local Group and Galactic globular clusters. The parallel WFPC2 fields were also searched for variables, as they lie well within the tidal radius of Cetus, and at its limit in the case of Tucana. No variables were found in the latter, while 15 were discovered in the outer field of Cetus (11 RRab, three RRc
H. Heidari
2016-03-01
Introduction: The economic effects of membership in the WTO have in recent years been one of the most important issues for the Iranian economy. If Iran joins the WTO, tariff reduction in the agricultural sector will be one of the policies that has to be employed. Therefore, investigating the economic effects of tariff reduction, or even its elimination, in this sector is necessary for running effective policies that minimize the probable losses of accession. Tariffs on agricultural products in Iran are determined merely on the basis of the annual state of the economy and follow no long-term strategy. The government is obliged to impose effective tariffs on agricultural product imports in order to protect local production. On the other hand, according to the census of population and housing, the share of the agricultural sector in employment declined during the past decade. Moreover, Iranian central bank data indicate a reduction in the share of the agricultural sector in GDP over the past decade. The declining share of agriculture in production and employment, together with the high number of university graduates in the field of agriculture and the rising unemployment rate of this group, motivated this study to investigate the effect of tariff reduction in this sector on macroeconomic variables. Materials and Methods: This study analyzed the welfare effects of reducing tariffs on agricultural imports from Iran's most important commercial partners, and vice versa, using the Global Trade Analysis Project (GTAP), based on a computable general equilibrium (CGE) model. Moreover, the effects of tariff reduction on output, price level, and the transfer of production factors between different economic sectors are investigated. To simulate the above model, we used GTAP version 8, which covers 57 commodities and 113 regions together with their economic information. This model uses the Social Accounting Matrix of countries as its data source. Our
Fouad, Marwa A; Tolba, Enas H; El-Shal, Manal A; El Kerdawy, Ahmed M
2018-05-11
The continuous emergence of new β-lactam antibiotics creates the need for suitable analytical methods that accelerate and facilitate their analysis. A face-centered central composite experimental design was adopted using different levels of phosphate buffer pH and acetonitrile percentage at zero time and after 15 min in a gradient program to obtain the optimum chromatographic conditions for the elution of 31 β-lactam antibiotics. Retention factors were used as the target property to build two QSRR models, utilizing the conventional forward selection and the advanced nature-inspired firefly algorithm for descriptor selection, each coupled with multiple linear regression. The obtained models showed high performance in both internal and external validation, indicating their robustness and predictive ability. The Williams-Hotelling test and Student's t-test showed no statistically significant difference between the models' results. Y-randomization validation showed that the obtained models result from a significant correlation between the selected molecular descriptors and the analytes' chromatographic retention. These results indicate that the generated FS-MLR and FFA-MLR models show comparable quality at both the training and validation levels. They also gave comparable information about the molecular features that influence the retention behavior of β-lactams under the current chromatographic conditions. We can conclude that in some cases a simple conventional feature selection algorithm can be used to generate robust and predictive models comparable to those generated using advanced ones. Copyright © 2018 Elsevier B.V. All rights reserved.
Foundations of genetic algorithms 1991
1991-01-01
Foundations of Genetic Algorithms 1991 (FOGA 1) discusses the theoretical foundations of genetic algorithms (GA) and classifier systems. This book compiles research papers on selection and convergence, coding and representation, problem hardness, deception, classifier system design, variation and recombination, parallelization, and population divergence. Other topics include the non-uniform Walsh-schema transform; spurious correlations and premature convergence in genetic algorithms; and variable default hierarchy separation in a classifier system. The grammar-based genetic algorithm; condition
Russell, J. L.; Sarmiento, J. L.
2017-12-01
The Southern Ocean is central to the climate's response to increasing levels of atmospheric greenhouse gases as it ventilates a large fraction of the global ocean volume. Global coupled climate models and earth system models, however, vary widely in their simulations of the Southern Ocean and its role in, and response to, the ongoing anthropogenic forcing. Due to its complex water-mass structure and dynamics, Southern Ocean carbon and heat uptake depend on a combination of winds, eddies, mixing, buoyancy fluxes and topography. Understanding how the ocean carries heat and carbon into its interior and how the observed wind changes are affecting this uptake is essential to accurately projecting transient climate sensitivity. Observationally-based metrics are critical for discerning processes and mechanisms, and for validating and comparing climate models. As the community shifts toward Earth system models with explicit carbon simulations, more direct observations of important biogeochemical parameters, like those obtained from the biogeochemically-sensored floats that are part of the Southern Ocean Carbon and Climate Observations and Modeling project, are essential. One goal of future observing systems should be to create observationally-based benchmarks that will lead to reducing uncertainties in climate projections, and especially uncertainties related to oceanic heat and carbon uptake.
Suzuki, Shigeru; Machida, Haruhiko; Tanaka, Isao; Ueno, Eiko
2013-03-01
The purpose of this study was to evaluate the performance of model-based iterative reconstruction (MBIR) in measurement of the inner diameter of models of blood vessels and to compare performance between MBIR and a standard filtered back projection (FBP) algorithm. Vascular models with wall thicknesses of 0.5, 1.0, and 1.5 mm were scanned with a 64-MDCT unit and densities of contrast material yielding 275, 396, and 542 HU. Images were reconstructed by MBIR and FBP, and the mean diameter of each model vessel was measured by software automation. Twenty separate measurements were repeated for each vessel, and variance among the repeated measures was analyzed to determine measurement error. For all nine model vessels, CT attenuation profiles were compared along a line passing through the luminal center on axial images reconstructed with FBP and MBIR, and the 10-90% edge rise distances at the boundary between the vascular wall and the lumen were evaluated. For images reconstructed with FBP, measurement errors were smallest for models with 1.5-mm wall thickness, except those filled with 275-HU contrast material, and errors grew as the density of the contrast material decreased. Measurement errors with MBIR were comparable to or less than those with FBP. In CT attenuation profiles of images reconstructed with MBIR, the 10-90% edge rise distances at the boundary between the lumen and vascular wall were relatively short for each vascular model compared with those of the profile curves of FBP images. MBIR is better than standard FBP for reducing reconstruction blur and improving the accuracy of diameter measurement at CT angiography.
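The 10-90% edge rise distance used as the blur metric above can be computed from a sampled attenuation profile by linear interpolation. A minimal sketch, assuming the profile passed in is a monotonically rising edge:

```python
import numpy as np

def edge_rise_distance(x, profile, frac_lo=0.1, frac_hi=0.9):
    """10-90% edge rise distance of a monotonically rising edge.

    x       : sample positions along the profile line (e.g. mm)
    profile : attenuation values (e.g. HU) at those positions
    Returns the distance between the 10% and 90% crossing points."""
    # Normalize the profile to [0, 1] between its plateau levels.
    p = (profile - profile.min()) / (profile.max() - profile.min())
    x_lo = np.interp(frac_lo, p, x)   # position of the 10% crossing
    x_hi = np.interp(frac_hi, p, x)   # position of the 90% crossing
    return x_hi - x_lo
```

A sharper reconstruction (less blur) yields a smaller rise distance, which is the sense in which MBIR outperformed FBP here.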
Krepper, Gabriela; Romeo, Florencia; Fernandes, David Douglas de Sousa; Diniz, Paulo Henrique Gonçalves Dias; de Araújo, Mário César Ugulino; Di Nezio, María Susana; Pistonesi, Marcelo Fabián; Centurión, María Eugenia
2018-01-01
Determining fat content in hamburgers is very important to minimize or control the negative effects of fat on human health, such as cardiovascular diseases and obesity, which are caused by the high consumption of saturated fatty acids and cholesterol. This study proposed an alternative analytical method based on Near Infrared Spectroscopy (NIR) and the Successive Projections Algorithm for interval selection in Partial Least Squares regression (iSPA-PLS) for fat content determination in commercial chicken hamburgers. For this, 70 hamburger samples with a fat content ranging from 14.27 to 32.12 mg kg⁻¹ were prepared based on the upper limit recommended by the Argentinean Food Codex, which is 20% (w w⁻¹). NIR spectra were then recorded and preprocessed by applying different approaches: baseline correction, SNV, MSC, and Savitzky-Golay smoothing. For comparison, full-spectrum PLS and interval PLS were also used. The best performance for the prediction set was obtained with first-derivative Savitzky-Golay smoothing with a second-order polynomial and a window size of 19 points, achieving a coefficient of correlation of 0.94, RMSEP of 1.59 mg kg⁻¹, REP of 7.69%, and RPD of 3.02. The proposed methodology represents an excellent alternative to the conventional Soxhlet extraction method, since waste generation is avoided without the use of either chemical reagents or solvents, following the primary principles of Green Chemistry. The new method was successfully applied to chicken hamburger analysis, and the results agreed with the reference values at a 95% confidence level, making it very attractive for routine analysis.
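The winning preprocessing step, first-derivative Savitzky-Golay smoothing with a second-order polynomial and a 19-point window, can be sketched with SciPy. The spectrum below is synthetic (a Gaussian band on a sloping baseline), not the paper's data:

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic NIR-like spectrum: one absorption band plus baseline drift.
wavelength = np.linspace(1000.0, 2500.0, 700)       # nm (illustrative grid)
spectrum = (0.8 * np.exp(-((wavelength - 1720.0) / 40.0) ** 2)
            + 1e-4 * wavelength)                    # band + linear baseline

# First-derivative Savitzky-Golay smoothing, as in the paper:
# second-order polynomial, 19-point window, first derivative.
d1 = savgol_filter(spectrum, window_length=19, polyorder=2, deriv=1)
```

The first derivative converts additive baseline offsets into near-zero values and sloping baselines into constants, which is why it often improves PLS calibration on NIR spectra.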
Ghavami, Raoof; Najafi, Amir; Sajadi, Mohammad; Djannaty, Farhad
2008-09-01
In order to accurately simulate ¹³C NMR spectra of hydroxy, polyhydroxy, and methoxy substituted flavonoids, a quantitative structure-property relationship (QSPR) model, relating atom-based calculated descriptors to ¹³C NMR chemical shifts (ppm, TMS = 0), is developed. A dataset consisting of 50 flavonoid derivatives was employed for the present analysis. A set of 417 topological, geometrical, and electronic descriptors representing various structural characteristics was calculated, and separate multilinear QSPR models were developed between each carbon atom of the flavonoids and the calculated descriptors. A genetic algorithm (GA) and multiple linear regression analysis (MLRA) were used to select the descriptors and to generate the correlation models. Analysis of the results revealed a correlation coefficient and root mean square error (RMSE) of 0.994 and 2.53 ppm, respectively, for the prediction set.
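The MLRA step behind the reported correlation coefficient and RMSE might look like the following sketch. The descriptor matrix and target values are synthetic stand-ins, not the paper's dataset:

```python
import numpy as np

def mlr_fit_stats(X, y):
    """Fit y = b0 + X b by ordinary least squares and return
    (correlation coefficient r, RMSE) on the fitted data."""
    Xb = np.column_stack([np.ones(len(X)), X])    # prepend intercept column
    coef, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    y_hat = Xb @ coef
    r = np.corrcoef(y, y_hat)[0, 1]
    rmse = np.sqrt(np.mean((y - y_hat) ** 2))
    return r, rmse
```

In the paper the columns of X would be the GA-selected descriptors for one carbon position and y the observed chemical shifts in ppm; r and RMSE would then be evaluated on the held-out prediction set rather than the training data.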
Liu, Ju, E-mail: jliu@ices.utexas.edu [Institute for Computational Engineering and Sciences, The University of Texas at Austin, 201 East 24th Street, 1 University Station C0200, Austin, TX 78712 (United States); Gomez, Hector [Department of Mathematical Methods, University of A Coruña, Campus de Elviña, s/n, 15192 A Coruña (Spain); Evans, John A.; Hughes, Thomas J.R. [Institute for Computational Engineering and Sciences, The University of Texas at Austin, 201 East 24th Street, 1 University Station C0200, Austin, TX 78712 (United States); Landis, Chad M. [Aerospace Engineering and Engineering Mechanics, The University of Texas at Austin, 210 East 24th Street, 1 University Station C0600, Austin, TX 78712 (United States)
2013-09-01
We propose a new methodology for the numerical solution of the isothermal Navier–Stokes–Korteweg equations. Our methodology is based on a semi-discrete Galerkin method invoking functional entropy variables, a generalization of classical entropy variables, and a new time integration scheme. We show that the resulting fully discrete scheme is unconditionally stable-in-energy, second-order time-accurate, and mass-conservative. We utilize isogeometric analysis for spatial discretization and verify the aforementioned properties by adopting the method of manufactured solutions and comparing coarse mesh solutions with overkill solutions. Various problems are simulated to show the capability of the method. Our methodology provides a means of constructing unconditionally stable numerical schemes for nonlinear non-convex hyperbolic systems of conservation laws.
Joux, Antoine
2009-01-01
Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program. Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic
Piepmeier, Jeffrey; Mohammed, Priscilla; De Amici, Giovanni; Kim, Edward; Peng, Jinzheng; Ruf, Christopher; Hanna, Maher; Yueh, Simon; Entekhabi, Dara
2016-01-01
The purpose of the Soil Moisture Active Passive (SMAP) radiometer calibration algorithm is to convert Level 0 (L0) radiometer digital counts data into calibrated estimates of brightness temperatures referenced to the Earth's surface within the main beam. The algorithm theory in most respects is similar to what has been developed and implemented for decades for other satellite radiometers; however, SMAP includes two key features heretofore absent from most satellite borne radiometers: radio frequency interference (RFI) detection and mitigation, and measurement of the third and fourth Stokes parameters using digital correlation. The purpose of this document is to describe the SMAP radiometer and forward model, explain the SMAP calibration algorithm, including approximations, errors, and biases, provide all necessary equations for implementing the calibration algorithm and detail the RFI detection and mitigation process. Section 2 provides a summary of algorithm objectives and driving requirements. Section 3 is a description of the instrument and Section 4 covers the forward models, upon which the algorithm is based. Section 5 gives the retrieval algorithm and theory. Section 6 describes the orbit simulator, which implements the forward model and is the key for deriving antenna pattern correction coefficients and testing the overall algorithm.
Hong, Haoyuan; Tsangaratos, Paraskevas; Ilia, Ioanna; Liu, Junzhi; Zhu, A-Xing; Xu, Chong
2018-07-15
The main objective of the present study was to utilize Genetic Algorithms (GA) to obtain the optimal combination of forest fire related variables and apply data mining methods to construct a forest fire susceptibility map. In the proposed approach, a Random Forest (RF) and a Support Vector Machine (SVM) were used to produce a forest fire susceptibility map for Dayu County, which is located in the southwest of Jiangxi Province, China. For this purpose, historic forest fires and thirteen forest fire related variables were analyzed, namely: elevation, slope angle, aspect, curvature, land use, soil cover, heat load index, normalized difference vegetation index, mean annual temperature, mean annual wind speed, mean annual rainfall, distance to river network, and distance to road network. The Natural Break and Certainty Factor methods were used to classify and weight the thirteen variables, while a multicollinearity analysis was performed to determine the correlation among the variables and assess their usability. The optimal set of variables determined by the GA limited the number of variables to eight, excluding aspect, land use, heat load index, distance to river network, and mean annual rainfall from the analysis. The performance of the forest fire models was evaluated using the area under the Receiver Operating Characteristic curve (ROC-AUC) based on the validation dataset. Overall, the RF models gave higher AUC values. The results also showed that the proposed optimized models outperform the original models. Specifically, the optimized RF model gave the best results (0.8495), followed by the original RF (0.8169), while the optimized SVM gave lower values (0.7456) than the RF, though higher than the original SVM (0.7148). The study highlights the significance of feature selection techniques in forest fire susceptibility, whereas data mining methods can be considered a valid approach for forest fire susceptibility modeling.
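A bit-string GA for variable-subset selection of the kind used here can be sketched as follows. Each chromosome is a 0/1 mask over the thirteen candidate variables; the fitness function shown in any test is a toy placeholder standing in for what the paper actually optimizes (e.g. a cross-validated RF AUC), and all parameter values are assumptions:

```python
import random

def genetic_select(n_vars, fitness, pop_size=30, n_gen=80,
                   p_cross=0.8, p_mut=0.05, seed=42):
    """Bit-string GA for variable selection. Each chromosome is a
    0/1 mask over candidate variables; higher fitness is better.
    Returns the best mask encountered over all generations."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_vars)]
           for _ in range(pop_size)]
    best, best_fit = None, float("-inf")

    def tournament():
        # Binary tournament: pick two, keep a copy of the fitter one.
        a, b = rng.sample(pop, 2)
        return (a if fitness(a) >= fitness(b) else b)[:]

    for _ in range(n_gen):
        for ind in pop:                       # track the best mask seen
            f = fitness(ind)
            if f > best_fit:
                best, best_fit = ind[:], f
        nxt = []
        while len(nxt) < pop_size:
            child = tournament()
            if rng.random() < p_cross:        # one-point crossover
                mate = tournament()
                cut = rng.randrange(1, n_vars)
                child = child[:cut] + mate[cut:]
            for i in range(n_vars):           # bit-flip mutation
                if rng.random() < p_mut:
                    child[i] = 1 - child[i]
            nxt.append(child)
        pop = nxt
    return best
```

With n_vars = 13 and a fitness that penalizes uninformative variables, the GA converges to a reduced mask, mirroring the 13-to-8 reduction reported above.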
Hougardy, Stefan
2016-01-01
Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.
Gong, Yuezheng; Zhao, Jia; Wang, Qi
2017-10-01
A quasi-incompressible hydrodynamic phase field model for flows of fluid mixtures of two incompressible viscous fluids of distinct densities and viscosities is derived using the generalized Onsager principle, which warrants the variational structure, mass conservation, and the energy dissipation law. We recast the model in an equivalent form and first discretize the equivalent system in space to arrive at a time-dependent ordinary differential and algebraic equation (DAE) system, which preserves the mass conservation and energy dissipation law at the semi-discrete level. Then, we develop a temporal discretization scheme for the DAE system, where the mass conservation and the energy dissipation law are once again preserved at the fully discretized level. We prove that the fully discretized algorithm is unconditionally energy stable. Several numerical examples, including drop dynamics of viscous fluid drops immersed in another viscous fluid matrix and mixing dynamics of binary polymeric solutions, are presented to show the convergence property as well as the accuracy and efficiency of the new scheme.
Tel, G.
We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of
Haoqian Huang
2014-12-01
High accuracy attitude and position determination is very important for underwater gliders. The cross-coupling among the three attitude angles (heading angle, pitch angle, and roll angle) becomes more serious when pitch or roll motion occurs. This cross-coupling makes attitude angles inaccurate or even erroneous. Therefore, high accuracy attitude and position determination becomes a difficult problem for a practical underwater glider. To solve this problem, this paper proposes backtracking decoupling and adaptive extended Kalman filtering based on the quaternion expanded to the state variable (BD-AEKF). The backtracking decoupling can effectively eliminate the cross-coupling among the three attitudes when pitch or roll motion occurs. After decoupling, the adaptive extended Kalman filter (AEKF) based on the quaternion expanded to the state variable further smooths the filtering output to improve the accuracy and stability of attitude and position determination. In order to evaluate the performance of the proposed BD-AEKF method, pitch and roll motion are simulated and the performance of the proposed method is analyzed and compared with the traditional method. Simulation results demonstrate that the proposed BD-AEKF performs better. Furthermore, for further verification, a new underwater navigation system is designed, and three-axis non-magnetic turntable experiments and vehicle experiments are performed. The results show that the proposed BD-AEKF is effective in eliminating cross-coupling and reducing the errors compared with the conventional method.
Zgirski, Bartlomiej; Pietrzyński, Grzegorz; Wielgorski, Piotr; Narloch, Weronika; Graczyk, Dariusz [Nicolaus Copernicus Astronomical Center, Polish Academy of Sciences, Bartycka 18, 00-716 Warsaw (Poland); Gieren, Wolfgang; Gorski, Marek [Universidad de Concepcion, Departamento de Astronomia, Casilla 160-C, Concepcion (Chile); Karczmarek, Paulina [Warsaw University Observatory, Al. Ujazdowskie 4, 00-478, Warsaw (Poland); Kudritzki, Rolf-Peter; Bresolin, Fabio, E-mail: bzgirski@camk.edu.pl, E-mail: pietrzyn@camk.edu.pl, E-mail: pwielgor@camk.edu.pl, E-mail: wnarloch@camk.edu.pl, E-mail: darek@astro-udec.cl, E-mail: mgorski@astrouw.edu.pl, E-mail: wgieren@astro-udec.cl, E-mail: pkarczmarek@astrouw.edu.pl, E-mail: kud@ifa.hawaii.edu, E-mail: bresolin@ifa.hawaii.edu [Institute for Astronomy, University of Hawaii at Manoa, 2680 Woodlawn Drive, Honolulu HI 96822 (United States)
2017-10-01
Following the earlier discovery of classical Cepheid variables in the Sculptor Group spiral galaxy NGC 7793 in an optical wide-field imaging survey, we have performed deep near-infrared J- and K-band follow-up photometry of a subsample of these Cepheids to derive the distance to this galaxy with higher accuracy than was possible from optical photometry alone, by minimizing the effects of reddening and metallicity on the distance result. Combining our new near-infrared period–luminosity relations with previous optical photometry, we obtain a true distance modulus to NGC 7793 of (27.66 ± 0.04) mag (statistical) ±0.07 mag (systematic), i.e., a distance of (3.40 ± 0.17) Mpc. We also determine the mean reddening affecting the Cepheids to be E(B − V) = (0.08 ± 0.02) mag, demonstrating that there is significant dust extinction intrinsic to the galaxy in addition to the small foreground extinction. The new, improved Cepheid distance agrees with earlier Tully–Fisher and TRGB distance determinations of NGC 7793 within the reported uncertainties of those measurements.
Variable importance in latent variable regression models
Kvalheim, O.M.; Arneberg, R.; Bleie, O.; Rajalahti, T.; Smilde, A.K.; Westerhuis, J.A.
2014-01-01
The quality and practical usefulness of a regression model are a function of both interpretability and prediction performance. This work presents some new graphical tools for improved interpretation of latent variable regression models that can also assist in improved algorithms for variable
Parkes, Ben; Defrance, Dimitri; Sultan, Benjamin; Ciais, Philippe; Wang, Xuhui
2018-02-01
The ability of a region to feed itself in the upcoming decades is an important issue. The West African population is expected to increase significantly in the next 30 years. The responses of crops to short-term climate change are critical to the population and to the decision makers tasked with food security. This leads to three questions: how will crop yields change in the near future? What influence will climate change have on crop failures? Which adaptation methods should be employed to ameliorate undesirable changes? An ensemble of near-term climate projections is used to simulate maize, millet and sorghum in West Africa in the recent historic period (1986-2005) and in a near-term future when global temperatures are 1.5 K above pre-industrial levels, to assess the change in yield, yield variability and crop failure rate. Four crop models were used to simulate maize, millet and sorghum in West Africa in the historic and future climates. Across the majority of West Africa the maize, millet and sorghum yields are shown to fall. In the regions where yields increase, the variability also increases. This increase in variability raises the likelihood of crop failures, which are defined as negative yield anomalies beyond 1 standard deviation of the historic period. The increasing variability increases the frequency of crop failures across West Africa: the return time of crop failures falls from 8.8, 9.7 and 10.1 years to 5.2, 6.3 and 5.8 years for maize, millet and sorghum respectively. The adoption of heat-resistant cultivars and the use of captured rainwater have been investigated using one crop model as an idealized sensitivity test. The generalized adoption of a cultivar resistant to high-temperature stress during flowering is shown to be more beneficial than rainwater harvesting.
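The failure definition used in the abstract above (a negative yield anomaly beyond 1 standard deviation of the historic period) maps directly onto a return-time calculation. A minimal sketch with synthetic yields; the function name and data are illustrative, not the paper's:

```python
import numpy as np

def crop_failure_return_time(historic, future):
    """Mean years between failures, where a failure is a yield more than
    1 historic standard deviation below the historic mean."""
    threshold = historic.mean() - historic.std()
    failures = (future < threshold).sum()
    if failures == 0:
        return float("inf")
    return len(future) / failures  # years per failure

rng = np.random.default_rng(0)
historic = rng.normal(2.0, 0.3, 20)   # t/ha, synthetic historic yields
future = rng.normal(1.9, 0.45, 20)    # lower mean, higher variability
print(crop_failure_return_time(historic, future))
```

A rise in variability (the standard deviation of `future`) pushes more years below the fixed historic threshold, shortening the return time even if the mean yield is unchanged.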
Novel Algorithms for Astronomical Plate Analyses
Hudec, René; Hudec, L.
2011-01-01
Roč. 32, 1-2 (2011), s. 121-123 ISSN 0250-6335. [Conference on Multiwavelength Variability of Blazars. Guangzhou, 22.09.2010-24.09.2010] R&D Projects: GA ČR GA205/08/1207 Grant - others: GA ČR(CZ) GA102/09/0997; MŠMT(CZ) ME09027 Institutional research plan: CEZ:AV0Z10030501 Keywords: astronomical plates * plate archives * astronomical algorithms Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics Impact factor: 0.400, year: 2011
Improved algorithm for surface display from volumetric data
Lobregt, S.; Schaars, H.W.G.K.; OpdeBeek, J.C.A.; Zonneveld, F.W.
1988-01-01
A high-resolution surface display is produced from three-dimensional datasets (computed tomography or magnetic resonance imaging). Unlike other voxel-based methods, this algorithm does not show a cuberille surface structure, because the surface orientation is calculated from original gray values. The applied surface shading is a function of local orientation and position of the surface and of a virtual light source, giving a realistic impression of the surface of bone and soft tissue. The projection and shading are table driven, combining variable viewpoint and illumination conditions with speed. Other options are cutplane gray-level display and surface transparency. Combined with volume scanning, this algorithm offers powerful application possibilities
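The key idea in the abstract above — computing surface orientation from the original gray values rather than from a binary voxel surface — can be sketched with a central-difference gradient and Lambertian shading. This is a minimal illustration under my own assumptions, not the authors' table-driven implementation:

```python
import numpy as np

def shade_surface(vol, light):
    """Gray-value gradient shading sketch: the normal at each voxel is the
    normalised intensity gradient (central differences), and brightness is
    Lambertian, max(0, normal . light). Avoids the blocky 'cuberille' look
    of normals derived from a binarised surface."""
    g = np.stack(np.gradient(vol.astype(float)), axis=-1)  # (.., 3) gradient
    norm = np.linalg.norm(g, axis=-1, keepdims=True)
    n = g / np.where(norm == 0.0, 1.0, norm)               # unit normals
    return np.clip(n @ light, 0.0, None)                   # Lambertian term

# distance-from-centre volume: gradients point radially outward
c = 16
idx = np.indices((33, 33, 33)).transpose(1, 2, 3, 0)
vol = np.linalg.norm(idx - c, axis=-1)
bright = shade_surface(vol, np.array([0.0, 0.0, 1.0]))
print(bright[c, c, c + 10])  # → 1.0 (normal here is +z, aligned with light)
```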
Optimal Quadratic Programming Algorithms
Dostal, Zdenek
2009-01-01
Quadratic programming (QP) is a technique for optimizing a quadratic function of several variables subject to linear constraints. This title presents various algorithms for solving large QP problems. It is suitable as an introductory text on quadratic programming for graduate students and researchers.
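As a concrete instance of the problem class described above, an equality-constrained QP can be solved directly from its KKT system. This small sketch is my own, not from the book, and assumes Q is positive definite and A has full row rank:

```python
import numpy as np

def solve_eq_qp(Q, c, A, b):
    """Solve min 1/2 x^T Q x + c^T x  s.t.  A x = b
    via the KKT system  [[Q, A^T], [A, 0]] [x; lam] = [-c; b]."""
    n, m = Q.shape[0], A.shape[0]
    K = np.block([[Q, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([-c, b])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]  # drop the Lagrange multipliers

# minimize x1^2 + x2^2 subject to x1 + x2 = 1
Q = 2 * np.eye(2)
c = np.zeros(2)
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
print(solve_eq_qp(Q, c, A, b))  # → [0.5 0.5]
```

Inequality-constrained QPs (the harder case the book targets) add active-set or interior-point machinery on top of this building block.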
Hybridizing Evolutionary Algorithms with Opportunistic Local Search
Gießen, Christian
2013-01-01
There is empirical evidence that memetic algorithms (MAs) can outperform plain evolutionary algorithms (EAs). Recently the first runtime analyses have been presented proving the aforementioned conjecture rigorously by investigating Variable-Depth Search, VDS for short (Sudholt, 2008). Sudholt...
Pereira, Claudio M.N.A. [Instituto de Engenharia Nuclear (IEN), Rio de Janeiro, RJ (Brazil); Schirru, Roberto; Martinez, Aquilino S. [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia
1997-12-01
This work presents a prototype of a system for nuclear reactor core design optimization based on genetic algorithms and artificial neural networks. A neural network is modeled and trained to predict the flux and the neutron multiplication factor values based on the enrichment, network pitch and cladding thickness, with an average error of less than 2%. The values predicted by the neural network are used by a genetic algorithm in its heuristic search, guided by an objective function that rewards high flux values and penalizes multiplication factors far from the required value. By associating this quick prediction - which may substitute for the reactor physics calculation code - with the global optimization capacity of the genetic algorithm, a fast and effective system for nuclear reactor core design optimization was obtained. (author). 11 refs., 8 figs., 3 tabs.
Approximate k-NN delta test minimization method using genetic algorithms: Application to time series
Mateo, F; Gadea, Rafael; Sovilj, Dusan
2010-01-01
In many real-world problems, the existence of irrelevant input variables (features) hinders the predictive quality of the models used to estimate the output variables. In particular, time series prediction often involves building large regressors of artificial variables that can contain irrelevant or misleading information. Many techniques have arisen to confront the problem of accurate variable selection, including both local and global search strategies. This paper presents a method based on genetic algorithms that aims to find a globally optimal set of input variables minimizing the Delta Test criterion. The execution speed has been enhanced by replacing the exact nearest-neighbor computation with its approximate version. The problems of scaling and projection of variables have been addressed. The developed method works in conjunction with MATLAB's Genetic Algorithm and Direct Search Toolbox. The goodness of the proposed methodology has been evaluated on several popular time series examples, and also ...
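The Delta Test criterion the abstract minimizes is a nearest-neighbour noise-variance estimate: delta = (1/2N) * sum_i (y[nn(i)] - y[i])^2, where nn(i) is i's nearest neighbour in the selected input space. A brute-force sketch (the GA of the paper would call this, or its approximate-k-NN version, as the fitness of each variable subset):

```python
import numpy as np

def delta_test(X, y):
    """Delta Test noise-variance estimate using exact nearest neighbours:
    delta = 1/(2N) * sum over i of (y[nn(i)] - y[i])^2."""
    N = len(X)
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
    np.fill_diagonal(d, np.inf)                          # exclude self-match
    nn = d.argmin(axis=1)
    return ((y[nn] - y) ** 2).sum() / (2 * N)

rng = np.random.default_rng(1)
X = rng.uniform(size=(200, 3))
y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.normal(size=200)  # only x0 matters
# dropping the two irrelevant inputs lowers the Delta Test estimate
print(delta_test(X, y), delta_test(X[:, :1], y))
```

A lower delta for a subset suggests the subset explains the output better, which is why minimizing it drives variable selection.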
Shahriari, Mohammadreza
2016-06-01
The time-cost tradeoff problem is one of the most important and widely applicable problems in the project scheduling area. Many factors force managers to crash the project time: early utilization, early commissioning and operation, improving the project cash flow, avoiding unfavorable weather conditions, compensating for delays, and so on. Since extra resources must be allocated to shorten the finishing time of a project, and project managers intend to spend the lowest possible amount of money while achieving the maximum crashing time, both direct and indirect costs are influenced, and the time value of money comes into play: when the starting activities of a project are crashed, the extra investment is tied up until the end date of the project, whereas when the final activities are crashed, the extra investment is tied up for a much shorter period. This study presents a two-objective mathematical model for balancing project time compression against activity delays, to provide a suitable tool for decision makers constrained by available facilities and project due dates. The model is also drawn closer to real-world conditions by considering a nonlinear objective function and the time value of money. The presented problem was solved using NSGA-II, and the effect of time compression on the non-dominated set is reported.
Mao, Shihong; Lu, Changqi; Li, Meifeng; Ye, Yulong; Wei, Xu; Tong, Huarong
2018-04-13
Gas chromatography-olfactometry (GC-O) is the most frequently used method to estimate the sensory contribution of a single odorant, but it disregards the interactions between volatiles. In order to select the key volatiles responsible for the aroma attributes of Congou black tea (Camellia sinensis), instrumental, sensory and multivariate statistical approaches were applied. By sensory analysis, nine panelists developed 8 descriptors, namely floral, sweet, fruity, green, roasted, oil, spicy, and off-odor. Linalool, (E)-furan linalool oxide, (Z)-pyran linalool oxide, methyl salicylate, β-myrcene and phenylethyl alcohol, identified from the most representative samples by the GC-O procedure, were the essential aroma-active compounds in the formation of the basic Congou black tea aroma. In addition, 136 volatiles were identified by gas chromatography-mass spectrometry (GC-MS), among which 55 compounds were determined to be the key factors for the six sensory attributes by partial least-squares regression (PLSR) with variable importance in projection (VIP) scores. Our results demonstrated that HS-SPME/GC-MS/GC-O is a fast approach for isolating and quantifying aroma-active compounds. The PLSR method was also shown to be a useful tool for selecting important variables for sensory attributes. These two strategies allowed us to comprehensively evaluate the sensorial contribution of single volatiles from different perspectives, and can be applied to related products for comprehensive quality control. This article is protected by copyright. All rights reserved.
Frederiksen, Carsten S.; Ying, Kairan; Grainger, Simon; Zheng, Xiaogu
2018-04-01
Models from the coupled model intercomparison project phase 5 (CMIP5) dataset are evaluated for their ability to simulate the dominant slow modes of interannual variability in the Northern Hemisphere atmospheric circulation 500 hPa geopotential height in the twentieth century. A multi-model ensemble of the best 13 models has then been used to identify the leading modes of interannual variability in components related to (1) intraseasonal processes; (2) slowly-varying internal dynamics; and (3) the slowly-varying response to external changes in radiative forcing. Modes in the intraseasonal component are related to intraseasonal variability in the North Atlantic, North Pacific and North American, and Eurasian regions and are little affected by the larger radiative forcing of the Representative Concentration Pathways 8.5 (RCP8.5) scenario. The leading modes in the slow-internal component are related to the El Niño-Southern Oscillation, Pacific North American or Tropical Northern Hemisphere teleconnection, the North Atlantic Oscillation, and the Western Pacific teleconnection pattern. While the structure of these slow-internal modes is little affected by the larger radiative forcing of the RCP8.5 scenario, their explained variance increases in the warmer climate. The leading mode in the slow-external component has a significant trend and is shown to be related predominantly to the climate change trend in the well mixed greenhouse gas concentration during the historical period. This mode is associated with increasing height in the 500 hPa pressure level. A secondary influence on this mode is the radiative forcing due to stratospheric aerosols associated with volcanic eruptions. The second slow-external mode is shown to be also related to radiative forcing due to stratospheric aerosols. Under RCP8.5 there is only one slow-external mode related to greenhouse gas forcing with a trend over four times the historical trend.
Algorithms in combinatorial design theory
Colbourn, CJ
1985-01-01
The scope of the volume includes all algorithmic and computational aspects of research on combinatorial designs. Algorithmic aspects include generation, isomorphism and analysis techniques - both heuristic methods used in practice, and the computational complexity of these operations. The scope within design theory includes all aspects of block designs, Latin squares and their variants, pairwise balanced designs and projective planes and related geometries.
Kopacz, Michał
2017-09-01
The paper attempts to assess the impact of the variability of selected geological (deposit) parameters on the value and risk of projects in the hard coal mining industry. The study was based on simulated discounted cash flow analysis, and the results were verified for three existing bituminous coal seams. The Monte Carlo simulation was based on the nonparametric bootstrap method, while correlations between individual deposit parameters were replicated with the use of an empirical copula. The calculations take into account the uncertainty in the parameters of the empirical distributions of the deposit variables. The Net Present Value (NPV) and the Internal Rate of Return (IRR) were selected as the main measures of value and risk, respectively. The impact of the volatility and correlation of deposit parameters was analyzed in two respects, by identifying the overall effect of the correlated variability of the parameters and the individual impact of the correlation on the NPV and IRR. For this purpose a differential approach was used, allowing the possible errors in the calculation of these measures to be determined in numerical terms. Based on the study it can be concluded that the mean value of the overall effect of the variability does not exceed 11.8% of NPV and 2.4 percentage points of IRR. Neglecting the correlations results in overestimating the NPV and the IRR by up to 4.4% and 0.4 percentage points respectively. It should be noted, however, that the differences in NPV and IRR values can vary significantly, and their interpretation depends on the likelihood of implementation. Generalizing the obtained results, based on the average values, the maximum value of the risk premium under the given calculation conditions of the "X" deposit, with correspondingly large datasets (greater than 2500), should not be higher than 2.4 percentage points. The impact of the analyzed geological parameters on the NPV and IRR depends primarily on their co-existence, which can be
Rosero-Vlasova, O.; Borini Alves, D.; Vlassova, L.; Perez-Cabello, F.; Montorio Lloveria, R.
2017-10-01
Deforestation in the Amazon basin due, among other factors, to frequent wildfires demands continuous post-fire monitoring of soil and vegetation. Thus, the study posed two objectives: (1) evaluate the capacity of Visible - Near InfraRed - ShortWave InfraRed (VIS-NIR-SWIR) spectroscopy to estimate soil organic matter (SOM) in fire-affected soils, and (2) assess the feasibility of SOM mapping from satellite images. For this purpose, 30 soil samples (surface layer) were collected in 2016 in areas of grass and riparian vegetation of Campos Amazonicos National Park, Brazil, repeatedly affected by wildfires. Standard laboratory procedures were applied to determine SOM. Reflectance spectra of the soils were obtained under controlled laboratory conditions using a Fieldspec4 spectroradiometer (spectral range 350-2500 nm). Measured spectra were resampled to simulate reflectances for Landsat-8, Sentinel-2 and EnMap spectral bands, used as predictors in SOM models developed using Partial Least Squares regression with a step-down variable selection algorithm (PLSR-SD). The best fit was achieved with models based on reflectances simulated for EnMap bands (R2=0.93; R2cv=0.82 and NMSE=0.07; NMSEcv=0.19). The model uses only 8 out of 244 predictors (bands) chosen by the step-down variable selection algorithm. The least reliable estimates (R2=0.55 and R2cv=0.40 and NMSE=0.43; NMSEcv=0.60) resulted from the Landsat model, while the Sentinel-2 model showed R2=0.68 and R2cv=0.63; NMSE=0.31 and NMSEcv=0.38. The results confirm the high potential of VIS-NIR-SWIR spectroscopy for SOM estimation. Application of the step-down selection produces sparser and better-fitting models. Finally, SOM can be estimated with an acceptable accuracy (NMSE 0.35) from EnMap and Sentinel-2 data, enabling mapping and analysis of the impacts of repeated wildfires on soils in the study area.
Ortiz, Héctor; Biondo, Sebastiano; Codina, Antonio; Ciga, Miguel Á; Enríquez-Navascués, José M; Espín, Eloy; García-Granero, Eduardo; Roig, José Vicente
2016-01-01
This multicentre observational study examines variation between hospitals in postoperative mortality after elective surgery in the Rectal Cancer Project of the Spanish Society of Surgeons and explores whether hospital volume and patient characteristics contribute to any variation between hospitals. Hospital variation was quantified using a multilevel approach on prospective data derived from the multicentre database of all rectal adenocarcinomas operated by an anterior resection or an abdominoperineal excision at 84 surgical departments from 2006 to 2013. The following variables were included in the analysis: demographics, American Society of Anaesthesiologists classification, tumour location and stage, administration of neoadjuvant treatment, and annual volume of surgical procedures. A total of 9809 consecutive patients were included. The rate of 30-day postoperative mortality was 1.8%. Stratified by annual surgical volume, hospitals varied from 1.4 to 2.0 in 30-day mortality. In the multilevel regression analysis, male gender (OR 1.623 [1.143; 2.348]; P<.008), increased age (OR: 5.811 [3.479; 10.087]; P<.001), and ASA score (OR 10.046 [3.390; 43.185]; P<.001) were associated with 30-day mortality. However, annual surgical volume was not associated with mortality (OR 1.309 [0.483; 4.238]; P=.619). In addition, there was a statistically significant variation in mortality between all departments (MOR 1.588 [1.293; 2.015]; P<.001). Postoperative mortality varies significantly among hospitals included in the project and this difference cannot be attributed to the annual surgical volume. Copyright © 2015 AEC. Publicado por Elsevier España, S.L.U. All rights reserved.
Tang Qiulin; Zeng, Gengsheng L; Gullberg, Grant T
2007-01-01
In this paper, we develop an approximate analytical reconstruction algorithm that compensates for uniform attenuation in 2D parallel-beam SPECT with a 180° acquisition. This new algorithm is in the form of a direct Fourier reconstruction. The complex variable central slice theorem is used to derive this algorithm. The image is reconstructed with the following steps: first, the attenuated projection data acquired over 180° are extended to 360° and the value for the uniform attenuator is changed to a negative value. The Fourier transform (FT) of the image in polar coordinates is obtained from the Fourier transform of an analytic function interpolated from an extension of the projection data according to the complex central slice theorem. Finally, the image is obtained by performing a 2D inverse Fourier transform. Computer simulations and comparison studies with a 360° full-scan algorithm are provided
Anglada-Escude, Guillem; Butler, R. Paul, E-mail: anglada@dtm.ciw.edu [Carnegie Institution of Washington, Department of Terrestrial Magnetism, 5241 Broad Branch Rd. NW, Washington, DC 20015 (United States)
2012-06-01
Doppler spectroscopy has uncovered or confirmed all the known planets orbiting nearby stars. Two main techniques are used to obtain precision Doppler measurements at optical wavelengths. The first approach is the gas cell method, which consists of least-squares matching of the spectrum of iodine imprinted on the spectrum of the star. The second method relies on the construction of a stabilized spectrograph externally calibrated in wavelength. The most precise stabilized spectrometer in operation is the High Accuracy Radial velocity Planet Searcher (HARPS), operated by the European Southern Observatory in La Silla Observatory, Chile. The Doppler measurements obtained with HARPS are typically obtained using the cross-correlation function (CCF) technique. This technique consists of multiplying the stellar spectrum by a weighted binary mask and finding the minimum of the product as a function of the Doppler shift. It is known that CCF is suboptimal in exploiting the Doppler information in the stellar spectrum. Here we describe an algorithm to obtain precision radial velocity measurements using least-squares matching of each observed spectrum to a high signal-to-noise ratio template derived from the same observations. This algorithm is implemented in our software HARPS-TERRA (Template-Enhanced Radial velocity Re-analysis Application). New radial velocity measurements on a representative sample of stars observed by HARPS are used to illustrate the benefits of the proposed method. We show that, compared with CCF, template matching provides a significant improvement in accuracy, especially when applied to M dwarfs.
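The CCF technique described above — multiplying the spectrum by a weighted binary mask and scanning over trial Doppler shifts — can be sketched on a synthetic one-line spectrum. The function, mask, and spectrum here are illustrative assumptions, not HARPS pipeline code:

```python
import numpy as np

C_KMS = 299792.458  # speed of light, km/s

def ccf_velocity(wave, flux, mask_centers, mask_weights, v_grid):
    """Toy CCF: for each trial velocity, sum the (interpolated) flux at the
    Doppler-shifted mask line positions; the CCF minimum traces the
    absorption lines, so argmin gives the radial velocity."""
    ccf = []
    for v in v_grid:
        shifted = mask_centers * (1.0 + v / C_KMS)
        ccf.append(np.sum(mask_weights * np.interp(shifted, wave, flux)))
    return v_grid[np.argmin(ccf)]

# synthetic spectrum: one Gaussian absorption line, red-shifted by 10 km/s
wave = np.linspace(5000.0, 5010.0, 4000)   # wavelength grid, Angstrom
line0 = 5005.0                             # rest wavelength of the mask line
v_true = 10.0
center = line0 * (1.0 + v_true / C_KMS)
flux = 1.0 - 0.5 * np.exp(-0.5 * ((wave - center) / 0.05) ** 2)

v_grid = np.linspace(-30.0, 30.0, 601)     # trial velocities, 0.1 km/s step
print(ccf_velocity(wave, flux, np.array([line0]), np.array([1.0]), v_grid))
```

Template matching, the method the paper advocates, replaces the sparse binary mask with a high signal-to-noise template of the star itself, using far more of the Doppler information in each pixel.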
Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao
2014-10-07
In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.
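The weighted binary matrix sampling (WBMS) idea above — sample variable subsets with per-variable inclusion probabilities, then update those probabilities from the best sub-models so the variable space shrinks — can be sketched as follows. The AIC-style scoring criterion and the function names are my own illustrative choices, not the VISSA implementation:

```python
import numpy as np

def wbms(weights, n_models, rng):
    """Weighted binary matrix sampling: row i of the returned matrix selects
    a variable subset; variable j is included with probability weights[j]."""
    return rng.uniform(size=(n_models, len(weights))) < weights

def shrink_step(X, y, weights, n_models, top_frac, rng):
    """One hypothetical VISSA-style iteration: score each sampled subset with
    an AIC-like least-squares criterion, then reset each weight to that
    variable's frequency among the best-scoring subsets."""
    B = wbms(weights, n_models, rng)
    scores = np.full(n_models, np.inf)
    for i, row in enumerate(B):
        if not row.any():
            continue  # empty subset: leave score at +inf
        coef, *_ = np.linalg.lstsq(X[:, row], y, rcond=None)
        mse = ((X[:, row] @ coef - y) ** 2).mean()
        scores[i] = len(y) * np.log(mse) + 2 * row.sum()  # fit + size penalty
    best = B[np.argsort(scores)[: max(1, int(top_frac * n_models))]]
    return best.mean(axis=0)  # new inclusion weights

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 6))
y = X[:, 0] - 2 * X[:, 1] + 0.05 * rng.normal(size=100)  # only vars 0, 1 matter
w = np.full(6, 0.5)
for _ in range(5):
    w = shrink_step(X, y, w, 200, 0.1, rng)
print(w.round(2))  # weights of the two informative variables stay near 1
```

This reproduces the two rules highlighted in the abstract: the sampled variable space shrinks each step, and it shrinks toward the subsets that perform best.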
Creutz, M.
1987-11-01
A large variety of Monte Carlo algorithms are being used for lattice gauge simulations. For purely bosonic theories, present approaches are generally adequate; nevertheless, overrelaxation techniques promise savings by a factor of about three in computer time. For fermionic fields the situation is more difficult and less clear. Algorithms which involve an extrapolation to a vanishing step size are all quite closely related. Methods which do not require such an approximation tend to require computer time which grows as the square of the volume of the system. Recent developments combining global accept/reject stages with Langevin or microcanonical updatings promise to reduce this growth to V^(4/3)
Hu, T C
2002-01-01
Newly enlarged, updated second edition of a valuable, widely used text presents algorithms for shortest paths, maximum flows, dynamic programming and backtracking. Also discussed are binary trees, heuristic and near optimums, matrix multiplication, and NP-complete problems. 153 black-and-white illus. 23 tables. New to this edition: Chapter 9
Searching Algorithms Implemented on Probabilistic Systolic Arrays
Kramosil, Ivan
1996-01-01
Roč. 25, č. 1 (1996), s. 7-45 ISSN 0308-1079 R&D Projects: GA ČR GA201/93/0781 Keywords : searching algorithms * probabilistic algorithms * systolic arrays * parallel algorithms Impact factor: 0.214, year: 1996
Mircea FULEA
2009-01-01
Full Text Available In an evolving, highly turbulent and uncertain socio-economic environment, organizations must consider strategies for the systematic and continuous integration of innovation within their business systems, as a fundamental condition for sustainable development. Adequate methodologies are required in this respect. A mature framework for integrating innovative problem solving approaches within business process improvement methodologies is proposed in this paper. It considers a TRIZ-centred algorithm in the improvement phase of the DMAIC methodology. The new tool is called enhanced sigma-TRIZ. A case study reveals the practical application of the proposed methodology. The integration of enhanced sigma-TRIZ within a knowledge management software platform (KMSP) is further described. Specific developments to support processes of knowledge creation, knowledge storage and retrieval, knowledge transfer and knowledge application in a friendly and effective way within the KMSP are also highlighted.
A. R. Khamitov
2016-01-01
Full Text Available Indications for the conservation of the skin flap over the tumor, for potential relocation of the operative access to an aesthetically acceptable zone in patients with primary nodular breast cancer, are discussed in the article. The survey results of 203 patients (T1–2N0–3M0) are analyzed. The study revealed that the risk factors affecting skin flap involvement are the presence of skin flattening as well as topographic and anatomical characteristics: tumor < 3 cm located at a depth of < 0.46 ± 0.2 cm, or tumor ≥ 3 cm located at a depth of < 1.66 cm. Based on these data, an algorithm for immediate breast reconstruction from an aesthetically acceptable zone is compiled for the surgical oncologist.
Efficient AM Algorithms for Stochastic ML Estimation of DOA
Haihua Chen
2016-01-01
Full Text Available The estimation of direction-of-arrival (DOA) of signals is a basic and important problem in sensor array signal processing. To solve this problem, many algorithms have been proposed, among which Stochastic Maximum Likelihood (SML) is one of the most studied because of its high DOA accuracy. However, SML estimation generally involves a multidimensional nonlinear optimization problem, so its computational complexity is rather high. This paper addresses the issue of reducing the computational complexity of SML estimation of DOA based on the Alternating Minimization (AM) algorithm. We make the following two contributions. First, using matrix transformations and properties of spatial projection, we propose an efficient AM (EAM) algorithm that divides the SML criterion into two components, one depending on a single variable parameter and the other not. Second, when the array is a uniform linear array, we obtain the irreducible form of the EAM criterion (IAM) using polynomial forms. Simulation results show that both EAM and IAM greatly reduce the computational complexity of SML estimation, with IAM the best. Another advantage of IAM is that it avoids the numerical instability problem which may occur in the AM and EAM algorithms when more than one parameter converges to an identical value.
Anna Bourmistrova
2011-02-01
Full Text Available The autodriver algorithm is an intelligent method to eliminate the need for steering by a driver on a well-defined road. The proposed method performs best on a four-wheel-steering (4WS) vehicle, though it is also applicable to two-wheel-steering (TWS) vehicles. The algorithm is based on making the actual vehicle center of rotation coincide with the road center of curvature, by adjusting the kinematic center of rotation. The road center of curvature is assumed to be prior information for a given road, while the dynamic center of rotation is the output of the dynamic equations of motion of the vehicle, using steering angle and velocity measurements as inputs. We use the kinematic condition of steering to set the steering angles in such a way that the kinematic center of rotation of the vehicle sits at a desired point. At low speeds the ideal and actual paths of the vehicle are very close. As forward speed increases, the road and tire characteristics, along with the motion dynamics of the vehicle, cause the vehicle to turn about time-varying points. By adjusting the steering angles, our algorithm controls the dynamic turning center of the vehicle so that it coincides with the road curvature center, hence keeping the vehicle on a given road autonomously. The position and orientation errors are used as feedback signals in a closed-loop control to adjust the steering angles. The application of the presented autodriver algorithm demonstrates reliable performance under different driving conditions.
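Under a purely kinematic, bicycle-style simplification (my own assumption, not the authors' vehicle model), placing the turning centre abreast of the vehicle mid-point at the road's radius of curvature gives equal and opposite front and rear steering angles for a 4WS vehicle:

```python
import math

def fourws_steering(L, R):
    """Kinematic 4WS sketch: put the turning centre abreast of the vehicle
    mid-point at lateral distance R (road curvature radius); with wheelbase L,
    front and rear axles then steer by equal and opposite angles so both wheel
    axes pass through that centre."""
    delta = math.atan2(L / 2.0, R)
    return delta, -delta  # front, rear steering angles (rad)

# wheelbase 2.8 m, road curvature centre 20 m to the side
df, dr = fourws_steering(2.8, 20.0)
print(round(math.degrees(df), 2), round(math.degrees(dr), 2))  # → 4.0 -4.0
```

At speed, tire slip moves the actual rotation centre away from this kinematic point, which is exactly the error the paper's closed-loop correction on position and orientation is designed to remove.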
Nock, Nl; Zhang, Lx
2011-11-29
Methods that can evaluate aggregate effects of rare and common variants are limited. Therefore, we applied a two-stage approach to evaluate aggregate gene effects in the 1000 Genomes Project data, which contain 24,487 single-nucleotide polymorphisms (SNPs) in 697 unrelated individuals from 7 populations. In stage 1, we identified potentially interesting genes (PIGs) as those having at least one SNP meeting Bonferroni correction using univariate multiple regression models. In stage 2, we evaluated aggregate PIG effects on the trait Q1 by modeling each gene as a latent construct defined by multiple common and rare variants, using the multivariate statistical framework of structural equation modeling (SEM). In stage 1, we found that PIGs varied markedly between a randomly selected replicate (replicate 137) and 100 other replicates, with the exception of FLT1, and that collapsing rare variants decreased false positives but increased false negatives. In stage 2, we developed a good-fitting SEM model that included all nine genes simulated to affect Q1 (FLT1, KDR, ARNT, ELAV4, FLT4, HIF1A, HIF3A, VEGFA, VEGFC) and found that FLT1 had the largest effect on Q1 (βstd = 0.33 ± 0.05). Using replicate 137 estimates as population values, we found that the mean relative bias in the parameters (loadings, paths, residuals) and their standard errors across 100 replicates was, on average, less than 5%. Our latent-variable SEM approach provides a viable framework for modeling aggregate effects of rare and common variants in multiple genes, but more elegant methods are needed in stage 1 to minimize type I and type II error.
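Stage 1 of such a screen can be sketched as a per-SNP univariate regression with a Bonferroni threshold. The code below uses simulated genotypes and a normal approximation to the t test; the sample size, SNP count, and effect size are illustrative, not those of the 1000 Genomes data:

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(0)
n, p, causal = 400, 50, 7                            # illustrative sizes
G = rng.integers(0, 3, size=(n, p)).astype(float)    # genotypes coded 0/1/2
y = 1.0 * G[:, causal] + rng.normal(0.0, 1.0, n)     # one simulated true effect

pvals = []
for j in range(p):
    x = G[:, j] - G[:, j].mean()
    yc = y - y.mean()
    beta = (x @ yc) / (x @ x)                        # univariate OLS slope
    resid = yc - beta * x
    se = sqrt((resid @ resid) / ((n - 2) * (x @ x)))
    pvals.append(erfc(abs(beta / se) / sqrt(2)))     # 2-sided normal approx

pigs = [j for j, pv in enumerate(pvals) if pv < 0.05 / p]   # Bonferroni
```

Genes whose best SNP survives the corrected threshold would then feed into the stage-2 latent-variable SEM.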
Markham, Annette
This paper takes an actor network theory approach to explore some of the ways that algorithms co-construct identity and relational meaning in contemporary use of social media. Based on intensive interviews with participants as well as activity logging and data tracking, the author presents a richly layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also contributes an innovative method for blending actor network theory with symbolic interaction to grapple with the complexity of everyday sensemaking practices within networked global information flows.
Gregory, Kyle J.; Hill, Joanne E. (Editor); Black, J. Kevin; Baumgartner, Wayne H.; Jahoda, Keith
2016-01-01
A fundamental challenge in a spaceborne application of a gas-based Time Projection Chamber (TPC) for observation of X-ray polarization is handling the large amount of data collected. The TPC polarimeter described uses the APV-25 Application Specific Integrated Circuit (ASIC) to read out a strip detector. Two-dimensional photoelectron track images are created with a time projection technique and used to determine the polarization of the incident X-rays. The detector produces a 128x30-pixel image per photon interaction, with each pixel registering 12 bits of collected charge. This creates challenging requirements for data storage and downlink bandwidth even at a modest incidence of photons, and can have a significant impact on the overall mission cost. An approach is described for locating and isolating the photoelectron track within the detector image, yielding a much smaller data product, typically between 8x8 and 20x20 pixels. This approach is implemented using a Microsemi RT-ProASIC3-3000 Field-Programmable Gate Array (FPGA), clocked at 20 MHz and utilizing 10.7k logic gates (14% of the FPGA), 20 Block RAMs (17% of the FPGA), and no external RAM. Results will be presented, demonstrating successful photoelectron track cluster detection with minimal impact on detector dead-time.
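A software analogue of the track-isolation step might look like the following (the flight version runs in FPGA logic; the threshold is hypothetical, while the 8x8 to 20x20 window clamp mirrors the sizes quoted above):

```python
import numpy as np

def isolate_track(image, threshold, min_size=8, max_size=20):
    """Crop the smallest window around above-threshold pixels, clamped
    between min_size and max_size on each axis."""
    rows, cols = np.nonzero(image > threshold)
    if rows.size == 0:
        return None                       # no track found in this frame

    def expand(lo, hi, limit):
        size = min(max(hi - lo, min_size), max_size)
        lo = max(0, min(lo, limit - size))   # keep window inside the frame
        return lo, lo + size

    r0, r1 = expand(rows.min(), rows.max() + 1, image.shape[0])
    c0, c1 = expand(cols.min(), cols.max() + 1, image.shape[1])
    return image[r0:r1, c0:c1]

frame = np.zeros((128, 30))
frame[40:46, 10:14] = 100.0               # synthetic photoelectron track
crop = isolate_track(frame, threshold=10.0)
```

A 6x4-pixel track is padded up to the 8x8 minimum window, reducing the 128x30 frame to a small crop that still contains all of the charge.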
陶陈; 徐丹; 张海玉; 邓健
2013-01-01
A new detection algorithm for prohibition signs in outdoor environments is proposed. First, salient regions are found using the characteristic colors of road signs; then central symmetry projection is applied to the qualifying regions to compute their location and scale. Experiments show that the method achieves a high detection rate with a low false-alarm rate and meets the requirements of real-time application.
Neutronic rebalance algorithms for SIMMER
Soran, P.D.
1976-05-01
Four algorithms to solve the two-dimensional neutronic rebalance equations in SIMMER are investigated. Results of the study are presented and indicate that a matrix decomposition technique with a variable convergence criterion is the best solution algorithm in terms of accuracy and calculational speed. Rebalance numerical stability problems are examined. The results of the study can be applied to other neutron transport codes which use discrete ordinates techniques.
After introducing the basic counter machine, we discuss the Church-Post-Turing thesis. The variables are called counters because of the operations possible on them.
Lord, J; Willis, S; Eatock, J; Tappenden, P; Trapero-Bertran, M; Miners, A; Crossan, C; Westby, M; Anagnostou, A; Taylor, S; Mavranezouli, I; Wonderling, D; Alderson, P; Ruiz, F
2013-12-01
National Institute for Health and Care Excellence (NICE) clinical guidelines (CGs) make recommendations across large, complex care pathways for broad groups of patients. They rely on cost-effectiveness evidence from the literature and from new analyses for selected high-priority topics. An alternative approach would be to build a model of the full care pathway and to use this as a platform to evaluate the cost-effectiveness of multiple topics across the guideline recommendations. In this project we aimed to test the feasibility of building full guideline models for NICE guidelines and to assess if, and how, such models can be used as a basis for cost-effectiveness analysis (CEA). A 'best evidence' approach was used to inform the model parameters. Data were drawn from the guideline documentation, advice from clinical experts and rapid literature reviews on selected topics. Where possible we relied on good-quality, recent UK systematic reviews and meta-analyses. Two published NICE guidelines were used as case studies: prostate cancer and atrial fibrillation (AF). Discrete event simulation (DES) was used to model the recommended care pathways and to estimate consequent costs and outcomes. For each guideline, researchers not involved in model development collated a shortlist of topics suggested for updating. The modelling teams then attempted to evaluate options related to these topics. Cost-effectiveness results were compared with opinions about the importance of the topics elicited in a survey of stakeholders. The modelling teams developed simulations of the guideline pathways and disease processes. Development took longer and required more analytical time than anticipated. Estimates of cost-effectiveness were produced for six of the nine prostate cancer topics considered, and for five of eight AF topics. The other topics were not evaluated owing to lack of data or time constraints. The modelled results suggested 'economic priorities' for an update that differed from
A. E. Karateev
2017-01-01
Full Text Available To enhance the efficacy and safety of nonsteroidal anti-inflammatory drugs (NSAIDs), a class of essential medications used to treat acute and chronic pain, is an important and urgent task. To address it, in 2015 Russian experts provided an NSAID selection algorithm based on the assessment of risk factors (RFs) for drug-induced complications and on the prescription of drugs with the least negative effect on the gastrointestinal tract and cardiovascular system. The PRINCIPLE project was implemented to test the effectiveness of this algorithm. Subjects and methods: A study group consisted of 439 patients (65% women and 35% men; mean age 51.3±14.4 years) with severe musculoskeletal pain, who were prescribed NSAIDs by using the above algorithm. The majority of patients were noted to have RFs: gastrointestinal and cardiovascular ones in 62% and 88% of the patients, respectively. Given the RFs, eight NSAIDs were used: aceclofenac, diclofenac, ibuprofen, ketoprofen, meloxicam, naproxen, nimesulide, and celecoxib, the latter being prescribed most commonly (in 57.4% of cases). NSAIDs were used in combination with proton pump inhibitors in 30.2% of the patients. The follow-up period was 28 days. The investigators evaluated the efficacy of therapy (pain changes on a 10-point numeric rating scale (NRS)) and the development of adverse events (AEs). Results and discussion: Pain was completely relieved in the overwhelming majority (94.9%) of patients. There were no significant differences in the efficacy of different NSAIDs according to NRS scores. The number of AEs was minimal and did not differ between NSAIDs, with the exception of a higher frequency of dyspepsia caused by diclofenac (15.7%). There were no serious complications or therapy discontinuation because of AEs. Conclusion: The use of the NSAID selection algorithm allows for effective and relatively safe therapy with these drugs in real clinical practice.
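The risk-stratified selection logic can be caricatured as a small decision table. The rules below are an illustrative simplification written for this note, not the published 2015 algorithm that the PRINCIPLE project tested:

```python
def select_nsaid_strategy(gi_risk, cv_risk):
    """Toy risk-stratified selection table; illustrative only, not the
    actual 2015 algorithm tested in the PRINCIPLE project."""
    if gi_risk == "high" and cv_risk == "high":
        return "avoid NSAIDs if possible; specialist decision"
    if gi_risk == "high":
        return "COX-2 selective NSAID + proton pump inhibitor"
    if cv_risk == "high":
        return "prefer NSAID with least cardiovascular risk"
    return "any NSAID, lowest effective dose"
```

The real algorithm distinguishes more risk levels and drug options; the point here is only the table-driven structure of the decision.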
An Algorithm for the Mixed Transportation Network Design Problem.
Liu, Xinyu; Chen, Qun
2016-01-01
This paper proposes an optimization algorithm, the dimension-down iterative algorithm (DDIA), for solving a mixed transportation network design problem (MNDP), which is generally expressed as a mathematical program with equilibrium constraints (MPEC). The upper level of the MNDP aims to optimize the network performance via both the expansion of the existing links and the addition of new candidate links, whereas the lower level is a traditional Wardrop user equilibrium (UE) problem. The idea of the proposed solution algorithm (DDIA) is to reduce the dimensions of the problem. A group of variables (discrete/continuous) is fixed to optimize another group of variables (continuous/discrete) alternately; then, the problem is transformed into solving a series of CNDPs (continuous network design problems) and DNDPs (discrete network design problems) repeatedly until the problem converges to the optimal solution. The advantage of the proposed algorithm is that its solution process is very simple and easy to apply. Numerical examples show that for the MNDP without budget constraint, the optimal solution can be found within a few iterations with DDIA. For the MNDP with budget constraint, however, the result depends on the selection of initial values, which leads to different optimal solutions (i.e., different local optimal solutions). Some thoughts are given on how to derive meaningful initial values, such as by considering the budgets of new and reconstruction projects separately.
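The alternation behind DDIA, fixing the discrete variables to solve a continuous subproblem and vice versa, can be sketched on a toy mixed problem. The cost function and closed-form continuous step below are stand-ins, far simpler than a real MNDP with equilibrium constraints:

```python
from itertools import product

# Toy stand-in for the MNDP: one continuous variable x and two binary
# build decisions y; a real MNDP has network equilibrium constraints.
C = [1.0, 2.0]                                       # capacities of candidate links

def cost(x, y):
    link = sum(c * yi for c, yi in zip(C, y))
    return (x - link) ** 2 + (x - 1.0) ** 2 + 0.5 * sum(y)

def ddia(y=(0, 0), iters=20):
    for _ in range(iters):
        link = sum(c * yi for c, yi in zip(C, y))
        x = (link + 1.0) / 2.0                       # continuous step (closed form)
        y_new = min(product((0, 1), repeat=2), key=lambda yy: cost(x, yy))
        if y_new == y:                               # discrete step converged
            break
        y = y_new
    return x, y, cost(x, y)

x_opt, y_opt, val = ddia()
```

As the abstract notes, this kind of alternation can stall at a local optimum, so the result may depend on the initial discrete configuration.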
Quantum algorithms for testing Boolean functions
Erika Andersson
2010-06-01
Full Text Available We discuss quantum algorithms, based on the Bernstein-Vazirani algorithm, for finding which variables a Boolean function depends on. There are 2^n possible linear Boolean functions of n variables; given a linear Boolean function, the Bernstein-Vazirani quantum algorithm can deterministically identify which one of these Boolean functions we are given using just a single function query. The same quantum algorithm can also be used to learn which input variables other types of Boolean functions depend on, with a success probability that depends on the form of the Boolean function that is tested, but not on the total number of input variables. We also outline a procedure to further amplify the success probability, based on another quantum algorithm, the Grover search.
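The single-query behaviour of the Bernstein-Vazirani algorithm is easy to check with a small state-vector simulation (NumPy; the oracle for the linear function f(x) = a.x mod 2 is folded into a diagonal phase, and the hidden string a is an example value):

```python
import numpy as np

def hadamard_all(state):
    """Apply H to every qubit (fast Walsh-Hadamard transform)."""
    v = state.astype(float).copy()
    h = 1
    while h < v.size:
        for i in range(0, v.size, 2 * h):
            a = v[i:i + h].copy()
            b = v[i + h:i + 2 * h].copy()
            v[i:i + h], v[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return v / np.sqrt(v.size)

n, a = 4, 0b1011                          # hidden string a (example value)
state = np.zeros(2 ** n); state[0] = 1.0  # |0...0>
state = hadamard_all(state)               # uniform superposition
# One oracle query as a phase: |x> -> (-1)^(a.x) |x>
state *= np.array([(-1) ** bin(x & a).count("1") for x in range(2 ** n)])
state = hadamard_all(state)               # interference reveals a
recovered = int(np.argmax(np.abs(state)))
```

After the second Hadamard layer all amplitude concentrates on |a>, so a single measurement returns the hidden string with certainty.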
Casanova, Henri; Robert, Yves
2008-01-01
"…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad view…"
Suzuki, Shigeru; Machida, Haruhiko; Tanaka, Isao; Ueno, Eiko
2012-01-01
Objectives: To compare the performance of model-based iterative reconstruction (MBIR) with that of standard filtered back projection (FBP) for measuring vascular wall attenuation. Study design: After subjecting 9 vascular models (actual attenuation value of wall, 89 HU) with wall thicknesses of 0.5, 1.0, or 1.5 mm, filled with contrast material of 275, 396, or 542 HU, to scanning using 64-detector computed tomography (CT), we reconstructed images using MBIR and FBP (Bone and Detail kernels) and measured wall attenuation at the center of the wall for each model. We performed attenuation measurements for each model and additional supportive measurements by a differentiation curve. We analyzed statistics using analyses of variance with repeated measures. Results: Using the Bone kernel, the standard deviation of the measurement exceeded 30 HU in most conditions. In measurements at the wall center, the attenuation values obtained using MBIR were comparable to or significantly closer to the actual wall attenuation than those acquired using the Detail kernel. Using differentiation curves, we could measure attenuation for models with walls of 1.0- or 1.5-mm thickness using MBIR but only those of 1.5-mm thickness using the Detail kernel. We detected no significant differences among the attenuation values of the vascular walls of either thickness (MBIR, P = 0.1606) or among the 3 densities of intravascular contrast material (MBIR, P = 0.8185; Detail kernel, P = 0.0802). Conclusions: Compared with FBP, MBIR reduces both reconstruction blur and image noise simultaneously, facilitates recognition of vascular wall boundaries, and can improve accuracy in measuring wall attenuation.
An Algorithm for constructing Hjelmslev planes
Hall, Joanne L.; Rao, Asha
2013-01-01
Projective Hjelmslev planes and affine Hjelmslev planes are generalisations of projective planes and affine planes. We present an algorithm for constructing projective Hjelmslev planes and affine Hjelmslev planes using projective planes, affine planes and orthogonal arrays. We show that all 2-uniform projective Hjelmslev planes, and all 2-uniform affine Hjelmslev planes, can be constructed in this way. As a corollary it is shown that all 2-uniform affine Hjelmslev planes are sub-geometries o...
Validation of MERIS Ocean Color Algorithms in the Mediterranean Sea
Marullo, S.; D'Ortenzio, F.; Ribera D'Alcalà, M.; Ragni, M.; Santoleri, R.; Vellucci, V.; Luttazzi, C.
2004-05-01
Satellite ocean color measurements can contribute, better than any other source of data, to quantifying the spatial and temporal variability of ocean productivity, and, thanks to the success of several satellite missions from CZCS up to SeaWiFS, MODIS and MERIS, it is now possible to investigate interannual variations and compare levels of production during different decades ([1],[2]). The interannual variability of ocean productivity at global and regional scale can be correctly measured provided that chlorophyll estimates are based on well-calibrated algorithms, in order to avoid regional biases and instrumental time shifts. The calibration and validation of ocean color data is therefore one of the most important tasks of several research projects worldwide ([3], [4]). Algorithms developed to retrieve chlorophyll concentration need a specific effort to define the error ranges associated with the estimates. In particular, empirical algorithms, calculated by regression with in situ data, require independent records to verify the degree of uncertainty involved. In addition, several lines of evidence demonstrate that regional algorithms can improve the accuracy of satellite chlorophyll estimates [5]. In 2002, Santoleri et al. (SIMBIOS) first showed a significant overestimation of the SeaWiFS-derived chlorophyll concentration in the Mediterranean Sea when the standard global NASA algorithms (OC4v2 and OC4v4) are used. The same authors [6] proposed two preliminary new algorithms for the Mediterranean Sea (L-DORMA and NL-DORMA) on the basis of a bio-optical data set collected in the basin from 1998 to 2000. In 2002, Bricaud et al. [7], analyzing other bio-optical data collected in the Mediterranean, confirmed the overestimation of chlorophyll concentration in oligotrophic conditions and proposed a new regional algorithm to be used at low concentrations. Recently, the number of in situ observations in the basin was increased, permitting a first
Problem solving with genetic algorithms and Splicer
Bayer, Steven E.; Wang, Lui
1991-01-01
Genetic algorithms are highly parallel, adaptive search procedures (i.e., problem-solving methods) loosely based on the processes of population genetics and Darwinian survival of the fittest. Genetic algorithms have proven useful in domains where other optimization techniques perform poorly. The main purpose of the paper is to discuss a NASA-sponsored software development project to develop a general-purpose tool for using genetic algorithms. The tool, called Splicer, can be used to solve a wide variety of optimization problems and is currently available from NASA and COSMIC. This discussion is preceded by an introduction to basic genetic algorithm concepts and a discussion of genetic algorithm applications.
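The basic genetic-algorithm loop of selection, crossover, and mutation can be sketched on the classic OneMax toy problem (this is a generic illustration written for this note, not the Splicer tool):

```python
import random

random.seed(1)

def fitness(bits):                                  # toy OneMax objective
    return sum(bits)

def evolve(n_bits=12, pop_size=20, generations=60, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]                # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, n_bits)       # one-point crossover
            child = [bit ^ (random.random() < p_mut)   # bit-flip mutation
                     for bit in a[:cut] + b[cut:]]
            children.append(child)
        pop = elite + children                      # elitism: best survive
    return max(pop, key=fitness)

best = evolve()
```

Because the elite half is carried over unchanged, the best fitness in the population never decreases between generations.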
Cannon, Alex J.
2018-01-01
Most bias correction algorithms used in climatology, for example quantile mapping, are applied to univariate time series and neglect the dependence between different variables. Those that are multivariate often correct only limited measures of joint dependence, such as Pearson or Spearman rank correlation. Here, an image processing technique designed to transfer colour information from one image to another, the N-dimensional probability density function transform, is adapted for use as a multivariate bias correction algorithm (MBCn) for climate model projections/predictions of multiple climate variables. MBCn is a multivariate generalization of quantile mapping that transfers all aspects of an observed continuous multivariate distribution to the corresponding multivariate distribution of variables from a climate model. When applied to climate model projections, changes in quantiles of each variable between the historical and projection periods are also preserved. The MBCn algorithm is demonstrated on three case studies. First, the method is applied to an image processing example with characteristics that mimic a climate projection problem. Second, MBCn is used to correct a suite of 3-hourly surface meteorological variables from the Canadian Centre for Climate Modelling and Analysis Regional Climate Model (CanRCM4) across a North American domain. Components of the Canadian Forest Fire Weather Index (FWI) System, a complicated set of multivariate indices that characterizes the risk of wildfire, are then calculated and verified against observed values. Third, MBCn is used to correct biases in the spatial dependence structure of CanRCM4 precipitation fields. Results are compared against a univariate quantile mapping algorithm, which neglects the dependence between variables, and two multivariate bias correction algorithms, each of which corrects a different form of inter-variable correlation structure. MBCn outperforms these alternatives, often by a large margin.
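The univariate quantile mapping that MBCn generalizes can be sketched in a few lines, using interpolation between empirical quantiles (the data here are synthetic, and the 101-knot grid is an arbitrary choice):

```python
import numpy as np

def quantile_map(obs, mod, x, n_quantiles=101):
    """Map values x through the empirical model CDF onto observed
    quantiles (univariate quantile mapping; ignores inter-variable
    dependence, which is what MBCn adds)."""
    q = np.linspace(0.0, 1.0, n_quantiles)
    return np.interp(x, np.quantile(mod, q), np.quantile(obs, q))

rng = np.random.default_rng(0)
obs = rng.normal(10.0, 2.0, 5000)   # synthetic "observations"
mod = rng.normal(12.0, 3.0, 5000)   # synthetic biased "model"
corrected = quantile_map(obs, mod, mod)
```

Mapping the model's historical values through its own CDF onto observed quantiles removes the marginal bias in both mean and spread; MBCn applies rotations so that this one-dimensional correction also fixes the joint distribution.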
Backprojection filtering for variable orbit fan-beam tomography
Gullberg, G.T.; Zeng, G.L.
1995-01-01
Backprojection filtering algorithms are presented for three variable-orbit fan-beam geometries. Expressions for the fan-beam projection and backprojection operators are given for a flat-detector fan-beam geometry with fixed focal length, with variable focal length, and with fixed focal length and off-center focusing. Backprojection operators are derived for each geometry using a transformation of coordinates from a parallel-geometry backprojector to a fan-beam backprojector for the appropriate geometry. The backprojection operator includes a factor which is a function of the coordinates of the projection ray and the coordinates of the pixel in the backprojected image. The backprojection filtering algorithm first backprojects the variable-orbit fan-beam projection data using the appropriately derived backprojector to obtain a 1/r blurring of the original image, then takes the two-dimensional (2D) Fast Fourier Transform (FFT) of the backprojected image, multiplies the transformed image by the 2D ramp filter function, and finally takes the inverse 2D FFT to obtain the reconstructed image. Computer simulations verify that backprojectors with appropriate weighting give artifact-free reconstructions of simulated line-integral projections. It is also shown that it is not necessary to assume a projection model of line integrals; the projector and backprojector can instead be defined to model the physics of the imaging detection process. A backprojector for variable-orbit fan-beam tomography with fixed focal length is derived which includes an additional factor that is a function of the flux density along the flat detector. It is shown that the impulse response of the composite of the projection and backprojection operations is equal to 1/r.
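The filtering stage of the pipeline, multiplying the 2D spectrum of the 1/r-blurred backprojection by the rotationally symmetric ramp, can be sketched as follows. The backprojection step itself is omitted, and a constant input is used only to show that the filter removes the DC component:

```python
import numpy as np

def ramp_filter_2d(backprojected):
    """Deblur a 1/r-blurred backprojection by multiplying its 2-D
    spectrum with the rotationally symmetric ramp |omega|."""
    ny, nx = backprojected.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    ramp = np.sqrt(fx ** 2 + fy ** 2)        # zero at DC, |omega| elsewhere
    spectrum = np.fft.fft2(backprojected) * ramp
    return np.real(np.fft.ifft2(spectrum))

img = ramp_filter_2d(np.ones((64, 64)))      # constant image: DC only
```

Since a constant image has all its energy at the zero frequency, where the ramp vanishes, the filtered output is identically zero; in the full algorithm the same multiplication undoes the 1/r blur introduced by backprojection.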
Schmidtlein, CR; Beattie, B; Humm, J [Memorial Sloan Kettering Cancer Center, New York, NY (United States); Li, S; Wu, Z; Xu, Y [Sun Yat-sen University, Guangzhou, Guangdong (China); Zhang, J; Shen, L [Syracuse University, Syracuse, NY (United States); Vogelsang, L [VirtualScopics, Rochester, NY (United States); Feiglin, D; Krol, A [SUNY Upstate Medical University, Syracuse, NY (United States)
2014-06-15
Purpose: To investigate the performance of a new penalized-likelihood PET image reconstruction algorithm using the l1-norm total-variation (TV) sum of the 1st- through 4th-order gradients as the penalty. Simulated and brain patient data sets were analyzed. Methods: This work represents an extension of the preconditioned alternating projection algorithm (PAPA) for emission computed tomography. In this new generalized algorithm (GPAPA), the penalty term is expanded to allow multiple components, in this case the sum of the 1st- to 4th-order gradients, to reduce artificial piece-wise constant regions ("staircase" artifacts typical for TV) seen in PAPA images penalized with only the 1st-order gradient. Simulated data were used to test for "staircase" artifacts and to optimize the penalty hyper-parameter in the root-mean-squared-error (RMSE) sense. Patient FDG brain scans were acquired on a GE D690 PET/CT (370 MBq at 1 hour post-injection for 10 minutes) in time-of-flight mode, and in all cases were reconstructed using resolution-recovery projectors. GPAPA images were compared with PAPA and with RMSE-optimally filtered OSEM (fully converged) in simulations, and with clinical OSEM reconstructions (3 iterations, 32 subsets) with 2.6 mm XY Gaussian and standard 3-point axial smoothing post-filters. Results: The results from the simulated data show a significant reduction in the "staircase" artifact for GPAPA compared to PAPA and lower RMSE (up to 35%) compared to optimally filtered OSEM. A simple power-law relationship between the RMSE-optimal hyper-parameters and the noise-equivalent counts (NEC) per voxel is revealed. Qualitatively, the patient images appear much sharper and with less noise than standard clinical images. The convergence rate is similar to OSEM. Conclusions: GPAPA reconstructions using the l1-norm total-variation sum of the 1st- through 4th-order gradients as the penalty show great promise for the improvement of image quality over that
Fast algorithm of track detection
Nehrguj, B.
1980-01-01
A fast algorithm of variable-slope histograms is proposed, which allows a considerable reduction in computer memory size and is quite simple to carry out. Corresponding FORTRAN subprograms, giving a threefold speed gain, have been included in spiral-reader data-handling software.
Algorithmic Verification of Linearizability for Ordinary Differential Equations
Lyakhov, Dmitry A.; Gerdt, Vladimir P.; Michels, Dominik L.
2017-01-01
one by a point transformation of the dependent and independent variables. The first algorithm is based on a construction of the Lie point symmetry algebra and on the computation of its derived algebra. The second algorithm exploits the differential
Collective variables and dissipation
Balian, R.
1984-09-01
This is an introduction to some basic concepts of non-equilibrium statistical mechanics. We emphasize in particular the relevant entropy relative to a given set of collective variables, the meaning of the projection method in Liouville space, its use to establish the generalized transport equations for these variables, and the interpretation of dissipation in the framework of information theory.
A quantum causal discovery algorithm
Giarmatzi, Christina; Costa, Fabio
2018-03-01
Finding a causal model for a set of classical variables is now a well-established task—but what about the quantum equivalent? Even the notion of a quantum causal model is controversial. Here, we present a causal discovery algorithm for quantum systems. The input to the algorithm is a process matrix describing correlations between quantum events. Its output consists of different levels of information about the underlying causal model. Our algorithm determines whether the process is causally ordered by grouping the events into causally ordered non-signaling sets. It detects if all relevant common causes are included in the process, which we label Markovian, or alternatively if some causal relations are mediated through some external memory. For a Markovian process, it outputs a causal model, namely the causal relations and the corresponding mechanisms, represented as quantum states and channels. Our algorithm opens the route to more general quantum causal discovery methods.
Next Generation Suspension Dynamics Algorithms
Schunk, Peter Randall [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Higdon, Jonathon [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Chen, Steven [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2014-12-01
This research project has the objective to extend the range of application, improve the efficiency and conduct simulations with the Fast Lubrication Dynamics (FLD) algorithm for concentrated particle suspensions in a Newtonian fluid solvent. The research involves a combination of mathematical development, new computational algorithms, and application to processing flows of relevance in materials processing. The mathematical developments clarify the underlying theory, facilitate verification against classic monographs in the field and provide the framework for a novel parallel implementation optimized for an OpenMP shared memory environment. The project considered application to consolidation flows of major interest in high throughput materials processing and identified hitherto unforeseen challenges in the use of FLD in these applications. Extensions to the algorithm have been developed to improve its accuracy in these applications.
Understanding Algorithms in Different Presentations
Csernoch, Mária; Biró, Piroska; Abari, Kálmán; Máth, János
2015-01-01
Within the framework of the Testing Algorithmic and Application Skills project we tested first year students of Informatics at the beginning of their tertiary education. We were focusing on the students' level of understanding in different programming environments. In the present paper we provide the results from the University of Debrecen, the…
Fontana, W.
1990-12-13
In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed-size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.
An Efficient Algorithm for Unconstrained Optimization
Sergio Gerardo de-los-Cobos-Silva
2015-01-01
This paper presents an original and efficient PSO algorithm, which is divided into three phases: (1) stabilization, (2) breadth-first search, and (3) depth-first search. The proposed algorithm, called PSO-3P, was tested with 47 benchmark continuous unconstrained optimization problems, on a total of 82 instances. The numerical results show that the proposed algorithm is able to reach the global optimum. This work mainly focuses on unconstrained optimization problems from 2 to 1,000 variables.
Algorithms for reconstructing images for industrial applications
Lopes, R.T.; Crispim, V.R.
1986-01-01
Several algorithms for reconstructing objects from their projections are being studied in our laboratory for industrial applications. Such algorithms are useful for locating the position and shape of different compositions of materials in the object. A comparative study of two algorithms is made. The two investigated algorithms are MART (Multiplicative Algebraic Reconstruction Technique) and the Convolution Method. The comparison is carried out from the point of view of the quality of the reconstructed image, the number of views, and cost. (Author) [pt
Goodenberger, Martin H; Wagner-Bartak, Nicolaus A; Gupta, Shiva; Liu, Xinming; Yap, Ramon Q; Sun, Jia; Tamm, Eric P; Jensen, Corey T
The purpose of this study was to compare abdominopelvic computed tomography images reconstructed with adaptive statistical iterative reconstruction-V (ASIR-V) with model-based iterative reconstruction (Veo 3.0), ASIR, and filtered back projection (FBP). Abdominopelvic computed tomography scans for 36 patients (26 males and 10 females) were reconstructed using FBP, ASIR (80%), Veo 3.0, and ASIR-V (30%, 60%, 90%). Mean ± SD patient age was 32 ± 10 years with mean ± SD body mass index of 26.9 ± 4.4 kg/m. Images were reviewed by 2 independent readers in a blinded, randomized fashion. Hounsfield unit, noise, and contrast-to-noise ratio (CNR) values were calculated for each reconstruction algorithm for further comparison. Phantom evaluation of low-contrast detectability (LCD) and high-contrast resolution was performed. Adaptive statistical iterative reconstruction-V 30%, ASIR-V 60%, and ASIR 80% were generally superior qualitatively compared with ASIR-V 90%, Veo 3.0, and FBP (P ASIR-V 60% with respective CNR values of 5.54 ± 2.39, 8.78 ± 3.15, and 3.49 ± 1.77 (P ASIR 80% had the best and worst spatial resolution, respectively. Adaptive statistical iterative reconstruction-V 30% and ASIR-V 60% provided the best combination of qualitative and quantitative performance. Adaptive statistical iterative reconstruction 80% was equivalent qualitatively, but demonstrated inferior spatial resolution and LCD.
An improved VSS NLMS algorithm for active noise cancellation
Sun, Yunzhuo; Wang, Mingjiang; Han, Yufei; Zhang, Congyan
2017-08-01
In this paper, an improved variable step size NLMS algorithm is proposed. NLMS has a fast convergence rate and low steady-state error compared to other traditional adaptive filtering algorithms, but there is a contradiction between convergence speed and steady-state error that affects its performance. We propose a new variable step size NLMS algorithm that dynamically changes the step size according to the current error and the iteration count. The proposed algorithm has a simple formulation and easily set parameters, and effectively resolves the contradiction in NLMS. The simulation results show that the proposed algorithm simultaneously achieves good tracking ability, a fast convergence rate, and low steady-state error.
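The abstract does not give the exact step-size rule, but the idea — shrink the step as the error falls — can be sketched as follows (the particular decay schedule, filter length, and test system below are assumptions of mine, not the authors' formulation):

```python
import numpy as np

def vss_nlms(x, d, taps=8, mu_max=1.0, mu_min=0.05, eps=1e-8):
    """NLMS whose step size grows with the current error magnitude.

    The schedule below is a stand-in for the error- and iteration-
    dependent rule described in the abstract.
    """
    w = np.zeros(taps)
    errors = np.empty(len(x) - taps)
    for n in range(taps, len(x)):
        u = x[n - taps + 1:n + 1][::-1]   # most recent samples first
        e = d[n] - w @ u                  # a priori error
        # large step while the error is large, small step near convergence
        mu = mu_min + (mu_max - mu_min) * min(1.0, float(e * e))
        w += mu * e * u / (u @ u + eps)   # normalized LMS update
        errors[n - taps] = e
    return w, errors

# toy system-identification run: recover an unknown 8-tap FIR filter
rng = np.random.default_rng(0)
x = rng.standard_normal(4000)
h = np.array([0.6, -0.3, 0.1, 0.05, 0.0, 0.0, 0.0, 0.0])
d = np.convolve(x, h)[:len(x)]
w, errors = vss_nlms(x, d)
```

In this noiseless setting the weights converge to the true filter, and the error trace shows the fast-early / precise-late behavior the variable step size is meant to deliver.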
Algorithms for Port-of-Entry Inspection
Roberts, Fred S
2007-01-01
.... The percentage at some ports has now risen to 6%, but this is still a very small percentage. The purpose of this project was to develop decision support algorithms that help to optimally intercept illicit materials and weapons...
Genetic algorithms at UC Davis/LLNL
Vemuri, V.R. [comp.
1993-12-31
A tutorial introduction to genetic algorithms is given. This brief tutorial should serve the purpose of introducing the subject to the novice. The tutorial is followed by a brief commentary on the term project reports that follow.
Evolutionary Algorithms for Boolean Queries Optimization
Húsek, Dušan; Snášel, Václav; Neruda, Roman; Owais, S.S.J.; Krömer, P.
2006-01-01
Roč. 3, č. 1 (2006), s. 15-20 ISSN 1790-0832 R&D Projects: GA AV ČR 1ET100300414 Institutional research plan: CEZ:AV0Z10300504 Keywords : evolutionary algorithms * genetic algorithms * information retrieval * Boolean query Subject RIV: BA - General Mathematics
Boolean Queries Optimization by Genetic Algorithms
Húsek, Dušan; Owais, S.S.J.; Krömer, P.; Snášel, Václav
2005-01-01
Roč. 15, - (2005), s. 395-409 ISSN 1210-0552 R&D Projects: GA AV ČR 1ET100300414 Institutional research plan: CEZ:AV0Z10300504 Keywords : evolutionary algorithms * genetic algorithms * genetic programming * information retrieval * Boolean query Subject RIV: BB - Applied Statistics, Operational Research
Hardware Acceleration of Adaptive Neural Algorithms.
James, Conrad D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2017-11-01
As traditional numerical computing has faced challenges, researchers have turned towards alternative computing approaches to reduce power-per-computation metrics and improve algorithm performance. Here, we describe an approach towards non-conventional computing that strengthens the connection between machine learning and neuroscience concepts. The Hardware Acceleration of Adaptive Neural Algorithms (HAANA) project has developed neural machine learning algorithms and hardware for applications in image processing and cybersecurity. While machine learning methods are effective at extracting relevant features from many types of data, the effectiveness of these algorithms degrades when subjected to real-world conditions. Our team has generated novel neural-inspired approaches to improve the resiliency and adaptability of machine learning algorithms. In addition, we have also designed and fabricated hardware architectures and microelectronic devices specifically tuned towards the training and inference operations of neural-inspired algorithms. Finally, our multi-scale simulation framework allows us to assess the impact of microelectronic device properties on algorithm performance.
Harzé, Mélanie; Monty, Arnaud; Mahy, Grégory
2015-01-01
Dry calcareous grasslands represent local biodiversity hotspots of European temperate regions. They have suffered intense fragmentation due to the abandonment of traditional agropastoral systems and the resulting encroachment, reforestation, urbanization or transformation into arable land. In order to preserve and enhance their ecological value, a series of ecological restoration projects have been implemented throughout Europe (LIFE+). As habitat restoration costs can be prohibit...
CATEGORIES OF COMPUTER SYSTEMS ALGORITHMS
A. V. Poltavskiy
2015-01-01
Philosophy, as a frame of reference on the surrounding world and as the first science, is a fundamental basis, the "roots" (R. Descartes), for all branches of scientific knowledge accumulated and applied in every field of human activity. The theory of algorithms, as one of the fundamental sections of mathematics, is likewise grounded in gnoseology, the study of how a true picture of the world is cognized. From the positions of gnoseology and ontology as fundamental sections of philosophy, modern innovative projects are inconceivable without the development of programs and algorithms.
Xuefei Wu
2014-01-01
The complex projective synchronization in drive-response stochastic coupled networks with complex-variable systems is considered. The impulsive pinning control scheme is adopted to achieve complex projective synchronization, and several simple and practical sufficient conditions are obtained in a general drive-response network. In addition, adaptive feedback algorithms are proposed to adjust the control strength. Several numerical simulations are provided to show the effectiveness and feasibility of the proposed methods.
Pseudo-deterministic Algorithms
Goldwasser, Shafi
2012-01-01
International audience; In this talk we describe a new type of probabilistic algorithm which we call Bellagio Algorithms: a randomized algorithm which is guaranteed to run in expected polynomial time, and to produce a correct and unique solution with high probability. These algorithms are pseudo-deterministic: they can not be distinguished from deterministic algorithms in polynomial time by a probabilistic polynomial time observer with black box access to the algorithm. We show a necessary an...
Gieren, Wolfgang; Pietrzynski, Grzegorz; Graczyk, Dariusz, E-mail: wgieren@astro-udec.cl, E-mail: pietrzyn@hubble.cfm.udec.cl [Warsaw University Observatory, Al. Ujazdowskie 4, 00-478 Warsaw (Poland); and others
2013-08-10
Motivated by an amazing range of reported distances to the nearby Local Group spiral galaxy M33, we have obtained deep near-infrared photometry for 26 long-period Cepheids in this galaxy with the ESO Very Large Telescope. From the data, we constructed period-luminosity relations in the J and K bands which together with previous optical VI photometry for the Cepheids by Macri et al. were used to determine the true distance modulus of M33, and the mean reddening affecting the Cepheid sample with the multiwavelength fit method developed in the Araucaria Project. We find a true distance modulus of 24.62 for M33, with a total uncertainty of ±0.07 mag which is dominated by the uncertainty on the photometric zero points in our photometry. The reddening is determined as E(B - V) = 0.19 ± 0.02, in agreement with the value used by the Hubble Space Telescope Key Project of Freedman et al. but in some discrepancy with other recent determinations based on blue supergiant spectroscopy and an O-type eclipsing binary which yielded lower reddening values. Our derived M33 distance modulus is extremely insensitive to the adopted reddening law. We show that the possible effects of metallicity and crowding on our present distance determination are both at the 1%-2% level and therefore minor contributors to the total uncertainty of our distance result for M33.
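The quoted modulus translates into a linear distance through the standard relation mu = 5*log10(d/pc) - 5; a quick check with the values from the abstract (the conversion itself is textbook, not part of the paper):

```python
def modulus_to_distance_kpc(mu):
    """Convert a true distance modulus to a distance in kiloparsecs:
    mu = 5*log10(d/pc) - 5  =>  d = 10**(mu/5 + 1) pc."""
    return 10.0 ** (mu / 5.0 + 1.0) / 1000.0

mu, err = 24.62, 0.07                    # modulus and uncertainty from the abstract
d = modulus_to_distance_kpc(mu)          # roughly 840 kpc
# propagate the +/-0.07 mag uncertainty through the same relation
d_lo = modulus_to_distance_kpc(mu - err)
d_hi = modulus_to_distance_kpc(mu + err)
```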
Perturbation resilience and superiorization of iterative algorithms
Censor, Y; Davidi, R; Herman, G T
2010-01-01
Iterative algorithms aimed at solving some problems are discussed. For certain problems, such as finding a common point in the intersection of a finite number of convex sets, there often exist iterative algorithms that impose very little demand on computer resources. For other problems, such as finding that point in the intersection at which the value of a given function is optimal, algorithms tend to need more computer memory and longer execution time. A methodology is presented whose aim is to produce automatically for an iterative algorithm of the first kind a 'superiorized version' of it that retains its computational efficiency but nevertheless goes a long way toward solving an optimization problem. This is possible to do if the original algorithm is 'perturbation resilient', which is shown to be the case for various projection algorithms for solving the consistent convex feasibility problem. The superiorized versions of such algorithms use perturbations that steer the process in the direction of a superior feasible point, which is not necessarily optimal, with respect to the given function. After presenting these intuitive ideas in a precise mathematical form, they are illustrated in image reconstruction from projections for two different projection algorithms superiorized for the function whose value is the total variation of the image
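A minimal concrete instance of the projection algorithms discussed — plain alternating projections for a two-set convex feasibility problem, without the superiorization step — might look as follows (the sets and starting point are my own illustration):

```python
import numpy as np

def project_disc(p, center, r):
    """Project point p onto a closed disc (a convex set)."""
    v = p - center
    n = np.linalg.norm(v)
    return p if n <= r else center + v * (r / n)

def project_halfplane(p, a, b):
    """Project p onto the half-plane {x : a.x <= b}."""
    s = a @ p - b
    return p if s <= 0 else p - s * a / (a @ a)

# Alternate projections until the iterate lies in (the intersection of)
# a disc of radius 2 about the origin and the half-plane x0 <= 1.
x = np.array([5.0, 5.0])
for _ in range(100):
    x = project_halfplane(project_disc(x, np.array([0.0, 0.0]), 2.0),
                          np.array([1.0, 0.0]), 1.0)
```

Each projection is cheap, which is exactly the "little demand on computer resources" property that makes such algorithms attractive candidates for superiorization.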
Melo, Jean
Although many researchers suggest that preprocessor-based variability amplifies maintenance problems, there is little to no hard evidence on how variability actually affects programs and programmers. Specifically, how does variability affect programmers during maintenance tasks (bug finding in particular)? How much harder is it to debug a program as variability increases? How do developers debug programs with variability? In what ways does variability affect bugs? In this Ph.D. thesis, I set off to address such issues through different perspectives using empirical research (based on controlled experiments) in order to understand quantitatively and qualitatively the impact of variability on programmers at bug finding and on buggy programs. From the program (and bug) perspective, the results show that variability is ubiquitous. There appears to be no specific nature of variability bugs that could...
Fatigue evaluation algorithms: Review
Passipoularidis, V.A.; Broendsted, P.
2009-11-15
A progressive damage fatigue simulator for variable amplitude loads named FADAS is discussed in this work. FADAS (Fatigue Damage Simulator) performs ply by ply stress analysis using classical lamination theory and implements adequate stiffness discount tactics based on the failure criterion of Puck, to model the degradation caused by failure events in ply level. Residual strength is incorporated as fatigue damage accumulation metric. Once the typical fatigue and static properties of the constitutive ply are determined, the performance of an arbitrary lay-up under uniaxial and/or multiaxial load time series can be simulated. The predictions are validated against fatigue life data both from repeated block tests at a single stress ratio as well as against spectral fatigue using the WISPER, WISPERX and NEW WISPER load sequences on a Glass/Epoxy multidirectional laminate typical of a wind turbine rotor blade construction. Two versions of the algorithm, the one using single-step and the other using incremental application of each load cycle (in case of ply failure), are implemented and compared. Simulation results confirm the ability of the algorithm to take into account load sequence effects. In general, FADAS performs well in predicting life under both spectral and block loading fatigue. (author)
Statistical variability of hydro-meteorological variables as indicators ...
Statistical variability of hydro-meteorological variables as indicators of climate change in north-east Sokoto-Rima basin, Nigeria. ... water resources development including water supply project, agriculture and tourism in the study area. Key word: Climate change, Climatic variability, Actual evapotranspiration, Global warming ...
Universal algorithm of time sharing
Silin, I.N.; Fedyun'kin, E.D.
1979-01-01
A timesharing algorithm is proposed for a wide class of one- and multiprocessor computer configurations. The dynamic priority is a piecewise-constant function of the channel characteristic and the system time quantum. The interactive job quantum has variable length, and a recurrence formula for the characteristic is derived. The concept of a background job is introduced: a background job loads the processor when high-priority jobs are inactive, and its quality function is defined on the basis of statistical data gathered during the timesharing process. The algorithm includes an optimal procedure for displacing jobs from memory. Sharing of system time in proportion to the external priorities is guaranteed for all sufficiently active computing channels (background included), and a fast response is guaranteed for interactive jobs that use little time and memory. External priority control is left to the high-level scheduler. Experience with the algorithm's implementation on the BESM-6 computer at JINR is discussed.
Algorithms for worst-case tolerance optimization
Schjær-Jacobsen, Hans; Madsen, Kaj
1979-01-01
New algorithms are presented for the solution of optimum tolerance assignment problems. The problems considered are defined mathematically as a worst-case problem (WCP), a fixed tolerance problem (FTP), and a variable tolerance problem (VTP). The basic optimization problem without tolerances is denoted the zero tolerance problem (ZTP). For solution of the WCP we suggest application of interval arithmetic and also alternative methods. For solution of the FTP an algorithm is suggested which is conceptually similar to algorithms previously developed by the authors for the ZTP. Finally, the VTP is solved by a double-iterative algorithm in which the inner iteration is performed by the FTP algorithm. The application of the algorithm is demonstrated by means of relatively simple numerical examples. Basic properties, such as convergence properties, are displayed based on the examples.
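The interval-arithmetic route to the WCP can be illustrated on a toy tolerance problem (the circuit function and tolerances below are my own example, not from the paper — interval evaluation gives guaranteed, if sometimes pessimistic, worst-case bounds):

```python
# Minimal interval arithmetic, used to bound f(R1, R2) = R1*R2/(R1 + R2)
# (two resistors in parallel) over +/-5% component tolerances.

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)

    def __mul__(self, o):
        ps = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(ps), max(ps))

    def __truediv__(self, o):
        assert o.lo > 0 or o.hi < 0       # divisor interval must exclude zero
        qs = [self.lo / o.lo, self.lo / o.hi, self.hi / o.lo, self.hi / o.hi]
        return Interval(min(qs), max(qs))

def tol(nominal, pct):
    """Interval for a component with relative tolerance pct."""
    return Interval(nominal * (1 - pct), nominal * (1 + pct))

r1, r2 = tol(100.0, 0.05), tol(200.0, 0.05)
r_par = (r1 * r2) / (r1 + r2)             # enclosure of the worst case
```

The enclosure is wider than the true worst-case range (each occurrence of R1 and R2 is treated independently), which is why the abstract also mentions alternative methods for the WCP.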
Hamiltonian Algorithm Sound Synthesis
大矢, 健一
2013-01-01
Hamiltonian Algorithm (HA) is an algorithm for searching for solutions to optimization problems. This paper introduces a sound synthesis technique using the Hamiltonian Algorithm and shows a simple example. "Hamiltonian Algorithm Sound Synthesis" uses the phase transition effect in HA. Because of this transition effect, totally new waveforms are produced.
Progressive geometric algorithms
Alewijnse, S.P.A.; Bagautdinov, T.M.; de Berg, M.T.; Bouts, Q.W.; ten Brink, Alex P.; Buchin, K.A.; Westenberg, M.A.
2015-01-01
Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms
Progressive geometric algorithms
Alewijnse, S.P.A.; Bagautdinov, T.M.; Berg, de M.T.; Bouts, Q.W.; Brink, ten A.P.; Buchin, K.; Westenberg, M.A.
2014-01-01
Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms
Spurlock, M.; O' Keefe, R. [American Electric Power, Gahanna, OH (United States); Kidd, D. [American Electric Power, Tulsa, OK (United States); Larsen, E. [GE Energy, Schenectady, NY (United States); Roedel, J. [GE Energy, Denver, CO (United States); Bodo, R. [GE Energy, Carrolton, TX (United States); Marken, P. [GE Energy, Columbia City, IN (United States)
2006-07-01
Variable frequency transformers (VFTs) are controllable, bi-directional transmission devices capable of allowing power transfer between asynchronous networks. The VFT uses a rotary transformer with 3-phase windings on both the rotor and the stator. A motor and drive system is also used to manipulate the rotational position of the rotor in order to control the magnitude and direction of the power flow. The VFT was recently selected by American Electric Power (AEP) for its new asynchronous transmission link between the United States and Mexico. This paper provided details of the feasibility studies conducted to select the technology. Three categories of asynchronous interconnection devices were evaluated: (1) a VFT; (2) a voltage source converter; and (3) a conventional high voltage direct current (HVDC) back-to-back system. Stability performance system studies were conducted for all options. The overall reliability benefits of the options were reviewed, as well as their ability to meet steady-state system requirements. Dynamic models were used to conduct the comparative evaluation. Results of the feasibility study indicated that both the VFT and the voltage source converter performed better than the HVDC system. However, the VFT was more stable than the voltage source converter. 5 refs., 3 figs.
Kopasakis, George; Connolly, Joseph W.; Cheng, Larry
2015-01-01
This paper covers the development of stage-by-stage and parallel flow path compressor modeling approaches for a Variable Cycle Engine. The stage-by-stage compressor modeling approach is an extension of a technique for lumped volume dynamics and performance characteristic modeling. It was developed to improve the accuracy of axial compressor dynamics over lumped volume dynamics modeling. The stage-by-stage compressor model presented here is formulated into a parallel flow path model that includes both axial and rotational dynamics. This is done to enable the study of compressor and propulsion system dynamic performance under flow distortion conditions. The approaches utilized here are generic and should be applicable to the modeling of any axial flow compressor design for accurate time domain simulations. The objective of this work is as follows. Given the parameters describing the conditions of atmospheric disturbances, and utilizing the derived formulations, directly compute the transfer function poles and zeros describing these disturbances for acoustic velocity, temperature, pressure, and density. Time domain simulations of representative atmospheric turbulence can then be developed by utilizing these computed transfer functions together with the disturbance frequencies of interest.
Bucher, Taina
2017-01-01
… of algorithms affect people's use of these platforms, if at all? To help answer these questions, this article examines people's personal stories about the Facebook algorithm through tweets and interviews with 25 ordinary users. To understand the spaces where people and algorithms meet, this article develops the notion of the algorithmic imaginary. It is argued that the algorithmic imaginary – ways of thinking about what algorithms are, what they should be and how they function – is not just productive of different moods and sensations but plays a generative role in moulding the Facebook algorithm itself.
Geist, G.A. [Oak Ridge National Lab., TN (United States). Computer Science and Mathematics Div.; Howell, G.W. [Florida Inst. of Tech., Melbourne, FL (United States). Dept. of Applied Mathematics; Watkins, D.S. [Washington State Univ., Pullman, WA (United States). Dept. of Pure and Applied Mathematics
1997-11-01
The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.
Chan, Heang-Ping; Goodsitt, Mitchell M; Helvie, Mark A; Zelakiewicz, Scott; Schmitz, Andrea; Noroozian, Mitra; Paramagul, Chintana; Roubidoux, Marilyn A; Nees, Alexis V; Neal, Colleen H; Carson, Paul; Lu, Yao; Hadjiiski, Lubomir; Wei, Jun
2014-12-01
To investigate the dependence of microcalcification cluster detectability on tomographic scan angle, angular increment, and number of projection views acquired at digital breast tomosynthesis (DBT). A prototype DBT system operated in step-and-shoot mode was used to image breast phantoms. Four 5-cm-thick phantoms embedded with 81 simulated microcalcification clusters of three speck sizes (subtle, medium, and obvious) were imaged by using a rhodium target and rhodium filter with 29 kV, 50 mAs, and seven acquisition protocols. Fixed angular increments were used in four protocols (denoted as scan angle, angular increment, and number of projection views, respectively: 16°, 1°, and 17; 24°, 3°, and nine; 30°, 3°, and 11; and 60°, 3°, and 21), and variable increments were used in three (40°, variable, and 13; 40°, variable, and 15; and 60°, variable, and 21). The reconstructed DBT images were interpreted by six radiologists who located the microcalcification clusters and rated their conspicuity. The mean sensitivity for detection of subtle clusters ranged from 80% (22.5 of 28) to 96% (26.8 of 28) for the seven DBT protocols; the highest sensitivity was achieved with the 16°, 1°, and 17 protocol (96%), but the difference was significant only for the 60°, 3°, and 21 protocol (80%, P .99). The conspicuity of subtle and medium clusters with the 16°, 1°, and 17 protocol was rated higher than those with other protocols; the differences were significant for subtle clusters with the 24°, 3°, and nine protocol and for medium clusters with 24°, 3°, and nine; 30°, 3°, and 11; 60°, 3° and 21; and 60°, variable, and 21 protocols (P tomosynthesis provided higher sensitivity and conspicuity than wide-angle DBT for subtle microcalcification clusters. © RSNA, 2014.
Algorithmically specialized parallel computers
Snyder, Lawrence; Gannon, Dennis B
1985-01-01
Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer. This book discusses the algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster
The Dropout Learning Algorithm
Baldi, Pierre; Sadowski, Peter
2014-01-01
Dropout is a recently introduced algorithm for training neural networks by randomly dropping units during training to prevent their co-adaptation. A mathematical analysis of some of the static and dynamic properties of dropout is provided using Bernoulli gating variables, general enough to accommodate dropout on units or connections, and with variable rates. The framework allows a complete analysis of the ensemble averaging properties of dropout in linear networks, which is useful to understand the non-linear case. The ensemble averaging properties of dropout in non-linear logistic networks result from three fundamental equations: (1) the approximation of the expectations of logistic functions by normalized geometric means, for which bounds and estimates are derived; (2) the algebraic equality between normalized geometric means of logistic functions with the logistic of the means, which mathematically characterizes logistic functions; and (3) the linearity of the means with respect to sums, as well as products of independent variables. The results are also extended to other classes of transfer functions, including rectified linear functions. Approximation errors tend to cancel each other and do not accumulate. Dropout can also be connected to stochastic neurons and used to predict firing rates, and to backpropagation by viewing the backward propagation as ensemble averaging in a dropout linear network. Moreover, the convergence properties of dropout can be understood in terms of stochastic gradient descent. Finally, for the regularization properties of dropout, the expectation of the dropout gradient is the gradient of the corresponding approximation ensemble, regularized by an adaptive weight decay term with a propensity for self-consistent variance minimization and sparse representations. PMID:24771879
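Equation (2) above — the normalized weighted geometric mean (NWGM) of logistic outputs equals the logistic of the weighted mean of the inputs — is an exact algebraic identity and easy to verify numerically. A sketch (the ensemble size and weights are arbitrary illustrations of mine):

```python
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
z = rng.standard_normal(16)            # unit inputs across 16 sub-networks
p = rng.random(16)
p /= p.sum()                           # weights of the dropout ensemble

O = sigmoid(z)
G = np.prod(O ** p)                    # weighted geometric mean of outputs
Gc = np.prod((1.0 - O) ** p)           # same for the complements
nwgm = G / (G + Gc)                    # normalized weighted geometric mean

# Identity: NWGM(sigma(z_i)) = sigma(sum_i p_i * z_i), exactly.
lhs, rhs = nwgm, sigmoid(p @ z)
```

The identity follows by dividing numerator and denominator by G, since (1 - sigma(z))/sigma(z) = exp(-z); it is what lets a single forward pass with scaled weights stand in for the exponential dropout ensemble.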
Dikusar, N.D.
1993-01-01
A new approach to the solution of the track finding problem is proposed. The method is based on Discrete Projective Transformations (DPT) and Least Squares Fitting (LSF), and uses information feedback in tracing linear or quadratic track segments (TS). A recurrent algorithm that is fast and stable with respect to measurement errors and background points is suggested. The algorithm realizes a family of digital adaptive projective filters (APF) with known nonlinear weight functions (projective invariants). APF can be used in adequate control systems for the collection, processing and compression of data, including tracking problems for a wide class of detectors. 10 refs.; 9 figs
1989-01-01
The study of stellar pulsations is a major route to the understanding of stellar structure and evolution. At the South African Astronomical Observatory (SAAO) the following stellar pulsation studies were undertaken: rapidly oscillating Ap stars; solar-like oscillations in stars; δ Scuti-type variability in a classical Am star; Beta Cephei variables; a pulsating white dwarf and its companion; RR Lyrae variables and galactic Cepheids. 4 figs
Quantum Computation and Algorithms
Biham, O.; Biron, D.; Biham, E.; Grassi, M.; Lidar, D.A.
1999-01-01
It is now firmly established that quantum algorithms provide a substantial speedup over classical algorithms for a variety of problems, including the factorization of large numbers and the search for a marked element in an unsorted database. In this talk I will review the principles of quantum algorithms, the basic quantum gates and their operation. The combination of superposition and interference, that makes these algorithms efficient, will be discussed. In particular, Grover's search algorithm will be presented as an example. I will show that the time evolution of the amplitudes in Grover's algorithm can be found exactly using recursion equations, for any initial amplitude distribution
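The recursion for the amplitudes in Grover's algorithm can be written down explicitly for a single marked item out of N: each iteration (oracle sign flip followed by inversion about the mean) maps the marked/unmarked amplitude pair (a, b) linearly. A small sketch (the recursion is the standard one; N and the iteration count are illustrative choices of mine):

```python
import math

def grover_amplitudes(N, iterations):
    """Track the marked/unmarked amplitudes (a, b) through Grover
    iterations via the exact two-variable recursion (one marked item)."""
    a = b = 1.0 / math.sqrt(N)            # uniform superposition
    for _ in range(iterations):
        # oracle (a -> -a) followed by inversion about the mean
        a, b = (N - 2) / N * a + 2 * (N - 1) / N * b, \
               (N - 2) / N * b - 2.0 / N * a
    return a, b

N = 1024
t = int(math.pi / 4 * math.sqrt(N))       # ~optimal number of iterations
a, b = grover_amplitudes(N, t)
p_success = a * a                         # probability of measuring the marked item
```

After roughly (pi/4)*sqrt(N) iterations the marked amplitude is near 1, which is the quadratic speedup over the classical O(N) search.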
Vimont, Daniel [University of Wisconsin - Madison
2014-06-13
This project funded two efforts at understanding the interactions between Central Pacific ENSO events, the mid-latitude atmosphere, and decadal variability in the Pacific. The first was an investigation of conditions that lead to Central Pacific (CP) and East Pacific (EP) ENSO events through the use of linear inverse modeling with defined norms. The second effort was a modeling study that combined output from the National Center for Atmospheric Research (NCAR) Community Atmospheric Model (CAM4) with the Battisti (1988) intermediate coupled model. The intent of the second activity was to investigate the relationship between the atmospheric North Pacific Oscillation (NPO), the Pacific Meridional Mode (PMM), and ENSO. These two activities are described herein.
Dual decomposition for parsing with non-projective head automata
Koo, Terry; Rush, Alexander Matthew; Collins, Michael; Jaakkola, Tommi S.; Sontag, David Alexander
2010-01-01
This paper introduces algorithms for non-projective parsing based on dual decomposition. We focus on parsing algorithms for non-projective head automata, a generalization of head-automata models to non-projective structures. The dual decomposition algorithms are simple and efficient, relying on standard dynamic programming and minimum spanning tree algorithms. They provably solve an LP relaxation of the non-projective parsing problem. Empirically the LP relaxation is very often tight: for man...
Wagner, Falko Jens; Poulsen, Mikael Zebbelin
1999-01-01
When trying to solve a DAE problem of high index with more traditional methods, instability often arises in some of the variables, finally leading to a breakdown of convergence and of the integration of the solution. This is nicely shown in [ESF98, p. 152 ff.]. This chapter will introduce projection methods as a way of handling these special problems. It is assumed that we have methods for solving normal ODE systems and index-1 systems.
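As a generic illustration of the projection idea (a sketch under simplifying assumptions, not the chapter's method): take a standard explicit step on the underlying ODE, then project the numerical solution back onto the constraint/invariant manifold after each step. Here the manifold is the unit circle x² + y² = 1, which the exact flow of x′ = −y, y′ = x preserves but explicit Euler does not:

```python
import math

def euler_projected(x, y, h, steps, project=True):
    """Explicit Euler on x' = -y, y' = x; optionally project the iterate
    back onto the manifold x**2 + y**2 = 1 after each step."""
    for _ in range(steps):
        x, y = x - h * y, y + h * x
        if project:
            r = math.hypot(x, y)        # orthogonal projection onto the circle
            x, y = x / r, y / r
    return x, y

h, steps = 0.01, 10000
xp, yp = euler_projected(1.0, 0.0, h, steps, project=True)
xe, ye = euler_projected(1.0, 0.0, h, steps, project=False)
print(math.hypot(xp, yp))   # projected solution stays on the manifold: 1.0
print(math.hypot(xe, ye))   # plain Euler drifts off: about 1.65
```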
An Improved Harmony Search Algorithm for Power Distribution Network Planning
Wei Sun
2015-01-01
Distribution network planning, because it involves many variables and constraints, is a multiobjective, discrete, nonlinear, large-scale optimization problem. The harmony search (HS) algorithm is a metaheuristic inspired by the improvisation process of music players. The HS algorithm has several impressive advantages, such as easy implementation, few adjustable parameters, and quick convergence, but it still has some defects, such as premature convergence and slow convergence speed. Starting from the defects of the standard algorithm and the characteristics of distribution network planning, an improved harmony search (IHS) algorithm is proposed in this paper. We set up a mathematical model of distribution network structure planning whose objective function is to minimize annual cost, subject to overload and radial-network constraints. The IHS algorithm is applied to solve this complex optimization model. The empirical results strongly indicate that the IHS algorithm can provide better results for the distribution network planning problem than other optimization algorithms.
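For reference, the improvisation loop of the standard HS algorithm (not the paper's IHS variant, and applied here to a toy continuous test function rather than to network planning; all parameter values are illustrative defaults) can be sketched as:

```python
import random

def harmony_search(f, dim, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05,
                   iters=10000, seed=1):
    """Minimal standard HS: improvise a new harmony from memory (HMCR),
    pitch-adjust it (PAR), and replace the worst memory member if better."""
    rng = random.Random(seed)
    lo, hi = bounds
    hm = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    cost = [f(h) for h in hm]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:                  # memory consideration
                v = hm[rng.randrange(hms)][d]
                if rng.random() < par:               # pitch adjustment
                    v += bw * rng.uniform(-1.0, 1.0)
                v = min(max(v, lo), hi)
            else:                                    # random selection
                v = rng.uniform(lo, hi)
            new.append(v)
        c = f(new)
        worst = max(range(hms), key=cost.__getitem__)
        if c < cost[worst]:                          # update harmony memory
            hm[worst], cost[worst] = new, c
    best = min(range(hms), key=cost.__getitem__)
    return hm[best], cost[best]

# Minimize the sphere function as a smoke test.
sol, val = harmony_search(lambda x: sum(v * v for v in x), dim=5, bounds=(-10, 10))
print(val)   # close to 0
```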
Siegler, Robert S.
2007-01-01
Children's thinking is highly variable at every level of analysis, from neural and associative levels to the level of strategies, theories, and other aspects of high-level cognition. This variability exists within people as well as between them; individual children often rely on different strategies or representations on closely related problems…
Adiabatic quantum search algorithm for structured problems
Roland, Jeremie; Cerf, Nicolas J.
2003-01-01
The study of quantum computation has been motivated by the hope of finding efficient quantum algorithms for solving classically hard problems. In this context, quantum algorithms by local adiabatic evolution have been shown to solve an unstructured search problem with a quadratic speedup over a classical search, just as Grover's algorithm. In this paper, we study how the structure of the search problem may be exploited to further improve the efficiency of these quantum adiabatic algorithms. We show that by nesting a partial search over a reduced set of variables into a global search, it is possible to devise quantum adiabatic algorithms with a complexity that, although still exponential, grows with a reduced order in the problem size.
Orlando – Regalón Anias
2012-11-01
Many dynamic systems have first-order mathematical models with time-varying parameters. In these cases, classical tools do not always yield a control system that is stable, has good dynamic performance, and adequately rejects disturbances when the plant model deviates from the nominal one for which the design was carried out. In this article, the behavior of three control strategies is evaluated in the presence of parameter variations and disturbances: classical control, adaptive control, and robust control. A comparative study of these strategies is carried out with respect to design complexity, computational cost of implementation, and sensitivity to parameter variations and/or disturbances. The conclusions provide criteria for choosing the most adequate strategy, depending on the dynamic requirements of the application and on the available technical means.
El-Habashi, A.; Ahmed, S.; Lovko, V. J.
2017-12-01
Retrievals of Karenia brevis harmful algal blooms (KB HABs) in the West Florida Shelf (WFS), obtained from remote sensing reflectance (Rrs) measurements by the JPSS VIIRS satellite and processed using recently developed neural network (NN) algorithms, are examined and compared with other techniques. The NN approach is used because it does not require observations of Rrs at the 678 nm chlorophyll fluorescence channel. This channel, previously used on MODIS-A (the predecessor satellite) to satisfactorily detect KB HABs using the normalized fluorescence height approach, is unavailable on VIIRS. The NN is therefore trained on a synthetic data set of 20,000 IOPs based on a wide range of parameters from NOMAD, and requires as inputs only the Rrs measurements at the 486, 551 and 671 nm channels (VIIRS) and the 488, 555 and 667 nm channels (MODIS-A). These channels are less vulnerable to the atmospheric correction inadequacies that affect observations at the shorter blue wavelengths used by other algorithms. The NN retrieves phytoplankton absorption at 443 nm, which, when combined with backscatter information at 551 nm, is sufficient for effective KB HABs retrievals. NN retrievals of KB HABs in the WFS are found to compare favorably with retrievals using other algorithms, including OCI/OC3, GIOP and QAA version 5. Accuracies of VIIRS retrievals were then compared against all the in-situ measurements available over the 2012-2016 period for which concurrent or near-concurrent match-ups could be obtained with VIIRS observations. Retrieval statistics showed that the NN technique achieved the best accuracies. They also highlight the impact of temporal variability on retrieval accuracy, showing the importance of a shorter overlap time window between in-situ measurement and satellite retrieval: retrievals within a 15 minute window showed very significantly improved accuracies over those attained with a 100 minute window.
A combinational fast algorithm for image reconstruction
Wu Zhongquan
1987-01-01
A combinational fast algorithm has been developed in order to increase the speed of reconstruction. First, an interpolation method based on B-spline functions is used in image reconstruction. Next, the influence of the boundary conditions assumed here on the interpolation of filtered projections and on the image reconstruction is discussed. It is shown that this boundary condition has almost no influence on the image in the central region of the image space, because the interpolation error rapidly decreases by a factor of ten when shifting two pixels from the edge toward the center. In addition, a fast algorithm for computing the detecting angle has been used with the above interpolation algorithm, and the cost of detecting-angle computation is reduced by a factor of two. The implementation results show that, at the same subjective and objective fidelity, the computational cost of interpolation using this algorithm is about one-twelfth that of the conventional algorithm.
Adaptive Filtering Algorithms and Practical Implementation
Diniz, Paulo S R
2013-01-01
In the fourth edition of Adaptive Filtering: Algorithms and Practical Implementation, author Paulo S.R. Diniz presents the basic concepts of adaptive signal processing and adaptive filtering in a concise and straightforward manner. The main classes of adaptive filtering algorithms are presented in a unified framework, using clear notation that facilitates actual implementation. The main algorithms are described in tables, which are detailed enough to allow the reader to verify the covered concepts. Many examples address problems drawn from actual applications. New material in this edition includes: Analytical and simulation examples in Chapters 4, 5, 6 and 10 Appendix E, which summarizes the analysis of set-membership algorithm Updated problems and references Providing a concise background on adaptive filtering, this book covers the family of LMS, affine projection, RLS and data-selective set-membership algorithms as well as nonlinear, sub-band, blind, IIR adaptive filtering, and more. Several problems are...
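As a taste of the book's subject matter, here is a minimal LMS adaptive filter (the first of the algorithm families mentioned) identifying an unknown FIR system — an illustrative sketch, not code from the book; the plant coefficients and step size are arbitrary:

```python
import random

def lms_identify(unknown, n_taps, mu=0.05, n_samples=5000, seed=0):
    """LMS adaptive filter in system-identification configuration:
    drive plant and filter with the same input, adapt on the error."""
    rng = random.Random(seed)
    w = [0.0] * n_taps                  # adaptive weights
    buf = [0.0] * n_taps                # input delay line, newest sample first
    for _ in range(n_samples):
        x = rng.uniform(-1.0, 1.0)      # white excitation
        buf = [x] + buf[:-1]
        d = sum(h * u for h, u in zip(unknown, buf))    # desired (plant) output
        y = sum(wi * u for wi, u in zip(w, buf))        # adaptive filter output
        e = d - y
        w = [wi + mu * e * u for wi, u in zip(w, buf)]  # LMS weight update
    return w

h_true = [0.5, -0.3, 0.2, 0.1]          # hypothetical unknown impulse response
w = lms_identify(h_true, n_taps=4)
print([round(v, 3) for v in w])         # converges to the unknown coefficients
```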
Chandrasekharan, Shailesh
2000-01-01
Cluster algorithms have recently been used to eliminate sign problems that plague Monte Carlo methods in a variety of systems. In particular, such algorithms can also be used to solve sign problems associated with the permutation of fermion world lines. This solution leads to the possibility of designing fermion cluster algorithms in certain cases. Using the example of free non-relativistic fermions, we discuss the ideas underlying the algorithm.
Autonomous Star Tracker Algorithms
Betto, Maurizio; Jørgensen, John Leif; Kilsgaard, Søren
1998-01-01
Proposal, in response to an ESA R.f.P., to design algorithms for autonomous star tracker operations. The proposal also included the development of a star tracker breadboard to test the algorithms' performance.
Variable-spot ion beam figuring
Wu, Lixiang; Qiu, Keqiang; Fu, Shaojun
2016-01-01
This paper introduces a new scheme of ion beam figuring (IBF), or rather variable-spot IBF, which is conducted at a constant scanning velocity with variable-spot ion beam collimated by a variable diaphragm. It aims at improving the reachability and adaptation of the figuring process within the limits of machine dynamics by varying the ion beam spot size instead of the scanning velocity. In contrast to the dwell time algorithm in the conventional IBF, the variable-spot IBF adopts a new algorithm, which consists of the scan path programming and the trajectory optimization using pattern search. In this algorithm, instead of the dwell time, a new concept, integral etching time, is proposed to interpret the process of variable-spot IBF. We conducted simulations to verify its feasibility and practicality. The simulation results indicate the variable-spot IBF is a promising alternative to the conventional approach.
Algorithms For Integrating Nonlinear Differential Equations
Freed, A. D.; Walker, K. P.
1994-01-01
Improved algorithms were developed for use in the numerical integration of systems of nonhomogeneous, nonlinear, first-order, ordinary differential equations. In comparison with prior integration algorithms, these algorithms offer greater stability and accuracy. Several are asymptotically correct, thereby enabling retention of stability and accuracy when large increments of the independent variable are used. The attainable accuracies are demonstrated by applying the algorithms to systems of nonlinear, first-order differential equations that arise in the study of viscoplastic behavior, the spread of the acquired immune-deficiency syndrome (AIDS) virus, and predator/prey populations.
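To illustrate why stability at large increments of the independent variable matters (a generic stiff-equation demonstration, not the article's algorithms): on the linear test equation y′ = −λy with λh ≫ 1, explicit Euler diverges while backward Euler, whose amplification factor tends to 0 as λh → ∞, remains stable.

```python
def step_explicit(y, lam, h):
    return y + h * (-lam * y)       # forward Euler: amplification 1 - lam*h

def step_implicit(y, lam, h):
    return y / (1.0 + lam * h)      # backward Euler: amplification 1/(1 + lam*h)

lam, h, n = 1000.0, 0.01, 100       # lam*h = 10: a large, stiff step size
ye = yi = 1.0
for _ in range(n):
    ye = step_explicit(ye, lam, h)
    yi = step_implicit(yi, lam, h)
print(ye)   # explicit Euler diverges: |1 - lam*h| = 9 growth per step
print(yi)   # implicit Euler decays stably toward 0, like the true solution
```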