WorldWideScience

Sample records for squares background prediction

  1. DNDO Report: Predicting Solar Modulation Potentials for Modeling Cosmic Background Radiation

    Energy Technology Data Exchange (ETDEWEB)

    Behne, Patrick Alan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)]

    2016-08-08

    The modeling of the detectability of special nuclear material (SNM) at ports and border crossings requires accurate knowledge of the background radiation at those locations. Background radiation originates from two main sources, cosmic and terrestrial. Cosmic background is produced by high-energy galactic cosmic rays (GCR) entering the atmosphere and inducing a cascade of particles that eventually impact the earth’s surface. The solar modulation potential represents one of the primary inputs to modeling cosmic background radiation. Usoskin et al. formally define solar modulation potential as “the mean energy loss [per unit charge] of a cosmic ray particle inside the heliosphere…” Modulation potential, a function of elevation, location, and time, shares an inverse relationship with cosmic background radiation. As a result, radiation detector thresholds require adjustment to account for differing background levels, caused partly by differing solar modulations. Failure to do so can result in higher rates of false positives and failed detection of SNM for low and high levels of solar modulation potential, respectively. This study focuses on solar modulation’s time dependence, and seeks the best method to predict modulation for future dates using Python. To address the task of predicting future solar modulation, we utilize both non-linear least squares sinusoidal curve fitting and cubic spline interpolation. This material will be published in the Transactions of the ANS Winter Meeting, November 2016.
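
    As one plausible realization of the two approaches named above, the sketch below fits a sinusoid by non-linear least squares and builds a cubic spline with SciPy, then extrapolates both to future dates. The roughly 11-year cycle, the synthetic modulation-potential history, and all parameter values are illustrative assumptions, not values from the report.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import curve_fit

def sinusoid(t, mean, amp, period, phase):
    """Simple sinusoidal model of the solar modulation potential (MV)."""
    return mean + amp * np.sin(2.0 * np.pi * t / period + phase)

# Synthetic monthly modulation-potential history (decimal years, MV).
rng = np.random.default_rng(0)
t_obs = np.arange(1990.0, 2016.0, 1.0 / 12.0)
phi_obs = 650 + 300 * np.sin(2 * np.pi * t_obs / 11.0) + 40 * rng.normal(size=t_obs.size)

# Non-linear least-squares fit of the sinusoid, seeded near the ~11-year cycle.
p0 = [650.0, 300.0, 11.0, 0.0]
params, _ = curve_fit(sinusoid, t_obs, phi_obs, p0=p0)

# Cubic-spline alternative: interpolate the history and extrapolate forward.
spline = CubicSpline(t_obs, phi_obs, extrapolate=True)

t_future = np.array([2017.0, 2018.0])
print("sinusoid prediction (MV):", sinusoid(t_future, *params))
print("spline prediction   (MV):", spline(t_future))
```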

  2. Prediction of in vivo background in phoswich lung count spectra

    International Nuclear Information System (INIS)

    Richards, N.W.

    1999-01-01

    Phoswich scintillation counters are used to detect actinides deposited in the lungs. The resulting spectra, however, contain Compton background from the decay of ⁴⁰K, which occurs naturally in the striated muscle tissue of the body. To determine the counts due to actinides in a lung count spectrum, the counts due to ⁴⁰K scatter must first be subtracted out. The ⁴⁰K background in the phoswich NaI(Tl) spectrum was predicted from an energy region of interest called the monitor region, which lies above the ²³⁸Pu region and the ²⁴¹Am region where the photopeaks from ²³⁸Pu and ²⁴¹Am occur. Empirical models were developed to predict the backgrounds in the ²³⁸Pu and ²⁴¹Am regions by testing multiple linear and nonlinear regression models. The initial multiple regression models contain a monitor region variable as well as the variables gender, (weight/height)^α, and interaction terms. Data were collected from 64 male and 63 female subjects with no internal exposure. For the ²³⁸Pu region, the only significant predictor was found to be the monitor region. For the ²⁴¹Am region, the monitor region was found to have the greatest effect on prediction, while gender was significant only when weight/height was included in a model. Gender-specific models were thus developed. The empirical models for the ²⁴¹Am region that contain weight/height were shown to have the best coefficients of determination (R²) and the lowest mean square error (MSE).
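
    A minimal sketch of the kind of empirical multiple-regression model described above, assuming a design with the monitor-region count rate, gender, a (weight/height)^α term, and an interaction as predictors; the synthetic subject data, the α value, and the coefficients are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, alpha = 127, 1.0                        # 64 male + 63 female subjects
monitor = rng.uniform(50, 200, n)          # counts in the monitor region
gender = rng.integers(0, 2, n)             # 0 = female, 1 = male
wh = rng.uniform(0.3, 0.6, n) ** alpha     # (weight/height)**alpha

# Synthetic "true" background in the 241Am region, for demonstration only.
y = 5 + 0.8 * monitor + 3.0 * gender + 20.0 * wh + rng.normal(0, 2, n)

# Design matrix with an intercept and a gender x (weight/height) interaction.
X = np.column_stack([np.ones(n), monitor, gender, wh, gender * wh])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

y_hat = X @ coef
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print("coefficients:", np.round(coef, 3), " R^2:", round(r2, 3))
```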

  3. Least-Square Prediction for Backward Adaptive Video Coding

    Directory of Open Access Journals (Sweden)

    Li Xin

    2006-01-01

    Full Text Available Almost all existing approaches towards video coding exploit the temporal redundancy by block-matching-based motion estimation and compensation. Regardless of its popularity, block matching still reflects an ad hoc understanding of the relationship between motion and intensity uncertainty models. In this paper, we present a novel backward adaptive approach, named "least-square prediction" (LSP), and demonstrate its potential in video coding. Motivated by the duality between edge contour in images and motion trajectory in video, we propose to derive the best prediction of the current frame from its causal past using the least-square method. It is demonstrated that LSP is particularly effective for modeling video material with slow motion and can be extended to handle fast motion by temporal warping and forward adaptation. For typical QCIF test sequences, LSP often achieves smaller MSE than the full-search, quarter-pel block matching algorithm (BMA) without the need of transmitting any overhead.
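
    The following is a simplified, hedged sketch of backward-adaptive least-square prediction: coefficients combining a small causal neighbourhood of the previous frame are re-estimated from already-available data, so no side information needs to be transmitted. The 3x3 neighbourhood, training-window size, and synthetic frames are assumptions, not the authors' configuration.

```python
import numpy as np

def lsp_predict(prev, cur, i, j, half=1, train=8):
    """Predict cur[i, j] from a (2*half+1)^2 patch of prev centred at (i, j)."""
    def patch(frame, r, c):
        return frame[r - half:r + half + 1, c - half:c + half + 1].ravel()

    # Build the least-squares training set from causal (already known) pixels.
    rows, targets = [], []
    for r in range(i - train, i):
        for c in range(j - train, j + train):
            if r - half < 0 or c - half < 0 or c + half >= prev.shape[1]:
                continue
            rows.append(patch(prev, r, c))
            targets.append(cur[r, c])
    A, b = np.asarray(rows, float), np.asarray(targets, float)

    # Backward adaptation: solve min ||A w - b||^2, then apply w at (i, j).
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return float(patch(prev, i, j) @ w)

rng = np.random.default_rng(1)
prev = rng.uniform(0, 255, (64, 64))
cur = np.roll(prev, 1, axis=1) + rng.normal(0, 2, prev.shape)   # slow motion
print("prediction:", lsp_predict(prev, cur, 32, 32), " actual:", cur[32, 32])
```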

  4. Partial Least Square with Savitzky Golay Derivative in Predicting Blood Hemoglobin Using Near Infrared Spectrum

    Directory of Open Access Journals (Sweden)

    Mohd Idrus Mohd Nazrul Effendy

    2018-01-01

    Full Text Available Near infrared spectroscopy (NIRS) is a reliable technique that is widely used in medical fields. Partial least squares was applied to predict blood hemoglobin concentration using NIRS. The aims of this paper are (i) to develop a predictive model for near infrared spectroscopic analysis in blood hemoglobin prediction, (ii) to establish the relationship between blood hemoglobin and the near infrared spectrum using a predictive model, and (iii) to evaluate the predictive accuracy of the model based on the root mean squared error (RMSE) and the coefficient of determination (r_p²). Partial least squares with first-order Savitzky-Golay (SG) derivative preprocessing (PLS-SGd1) showed the highest prediction performance, with RMSE = 0.7965 and r_p² = 0.9206 in K-fold cross-validation. The optimum number of latent variables (LV) and the frame length (f) were 32 and 27 nm, respectively. These findings suggest that the relationship between blood hemoglobin and the near infrared spectrum is strong, and that partial least squares with the first-order SG derivative is able to predict blood hemoglobin from near infrared spectral data.
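
    A small sketch of a PLS-SGd1-style pipeline under stated assumptions: a first-order Savitzky-Golay derivative (here a 27-point window standing in loosely for the reported 27 nm frame length) followed by PLS regression with cross-validated RMSE and r_p². The synthetic spectra and the component count are illustrative only.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 200))                  # 80 NIR spectra, 200 wavelengths
y = 2.0 * X[:, 50] + rng.normal(0, 0.1, 80)     # synthetic hemoglobin values

# First-order Savitzky-Golay derivative along the wavelength axis.
X_d1 = savgol_filter(X, window_length=27, polyorder=2, deriv=1, axis=1)

pls = PLSRegression(n_components=10)
y_cv = cross_val_predict(pls, X_d1, y, cv=5).ravel()

print("RMSE :", mean_squared_error(y, y_cv) ** 0.5)
print("r_p^2:", r2_score(y, y_cv))
```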

  5. Prediction of toxicity of nitrobenzenes using ab initio and least squares support vector machines

    International Nuclear Information System (INIS)

    Niazi, Ali; Jameh-Bozorghi, Saeed; Nori-Shargh, Davood

    2008-01-01

    A quantitative structure-property relationship (QSPR) study is suggested for the prediction of the toxicity (IGC₅₀) of nitrobenzenes. Ab initio theory was used to calculate some quantum chemical descriptors including electrostatic potentials and local charges at each atom, HOMO and LUMO energies, etc. Modeling of the IGC₅₀ of nitrobenzenes as a function of molecular structure was established by means of least squares support vector machines (LS-SVM). This model was applied for the prediction of the toxicity (IGC₅₀) of nitrobenzenes that were not included in the modeling procedure. The resulting model showed high prediction ability, with a root mean square error of prediction of 0.0049 for LS-SVM. The results show that the introduction of LS-SVM with quantum chemical descriptors drastically enhances predictive ability in QSAR studies, outperforming multiple linear regression and partial least squares.
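
    LS-SVM regression is not part of scikit-learn, so the sketch below solves the standard LS-SVM dual linear system directly with an RBF kernel; the γ and σ values and the synthetic descriptor data are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    n = X.shape[0]
    K = rbf_kernel(X, X, sigma)
    # Block system:  [0   1^T     ] [b    ]   [0]
    #                [1   K + I/g ] [alpha] = [y]
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]                       # bias b, dual coefficients alpha

def lssvm_predict(X_train, X_new, b, alpha, sigma=1.0):
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 6))                     # 6 quantum-chemical descriptors
y = X @ rng.normal(size=6) + 0.05 * rng.normal(size=40)   # synthetic toxicity values
b, alpha = lssvm_fit(X[:30], y[:30])
rmsep = np.sqrt(np.mean((lssvm_predict(X[:30], X[30:], b, alpha) - y[30:]) ** 2))
print("RMSEP:", rmsep)
```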

  6. Background-Modeling-Based Adaptive Prediction for Surveillance Video Coding.

    Science.gov (United States)

    Zhang, Xianguo; Huang, Tiejun; Tian, Yonghong; Gao, Wen

    2014-02-01

    The exponential growth of surveillance videos presents an unprecedented challenge for high-efficiency surveillance video coding technology. Compared with the existing coding standards that were basically developed for generic videos, surveillance video coding should be designed to make the best use of the special characteristics of surveillance videos (e.g., the relatively static background). To do so, this paper first conducts two analyses on how to improve the background and foreground prediction efficiencies in surveillance video coding. Following the analysis results, we propose a background-modeling-based adaptive prediction (BMAP) method. In this method, all blocks to be encoded are first classified into three categories. Then, according to the category of each block, two novel inter predictions are selectively utilized, namely, the background reference prediction (BRP) that uses the background modeled from the original input frames as the long-term reference and the background difference prediction (BDP) that predicts the current data in the background difference domain. For background blocks, the BRP can effectively improve the prediction efficiency using the higher quality background as the reference; whereas for foreground-background-hybrid blocks, the BDP can provide a better reference after subtracting its background pixels. Experimental results show that the BMAP can achieve at least twice the compression ratio on surveillance videos as AVC (MPEG-4 Advanced Video Coding) high profile, yet with only slightly higher encoding complexity. Moreover, for the foreground coding performance, which is crucial to the subjective quality of moving objects in surveillance videos, BMAP also obtains remarkable gains over several state-of-the-art methods.
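
    A heavily simplified sketch of the two inter-prediction modes described above, assuming a running-average background model and an arbitrary classification threshold: near-background blocks take the modelled background as the reference (BRP-like), while hybrid blocks are predicted in the background-difference domain (BDP-like). This is not the BMAP implementation, only an illustration of the idea.

```python
import numpy as np

def update_background(bg, frame, rate=0.05):
    """Temporal running average standing in for the background model."""
    return (1.0 - rate) * bg + rate * frame

def predict_block(cur_blk, prev_blk, cur_bg, prev_bg, thr=8.0):
    """Choose a BRP-like mode for (near-)background blocks, otherwise a BDP-like mode."""
    if np.mean(np.abs(cur_blk - cur_bg)) < thr:
        return cur_bg                              # BRP: modelled background as the reference
    # BDP: predict the background-difference (foreground) signal from the
    # previous frame's difference, then add the current background back.
    return (prev_blk - prev_bg) + cur_bg

rng = np.random.default_rng(0)
bg_prev = rng.uniform(0, 255, (16, 16))            # background model before the previous frame
prev = bg_prev + rng.normal(0, 1, bg_prev.shape)
prev[4:12, 4:12] += 40                             # foreground object in the previous frame
cur_bg = update_background(bg_prev, prev)          # background model for the current frame
cur = cur_bg + rng.normal(0, 1, cur_bg.shape)
cur[5:13, 5:13] += 40                              # the object has moved by one pixel
pred = predict_block(cur, prev, cur_bg, bg_prev)
print("mean abs residual:", np.mean(np.abs(cur - pred)).round(2))
```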

  7. Optimization of a neural network model for signal-to-background prediction in gamma-ray spectrometry

    International Nuclear Information System (INIS)

    Dragovic, S.; Onjia, A. (E-mail address of corresponding author: sdragovic@inep.co.yu)

    2005-01-01

    The artificial neural network (ANN) model was optimized for the prediction of the signal-to-background ratio (SBR) as a function of the measurement time in gamma-ray spectrometry. The network parameters: learning rate (α), momentum (μ), number of epochs (E) and number of nodes in the hidden layer (N) were optimized simultaneously employing the variable-size simplex method. The most accurate model, with a root mean square (RMS) error of 0.073, was obtained using an ANN with the online backpropagation randomized (OBPR) algorithm with α = 0.27, μ = 0.36, E = 14800 and N = 9. Most of the predicted and experimental SBR values for the eight radionuclides (²²⁶Ra, ²¹⁴Bi, ²³⁵U, ⁴⁰K, ²³²Th, ¹³⁴Cs, ¹³⁷Cs and ⁷Be) studied in this work agreed to within 15%, which was a satisfactory accuracy. (author)
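
    The sketch below imitates the simultaneous simplex search over (α, μ, E, N) with SciPy's Nelder-Mead, using scikit-learn's SGD-trained MLP as a stand-in for the OBPR network of the study; the synthetic SBR data, the clipping bounds, and the starting point are assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.metrics import mean_squared_error
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = rng.uniform(0.0, 1.0, 120)[:, None]                                 # normalised measurement time
sbr = 5.0 * (1.0 - np.exp(-3.0 * t[:, 0])) + rng.normal(0, 0.1, 120)    # synthetic SBR values

def rms_error(params):
    """RMS prediction error of an MLP trained with the candidate hyperparameters."""
    lr, mom, epochs, nodes = params
    net = MLPRegressor(hidden_layer_sizes=(max(1, int(round(nodes))),),
                       solver="sgd",
                       learning_rate_init=min(max(abs(lr), 1e-4), 1.0),
                       momentum=min(max(mom, 0.0), 0.99),
                       max_iter=max(10, int(round(epochs))),
                       random_state=0)
    net.fit(t[:80], sbr[:80])
    pred = net.predict(t[80:])
    if not np.all(np.isfinite(pred)):
        return 1e6                                  # penalise diverged settings
    return mean_squared_error(sbr[80:], pred) ** 0.5

# Simultaneous simplex search over (alpha, mu, E, N), started near the reported values.
result = minimize(rms_error, x0=[0.27, 0.36, 500.0, 9.0], method="Nelder-Mead")
print("optimised (alpha, mu, E, N):", np.round(result.x, 3))
print("RMS error:", round(result.fun, 4))
```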

  8. Discussion About Nonlinear Time Series Prediction Using Least Squares Support Vector Machine

    International Nuclear Information System (INIS)

    Xu Ruirui; Bian Guoxing; Gao Chenfeng; Chen Tianlun

    2005-01-01

    The least squares support vector machine (LS-SVM) is used to study nonlinear time series prediction. First, the parameter γ and the multi-step prediction capabilities of the LS-SVM network are discussed. Then we employ a clustering method in the model to prune the number of support values. The learning rate and the noise-filtering capability of the LS-SVM are both greatly improved.

  9. Intelligent Quality Prediction Using Weighted Least Square Support Vector Regression

    Science.gov (United States)

    Yu, Yaojun

    A novel quality prediction method with a mobile time window is proposed for small-batch producing processes based on weighted least squares support vector regression (LS-SVR). The design steps and learning algorithm are also addressed. In the method, weighted LS-SVR is taken as the intelligent kernel, with which the small-batch learning problem is solved well: nearer samples in the history data are assigned larger weights, while farther samples are assigned smaller weights. A typical machining process of cutting a bearing outer race is carried out and the real measured data are used for a comparison experiment. The experimental results demonstrate that the prediction error of the weighted LS-SVR based model is only 20%-30% that of the standard LS-SVR based one under the same conditions. It provides a better candidate for quality prediction of small-batch producing processes.

  10. Compressive Strength Prediction of Square Concrete Columns Retrofitted with External Steel Collars

    Directory of Open Access Journals (Sweden)

    Pudjisuryadi, P.

    2013-01-01

    Full Text Available Transverse confining stress in concrete members, commonly provided by transverse reinforcement, has been recognized to enhance strength and ductility. Nowadays, the confining method has been further developed into an external confinement approach. This type of confinement can be used for retrofitting existing concrete columns. Many external confining techniques have been proven to be successful in retrofitting circular columns. However, for square or rectangular columns, providing effective confining stress by an external retrofitting method is not a simple task due to the high stress concentration at the column’s corners. This paper proposes an analytical model to predict the peak strength of square concrete columns confined by external steel collars. Comparison with the experimental results showed that the model can predict the peak strength reasonably well. However, it should be noted that a relatively larger amount of steel is needed to achieve comparable column strength enhancement when compared with that of conventional internally-confined columns.

  11. Weighted least-squares criteria for electrical impedance tomography

    International Nuclear Information System (INIS)

    Kallman, J.S.; Berryman, J.G.

    1992-01-01

    Methods are developed for design of electrical impedance tomographic reconstruction algorithms with specified properties. Assuming a starting model with constant conductivity or some other specified background distribution, an algorithm with the following properties is found: (1) the optimum constant for the starting model is determined automatically; (2) the weighted least-squares error between the predicted and measured power dissipation data is as small as possible; (3) the variance of the reconstructed conductivity from the starting model is minimized; (4) potential distributions with the largest volume integral of gradient squared have the least influence on the reconstructed conductivity, and therefore distributions most likely to be corrupted by contact impedance effects are deemphasized; (5) cells that dissipate the most power during the current injection tests tend to deviate least from the background value. The resulting algorithm maps the reconstruction problem into a vector space where the contribution to the inversion from the background conductivity remains invariant, while the optimum contributions in orthogonal directions are found. For a starting model with nonconstant conductivity, the reconstruction algorithm has analogous properties

  12. Prediction of earth rotation parameters based on improved weighted least squares and autoregressive model

    Directory of Open Access Journals (Sweden)

    Sun Zhangzhen

    2012-08-01

    Full Text Available In this paper, an improved weighted least squares (WLS) model, together with an autoregressive (AR) model, is proposed to improve the prediction accuracy of earth rotation parameters (ERP). Four weighting schemes are developed and the optimal power e for determination of the weight elements is studied. The results show that the improved WLS-AR model can improve the ERP prediction accuracy effectively, and that for different prediction intervals of ERP, different weighting schemes should be chosen.
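
    A compact sketch of a WLS + AR scheme of this kind: a weighted least-squares fit of a trend plus periodic terms (recent epochs weighted more heavily via a power-law scheme), with the residuals modelled and extrapolated by an autoregressive process. The weighting exponent, periods, AR order, and synthetic series are assumptions, not the paper's choices.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(0)
t = np.arange(2000.0)                                   # epochs (days)
lod = (1e-4 * t
       + 0.2 * np.sin(2 * np.pi * t / 365.24)
       + 0.1 * np.sin(2 * np.pi * t / 182.62)
       + 0.02 * rng.normal(size=t.size))                # synthetic ERP-like series

def design(tt):
    """Trend plus annual and semi-annual periodic terms."""
    return np.column_stack([np.ones_like(tt), tt,
                            np.sin(2 * np.pi * tt / 365.24), np.cos(2 * np.pi * tt / 365.24),
                            np.sin(2 * np.pi * tt / 182.62), np.cos(2 * np.pi * tt / 182.62)])

# Weighted least squares: recent epochs get larger weights, w_i ~ ((i+1)/n)**e.
e = 2.0
w = ((np.arange(t.size) + 1) / t.size) ** e
sw = np.sqrt(w)
coef, *_ = np.linalg.lstsq(sw[:, None] * design(t), sw * lod, rcond=None)

# AR model on the WLS residuals; forecast = deterministic part + AR extrapolation.
resid = lod - design(t) @ coef
ar = AutoReg(resid, lags=30).fit()
t_new = np.arange(2000.0, 2030.0)
forecast = design(t_new) @ coef + ar.predict(start=t.size, end=t.size + t_new.size - 1)
print("first 5 forecast values:", np.round(forecast[:5], 4))
```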

  13. Prediction on long-term mean and mean square pollutant concentrations in an urban atmosphere

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, S; Lamb, R G; Seinfeld, J H

    1976-01-01

    The general problem of predicting long-term average (say yearly) pollutant concentrations in an urban atmosphere is formulated. The pollutant concentration can be viewed as a random process, the complete description of which requires knowledge of its probability density function, which is unknown. The mean concentration is the first moment of the concentration distribution, and at present there exist a number of models for predicting the long-term mean concentration of an inert pollutant. The second moment, or mean square concentration, indicates additional features of the distribution, such as the level of fluctuations about the mean. In the paper a model proposed by Lamb for the long-term mean concentration is reviewed, and a new model for prediction of the long-term mean square concentration of an inert air pollutant is derived. The properties and uses of the model are discussed, and the equations defining the model are presented in a form for direct application to an urban area.

  14. A Bayesian least-squares support vector machine method for predicting the remaining useful life of a microwave component

    Directory of Open Access Journals (Sweden)

    Fuqiang Sun

    2017-01-01

    Full Text Available Rapid and accurate lifetime prediction of critical components in a system is important to maintaining the system’s reliable operation. To this end, many lifetime prediction methods have been developed to handle various failure-related data collected in different situations. Among these methods, machine learning and Bayesian updating are the most popular ones. In this article, a Bayesian least-squares support vector machine method that combines least-squares support vector machine with Bayesian inference is developed for predicting the remaining useful life of a microwave component. A degradation model describing the change in the component’s power gain over time is developed, and the point and interval remaining useful life estimates are obtained considering a predefined failure threshold. In our case study, the radial basis function neural network approach is also implemented for comparison purposes. The results indicate that the Bayesian least-squares support vector machine method is more precise and stable in predicting the remaining useful life of this type of components.

  15. Numerical prediction of turbulent heat transfer augmentation in an annular fuel channel with two-dimensional square ribs

    International Nuclear Information System (INIS)

    Takase, Kazuyuki

    1996-01-01

    The square-ribbed fuel rod for high temperature gas-cooled reactors was developed in order to enhance the turbulent heat transfer in comparison with the standard fuel rod. To evaluate the heat transfer performance of the square-ribbed fuel rod, the turbulent heat transfer coefficients in an annular fuel channel with repeated two-dimensional square ribs were analyzed numerically on a fully developed incompressible flow using the k - ε turbulence model and the two-dimensional axisymmetrical coordinate system. Numerical analyses were carried out for a range of Reynolds numbers from 3000 to 20000 and ratios of square-rib pitch to height of 10, 20 and 40, respectively. The predicted values of the heat transfer coefficients agreed within an error of 10% for the square-rib pitch to height ratio of 10, 20% for 20 and 25% for 40, respectively, with the heat transfer empirical correlations obtained from the experimental data. It was concluded by the present study that the effect of the heat transfer augmentation by square ribs could be predicted sufficiently by the present numerical simulations and also a part of its mechanism could be explained by means of the change in the turbulence kinematic energy distribution along the flow direction. (author)

  16. Numerical prediction of augmented turbulent heat transfer in an annular fuel channel with repeated two-dimensional square ribs

    International Nuclear Information System (INIS)

    Takase, K.

    1996-01-01

    The square-ribbed fuel rod for high temperature gas-cooled reactors was designed and developed so as to enhance the turbulent heat transfer in comparison with the previous standard fuel rod. The turbulent heat transfer characteristics in an annular fuel channel with repeated two-dimensional square ribs were analysed numerically on a fully developed incompressible flow using the k-ε turbulence model and the two-dimensional axisymmetrical coordinate system. Numerical analyses were carried out under the conditions of Reynolds numbers from 3000 to 20000 and ratios of square-rib pitch to height of 10, 20 and 40 respectively. The predictions of the heat transfer coefficients agreed well within an error of 10% for the square-rib pitch to height ratio of 10, 20% for 20 and 25% for 40 respectively, with the heat transfer empirical correlations obtained from the experimental data due to the simulated square-ribbed fuel rods. Therefore it was found that the effect of heat transfer augmentation due to the square ribs could be predicted by the present numerical simulations and the mechanism could be explained by the change in the turbulence kinematic energy distribution along the flow direction. (orig.)

  17. Prediction of ferric iron precipitation in bioleaching process using partial least squares and artificial neural network

    Directory of Open Access Journals (Sweden)

    Golmohammadi Hassan

    2013-01-01

    Full Text Available A quantitative structure-property relationship (QSPR) study based on partial least squares (PLS) and artificial neural network (ANN) was developed for the prediction of ferric iron precipitation in the bioleaching process. The leaching temperature, initial pH, oxidation/reduction potential (ORP), ferrous concentration and particle size of ore were used as inputs to the network. The output of the model was ferric iron precipitation. The optimal condition of the neural network was obtained by adjusting various parameters by trial-and-error. After optimization and training of the network according to the back-propagation algorithm, a 5-5-1 neural network was generated for prediction of ferric iron precipitation. The root mean square errors of the neural-network-calculated ferric iron precipitation for the training, prediction and validation sets are 32.860, 40.739 and 35.890, respectively, which are smaller than those obtained by the PLS model (180.972, 165.047 and 149.950, respectively). The results obtained reveal the reliability and good predictivity of the neural network model for the prediction of ferric iron precipitation in the bioleaching process.

  18. Heat transfer prediction in a square porous medium using artificial neural network

    Science.gov (United States)

    Ahamad, N. Ameer; Athani, Abdulgaphur; Badruddin, Irfan Anjum

    2018-05-01

    Heat transfer in porous media has been investigated extensively because of its applications in various important fields. Neural network approach is applied to analyze steady two dimensional free convection flows through a porous medium fixed in a square cavity. The backpropagation neural network is trained and used to predict the heat transfer. The results are compared with available information in the literature. It is found that the heat transfer increases with increase in Rayleigh number. It is further found that the local Nusselt number decreases along the height of cavity. The neural network is found to predict the heat transfer behavior accurately for given parameters.

  19. Short-term traffic flow prediction model using particle swarm optimization–based combined kernel function-least squares support vector machine combined with chaos theory

    Directory of Open Access Journals (Sweden)

    Qiang Shang

    2016-08-01

    Full Text Available Short-term traffic flow prediction is an important part of intelligent transportation systems research and applications. For further improving the accuracy of short-time traffic flow prediction, a novel hybrid prediction model (multivariate phase space reconstruction–combined kernel function-least squares support vector machine) based on multivariate phase space reconstruction and combined kernel function-least squares support vector machine is proposed. The C-C method is used to determine the optimal time delay and the optimal embedding dimension of the traffic variables’ (flow, speed, and occupancy) time series for phase space reconstruction. The G-P method is selected to calculate the correlation dimension of the attractor, which is an important index for judging the chaotic characteristics of the traffic variables’ series. The optimal input form of the combined kernel function-least squares support vector machine model is determined by multivariate phase space reconstruction, and the model’s parameters are optimized by the particle swarm optimization algorithm. Finally, case validation is carried out using the measured data of an expressway in Xiamen, China. The experimental results suggest that the new proposed model yields better predictions compared with similar models (combined kernel function-least squares support vector machine, multivariate phase space reconstruction–generalized kernel function-least squares support vector machine, and phase space reconstruction–combined kernel function-least squares support vector machine), which indicates that the new proposed model exhibits stronger prediction ability and robustness.
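
    As an illustration of the multivariate phase space reconstruction step, the sketch below builds delay-coordinate inputs for flow, speed, and occupancy and stacks them into one input matrix; the delays and embedding dimensions are placeholders rather than C-C method outputs, and the C-C, G-P, and PSO-optimized LS-SVM stages are not implemented here.

```python
import numpy as np

def delay_embed(x, m, tau):
    """Delay-coordinate matrix with rows (x_t, x_{t-tau}, ..., x_{t-(m-1)tau})."""
    n = x.size - (m - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(m)][::-1])

rng = np.random.default_rng(0)
T = 500
flow = 100 + 20 * np.sin(np.arange(T) / 12.0) + rng.normal(0, 2, T)
speed = 60 - 0.1 * flow + rng.normal(0, 1, T)
occupancy = 0.01 * flow + rng.normal(0, 0.2, T)

# Placeholder (m, tau) per variable; in the paper these come from the C-C method.
params = {"flow": (3, 2), "speed": (2, 3), "occupancy": (2, 2)}
blocks = [delay_embed(series, *params[name])
          for name, series in (("flow", flow), ("speed", speed), ("occupancy", occupancy))]

# Align the blocks on their last rows and stack them into the combined input form.
n = min(b.shape[0] for b in blocks)
X = np.hstack([b[-n:] for b in blocks])
y = flow[-n + 1:]                          # one-step-ahead flow target
print("reconstructed inputs:", X[:-1].shape, " targets:", y.shape)
```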

  20. Prediction of background in low-energy spectrum of phoswich detector

    International Nuclear Information System (INIS)

    Arun, B.; Manohari, M.; Mathiyarasu, R.; Rajagopal, V.; Jose, M.T.

    2014-01-01

    In vivo monitoring of actinides in occupational workers is done using a phoswich detector by measuring the low-energy X rays and gamma rays. Quantification of actinides like plutonium and americium in the lungs is extremely difficult due to the higher background in the low-energy regions, which comes from the ambient background as well as from the subject. In the latter case, it is mainly due to the Compton scattering of body potassium, which varies from person to person. Hence, an accurate prediction of subject-specific background counts in the lower-energy regions is an essential element in the in vivo measurement of plutonium and americium. Empirical equations are established for the prediction of the background count rate in the ²³⁹Pu and ²⁴¹Am lower-energy regions, called 'target regions', as a function of the count rate in the monitoring region (97-130 keV)/⁴⁰K region of the high-energy spectrum, the weight-to-height ratio of the subject (scattering parameter) and the gender. (authors)

  1. Prediction of beef marbling using Hyperspectral Imaging (HSI) and Partial Least Squares Regression (PLSR)

    Directory of Open Access Journals (Sweden)

    Victor Aredo

    2017-01-01

    Full Text Available The aim of this study was to build a model to predict beef marbling using HSI and Partial Least Squares Regression (PLSR). In total, 58 samples of longissimus dorsi muscle were scanned by a HSI system (400-1000 nm) in reflectance mode, using 44 samples to build the PLSR model and 14 samples for model validation. The Japanese Beef Marbling Standard (BMS) was used as reference by 15 middle-trained judges for the sample evaluation. The scores were assigned as continuous values and varied from 1.2 to 5.3 BMS. The PLSR model showed a high correlation coefficient in the prediction (r = 0.95), a low Standard Error of Calibration (SEC) of 0.2 BMS score, and a low Standard Error of Prediction (SEP) of 0.3 BMS score.

  2. Precise predictions for V + jets dark matter backgrounds

    Energy Technology Data Exchange (ETDEWEB)

    Lindert, J.M.; Glover, N.; Morgan, T.A. [University of Durham, Department of Physics, Institute for Particle Physics Phenomenology, Durham (United Kingdom); Pozzorini, S.; Gehrmann, T.; Schoenherr, M. [Universitaet Zuerich, Physik-Institut, Zurich (Switzerland); Boughezal, R. [Argonne National Laboratory, High Energy Physics Division, Argonne, IL (United States); Campbell, J.M. [Fermilab, Batavia, IL (United States); Denner, A. [Universitaet Wuerzburg, Institut fuer Theoretische Physik und Astrophysik, Wuerzburg (Germany); Dittmaier, S.; Maierhoefer, P. [Albert-Ludwigs-Universitaet Freiburg, Physikalisches Institut, Freiburg (Germany); Gehrmann-De Ridder, A. [Universitaet Zuerich, Physik-Institut, Zurich (Switzerland); Institute for Theoretical Physics, ETH, Zurich (Switzerland); Huss, A. [Institute for Theoretical Physics, ETH, Zurich (Switzerland); Kallweit, S.; Mangano, M.L.; Salam, G.P. [CERN, Theoretical Physics Department, Geneva (Switzerland); Mueck, A. [RWTH Aachen University, Institut fuer Theoretische Teilchenphysik und Kosmologie, Aachen (Germany); Petriello, F. [Argonne National Laboratory, High Energy Physics Division, Argonne, IL (United States); Northwestern University, Department of Physics and Astronomy, Evanston, IL (United States); Williams, C. [University at Buffalo, The State University of New York, Department of Physics, Buffalo (United States)

    2017-12-15

    High-energy jets recoiling against missing transverse energy (MET) are powerful probes of dark matter at the LHC. Searches based on large MET signatures require a precise control of the Z(νν̄) + jet background in the signal region. This can be achieved by taking accurate data in control regions dominated by Z(l⁺l⁻) + jet, W(lν) + jet and γ + jet production, and extrapolating to the Z(νν̄) + jet background by means of precise theoretical predictions. In this context, recent advances in perturbative calculations open the door to significant sensitivity improvements in dark matter searches. In this spirit, we present a combination of state-of-the-art calculations for all relevant V + jets processes, including throughout NNLO QCD corrections and NLO electroweak corrections supplemented by Sudakov logarithms at two loops. Predictions at parton level are provided together with detailed recommendations for their usage in experimental analyses based on the reweighting of Monte Carlo samples. Particular attention is devoted to the estimate of theoretical uncertainties in the framework of dark matter searches, where subtle aspects such as correlations across different V + jet processes play a key role. The anticipated theoretical uncertainty in the Z(νν̄) + jet background is at the few percent level up to the TeV range. (orig.)

  3. An Improved Generalized Predictive Control in a Robust Dynamic Partial Least Square Framework

    Directory of Open Access Journals (Sweden)

    Jin Xin

    2015-01-01

    Full Text Available To tackle the sensitivity to outliers in system identification, a new robust dynamic partial least squares (PLS) model based on an outlier detection method is proposed in this paper. An improved radial basis function network (RBFN) is adopted to construct the predictive model from the input and output datasets, and a hidden Markov model (HMM) is applied to detect the outliers. After the outliers are removed, a more robust dynamic PLS model is obtained. In addition, an improved generalized predictive control (GPC) with tuning weights under the dynamic PLS framework is proposed to deal with the interaction caused by model mismatch. The results of two simulations demonstrate the effectiveness of the proposed method.

  4. Weighted-Average Least Squares Prediction

    NARCIS (Netherlands)

    Magnus, Jan R.; Wang, Wendun; Zhang, Xinyu

    2016-01-01

    Prediction under model uncertainty is an important and difficult issue. Traditional prediction methods (such as pretesting) are based on model selection followed by prediction in the selected model, but the reported prediction and the reported prediction variance ignore the uncertainty from the selection procedure.

  5. Source allocation by least-squares hydrocarbon fingerprint matching

    Energy Technology Data Exchange (ETDEWEB)

    William A. Burns; Stephen M. Mudge; A. Edward Bence; Paul D. Boehm; John S. Brown; David S. Page; Keith R. Parker [W.A. Burns Consulting Services LLC, Houston, TX (United States)

    2006-11-01

    There has been much controversy regarding the origins of the natural polycyclic aromatic hydrocarbon (PAH) and chemical biomarker background in Prince William Sound (PWS), Alaska, site of the 1989 Exxon Valdez oil spill. Different authors have attributed the sources to various proportions of coal, natural seep oil, shales, and stream sediments. The different probable bioavailabilities of hydrocarbons from these various sources can affect environmental damage assessments from the spill. This study compares two different approaches to source apportionment with the same data (136 PAHs and biomarkers) and investigates whether increasing the number of coal source samples from one to six increases coal attributions. The constrained least-squares (CLS) source allocation method, which fits concentrations, meets geologic and chemical constraints better than partial least squares (PLS), which predicts variance. The field data set was expanded to include coal samples reported by others, and CLS fits confirm earlier findings of low coal contributions to PWS. 15 refs., 5 figs.

  6. Whitening of Background Brain Activity via Parametric Modeling

    Directory of Open Access Journals (Sweden)

    Nidal Kamel

    2007-01-01

    Full Text Available Several signal subspace techniques have been recently suggested for the extraction of the visual evoked potential signals from brain background colored noise. The majority of these techniques assume the background noise to be white; for colored noise, it is suggested that the noise first be whitened, without further elaboration on how this might be done. In this paper, we investigate the whitening capabilities of two parametric techniques: a direct one based on the Levinson solution of the Yule-Walker equations, called AR Yule-Walker, and an indirect one based on the least-squares solution of the forward-backward linear prediction (FBLP) equations, called AR-FBLP. The whitening effect of the two algorithms is investigated with real background electroencephalogram (EEG) colored noise and compared in the time and frequency domains.
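
    A minimal sketch of AR Yule-Walker whitening, assuming a direct Toeplitz solve of the Yule-Walker equations and a synthetic AR(2) "coloured EEG" noise; applying the resulting prediction-error filter flattens the autocorrelation. The AR order and noise model are illustrative assumptions.

```python
import numpy as np
from scipy.signal import lfilter

def ar_yule_walker(x, order):
    """Solve the Yule-Walker equations for the AR coefficients via a direct Toeplitz solve."""
    x = x - x.mean()
    r = np.correlate(x, x, mode="full")[x.size - 1:x.size + order] / x.size
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])     # a_k such that x_t ~ sum_k a_k x_{t-k}

rng = np.random.default_rng(0)
white = rng.normal(size=4096)
coloured = lfilter([1.0], [1.0, -1.5, 0.7], white)    # synthetic coloured EEG-like noise

a = ar_yule_walker(coloured, order=8)
whitened = lfilter(np.r_[1.0, -a], [1.0], coloured)   # prediction-error (whitening) filter

print("lag-1 autocorr before:", np.corrcoef(coloured[:-1], coloured[1:])[0, 1].round(3))
print("lag-1 autocorr after: ", np.corrcoef(whitened[:-1], whitened[1:])[0, 1].round(3))
```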

  7. Predicting falls in older adults using the four square step test.

    Science.gov (United States)

    Cleary, Kimberly; Skornyakov, Elena

    2017-10-01

    The Four Square Step Test (FSST) is a performance-based balance tool involving stepping over four single-point canes placed on the floor in a cross configuration. The purpose of this study was to evaluate properties of the FSST in older adults who lived independently. Forty-five community dwelling older adults provided fall history and completed the FSST, Berg Balance Scale (BBS), Timed Up and Go (TUG), and Tinetti in random order. Future falls were recorded for 12 months following testing. The FSST accurately distinguished between non-fallers and multiple fallers, and the 15-second threshold score accurately distinguished multiple fallers from non-multiple fallers based on fall history. The FSST predicted future falls, and performance on the FSST was significantly correlated with performance on the BBS, TUG, and Tinetti. However, the test is not appropriate for older adults who use walkers. Overall, the FSST is a valid yet underutilized measure of balance performance and fall prediction tool that physical therapists should consider using in ambulatory community dwelling older adults.

  8. Prediction of CO concentrations based on a hybrid Partial Least Square and Support Vector Machine model

    Science.gov (United States)

    Yeganeh, B.; Motlagh, M. Shafie Pour; Rashidi, Y.; Kamalan, H.

    2012-08-01

    Due to the health impacts caused by exposures to air pollutants in urban areas, monitoring and forecasting of air quality parameters have become popular as an important topic in atmospheric and environmental research today. The knowledge of the dynamics and complexity of air pollutant behavior has made artificial intelligence models a useful tool for more accurate pollutant concentration prediction. This paper focuses on an innovative method of daily air pollution prediction using a combination of Support Vector Machine (SVM) as the predictor and Partial Least Squares (PLS) as a data selection tool based on the measured values of CO concentrations. The CO concentrations of the Rey monitoring station in the south of Tehran, from Jan. 2007 to Feb. 2011, have been used to test the effectiveness of this method. The hourly CO concentrations have been predicted using the SVM and the hybrid PLS-SVM models. Similarly, daily CO concentrations have been predicted based on the aforementioned four years of measured data. Results demonstrated that both models have good prediction ability; however, the hybrid PLS-SVM has better accuracy. In the analysis presented in this paper, statistical estimators including relative mean errors, root mean squared errors and the mean absolute relative error have been employed to compare performances of the models. It has been concluded that the errors decrease after size reduction and that the coefficients of determination increase from 56-81% for the SVM model to 65-85% for the hybrid PLS-SVM model. It was also found that the hybrid PLS-SVM model required lower computational time than the SVM model, as expected, hence supporting the more accurate and faster prediction ability of the hybrid PLS-SVM model.
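
    A hedged sketch of a hybrid PLS-SVM pipeline of the kind described: PLS compresses the predictor set and its latent scores feed an RBF-kernel SVM regressor. The feature set, component count, kernel settings, and synthetic data are assumptions used only to illustrate the two-stage structure.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))        # meteorological and traffic predictors (assumed)
co = 2 + X[:, 0] - 0.5 * X[:, 3] + 0.2 * X[:, 0] * X[:, 5] + rng.normal(0, 0.3, 500)
X_tr, X_te, y_tr, y_te = X[:400], X[400:], co[:400], co[400:]

# Step 1: PLS as the data-selection/compression stage.
pls = PLSRegression(n_components=4).fit(X_tr, y_tr)
T_tr, T_te = pls.transform(X_tr), pls.transform(X_te)

# Step 2: SVM regression on the PLS latent scores.
svr = SVR(kernel="rbf", C=10.0, epsilon=0.05).fit(T_tr, y_tr)
pred = svr.predict(T_te)

print("R^2 :", round(r2_score(y_te, pred), 3))
print("RMSE:", round(mean_squared_error(y_te, pred) ** 0.5, 3))
```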

  9. Validating the Galerkin least-squares finite element methods in predicting mixing flows in stirred tank reactors

    International Nuclear Information System (INIS)

    Johnson, K.; Bittorf, K.J.

    2002-01-01

    A novel approach for computer aided modeling and optimizing mixing process has been developed using Galerkin least-squares finite element technology. Computer aided mixing modeling and analysis involves Lagrangian and Eulerian analysis for relative fluid stretching, and energy dissipation concepts for laminar and turbulent flows. High quality, conservative, accurate, fluid velocity, and continuity solutions are required for determining mixing quality. The ORCA Computational Fluid Dynamics (CFD) package, based on a finite element formulation, solves the incompressible Reynolds Averaged Navier Stokes (RANS) equations. Although finite element technology has been well used in areas of heat transfer, solid mechanics, and aerodynamics for years, it has only recently been applied to the area of fluid mixing. ORCA, developed using the Galerkin Least-Squares (GLS) finite element technology, provides another formulation for numerically solving the RANS based and LES based fluid mechanics equations. The ORCA CFD package is validated against two case studies. The first, a free round jet, demonstrates that the CFD code predicts the theoretical velocity decay rate, linear expansion rate, and similarity profile. From proper prediction of fundamental free jet characteristics, confidence can be derived when predicting flows in a stirred tank, as a stirred tank reactor can be considered a series of free jets and wall jets. (author)

  10. Regression model of support vector machines for least squares prediction of crystallinity of cracking catalysts by infrared spectroscopy

    International Nuclear Information System (INIS)

    Comesanna Garcia, Yumirka; Dago Morales, Angel; Talavera Bustamante, Isneri

    2010-01-01

    The recent introduction of the least squares support vector machines method for regression purposes in the field of chemometrics has provided several advantages over linear and nonlinear multivariate calibration methods. The objective of the paper was to propose the use of the least squares support vector machine as an alternative multivariate calibration method for the prediction of the percentage of crystallinity of fluidized catalytic cracking catalysts by means of Fourier transform mid-infrared spectroscopy. A linear kernel was used in the calculations of the regression model. The optimization of its gamma parameter was carried out using the leave-one-out cross-validation procedure. The root mean square error of prediction was used to measure the performance of the model. The accuracy of the results obtained with the application of the method is in accordance with the uncertainty of the X-ray powder diffraction reference method. To compare the generalization capability of the developed method, a comparison study was carried out, taking into account the results achieved with the new model and those reached through the application of linear calibration methods. The developed method can be easily implemented in refinery laboratories.

  11. Optimization and Prediction of Angular Distortion and Weldment Characteristics of TIG Square Butt Joints

    Science.gov (United States)

    Narang, H. K.; Mahapatra, M. M.; Jha, P. K.; Biswas, P.

    2014-05-01

    Autogenous arc welds with minimum upper weld bead depression and lower weld bead bulging are desired, as such welds do not require a second welding pass for filling up the upper bead depressions (UBDs) and are characterized by minimum angular distortion. The present paper describes optimization and prediction of angular distortion and weldment characteristics such as upper weld bead depression and lower weld bead bulging of TIG-welded structural steel square butt joints. A full factorial design of experiment was utilized for selecting the combinations of welding process parameters used to produce the square butts. A mathematical model was developed to establish the relationship between TIG welding process parameters and responses such as upper bead width, lower bead width, UBD, lower bead height (bulging), weld cross-sectional area, and angular distortions. The optimal welding condition to minimize UBD and lower bead bulging of the TIG butt joints was identified.

  12. Wave-equation Q tomography and least-squares migration

    KAUST Repository

    Dutta, Gaurav

    2016-03-01

    This thesis designs new methods for Q tomography and Q-compensated prestack depth migration when the recorded seismic data suffer from strong attenuation. A motivation of this work is that the presence of gas clouds or mud channels in overburden structures leads to the distortion of amplitudes and phases in seismic waves propagating inside the earth. If the attenuation parameter Q is very strong, i.e., Q<30, ignoring the anelastic effects in imaging can lead to dimming of migration amplitudes and loss of resolution. This, in turn, adversely affects the ability to accurately predict reservoir properties below such layers. To mitigate this problem, I first develop an anelastic least-squares reverse time migration (Q-LSRTM) technique. I reformulate the conventional acoustic least-squares migration problem as a viscoacoustic linearized inversion problem. Using linearized viscoacoustic modeling and adjoint operators during the least-squares iterations, I show with numerical tests that Q-LSRTM can compensate for the amplitude loss and produce images with better balanced amplitudes than conventional migration. To estimate the background Q model that can be used for any Q-compensating migration algorithm, I then develop a wave-equation based optimization method that inverts for the subsurface Q distribution by minimizing a skeletonized misfit function ε. Here, ε is the sum of the squared differences between the observed and the predicted peak/centroid-frequency shifts of the early-arrivals. Through numerical tests on synthetic and field data, I show that noticeable improvements in the migration image quality can be obtained from Q models inverted using wave-equation Q tomography. A key feature of skeletonized inversion is that it is much less likely to get stuck in a local minimum than a standard waveform inversion method. Finally, I develop a preconditioning technique for least-squares migration using a directional Gabor-based preconditioning approach for isotropic

  13. Prediction of fracture initiation in square cup drawing of DP980 using an anisotropic ductile fracture criterion

    Science.gov (United States)

    Park, N.; Huh, H.; Yoon, J. W.

    2017-09-01

    This paper deals with the prediction of fracture initiation in square cup drawing of DP980 steel sheet with a thickness of 1.2 mm. In an attempt to consider the influence of material anisotropy on the fracture initiation, an uncoupled anisotropic ductile fracture criterion is developed based on the Lou-Huh ductile fracture criterion. Tensile tests are carried out at different loading directions of 0°, 45°, and 90° to the rolling direction of the sheet using various specimen geometries including pure shear, dog-bone, and flat grooved specimens so as to calibrate the parameters of the proposed fracture criterion. The equivalent plastic strain distribution on the specimen surface is computed using the Digital Image Correlation (DIC) method until a surface crack initiates. The proposed fracture criterion is implemented into the commercial finite element code ABAQUS/Explicit by developing a Vectorized User-defined MATerial (VUMAT) subroutine which features the non-associated flow rule. Simulation results of the square cup drawing test clearly show that the proposed fracture criterion is capable of predicting the fracture initiation with sufficient accuracy considering the material anisotropy.

  14. Prediction of biochar yield from cattle manure pyrolysis via least squares support vector machine intelligent approach.

    Science.gov (United States)

    Cao, Hongliang; Xin, Ya; Yuan, Qiaoxia

    2016-02-01

    To predict conveniently the biochar yield from cattle manure pyrolysis, intelligent modeling approach was introduced in this research. A traditional artificial neural networks (ANN) model and a novel least squares support vector machine (LS-SVM) model were developed. For the identification and prediction evaluation of the models, a data set with 33 experimental data was used, which were obtained using a laboratory-scale fixed bed reaction system. The results demonstrated that the intelligent modeling approach is greatly convenient and effective for the prediction of the biochar yield. In particular, the novel LS-SVM model has a more satisfying predicting performance and its robustness is better than the traditional ANN model. The introduction and application of the LS-SVM modeling method gives a successful example, which is a good reference for the modeling study of cattle manure pyrolysis process, even other similar processes. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Prediction of Navigation Satellite Clock Bias Considering Clock's Stochastic Variation Behavior with Robust Least Square Collocation

    Directory of Open Access Journals (Sweden)

    WANG Yupu

    2016-06-01

    Full Text Available In order to better express the characteristics of satellite clock bias (SCB) and further improve its prediction precision, a new SCB prediction model is proposed, which can take the physical feature, cyclic variation and stochastic variation behaviors of the space-borne atomic clock into consideration by using a robust least square collocation (LSC) method. The proposed model first uses a quadratic polynomial model with periodic terms to fit and extract the trend term and cyclic terms of the SCB. Then, for the residual stochastic variation part and possible gross errors hidden in the SCB data, the model employs a robust LSC method to process them. The covariance function of the LSC is determined by selecting an empirical function and combining SCB prediction tests. Using the final precise IGS SCB products to conduct prediction tests, the results show that the proposed model achieves better prediction performance. Specifically, the prediction accuracy is improved by 0.457 ns and 0.948 ns, and the corresponding prediction stability by 0.445 ns and 1.233 ns, compared with the results of the quadratic polynomial model and the grey model, respectively. In addition, the results also show that the proposed covariance function corresponding to the new model is reasonable.

  16. Improved prediction of drug-target interactions using regularized least squares integrating with kernel fusion technique

    Energy Technology Data Exchange (ETDEWEB)

    Hao, Ming; Wang, Yanli, E-mail: ywang@ncbi.nlm.nih.gov; Bryant, Stephen H., E-mail: bryant@ncbi.nlm.nih.gov

    2016-02-25

    Identification of drug-target interactions (DTI) is a central task in drug discovery processes. In this work, a simple but effective regularized least squares integrating with nonlinear kernel fusion (RLS-KF) algorithm is proposed to perform DTI predictions. Using benchmark DTI datasets, our proposed algorithm achieves the state-of-the-art results with area under precision–recall curve (AUPR) of 0.915, 0.925, 0.853 and 0.909 for enzymes, ion channels (IC), G protein-coupled receptors (GPCR) and nuclear receptors (NR) based on 10 fold cross-validation. The performance can further be improved by using a recalculated kernel matrix, especially for the small set of nuclear receptors with AUPR of 0.945. Importantly, most of the top ranked interaction predictions can be validated by experimental data reported in the literature, bioassay results in the PubChem BioAssay database, as well as other previous studies. Our analysis suggests that the proposed RLS-KF is helpful for studying DTI, drug repositioning as well as polypharmacology, and may help to accelerate drug discovery by identifying novel drug targets. - Graphical abstract: Flowchart of the proposed RLS-KF algorithm for drug-target interaction predictions. - Highlights: • A nonlinear kernel fusion algorithm is proposed to perform drug-target interaction predictions. • Performance can further be improved by using the recalculated kernel. • Top predictions can be validated by experimental data.

  17. Improved prediction of drug-target interactions using regularized least squares integrating with kernel fusion technique

    International Nuclear Information System (INIS)

    Hao, Ming; Wang, Yanli; Bryant, Stephen H.

    2016-01-01

    Identification of drug-target interactions (DTI) is a central task in drug discovery processes. In this work, a simple but effective regularized least squares integrating with nonlinear kernel fusion (RLS-KF) algorithm is proposed to perform DTI predictions. Using benchmark DTI datasets, our proposed algorithm achieves the state-of-the-art results with area under precision–recall curve (AUPR) of 0.915, 0.925, 0.853 and 0.909 for enzymes, ion channels (IC), G protein-coupled receptors (GPCR) and nuclear receptors (NR) based on 10 fold cross-validation. The performance can further be improved by using a recalculated kernel matrix, especially for the small set of nuclear receptors with AUPR of 0.945. Importantly, most of the top ranked interaction predictions can be validated by experimental data reported in the literature, bioassay results in the PubChem BioAssay database, as well as other previous studies. Our analysis suggests that the proposed RLS-KF is helpful for studying DTI, drug repositioning as well as polypharmacology, and may help to accelerate drug discovery by identifying novel drug targets. - Graphical abstract: Flowchart of the proposed RLS-KF algorithm for drug-target interaction predictions. - Highlights: • A nonlinear kernel fusion algorithm is proposed to perform drug-target interaction predictions. • Performance can further be improved by using the recalculated kernel. • Top predictions can be validated by experimental data.

  18. Improved variable reduction in partial least squares modelling based on predictive-property-ranked variables and adaptation of partial least squares complexity.

    Science.gov (United States)

    Andries, Jan P M; Vander Heyden, Yvan; Buydens, Lutgarde M C

    2011-10-31

    The calibration performance of partial least squares for one response variable (PLS1) can be improved by elimination of uninformative variables. Many methods are based on so-called predictive variable properties, which are functions of various PLS-model parameters, and which may change during the variable reduction process. In these methods variable reduction is made on the variables ranked in descending order for a given variable property. The methods start with full spectrum modelling. Iteratively, until a specified number of remaining variables is reached, the variable with the smallest property value is eliminated; a new PLS model is calculated, followed by a renewed ranking of the variables. The Stepwise Variable Reduction methods using Predictive-Property-Ranked Variables are denoted as SVR-PPRV. In the existing SVR-PPRV methods the PLS model complexity is kept constant during the variable reduction process. In this study, three new SVR-PPRV methods are proposed, in which a possibility for decreasing the PLS model complexity during the variable reduction process is built in. Therefore we denote our methods as PPRVR-CAM methods (Predictive-Property-Ranked Variable Reduction with Complexity Adapted Models). The selective and predictive abilities of the new methods are investigated and tested, using the absolute PLS regression coefficients as the predictive property. They were compared with two modifications of existing SVR-PPRV methods (with constant PLS model complexity) and with two reference methods: uninformative variable elimination followed by either a genetic algorithm for PLS (UVE-GA-PLS) or an interval PLS (UVE-iPLS). The performance of the methods is investigated in conjunction with two data sets from near-infrared sources (NIR) and one simulated set. The selective and predictive performances of the variable reduction methods are compared statistically using the Wilcoxon signed rank test. The three newly developed PPRVR-CAM methods were able to retain

  19. Mode-coupling theory predictions for a limited valency attractive square well model

    International Nuclear Information System (INIS)

    Zaccarelli, E; Saika-Voivod, I; Moreno, A J; Nave, E La; Buldyrev, S V; Sciortino, F; Tartaglia, P

    2006-01-01

    Recently we have studied, using numerical simulations, a limited valency model, i.e. an attractive square well model with a constraint on the maximum number of bonded neighbours. Studying a large region of temperatures T and packing fractions φ, we have estimated the location of the liquid-gas phase separation spinodal and the loci of dynamic arrest, where the system is trapped in a disordered non-ergodic state. Two distinct arrest lines are present in the system: a (repulsive) glass line at high packing fraction, and a gel line at low φ and T. The former is essentially vertical (φ controlled), while the latter is rather horizontal (T controlled) in the (φ, T) plane. We here complement the molecular dynamics results with mode coupling theory calculations, using the numerical structure factors as input. We find that the theory predicts a repulsive glass line, in satisfactory agreement with the simulation results, and an attractive glass line, which appears to be unrelated to the gel line.

  20. A Generalized Autocovariance Least-Squares Method for Covariance Estimation

    DEFF Research Database (Denmark)

    Åkesson, Bernt Magnus; Jørgensen, John Bagterp; Poulsen, Niels Kjølstad

    2007-01-01

    A generalization of the autocovariance least-squares method for estimating noise covariances is presented. The method can estimate mutually correlated system and sensor noise and can be used with both the predicting and the filtering form of the Kalman filter.

  1. Lossless medical image compression using geometry-adaptive partitioning and least square-based prediction.

    Science.gov (United States)

    Song, Xiaoying; Huang, Qijun; Chang, Sheng; He, Jin; Wang, Hao

    2018-06-01

    To improve the compression rates for lossless compression of medical images, an efficient algorithm, based on irregular segmentation and region-based prediction, is proposed in this paper. Considering that the first step of a region-based compression algorithm is segmentation, this paper proposes a hybrid method combining geometry-adaptive partitioning and quadtree partitioning to achieve adaptive irregular segmentation for medical images. Then, least square (LS)-based predictors are adaptively designed for each region (regular subblock or irregular subregion). The proposed adaptive algorithm not only exploits spatial correlation between pixels but also utilizes local structure similarity, resulting in efficient compression performance. Experimental results show that the average compression performance of the proposed algorithm is 10.48, 4.86, 3.58, and 0.10% better than that of JPEG 2000, CALIC, EDP, and JPEG-LS, respectively.

  2. Comparison Between Wind Power Prediction Models Based on Wavelet Decomposition with Least-Squares Support Vector Machine (LS-SVM) and Artificial Neural Network (ANN)

    Directory of Open Access Journals (Sweden)

    Maria Grazia De Giorgi

    2014-08-01

    A high penetration of wind energy into the electricity market requires a parallel development of efficient wind power forecasting models. Different hybrid forecasting methods were applied to wind power prediction, using historical data and numerical weather predictions (NWP). A comparative study was carried out for the prediction of the power production of a wind farm located in complex terrain. The performances of Least-Squares Support Vector Machine (LS-SVM) with Wavelet Decomposition (WD) were evaluated at different time horizons and compared to hybrid Artificial Neural Network (ANN)-based methods. It is acknowledged that hybrid methods based on LS-SVM with WD mostly outperform the other methods. A decomposition of the commonly known root mean square error was beneficial for a better understanding of the origin of the differences between prediction and measurement and to compare the accuracy of the different models. A sensitivity analysis was also carried out in order to underline the impact that each input had in the network training process for the ANN. In the case of the ANN with the WD technique, the sensitivity analysis was repeated on each component obtained by the decomposition.

  3. Quantized kernel least mean square algorithm.

    Science.gov (United States)

    Chen, Badong; Zhao, Songlin; Zhu, Pingping; Príncipe, José C

    2012-01-01

    In this paper, we propose a quantization approach, as an alternative of sparsification, to curb the growth of the radial basis function structure in kernel adaptive filtering. The basic idea behind this method is to quantize and hence compress the input (or feature) space. Different from sparsification, the new approach uses the "redundant" data to update the coefficient of the closest center. In particular, a quantized kernel least mean square (QKLMS) algorithm is developed, which is based on a simple online vector quantization method. The analytical study of the mean square convergence has been carried out. The energy conservation relation for QKLMS is established, and on this basis we arrive at a sufficient condition for mean square convergence, and a lower and upper bound on the theoretical value of the steady-state excess mean square error. Static function estimation and short-term chaotic time-series prediction examples are presented to demonstrate the excellent performance.
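
    A minimal sketch of the quantization idea follows: a Gaussian-kernel LMS filter whose dictionary grows only when a new input lies farther than a quantization size from every stored centre; otherwise the coefficient of the closest centre is updated. The step size, kernel width and quantization size below are illustrative choices, and this is not the authors' reference implementation.

        import numpy as np

        class QKLMS:
            # Minimal quantized kernel LMS with a Gaussian kernel and online
            # vector quantization of the input space.
            def __init__(self, step=0.5, sigma=1.0, eps_q=0.3):
                self.step, self.sigma, self.eps_q = step, sigma, eps_q
                self.centers, self.alphas = [], []

            def _kernel(self, u, v):
                return np.exp(-np.sum((u - v) ** 2) / (2 * self.sigma ** 2))

            def predict(self, u):
                return sum(a * self._kernel(c, u) for c, a in zip(self.centers, self.alphas))

            def update(self, u, d):
                e = d - self.predict(u)                      # prediction error
                if not self.centers:
                    self.centers.append(u); self.alphas.append(self.step * e); return e
                dists = [np.linalg.norm(u - c) for c in self.centers]
                j = int(np.argmin(dists))
                if dists[j] <= self.eps_q:                   # quantize: merge into the closest centre
                    self.alphas[j] += self.step * e
                else:                                        # otherwise grow the network
                    self.centers.append(u); self.alphas.append(self.step * e)
                return e

        # short-term prediction of a noisy sine as a toy example
        t = np.arange(0, 20, 0.05)
        x = np.sin(t) + 0.05 * np.random.default_rng(2).normal(size=len(t))
        f = QKLMS()
        errs = [f.update(np.array([x[i - 2], x[i - 1]]), x[i]) for i in range(2, len(x))]
        print(len(f.centers), np.mean(np.square(errs[-100:])))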

  4. Can Personality Traits and Intelligence Compensate for Background Disadvantage? Predicting Status Attainment in Adulthood

    Science.gov (United States)

    Damian, Rodica Ioana; Su, Rong; Shanahan, Michael; Trautwein, Ulrich; Roberts, Brent W.

    2014-01-01

    This paper investigates the interplay of family background and individual differences, such as personality traits and intelligence (measured in a large US representative sample of high school students; N = 81,000) in predicting educational attainment, annual income, and occupational prestige eleven years later. Specifically, we tested whether individual differences followed one of three patterns in relation to parental SES when predicting attained status: (a) the independent effects hypothesis (i.e., individual differences predict attainments independent of parental SES level), (b) the resource substitution hypothesis (i.e., individual differences are stronger predictors of attainments at lower levels of parental SES), and (c) the Matthew effect hypothesis (i.e., “the rich get richer,” individual differences are stronger predictors of attainments at higher levels of parental SES). We found that personality traits and intelligence in adolescence predicted later attained status above and beyond parental SES. A standard deviation increase in individual differences translated to up to 8 additional months of education, $4,233 annually, and more prestigious occupations. Furthermore, although we did find some evidence for both the resource substitution and the Matthew effect hypotheses, the most robust pattern across all models supported the independent effects hypothesis. Intelligence was the exception, where interaction models were more robust. Finally, we found that although personality traits may help compensate for background disadvantage to a small extent, they do not usually lead to a “full catch up” effect, unlike intelligence. This was the first longitudinal study of status attainment to test interactive models of individual differences and background factors. PMID:25402679

  5. Can personality traits and intelligence compensate for background disadvantage? Predicting status attainment in adulthood.

    Science.gov (United States)

    Damian, Rodica Ioana; Su, Rong; Shanahan, Michael; Trautwein, Ulrich; Roberts, Brent W

    2015-09-01

    This study investigated the interplay of family background and individual differences, such as personality traits and intelligence (measured in a large U.S. representative sample of high school students; N = 81,000) in predicting educational attainment, annual income, and occupational prestige 11 years later. Specifically, we tested whether individual differences followed 1 of 3 patterns in relation to parental socioeconomic status (SES) when predicting attained status: (a) the independent effects hypothesis (i.e., individual differences predict attainments independent of parental SES level), (b) the resource substitution hypothesis (i.e., individual differences are stronger predictors of attainments at lower levels of parental SES), and (c) the Matthew effect hypothesis (i.e., "the rich get richer"; individual differences are stronger predictors of attainments at higher levels of parental SES). We found that personality traits and intelligence in adolescence predicted later attained status above and beyond parental SES. A standard deviation increase in individual differences translated to up to 8 additional months of education, $4,233 annually, and more prestigious occupations. Furthermore, although we did find some evidence for both the resource substitution and the Matthew effect hypotheses, the most robust pattern across all models supported the independent effects hypothesis. Intelligence was the exception, the interaction models being more robust. Finally, we found that although personality traits may help compensate for background disadvantage to a small extent, they do not usually lead to a "full catch-up" effect, unlike intelligence. This was the first longitudinal study of status attainment to test interactive models of individual differences and background factors. (c) 2015 APA, all rights reserved).

  6. A prediction model of compressor with variable-geometry diffuser based on elliptic equation and partial least squares.

    Science.gov (United States)

    Li, Xu; Yang, Chuanlei; Wang, Yinyan; Wang, Hechun

    2018-01-01

    To achieve a much more extensive intake air flow range of the diesel engine, a variable-geometry compressor (VGC) is introduced into a turbocharged diesel engine. However, due to the variable diffuser vane angle (DVA), the prediction for the performance of the VGC becomes more difficult than for a normal compressor. In the present study, a prediction model comprising an elliptical equation and a PLS (partial least-squares) model was proposed to predict the performance of the VGC. The speed lines of the pressure ratio map and the efficiency map were fitted with the elliptical equation, and the coefficients of the elliptical equation were introduced into the PLS model to build the polynomial relationship between the coefficients and the relative speed, the DVA. Further, the maximal order of the polynomial was investigated in detail to reduce the number of sub-coefficients and achieve acceptable fit accuracy simultaneously. The prediction model was validated with sample data and in order to present the superiority of compressor performance prediction, the prediction results of this model were compared with those of the look-up table and back-propagation neural networks (BPNNs). The validation and comparison results show that the prediction accuracy of the new developed model is acceptable, and this model is much more suitable than the look-up table and the BPNN methods under the same condition in VGC performance prediction. Moreover, the new developed prediction model provides a novel and effective prediction solution for the VGC and can be used to improve the accuracy of the thermodynamic model for turbocharged diesel engines in the future.

  7. Risk Factors Analysis and Death Prediction in Some Life-Threatening Ailments Using Chi-Square Case-Based Reasoning (χ2 CBR) Model.

    Science.gov (United States)

    Adeniyi, D A; Wei, Z; Yang, Y

    2018-01-30

    A wealth of data are available within the health care system; however, effective analysis tools for exploring the hidden patterns in these datasets are lacking. To alleviate this limitation, this paper proposes a simple but promising hybrid predictive model by suitably combining the Chi-square distance measurement with the case-based reasoning technique. The study presents the realization of an automated risk calculator and death prediction in some life-threatening ailments using the Chi-square case-based reasoning (χ2 CBR) model. The proposed predictive engine is capable of reducing runtime and speeds up the execution process through the use of the critical χ2 distribution value. This work also showcases the development of a novel feature selection method referred to as the frequent item based rule (FIBR) method. This FIBR method is used for selecting the best features for the proposed χ2 CBR model at the preprocessing stage of the predictive procedure. The implementation of the proposed risk calculator is achieved through the use of an in-house developed PHP program experimented with the XAMPP/Apache HTTP server as the hosting server. The process of data acquisition and case-base development is implemented using the MySQL application. Performance comparison between our system, the NBY, the ED-KNN, the ANN, the SVM, the Random Forest and the traditional CBR techniques shows that the quality of predictions produced by our system outperformed the baseline methods studied. The result of our experiment shows that the precision rate and predictive quality of our system in most cases are equal to or greater than 70%. Our result also shows that the proposed system executes faster than the baseline methods studied. Therefore, the proposed risk calculator is capable of providing useful, consistent, faster, accurate and efficient risk level prediction to both patients and physicians at any time, online and on a real-time basis.
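
    The retrieval core of such a hybrid can be sketched in a few lines: compute the chi-square distance between a query case and every stored case and reuse the outcome of the closest one. The feature vectors, labels and plain nearest-case reuse rule below are illustrative assumptions; the paper's FIBR feature selection and critical χ2 cut-off are not reproduced.

        import numpy as np

        def chi_square_distance(x, y, eps=1e-12):
            # Chi-square distance between two non-negative feature vectors.
            return 0.5 * np.sum((x - y) ** 2 / (x + y + eps))

        def cbr_predict(case_features, case_outcomes, query):
            # Retrieve the stored case closest to the query under the
            # chi-square distance and reuse its outcome (risk label).
            d = np.array([chi_square_distance(query, c) for c in case_features])
            return case_outcomes[int(np.argmin(d))], float(d.min())

        # toy case base: rows are normalised clinical features, outcomes are risk labels
        cases = np.array([[0.9, 0.1, 0.3], [0.2, 0.8, 0.5], [0.4, 0.4, 0.9]])
        labels = ["high", "low", "medium"]
        print(cbr_predict(cases, labels, np.array([0.85, 0.15, 0.35])))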

  8. Chi-squared goodness of fit tests with applications

    CERN Document Server

    Balakrishnan, N; Nikulin, MS

    2013-01-01

    Chi-Squared Goodness of Fit Tests with Applications provides a thorough and complete context for the theoretical basis and implementation of Pearson's monumental contribution and its wide applicability for chi-squared goodness of fit tests. The book is ideal for researchers and scientists conducting statistical analysis in the processing of experimental data, as well as for students and practitioners with a good mathematical background who use statistical methods. The historical context, especially Chapter 7, provides great insight into the importance of this subject, with an authoritative author team.
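
    For readers who want a concrete starting point, a standard Pearson goodness-of-fit test can be run with SciPy; this is generic usage of scipy.stats.chisquare, not an example taken from the book.

        from scipy.stats import chisquare

        # observed die rolls tested against the uniform expectation
        observed = [18, 22, 16, 25, 19, 20]
        expected = [sum(observed) / 6.0] * 6
        stat, p = chisquare(observed, f_exp=expected)
        print(f"chi2 = {stat:.2f}, p = {p:.3f}")   # a large p-value gives no evidence against fairness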

  9. A novel least squares support vector machine ensemble model for NOx emission prediction of a coal-fired boiler

    International Nuclear Information System (INIS)

    Lv, You; Liu, Jizhen; Yang, Tingting; Zeng, Deliang

    2013-01-01

    Real operation data of power plants are inclined to be concentrated in some local areas because of the operators’ habits and control system design. In this paper, a novel least squares support vector machine (LSSVM)-based ensemble learning paradigm is proposed to predict the NOx emission of a coal-fired boiler using real operation data. In view of the plant data characteristics, a soft fuzzy c-means cluster algorithm is proposed to decompose the original data and guarantee the diversity of individual learners. Subsequently, a base LSSVM is trained on each individual subset to solve the subtask. Finally, partial least squares (PLS) is applied as the combination strategy to eliminate the collinear and redundant information of the base learners. Considering that the fuzzy membership also has an effect on the ensemble output, the membership degree is added as one of the variables of the combiner. The single LSSVM and other ensemble models using different decomposition and combination strategies are also established to make a comparison. The result shows that the new soft FCM-LSSVM-PLS ensemble method can predict NOx emission accurately. Besides, because of the divide-and-conquer framework, the total time consumed in parameter searching and training also decreases evidently. - Highlights: • A novel LSSVM ensemble model to predict NOx emissions is presented. • LSSVM is used as the base learner and PLS is employed as the combiner. • The model is applied to process data from a 660 MW coal-fired boiler. • The generalization ability of the model is enhanced. • The time consumed in training and parameter searching decreases sharply

  10. Application of the Polynomial-Based Least Squares and Total Least Squares Models for the Attenuated Total Reflection Fourier Transform Infrared Spectra of Binary Mixtures of Hydroxyl Compounds.

    Science.gov (United States)

    Shan, Peng; Peng, Silong; Zhao, Yuhui; Tang, Liang

    2016-03-01

    An analysis of binary mixtures of hydroxyl compounds by attenuated total reflection Fourier transform infrared spectroscopy (ATR FT-IR) and classical least squares (CLS) yields large model errors due to the presence of unmodeled components such as H-bonded species. To accommodate these spectral variations, polynomial-based least squares (LSP) and polynomial-based total least squares (TLSP) are proposed to capture the nonlinear absorbance-concentration relationship. LSP is based on assuming that only absorbance noise exists, while TLSP takes both absorbance noise and concentration noise into consideration. In addition, based on different solving strategies, two optimization algorithms (the limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) algorithm and the Levenberg-Marquardt (LM) algorithm) are combined with TLSP, yielding two different TLSP versions (termed TLSP-LBFGS and TLSP-LM). The optimum order of each nonlinear model is determined by cross-validation. Comparison and analyses of the four models are made from two aspects: absorbance prediction and concentration prediction. The results for a water-ethanol solution and an ethanol-ethyl lactate solution show that LSP, TLSP-LBFGS, and TLSP-LM can, for both absorbance prediction and concentration prediction, obtain a smaller root mean square error of prediction than CLS. Additionally, they can also greatly enhance the accuracy of the estimated pure component spectra. However, from the view of concentration prediction, the Wilcoxon signed rank test shows that there is no statistically significant difference between each nonlinear model and CLS. © The Author(s) 2016.

  11. Direct measurement of the W boson width in pp̄ collisions at √s = 1.96 TeV.

    Science.gov (United States)

    Aaltonen, T; Adelman, J; Akimoto, T; Albrow, M G; González, B Alvarez; Amerio, S; Amidei, D; Anastassov, A; Annovi, A; Antos, J; Aoki, M; Apollinari, G; Apresyan, A; Arisawa, T; Artikov, A; Ashmanskas, W; Attal, A; Aurisano, A; Azfar, F; Azzi-Bacchetta, P; Azzurri, P; Bacchetta, N; Badgett, W; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Baroiant, S; Bartsch, V; Bauer, G; Beauchemin, P-H; Bedeschi, F; Bednar, P; Beecher, D; Behari, S; Bellettini, G; Bellinger, J; Belloni, A; Benjamin, D; Beretvas, A; Beringer, J; Berry, T; Bhatti, A; Binkley, M; Bisello, D; Bizjak, I; Blair, R E; Blocker, C; Blumenfeld, B; Bocci, A; Bodek, A; Boisvert, V; Bolla, G; Bolshov, A; Bortoletto, D; Boudreau, J; Boveia, A; Brau, B; Bridgeman, A; Brigliadori, L; Bromberg, C; Brubaker, E; Budagov, J; Budd, H S; Budd, S; Burkett, K; Busetto, G; Bussey, P; Buzatu, A; Byrum, K L; Cabrera, S; Campanelli, M; Campbell, M; Canelli, F; Canepa, A; Carlsmith, D; Carosi, R; Carrillo, S; Carron, S; Casal, B; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chang, S H; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, K; Chokheli, D; Chou, J P; Choudalakis, G; Chuang, S H; Chung, K; Chung, W H; Chung, Y S; Ciobanu, C I; Ciocci, M A; Clark, A; Clark, D; Compostella, G; Convery, M E; Conway, J; Cooper, B; Copic, K; Cordelli, M; Cortiana, G; Crescioli, F; Almenar, C Cuenca; Cuevas, J; Culbertson, R; Cully, J C; Dagenhart, D; Datta, M; Davies, T; de Barbaro, P; De Cecco, S; Deisher, A; De Lentdecker, G; De Lorenzo, G; Dell'orso, M; Demortier, L; Deng, J; Deninno, M; De Pedis, D; Derwent, P F; Giovanni, G P Di; Dionisi, C; Di Ruzza, B; Dittmann, J R; D'Onofrio, M; Donati, S; Dong, P; Donini, J; Dorigo, T; Dube, S; Efron, J; Erbacher, R; Errede, D; Errede, S; Eusebi, R; Fang, H C; Farrington, S; Fedorko, W T; Feild, R G; Feindt, M; Fernandez, J P; Ferrazza, C; Field, R; Flanagan, G; Forrest, R; Forrester, S; Franklin, M; Freeman, J C; Furic, I; Gallinaro, M; Galyardt, J; Garberson, F; Garcia, J E; Garfinkel, A F; Gerberich, H; Gerdes, D; Giagu, S; Giakoumopolou, V; Giannetti, P; Gibson, K; Gimmell, J L; Ginsburg, C M; Giokaris, N; Giordani, M; Giromini, P; Giunta, M; Glagolev, V; Glenzinski, D; Gold, M; Goldschmidt, N; Golossanov, A; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Goulianos, K; Gresele, A; Grinstein, S; Grosso-Pilcher, C; Grundler, U; da Costa, J Guimaraes; Gunay-Unalan, Z; Haber, C; Hahn, K; Hahn, S R; Halkiadakis, E; Hamilton, A; Han, B-Y; Han, J Y; Handler, R; Happacher, F; Hara, K; Hare, D; Hare, M; Harper, S; Harr, R F; Harris, R M; Hartz, M; Hatakeyama, K; Hauser, J; Hays, C; Heck, M; Heijboer, A; Heinemann, B; Heinrich, J; Henderson, C; Herndon, M; Heuser, J; Hewamanage, S; Hidas, D; Hill, C S; Hirschbuehl, D; Hocker, A; Hou, S; Houlden, M; Hsu, S-C; Huffman, B T; Hughes, R E; Husemann, U; Huston, J; Incandela, J; Introzzi, G; Iori, M; Ivanov, A; Iyutin, B; James, E; Jayatilaka, B; Jeans, D; Jeon, E J; Jindariani, S; Johnson, W; Jones, M; Joo, K K; Jun, S Y; Jung, J E; Junk, T R; Kamon, T; Kar, D; Karchin, P E; Kato, Y; Kephart, R; Kerzel, U; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, J E; Kim, M J; Kim, S B; Kim, S H; Kim, Y K; Kimura, N; Kirsch, L; Klimenko, S; Klute, M; Knuteson, B; Ko, B R; Koay, S A; Kondo, K; Kong, D J; Konigsberg, J; Korytov, A; Kotwal, A V; Kraus, J; Kreps, M; Kroll, J; Krumnack, N; Kruse, M; Krutelyov, V; Kubo, T; Kuhlmann, S E; Kuhr, T; Kulkarni, N P; Kusakabe, Y; Kwang, S; 
Laasanen, A T; Lai, S; Lami, S; Lammel, S; Lancaster, M; Lander, R L; Lannon, K; Lath, A; Latino, G; Lazzizzera, I; Lecompte, T; Lee, J; Lee, J; Lee, Y J; Lee, S W; Lefèvre, R; Leonardo, N; Leone, S; Levy, S; Lewis, J D; Lin, C; Lin, C S; Linacre, J; Lindgren, M; Lipeles, E; Lister, A; Litvintsev, D O; Liu, T; Lockyer, N S; Loginov, A; Loreti, M; Lovas, L; Lu, R-S; Lucchesi, D; Lueck, J; Luci, C; Lujan, P; Lukens, P; Lungu, G; Lyons, L; Lys, J; Lysak, R; Lytken, E; Mack, P; Macqueen, D; Madrak, R; Maeshima, K; Makhoul, K; Maki, T; Maksimovic, P; Malde, S; Malik, S; Manca, G; Manousakis, A; Margaroli, F; Marino, C; Marino, C P; Martin, A; Martin, M; Martin, V; Martínez, M; Martínez-Ballarín, R; Maruyama, T; Mastrandrea, P; Masubuchi, T; Mattson, M E; Mazzanti, P; McFarland, K S; McIntyre, P; McNulty, R; Mehta, A; Mehtala, P; Menzemer, S; Menzione, A; Merkel, P; Mesropian, C; Messina, A; Miao, T; Miladinovic, N; Miles, J; Miller, R; Mills, C; Milnik, M; Mitra, A; Mitselmakher, G; Miyake, H; Moed, S; Moggi, N; Moon, C S; Moore, R; Morello, M; Fernandez, P Movilla; Mülmenstädt, J; Mukherjee, A; Muller, Th; Mumford, R; Murat, P; Mussini, M; Nachtman, J; Nagai, Y; Nagano, A; Naganoma, J; Nakamura, K; Nakano, I; Napier, A; Necula, V; Neu, C; Neubauer, M S; Nielsen, J; Nodulman, L; Norman, M; Norniella, O; Nurse, E; Oh, S H; Oh, Y D; Oksuzian, I; Okusawa, T; Oldeman, R; Orava, R; Osterberg, K; Griso, S Pagan; Pagliarone, C; Palencia, E; Papadimitriou, V; Papaikonomou, A; Paramonov, A A; Parks, B; Pashapour, S; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Pellett, D E; Penzo, A; Phillips, T J; Piacentino, G; Piedra, J; Pinera, L; Pitts, K; Plager, C; Pondrom, L; Portell, X; Poukhov, O; Pounder, N; Prakoshyn, F; Pronko, A; Proudfoot, J; Ptohos, F; Punzi, G; Pursley, J; Rademacker, J; Rahaman, A; Ramakrishnan, V; Ranjan, N; Redondo, I; Reisert, B; Rekovic, V; Renton, P; Rescigno, M; Richter, S; Rimondi, F; Ristori, L; Robson, A; Rodrigo, T; Rogers, E; Rolli, S; Roser, R; Rossi, M; Rossin, R; Roy, P; Ruiz, A; Russ, J; Rusu, V; Saarikko, H; Safonov, A; Sakumoto, W K; Salamanna, G; Saltó, O; Santi, L; Sarkar, S; Sartori, L; Sato, K; Savoy-Navarro, A; Scheidle, T; Schlabach, P; Schmidt, E E; Schmidt, M A; Schmidt, M P; Schmitt, M; Schwarz, T; Scodellaro, L; Scott, A L; Scribano, A; Scuri, F; Sedov, A; Seidel, S; Seiya, Y; Semenov, A; Sexton-Kennedy, L; Sfyria, A; Shalhout, S Z; Shapiro, M D; Shears, T; Shepard, P F; Sherman, D; Shimojima, M; Shochet, M; Shon, Y; Shreyber, I; Sidoti, A; Sinervo, P; Sisakyan, A; Slaughter, A J; Slaunwhite, J; Sliwa, K; Smith, J R; Snider, F D; Snihur, R; Soderberg, M; Soha, A; Somalwar, S; Sorin, V; Spalding, J; Spinella, F; Spreitzer, T; Squillacioti, P; Stanitzki, M; Denis, R St; Stelzer, B; Stelzer-Chilton, O; Stentz, D; Strologas, J; Stuart, D; Suh, J S; Sukhanov, A; Sun, H; Suslov, I; Suzuki, T; Taffard, A; Takashima, R; Takeuchi, Y; Tanaka, R; Tecchio, M; Teng, P K; Terashi, K; Thom, J; Thompson, A S; Thompson, G A; Thomson, E; Tipton, P; Tiwari, V; Tkaczyk, S; Toback, D; Tokar, S; Tollefson, K; Tomura, T; Tonelli, D; Torre, S; Torretta, D; Tourneur, S; Trischuk, W; Tu, Y; Turini, N; Ukegawa, F; Uozumi, S; Vallecorsa, S; van Remortel, N; Varganov, A; Vataga, E; Vázquez, F; Velev, G; Vellidis, C; Veszpremi, V; Vidal, M; Vidal, R; Vila, I; Vilar, R; Vine, T; Vogel, M; Volobouev, I; Volpi, G; Würthwein, F; Wagner, P; Wagner, R G; Wagner, R L; Wagner-Kuhr, J; Wagner, W; Wakisaka, T; Wallny, R; Wang, S M; Warburton, A; Waters, D; Weinberger, M; Wester, W C; Whitehouse, 
B; Whiteson, D; Wicklund, A B; Wicklund, E; Williams, G; Williams, H H; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolfe, C; Wright, T; Wu, X; Wynne, S M; Yagil, A; Yamamoto, K; Yamaoka, J; Yamashita, T; Yang, C; Yang, U K; Yang, Y C; Yao, W M; Yeh, G P; Yoh, J; Yorita, K; Yoshida, T; Yu, G B; Yu, I; Yu, S S; Yun, J C; Zanello, L; Zanetti, A; Zaw, I; Zhang, X; Zheng, Y; Zucchelli, S

    2008-02-22

    A direct measurement of the total decay width of the W boson, Γ(W), is presented using 350 pb⁻¹ of data from pp̄ collisions at √s = 1.96 TeV collected with the CDF II detector at the Fermilab Tevatron. The width is determined by normalizing predicted signal and background distributions to 230 185 W candidates decaying to eν and μν in the transverse-mass region 50 < M(T) < 90 GeV/c², and by fitting the predicted shape to 6055 events in the high-M(T) region, 90 < M(T) < 200 GeV/c².

  12. Comparison of multiple linear regression, partial least squares and artificial neural networks for prediction of gas chromatographic relative retention times of trimethylsilylated anabolic androgenic steroids.

    Science.gov (United States)

    Fragkaki, A G; Farmaki, E; Thomaidis, N; Tsantili-Kakoulidou, A; Angelis, Y S; Koupparis, M; Georgakopoulos, C

    2012-09-21

    A comparison among different modelling techniques, such as multiple linear regression, partial least squares and artificial neural networks, has been performed in order to construct and evaluate models for the prediction of gas chromatographic relative retention times of trimethylsilylated anabolic androgenic steroids. A quantitative structure-retention relationship study using the multiple linear regression and partial least squares techniques had been conducted previously. In the present study, artificial neural network models were constructed and used for the prediction of relative retention times of anabolic androgenic steroids, and their efficiency was compared with that of the models derived from the multiple linear regression and partial least squares techniques. For overall ranking of the models, a novel procedure [Trends Anal. Chem. 29 (2010) 101-109] based on the sum of ranking differences was applied, which permits the best model to be selected. The suggested models are considered useful for the estimation of relative retention times of designer steroids for which no analytical data are available. Copyright © 2012 Elsevier B.V. All rights reserved.

  13. Enumeration of self-avoiding walks on the square lattice

    International Nuclear Information System (INIS)

    Jensen, Iwan

    2004-01-01

    We describe a new algorithm for the enumeration of self-avoiding walks on the square lattice. Using up to 128 processors on an HP Alpha server cluster we have enumerated the number of self-avoiding walks on the square lattice to length 71. Series for the metric properties of mean-square end-to-end distance, mean-square radius of gyration and mean-square distance of monomers from the end points have been derived to length 59. An analysis of the resulting series yields accurate estimates of the critical exponents γ and ν, confirming predictions of their exact values. Likewise we obtain accurate amplitude estimates yielding precise values for certain universal amplitude combinations. Finally we report on an analysis giving compelling evidence that the leading non-analytic correction-to-scaling exponent Δ₁ = 3/2.
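
    The quantity being enumerated is easy to state in code. The brute-force depth-first search below reproduces the first few square-lattice counts (c₁ = 4, c₂ = 12, c₃ = 36, c₄ = 100, ...); it is of course exponential in the walk length and bears no resemblance to the finite-lattice/transfer-matrix machinery needed to reach length 71.

        def count_saw(n, pos=(0, 0), visited=None):
            # Count self-avoiding walks of length n on the square lattice by
            # exhaustive depth-first search (feasible only for small n).
            if visited is None:
                visited = {pos}
            if n == 0:
                return 1
            total = 0
            x, y = pos
            for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if nxt not in visited:
                    visited.add(nxt)
                    total += count_saw(n - 1, nxt, visited)
                    visited.remove(nxt)
            return total

        # known counts: 4, 12, 36, 100, 284, 780
        print([count_saw(n) for n in range(1, 7)])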

  14. Prediction of Placental Barrier Permeability: A Model Based on Partial Least Squares Variable Selection Procedure

    Directory of Open Access Journals (Sweden)

    Yong-Hong Zhang

    2015-05-01

    Assessing the human placental barrier permeability of drugs is very important to guarantee drug safety during pregnancy. The quantitative structure–activity relationship (QSAR) method was used as an effective assessing tool for the placental transfer study of drugs, while in vitro human placental perfusion is the most widely used method. In this study, the partial least squares (PLS) variable selection and modeling procedure was used to pick out optimal descriptors from a pool of 620 descriptors of 65 compounds and to simultaneously develop a QSAR model between the descriptors and the placental barrier permeability expressed by the clearance indices (CI). The model was subjected to internal validation by cross-validation and y-randomization and to external validation by predicting the CI values of 19 compounds. It was shown that the model developed is robust and has a good predictive potential (r2 = 0.9064, RMSE = 0.09, q2 = 0.7323, rp2 = 0.7656, RMSP = 0.14). The mechanistic interpretation of the final model was given by the high variable importance in projection values of descriptors. Using the PLS procedure, we can rapidly and effectively select optimal descriptors and thus construct a model with good stability and predictability. This analysis can provide an effective tool for the high-throughput screening of the placental barrier permeability of drugs.

  15. Particle swarm optimization-based least squares support vector regression for critical heat flux prediction

    International Nuclear Information System (INIS)

    Jiang, B.T.; Zhao, F.Y.

    2013-01-01

    Highlights: ► CHF data are collected from the published literature. ► Less training data are used to train the LSSVR model. ► PSO is adopted to optimize the key parameters to improve the model precision. ► The reliability of LSSVR is proved through parametric trends analysis. - Abstract: In view of practical importance of critical heat flux (CHF) for design and safety of nuclear reactors, accurate prediction of CHF is of utmost significance. This paper presents a novel approach using least squares support vector regression (LSSVR) and particle swarm optimization (PSO) to predict CHF. Two available published datasets are used to train and test the proposed algorithm, in which PSO is employed to search for the best parameters involved in LSSVR model. The CHF values obtained by the LSSVR model are compared with the corresponding experimental values and those of a previous method, adaptive neuro fuzzy inference system (ANFIS). This comparison is also carried out in the investigation of parametric trends of CHF. It is found that the proposed method can achieve the desired performance and yields a more satisfactory fit with experimental results than ANFIS. Therefore, LSSVR method is likely to be suitable for other parameters processing such as CHF

  16. Least squares reverse time migration of controlled order multiples

    Science.gov (United States)

    Liu, Y.

    2016-12-01

    Imaging using the reverse time migration of multiples generates inherent crosstalk artifacts due to the interference among different order multiples. Traditionally, least-squares fitting has been used to address this issue by seeking the best objective function to measure the amplitude differences between the predicted and observed data. We have developed an alternative objective function by decomposing multiples into different orders to minimize the difference between multiples predicted by Born modeling and specific-order multiples from observational data, in order to attenuate the crosstalk. This method is denoted as the least-squares reverse time migration of controlled order multiples (LSRTM-CM). Our numerical examples demonstrated that the LSRTM-CM can significantly improve image quality compared with reverse time migration of multiples and least-squares reverse time migration of multiples. Acknowledgments: This research was funded by the National Natural Science Foundation of China (Grant Nos. 41430321 and 41374138).

  17. Customer demand prediction of service-oriented manufacturing using the least square support vector machine optimized by particle swarm optimization algorithm

    Science.gov (United States)

    Cao, Jin; Jiang, Zhibin; Wang, Kangzhou

    2017-07-01

    Many nonlinear customer satisfaction-related factors significantly influence the future customer demand for service-oriented manufacturing (SOM). To address this issue and enhance the prediction accuracy, this article develops a novel customer demand prediction approach for SOM. The approach combines the phase space reconstruction (PSR) technique with the optimized least square support vector machine (LSSVM). First, the prediction sample space is reconstructed by the PSR to enrich the time-series dynamics of the limited data sample. Then, the generalization and learning ability of the LSSVM are improved by the hybrid polynomial and radial basis function kernel. Finally, the key parameters of the LSSVM are optimized by the particle swarm optimization algorithm. In a real case study, the customer demand prediction of an air conditioner compressor is implemented. Furthermore, the effectiveness and validity of the proposed approach are demonstrated by comparison with other classical prediction approaches.
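
    The phase-space-reconstruction step is a plain time-delay embedding; the sketch below builds the embedded predictors and fits an RBF support vector regressor as a stand-in for the optimized LSSVM (scikit-learn's SVR is assumed; the embedding dimension, delay and hybrid kernel are illustrative, and the PSO parameter search is omitted).

        import numpy as np
        from sklearn.svm import SVR

        def embed(series, dim=4, tau=2):
            # Phase space reconstruction: time-delay embedding of a scalar series,
            # paired with one-step-ahead targets.
            series = np.asarray(series, dtype=float)
            n = len(series) - (dim - 1) * tau
            X = np.column_stack([series[i * tau: i * tau + n] for i in range(dim)])
            y = series[(dim - 1) * tau + 1: (dim - 1) * tau + 1 + n - 1]
            return X[:-1], y

        series = np.sin(np.linspace(0, 30, 400)) + 0.02 * np.random.default_rng(5).normal(size=400)
        X, y = embed(series)
        model = SVR(kernel="rbf", C=10.0).fit(X[:300], y[:300])   # stand-in for the optimized LSSVM
        print(np.sqrt(np.mean((model.predict(X[300:]) - y[300:]) ** 2)))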

  18. Creating Magic Squares.

    Science.gov (United States)

    Lyon, Betty Clayton

    1990-01-01

    One method of making magic squares using a prolongated square is illustrated. Discussed are third-order magic squares, fractional magic squares, fifth-order magic squares, decimal magic squares, and even magic squares. (CW)

  19. MKRMDA: multiple kernel learning-based Kronecker regularized least squares for MiRNA-disease association prediction.

    Science.gov (United States)

    Chen, Xing; Niu, Ya-Wei; Wang, Guang-Hui; Yan, Gui-Ying

    2017-12-12

    Recently, as research on microRNA (miRNA) continues, plenty of experimental evidence indicates that miRNAs could be associated with the development and progression of various complex human diseases. Hence, it is necessary and urgent to pay more attention to predicting disease-associated miRNAs, which may be helpful for the effective prevention, diagnosis and treatment of human diseases. In particular, constructing computational methods to predict potential miRNA-disease associations is worthy of further study because of its feasibility and effectiveness. In this work, we developed a novel computational model of multiple kernel learning-based Kronecker regularized least squares for miRNA-disease association prediction (MKRMDA), which could reveal potential miRNA-disease associations by automatically optimizing the combination of multiple kernels for diseases and miRNAs. MKRMDA obtained AUCs of 0.9040 and 0.8446 in global and local leave-one-out cross validation, respectively. Meanwhile, MKRMDA achieved average AUCs of 0.8894 ± 0.0015 in fivefold cross validation. Furthermore, we conducted three different kinds of case studies on some important human cancers for further performance evaluation. In the case studies of colonic cancer, esophageal cancer and lymphoma based on known miRNA-disease associations in the HMDDv2.0 database, 76, 94 and 88% of the corresponding top 50 predicted miRNAs were confirmed by experimental reports, respectively. In another two kinds of case studies, for new diseases without any known associated miRNAs and for diseases with only the associations known in the HMDDv1.0 database, the verified ratios for two different cancers were 88 and 94%, respectively. All the results mentioned above adequately show the reliable prediction ability of MKRMDA. We anticipate that MKRMDA could serve to facilitate further developments in the field and follow-up investigations by biomedical researchers.

  20. Using Squares to Sum Squares

    Science.gov (United States)

    DeTemple, Duane

    2010-01-01

    Purely combinatorial proofs are given for the sum of squares formula, 1² + 2² + ... + n² = n(n + 1)(2n + 1)/6, and the sum of sums of squares formula, 1² + (1² + 2²) + ... + (1² + 2² + ... + n²) = n(n + 1)²(n + 2)/12.
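
    Both identities are easy to check numerically; the record above is truncated at the second closed form, and n(n + 1)²(n + 2)/12 is the standard result, verified by the quick check below.

        # quick numerical check of the two identities for n = 1..50
        for n in range(1, 51):
            sum_sq = sum(k * k for k in range(1, n + 1))
            sum_of_sums = sum(sum(j * j for j in range(1, m + 1)) for m in range(1, n + 1))
            assert sum_sq == n * (n + 1) * (2 * n + 1) // 6
            assert sum_of_sums == n * (n + 1) ** 2 * (n + 2) // 12
        print("identities hold for n = 1..50")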

  1. Optimization approach of background value and initial item for improving prediction precision of GM(1,1) model

    Institute of Scientific and Technical Information of China (English)

    Yuhong Wang; Qin Liu; Jianrong Tang; Wenbin Cao; Xiaozhong Li

    2014-01-01

    A combination method of optimization of the background value and optimization of the initial item is proposed. The sequences of the unbiased exponential distribution are simulated and predicted through the optimization of the background value in the grey differential equations. The principle of new information priority in grey system theory and the rationality of the initial item in the original GM(1,1) model are fully expressed through the improvement of the initial item in the proposed time response function. A numerical example is employed to illustrate that the proposed method is able to simulate and predict sequences of raw data with the unbiased exponential distribution, and that it has better simulation performance and prediction precision than the original GM(1,1) model.
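
    For reference, the conventional GM(1,1) model that this record improves upon can be written in a few lines: accumulate the series, form the usual background value z(k) = 0.5[x¹(k) + x¹(k−1)], estimate a and b by least squares, and roll the exponential response forward. The optimized background value and initial item of the proposed method are not reproduced here, and the sample series is invented.

        import numpy as np

        def gm11(x0, horizon=3):
            # Standard GM(1,1): least-squares fit of the grey differential
            # equation, then forecast via the time response function.
            x0 = np.asarray(x0, dtype=float)
            n = len(x0)
            x1 = np.cumsum(x0)                               # accumulated series
            z1 = 0.5 * (x1[1:] + x1[:-1])                    # conventional background value
            B = np.column_stack([-z1, np.ones(n - 1)])
            a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
            k = np.arange(n + horizon)
            x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
            x0_hat = np.concatenate([[x1_hat[0]], np.diff(x1_hat)])
            return x0_hat[:n], x0_hat[n:]                    # fitted values, forecasts

        fitted, forecast = gm11([2.87, 3.28, 3.34, 3.52, 3.68, 3.77], horizon=2)
        print(np.round(fitted, 3), np.round(forecast, 3))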

  2. Measurement of speech levels in the presence of time varying background noise

    Science.gov (United States)

    Pearsons, K. S.; Horonjeff, R.

    1982-01-01

    Short-term speech level measurements that could be used to note changes in vocal effort in a time-varying noise environment were studied. Knowing the changes in speech level would in turn allow prediction of intelligibility in the presence of aircraft flyover noise. Tests indicated that it is possible to use two-second samples of speech to estimate long-term root-mean-square speech levels. Other tests were also performed in which people read out loud during aircraft flyover noise. Results of these tests indicate that people do indeed raise their voices during flyovers, at a rate of about 3-1/2 dB for each 10 dB increase in background level. This finding is in agreement with other tests of speech levels in the presence of steady-state background noise.

  3. Predictive-property-ranked variable reduction in partial least squares modelling with final complexity adapted models: comparison of properties for ranking.

    Science.gov (United States)

    Andries, Jan P M; Vander Heyden, Yvan; Buydens, Lutgarde M C

    2013-01-14

    The calibration performance of partial least squares regression for one response (PLS1) can be improved by eliminating uninformative variables. Many variable-reduction methods are based on so-called predictor-variable properties or predictive properties, which are functions of various PLS-model parameters, and which may change during the steps of the variable-reduction process. Recently, a new predictive-property-ranked variable reduction method with final complexity adapted models, denoted as PPRVR-FCAM or simply FCAM, was introduced. It is a backward variable elimination method applied on the predictive-property-ranked variables. The variable number is first reduced, with constant PLS1 model complexity A, until A variables remain, followed by a further decrease in PLS complexity, allowing the final selection of small numbers of variables. In this study for three data sets the utility and effectiveness of six individual and nine combined predictor-variable properties are investigated, when used in the FCAM method. The individual properties include the absolute value of the PLS1 regression coefficient (REG), the significance of the PLS1 regression coefficient (SIG), the norm of the loading weight (NLW) vector, the variable importance in the projection (VIP), the selectivity ratio (SR), and the squared correlation coefficient of a predictor variable with the response y (COR). The selective and predictive performances of the models resulting from the use of these properties are statistically compared using the one-tailed Wilcoxon signed rank test. The results indicate that the models, resulting from variable reduction with the FCAM method, using individual or combined properties, have similar or better predictive abilities than the full spectrum models. After mean-centring of the data, REG and SIG, provide low numbers of informative variables, with a meaning relevant to the response, and lower than the other individual properties, while the predictive abilities are

  4. Partial least squares modeling of combined infrared, 1H NMR and 13C NMR spectra to predict long residue properties of crude oils

    NARCIS (Netherlands)

    de Peinder, P.; Visser, T.; Petrauskas, D.D.; Salvatori, F.; Soulimani, F.; Weckhuysen, B.M.

    2009-01-01

    Research has been carried out to determine the potential of partial least squares (PLS) modeling of mid-infrared (IR) spectra of crude oils combined with the corresponding 1H and 13C nuclear magnetic resonance (NMR) data, to predict the long residue (LR) properties of these substances. The study

  5. Constrained least squares regularization in PET

    International Nuclear Information System (INIS)

    Choudhury, K.R.; O'Sullivan, F.O.

    1996-01-01

    Standard reconstruction methods used in tomography produce images with undesirable negative artifacts in background and in areas of high local contrast. While sophisticated statistical reconstruction methods can be devised to correct for these artifacts, their computational implementation is excessive for routine operational use. This work describes a technique for rapid computation of approximate constrained least squares regularization estimates. The unique feature of the approach is that it involves no iterative projection or backprojection steps. This contrasts with the familiar computationally intensive algorithms based on algebraic reconstruction (ART) or expectation-maximization (EM) methods. Experimentation with the new approach for deconvolution and mixture analysis shows that the root mean square error quality of estimators based on the proposed algorithm matches and usually dominates that of more elaborate maximum likelihood, at a fraction of the computational effort
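
    The flavour of a constrained least-squares estimate without iterative projection or backprojection steps can be conveyed with a small non-negative least-squares deconvolution; SciPy's nnls is assumed, and this is a generic illustration rather than the rapid approximate algorithm described in the record.

        import numpy as np
        from scipy.optimize import nnls

        # toy deconvolution: blur a spiky signal and recover it with a
        # non-negativity-constrained least-squares fit
        rng = np.random.default_rng(4)
        n = 60
        x_true = np.zeros(n); x_true[[12, 30, 45]] = [3.0, 5.0, 2.0]
        kernel = np.exp(-0.5 * (np.arange(-5, 6) / 2.0) ** 2)
        A = np.array([np.convolve(np.eye(n)[i], kernel, mode="same") for i in range(n)]).T
        b = A @ x_true + 0.05 * rng.normal(size=n)
        x_hat, _ = nnls(A, b)                    # the constraint keeps the background non-negative
        print(np.round(x_hat[[12, 30, 45]], 2))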

  6. Chaotic time series prediction for prenatal exposure to polychlorinated biphenyls in umbilical cord blood using the least squares SEATR model

    Science.gov (United States)

    Xu, Xijin; Tang, Qian; Xia, Haiyue; Zhang, Yuling; Li, Weiqiu; Huo, Xia

    2016-04-01

    Chaotic time series prediction based on nonlinear systems has shown superior performance in the prediction field. We studied prenatal exposure to polychlorinated biphenyls (PCBs) in umbilical cord blood in an electronic waste (e-waste) contaminated area by chaotic time series prediction using the least squares self-exciting threshold autoregressive (SEATR) model. The specific prediction steps based on the proposed methods for prenatal PCB exposure were put forward, and the proposed scheme’s validity was further verified by numerical simulation experiments. The experimental results show that: 1) seven kinds of PCB congeners negatively correlate with five different indices of birth status: newborn weight, height, gestational age, Apgar score and anogenital distance; 2) the prenatally PCB-exposed group was at greater risk compared to the reference group; 3) PCBs increasingly accumulated with time in newborns; and 4) the possibility of newborns suffering from related diseases in the future was greater. The desirable numerical simulation results demonstrate the feasibility of applying the mathematical model in the environmental toxicology field.

  7. A New Hybrid Approach for Wind Speed Prediction Using Fast Block Least Mean Square Algorithm and Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Ummuhan Basaran Filik

    2016-01-01

    A new hybrid wind speed prediction approach, which uses the fast block least mean square (FBLMS) algorithm and the artificial neural network (ANN) method, is proposed. FBLMS is an adaptive algorithm which has reduced complexity with a very fast convergence rate. A hybrid approach is proposed which uses two powerful methods: FBLMS and the ANN method. In order to show the efficiency and accuracy of the proposed approach, seven years of real hourly collected wind speed data sets belonging to the Turkish State Meteorological Service for the Bozcaada and Eskisehir regions are used. Two different ANN structures are used to compare with this approach. The first six years of data are used as the training set; the remaining one year of hourly data is used as the test set. Mean absolute error (MAE) and root mean square error (RMSE) are used for performance evaluations. It is shown for various cases that the performance of the new hybrid approach gives better results than the different conventional ANN structures.

  8. Least-squares model-based halftoning

    Science.gov (United States)

    Pappas, Thrasyvoulos N.; Neuhoff, David L.

    1992-08-01

    A least-squares model-based approach to digital halftoning is proposed. It exploits both a printer model and a model for visual perception. It attempts to produce an 'optimal' halftoned reproduction, by minimizing the squared error between the response of the cascade of the printer and visual models to the binary image and the response of the visual model to the original gray-scale image. Conventional methods, such as clustered ordered dither, use the properties of the eye only implicitly, and resist printer distortions at the expense of spatial and gray-scale resolution. In previous work we showed that our printer model can be used to modify error diffusion to account for printer distortions. The modified error diffusion algorithm has better spatial and gray-scale resolution than conventional techniques, but produces some well known artifacts and asymmetries because it does not make use of an explicit eye model. Least-squares model-based halftoning uses explicit eye models and relies on printer models that predict distortions and exploit them to increase, rather than decrease, both spatial and gray-scale resolution. We have shown that the one-dimensional least-squares problem, in which each row or column of the image is halftoned independently, can be implemented with the Viterbi's algorithm. Unfortunately, no closed form solution can be found in two dimensions. The two-dimensional least squares solution is obtained by iterative techniques. Experiments show that least-squares model-based halftoning produces more gray levels and better spatial resolution than conventional techniques. We also show that the least- squares approach eliminates the problems associated with error diffusion. Model-based halftoning can be especially useful in transmission of high quality documents using high fidelity gray-scale image encoders. As we have shown, in such cases halftoning can be performed at the receiver, just before printing. Apart from coding efficiency, this approach

  9. Developing a local least-squares support vector machines-based neuro-fuzzy model for nonlinear and chaotic time series prediction.

    Science.gov (United States)

    Miranian, A; Abdollahzade, M

    2013-02-01

    Local modeling approaches, owing to their ability to model different operating regimes of nonlinear systems and processes by independent local models, seem appealing for modeling, identification, and prediction applications. In this paper, we propose a local neuro-fuzzy (LNF) approach based on the least-squares support vector machines (LSSVMs). The proposed LNF approach employs LSSVMs, which are powerful in modeling and predicting time series, as local models and uses hierarchical binary tree (HBT) learning algorithm for fast and efficient estimation of its parameters. The HBT algorithm heuristically partitions the input space into smaller subdomains by axis-orthogonal splits. In each partitioning, the validity functions automatically form a unity partition and therefore normalization side effects, e.g., reactivation, are prevented. Integration of LSSVMs into the LNF network as local models, along with the HBT learning algorithm, yield a high-performance approach for modeling and prediction of complex nonlinear time series. The proposed approach is applied to modeling and predictions of different nonlinear and chaotic real-world and hand-designed systems and time series. Analysis of the prediction results and comparisons with recent and old studies demonstrate the promising performance of the proposed LNF approach with the HBT learning algorithm for modeling and prediction of nonlinear and chaotic systems and time series.

  10. Some Results on Mean Square Error for Factor Score Prediction

    Science.gov (United States)

    Krijnen, Wim P.

    2006-01-01

    For the confirmatory factor model a series of inequalities is given with respect to the mean square error (MSE) of three main factor score predictors. The eigenvalues of these MSE matrices are a monotonic function of the eigenvalues of the matrix Γ_ρ = Θ^(1/2) Λ_ρ′ Ψ_ρ^(…

  11. Prediction of long-residue properties of potential blends from mathematically mixed infrared spectra of pure crude oils by partial least-squares regression models

    NARCIS (Netherlands)

    de Peinder, P.; Visser, T.; Petrauskas, D.D.; Salvatori, F.; Soulimani, F.; Weckhuysen, B.M.

    2009-01-01

    Research has been carried out to determine the feasibility of partial least-squares (PLS) regression models to predict the long-residue (LR) properties of potential blends from infrared (IR) spectra that have been created by linearly co-adding the IR spectra of crude oils. The study is the follow-up

  12. EEG Beta power but not background music predicts the recall scores in a foreign-vocabulary learning task

    OpenAIRE

    Küssner, M.B.; de Groot, A.M.B.; Hofman, W.F.; Hillen, M.A.

    2016-01-01

    As tantalizing as the idea that background music beneficially affects foreign vocabulary learning may seem, there is-partly due to a lack of theory-driven research-no consistent evidence to support this notion. We investigated inter-individual differences in the effects of background music on foreign vocabulary learning. Based on Eysenck's theory of personality we predicted that individuals with a high level of cortical arousal should perform worse when learning with background music compared...

  13. Attractiveness Compensates for Low Status Background in the Prediction of Educational Attainment.

    Science.gov (United States)

    Bauldry, Shawn; Shanahan, Michael J; Russo, Rosemary; Roberts, Brent W; Damian, Rodica

    2016-01-01

    People who are perceived as good looking or as having a pleasant personality enjoy many advantages, including higher educational attainment. This study examines (1) whether associations between physical/personality attractiveness and educational attainment vary by parental socioeconomic resources and (2) whether parental socioeconomic resources predict these forms of attractiveness. Based on the theory of resource substitution with structural amplification, we hypothesized that both types of attractiveness would have a stronger association with educational attainment for people from disadvantaged backgrounds (resource substitution), but also that people from disadvantaged backgrounds would be less likely to be perceived as attractive (amplification). This study draws on data from the National Longitudinal Study of Adolescent to Adult Health-including repeated interviewer ratings of respondents' attractiveness-and trait-state structural equation models to examine the moderation (substitution) and mediation (amplification) of physical and personality attractiveness in the link between parental socioeconomic resources and educational attainment. Both perceived personality and physical attractiveness have stronger associations with educational attainment for people from families with lower levels of parental education (substitution). Further, parental education and income are associated with both dimensions of perceived attractiveness, and personality attractiveness is positively associated with educational attainment (amplification). Results do not differ by sex and race/ethnicity. Further, associations between perceived attractiveness and educational attainment remain after accounting for unmeasured family-level confounders using a sibling fixed-effects model. Perceived attractiveness, particularly personality attractiveness, is a more important psychosocial resource for educational attainment for people from disadvantaged backgrounds than for people from advantaged

  14. Multiclass Prediction with Partial Least Square Regression for Gene Expression Data: Applications in Breast Cancer Intrinsic Taxonomy

    Directory of Open Access Journals (Sweden)

    Chi-Cheng Huang

    2013-01-01

    Multiclass prediction remains an obstacle for high-throughput data analysis such as microarray gene expression profiles. Despite recent advancements in machine learning and bioinformatics, most classification tools are limited to applications with binary responses. Our aim was to apply partial least square (PLS) regression to breast cancer intrinsic taxonomy, in which five distinct molecular subtypes were identified. The PAM50 signature genes were used as predictive variables in the PLS analysis, and the latent gene component scores were used in binary logistic regression for each molecular subtype. The 139 prototypical arrays for PAM50 development were used as the training dataset, and three independent microarray studies of Han Chinese origin were used for independent validation (n=535). The agreement between PAM50 centroid-based single sample prediction (SSP) and PLS-regression was excellent (weighted Kappa: 0.988) within the training samples, but deteriorated substantially in independent samples, which could be attributed to many more unclassified samples by PLS-regression. If these unclassified samples were removed, the agreement between PAM50 SSP and PLS-regression improved enormously (weighted Kappa: 0.829, as opposed to 0.541 when unclassified samples were analyzed). Our study ascertained the feasibility of PLS-regression in multi-class prediction, and distinct clinical presentations and prognostic discrepancies were observed across breast cancer molecular subtypes.
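
    The modelling recipe, latent PLS component scores fed into one binary logistic regression per molecular subtype, can be sketched with scikit-learn on synthetic data; the PAM50 genes, the prototypical arrays and the handling of unclassified samples are not reproduced, and all sizes below are invented.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(6)
        X = rng.normal(size=(120, 50))                        # stand-in expression matrix
        y = rng.integers(0, 5, size=120)                      # five "intrinsic subtypes"
        Y = np.eye(5)[y]                                      # one-hot responses for PLS
        scores = PLSRegression(n_components=6).fit(X, Y).transform(X)   # latent component scores
        clfs = [LogisticRegression(max_iter=1000).fit(scores, (y == k).astype(int)) for k in range(5)]
        probs = np.column_stack([c.predict_proba(scores)[:, 1] for c in clfs])
        pred = probs.argmax(axis=1)                           # "unclassified" handling is omitted
        print((pred == y).mean())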

  15. Wave-equation Q tomography and least-squares migration

    KAUST Repository

    Dutta, Gaurav

    2016-01-01

    optimization method that inverts for the subsurface Q distribution by minimizing a skeletonized misfit function ε. Here, ε is the sum of the squared differences between the observed and the predicted peak/centroid-frequency shifts of the early-arrivals. Through

  16. Prediction of UT1-UTC, LOD and AAM χ3 by combination of least-squares and multivariate stochastic methods

    Science.gov (United States)

    Niedzielski, Tomasz; Kosek, Wiesław

    2008-02-01

    This article presents the application of a multivariate prediction technique for predicting universal time (UT1-UTC), length of day (LOD) and the axial component of atmospheric angular momentum (AAM χ3). The multivariate predictions of LOD and UT1-UTC are generated by means of the combination of (1) least-squares (LS) extrapolation of models for annual, semiannual, 18.6-year, 9.3-year oscillations and for the linear trend, and (2) multivariate autoregressive (MAR) stochastic prediction of LS residuals (LS + MAR). The MAR technique enables the use of the AAM χ3 time-series as the explanatory variable for the computation of LOD or UT1-UTC predictions. In order to evaluate the performance of this approach, two other prediction schemes are also applied: (1) LS extrapolation, (2) combination of LS extrapolation and univariate autoregressive (AR) prediction of LS residuals (LS + AR). The multivariate predictions of AAM χ3 data, however, are computed as a combination of the extrapolation of the LS model for annual and semiannual oscillations and the LS + MAR. The AAM χ3 predictions are also compared with LS extrapolation and LS + AR prediction. It is shown that the predictions of LOD and UT1-UTC based on LS + MAR taking into account the axial component of AAM are more accurate than the predictions of LOD and UT1-UTC based on LS extrapolation or on LS + AR. In particular, the UT1-UTC predictions based on LS + MAR during El Niño/La Niña events exhibit considerably smaller prediction errors than those calculated by means of LS or LS + AR. The AAM χ3 time-series is predicted using LS + MAR with higher accuracy than applying LS extrapolation itself in the case of medium-term predictions (up to 100 days in the future). However, the predictions of AAM χ3 reveal the best accuracy for LS + AR.
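
    The univariate LS + AR variant (without the multivariate AAM χ3 channel) can be sketched as follows: fit a trend-plus-harmonics model by least squares, fit an autoregressive model to the LS residuals, again by least squares, and add the extrapolated LS model to the AR residual forecast. The periods, AR order and synthetic series below are illustrative; the LS + MAR scheme would replace the scalar AR step with a vector autoregression that includes AAM χ3.

        import numpy as np

        def harmonic_design(t, periods):
            cols = [np.ones_like(t), t]
            for P in periods:                                  # e.g. annual and semiannual terms
                cols += [np.sin(2 * np.pi * t / P), np.cos(2 * np.pi * t / P)]
            return np.column_stack(cols)

        def ls_plus_ar(t, y, periods, t_future, p=5):
            A = harmonic_design(t, periods)
            beta = np.linalg.lstsq(A, y, rcond=None)[0]        # LS extrapolation model
            r = y - A @ beta                                   # LS residuals

            # AR(p) coefficients for the residuals, again by least squares
            X = np.column_stack([r[p - k - 1:len(r) - k - 1] for k in range(p)])
            phi = np.linalg.lstsq(X, r[p:], rcond=None)[0]

            hist, preds = list(r[-p:]), []
            for _ in t_future:                                 # iterate the AR recursion forward
                step = float(np.dot(phi, hist[-1:-p - 1:-1]))
                hist.append(step)
                preds.append(step)
            return harmonic_design(t_future, periods) @ beta + np.array(preds)

        t = np.arange(3000.0)
        rng = np.random.default_rng(7)
        y = 2e-4 * t + 0.5 * np.sin(2 * np.pi * t / 365.25) + 0.05 * rng.normal(size=t.size).cumsum()
        print(ls_plus_ar(t, y, [365.25, 182.625], np.arange(3000.0, 3010.0)).round(3))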

  17. Comparison of Model Prediction with Measurements of Galactic Background Noise at L-Band

    Science.gov (United States)

    LeVine, David M.; Abraham, Saji; Kerr, Yann H.; Wilson, Willam J.; Skou, Niels; Sobjaerg, S.

    2004-01-01

    The spectral window at L-band (1.413 GHz) is important for passive remote sensing of surface parameters such as soil moisture and sea surface salinity that are needed to understand the hydrological cycle and ocean circulation. Radiation from celestial (mostly galactic) sources is strong in this window, and an accurate accounting for this background radiation is often needed for calibration. Modern radio astronomy measurements in this spectral window have been converted into a brightness temperature map of the celestial sky at L-band suitable for use in correcting passive measurements. This paper presents a comparison of the background radiation predicted by this map with measurements made with several modern L-band remote sensing radiometers. The agreement validates the map and the procedure for locating the source of down-welling radiation.

  18. EEG Beta Power but Not Background Music Predicts the Recall Scores in a Foreign-Vocabulary Learning Task.

    Science.gov (United States)

    Küssner, Mats B; de Groot, Annette M B; Hofman, Winni F; Hillen, Marij A

    2016-01-01

    As tantalizing as the idea that background music beneficially affects foreign vocabulary learning may seem, there is-partly due to a lack of theory-driven research-no consistent evidence to support this notion. We investigated inter-individual differences in the effects of background music on foreign vocabulary learning. Based on Eysenck's theory of personality we predicted that individuals with a high level of cortical arousal should perform worse when learning with background music compared to silence, whereas individuals with a low level of cortical arousal should be unaffected by background music or benefit from it. Participants were tested in a paired-associate learning paradigm consisting of three immediate word recall tasks, as well as a delayed recall task one week later. Baseline cortical arousal assessed with spontaneous EEG measurement in silence prior to the learning rounds was used for the analyses. Results revealed no interaction between cortical arousal and the learning condition (background music vs. silence). Instead, we found an unexpected main effect of cortical arousal in the beta band on recall, indicating that individuals with high beta power learned more vocabulary than those with low beta power. To substantiate this finding we conducted an exact replication of the experiment. Whereas the main effect of cortical arousal was only present in a subsample of participants, a beneficial main effect of background music appeared. A combined analysis of both experiments suggests that beta power predicts the performance in the word recall task, but that there is no effect of background music on foreign vocabulary learning. In light of these findings, we discuss whether searching for effects of background music on foreign vocabulary learning, independent of factors such as inter-individual differences and task complexity, might be a red herring. Importantly, our findings emphasize the need for sufficiently powered research designs and exact replications

  19. EEG Beta Power but Not Background Music Predicts the Recall Scores in a Foreign-Vocabulary Learning Task.

    Directory of Open Access Journals (Sweden)

    Mats B Küssner

    Full Text Available As tantalizing as the idea that background music beneficially affects foreign vocabulary learning may seem, there is, partly due to a lack of theory-driven research, no consistent evidence to support this notion. We investigated inter-individual differences in the effects of background music on foreign vocabulary learning. Based on Eysenck's theory of personality, we predicted that individuals with a high level of cortical arousal should perform worse when learning with background music compared to silence, whereas individuals with a low level of cortical arousal should be unaffected by background music or benefit from it. Participants were tested in a paired-associate learning paradigm consisting of three immediate word recall tasks, as well as a delayed recall task one week later. Baseline cortical arousal assessed with spontaneous EEG measurement in silence prior to the learning rounds was used for the analyses. Results revealed no interaction between cortical arousal and the learning condition (background music vs. silence). Instead, we found an unexpected main effect of cortical arousal in the beta band on recall, indicating that individuals with high beta power learned more vocabulary than those with low beta power. To substantiate this finding we conducted an exact replication of the experiment. Whereas the main effect of cortical arousal was only present in a subsample of participants, a beneficial main effect of background music appeared. A combined analysis of both experiments suggests that beta power predicts the performance in the word recall task, but that there is no effect of background music on foreign vocabulary learning. In light of these findings, we discuss whether searching for effects of background music on foreign vocabulary learning, independent of factors such as inter-individual differences and task complexity, might be a red herring. Importantly, our findings emphasize the need for sufficiently powered research designs and exact replications.

  20. Measurement of the cross section for prompt diphoton production in pp collisions at square root of s=1.96 TeV.

    Science.gov (United States)

    Acosta, D; Adelman, J; Affolder, T; Akimoto, T; Albrow, M G; Ambrose, D; Amerio, S; Amidei, D; Anastassov, A; Anikeev, K; Annovi, A; Antos, J; Aoki, M; Apollinari, G; Arisawa, T; Arguin, J-F; Artikov, A; Ashmanskas, W; Attal, A; Azfar, F; Azzi-Bacchetta, P; Bacchetta, N; Bachacou, H; Badgett, W; Barbaro-Galtieri, A; Barker, G J; Barnes, V E; Barnett, B A; Baroiant, S; Barone, M; Bauer, G; Bedeschi, F; Behari, S; Belforte, S; Bellettini, G; Bellinger, J; Ben-Haim, E; Benjamin, D; Beretvas, A; Bhatti, A; Binkley, M; Bisello, D; Bishai, M; Blair, R E; Blocker, C; Bloom, K; Blumenfeld, B; Bocci, A; Bodek, A; Bolla, G; Bolshov, A; Booth, P S L; Bortoletto, D; Boudreau, J; Bourov, S; Brau, B; Bromberg, C; Brubaker, E; Budagov, J; Budd, H S; Burkett, K; Busetto, G; Bussey, P; Byrum, K L; Cabrera, S; Campanelli, M; Campbell, M; Canepa, A; Casarsa, M; Carlsmith, D; Carron, S; Carosi, R; Cavalli-Sforza, M; Castro, A; Catastini, P; Cauz, D; Cerri, A; Cerrito, L; Chapman, J; Chen, C; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, I; Cho, K; Chokheli, D; Chou, J P; Chu, M L; Chuang, S; Chung, J Y; Chung, W-H; Chung, Y S; Ciobanu, C I; Ciocci, M A; Clark, A G; Clark, D; Coca, M; Connolly, A; Convery, M; Conway, J; Cooper, B; Cordelli, M; Cortiana, G; Cranshaw, J; Cuevas, J; Culbertson, R; Currat, C; Cyr, D; Dagenhart, D; Da Ronco, S; D'Auria, S; de Barbaro, P; De Cecco, S; De Lentdecker, G; Dell'Agnello, S; Dell'Orso, M; Demers, S; Demortier, L; Deninno, M; De Pedis, D; Derwent, P F; Dionisi, C; Dittmann, J R; Dörr, C; Doksus, P; Dominguez, A; Donati, S; Donega, M; Donini, J; D'Onofrio, M; Dorigo, T; Drollinger, V; Ebina, K; Eddy, N; Ehlers, J; Ely, R; Erbacher, R; Erdmann, M; Errede, D; Errede, S; Eusebi, R; Fang, H-C; Farrington, S; Fedorko, I; Fedorko, W T; Feild, R G; Feindt, M; Fernandez, J P; Ferretti, C; Field, R D; Flanagan, G; Flaugher, B; Flores-Castillo, L R; Foland, A; Forrester, S; Foster, G W; Franklin, M; Freeman, J C; Fujii, Y; Furic, I; Gajjar, A; Gallas, A; Galyardt, J; Gallinaro, M; Garcia-Sciveres, M; Garfinkel, A F; Gay, C; Gerberich, H; Gerdes, D W; Gerchtein, E; Giagu, S; Giannetti, P; Gibson, A; Gibson, K; Ginsburg, C; Giolo, K; Giordani, M; Giunta, M; Giurgiu, G; Glagolev, V; Glenzinski, D; Gold, M; Goldschmidt, N; Goldstein, D; Goldstein, J; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Gotra, Y; Goulianos, K; Gresele, A; Griffiths, M; Grosso-Pilcher, C; Grundler, U; Guenther, M; Guimaraes da Costa, J; Haber, C; Hahn, K; Hahn, S R; Halkiadakis, E; Hamilton, A; Han, B-Y; Handler, R; Happacher, F; Hara, K; Hare, M; Harr, R F; Harris, R M; Hartmann, F; Hatakeyama, K; Hauser, J; Hays, C; Hayward, H; Heider, E; Heinemann, B; Heinrich, J; Hennecke, M; Herndon, M; Hill, C; Hirschbuehl, D; Hocker, A; Hoffman, K D; Holloway, A; Hou, S; Houlden, M A; Huffman, B T; Huang, Y; Hughes, R E; Huston, J; Ikado, K; Incandela, J; Introzzi, G; Iori, M; Ishizawa, Y; Issever, C; Ivanov, A; Iwata, Y; Iyutin, B; James, E; Jang, D; Jarrell, J; Jeans, D; Jensen, H; Jeon, E J; Jones, M; Joo, K K; Jun, S Y; Junk, T; Kamon, T; Kang, J; Karagoz Unel, M; Karchin, P E; Kartal, S; Kato, Y; Kemp, Y; Kephart, R; Kerzel, U; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, J E; Kim, M J; Kim, M S; Kim, S B; Kim, S H; Kim, T H; Kim, Y K; King, B T; Kirby, M; Kirsch, L; Klimenko, S; Knuteson, B; Ko, B R; Kobayashi, H; Koehn, P; Kong, D J; Kondo, K; Konigsberg, J; Kordas, K; Korn, A; Korytov, A; Kotelnikov, K; Kotwal, A V; Kovalev, A; Kraus, J; 
Kravchenko, I; Kreymer, A; Kroll, J; Kruse, M; Krutelyov, V; Kuhlmann, S E; Kwang, S; Laasanen, A T; Lai, S; Lami, S; Lammel, S; Lancaster, J; Lancaster, M; Lander, R; Lannon, K; Lath, A; Latino, G; Lauhakangas, R; Lazzizzera, I; Le, Y; Lecci, C; LeCompte, T; Lee, J; Lee, J; Lee, S W; Lefèvre, R; Leonardo, N; Leone, S; Levy, S; Lewis, J D; Li, K; Lin, C; Lin, C S; Lindgren, M; Liss, T M; Lister, A; Litvintsev, D O; Liu, T; Liu, Y; Lockyer, N S; Loginov, A; Loreti, M; Loverre, P; Lu, R-S; Lucchesi, D; Lujan, P; Lukens, P; Lungu, G; Lyons, L; Lys, J; Lysak, R; MacQueen, D; Madrak, R; Maeshima, K; Maksimovic, P; Malferrari, L; Manca, G; Marginean, R; Marino, C; Martin, A; Martin, M; Martin, V; Martínez, M; Maruyama, T; Matsunaga, H; Mattson, M; Mazzanti, P; McFarland, K S; McGivern, D; McIntyre, P M; McNamara, P; NcNulty, R; Mehta, A; Menzemer, S; Menzione, A; Merkel, P; Mesropian, C; Messina, A; Miao, T; Miladinovic, N; Miller, L; Miller, R; Miller, J S; Miquel, R; Miscetti, S; Mitselmakher, G; Miyamoto, A; Miyazaki, Y; Moggi, N; Mohr, B; Moore, R; Morello, M; Movilla Fernandez, P A; Mukherjee, A; Mulhearn, M; Muller, T; Mumford, R; Munar, A; Murat, P; Nachtman, J; Nahn, S; Nakamura, I; Nakano, I; Napier, A; Napora, R; Naumov, D; Necula, V; Niell, F; Nielsen, J; Nelson, C; Nelson, T; Neu, C; Neubauer, M S; Newman-Holmes, C; Nigmanov, T; Nodulman, L; Norniella, O; Oesterberg, K; Ogawa, T; Oh, S H; Oh, Y D; Ohsugi, T; Okusawa, T; Oldeman, R; Orava, R; Orejudos, W; Pagliarone, C; Palencia, E; Paoletti, R; Papadimitriou, V; Pashapour, S; Patrick, J; Pauletta, G; Paulini, M; Pauly, T; Paus, C; Pellett, D; Penzo, A; Phillips, T J; Piacentino, G; Piedra, J; Pitts, K T; Plager, C; Pompos, A; Pondrom, L; Pope, G; Portell, X; Poukhov, O; Prakoshyn, F; Pratt, T; Pronko, A; Proudfoot, J; Ptohos, F; Punzi, G; Rademacker, J; Rahaman, M A; Rakitine, A; Rappoccio, S; Ratnikov, F; Ray, H; Reisert, B; Rekovic, V; Renton, P; Rescigno, M; Rimondi, F; Rinnert, K; Ristori, L; Robertson, W J; Robson, A; Rodrigo, T; Rolli, S; Rosenson, L; Roser, R; Rossin, R; Rott, C; Russ, J; Rusu, V; Ruiz, A; Ryan, D; Saarikko, H; Sabik, S; Safonov, A; Denis, R St; Sakumoto, W K; Salamanna, G; Saltzberg, D; Sanchez, C; Sansoni, A; Santi, L; Sarkar, S; Sato, K; Savard, P; Savoy-Navarro, A; Schlabach, P; Schmidt, E E; Schmidt, M P; Schmitt, M; Scodellaro, L; Scribano, A; Scuri, F; Sedov, A; Seidel, S; Seiya, Y; Semeria, F; Sexton-Kennedy, L; Sfiligoi, I; Shapiro, M D; Shears, T; Shepard, P F; Sherman, D; Shimojima, M; Shochet, M; Shon, Y; Shreyber, I; Sidoti, A; Siegrist, J; Siket, M; Sill, A; Sinervo, P; Sisakyan, A; Skiba, A; Slaughter, A J; Sliwa, K; Smirnov, D; Smith, J R; Snider, F D; Snihur, R; Soha, A; Somalwar, S V; Spalding, J; Spezziga, M; Spiegel, L; Spinella, F; Spiropulu, M; Squillacioti, P; Stadie, H; Stelzer, B; Stelzer-Chilton, O; Strologas, J; Stuart, D; Sukhanov, A; Sumorok, K; Sun, H; Suzuki, T; Taffard, A; Tafirout, R; Takach, S F; Takano, H; Takashima, R; Takeuchi, Y; Takikawa, K; Tanaka, M; Tanaka, R; Tanimoto, N; Tapprogge, S; Tecchio, M; Teng, P K; Terashi, K; Tesarek, R J; Tether, S; Thom, J; Thompson, A S; Thomson, E; Tipton, P; Tiwari, V; Tkaczyk, S; Toback, D; Tollefson, K; Tomura, T; Tonelli, D; Tönnesmann, M; Torre, S; Torretta, D; Tourneur, S; Trischuk, W; Tseng, J; Tsuchiya, R; Tsuno, S; Tsybychev, D; Turini, N; Turner, M; Ukegawa, F; Unverhau, T; Uozumi, S; Usynin, D; Vacavant, L; Vaiciulis, A; Varganov, A; Vataga, E; Vejcik, S; Velev, G; Veszpremi, V; Veramendi, G; Vickey, T; Vidal, R; Vila, I; 
Vilar, R; Vollrath, I; Volobouev, I; von der Mey, M; Wagner, P; Wagner, R G; Wagner, R L; Wagner, W; Wallny, R; Walter, T; Yamashita, T; Yamamoto, K; Wan, Z; Wang, M J; Wang, S M; Warburton, A; Ward, B; Waschke, S; Waters, D; Watts, T; Weber, M; Wester, W C; Whitehouse, B; Wicklund, A B; Wicklund, E; Williams, H H; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolter, M; Worcester, M; Worm, S; Wright, T; Wu, X; Würthwein, F; Wyatt, A; Yagil, A; Yang, C; Yang, U K; Yao, W; Yeh, G P; Yi, K; Yoh, J; Yoon, P; Yorita, K; Yoshida, T; Yu, I; Yu, S; Yu, Z; Yun, J C; Zanello, L; Zanetti, A; Zaw, I; Zetti, F; Zhou, J; Zsenei, A; Zucchelli, S

    2005-07-08

    We report a measurement of the rate of prompt diphoton production in pp collisions at square root of s=1.96 TeV using a data sample of 207 pb(-1) collected with the upgraded Collider Detector at Fermilab. The background from nonprompt sources is determined using a statistical method based on differences in the electromagnetic showers. The cross section is measured as a function of the diphoton mass, the transverse momentum of the diphoton system, and the azimuthal angle between the two photons and is found to be consistent with perturbative QCD predictions.

  1. Least-mean-square spatial filter for IR sensors.

    Science.gov (United States)

    Takken, E H; Friedman, D; Milton, A F; Nitzberg, R

    1979-12-15

    A new least-mean-square filter is defined for signal-detection problems. The technique is proposed for scanning IR surveillance systems operating in poorly characterized but primarily low-frequency clutter interference. Near-optimal detection of point-source targets is predicted both for continuous-time and sampled-data systems.
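
    For readers unfamiliar with the technique, the sketch below is a generic least-mean-square adaptive filter of the kind the abstract refers to, not the paper's specific spatial filter: the weights adapt so that each sample is predicted from its recent past, and the prediction error is the clutter-suppressed output in which point-source targets stand out. The tap count and step size are placeholders chosen for roughly unit-variance input.

        import numpy as np

        def lms_filter(x, n_taps=16, mu=0.01):
            """One-step LMS prediction filter: returns the error (whitened) sequence and final weights."""
            w = np.zeros(n_taps)
            err = np.zeros(len(x))
            for k in range(n_taps, len(x)):
                u = x[k - n_taps:k][::-1]      # most recent past samples first
                y = w @ u                      # predicted (clutter) value for sample k
                err[k] = x[k] - y              # residual; a point target appears as a spike here
                w += 2.0 * mu * err[k] * u     # LMS weight update
            return err, w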

  2. Non-destructive and rapid prediction of moisture content in red pepper (Capsicum annuum L.) powder using near-infrared spectroscopy and a partial least squares regression model

    Science.gov (United States)

    Purpose: The aim of this study was to develop a technique for the non-destructive and rapid prediction of the moisture content in red pepper powder using near-infrared (NIR) spectroscopy and a partial least squares regression (PLSR) model. Methods: Three red pepper powder products were separated in...

  3. Sound field simulation and acoustic animation in urban squares

    Science.gov (United States)

    Kang, Jian; Meng, Yan

    2005-04-01

    Urban squares are important components of cities, and the acoustic environment is important for their usability. While models and formulae for predicting the sound field in urban squares are important for their soundscape design and improvement, acoustic animation tools would be of great importance for designers as well as for the public participation process, given that below a certain sound level, the soundscape evaluation depends mainly on the type of sounds rather than the loudness. This paper first briefly introduces acoustic simulation models developed for urban squares, as well as empirical formulae derived from a series of simulations. It then presents an acoustic animation tool currently being developed. In urban squares there are multiple dynamic sound sources, so that the computation time becomes a main concern. Nevertheless, the requirements for acoustic animation in urban squares are relatively low compared to auditoria. As a result, it is important to simplify the simulation process and algorithms. Based on a series of subjective tests in a virtual reality environment with various simulation parameters, a fast simulation method with acceptable accuracy has been explored. [Work supported by the European Commission.]

  4. Search for Higgs boson production in dilepton and missing energy final states with 5.4 fb(-1) of pp collisions at square root(s) = 1.96 TeV.

    Science.gov (United States)

    Abazov, V M; Abbott, B; Abolins, M; Acharya, B S; Adams, M; Adams, T; Aguilo, E; Alexeev, G D; Alkhazov, G; Alton, A; Alverson, G; Alves, G A; Ancu, L S; Aoki, M; Arnoud, Y; Arov, M; Askew, A; Asman, B; Atramentov, O; Avila, C; BackusMayes, J; Badaud, F; Bagby, L; Baldin, B; Bandurin, D V; Banerjee, S; Barberis, E; Barfuss, A-F; Baringer, P; Barreto, J; Bartlett, J F; Bassler, U; Bauer, D; Beale, S; Bean, A; Begalli, M; Begel, M; Belanger-Champagne, C; Bellantoni, L; Benitez, J A; Beri, S B; Bernardi, G; Bernhard, R; Bertram, I; Besançon, M; Beuselinck, R; Bezzubov, V A; Bhat, P C; Bhatnagar, V; Blazey, G; Blessing, S; Bloom, K; Boehnlein, A; Boline, D; Bolton, T A; Boos, E E; Borissov, G; Bose, T; Brandt, A; Brock, R; Brooijmans, G; Bross, A; Brown, D; Bu, X B; Buchholz, D; Buehler, M; Buescher, V; Bunichev, V; Burdin, S; Burnett, T H; Buszello, C P; Calfayan, P; Calpas, B; Calvet, S; Camacho-Pérez, E; Cammin, J; Carrasco-Lizarraga, M A; Carrera, E; Casey, B C K; Castilla-Valdez, H; Chakrabarti, S; Chakraborty, D; Chan, K M; Chandra, A; Cheu, E; Chevalier-Théry, S; Cho, D K; Cho, S W; Choi, S; Choudhary, B; Christoudias, T; Cihangir, S; Claes, D; Clutter, J; Cooke, M; Cooper, W E; Corcoran, M; Couderc, F; Cousinou, M-C; Cutts, D; Cwiok, M; Das, A; Davies, G; De, K; de Jong, S J; De la Cruz-Burelo, E; DeVaughan, K; Déliot, F; Demarteau, M; Demina, R; Denisov, D; Denisov, S P; Desai, S; Diehl, H T; Diesburg, M; Dominguez, A; Dorland, T; Dubey, A; Dudko, L V; Duflot, L; Duggan, D; Duperrin, A; Dutt, S; Dyshkant, A; Eads, M; Edmunds, D; Ellison, J; Elvira, V D; Enari, Y; Eno, S; Evans, H; Evdokimov, A; Evdokimov, V N; Facini, G; Ferapontov, A V; Ferbel, T; Fiedler, F; Filthaut, F; Fisher, W; Fisk, H E; Fortner, M; Fox, H; Fuess, S; Gadfort, T; Galea, C F; Garcia-Bellido, A; Gavrilov, V; Gay, P; Geist, W; Geng, W; Gerbaudo, D; Gerber, C E; Gershtein, Y; Gillberg, D; Ginther, G; Golovanov, G; Gómez, B; Goussiou, A; Grannis, P D; Greder, S; Greenlee, H; Greenwood, Z D; Gregores, E M; Grenier, G; Gris, Ph; Grivaz, J-F; Grohsjean, A; Grünendahl, S; Grünewald, M W; Guo, F; Guo, J; Gutierrez, G; Gutierrez, P; Haas, A; Haefner, P; Hagopian, S; Haley, J; Hall, I; Han, L; Harder, K; Harel, A; Hauptman, J M; Hays, J; Hebbeker, T; Hedin, D; Hegeman, J G; Heinson, A P; Heintz, U; Hensel, C; Heredia-De la Cruz, I; Herner, K; Hesketh, G; Hildreth, M D; Hirosky, R; Hoang, T; Hobbs, J D; Hoeneisen, B; Hohlfeld, M; Hossain, S; Houben, P; Hu, Y; Hubacek, Z; Huske, N; Hynek, V; Iashvili, I; Illingworth, R; Ito, A S; Jabeen, S; Jaffré, M; Jain, S; Jamin, D; Jesik, R; Johns, K; Johnson, C; Johnson, M; Johnston, D; Jonckheere, A; Jonsson, P; Juste, A; Kajfasz, E; Karmanov, D; Kasper, P A; Katsanos, I; Kaushik, V; Kehoe, R; Kermiche, S; Khalatyan, N; Khanov, A; Kharchilava, A; Kharzheev, Y N; Khatidze, D; Kirby, M H; Kirsch, M; Kohli, J M; Kozelov, A V; Kraus, J; Kumar, A; Kupco, A; Kurca, T; Kuzmin, V A; Kvita, J; Lam, D; Lammers, S; Landsberg, G; Lebrun, P; Lee, H S; Lee, W M; Leflat, A; Lellouch, J; Li, L; Li, Q Z; Lietti, S M; Lim, J K; Lincoln, D; Linnemann, J; Lipaev, V V; Lipton, R; Liu, Y; Liu, Z; Lobodenko, A; Lokajicek, M; Love, P; Lubatti, H J; Luna-Garcia, R; Lyon, A L; Maciel, A K A; Mackin, D; Mättig, P; Magaña-Villalba, R; Mal, P K; Malik, S; Malyshev, V L; Maravin, Y; Martínez-Ortega, J; McCarthy, R; McGivern, C L; Meijer, M M; Melnitchouk, A; Mendoza, L; Menezes, D; Mercadante, P G; Merkin, M; Meyer, A; Meyer, J; Mondal, N K; Moulik, T; Muanza, G S; Mulhearn, M; Mundal, O; Mundim, L; Nagy, E; 
Naimuddin, M; Narain, M; Nayyar, R; Neal, H A; Negret, J P; Neustroev, P; Nilsen, H; Nogima, H; Novaes, S F; Nunnemann, T; Obrant, G; Onoprienko, D; Orduna, J; Osman, N; Osta, J; Otec, R; Otero y Garzón, G J; Owen, M; Padilla, M; Padley, P; Pangilinan, M; Parashar, N; Parihar, V; Park, S-J; Park, S K; Parsons, J; Partridge, R; Parua, N; Patwa, A; Penning, B; Perfilov, M; Peters, K; Peters, Y; Pétroff, P; Piegaia, R; Piper, J; Pleier, M-A; Podesta-Lerma, P L M; Podstavkov, V M; Pol, M-E; Polozov, P; Popov, A V; Prewitt, M; Price, D; Protopopescu, S; Qian, J; Quadt, A; Quinn, B; Rangel, M S; Ranjan, K; Ratoff, P N; Razumov, I; Renkel, P; Rich, P; Rijssenbeek, M; Ripp-Baudot, I; Rizatdinova, F; Robinson, S; Rominsky, M; Royon, C; Rubinov, P; Ruchti, R; Safronov, G; Sajot, G; Sánchez-Hernández, A; Sanders, M P; Sanghi, B; Savage, G; Sawyer, L; Scanlon, T; Schaile, D; Schamberger, R D; Scheglov, Y; Schellman, H; Schliephake, T; Schlobohm, S; Schwanenberger, C; Schwienhorst, R; Sekaric, J; Severini, H; Shabalina, E; Shary, V; Shchukin, A A; Shivpuri, R K; Simak, V; Sirotenko, V; Skubic, P; Slattery, P; Smirnov, D; Snow, G R; Snow, J; Snyder, S; Söldner-Rembold, S; Sonnenschein, L; Sopczak, A; Sosebee, M; Soustruznik, K; Spurlock, B; Stark, J; Stolin, V; Stoyanova, D A; Strandberg, J; Strang, M A; Strauss, E; Strauss, M; Ströhmer, R; Strom, D; Stutte, L; Svoisky, P; Takahashi, M; Tanasijczuk, A; Taylor, W; Tiller, B; Titov, M; Tokmenin, V V; Tsybychev, D; Tuchming, B; Tully, C; Tuts, P M; Unalan, R; Uvarov, L; Uvarov, S; Uzunyan, S; van den Berg, P J; Van Kooten, R; van Leeuwen, W M; Varelas, N; Varnes, E W; Vasilyev, I A; Verdier, P; Vertogradov, L S; Verzocchi, M; Vesterinen, M; Vilanova, D; Vint, P; Vokac, P; Wahl, H D; Wang, M H L S; Warchol, J; Watts, G; Wayne, M; Weber, G; Weber, M; Wetstein, M; White, A; Wicke, D; Williams, M R J; Wilson, G W; Wimpenny, S J; Wobisch, M; Wood, D R; Wyatt, T R; Xie, Y; Xu, C; Yacoob, S; Yamada, R; Yang, W-C; Yasuda, T; Yatsunenko, Y A; Ye, Z; Yin, H; Yip, K; Yoo, H D; Youn, S W; Yu, J; Zeitnitz, C; Zelitch, S; Zhao, T; Zhou, B; Zhu, J; Zielinski, M; Zieminska, D; Zivkovic, L; Zutshi, V; Zverev, E G

    2010-02-12

    A search for the standard model Higgs boson is presented using events with two charged leptons and large missing transverse energy selected from 5.4 fb(-1) of integrated luminosity in pp collisions at square root(s) = 1.96 TeV collected with the D0 detector at the Fermilab Tevatron collider. No significant excess of events above background predictions is found, and observed (expected) upper limits at 95% confidence level on the rate of Higgs boson production are derived that are a factor of 1.55 (1.36) above the predicted standard model cross section at m(H) = 165 GeV.

  5. Genomic value prediction for quantitative traits under the epistatic model

    Directory of Open Access Journals (Sweden)

    Xu Shizhong

    2011-01-01

    Full Text Available Abstract Background Most quantitative traits are controlled by multiple quantitative trait loci (QTL). The contribution of each locus may be negligible but the collective contribution of all loci is usually significant. Genome selection that uses markers of the entire genome to predict the genomic values of individual plants or animals can be more efficient than selection on phenotypic values and pedigree information alone for genetic improvement. When a quantitative trait is contributed by epistatic effects, using all markers (main effects) and marker pairs (epistatic effects) to predict the genomic values of plants can achieve the maximum efficiency for genetic improvement. Results In this study, we created 126 recombinant inbred lines of soybean and genotyped 80 markers across the genome. We applied the genome selection technique to predict the genomic value of somatic embryo number (a quantitative trait) for each line. Cross validation analysis showed that the squared correlation coefficient between the observed and predicted embryo numbers was 0.33 when only main (additive) effects were used for prediction. When the interaction (epistatic) effects were also included in the model, the squared correlation coefficient reached 0.78. Conclusions This study provided an excellent example for the application of genome selection to plant breeding.
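
    As a rough illustration of the main-effects versus epistatic-effects comparison described above, the sketch below uses ridge regression as a stand-in for the study's genome selection estimator and reports the cross-validated squared correlation; the marker coding, penalty, fold count and simulated trait are assumptions made only for the example.

        import numpy as np
        from sklearn.linear_model import Ridge
        from sklearn.model_selection import KFold, cross_val_predict
        from sklearn.preprocessing import PolynomialFeatures

        def cv_squared_correlation(X, y, epistasis=False, alpha=10.0, n_splits=5, seed=0):
            """Cross-validated squared correlation between observed and predicted trait values."""
            if epistasis:
                # append all pairwise marker products (epistatic terms) to the main effects
                X = PolynomialFeatures(degree=2, interaction_only=True,
                                       include_bias=False).fit_transform(X)
            cv = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
            y_hat = cross_val_predict(Ridge(alpha=alpha), X, y, cv=cv)
            return float(np.corrcoef(y, y_hat)[0, 1] ** 2)

        # toy usage: 126 lines, 80 markers coded -1/+1, trait with one epistatic pair
        rng = np.random.default_rng(1)
        X = rng.choice([-1.0, 1.0], size=(126, 80))
        y = X[:, 0] - 0.5 * X[:, 1] + 1.5 * X[:, 2] * X[:, 3] + rng.normal(0.0, 1.0, 126)
        print(cv_squared_correlation(X, y), cv_squared_correlation(X, y, epistasis=True))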

  6. [Prediction of total nitrogen and alkali hydrolysable nitrogen content in loess using hyperspectral data based on correlation analysis and partial least squares regression].

    Science.gov (United States)

    Liu, Xiu-ying; Wang, Li; Chang, Qing-rui; Wang, Xiao-xing; Shang, Yan

    2015-07-01

    Wuqi County of Shaanxi Province, where vegetation recovery measures have been carried out for years, was taken as the study area. A total of 100 loess samples from 24 different profiles were collected. Total nitrogen (TN) and alkali hydrolysable nitrogen (AHN) contents of the soil samples were analyzed, and the soil samples were scanned in the visible/near-infrared (VNIR) region of 350-2500 nm in the laboratory. Calibration models relating TN and AHN contents to the VNIR spectra were developed based on correlation analysis (CA) and partial least squares regression (PLS). The calibration models were validated with independent samples. The results indicated that the optimum model for predicting TN of loess was established by using the first derivative of reflectance. The best model for predicting AHN of loess was established by using normal derivative spectra. The optimum TN model could effectively predict TN in loess from 0 to 40 cm, but the optimum AHN model could only roughly predict AHN at the same depth. This study provided a good method for rapidly predicting TN of loess where vegetation recovery measures have been adopted, but prediction of AHN needs to be further studied.
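
    A minimal partial least squares calibration in the spirit of the study above, assuming a matrix of VNIR spectra and measured TN values; the first-derivative preprocessing, the number of latent components and the validation split are illustrative choices, not the ones used in the paper.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import train_test_split

        def calibrate_pls(spectra, tn, n_components=8, seed=0):
            """Fit a PLSR model on first-derivative spectra; report validation RMSE and R^2."""
            X = np.gradient(spectra, axis=1)                     # first derivative along wavelength
            X_cal, X_val, y_cal, y_val = train_test_split(X, tn, test_size=0.3, random_state=seed)
            pls = PLSRegression(n_components=n_components).fit(X_cal, y_cal)
            y_hat = pls.predict(X_val).ravel()
            rmse = float(np.sqrt(np.mean((y_val - y_hat) ** 2)))
            r2 = float(np.corrcoef(y_val, y_hat)[0, 1] ** 2)
            return pls, rmse, r2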

  7. Background noise levels in Europe

    OpenAIRE

    Gjestland, Truls

    2008-01-01

    - This report gives a brief overview of typical background noise levels in Europe, and suggests a procedure for the prediction of background noise levels based on population density. A proposal for the production of background noise maps for Europe is included.

  8. New model for prediction binary mixture of antihistamine decongestant using artificial neural networks and least squares support vector machine by spectrophotometry method

    Science.gov (United States)

    Mofavvaz, Shirin; Sohrabi, Mahmoud Reza; Nezamzadeh-Ejhieh, Alireza

    2017-07-01

    In the present study, artificial neural networks (ANNs) and least squares support vector machines (LS-SVM) as intelligent methods based on absorption spectra in the range of 230-300 nm have been used for determination of antihistamine decongestant contents. In the first step, a feed-forward back-propagation network was employed with two different training algorithms, Levenberg-Marquardt (LM) and gradient descent with momentum and adaptive learning rate back-propagation (GDX), and their performance was evaluated. The performance of the LM algorithm was better than that of the GDX algorithm. In the second step, a radial basis network was utilized and the results were compared with the previous network. In the last step, the other intelligent method, the least squares support vector machine, was used to construct the antihistamine decongestant prediction model and the results were compared with the two aforementioned networks. The statistical parameters mean square error (MSE), regression coefficient (R2), correlation coefficient (r), mean recovery (%), and relative standard deviation (RSD) were used for selecting the best model among these methods. Moreover, the proposed methods were compared to high-performance liquid chromatography (HPLC) as a reference method. A one-way analysis of variance (ANOVA) test at the 95% confidence level, applied to the results of the suggested and reference methods, showed that there were no significant differences between them.
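
    For the least squares support vector machine mentioned above, a bare-bones regression version can be written by solving the standard LS-SVM linear system directly; the RBF kernel width and regularization value below are placeholders, and the absorbance matrix and concentration vector are assumed inputs.

        import numpy as np

        def rbf_kernel(A, B, sigma=1.0):
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2.0 * sigma ** 2))

        def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
            """Solve the LS-SVM dual system [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
            n = len(y)
            A = np.zeros((n + 1, n + 1))
            A[0, 1:] = 1.0
            A[1:, 0] = 1.0
            A[1:, 1:] = rbf_kernel(X, X, sigma) + np.eye(n) / gamma
            sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
            return sol[0], sol[1:]                    # bias b and support values alpha

        def lssvm_predict(X_train, b, alpha, X_new, sigma=1.0):
            return rbf_kernel(X_new, X_train, sigma) @ alpha + b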

  9. Optimal background matching camouflage.

    Science.gov (United States)

    Michalis, Constantine; Scott-Samuel, Nicholas E; Gibson, David P; Cuthill, Innes C

    2017-07-12

    Background matching is the most familiar and widespread camouflage strategy: avoiding detection by having a similar colour and pattern to the background. Optimizing background matching is straightforward in a homogeneous environment, or when the habitat has very distinct sub-types and there is divergent selection leading to polymorphism. However, most backgrounds have continuous variation in colour and texture, so what is the best solution? Not all samples of the background are likely to be equally inconspicuous, and laboratory experiments on birds and humans support this view. Theory suggests that the most probable background sample (in the statistical sense), at the size of the prey, would, on average, be the most cryptic. We present an analysis, based on realistic assumptions about low-level vision, that estimates the distribution of background colours and visual textures, and predicts the best camouflage. We present data from a field experiment that tests and supports our predictions, using artificial moth-like targets under bird predation. Additionally, we present analogous data for humans, under tightly controlled viewing conditions, searching for targets on a computer screen. These data show that, in the absence of predator learning, the best single camouflage pattern for heterogeneous backgrounds is the most probable sample. © 2017 The Authors.

  10. Prediction of clinical depression scores and detection of changes in whole-brain using resting-state functional MRI data with partial least squares regression.

    Directory of Open Access Journals (Sweden)

    Kosuke Yoshida

    Full Text Available In diagnostic applications of statistical machine learning methods to brain imaging data, common problems include data high-dimensionality and co-linearity, which often cause over-fitting and instability. To overcome these problems, we applied partial least squares (PLS) regression to resting-state functional magnetic resonance imaging (rs-fMRI) data, creating a low-dimensional representation that relates symptoms to brain activity and that predicts clinical measures. Our experimental results, based upon data from clinically depressed patients and healthy controls, demonstrated that PLS and its kernel variants provided significantly better prediction of clinical measures than ordinary linear regression. Subsequent classification using predicted clinical scores distinguished depressed patients from healthy controls with 80% accuracy. Moreover, loading vectors for latent variables enabled us to identify brain regions relevant to depression, including the default mode network, the right superior frontal gyrus, and the superior motor area.

  11. Use of correspondence analysis partial least squares on linear and unimodal data

    DEFF Research Database (Denmark)

    Frisvad, Jens Christian; Norsker, Merete

    1996-01-01

    Correspondence analysis partial least squares (CA-PLS) has been compared with PLS concerning classification and prediction of unimodal growth temperature data and an example using infrared (IR) spectroscopy for predicting amounts of chemicals in mixtures. CA-PLS was very effective for ordinating ... that could only be seen in two-dimensional plots, and also less effective predictions. PLS was the best method in the linear case treated, with fewer components and a better prediction than CA-PLS.

  12. Does Social Background Influence Political Science Grades?

    Science.gov (United States)

    Tiruneh, Gizachew

    2013-01-01

    This paper tests a hypothesized linear relationship between social background and final grades in several political science courses that I taught at the University of Central Arkansas. I employ a cross-sectional research design and ordinary least squares (OLS) estimators to test the foregoing hypothesis. Relying on a sample of up to 204…

  13. New approach to breast cancer CAD using partial least squares and kernel-partial least squares

    Science.gov (United States)

    Land, Walker H., Jr.; Heine, John; Embrechts, Mark; Smith, Tom; Choma, Robert; Wong, Lut

    2005-04-01

    Breast cancer is second only to lung cancer as a tumor-related cause of death in women. Currently, the method of choice for the early detection of breast cancer is mammography. While sensitive to the detection of breast cancer, its positive predictive value (PPV) is low, resulting in biopsies that are only 15-34% likely to reveal malignancy. This paper explores the use of two novel approaches, Partial Least Squares (PLS) and Kernel-PLS (K-PLS), for the diagnosis of breast cancer. The approach is based on the PLS algorithm for linear regression and the K-PLS algorithm for non-linear regression. Preliminary results show that both the PLS and K-PLS paradigms achieved results comparable with those of three separate support vector learning machines (SVLMs), where these SVLMs were known to have been trained to a global minimum. That is, the average performance of the three separate SVLMs was Az = 0.9167927, with an average partial Az (Az90) = 0.5684283. These results compare favorably with the K-PLS paradigm, which obtained an Az = 0.907 and partial Az = 0.6123. The PLS paradigm provided comparable results. Secondly, both the K-PLS and PLS paradigms outperformed the ANN in that the Az index improved by about 14% (Az ~ 0.907 compared to the ANN Az of ~ 0.8). The "Press R squared" values for the PLS and K-PLS machine learning algorithms were 0.89 and 0.9, respectively, which is in good agreement with the other MOP values.

  14. Digital squares

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Kim, Chul E

    1988-01-01

    Digital squares are defined and their geometric properties characterized. A linear time algorithm is presented that considers a convex digital region and determines whether or not it is a digital square. The algorithm also determines the range of the values of the parameter set of its preimages. The analysis involves transforming the boundary of a digital region into the parameter space of slope and y-intercept...

  15. Predicting non-square 2D dice probabilities

    Science.gov (United States)

    Pender, G. A. T.; Uhrin, M.

    2014-07-01

    The prediction of the final state probabilities of a general cuboid randomly thrown onto a surface is a problem that naturally arises in the minds of men and women familiar with regular cubic dice and the basic concepts of probability. Indeed, it was considered by Newton in 1664 (Newton 1967 The Mathematical Papers of Isaac Newton vol I (Cambridge: Cambridge University Press) pp 60-1). In this paper we make progress on the 2D problem (which can be realized in 3D by considering a long cuboid, or alternatively a rectangular cross-sectioned dreidel). For the two-dimensional case we suggest that the ratio of the probabilities of landing on each of the two sides is given by \frac{\sqrt{k^2+l^2}-k}{\sqrt{k^2+l^2}-l}\cdot\frac{\arctan(l/k)}{\arctan(k/l)}, where k and l are the lengths of the two sides. We test this theory both experimentally and computationally, and find good agreement between our theory, experimental and computational results. Our theory is known, from its derivation, to be an approximation for particularly bouncy or ‘grippy’ surfaces where the die rolls through many revolutions before settling. On real surfaces we would expect (and we observe) that the true probability ratio for a 2D die is somewhat closer to unity than predicted by our theory. This problem may also have wider relevance in the testing of physics engines.
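
    The quoted ratio is straightforward to evaluate; the snippet below computes it for a hypothetical 2D die with side lengths k and l chosen only for illustration.

        from math import atan, sqrt

        def side_probability_ratio(k, l):
            """Evaluate the landing-probability ratio predicted by the formula quoted above."""
            h = sqrt(k * k + l * l)
            return (h - k) / (h - l) * atan(l / k) / atan(k / l)

        print(side_probability_ratio(1.0, 2.0))   # e.g. a 1 x 2 cross-section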

  16. Genotype-based ancestral background consistently predicts efficacy and side effects across treatments in CATIE and STAR*D.

    Directory of Open Access Journals (Sweden)

    Daniel E Adkins

    Full Text Available Only a subset of patients will typically respond to any given prescribed drug. The time it takes clinicians to declare a treatment ineffective leaves the patient in an impaired state and at unnecessary risk for adverse drug effects. Thus, diagnostic tests robustly predicting the most effective and safe medication for each patient prior to starting pharmacotherapy would have tremendous clinical value. In this article, we evaluated the use of genetic markers to estimate ancestry as a predictive component of such diagnostic tests. We first estimated each patient's unique mosaic of ancestral backgrounds using genome-wide SNP data collected in the Clinical Antipsychotic Trials of Intervention Effectiveness (CATIE) (n = 765) and the Sequenced Treatment Alternatives to Relieve Depression (STAR*D) (n = 1892). Next, we performed multiple regression analyses to estimate the predictive power of these ancestral dimensions. For 136/89 treatment-outcome combinations tested in CATIE/STAR*D, results indicated 1.67/1.84 times higher median test statistics than expected under the null hypothesis assuming no predictive power (p<0.01, both samples). Thus, ancestry showed robust and pervasive correlations with drug efficacy and side effects in both CATIE and STAR*D. Comparison of the marginal predictive power of MDS ancestral dimensions and self-reported race indicated significant improvements to model fit with the inclusion of MDS dimensions, but mixed evidence for self-reported race. Knowledge of each patient's unique mosaic of ancestral backgrounds provides a potent immediate starting point for developing algorithms identifying the most effective and safe medication for a wide variety of drug-treatment response combinations. As relatively few new psychiatric drugs are currently under development, such personalized medicine offers a promising approach toward optimizing pharmacotherapy for psychiatric conditions.

  17. Vocabulary Knowledge Predicts Lexical Processing: Evidence from a Group of Participants with Diverse Educational Backgrounds

    Directory of Open Access Journals (Sweden)

    Nina Mainz

    2017-07-01

    Full Text Available Vocabulary knowledge is central to a speaker's command of their language. In previous research, greater vocabulary knowledge has been associated with advantages in language processing. In this study, we examined the relationship between individual differences in vocabulary and language processing performance more closely by (i) using a battery of vocabulary tests instead of just one test, and (ii) testing not only university students (Experiment 1) but young adults from a broader range of educational backgrounds (Experiment 2). Five vocabulary tests were developed, including multiple-choice and open antonym and synonym tests and a definition test, and administered together with two established measures of vocabulary. Language processing performance was measured using a lexical decision task. In Experiment 1, vocabulary and word frequency were found to predict word recognition speed while we did not observe an interaction between the effects. In Experiment 2, word recognition performance was predicted by word frequency and the interaction between word frequency and vocabulary, with high-vocabulary individuals showing smaller frequency effects. While overall the individual vocabulary tests were correlated and showed similar relationships with language processing as compared to a composite measure of all tests, they appeared to share less variance in Experiment 2 than in Experiment 1. Implications of our findings concerning the assessment of vocabulary size in individual differences studies and the investigation of individuals from more varied backgrounds are discussed.

  18. Vocabulary Knowledge Predicts Lexical Processing: Evidence from a Group of Participants with Diverse Educational Backgrounds

    Science.gov (United States)

    Mainz, Nina; Shao, Zeshu; Brysbaert, Marc; Meyer, Antje S.

    2017-01-01

    Vocabulary knowledge is central to a speaker's command of their language. In previous research, greater vocabulary knowledge has been associated with advantages in language processing. In this study, we examined the relationship between individual differences in vocabulary and language processing performance more closely by (i) using a battery of vocabulary tests instead of just one test, and (ii) testing not only university students (Experiment 1) but young adults from a broader range of educational backgrounds (Experiment 2). Five vocabulary tests were developed, including multiple-choice and open antonym and synonym tests and a definition test, and administered together with two established measures of vocabulary. Language processing performance was measured using a lexical decision task. In Experiment 1, vocabulary and word frequency were found to predict word recognition speed while we did not observe an interaction between the effects. In Experiment 2, word recognition performance was predicted by word frequency and the interaction between word frequency and vocabulary, with high-vocabulary individuals showing smaller frequency effects. While overall the individual vocabulary tests were correlated and showed similar relationships with language processing as compared to a composite measure of all tests, they appeared to share less variance in Experiment 2 than in Experiment 1. Implications of our findings concerning the assessment of vocabulary size in individual differences studies and the investigation of individuals from more varied backgrounds are discussed. PMID:28751871

  19. Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?

    Science.gov (United States)

    Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander

    2016-01-01

    Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEP_uncertain(X), which evaluates mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
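
    A schematic rendering of the two criteria, assuming a vector of observations, predictions from one fixed model configuration, and an ensemble of predictions with one row per sampled structure/parameter/input combination; this only sketches the bias-plus-variance split and not the random effects ANOVA used in the paper.

        import numpy as np

        def msep_fixed(y_obs, y_fixed):
            """MSEP_fixed: mean squared error of a model with fixed structure, parameters and inputs."""
            return float(np.mean((y_obs - y_fixed) ** 2))

        def msep_uncertain(y_obs, y_ensemble):
            """MSEP_uncertain(X) as squared bias plus model variance; y_ensemble is (n_draws, n_cases)."""
            mean_pred = y_ensemble.mean(axis=0)
            squared_bias = float(np.mean((y_obs - mean_pred) ** 2))
            model_variance = float(np.mean(y_ensemble.var(axis=0)))
            return squared_bias + model_variance, squared_bias, model_variance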

  20. Hybrid robust model based on an improved functional link neural network integrating with partial least square (IFLNN-PLS) and its application to predicting key process variables.

    Science.gov (United States)

    He, Yan-Lin; Xu, Yuan; Geng, Zhi-Qiang; Zhu, Qun-Xiong

    2016-03-01

    In this paper, a hybrid robust model based on an improved functional link neural network integrating with partial least square (IFLNN-PLS) is proposed. Firstly, an improved functional link neural network with small norm of expanded weights and high input-output correlation (SNEWHIOC-FLNN) was proposed for enhancing the generalization performance of FLNN. Unlike the traditional FLNN, the expanded variables of the original inputs are not directly used as the inputs in the proposed SNEWHIOC-FLNN model. The original inputs are attached to some small norm of expanded weights. As a result, the correlation coefficient between some of the expanded variables and the outputs is enhanced. The larger the correlation coefficient is, the more relevant the expanded variables tend to be. In the end, the expanded variables with larger correlation coefficients are selected as the inputs to improve the performance of the traditional FLNN. In order to test the proposed SNEWHIOC-FLNN model, three UCI (University of California, Irvine) regression datasets named Housing, Concrete Compressive Strength (CCS), and Yacht Hydro Dynamics (YHD) were selected. Then a hybrid model based on the improved FLNN integrating with partial least square (IFLNN-PLS) was built. In the IFLNN-PLS model, the connection weights are calculated using the partial least square method but not the error back propagation algorithm. Lastly, IFLNN-PLS was developed as an intelligent measurement model for accurately predicting the key variables in the Purified Terephthalic Acid (PTA) process and the High Density Polyethylene (HDPE) process. Simulation results illustrated that the IFLNN-PLS could significantly improve the prediction performance. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  1. Prediction of retention indices for frequently reported compounds of plant essential oils using multiple linear regression, partial least squares, and support vector machine.

    Science.gov (United States)

    Yan, Jun; Huang, Jian-Hua; He, Min; Lu, Hong-Bing; Yang, Rui; Kong, Bo; Xu, Qing-Song; Liang, Yi-Zeng

    2013-08-01

    Retention indices for frequently reported compounds of plant essential oils on three different stationary phases were investigated. Multivariate linear regression, partial least squares, and support vector machine, combined with a new variable selection approach called random-frog recently proposed by our group, were employed to model quantitative structure-retention relationships. Internal and external validations were performed to ensure the stability and predictive ability. All three methods yielded acceptable models, and the optimal results were obtained by the support vector machine based on a small number of informative descriptors, with squared cross-validation correlation coefficients of 0.9726, 0.9759, and 0.9331 on the dimethylsilicone stationary phase, the dimethylsilicone phase with 5% phenyl groups, and the PEG stationary phase, respectively. The performances of two variable selection approaches, random-frog and genetic algorithm, are compared. The importance of the variables was found to be consistent when estimated from correlation coefficients in multivariate linear regression equations and selection probability in model spaces. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Sub-Millimeter Tests of the Newtonian Inverse Square Law

    International Nuclear Information System (INIS)

    Adelberger, Eric

    2005-01-01

    It is remarkable that small-scale experiments can address important open issues in fundamental science such as: 'why is gravity so weak compared to the other interactions?' and 'why is the cosmological constant so small compared to the predictions of quantum mechanics?' String theory ideas (new scalar particles and extra dimensions) and other notions hint that Newton's Inverse-Square Law could break down at distances less than 1 mm. I will review some motivations for testing the Inverse-Square Law, and discuss recent mechanical experiments with torsion balances, small oscillators, micro-cantilevers, and ultra-cold neutrons. Our torsion-balance experiments have probed for gravitational-strength interactions with length scales down to 70 micrometers, which is approximately the diameter of a human hair.

  3. Modeling and forecasting monthly movement of annual average solar insolation based on the least-squares Fourier-model

    International Nuclear Information System (INIS)

    Yang, Zong-Chang

    2014-01-01

    Highlights: • Introduce a finite Fourier-series model for evaluating monthly movement of annual average solar insolation. • Present a forecast method for predicting its movement based on the extended Fourier-series model in the least-squares sense. • Show that its movement is well described by a low number of harmonics, approximately a 6-term Fourier series. • Predict its movement with a best fit using fewer than 6 Fourier terms. - Abstract: Solar insolation is one of the most important measurement parameters in many fields. Modeling and forecasting the monthly movement of annual average solar insolation is of increasing importance in areas of engineering, science and economics. In this study, Fourier analysis employing a finite Fourier series is proposed for evaluating the monthly movement of annual average solar insolation and extended in the least-squares sense for forecasting. The conventional Fourier analysis, which is the most common analysis method in the frequency domain, cannot be directly applied for prediction. Incorporated with the least-squares method, the introduced Fourier-series model is extended to predict its movement. The extended Fourier-series forecasting model obtains its optimum Fourier coefficients in the least-squares sense based on its previous monthly movements. The proposed method is applied to experiments and yields satisfying results for different cities (states). It is indicated that the monthly movement of annual average solar insolation is well described by a low number of harmonics with approximately a 6-term Fourier series. The extended Fourier forecasting model predicts the monthly movement of annual average solar insolation best with fewer than 6 Fourier terms.
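
    A compact sketch of the least-squares Fourier-series idea: fit a K-term harmonic model to a monthly series by ordinary least squares and extrapolate it forward. The base period of 12 months and K = 6 merely mirror the roughly 6-term finding reported above; the input series is assumed.

        import numpy as np

        def fourier_design(t, n_terms, period=12.0):
            """Columns: constant plus sine/cosine pairs for harmonics 1..n_terms of the base period."""
            t = np.asarray(t, dtype=float)
            cols = [np.ones_like(t)]
            for k in range(1, n_terms + 1):
                w = 2.0 * np.pi * k / period
                cols += [np.sin(w * t), np.cos(w * t)]
            return np.column_stack(cols)

        def fit_and_forecast(series, n_terms=6, horizon=12):
            """Least-squares Fourier coefficients from the observed months, then forward extrapolation."""
            t = np.arange(len(series), dtype=float)
            coef, *_ = np.linalg.lstsq(fourier_design(t, n_terms), series, rcond=None)
            t_future = np.arange(len(series), len(series) + horizon, dtype=float)
            return fourier_design(t_future, n_terms) @ coef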

  4. Lax-pair operators for squared-sum and squared-difference eigenfunctions

    International Nuclear Information System (INIS)

    Ichikawa, Yoshihiko; Iino, Kazuhiro.

    1984-10-01

    Inter-relationship between various representations of the inverse scattering transformation is established by examining eigenfunctions of Lax-pair operators of the sine-Gordon equation and the modified Korteweg-de Vries equation. In particular, it is shown explicitly that there exists Lax-pair operators for the squared-sum and squared-difference eigenfunctions of the Ablowitz-Kaup-Newell-Segur inverse scattering transformation. (author)

  5. Square Root Unscented Kalman Filters for State Estimation of Induction Motor Drives

    DEFF Research Database (Denmark)

    Lascu, Cristian; Jafarzadeh, Saeed; Fadali, M.Sami

    2013-01-01

    This paper investigates the application, design, and implementation of the square root unscented Kalman filter (UKF) (SRUKF) for induction motor (IM) sensorless drives. The UKF uses nonlinear unscented transforms (UTs) in the prediction step in order to preserve the stochastic characteristics of a nonlinear system. The advantage of using the UT is its ability to capture the nonlinear behavior of the system, unlike the extended Kalman filter (EKF) that uses linearized models. The SRUKF implements the UKF using square root filtering to reduce computational errors. We discuss the theoretical aspects...
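
    The prediction step rests on the unscented transform; the bare-bones version below propagates a mean and covariance through a nonlinear function with 2n+1 sigma points. The scaling parameters follow one common default choice rather than the paper's, and a full square-root implementation would propagate a Cholesky factor of the covariance instead of the covariance itself.

        import numpy as np

        def unscented_transform(m, P, f, alpha=1.0, beta=2.0, kappa=0.0):
            """Propagate mean m and covariance P through a nonlinear function f using sigma points."""
            n = len(m)
            lam = alpha ** 2 * (n + kappa) - n
            S = np.linalg.cholesky((n + lam) * P)         # columns are scaled covariance square roots
            sigma = np.vstack([m, m + S.T, m - S.T])      # 2n + 1 sigma points
            Wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
            Wc = Wm.copy()
            Wm[0] = lam / (n + lam)
            Wc[0] = Wm[0] + (1.0 - alpha ** 2 + beta)
            Y = np.array([f(s) for s in sigma])           # transformed sigma points
            mean = Wm @ Y
            diff = Y - mean
            cov = (Wc[:, None] * diff).T @ diff
            return mean, cov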

  6. Prediction of aged red wine aroma properties from aroma chemical composition. Partial least squares regression models.

    Science.gov (United States)

    Aznar, Margarita; López, Ricardo; Cacho, Juan; Ferreira, Vicente

    2003-04-23

    Partial least squares regression (PLSR) models able to predict some of the wine aroma nuances from its chemical composition have been developed. The aromatic sensory characteristics of 57 Spanish aged red wines were determined by 51 experts from the wine industry. The individual descriptions given by the experts were recorded, and the frequency with which a sensory term was used to define a given wine was taken as a measurement of its intensity. The aromatic chemical composition of the wines was determined by already published gas chromatography (GC)-flame ionization detector and GC-mass spectrometry methods. In total, 69 odorants were analyzed. Both matrices, the sensory and chemical data, were simplified by grouping and rearranging correlated sensory terms or chemical compounds and by the exclusion of secondary aroma terms or of weak aroma chemicals. Finally, models were developed for 18 sensory terms and 27 chemicals or groups of chemicals. Satisfactory models, explaining more than 45% of the original variance, could be found for nine of the most important sensory terms (wood-vanillin-cinnamon, animal-leather-phenolic, toasted-coffee, old wood-reduction, vegetal-pepper, raisin-flowery, sweet-candy-cacao, fruity, and berry fruit). For this set of terms, the correlation coefficients between the measured and predicted Y (determined by cross-validation) ranged from 0.62 to 0.81. Models confirmed the existence of complex multivariate relationships between chemicals and odors. In general, pleasant descriptors were positively correlated to chemicals with pleasant aroma, such as vanillin, beta damascenone, or (E)-beta-methyl-gamma-octalactone, and negatively correlated to compounds showing less favorable odor properties, such as 4-ethyl and vinyl phenols, 3-(methylthio)-1-propanol, or phenylacetaldehyde.

  7. Prediction for human intelligence using morphometric characteristics of cortical surface: partial least square analysis.

    Science.gov (United States)

    Yang, J-J; Yoon, U; Yun, H J; Im, K; Choi, Y Y; Lee, K H; Park, H; Hough, M G; Lee, J-M

    2013-08-29

    A number of imaging studies have reported neuroanatomical correlates of human intelligence with various morphological characteristics of the cerebral cortex. However, it is not yet clear whether these morphological properties of the cerebral cortex account for human intelligence. We assumed that the complex structure of the cerebral cortex could be explained effectively considering cortical thickness, surface area, sulcal depth and absolute mean curvature together. In 78 young healthy adults (age range: 17-27, male/female: 39/39), we used the full-scale intelligence quotient (FSIQ) and the cortical measurements calculated in native space from each subject to determine how much combining various cortical measures explained human intelligence. Since each cortical measure is thought to be not independent but highly inter-related, we applied partial least square (PLS) regression, which is one of the most promising multivariate analysis approaches, to overcome multicollinearity among cortical measures. Our results showed that 30% of FSIQ was explained by the first latent variable extracted from PLS regression analysis. Although it is difficult to relate the first derived latent variable with specific anatomy, we found that cortical thickness measures had a substantial impact on the PLS model supporting the most significant factor accounting for FSIQ. Our results presented here strongly suggest that the new predictor combining different morphometric properties of complex cortical structure is well suited for predicting human intelligence. Copyright © 2013 IBRO. Published by Elsevier Ltd. All rights reserved.

  8. Irrational Square Roots

    Science.gov (United States)

    Misiurewicz, Michal

    2013-01-01

    If students are presented the standard proof of irrationality of √2, can they generalize it to a proof of the irrationality of "√p", "p" a prime if, instead of considering divisibility by "p", they cling to the notions of even and odd used in the standard proof?

  9. Wind Tunnel Strain-Gage Balance Calibration Data Analysis Using a Weighted Least Squares Approach

    Science.gov (United States)

    Ulbrich, N.; Volden, T.

    2017-01-01

    A new approach is presented that uses a weighted least squares fit to analyze wind tunnel strain-gage balance calibration data. The weighted least squares fit is specifically designed to increase the influence of single-component loadings during the regression analysis. The weighted least squares fit also reduces the impact of calibration load schedule asymmetries on the predicted primary sensitivities of the balance gages. A weighting factor between zero and one is assigned to each calibration data point that depends on a simple count of its intentionally loaded load components or gages. The greater the number of a data point's intentionally loaded load components or gages is, the smaller its weighting factor becomes. The proposed approach is applicable to both the Iterative and Non-Iterative Methods that are used for the analysis of strain-gage balance calibration data in the aerospace testing community. The Iterative Method uses a reasonable estimate of the tare corrected load set as input for the determination of the weighting factors. The Non-Iterative Method, on the other hand, uses gage output differences relative to the natural zeros as input for the determination of the weighting factors. Machine calibration data of a six-component force balance is used to illustrate benefits of the proposed weighted least squares fit. In addition, a detailed derivation of the PRESS residuals associated with a weighted least squares fit is given in the appendices of the paper as this information could not be found in the literature. These PRESS residuals may be needed to evaluate the predictive capabilities of the final regression models that result from a weighted least squares fit of the balance calibration data.
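
    In generic form, the weighted fit described above amounts to solving the weighted normal equations with per-point weights; the weighting rule sketched here (weight shrinking with the number of intentionally loaded components) is a simplified stand-in for the scheme defined in the paper, and the design matrix and gage outputs are assumed inputs.

        import numpy as np

        def weights_from_loaded_counts(n_loaded):
            """Weight in (0, 1] per calibration point: single-component loadings keep full weight,
            points that intentionally load more components get proportionally less influence."""
            return 1.0 / np.maximum(np.asarray(n_loaded, dtype=float), 1.0)

        def weighted_least_squares(X, y, w):
            """Solve (X^T W X) b = X^T W y with W = diag(w) for the regression coefficients b."""
            Xw = X * w[:, None]
            return np.linalg.solve(X.T @ Xw, Xw.T @ y)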

  10. Square through tube

    International Nuclear Information System (INIS)

    Akita, Junji; Honma, Toei.

    1975-01-01

    Object: To provide a square through tube that accommodates thermal movement in pipelines, such as the square-shaped exhaust pipe of a water supply pump driving turbine, and that allows wide freedom in shape and dimensions for efficient installation at site. Structure: In a through tube that must be kept airtight for decontamination purposes in an atomic power plant, comprising a seal rubber plate and a band, bolt and nut for securing said plate, the seal rubber plate is worked into the desired shape so that it may be placed in intimate contact with the concrete floor surface by utilizing the elasticity of the rubber, thereby providing airtightness at a corner portion of the square tube. (Kamimura, M.)

  11. Influence of Task Difficulty and Background Music on Working Memory Activity: Developmental Considerations.

    Science.gov (United States)

    Kaniel, Shlomo; Aram, Dorit

    1998-01-01

    A study of 300 children in kindergarten, grade 2, and grade 6 found that background music improved visual discrimination task performance at the youngest and middle ages and had no effect on the oldest participants. On a square identification task, background music had no influence on easy and difficult tasks but lowered performance on…

  12. Development of a partial least squares-artificial neural network (PLS-ANN) hybrid model for the prediction of consumer liking scores of ready-to-drink green tea beverages.

    Science.gov (United States)

    Yu, Peigen; Low, Mei Yin; Zhou, Weibiao

    2018-01-01

    In order to develop products that would be preferred by consumers, the effects of the chemical compositions of ready-to-drink green tea beverages on consumer liking were studied through regression analyses. Green tea model systems were prepared by dosing solutions of 0.1% green tea extract with differing concentrations of eight flavour keys deemed to be important for green tea aroma and taste, based on a D-optimal experimental design, before undergoing commercial sterilisation. Sensory evaluation of the green tea model system was carried out using an untrained consumer panel to obtain hedonic liking scores of the samples. Regression models were subsequently trained to objectively predict the consumer liking scores of the green tea model systems. A linear partial least squares (PLS) regression model was developed to describe the effects of the eight flavour keys on consumer liking, with a coefficient of determination (R2) of 0.733, and a root-mean-square error (RMSE) of 3.53%. The PLS model was further augmented with an artificial neural network (ANN) to establish a PLS-ANN hybrid model. The established hybrid model was found to give a better prediction of consumer liking scores, based on its R2 (0.875) and RMSE (2.41%). Copyright © 2017 Elsevier Ltd. All rights reserved.
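
    One way to realize such a PLS-ANN hybrid is to let PLS extract a few latent scores and train a small neural network on those scores; the sketch below uses arbitrary layer sizes and component counts and is not the authors' exact architecture, and the flavour-key matrix and liking scores are assumed inputs.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.neural_network import MLPRegressor

        def fit_pls_ann(X, y, n_latent=3, seed=0):
            """PLS extracts latent scores relating flavour keys to liking; an ANN then models the
            remaining nonlinearity in score space."""
            pls = PLSRegression(n_components=n_latent).fit(X, y)
            ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=seed)
            ann.fit(pls.transform(X), np.ravel(y))
            return pls, ann

        def predict_pls_ann(pls, ann, X_new):
            return ann.predict(pls.transform(X_new))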

  13. Weighted conditional least-squares estimation

    International Nuclear Information System (INIS)

    Booth, J.G.

    1987-01-01

    A two-stage estimation procedure is proposed that generalizes the concept of conditional least squares. The method is instead based upon the minimization of a weighted sum of squares, where the weights are inverses of estimated conditional variance terms. Some general conditions are given under which the estimators are consistent and jointly asymptotically normal. More specific details are given for ergodic Markov processes with stationary transition probabilities. A comparison is made with the ordinary conditional least-squares estimators for two simple branching processes with immigration. The relationship between weighted conditional least squares and other, more well-known, estimators is also investigated. In particular, it is shown that in many cases estimated generalized least-squares estimators can be obtained using the weighted conditional least-squares approach. Applications to stochastic compartmental models, and linear models with nested error structures are considered

  14. Uncertainty analysis of pollutant build-up modelling based on a Bayesian weighted least squares approach

    International Nuclear Information System (INIS)

    Haddad, Khaled; Egodawatta, Prasanna; Rahman, Ataur; Goonetilleke, Ashantha

    2013-01-01

    Reliable pollutant build-up prediction plays a critical role in the accuracy of urban stormwater quality modelling outcomes. However, water quality data collection is resource demanding compared to streamflow data monitoring, where a greater quantity of data is generally available. Consequently, available water quality datasets span only relatively short time scales unlike water quantity data. Therefore, the ability to take due consideration of the variability associated with pollutant processes and natural phenomena is constrained. This in turn gives rise to uncertainty in the modelling outcomes as research has shown that pollutant loadings on catchment surfaces and rainfall within an area can vary considerably over space and time scales. Therefore, the assessment of model uncertainty is an essential element of informed decision making in urban stormwater management. This paper presents the application of a range of regression approaches such as ordinary least squares regression, weighted least squares regression and Bayesian weighted least squares regression for the estimation of uncertainty associated with pollutant build-up prediction using limited datasets. The study outcomes confirmed that the use of ordinary least squares regression with fixed model inputs and limited observational data may not provide realistic estimates. The stochastic nature of the dependent and independent variables need to be taken into consideration in pollutant build-up prediction. It was found that the use of the Bayesian approach along with the Monte Carlo simulation technique provides a powerful tool, which attempts to make the best use of the available knowledge in prediction and thereby presents a practical solution to counteract the limitations which are otherwise imposed on water quality modelling. - Highlights: ► Water quality data spans short time scales leading to significant model uncertainty. ► Assessment of uncertainty essential for informed decision making in water
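
    The sketch below shows one way a Bayesian weighted least squares fit with Monte Carlo predictive intervals might look for a build-up-type relation (load versus antecedent dry days); the synthetic data, the Gaussian prior, the assumed heteroscedastic weights, and the fixed noise variance are illustrative choices, not the study's formulation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic pollutant build-up: load grows with antecedent dry days, few noisy samples.
days = rng.uniform(1, 10, 25)
load = 2.0 + 0.8 * days + rng.normal(0, 0.8, 25)
X = np.column_stack([np.ones_like(days), days])
w = 1.0 / (0.5 + 0.1 * days)                   # assumed heteroscedastic weights
W = np.diag(w)

# Gaussian prior beta ~ N(0, tau^2 I); the noise variance is treated as known here.
tau2, sigma2 = 10.0, 0.8 ** 2
post_cov = np.linalg.inv(np.eye(2) / tau2 + X.T @ W @ X / sigma2)
post_mean = post_cov @ (X.T @ W @ load / sigma2)

# Monte Carlo over the posterior gives a predictive band for a new dry period.
draws = rng.multivariate_normal(post_mean, post_cov, size=5000)
x_new = np.array([1.0, 7.0])                   # 7 antecedent dry days
pred = draws @ x_new + rng.normal(0, np.sqrt(sigma2), 5000)
print("posterior mean coefficients:", post_mean.round(3))
print("95% predictive interval:", np.percentile(pred, [2.5, 97.5]).round(2))
```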

  15. Square-root measurement for pure states

    International Nuclear Information System (INIS)

    Huang Siendong

    2005-01-01

    Square-root measurement is a very useful suboptimal measurement in many applications. It was shown that the square-root measurement minimizes the squared error for pure states. In this paper, the least squared error problem is reformulated and a new proof is provided. It is found that the least squared error depends only on the average density operator of the input states. The properties of the least squared error are then discussed, and it is shown that if the input pure states are uniformly distributed, the average probability of error has an upper bound depending on the least squared error, the rank of the average density operator, and the number of the input states. The aforementioned properties help explain why the square-root measurement can be effective in decoding processes
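
    The construction itself is short enough to sketch; below, two non-orthogonal qubit states with equal priors are assumed for illustration, and the square-root measurement operators are built from the inverse square root of the average density operator.

```python
import numpy as np

# Two non-orthogonal qubit states with equal priors (illustrative choice).
psi = [np.array([1.0, 0.0]), np.array([np.cos(0.4), np.sin(0.4)])]
p = [0.5, 0.5]

# Average density operator rho = sum_i p_i |psi_i><psi_i|.
rho = sum(pi * np.outer(v, v.conj()) for pi, v in zip(p, psi))
evals, evecs = np.linalg.eigh(rho)
rho_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T.conj()

# Square-root measurement vectors |mu_i> = sqrt(p_i) * rho^(-1/2) |psi_i>.
mu = [np.sqrt(pi) * rho_inv_sqrt @ v for pi, v in zip(p, psi)]

# POVM completeness check (should give the identity) and average success probability.
completeness = sum(np.outer(m, m.conj()) for m in mu)
p_success = sum(pi * abs(m.conj() @ v) ** 2 for pi, m, v in zip(p, mu, psi))
print("sum of POVM elements:\n", completeness.round(6))
print("average probability of correct identification:", round(float(p_success), 4))
```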

  16. JEM-X background models

    DEFF Research Database (Denmark)

    Huovelin, J.; Maisala, S.; Schultz, J.

    2003-01-01

    Background and determination of its components for the JEM-X X-ray telescope on INTEGRAL are discussed. A part of the first background observations by JEM-X are analysed and results are compared to predictions. The observations are based on extensive imaging of background near the Crab Nebula...... on revolution 41 of INTEGRAL. Total observing time used for the analysis was 216 502 s, with the average of 25 cps of background for each of the two JEM-X telescopes. JEM-X1 showed slightly higher average background intensity than JEM-X2. The detectors were stable during the long exposures, and weak orbital...... background was enhanced in the central area of a detector, and it decreased radially towards the edge, with a clear vignetting effect for both JEM-X units. The instrument background was weakest in the central area of a detector and showed a steep increase at the very edges of both JEM-X detectors...

  17. Offset Free Tracking Predictive Control Based on Dynamic PLS Framework

    Directory of Open Access Journals (Sweden)

    Jin Xin

    2017-10-01

    Full Text Available This paper develops an offset-free tracking model predictive control based on a dynamic partial least squares (PLS) framework. First, a state space model is used as the inner model of PLS to describe the dynamic system, where a subspace identification method is used to identify the inner model. Based on the obtained model, multiple independent model predictive control (MPC) controllers are designed. Due to the decoupling character of PLS, these controllers run separately, which is suitable for a distributed control framework. In addition, the increment of the inner model output is considered in the cost function of the MPC, which introduces integral action into the controller. Hence, offset-free tracking performance is guaranteed. The results of an industry-background simulation demonstrate the effectiveness of the proposed method.

  18. Block copolymer morphologies confined by square-shaped particle: Hard and soft confinement

    International Nuclear Information System (INIS)

    Zhang Qiyi; Yang Wenyan; Hu Kaiyan

    2016-01-01

    The self-assembly of diblock copolymers confined around one square-shaped particle is studied systematically within two-dimensional self-consistent field theory (SCFT). In this model, we assume that the thin block copolymer film is confined in the vicinity of a square-shaped particle by a homopolymer melt, which is equivalent to a poor solvent. Multiple sequences of square-shaped particle-induced copolymer aggregates with different shapes and self-assembled internal morphologies are predicted as functions of the particle size, the structural portion of the copolymer, and the volume fraction of the copolymer. A rich variety of aggregates are found with complex internal self-assembled morphologies, including complex structures of the vesicle, with one or several inverted micelles surrounded by the outer monolayer and with the particle confined in the core. These results demonstrate that the assemblies of diblock copolymers formed around the square-shaped particle in poor solvents are of immediate interest to the assembly of copolymers and the morphology of biomembranes in the confined environment, as well as to the transitions of vesicles to micelles. (paper)

  19. Estimasi Model Seemingly Unrelated Regression (SUR) dengan Metode Generalized Least Square (GLS)

    Directory of Open Access Journals (Sweden)

    Ade Widyaningsih

    2015-04-01

    Full Text Available Regression analysis is a statistical tool that is used to determine the relationship between two or more quantitative variables so that one variable can be predicted from the other variables. A method that can be used to obtain a good estimate in regression analysis is the ordinary least squares method. The least squares method is used to estimate the parameters of one or more regression equations, but relationships among the errors of the different equations are not allowed for. One way to overcome this problem is the Seemingly Unrelated Regression (SUR) model, in which the parameters are estimated using Generalized Least Squares (GLS). In this study, the author applies the SUR model with the GLS method to world gasoline demand data. The author finds that SUR using GLS is better than OLS because SUR produces smaller errors than OLS.
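
    A minimal NumPy sketch of the feasible GLS estimator for a two-equation SUR system: equation-by-equation OLS first, then a stacked GLS step that uses the estimated cross-equation error covariance. The synthetic data and error covariance are assumptions for illustration, not the gasoline demand data of the study.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200

# Two regression equations whose errors are correlated across equations (SUR setting).
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
err = rng.multivariate_normal([0, 0], [[1.0, 0.7], [0.7, 1.5]], size=n)
y1 = 1.0 + 2.0 * x1 + err[:, 0]
y2 = -0.5 + 1.5 * x2 + err[:, 1]

X1 = np.column_stack([np.ones(n), x1])
X2 = np.column_stack([np.ones(n), x2])

# Stage 1: equation-by-equation OLS to estimate the cross-equation error covariance.
b1 = np.linalg.lstsq(X1, y1, rcond=None)[0]
b2 = np.linalg.lstsq(X2, y2, rcond=None)[0]
R = np.column_stack([y1 - X1 @ b1, y2 - X2 @ b2])
Sigma = R.T @ R / n

# Stage 2: stacked (feasible) GLS with Omega = Sigma kron I_n.
X = np.block([[X1, np.zeros_like(X2)], [np.zeros_like(X1), X2]])
y = np.concatenate([y1, y2])
Omega_inv = np.kron(np.linalg.inv(Sigma), np.eye(n))
beta_gls = np.linalg.solve(X.T @ Omega_inv @ X, X.T @ Omega_inv @ y)
print("SUR/GLS estimates (eq.1 intercept, slope, eq.2 intercept, slope):", beta_gls.round(3))
```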

  1. Hourly cooling load forecasting using time-indexed ARX models with two-stage weighted least squares regression

    International Nuclear Information System (INIS)

    Guo, Yin; Nazarian, Ehsan; Ko, Jeonghan; Rajurkar, Kamlakar

    2014-01-01

    Highlights: • Developed hourly-indexed ARX models for robust cooling-load forecasting. • Proposed a two-stage weighted least-squares regression approach. • Considered the effect of outliers as well as trend of cooling load and weather patterns. • Included higher order terms and day type patterns in the forecasting models. • Demonstrated better accuracy compared with some ARX and ANN models. - Abstract: This paper presents a robust hourly cooling-load forecasting method based on time-indexed autoregressive with exogenous inputs (ARX) models, in which the coefficients are estimated through a two-stage weighted least squares regression. The prediction method includes a combination of two separate time-indexed ARX models to improve prediction accuracy of the cooling load over different forecasting periods. The two-stage weighted least-squares regression approach in this study is robust to outliers and suitable for fast and adaptive coefficient estimation. The proposed method is tested on a large-scale central cooling system in an academic institution. The numerical case studies show the proposed prediction method performs better than some ANN and ARX forecasting models for the given test data set
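
    The following sketch conveys the flavour of a two-stage weighted least squares fit of an hour-indexed ARX model; the synthetic load and temperature series, the Huber-type reweighting rule, and the choice of regressors are illustrative assumptions rather than the paper's exact estimator.

```python
import numpy as np

rng = np.random.default_rng(4)
hours = np.arange(24 * 60)                       # 60 days of hourly samples (synthetic)
temp = 25 + 8 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 1, hours.size)
load = 200 + 6 * temp + rng.normal(0, 5, hours.size)
load[rng.choice(hours.size, 15, replace=False)] += 80     # a few outliers

# ARX regressors: lagged load, current temperature, and hour-of-day indicators.
y = load[1:]
hour_dummies = np.eye(24)[hours[1:] % 24]
X = np.column_stack([load[:-1], temp[1:], hour_dummies])

# Stage 1: ordinary least squares.
b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ b_ols

# Stage 2: weights that shrink the influence of large residuals (outliers).
scale = np.median(np.abs(resid)) / 0.6745
w = np.minimum(1.0, 1.345 * scale / (np.abs(resid) + 1e-12))   # Huber-type weights
sw = np.sqrt(w)
b_wls = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
print("OLS (lag, temp):        ", b_ols[:2].round(3))
print("two-stage WLS (lag, temp):", b_wls[:2].round(3))
```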

  2. The Versatile Magic Square.

    Science.gov (United States)

    Watson, Gale A.

    2003-01-01

    Demonstrates the transformations that are possible to construct a variety of magic squares, including modifications to challenge students from elementary grades through algebra. Presents an example of using magic squares with students who have special needs. (YDS)

  3. Predicting sample size required for classification performance

    Directory of Open Access Journals (Sweden)

    Figueroa Rosa L

    2012-02-01

    Full Text Available Abstract Background Supervised learning methods need annotated data in order to generate efficient models. Annotated data, however, is a relatively scarce resource and can be expensive to obtain. For both passive and active learning methods, there is a need to estimate the size of the annotated sample required to reach a performance target. Methods We designed and implemented a method that fits an inverse power law model to points of a given learning curve created using a small annotated training set. Fitting is carried out using nonlinear weighted least squares optimization. The fitted model is then used to predict the classifier's performance and confidence interval for larger sample sizes. For evaluation, the nonlinear weighted curve fitting method was applied to a set of learning curves generated using clinical text and waveform classification tasks with active and passive sampling methods, and predictions were validated using standard goodness of fit measures. As control we used an un-weighted fitting method. Results A total of 568 models were fitted and the model predictions were compared with the observed performances. Depending on the data set and sampling method, it took between 80 to 560 annotated samples to achieve mean average and root mean squared error below 0.01. Results also show that our weighted fitting method outperformed the baseline un-weighted method (p Conclusions This paper describes a simple and effective sample size prediction algorithm that conducts weighted fitting of learning curves. The algorithm outperformed an un-weighted algorithm described in previous literature. It can help researchers determine annotation sample size for supervised machine learning.
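
    A small sketch of the core fitting step, assuming an inverse power law of the form a - b·n^(-c) and weights that favour the larger training sets; the synthetic learning-curve points and the weighting rule are illustrative, not the paper's data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Observed learning-curve points: training-set size vs. classifier accuracy (synthetic).
sizes = np.array([50, 100, 150, 200, 300, 400, 600])
acc = np.array([0.710, 0.780, 0.810, 0.830, 0.855, 0.868, 0.882])

def inv_power_law(n, a, b, c):
    # Accuracy approaches the asymptote a as a - b * n**(-c).
    return a - b * np.power(n, -c)

# Weighted fit: points from larger samples (lower variance) get smaller sigma.
sigma = 1.0 / np.sqrt(sizes)
popt, pcov = curve_fit(inv_power_law, sizes, acc, p0=[0.9, 1.0, 0.5], sigma=sigma)

a, b, c = popt
print(f"fitted asymptote a = {a:.3f}")
print(f"predicted accuracy at n = 2000: {inv_power_law(2000, *popt):.3f}")
```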

  4. Estimating the Acquisition Price of Enshi Yulu Young Tea Shoots Using Near-Infrared Spectroscopy by the Back Propagation Artificial Neural Network Model in Conjunction with Backward Interval Partial Least Squares Algorithm

    Science.gov (United States)

    Wang, Sh.-P.; Gong, Z.-M.; Su, X.-Zh.; Liao, J.-Zh.

    2017-09-01

    Near infrared spectroscopy and the back propagation artificial neural network model in conjunction with the backward interval partial least squares algorithm were used to estimate the purchasing price of Enshi yulu young tea shoots. The near-infrared spectral regions most relevant to the tea shoot price model (5700.5-5935.8, 7613.6-7848.9, 8091.8-8327.1, 8331-8566.2, 9287.5-9522.5, and 9526.6-9761.9 cm⁻¹) were selected using the backward interval partial least squares algorithm. The first five principal components, which explained 99.96% of the variability in the selected spectral data, were then used to calibrate the back propagation artificial neural network model for the tea shoot purchasing price. The performance of this model (coefficient of determination for prediction 0.9724; root-mean-square error of prediction 4.727) was superior to those of the back propagation artificial neural network model (coefficient of determination for prediction 0.8653, root-mean-square error of prediction 5.125) and the backward interval partial least squares model (coefficient of determination for prediction 0.5932, root-mean-square error of prediction 25.125). The acquisition price model with the combined backward interval partial least squares-back propagation artificial neural network algorithms can evaluate the price of Enshi yulu tea shoots accurately, quickly and objectively.

  5. A predictive model of chemical flooding for enhanced oil recovery purposes: Application of least square support vector machine

    Directory of Open Access Journals (Sweden)

    Mohammad Ali Ahmadi

    2016-06-01

    Full Text Available Applying chemical flooding in petroleum reservoirs has become an interesting subject of recent research. Development strategies for this method are more robust and precise when they consider both the economic point of view (net present value, NPV) and the technical point of view (recovery factor, RF). In the present study, considerable effort is made to propose a predictive model for specifying the efficiency of chemical flooding in oil reservoirs. To reach this goal, the type of support vector machine developed by Suykens and Vandewalle was employed. Also, highly precise chemical flooding data banks reported in previous works were employed to test and validate the proposed support vector machine model. According to the mean square error (MSE), correlation coefficient and average absolute relative deviation, the suggested LSSVM model has acceptable reliability, integrity and robustness. Thus, the proposed intelligence-based model can be considered as an alternative model to monitor the efficiency of chemical flooding in oil reservoirs when the required experimental data are not available or accessible.
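
    For illustration, the sketch below solves an LSSVM regression in the usual closed form of the Suykens-Vandewalle formulation, where training reduces to one symmetric linear system in the dual variables; the RBF kernel, the hyperparameters, and the toy stand-in for a chemical-flooding response are assumptions.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    """LSSVM regression: solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]                      # bias b, dual coefficients alpha

def lssvm_predict(X_train, b, alpha, X_new, sigma=1.0):
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b

# Toy stand-in for a chemical-flooding response surface (synthetic, illustrative).
rng = np.random.default_rng(5)
X = rng.uniform(0, 1, (80, 3))                  # e.g. slug size, concentration, salinity
y = 0.3 + 0.4 * X[:, 0] - 0.2 * X[:, 1] ** 2 + 0.1 * np.sin(3 * X[:, 2]) + rng.normal(0, 0.01, 80)

b, alpha = lssvm_fit(X, y)
pred = lssvm_predict(X, b, alpha, X)
print("training MSE:", round(float(np.mean((pred - y) ** 2)), 6))
```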

  6. Helicopter Rotor Noise Prediction: Background, Current Status, and Future Direction

    Science.gov (United States)

    Brentner, Kenneth S.

    1997-01-01

    Helicopter noise prediction is increasingly important. The purpose of this viewgraph presentation is to: 1) Put into perspective the recent progress; 2) Outline current prediction capabilities; 3) Forecast direction of future prediction research; 4) Identify rotorcraft noise prediction needs. The presentation includes an historical perspective, a description of governing equations, and the current status of source noise prediction.

  7. Predicting blood β-hydroxybutyrate using milk Fourier transform infrared spectrum, milk composition, and producer-reported variables with multiple linear regression, partial least squares regression, and artificial neural network.

    Science.gov (United States)

    Pralle, R S; Weigel, K W; White, H M

    2018-05-01

    Prediction of postpartum hyperketonemia (HYK) using Fourier transform infrared (FTIR) spectrometry analysis could be a practical diagnostic option for farms because these data are now available from routine milk analysis during Dairy Herd Improvement testing. The objectives of this study were to (1) develop and evaluate blood β-hydroxybutyrate (BHB) prediction models using multivariate linear regression (MLR), partial least squares regression (PLS), and artificial neural network (ANN) methods and (2) evaluate whether milk FTIR spectrum (mFTIR)-based models are improved with the inclusion of test-day variables (mTest; milk composition and producer-reported data). Paired blood and milk samples were collected from multiparous cows 5 to 18 d postpartum at 3 Wisconsin farms (3,629 observations from 1,013 cows). Blood BHB concentration was determined by a Precision Xtra meter (Abbot Diabetes Care, Alameda, CA), and milk samples were analyzed by a privately owned laboratory (AgSource, Menomonie, WI) for components and FTIR spectrum absorbance. Producer-recorded variables were extracted from farm management software. A blood BHB ≥1.2 mmol/L was considered HYK. The data set was divided into a training set (n = 3,020) and an external testing set (n = 609). Model fitting was implemented with JMP 12 (SAS Institute, Cary, NC). A 5-fold cross-validation was performed on the training data set for the MLR, PLS, and ANN prediction methods, with square root of blood BHB as the dependent variable. Each method was fitted using 3 combinations of variables: mFTIR, mTest, or mTest + mFTIR variables. Models were evaluated based on coefficient of determination, root mean squared error, and area under the receiver operating characteristic curve. Four models (PLS-mTest + mFTIR, ANN-mFTIR, ANN-mTest, and ANN-mTest + mFTIR) were chosen for further evaluation in the testing set after fitting to the full training set. In the cross-validation analysis, model fit was greatest for ANN, followed

  8. Petroleomics by electrospray ionization FT-ICR mass spectrometry coupled to partial least squares with variable selection methods: prediction of the total acid number of crude oils.

    Science.gov (United States)

    Terra, Luciana A; Filgueiras, Paulo R; Tose, Lílian V; Romão, Wanderson; de Souza, Douglas D; de Castro, Eustáquio V R; de Oliveira, Mirela S L; Dias, Júlio C M; Poppi, Ronei J

    2014-10-07

    Negative-ion mode electrospray ionization, ESI(-), with Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR MS) was coupled to a Partial Least Squares (PLS) regression and variable selection methods to estimate the total acid number (TAN) of Brazilian crude oil samples. Generally, ESI(-)-FT-ICR mass spectra present a power of resolution of ca. 500,000 and a mass accuracy less than 1 ppm, producing a data matrix containing over 5700 variables per sample. These variables correspond to heteroatom-containing species detected as deprotonated molecules, [M - H](-) ions, which are identified primarily as naphthenic acids, phenols and carbazole analog species. The TAN values for all samples ranged from 0.06 to 3.61 mg of KOH g(-1). To facilitate the spectral interpretation, three methods of variable selection were studied: variable importance in the projection (VIP), interval partial least squares (iPLS) and elimination of uninformative variables (UVE). The UVE method seems to be more appropriate for selecting important variables, reducing the dimension of the variables to 183 and producing a root mean square error of prediction of 0.32 mg of KOH g(-1). By reducing the size of the data, it was possible to relate the selected variables with their corresponding molecular formulas, thus identifying the main chemical species responsible for the TAN values.
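
    Of the three selection schemes mentioned, the VIP criterion is the simplest to sketch; the code below computes VIP scores from a fitted scikit-learn PLS model on a synthetic wide matrix and keeps variables with VIP > 1. The synthetic data and the threshold are illustrative choices, and this is not the UVE procedure that the paper found most appropriate.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def vip_scores(pls):
    """Variable Importance in the Projection (VIP) for a fitted PLSRegression."""
    T = pls.x_scores_                  # (n, A) latent scores
    W = pls.x_weights_                 # (p, A) x-weights
    Q = pls.y_loadings_                # (q, A) y-loadings
    p, A = W.shape
    # Amount of y-variance explained by each latent component.
    ssy = np.array([(T[:, a] @ T[:, a]) * (Q[:, a] @ Q[:, a]) for a in range(A)])
    Wn = W / np.linalg.norm(W, axis=0, keepdims=True)
    return np.sqrt(p * (Wn ** 2 @ ssy) / ssy.sum())

# Synthetic "mass-spectral" matrix: 300 variables, only the first 10 carry signal.
rng = np.random.default_rng(6)
X = rng.normal(size=(60, 300))
y = X[:, :10] @ rng.uniform(0.5, 1.0, 10) + rng.normal(0, 0.1, 60)

pls = PLSRegression(n_components=5).fit(X, y)
vip = vip_scores(pls)
selected = np.where(vip > 1.0)[0]      # the usual VIP > 1 rule of thumb
print("variables kept by VIP > 1:", selected[:20])
```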

  9. Grey-Markov prediction model based on background value optimization and central-point triangular whitenization weight function

    Science.gov (United States)

    Ye, Jing; Dang, Yaoguo; Li, Bingjun

    2018-01-01

    The Grey-Markov forecasting model is a combination of a grey prediction model and a Markov chain, and it shows clear advantages for data sequences that are non-stationary and volatile. However, the state division process in the traditional Grey-Markov forecasting model is mostly based on subjective real numbers, which directly affects the accuracy of the forecast values. To address this, this paper introduces the central-point triangular whitenization weight function into the state division to calculate the possibility of the research values lying in each state, which reflects the preference degrees of the different states in an objective way. In addition, background value optimization is applied in the traditional grey model to generate better-fitting data. By these means, the improved Grey-Markov forecasting model is built. Finally, taking the grain production in Henan Province as an example, the model's validity is verified by comparing it with the GM(1,1) model based on background value optimization and with the traditional Grey-Markov forecasting model.
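
    The grey part of such a model reduces to a few lines; the sketch below implements the classical GM(1,1) forecast with a tunable background-value coefficient (0.5 is the traditional choice), while the Markov state division and the central-point triangular whitenization weight function are not reproduced. The toy series is invented for illustration.

```python
import numpy as np

def gm11_forecast(x0, steps=3, alpha=0.5):
    """GM(1,1) forecast; alpha sets the background value z(k) = alpha*x1(k) + (1-alpha)*x1(k-1)."""
    n = len(x0)
    x1 = np.cumsum(x0)                                   # 1-AGO accumulation
    z = alpha * x1[1:] + (1 - alpha) * x1[:-1]           # background values
    B = np.column_stack([-z, np.ones(n - 1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]     # developing coefficient a, grey input b
    k = np.arange(n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.concatenate([[x1_hat[0]], np.diff(x1_hat)])
    return x0_hat[n:]                                    # out-of-sample forecasts

# Toy annual production-style series (synthetic numbers, for illustration only).
x0 = np.array([512.0, 534.0, 560.0, 571.0, 602.0, 631.0])
print("GM(1,1), alpha = 0.5:", gm11_forecast(x0).round(1))
print("GM(1,1), alpha = 0.6:", gm11_forecast(x0, alpha=0.6).round(1))
```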

  10. Applications of square-related theorems

    Science.gov (United States)

    Srinivasan, V. K.

    2014-04-01

    The square centre of a given square is the point of intersection of its two diagonals. When two squares of different side lengths share the same square centre, there are in general four diagonals that go through the same square centre. The Two Squares Theorem developed in this paper summarizes some nice theoretical conclusions that can be obtained when two squares of different side lengths share the same square centre. These results provide the theoretical basis for two of the constructions given in the book of H.S. Hall and F.H. Stevens, 'A Shorter School Geometry, Part 1, Metric Edition'. On page 134 of this book, the authors present, in exercise 4, a practical construction which leads to a verification of the Pythagorean theorem. Subsequently, in Theorems 29 and 30, the authors present the standard proofs of the Pythagorean theorem and its converse. On page 140, the authors present, in exercise 15, what amounts to a geometric construction, whose verification involves a simple algebraic identity. Both constructions are of great importance and can be replicated by using the standard equipment provided in a 'geometry toolbox' carried by students in high schools. The author hopes that the results proved in this paper, in conjunction with the two constructions from the above-mentioned book, will provide high school students with an appreciation of the celebrated theorem of Pythagoras. The diagrams that accompany this document are based on the free software GeoGebra. The author formally acknowledges his indebtedness to the creators of this free software at the end of this document.

  11. Detection of the power spectrum of cosmic microwave background lensing by the Atacama Cosmology Telescope.

    Science.gov (United States)

    Das, Sudeep; Sherwin, Blake D; Aguirre, Paula; Appel, John W; Bond, J Richard; Carvalho, C Sofia; Devlin, Mark J; Dunkley, Joanna; Dünner, Rolando; Essinger-Hileman, Thomas; Fowler, Joseph W; Hajian, Amir; Halpern, Mark; Hasselfield, Matthew; Hincks, Adam D; Hlozek, Renée; Huffenberger, Kevin M; Hughes, John P; Irwin, Kent D; Klein, Jeff; Kosowsky, Arthur; Lupton, Robert H; Marriage, Tobias A; Marsden, Danica; Menanteau, Felipe; Moodley, Kavilan; Niemack, Michael D; Nolta, Michael R; Page, Lyman A; Parker, Lucas; Reese, Erik D; Schmitt, Benjamin L; Sehgal, Neelima; Sievers, Jon; Spergel, David N; Staggs, Suzanne T; Swetz, Daniel S; Switzer, Eric R; Thornton, Robert; Visnjic, Katerina; Wollack, Ed

    2011-07-08

    We report the first detection of the gravitational lensing of the cosmic microwave background through a measurement of the four-point correlation function in the temperature maps made by the Atacama Cosmology Telescope. We verify our detection by calculating the levels of potential contaminants and performing a number of null tests. The resulting convergence power spectrum at 2° angular scales measures the amplitude of matter density fluctuations on comoving length scales of around 100 Mpc at redshifts around 0.5 to 3. The measured amplitude of the signal agrees with Lambda cold dark matter cosmology predictions. Since the amplitude of the convergence power spectrum scales as the square of the amplitude of the density fluctuations, the 4σ detection of the lensing signal measures the amplitude of density fluctuations to 12%.

  12. Prediction of solar activity from solar background magnetic field variations in cycles 21-23

    International Nuclear Information System (INIS)

    Shepherd, Simon J.; Zharkov, Sergei I.; Zharkova, Valentina V.

    2014-01-01

    A comprehensive spectral analysis of both the solar background magnetic field (SBMF) in cycles 21-23 and the sunspot magnetic field in cycle 23 reported in our recent paper showed the presence of two principal components (PCs) of SBMF having opposite polarity, e.g., originating in the northern and southern hemispheres, respectively. Over a duration of one solar cycle, both waves are found to travel with an increasing phase shift toward the northern hemisphere in odd cycles 21 and 23 and to the southern hemisphere in even cycle 22. These waves were linked to solar dynamo waves assumed to form in different layers of the solar interior. In this paper, for the first time, the PCs of SBMF in cycles 21-23 are analyzed with the symbolic regression technique using Hamiltonian principles, allowing us to uncover the underlying mathematical laws governing these complex waves in the SBMF presented by PCs and to extrapolate these PCs to cycles 24-26. The PCs predicted for cycle 24 very closely fit (with an accuracy better than 98%) the PCs derived from the SBMF observations in this cycle. This approach also predicts a strong reduction of the SBMF in cycles 25 and 26 and, thus, a reduction of the resulting solar activity. This decrease is accompanied by an increasing phase shift between the two predicted PCs (magnetic waves) in cycle 25 leading to their full separation into the opposite hemispheres in cycle 26. The variations of the modulus summary of the two PCs in SBMF reveals a remarkable resemblance to the average number of sunspots in cycles 21-24 and to predictions of reduced sunspot numbers compared to cycle 24: 80% in cycle 25 and 40% in cycle 26.

  13. Gas purity analytics, calibration studies, and background predictions towards the first results of XENON1T

    Energy Technology Data Exchange (ETDEWEB)

    Hasterok, Constanze

    2017-10-25

    The XENON1T experiment aims at the direct detection of the well motivated dark matter candidate of weakly interacting massive particles (WIMPs) scattering off xenon nuclei. The first science run of 34.2 live days has already achieved the most stringent upper limit on spin-independent WIMP-nucleon cross-sections above masses of 10 GeV with a minimum of 7.7×10⁻⁴⁷ cm² at a mass of 35 GeV. Crucial for this unprecedented sensitivity are a high xenon gas purity and a good understanding of the background. In this work, a procedure is described that was developed to measure the purity of the experiment's xenon inventory of more than three tons during its initial transfer to the detector gas system. The technique of gas chromatography has been employed to analyze the noble gas for impurities with the focus on oxygen and krypton contaminations. Furthermore, studies on the calibration of the experiment's dominating background induced by natural gamma and beta radiation were performed. Hereby, the novel sources of radioactive isotopes that can be dissolved in the xenon were employed, namely ²²⁰Rn and tritium. The sources were analyzed in terms of a potential impact on the outcome of a dark matter search. As a result of the promising findings for ²²⁰Rn, the source was successfully deployed in the first science run of XENON1T. The first WIMP search of XENON1T is outlined in this thesis, in which a background component from interactions taking place in close proximity to the detector wall is identified, investigated and modeled. A background prediction was derived that was incorporated into the background model of the WIMP search which was found to be in good agreement with the observation.

  14. Graphs whose complement and square are isomorphic

    DEFF Research Database (Denmark)

    Pedersen, Anders Sune

    2014-01-01

    We study square-complementary graphs, that is, graphs whose complement and square are isomorphic. We prove several necessary conditions for a graph to be square-complementary, describe ways of building new square-complementary graphs from existing ones, construct infinite families of square-compl...
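
    With networkx the defining property can be tested directly; the snippet below checks whether the complement of a graph is isomorphic to its square, using the 7-cycle (a standard small example of a square-complementary graph) and the 5-cycle (which is not one) as test cases.

```python
import networkx as nx

def is_square_complementary(G):
    """True if the complement of G is isomorphic to the square of G."""
    return nx.is_isomorphic(nx.complement(G), nx.power(G, 2))

# C7: complement and square are both 4-regular circulants and turn out isomorphic.
print("C7:", is_square_complementary(nx.cycle_graph(7)))   # expected True
# C5: its square is the complete graph K5, while its complement is again C5.
print("C5:", is_square_complementary(nx.cycle_graph(5)))   # expected False
```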

  15. Self-diffusion of particles interacting through a square-well or square-shoulder potential

    NARCIS (Netherlands)

    Wilbertz, H.; Michels, J.; Beijeren, H. van; Leegwater, J.A.

    1988-01-01

    The diffusion coefficient and velocity autocorrelation function for a fluid of particles interacting through a square-well or square-shoulder potential are calculated from a kinetic theory similar to the Davis-Rice-Sengers theory and the results are compared to those of computer simulations. At low

  16. XAFS study of copper(II) complexes with square planar and square pyramidal coordination geometries

    Science.gov (United States)

    Gaur, A.; Klysubun, W.; Nitin Nair, N.; Shrivastava, B. D.; Prasad, J.; Srivastava, K.

    2016-08-01

    X-ray absorption fine structure of six Cu(II) complexes, Cu2(Clna)4 2H2O (1), Cu2(ac)4 2H2O (2), Cu2(phac)4 (pyz) (3), Cu2(bpy)2(na)2 H2O (ClO4) (4), Cu2(teen)4(OH)2(ClO4)2 (5) and Cu2(tmen)4(OH)2(ClO4)2 (6) (where ac, phac, pyz, bpy, na, teen, tmen = acetate, phenyl acetate, pyrazole, bipyridine, nicotinic acid, tetraethylethylenediamine, tetramethylethylenediamine, respectively), which were expected to have square pyramidal and square planar coordination geometries, has been investigated. The differences observed in the X-ray absorption near edge structure (XANES) features of the standard compounds having four, five and six coordination geometry point towards the presence of square planar and square pyramidal geometry around the Cu centre in the studied complexes. The presence of an intense pre-edge feature in the spectra of four complexes, 1-4, indicates square pyramidal coordination. Another important XANES feature, present in complexes 5 and 6, is a prominent shoulder in the rising part of the edge whose intensity decreases in the presence of axial ligands and thus indicates four coordination in these complexes. Ab initio calculations were carried out for square planar and square pyramidal Cu centres to observe the variation of 4p density of states in the presence and absence of axial ligands. To determine the number and distance of scattering atoms around the Cu centre in the complexes, EXAFS analysis has been done using the paths obtained from a Cu(II) oxide model and an axial Cu-O path from the model of a square pyramidal complex. The results obtained from the EXAFS analysis have been reported and confirm the inference drawn from the XANES features. Thus, it has been shown that these paths from the model of a standard compound can be used to determine the structural parameters of complexes having unknown structure.

  17. Does the sensorimotor system minimize prediction error or select the most likely prediction during object lifting?

    Science.gov (United States)

    McGregor, Heather R.; Pun, Henry C. H.; Buckingham, Gavin; Gribble, Paul L.

    2016-01-01

    The human sensorimotor system is routinely capable of making accurate predictions about an object's weight, which allows for energetically efficient lifts and prevents objects from being dropped. Often, however, poor predictions arise when the weight of an object can vary and sensory cues about object weight are sparse (e.g., picking up an opaque water bottle). The question arises, what strategies does the sensorimotor system use to make weight predictions when one is dealing with an object whose weight may vary? For example, does the sensorimotor system use a strategy that minimizes prediction error (minimal squared error) or one that selects the weight that is most likely to be correct (maximum a posteriori)? In this study we dissociated the predictions of these two strategies by having participants lift an object whose weight varied according to a skewed probability distribution. We found, using a small range of weight uncertainty, that four indexes of sensorimotor prediction (grip force rate, grip force, load force rate, and load force) were consistent with a feedforward strategy that minimizes the square of prediction errors. These findings match research in the visuomotor system, suggesting parallels in underlying processes. We interpret our findings within a Bayesian framework and discuss the potential benefits of using a minimal squared error strategy. NEW & NOTEWORTHY Using a novel experimental model of object lifting, we tested whether the sensorimotor system models the weight of objects by minimizing lifting errors or by selecting the statistically most likely weight. We found that the sensorimotor system minimizes the square of prediction errors for object lifting. This parallels the results of studies that investigated visually guided reaching, suggesting an overlap in the underlying mechanisms between tasks that involve different sensory systems. PMID:27760821
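
    The dissociation between the two strategies can be reproduced numerically: under a skewed weight distribution the minimal-squared-error prediction is the mean, while the maximum a posteriori prediction is the mode. The bimodal weight distribution below is invented for illustration and is not the distribution used in the study.

```python
import numpy as np

rng = np.random.default_rng(7)

# Skewed distribution of possible object weights (grams): light weights common, heavy rare.
weights = np.concatenate([rng.normal(300, 15, 8000), rng.normal(550, 20, 2000)])

# Strategy 1: minimise expected squared error -> predict the mean of the distribution.
mean_guess = weights.mean()

# Strategy 2: maximum a posteriori -> predict the most likely weight (histogram mode).
counts, edges = np.histogram(weights, bins=60)
mode_guess = 0.5 * (edges[np.argmax(counts)] + edges[np.argmax(counts) + 1])

for name, g in [("mean (min. squared error)", mean_guess), ("mode (MAP)", mode_guess)]:
    mse = np.mean((weights - g) ** 2)
    print(f"{name:26s} predicts {g:6.1f} g, expected squared error {mse:9.1f}")
```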

  18. Phase I Final Report: Ultra-Low Background Alpha Activity Counter

    International Nuclear Information System (INIS)

    Warburton, W.K.

    2005-01-01

    In certain important physics experiments that search for rare events, such as neutrino or double beta decay detections, it is critical to minimize the number of background events that arise from alpha particles emitted by the natural radioactivity in the materials used to construct the experiment. Similarly, the natural radioactivity in materials used to connect and package silicon microcircuits must also be minimized in order to eliminate "soft errors" caused by alpha particles depositing charges within the microcircuits and thereby changing their logic states. For these, and related reasons in the areas of environmental cleanup and nuclear materials tracking, there is a need that is important from commercial, scientific, and national security perspectives to develop an ultra-low background alpha counter that would be capable of measuring materials' alpha particle emissivity at rates well below 0.00001 alpha/cm²/hour. This rate, which corresponds to 24 alpha particles per square meter per day, is essentially impossible to achieve with existing commercial instruments because the natural radioactivity of the materials used to construct even the best of these counters produces background rates at the 0.005 alpha/cm²/hr level. Our company (XIA) had previously developed an instrument that uses electronic background suppression to operate at the 0.0005–0.005 alpha/cm²/hr level. This patented technology sets up an electric field between a large planar sample and a large planar anode, and fills the gap with pure Nitrogen. An alpha particle entering the chamber ionizes the Nitrogen, producing a "track" of electrons, which drift to the anode in the electric field. Tracks close to the anode take less than 10 microseconds (us) to be collected, giving a preamplifier signal with a 10 us risetime. Tracks from the sample have to drift across the full anode-sample gap and produce a 35 us risetime signal. By analyzing the preamplifier signals with a digital signal

  19. A non-iterative method for fitting decay curves with background

    International Nuclear Information System (INIS)

    Mukoyama, T.

    1982-01-01

    A non-iterative method for fitting a decay curve with background is presented. The sum of an exponential function and a constant term is linearized by the use of the difference equation and parameters are determined by the standard linear least-squares fitting. The validity of the present method has been tested against pseudo-experimental data. (orig.)
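
    A sketch of the linearisation for uniformly spaced samples, assuming the model y = A·exp(-λt) + B: successive points obey y[i+1] = r·y[i] + B·(1-r) with r = exp(-λΔt), so one linear fit recovers λ and B and a second recovers A, with no iteration. The synthetic counts and noise level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic decay curve with constant background: y = A*exp(-lam*t) + B + noise.
A_true, lam_true, B_true = 1000.0, 0.25, 50.0
t = np.arange(0, 40, 1.0)                       # uniform spacing dt = 1
y = A_true * np.exp(-lam_true * t) + B_true + rng.normal(0, 5, t.size)

# Difference-equation linearisation: y[i+1] = r*y[i] + B*(1-r), with r = exp(-lam*dt).
dt = t[1] - t[0]
X = np.column_stack([y[:-1], np.ones(t.size - 1)])
(r, c), *_ = np.linalg.lstsq(X, y[1:], rcond=None)

lam = -np.log(r) / dt
B = c / (1.0 - r)
# With lam and B fixed, A follows from one more linear least-squares step.
e = np.exp(-lam * t)
A = e @ (y - B) / (e @ e)
print(f"lambda = {lam:.3f} (true {lam_true}), A = {A:.0f} (true {A_true:.0f}), B = {B:.1f} (true {B_true})")
```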

  20. Application of partial least squares near-infrared spectral classification in diabetic identification

    Science.gov (United States)

    Yan, Wen-juan; Yang, Ming; He, Guo-quan; Qin, Lin; Li, Gang

    2014-11-01

    In order to identify diabetic patients using the tongue's near-infrared (NIR) spectrum, a spectral classification model of the NIR reflectivity of the tongue tip is proposed, based on the partial least squares (PLS) method. 39 sample data of tongue-tip NIR spectra are harvested from healthy people and diabetic patients, respectively. After pretreatment of the reflectivity, the spectral data are set as the independent variable matrix and the classification information as the dependent variable matrix. The samples were divided into two groups, 53 samples as the calibration set and 25 as the prediction set, and PLS was then used to build the classification model. The model constructed from the 53 samples has a correlation of 0.9614 and a root mean square error of cross-validation (RMSECV) of 0.1387. The predictions for the 25 samples have a correlation of 0.9146 and an RMSECV of 0.2122. The experimental results show that the PLS method can achieve good classification of the features of healthy people and diabetic patients.

  1. A scaled Lagrangian method for performing a least squares fit of a model to plant data

    International Nuclear Information System (INIS)

    Crisp, K.E.

    1988-01-01

    Due to measurement errors, even a perfect mathematical model will not be able to match all the corresponding plant measurements simultaneously. A further discrepancy may be introduced if an un-modelled change in conditions occurs within the plant that should have required a corresponding change in model parameters - e.g. a gradual deterioration in the performance of some component(s). Taking both these factors into account, what is required is that the overall discrepancy between the model predictions and the plant data is kept to a minimum. This process is known as 'model fitting'. A method is presented for minimising any function which consists of the sum of squared terms, subject to any constraints. Its most obvious application is in the process of model fitting, where a weighted sum of squares of the differences between model predictions and plant data is the function to be minimised. When implemented within existing Central Electricity Generating Board computer models, it will perform a least squares fit of a model to plant data within a single job submission. (author)

  2. Study of the convergence behavior of the complex kernel least mean square algorithm.

    Science.gov (United States)

    Paul, Thomas K; Ogunfunmi, Tokunbo

    2013-09-01

    The complex kernel least mean square (CKLMS) algorithm was recently derived and allows for online kernel adaptive learning for complex data. Kernel adaptive methods can be used in finding solutions for neural network and machine learning applications. The derivation of CKLMS involved the development of a modified Wirtinger calculus for Hilbert spaces to obtain the cost function gradient. We analyze the convergence of the CKLMS with different kernel forms for complex data. The expressions obtained enable us to generate theory-predicted mean-square error curves considering the circularity of the complex input signals and their effect on nonlinear learning. Simulations are used for verifying the analysis results.

  3. Validation of adult height prediction based on automated bone age determination in the Paris Longitudinal Study of healthy children

    Energy Technology Data Exchange (ETDEWEB)

    Martin, David D. [Tuebingen University Children's Hospital, Tuebingen (Germany); Filderklinik, Filderstadt (Germany); Schittenhelm, Jan [Tuebingen University Children's Hospital, Tuebingen (Germany); Thodberg, Hans Henrik [Visiana, Holte (Denmark)

    2016-02-15

    An adult height prediction model based on automated determination of bone age was developed and validated in two studies from Zurich, Switzerland. Varied living conditions and genetic backgrounds might make the model less accurate. To validate the adult height prediction model on children from another geographical location. We included 51 boys and 58 girls from the Paris Longitudinal Study of children born 1953 to 1958. Radiographs were obtained once or twice a year in these children from birth to age 18. Bone age was determined using the BoneXpert method. Radiographs in children with bone age greater than 6 years were considered, in total 1,124 images. The root mean square deviation between the predicted and the observed adult height was 2.8 cm for boys in the bone age range 6-15 years and 3.1 cm for girls in the bone age range 6-13 years. The bias (the average signed difference) was zero, except for girls below bone age 12, where the predictions were 0.8 cm too low. The accuracy of the BoneXpert method in terms of root mean square error was as predicted by the model, i.e. in line with what was observed in the Zurich studies. (orig.)

  4. Validation of adult height prediction based on automated bone age determination in the Paris Longitudinal Study of healthy children

    International Nuclear Information System (INIS)

    Martin, David D.; Schittenhelm, Jan; Thodberg, Hans Henrik

    2016-01-01

    An adult height prediction model based on automated determination of bone age was developed and validated in two studies from Zurich, Switzerland. Varied living conditions and genetic backgrounds might make the model less accurate. To validate the adult height prediction model on children from another geographical location. We included 51 boys and 58 girls from the Paris Longitudinal Study of children born 1953 to 1958. Radiographs were obtained once or twice a year in these children from birth to age 18. Bone age was determined using the BoneXpert method. Radiographs in children with bone age greater than 6 years were considered, in total 1,124 images. The root mean square deviation between the predicted and the observed adult height was 2.8 cm for boys in the bone age range 6-15 years and 3.1 cm for girls in the bone age range 6-13 years. The bias (the average signed difference) was zero, except for girls below bone age 12, where the predictions were 0.8 cm too low. The accuracy of the BoneXpert method in terms of root mean square error was as predicted by the model, i.e. in line with what was observed in the Zurich studies. (orig.)

  5. An adaptive prediction and detection algorithm for multistream syndromic surveillance

    Directory of Open Access Journals (Sweden)

    Magruder Steve F

    2005-10-01

    Full Text Available Abstract Background Surveillance of Over-the-Counter pharmaceutical (OTC) sales as a potential early indicator of developing public health conditions, in particular in cases of interest to biosurveillance, has been suggested in the literature. This paper is a continuation of a previous study in which we formulated the problem of estimating clinical data from OTC sales in terms of optimal LMS linear and Finite Impulse Response (FIR) filters. In this paper we extend our results to predict clinical data multiple steps ahead using OTC sales as well as the clinical data itself. Methods The OTC data are grouped into a few categories and we predict the clinical data using a multichannel filter that encompasses all the past OTC categories as well as the past clinical data itself. The prediction is performed using FIR (finite impulse response) filters and the recursive least squares method in order to adapt rapidly to nonstationary behaviour. In addition, we inject simulated events in both clinical and OTC data streams to evaluate the predictions by computing the Receiver Operating Characteristic curves of a threshold detector based on predicted outputs. Results We present all prediction results showing the effectiveness of the combined filtering operation. In addition, we compute and present the performance of a detector using the prediction output. Conclusion Multichannel adaptive FIR least squares filtering provides a viable method of predicting public health conditions, as represented by clinical data, from OTC sales, and/or the clinical data. The potential value to a biosurveillance system cannot, however, be determined without studying this approach in the presence of transient events (nonstationary events of relatively short duration and fast rise times). Our simulated events superimposed on actual OTC and clinical data allow us to provide an upper bound on that potential value under some restricted conditions. Based on our ROC curves we argue that a
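
    A compact sketch of the adaptive core, assuming synthetic data in which an OTC-like channel leads a clinical-like channel by two days: a two-channel FIR predictor whose taps are updated by recursive least squares with a forgetting factor. The filter order, forgetting factor, and data are illustrative choices, not the study's.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 500

# Synthetic daily series: "OTC sales" lead the "clinical" series by two days.
otc = 100 + 10 * np.sin(2 * np.pi * np.arange(n) / 7) + rng.normal(0, 2, n)
clinical = 20 + 0.3 * np.roll(otc, 2) + rng.normal(0, 1, n)

order = 5                                    # FIR taps per channel
dim = 2 * order
w = np.zeros(dim)
P = np.eye(dim) * 1000.0                     # inverse correlation matrix estimate
lam = 0.98                                   # forgetting factor (tracks nonstationarity)

errors = []
for k in range(order, n - 1):
    # Regressor: the most recent `order` samples of each channel.
    u = np.concatenate([clinical[k - order + 1:k + 1][::-1], otc[k - order + 1:k + 1][::-1]])
    d = clinical[k + 1]                      # one-step-ahead target
    e = d - w @ u
    g = P @ u / (lam + u @ P @ u)            # RLS gain
    w = w + g * e
    P = (P - np.outer(g, u @ P)) / lam
    errors.append(e)

rms = float(np.sqrt(np.mean(np.square(errors[-100:]))))
print("RMS one-step prediction error over the last 100 days:", round(rms, 3))
```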

  6. 36 CFR 910.67 - Square guidelines.

    Science.gov (United States)

    2010-07-01

    36 CFR § 910.67 (Parks, Forests, and Public Property, 2010-07-01), Guidelines and Uniform Standards for Urban Planning and Design of Development Within the Pennsylvania Avenue Development Area, Glossary of Terms: Square guidelines. Square Guidelines establish the Corporation's...

  7. A Note on Magic Squares

    Science.gov (United States)

    Williams, Horace E.

    1974-01-01

    A method for generating 3x3 magic squares is developed. A series of questions relating to these magic squares is posed. An investigation using matrix methods is suggested with some questions for consideration. (LS)
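
    One standard way to generate odd-order magic squares (not necessarily the method developed in the article) is the Siamese rule, sketched below; for n = 3 it produces the familiar square with magic constant 15.

```python
import numpy as np

def siamese_magic_square(n):
    """Odd-order magic square via the classical Siamese (De la Loubere) rule."""
    if n % 2 == 0:
        raise ValueError("this construction needs an odd order")
    square = np.zeros((n, n), dtype=int)
    r, c = 0, n // 2                       # start in the middle of the top row
    for k in range(1, n * n + 1):
        square[r, c] = k
        nr, nc = (r - 1) % n, (c + 1) % n  # move up and to the right, wrapping around
        if square[nr, nc]:                 # cell occupied: drop one row instead
            nr, nc = (r + 1) % n, c
        r, c = nr, nc
    return square

m = siamese_magic_square(3)
print(m)
print("row sums:", m.sum(axis=1), "column sums:", m.sum(axis=0),
      "diagonals:", np.trace(m), np.trace(np.fliplr(m)))
```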

  8. Democracy Squared

    DEFF Research Database (Denmark)

    Rose, Jeremy; Sæbø, Øystein

    2005-01-01

    On-line political communities, such as the Norwegian site Demokratitorget (Democracy Square), are often designed according to a set of un-reflected assumptions about the political interests of their potential members. In political science, democracy is not taken as given in this way, but can...... be represented by different models which characterize different relationships between politicians and the citizens they represent. This paper uses quantitative and qualitative content analysis to analyze the communication mediated by the Democracy Square discussion forum in the first ten months of its life......-Republican model. In the qualitative analysis the discourse is analysed as repeating genres – patterns in the communication form which also reflect the conflict of interest between citizens and politicians. Though the analysis gives insight into the nature of the discourse the site supports, little is known about...

  9. Spectral/hp least-squares finite element formulation for the Navier-Stokes equations

    International Nuclear Information System (INIS)

    Pontaza, J.P.; Reddy, J.N.

    2003-01-01

    We consider the application of least-squares finite element models combined with spectral/hp methods for the numerical solution of viscous flow problems. The paper presents the formulation, validation, and application of a spectral/hp algorithm to the numerical solution of the Navier-Stokes equations governing two- and three-dimensional stationary incompressible and low-speed compressible flows. The Navier-Stokes equations are expressed as an equivalent set of first-order equations by introducing vorticity or velocity gradients as additional independent variables and the least-squares method is used to develop the finite element model. High-order element expansions are used to construct the discrete model. The discrete model thus obtained is linearized by Newton's method, resulting in a linear system of equations with a symmetric positive definite coefficient matrix that is solved in a fully coupled manner by a preconditioned conjugate gradient method. Spectral convergence of the L² least-squares functional and L² error norms is verified using smooth solutions to the two-dimensional stationary Poisson and incompressible Navier-Stokes equations. Numerical results for flow over a backward-facing step, steady flow past a circular cylinder, three-dimensional lid-driven cavity flow, and compressible buoyant flow inside a square enclosure are presented to demonstrate the predictive capability and robustness of the proposed formulation.

  10. A Hybrid Least Square Support Vector Machine Model with Parameters Optimization for Stock Forecasting

    Directory of Open Access Journals (Sweden)

    Jian Chai

    2015-01-01

    Full Text Available This paper proposes an EMD-LSSVM (empirical mode decomposition least squares support vector machine) model to analyze the CSI 300 index. A WD-LSSVM (wavelet denoising least squares support vector machine) is also proposed as a benchmark to compare with the performance of EMD-LSSVM. Since parameter selection is vital to the performance of the model, different optimization methods are used, including simplex, GS (grid search), PSO (particle swarm optimization), and GA (genetic algorithm). Experimental results show that the EMD-LSSVM model with the GS algorithm outperforms the other methods in predicting stock market movement direction.

  11. An improved partial least-squares regression method for Raman spectroscopy

    Science.gov (United States)

    Momenpour Tehran Monfared, Ali; Anis, Hanan

    2017-10-01

    It is known that the performance of partial least-squares (PLS) regression analysis can be improved using the backward variable selection method (BVSPLS). In this paper, we further improve the BVSPLS based on a novel selection mechanism. The proposed method is based on sorting the weighted regression coefficients, and then the importance of each variable in the sorted list is evaluated using the root mean square error of prediction (RMSEP) criterion in each iteration step. Our Improved BVSPLS (IBVSPLS) method has been applied to leukemia and heparin data sets and led to an improvement in the limit of detection of Raman biosensing ranging from 10% to 43% compared to PLS. Our IBVSPLS was also compared to the jack-knifing (simpler) and Genetic Algorithm (more complex) methods. Our method was consistently better than the jack-knifing method and showed either a similar or a better performance compared to the genetic algorithm.
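
    The backward-selection idea can be sketched with scikit-learn: repeatedly drop the variable with the smallest absolute PLS regression coefficient and keep the subset with the lowest cross-validated RMSEP. The ranking rule, the synthetic data, and the stopping point are simplifications of the IBVSPLS procedure described above.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def rmsep(X, y, n_components=4):
    pred = cross_val_predict(PLSRegression(n_components), X, y, cv=5).ravel()
    return float(np.sqrt(np.mean((pred - y) ** 2)))

rng = np.random.default_rng(10)
X = rng.normal(size=(80, 40))                 # synthetic "spectra"
y = X[:, :6] @ np.array([2.0, 1.5, 1.2, 1.0, 0.8, 0.6]) + rng.normal(0, 0.3, 80)

keep = np.arange(X.shape[1])
best_keep, best_err = keep.copy(), rmsep(X, y)

# Backward elimination driven by the magnitude of the PLS regression coefficients.
while keep.size > 5:
    pls = PLSRegression(n_components=4).fit(X[:, keep], y)
    keep = np.delete(keep, np.argmin(np.abs(np.ravel(pls.coef_))))
    err = rmsep(X[:, keep], y)
    if err < best_err:
        best_keep, best_err = keep.copy(), err

print(f"best RMSEP {best_err:.3f} with {best_keep.size} variables:", np.sort(best_keep)[:12])
```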

  12. 21-cm lensing and the cold spot in the cosmic microwave background.

    Science.gov (United States)

    Kovetz, Ely D; Kamionkowski, Marc

    2013-04-26

    An extremely large void and a cosmic texture are two possible explanations for the cold spot seen in the cosmic microwave background. We investigate how well these two hypotheses can be tested with weak lensing of 21-cm fluctuations from the epoch of reionization measured with the Square Kilometer Array. While the void explanation for the cold spot can be tested with Square Kilometer Array, given enough observation time, the texture scenario requires significantly prolonged observations, at the highest frequencies that correspond to the epoch of reionization, over the field of view containing the cold spot.

  13. Critical-thinking ability in respiratory care students and its correlation with age, educational background, and performance on national board examinations.

    Science.gov (United States)

    Wettstein, Richard B; Wilkins, Robert L; Gardner, Donna D; Restrepo, Ruben D

    2011-03-01

    Critical thinking is an important characteristic to develop in respiratory care students. We used the short-form Watson-Glaser Critical Thinking Appraisal instrument to measure critical-thinking ability in 55 senior respiratory care students in a baccalaureate respiratory care program. We calculated the Pearson correlation coefficient to assess the relationships between critical-thinking score, age, and student performance on the clinical-simulation component of the national respiratory care boards examination. We used chi-square analysis to assess the association between critical-thinking score and educational background. There was no significant relationship between critical-thinking score and age, or between critical-thinking score and student performance on the clinical-simulation component. There was a significant (P = .04) positive association between a strong science-course background and critical-thinking score, which might be useful in predicting a student's ability to perform in areas where critical thinking is of paramount importance, such as clinical competencies, and to guide candidate-selection for respiratory care programs.

  14. Cosmic gamma-ray background from dark matter annihilation

    International Nuclear Information System (INIS)

    Ando, Shin'ichiro

    2007-01-01

    High-energy photons from pair annihilation of dark matter particles contribute to the cosmic gamma-ray background (CGB) observed in a wide energy range. The precise shape of the energy spectrum of CGB depends on the nature of dark matter particles. In order to discriminate between the signals from dark matter annihilation and other astrophysical sources, however, the information from the energy spectrum of CGB may not be sufficient. We show that dark matter annihilation not only contributes to the mean CGB intensity, but also produces a characteristic anisotropy, which provides a powerful tool for testing the origins of the observed CGB. We show that the expected sensitivity of future gamma-ray detectors such as GLAST should allow us to measure the angular power spectrum of CGB anisotropy, if dark matter particles are supersymmetric neutralinos and they account for most of the observed mean intensity. As the intensity of photons from annihilation is proportional to the density squared, we show that the predicted shape of the angular power spectrum of gamma rays from dark matter annihilation is different from that due to other astrophysical sources such as blazars, whose intensity is linearly proportional to density. Therefore, the angular power spectrum of the CGB provides a 'smoking-gun' signature of gamma rays from dark matter annihilation

  15. The Relationship of a Pilot's Educational Background, Aeronautical Experience and Recency of Experience to Performance In Initial Training at a Regional Airline

    Science.gov (United States)

    Shane, Nancy R.

    The purpose of this study was to determine how a pilot's educational background, aeronautical experience and recency of experience relate to their performance during initial training at a regional airline. Results show that variables in pilots' educational background, aeronautical experience and recency of experience do predict performance in training. The most significant predictors include years since graduation from college, multi-engine time, total time and whether or not a pilot had military flying experience. Due to the pilot shortage, the pilots entering regional airline training classes since August 2013 have varied backgrounds, aeronautical experience and recency of experience. As explained by Edward Thorndike's law of exercise and the law of recency, pilots who are actively using their aeronautical knowledge and exercising their flying skills should exhibit strong performance in those areas and pilots who have not been actively using their aeronautical knowledge and exercising their flying skills should exhibit degraded performance in those areas. Through correlation, chi-square and multiple regression analysis, this study tests this theory as it relates to performance in initial training at a regional airline.

  16. Some Theoretical Essences of Lithuania Squares Formation

    Directory of Open Access Journals (Sweden)

    Gintautas Tiškus

    2016-04-01

    In the Lithuanian acts of law and in the scientific literature there are no clear criteria or notions to define a square. Unbuilt city spaces or gaps between buildings are often labelled squares even though they have no clear limits or purpose. The article indicates the mandatory attributes of a place that can be called a square and defines the notion of a square. It addresses the theme of Lithuanian squares and analyses the differences between representation and representativeness. The article aims to show how the city's environmental context and a monument in the square influence its function. The square is an independent element of the city plan structure, but it is not an independent element of the city spatial structure. The space and the environment of the square are related not only by physical and aesthetic relations but also by causalities, which may be regarded as the essences of square formation. The interdisciplinary discourse analysis method is applied in the article.

  17. Latin Squares

    Indian Academy of Sciences (India)

    Admin

    2012-09-07

    Sep 7, 2012 ... must first talk of permutations and Latin squares. A permutation of a finite set of objects is a linear arrangement of ... with a special element 1 ... Of course, this has ... tion method to disprove Euler's conjecture for infinitely.

  18. The background in the experiment Gerda

    Science.gov (United States)

    Agostini, M.; Allardt, M.; Andreotti, E.; Bakalyarov, A. M.; Balata, M.; Barabanov, I.; Barnabé Heider, M.; Barros, N.; Baudis, L.; Bauer, C.; Becerici-Schmidt, N.; Bellotti, E.; Belogurov, S.; Belyaev, S. T.; Benato, G.; Bettini, A.; Bezrukov, L.; Bode, T.; Brudanin, V.; Brugnera, R.; Budjáš, D.; Caldwell, A.; Cattadori, C.; Chernogorov, A.; Cossavella, F.; Demidova, E. V.; Domula, A.; Egorov, V.; Falkenstein, R.; Ferella, A.; Freund, K.; Frodyma, N.; Gangapshev, A.; Garfagnini, A.; Gotti, C.; Grabmayr, P.; Gurentsov, V.; Gusev, K.; Guthikonda, K. K.; Hampel, W.; Hegai, A.; Heisel, M.; Hemmer, S.; Heusser, G.; Hofmann, W.; Hult, M.; Inzhechik, L. V.; Ioannucci, L.; Csáthy, J. Janicskó; Jochum, J.; Junker, M.; Kihm, T.; Kirpichnikov, I. V.; Kirsch, A.; Klimenko, A.; Knöpfle, K. T.; Kochetov, O.; Kornoukhov, V. N.; Kuzminov, V. V.; Laubenstein, M.; Lazzaro, A.; Lebedev, V. I.; Lehnert, B.; Liao, H. Y.; Lindner, M.; Lippi, I.; Liu, X.; Lubashevskiy, A.; Lubsandorzhiev, B.; Lutter, G.; Macolino, C.; Machado, A. A.; Majorovits, B.; Maneschg, W.; Nemchenok, I.; Nisi, S.; O'Shaughnessy, C.; Palioselitis, D.; Pandola, L.; Pelczar, K.; Pessina, G.; Pullia, A.; Riboldi, S.; Sada, C.; Salathe, M.; Schmitt, C.; Schreiner, J.; Schulz, O.; Schwingenheuer, B.; Schönert, S.; Shevchik, E.; Shirchenko, M.; Simgen, H.; Smolnikov, A.; Stanco, L.; Strecker, H.; Tarka, M.; Ur, C. A.; Vasenko, A. A.; Volynets, O.; von Sturm, K.; Wagner, V.; Walter, M.; Wegmann, A.; Wester, T.; Wojcik, M.; Yanovich, E.; Zavarise, P.; Zhitnikov, I.; Zhukov, S. V.; Zinatulina, D.; Zuber, K.; Zuzel, G.

    2014-04-01

    The GERmanium Detector Array (Gerda) experiment at the Gran Sasso underground laboratory (LNGS) of INFN is searching for neutrinoless double beta (0νββ) decay of 76Ge. The signature of the signal is a monoenergetic peak at 2039 keV, the Q-value (Q_ββ) of the decay. To avoid bias in the signal search, the present analysis does not consider those events that fall in a 40 keV wide region centered around Q_ββ. The main parameters needed for the analysis are described. A background model was developed to describe the observed energy spectrum. The model contains several contributions that are expected on the basis of material screening or that are established by the observation of characteristic structures in the energy spectrum. The model predicts a flat energy spectrum for the blinding window around Q_ββ with a background index ranging from 17.6 to 23.8 × 10⁻³ cts/(keV kg yr). A part of the data not considered before has been used to test whether the predictions of the background model are consistent. The observed number of events in this energy region is consistent with the background model. The background at Q_ββ is dominated by close sources, mainly due to 42K, 214Bi, 228Th, 60Co and alpha-emitting isotopes from the 226Ra decay chain. The individual fractions depend on the assumed locations of the contaminants. It is shown that, after removal of the known peaks, the energy spectrum can be fitted in an energy range of 200 keV around Q_ββ with a constant background. This gives a background index consistent with the full model and uncertainties of the same size.

  19. Stochastic backgrounds of gravitational waves

    International Nuclear Information System (INIS)

    Maggiore, M.

    2001-01-01

    We review the motivations for the search for stochastic backgrounds of gravitational waves and we compare the experimental sensitivities that can be reached in the near future with the existing bounds and with the theoretical predictions. (author)

  20. Evaluation of the prediction precision capability of partial least squares regression approach for analysis of high alloy steel by laser induced breakdown spectroscopy

    Science.gov (United States)

    Sarkar, Arnab; Karki, Vijay; Aggarwal, Suresh K.; Maurya, Gulab S.; Kumar, Rohit; Rai, Awadhesh K.; Mao, Xianglei; Russo, Richard E.

    2015-06-01

    Laser induced breakdown spectroscopy (LIBS) was applied for elemental characterization of high alloy steel using partial least squares regression (PLSR), with the objective of evaluating the analytical performance of this multivariate approach. The optimization of the number of principal components for minimizing the error of the PLSR algorithm was investigated. The effect of different pre-treatment procedures applied to the raw spectral data before PLSR analysis was evaluated on several statistical parameters (standard error of prediction, percentage relative error of prediction, etc.). The "NORM" pre-treatment gave the optimum statistical results. The analytical performance of the PLSR model improved with an increasing number of laser pulses accumulated per spectrum as well as by truncating the spectrum to an appropriate wavelength region. It was found that the statistical benefit of truncating the spectrum can also be accomplished by increasing the number of laser pulses per accumulation without spectral truncation. The constituents Co and Mo, present at hundreds of ppm, were determined with a relative precision of 4-9% (2σ), whereas the major constituents Cr and Ni, present at a few percent levels, were determined with a relative precision of ~2% (2σ).
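    A calibration of this kind is easy to sketch with a generic PLSR implementation: normalize each spectrum (a "NORM"-style pre-treatment), choose the number of latent variables by cross-validation, and report the cross-validated root mean square error. The code below is a hedged sketch on synthetic spectra, not the paper's data or exact procedure.

    ```python
    # PLSR calibration sketch: NORM pre-treatment, component count chosen by CV.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(1)
    X = rng.random((60, 500))                        # 60 spectra x 500 wavelength channels (synthetic)
    y = 3.0 * X[:, 100] + rng.normal(0, 0.05, 60)    # pseudo concentration of one constituent

    X_norm = X / np.linalg.norm(X, axis=1, keepdims=True)   # "NORM" pre-treatment

    best_rmsecv, best_k = np.inf, None
    for k in range(1, 11):                           # scan the number of PLS components
        y_cv = cross_val_predict(PLSRegression(n_components=k), X_norm, y, cv=5).ravel()
        rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
        if rmsecv < best_rmsecv:
            best_rmsecv, best_k = rmsecv, k

    print(f"optimal number of components: {best_k}, RMSECV: {best_rmsecv:.3f}")
    ```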

  1. Extraordinary lateral beaming of sound from a square-lattice phononic crystal

    Energy Technology Data Exchange (ETDEWEB)

    Bai, Xiaoxue; Qiu, Chunyin; He, Hailong; Peng, Shasha; Ke, Manzhu [Key Laboratory of Artificial Micro- and Nano-structures of Ministry of Education and School of Physics and Technology, Wuhan University, Wuhan 430072 (China); Liu, Zhengyou, E-mail: zyliu@whu.edu.cn [Key Laboratory of Artificial Micro- and Nano-structures of Ministry of Education and School of Physics and Technology, Wuhan University, Wuhan 430072 (China); Institute for Advanced Studies, Wuhan University, Wuhan 430072 (China)

    2017-03-03

    Highlights: • An extraordinary lateral beaming phenomenon is observed in a finite phononic crystal with a square lattice. • The phenomenon can be explained by the equivalence of the states located around the four corners of the first Brillouin zone. • The lateral beaming behavior enables a simple design of acoustic beam splitters. • In some sense, the phenomenon can be described by a near-zero refractive index. - Abstract: This work revisits the sound transmission through a finite phononic crystal with a square lattice. In addition to a direct, ordinary transmission through the sample, an extraordinary lateral beaming effect is also observed. The phenomenon stems from the equivalence of the states located around the four corners of the first Brillouin zone. The experimental result agrees well with the theoretical prediction. The lateral beaming behavior enables a simple design for realizing acoustic beam splitters.

  2. Pinning of superconducting vortices in MoGe/Au thin nano-squares

    Energy Technology Data Exchange (ETDEWEB)

    Serrier-Garcia, Lise, E-mail: serriergarcia.lise@fys.kuleuven.be; Timmermans, Matias; Van de Vondel, Joris; Moshchalkov, Victor V.

    2017-02-15

    Highlights: • A scanning tunneling spectroscopy study of vortex patterns in mesoscopic superconducting squares is reported. • The impact of defects and corrugations inherently present in nanofabricated structures is explored. • Hillocks at the edge can attract and repel vortices. • The small surface corrugation creates metastable states. • Vortex rotations during dynamical vortex penetrations are visualized. - Abstract: In this work, we report a scanning tunneling spectroscopy study of vortex patterns in mesoscopic superconducting squares and explore the impact of defects and corrugations inherently present in nanofabricated structures. We find that a hillock at the edge can function as an attractive or repulsive pinning center for vortices, deforming the theoretically predicted symmetry-induced vortex configurations. In addition, we exploit the inherently present imperfections, creating metastable states, to visualize the dynamics of vortex penetration during magnetic field sweeps.

  3. Vortex-Induced Vibrations of a Square Cylinder with Damped Free-End Conditions

    Directory of Open Access Journals (Sweden)

    S. Manzoor

    2013-01-01

    The authors report the results of vortex-induced vibrations of a square cylinder in a wind tunnel, which constitutes a high mass ratio environment. The square cylinder is mounted in the wind tunnel in such a fashion that it only performs rigid body oscillations perpendicular to the flow direction, with damped free-end conditions. This physical situation allows a direct evaluation of analytical models relying on simplified 2D assumptions. The results are also compared with two-dimensional fluid-structure (CFD-CSD) numerical simulations. The comparison shows that, despite the one-dimensional motion, the analytical model does not correctly predict the VIV region. Results show that the numerical simulations and experimental results differ from the analytical model in the prediction of the reduced velocity corresponding to peak amplitude. The analytical reduced-velocity envelope is also underpredicted compared with both the numerical simulations and the experimental data, despite the structure being lightly damped. The findings are significant because the experimental results for a freely oscillating high mass ratio body differ from the low mass ratio case, especially in the transition between the VIV and galloping regions. The numerical simulations, however, show comparatively close agreement.

  4. The Anisotropy of the Microwave Background to l = 3500: Deep Field Observations with the Cosmic Background Imager

    Science.gov (United States)

    Mason, B. S.; Pearson, T. J.; Readhead, A. C. S.; Shepherd, M. C.; Sievers, J.; Udomprasert, P. S.; Cartwright, J. K.; Farmer, A. J.; Padin, S.; Myers, S. T.; hide

    2002-01-01

    We report measurements of anisotropy in the cosmic microwave background radiation over the multipole range l ≈ 200-3500 with the Cosmic Background Imager based on deep observations of three fields. These results confirm the drop in power with increasing l first reported in earlier measurements with this instrument, and extend the observations of this decline in power out to l ≈ 2000. The decline in power is consistent with the predicted damping of primary anisotropies. At larger multipoles, l = 2000-3500, the power is 3.1σ greater than standard models for intrinsic microwave background anisotropy in this multipole range, and 3.5σ greater than zero. This excess power is not consistent with expected levels of residual radio source contamination but, for σ8 ≳ 1, is consistent with predicted levels due to a secondary Sunyaev-Zeldovich anisotropy. Further observations are necessary to confirm the level of this excess and, if confirmed, determine its origin.

  5. Coupling parameter series expansion for fluid with square-well plus repulsive-square-barrier potential

    Directory of Open Access Journals (Sweden)

    Shiqi Zhou

    2013-10-01

    Monte Carlo simulations in the canonical ensemble are performed for a fluid with a potential consisting of a square well plus a square barrier to obtain thermodynamic properties such as the pressure, excess energy, constant-volume excess heat capacity, and excess chemical potential, and a structural property, the radial distribution function. The simulations cover a wide density range for the fluid phase, several temperatures, and different combinations of the parameters defining the potential. These simulation data have been used to test the performance of a coupling parameter series expansion (CPSE) recently proposed by one of the authors [S. Zhou, Phys. Rev. E 74, 031119 (2006)], and of a traditional 2nd-order high temperature series expansion (HTSE) based on a macroscopic compressibility approximation (MCA) used with confidence since its introduction in 1967. It is found that (i) the MCA-based 2nd-order HTSE unexpectedly and depressingly fails for most situations investigated, and the present simulation results can serve well as strict criteria for testing liquid state theories; (ii) the CPSE perturbation scheme is capable of predicting very accurately most of the thermodynamic properties simulated, but the most appropriate level of truncating the CPSE differs and depends on the range of the potential: the shorter the potential range, the higher the most appropriate truncation level, and as the potential range rises the performance of the CPSE perturbation scheme decreases at higher truncation levels; (iii) the CPSE perturbation scheme can satisfactorily calculate the bulk fluid radial distribution function, and such calculations can be done for all fluid states of the whole phase diagram; (iv) the CPSE is a convergent series at higher temperatures, but shows the character of an asymptotic series at lower temperatures, and as a result the most reliable asymptotic value occurs at lower-order truncation.
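    The pair potential itself is simple to write down; the sketch below implements a hard core followed by a square well and a repulsive square barrier, with illustrative parameter names (sigma, eps, eps_b, lam1, lam2) that are assumptions rather than the paper's notation.

    ```python
    # Square-well plus repulsive-square-barrier pair potential (illustrative parameters).
    import numpy as np

    def sw_sb_potential(r, sigma=1.0, eps=1.0, eps_b=0.5, lam1=1.5, lam2=2.0):
        r = np.asarray(r, dtype=float)
        u = np.zeros_like(r)
        u[r < sigma] = np.inf                                  # hard core
        u[(r >= sigma) & (r < lam1 * sigma)] = -eps            # attractive square well
        u[(r >= lam1 * sigma) & (r < lam2 * sigma)] = eps_b    # repulsive square barrier
        return u

    print(sw_sb_potential([0.9, 1.2, 1.8, 2.5]))   # -> [inf, -1.0, 0.5, 0.0]
    ```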

  6. Preliminary background prediction for the INTEGRAL x-ray monitor

    DEFF Research Database (Denmark)

    Feroci, M.; Costa, E.; Budtz-Joergensen, C.

    1996-01-01

    The JEM-X (joint European x-ray monitor) experiment will be flown onboard ESA's INTEGRAL satellite. The instrumental background level of the two JEM-X twin detectors will depend on several parameters, among which the satellite orbit and mass distribution, and the detector materials, play…

  7. Prediction of octanol-water partition coefficients of organic compounds by multiple linear regression, partial least squares, and artificial neural network.

    Science.gov (United States)

    Golmohammadi, Hassan

    2009-11-30

    A quantitative structure-property relationship (QSPR) study was performed to develop models that relate the structures of 141 organic compounds to their octanol-water partition coefficients (log P(o/w)). A genetic algorithm was applied as a variable selection tool. Modeling of log P(o/w) of these compounds as a function of theoretically derived descriptors was established by multiple linear regression (MLR), partial least squares (PLS), and artificial neural network (ANN). The best selected descriptors that appear in the models are: atomic charge weighted partial positively charged surface area (PPSA-3), fractional atomic charge weighted partial positive surface area (FPSA-3), minimum atomic partial charge (Qmin), molecular volume (MV), total dipole moment of the molecule (mu), maximum antibonding contribution of a molecular orbital in the molecule (MAC), and maximum free valency of a C atom in the molecule (MFV). The results showed the ability of the developed artificial neural network to predict the partition coefficients of organic compounds and revealed the superiority of ANN over the MLR and PLS models.
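    The three regression approaches compared in the record can be mimicked with standard tools; the sketch below uses a random descriptor matrix as a placeholder for the 141-compound QSPR data set and compares MLR, PLS and a small neural network by cross-validated R².

    ```python
    # Hedged sketch comparing MLR, PLS and ANN regressions on placeholder descriptors.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)
    X = rng.normal(size=(141, 7))                          # 7 descriptors (e.g. PPSA-3, FPSA-3, ...)
    y = X @ rng.normal(size=7) + rng.normal(0, 0.3, 141)   # pseudo log P(o/w)

    models = {
        "MLR": LinearRegression(),
        "PLS": PLSRegression(n_components=4),
        "ANN": MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0),
    }
    for name, model in models.items():
        r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
        print(f"{name}: mean cross-validated R^2 = {r2:.3f}")
    ```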

  8. Search for the Higgs boson in events with missing transverse energy and b quark jets produced in pp̄ collisions at √s = 1.96 TeV.

    Science.gov (United States)

    Aaltonen, T; Adelman, J; Akimoto, T; Albrow, M G; Alvarez González, B; Amerio, S; Amidei, D; Anastassov, A; Annovi, A; Antos, J; Aoki, M; Apollinari, G; Apresyan, A; Arisawa, T; Artikov, A; Ashmanskas, W; Attal, A; Aurisano, A; Azfar, F; Azzi-Bacchetta, P; Azzurri, P; Bacchetta, N; Badgett, W; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Baroiant, S; Bartsch, V; Bauer, G; Beauchemin, P-H; Bedeschi, F; Bednar, P; Behari, S; Bellettini, G; Bellinger, J; Belloni, A; Benjamin, D; Beretvas, A; Beringer, J; Berry, T; Bhatti, A; Binkley, M; Bisello, D; Bizjak, I; Blair, R E; Blocker, C; Blumenfeld, B; Bocci, A; Bodek, A; Boisvert, V; Bolla, G; Bolshov, A; Bortoletto, D; Boudreau, J; Boveia, A; Brau, B; Bridgeman, A; Brigliadori, L; Bromberg, C; Brubaker, E; Budagov, J; Budd, H S; Budd, S; Burkett, K; Busetto, G; Bussey, P; Buzatu, A; Byrum, K L; Cabrera, S; Campanelli, M; Campbell, M; Canelli, F; Canepa, A; Carlsmith, D; Carosi, R; Carrillo, S; Carron, S; Casal, B; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chang, S H; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, K; Chokheli, D; Chou, J P; Choudalakis, G; Chuang, S H; Chung, K; Chung, W H; Chung, Y S; Ciobanu, C I; Ciocci, M A; Clark, A; Clark, D; Compostella, G; Convery, M E; Conway, J; Cooper, B; Copic, K; Cordelli, M; Cortiana, G; Crescioli, F; Cuenca Almenar, C; Cuevas, J; Culbertson, R; Cully, J C; Dagenhart, D; Datta, M; Davies, T; de Barbaro, P; De Cecco, S; Deisher, A; De Lentdecker, G; De Lorenzo, G; Dell'Orso, M; Demortier, L; Deng, J; Deninno, M; De Pedis, D; Derwent, P F; Di Giovanni, G P; Dionisi, C; Di Ruzza, B; Dittmann, J R; D'Onofrio, M; Donati, S; Dong, P; Donini, J; Dorigo, T; Dube, S; Efron, J; Erbacher, R; Errede, D; Errede, S; Eusebi, R; Fang, H C; Farrington, S; Fedorko, W T; Feild, R G; Feindt, M; Fernandez, J P; Ferrazza, C; Field, R; Flanagan, G; Forrest, R; Forrester, S; Franklin, M; Freeman, J C; Furic, I; Gallinaro, M; Galyardt, J; Garberson, F; Garcia, J E; Garfinkel, A F; Genser, K; Gerberich, H; Gerdes, D; Giagu, S; Giakoumopolou, V; Giannetti, P; Gibson, K; Gimmell, J L; Ginsburg, C M; Giokaris, N; Giordani, M; Giromini, P; Giunta, M; Glagolev, V; Glenzinski, D; Gold, M; Goldschmidt, N; Golossanov, A; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Goulianos, K; Gresele, A; Grinstein, S; Grosso-Pilcher, C; Grundler, U; Guimaraes da Costa, J; Gunay-Unalan, Z; Haber, C; Hahn, K; Hahn, S R; Halkiadakis, E; Hamilton, A; Han, B-Y; Han, J Y; Handler, R; Happacher, F; Hara, K; Hare, D; Hare, M; Harper, S; Harr, R F; Harris, R M; Hartz, M; Hatakeyama, K; Hauser, J; Hays, C; Heck, M; Heijboer, A; Heinemann, B; Heinrich, J; Henderson, C; Herndon, M; Heuser, J; Hewamanage, S; Hidas, D; Hill, C S; Hirschbuehl, D; Hocker, A; Hou, S; Houlden, M; Hsu, S-C; Huffman, B T; Hughes, R E; Husemann, U; Huston, J; Incandela, J; Introzzi, G; Iori, M; Ivanov, A; Iyutin, B; James, E; Jayatilaka, B; Jeans, D; Jeon, E J; Jindariani, S; Johnson, W; Jones, M; Joo, K K; Jun, S Y; Jung, J E; Junk, T R; Kamon, T; Kar, D; Karchin, P E; Kato, Y; Kephart, R; Kerzel, U; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, J E; Kim, M J; Kim, S B; Kim, S H; Kim, Y K; Kimura, N; Kirsch, L; Klimenko, S; Klute, M; Knuteson, B; Ko, B R; Koay, S A; Kondo, K; Kong, D J; Konigsberg, J; Korytov, A; Kotwal, A V; Kraus, J; Kreps, M; Kroll, J; Krumnack, N; Kruse, M; Krutelyov, V; Kubo, T; Kuhlmann, S E; Kuhr, T; Kulkarni, N P; Kusakabe, Y; Kwang, S; 
Laasanen, A T; Lai, S; Lami, S; Lammel, S; Lancaster, M; Lander, R L; Lannon, K; Lath, A; Latino, G; Lazzizzera, I; LeCompte, T; Lee, J; Lee, J; Lee, Y J; Lee, S W; Lefèvre, R; Leonardo, N; Leone, S; Levy, S; Lewis, J D; Lin, C; Lin, C S; Linacre, J; Lindgren, M; Lipeles, E; Lister, A; Litvintsev, D O; Liu, T; Lockyer, N S; Loginov, A; Loreti, M; Lovas, L; Lu, R-S; Lucchesi, D; Lueck, J; Luci, C; Lujan, P; Lukens, P; Lungu, G; Lyons, L; Lys, J; Lysak, R; Lytken, E; Mack, P; Macqueen, D; Madrak, R; Maeshima, K; Makhoul, K; Maki, T; Maksimovic, P; Malde, S; Malik, S; Manca, G; Manousakis, A; Margaroli, F; Marino, C; Marino, C P; Martin, A; Martin, M; Martin, V; Martínez, M; Martínez-Ballarín, R; Maruyama, T; Mastrandrea, P; Masubuchi, T; Mattson, M E; Mazzanti, P; McFarland, K S; McIntyre, P; McNulty, R; Mehta, A; Mehtala, P; Menzemer, S; Menzione, A; Merkel, P; Mesropian, C; Messina, A; Miao, T; Miladinovic, N; Miles, J; Miller, R; Mills, C; Milnik, M; Mitra, A; Mitselmakher, G; Miyake, H; Moed, S; Moggi, N; Moon, C S; Moore, R; Morello, M; Movilla Fernandez, P; Mülmenstädt, J; Mukherjee, A; Muller, Th; Mumford, R; Murat, P; Mussini, M; Nachtman, J; Nagai, Y; Nagano, A; Naganoma, J; Nakamura, K; Nakano, I; Napier, A; Necula, V; Neu, C; Neubauer, M S; Nielsen, J; Nodulman, L; Norman, M; Norniella, O; Nurse, E; Oh, S H; Oh, Y D; Oksuzian, I; Okusawa, T; Oldeman, R; Orava, R; Osterberg, K; Pagan Griso, S; Pagliarone, C; Palencia, E; Papadimitriou, V; Papaikonomou, A; Paramonov, A A; Parks, B; Pashapour, S; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Pellett, D E; Penzo, A; Phillips, T J; Piacentino, G; Piedra, J; Pinera, L; Pitts, K; Plager, C; Pondrom, L; Portell, X; Poukhov, O; Pounder, N; Prakoshyn, F; Pronko, A; Proudfoot, J; Ptohos, F; Punzi, G; Pursley, J; Rademacker, J; Rahaman, A; Ramakrishnan, V; Ranjan, N; Redondo, I; Reisert, B; Rekovic, V; Renton, P; Rescigno, M; Richter, S; Rimondi, F; Ristori, L; Robson, A; Rodrigo, T; Rogers, E; Rolli, S; Roser, R; Rossi, M; Rossin, R; Rott, C; Roy, P; Ruiz, A; Russ, J; Rusu, V; Saarikko, H; Safonov, A; Sakumoto, W K; Salamanna, G; Saltó, O; Santi, L; Sarkar, S; Sartori, L; Sato, K; Savoy-Navarro, A; Scheidle, T; Schlabach, P; Schmidt, E E; Schmidt, M A; Schmidt, M P; Schmitt, M; Schwarz, T; Scodellaro, L; Scott, A L; Scribano, A; Scuri, F; Sedov, A; Seidel, S; Seiya, Y; Semenov, A; Sexton-Kennedy, L; Sfyrla, A; Shalhout, S Z; Shapiro, M D; Shears, T; Shepard, P F; Sherman, D; Shimojima, M; Shochet, M; Shon, Y; Shreyber, I; Sidoti, A; Sinervo, P; Sisakyan, A; Slaughter, A J; Slaunwhite, J; Sliwa, K; Smith, J R; Snider, F D; Snihur, R; Soderberg, M; Soha, A; Somalwar, S; Sorin, V; Spalding, J; Spinella, F; Spreitzer, T; Squillacioti, P; Stanitzki, M; St Denis, R; Stelzer, B; Stelzer-Chilton, O; Stentz, D; Strologas, J; Stuart, D; Suh, J S; Sukhanov, A; Sun, H; Suslov, I; Suzuki, T; Taffard, A; Takashima, R; Takeuchi, Y; Tanaka, R; Tecchio, M; Teng, P K; Terashi, K; Thom, J; Thompson, A S; Thompson, G A; Thomson, E; Tipton, P; Tiwari, V; Tkaczyk, S; Toback, D; Tokar, S; Tollefson, K; Tomura, T; Tonelli, D; Torre, S; Torretta, D; Tourneur, S; Trischuk, W; Tu, Y; Turini, N; Ukegawa, F; Uozumi, S; Vallecorsa, S; van Remortel, N; Varganov, A; Vataga, E; Vázquez, F; Velev, G; Vellidis, C; Veszpremi, V; Vidal, M; Vidal, R; Vila, I; Vilar, R; Vine, T; Vogel, M; Volobouev, I; Volpi, G; Würthwein, F; Wagner, P; Wagner, R G; Wagner, R L; Wagner-Kuhr, J; Wagner, W; Wakisaka, T; Wallny, R; Wang, S M; Warburton, A; Waters, D; Weinberger, M; Wester, W C; 
Whitehouse, B; Whiteson, D; Wicklund, A B; Wicklund, E; Williams, G; Williams, H H; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolfe, C; Wright, T; Wu, X; Wynne, S M; Yagil, A; Yamamoto, K; Yamaoka, J; Yamashita, T; Yang, C; Yang, U K; Yang, Y C; Yao, W M; Yeh, G P; Yoh, J; Yorita, K; Yoshida, T; Yu, G B; Yu, I; Yu, S S; Yun, J C; Zanello, L; Zanetti, A; Zaw, I; Zhang, X; Zheng, Y; Zucchelli, S

    2008-05-30

    We search for the standard model Higgs boson produced in association with an electroweak vector boson in events with no identified charged leptons, large imbalance in transverse momentum, and two jets where at least one contains a secondary vertex consistent with the decay of b hadrons. We use approximately 1 fb⁻¹ of integrated luminosity from pp̄ collisions at √s = 1.96 TeV recorded by the Collider Detector at Fermilab II experiment at the Tevatron. We find 268 (16) single (double) b-tagged candidate events, where 248 ± 43 (14.4 ± 2.7) are expected from standard model background processes. We observe no significant excess over the expected background and thus set 95% confidence level upper limits on the Higgs boson production cross section for several Higgs boson masses ranging from 110 to 140 GeV/c². For a mass of 115 GeV/c², the observed (expected) limit is 20.4 (14.2) times the standard model prediction.

  9. Baseline correction combined partial least squares algorithm and its application in on-line Fourier transform infrared quantitative analysis.

    Science.gov (United States)

    Peng, Jiangtao; Peng, Silong; Xie, Qiong; Wei, Jiping

    2011-04-01

    In order to eliminate low-order polynomial interferences, a new quantitative calibration algorithm, "Baseline Correction Combined Partial Least Squares (BCC-PLS)", which combines baseline correction and conventional PLS, is proposed. By embedding baseline-correction constraints into the PLS weights selection, the proposed calibration algorithm overcomes the uncertainty in baseline correction and can meet the requirements of on-line attenuated total reflectance Fourier transform infrared (ATR-FTIR) quantitative analysis. The effectiveness of the algorithm is evaluated by the analysis of glucose and marzipan ATR-FTIR spectra. The BCC-PLS algorithm shows improved prediction performance over PLS. The root mean square error of cross-validation (RMSECV) on marzipan spectra for the prediction of moisture is found to be 0.53% w/w (range 7-19%), and the sugar content is predicted with an RMSECV of 2.04% w/w (range 33-68%).
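    The paper's BCC-PLS builds the baseline constraint directly into the PLS weight selection; a simpler two-step stand-in (explicit polynomial baseline removal followed by ordinary PLS) conveys the idea and is sketched below on synthetic spectra. All signal shapes and parameters are assumptions for illustration only.

    ```python
    # Two-step stand-in for the idea: polynomial baseline removal, then ordinary PLS.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(3)
    wavenumber = np.linspace(0.0, 1.0, 400)
    band = np.exp(-((wavenumber - 0.4) / 0.02) ** 2)           # analyte absorption band
    conc = rng.uniform(7, 19, 40)                              # e.g. moisture, % w/w
    drift = np.outer(rng.uniform(-1, 1, 40), wavenumber) + rng.uniform(0, 1, (40, 1))
    X = 0.01 * np.outer(conc, band) + drift + rng.normal(0, 0.002, (40, 400))

    def remove_baseline(spectra, x, order=2):
        """Subtract a least-squares polynomial baseline of the given order from each spectrum."""
        coeffs = np.polyfit(x, spectra.T, order)               # one polynomial per spectrum
        return spectra - np.polyval(coeffs, x[:, None]).T

    X_corr = remove_baseline(X, wavenumber)
    pls = PLSRegression(n_components=2).fit(X_corr, conc)
    rmse = np.sqrt(np.mean((pls.predict(X_corr).ravel() - conc) ** 2))
    print(f"calibration RMSE: {rmse:.3f} % w/w")
    ```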

  10. Analysis of background components in Ge-spectrometry and their influence on detection limits

    Energy Technology Data Exchange (ETDEWEB)

    Heusser, G [Max-Planck-Institut fuer Kernphysik, Heidelberg (Germany)

    1997-03-01

    In low radioactivity measurements the spectrometer's own background is, besides the counting efficiency, the limiting factor for the achievable sensitivity. Since the latter is mostly fixed, background reduction is the only way to gain sensitivity, although the sensitivity improves only with the square root of the background rate while it is directly proportional to the counting efficiency. A thorough understanding of the background sources and their quantitative contributions helps to choose the most adequate suppression method in order to reach a certain required detection limit. For Ge-spectrometry the background can be reduced by 5 to 6 orders of magnitude compared to the unshielded case by applying state-of-the-art techniques. This reduction factor holds for the continuous background spectrum as well as for the line background, as demonstrated for a Ge detector of the Heidelberg-Moscow double beta decay experiment. (orig./DG)
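    The scaling quoted above is easy to make concrete. Assuming a Currie-style expression for the detection limit of a counted peak region (an assumption, not the paper's own formula), the limit improves only with the square root of the background but linearly with the counting efficiency:

    ```python
    # Numeric illustration: detection limit vs. background suppression and efficiency.
    import numpy as np

    def detection_limit(background_counts, efficiency, emission_prob=1.0, live_time=1.0):
        """Approximate activity detection limit (Bq) from a Currie-style counting limit."""
        ld_counts = 2.71 + 4.65 * np.sqrt(background_counts)   # Currie L_D in counts
        return ld_counts / (efficiency * emission_prob * live_time)

    for reduction in (1, 100, 10_000, 1_000_000):              # background suppression factors
        dl = detection_limit(background_counts=1e6 / reduction, efficiency=0.03, live_time=86_400)
        print(f"background reduced by {reduction:>9}: detection limit = {dl:.2e} Bq")
    ```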

  11. Regional differences in prediction models of lung function in Germany

    Directory of Open Access Journals (Sweden)

    Schäper Christoph

    2010-04-01

    Background: Little is known about the influence of specific characteristics on lung function in different populations. The aim of this analysis was to determine whether lung function determinants differ between subpopulations within Germany and whether prediction equations developed for one subpopulation are also adequate for another subpopulation. Methods: Within three studies (KORA C, SHIP-I, ECRHS-I) in different areas of Germany, 4059 adults performed lung function tests. The available data consisted of forced expiratory volume in one second (FEV1), forced vital capacity (FVC) and peak expiratory flow rate (PEF). For each study, multivariate regression models were developed to predict lung function, and Bland-Altman plots were established to evaluate the agreement between predicted and measured values. Results: The final regression equations for FEV1 and FVC showed adjusted r-squared values between 0.65 and 0.75, and for PEF they were between 0.46 and 0.61. In all studies gender, age, height and pack-years were significant determinants, each with a similar effect size. Regarding other predictors there were some, although not statistically significant, differences between the studies. Bland-Altman plots indicated that the regression models for each individual study adequately predict medium (i.e. normal but not extremely high or low) lung function values in the whole study population. Conclusions: Simple models with gender, age and height explain a substantial part of the lung function variance, whereas further determinants add less than 5% to the total explained r-squared, at least for FEV1 and FVC. Thus, for the different adult subpopulations of Germany one simple model for each lung function measure is still sufficient.
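    The modelling steps described (a multivariate linear model for lung function followed by Bland-Altman agreement statistics) can be sketched as follows; the simulated predictors, coefficients and units are placeholders, not the KORA/SHIP/ECRHS data.

    ```python
    # Sketch: linear prediction model for FEV1 plus Bland-Altman statistics (simulated data).
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(4)
    n = 500
    gender = rng.integers(0, 2, n)                   # 0 = female, 1 = male
    age = rng.uniform(25, 75, n)
    height = rng.normal(170, 9, n)
    pack_years = rng.exponential(8, n)
    fev1 = (0.5 + 0.35 * gender - 0.028 * age + 0.025 * height
            - 0.01 * pack_years + rng.normal(0, 0.35, n))      # litres

    X = np.column_stack([gender, age, height, pack_years])
    model = LinearRegression().fit(X, fev1)
    pred = model.predict(X)

    diff = fev1 - pred                                # measured minus predicted
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)                     # Bland-Altman limits of agreement
    print(f"R^2 = {model.score(X, fev1):.2f}")
    print(f"bias = {bias:.3f} L, limits of agreement = +/- {loa:.3f} L")
    ```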

  12. A prediction rule for the development of delirium among patients in medical wards: Chi-Square Automatic Interaction Detector (CHAID) decision tree analysis model.

    Science.gov (United States)

    Kobayashi, Daiki; Takahashi, Osamu; Arioka, Hiroko; Koga, Shinichiro; Fukui, Tsuguya

    2013-10-01

    The aim was to predict the development of delirium among patients in medical wards with a Chi-Square Automatic Interaction Detector (CHAID) decision tree model. This was a retrospective cohort study of all adult patients admitted to medical wards at a large community hospital. The subject patients were randomly assigned to either a derivation or a validation group (2:1) by computed random number generation. Baseline data and clinically relevant factors were collected from the electronic chart. The primary outcome was the development of delirium during hospitalization. All potential predictors were included in a forward stepwise logistic regression model. CHAID decision tree analysis was also performed to build another prediction model with the same group of patients. Receiver operating characteristic curves were drawn, and the areas under the curves (AUCs) were calculated for both models. In the validation group, these receiver operating characteristic curves and AUCs were calculated based on the rules from the derivation. A total of 3,570 patients were admitted: 2,400 patients were assigned to the derivation group and 1,170 to the validation group. A total of 91 and 51 patients, respectively, developed delirium. Statistically significant predictors in the CHAID decision tree model were delirium history, age, underlying malignancy, and activities of daily living impairment, resulting in six distinct groups by level of risk. The AUC was 0.82 in derivation and 0.82 in validation with the CHAID model, and 0.78 in derivation and 0.79 in validation with the logistic model. We propose a validated CHAID decision tree prediction model to predict the development of delirium among medical patients.
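    scikit-learn has no CHAID implementation (CHAID chooses splits with chi-square tests), so the sketch below uses an ordinary CART decision tree as a stand-in, with simulated predictors named after those reported above, and scores the validation split by AUC as in the study.

    ```python
    # CART decision tree as a stand-in for CHAID on simulated delirium data.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(5)
    n = 3570
    X = np.column_stack([
        rng.integers(0, 2, n),          # delirium history
        rng.normal(70, 12, n),          # age
        rng.integers(0, 2, n),          # underlying malignancy
        rng.integers(0, 2, n),          # activities-of-daily-living impairment
    ])
    logit = -4 + 1.8 * X[:, 0] + 0.04 * (X[:, 1] - 70) + 0.7 * X[:, 2] + 0.9 * X[:, 3]
    y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))   # delirium during hospitalization

    X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=1/3, random_state=0)
    tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=50).fit(X_dev, y_dev)
    auc = roc_auc_score(y_val, tree.predict_proba(X_val)[:, 1])
    print(f"validation AUC = {auc:.2f}")
    ```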

  13. Comparison of Prediction-Error-Modelling Criteria

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    Single and multi-step prediction-error-methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which is a r...
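    For a scalar state-space model the one-step prediction errors (innovations) that enter such criteria are produced directly by the Kalman filter; the sketch below computes them and evaluates a least-squares criterion, using an assumed scalar model rather than the paper's setup.

    ```python
    # One-step prediction errors from a Kalman filter for a scalar state-space model.
    import numpy as np

    def one_step_prediction_errors(y, a, c, q, r, x0=0.0, p0=1.0):
        """Kalman filter for x_{k+1} = a*x_k + w_k, y_k = c*x_k + v_k; returns innovations."""
        x, p = x0, p0
        errors = np.empty(len(y))
        for k, yk in enumerate(y):
            e = yk - c * x                 # one-step prediction error (innovation)
            s = c * p * c + r              # innovation variance
            g = p * c / s                  # Kalman gain
            x, p = x + g * e, (1.0 - g * c) * p   # measurement update
            x, p = a * x, a * p * a + q           # time update to the next step
            errors[k] = e
        return errors

    # simulate data from the assumed model and evaluate the least-squares criterion
    rng = np.random.default_rng(6)
    a_true, c_true, xk = 0.9, 1.0, 0.0
    y = []
    for _ in range(200):
        xk = a_true * xk + rng.normal(0, 0.1)
        y.append(c_true * xk + rng.normal(0, 0.2))
    e = one_step_prediction_errors(np.asarray(y), a=0.9, c=1.0, q=0.01, r=0.04)
    print("mean squared one-step prediction error:", np.mean(e ** 2))
    ```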

  14. Search for the Higgs boson using neural networks in events with missing energy and b-quark jets in pp̄ collisions at √s = 1.96 TeV.

    Science.gov (United States)

    Aaltonen, T; Adelman, J; Alvarez González, B; Amerio, S; Amidei, D; Anastassov, A; Annovi, A; Antos, J; Apollinari, G; Apresyan, A; Arisawa, T; Artikov, A; Asaadi, J; Ashmanskas, W; Attal, A; Aurisano, A; Azfar, F; Badgett, W; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Barria, P; Bartos, P; Bauer, G; Beauchemin, P-H; Bedeschi, F; Beecher, D; Behari, S; Bellettini, G; Bellinger, J; Benjamin, D; Beretvas, A; Bhatti, A; Binkley, M; Bisello, D; Bizjak, I; Blair, R E; Blocker, C; Blumenfeld, B; Bocci, A; Bodek, A; Boisvert, V; Bortoletto, D; Boudreau, J; Boveia, A; Brau, B; Bridgeman, A; Brigliadori, L; Bromberg, C; Brubaker, E; Budagov, J; Budd, H S; Budd, S; Burkett, K; Busetto, G; Bussey, P; Buzatu, A; Byrum, K L; Cabrera, S; Calancha, C; Camarda, S; Campanelli, M; Campbell, M; Canelli, F; Canepa, A; Carls, B; Carlsmith, D; Carosi, R; Carrillo, S; Carron, S; Casal, B; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavaliere, V; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chang, S H; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, K; Chokheli, D; Chou, J P; Chung, K; Chung, W H; Chung, Y S; Chwalek, T; Ciobanu, C I; Ciocci, M A; Clark, A; Clark, D; Compostella, G; Convery, M E; Conway, J; Corbo, M; Cordelli, M; Cox, C A; Cox, D J; Crescioli, F; Cuenca Almenar, C; Cuevas, J; Culbertson, R; Cully, J C; Dagenhart, D; Datta, M; Davies, T; de Barbaro, P; De Cecco, S; Deisher, A; De Lorenzo, G; Dell'Orso, M; Deluca, C; Demortier, L; Deng, J; Deninno, M; d'Errico, M; Di Canto, A; di Giovanni, G P; Di Ruzza, B; Dittmann, J R; D'Onofrio, M; Donati, S; Dong, P; Dorigo, T; Dube, S; Ebina, K; Elagin, A; Erbacher, R; Errede, D; Errede, S; Ershaidat, N; Eusebi, R; Fang, H C; Farrington, S; Fedorko, W T; Feild, R G; Feindt, M; Fernandez, J P; Ferrazza, C; Field, R; Flanagan, G; Forrest, R; Frank, M J; Franklin, M; Freeman, J C; Furic, I; Gallinaro, M; Galyardt, J; Garberson, F; Garcia, J E; Garfinkel, A F; Garosi, P; Gerberich, H; Gerdes, D; Gessler, A; Giagu, S; Giakoumopoulou, V; Giannetti, P; Gibson, K; Gimmell, J L; Ginsburg, C M; Giokaris, N; Giordani, M; Giromini, P; Giunta, M; Giurgiu, G; Glagolev, V; Glenzinski, D; Gold, M; Goldschmidt, N; Golossanov, A; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Goulianos, K; Gresele, A; Grinstein, S; Grosso-Pilcher, C; Group, R C; Grundler, U; Guimaraes da Costa, J; Gunay-Unalan, Z; Haber, C; Hahn, S R; Halkiadakis, E; Han, B-Y; Han, J Y; Happacher, F; Hara, K; Hare, D; Hare, M; Harr, R F; Hartz, M; Hatakeyama, K; Hays, C; Heck, M; Heinrich, J; Herndon, M; Heuser, J; Hewamanage, S; Hidas, D; Hill, C S; Hirschbuehl, D; Hocker, A; Hou, S; Houlden, M; Hsu, S-C; Hughes, R E; Hurwitz, M; Husemann, U; Hussein, M; Huston, J; Incandela, J; Introzzi, G; Iori, M; Ivanov, A; James, E; Jang, D; Jayatilaka, B; Jeon, E J; Jha, M K; Jindariani, S; Johnson, W; Jones, M; Joo, K K; Jun, S Y; Jung, J E; Junk, T R; Kamon, T; Kar, D; Karchin, P E; Kato, Y; Kephart, R; Ketchum, W; Keung, J; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, H W; Kim, J E; Kim, M J; Kim, S B; Kim, S H; Kim, Y K; Kimura, N; Kirsch, L; Klimenko, S; Kondo, K; Kong, D J; Konigsberg, J; Korytov, A; Kotwal, A V; Kreps, M; Kroll, J; Krop, D; Krumnack, N; Kruse, M; Krutelyov, V; Kuhr, T; Kulkarni, N P; Kurata, M; Kwang, S; Laasanen, A T; Lami, S; Lammel, S; Lancaster, M; Lander, R L; Lannon, K; Lath, A; Latino, G; Lazzizzera, I; LeCompte, T; Lee, E; Lee, H S; Lee, J S; Lee, S W; Leone, S; Lewis, J D; Lin, C-J; Linacre, J; Lindgren, M; 
Lipeles, E; Lister, A; Litvintsev, D O; Liu, C; Liu, T; Lockyer, N S; Loginov, A; Lovas, L; Lucchesi, D; Lueck, J; Lujan, P; Lukens, P; Lungu, G; Lys, J; Lysak, R; MacQueen, D; Madrak, R; Maeshima, K; Makhoul, K; Maksimovic, P; Malde, S; Malik, S; Manca, G; Manousakis-Katsikakis, A; Margaroli, F; Marino, C; Marino, C P; Martin, A; Martin, V; Martínez, M; Martínez-Ballarín, R; Mastrandrea, P; Mathis, M; Mattson, M E; Mazzanti, P; McFarland, K S; McIntyre, P; McNulty, R; Mehta, A; Mehtala, P; Menzione, A; Mesropian, C; Miao, T; Mietlicki, D; Miladinovic, N; Miller, R; Mills, C; Milnik, M; Mitra, A; Mitselmakher, G; Miyake, H; Moed, S; Moggi, N; Mondragon, M N; Moon, C S; Moore, R; Morello, M J; Morlock, J; Movilla Fernandez, P; Mülmenstädt, J; Mukherjee, A; Muller, Th; Murat, P; Mussini, M; Nachtman, J; Nagai, Y; Naganoma, J; Nakamura, K; Nakano, I; Napier, A; Nett, J; Neu, C; Neubauer, M S; Neubauer, S; Nielsen, J; Nodulman, L; Norman, M; Norniella, O; Nurse, E; Oakes, L; Oh, S H; Oh, Y D; Oksuzian, I; Okusawa, T; Orava, R; Osterberg, K; Pagan Griso, S; Pagliarone, C; Palencia, E; Papadimitriou, V; Papaikonomou, A; Paramanov, A A; Parks, B; Pashapour, S; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Peiffer, T; Pellett, D E; Penzo, A; Phillips, T J; Piacentino, G; Pianori, E; Pinera, L; Pitts, K; Plager, C; Pondrom, L; Potamianos, K; Poukhov, O; Prokoshin, F; Pronko, A; Ptohos, F; Pueschel, E; Punzi, G; Pursley, J; Rademacker, J; Rahaman, A; Ramakrishnan, V; Ranjan, N; Redondo, I; Renton, P; Renz, M; Rescigno, M; Richter, S; Rimondi, F; Ristori, L; Robson, A; Rodrigo, T; Rodriguez, T; Rogers, E; Rolli, S; Roser, R; Rossi, M; Rossin, R; Roy, P; Ruiz, A; Russ, J; Rusu, V; Rutherford, B; Saarikko, H; Safonov, A; Sakumoto, W K; Santi, L; Sartori, L; Sato, K; Savoy-Navarro, A; Schlabach, P; Schmidt, A; Schmidt, E E; Schmidt, M A; Schmidt, M P; Schmitt, M; Schwarz, T; Scodellaro, L; Scribano, A; Scuri, F; Sedov, A; Seidel, S; Seiya, Y; Semenov, A; Sexton-Kennedy, L; Sforza, F; Sfyrla, A; Shalhout, S Z; Shears, T; Shepard, P F; Shimojima, M; Shiraishi, S; Shochet, M; Shon, Y; Shreyber, I; Simonenko, A; Sinervo, P; Sisakyan, A; Slaughter, A J; Slaunwhite, J; Sliwa, K; Smith, J R; Snider, F D; Snihur, R; Soha, A; Somalwar, S; Sorin, V; Squillacioti, P; Stanitzki, M; St Denis, R; Stelzer, B; Stelzer-Chilton, O; Stentz, D; Strologas, J; Strycker, G L; Suh, J S; Sukhanov, A; Suslov, I; Taffard, A; Takashima, R; Takeuchi, Y; Tanaka, R; Tang, J; Tecchio, M; Teng, P K; Thom, J; Thome, J; Thompson, G A; Thomson, E; Tipton, P; Ttito-Guzmán, P; Tkaczyk, S; Toback, D; Tokar, S; Tollefson, K; Tomura, T; Tonelli, D; Torre, S; Torretta, D; Totaro, P; Tourneur, S; Trovato, M; Tsai, S-Y; Tu, Y; Turini, N; Ukegawa, F; Uozumi, S; van Remortel, N; Varganov, A; Vataga, E; Vázquez, F; Velev, G; Vellidis, C; Vidal, M; Vila, I; Vilar, R; Vogel, M; Volobouev, I; Volpi, G; Wagner, P; Wagner, R G; Wagner, R L; Wagner, W; Wagner-Kuhr, J; Wakisaka, T; Wallny, R; Wang, S M; Warburton, A; Waters, D; Weinberger, M; Weinelt, J; Wester, W C; Whitehouse, B; Whiteson, D; Wicklund, A B; Wicklund, E; Wilbur, S; Williams, G; Williams, H H; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolfe, C; Wolfe, H; Wright, T; Wu, X; Würthwein, F; Yagil, A; Yamamoto, K; Yamaoka, J; Yang, U K; Yang, Y C; Yao, W M; Yeh, G P; Yi, K; Yoh, J; Yorita, K; Yoshida, T; Yu, G B; Yu, I; Yu, S S; Yun, J C; Zanetti, A; Zeng, Y; Zhang, X; Zheng, Y; Zucchelli, S

    2010-04-09

    We report on a search for the standard model Higgs boson produced in association with a W or Z boson in pp̄ collisions at √s = 1.96 TeV recorded by the CDF II experiment at the Tevatron in a data sample corresponding to an integrated luminosity of 2.1 fb⁻¹. We consider events which have no identified charged leptons, an imbalance in transverse momentum, and two or three jets where at least one jet is consistent with originating from the decay of a b hadron. We find good agreement between data and background predictions. We place 95% confidence level upper limits on the production cross section for several Higgs boson masses ranging from 110 GeV/c² to 150 GeV/c². For a mass of 115 GeV/c² the observed (expected) limit is 6.9 (5.6) times the standard model prediction.

  15. Prediction of Biomass Production and Nutrient Uptake in Land Application Using Partial Least Squares Regression Analysis

    Directory of Open Access Journals (Sweden)

    Vasileios A. Tzanakakis

    2014-12-01

    Partial Least Squares Regression (PLSR) can integrate a great number of variables and overcome collinearity problems, a fact that makes it suitable for intensive agronomical practices such as land application. In the present study a PLSR model was developed to predict important management goals, including biomass production and nutrient recovery (i.e., nitrogen and phosphorus), associated with treatment potential, environmental impacts, and economic benefits. Effluent loading and a considerable number of soil parameters commonly monitored in effluent-irrigated lands were considered as potential predictor variables during model development. All data were derived from a three-year field trial including plantations of four different plant species (Acacia cyanophylla, Eucalyptus camaldulensis, Populus nigra, and Arundo donax) irrigated with pre-treated domestic effluent. The PLSR method was very effective despite the small sample size and the wide nature of the data set (with many highly correlated inputs and several highly correlated responses). Through the PLSR method the number of initial predictor variables was reduced, and only a few variables remained in the final PLSR model. The important input variables retained were effluent loading, electrical conductivity (EC), available phosphorus (Olsen-P), Na+, Ca2+, Mg2+, K+, SAR, and NO3−-N. Among these variables, effluent loading, EC, and nitrates had the greatest contribution to the final PLSR model. PLSR is highly compatible with intensive agronomical practices such as land application, in which a large number of highly collinear and noisy input variables is monitored to assess plant species performance and to detect impacts on the environment.

  16. Background music and cognitive performance.

    Science.gov (United States)

    Angel, Leslie A; Polzella, Donald J; Elvers, Greg C

    2010-06-01

    The present experiment employed standardized test batteries to assess the effects of fast-tempo music on cognitive performance among 56 male and female university students. A linguistic processing task and a spatial processing task were selected from the Criterion Task Set developed to assess verbal and nonverbal performance. Ten excerpts from Mozart's music matched for tempo were selected. Background music increased the speed of spatial processing and the accuracy of linguistic processing. The findings suggest that background music can have predictable effects on cognitive performance.

  17. Application of transmission infrared spectroscopy and partial least squares regression to predict immunoglobulin G concentration in dairy and beef cow colostrum.

    Science.gov (United States)

    Elsohaby, Ibrahim; Windeyer, M Claire; Haines, Deborah M; Homerosky, Elizabeth R; Pearson, Jennifer M; McClure, J Trenton; Keefe, Greg P

    2018-03-06

    The objective of this study was to explore the potential of transmission infrared (TIR) spectroscopy in combination with partial least squares regression (PLSR) for quantification of dairy and beef cow colostral immunoglobulin G (IgG) concentration and assessment of colostrum quality. A total of 430 colostrum samples were collected from dairy (n = 235) and beef (n = 195) cows and tested by a radial immunodiffusion (RID) assay and TIR spectroscopy. Colostral IgG concentrations obtained by the RID assay were linked to the preprocessed spectra and divided into combined and prediction data sets. Three PLSR calibration models were built: one for dairy cow colostrum only, the second for beef cow colostrum only, and the third for the merged dairy and beef cow colostrum. The predictive performance of each model was evaluated separately using the independent prediction data set. The Pearson correlation coefficients between IgG concentrations as determined by the TIR-based assay and the RID assay were 0.84 for dairy cow colostrum, 0.88 for beef cow colostrum, and 0.92 for the merged set of dairy and beef cow colostrum. The averages of the differences between colostral IgG concentrations obtained by the RID- and TIR-based assays were -3.5, 2.7, and 1.4 g/L for the dairy, beef, and merged colostrum samples, respectively. Further, the average relative error of the colostral IgG predicted by TIR spectroscopy with respect to the RID assay was 5% for dairy cows, 1.2% for beef cows, and 0.8% for the merged data set. The average intra-assay CVs of the IgG concentration predicted by the TIR-based method were 3.2%, 2.5%, and 6.9% for the dairy cow, beef cow, and merged data sets, respectively. The utility of the TIR method for assessment of colostrum quality was evaluated using the entire data set and showed that TIR spectroscopy accurately identified the quality status of 91% of dairy cow colostrum samples, 95% of beef cow colostrum samples, and 89% and 93% of the merged dairy and beef cow colostrum samples.

  18. Identifying Dirac cones in carbon allotropes with square symmetry

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Jinying [College of Chemistry and Molecular Engineering, Peking University, Beijing 100871 (China); Huang, Huaqing; Duan, Wenhui [Department of Physics, Tsinghua University, Beijing 100084 (China); Liu, Zhirong, E-mail: LiuZhiRong@pku.edu.cn [College of Chemistry and Molecular Engineering, Peking University, Beijing 100871 (China); State Key Laboratory for Structural Chemistry of Unstable and Stable Species and Beijing National Laboratory for Molecular Sciences (BNLMS), Peking University, Beijing 100871 (China)

    2013-11-14

    A theoretical study is conducted to search for Dirac cones in two-dimensional carbon allotropes with square symmetry. By enumerating the carbon atoms in a unit cell up to 12, an allotrope with octatomic rings is recognized to possess Dirac cones under a simple tight-binding approach. The obtained Dirac cones are accompanied by flat bands at the Fermi level, and the resulting massless Dirac-Weyl fermions are chiral particles with a pseudospin of S = 1, rather than the conventional S = 1/2 of graphene. The spin-1 Dirac cones are also predicted to exist in hexagonal graphene antidot lattices.

  19. Due Date Assignment in a Dynamic Job Shop with the Orthogonal Kernel Least Squares Algorithm

    Science.gov (United States)

    Yang, D. H.; Hu, L.; Qian, Y.

    2017-06-01

    Meeting due dates is a key goal in the manufacturing industries. This paper proposes a method for due date assignment (DDA) using the Orthogonal Kernel Least Squares Algorithm (OKLSA). A simulation model is built to imitate the production process of a highly dynamic job shop. Several factors describing job characteristics and system state are extracted as attributes to predict job flow-times. A number of experiments under varying dispatching rules and a 90% shop utilization level have been carried out to evaluate the effectiveness of OKLSA applied to DDA. The prediction performance of OKLSA is compared with those of five conventional DDA models and a back-propagation neural network (BPNN). The experimental results indicate that OKLSA is statistically superior to the other DDA models in terms of mean absolute lateness and root mean square lateness in most cases. The only exception occurs when the shortest-processing-time rule is used for dispatching jobs, in which case the difference between OKLSA and BPNN is not statistically significant.
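    The flow-time regression at the heart of such a DDA rule can be sketched with any kernel least-squares learner; below, scikit-learn's KernelRidge stands in for the paper's OKLSA, the job attributes are simulated, and lateness is summarized with the same two measures quoted above.

    ```python
    # Kernel least-squares flow-time prediction for due-date assignment (KernelRidge stand-in).
    import numpy as np
    from sklearn.kernel_ridge import KernelRidge
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(7)
    n = 1000
    X = np.column_stack([
        rng.uniform(1, 20, n),      # total processing time of the job
        rng.integers(1, 8, n),      # number of operations
        rng.uniform(0, 50, n),      # work content in queue (system state)
    ])
    flow_time = 2.0 * X[:, 0] + 0.8 * X[:, 2] + rng.gamma(2.0, 3.0, n)

    X_tr, X_te, y_tr, y_te = train_test_split(X, flow_time, test_size=0.3, random_state=0)
    model = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.05).fit(X_tr, y_tr)
    due_allowance = model.predict(X_te)        # DDA: due date = arrival time + predicted flow-time
    lateness = y_te - due_allowance
    print(f"mean absolute lateness    = {np.abs(lateness).mean():.2f}")
    print(f"root mean square lateness = {np.sqrt((lateness ** 2).mean()):.2f}")
    ```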

  20. The number counts and infrared backgrounds from infrared-bright galaxies

    International Nuclear Information System (INIS)

    Hacking, P.B.; Soifer, B.T.

    1991-01-01

    Extragalactic number counts and diffuse backgrounds at 25, 60, and 100 microns are predicted using new luminosity functions and improved spectral-energy distribution density functions derived from IRAS observations of nearby galaxies. Galaxies at redshifts z less than 3 that are like those in the local universe should produce a minimum diffuse background of 0.0085, 0.038, and 0.13 MJy/sr at 25, 60, and 100 microns, respectively. Models with significant luminosity evolution predict backgrounds about a factor of 4 greater than this minimum. 22 refs

  1. From Square Dance to Mathematics

    Science.gov (United States)

    Bremer, Zoe

    2010-01-01

    In this article, the author suggests a cross-curricular idea that can link with PE, dance, music and history. David Schmitz, a maths teacher in Illinois who is also a square dance caller, developed a maths course that uses the standard square dance syllabus to teach mathematical principles. He presents an intensive, two-week course…

  2. Lagrange’s Four-Square Theorem

    Directory of Open Access Journals (Sweden)

    Watase Yasushige

    2015-02-01

    This article provides a formalized proof of the so-called "four-square theorem", namely that any natural number can be expressed as a sum of four squares, which was proved by Lagrange in 1770. An informal proof of the theorem can be found in the number theory literature, e.g. in [14], [1] or [23].
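    The statement is easy to check numerically for small numbers; the brute-force sketch below finds one representation n = a² + b² + c² + d² for a few sample values (it is an illustration, not related to the formalized proof itself).

    ```python
    # Brute-force four-square decompositions for small n (illustration of the theorem).
    from itertools import combinations_with_replacement
    from math import isqrt

    def four_squares(n):
        """Return one (a, b, c, d) with a^2 + b^2 + c^2 + d^2 == n."""
        for quad in combinations_with_replacement(range(isqrt(n) + 1), 4):
            if sum(x * x for x in quad) == n:
                return quad
        raise AssertionError("would contradict Lagrange's theorem")

    for n in (7, 310, 1770):
        print(n, "=", " + ".join(f"{x}^2" for x in four_squares(n)))
    ```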

  3. A cross-correlation objective function for least-squares migration and visco-acoustic imaging

    KAUST Repository

    Dutta, Gaurav

    2014-08-05

    Conventional acoustic least-squares migration inverts for a reflectivity image that best matches the amplitudes of the observed data. However, for field data applications, it is not easy to match the recorded amplitudes because of the visco-elastic nature of the earth and inaccuracies in the estimation of source signature and strength at different shot locations. To relax the requirement for strong amplitude matching of least-squares migration, we use a normalized cross-correlation objective function that is only sensitive to the similarity between the predicted and the observed data. Such a normalized cross-correlation objective function is also equivalent to a time-domain phase inversion method where the main emphasis is only on matching the phase of the data rather than the amplitude. Numerical tests on synthetic and field data show that such an objective function can be used as an alternative to visco-acoustic least-squares reverse time migration (Qp-LSRTM) when there is strong attenuation in the subsurface and the estimation of the attenuation parameter Qp is insufficiently accurate.
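    The key property of the objective described above is insensitivity to trace-by-trace amplitude scaling; a minimal sketch of such a normalized cross-correlation objective, evaluated on random traces, is given below (the full migration workflow is of course not reproduced).

    ```python
    # Normalized cross-correlation objective: sensitive to waveform similarity, not amplitude.
    import numpy as np

    def normalized_xcorr_objective(pred, obs, eps=1e-12):
        """Sum over traces of <p, d> / (||p|| ||d||); maximal when traces match in shape."""
        num = np.sum(pred * obs, axis=-1)
        den = np.linalg.norm(pred, axis=-1) * np.linalg.norm(obs, axis=-1) + eps
        return np.sum(num / den)

    rng = np.random.default_rng(8)
    obs = rng.normal(size=(50, 1000))                  # 50 traces, 1000 time samples
    pred_scaled = 0.3 * obs                            # correct phase, wrong amplitude
    print(normalized_xcorr_objective(pred_scaled, obs))                 # = 50, amplitude ignored
    print(normalized_xcorr_objective(rng.normal(size=obs.shape), obs))  # ~ 0 for unrelated traces
    ```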

  4. A cross-correlation objective function for least-squares migration and visco-acoustic imaging

    KAUST Repository

    Dutta, Gaurav; Sinha, Mrinal; Schuster, Gerard T.

    2014-01-01

    Conventional acoustic least-squares migration inverts for a reflectivity image that best matches the amplitudes of the observed data. However, for field data applications, it is not easy to match the recorded amplitudes because of the visco-elastic nature of the earth and inaccuracies in the estimation of source signature and strength at different shot locations. To relax the requirement for strong amplitude matching of least-squares migration, we use a normalized cross-correlation objective function that is only sensitive to the similarity between the predicted and the observed data. Such a normalized cross-correlation objective function is also equivalent to a time-domain phase inversion method where the main emphasis is only on matching the phase of the data rather than the amplitude. Numerical tests on synthetic and field data show that such an objective function can be used as an alternative to visco-acoustic least-squares reverse time migration (Qp-LSRTM) when there is strong attenuation in the subsurface and the estimation of the attenuation parameter Qp is insufficiently accurate.

  5. Additive survival least square support vector machines: A simulation study and its application to cervical cancer prediction

    Science.gov (United States)

    Khotimah, Chusnul; Purnami, Santi Wulan; Prastyo, Dedy Dwi; Chosuvivatwong, Virasakdi; Sriplung, Hutcha

    2017-11-01

    Support Vector Machines (SVMs) have been widely applied for prediction in many fields. Recently, SVM has also been developed for survival analysis. In this study, the Additive Survival Least Square SVM (A-SURLSSVM) approach is used to analyze a cervical cancer dataset, and its performance is compared with the Cox model as a benchmark. The comparison is evaluated based on the prognostic indices produced: the concordance index (c-index), log rank, and hazard ratio. A higher prognostic index represents better performance of the corresponding method. This work also applied feature selection to choose important features using a backward elimination technique based on the c-index criterion. The cervical cancer dataset consists of 172 patients. The empirical results show that nine out of the twelve features (age at marriage, age at first menstruation, age, parity, type of treatment, history of family planning, stadium, duration of menstruation, and anemia status) are selected as relevant features that affect the survival time of cervical cancer patients. In addition, the performance of the proposed method is evaluated through a simulation study with different numbers of features and censoring percentages. Two out of three performance measures (c-index and hazard ratio) obtained from A-SURLSSVM consistently yield better results than those obtained from the Cox model when applied to both the simulated and the cervical cancer data. Moreover, the simulation study showed that A-SURLSSVM performs better when the percentage of censored data is small.
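    The main performance measure quoted, the concordance index, is simple to compute from scratch for right-censored data; the sketch below does so on a tiny invented example and does not depend on any particular survival-analysis library.

    ```python
    # Concordance index (c-index) for right-censored survival data, written from scratch.
    import numpy as np

    def concordance_index(time, event, risk):
        """Fraction of usable pairs whose predicted risks order the observed times correctly."""
        concordant, usable = 0.0, 0
        n = len(time)
        for i in range(n):
            for j in range(n):
                if time[i] < time[j] and event[i]:   # usable pair: earlier time is an observed event
                    usable += 1
                    if risk[i] > risk[j]:
                        concordant += 1.0
                    elif risk[i] == risk[j]:
                        concordant += 0.5
        return concordant / usable

    time  = np.array([5, 8, 12, 20, 25, 30])
    event = np.array([1, 1, 0, 1, 0, 1])                 # 1 = event observed, 0 = censored
    risk  = np.array([2.1, 1.8, 1.0, 0.9, 0.5, 0.2])     # higher predicted risk = worse prognosis
    print(f"c-index = {concordance_index(time, event, risk):.2f}")
    ```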

  6. Sulfur Speciation of Crude Oils by Partial Least Squares Regression Modeling of Their Infrared Spectra

    NARCIS (Netherlands)

    de Peinder, P.; Visser, T.; Wagemans, R.W.P.; Blomberg, J.; Chaabani, H.; Soulimani, F.; Weckhuysen, B.M.

    2013-01-01

    Research has been carried out to determine the feasibility of partial least-squares regression (PLS) modeling of infrared (IR) spectra of crude oils as a tool for fast sulfur speciation. The study is a continuation of a previously developed method to predict long and short residue properties of…

  7. Partial update least-square adaptive filtering

    CERN Document Server

    Xie, Bei

    2014-01-01

    Adaptive filters play an important role in fields related to digital signal processing and communication, such as system identification, noise cancellation, channel equalization, and beamforming. In practical applications, the computational complexity of an adaptive filter is an important consideration. The Least Mean Square (LMS) algorithm is widely used because of its low computational complexity (O(N)) and simplicity of implementation. The least squares algorithms, such as Recursive Least Squares (RLS), Conjugate Gradient (CG), and Euclidean Direction Search (EDS), can converge faster…
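    The complexity-reduction idea behind partial-update adaptive filtering is that only a subset of the coefficients is updated at each iteration; a minimal LMS sketch with a simple round-robin (periodic) partial update is given below for system identification. The update schedule and parameters are illustrative choices, not the book's specific algorithms.

    ```python
    # LMS adaptive filter with a round-robin partial-update rule (system identification demo).
    import numpy as np

    def partial_update_lms(x, d, n_taps=16, mu=0.01, n_update=4):
        w = np.zeros(n_taps)
        errors = np.zeros(len(x))
        for k in range(n_taps - 1, len(x)):
            u = x[k - n_taps + 1:k + 1][::-1]              # regressor, most recent sample first
            e = d[k] - w @ u
            idx = np.arange(k, k + n_update) % n_taps      # rotating subset of taps to update
            w[idx] += mu * e * u[idx]                      # update only that subset
            errors[k] = e
        return w, errors

    rng = np.random.default_rng(9)
    x = rng.normal(size=5000)                              # white input
    h = rng.normal(size=16) * np.exp(-0.3 * np.arange(16)) # unknown system to identify
    d = np.convolve(x, h)[:len(x)] + rng.normal(0, 0.01, len(x))
    w, e = partial_update_lms(x, d)
    print("steady-state MSE:", np.mean(e[-500:] ** 2))
    ```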

  8. The inverse square law of gravitation

    International Nuclear Information System (INIS)

    Cook, A.H.

    1987-01-01

    The inverse square law of gravitation is very well established over the distances of celestial mechanics, while in electrostatics the law has been shown to be followed to very high precision. However, it is only within the last century that any laboratory experiments have been made to test the inverse square law for gravitation, and all but one has been carried out in the last ten years. At the same time, there has been considerable interest in the possibility of deviations from the inverse square law, either because of a possible bearing on unified theories of forces, including gravitation or, most recently, because of a possible additional fifth force of nature. In this article the various lines of evidence for the inverse square law are summarized, with emphasis upon the recent laboratory experiments. (author)

  9. Dual stacked partial least squares for analysis of near-infrared spectra

    Energy Technology Data Exchange (ETDEWEB)

    Bi, Yiming [Institute of Automation, Chinese Academy of Sciences, 100190 Beijing (China); Xie, Qiong, E-mail: yimbi@163.com [Institute of Automation, Chinese Academy of Sciences, 100190 Beijing (China); Peng, Silong; Tang, Liang; Hu, Yong; Tan, Jie [Institute of Automation, Chinese Academy of Sciences, 100190 Beijing (China); Zhao, Yuhui [School of Economics and Business, Northeastern University at Qinhuangdao, 066000 Qinhuangdao City (China); Li, Changwen [Food Research Institute of Tianjin Tasly Group, 300410 Tianjin (China)

    2013-08-20

    Highlights: • Dual stacking steps are used for multivariate calibration of near-infrared spectra. • A selective weighting strategy is introduced in which only a subset of all available sub-models is used for model fusion. • Using two public near-infrared datasets, the proposed method achieved competitive results. • The method can be widely applied in many fields, such as mid-infrared and Raman spectral data. -- Abstract: A new ensemble learning algorithm is presented for quantitative analysis of near-infrared spectra. The algorithm contains two steps of stacked regression and Partial Least Squares (PLS), termed the Dual Stacked Partial Least Squares (DSPLS) algorithm. First, several sub-models were generated from the whole calibration set. The inner-stack step was implemented on sub-intervals of the spectrum. Then the outer-stack step was used to combine these sub-models. Several combination rules for the outer-stack step were analyzed for the proposed DSPLS algorithm. In addition, a novel selective weighting rule was involved to select a subset of all available sub-models. Experiments on two public near-infrared datasets demonstrate that the proposed DSPLS with the selective weighting rule provided superior prediction performance and outperformed the conventional PLS algorithm. Compared with a single model, the new ensemble model can provide more robust prediction results and can be considered an alternative choice for quantitative analytical applications.

  10. Dual stacked partial least squares for analysis of near-infrared spectra

    International Nuclear Information System (INIS)

    Bi, Yiming; Xie, Qiong; Peng, Silong; Tang, Liang; Hu, Yong; Tan, Jie; Zhao, Yuhui; Li, Changwen

    2013-01-01

    Graphical abstract: -- Highlights: •Dual stacking steps are used for multivariate calibration of near-infrared spectra. •A selective weighting strategy is introduced in which only a subset of the available sub-models is used for model fusion. •Using two public near-infrared datasets, the proposed method achieved competitive results. •The method can be widely applied in many fields, such as mid-infrared and Raman spectral data. -- Abstract: A new ensemble learning algorithm is presented for quantitative analysis of near-infrared spectra. The algorithm combines two stacking steps with Partial Least Squares (PLS) and is termed the Dual Stacked Partial Least Squares (DSPLS) algorithm. First, several sub-models were generated from the whole calibration set. The inner-stack step was implemented on sub-intervals of the spectrum. Then the outer-stack step was used to combine these sub-models. Several combination rules for the outer-stack step were analyzed for the proposed DSPLS algorithm. In addition, a novel selective weighting rule was also introduced to select a subset of all available sub-models. Experiments on two public near-infrared datasets demonstrate that the proposed DSPLS with the selective weighting rule provided superior prediction performance and outperformed the conventional PLS algorithm. Compared with the single model, the new ensemble model can provide more robust prediction results and can be considered an alternative choice for quantitative analytical applications.

  11. Positive solution of non-square fully Fuzzy linear system of equation in general form using least square method

    Directory of Open Access Journals (Sweden)

    Reza Ezzati

    2014-08-01

    Full Text Available In this paper, we propose the least squares method for computing the positive solution of a non-square fully fuzzy linear system. To this end, we use Kaufmann's arithmetic operations on fuzzy numbers [17]. We first consider the existence of an exact solution using the pseudoinverse; if it does not satisfy the positivity condition, we compute the core of the fuzzy vector and then obtain the right and left spreads of the positive fuzzy vector by introducing a constrained least squares problem. Using our proposed method, a non-square fully fuzzy linear system of equations always has a solution. Finally, we illustrate the efficiency of the proposed method by solving some numerical examples.

  12. Chaos Time Series Prediction Based on Membrane Optimization Algorithms

    Directory of Open Access Journals (Sweden)

    Meng Li

    2015-01-01

    Full Text Available This paper puts forward a prediction model based on a membrane computing optimization algorithm for chaos time series; the model simultaneously optimizes the parameters of phase space reconstruction (τ, m) and of the least squares support vector machine (LS-SVM) (γ, σ) by using the membrane computing optimization algorithm. It is an important basis for spectrum management to predict accurately the change trend of parameters in the electromagnetic environment, which can help decision makers to adopt an optimal action. Then, the model presented in this paper is used to forecast the band occupancy rate of the frequency modulation (FM) broadcasting band and the interphone band. To show the applicability and superiority of the proposed model, this paper compares the forecast model presented in it with conventional similar models. The experimental results show that whether for single-step or multistep prediction, the proposed model performs best based on three error measures, namely, normalized mean square error (NMSE), root mean square error (RMSE), and mean absolute percentage error (MAPE).
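
    For reference, the three error measures named in the abstract can be computed as sketched below. This is only an illustration; conventions for the NMSE normalization vary between papers (the target variance is used here), and MAPE assumes the true values are nonzero.

    import numpy as np

    def rmse(y_true, y_pred):
        """Root mean square error."""
        return np.sqrt(np.mean((y_true - y_pred) ** 2))

    def nmse(y_true, y_pred):
        """Normalized mean square error: MSE divided by the target variance (one common convention)."""
        return np.mean((y_true - y_pred) ** 2) / np.var(y_true)

    def mape(y_true, y_pred):
        """Mean absolute percentage error; assumes y_true contains no zeros."""
        return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))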

  13. Mathematical Construction of Magic Squares Utilizing Base-N Arithmetic

    Science.gov (United States)

    O'Brien, Thomas D.

    2006-01-01

    Magic squares have been of interest as a source of recreation for over 4,500 years. A magic square consists of a square array of n² positive and distinct integers arranged so that the sum of any column, row, or main diagonal is the same. In particular, an array of consecutive integers from 1 to n² forming an n×n magic square is…
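
    The defining property quoted above (equal row, column and main-diagonal sums) is easy to check programmatically; the sketch below uses the classic 3×3 square built from the integers 1 to 9 as an example.

    import numpy as np

    def is_magic(square):
        """Check that all rows, columns and both main diagonals share the same sum."""
        a = np.asarray(square)
        n = a.shape[0]
        target = a[0].sum()
        sums = list(a.sum(axis=0)) + list(a.sum(axis=1))
        sums += [np.trace(a), np.trace(np.fliplr(a))]
        return a.shape == (n, n) and all(s == target for s in sums)

    # The classic 3x3 magic square built from 1..9 (every line sums to 15).
    print(is_magic([[2, 7, 6], [9, 5, 1], [4, 3, 8]]))   # True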

  14. BIOMECHANICS. Why the seahorse tail is square.

    Science.gov (United States)

    Porter, Michael M; Adriaens, Dominique; Hatton, Ross L; Meyers, Marc A; McKittrick, Joanna

    2015-07-03

    Whereas the predominant shapes of most animal tails are cylindrical, seahorse tails are square prisms. Seahorses use their tails as flexible grasping appendages, in spite of a rigid bony armor that fully encases their bodies. We explore the mechanics of two three-dimensional-printed models that mimic either the natural (square prism) or hypothetical (cylindrical) architecture of a seahorse tail to uncover whether or not the square geometry provides any functional advantages. Our results show that the square prism is more resilient when crushed and provides a mechanism for preserving articulatory organization upon extensive bending and twisting, as compared with its cylindrical counterpart. Thus, the square architecture is better than the circular one in the context of two integrated functions: grasping ability and crushing resistance. Copyright © 2015, American Association for the Advancement of Science.

  15. Modeling and control of PEMFC based on least squares support vector machines

    International Nuclear Information System (INIS)

    Li Xi; Cao Guangyi; Zhu Xinjian

    2006-01-01

    The proton exchange membrane fuel cell (PEMFC) is one of the most important power supplies. The operating temperature of the stack is an important controlled variable, which impacts the performance of the PEMFC. In order to improve the generating performance of the PEMFC, prolong its life and guarantee safety, credibility and low cost of the PEMFC system, it must be controlled efficiently. A nonlinear predictive control algorithm based on a least squares support vector machine (LS-SVM) model is presented for a family of complex systems with severe nonlinearity, such as the PEMFC, in this paper. The nonlinear offline model of the PEMFC is built by an LS-SVM model with a radial basis function (RBF) kernel so as to implement nonlinear predictive control of the plant. During PEMFC operation, the offline model is linearized at each sampling instant, and the generalized predictive control (GPC) algorithm is applied to the predictive control of the plant. Experimental results demonstrate the effectiveness and advantages of this approach.
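
    For readers unfamiliar with the model class, the following is a minimal NumPy sketch of LS-SVM regression with an RBF kernel, solving the standard LS-SVM dual linear system. It is a generic illustration of the model family named in the abstract, not the identification procedure used in the paper; the data shapes and the hyperparameters gamma and sigma are placeholders.

    import numpy as np

    def rbf_kernel(A, B, sigma=1.0):
        """RBF kernel matrix between rows of A (m, d) and rows of B (n, d)."""
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
        """Solve the LS-SVM dual system [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
        n = len(y)
        K = rbf_kernel(X, X, sigma)
        A = np.zeros((n + 1, n + 1))
        A[0, 1:] = 1.0
        A[1:, 0] = 1.0
        A[1:, 1:] = K + np.eye(n) / gamma
        rhs = np.concatenate(([0.0], y))
        sol = np.linalg.solve(A, rhs)
        return sol[0], sol[1:]             # bias b, dual weights alpha

    def lssvm_predict(X_train, b, alpha, X_new, sigma=1.0):
        """Predict with f(x) = sum_i alpha_i k(x, x_i) + b."""
        return rbf_kernel(X_new, X_train, sigma) @ alpha + b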

  16. A simulation model for visitors’ thermal comfort at urban public squares using non-probabilistic binary-linear classifier through soft-computing methodologies

    International Nuclear Information System (INIS)

    Kariminia, Shahab; Shamshirband, Shahaboddin; Hashim, Roslan; Saberi, Ahmadreza; Petković, Dalibor; Roy, Chandrabhushan; Motamedi, Shervin

    2016-01-01

    Outdoor life in cities is declining because of recent rapid urbanisation carried out without considering climate-responsive urban design concepts. Such inadvertent climatic modifications at the indoor level have imposed considerable demand on urban energy resources. It is important to provide a comfortable ambient climate at open urban squares, and researchers need to predict the comfortable conditions at such outdoor squares. The main objective of this study is to predict the visitors' outdoor comfort indices by using a developed computational model termed SVM-WAVELET (Support Vector Machines combined with the Discrete Wavelet Transform algorithm). For data collection, the field study was conducted in downtown Isfahan, Iran (51°41′ E, 32°37′ N) with hot and arid summers. Based on different environmental elements, four separate locations were monitored across two public squares. Meteorological data were measured simultaneously by surveying the visitors' thermal sensations. According to the subjects' thermal feeling and their characteristics, their level of comfort was estimated. Further, the adapted computational model was used to estimate the visitors' thermal sensations in terms of thermal comfort indices. The SVM-WAVELET results indicate that the R² values for the input parameters, including thermal sensation, PMV (the predicted mean vote), PET (physiologically equivalent temperature), SET (standard effective temperature) and Tmrt, were estimated at 0.482, 0.943, 0.988, 0.969 and 0.840, respectively. - Highlights: • To explore the visitors' thermal sensation at urban public squares. • This article introduces findings of outdoor comfort prediction. • The developed SVM-WAVELET soft-computing technique was used. • SVM-WAVELET estimation results are more reliable and accurate.

  17. The Cosmic Microwave Background

    Directory of Open Access Journals (Sweden)

    Jones Aled

    1998-01-01

    Full Text Available We present a brief review of current theory and observations of the cosmic microwave background (CMB). New predictions for cosmological defect theories and an overview of the inflationary theory are discussed. Recent results from various observations of the anisotropies of the microwave background are described and a summary of the proposed experiments is presented. A new analysis technique based on Bayesian statistics that can be used to reconstruct the underlying sky fluctuations is summarised. Current CMB data is used to set some preliminary constraints on the values of the fundamental cosmological parameters $\Omega$ and $H_0$ using the maximum likelihood technique. In addition, secondary anisotropies due to the Sunyaev-Zel'dovich effect are described.

  18. A Degree-Scale Measurement of the Anisotropy in the Cosmic Microwave Background

    Science.gov (United States)

    Wollack, Ed; Jarosik, Norm; Netterfield, Barth; Page, Lyman; Wilkinson, David

    1995-01-01

    We report the detection of anisotropy in the microwave sky at 30 GHz and at 1° angular scales. The most economical interpretation of the data is that the fluctuations are intrinsic to the cosmic microwave background. However, galactic free-free emission is ruled out with only 90% confidence. The most likely root-mean-squared amplitude of the fluctuations, assuming they are described by a Gaussian auto-correlation function with a coherence angle of 1.2°, is 41 (+16/−13) μK. We also present limits on the anisotropy of the polarization of the cosmic microwave background.

  19. Non-stationary least-squares complex decomposition for microseismic noise attenuation

    Science.gov (United States)

    Chen, Yangkang

    2018-06-01

    Microseismic data processing and imaging are crucial for subsurface real-time monitoring during the hydraulic fracturing process. Unlike active-source seismic events or large-scale earthquake events, a microseismic event is usually of very small magnitude, which makes its detection challenging. The biggest challenge with microseismic data is its low signal-to-noise ratio. Because of the small energy difference between the effective microseismic signal and the ambient noise, the effective signals are usually buried in strong random noise. I propose a useful microseismic denoising algorithm that is based on decomposing a microseismic trace into an ensemble of components using least-squares inversion. Based on the predictive property of the useful microseismic event along the time direction, the random noise can be filtered out via least-squares fitting of multiple damping exponential components. The method is flexible and almost automated since the only parameter that needs to be defined is the decomposition number. I use some synthetic and real data examples to demonstrate the potential of the algorithm in processing complicated microseismic data sets.
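
    The abstract does not give the exact parameterization of the damped components, so the sketch below only illustrates the general idea: build a small dictionary of exponentially damped cosines and sines on a user-chosen grid of frequencies and damping rates, and fit it to the trace by linear least squares; the fitted reconstruction acts as the denoised signal.

    import numpy as np

    def damped_fit(trace, dt, freqs, dampings):
        """Least-squares fit of a trace with a dictionary of damped cosines/sines.

        freqs and dampings are user-chosen grids (in Hz and 1/s); this is a
        generic sketch, not the paper's exact decomposition.
        """
        t = np.arange(len(trace)) * dt
        cols = []
        for f in freqs:
            for a in dampings:
                env = np.exp(-a * t)                      # exponential damping
                cols.append(env * np.cos(2 * np.pi * f * t))
                cols.append(env * np.sin(2 * np.pi * f * t))
        G = np.column_stack(cols)                         # dictionary matrix
        coef, *_ = np.linalg.lstsq(G, trace, rcond=None)  # least-squares inversion
        return G @ coef                                   # denoised reconstruction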

  20. Delayed ripple counter simplifies square-root computation

    Science.gov (United States)

    Cliff, R.

    1965-01-01

    Ripple subtract technique simplifies the logic circuitry required in a binary computing device to derive the square root of a number. Successively higher numbers are subtracted from a register containing the number out of which the square root is to be extracted. The last number subtracted will be the closest integer to the square root of the number.
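
    The abstract does not state which "successively higher numbers" are subtracted; the classic ripple-subtract trick uses successive odd integers, since 1 + 3 + ... + (2k − 1) = k², so the count of successful subtractions is the integer square root. The sketch below assumes that scheme.

    def isqrt_by_subtraction(n):
        """Integer square root via repeated subtraction of successive odd numbers.

        Uses the identity 1 + 3 + 5 + ... + (2k - 1) = k**2, so the number of
        successful subtractions equals floor(sqrt(n)).
        """
        odd, count = 1, 0
        while n >= odd:
            n -= odd
            odd += 2
            count += 1
        return count

    assert isqrt_by_subtraction(30) == 5   # sqrt(30) is about 5.48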

  1. Speech Intelligibility Prediction Based on Mutual Information

    DEFF Research Database (Denmark)

    Jensen, Jesper; Taal, Cees H.

    2014-01-01

    This paper deals with the problem of predicting the average intelligibility of noisy and potentially processed speech signals, as observed by a group of normal hearing listeners. We propose a model which performs this prediction based on the hypothesis that intelligibility is monotonically related...... to the mutual information between critical-band amplitude envelopes of the clean signal and the corresponding noisy/processed signal. The resulting intelligibility predictor turns out to be a simple function of the mean-square error (mse) that arises when estimating a clean critical-band amplitude using...... a minimum mean-square error (mmse) estimator based on the noisy/processed amplitude. The proposed model predicts that speech intelligibility cannot be improved by any processing of noisy critical-band amplitudes. Furthermore, the proposed intelligibility predictor performs well ( ρ > 0.95) in predicting...

  2. [Study on Vis/NIR spectra detecting system for watermelons and quality predicting in motion].

    Science.gov (United States)

    Tian, Hai-Qing; Ying, Yi-Bin; Xu, Hui-Rong; Lu, Hui-Shan; Xie, Li-Juan

    2009-06-01

    To apply the Vis/NIR diffuse transmittance technique to quality prediction of watermelons in motion, the dynamic spectra detecting system was rebuilt. Spectra detecting experiments were conducted and the effects of noise caused by motion on the spectra were analyzed. Then the least-squares filtering method and the Norris differential filtering method were adopted to eliminate the effects of noise through spectra smoothing, and statistical models between the spectra and soluble solids content were developed using the partial least squares method. The performance of different models was assessed in terms of the correlation coefficients (r) of the validation set of samples, root mean square errors of calibration (RMSEC) and root mean square errors of prediction (RMSEP). Calibration and prediction results indicated that the Norris differential method was an effective method to smooth spectra and improve calibration and prediction results, especially with r of 0.895, RMSEC of 0.549, and RMSEP of 0.760 for the calibration and prediction results of the first derivative spectra.

  3. Simplified neural networks for solving linear least squares and total least squares problems in real time.

    Science.gov (United States)

    Cichocki, A; Unbehauen, R

    1994-01-01

    In this paper a new class of simplified low-cost analog artificial neural networks with on-chip adaptive learning algorithms is proposed for solving linear systems of algebraic equations in real time. The proposed learning algorithms for linear least squares (LS), total least squares (TLS) and data least squares (DLS) problems can be considered as modifications and extensions of well-known algorithms: the row-action projection-Kaczmarz algorithm and/or the LMS (Adaline) Widrow-Hoff algorithms. The algorithms can be applied to any problem which can be formulated as a linear regression problem. The correctness and high performance of the proposed neural networks are illustrated by extensive computer simulation results.

  4. The Multivariate Regression Statistics Strategy to Investigate Content-Effect Correlation of Multiple Components in Traditional Chinese Medicine Based on a Partial Least Squares Method.

    Science.gov (United States)

    Peng, Ying; Li, Su-Ning; Pei, Xuexue; Hao, Kun

    2018-03-01

    A multivariate regression statistics strategy was developed to clarify the multi-component content-effect correlation of panax ginseng saponins extract and predict the pharmacological effect from component content. In example 1, firstly, we compared pharmacological effects between panax ginseng saponins extract and individual saponin combinations. Secondly, we examined the anti-platelet aggregation effect in seven different saponin combinations of ginsenoside Rb1, Rg1, Rh, Rd, Ra3 and notoginsenoside R1. Finally, the correlation between anti-platelet aggregation and the content of multiple components was analyzed by a partial least squares algorithm. In example 2, firstly, 18 common peaks were identified in ten different batches of panax ginseng saponins extracts from different origins. Then, we investigated the anti-myocardial ischemia reperfusion injury effects of the ten different panax ginseng saponins extracts. Finally, the correlation between the fingerprints and the cardioprotective effects was analyzed by a partial least squares algorithm. Both in example 1 and 2, the relationship between the components content and pharmacological effect was modeled well by the partial least squares regression equations. Importantly, the predicted effect curve was close to the observed data points marked on the partial least squares regression model. This study has given evidence that the multi-component content is promising information for predicting the pharmacological effects of traditional Chinese medicine.
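
    A minimal scikit-learn sketch of the kind of partial least squares regression described here (component contents as predictors, a pharmacological effect as response) is shown below; the data are synthetic placeholders and the number of latent variables is an arbitrary choice, not values from the study.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    # Placeholder data: rows = samples (extracts), columns = component contents.
    rng = np.random.default_rng(0)
    X = rng.random((20, 6))                  # e.g. six saponin contents per sample
    y = X @ np.array([0.5, 1.2, 0.1, 0.0, 0.8, 0.3]) + 0.05 * rng.standard_normal(20)

    pls = PLSRegression(n_components=3)      # number of latent variables is a modeling choice
    pls.fit(X, y)
    y_hat = pls.predict(X).ravel()           # predicted effect for each sample
    print("R^2 on the fitted data:", pls.score(X, y))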

  5. Data Series Subtraction with Unknown and Unmodeled Background Noise

    Science.gov (United States)

    Vitale, Stefano; Congedo, Giuseppe; Dolesi, Rita; Ferroni, Valerio; Hueller, Mauro; Vetrugno, Daniele; Weber, William Joseph; Audley, Heather; Danzmann, Karsten; Diepholz, Ingo

    2014-01-01

    LISA Pathfinder (LPF), the precursor mission to a gravitational wave observatory of the European Space Agency, will measure the degree to which two test masses can be put into free fall, aiming to demonstrate a suppression of disturbance forces corresponding to a residual relative acceleration with a power spectral density (PSD) below (30 fm s⁻²/√Hz)² around 1 mHz. In LPF data analysis, the disturbance forces are obtained as the difference between the acceleration data and a linear combination of other measured data series. In many circumstances, the coefficients for this linear combination are obtained by fitting these data series to the acceleration, and the disturbance forces appear then as the data series of the residuals of the fit. Thus the background noise or, more precisely, its PSD, whose knowledge is needed to build up the likelihood function in ordinary maximum likelihood fitting, is here unknown, and its estimate constitutes instead one of the goals of the fit. In this paper we present a fitting method that does not require the knowledge of the PSD of the background noise. The method is based on the analytical marginalization of the posterior parameter probability density with respect to the background noise PSD, and returns an estimate both for the fitting parameters and for the PSD. We show that both these estimates are unbiased, and that, when using averaged Welch's periodograms for the residuals, the estimate of the PSD is consistent, as its error tends to zero with the inverse square root of the number of averaged periodograms. Additionally, we find that the method is equivalent to some implementations of iteratively reweighted least-squares fitting. We have tested the method both on simulated data of known PSD and on data from several experiments performed with the LISA Pathfinder end-to-end mission simulator.

  6. The Square Light Clock and Special Relativity

    Science.gov (United States)

    Galli, J. Ronald; Amiri, Farhang

    2012-01-01

    A thought experiment that includes a square light clock is similar to the traditional vertical light beam and mirror clock, except it is made up of four mirrors placed at a 45° angle at each corner of a square of length L₀, shown in Fig. 1. Here we have shown the events as measured in the rest frame of the square light clock. By…

  7. General background conditions for K-bounce and adiabaticity

    Energy Technology Data Exchange (ETDEWEB)

    Romano, Antonio Enea [University of Crete, Department of Physics, Heraklion (Greece); Kyoto University, Yukawa Institute for Theoretical Physics, Kyoto (Japan); Universidad de Antioquia, Instituto de Fisica, A.A.1226, Medellin (Colombia)

    2017-03-15

    We study the background conditions for a bounce uniquely driven by a single scalar field model with a generalized kinetic term K(X), without any additional matter field. At the background level we impose the existence of two turning points where the derivative of the Hubble parameter H changes sign and of a bounce point where the Hubble parameter vanishes. We find the conditions for K(X) and the potential which ensure the above requirements. We then give the examples of two models constructed according to these conditions. One is based on a quadratic K(X), and the other on a K(X) which is avoiding divergences of the second time derivative of the scalar field, which may otherwise occur. An appropriate choice of the initial conditions can lead to a sequence of consecutive bounces, or oscillations of H. In the region where these models have a constant potential they are adiabatic on any scale and because of this they may not conserve curvature perturbations on super-horizon scales. While at the perturbation level one class of models is free from ghosts and singularities of the classical equations of motion, in general gradient instabilities are present around the bounce time, because the sign of the squared speed of sound is opposite to the sign of the time derivative of H. We discuss how this kind of instabilities could be avoided by modifying the Lagrangian by introducing Galilean terms in order to prevent a negative squared speed of sound around the bounce. (orig.)

  8. General background conditions for K-bounce and adiabaticity

    International Nuclear Information System (INIS)

    Romano, Antonio Enea

    2017-01-01

    We study the background conditions for a bounce uniquely driven by a single scalar field model with a generalized kinetic term K(X), without any additional matter field. At the background level we impose the existence of two turning points where the derivative of the Hubble parameter H changes sign and of a bounce point where the Hubble parameter vanishes. We find the conditions for K(X) and the potential which ensure the above requirements. We then give the examples of two models constructed according to these conditions. One is based on a quadratic K(X), and the other on a K(X) which is avoiding divergences of the second time derivative of the scalar field, which may otherwise occur. An appropriate choice of the initial conditions can lead to a sequence of consecutive bounces, or oscillations of H. In the region where these models have a constant potential they are adiabatic on any scale and because of this they may not conserve curvature perturbations on super-horizon scales. While at the perturbation level one class of models is free from ghosts and singularities of the classical equations of motion, in general gradient instabilities are present around the bounce time, because the sign of the squared speed of sound is opposite to the sign of the time derivative of H. We discuss how this kind of instabilities could be avoided by modifying the Lagrangian by introducing Galilean terms in order to prevent a negative squared speed of sound around the bounce. (orig.)

  9. Spectrum unfolding by the least-squares methods

    International Nuclear Information System (INIS)

    Perey, F.G.

    1977-01-01

    The method of least squares is briefly reviewed, and the conditions under which it may be used are stated. From this analysis, a least-squares approach to the solution of the dosimetry neutron spectrum unfolding problem is introduced. The mathematical solution to this least-squares problem is derived from the general solution. The existence of this solution is analyzed in some detail. A χ²-test is derived for the consistency of the input data which does not require the solution to be obtained first. The fact that the problem is technically nonlinear, but should be treated in general as a linear one, is argued. Therefore, the solution should not be obtained by iteration. Two interpretations are made for the solution of the code STAY'SL, which solves this least-squares problem. The relationship of the solution to this least-squares problem to those obtained currently by other methods of solving the dosimetry neutron spectrum unfolding problem is extensively discussed. It is shown that the least-squares method does not require more input information than would be needed by current methods in order to estimate the uncertainties in their solutions. From this discussion it is concluded that the proposed least-squares method does provide the best complete solution, with uncertainties, to the problem as it is understood now. Finally, some implications of this method are mentioned regarding future work required in order to exploit its potential fully.

  10. Space-time coupled spectral/hp least-squares finite element formulation for the incompressible Navier-Stokes equations

    International Nuclear Information System (INIS)

    Pontaza, J.P.; Reddy, J.N.

    2004-01-01

    We consider least-squares finite element models for the numerical solution of the non-stationary Navier-Stokes equations governing viscous incompressible fluid flows. The paper presents a formulation where the effects of space and time are coupled, resulting in a true space-time least-squares minimization procedure, as opposed to a space-time decoupled formulation where a least-squares minimization procedure is performed in space at each time step. The formulation is first presented for the linear advection-diffusion equation and then extended to the Navier-Stokes equations. The formulation has no time step stability restrictions and is spectrally accurate in both space and time. To allow the use of practical C⁰ element expansions in the resulting finite element model, the Navier-Stokes equations are expressed as an equivalent set of first-order equations by introducing vorticity as an additional independent variable and the least-squares method is used to develop the finite element model of the governing equations. High-order element expansions are used to construct the discrete model. The discrete model thus obtained is linearized by Newton's method, resulting in a linear system of equations with a symmetric positive definite coefficient matrix that is solved in a fully coupled manner by a preconditioned conjugate gradient method in matrix-free form. Spectral convergence of the L² least-squares functional and L² error norms in space-time is verified using a smooth solution to the two-dimensional non-stationary incompressible Navier-Stokes equations. Numerical results are presented for impulsively started lid-driven cavity flow, oscillatory lid-driven cavity flow, transient flow over a backward-facing step, and flow around a circular cylinder; the results demonstrate the predictive capability and robustness of the proposed formulation. Even though the space-time coupled formulation is emphasized, we also present the formulation and numerical results for least-squares

  11. Elmo bumpy square plasma confinement device

    Science.gov (United States)

    Owen, L.W.

    1985-01-01

    The invention is an Elmo bumpy type plasma confinement device having a polygonal configuration of closed magnetic field lines for improved plasma confinement. In the preferred embodiment, the device is of a square configuration which is referred to as an Elmo bumpy square (EBS). The EBS is formed by four linear magnetic mirror sections each comprising a plurality of axisymmetric assemblies connected in series and linked by 90° sections of high magnetic field toroidal solenoid type field generating coils. These coils provide corner confinement with a minimum of radial dispersion of the confined plasma to minimize the detrimental effects of the toroidal curvature of the magnetic field. Each corner is formed by a plurality of circular or elliptical coils aligned about the corner radius to provide maximum continuity in the closing of the magnetic field lines about the square configuration, confining the plasma within a vacuum vessel located within the various coils forming the square configuration confinement geometry.

  12. Filter Tuning Using the Chi-Squared Statistic

    Science.gov (United States)

    Lilly-Salkowski, Tyler

    2017-01-01

    The Goddard Space Flight Center (GSFC) Flight Dynamics Facility (FDF) performs orbit determination (OD) for the Aqua and Aura satellites. Both satellites are located in low Earth orbit (LEO), and are part of what is considered the A-Train satellite constellation. Both spacecraft are currently in the science phase of their respective missions. The FDF has recently been tasked with delivering definitive covariance for each satellite. The main source of orbit determination used for these missions is the Orbit Determination Toolkit developed by Analytical Graphics Inc. (AGI). This software uses an Extended Kalman Filter (EKF) to estimate the states of both spacecraft. The filter incorporates force modelling, ground station and space network measurements to determine spacecraft states. It also generates a covariance at each measurement. This covariance can be useful for evaluating the overall performance of the tracking data measurements and the filter itself. An accurate covariance is also useful for covariance propagation, which is utilized in collision avoidance operations. It is also valuable when attempting to determine if the current orbital solution will meet mission requirements in the future. This paper examines the use of the Chi-square statistic as a means of evaluating filter performance. The Chi-square statistic is calculated to determine the realism of a covariance based on the prediction accuracy and the covariance values at a given point in time. Once calculated, it is the distribution of this statistic that provides insight on the accuracy of the covariance. For the EKF to correctly calculate the covariance, error models associated with tracking data measurements must be accurately tuned. Overestimating or underestimating these error values can have detrimental effects on the overall filter performance. The filter incorporates ground station measurements, which can be tuned based on the accuracy of the individual ground stations. It also includes
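
    The abstract does not spell out the exact formulation used by the FDF, but a common covariance-realism statistic of this kind is the Mahalanobis-type quantity e^T P^-1 e, which should follow a chi-square distribution with degrees of freedom equal to the state dimension if the covariance P is realistic. A generic sketch with illustrative numbers, not mission data:

    import numpy as np
    from scipy.stats import chi2

    def covariance_realism_stat(predicted_state, true_state, covariance):
        """Mahalanobis-type chi-square statistic e^T P^-1 e.

        If the covariance P is realistic, this statistic should follow a
        chi-square distribution with dof equal to the state dimension.
        """
        e = predicted_state - true_state
        return float(e @ np.linalg.solve(covariance, e))

    # Example: 3-D position error against a 3x3 covariance (illustrative numbers, in m and m^2).
    P = np.diag([25.0, 16.0, 9.0])
    stat = covariance_realism_stat(np.array([102.0, 51.0, -3.0]),
                                   np.array([100.0, 50.0, 0.0]),
                                   P)
    print(stat, "vs 95% bound", chi2.ppf(0.95, df=3))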

  13. Multivariate least-squares methods applied to the quantitative spectral analysis of multicomponent samples

    International Nuclear Information System (INIS)

    Haaland, D.M.; Easterling, R.G.; Vopicka, D.A.

    1985-01-01

    In an extension of earlier work, weighted multivariate least-squares methods of quantitative FT-IR analysis have been developed. A linear least-squares approximation to nonlinearities in the Beer-Lambert law is made by allowing the reference spectra to be a set of known mixtures. The incorporation of nonzero intercepts in the relation between absorbance and concentration further improves the approximation of nonlinearities while simultaneously accounting for nonzero spectra baselines. Pathlength variations are also accommodated in the analysis, and under certain conditions, unknown sample pathlengths can be determined. All spectral data are used to improve the precision and accuracy of the estimated concentrations. During the calibration phase of the analysis, pure component spectra are estimated from the standard mixture spectra. These can be compared with the measured pure component spectra to determine which vibrations experience nonlinear behavior. In the predictive phase of the analysis, the calculated spectra are used in our previous least-squares analysis to estimate sample component concentrations. These methods were applied to the analysis of the IR spectra of binary mixtures of esters. Even with severely overlapping spectral bands and nonlinearities in the Beer-Lambert law, the average relative error in the estimated concentration was <1%.

  14. Use of structure-activity landscape index curves and curve integrals to evaluate the performance of multiple machine learning prediction models

    Directory of Open Access Journals (Sweden)

    LeDonne Norman C

    2011-02-01

    Full Text Available Abstract Background Standard approaches to address the performance of predictive models that used common statistical measurements for the entire data set provide an overview of the average performance of the models across the entire predictive space, but give little insight into applicability of the model across the prediction space. Guha and Van Drie recently proposed the use of structure-activity landscape index (SALI curves via the SALI curve integral (SCI as a means to map the predictive power of computational models within the predictive space. This approach evaluates model performance by assessing the accuracy of pairwise predictions, comparing compound pairs in a manner similar to that done by medicinal chemists. Results The SALI approach was used to evaluate the performance of continuous prediction models for MDR1-MDCK in vitro efflux potential. Efflux models were built with ADMET Predictor neural net, support vector machine, kernel partial least squares, and multiple linear regression engines, as well as SIMCA-P+ partial least squares, and random forest from Pipeline Pilot as implemented by AstraZeneca, using molecular descriptors from SimulationsPlus and AstraZeneca. Conclusion The results indicate that the choice of training sets used to build the prediction models is of great importance in the resulting model quality and that the SCI values calculated for these models were very similar to their Kendall τ values, leading to our suggestion of an approach to use this SALI/SCI paradigm to evaluate predictive model performance that will allow more informed decisions regarding model utility. The use of SALI graphs and curves provides an additional level of quality assessment for predictive models.

  15. Non-linear HVAC computations using least square support vector machines

    International Nuclear Information System (INIS)

    Kumar, Mahendra; Kar, I.N.

    2009-01-01

    This paper aims to demonstrate application of least square support vector machines (LS-SVM) to model two complex heating, ventilating and air-conditioning (HVAC) relationships. The two applications considered are the estimation of the predicted mean vote (PMV) for thermal comfort and the generation of psychrometric chart. LS-SVM has the potential for quick, exact representations and also possesses a structure that facilitates hardware implementation. The results show very good agreement between function values computed from conventional model and LS-SVM model in real time. The robustness of LS-SVM models against input noises has also been analyzed.

  16. Sets of Mutually Orthogonal Sudoku Latin Squares

    Science.gov (United States)

    Vis, Timothy; Petersen, Ryan M.

    2009-01-01

    A Latin square of order "n" is an "n" x "n" array using n symbols, such that each symbol appears exactly once in each row and column. A set of Latin squares is mutually orthogonal if, when any two squares are superimposed, the ordered pairs of symbols appearing in the cells of the array are distinct. The popular puzzle Sudoku involves Latin squares with n = 9, along with the added condition that each of the 9…

  17. Direct-on-Filter α-Quartz Estimation in Respirable Coal Mine Dust Using Transmission Fourier Transform Infrared Spectrometry and Partial Least Squares Regression.

    Science.gov (United States)

    Miller, Arthur L; Weakley, Andrew Todd; Griffiths, Peter R; Cauda, Emanuele G; Bayman, Sean

    2017-05-01

    In order to help reduce silicosis in miners, the National Institute for Occupational Health and Safety (NIOSH) is developing field-portable methods for measuring airborne respirable crystalline silica (RCS), specifically the polymorph α-quartz, in mine dusts. In this study we demonstrate the feasibility of end-of-shift measurement of α-quartz using a direct-on-filter (DoF) method to analyze coal mine dust samples deposited onto polyvinyl chloride filters. The DoF method is potentially amenable for on-site analyses, but deviates from the current regulatory determination of RCS for coal mines by eliminating two sample preparation steps: ashing the sampling filter and redepositing the ash prior to quantification by Fourier transform infrared (FT-IR) spectrometry. In this study, the FT-IR spectra of 66 coal dust samples from active mines were used, and the RCS was quantified by using: (1) an ordinary least squares (OLS) calibration approach that utilizes standard silica material as done in the Mine Safety and Health Administration's P7 method; and (2) a partial least squares (PLS) regression approach. Both were capable of accounting for kaolinite, which can confound the IR analysis of silica. The OLS method utilized analytical standards for silica calibration and kaolin correction, resulting in a good linear correlation with P7 results and minimal bias but with the accuracy limited by the presence of kaolinite. The PLS approach also produced predictions well-correlated to the P7 method, as well as better accuracy in RCS prediction, and no bias due to variable kaolinite mass. Besides decreased sensitivity to mineral or substrate confounders, PLS has the advantage that the analyst is not required to correct for the presence of kaolinite or background interferences related to the substrate, making the method potentially viable for automated RCS prediction in the field. This study demonstrated the efficacy of FT-IR transmission spectrometry for silica determination in

  18. On square-free edge colorings of graphs

    DEFF Research Database (Denmark)

    Barat, Janos; Varju, P.P.

    2008-01-01

    An edge coloring of a graph is called square-free, if the sequence of colors on certain walks is not a square, that is not of the form x_1,...,x_m, x_1,...,x_m, for any m ∈ N. Recently, various classes of walks have been suggested to be considered in the above definition. We construct...... graphs, for which the minimum number of colors needed for a square-free coloring is different if the considered set of walks vary, solving a problem posed by Brešar and Klavžar. We also prove the following: if an edge coloring of G is not square-free (even in the most general sense), then the length

  19. All SQUARE

    CERN Multimedia

    1972-01-01

    With the existing systems for using the accelerated protons, it is possible to supply only one slow ejected beam (feeding the East Hall) and, at the same time, to have only a small percentage of the beam on an internal target (feeding the South Hall). The arrangement will be replaced by a new system called SQUARE (Semi-QUAdrupole Resonant Extraction) which will give greater flexibility in supplying the three areas.

  20. Optimization Method of Fusing Model Tree into Partial Least Squares

    Directory of Open Access Journals (Sweden)

    Yu Fang

    2017-01-01

    Full Text Available Partial Least Squares (PLS) cannot adapt to the characteristics of the data in many fields because of its own features: multiple independent variables, multiple dependent variables and non-linearity. However, a Model Tree (MT), which is made up of many multiple linear segments, adapts well to nonlinear functions. Based on this, a new method combining PLS and MT to analyse and predict data is proposed, which builds an MT from the principal components and the explanatory variables (the dependent variable extracted from PLS), and repeatedly extracts residual information to build further Model Trees until a satisfactory accuracy condition is met. Using the data of the maxingshigan decoction, whose monarch drug treats asthma or cough, and two sample sets from the UCI Machine Learning Repository, the experimental results show that the explanatory and predictive ability of the new method is improved.

  1. Non-spill control squared cascade

    International Nuclear Information System (INIS)

    Kai, Tsunetoshi; Inoue, Yoshiya; Oya, Akio; Suemori, Nobuo.

    1974-01-01

    Object: To reduce a mixed loss thus enhancing separating efficiency by the provision of a simple arrangement wherein a reflux portion in a conventional spill control squared cascade is replaced by a special stage including centrifugal separators. Structure: Steps in the form of a square cascade, in which a plurality of centrifugal separators are connected by pipe lines, are accumulated in multistage fashion to form a squared cascade. Between the adjoining steps is disposed a special stage including a centrifugal separator which receives both lean flow from the upper step and rich flow from the lower step. The centrifugal separator in the special stage has its rich side connected to the upper step and its lean side connected to the lower step. Special stages are each disposed at the upper side of the uppermost step and at the lower side of the lowermost step. (Kamimura, M.)

  2. Multiplier less high-speed squaring circuit for binary numbers

    Science.gov (United States)

    Sethi, Kabiraj; Panda, Rutuparna

    2015-03-01

    The squaring operation is important in many applications in signal processing, cryptography, etc. In general, squaring circuits reported in the literature use fast multipliers. A novel idea of a squaring circuit without using multipliers is proposed in this paper. An ancient Indian method used for squaring decimal numbers is extended here to binary numbers. The key to our success is that no multiplier is used. Instead, one squaring circuit is used. The hardware architecture of the proposed squaring circuit is presented. The design is coded in VHDL and synthesised and simulated in Xilinx ISE Design Suite 10.1 (Xilinx Inc., San Jose, CA, USA). It is implemented in a Xilinx Vertex 4vls15sf363-12 device (Xilinx Inc.). The results in terms of time delay and area are compared with both the modified Booth's algorithm and a squaring circuit using Vedic multipliers. Our proposed squaring circuit seems to have better performance in terms of both speed and area.
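
    The record does not detail the circuit's decomposition, but one well-known way to square using only shifts, adds and a single-bit test relies on the identity (2a + b)² = 4a² + 4ab + b², where b is the least significant bit, so the product ab is either 0 or a. The sketch below only illustrates the multiplier-less idea in software; it is not the proposed Vedic-style hardware architecture.

    def square_no_multiplier(x):
        """Square a non-negative integer using only shifts, adds and a 1-bit test.

        Based on (2a + b)^2 = 4*a^2 + 4*a*b + b^2 with b the least significant
        bit, so a*b is just 'a or 0' -- no general multiplier is needed.
        """
        if x == 0:
            return 0
        a, b = x >> 1, x & 1
        sq = square_no_multiplier(a) << 2          # 4 * a^2
        if b:
            sq += (a << 2) + 1                     # 4*a*b + b^2 with b = 1
        return sq

    assert all(square_no_multiplier(n) == n * n for n in range(1000))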

  3. Sums of squares of integers

    CERN Document Server

    Moreno, Carlos J

    2005-01-01

    Introduction Prerequisites Outline of Chapters 2 - 8 Elementary Methods Introduction Some Lemmas Two Fundamental Identities Euler's Recurrence for σ(n) More Identities Sums of Two Squares Sums of Four Squares Still More Identities Sums of Three Squares An Alternate Method Sums of Polygonal Numbers Exercises Bernoulli Numbers Overview Definition of the Bernoulli Numbers The Euler-MacLaurin Sum Formula The Riemann Zeta Function Signs of Bernoulli Numbers Alternate The von Staudt-Clausen Theorem Congruences of Voronoi and Kummer Irregular Primes Fractional Parts of Bernoulli Numbers Exercises Examples of Modular Forms Introduction An Example of Jacobi and Smith An Example of Ramanujan and Mordell An Example of Wilton: τ(n) Modulo 23 An Example of Hamburger Exercises Hecke's Theory of Modular Forms Introduction Modular Group Γ and its Subgroup Γ₀(N) Fundamental Domains for Γ and Γ₀(N) Integral Modular Forms Modular Forms of Type M_k(Γ₀(N); χ) and Euler-Poincaré Series Hecke Operators Dirichlet Series and ...

  4. The Multivariate Regression Statistics Strategy to Investigate Content-Effect Correlation of Multiple Components in Traditional Chinese Medicine Based on a Partial Least Squares Method

    Directory of Open Access Journals (Sweden)

    Ying Peng

    2018-03-01

    Full Text Available A multivariate regression statistics strategy was developed to clarify the multi-component content-effect correlation of panax ginseng saponins extract and predict the pharmacological effect from component content. In example 1, firstly, we compared pharmacological effects between panax ginseng saponins extract and individual saponin combinations. Secondly, we examined the anti-platelet aggregation effect in seven different saponin combinations of ginsenoside Rb1, Rg1, Rh, Rd, Ra3 and notoginsenoside R1. Finally, the correlation between anti-platelet aggregation and the content of multiple components was analyzed by a partial least squares algorithm. In example 2, firstly, 18 common peaks were identified in ten different batches of panax ginseng saponins extracts from different origins. Then, we investigated the anti-myocardial ischemia reperfusion injury effects of the ten different panax ginseng saponins extracts. Finally, the correlation between the fingerprints and the cardioprotective effects was analyzed by a partial least squares algorithm. Both in example 1 and 2, the relationship between the components content and pharmacological effect was modeled well by the partial least squares regression equations. Importantly, the predicted effect curve was close to the observed data points marked on the partial least squares regression model. This study has given evidence that the multi-component content is promising information for predicting the pharmacological effects of traditional Chinese medicine.

  5. Least-squares methods for identifying biochemical regulatory networks from noisy measurements

    Directory of Open Access Journals (Sweden)

    Heslop-Harrison Pat

    2007-01-01

    Full Text Available Abstract Background We consider the problem of identifying the dynamic interactions in biochemical networks from noisy experimental data. Typically, approaches for solving this problem make use of an estimation algorithm such as the well-known linear Least-Squares (LS) estimation technique. We demonstrate that when time-series measurements are corrupted by white noise and/or drift noise, more accurate and reliable identification of network interactions can be achieved by employing an estimation algorithm known as Constrained Total Least Squares (CTLS). The Total Least Squares (TLS) technique is a generalised least squares method to solve an overdetermined set of equations whose coefficients are noisy. The CTLS is a natural extension of TLS to the case where the noise components of the coefficients are correlated, as is usually the case with time-series measurements of concentrations and expression profiles in gene networks. Results The superior performance of the CTLS method in identifying network interactions is demonstrated on three examples: a genetic network containing four genes, a network describing p53 activity and mdm2 messenger RNA interactions, and a recently proposed kinetic model for interleukin (IL)-6 and (IL)-12b messenger RNA expression as a function of ATF3 and NF-κB promoter binding. For the first example, the CTLS significantly reduces the errors in the estimation of the Jacobian for the gene network. For the second, the CTLS reduces the errors from the measurements that are corrupted by white noise and the effect of neglected kinetics. For the third, it allows the correct identification, from noisy data, of the negative regulation of IL-6 and IL-12b by ATF3. Conclusion The significant improvements in performance demonstrated by the CTLS method under the wide range of conditions tested here, including different levels and types of measurement noise and different numbers of data points, suggests that its application will enable
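
    For context, plain Total Least Squares (the baseline that CTLS generalizes) can be solved with the SVD of the augmented matrix [A b]; the sketch below shows that baseline on synthetic data where both A and b are noisy. It is not the constrained variant developed in the paper, and the demo data are placeholders.

    import numpy as np

    def tls_solve(A, b):
        """Plain Total Least Squares for A x ≈ b via the SVD of the augmented matrix."""
        C = np.column_stack([A, b])
        _, _, Vt = np.linalg.svd(C)
        v = Vt[-1]                       # right singular vector of the smallest singular value
        if abs(v[-1]) < 1e-12:
            raise np.linalg.LinAlgError("TLS solution does not exist")
        return -v[:-1] / v[-1]

    # Demo: both A and b carry noise, as in the coefficient-noise setting above.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((100, 3))
    x_true = np.array([1.0, -2.0, 0.5])
    b = A @ x_true
    A_noisy = A + 0.05 * rng.standard_normal(A.shape)
    b_noisy = b + 0.05 * rng.standard_normal(b.shape)
    print(tls_solve(A_noisy, b_noisy))   # close to x_true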

  6. Vapor-liquid equilibrium and critical asymmetry of square well and short square well chain fluids.

    Science.gov (United States)

    Li, Liyan; Sun, Fangfang; Chen, Zhitong; Wang, Long; Cai, Jun

    2014-08-07

    The critical behavior of square well fluids with variable interaction ranges and of short square well chain fluids has been investigated by grand canonical ensemble Monte Carlo simulations. The critical temperatures and densities were estimated by a finite-size scaling analysis with the help of the histogram reweighting technique. The vapor-liquid coexistence curve in the near-critical region was determined using hyper-parallel tempering Monte Carlo simulations. The simulation results for coexistence diameters show that the contribution of |t|^(1-α) to the coexistence diameter dominates the singular behavior in all systems investigated. The contribution of |t|^(2β) to the coexistence diameter is larger for the system with a smaller interaction range λ. For short square well chain fluids, the longer the chain length, the larger the contribution of |t|^(2β).

  7. Status of the Simbol-X Background Simulation Activities

    Science.gov (United States)

    Tenzer, C.; Briel, U.; Bulgarelli, A.; Chipaux, R.; Claret, A.; Cusumano, G.; Dell'Orto, E.; Fioretti, V.; Foschini, L.; Hauf, S.; Kendziorra, E.; Kuster, M.; Laurent, P.; Tiengo, A.

    2009-05-01

    The Simbol-X background simulation group is working towards a simulation based background and mass model which can be used before and during the mission. Using the Geant4 toolkit, a Monte-Carlo code to simulate the detector background of the Simbol-X focal plane instrument has been developed with the aim to optimize the design of the instrument. Achieving an overall low instrument background has direct impact on the sensitivity of Simbol-X and thus will be crucial for the success of the mission. We present results of recent simulation studies concerning the shielding of the detectors with respect to the diffuse cosmic hard X-ray background and to the cosmic-ray proton induced background. Besides estimates of the level and spectral shape of the remaining background expected in the low and high energy detector, also anti-coincidence rates and resulting detector dead time predictions are discussed.

  8. Phase transition in a modified square Josephson-junction array

    CERN Document Server

    Han, J

    1999-01-01

    We study the phase transition in a modified square proximity-coupled Josephson-junction array with small superconducting islands at the center of each plaquette. We find that the modified square array undergoes a Kosterlitz-Thouless-Berezinskii-like phase transition, but at a lower temperature than the simple square array with the same single-junction critical current. The IV characteristics, as well as the phase transition, resemble qualitatively those of a disordered simple square array. The effects of the presence of the center islands in the modified square array are discussed.

  9. INVESTIGATION OF REPRESENTATIVE CHARACTERISTICS OF LUKIŠKIŲ SQUARE IN VILNIUS

    Directory of Open Access Journals (Sweden)

    Gintautas TIŠKUS

    2018-05-01

    Full Text Available Representative squares are sites where ideological monuments are erected, or squares adjoining important state or municipal buildings. A representative square is a mark or sign system intended to convey certain relevant information. This information may concern events or persons that are important and desirable to commemorate, or ideological and political motives that are to be given exclusive importance. In determining the representative structure of a square, causal relationships with the represented object are identified. The primary piece of information is the name. Subsequently, the type of object represented – an event or a personality – its connection to the square, the prominence of the mark on the square (dominant, noticeable or invisible), and the interaction between the mark and its environment are determined. After examining the structure of the square and writing the values in the matrix, we can determine the representation level of the square. A square can be considered representative when the environmental context and the level of its representational marks have a decisive or significant influence on the purpose of the square.

  10. A Weighted Least Squares Approach To Robustify Least Squares Estimates.

    Science.gov (United States)

    Lin, Chowhong; Davenport, Ernest C., Jr.

    This study developed a robust linear regression technique based on the idea of weighted least squares. In this technique, a subsample of the full data of interest is drawn, based on a measure of distance, and an initial set of regression coefficients is calculated. The rest of the data points are then taken into the subsample, one after another,…

  11. Online and Batch Supervised Background Estimation via L1 Regression

    KAUST Repository

    Dutta, Aritra

    2017-11-23

    We propose a surprisingly simple model for supervised video background estimation. Our model is based on $\ell_1$ regression. As existing methods for $\ell_1$ regression do not scale to high-resolution videos, we propose several simple and scalable methods for solving the problem, including iteratively reweighted least squares, a homotopy method, and stochastic gradient descent. We show through extensive experiments that our model and methods match or outperform the state-of-the-art online and batch methods in virtually all quantitative and qualitative measures.

  12. Online and Batch Supervised Background Estimation via L1 Regression

    KAUST Repository

    Dutta, Aritra; Richtarik, Peter

    2017-01-01

    We propose a surprisingly simple model for supervised video background estimation. Our model is based on $\ell_1$ regression. As existing methods for $\ell_1$ regression do not scale to high-resolution videos, we propose several simple and scalable methods for solving the problem, including iteratively reweighted least squares, a homotopy method, and stochastic gradient descent. We show through extensive experiments that our model and methods match or outperform the state-of-the-art online and batch methods in virtually all quantitative and qualitative measures.
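
    Iteratively reweighted least squares, one of the solvers listed in the abstract, can be sketched in a few lines: each iteration solves a weighted least-squares problem whose weights down-weight large residuals, approximating the ℓ1 objective. This is a generic sketch under that standard weighting, not the authors' implementation.

    import numpy as np

    def l1_regression_irls(A, b, n_iter=50, eps=1e-6):
        """Approximate min_x ||A x - b||_1 by iteratively reweighted least squares.

        Each iteration solves the weighted normal equations (A^T W A) x = A^T W b
        with weights w_i = 1 / max(|r_i|, eps), which down-weights large residuals.
        """
        x = np.linalg.lstsq(A, b, rcond=None)[0]      # ordinary least-squares start
        for _ in range(n_iter):
            r = A @ x - b
            w = 1.0 / np.maximum(np.abs(r), eps)
            Aw = A * w[:, None]                       # rows of A scaled by w_i
            x = np.linalg.solve(A.T @ Aw, Aw.T @ b)
        return x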

  13. Behaviour of FRP confined concrete in square columns

    Directory of Open Access Journals (Sweden)

    de Diego, A.

    2015-12-01

    Full Text Available A significant amount of research has been conducted on FRP-confined circular columns, but much less is known about rectangular/square columns in which the effectiveness of confinement is much reduced. This paper presents the results of experimental investigations on low strength square concrete columns confined with FRP. Axial compression tests were performed on ten intermediate size columns. The tests results indicate that FRP composites can significantly improve the bearing capacity and ductility of square section reinforced concrete columns with rounded corners. The strength enhancement ratio is greater the lower the concrete strength and also increases with the stiffness of the jacket. The confined concrete behaviour was predicted according to the more accepted theoretical models and compared with experimental results. There are two key parameters which critically influence the fitting of the models: the strain efficiency factor and the effect of confinement in non-circular sections.La mayoría de las investigaciones sobre hormigón confinado con FRP se han realizado sobre pilares de sección circular, pero el comportamiento en secciones cuadradas/rectangulares, donde el confinamiento es menos eficaz, es mucho menos conocido. Este trabajo presenta los resultados de un estudio experimental sobre probetas de hormigón de baja resistencia y sección cuadrada. Se han ensayado a compresión centrada diez probetas de tamaño intermedio. Los resultados indican que el confinamiento mejora significativamente la resistencia y ductilidad del hormigón en columnas de sección cuadrada con las esquinas redondeadas. El incremento de resistencia es mayor cuanto menor es la resistencia del hormigón sin confinar y también aumenta con la rigidez del encamisado. Los resultados se compararon con los obtenidos según los modelos teóricos más aceptados. Hay dos parámetros críticos en el ajuste de los modelos: el factor de eficiencia de la deformación y el

  14. DIFFERENTIATION OF AURANTII FRUCTUS IMMATURUS AND FRUCTUS PONICIRI TRIFOLIATAE IMMATURUS BY FLOW-INJECTION WITH ULTRAVIOLET SPECTROSCOPIC DETECTION AND PROTON NUCLEAR MAGNETIC RESONANCE USING PARTIAL LEAST-SQUARES DISCRIMINANT ANALYSIS.

    Science.gov (United States)

    Zhang, Mengliang; Zhao, Yang; Harrington, Peter de B; Chen, Pei

    2016-03-01

    Two simple fingerprinting methods, flow-injection coupled to ultraviolet spectroscopy and proton nuclear magnetic resonance, were used for discriminating between Aurantii fructus immaturus and Fructus poniciri trifoliatae immaturus . Both methods were combined with partial least-squares discriminant analysis. In the flow-injection method, four data representations were evaluated: total ultraviolet absorbance chromatograms, averaged ultraviolet spectra, absorbance at 193, 205, 225, and 283 nm, and absorbance at 225 and 283 nm. Prediction rates of 100% were achieved for all data representations by partial least-squares discriminant analysis using leave-one-sample-out cross-validation. The prediction rate for the proton nuclear magnetic resonance data by partial least-squares discriminant analysis with leave-one-sample-out cross-validation was also 100%. A new validation set of data was collected by flow-injection with ultraviolet spectroscopic detection two weeks later and predicted by partial least-squares discriminant analysis models constructed by the initial data representations with no parameter changes. The classification rates were 95% with the total ultraviolet absorbance chromatograms datasets and 100% with the other three datasets. Flow-injection with ultraviolet detection and proton nuclear magnetic resonance are simple, high throughput, and low-cost methods for discrimination studies.

  15. Around and Beyond the Square of Opposition

    CERN Document Server

    Béziau, Jean-Yves

    2012-01-01

    The theory of oppositions based on Aristotelian foundations of logic has been pictured in a striking square diagram which can be understood and applied in many different ways, with repercussions in various fields: epistemology, linguistics, mathematics, psychology. The square can also be generalized into other two-dimensional or multi-dimensional objects, extending in breadth and depth the original theory of oppositions of Aristotle. The square of opposition is a very attractive theme which has endured through the centuries without evaporating. Over the last ten years there has been new and growing interest in the subject.

  16. Deformation analysis with Total Least Squares

    Directory of Open Access Journals (Sweden)

    M. Acar

    2006-01-01

    Full Text Available Deformation analysis is one of the main research fields in geodesy. The deformation analysis process comprises measurement and analysis phases. Measurements can be collected using several techniques. The output of the evaluation of the measurements is mainly point positions. In the deformation analysis phase, the changes in the point coordinates are investigated. Several models or approaches can be employed for the analysis. One approach is based on a Helmert or similarity coordinate transformation, where the displacements and the respective covariance matrix are transformed into a unique datum. Traditionally a Least Squares (LS) technique is used for the transformation procedure. An alternative methodology is Total Least Squares (TLS), which is a relatively new approach in geodetic applications. In this study, in order to determine point displacements, 3-D coordinate transformations based on the Helmert transformation model were carried out individually with Least Squares (LS) and Total Least Squares (TLS), respectively. The data used in this study were collected by GPS in a landslide area near Istanbul. The results obtained from these two approaches have been compared.
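
    The following is a hedged sketch, not the authors' implementation: a simplified 2-D Helmert (similarity) transformation estimated from common points with ordinary least squares and with a basic SVD-based total least squares solution; the study itself uses 3-D transformations and real GPS coordinates.

```python
# 2-D Helmert transformation: x' = a*x - b*y + tx, y' = b*x + a*y + ty,
# estimated by classical LS and by a basic TLS solution (synthetic points).
import numpy as np

rng = np.random.default_rng(1)
src = rng.uniform(0, 100, size=(10, 2))                   # source coordinates
true = np.array([0.5, 0.02, 12.0, -7.0])                  # a = m*cos, b = m*sin, tx, ty
a, b, tx, ty = true
dst = np.column_stack([a * src[:, 0] - b * src[:, 1] + tx,
                       b * src[:, 0] + a * src[:, 1] + ty])
dst += rng.normal(scale=0.01, size=dst.shape)             # measurement noise

# Design matrix for the linear 4-parameter model [a, b, tx, ty]
A = np.zeros((2 * len(src), 4))
A[0::2, 0], A[0::2, 1], A[0::2, 2] = src[:, 0], -src[:, 1], 1.0
A[1::2, 0], A[1::2, 1], A[1::2, 3] = src[:, 1],  src[:, 0], 1.0
L = dst.reshape(-1)

x_ls, *_ = np.linalg.lstsq(A, L, rcond=None)              # classical LS solution

# Basic TLS solution: smallest right singular vector of the augmented matrix [A | L]
_, _, Vt = np.linalg.svd(np.column_stack([A, L]))
x_tls = -Vt[-1, :4] / Vt[-1, 4]

print("LS :", np.round(x_ls, 4))
print("TLS:", np.round(x_tls, 4))
```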

  17. Phase shift extraction and wavefront retrieval from interferograms with background and contrast fluctuations

    International Nuclear Information System (INIS)

    Liu, Qian; Wang, Yang; He, Jianguo; Ji, Fang

    2015-01-01

    The fluctuations of background and contrast cause measurement errors in the phase-shifting technique. To extract the phase shifts from interferograms with background and contrast fluctuations, an iterative algorithm is presented. The phase shifts and the wavefront phase are calculated in two separate steps with the least-squares method. The fluctuation factors are determined when the phase shifts are calculated, and the fluctuations are compensated when the wavefront phase is calculated. The advantage of the algorithm lies in its ability to extract phase shifts from interferograms with background and contrast fluctuations while converging stably and rapidly. Simulations and experiments verify the effectiveness and reliability of the proposed algorithm. The convergence accuracy and speed are demonstrated by the simulation results, and the experimental results show its ability to suppress phase retrieval errors. (paper)
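
    As an illustration of the general alternating (two-step) least-squares idea, the sketch below simulates fringes with per-frame background and contrast fluctuations and recovers the phase shifts and wavefront iteratively. It follows the generic scheme rather than the exact formulation of the paper; all quantities are synthetic.

```python
# Alternating least-squares phase-shift extraction on simulated fringes.
import numpy as np

rng = np.random.default_rng(2)
npix, nframes = 512, 5
phi_true = 3 * np.sin(np.linspace(0, 2 * np.pi, npix))        # wavefront phase
delta_true = np.sort(rng.uniform(0, 2 * np.pi, nframes))      # unknown phase shifts
a = 1.0 + 0.05 * rng.normal(size=nframes)                     # fluctuating background
b = 0.8 + 0.05 * rng.normal(size=nframes)                     # fluctuating contrast
I = a[:, None] + b[:, None] * np.cos(phi_true[None, :] + delta_true[:, None])

delta = np.linspace(0, 2 * np.pi, nframes, endpoint=False)    # initial guess
for _ in range(100):
    # Step 1: with the phase shifts fixed, per-pixel least squares for the wavefront
    M = np.column_stack([np.ones(nframes), np.cos(delta), -np.sin(delta)])
    coef, *_ = np.linalg.lstsq(M, I, rcond=None)              # -> (3, npix)
    phi = np.arctan2(coef[2], coef[1])
    # Step 2: with the wavefront fixed, per-frame least squares for the phase shifts
    N = np.column_stack([np.ones(npix), np.cos(phi), -np.sin(phi)])
    coef2, *_ = np.linalg.lstsq(N, I.T, rcond=None)           # -> (3, nframes)
    delta_new = np.arctan2(coef2[2], coef2[1])
    change = np.arctan2(np.sin(delta_new - delta), np.cos(delta_new - delta))
    delta = delta_new
    if np.max(np.abs(change)) < 1e-10:
        break

# Phase shifts are only recovered up to a common offset, so compare relative values.
err = (delta - delta[0]) - (delta_true - delta_true[0])
print("max relative phase-shift error [rad]:",
      np.max(np.abs(np.arctan2(np.sin(err), np.cos(err)))))
```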

  18. [Main Components of Xinjiang Lavender Essential Oil Determined by Partial Least Squares and Near Infrared Spectroscopy].

    Science.gov (United States)

    Liao, Xiang; Wang, Qing; Fu, Ji-hong; Tang, Jun

    2015-09-01

    This work was undertaken to establish a quantitative analysis model that can rapidly determine the content of linalool and linalyl acetate in Xinjiang lavender essential oil. A total of 165 lavender essential oil samples were measured by near infrared (NIR) absorption spectroscopy. Analysis of the near infrared absorption peaks of all samples showed that the spectral interval of 7100~4500 cm(-1) carries abundant chemical information with relatively low interference from random noise, so the PLS models were constructed on this interval for further analysis. Eight abnormal samples were eliminated. Through a clustering method, the remaining 157 lavender essential oil samples were divided into 105 calibration samples and 52 validation samples. Gas chromatography mass spectrometry (GC-MS) was used to determine the content of linalool and linalyl acetate in the lavender essential oil, and the data matrix was established from the GC-MS results for the two compounds in combination with the original NIR data. To optimize the model, different pretreatment methods were used to preprocess the raw NIR spectra and compare their filtering effect; after analyzing the quantitative model results for linalool and linalyl acetate, the root mean square errors of prediction (RMSEP) with orthogonal signal correction (OSC) were 0.226 and 0.558, respectively, making it the optimum pretreatment method. In addition, forward interval partial least squares (FiPLS) was used to exclude wavelength points that are unrelated to the determined constituents or show nonlinear correlation; finally, 8 spectral intervals comprising 160 wavelength points were retained as the dataset. The data optimized by OSC-FiPLS were combined with partial least squares (PLS) to establish a rapid quantitative analysis model for determining the content of linalool and linalyl acetate in Xinjiang lavender essential oil; the numbers of hidden variables of the two

  19. Least-squares reverse time migration of multiples

    KAUST Repository

    Zhang, Dongliang; Schuster, Gerard T.

    2013-01-01

    The theory of least-squares reverse time migration of multiples (RTMM) is presented. In this method, least squares migration (LSM) is used to image free-surface multiples where the recorded traces are used as the time histories of the virtual

  20. Precision tracking at high background rates with the ATLAS muon spectrometer

    CERN Document Server

    Hertenberger, Ralf; The ATLAS collaboration

    2012-01-01

    Since the start of data taking, the ATLAS muon spectrometer has performed according to specification. At the end of this decade, after the luminosity upgrade of the LHC by a factor of ten, the proportionally increasing background rates will require the replacement of the detectors in the most forward part of the muon spectrometer to ensure high quality muon triggering and tracking at background hit rates of up to 15 kHz/cm$^2$. Square-metre-sized micromegas detectors together with improved thin gap trigger detectors are suggested as the replacement. Micromegas detectors are intrinsically capable of high rates. A single-hit spatial resolution below 40 $\mu$m has been shown for 250 $\mu$m anode strip pitch and perpendicular incidence of high energy muons or pions. The ongoing development of large micromegas structures and their investigation under non-perpendicular incidence or in high background environments requires precise and reliable monitoring of muon tracks. A muon telescope consisting of six small micromegas works reliably and is presently ...

  1. Prediction Model for Predicting Powdery Mildew using ANN for Medicinal Plant— Picrorhiza kurrooa

    Science.gov (United States)

    Shivling, V. D.; Ghanshyam, C.; Kumar, Rakesh; Kumar, Sanjay; Sharma, Radhika; Kumar, Dinesh; Sharma, Atul; Sharma, Sudhir Kumar

    2017-02-01

    A plant disease forecasting system is important because it can be used to predict disease and to act as an alert system, warning farmers in advance so that they can protect their crop from infection. The forecasting system predicts the risk of infection for a crop by using the environmental factors that favor germination of the disease. In this study an artificial neural network based system for predicting the risk of powdery mildew in Picrorhiza kurrooa was developed. The network was trained with the Levenberg-Marquardt backpropagation algorithm and has a single hidden layer of ten nodes. Temperature and duration of wetness are the major environmental factors that favor infection. Experimental data were used as the training set, and a percentage of the data was used for testing and validation. The performance of the system was measured in the form of the coefficient of correlation (R), coefficient of determination (R²), mean square error and root mean square error. An interface was developed for simulating the network; by entering temperature and wetness duration, it predicts the level of risk for those input values.
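
    A hypothetical sketch of such a risk model in scikit-learn: a single hidden layer of ten nodes mapping temperature and wetness duration to an infection risk. Scikit-learn has no Levenberg-Marquardt trainer, so the L-BFGS solver is used as a stand-in, and the training data below are invented purely for illustration.

```python
# Single-hidden-layer network (10 nodes) for disease risk from temperature and
# wetness duration; synthetic data, L-BFGS instead of Levenberg-Marquardt.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
temp = rng.uniform(10, 30, 500)                      # deg C
wet = rng.uniform(0, 24, 500)                        # hours of leaf wetness
# Made-up "risk" surface just to have a target; real values come from experiments.
risk = 1.0 / (1.0 + np.exp(-(0.4 * (temp - 20) + 0.3 * (wet - 12))))

X = np.column_stack([temp, wet])
X_train, X_test, y_train, y_test = train_test_split(X, risk, random_state=0)

net = MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs", max_iter=5000,
                   random_state=0)
net.fit(X_train, y_train)
pred = net.predict(X_test)

rmse = mean_squared_error(y_test, pred) ** 0.5
print(f"R2 = {r2_score(y_test, pred):.3f}, RMSE = {rmse:.4f}")
# "Interface": query the risk for a given temperature and wetness duration.
print("risk at 22 C, 16 h wetness:", net.predict([[22.0, 16.0]])[0])
```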

  2. A new stabilized least-squares imaging condition

    International Nuclear Information System (INIS)

    Vivas, Flor A; Pestana, Reynam C; Ursin, Bjørn

    2009-01-01

    The classical deconvolution imaging condition consists of dividing the upgoing wave field by the downgoing wave field and summing over all frequencies and sources. The least-squares imaging condition consists of summing the cross-correlation of the upgoing and downgoing wave fields over all frequencies and sources, and dividing the result by the total energy of the downgoing wave field. This procedure is more stable than using the classical imaging condition, but it still requires stabilization in zones where the energy of the downgoing wave field is small. To stabilize the least-squares imaging condition, the energy of the downgoing wave field is replaced by its average value computed in a horizontal plane in poorly illuminated regions. Applications to the Marmousi and Sigsbee2A data sets show that the stabilized least-squares imaging condition produces better images than the least-squares and cross-correlation imaging conditions
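
    The following schematic contrasts the three imaging conditions for a single frequency/source pair with synthetic arrays; it is a sketch of the idea, not the authors' implementation, and the 1% energy threshold is an arbitrary assumption.

```python
# Cross-correlation, least-squares, and stabilized least-squares imaging conditions
# for one frequency/source pair (U: upgoing, D: downgoing wave fields on a grid).
import numpy as np

rng = np.random.default_rng(4)
nz, nx = 100, 200
U = rng.normal(size=(nz, nx)) + 1j * rng.normal(size=(nz, nx))
D = rng.normal(size=(nz, nx)) + 1j * rng.normal(size=(nz, nx))
D[:, 150:] *= 1e-3                      # poorly illuminated zone

num = np.real(U * np.conj(D))           # cross-correlation imaging condition
energy = np.abs(D) ** 2

image_xcorr = num
image_ls = num / (energy + 1e-12)       # plain least-squares (deconvolution-like)

# Stabilized least-squares: where the local downgoing energy falls below a fraction
# of its average at that depth level, use the horizontal average instead.
mean_energy = energy.mean(axis=1, keepdims=True)
stab_energy = np.where(energy < 0.01 * mean_energy, mean_energy, energy)
image_stab = num / stab_energy

print(image_xcorr.shape, image_ls.shape, image_stab.shape)
```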

  3. Background Stress Inventory: Developing a Measure of Understudied Stress.

    Science.gov (United States)

    Terrill, Alexandra L; Gjerde, Jill M; Garofalo, John P

    2015-10-01

    Background stress is an understudied source of stress that involves both ambient stress and daily hassles upon which new stressors are superimposed. To date, an accurate measure of the background stress construct has not been available. We developed the Background Stress Inventory, a 25-item self-report measure that asks respondents to indicate how distressed they have felt over the past month and the majority of the past year across five domains: financial, occupation, environment, health and social. Seven hundred seventy-two participants completed the paper-and-pencil measure; the sample was randomly split into two separate subsamples for analyses. Exploratory factor analysis suggested five factors corresponding to these domains, and confirmatory factor analysis showed acceptable global fit (χ²(255) = 456.47, comparative fit index = 0.94, root mean square error of approximation = 0.045). Cronbach's alpha (0.89) indicated good internal reliability. Construct validity analyses showed significant positive relationships with measures of perceived stressfulness (r = 0.62) and daily hassles (0.41), p's < 0.01. Depressive symptoms (0.62) and basal blood pressure (0.21) were both significantly associated with background stress, p's < 0.01. The importance of the proposed measure is reflected in the limited research base on the impact of background stress. Systematic investigation of this measure will provide insight into this understudied form of chronic stress and its potential influence on both psychological and physical endpoints. Copyright © 2013 John Wiley & Sons, Ltd.

  4. Connectivity percolation in suspensions of attractive square-well spherocylinders.

    Science.gov (United States)

    Dixit, Mohit; Meyer, Hugues; Schilling, Tanja

    2016-01-01

    We have studied the connectivity percolation transition in suspensions of attractive square-well spherocylinders by means of Monte Carlo simulation and connectedness percolation theory. In the 1980s the percolation threshold of slender fibers was predicted to scale as the fibers' inverse aspect ratio [Phys. Rev. B 30, 3933 (1984)]. The main finding of our study is that the attractive spherocylinder system reaches this inverse scaling regime at much lower aspect ratios than found in suspensions of hard spherocylinders. We explain this difference by showing that third virial corrections of the pair connectedness functions, which are responsible for the deviation from the scaling regime, are less important for attractive potentials than for hard particles.

  5. [Discrimination of types of polyacrylamide based on near infrared spectroscopy coupled with least square support vector machine].

    Science.gov (United States)

    Zhang, Hong-Guang; Yang, Qin-Min; Lu, Jian-Gang

    2014-04-01

    In this paper, a novel discrimination methodology based on near infrared spectroscopy and the least square support vector machine was proposed for rapid and nondestructive discrimination of different types of Polyacrylamide. The diffuse reflectance spectra of samples of Non-ionic Polyacrylamide, Anionic Polyacrylamide and Cationic Polyacrylamide were measured. Principal component analysis was then applied to reduce the dimension of the spectral data and extract the principal components. The first three principal components were used for cluster analysis of the three different types of Polyacrylamide and were also used as inputs of the least square support vector machine model. The parameters and the number of principal components used as inputs of the least square support vector machine model were optimized through cross-validation based on a grid search. Sixty samples of each type of Polyacrylamide were collected, giving a total of 180 samples. 135 samples, 45 for each type of Polyacrylamide, were randomly split into a training set to build the calibration model, and the remaining 45 samples were used as a test set to evaluate the performance of the developed model. In addition, 5 Cationic Polyacrylamide samples and 5 Anionic Polyacrylamide samples adulterated with different proportions of Non-ionic Polyacrylamide were also prepared to show the feasibility of the proposed method for discriminating adulterated Polyacrylamide samples. The prediction error threshold for each type of Polyacrylamide was determined by an F significance test based on the prediction error of the training set of the corresponding type of Polyacrylamide in cross-validation. The discrimination accuracy of the built model was 100% for prediction of the test set. The predictions of the model for the 10 adulterated samples were also presented, and all were accurately discriminated as adulterated samples. The

  6. Revising the predictions of inflation for the cosmic microwave background anisotropies.

    Science.gov (United States)

    Agulló, Iván; Navarro-Salas, José; Olmo, Gonzalo J; Parker, Leonard

    2009-08-07

    We point out that, if quantum field renormalization is taken into account and the counterterms are evaluated at the Hubble-radius crossing time or a few e-foldings after it, the predictions of slow-roll inflation for both the scalar and the tensorial power spectrum change significantly. This leads to a change in the consistency condition that relates the tensor-to-scalar amplitude ratio with the spectral indices. A reexamination of the potentials $\varphi^{2}$ and $\varphi^{4}$ shows that both are compatible with five-year WMAP data. Only when the counterterms are evaluated at much larger times beyond the end of inflation does one recover the standard predictions. The alternative predictions presented here may soon come within the range of measurement of near-future experiments.

  7. Patient-specific prediction of functional recovery after stroke.

    Science.gov (United States)

    Douiri, Abdel; Grace, Justin; Sarker, Shah-Jalal; Tilling, Kate; McKevitt, Christopher; Wolfe, Charles DA; Rudd, Anthony G

    2017-07-01

    Background and aims Clinical predictive models for stroke recovery could offer the opportunity of targeted early intervention and more specific information for patients and carers. In this study, we developed and validated a patient-specific prognostic model for monitoring recovery after stroke and assessed its clinical utility. Methods Four hundred and ninety-five patients from the population-based South London Stroke Register were included in a substudy between 2002 and 2004. Activities of daily living were assessed using the Barthel Index at one, two, three, four, six, eight, 12, 26, and 52 weeks after stroke. Penalized linear mixed models were developed to predict patients' functional recovery trajectories. An external validation cohort included 1049 newly registered stroke patients between 2005 and 2011. Prediction errors on discrimination and calibration were assessed. The potential clinical utility was evaluated using prognostic accuracy measurements and decision curve analysis. Results Predictive recovery curves showed good accuracy, with a root mean squared deviation of 3 Barthel Index points and an R² of 83% up to one year after stroke in the external cohort. The negative predictive values of the risk of poor recovery (Barthel Index <8) at three and 12 months were also excellent, 96% (95% CI [93.6-97.4]) and 93% [90.8-95.3], respectively, with a potential clinical utility measured by likelihood ratios (LR+: 17 [10.8-26.8] at three months and LR+: 11 [6.5-17.2] at 12 months). Decision curve analysis showed an increased clinical benefit, particularly at threshold probabilities above 5% for the predicted risk of poor outcomes. Conclusions A recovery-curve tool appears to accurately predict the progression of functional recovery in poststroke patients.

  8. Entrywise Squared Transforms for GAMP Supplementary Material

    DEFF Research Database (Denmark)

    2016-01-01

    Supplementary material for a study on Entrywise Squared Transforms for Generalized Approximate Message Passing (GAMP). See the README file for the details.

  9. Distribution of squares modulo a composite number

    OpenAIRE

    Aryan, Farzad

    2015-01-01

    In this paper we study the distribution of squares modulo a square-free number $q$. We also look at inverse questions for the large sieve in the distribution aspect and we make improvements on existing results on the distribution of $s$-tuples of reduced residues.

  10. Data-driven background predictions for a search of direct gluino pair production in the single-lepton final state using 13 TeV pp-collisions at the CMS experiment

    Energy Technology Data Exchange (ETDEWEB)

    Lobanov, Artur; Seitz, Claudia; Melzer-Pellmann, Isabell [DESY, Hamburg (Germany)

    2016-07-01

    We present a search for direct gluino-pair production in events with a single lepton using 13 TeV pp-collisions at the CMS experiment. This final state is characterised by high multiplicities of jets and b-quark jets, as well as a large scalar sum of all jet transverse momenta and a large scalar sum of the missing transverse momentum and the lepton transverse momentum, called L{sub T}. The dominant Standard Model backgrounds in this phase space are tt+jets and W+jets production. A data-driven method is used to estimate the background in the search regions. All backgrounds except QCD multijet production are predicted in the (high ΔΦ(W,l)) signal regions from the number of events in the low ΔΦ(W,l) region, with transfer factors also determined from data, while for the multijet events a fake-lepton enriched side-band is used. We conclude by showing predictions and final results from data corresponding to 2.1 fb{sup -1} of integrated luminosity recorded with the CMS detector during LHC Run 2 in 2015.

  11. A MEASUREMENT OF SECONDARY COSMIC MICROWAVE BACKGROUND ANISOTROPIES FROM THE 2500 SQUARE-DEGREE SPT-SZ SURVEY

    Energy Technology Data Exchange (ETDEWEB)

    George, E. M.; Reichardt, C. L.; Aird, K. A.; Benson, B. A.; Bleem, L. E.; Carlstrom, J. E.; Chang, C. L.; Cho, H-M.; Crawford, T. M.; Crites, A. T.; de Haan, T.; Dobbs, M. A.; Dudley, J.; Halverson, N. W.; Harrington, N. L.; Holder, G. P.; Holzapfel, W. L.; Hou, Z.; Hrubes, J. D.; Keisler, R.; Knox, L.; Lee, A. T.; Leitch, E. M.; Lueker, M.; Luong-Van, D.; McMahon, J. J.; Mehl, J.; Meyer, S. S.; Millea, M.; Mocanu, L. M.; Mohr, J. J.; Montroy, T. E.; Padin, S.; Plagge, T.; Pryke, C.; Ruhl, J. E.; Schaffer, K. K.; Shaw, L.; Shirokoff, E.; Spieler, H. G.; Staniszewski, Z.; Stark, A. A.; Story, K. T.; van Engelen, A.; Vanderlinde, K.; Vieira, J. D.; Williamson, R.; Zahn, O.

    2015-01-28

    We present measurements of secondary cosmic microwave background (CMB) anisotropies and cosmic infrared background (CIB) fluctuations using data from the South Pole Telescope (SPT) covering the complete 2540 deg(2) SPT-SZ survey area. Data in the three SPT-SZ frequency bands centered at 95, 150, and 220 GHz are used to produce six angular power spectra (three single-frequency auto-spectra and three cross-spectra) covering the multipole range 2000 < ℓ < 11,000 (angular scales 5' ≳ θ ≳ 1'). These are the most precise measurements of the angular power spectra at ℓ > 2500 at these frequencies. The main contributors to the power spectra at these angular scales and frequencies are the primary CMB, CIB, thermal and kinematic Sunyaev-Zel'dovich effects (tSZ and kSZ), and radio galaxies. We include a constraint on the tSZ power from a measurement of the tSZ bispectrum from 800 deg(2) of the SPT-SZ survey. We measure the tSZ power at 143 GHz to be $D^{\rm tSZ}_{3000} = 4.08^{+0.58}_{-0.67}\,\mu{\rm K}^2$ and the kSZ power to be $D^{\rm kSZ}_{3000} = 2.9 \pm 1.3\,\mu{\rm K}^2$. The data prefer positive kSZ power at 98.1% CL. We measure a correlation coefficient of $\xi = 0.113^{+0.057}_{-0.054}$ between sources of tSZ and CIB power, with ξ < 0 disfavored at a confidence level of 99.0%. The constraint on kSZ power can be interpreted as an upper limit on the duration of reionization. When the post-reionization homogeneous kSZ signal is accounted for, we find an upper limit on the duration Δz < 5.4 at 95% CL.

  12. House Price Prediction Using LSTM

    OpenAIRE

    Chen, Xiaochen; Wei, Lai; Xu, Jiaxin

    2017-01-01

    In this paper, we use house price data ranging from January 2004 to October 2016 to predict the average house price of November and December 2016 for each district in Beijing, Shanghai, Guangzhou and Shenzhen. We apply an Autoregressive Integrated Moving Average (ARIMA) model to generate the baseline and LSTM networks to build the prediction model. These algorithms are compared in terms of Mean Squared Error. The results show that the LSTM model has excellent properties with respect to predicting time...
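
    A minimal sketch of such an LSTM set-up (synthetic monthly series, hypothetical window length, Keras assumed available); for brevity, the ARIMA baseline of the paper is replaced here by a naive last-value baseline.

```python
# LSTM on sliding windows of a synthetic monthly price series, compared to a
# naive baseline by mean squared error.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(5)
prices = np.cumsum(rng.normal(0.5, 1.0, 154)) + 100   # ~13 years of monthly data
window = 12

X = np.array([prices[i:i + window] for i in range(len(prices) - window)])
y = prices[window:]
X = X[..., None]                                      # (samples, timesteps, features)
X_train, y_train, X_test, y_test = X[:-2], y[:-2], X[-2:], y[-2:]

model = tf.keras.Sequential([
    tf.keras.Input(shape=(window, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X_train, y_train, epochs=50, verbose=0)

lstm_pred = model.predict(X_test, verbose=0).ravel()
naive_pred = X_test[:, -1, 0]                         # last observed value as baseline
print("LSTM MSE :", np.mean((lstm_pred - y_test) ** 2))
print("naive MSE:", np.mean((naive_pred - y_test) ** 2))
```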

  13. Spectrum of the extragalactic background light

    Energy Technology Data Exchange (ETDEWEB)

    Bruzual A, G [Centro de Investigacion de Astronomia, Merida (Venezuela)

    1981-01-01

    The observed spectrum of the extragalactic background light in the range from ultraviolet to optical wavelengths is compared with a model prediction. The model uses the locally observed luminosity function of galaxies as well as evolutionary models for galaxy spectral energy distributions. The prediction is too faint by a factor of about 10.

  14. Study on particle deposition in vertical square ventilation duct flows by different models

    International Nuclear Information System (INIS)

    Zhang Jinping; Li Angui

    2008-01-01

    A proper representation of the air flow in a ventilation duct is crucial for adequate prediction of the deposition velocity of particles. In this paper, the mean turbulent air flow fields are predicted by two different numerical models (the Reynolds stress transport model (RSM) and the realizable k-ε model). Contours of mean streamwise velocity deduced from the k-ε model are compared with those obtained from the Reynolds stress transport model. Dimensionless deposition velocities of particles in downward and upward ventilation duct flows are also compared based on the flow fields presented by the two different numerical models. Trajectories of the particles are tracked using a one-way coupling Lagrangian eddy-particle interaction model. Thousands of individual particles are released in the represented flow, and dimensionless deposition velocities are evaluated for the vertical walls in fully developed smooth vertical downward and upward square duct flows generated by the RSM and the realizable k-ε model. The effects of particle diameter, dimensionless relaxation time, flow direction and air speed in vertical upward and downward square duct flows on the particle deposition velocities are discussed. The effects of lift and gravity on the particle deposition velocities are evaluated in vertical flows presented by the RSM. It is shown that the particle deposition velocities based on the RSM and the realizable k-ε model have subtle differences. The flow direction and the lift force significantly affect the particle deposition velocities in vertical duct flows. The simulation results are compared with earlier experimental data and numerical results for fully developed duct flows. It is shown that the deposition velocities predicted are in agreement with the experimental data and the numerical results

  15. Physical modelling and numerical simulation of the round-to-square forward extrusion

    DEFF Research Database (Denmark)

    Gouveia, B.P.P.A.; Rodrigues, J.M.C.; Martins, P.A.F.

    2001-01-01

    ... and comparisons are made between the numerical predictions and experimental data obtained through the utilisation of physical modelling. Assessment is made in terms of flow pattern and strain distribution for two different cross-sections corresponding to the axial symmetry planes of the three-dimensional extrusion part. The experimental distribution of strain is determined from the shape change of quadrilateral grids previously printed on the surface of the axial cross-sections of the undeformed billets by means of large deformation square-grid analysis. Good agreement is obtained between physical...

  16. Testing the gravitational inverse-square law

    International Nuclear Information System (INIS)

    Adelberger, Eric; Heckel, B.; Hoyle, C.D.

    2005-01-01

    If the universe contains more than three spatial dimensions, as many physicists believe, our current laws of gravity should break down at small distances. When Isaac Newton realized that the acceleration of the Moon as it orbited around the Earth could be related to the acceleration of an apple as it fell to the ground, it was the first time that two seemingly unrelated physical phenomena had been 'unified'. The quest to unify all the forces of nature is one that still keeps physicists busy today. Newton showed that the gravitational attraction between two point bodies is proportional to the product of their masses and inversely proportional to the square of the distance between them. Newton's theory, which assumes that the gravitational force acts instantaneously, remained essentially unchallenged for roughly two centuries until Einstein proposed the general theory of relativity in 1915. Einstein's radical new theory made gravity consistent with the two basic ideas of relativity: the world is 4D - the three directions of space combined with time - and no physical effect can travel faster than light. The theory of general relativity states that gravity is not a force in the usual sense but a consequence of the curvature of this space-time produced by mass or energy. However, in the limit of low velocities and weak gravitational fields, Einstein's theory still predicts that the gravitational force between two point objects obeys an inverse-square law. One of the outstanding challenges in physics is to finish what Newton started and achieve the ultimate 'grand unification' - to unify gravity with the other three fundamental forces (the electromagnetic force, and the strong and weak nuclear forces) into a single quantum theory. In string theory - one of the leading candidates for an ultimate theory - the fundamental entities of nature are 1D strings and higher-dimensional objects called 'branes', rather than the point-like particles we are familiar with. String

  17. Absorbing systematic effects to obtain a better background model in a search for new physics

    International Nuclear Information System (INIS)

    Caron, S; Horner, S; Sundermann, J E; Cowan, G; Gross, E

    2009-01-01

    This paper presents a novel approach to estimate the Standard Model backgrounds based on modifying Monte Carlo predictions within their systematic uncertainties. The improved background model is obtained by altering the original predictions with successively more complex correction functions in signal-free control selections. Statistical tests indicate when sufficient compatibility with data is reached. In this way, systematic effects are absorbed into the new background model. The same correction is then applied on the Monte Carlo prediction in the signal region. Comparing this method to other background estimation techniques shows improvements with respect to statistical and systematic uncertainties. The proposed method can also be applied in other fields beyond high energy physics.
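
    A toy illustration of the idea with synthetic histograms (not the actual analysis): fit successively higher-order polynomial corrections to the data/MC ratio in a control region, stop once a chi-square test indicates sufficient compatibility, and apply the same correction to the MC prediction in the signal region.

```python
# Successive correction functions for a mismodelled MC background (toy numbers).
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
x = np.linspace(0, 1, 20)                           # bin centres of some observable
mc_ctrl = 1000 * np.exp(-3 * x)                     # nominal MC in the control region
data_ctrl = rng.poisson(mc_ctrl * (1.0 + 0.3 * x))  # data with a mismodelled slope

ratio = data_ctrl / mc_ctrl
ratio_err = np.sqrt(data_ctrl) / mc_ctrl

for degree in range(0, 4):                          # constant, linear, quadratic, ...
    coeffs = np.polyfit(x, ratio, degree, w=1.0 / ratio_err)
    chi2 = np.sum(((ratio - np.polyval(coeffs, x)) / ratio_err) ** 2)
    ndof = len(x) - (degree + 1)
    p_value = stats.chi2.sf(chi2, ndof)
    print(f"degree {degree}: chi2/ndof = {chi2:.1f}/{ndof}, p = {p_value:.3f}")
    if p_value > 0.05:                              # sufficient compatibility reached
        break

mc_signal = 50 * np.exp(-3 * x)                     # nominal MC in the signal region
mc_signal_corrected = mc_signal * np.polyval(coeffs, x)
```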

  18. Time Scale in Least Square Method

    Directory of Open Access Journals (Sweden)

    Özgür Yeniay

    2014-01-01

    Full Text Available The study of dynamic equations on time scales is a new area in mathematics. Time scale calculus tries to build a bridge between the real numbers and the integers. Two derivatives have been introduced on time scales, called the delta and nabla derivatives. The delta derivative is defined in the forward direction, and the nabla derivative is defined in the backward direction. Within the scope of this study, we consider how to obtain the parameters of a regression equation for integer-valued data through time scales. We therefore implemented the least squares method according to the time-scale derivative definitions and obtained the coefficients of the model. Here there exist two sets of coefficients for the same model, originating from the forward and backward jump operators, which differ from each other. In such a situation, the result equals the sum of the vertical deviations between the regression equations and the observations for the forward and backward jump operators, divided by two. We also estimated the coefficients of the model using the ordinary least squares method. As a result, we provide an introduction to the least squares method on time scales. We think that time scale theory offers a new perspective on least squares, especially when the assumptions of linear regression are violated.

  19. Plane-wave Least-squares Reverse Time Migration

    KAUST Repository

    Dai, Wei

    2012-11-04

    Least-squares reverse time migration is formulated with a new parameterization, where the migration image of each shot is updated separately and a prestack image is produced with common image gathers. The advantage is that it can offer stable convergence for least-squares migration even when the migration velocity is not completely accurate. To significantly reduce the computation cost, linear phase-shift encoding is applied to hundreds of shot gathers to produce dozens of plane waves. A regularization term which penalizes the image difference between nearby angles is used to keep the prestack image consistent across all angles. Numerical tests on a marine dataset are performed to illustrate the advantages of least-squares reverse time migration in the plane-wave domain. Through iterations of least-squares migration, the migration artifacts are reduced and the image resolution is improved. Empirical results suggest that LSRTM in the plane-wave domain is an efficient method to improve the image quality and produce common image gathers.

  20. Anomalous structural transition of confined hard squares.

    Science.gov (United States)

    Gurin, Péter; Varga, Szabolcs; Odriozola, Gerardo

    2016-11-01

    Structural transitions are examined in quasi-one-dimensional systems of freely rotating hard squares, which are confined between two parallel walls. We find two competing phases: one is a fluid where the squares have two sides parallel to the walls, while the second is a solidlike structure with a zigzag arrangement of the squares. Using the transfer matrix method we show that the configuration space consists of subspaces of fluidlike and solidlike phases, which are connected by low-probability microstates of mixed structures. The existence of these connecting states makes the thermodynamic quantities continuous and precludes the possibility of a true phase transition. However, the thermodynamic functions indicate a strong tendency toward the phase transition, and our replica exchange Monte Carlo simulation study detects several important markers of a first order phase transition. Distinguishing a phase transition from a structural change is practically impossible with simulations and experiments in systems such as confined hard squares.

  1. Least Squares Data Fitting with Applications

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Pereyra, Víctor; Scherer, Godela

    As one of the classical statistical regression techniques, and often the first to be taught to new students, least squares fitting can be a very effective tool in data analysis. Given measured data, we establish a relationship between independent and dependent variables so that we can use the data... In a number of applications, the accuracy and efficiency of the least squares fit is central, and Per Christian Hansen, Víctor Pereyra, and Godela Scherer survey modern computational methods and illustrate them in fields ranging from engineering and environmental sciences to geophysics. Anyone working with problems of linear and nonlinear least squares fitting will find this book invaluable as a hands-on guide, with accessible text and carefully explained problems. Included are • an overview of computational methods together with their properties and advantages • topics from statistical regression analysis...
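
    For readers new to the topic, a minimal example of linear least-squares data fitting with NumPy (not taken from the book): fitting a straight line to noisy measurements via numpy.linalg.lstsq.

```python
# Fit y = c0 + c1*t to noisy measurements by linear least squares.
import numpy as np

rng = np.random.default_rng(7)
t = np.linspace(0, 10, 25)
y = 2.0 + 0.7 * t + rng.normal(scale=0.3, size=t.size)   # measured data

A = np.column_stack([np.ones_like(t), t])                # design matrix
coef, residuals, rank, _ = np.linalg.lstsq(A, y, rcond=None)
print("intercept, slope:", np.round(coef, 3))
```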

  2. Multivariate power-law models for streamflow prediction in the Mekong Basin

    Directory of Open Access Journals (Sweden)

    Guillaume Lacombe

    2014-11-01

    New hydrological insights for the region: A combination of 3–6 explanatory variables – chosen among annual rainfall, drainage area, perimeter, elevation, slope, drainage density and latitude – is sufficient to predict a range of flow metrics with a prediction R-squared ranging from 84 to 95%. The inclusion of forest or paddy percentage coverage as an additional explanatory variable led to slight improvements in the predictive power of some of the low-flow models (lowest prediction R-squared = 89%). A physical interpretation of the model structure was possible for most of the resulting relationships. Compared to regional regression models developed in other parts of the world, this new set of equations performs reasonably well.
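
    A hedged sketch of a multivariate power-law model of the kind described above: Q = a · rainfall^b1 · area^b2 becomes linear after a log transform, so ordinary least squares applies. The catchment descriptors and coefficients below are synthetic, not Mekong data.

```python
# Multivariate power-law regression via log-log ordinary least squares.
import numpy as np

rng = np.random.default_rng(8)
rainfall = rng.uniform(1000, 3000, 60)       # mm/yr
area = rng.uniform(100, 20000, 60)           # km^2
flow = 1e-4 * rainfall**1.3 * area**0.95 * rng.lognormal(0, 0.1, 60)

X = np.column_stack([np.ones(60), np.log(rainfall), np.log(area)])
coef, *_ = np.linalg.lstsq(X, np.log(flow), rcond=None)

pred = X @ coef
ss_res = np.sum((np.log(flow) - pred) ** 2)
ss_tot = np.sum((np.log(flow) - np.log(flow).mean()) ** 2)
print("log a, b1, b2:", np.round(coef, 3))
print("R-squared (log space):", round(1 - ss_res / ss_tot, 3))
```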

  3. Effect of background dielectric on TE-polarized photonic bandgap of metallodielectric photonic crystals using Dirichlet-to-Neumann map method.

    Science.gov (United States)

    Sedghi, Aliasghar; Rezaei, Behrooz

    2016-11-20

    Using the Dirichlet-to-Neumann map method, we have calculated the photonic band structure of two-dimensional metallodielectric photonic crystals having the square and triangular lattices of circular metal rods in a dielectric background. We have selected the transverse electric mode of electromagnetic waves, and the resulting band structures showed the existence of photonic bandgap in these structures. We theoretically study the effect of background dielectric on the photonic bandgap.

  4. Cosmic far-infrared background at high galactic latitudes

    International Nuclear Information System (INIS)

    Stecker, F.W.; Puget, J.L.; Fazio, G.G.

    1977-01-01

    We predict far-infrared background fluxes from various cosmic sources. These fluxes lie near the high-frequency side of the blackbody radiation spectrum. These sources could account for a significant fraction of the background radiation at frequencies above 400 GHz which might be misinterpreted as a ''Comptonization'' distortion of the blackbody radiation. Particular attention is paid to the possible contributions from external galaxies, from rich clusters of galaxies, and from galactic dust emission

  5. Cosmic far-infrared background at high galactic latitudes

    International Nuclear Information System (INIS)

    Stecker, F.W.; Puget, J.L.; Fazio, G.G.

    1976-12-01

    Far-infrared background fluxes from various cosmic sources are predicted. These fluxes lie near the high-frequency side of the blackbody radiation spectrum. These sources could account for a significant fraction of the background radiation at frequencies above 400 GHz, which might be misinterpreted as a comptonization distortion of the blackbody radiation. Particular attention is paid to the possible contributions from external galaxies, rich clusters of galaxies and from galactic dust emission

  6. Closure of the squared Zakharov--Shabat eigenstates

    International Nuclear Information System (INIS)

    Kaup, D.J.

    1976-01-01

    By solution of the inverse scattering problem for a third-order (degenerate) eigenvalue problem, the closure of the squared eigenfunctions of the Zakharov--Shabat equations is found. The question of the completeness of squared eigenstates occurs in many aspects of ''inverse scattering transforms'' (solving nonlinear evolution equations exactly by inverse scattering techniques), as well as in various aspects of the inverse scattering problem. The method used here is quite suggestive as to how one might find the closure of the squared eigenfunctions of other eigenvalue equations, and the strong analogy between these results and the problem of finding the closure of the eigenvectors of a nonself-adjoint matrix is pointed out

  7. Facilitated ion transfer of protonated primary organic amines studied by square wave voltammetry and chronoamperometry

    Energy Technology Data Exchange (ETDEWEB)

    Torralba, E. [Departamento de Química Física, Facultad de Química, Universidad de Murcia, Murcia 30100 (Spain); Ortuño, J.A. [Departamento de Química Analítica, Facultad de Química, Universidad de Murcia, Murcia 30100 (Spain); Molina, A., E-mail: amolina@um.es [Departamento de Química Física, Facultad de Química, Universidad de Murcia, Murcia 30100 (Spain); Serna, C. [Departamento de Química Física, Facultad de Química, Universidad de Murcia, Murcia 30100 (Spain); Karimian, F. [Department of Chemistry, Faculty of Sciences, Ferdowsi University of Mashhad, Mashhad (Iran, Islamic Republic of)

    2014-05-01

    Highlights: • Facilitated ion transfer of organic protonated amines is studied. • Cyclic square wave voltammetry is used as main technique. • Complexation constants and standard ion transfer potentials are determined. • Diffusion coefficients in the organic and aqueous phases are determined. • The goodness of square wave voltammetry as analytical tool is shown. - Abstract: The transfer of the protonated forms of heptylamine, octylamine, decylamine, procaine and procainamide facilitated by dibenzo-18-crown-6 from water to a solvent polymeric membrane has been investigated by using cyclic square wave voltammetry. The experimental voltammograms obtained are in good agreement with theoretical predictions. The values of the standard ion transfer potential, complexation constant and diffusion coefficient in water have been obtained from these experiments, and have been used to draw some conclusions about the lipophilicity of these species and the relative stability of the organic ammonium complexes with dibenzo-18-crown-6. The results have been compared with those provided by linear sweep voltammetry. Calibration graphs were obtained with both techniques. An interesting chronoamperometric method for the determination of the diffusion coefficient of the target ion in the membrane has been developed and applied to all these protonated amines.

  8. The cosmic microwave background: past, present and future

    International Nuclear Information System (INIS)

    Silk, Joseph

    2007-01-01

    The cosmic microwave background has provided an unprecedented cosmological window on the very early universe for probing the initial conditions from which structure evolved. Infinitesimal variations in temperature on the sky, first predicted in 1967 but only discovered in the 1990s, provide the fossil fluctuations that seeded the formation of the galaxies. The cosmic microwave background radiation has now been mapped with ground-based, balloon-borne and satellite telescopes. I describe its current status and future challenges

  9. Least Squares Adjustment: Linear and Nonlinear Weighted Regression Analysis

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2007-01-01

    This note primarily describes the mathematics of least squares regression analysis as it is often used in geodesy, including land surveying and satellite positioning applications. In these fields regression is often termed adjustment. The note also contains a couple of typical land surveying... and satellite positioning application examples. In these application areas we are typically interested in the parameters of the model (typically 2- or 3-D positions) and not in predictive modelling, which is often the main concern in other regression analysis applications. Adjustment is often used to obtain... the clock error) and to obtain estimates of the uncertainty with which the position is determined. Regression analysis is used in many other fields of application in the natural, the technical and the social sciences. Examples may be curve fitting, calibration, establishing relationships between...

  10. The WIMP Forest: Indirect Detection of a Chiral Square

    Energy Technology Data Exchange (ETDEWEB)

    Bertone, Gianfranco; Jackson, C.B.; Shaughnessy, Gabe; Tait, Tim M.P.; Vallinotto, Alberto

    2009-04-01

    The spectrum of photons arising from WIMP annihilation carries a detailed imprint of the structure of the dark sector. In particular, loop-level annihilations into a photon and another boson can in principle lead to a series of lines (a WIMP forest) at energies up to the WIMP mass. A specific model which illustrates this feature nicely is a theory of two universal extra dimensions compactified on a chiral square. Aside from the continuum emission, which is a generic prediction of most dark matter candidates, we find a 'forest' of prominent annihilation lines that, after convolution with the angular resolution of current experiments, leads to a distinctive (2-bump plus continuum) spectrum, which may be visible in the near future with the Fermi Gamma-Ray Space Telescope (formerly known as GLAST).

  11. MAGIC MOORE-PENROSE INVERSES AND PHILATELIC MAGIC SQUARES WITH SPECIAL EMPHASIS ON THE DANIELS–ZLOBEC MAGIC SQUARE

    Directory of Open Access Journals (Sweden)

    Ka Lok Chu

    2011-02-01

    Full Text Available We study singular magic matrices in which the numbers in the rows and columns and in the two main diagonals all add up to the same sum. Our interest focuses on such magic matrices for which the Moore–Penrose inverse is also magic. Special attention is given to the “Daniels–Zlobec magic square” introduced by the British magician and television performer Paul Daniels (b. 1938) and considered by Zlobec (2001); see also Murray (1989, pp. 30–32). We introduce the concept of a “philatelic magic square” as a square arrangement of images of postage stamps so that the associated nominal values form a magic square. Three philatelic magic squares with stamps especially chosen for Sanjo Zlobec are presented in celebration of his 70th birthday; most helpful in identifying these stamps was an Excel checklist by Männikkö (2009).

  12. Square Stent: A New Self-Expandable Endoluminal Device and Its Applications

    International Nuclear Information System (INIS)

    Pavcnik, Dusan; Uchida, Barry; Timmermans, Hans; Keller, Frederick S.; Roesch, Josef

    2001-01-01

    The square stent is a new, simply constructed, self-expanding device that has recently been described. Compared with other stents, the square stent has a minimal amount of metal and thus requires a smaller-diameter catheter for introduction. Despite the small amount of metal present, the square stent has adequate expansile force. We have been evaluating the square stent for various interventional applications. In addition to the basic square stent, combinations of square stents and coverings for square stents were developed and evaluated to expand its uses and indications. One of the coverings tested is a new biomaterial: small intestinal submucosa (SIS). This paper will discuss the various applications of the square stent, which include a retrievable inferior vena cava filter, vascular occluder, graft adapter, and venous and aortic valves. In addition, we will review the important properties of SIS as a covering for the square stent

  13. Ablation plume dynamics in a background gas

    DEFF Research Database (Denmark)

    Amoruso, Salvatore; Schou, Jørgen; Lunney, James G.

    2010-01-01

    The expansion of a plume in a background gas of pressure comparable to that used in pulsed laser deposition (PLD) has been analyzed in terms of the model of Predtechensky and Mayorov (PM). This approach gives a relatively clear and simple description of the essential hydrodynamics during the expansion. The model also leads to an insightful treatment of the stopping behavior in dimensionless units for plumes and background gases of different atomic/molecular masses. The energetics of the plume dynamics can also be treated with this model. Experimental time-of-flight data of silver ions in a neon background gas show a fair agreement with predictions from the PM model. Finally, we discuss the validity of the model if the work done by the pressure of the background gas is neglected.
  14. Quantitative Analysis of Adulterations in Oat Flour by FT-NIR Spectroscopy, Incomplete Unbalanced Randomized Block Design, and Partial Least Squares

    Directory of Open Access Journals (Sweden)

    Ning Wang

    2014-01-01

    Full Text Available This paper developed a rapid and nondestructive method for quantitative analysis of a cheaper adulterant (wheat flour) in oat flour by NIR spectroscopy and chemometrics. Reflectance FT-NIR spectra in the range of 4000 to 12000 cm−1 of 300 oat flour objects adulterated with wheat flour were measured. The doping levels of wheat flour ranged from 5% to 50% (w/w). To ensure the generalization performance of the method, both the oat and the wheat flour samples were collected from different producing areas and an incomplete unbalanced randomized block (IURB) design was performed to include the significant variations that may be encountered in future samples. Partial least squares regression (PLSR) was used to develop calibration models for predicting the levels of wheat flour. Different preprocessing methods including smoothing, taking the second-order derivative (D2), and standard normal variate (SNV) transformation were investigated to improve the model accuracy of PLS. The root mean squared error of Monte Carlo cross-validation (RMSEMCCV) and root mean squared error of prediction (RMSEP) were 1.921 and 1.975 (%, w/w) by D2-PLS, respectively. The results indicate that NIR and chemometrics can provide a rapid method for quantitative analysis of wheat flour in oat flour.
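
    An illustrative sketch with synthetic spectra (not the study's data): a Savitzky–Golay second-derivative pretreatment followed by PLS regression to predict the adulteration level, reporting RMSEP on a held-out set.

```python
# Second-derivative pretreatment + PLS regression for an adulteration level.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(9)
n, p = 300, 400
level = rng.uniform(5, 50, n)                          # % adulterant (w/w)
base = np.sin(np.linspace(0, 6 * np.pi, p))
spectra = base + 0.01 * level[:, None] * np.cos(np.linspace(0, 3 * np.pi, p))
spectra += 0.02 * rng.normal(size=(n, p))

X = savgol_filter(spectra, window_length=11, polyorder=2, deriv=2, axis=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, level, random_state=0)

pls = PLSRegression(n_components=5)
pls.fit(X_tr, y_tr)
rmsep = np.sqrt(np.mean((pls.predict(X_te).ravel() - y_te) ** 2))
print(f"RMSEP: {rmsep:.3f} % (w/w)")
```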

  15. Numerical analysis of turbulent flow and heat transfer in a square sectioned U-bend duct by elliptic-blending second moment closure

    International Nuclear Information System (INIS)

    Shin, Jong Keun; Choi, Young Don; An, Jeong Soo

    2007-01-01

    A second moment turbulence closure using the elliptic-blending equation is introduced to analyze the turbulence and heat transfer in a square sectioned U-bend duct flow. The turbulent heat flux model based on the elliptic concept satisfies the near-wall balance between viscous diffusion, viscous dissipation and temperature-pressure gradient correlation, and also has the characteristics of approaching its respective conventional high Reynolds number model far away from the wall. Also, the traditional GGDH heat flux model is compared with the present elliptic concept-based heat flux model. The turbulent heat flux models are closely linked to the elliptic blending second moment closure which is used for the prediction of Reynolds stresses. The predicted results show their reasonable agreement with experimental data for a square sectioned U-bend duct flow field adopted in the present study

  16. A least-squares computational ''tool kit''

    International Nuclear Information System (INIS)

    Smith, D.L.

    1993-04-01

    The information assembled in this report is intended to offer a useful computational ''tool kit'' to individuals who are interested in a variety of practical applications for the least-squares method of parameter estimation. The fundamental principles of Bayesian analysis are outlined first and these are applied to development of both the simple and the generalized least-squares conditions. Formal solutions that satisfy these conditions are given subsequently. Their application to both linear and non-linear problems is described in detail. Numerical procedures required to implement these formal solutions are discussed and two utility computer algorithms are offered for this purpose (codes LSIOD and GLSIOD written in FORTRAN). Some simple, easily understood examples are included to illustrate the use of these algorithms. Several related topics are then addressed, including the generation of covariance matrices, the role of iteration in applications of least-squares procedures, the effects of numerical precision and an approach that can be pursued in developing data analysis packages that are directed toward special applications

  17. The Isotropic Radio Background and Annihilating Dark Matter

    Energy Technology Data Exchange (ETDEWEB)

    Hooper, Dan [Fermi National Accelerator Laboratory (FNAL), Batavia, IL (United States); Belikov, Alexander V. [Institut d' Astrophysique (France); Jeltema, Tesla E. [Univ. of California, Santa Cruz, CA (United States); Linden, Tim [Univ. of California, Santa Cruz, CA (United States); Profumo, Stefano [Univ. of California, Santa Cruz, CA (United States); Slatyer, Tracy R. [Princeton Univ., Princeton, NJ (United States)

    2012-11-01

    Observations by ARCADE-2 and other telescopes sensitive to low frequency radiation have revealed the presence of an isotropic radio background with a hard spectral index. The intensity of this observed background is found to exceed the flux predicted from astrophysical sources by a factor of approximately 5-6. In this article, we consider the possibility that annihilating dark matter particles provide the primary contribution to the observed isotropic radio background through the emission of synchrotron radiation from electron and positron annihilation products. For reasonable estimates of the magnetic fields present in clusters and galaxies, we find that dark matter could potentially account for the observed radio excess, but only if it annihilates mostly to electrons and/or muons, and only if it possesses a mass in the range of approximately 5-50 GeV. For such models, the annihilation cross section required to normalize the synchrotron signal to the observed excess is sigma v ~ (0.4-30) x 10^-26 cm^3/s, similar to the value predicted for a simple thermal relic (sigma v ~ 3 x 10^-26 cm^3/s). We find that in any scenario in which dark matter annihilations are responsible for the observed excess radio emission, a significant fraction of the isotropic gamma ray background observed by Fermi must result from dark matter as well.

  18. Prediction of potential compressive strength of Portland clinker from its mineralogy

    DEFF Research Database (Denmark)

    Svinning, K.; Høskuldsson, Agnar; Justnes, H.

    2010-01-01

    Based on a statistical model first applied for prediction of compressive strength up to 28 d from the microstructure of Portland cement, potential compressive strength of clinker has been predicted from its mineralogy. The prediction model was evaluated by partial least squares regression...

  19. Square Turing patterns in reaction-diffusion systems with coupled layers

    Energy Technology Data Exchange (ETDEWEB)

    Li, Jing [State Key Laboratory for Mesoscopic Physics and School of Physics, Peking University, Beijing 100871 (China); Wang, Hongli, E-mail: hlwang@pku.edu.cn, E-mail: qi@pku.edu.cn [State Key Laboratory for Mesoscopic Physics and School of Physics, Peking University, Beijing 100871 (China); Center for Quantitative Biology, Peking University, Beijing 100871 (China); Ouyang, Qi, E-mail: hlwang@pku.edu.cn, E-mail: qi@pku.edu.cn [State Key Laboratory for Mesoscopic Physics and School of Physics, Peking University, Beijing 100871 (China); Center for Quantitative Biology, Peking University, Beijing 100871 (China); The Peking-Tsinghua Center for Life Sciences, Beijing 100871 (China)

    2014-06-15

    Square Turing patterns are usually unstable in reaction-diffusion systems and are rarely observed in corresponding experiments and simulations. We report here an example of spontaneous formation of square Turing patterns with the Lengyel-Epstein model of two coupled layers. The squares are found to be a result of the resonance between two supercritical Turing modes with an appropriate ratio. Moreover, the spatiotemporal resonance of Turing modes resembles the mode-locking phenomenon. Analysis of the general amplitude equations for square patterns reveals that the fixed point corresponding to square Turing patterns is stationary when the parameters adopt appropriate values.

  20. On squares of representations of compact Lie algebras

    International Nuclear Information System (INIS)

    Zeier, Robert; Zimborás, Zoltán

    2015-01-01

    We study how tensor products of representations decompose when restricted from a compact Lie algebra to one of its subalgebras. In particular, we are interested in tensor squares which are tensor products of a representation with itself. We show in a classification-free manner that the sum of multiplicities and the sum of squares of multiplicities in the corresponding decomposition of a tensor square into irreducible representations has to strictly grow when restricted from a compact semisimple Lie algebra to a proper subalgebra. For this purpose, relevant details on tensor products of representations are compiled from the literature. Since the sum of squares of multiplicities is equal to the dimension of the commutant of the tensor-square representation, it can be determined by linear-algebra computations in a scenario where an a priori unknown Lie algebra is given by a set of generators which might not be a linear basis. Hence, our results offer a test to decide if a subalgebra of a compact semisimple Lie algebra is a proper one without calculating the relevant Lie closures, which can be naturally applied in the field of controlled quantum systems

  1. On squares of representations of compact Lie algebras

    Energy Technology Data Exchange (ETDEWEB)

    Zeier, Robert, E-mail: robert.zeier@ch.tum.de [Department Chemie, Technische Universität München, Lichtenbergstrasse 4, 85747 Garching (Germany); Zimborás, Zoltán, E-mail: zimboras@gmail.com [Department of Computer Science, University College London, Gower St., London WC1E 6BT (United Kingdom)

    2015-08-15

    We study how tensor products of representations decompose when restricted from a compact Lie algebra to one of its subalgebras. In particular, we are interested in tensor squares which are tensor products of a representation with itself. We show in a classification-free manner that the sum of multiplicities and the sum of squares of multiplicities in the corresponding decomposition of a tensor square into irreducible representations has to strictly grow when restricted from a compact semisimple Lie algebra to a proper subalgebra. For this purpose, relevant details on tensor products of representations are compiled from the literature. Since the sum of squares of multiplicities is equal to the dimension of the commutant of the tensor-square representation, it can be determined by linear-algebra computations in a scenario where an a priori unknown Lie algebra is given by a set of generators which might not be a linear basis. Hence, our results offer a test to decide if a subalgebra of a compact semisimple Lie algebra is a proper one without calculating the relevant Lie closures, which can be naturally applied in the field of controlled quantum systems.

  2. Mean square stabilization and mean square exponential stabilization of stochastic BAM neural networks with Markovian jumping parameters

    International Nuclear Information System (INIS)

    Ye, Zhiyong; Zhang, He; Zhang, Hongyu; Zhang, Hua; Lu, Guichen

    2015-01-01

    Highlights: •This paper introduces a non-conservative Lyapunov functional. •The achieved results are non-conservative and can be widely used. •The conditions are easily checked with the Matlab LMI Toolbox. •The desired state feedback controller can be well represented by the conditions. -- Abstract: This paper addresses the mean square exponential stabilization problem of stochastic bidirectional associative memory (BAM) neural networks with Markovian jumping parameters and time-varying delays. By establishing a proper Lyapunov–Krasovskii functional and combining it with the LMI technique, several sufficient conditions are derived for ensuring exponential stabilization in the mean square sense of such stochastic BAM neural networks. In addition, the achieved results are easy to verify when determining the mean square exponential stabilization of delayed BAM neural networks with Markovian jumping parameters, and they are less restrictive and less conservative than those in previous papers. Finally, numerical results are given to show the effectiveness and applicability of the achieved results.

  3. TAHRIR SQUARE: A Narrative of a Public Space

    Directory of Open Access Journals (Sweden)

    Hussam Hussein Salama

    2013-03-01

    Full Text Available This paper investigates the patterns of public discourse that occurred in Tahrir Square during the 18 days of the Egyptian Revolution. For protestors Tahrir Square became an urban utopia, a place of community engagement, collective projects, social discourse, and most importantly, freedom of speech and expression. This paper traces these forms of spatial adaptation, and the patterns of social organization and discourse that emerged in the square during that period. The paper builds on Henri Lefebvre’s interpretation of space and his three dimensional conceptualization: the perceived, the conceived, and the lived.

  4. Cosmic microwave background distortions at high frequencies

    International Nuclear Information System (INIS)

    Peter, W.; Peratt, A.L.

    1988-01-01

    The authors analyze the deviation of the cosmic background radiation spectrum from the 2.76 ± 0.02 K blackbody curve. If the cosmic background radiation is due to absorption and re-emission of synchrotron radiation from galactic-width current filaments, higher-order synchrotron modes are less thermalized than lower-order modes, causing a distortion of the blackbody curve at higher frequencies. New observations of the microwave background spectrum at short wavelengths should provide an indication of the number of synchrotron modes thermalized in this process. The deviation of the spectrum from that of a perfect blackbody can thus be correlated with astronomical observations such as filament temperatures and electron energies. The results are discussed and compared with the theoretical predictions of other models which assume the presence of intergalactic superconducting cosmic strings.

  5. Cosmic microwave background probes models of inflation

    Science.gov (United States)

    Davis, Richard L.; Hodges, Hardy M.; Smoot, George F.; Steinhardt, Paul J.; Turner, Michael S.

    1992-01-01

    Inflation creates both scalar (density) and tensor (gravity wave) metric perturbations. We find that the tensor-mode contribution to the cosmic microwave background anisotropy on large-angular scales can only exceed that of the scalar mode in models where the spectrum of perturbations deviates significantly from scale invariance. If the tensor mode dominates at large-angular scales, then the value of ΔT/T predicted on 1° scales is less than if the scalar mode dominates, and, for cold-dark-matter models, bias factors greater than 1 can be made consistent with Cosmic Background Explorer (COBE) DMR results.

  6. Perception of Length to Width Relations of City Squares

    Directory of Open Access Journals (Sweden)

    Harold T. Nefs

    2013-04-01

    Full Text Available In this paper, we focus on how people perceive the aspect ratio of city squares. Earlier research has focused on distance perception but not so much on the perceived aspect ratio of the surrounding space. Furthermore, those studies have focused on “open” spaces rather than urban areas enclosed by walls, houses and filled with people, cars, etc. In two experiments, we therefore measured, using a direct and an indirect method, the perceived aspect ratio of five city squares in the historic city center of Delft, the Netherlands. We also evaluated whether the perceived aspect ratio of city squares was affected by the position of the observer on the square. In the first experiment, participants were asked to set the aspect ratio of a small rectangle such that it matched the perceived aspect ratio of the city square. In the second experiment, participants were asked to estimate the length and width of the city square separately. In the first experiment, we found that the perceived aspect ratio was in general lower than the physical aspect ratio. However, in the second experiment, we found that the calculated ratios were close to veridical except for the most elongated city square. We conclude therefore that the outcome depends on how the measurements are performed. Furthermore, although indirect measurements are nearly veridical, the perceived aspect ratio is an underestimation of the physical aspect ratio when measured in a direct way. Moreover, the perceived aspect ratio also depends on the location of the observer. These results may be beneficial to the design of large open urban environments, and in particular to rectangular city squares.

  7. Good Filtrations and the Steinberg Square

    DEFF Research Database (Denmark)

    Kildetoft, Tobias

    that tensoring the Steinberg module with a simple module of restricted highest weight gives a module with a good filtration. This result was first proved by Andersen when the characteristic is large enough. In this dissertation, generalizations of those results, which are joint work with Daniel Nakano......, the socle completely determines how a Steinberg square decomposes. The dissertation also investigates the socle of the Steinberg square for a finite group of Lie type, again providing formulas which describe how to find the multiplicity of a simple module in the socle, given information about...

  8. Masticatory Muscle Sleep Background EMG Activity is Elevated in Myofascial TMD Patients

    Science.gov (United States)

    Raphael, Karen G.; Janal, Malvin N.; Sirois, David A.; Dubrovsky, Boris; Wigren, Pia E.; Klausner, Jack J.; Krieger, Ana C.; Lavigne, Gilles J.

    2013-01-01

    Despite theoretical speculation and strong clinical belief, recent research using laboratory polysomnographic (PSG) recording has provided new evidence that frequency of sleep bruxism (SB) masseter muscle events, including grinding or clenching of the teeth during sleep, is not increased for women with chronic myofascial temporomandibular disorder (TMD). The current case-control study compares a large sample of women suffering from chronic myofascial TMD (n=124) with a demographically matched control group without TMD (n=46) on sleep background electromyography (EMG) during a laboratory PSG study. Background EMG activity was measured as EMG root mean square (RMS) from the right masseter muscle after lights out. Sleep background EMG activity was defined as EMG RMS remaining after activity attributable to SB, other orofacial activity, other oromotor activity and movement artifacts were removed. Results indicated that median background EMG during these non-SB-event periods was significantly higher for TMD cases than for controls, with case activity exceeding control activity. Moreover, for TMD cases, background EMG was positively associated and SB event-related EMG was negatively associated with pain intensity ratings (0–10 numerical scale) on post-sleep waking. These data provide the foundation for a new focus on small, but persistent, elevations in sleep EMG activity over the course of the night as a mechanism of pain induction or maintenance. PMID:24237356

  9. Counting Triangles to Sum Squares

    Science.gov (United States)

    DeMaio, Joe

    2012-01-01

    Counting complete subgraphs of three vertices in complete graphs yields combinatorial arguments for identities involving sums of squares of integers, odd integers, even integers and sums of the triangular numbers.
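    The note's specific triangle-counting argument is not reproduced here, but one classical identity of this flavour, 1^2 + 2^2 + ... + n^2 = C(n+1,2) + 2*C(n+1,3), can be checked numerically with a short script; the binomial-coefficient form below is an assumption used only for illustration.

    from math import comb

    def sum_of_squares(n):
        return sum(k * k for k in range(1, n + 1))

    def triangle_style_identity(n):
        # C(n+1, 2) counts one family of subgraphs, 2*C(n+1, 3) another;
        # together they reproduce 1^2 + 2^2 + ... + n^2 = n(n+1)(2n+1)/6.
        return comb(n + 1, 2) + 2 * comb(n + 1, 3)

    assert all(sum_of_squares(n) == triangle_style_identity(n) for n in range(1, 200))
    print("identity holds for n = 1..199")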

  10. Influences of culture and environmental attitude on thermal, emotional and perceptual evaluations of a public square

    Science.gov (United States)

    Knez, Igor; Thorsson, Sofia

    2006-05-01

    The main objective of the present quasi-experimental study was to examine the influence of culture (Swedish vs Japanese) and environmental attitude (urban vs open-air person) on participants’ thermal, emotional and perceptual assessments of a square, within the PET (physiological equivalent temperature) comfortable interval of 18–23°C. It was predicted that persons living in different cultures with different environmental attitudes would psychologically evaluate a square differently despite similar thermal conditions. Consistent with this prediction, Japanese participants estimated the current weather as warmer than did Swedish participants and, consistent with this, they felt less thermally comfortable on the site, although participants in both countries perceived similar comfortable thermal outdoor conditions according to the PET index. Compared to the Japanese, the Swedes estimated both the current weather and the site as windier and colder, indicating a consistency in weather assessment on calm-windy and warm-cold scales in participants in both cultures. Furthermore, Swedish participants felt more glad and calm on the site and, in line with their character (more glad than gloomy), they estimated the square as more beautiful and pleasant than did Japanese participants. All this indicates that thermal, emotional and perceptual assessments of a physical place may be intertwined with psychological schema-based and socio-cultural processes, rather than fixed by general thermal indices developed in line with physiological heat balance models. In consequence, this implies that thermal comfort indices may not be applicable in different cultural/climate zones without modifications, and that they may not be appropriate if we do not take into account the psychological processes involved in environmental assessment.

  11. Model output statistics applied to wind power prediction

    Energy Technology Data Exchange (ETDEWEB)

    Joensen, A; Giebel, G; Landberg, L [Risoe National Lab., Roskilde (Denmark); Madsen, H; Nielsen, H A [The Technical Univ. of Denmark, Dept. of Mathematical Modelling, Lyngby (Denmark)

    1999-03-01

    Being able to predict the output of a wind farm online for a day or two in advance has significant advantages for utilities, such as a better possibility to schedule fossil fuelled power plants and a better position on electricity spot markets. In this paper prediction methods based on Numerical Weather Prediction (NWP) models are considered. The spatial resolution used in NWP models implies that these predictions are not valid locally at a specific wind farm. Furthermore, due to the non-stationary nature and complexity of the processes in the atmosphere, and occasional changes of NWP models, the deviation between the predicted and the measured wind will be time dependent. If observational data is available, and if the deviation between the predictions and the observations exhibits systematic behavior, this should be corrected for; if statistical methods are used, this approach is usually referred to as MOS (Model Output Statistics). The influence of atmospheric turbulence intensity, topography, prediction horizon length and auto-correlation of wind speed and power is considered, and to take the time-variations into account, adaptive estimation methods are applied. Three estimation techniques are considered and compared: extended Kalman filtering, recursive least squares, and a new modified recursive least squares algorithm. (au) EU-JOULE-3. 11 refs.
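    As a rough illustration of the adaptive MOS idea described above, the sketch below applies a standard exponentially weighted recursive least squares update to correct NWP wind-speed forecasts against observations; the forgetting factor, the linear intercept-plus-slope model and the synthetic data are illustrative assumptions, not the estimator used in the paper.

    import numpy as np

    class RecursiveLeastSquares:
        """Exponentially weighted RLS for a linear MOS correction y ~ theta . x."""

        def __init__(self, n_params, forgetting=0.995, delta=1e3):
            self.theta = np.zeros(n_params)          # regression coefficients
            self.P = delta * np.eye(n_params)        # inverse information matrix
            self.lam = forgetting                    # forgetting factor (< 1 tracks drift)

        def update(self, x, y):
            x = np.asarray(x, dtype=float)
            Px = self.P @ x
            gain = Px / (self.lam + x @ Px)
            error = y - self.theta @ x               # one-step prediction error
            self.theta = self.theta + gain * error
            self.P = (self.P - np.outer(gain, Px)) / self.lam
            return error

    # Toy usage: correct a biased NWP wind-speed forecast towards the observed value.
    rng = np.random.default_rng(0)
    rls = RecursiveLeastSquares(n_params=2)
    for _ in range(500):
        nwp_speed = rng.uniform(3.0, 15.0)                     # forecast wind speed [m/s]
        observed = 0.8 * nwp_speed - 1.5 + rng.normal(0, 0.3)  # "true" relation + noise
        rls.update([1.0, nwp_speed], observed)                 # intercept + slope model
    print(rls.theta)   # should approach [-1.5, 0.8]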

  12. Predictive ability of machine learning methods for massive crop yield prediction

    Directory of Open Access Journals (Sweden)

    Alberto Gonzalez-Sanchez

    2014-04-01

    Full Text Available An important issue for agricultural planning purposes is the accurate yield estimation for the numerous crops involved in the planning. Machine learning (ML) is an essential approach for achieving practical and effective solutions for this problem. Many comparisons of ML methods for yield prediction have been made, seeking the most accurate technique. Generally, the number of evaluated crops and techniques is too low and does not provide enough information for agricultural planning purposes. This paper compares the predictive accuracy of ML and linear regression techniques for crop yield prediction in ten crop datasets. Multiple linear regression, M5-Prime regression trees, perceptron multilayer neural networks, support vector regression and k-nearest neighbor methods were ranked. Four accuracy metrics were used to validate the models: the root mean square error (RMSE), root relative square error (RRSE), normalized mean absolute error (MAE), and correlation factor (R). Real data of an irrigation zone of Mexico were used for building the models. Models were tested with samples of two consecutive years. The results show that M5-Prime and k-nearest neighbor techniques obtain the lowest average RMSE errors (5.14 and 4.91), the lowest RRSE errors (79.46% and 79.78%), the lowest average MAE errors (18.12% and 19.42%), and the highest average correlation factors (0.41 and 0.42). Since M5-Prime achieves the largest number of crop yield models with the lowest errors, it is a very suitable tool for massive crop yield prediction in agricultural planning.
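    For reference, the four validation metrics quoted above can be computed as in the short sketch below; the formulas are standard textbook definitions and are assumed rather than copied from the paper (in particular the normalisation of the MAE is an assumption).

    import numpy as np

    def regression_metrics(y_true, y_pred):
        y_true = np.asarray(y_true, dtype=float)
        y_pred = np.asarray(y_pred, dtype=float)
        err = y_pred - y_true
        rmse = np.sqrt(np.mean(err ** 2))                         # root mean square error
        rrse = np.sqrt(np.sum(err ** 2) /
                       np.sum((y_true - y_true.mean()) ** 2))     # root relative squared error
        nmae = np.mean(np.abs(err)) / np.mean(np.abs(y_true))     # normalized mean absolute error
        r = np.corrcoef(y_true, y_pred)[0, 1]                     # correlation factor
        return {"RMSE": rmse, "RRSE": rrse, "nMAE": nmae, "R": r}

    print(regression_metrics([2.0, 3.5, 5.0, 4.2], [2.3, 3.1, 4.8, 4.6]))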

  13. Spontaneous Formation of A Nanotube From A Square Ag Nanowire: An Atomistic View

    Science.gov (United States)

    Konuk Onat, Mine; Durukanoglu, Sondan

    2012-02-01

    We have performed molecular static calculations to investigate the recently observed phenomenon of the spontaneous formation of a nanotube from a regular, square Ag nanowire[1]. In the simulations, atoms are allowed to interact via the model potential obtained from the modified embedded atom method. Our simulations predict that this particular type of structural phase transformation is controlled by the nature of applied strain, length of the wire and initial cross-sectional shape. For such a perfect structural transformation, the axially oriented fcc nanowire needs (1) to be formed by stacking A and B layers of an fcc crystal, both possessing the geometry of two interpenetrating one-lattice-parameter-wide squares, containing four atoms each, (2) to have an optimum length of eight layers, and (3) to be exposed to a combination of low and high stress along the length direction. The results further offer insights into atomistic nature of this specific structural transformation into a nanotube with the smallest possible cross-section. [1] M.J. Lagos et al., Nature Nanotech. 4, 149 (2009).

  14. Application of Artificial Neural Networks in Canola Crop Yield Prediction

    Directory of Open Access Journals (Sweden)

    S. J. Sajadi

    2014-02-01

    Full Text Available Crop yield prediction has an important role in agricultural policies such as specification of the crop price. Crop yield prediction research has been based on regression analysis. In this research canola yield was predicted using Artificial Neural Networks (ANN) using 11 crop years of climate data (1998-2009) in the Gonbad-e-Kavoos region of Golestan province. ANN inputs were mean weekly rainfall, mean weekly temperature, mean weekly relative humidity and mean weekly sunshine hours, and the ANN output was canola yield (kg/ha). Multi-Layer Perceptron networks (MLP) with the Levenberg-Marquardt backpropagation learning algorithm were used for crop yield prediction, and the Root Mean Square Error (RMSE) and square of the Correlation Coefficient (R2) criteria were used to evaluate the performance of the ANN. The obtained results show that the 13-20-1 network has the lowest RMSE, equal to 101.235, and the maximum value of R2, equal to 0.997, and is suitable for predicting canola yield with climate factors.
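    A rough sketch of the workflow described above is given below; scikit-learn's MLPRegressor is used as a stand-in because the Levenberg-Marquardt training algorithm of the paper is not available there, and the hidden-layer size, feature count and synthetic data are illustrative assumptions.

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)

    # Illustrative weekly climate features (13 inputs, as in the 13-20-1 topology)
    # and a synthetic canola yield (kg/ha); real station data would replace this.
    X = rng.normal(size=(200, 13))
    y = 1500 + 120 * X[:, 0] - 80 * X[:, 1] + rng.normal(0, 50, size=200)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = MLPRegressor(hidden_layer_sizes=(20,), activation="tanh",
                         solver="lbfgs", max_iter=5000, random_state=0)
    model.fit(X_train, y_train)

    pred = model.predict(X_test)
    rmse = np.sqrt(np.mean((pred - y_test) ** 2))
    r2 = np.corrcoef(pred, y_test)[0, 1] ** 2        # squared correlation coefficient
    print(f"RMSE = {rmse:.1f} kg/ha, R^2 = {r2:.3f}")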

  15. Improvement of Risk Prediction After Transcatheter Aortic Valve Replacement by Combining Frailty With Conventional Risk Scores.

    Science.gov (United States)

    Schoenenberger, Andreas W; Moser, André; Bertschi, Dominic; Wenaweser, Peter; Windecker, Stephan; Carrel, Thierry; Stuck, Andreas E; Stortecky, Stefan

    2018-02-26

    This study sought to evaluate whether frailty improves mortality prediction in combination with the conventional scores. European System for Cardiac Operative Risk Evaluation (EuroSCORE) or Society of Thoracic Surgeons (STS) score have not been evaluated in combined models with frailty for mortality prediction after transcatheter aortic valve replacement (TAVR). This prospective cohort comprised 330 consecutive TAVR patients ≥70 years of age. Conventional scores and a frailty index (based on assessment of cognition, mobility, nutrition, and activities of daily living) were evaluated to predict 1-year all-cause mortality using Cox proportional hazards regression (providing hazard ratios [HRs] with confidence intervals [CIs]) and measures of test performance (providing likelihood ratio [LR] chi-square test statistic and C-statistic [CS]). All risk scores were predictive of the outcome (EuroSCORE, HR: 1.90 [95% CI: 1.45 to 2.48], LR chi-square test statistic 19.29, C-statistic 0.67; STS score, HR: 1.51 [95% CI: 1.21 to 1.88], LR chi-square test statistic 11.05, C-statistic 0.64; frailty index, HR: 3.29 [95% CI: 1.98 to 5.47], LR chi-square test statistic 22.28, C-statistic 0.66). A combination of the frailty index with either EuroSCORE (LR chi-square test statistic 38.27, C-statistic 0.72) or STS score (LR chi-square test statistic 28.71, C-statistic 0.68) improved mortality prediction. The frailty index accounted for 58.2% and 77.6% of the predictive information in the combined model with EuroSCORE and STS score, respectively. Net reclassification improvement and integrated discrimination improvement confirmed that the added frailty index improved risk prediction. This is the first study showing that the assessment of frailty significantly enhances prediction of 1-year mortality after TAVR in combined risk models with conventional risk scores and relevantly contributes to this improvement. Copyright © 2018 American College of Cardiology Foundation

  16. String pair production in non homogeneous backgrounds

    Energy Technology Data Exchange (ETDEWEB)

    Bolognesi, S. [Department of Physics “E. Fermi” University of Pisa, and INFN - Sezione di Pisa,Largo Pontecorvo, 3, Ed. C, 56127 Pisa (Italy); Rabinovici, E. [Racah Institute of Physics, The Hebrew University of Jerusalem,91904 Jerusalem (Israel); Tallarita, G. [Departamento de Ciencias, Facultad de Artes Liberales,Universidad Adolfo Ibáñez, Santiago 7941169 (Chile)

    2016-04-28

    We consider string pair production in non homogeneous electric backgrounds. We study several particular configurations which can be addressed with the Euclidean world-sheet instanton technique, the analogue of the world-line instanton for particles. In the first case the string is suspended between two D-branes in flat space-time, in the second case the string lives in AdS and terminates on one D-brane (this realizes the holographic Schwinger effect). In some regions of parameter space the result is well approximated by the known analytical formulas, either the particle pair production in non-homogeneous background or the string pair production in homogeneous background. In other cases we see effects which are intrinsically stringy and related to the non-homogeneity of the background. The pair production is enhanced already for particles in time dependent electric field backgrounds. The string nature enhances this even further. For spatially varying electric background fields the string pair production is less suppressed than the rate of particle pair production. We discuss in some detail how the critical field is affected by the non-homogeneity, for both time and space dependent electric field backgrounds. We also comment on what could be an interesting new prediction for the small field limit. The third case we consider is pair production in holographic confining backgrounds with homogeneous and non-homogeneous fields.

  17. String pair production in non homogeneous backgrounds

    International Nuclear Information System (INIS)

    Bolognesi, S.; Rabinovici, E.; Tallarita, G.

    2016-01-01

    We consider string pair production in non homogeneous electric backgrounds. We study several particular configurations which can be addressed with the Euclidean world-sheet instanton technique, the analogue of the world-line instanton for particles. In the first case the string is suspended between two D-branes in flat space-time, in the second case the string lives in AdS and terminates on one D-brane (this realizes the holographic Schwinger effect). In some regions of parameter space the result is well approximated by the known analytical formulas, either the particle pair production in non-homogeneous background or the string pair production in homogeneous background. In other cases we see effects which are intrinsically stringy and related to the non-homogeneity of the background. The pair production is enhanced already for particles in time dependent electric field backgrounds. The string nature enhances this even further. For spatially varying electric background fields the string pair production is less suppressed than the rate of particle pair production. We discuss in some detail how the critical field is affected by the non-homogeneity, for both time and space dependent electric field backgrounds. We also comment on what could be an interesting new prediction for the small field limit. The third case we consider is pair production in holographic confining backgrounds with homogeneous and non-homogeneous fields.

  18. On root mean square approximation by exponential functions

    OpenAIRE

    Sharipov, Ruslan

    2014-01-01

    The problem of root mean square approximation of a square integrable function by finite linear combinations of exponential functions is considered. It is subdivided into linear and nonlinear parts. The linear approximation problem is solved. Then the nonlinear problem is studied in some particular example.
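    The linear sub-problem mentioned above (fitting the coefficients once the exponents are fixed) reduces to an ordinary least-squares problem on a discretised grid; the exponents, grid and target function in the sketch below are chosen purely for illustration and are not taken from the paper.

    import numpy as np

    # Approximate f(t) = 1/(1+t) on [0, 1] by c1*exp(-l1 t) + c2*exp(-l2 t) + c3*exp(-l3 t)
    # for fixed exponents; only the coefficients c_k are fitted (the linear part).
    t = np.linspace(0.0, 1.0, 400)
    f = 1.0 / (1.0 + t)
    exponents = np.array([0.5, 1.0, 2.0])           # assumed fixed in the linear sub-problem

    design = np.exp(-np.outer(t, exponents))        # columns exp(-lambda_k * t)
    coeffs, *_ = np.linalg.lstsq(design, f, rcond=None)

    residual = design @ coeffs - f
    rms_error = np.sqrt(np.mean(residual ** 2))     # discrete root mean square error
    print(coeffs, rms_error)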

  19. Cosmic microwave background observables of small field models of inflation

    International Nuclear Information System (INIS)

    Ben-Dayan, Ido; Brustein, Ram

    2010-01-01

    We construct a class of single small field models of inflation that can predict, contrary to popular wisdom, an observable gravitational wave signal in the cosmic microwave background anisotropies. The spectral index, its running, the tensor to scalar ratio and the number of e-folds can cover all the parameter space currently allowed by cosmological observations. A unique feature of models in this class is their ability to predict a negative spectral index running in accordance with recent cosmic microwave background observations. We discuss the new class of models from an effective field theory perspective and show that if the dimensionless trilinear coupling is small, as required for consistency, then the observed spectral index running implies a high scale of inflation and hence an observable gravitational wave signal. All the models share a distinct prediction of higher power at smaller scales, making them easy targets for detection

  20. Radix-16 Combined Division and Square Root Unit

    DEFF Research Database (Denmark)

    Nannarelli, Alberto

    2011-01-01

    Division and square root, based on the digitrecurrence algorithm, can be implemented in a combined unit. Several implementations of combined division/square root units have been presented mostly for radices 2 and 4. Here, we present a combined radix-16 unit obtained by overlapping two radix-4...... result digit selection functions, as it is normally done for division only units. The latency of the unit is reduced by retiming and low power methods are applied as well. The proposed unit is compared to a radix-4 combined division/square root unit, and to a radix-16 unit, obtained by cascading two...
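    The combined unit above is built on the digit-recurrence algorithm; as a plain illustration of that recurrence (not of the radix-16 overlapped design itself), the sketch below performs radix-2 restoring division of normalised fractional operands.

    def digit_recurrence_divide(x, d, bits=16):
        """Radix-2 restoring division for 0 <= x < d, one quotient bit per iteration.

        Recurrence: w[j+1] = 2*w[j] - q[j+1]*d, with q[j+1] in {0, 1} chosen so that
        the partial remainder stays in [0, d). Higher radices (4, 16, ...) retire
        more quotient bits per step at the cost of a harder digit selection."""
        assert 0 <= x < d
        w, q = x, 0
        for _ in range(bits):
            w *= 2
            q <<= 1
            if w >= d:          # digit selection: compare shifted remainder with divisor
                w -= d
                q |= 1
        return q / (1 << bits)  # quotient as a binary fraction

    print(digit_recurrence_divide(0.3, 0.7))   # ~0.428571 = 0.3 / 0.7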

  1. Portable NIR spectroscopy predicting soluble solids content of pears based on LEDs

    Energy Technology Data Exchange (ETDEWEB)

    Liu Yande; Liu Wei; Sun Xudong; Gao Rongjie; Pan Yuanyuan; Ouyang Aiguo, E-mail: jxliuyd@163.com [School of Mechatronics Engineering, East China Jiaotong University, Changbei Open and Developing District, Nanchang, 330013 (China)

    2011-01-01

    A portable near-infrared (NIR) instrument equipped with light emitting diodes (LEDs) was developed for predicting the soluble solids content (SSC) of pears. NIR spectra were collected on the calibration and prediction sets (145:45). Relationships between spectra and SSC were developed by multivariate linear regression (MLR), partial least squares (PLS) and artificial neural networks (ANNs) in the calibration set. The 45 unknown pears were used to evaluate the performance of the models in terms of root mean square errors of prediction (RMSEP) and correlation coefficients (r). The best result was obtained by PLS with an RMSEP of 0.62 °Brix and r of 0.82. The results showed that the SSC of pears could be predicted by the portable NIR instrument.

  2. Portable NIR spectroscopy predicting soluble solids content of pears based on LEDs

    International Nuclear Information System (INIS)

    Liu Yande; Liu Wei; Sun Xudong; Gao Rongjie; Pan Yuanyuan; Ouyang Aiguo

    2011-01-01

    A portable near-infrared (NIR) instrument equipped with light emitting diodes (LEDs) was developed for predicting the soluble solids content (SSC) of pears. NIR spectra were collected on the calibration and prediction sets (145:45). Relationships between spectra and SSC were developed by multivariate linear regression (MLR), partial least squares (PLS) and artificial neural networks (ANNs) in the calibration set. The 45 unknown pears were used to evaluate the performance of the models in terms of root mean square errors of prediction (RMSEP) and correlation coefficients (r). The best result was obtained by PLS with an RMSEP of 0.62 °Brix and r of 0.82. The results showed that the SSC of pears could be predicted by the portable NIR instrument.
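    A compact sketch of the PLS calibration/prediction step described in these two records, using scikit-learn; the spectra below are synthetic placeholders and the component count is an assumption, so the code only illustrates the workflow (calibration set, prediction set, RMSEP and r).

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(2)

    # Placeholder "spectra": 190 samples x 64 LED wavelengths, with SSC (degrees Brix)
    # linearly encoded in a few bands plus noise. Real NIR spectra would replace this.
    n_samples, n_bands = 190, 64
    spectra = rng.normal(size=(n_samples, n_bands))
    ssc = 11.0 + 1.5 * spectra[:, 10] - 0.8 * spectra[:, 30] + rng.normal(0, 0.3, n_samples)

    # 145 calibration / 45 prediction split, as in the records above.
    X_cal, X_pred = spectra[:145], spectra[145:]
    y_cal, y_pred_true = ssc[:145], ssc[145:]

    pls = PLSRegression(n_components=5)
    pls.fit(X_cal, y_cal)
    y_hat = pls.predict(X_pred).ravel()

    rmsep = np.sqrt(np.mean((y_hat - y_pred_true) ** 2))
    r = np.corrcoef(y_hat, y_pred_true)[0, 1]
    print(f"RMSEP = {rmsep:.2f} Brix, r = {r:.2f}")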

  3. Iterative least-squares solvers for the Navier-Stokes equations

    Energy Technology Data Exchange (ETDEWEB)

    Bochev, P. [Univ. of Texas, Arlington, TX (United States)

    1996-12-31

    In recent years, finite element methods of least-squares type have attracted considerable attention from both mathematicians and engineers. This interest has been motivated, to a large extent, by several valuable analytic and computational properties of least-squares variational principles. In particular, finite element methods based on such principles circumvent the Ladyzhenskaya-Babuska-Brezzi condition and lead to symmetric and positive definite algebraic systems. Thus, it is not surprising that the numerical solution of fluid flow problems has been among the most promising and successful applications of least-squares methods. In this context least-squares methods offer significant theoretical and practical advantages in the algorithmic design, which makes the resulting methods suitable, among other things, for large-scale numerical simulations.
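    The key algebraic point above, that least-squares formulations lead to symmetric positive definite systems, can be seen on a toy nonsymmetric discretisation: minimising ||Au - f||^2 gives the normal equations A^T A u = A^T f, which are SPD and amenable to conjugate gradients. The sketch below is only this generic illustration, not a least-squares finite element discretisation of the Navier-Stokes equations; the advection-diffusion matrix and its parameters are assumptions chosen to produce a nonsymmetric example.

    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import cg

    # Nonsymmetric 1D advection-diffusion matrix (upwind convection term).
    n = 50
    h = 1.0 / (n + 1)
    peclet = 25.0
    diffusion = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h**2
    convection = peclet * diags([-1.0, 1.0], [-1, 0], shape=(n, n)) / h
    A = (diffusion + convection).tocsr()            # not symmetric

    f = np.ones(n)

    # Least-squares reformulation: minimise ||A u - f||^2  =>  (A^T A) u = A^T f,
    # a symmetric positive definite system, so conjugate gradients applies.
    normal_matrix = (A.T @ A).tocsr()
    rhs = A.T @ f

    u, info = cg(normal_matrix, rhs, maxiter=5000)
    print(info, np.linalg.norm(A @ u - f) / np.linalg.norm(f))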

  4. Gamma-Ray Background Variability in Mobile Detectors

    Science.gov (United States)

    Aucott, Timothy John

    This is accomplished by making many hours of background measurements with a truck-mounted system, which utilizes high-purity germanium detectors for spectroscopy and sodium iodide detectors for coded aperture imaging. This system also utilizes various peripheral sensors, such as panoramic cameras, laser ranging systems, global positioning systems, and a weather station to provide context for the gamma-ray data. About three hundred hours of data were taken in the San Francisco Bay Area, covering a wide variety of environments that might be encountered in operational scenarios. These measurements were used in a source injection study to evaluate the sensitivity of different algorithms (imaging and spectroscopy) and hardware (sodium iodide and high-purity germanium detectors). These measurements confirm that background distributions in large, mobile detector systems are dominated by systematic, not statistical, variations, and both spectroscopy and imaging were found to substantially reduce this variability. Spectroscopy performed better than the coded aperture for the given scintillator array (one square meter of sodium iodide) for a variety of sources and geometries. By modeling the statistical and systematic uncertainties of the background, the data can be sampled to simulate the performance of a detector array of arbitrary size and resolution. With a larger array or lower resolution detectors, however, imaging was better able to compensate for background variability.

  5. Evaluation of multiple protein docking structures using correctly predicted pairwise subunits

    Directory of Open Access Journals (Sweden)

    Esquivel-Rodríguez Juan

    2012-03-01

    Full Text Available Abstract Background Many functionally important proteins in a cell form complexes with multiple chains. Therefore, computational prediction of multiple protein complexes is an important task in bioinformatics. In the development of multiple protein docking methods, it is important to establish a metric for evaluating prediction results in a reasonable and practical fashion. However, since only a few works have been done in developing methods for multiple protein docking, there is no study that investigates how accurate structural models of multiple protein complexes should be to allow scientists to gain biological insights. Methods We generated a series of predicted models (decoys) of various accuracies by our multiple protein docking pipeline, Multi-LZerD, for three multi-chain complexes with 3, 4, and 6 chains. We analyzed the decoys in terms of the number of correctly predicted pair conformations in the decoys. Results and conclusion We found that pairs of chains with the correct mutual orientation exist even in the decoys with a large overall root mean square deviation (RMSD) to the native. Therefore, in addition to a global structure similarity measure, such as the global RMSD, the quality of models for multiple chain complexes can be better evaluated by using the local measurement, the number of chain pairs with correct mutual orientation. We termed the fraction of correctly predicted pairs (RMSD at the interface of less than 4.0 Å) as fpair and propose to use it for evaluation of the accuracy of multiple protein docking.
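    The fpair measure proposed above is straightforward to compute once pairwise interface RMSDs are available; the sketch below uses a hypothetical dictionary of interface RMSDs standing in for real docking output.

    def fraction_correct_pairs(interface_rmsd, cutoff=4.0):
        """fpair: fraction of chain pairs whose interface RMSD is below the cutoff (angstroms)."""
        if not interface_rmsd:
            raise ValueError("no chain pairs supplied")
        correct = sum(1 for rmsd in interface_rmsd.values() if rmsd < cutoff)
        return correct / len(interface_rmsd)

    # Hypothetical decoy of a 4-chain complex: interface RMSD for each chain pair.
    decoy_pairs = {("A", "B"): 1.8, ("A", "C"): 6.5, ("A", "D"): 3.2,
                   ("B", "C"): 2.9, ("B", "D"): 7.1, ("C", "D"): 12.4}
    print(fraction_correct_pairs(decoy_pairs))   # 3 of 6 pairs below 4.0 A -> 0.5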

  6. Power Efficient Division and Square Root Unit

    DEFF Research Database (Denmark)

    Liu, Wei; Nannarelli, Alberto

    2012-01-01

    Although division and square root are not frequent operations, most processors implement them in hardware to not compromise the overall performance. Two classes of algorithms implement division or square root: digit-recurrence and multiplicative (e.g., Newton-Raphson) algorithms. Previous work....... The proposed unit is compared to similar solutions based on the digit-recurrence algorithm and it is compared to a unit based on the multiplicative Newton-Raphson algorithm....
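    For contrast with the digit-recurrence class, the multiplicative class mentioned above refines an initial guess with a few multiply-add iterations; the sketch below shows the standard Newton-Raphson reciprocal square root iteration y <- y*(3 - m*y*y)/2. The range reduction and constant seed are illustrative assumptions; a hardware unit would typically seed from a small lookup table.

    def newton_raphson_sqrt(x, iterations=6):
        """Approximate sqrt(x) with the multiplicative Newton-Raphson iteration.

        The reciprocal square root of the reduced argument m in [1, 4) is refined by
        y <- y * (3 - m*y*y) / 2, then the result is rescaled: sqrt(x) = 2**e * m * y."""
        assert x > 0
        e, m = 0, x
        while m >= 4.0:          # range reduction: x = m * 4**e with m in [1, 4)
            m /= 4.0
            e += 1
        while m < 1.0:
            m *= 4.0
            e -= 1
        y = 0.5                  # crude seed for 1/sqrt(m)
        for _ in range(iterations):
            y = y * (3.0 - m * y * y) / 2.0   # quadratically convergent multiplicative step
        return (2.0 ** e) * m * y

    for value in (0.01, 0.25, 2.0, 9.0, 100.0):
        print(value, newton_raphson_sqrt(value))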

  7. Square and bow-tie configurations in the cyclic evasion problem

    Science.gov (United States)

    Arnold, M. D.; Golich, M.; Grim, A.; Vargas, L.; Zharnitsky, V.

    2017-05-01

    Cyclic evasion of four agents on the plane is considered. There are two stationary shapes of configurations: square and degenerate bow-tie. The bow-tie is asymptotically attracting while the square is of focus-center type. Normal form analysis shows that square is nonlinearly unstable. The stable manifold consists of parallelograms that all converge to the square configuration. Based on these observations and numerical simulations, it is conjectured that any non-parallelogram non-degenerate configuration converges to the bow-tie.
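    A simple way to explore these configurations numerically is to integrate a unit-speed cyclic evasion law in which agent i moves directly away from agent i+1 (indices mod 4); this law, the explicit-Euler step size and the random start below are assumptions made only for illustration, and the limiting shape can then be inspected against the square and bow-tie described above.

    import numpy as np

    def cyclic_evasion(positions, steps=20000, dt=1e-3):
        """Integrate dx_i/dt = (x_i - x_{i+1}) / |x_i - x_{i+1}| with explicit Euler."""
        x = np.array(positions, dtype=float)
        for _ in range(steps):
            diff = x - np.roll(x, -1, axis=0)                  # x_i - x_{i+1} (cyclic)
            norms = np.linalg.norm(diff, axis=1, keepdims=True)
            x = x + dt * diff / norms                          # each agent flees the next one
        return x

    rng = np.random.default_rng(3)
    final = cyclic_evasion(rng.normal(size=(4, 2)))

    # Normalised consecutive distances of the final configuration: a square limit
    # would give four equal values; deviations indicate convergence to another shape.
    sides = np.linalg.norm(final - np.roll(final, -1, axis=0), axis=1)
    print(sides / sides.sum())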

  8. Background Noise Analysis in a Few-Photon-Level Qubit Memory

    Science.gov (United States)

    Mittiga, Thomas; Kupchak, Connor; Jordaan, Bertus; Namazi, Mehdi; Nolleke, Christian; Figeroa, Eden

    2014-05-01

    We have developed an Electromagnetically Induced Transparency based polarization qubit memory. The device is composed of a dual-rail probe field polarization setup collinear with an intense control field to store and retrieve any arbitrary polarization state by addressing a Λ-type energy level scheme in a 87Rb vapor cell. To achieve a signal-to-background ratio at the few photon level sufficient for polarization tomography of the retrieved state, the intense control field is filtered out through an etalon filtering system. We have developed an analytical model predicting the influence of the signal-to-background ratio on the fidelities and compared it to experimental data. Experimentally measured global fidelities have been found to follow closely the theoretical prediction as signal-to-background decreases. These results suggest the plausibility of employing room temperature memories to store photonic qubits at the single photon level and for future applications in long distance quantum communication schemes.

  9. Some results on square-free colorings of graphs

    DEFF Research Database (Denmark)

    Barat, Janos

    2004-01-01

    on the vertices or edges of a path. Conversely one can form sequences from a vertex or edge coloring of a graph in different ways. Thus there are several possibilities to generalize the square-free concept to graphs. Following Alon, Grytczuk, Haluszczak, Riordan and Bresar, Klavzar we study several so called...... square-free graph parameters, and answer some questions they posed. The main result is that the class of k-trees has bounded square-free vertex coloring parameter. Thus we can color the vertices of a k-tree using O(c^k) colors if c>6 such that the color sequence on any path is square......-free. It is conjectured that a similar phenomenon holds for planar graphs, so a finite number of colors are enough. We support this conjecture by showing that this number is at most 12 for outerplanar graphs. On the other hand we prove that some outerplanar graphs require at least 7 colors. Using this latter we construct...
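    The underlying combinatorial notion is that of a square-free sequence, one containing no block immediately repeated; checking this property for the colour sequence along a path is straightforward, as the brute-force sketch below shows (adequate for short paths, and purely illustrative).

    def is_square_free(sequence):
        """Return True if no contiguous block XX (a "square") occurs in the sequence."""
        seq = list(sequence)
        n = len(seq)
        for start in range(n):
            for half in range(1, (n - start) // 2 + 1):
                if seq[start:start + half] == seq[start + half:start + 2 * half]:
                    return False
        return True

    print(is_square_free("abcacb"))    # True: no block is immediately repeated
    print(is_square_free("abcabc"))    # False: the block "abc" repeats as a square
    print(is_square_free("abab"))      # False: "ab" repeats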

  10. The current strain distribution in the North China Basin of eastern China by least-squares collocation

    Science.gov (United States)

    Wu, J. C.; Tang, H. W.; Chen, Y. Q.; Li, Y. X.

    2006-07-01

    In this paper, the velocities of 154 stations obtained in the 2001 and 2003 GPS survey campaigns are used to formulate a continuous velocity field by the least-squares collocation method. The strain rate field obtained by the least-squares collocation method shows clearer deformation patterns than that of the conventional discrete triangle method. The significant deformation zones obtained are mainly located in three places: to the north of Tangshan, between Tianjin and Shijiazhuang, and to the north of Datong, which agree with the Holocene active deformation zones obtained by geological investigations. The maximum shear strain rate is located at latitude 38.6°N and longitude 116.8°E, with a magnitude of 0.13 ppm/a. The strain rate field obtained can be used for earthquake prediction research in the North China Basin.

  11. Salt-induced square prism Pd microtubes and their ethanol electrocatalysis properties

    International Nuclear Information System (INIS)

    Jiang, Kunpeng; Ma, Shenghua; Wang, Yinan; Zhang, Ying; Han, Xiaojun

    2017-01-01

    Highlights: • A simple method is established to fabricate square prism Pd microtubes. • The novel square prism Pd microtubes are based on a salt-induced aggregation event. • The surface of the square prism tubes converts from cataphracted nanosheets to spheres after calcination treatment. • The square prism pure Pd tubes show excellent electrocatalytic activity towards ethanol oxidation. - Abstract: The synthesis of square prism tubes is always challenging because of their thermodynamic and dynamical instability. We demonstrate a simple method using Pd²⁺-doped PoPD oligomers as building blocks to assemble 1D square prism metal-organic microtubes, which consist of cataphracted nanosheets on the surfaces. After high-temperature treatment, the microtubes became square prism Pd tubes with a cross-section size of 3 μm. The pure Pd microtubes showed excellent catalytic activity towards the electro-oxidation of ethanol. Their electrochemically active surface area is 48.2 m² g⁻¹, which indicates that the square prism Pd tubes have great potential in the field of fuel cells.

  12. Salt-induced square prism Pd microtubes and their ethanol electrocatalysis properties

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Kunpeng; Ma, Shenghua; Wang, Yinan; Zhang, Ying; Han, Xiaojun, E-mail: hanxiaojun@hit.edu.cn

    2017-05-01

    Highlights: • A simple method is established to fabricate square prism Pd microtubes. • The novel square prism Pd microtubes are based on a salt-induced aggregation event. • The surface of the square prism tubes converts from cataphracted nanosheets to spheres after calcination treatment. • The square prism pure Pd tubes show excellent electrocatalytic activity towards ethanol oxidation. - Abstract: The synthesis of square prism tubes is always challenging because of their thermodynamic and dynamical instability. We demonstrate a simple method using Pd²⁺-doped PoPD oligomers as building blocks to assemble 1D square prism metal-organic microtubes, which consist of cataphracted nanosheets on the surfaces. After high-temperature treatment, the microtubes became square prism Pd tubes with a cross-section size of 3 μm. The pure Pd microtubes showed excellent catalytic activity towards the electro-oxidation of ethanol. Their electrochemically active surface area is 48.2 m² g⁻¹, which indicates that the square prism Pd tubes have great potential in the field of fuel cells.

  13. Subquantum nonlocal correlations induced by the background random field

    Energy Technology Data Exchange (ETDEWEB)

    Khrennikov, Andrei, E-mail: Andrei.Khrennikov@lnu.s [International Center for Mathematical Modelling in Physics and Cognitive Sciences, Linnaeus University, Vaexjoe (Sweden); Institute of Information Security, Russian State University for Humanities, Moscow (Russian Federation)

    2011-10-15

    We developed a purely field model of microphenomena-prequantum classical statistical field theory (PCSFT). This model not only reproduces important probabilistic predictions of quantum mechanics (QM) including correlations for entangled systems, but also gives a possibility to go beyond QM, i.e. to make predictions of phenomena that could be observed at the subquantum level. In this paper, we discuss one such prediction-the existence of nonlocal correlations between prequantum random fields corresponding to all quantum systems. (And by PCSFT, quantum systems are represented by classical Gaussian random fields and quantum observables by quadratic forms of these fields.) The source of these correlations is the common background field. Thus all prequantum random fields are 'entangled', but in the sense of classical signal theory. On the one hand, PCSFT demystifies quantum nonlocality by reducing it to nonlocal classical correlations based on the common random background. On the other hand, it demonstrates total generality of such correlations. They exist even for distinguishable quantum systems in factorizable states (by PCSFT terminology-for Gaussian random fields with covariance operators corresponding to factorizable quantum states).

  14. Subquantum nonlocal correlations induced by the background random field

    International Nuclear Information System (INIS)

    Khrennikov, Andrei

    2011-01-01

    We developed a purely field model of microphenomena-prequantum classical statistical field theory (PCSFT). This model not only reproduces important probabilistic predictions of quantum mechanics (QM) including correlations for entangled systems, but also gives a possibility to go beyond QM, i.e. to make predictions of phenomena that could be observed at the subquantum level. In this paper, we discuss one such prediction-the existence of nonlocal correlations between prequantum random fields corresponding to all quantum systems. (And by PCSFT, quantum systems are represented by classical Gaussian random fields and quantum observables by quadratic forms of these fields.) The source of these correlations is the common background field. Thus all prequantum random fields are 'entangled', but in the sense of classical signal theory. On the one hand, PCSFT demystifies quantum nonlocality by reducing it to nonlocal classical correlations based on the common random background. On the other hand, it demonstrates total generality of such correlations. They exist even for distinguishable quantum systems in factorizable states (by PCSFT terminology-for Gaussian random fields with covariance operators corresponding to factorizable quantum states).

  15. Background or Experience? Using Logistic Regression to Predict College Retention

    Science.gov (United States)

    Synco, Tracee M.

    2012-01-01

    Tinto, Astin and countless others have researched the retention and attrition of students from college for more than thirty years. However, the six year graduation rate for all first-time full-time freshmen for the 2002 cohort was 57%. This study sought to determine the retention variables that predicted continued enrollment of entering freshmen…

  16. Background area effects on feature detectability in CT and uncorrelated noise

    International Nuclear Information System (INIS)

    Swensson, R.G.; Judy, P.F.

    1987-01-01

    Receiver operating characteristic curve measures of feature detectability decrease substantially when the surrounding area of uniform-noise background is small relative to that of the feature itself. The effect occurs with both fixed and variable-level backgrounds, but differs in form for CT and uncorrelated noise. Cross-correlation image calculations can only predict these effects by treating feature detection as the discrimination of a local change (a "feature") from the estimated level of an assumed-uniform region of background.

  17. A Newton Algorithm for Multivariate Total Least Squares Problems

    Directory of Open Access Journals (Sweden)

    WANG Leyang

    2016-04-01

    Full Text Available In order to improve the calculation efficiency of parameter estimation, an algorithm for multivariate weighted total least squares adjustment based on the Newton method is derived. The relationship between the solution of this algorithm and that of multivariate weighted total least squares adjustment based on the Lagrange multipliers method is analyzed. According to the propagation of cofactors, 16 computational formulae for the cofactor matrices of multivariate total least squares adjustment are also listed. The new algorithm can solve adjustment problems containing correlation between the observation matrix and the coefficient matrix, and it can also deal with their stochastic and deterministic elements using only one cofactor matrix. The results illustrate that the Newton algorithm for multivariate total least squares problems is practical and has a higher convergence rate.
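    For the unweighted, single-right-hand-side special case of the problem discussed above, the classical total least squares solution can be written directly in terms of an SVD of the augmented matrix [A b]; the sketch below shows that baseline only (it is not the Newton or Lagrange-multiplier algorithm of the paper, and it assumes the generic case where the last component of the singular vector is nonzero).

    import numpy as np

    def total_least_squares(A, b):
        """Classical TLS estimate: errors allowed in both A and b.

        Uses the right singular vector of [A | b] belonging to the smallest
        singular value; the estimate is x = -v[:-1] / v[-1]."""
        augmented = np.column_stack([A, b])
        _, _, vt = np.linalg.svd(augmented)
        v = vt[-1]                      # right singular vector for the smallest singular value
        if abs(v[-1]) < 1e-12:
            raise np.linalg.LinAlgError("TLS solution does not exist (v[-1] ~ 0)")
        return -v[:-1] / v[-1]

    # Noisy toy problem: perturb both the design matrix and the observations.
    rng = np.random.default_rng(4)
    x_true = np.array([2.0, -1.0])
    A_clean = rng.normal(size=(100, 2))
    A_noisy = A_clean + rng.normal(0, 0.05, size=A_clean.shape)
    b_noisy = A_clean @ x_true + rng.normal(0, 0.05, size=100)

    print(total_least_squares(A_noisy, b_noisy))             # close to [2, -1]
    print(np.linalg.lstsq(A_noisy, b_noisy, rcond=None)[0])  # ordinary LS for comparison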

  18. Profiled support vector machines for antisense oligonucleotide efficacy prediction

    Directory of Open Access Journals (Sweden)

    Martín-Guerrero José D

    2004-09-01

    Full Text Available Abstract Background This paper presents the use of Support Vector Machines (SVMs) for prediction and analysis of antisense oligonucleotide (AO) efficacy. The collected database comprises 315 AO molecules including 68 features each, inducing a problem well-suited to SVMs. The task of feature selection is crucial given the presence of noisy or redundant features, and the well-known problem of the curse of dimensionality. We propose a two-stage strategy to develop an optimal model: (1) feature selection using correlation analysis, mutual information, and SVM-based recursive feature elimination (SVM-RFE), and (2) AO prediction using standard and profiled SVM formulations. A profiled SVM gives different weights to different parts of the training data to focus the training on the most important regions. Results In the first stage, the SVM-RFE technique was most efficient and robust in the presence of a low number of samples and a high input space dimension. This method yielded an optimal subset of 14 representative features, which were all related to energy and sequence motifs. The second stage evaluated the performance of the predictors (overall correlation coefficient between observed and predicted efficacy, r; mean error, ME; and root-mean-square error, RMSE) using 8-fold and minus-one-RNA cross-validation methods. The profiled SVM produced the best results (r = 0.44, ME = 0.022, and RMSE = 0.278) and predicted both high-efficacy (>75% inhibition of gene expression) and low-efficacy AOs (http://aosvm.cgb.ki.se/). Conclusions The SVM approach is well suited to the AO prediction problem, and yields a prediction accuracy superior to previous methods. The profiled SVM was found to perform better than the standard SVM, suggesting that it could lead to improvements in other prediction problems as well.
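    The feature-selection stage described above can be reproduced in outline with scikit-learn's recursive feature elimination wrapped around a linear support vector regressor; the synthetic data, the target of 14 features and the other settings are illustrative assumptions, and the profiled-SVM weighting of the second stage is not implemented here.

    import numpy as np
    from sklearn.svm import SVR
    from sklearn.feature_selection import RFE

    rng = np.random.default_rng(5)

    # Synthetic stand-in for the AO dataset: 315 molecules x 68 features,
    # with only a handful of features actually carrying signal.
    X = rng.normal(size=(315, 68))
    y = X[:, :5] @ np.array([0.8, -0.6, 0.5, 0.4, -0.3]) + rng.normal(0, 0.2, size=315)

    # SVM-RFE: repeatedly fit a linear SVR and drop the weakest features
    # until 14 representative features remain (the count reported in the abstract).
    selector = RFE(estimator=SVR(kernel="linear", C=1.0),
                   n_features_to_select=14, step=1)
    selector.fit(X, y)

    selected = np.flatnonzero(selector.support_)
    print("selected feature indices:", selected)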

  19. How-To-Do-It: Snails, Pill Bugs, Mealworms, and Chi-Square? Using Invertebrate Behavior to Illustrate Hypothesis Testing with Chi-Square.

    Science.gov (United States)

    Biermann, Carol

    1988-01-01

    Described is a study designed to introduce students to the behavior of common invertebrate animals, and to the use of the chi-square statistical technique. Discusses activities with snails, pill bugs, and mealworms. Provides an abbreviated chi-square table and instructions for performing the experiments and statistical tests. (CW)
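    For instructors adapting the activity, the chi-square goodness-of-fit computation itself is a one-liner with scipy; the observed counts below are made up for illustration, and classroom data would replace them.

    from scipy.stats import chisquare

    # Hypothetical pill-bug choice experiment: counts of animals found in each of
    # four chamber conditions, tested against the null hypothesis of no preference.
    observed = [18, 12, 9, 11]
    expected = [12.5, 12.5, 12.5, 12.5]      # 50 animals spread evenly over 4 chambers

    stat, p_value = chisquare(f_obs=observed, f_exp=expected)
    print(f"chi-square = {stat:.2f}, p = {p_value:.3f}")   # df = 4 - 1 = 3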

  20. Multisource least-squares reverse-time migration with structure-oriented filtering

    Science.gov (United States)

    Fan, Jing-Wen; Li, Zhen-Chun; Zhang, Kai; Zhang, Min; Liu, Xue-Tong

    2016-09-01

    The technology of simultaneous-source acquisition of seismic data excited by several sources can significantly improve the data collection efficiency. However, direct imaging of simultaneous-source data or blended data may introduce crosstalk noise and affect the imaging quality. To address this problem, we introduce a structure-oriented filtering operator as a preconditioner into multisource least-squares reverse-time migration (LSRTM). The structure-oriented filtering operator is a nonstationary filter along structural trends that suppresses crosstalk noise while maintaining structural information. The proposed method uses the conjugate-gradient method to minimize the mismatch between predicted and observed data, while effectively attenuating the interference noise caused by exciting several sources simultaneously. Numerical experiments using synthetic data suggest that the proposed method can suppress the crosstalk noise and produce highly accurate images.

  1. Model variations in predicting incidence of Plasmodium falciparum malaria using 1998-2007 morbidity and meteorological data from south Ethiopia

    Directory of Open Access Journals (Sweden)

    Loha Eskindir

    2010-06-01

    Full Text Available Abstract Background Malaria transmission is complex and is believed to be associated with local climate changes. However, simple attempts to extrapolate malaria incidence rates from averaged regional meteorological conditions have proven unsuccessful. Therefore, the objective of this study was to determine if variations in specific meteorological factors are able to consistently predict P. falciparum malaria incidence at different locations in south Ethiopia. Methods Retrospective data from 42 locations were collected including P. falciparum malaria incidence for the period of 1998-2007 and meteorological variables such as monthly rainfall (all locations), temperature (17 locations), and relative humidity (three locations). Thirty-five data sets qualified for the analysis. The Ljung-Box Q statistic was used for model diagnosis, and R squared or stationary R squared was taken as the goodness-of-fit measure. Time series modelling was carried out using Transfer Function (TF) models and univariate auto-regressive integrated moving average (ARIMA) models when there was no significant predictor meteorological variable. Results Of 35 models, five were discarded because of the significant value of the Ljung-Box Q statistic. Past P. falciparum malaria incidence alone (17 locations) or when coupled with meteorological variables (four locations) was able to predict P. falciparum malaria incidence within statistical significance. All seasonal ARIMA orders were from locations at altitudes above 1742 m. Monthly rainfall, minimum and maximum temperature were able to predict incidence at four, five and two locations, respectively. In contrast, relative humidity was not able to predict P. falciparum malaria incidence. The R squared values for the models ranged from 16% to 97%, with the exception of one model which had a negative value. Models with seasonal ARIMA orders were found to perform better. However, the models for predicting P. falciparum malaria incidence varied from location to location.
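    A rough outline of the time-series modelling step, using statsmodels; the SARIMA order, the synthetic monthly series and the use of rainfall as the single exogenous regressor are illustrative assumptions, not the per-location models selected in the paper.

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    rng = np.random.default_rng(6)

    # Synthetic monthly series for 1998-2007: rainfall drives incidence with a one-month lag.
    index = pd.date_range("1998-01-01", periods=120, freq="MS")
    rain = pd.Series(80 + 60 * np.sin(2 * np.pi * np.arange(120) / 12)
                     + rng.normal(0, 10, 120), index=index)
    cases = pd.Series(50 + 0.4 * rain.shift(1).fillna(rain.mean())
                      + rng.normal(0, 5, 120), index=index)

    # Fit a seasonal ARIMA with rainfall as exogenous regressor on the first nine years,
    # then predict the final year (a transfer-function-style model in outline only).
    model = SARIMAX(cases.iloc[:108], exog=rain.iloc[:108].to_frame(),
                    order=(1, 0, 1), seasonal_order=(1, 0, 0, 12), trend="c")
    fit = model.fit(disp=False)

    forecast = fit.forecast(steps=12, exog=rain.iloc[108:].to_frame())
    print(forecast.round(1))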

  2. Experimental test of a new technique of background suppression in digital mammography

    Energy Technology Data Exchange (ETDEWEB)

    Bisogni, M.G.; Bottari, S.; Ciocci, M.A.; Fantacci, M.E.; Maestro, P.; Malakhov, N.; Marrocchesi, P.S. E-mail: marrocchesi@pi.infn.it; Novelli, M.; Quattrocchi, M.; Pilo, F.; Rosso, V.; Turini, N.; Zucca, S

    2002-02-01

    A multiple-exposure technique in digital mammography has been developed to suppress the physical background in the image due to Compton scattering in the body. A pair of X-ray masks, shaped in a projective geometry and positioned upstream and downstream the patient, are coupled mechanically and moved in four steps along a square pattern in order to irradiate the full area in four consecutive short exposures. A proof-of-principle apparatus is under test with a breast phantom and a standard mammographic X-ray unit. Results are reported.

  3. Experimental test of a new technique of background suppression in digital mammography

    International Nuclear Information System (INIS)

    Bisogni, M.G.; Bottari, S.; Ciocci, M.A.; Fantacci, M.E.; Maestro, P.; Malakhov, N.; Marrocchesi, P.S.; Novelli, M.; Quattrocchi, M.; Pilo, F.; Rosso, V.; Turini, N.; Zucca, S.

    2002-01-01

    A multiple-exposure technique in digital mammography has been developed to suppress the physical background in the image due to Compton scattering in the body. A pair of X-ray masks, shaped in a projective geometry and positioned upstream and downstream the patient, are coupled mechanically and moved in four steps along a square pattern in order to irradiate the full area in four consecutive short exposures. A proof-of-principle apparatus is under test with a breast phantom and a standard mammographic X-ray unit. Results are reported

  4. Space and protest: A tale of two Egyptian squares

    OpenAIRE

    Mohamed, A.A.; Van Nes, A.; Salheen, M.A.

    2015-01-01

    Protests and revolts take place in public space. How they can be controlled or how protests develop depend on the physical layout of the built environment. This study reveals the relationship between urban space and protest for two Egyptian squares: Tahrir Square and Rabaa Al-Adawiya in Cairo. For analysis, the research uses space syntax method. The results of this analysis are then compared with descriptions of the protest behaviour. As it turns out, the spatial properties of Tahrir square s...

  5. Variable selection based on clustering analysis for improvement of polyphenols prediction in green tea using synchronous fluorescence spectra

    Science.gov (United States)

    Shan, Jiajia; Wang, Xue; Zhou, Hao; Han, Shuqing; Riza, Dimas Firmanda Al; Kondo, Naoshi

    2018-04-01

    Synchronous fluorescence spectra, combined with multivariate analysis, were used to predict flavonoid content in green tea rapidly and nondestructively. This paper presents a new and efficient spectral interval selection method called clustering-based partial least squares (CL-PLS), which selects informative wavelengths by combining the clustering concept and partial least squares (PLS) methods to improve model performance with synchronous fluorescence spectra. The fluorescence spectra of tea samples were obtained, k-means and Kohonen self-organizing map clustering algorithms were used to group the full spectra into several clusters, and a sub-PLS regression model was developed on each cluster. Finally, CL-PLS models consisting of gradually selected clusters were built. The correlation coefficient (R) was used to evaluate the prediction performance of the PLS models. In addition, variable influence on projection PLS (VIP-PLS), selectivity ratio PLS (SR-PLS), and interval PLS (iPLS) models and the full-spectrum PLS model were investigated and the results were compared. The results showed that CL-PLS gave the best flavonoid prediction using synchronous fluorescence spectra.
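    In outline, the CL-PLS idea above amounts to clustering the wavelength variables and building a sub-PLS model per cluster, then keeping the best-scoring clusters; the code below is only that outline, with synthetic spectra, k-means clustering of wavelengths by their spectral profiles, and a fixed component count as illustrative assumptions.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(7)

    # Synthetic synchronous fluorescence spectra: 120 tea samples x 150 wavelengths,
    # with the analyte signal concentrated in one spectral region.
    X = rng.normal(size=(120, 150))
    y = X[:, 40:60].mean(axis=1) * 5.0 + rng.normal(0, 0.1, 120)

    # Cluster wavelengths by their profiles across samples (columns as objects).
    n_clusters = 6
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X.T)

    # Score each wavelength cluster by the cross-validated correlation of its sub-PLS model.
    scores = []
    for k in range(n_clusters):
        cols = np.flatnonzero(labels == k)
        n_comp = min(3, len(cols))
        pred = cross_val_predict(PLSRegression(n_components=n_comp), X[:, cols], y, cv=5).ravel()
        scores.append((np.corrcoef(pred, y)[0, 1], k))

    # Keep the best-scoring clusters and build the final PLS model on their union.
    best = [k for _, k in sorted(scores, reverse=True)[:2]]
    cols = np.flatnonzero(np.isin(labels, best))
    final_pred = cross_val_predict(PLSRegression(n_components=3), X[:, cols], y, cv=5).ravel()
    print("selected clusters:", best, " R =", round(np.corrcoef(final_pred, y)[0, 1], 3))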

  6. Background 99mTc-methoxyisobutylisonitrile uptake of breast-specific gamma imaging in relation to background parenchymal enhancement in magnetic resonance imaging

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Hai-Jeon; Kim, Bom Sahn [Ewha Womans University, Department of Nuclear Medicine, Yangchun-Ku, Seoul (Korea, Republic of); Kim, Yemi [Ewha Womans University, Clinical Research Institute and Department of Conservative Dentistry, Seoul (Korea, Republic of); Lee, Jee Eun [Ewha Womans University, Department of Radiology, Seoul (Korea, Republic of)

    2015-01-15

    This study investigated factors that could affect background uptake of 99mTc-methoxyisobutylisonitrile (MIBI) on normal breast by breast-specific gamma imaging (BSGI). In addition, the impact of background 99mTc-MIBI uptake on the diagnostic performance of BSGI was further investigated. One hundred forty-five women with unilateral breast cancer who underwent BSGI, MRI, and mammography were retrospectively enrolled. Background uptake on BSGI was evaluated qualitatively and quantitatively. Patients were classified into non-dense and dense breast groups according to mammographic breast density. Background parenchymal enhancement (BPE) was rated according to BI-RADS classification. The relationship of age, menopausal status, mammographic breast density, and BPE with background 99mTc-MIBI uptake was analyzed. Heterogeneous texture and high background uptake ratio on BSGI were significantly correlated with younger age (p < 0.001, respectively), premenopausal status (p < 0.001 and p = 0.003), dense breast (p < 0.001, respectively), and marked BPE (p < 0.001, respectively). On multivariate analysis, only BPE remained a significant factor for background MIBI uptake (p < 0.001). There was a significant reduction in positive predictive value (p = 0.024 and p = 0.002) as background MIBI uptake and BPE grade increased. BPE on MRI was the most important factor for background MIBI uptake on BSGI. High background MIBI uptake or marked BPE can diminish the diagnostic performance of BSGI. (orig.)

  7. Analysis of Nonlinear Dynamics by Square Matrix Method

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Li Hua [Brookhaven National Lab. (BNL), Upton, NY (United States). Energy and Photon Sciences Directorate. National Synchrotron Light Source II

    2016-07-25

    The nonlinear dynamics of a system with periodic structure can be analyzed using a square matrix. In this paper, we show that, because of the special properties of the square matrix constructed for nonlinear dynamics, we can reduce the dimension of the matrix from the original large number required for high-order calculation to a low dimension in the first step of the analysis. A stable Jordan decomposition is then obtained in this much lower dimension. The transformation to Jordan form provides an excellent action-angle approximation to the solution of the nonlinear dynamics, in good agreement with trajectories and tune obtained from tracking. And more importantly, the deviation from constancy of the new action-angle variable provides a measure of the stability of the phase space trajectories and their tunes. Thus the square matrix provides a novel method to optimize the nonlinear dynamic system. The method is illustrated by many examples of comparison between theory and numerical simulation. Finally, in particular, we show that the square matrix method can be used for optimization to reduce the nonlinearity of a system.

  8. Radioactivity backgrounds in ZEPLIN-III

    Science.gov (United States)

    Araújo, H. M.; Akimov, D. Yu.; Barnes, E. J.; Belov, V. A.; Bewick, A.; Burenkov, A. A.; Chepel, V.; Currie, A.; Deviveiros, L.; Edwards, B.; Ghag, C.; Hollingsworth, A.; Horn, M.; Kalmus, G. E.; Kobyakin, A. S.; Kovalenko, A. G.; Lebedenko, V. N.; Lindote, A.; Lopes, M. I.; Lüscher, R.; Majewski, P.; Murphy, A. St. J.; Neves, F.; Paling, S. M.; Pinto da Cunha, J.; Preece, R.; Quenby, J. J.; Reichhart, L.; Scovell, P. R.; Silva, C.; Solovov, V. N.; Smith, N. J. T.; Smith, P. F.; Stekhanov, V. N.; Sumner, T. J.; Thorne, C.; Walker, R. J.

    2012-03-01

    We examine electron and nuclear recoil backgrounds from radioactivity in the ZEPLIN-III dark matter experiment at Boulby. The rate of low-energy electron recoils in the liquid xenon WIMP target is 0.75 ± 0.05 events/kg/day/keV, which represents a 20-fold improvement over the rate observed during the first science run. Energy and spatial distributions agree with those predicted by component-level Monte Carlo simulations propagating the effects of the radiological contamination measured for materials employed in the experiment. Neutron elastic scattering is predicted to yield 3.05 ± 0.5 nuclear recoils with energy 5-50 keV per year, which translates to an expectation of 0.4 events in a 1 yr dataset in anti-coincidence with the veto detector for realistic signal acceptance. Less obvious background sources are discussed, especially in the context of future experiments. These include contamination of scintillation pulses with Cherenkov light from Compton electrons and from β activity internal to photomultipliers, which can increase the size and lower the apparent time constant of the scintillation response. Another challenge is posed by multiple-scatter γ-rays with one or more vertices in regions that yield no ionisation. If the discrimination power achieved in the first run can be replicated, ZEPLIN-III should reach a sensitivity of ~1 × 10^-8 pb·yr to the scalar WIMP-nucleon elastic cross-section, as originally conceived.

  9. An ultralow background germanium gamma-ray spectrometer

    International Nuclear Information System (INIS)

    Reeves, R.H.; Brodzinski, R.L.; Hensley, W.K.; Ryge, P.

    1984-01-01

    The monitoring of minimum detectable activity is becoming increasingly important as environmental concerns and regulations require more sensitive measurement of radioactivity levels in the workplace and the home. In measuring this activity, however, the background becomes one of the limiting factors. Anticoincidence systems utilizing both NaI(Tl) and plastic scintillators have proven effective in reducing some components of the background, but radiocontaminants in the various regions of these systems have limited their effectiveness, and their cost is often prohibitive. In order to obtain a genuinely low background detector system, all components must be free of detectable radioactivity, and the cosmic-ray-produced contribution must be significantly reduced. Current efforts by the authors to measure the double beta decay of germanium-76, as predicted by Grand Unified Theories, have resulted in the development of a high resolution germanium diode gamma spectrometer with an exceptionally low background. This paper describes the development of this system; outlines the configuration and operation of its preamplifier, linear amplifier, analog-to-digital converter, 4096-channel analyzer, shielding consisting of lead-sandwiched plastic scintillators wrapped in cadmium foil, photomultiplier, and its pulse generator and discriminator; and then discusses how the system can be utilized to significantly reduce the background in high resolution photon spectrometers at only moderate cost.

  10. Direct numerical simulation of free and forced square jets

    International Nuclear Information System (INIS)

    Gohil, Trushar B.; Saha, Arun K.; Muralidhar, K.

    2015-01-01

    Highlights: • Free square jet at Re = 500–2000 is studied using DNS. • Forced square jet at Re = 1000 subjected to varicose perturbation is also investigated at various forcing frequencies. • Vortex interactions within the jet and jet spreading are affected for both free and forced jets. • Perturbation at higher frequency shows axis-switching. - Abstract: Direct numerical simulation (DNS) of incompressible, spatially developing square jets in the Reynolds number range of 500–2000 is reported. The three-dimensional unsteady Navier–Stokes equations are solved using high order spatial and temporal discretization. The objective of the present work is to understand the evolution of free and forced square jets by examining the formation of large-scale structures. Coherent structures and related interactions of free jets suggest control strategies that can be used to achieve enhanced spreading and mixing of the jet with the surrounding fluid. The critical Reynolds number for the onset of unsteadiness in an unperturbed free square jet is found to be 875–900, while it reduces to the range 500–525 in the presence of small-scale perturbations. Disturbances applied at the flow inlet cause saturation of the KH instability and early transition to turbulence. Forced jet calculations have been carried out using varicose perturbation with an amplitude of 15%, while the frequency is independently varied. Simulations show that the initial development of the square jet is influenced by the four corners, leading to the appearance of hairpin structures along with the formation of vortex rings. Farther downstream, adjacent vortices strongly interact, leading to their rapid breakup. Excitation frequencies in the range 0.4–0.6 cause axis-switching of the jet cross-section. Results show that square jets achieve greater spreading but are less controllable in comparison to circular ones.

  11. Square vortex lattice in p-wave superconductors

    International Nuclear Information System (INIS)

    Shiraishi, J.

    1999-01-01

    Making use of the Ginzburg-Landau equation for isotropic p-wave superconductors, we construct the single vortex solution in part analytically. The fourfold symmetry breaking term arising from the tetragonal symmetry distortion of the Fermi surface is crucial, since this term induces a fourfold distortion of the vortex core somewhat similar to the one found in d-wave superconductors. This fourfold distortion of the vortex core in turn favors the square vortex lattice as observed recently in small angle neutron scattering (SANS) experiments on Sr2RuO4. We find that the hexagonal vortex lattice at H = H_c1 transforms into the square one at H = H_cr = 0.26 H_c2. On the other hand, the SANS data do not reveal such a transition: the square vortex lattice is seen everywhere studied by SANS, implying that H_cr is very close to H_c1. Therefore some improvement in the present model is certainly desirable. (orig.)

  12. Effect of longitudinal and transverse vibrations of an upstream square cylinder on vortex shedding behind two inline square cylinders

    International Nuclear Information System (INIS)

    Patil, Pratish P; Tiwari, Shaligram

    2009-01-01

    The characteristics of unsteady wakes behind a stationary square cylinder and an upstream vibrating square cylinder have been investigated numerically with a computational code developed by the authors. The effect of longitudinal as well as transverse vibrations of the upstream cylinder on the coupled wake between the two cylinders is studied, and this coupled wake is found to control the vortex shedding behavior behind the downstream stationary cylinder. Computations are carried out for a fixed value of the Reynolds number (Re = 200) and three different values of the excitation frequency of the upstream cylinder, namely less than, equal to, and greater than the natural frequency of vortex shedding for flow past a stationary square cylinder. The vortex shedding characteristics of the unsteady wakes behind the vibrating and stationary cylinders are found to differ significantly for the longitudinal and transverse modes of vibration of the upstream cylinder. The wake of the downstream stationary cylinder is found to synchronize with the upstream cylinder vibration. The spacing between the two cylinders has been identified as the key parameter influencing the synchronization phenomenon. The effect of cylinder spacing on the wake synchronization and the hydrodynamic forces has been examined. In addition, a comparison of the drag forces for flow past transversely vibrating square and circular cylinders with similar amplitudes and frequencies of cylinder vibration is presented using the same computational code.

  13. Cosmological constraints on the very low frequency gravitational-wave background

    International Nuclear Information System (INIS)

    Seto, Naoki; Cooray, Asantha

    2006-01-01

    The curl modes of cosmic microwave background polarization allow one to indirectly constrain the primordial background of gravitational waves with frequencies around 10^-18 to 10^-16 Hz. The proposed high precision timing observations of a large sample of millisecond pulsars with the pulsar timing array or with the square kilometer array can either detect or constrain the stochastic gravitational-wave background at frequencies greater than roughly 0.1 yr^-1. While existing techniques are limited to either observe or constrain the gravitational-wave background across six or more orders of magnitude between 10^-16 and 10^-10 Hz, we suggest that the anisotropy pattern of time variation of the redshift related to a sample of high-redshift objects can be used to study the background around a frequency of 10^-12 Hz. Useful observations to detect an anisotropy signal in the global redshift change include spectroscopic observations of the Ly-α forest in absorption towards a sample of quasars, redshifted 21 cm line observations either in absorption or emission towards a sample of neutral HI regions before or during reionization, and high-frequency (0.1 to 1 Hz) gravitational-wave analysis of a sample of neutron star-neutron star binaries detected with gravitational-wave instruments such as the Decihertz Interferometer Gravitational Wave Observatory (DECIGO). For reasonable observations expected in the future involving extragalactic sources, we find limits at the level of Ω_GW ~ 10^-6 at a frequency around 10^-12 Hz, while the ultimate limit is likely to be around Ω_GW ~ 10^-11. On the other hand, if there is a background of gravitational waves at 10^-12 Hz with an amplitude larger than this limit, its presence will be visible as a measurable anisotropy in the time-evolving redshift of extragalactic sources.

  14. The benefits of Square dancing as a means of physical activity for Czech dancers with hearing loss

    Directory of Open Access Journals (Sweden)

    Petra Kurková

    2014-12-01

    Background: Hearing, a strong line of communication that enables individuals to learn about the world around them, is a major factor contributing to the psychomotor development of every individual. Hearing loss can also affect the conception and perception of sounds and rhythm. Objective: The purpose of this study was to describe and analyse the benefits of Square and Round dancing for persons with hearing loss. Methods: The present study is analytic-descriptive qualitative research. The sample was constituted non-probabilistically based on the following criteria: (a) a participant had to have hearing problems (hearing loss), and (b) had to have participated regularly in Square dance for at least two years. Each participant was asked to name possible people to be interviewed (snowball technique). We analysed the data of 7 individuals (6 males and 1 female) with hearing loss. The mean age of the dancers with hearing loss was 51.3 years. The participants had no cochlear implants or any other physical or vision related impairments. Results: The present findings constitute the first published survey regarding Czech Square dancers' status, their family's hearing status, hearing aid use, communication preference, education in integrated or segregated settings, the influence of family background on dance initiation, coach preference (hearing or deaf), and the environment for participation in Square dance as a mode of physical activity, both with regular dancers and with dancers with hearing loss. In the present sample of dancers with hearing loss, most were from hearing families and had hearing siblings. The degree to which individuals with hearing loss feel comfortable with the hearing world appears to influence their later preference for participating in regular, as opposed to segregated, physical activities. More than half of the dancers with hearing loss who participated in this research study would like to meet with the deaf minority. One of the

  15. A robust background regression based score estimation algorithm for hyperspectral anomaly detection

    Science.gov (United States)

    Zhao, Rui; Du, Bo; Zhang, Liangpei; Zhang, Lefei

    2016-12-01

    Anomaly detection has become a hot topic in hyperspectral image analysis and processing in recent years. The most important issue for hyperspectral anomaly detection is background estimation and suppression, since unreasonable or non-robust background estimation usually leads to unsatisfactory anomaly detection results. Furthermore, the inherent nonlinearity of hyperspectral images may obscure the intrinsic data structure in anomaly detection. In order to implement robust background estimation, as well as to explore the intrinsic data structure of the hyperspectral image, we propose a robust background regression based score estimation algorithm (RBRSE) for hyperspectral anomaly detection. The Robust Background Regression (RBR) is a label assignment procedure which segments the hyperspectral data into a robust background dataset and a potential anomaly dataset with an intersection boundary. In the RBR, a kernel expansion technique, which explores the nonlinear structure of the hyperspectral data in a reproducing kernel Hilbert space, is utilized to formulate the data as a density feature representation. A minimum squared loss relationship is constructed between the data density feature and the corresponding assigned labels of the hyperspectral data, to form the foundation of the regression. Furthermore, a manifold regularization term, which exploits the manifold smoothness of the hyperspectral data, and a maximization term of the robust background average density, which suppresses the bias caused by the potential anomalies, are jointly appended in the RBR procedure. After this, a paired-dataset based k-nn score estimation method is applied to the robust background and potential anomaly datasets to produce the detection output. The experimental results show that RBRSE achieves superior ROC curves, AUC values, and background-anomaly separation compared with some other state-of-the-art anomaly detection methods, and is easy to implement.
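
    The final scoring step of the approach above can be pictured with a much simpler stand-in: once pixels have been separated into a background set and a potential-anomaly set, each pixel is scored from its k-nearest-neighbour distances to the two sets. The sketch below does only that and omits the kernel-based robust background regression and the regularization terms of RBRSE; the spectra are synthetic placeholders.

```python
# Simplified paired-dataset k-NN anomaly scoring (not the full RBRSE algorithm).
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(7)
background = rng.normal(0.0, 1.0, size=(500, 50))   # "robust background" spectra
anomalies = rng.normal(3.0, 1.0, size=(20, 50))     # "potential anomaly" spectra
pixels = np.vstack([background, anomalies])         # all pixels to be scored

# Suppose a preceding regression/labelling step produced these two reference sets.
bg_nn = NearestNeighbors(n_neighbors=5).fit(background)
an_nn = NearestNeighbors(n_neighbors=5).fit(anomalies)

d_bg = bg_nn.kneighbors(pixels)[0].mean(axis=1)     # mean distance to background set
d_an = an_nn.kneighbors(pixels)[0].mean(axis=1)     # mean distance to anomaly set
score = d_bg / (d_an + 1e-12)                       # large value => anomaly-like

print("mean score, background pixels:", score[:500].mean().round(2))
print("mean score, anomalous pixels :", score[500:].mean().round(2))
```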

  16. The use of near-infrared scanning for the prediction of pulp yield and ...

    African Journals Online (AJOL)

    Calibration models to predict pulp yield, cellulose and lignin content were developed by applying chemometrics and partial least squares regression. Validation and determination of prediction accuracy of the models were performed using independent data. The prediction of cellulose and lignin were acceptable with ...

  17. The equivalent square concept for the head scatter factor based on scatter from flattening filter

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Siyong; Palta, Jatinder R.; Zhu, Timothy C. [Department of Radiation Oncology, University of Florida College of Medicine, Gainesville, Florida (United States)

    1998-06-01

    The equivalent field relationship between square and circular fields for the head scatter factor was evaluated at the source plane. The method was based on integrating the head scatter parameter for projected shaped fields in the source plane and finding a field that produced the same ratio of head scatter to primary dose on the central axis. A value of {sigma}/R{approx_equal}0.9 was obtained, where {sigma} was one-half of the side length of the equivalent square and R was the radius of the circular field. The assumptions were that the equivalent field relationship for head scatter depends primarily on the characteristics of scatter from the flattening filter, and that the differential scatter-to-primary ratio of scatter from the flattening filter decreases linearly with the radius, within the physical radius of the flattening filter. Lam and co-workers showed empirically that the area-to-perimeter ratio formula, when applied to an equivalent square formula at the flattening filter plane, gave an accurate prediction of the head scatter factor. We have analytically investigated the validity of the area-to-perimeter ratio formula. Our results support the fact that the area-to-perimeter ratio formula can also be used as the equivalent field formula for head scatter at the source plane. The equivalent field relationships for wedge and tertiary collimator scatter were also evaluated. (author)

  18. The equivalent square concept for the head scatter factor based on scatter from flattening filter

    International Nuclear Information System (INIS)

    Kim, Siyong; Palta, Jatinder R.; Zhu, Timothy C.

    1998-01-01

    The equivalent field relationship between square and circular fields for the head scatter factor was evaluated at the source plane. The method was based on integrating the head scatter parameter for projected shaped fields in the source plane and finding a field that produced the same ratio of head scatter to primary dose on the central axis. A value of σ/R≅0.9 was obtained, where σ was one-half of the side length of the equivalent square and R was the radius of the circular field. The assumptions were that the equivalent field relationship for head scatter depends primarily on the characteristics of scatter from the flattening filter, and that the differential scatter-to-primary ratio of scatter from the flattening filter decreases linearly with the radius, within the physical radius of the flattening filter. Lam and co-workers showed empirically that the area-to-perimeter ratio formula, when applied to an equivalent square formula at the flattening filter plane, gave an accurate prediction of the head scatter factor. We have analytically investigated the validity of the area-to-perimeter ratio formula. Our results support the fact that the area-to-perimeter ratio formula can also be used as the equivalent field formula for head scatter at the source plane. The equivalent field relationships for wedge and tertiary collimator scatter were also evaluated. (author)

  19. Experimental tests of the gravitational inverse-square law for mass separations from 2 to 105 cm

    International Nuclear Information System (INIS)

    Hoskins, J.K.; Newman, R.D.; Spero, R.; Schultz, J.

    1985-01-01

    We report two experiments which test the inverse-square distance dependence of the Newtonian gravitational force law. One experiment uses a torsion balance consisting of a 60-cm-long copper bar suspended at its midpoint by a tungsten wire, to compare the torque produced by copper masses 105 cm from the balance axis with the torque produced by a copper mass 5 cm from the side of the balance bar, near its end. Defining R_expt to be the measured ratio of the torques due to the masses at 105 cm and 5 cm, and R_Newton to be the corresponding ratio computed assuming an inverse-square force law, we find δ ≡ (R_expt/R_Newton − 1) = (1.2 ± 7) × 10^-4. Assuming a force deviating from an inverse-square distance dependence by a factor [1 + ε ln r(cm)], this result implies ε = (0.5 ± 2.7) × 10^-4. An earlier experiment, which has been reported previously, is described here in detail. This experiment tested the inverse-square law over a distance range of approximately 2 to 5 cm, by probing the gravitational field inside a steel mass tube using a copper test mass suspended from the end of a torsion balance bar. This experiment yielded a value for the parameter ε defined above: ε = (1 ± 7) × 10^-5. The results of both of these experiments are in good agreement with the Newtonian prediction. Limits on the strength and range of a Yukawa potential term superimposed on the Newtonian gravitational potential are discussed.

  20. ΛGR Centennial: Cosmic Web in Dark Energy Background

    Science.gov (United States)

    Chernin, A. D.

    The basic building blocks of the Cosmic Web are groups and clusters of galaxies, superclusters (pancakes), and filaments embedded in the universal dark energy background. The background produces antigravity, and the antigravity effect is strong in groups, clusters, and superclusters. Antigravity is very weak in filaments, where matter (dark matter and baryons) produces gravity that dominates the internal dynamics of the filaments. Gravity-antigravity interplay on large scales is a grandiose phenomenon predicted by ΛGR theory and seen in modern observations of the Cosmic Web.

  1. Open bosonic string in background electromagnetic field

    International Nuclear Information System (INIS)

    Nesterenko, V.V.

    1987-01-01

    The classical and quantum dynamics of an open string propagating in D-dimensional space-time in the presence of a background electromagnetic field is investigated. An important point in this consideration is the use of the generalized light-like gauge. Strings of two types are considered: neutral strings, with charges at their ends obeying the condition q1 + q2 = 0, and charged strings, having a net charge q1 + q2 ≠ 0. The consistency of the theory demands that the background electric field does not exceed its critical value. The spacing between the mass levels of the neutral open string decreases by a factor of (1 − e^2) in comparison with the free string, where e is the dimensionless strength of the electric field. The magnetic field does not affect this spacing. It is shown that at the classical level the squared mass of the neutral open string has a tachyonic contribution due to the motion of the string as a whole in the transverse directions. The tachyonic term disappears if one considers, instead of M^2, the string energy in a special reference frame where the projection of the total canonical momentum of the string onto the electric field vanishes. The contributions due to zero-point fluctuations to the energy spectrum of the neutral string and to the Virasoro operators in the theory of the charged string are found.

  2. Prediction of valid acidity in intact apples with Fourier transform near infrared spectroscopy.

    Science.gov (United States)

    Liu, Yan-De; Ying, Yi-Bin; Fu, Xia-Ping

    2005-03-01

    To develop nondestructive acidity prediction for intact Fuji apples, the potential of the Fourier transform near infrared (FT-NIR) method with fiber optics in interactance mode was investigated. Interactance in the 800 nm to 2619 nm region was measured for intact apples harvested from early to late maturity stages. Spectral data were analyzed by two multivariate calibration techniques, partial least squares (PLS) and principal component regression (PCR). A total of 120 Fuji apples were tested, and 80 of them were used to form a calibration data set. The influences of different data preprocessing and spectral treatments were also quantified. Calibration models based on smoothed spectra were slightly worse than those based on derivative spectra, and the best result was obtained when the segment length was 5 nm and the gap size was 10 points. With appropriate data preprocessing and the PLS method, the best prediction model yielded a coefficient of determination (r^2) of 0.759, a low root mean square error of prediction (RMSEP) of 0.0677, and a low root mean square error of calibration (RMSEC) of 0.0562. The results indicate the feasibility of FT-NIR spectral analysis for predicting apple valid acidity in a nondestructive way.
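
    A minimal sketch of this kind of calibration pipeline is shown below: Savitzky-Golay second-derivative preprocessing followed by a PLS model, evaluated by RMSEC, RMSEP, and r^2 on a held-out validation set. The spectra and acidity values are random placeholders, and the window length, polynomial order, and number of latent variables are illustrative choices rather than the settings used in the paper.

```python
# Hedged sketch: derivative-preprocessed spectra + PLS regression, scored by RMSEP/RMSEC.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
spectra = rng.normal(size=(120, 500))                 # FT-NIR interactance spectra (placeholder)
acidity = rng.normal(loc=0.4, scale=0.07, size=120)   # valid acidity values (placeholder)

# Second-derivative preprocessing (window and polynomial order are illustrative).
X = savgol_filter(spectra, window_length=11, polyorder=2, deriv=2, axis=1)

X_cal, X_val, y_cal, y_val = train_test_split(X, acidity, train_size=80, random_state=1)

pls = PLSRegression(n_components=8).fit(X_cal, y_cal)
y_pred = pls.predict(X_val).ravel()

rmsep = np.sqrt(np.mean((y_val - y_pred) ** 2))               # prediction error
rmsec = np.sqrt(np.mean((y_cal - pls.predict(X_cal).ravel()) ** 2))  # calibration error
r2 = np.corrcoef(y_val, y_pred)[0, 1] ** 2
print(f"RMSEP={rmsep:.4f}  RMSEC={rmsec:.4f}  r^2={r2:.3f}")
```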

  3. The background in the 0νββ experiment GERDA

    Energy Technology Data Exchange (ETDEWEB)

    Agostini, M.; Bode, T.; Budjas, D.; Csathy, J.J.; Lazzaro, A.; Schoenert, S. [Technische Universitaet Muenchen, Physik Department and Excellence Cluster Universe, Muenchen (Germany); Allardt, M.; Barros, N.; Domula, A.; Lehnert, B.; Wester, T.; Zuber, K. [Technische Universitaet Dresden, Institut fuer Kern- und Teilchenphysik, Dresden (Germany); Andreotti, E. [Institute for Reference Materials and Measurements, Geel (Belgium); Eberhard Karls Universitaet Tuebingen, Physikalisches Institut, Tuebingen (Germany); Bakalyarov, A.M.; Belyaev, S.T.; Lebedev, V.I.; Zhukov, S.V. [National Research Centre ' ' Kurchatov Institute' ' , Moscow (Russian Federation); Balata, M.; Ioannucci, L.; Junker, M.; Laubenstein, M.; Macolino, C.; Nisi, S.; Pandola, L.; Zavarise, P. [INFN Laboratori Nazionali del Gran Sasso, LNGS, Assergi (Italy); Barabanov, I.; Bezrukov, L.; Gurentsov, V.; Inzhechik, L.V.; Kuzminov, V.V.; Lubsandorzhiev, B.; Yanovich, E. [Institute for Nuclear Research of the Russian Academy of Sciences, Moscow (Russian Federation); Barnabe Heider, M. [Max Planck Institut fuer Kernphysik, Heidelberg (Germany); Technische Universitaet Muenchen, Physik Department and Excellence Cluster Universe, Muenchen (Germany); Baudis, L.; Benato, G.; Ferella, A.; Guthikonda, K.K.; Tarka, M.; Walter, M. [Physik Institut der Universitaet Zuerich, Zurich (Switzerland); Bauer, C.; Hampel, W.; Heisel, M.; Heusser, G.; Hofmann, W.; Kihm, T.; Kirsch, A.; Knoepfle, K.T.; Lindner, M.; Lubashevskiy, A.; Machado, A.A.; Maneschg, W.; Salathe, M.; Schreiner, J.; Schwingenheuer, B.; Simgen, H.; Smolnikov, A.; Strecker, H.; Wagner, V.; Wegmann, A. [Max Planck Institut fuer Kernphysik, Heidelberg (Germany); Becerici-Schmidt, N.; Caldwell, A.; Cossavella, F.; Liao, H.Y.; Liu, X.; Majorovits, B.; O' Shaughnessy, C.; Palioselitis, D.; Schulz, O.; Volynets, O. [Max-Planck-Institut fuer Physik, Muenchen (Germany); Bellotti, E.; Pessina, G. [Universita Milano Bicocca, Dipartimento di Fisica, Milan (Italy); INFN Milano Bicocca, Milan (Italy); Belogurov, S.; Kornoukhov, V.N. [Institute for Nuclear Research of the Russian Academy of Sciences, Moscow (Russian Federation); Institute for Theoretical and Experimental Physics, Moscow (Russian Federation); Bettini, A.; Brugnera, R.; Garfagnini, A.; Hemmer, S.; Sada, C. [Dipartimento di Fisica e Astronomia dell' Universita di Padova, Padua (Italy); INFN Padova, Padua (Italy); Brudanin, V.; Egorov, V.; Kochetov, O.; Nemchenok, I.; Shevchik, E.; Zhitnikov, I.; Zinatulina, D. [Joint Institute for Nuclear Research, Dubna (Russian Federation); Cattadori, C.; Gotti, C. [INFN Milano Bicocca, Milan (Italy); Chernogorov, A.; Demidova, E.V.; Kirpichnikov, I.V.; Vasenko, A.A. [Institute for Theoretical and Experimental Physics, Moscow (Russian Federation); Falkenstein, R.; Freund, K.; Grabmayr, P.; Hegai, A.; Jochum, J.; Schmitt, C. [Eberhard Karls Universitaet Tuebingen, Physikalisches Institut, Tuebingen (Germany); Frodyma, N.; Pelczar, K.; Wojcik, M.; Zuzel, G. [Jagiellonian University, Institute of Physics, Cracow (Poland); Gangapshev, A. [Max Planck Institut fuer Kernphysik, Heidelberg (Germany); Institute for Nuclear Research of the Russian Academy of Sciences, Moscow (Russian Federation); Gusev, K. [Joint Institute for Nuclear Research, Dubna (Russian Federation); National Research Centre ' ' Kurchatov Institute' ' , Moscow (Russian Federation); Technische Universitaet Muenchen, Physik Department and Excellence Cluster Universe, Muenchen (Germany); Hult, M.; Lutter, G. 
[Institute for Reference Materials and Measurements, Geel (Belgium); Klimenko, A. [Joint Institute for Nuclear Research, Dubna (Russian Federation); Max Planck Institut fuer Kernphysik, Heidelberg (Germany); Lippi, I.; Stanco, L.; Ur, C.A. [INFN Padova, Padua (Italy); Pullia, A.; Riboldi, S. [Universita degli Studi di Milano (IT); INFN Milano, Dipartimento di Fisica, Milan (IT); Shirchenko, M. [Joint Institute for Nuclear Research, Dubna (RU); National Research Centre ' ' Kurchatov Institute' ' , Moscow (RU); Sturm, K. von [Dipartimento di Fisica e Astronomia dell' Universita di Padova, Padua (IT); INFN Padova, Padua (IT); Eberhard Karls Universitaet Tuebingen, Physikalisches Institut, Tuebingen (DE)

    2014-04-15

    The GERmanium Detector Array (GERDA) experiment at the Gran Sasso underground laboratory (LNGS) of INFN is searching for neutrinoless double beta (0νββ) decay of {sup 76}Ge. The signature of the signal is a monoenergetic peak at 2039 keV, the Q{sub ββ} value of the decay. To avoid bias in the signal search, the present analysis does not consider those events that fall in a 40 keV wide region centered around Q{sub ββ}. The main parameters needed for the 0νββ analysis are described. A background model was developed to describe the observed energy spectrum. The model contains several contributions that are expected on the basis of material screening or that are established by the observation of characteristic structures in the energy spectrum. The model predicts a flat energy spectrum for the blinding window around Q{sub ββ} with a background index ranging from 17.6 to 23.8 x 10{sup -3} cts/(keV kg yr). A part of the data not considered before has been used to test whether the predictions of the background model are consistent. The observed number of events in this energy region is consistent with the background model. The background at Q{sub ββ} is dominated by close sources, mainly due to {sup 42}K, {sup 214}Bi, {sup 228}Th, {sup 60}Co and α emitting isotopes from the {sup 226}Ra decay chain. The individual fractions depend on the assumed locations of the contaminants. It is shown that, after removal of the known γ peaks, the energy spectrum can be fitted in an energy range of 200 keV around Q{sub ββ} with a constant background. This gives a background index consistent with the full model and uncertainties of the same size. (orig.)

  4. Narrowing of the balance function with centrality in Au+Au collisions at the square root of SNN = 130 GeV.

    Science.gov (United States)

    Adams, J; Adler, C; Ahammed, Z; Allgower, C; Amonett, J; Anderson, B D; Anderson, M; Averichev, G S; Balewski, J; Barannikova, O; Barnby, L S; Baudot, J; Bekele, S; Belaga, V V; Bellwied, R; Berger, J; Bichsel, H; Billmeier, A; Bland, L C; Blyth, C O; Bonner, B E; Boucham, A; Brandin, A; Bravar, A; Cadman, R V; Caines, H; Calderónde la Barca Sánchez, M; Cardenas, A; Carroll, J; Castillo, J; Castro, M; Cebra, D; Chaloupka, P; Chattopadhyay, S; Chen, Y; Chernenko, S P; Cherney, M; Chikanian, A; Choi, B; Christie, W; Coffin, J P; Cormier, T M; Corral, M M; Cramer, J G; Crawford, H J; Derevschikov, A A; Didenko, L; Dietel, T; Draper, J E; Dunin, V B; Dunlop, J C; Eckardt, V; Efimov, L G; Emelianov, V; Engelage, J; Eppley, G; Erazmus, B; Fachini, P; Faine, V; Faivre, J; Fatemi, R; Filimonov, K; Finch, E; Fisyak, Y; Flierl, D; Foley, K J; Fu, J; Gagliardi, C A; Gagunashvili, N; Gans, J; Gaudichet, L; Germain, M; Geurts, F; Ghazikhanian, V; Grachov, O; Grigoriev, V; Guedon, M; Guertin, S M; Gushin, E; Hallman, T J; Hardtke, D; Harris, J W; Heinz, M; Henry, T W; Heppelmann, S; Herston, T; Hippolyte, B; Hirsch, A; Hjort, E; Hoffmann, G W; Horsley, M; Huang, H Z; Humanic, T J; Igo, G; Ishihara, A; Ivanshin, Yu I; Jacobs, P; Jacobs, W W; Janik, M; Johnson, I; Jones, P G; Judd, E G; Kaneta, M; Kaplan, M; Keane, D; Kiryluk, J; Kisiel, A; Klay, J; Klein, S R; Klyachko, A; Kollegger, T; Konstantinov, A S; Kopytine, M; Kotchenda, L; Kovalenko, A D; Kramer, M; Kravtsov, P; Krueger, K; Kuhn, C; Kulikov, A I; Kunde, G J; Kunz, C L; Kutuev, R Kh; Kuznetsov, A A; Lamont, M A C; Landgraf, J M; Lange, S; Lansdell, C P; Lasiuk, B; Laue, F; Lauret, J; Lebedev, A; Lednický, R; Leontiev, V M; LeVine, M J; Li, Q; Lindenbaum, S J; Lisa, M A; Liu, F; Liu, L; Liu, Z; Liu, Q J; Ljubicic, T; Llope, W J; Long, H; Longacre, R S; Lopez-Noriega, M; Love, W A; Ludlam, T; Lynn, D; Ma, J; Magestro, D; Majka, R; Margetis, S; Markert, C; Martin, L; Marx, J; Matis, H S; Matulenko, Yu A; McShane, T S; Meissner, F; Melnick, Yu; Meschanin, A; Messer, M; Miller, M L; Milosevich, Z; Minaev, N G; Mitchell, J; Moore, C F; Morozov, V; de Moura, M M; Munhoz, M G; Nelson, J M; Nevski, P; Nikitin, V A; Nogach, L V; Norman, B; Nurushev, S B; Odyniec, G; Ogawa, A; Okorokov, V; Oldenburg, M; Olson, D; Paic, G; Pandey, S U; Panebratsev, Y; Panitkin, S Y; Pavlinov, A I; Pawlak, T; Perevoztchikov, V; Peryt, W; Petrov, V A; Planinic, M; Pluta, J; Porile, N; Porter, J; Poskanzer, A M; Potrebenikova, E; Prindle, D; Pruneau, C; Putschke, J; Rai, G; Rakness, G; Ravel, O; Ray, R L; Razin, S V; Reichhold, D; Reid, J G; Renault, G; Retiere, F; Ridiger, A; Ritter, H G; Roberts, J B; Rogachevski, O V; Romero, J L; Rose, A; Roy, C; Rykov, V; Sakrejda, I; Salur, S; Sandweiss, J; Savin, I; Schambach, J; Scharenberg, R P; Schmitz, N; Schroeder, L S; Schüttauf, A; Schweda, K; Seger, J; Seliverstov, D; Seyboth, P; Shahaliev, E; Shestermanov, K E; Shimanskii, S S; Simon, F; Skoro, G; Smirnov, N; Snellings, R; Sorensen, P; Sowinski, J; Spinka, H M; Srivastava, B; Stephenson, E J; Stock, R; Stolpovsky, A; Strikhanov, M; Stringfellow, B; Struck, C; Suaide, A A P; Sugarbaker, E; Suire, C; Sumbera, M; Surrow, B; Symons, T J M; de Toledo, A Szanto; Szarwas, P; Tai, A; Takahashi, J; Tang, A H; Thein, D; Thomas, J H; Thompson, M; Tikhomirov, V; Tokarev, M; Tonjes, M B; Trainor, T A; Trentalange, S; Tribble, R E; Trofimov, V; Tsai, O; Ullrich, T; Underwood, D G; Van Buren, G; Vander Molen, A M; Vasilevski, I M; Vasiliev, A N; Vigdor, S E; Voloshin, S A; Wang, F; Ward, 
H; Watson, J W; Wells, R; Westfall, G D; Whitten, C; Wieman, H; Willson, R; Wissink, S W; Witt, R; Wood, J; Xu, N; Xu, Z; Yakutin, A E; Yamamoto, E; Yang, J; Yepes, P; Yurevich, V I; Zanevski, Y V; Zborovský, I; Zhang, H; Zhang, W M; Zoulkarneev, R; Zubarev, A N

    2003-05-02

    The balance function is a new observable based on the principle that charge is locally conserved when particles are pair produced. Balance functions have been measured for charged particle pairs and identified charged pion pairs in Au+Au collisions at the square root of SNN = 130 GeV at the Relativistic Heavy Ion Collider using STAR. Balance functions for peripheral collisions have widths consistent with model predictions based on a superposition of nucleon-nucleon scattering. Widths in central collisions are smaller, consistent with trends predicted by models incorporating late hadronization.

  5. The elastic buckling of super-graphene and super-square carbon nanotube networks

    International Nuclear Information System (INIS)

    Li Ying; Qiu Xinming; Yin Yajun; Yang Fan; Fan Qinshan

    2010-01-01

    The super-graphene (SG) and super-square (SS) carbon nanotube networks are built from straight single-walled carbon nanotubes and the corresponding junctions. The elastic buckling behaviors of these carbon nanotube networks under different boundary conditions are explored through the molecular structural mechanics method. The following results are obtained: (a) The critical buckling forces of the SG and SS networks decrease as the side lengths or aspect ratios of the networks increase. Continuum plate theory gives good predictions for the buckling of the SS network but not for the SG network, which shows non-uniform buckling modes. (b) The carbon nanotube networks are more stable structures than the graphene structures with fewer carbon atoms.

  6. Improved linear least squares estimation using bounded data uncertainty

    KAUST Repository

    Ballal, Tarig

    2015-04-01

    This paper addresses the problem of linear least squares (LS) estimation of a vector x from linearly related observations. In spite of being unbiased, the original LS estimator suffers from high mean squared error, especially at low signal-to-noise ratios. The mean squared error (MSE) of the LS estimator can be improved by introducing some form of regularization based on certain constraints. We propose an improved LS (ILS) estimator that approximately minimizes the MSE, without imposing any constraints. To achieve this, we allow for perturbation in the measurement matrix. Then we utilize a bounded data uncertainty (BDU) framework to derive a simple iterative procedure to estimate the regularization parameter. Numerical results demonstrate that the proposed BDU-ILS estimator is superior to the original LS estimator, and it converges to the best linear estimator, the linear minimum mean squared error (LMMSE) estimator, when the elements of x are statistically white.

  7. Improved linear least squares estimation using bounded data uncertainty

    KAUST Repository

    Ballal, Tarig; Al-Naffouri, Tareq Y.

    2015-01-01

    This paper addresses the problem of linear least squares (LS) estimation of a vector x from linearly related observations. In spite of being unbiased, the original LS estimator suffers from high mean squared error, especially at low signal-to-noise ratios. The mean squared error (MSE) of the LS estimator can be improved by introducing some form of regularization based on certain constraints. We propose an improved LS (ILS) estimator that approximately minimizes the MSE, without imposing any constraints. To achieve this, we allow for perturbation in the measurement matrix. Then we utilize a bounded data uncertainty (BDU) framework to derive a simple iterative procedure to estimate the regularization parameter. Numerical results demonstrate that the proposed BDU-ILS estimator is superior to the original LS estimator, and it converges to the best linear estimator, the linear minimum mean squared error (LMMSE) estimator, when the elements of x are statistically white.
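
    The effect described in the abstract can be reproduced with a generic regularized least squares estimator, sketched below. This is a plain ridge-style illustration with a crude, hypothetical choice of regularization parameter; it is not the BDU-based iterative procedure proposed in the paper.

```python
# Illustration: at low SNR a regularized LS estimate can have much lower MSE than plain LS.
import numpy as np

rng = np.random.default_rng(2)
n, m, sigma = 40, 20, 1.0                     # observations, unknowns, noise std
A = rng.normal(size=(n, m))                   # measurement matrix
x = rng.normal(size=m)                        # white unknown vector
y = A @ x + sigma * rng.normal(size=n)        # low-SNR observations

# Plain least squares.
x_ls, *_ = np.linalg.lstsq(A, y, rcond=None)

# Regularized least squares with a simple fixed parameter (crude, hypothetical choice).
lam = sigma ** 2 * m / np.sum(x_ls ** 2)
x_rls = np.linalg.solve(A.T @ A + lam * np.eye(m), A.T @ y)

print("MSE(LS)            =", np.mean((x_ls - x) ** 2))
print("MSE(regularized LS)=", np.mean((x_rls - x) ** 2))
```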

  8. Establishment and assessment of CHF data base for square-lattice rod bundles

    International Nuclear Information System (INIS)

    Hwang, Dae Hyun; Seo, K. W.; Kim, K. K.; Zee, S. Q.

    2002-02-01

    A CHF data base is constructed for square-lattice rod bundles and assessed against various existing CHF prediction models. The CHF data base consists of 10725 data points obtained from 147 test bundles with uniform axial power distributions and 29 test bundles with non-uniform axial power distributions. The local thermal-hydraulic conditions in the subchannels are calculated by employing the subchannel analysis code MATRA. The influence of the turbulent mixing parameter on CHF is evaluated quantitatively for selected test bundles with representative cross-sectional configurations. The performance of various CHF prediction models, including empirical correlations for round tubes or rod bundles, theoretical DNB models such as the sublayer dryout model and the bubble crowding model, and the CHF lookup table for round tubes, is assessed against the localized rod bundle CHF data base. The analysis reveals that the 1995 AECL-IPPE CHF lookup table method is one of the most promising models in terms of prediction accuracy and applicable range. When the CHF lookup table is applied to the 9113 data points with uniform axial heat profiles, the mean and standard deviation of P/M are calculated as 1.003 and 0.115 by HBM, and 1.022 and 0.319 by DSM, respectively.

  9. Two-dimensional square ternary Cu2MX4 (M = Mo, W; X = S, Se) monolayers and nanoribbons predicted from density functional theory

    KAUST Repository

    Gan, Liyong

    2014-03-19

    Two-dimensional (2D) materials often adopt a hexagonal lattice. We report on a class of 2D materials, Cu2MX4 (M = Mo, W; X = S, Se), that has a square lattice. Up to three monolayers, the systems are kinetically stable. All of them are semiconductors with band gaps from 2.03 to 2.48 eV. Specifically, the states giving rise to the valence band maximum are confined to the Cu and X atoms, while those giving rise to the conduction band minimum are confined to the M atoms, suggesting that spontaneous charge separation occurs. The semiconductive nature makes the materials promising for transistors, optoelectronics, and solar energy conversion. Moreover, the ferromagnetism on the edges of square Cu2MX4 nanoribbons opens applications in spintronics.

  10. Two-dimensional square ternary Cu2MX4 (M = Mo, W; X = S, Se) monolayers and nanoribbons predicted from density functional theory

    KAUST Repository

    Gan, Liyong; Schwingenschlögl, Udo

    2014-01-01

    Two-dimensional (2D) materials often adopt a hexagonal lattice. We report on a class of 2D materials, Cu2MX4 (M = Mo, W; X = S, Se), that has a square lattice. Up to three monolayers, the systems are kinetically stable. All of them are semiconductors with band gaps from 2.03 to 2.48 eV. Specifically, the states giving rise to the valence band maximum are confined to the Cu and X atoms, while those giving rise to the conduction band minimum are confined to the M atoms, suggesting that spontaneous charge separation occurs. The semiconductive nature makes the materials promising for transistors, optoelectronics, and solar energy conversion. Moreover, the ferromagnetism on the edges of square Cu2MX4 nanoribbons opens applications in spintronics.

  11. Proportionate-type normalized least mean square algorithms

    CERN Document Server

    Wagner, Kevin

    2013-01-01

    The topic of this book is proportionate-type normalized least mean squares (PtNLMS) adaptive filtering algorithms, which attempt to estimate an unknown impulse response by adaptively giving gains proportionate to an estimate of the impulse response and the current measured error. These algorithms offer low computational complexity and fast convergence times for sparse impulse responses in network and acoustic echo cancellation applications. New PtNLMS algorithms are developed by choosing gains that optimize user-defined criteria, such as mean square error, at all times. PtNLMS algorithms ar
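
    The flavour of a proportionate-type update can be seen in the short sketch below, which uses a simple PNLMS-style gain rule to identify a sparse echo path. The gain assignment here is only one common choice for illustration; the optimized gain selections developed in the book are not reproduced.

```python
# Minimal PNLMS-style adaptive filter sketch (illustrative gain rule, placeholder data).
import numpy as np

def pnlms(x, d, L=64, mu=0.5, delta=1e-3, rho=0.01):
    """Adapt an L-tap filter w so that w * x tracks the desired signal d."""
    w = np.zeros(L)
    x_buf = np.zeros(L)
    for n in range(len(x)):
        x_buf = np.roll(x_buf, 1)
        x_buf[0] = x[n]
        e = d[n] - w @ x_buf                                   # a-priori error
        g = np.maximum(np.abs(w), rho * max(np.max(np.abs(w)), delta))
        g = g / np.sum(g)                                      # proportionate gains
        w += mu * e * (g * x_buf) / (x_buf @ (g * x_buf) + delta)
    return w

# Toy usage: identify a sparse impulse response from white-noise input.
rng = np.random.default_rng(3)
h = np.zeros(64)
h[[3, 17, 40]] = [1.0, -0.5, 0.25]                             # sparse "echo path"
x = rng.normal(size=5000)
d = np.convolve(x, h)[: len(x)] + 0.01 * rng.normal(size=len(x))
w = pnlms(x, d)
print("coefficient error:", np.linalg.norm(w - h))
```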

  12. Building 'Tower' on the Partizans square in Užice (1961

    Directory of Open Access Journals (Sweden)

    Kuzović Duško

    2016-01-01

    The building 'Tower' at the Partisans Square in Užice was built from 1958 to 1961 to a design by architect Milorad Pantović. It is located in the north zone, on Kralja Petra street, in a continuous series of buildings that form the eastern side of the square. Thanks to the topography of the terrain and the amount of public space, the tower dominates the square. Floors: basement 1 + basement 2 + ground floor + mezzanine + 11 floors + attic. The Tower at the Partisans Square was the first multi-storey building erected in Užice. The architectural value of the building represents a significant segment of the architectural heritage of Serbia and Yugoslavia.

  13. The circumference of the square of a connected graph

    DEFF Research Database (Denmark)

    Brandt, S.; Muttel, J.; Rautenbach, D.

    2014-01-01

    The celebrated result of Fleischner states that the square of every 2-connected graph is Hamiltonian. We investigate what happens if the graph is just connected. For every n ≥ 3, we determine the smallest length c(n) of a longest cycle in the square of a connected graph of order n and show that c(n) is a logarithmic function in n. Furthermore, for every c ≥ 3, we characterize the connected graphs of largest order whose square contains no cycle of length at least c.

  14. Statistical Validation of Engineering and Scientific Models: Background

    International Nuclear Information System (INIS)

    Hills, Richard G.; Trucano, Timothy G.

    1999-01-01

    A tutorial is presented discussing the basic issues associated with propagation of uncertainty analysis and statistical validation of engineering and scientific models. The propagation of uncertainty tutorial illustrates the use of the sensitivity method and the Monte Carlo method to evaluate the uncertainty in predictions for linear and nonlinear models. Four example applications are presented: a linear model, a model for the behavior of a damped spring-mass system, a transient thermal conduction model, and a nonlinear transient convective-diffusive model based on Burgers' equation. Correlated and uncorrelated model input parameters are considered. The model validation tutorial builds on the material presented in the propagation of uncertainty tutorial and uses the damped spring-mass system as the example application. The validation tutorial illustrates several concepts associated with the application of statistical inference to test model predictions against experimental observations. Several validation methods are presented, including error band based, multivariate, sum of squares of residuals, and optimization methods. After completion of the tutorial, a survey of statistical model validation literature is presented and recommendations for future work are made.
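
    For the damped spring-mass example mentioned above, Monte Carlo propagation of uncertainty can be sketched in a few lines: sample the uncertain inputs, push each sample through the model, and summarize the spread of the prediction. The parameter means and standard deviations below are invented for illustration and are not the tutorial's values.

```python
# Monte Carlo propagation of input uncertainty through a damped spring-mass model.
import numpy as np

def displacement(t, m, c, k, x0=1.0):
    """Free response of an under-damped spring-mass-damper released from rest at x0."""
    wn = np.sqrt(k / m)                  # natural frequency
    zeta = c / (2.0 * np.sqrt(k * m))    # damping ratio (< 1 assumed here)
    wd = wn * np.sqrt(1.0 - zeta ** 2)   # damped frequency
    return x0 * np.exp(-zeta * wn * t) * (np.cos(wd * t)
           + zeta * wn / wd * np.sin(wd * t))

rng = np.random.default_rng(4)
N = 10_000
m = rng.normal(1.0, 0.05, N)             # mass [kg]        (hypothetical mean/std)
c = rng.normal(0.3, 0.03, N)             # damping [N s/m]  (hypothetical mean/std)
k = rng.normal(40.0, 2.0, N)             # stiffness [N/m]  (hypothetical mean/std)

t_obs = 2.0
x = displacement(t_obs, m, c, k)         # vectorised over the sampled parameters
print(f"x({t_obs} s): mean = {x.mean():.4f}, std = {x.std():.4f}")
print("95% interval:", np.percentile(x, [2.5, 97.5]))
```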

  15. Spectrum and isotropy of the submillimeter background radiation

    International Nuclear Information System (INIS)

    Muehlner, D.

    1977-01-01

    Two great astronomical discoveries have most shaped our present concept of the Big Bang universe. Like the Hubble recession of the galaxies, the discovery of the 3 K background radiation by Penzias and Wilson in 1965 has given rise to a line of research which is still very active today. Penzias and Wilson's universal microwave background at 7 cm was immediately interpreted by R.H. Dicke's group at Princeton as coming from the primordial fireball of incandescent plasma which filled the universe for the million years or so after its explosive birth. This interpretation gives rise to two crucial predictions as to the nature of the background radiation. Its spectrum should be thermal even after having been red shifted by a factor of approximately 1000 by the expansion of the universe, and the radiation should be isotropic - assuming that the universe itself is isotropic. If the background radiation is indeed from the primordial fireball, it affords us the only direct view of the very young universe. This paper deals first with the spectrum and then with the isotropy of the background radiation, with emphasis on high frequency or submillimeter measurements. Prospects for the future are discussed briefly. (Auth.)

  16. High-frequency matrix converter with square wave input

    Science.gov (United States)

    Carr, Joseph Alexander; Balda, Juan Carlos

    2015-03-31

    A device for producing an alternating current output voltage from a high-frequency, square-wave input voltage, comprising a high-frequency, square-wave input, a matrix converter, and a control system. The matrix converter comprises a plurality of electrical switches. The high-frequency input and the matrix converter are electrically connected to each other. The control system is connected to each switch of the matrix converter. The control system is electrically connected to the input of the matrix converter. The control system is configured to operate each electrical switch of the matrix converter, converting a high-frequency, square-wave input voltage across the first input port of the matrix converter and the second input port of the matrix converter to an alternating current output voltage at the output of the matrix converter.

  17. Modeling of MHD natural convection in a square enclosure having an adiabatic square shaped body using Lattice Boltzmann Method

    Directory of Open Access Journals (Sweden)

    Ahmed Kadhim Hussein

    2016-03-01

    A steady laminar two-dimensional magneto-hydrodynamic (MHD) natural convection flow in a square enclosure filled with an electrically conducting fluid is numerically investigated using the Lattice Boltzmann Method (LBM). The left and right vertical sidewalls of the square enclosure are maintained at hot and cold temperatures, respectively. The horizontal top and bottom walls are considered thermally insulated. An adiabatic square-shaped body is located in the center of the square enclosure, and an external magnetic field is applied parallel to the horizontal x-axis. In the present work, the following parametric ranges of the non-dimensional groups are utilized: the Hartmann number is varied as 0 ⩽ Ha ⩽ 50, the Rayleigh number as 10^3 ⩽ Ra ⩽ 10^5, and the Prandtl number as 0.05 ⩽ Pr ⩽ 5. It is found that the Hartmann number, Rayleigh number, and Prandtl number play an important role in the flow and thermal characteristics. As the Hartmann number increases, the average Nusselt number decreases. The results also show that the effect of the magnetic field on the flow field increases with increasing Prandtl number. However, the effect of the Prandtl number on the average Nusselt number with a magnetic field is smaller than in the case without a magnetic field. Comparisons with previously published numerical works are performed and good agreement between the results is observed.

  18. Simultaneous estimation of cross-validation errors in least squares collocation applied for statistical testing and evaluation of the noise variance components

    Science.gov (United States)

    Behnabian, Behzad; Mashhadi Hossainali, Masoud; Malekzadeh, Ahad

    2018-02-01

    The cross-validation technique is a popular method to assess and improve the quality of prediction by least squares collocation (LSC). We present a formula for direct estimation of the vector of cross-validation errors (CVEs) in LSC which is much faster than element-wise CVE computation. We show that a quadratic form of the CVEs follows a Chi-squared distribution. Furthermore, an a posteriori noise variance factor is derived from the quadratic form of the CVEs. In order to detect blunders in the observations, the estimated standardized CVE is proposed as a test statistic which can be applied when noise variances are known or unknown. We use LSC together with the methods proposed in this research for interpolation of crustal subsidence in the northern coast of the Gulf of Mexico. The results show that after detection and removal of outliers, the root mean square (RMS) of the CVEs and the estimated noise standard deviation are reduced by about 51 and 59%, respectively. In addition, the RMS of the LSC prediction error at data points and the RMS of the estimated noise of the observations are decreased by 39 and 67%, respectively. However, the RMS of the LSC prediction error on a regular grid of interpolation points covering the area is only reduced by about 4%, which is a consequence of the sparse distribution of data points for this case study. The influence of gross errors on the LSC prediction results is also investigated by lower cutoff CVEs. It is shown that after elimination of outliers, the RMS of this type of error is also reduced by 19.5% within a 5 km radius of vicinity. We propose a method using standardized CVEs for classification of the dataset into three groups with presumed different noise variances. The noise variance components for each of the groups are estimated using the restricted maximum-likelihood method via the Fisher scoring technique. Finally, LSC assessment measures were computed for the estimated heterogeneous noise variance model and compared with those of the homogeneous model. The advantage of the proposed method is the
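
    The idea of computing all cross-validation errors in one pass, rather than refitting with each point removed, has a well-known counterpart in ordinary least squares: the leave-one-out residual equals the ordinary residual divided by one minus the leverage. The sketch below verifies that identity numerically; it is an analogue in spirit of the direct CVE formula derived for least squares collocation, not that formula itself.

```python
# Direct leave-one-out cross-validation errors for ordinary least squares.
import numpy as np

rng = np.random.default_rng(5)
n, p = 200, 4
A = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])   # design matrix
y = A @ np.array([1.0, 2.0, -1.0, 0.5]) + 0.3 * rng.normal(size=n)

# Direct formula: LOO residual_i = residual_i / (1 - leverage_i).
H = A @ np.linalg.solve(A.T @ A, A.T)          # hat matrix
res = y - H @ y
cve_direct = res / (1.0 - np.diag(H))

# Brute-force check: refit with each observation left out.
cve_loop = np.empty(n)
for i in range(n):
    keep = np.arange(n) != i
    beta = np.linalg.lstsq(A[keep], y[keep], rcond=None)[0]
    cve_loop[i] = y[i] - A[i] @ beta

print("max |difference| =", np.max(np.abs(cve_direct - cve_loop)))  # ~1e-12
```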

  19. High-Energy Neutron Backgrounds for Underground Dark Matter Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Yu [Syracuse Univ., NY (United States)

    2016-01-01

    Direct dark matter detection experiments usually have excellent capability to distinguish nuclear recoils, the expected interactions with Weakly Interacting Massive Particle (WIMP) dark matter, from electronic recoils, so that they can efficiently reject background events such as gamma-rays and charged particles. However, both WIMPs and neutrons can induce nuclear recoils. Neutrons are therefore the most crucial background for direct dark matter detection. It is important to understand and account for all sources of neutron backgrounds when claiming a discovery of dark matter detection or reporting limits on the WIMP-nucleon cross section. One type of neutron background that is not well understood is the cosmogenic neutrons from muons interacting with the underground cavern rock and materials surrounding a dark matter detector. The Neutron Multiplicity Meter (NMM) is a water Cherenkov detector capable of measuring the cosmogenic neutron flux at the Soudan Underground Laboratory, which has an overburden of 2090 meters water equivalent. The NMM consists of two 2.2-tonne gadolinium-doped water tanks situated atop a 20-tonne lead target. It detects a high-energy (>~ 50 MeV) neutron via moderation and capture of the multiple secondary neutrons released when the former interacts in the lead target. The multiplicity of secondary neutrons for the high-energy neutron provides a benchmark for comparison to the current Monte Carlo predictions. Combined with the Monte Carlo simulation, the muon-induced high-energy neutron flux above 50 MeV is measured to be (1.3 ± 0.2) × 10^-9 cm^-2 s^-1, in reasonable agreement with the model prediction. The measured multiplicity spectrum agrees well with that of the Monte Carlo simulation for multiplicities below 10, but shows an excess of approximately a factor of three over the Monte Carlo prediction for multiplicities of ~10-20. In an effort to reduce neutron backgrounds for the dark matter experiment SuperCDMS SNOLAB, an active neutron veto was developed.

  20. [Study on predicting firmness of watermelon by Vis/NIR diffuse transmittance technique].

    Science.gov (United States)

    Tian, Hai-Qing; Ying, Yi-Bin; Lu, Hui-Shan; Xu, Hui-Rong; Xie, Li-Juan; Fu, Xia-Ping; Yu, Hai-Yan

    2007-06-01

    Watermelon is a popular fruit in the world and firmness (FM) is one of the major characteristics used for assessing watermelon quality. The objective of the present research was to study the potential of visible/near infrared (Vis/NIR) diffuse transmittance spectroscopy as a way for the nondestructive measurement of the FM of watermelon. Statistical models between the spectra and FM were developed using partial least squares (PLS) and principal component regression (PCR) methods. The performance of the different models was assessed in terms of the correlation coefficients (r) of the validation set of samples and the root mean square errors of prediction (RMSEP). Models for three kinds of mathematical treatments of the spectra (original, first derivative and second derivative) were established. The Savitzky-Golay filter smoothing method was used for spectral data smoothing. The PLS model of the second derivative spectra gave the best prediction of FM, with a correlation coefficient (r) of 0.974 and a root mean square error of prediction (RMSEP) of 0.589 N using the Savitzky-Golay filter smoothing method. The results of this study indicate that NIR diffuse transmittance spectroscopy can be used to predict the FM of watermelon. The Vis/NIR diffuse transmittance technique will be valuable for the nondestructive assessment of large, thick-skinned fruits.

  1. Background matching and camouflage efficiency predict population density in four-eyed turtle (Sacalia quadriocellata).

    Science.gov (United States)

    Xiao, Fanrong; Yang, Canchao; Shi, Haitao; Wang, Jichao; Sun, Liang; Lin, Liu

    2016-10-01

    Background matching is an important form of camouflage and is widespread among animals. In the field, however, few studies have addressed background matching, and camouflage efficiency has not been reported in freshwater turtles. Background matching and camouflage efficiency of the four-eyed turtle, Sacalia quadriocellata, among three microhabitat sections of Hezonggou stream were investigated by measuring carapace components of CIE L*a*b* (International Commission on Illumination; lightness, red/green and yellow/blue) color space, and by scoring camouflage efficiency through the use of humans as predators. The results showed that the color difference (ΔE), lightness difference (ΔL*), and chroma difference (Δa*b*) between the carapace and the substrate background in midstream were significantly lower than those upstream and downstream, indicating that the four-eyed turtle carapace color most closely matched the substrate of midstream. In line with these findings, camouflage efficiency was best for the turtles that inhabit midstream. These results suggest that the four-eyed turtles may enhance camouflage efficiency by selecting the microhabitat that best matches their carapace color. This finding may explain the high population density of the four-eyed turtle in the midstream section of Hezonggou stream. To the best of our knowledge, this study is among the first to quantify camouflage of freshwater turtles in the wild, laying the groundwork for further study of the function and mechanisms of turtle camouflage. Copyright © 2016. Published by Elsevier B.V.

  2. Optimization of Reciprocals and Square Roots on the i860 Microprocessor

    DEFF Research Database (Denmark)

    Sinclair, Robert

    1996-01-01

    The i860 microprocessor lacks both a divide and a square root instruction. The consequences of this for code involving many reciprocal square roots, such as many-body simulations involving Coulomb-like potentials, are discussed with a particular emphasis on high performance.

  3. Background velocity inversion by phase along reflection wave paths

    KAUST Repository

    Yu, Han; Guo, Bowen; Schuster, Gerard T.

    2014-01-01

    A background velocity model containing the correct low-wavenumber information is desired for both the quality of the migration image and the success of waveform inversion. We propose to invert for the low-wavenumber part of the velocity model by minimizing the phase difference between predicted and observed reflections. The velocity update is exclusively along the reflection wavepaths and, unlike conventional FWI, not along the reflection ellipses. This allows for reconstructing the smoothly varying parts of the background velocity model. Tests with synthetic data show both the benefits and limitations of this method.

  4. Background velocity inversion by phase along reflection wave paths

    KAUST Repository

    Yu, Han

    2014-08-05

    A background velocity model containing the correct low-wavenumber information is desired for both the quality of the migration image and the success of waveform inversion. We propose to invert for the low-wavenumber part of the velocity model by minimizing the phase difference between predicted and observed reflections. The velocity update is exclusively along the reflection wavepaths and, unlike conventional FWI, not along the reflection ellipses. This allows for reconstructing the smoothly varying parts of the background velocity model. Tests with synthetic data show both the benefits and limitations of this method.

  5. note: The least square nucleolus is a general nucleolus

    OpenAIRE

    Elisenda Molina; Juan Tejada

    2000-01-01

    This short note proves that the least square nucleolus (Ruiz et al. (1996)) and the lexicographical solution (Sakawa and Nishizaki (1994)) select the same imputation in each game with nonempty imputation set. As a consequence the least square nucleolus is a general nucleolus (Maschler et al. (1992)).

  6. Landslide susceptibility mapping using decision-tree based CHi-squared automatic interaction detection (CHAID) and Logistic regression (LR) integration

    International Nuclear Information System (INIS)

    Althuwaynee, Omar F; Pradhan, Biswajeet; Ahmad, Noordin

    2014-01-01

    This article uses a methodology based on chi-squared automatic interaction detection (CHAID), a multivariate method with an automatic classification capacity, to analyse large numbers of landslide conditioning factors. This new algorithm was developed to overcome the subjectivity of the manual categorization of scale data of landslide conditioning factors, and to predict a rainfall-induced susceptibility map for Kuala Lumpur city and surrounding areas using a geographic information system (GIS). The main objective of this article is to use the CHAID method to find the best classification fit for each conditioning factor and then to combine it with logistic regression (LR). The LR model was used to find the corresponding coefficients of the best-fitting function that assesses the optimal terminal nodes. A cluster pattern of landslide locations was extracted in a previous study using the nearest neighbour index (NNI), which was then used to identify the range of clustered landslide locations. The clustered locations were used as model training data together with 14 landslide conditioning factors, such as topographically derived parameters, lithology, NDVI, and land use and land cover maps. The Pearson chi-squared value was used to find the best classification fit between the dependent variable and the conditioning factors. Finally, the relationships between the conditioning factors were assessed and the landslide susceptibility map (LSM) was produced. The area under the curve (AUC) was used to test the model reliability and prediction capability with the training and validation landslide locations, respectively. This study proved the efficiency and reliability of the decision tree (DT) model in landslide susceptibility mapping. It also provided a valuable scientific basis for spatial decision making in planning and urban management studies.
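
    A minimal sketch of the general idea (a chi-squared-guided classification of each conditioning factor followed by logistic regression and AUC validation), using a simple quantile binning chosen by the Pearson chi-squared statistic as a stand-in for CHAID; scikit-learn and SciPy are assumed, and the synthetic factors and settings are illustrative:

        import numpy as np
        from scipy.stats import chi2_contingency
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(1)
        n = 1000
        slope = rng.uniform(0, 60, n)                       # stand-in conditioning factor (degrees)
        ndvi = rng.uniform(-0.2, 0.9, n)                    # stand-in conditioning factor
        landslide = (rng.random(n) < 1 / (1 + np.exp(-(0.05 * slope - 2)))).astype(int)

        def best_binning(factor, target, candidate_bins=(3, 4, 5, 6)):
            """Pick the quantile binning whose contingency table has the largest chi-squared."""
            best = None
            for k in candidate_bins:
                edges = np.quantile(factor, np.linspace(0, 1, k + 1)[1:-1])
                codes = np.digitize(factor, edges)
                table = np.array([[np.sum((codes == c) & (target == t)) for t in (0, 1)]
                                  for c in np.unique(codes)])
                chi2 = chi2_contingency(table)[0]
                if best is None or chi2 > best[0]:
                    best = (chi2, codes)
            return best[1]

        # Binning chosen on the full dataset for brevity; a real study would use training data only.
        X = np.column_stack([best_binning(slope, landslide), best_binning(ndvi, landslide)])
        X_tr, X_te, y_tr, y_te = train_test_split(X, landslide, test_size=0.3, random_state=0)
        lr = LogisticRegression().fit(X_tr, y_tr)
        print("AUC:", roc_auc_score(y_te, lr.predict_proba(X_te)[:, 1]))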

  7. Landslide susceptibility mapping using decision-tree based CHi-squared automatic interaction detection (CHAID) and Logistic regression (LR) integration

    Science.gov (United States)

    Althuwaynee, Omar F.; Pradhan, Biswajeet; Ahmad, Noordin

    2014-06-01

    This article uses a methodology based on chi-squared automatic interaction detection (CHAID), a multivariate method with an automatic classification capacity, to analyse large numbers of landslide conditioning factors. This new algorithm was developed to overcome the subjectivity of the manual categorization of scale data of landslide conditioning factors, and to predict a rainfall-induced susceptibility map for Kuala Lumpur city and surrounding areas using a geographic information system (GIS). The main objective of this article is to use the CHAID method to find the best classification fit for each conditioning factor and then to combine it with logistic regression (LR). The LR model was used to find the corresponding coefficients of the best-fitting function that assesses the optimal terminal nodes. A cluster pattern of landslide locations was extracted in a previous study using the nearest neighbour index (NNI), which was then used to identify the range of clustered landslide locations. The clustered locations were used as model training data together with 14 landslide conditioning factors, such as topographically derived parameters, lithology, NDVI, and land use and land cover maps. The Pearson chi-squared value was used to find the best classification fit between the dependent variable and the conditioning factors. Finally, the relationships between the conditioning factors were assessed and the landslide susceptibility map (LSM) was produced. The area under the curve (AUC) was used to test the model reliability and prediction capability with the training and validation landslide locations, respectively. This study proved the efficiency and reliability of the decision tree (DT) model in landslide susceptibility mapping. It also provided a valuable scientific basis for spatial decision making in planning and urban management studies.

  8. Child- and elder-friendly urban public places in Fatahillah Square Historical District

    Science.gov (United States)

    Srinaga, F.; LKatoppo, M.; Hidayat, J.

    2018-03-01

    Fatahillah Square, an important historical urban square in Jakarta, has problems with how its eye-level area is experienced as an integrated whole. Visitors cannot enjoy their time in the square with respect to visual, emotional, spatial and bodily comfort. The square also lacks friendly and convenient places for children, the elderly and the disabled, especially people with limited mobility. This research proposes a design inception for Fatahillah Square using an inclusive, user-centred design approach, while at the same time incorporating theoretical studies of child- and elderly-friendly design considerations. The first stage of this research was building an inclusive design parameter, beginning with context-led research that assesses the quality of Fatahillah Square through three basic components of urban space: hardware, software and orgware. The second stage of this research is to propose an inclusive design inception for Fatahillah Square.

  9. Observable tensor-to-scalar ratio and secondary gravitational wave background

    Science.gov (United States)

    Chatterjee, Arindam; Mazumdar, Anupam

    2018-03-01

    In this paper we highlight how a simple vacuum-energy-dominated inflection-point inflation model can match the current data from the cosmic microwave background radiation and predict a large primordial tensor-to-scalar ratio, r ~ O(10^-3 - 10^-2), with an observable second-order gravitational wave background, which can potentially be detected by future experiments such as the DECi-hertz Interferometer Gravitational wave Observatory (DECIGO), the Laser Interferometer Space Antenna (eLISA), Cosmic Explorer (CE), and the Big Bang Observer (BBO).

  10. Chi-square test and its application in hypothesis testing

    Directory of Open Access Journals (Sweden)

    Rakesh Rana

    2015-01-01

    Full Text Available In medical research, studies often collect data on categorical variables that can be summarized as a series of counts. These counts are commonly arranged in a tabular format known as a contingency table. The chi-square test statistic can be used to evaluate whether there is an association between the rows and columns in a contingency table. More specifically, this statistic can be used to determine whether there is any difference between the study groups in the proportions of the risk factor of interest. The chi-square test and the logic of hypothesis testing were developed by Karl Pearson. This article describes in detail what a chi-square test is, the type of data it is used on, the assumptions associated with its application, how to calculate it manually, and how to make use of an online calculator for calculating the chi-square statistic and its associated P-value.
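
    A minimal sketch of the test described above applied to a 2x2 contingency table of counts, using SciPy; the counts are illustrative placeholders:

        from scipy.stats import chi2_contingency

        # Rows: study groups; columns: risk factor present / absent (illustrative counts).
        table = [[30, 70],
                 [45, 55]]

        chi2, p_value, dof, expected = chi2_contingency(table)
        print(f"chi-square = {chi2:.3f}, dof = {dof}, P = {p_value:.4f}")
        print("expected counts under independence:\n", expected)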

  11. Classical square-plus-triangle well fluid

    International Nuclear Information System (INIS)

    Boghdadi, M.

    1984-01-01

    A simplified model for the intermolecular-potential function which consists of a hard core and a square-plus-triangle well is proposed. The square width is taken to be λ1 − 1 and the triangle width is λ2 − λ1, where the diameter of the molecules is assumed to be ε. Under the restriction that the area under the potential well should be equal to 0.5ε, which has its own reason, it is shown that the appropriate choice of λ1 and λ2 that best mimics the Lennard-Jones (LJ) cut-off results is 1.15 and 1.85, respectively. With this choice for λ1 and λ2, the proposed model is effective and satisfactory

  12. Infrared image background modeling based on improved Susan filtering

    Science.gov (United States)

    Yuehua, Xia

    2018-02-01

    When the SUSAN filter is used to model the background of an infrared image, the Gaussian filter lacks the ability to filter directionally. After filtering, the edge information of the image is not well preserved, so that many edge singular points remain in the difference image, which increases the difficulty of target detection. To solve these problems, an anisotropy algorithm is introduced in this paper, and an anisotropic Gaussian filter is used instead of the Gaussian filter in the SUSAN filter operator. Firstly, an anisotropic gradient operator is used to calculate the horizontal and vertical gradients at each image point, to determine the direction of the filter's long axis. Secondly, the local area of the point and the neighbourhood smoothness are used to calculate the filter length and the short-axis variance. Then the first-order norm of the difference between the local gray levels and their mean is calculated to determine the threshold of the SUSAN filter. Finally, the constructed SUSAN filter is convolved with the image to obtain the background image, and the difference between the background image and the original image is computed. The experimental results, in which the background modelling effect for infrared images is evaluated by Mean Squared Error (MSE), Structural Similarity (SSIM) and local Signal-to-noise Ratio Gain (GSNR), show that, compared with the traditional filtering algorithm, the improved SUSAN filter achieves a better background modelling effect: it effectively preserves the edge information in the image, and dim small targets are effectively enhanced in the difference image, which greatly reduces the false alarm rate.
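
    A minimal sketch of the oriented-filter idea behind this background model: build an anisotropic Gaussian kernel whose long axis follows a chosen orientation, convolve it with the image to estimate the background, and take the difference image. This is a simplified stand-in (a single global orientation rather than the per-pixel, gradient-driven orientation and SUSAN thresholding described above); the array names and parameters are illustrative:

        import numpy as np
        from scipy.ndimage import convolve

        def anisotropic_gaussian_kernel(sigma_long, sigma_short, theta, radius=10):
            """2-D Gaussian elongated along the direction theta (radians)."""
            y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
            u = x * np.cos(theta) + y * np.sin(theta)    # coordinate along the long axis
            v = -x * np.sin(theta) + y * np.cos(theta)   # coordinate along the short axis
            k = np.exp(-(u**2 / (2 * sigma_long**2) + v**2 / (2 * sigma_short**2)))
            return k / k.sum()

        rng = np.random.default_rng(0)
        image = rng.normal(loc=100.0, scale=5.0, size=(128, 128))   # stand-in infrared frame
        image[60:63, 60:63] += 40.0                                  # dim small "target"

        # Background estimate with a filter oriented at 30 degrees; in the full method the
        # orientation would be set per pixel from the local gradient direction.
        kernel = anisotropic_gaussian_kernel(sigma_long=4.0, sigma_short=1.0, theta=np.deg2rad(30))
        background = convolve(image, kernel, mode="reflect")
        difference = image - background                              # residual containing the target

        mse = np.mean((image - background) ** 2)                     # one of the metrics quoted above
        print(f"MSE of background model: {mse:.2f}")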

  13. Linear zonal atmospheric prediction for adaptive optics

    Science.gov (United States)

    McGuire, Patrick C.; Rhoadarmer, Troy A.; Coy, Hanna A.; Angel, J. Roger P.; Lloyd-Hart, Michael

    2000-07-01

    We compare linear zonal predictors of atmospheric turbulence for adaptive optics. Zonal prediction has the possible advantage of being able to interpret and utilize wind-velocity information from the wavefront sensor better than modal prediction. For simulated open-loop atmospheric data for a 2-meter, 16-subaperture AO telescope with 5-millisecond prediction and a lookback of 4 slope-vectors, we find that Widrow-Hoff Delta-Rule training of linear nets and Back-Propagation training of non-linear multilayer neural networks is quite slow, getting stuck on plateaus or in local minima. Recursive Least Squares training of linear predictors is two orders of magnitude faster and it also converges to the solution with global minimum error. We have successfully implemented Amari's Adaptive Natural Gradient Learning (ANGL) technique for a linear zonal predictor, which premultiplies the Delta-Rule gradients with a matrix that orthogonalizes the parameter space and speeds up the training by two orders of magnitude, like the Recursive Least Squares predictor. This shows that the simple Widrow-Hoff Delta-Rule's slow convergence is not a fluke. In the case of bright guidestars, the ANGL, RLS, and standard matrix-inversion least-squares (MILS) algorithms all converge to the same global minimum linear total phase error (approximately 0.18 rad^2), which is only approximately 5% higher than the spatial phase error (approximately 0.17 rad^2), and is approximately 33% lower than the total 'naive' phase error without prediction (approximately 0.27 rad^2). ANGL can, in principle, also be extended to make non-linear neural network training feasible for these large networks, with the potential to lower the predictor error below the linear predictor error. We will soon scale our linear work to the approximately 108-subaperture MMT AO system, both with simulations and real wavefront sensor data from prime focus.
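
    A minimal sketch of a recursive least squares (RLS) update for a linear predictor of the kind discussed above (predicting the next measurement from a short lookback window); the signal, lookback length and forgetting factor are illustrative placeholders:

        import numpy as np

        def rls_predict(signal, lookback=4, lam=0.99, delta=1e3):
            """One-step-ahead RLS prediction of a scalar time series."""
            w = np.zeros(lookback)              # predictor weights
            P = np.eye(lookback) * delta        # inverse correlation matrix estimate
            predictions = np.zeros_like(signal)
            for t in range(lookback, len(signal)):
                x = signal[t - lookback:t][::-1]        # most recent samples first
                predictions[t] = w @ x                  # predict before seeing signal[t]
                err = signal[t] - predictions[t]
                k = P @ x / (lam + x @ P @ x)           # gain vector
                w = w + k * err                         # weight update
                P = (P - np.outer(k, x @ P)) / lam      # inverse-correlation update
            return predictions, w

        # Illustrative turbulence-like signal: a slowly drifting sinusoid plus noise.
        rng = np.random.default_rng(0)
        t = np.arange(2000)
        signal = np.sin(2 * np.pi * t / 150) + 0.1 * rng.normal(size=t.size)

        pred, weights = rls_predict(signal)
        naive = np.roll(signal, 1)                      # "no prediction": reuse the last sample
        print("RLS MSE:  ", np.mean((signal[10:] - pred[10:]) ** 2))
        print("Naive MSE:", np.mean((signal[10:] - naive[10:]) ** 2))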

  14. Least-squares variance component estimation

    NARCIS (Netherlands)

    Teunissen, P.J.G.; Amiri-Simkooei, A.R.

    2007-01-01

    Least-squares variance component estimation (LS-VCE) is a simple, flexible and attractive method for the estimation of unknown variance and covariance components. LS-VCE is simple because it is based on the well-known principle of LS; it is flexible because it works with a user-defined weight

  15. Agglomerative clustering of growing squares

    NARCIS (Netherlands)

    Castermans, Thom; Speckmann, Bettina; Staals, Frank; Verbeek, Kevin; Bender, M.A.; Farach-Colton, M.; Mosteiro, M.A.

    2018-01-01

    We study an agglomerative clustering problem motivated by interactive glyphs in geo-visualization. Consider a set of disjoint square glyphs on an interactive map. When the user zooms out, the glyphs grow in size relative to the map, possibly with different speeds. When two glyphs intersect, we wish

  16. Ship Attitude Prediction Based on Input Delay Neural Network and Measurements of Gyroscopes

    DEFF Research Database (Denmark)

    Wang, Yunlong; N. Soltani, Mohsen; Hussain, Dil muhammed Akbar

    2017-01-01

    sampled in a ship simulation hardware system. Moreover, the factors that affect the prediction performance are also explored through a set of experiments. The prediction method proposed can achieve high precision, that is, the root-mean-square prediction errors for roll, pitch and yaw, are 0.26 deg, 0...

  17. Advancing Astrophysics with the Square Kilometre Array

    CERN Document Server

    Fender, Rob; Govoni, Federica; Green, Jimi; Hoare, Melvin; Jarvis, Matt; Johnston-Hollitt, Melanie; Keane, Evan; Koopmans, Leon; Kramer, Michael; Maartens, Roy; Macquart, Jean-Pierre; Mellema, Garrelt; Oosterloo, Tom; Prandoni, Isabella; Pritchard, Jonathan; Santos, Mario; Seymour, Nick; Stappers, Ben; Staveley-Smith, Lister; Tian, Wen Wu; Umana, Grazia; Wagg, Jeff; Bourke, Tyler L; AASKA14

    2015-01-01

    In 2014 it was 10 years since the publication of the comprehensive ‘Science with the Square Kilometre Array’ book and 15 years since the first such volume appeared in 1999. In that time numerous and unexpected advances have been made in the fields of astronomy and physics relevant to the capabilities of the Square Kilometre Array (SKA). The SKA itself progressed from an idea to a developing reality with a baselined Phase 1 design (SKA1) and construction planned from 2017. To facilitate the publication of a new, updated science book, which will be relevant to the current astrophysical context, the meeting "Advancing Astrophysics with the Square Kilometre Array" was held in Giardini Naxos, Sicily. Articles were solicited from the community for that meeting to document the scientific advances enabled by the first phase of the SKA and those pertaining to future SKA deployments, with expected gains of 5 times the Phase 1 sensitivity below 350 MHz, about 10 times the Phase 1 sensitivity above 350 MHz and with f...

  18. Consistency of the least weighted squares under heteroscedasticity

    Czech Academy of Sciences Publication Activity Database

    Víšek, Jan Ámos

    2011-01-01

    Roč. 2011, č. 47 (2011), s. 179-206 ISSN 0023-5954 Grant - others:GA UK(CZ) GA402/09/055 Institutional research plan: CEZ:AV0Z10750506 Keywords : Regression * Consistency * The least weighted squares * Heteroscedasticity Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.454, year: 2011 http://library.utia.cas.cz/separaty/2011/SI/visek-consistency of the least weighted squares under heteroscedasticity.pdf

  19. Background simulations for the Large Area Detector onboard LOFT

    DEFF Research Database (Denmark)

    Campana, Riccardo; Feroci, Marco; Ettore, Del Monte

    2013-01-01

    and magnetic fields around compact objects and in supranuclear density conditions. Having an effective area of ~10 m^2 at 8 keV, LOFT will be able to measure with high sensitivity very fast variability in the X-ray fluxes and spectra. A good knowledge of the in-orbit background environment... is essential to assess the scientific performance of the mission and optimize the design of its main instrument, the Large Area Detector (LAD). In this paper the results of an extensive Geant-4 simulation of the instrument will be discussed, showing the main contributions to the background and the design... an anticipated modulation of the background rate as small as 10% over the orbital timescale. The intrinsic photonic origin of the largest background component also allows for efficient modelling, supported by in-flight active monitoring, allowing systematic residuals to be predicted significantly better than...

  20. Space and protest : A tale of two Egyptian squares

    NARCIS (Netherlands)

    Mohamed, A.A.; Van Nes, A.; Salheen, M.A.

    2015-01-01

    Protests and revolts take place in public space. How they can be controlled or how protests develop depend on the physical layout of the built environment. This study reveals the relationship between urban space and protest for two Egyptian squares: Tahrir Square and Rabaa Al-Adawiya in Cairo. For

  1. Masticatory muscle sleep background electromyographic activity is elevated in myofascial temporomandibular disorder patients.

    Science.gov (United States)

    Raphael, K G; Janal, M N; Sirois, D A; Dubrovsky, B; Wigren, P E; Klausner, J J; Krieger, A C; Lavigne, G J

    2013-12-01

    Despite theoretical speculation and strong clinical belief, recent research using laboratory polysomnographic (PSG) recording has provided new evidence that the frequency of sleep bruxism (SB) masseter muscle events, including grinding or clenching of the teeth during sleep, is not increased for women with chronic myofascial temporomandibular disorder (TMD). The current case-control study compares a large sample of women suffering from chronic myofascial TMD (n = 124) with a demographically matched control group without TMD (n = 46) on sleep background electromyography (EMG) during a laboratory PSG study. Background EMG activity was measured as EMG root mean square (RMS) from the right masseter muscle after lights out. Sleep background EMG activity was defined as the EMG RMS remaining after activity attributable to SB, other orofacial activity, other oromotor activity and movement artefacts was removed. Results indicated that median background EMG during these non-SB event periods was significantly higher for TMD cases than for controls. Moreover, for TMD cases, background EMG was positively associated, and SB event-related EMG negatively associated, with pain intensity ratings (0-10 numerical scale) on post-sleep waking. These data provide the foundation for a new focus on small, but persistent, elevations in sleep EMG activity over the course of the night as a mechanism of pain induction or maintenance. © 2013 John Wiley & Sons Ltd.

  2. A hybrid model for dissolved oxygen prediction in aquaculture based on multi-scale features

    Directory of Open Access Journals (Sweden)

    Chen Li

    2018-03-01

    Full Text Available To increase the prediction accuracy of dissolved oxygen (DO) in aquaculture, a hybrid model based on multi-scale features using ensemble empirical mode decomposition (EEMD) is proposed. Firstly, the original DO datasets are decomposed by EEMD into several components. Secondly, these components are used to reconstruct four terms: a high frequency term, an intermediate frequency term, a low frequency term and a trend term. Thirdly, because the high and intermediate frequency terms fluctuate violently, least squares support vector regression (LSSVR) is used to predict these two terms. The fluctuation of the low frequency term is gentle and periodic, so it can be modeled by a BP neural network optimized with mind evolutionary computation (MEC-BP). The trend term is predicted using a grey model (GM) because it is nearly linear. Finally, the prediction values of the DO datasets are calculated as the sum of the forecast values of all terms. The experimental results demonstrate that our hybrid model outperforms EEMD-ELM (extreme learning machine based on EEMD), EEMD-BP and MEC-BP models in terms of mean absolute error (MAE), mean absolute percentage error (MAPE), mean square error (MSE) and root mean square error (RMSE). Our hybrid model is proven to be an effective approach to predict aquaculture DO.
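
    A minimal sketch of the decomposition-and-recombination idea, assuming the third-party PyEMD package (installed as EMD-signal) for the EEMD step and using scikit-learn regressors as simple stand-ins for the LSSVR, MEC-BP and grey-model components named above; the grouping of IMFs into four terms, the lag features and all model settings are illustrative assumptions:

        import numpy as np
        from PyEMD import EEMD                              # assumed: pip install EMD-signal
        from sklearn.svm import SVR                         # stand-in for LSSVR
        from sklearn.neural_network import MLPRegressor     # stand-in for MEC-BP
        from sklearn.linear_model import LinearRegression   # stand-in for the grey model (trend)

        rng = np.random.default_rng(0)
        t = np.arange(600, dtype=float)
        do = 7 + 0.002 * t + np.sin(2 * np.pi * t / 24) + 0.2 * rng.normal(size=t.size)  # fake DO series

        imfs = EEMD().eemd(do)                              # rows: IMFs from high to low frequency
        n = imfs.shape[0]
        bands = {                                           # illustrative grouping into four terms
            "high": imfs[: n // 3].sum(axis=0),
            "mid": imfs[n // 3: 2 * n // 3].sum(axis=0),
            "low": imfs[2 * n // 3: -1].sum(axis=0),
            "trend": imfs[-1],
        }
        models = {"high": SVR(), "mid": SVR(),
                  "low": MLPRegressor(max_iter=2000), "trend": LinearRegression()}

        def lag_matrix(series, lags=6):
            X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
            return X, series[lags:]

        forecast = np.zeros(len(do) - 6)
        for name, series in bands.items():
            X, y = lag_matrix(series)
            forecast += models[name].fit(X, y).predict(X)   # in-sample fit, purely illustrative

        rmse = np.sqrt(np.mean((do[6:] - forecast) ** 2))
        print(f"RMSE of recombined prediction: {rmse:.3f}")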

  3. Limited Sampling Strategy for Accurate Prediction of Pharmacokinetics of Saroglitazar: A 3-point Linear Regression Model Development and Successful Prediction of Human Exposure.

    Science.gov (United States)

    Joshi, Shuchi N; Srinivas, Nuggehally R; Parmar, Deven V

    2018-03-01

    Our aim was to develop and validate the extrapolative performance of a regression model using a limited sampling strategy for accurate estimation of the area under the plasma concentration versus time curve (AUC) for saroglitazar. Healthy-subject pharmacokinetic data from a well-powered food-effect study (fasted vs fed treatments; n = 50) were used in this work. The first 25 subjects' serial plasma concentration data up to 72 hours and the corresponding AUC 0-t (ie, 72 hours) from the fasting group comprised a training dataset to develop the limited sampling model. The internal datasets for prediction included the remaining 25 subjects from the fasting group and all 50 subjects from the fed condition of the same study. The external datasets included pharmacokinetic data for saroglitazar from previous single-dose clinical studies. Limited sampling models were composed of the correlation of 1, 2, and 3 concentration-time points with the AUC 0-t of saroglitazar. Only models with regression coefficients (R2) >0.90 were screened for further evaluation. The best R2 model was validated for its utility based on mean prediction error, mean absolute prediction error, and root mean square error. Correlation between predicted and observed AUC 0-t of saroglitazar and verification of precision and bias using a Bland-Altman plot were both carried out. None of the evaluated 1- and 2-concentration-time-point models achieved R2 > 0.90. Among the various 3-concentration-time-point models, only 4 equations passed the predefined criterion of R2 > 0.90. Limited sampling models with time points 0.5, 2, and 8 hours (R2 = 0.9323) and 0.75, 2, and 8 hours (R2 = 0.9375) were validated, with mean prediction error, mean absolute prediction error, and root mean square error within the predefined acceptance limits for prediction of saroglitazar. The same models, when applied to AUC 0-t prediction of saroglitazar sulfoxide, likewise showed acceptable mean prediction error, mean absolute prediction error, and root mean square error; the model thus predicts the exposure of saroglitazar sulfoxide.
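
    A minimal sketch of building and checking a 3-time-point limited sampling model of the kind described above: regress AUC 0-t on concentrations at three sampling times, keep the model only if R2 > 0.90, and report mean prediction error, mean absolute prediction error and RMSE on a held-out set. The simulated concentration-time profiles and the chosen time points are illustrative placeholders, not study data:

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.metrics import r2_score

        rng = np.random.default_rng(0)
        times = np.array([0.5, 1, 2, 4, 8, 12, 24, 48, 72])      # hours
        n_subjects = 50

        # Simulated mono-exponential profiles with between-subject variability (illustrative only).
        dose_scale = rng.lognormal(mean=3.0, sigma=0.3, size=n_subjects)
        ke = rng.lognormal(mean=np.log(0.1), sigma=0.2, size=n_subjects)
        conc = dose_scale[:, None] * np.exp(-ke[:, None] * times[None, :])
        auc = np.sum((conc[:, 1:] + conc[:, :-1]) / 2 * np.diff(times), axis=1)  # trapezoidal AUC(0-t)

        lsm_points = [0.5, 2, 8]                                 # candidate limited-sampling times
        cols = [np.where(times == p)[0][0] for p in lsm_points]
        X = conc[:, cols]

        train, test = slice(0, 25), slice(25, 50)                # first 25 subjects train the model
        model = LinearRegression().fit(X[train], auc[train])

        r2_train = r2_score(auc[train], model.predict(X[train]))
        pred = model.predict(X[test])
        pe = (pred - auc[test]) / auc[test] * 100
        print(f"training R2 = {r2_train:.4f} (screen: keep only if > 0.90)")
        print(f"MPE = {pe.mean():.2f}%, MAPE = {np.abs(pe).mean():.2f}%, "
              f"RMSE = {np.sqrt(np.mean((pred - auc[test]) ** 2)):.2f}")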

  4. Prediction of BP Reactivity to Talking Using Hybrid Soft Computing Approaches

    Directory of Open Access Journals (Sweden)

    Gurmanik Kaur

    2014-01-01

    Full Text Available High blood pressure (BP) is associated with an increased risk of cardiovascular diseases. Therefore, optimal precision in the measurement of BP is appropriate in clinical and research studies. In this work, anthropometric characteristics including age, height, weight, body mass index (BMI), and arm circumference (AC) were used as independent predictor variables for the prediction of BP reactivity to talking. Principal component analysis (PCA) was fused with an artificial neural network (ANN), an adaptive neurofuzzy inference system (ANFIS), and a least squares support vector machine (LS-SVM) model to remove the multicollinearity effect among the anthropometric predictor variables. Statistical tests in terms of the coefficient of determination (R2), root mean square error (RMSE), and mean absolute percentage error (MAPE) revealed that the PCA-based LS-SVM (PCA-LS-SVM) model produced a more efficient prediction of BP reactivity than the other models. This assessment presents the importance and advantages of PCA-fused prediction models for the prediction of biological variables.

  5. Prediction of BP reactivity to talking using hybrid soft computing approaches.

    Science.gov (United States)

    Kaur, Gurmanik; Arora, Ajat Shatru; Jain, Vijender Kumar

    2014-01-01

    High blood pressure (BP) is associated with an increased risk of cardiovascular diseases. Therefore, optimal precision in measurement of BP is appropriate in clinical and research studies. In this work, anthropometric characteristics including age, height, weight, body mass index (BMI), and arm circumference (AC) were used as independent predictor variables for the prediction of BP reactivity to talking. Principal component analysis (PCA) was fused with artificial neural network (ANN), adaptive neurofuzzy inference system (ANFIS), and least square-support vector machine (LS-SVM) model to remove the multicollinearity effect among anthropometric predictor variables. The statistical tests in terms of coefficient of determination (R2), root mean square error (RMSE), and mean absolute percentage error (MAPE) revealed that PCA based LS-SVM (PCA-LS-SVM) model produced a more efficient prediction of BP reactivity as compared to other models. This assessment presents the importance and advantages posed by PCA fused prediction models for prediction of biological variables.
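
    A minimal sketch of the PCA-fused pipeline described in the two records above, using scikit-learn's KernelRidge as a readily available stand-in for an LS-SVM regressor (kernel ridge regression solves a closely related regularized least squares problem); the synthetic anthropometric data, kernel and number of components are illustrative assumptions:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.kernel_ridge import KernelRidge
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.metrics import r2_score, mean_squared_error

        rng = np.random.default_rng(0)
        n = 200
        age = rng.uniform(20, 70, n)
        height = rng.normal(165, 10, n)
        weight = rng.normal(70, 12, n)
        bmi = weight / (height / 100) ** 2                 # strongly collinear with height and weight
        arm_circ = 0.35 * weight / 10 + rng.normal(28, 2, n)
        X = np.column_stack([age, height, weight, bmi, arm_circ])
        bp_reactivity = 0.2 * age + 0.5 * bmi + rng.normal(0, 3, n)   # synthetic response

        X_tr, X_te, y_tr, y_te = train_test_split(X, bp_reactivity, test_size=0.3, random_state=0)

        # Standardize, project onto principal components to remove multicollinearity,
        # then regress with a kernel least squares model (LS-SVM stand-in).
        model = make_pipeline(StandardScaler(), PCA(n_components=3),
                              KernelRidge(kernel="rbf", alpha=1.0))
        model.fit(X_tr, y_tr)
        y_pred = model.predict(X_te)

        mape = np.mean(np.abs((y_te - y_pred) / y_te)) * 100
        print(f"R2 = {r2_score(y_te, y_pred):.3f}, "
              f"RMSE = {np.sqrt(mean_squared_error(y_te, y_pred)):.3f}, MAPE = {mape:.1f}%")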

  6. Renaming Zagreb Streets and Squares

    Directory of Open Access Journals (Sweden)

    Jelena Stanić

    2009-06-01

    Full Text Available The paper deals with changes in street names in the city of Zagreb. Taking the Lower Town (Donji grad city area as an example, the first part of the paper analyses diachronic street name changes commencing from the systematic naming of streets in 1878. Analysis of official changes in street names throughout Zagreb’s history resulted in categorisation of five periods of ideologically motivated naming/name-changing: 1. the Croatia modernisation period, when the first official naming was put into effect, with a marked tendency towards politicisation and nationalisation of the urban landscape; 2. the period of the Kingdom of the Serbs, Croatians and Slovenians/Yugoslavia, when symbols of the new monarchy, the idea of the fellowship of the Southern Slavs, Slavenophilism and the pro-Slavic geopolitical orientation were incorporated into the street names, and when the national idea was highly evident and remained so in that process; 3. the period of the NDH, the Independent State of Croatia, with decanonisation of the tokens of the Yugoslavian monarchy and the Southern Slavic orientation, and reference to the Ustashi and the German Nazi and Italian Fascist movement; 4. the period of Socialism, embedding the ideals and heroes of the workers’ movement and the War of National Liberation into the canonical system; and, 5. the period following the democratic changes in 1990, when almost all the signs of Socialism and the Communist/Antifascist struggle were erased, with the prominent presence of a process of installing new references to early national culture and historical tradition. The closing part of the paper deals with public discussions connected with the selection of a location for a square to bear the name of the first president of independent Croatia, Franjo Tuđman. Analysis of these public polemics shows two opposing discourses: the right-wing political option, which supports a central position for the square and considers the chosen area to

  7. An Intelligent System Approach for Asthma Prediction in Symptomatic Preschool Children

    Directory of Open Access Journals (Sweden)

    E. Chatzimichail

    2013-01-01

    Full Text Available Objectives. In this study a new method for asthma outcome prediction, based on Principal Component Analysis and a Least Squares Support Vector Machine classifier, is presented. Most asthma cases appear during the first years of life. Thus, the early identification of young children at high risk of developing persistent symptoms of the disease throughout childhood is an important public health priority. Methods. The proposed intelligent system consists of three stages. In the first stage, Principal Component Analysis is used for feature extraction and dimension reduction. In the second stage, pattern classification is performed using a Least Squares Support Vector Machine classifier. Finally, in the third stage the performance of the system is estimated using classification accuracy and 10-fold cross-validation. Results. The proposed prediction system can be used for asthma outcome prediction with 95.54% success, as shown in the experimental results. Conclusions. This study indicates that the proposed system is a potentially useful decision support tool for predicting asthma outcome and that some risk factors enhance its predictive ability.
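
    A minimal sketch of the three-stage pipeline just described (PCA for dimension reduction, a kernel classifier, 10-fold cross-validated accuracy), using scikit-learn's SVC as a stand-in for the least squares SVM classifier; the synthetic feature matrix and all settings are illustrative assumptions:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.model_selection import StratifiedKFold, cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        n, p = 150, 20                          # children x candidate risk factors (synthetic)
        X = rng.normal(size=(n, p))
        # Synthetic binary outcome driven by a few latent risk factors.
        outcome = (X[:, 0] + 0.8 * X[:, 1] - 0.5 * X[:, 2] + rng.normal(0, 1, n) > 0).astype(int)

        # Stage 1: PCA; stage 2: classifier; stage 3: 10-fold cross-validated accuracy.
        pipeline = make_pipeline(StandardScaler(), PCA(n_components=5), SVC(kernel="rbf"))
        cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
        scores = cross_val_score(pipeline, X, outcome, cv=cv, scoring="accuracy")
        print(f"10-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")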

  8. The nonabelian tensor square of a bieberbach group with ...

    African Journals Online (AJOL)

    The main objective of this paper is to compute the nonabelian tensor square of one Bieberbach group with elementary abelian 2-group point group of dimension three by using the computational method of the nonabelian tensor square for polycyclic groups. The finding of the computation showed that the nonabelian tensor ...

  9. Your Chi-Square Test Is Statistically Significant: Now What?

    Science.gov (United States)

    Sharpe, Donald

    2015-01-01

    Applied researchers have employed chi-square tests for more than one hundred years. This paper addresses the question of how one should follow a statistically significant chi-square test result in order to determine the source of that result. Four approaches were evaluated: calculating residuals, comparing cells, ransacking, and partitioning. Data…

  10. Application of least-squares method to decay heat evaluation

    International Nuclear Information System (INIS)

    Schmittroth, F.; Schenter, R.E.

    1976-01-01

    Generalized least-squares methods are applied to decay-heat experiments and summation calculations to arrive at evaluated values and uncertainties for the fission-product decay heat from the thermal fission of 235U. Emphasis is placed on a proper treatment of both statistical and correlated uncertainties in the least-squares method.
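
    A minimal sketch of a generalized least squares combination of the kind referred to above, in which measurements with correlated uncertainties (encoded in a covariance matrix) are combined into an evaluated value with its uncertainty; the two "measurements", their covariance and the design matrix are illustrative placeholders:

        import numpy as np

        # Two measurements of the same quantity with correlated uncertainties.
        y = np.array([10.2, 9.6])                      # measured values (arbitrary units)
        C = np.array([[0.16, 0.06],                    # covariance matrix: variances on the diagonal,
                      [0.06, 0.25]])                   # a positive covariance models a shared systematic
        X = np.array([[1.0], [1.0]])                   # design matrix: both measure the same parameter

        # Generalized least squares: beta = (X^T C^-1 X)^-1 X^T C^-1 y
        Cinv = np.linalg.inv(C)
        cov_beta = np.linalg.inv(X.T @ Cinv @ X)
        beta = cov_beta @ X.T @ Cinv @ y

        print(f"evaluated value = {beta[0]:.3f} +/- {np.sqrt(cov_beta[0, 0]):.3f}")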

  11. Underwater Acoustic Channel Estimation for Shallow Water Using the Least Squares (LS) and Minimum Mean Square Error (MMSE) Methods

    Directory of Open Access Journals (Sweden)

    Mardawia M Panrereng

    2015-06-01

    Full Text Available In recent years, underwater acoustic communication systems have been developed by many researchers. The magnitude of the challenges involved has made researchers increasingly interested in pursuing work in this field. The underwater channel is a difficult communication medium because of attenuation, absorption, and multipath caused by the constant motion of the water. In shallow waters, multipath is caused by reflections from the sea surface and the seabed. The need for fast data transmission over a limited bandwidth makes Orthogonal Frequency Division Multiplexing (OFDM) a solution for high-rate communication, with modulation using Binary Phase-Shift Keying (BPSK). Channel estimation aims to characterize the impulse response of the propagation channel by transmitting pilot symbols. With Least Squares (LS) channel estimation, the resulting Mean Square Error (MSE) tends to be larger than with Minimum Mean Square Error (MMSE) channel estimation. In terms of Bit Error Rate (BER), the performance of LS and MMSE channel estimation does not differ significantly, with a difference of about one SNR step between the two estimation methods.
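
    A minimal sketch contrasting LS and MMSE pilot-based channel estimation for an OFDM setting like the one described above; the channel statistics, SNR and the analytically computed channel correlation matrix used by the MMSE estimator are illustrative assumptions:

        import numpy as np

        rng = np.random.default_rng(0)
        n_sc = 64                                    # OFDM subcarriers, all carrying BPSK pilots here
        n_taps = 4
        snr_db = 10
        noise_var = 10 ** (-snr_db / 10)

        # Multipath channel: equal-power complex Gaussian taps and their frequency response.
        taps = (rng.normal(size=n_taps) + 1j * rng.normal(size=n_taps)) * np.sqrt(1 / (2 * n_taps))
        H = np.fft.fft(taps, n_sc)

        X = rng.choice([-1.0, 1.0], size=n_sc)       # BPSK pilot symbols
        noise = np.sqrt(noise_var / 2) * (rng.normal(size=n_sc) + 1j * rng.normal(size=n_sc))
        Y = H * X + noise

        # Least squares estimate: divide out the known pilots.
        H_ls = Y / X

        # MMSE estimate: smooth the LS estimate with the channel frequency correlation matrix R_hh,
        # computed here analytically from the equal-power tap profile.
        k = np.arange(n_sc)
        dk = np.subtract.outer(k, k)
        R_hh = np.exp(-2j * np.pi * dk[..., None] * np.arange(n_taps) / n_sc).sum(axis=-1) / n_taps
        W = R_hh @ np.linalg.inv(R_hh + noise_var * np.eye(n_sc))
        H_mmse = W @ H_ls

        mse = lambda est: np.mean(np.abs(est - H) ** 2)
        print(f"LS MSE = {mse(H_ls):.4f}, MMSE MSE = {mse(H_mmse):.4f}")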

  12. A decentralized square root information filter/smoother

    Science.gov (United States)

    Bierman, G. J.; Belzer, M. R.

    1985-01-01

    A number of developments have recently led to considerable interest in the decentralization of linear least squares estimators. The developments are partly related to the impending emergence of VLSI technology, the realization of parallel processing, and the need for algorithmic ways to speed the solution of dynamically decoupled, high-dimensional estimation problems. A new method is presented for combining Square Root Information Filter (SRIF) estimates obtained from independent data sets. The new method involves an orthogonal transformation, and an information matrix filter 'homework' problem discussed by Schweppe (1973) is generalized. The SRIF orthogonal transformation methodology employed has been described by Bierman (1977).

  13. On the calibration process of film dosimetry: OLS inverse regression versus WLS inverse prediction

    International Nuclear Information System (INIS)

    Crop, F; Thierens, H; Rompaye, B Van; Paelinck, L; Vakaet, L; Wagter, C De

    2008-01-01

    The purpose of this study was both to put forward a statistically correct model for film calibration and to optimize this process. A reliable calibration is needed in order to perform accurate reference dosimetry with radiographic (Gafchromic) film. Sometimes an ordinary least squares simple linear (in the parameters) regression is applied to the dose-optical-density (OD) curve, with the dose as a function of OD (inverse regression), or sometimes with OD as a function of dose (inverse prediction). The application of a simple linear regression fit is an invalid method because the heteroscedasticity of the data is not taken into account. This could lead to erroneous results originating from the calibration process itself and thus to a lower accuracy. In this work, we compare the ordinary least squares (OLS) inverse regression method with the correct weighted least squares (WLS) inverse prediction method to create calibration curves. We found that the OLS inverse regression method could lead to a prediction bias of up to 7.3 cGy at 300 cGy and total prediction errors of 3% or more for Gafchromic EBT film. Application of the WLS inverse prediction method resulted in a maximum prediction bias of 1.4 cGy and total prediction errors below 2% in a 0-400 cGy range. We developed a Monte-Carlo-based process to optimize calibrations, depending on the needs of the experiment. This type of thorough analysis can lead to a higher accuracy for film dosimetry.
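
    A minimal sketch of the two calibration strategies being compared: OLS "inverse regression" fits dose directly as a function of OD with equal weights, whereas WLS "inverse prediction" fits OD as a function of dose with weights reflecting the dose-dependent noise and then inverts the fit. The simulated film response, noise model and weights are illustrative assumptions:

        import numpy as np

        rng = np.random.default_rng(0)
        dose = np.linspace(0, 400, 21)                       # calibration doses (cGy)
        sigma = 0.002 + 0.01 * dose / 400                    # heteroscedastic OD noise (grows with dose)
        od_true = 0.05 + 0.004 * dose                        # assumed linear film response
        od_meas = od_true + rng.normal(0, sigma)

        # OLS inverse regression: dose as a function of OD, equal weights.
        b1_ols, b0_ols = np.polyfit(od_meas, dose, 1)
        dose_from_od_ols = lambda od: b0_ols + b1_ols * od

        # WLS inverse prediction: OD as a function of dose with weights ~ 1/sigma, then invert.
        a1, a0 = np.polyfit(dose, od_meas, 1, w=1.0 / sigma)
        dose_from_od_wls = lambda od: (od - a0) / a1

        od_at_300 = 0.05 + 0.004 * 300                       # noiseless OD of a 300 cGy film
        print("OLS prediction at 300 cGy:", dose_from_od_ols(od_at_300))
        print("WLS prediction at 300 cGy:", dose_from_od_wls(od_at_300))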

  14. Chirality in distorted square planar Pd(O,N)2 compounds.

    Science.gov (United States)

    Brunner, Henri; Bodensteiner, Michael; Tsuno, Takashi

    2013-10-01

    Salicylidenimine palladium(II) complexes trans-Pd(O,N)2 adopt step and bowl arrangements. A stereochemical analysis subdivides 52 compounds into 41 step and 11 bowl types. Step complexes with chiral N-substituents and all the bowl complexes induce chiral distortions in the square planar system, resulting in Δ/Λ configuration of the Pd(O,N)2 unit. In complexes with enantiomerically pure N-substituents ligand chirality entails a specific square chirality and only one diastereomer assembles in the lattice. Dimeric Pd(O,N)2 complexes with bridging N-substituents in trans-arrangement are inherently chiral. For dimers different chirality patterns for the Pd(O,N)2 square are observed. The crystals contain racemates of enantiomers. In complex two independent molecules form a tight pair. The (RC) configuration of the ligand induces the same Δ chirality in the Pd(O,N)2 units of both molecules with varying square chirality due to the different crystallographic location of the independent molecules. In complexes and atrop isomerism induces specific configurations in the Pd(O,N)2 bowl systems. The square chirality is largest for complex [(Diop)Rh(PPh3)Cl], a catalyst for enantioselective hydrogenation. In the lattice of two diastereomers with the same (RC,RC) configuration in the ligand Diop but opposite Δ and Λ square configurations co-crystallize, a rare phenomenon in stereochemistry. © 2013 Wiley Periodicals, Inc.

  15. Behaviour of FRP confined concrete in square columns

    OpenAIRE

    Diego Villalón, Ana de; Arteaga Iriarte, Ángel; Fernandez Gomez, Jaime Antonio; Perera Velamazán, Ricardo; Cisneros, Daniel

    2015-01-01

    A significant amount of research has been conducted on FRP-confined circular columns, but much less is known about rectangular/square columns in which the effectiveness of confinement is much reduced. This paper presents the results of experimental investigations on low strength square concrete columns confined with FRP. Axial compression tests were performed on ten intermediate size columns. The tests results indicate that FRP composites can significantly improve the bearing capacity and duc...

  16. Two-body perturbation theory versus first order perturbation theory: A comparison based on the square-well fluid.

    Science.gov (United States)

    Mercier Franco, Luís Fernando; Castier, Marcelo; Economou, Ioannis G

    2017-12-07

    We show that the Zwanzig first-order perturbation theory can be obtained directly from a truncated Taylor series expansion of a two-body perturbation theory and that such truncation provides a more accurate prediction of thermodynamic properties than the full two-body perturbation theory. This unexpected result is explained by the quality of the resulting approximation for the fluid radial distribution function. We prove that the first-order and the two-body perturbation theories are based on different approximations for the fluid radial distribution function. To illustrate the calculations, the square-well fluid is adopted. We develop an analytical expression for the two-body perturbed Helmholtz free energy for the square-well fluid. The equation of state obtained using such an expression is compared to the equation of state obtained from the first-order approximation. The vapor-liquid coexistence curve and the supercritical compressibility factor of a square-well fluid are calculated using both equations of state and compared to Monte Carlo simulation data. Finally, we show that the approximation for the fluid radial distribution function given by the first-order perturbation theory provides closer values to the ones calculated via Monte Carlo simulations. This explains why such theory gives a better description of the fluid thermodynamic behavior.

  17. ON THE CONSTRUCTION OF LATIN SQUARES COUNTERBALANCED FOR IMMEDIATE SEQUENTIAL EFFECTS.

    Science.gov (United States)

    HOUSTON, TOM R., JR.

    THIS REPORT IS ONE OF A SERIES DESCRIBING NEW DEVELOPMENTS IN THE AREA OF RESEARCH METHODOLOGY. IT DEALS WITH LATIN SQUARES AS A CONTROL FOR PROGRESSIVE AND ADJACENCY EFFECTS IN EXPERIMENTAL DESIGNS. THE HISTORY OF LATIN SQUARES IS ALSO REVIEWED, AND SEVERAL ALGORITHMS FOR THE CONSTRUCTION OF LATIN AND GRECO-LATIN SQUARES ARE PROPOSED. THE REPORT…

  18. Program for the analysis of pulse height spectra and the background from a proportional detector

    International Nuclear Information System (INIS)

    Flores-Llamas, H.; Yee-Madeira, H.; Contreras-Puente, G.; Zamorano-Ulloa, R.

    1991-01-01

    A PC-Fortran program is presented for the fitting of lineshapes and the analysis of pulse height spectra obtainable with proportional detectors. The common fitting and analysis of pulse height spectra by means of mixed Gaussian lineshapes is readily improved by using Voigt lineshapes. In addition, the background can be evaluated during the fitting process without the need for extra measurements. As an application of the program, a pulse height transmission spectrum accumulated with a static 57Co source and detected with an argon-methane proportional detector was least squares fitted to an elaborate complex trial lineshape function containing two Voigt lines plus a straight line. The fitted straight-line parameters a and b characterize the background quantitatively. The very good fit obtained shows that fitting experimental spectra with the more realistic Voigt lineshapes is no longer a formidable task and that it is possible to evaluate and subtract the background inherent to the experiment during the fitting process. (orig.)

  19. Regularization by truncated total least squares

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Fierro, R.D; Golub, G.H

    1997-01-01

    The total least squares (TLS) method is a successful method for noise reduction in linear least squares problems in a number of applications. The TLS method is suited to problems in which both the coefficient matrix and the right-hand side are not precisely known. This paper focuses on the use...
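
    A minimal sketch of a truncated TLS solution computed from the SVD of the augmented matrix [A b]: keep the first k singular directions and form the solution from the trailing block of the right singular vectors. The test problem and the truncation level k are illustrative, and the code follows the standard TTLS construction rather than any specific algorithm from the paper:

        import numpy as np

        def ttls(A, b, k):
            """Truncated total least squares solution with truncation parameter k."""
            m, n = A.shape
            C = np.column_stack([A, b])              # augmented matrix [A b]
            _, _, Vt = np.linalg.svd(C)
            V = Vt.T
            V12 = V[:n, k:]                          # top-right block of V
            v22 = V[n, k:]                           # bottom-right block (a row vector)
            # x_k = -V12 v22^T / ||v22||^2  (pseudo-inverse of the 1 x (n+1-k) block)
            return -V12 @ v22 / np.dot(v22, v22)

        # Ill-conditioned test problem with noise in both A and b.
        rng = np.random.default_rng(0)
        n = 10
        U = np.linalg.qr(rng.normal(size=(50, n)))[0]
        A_exact = U * np.logspace(0, -6, n)          # rapidly decaying column scales
        x_exact = rng.normal(size=n)
        b_exact = A_exact @ x_exact
        A = A_exact + 1e-5 * rng.normal(size=A_exact.shape)
        b = b_exact + 1e-5 * rng.normal(size=b_exact.shape)

        for k in (3, 6, 10):
            x_k = ttls(A, b, k)
            print(f"k = {k:2d}, ||x_k - x_exact|| = {np.linalg.norm(x_k - x_exact):.3e}")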

  20. The energetic cost of walking: a comparison of predictive methods.

    Directory of Open Access Journals (Sweden)

    Patricia Ann Kramer

    Full Text Available BACKGROUND: The energy that animals devote to locomotion has been of intense interest to biologists for decades, and two basic methodologies have emerged to predict locomotor energy expenditure: those based on metabolic energy and those based on mechanical energy. Metabolic energy approaches share the perspective that prediction of locomotor energy expenditure should be based on statistically significant proxies of metabolic function, while mechanical energy approaches, which derive from many different perspectives, focus on quantifying the energy of movement. Some controversy exists as to which mechanical perspective is "best", but from first principles all mechanical methods should be equivalent if the inputs to the simulation are of similar quality. Our goals in this paper are 1) to establish the degree to which the various methods of calculating mechanical energy are correlated, and 2) to investigate to what degree the prediction methods explain the variation in energy expenditure. METHODOLOGY/PRINCIPAL FINDINGS: We use modern humans as the model organism in this experiment because their data are readily attainable, but the methodology is appropriate for use in other species. Volumetric oxygen consumption and kinematic and kinetic data were collected on 8 adults while walking at their self-selected slow, normal and fast velocities. Using hierarchical statistical modeling via ordinary least squares and maximum likelihood techniques, the predictive ability of several metabolic and mechanical approaches was assessed. We found that all approaches are correlated and that the mechanical approaches explain similar amounts of the variation in metabolic energy expenditure. Most methods predict the variation within an individual well, but are poor at accounting for variation between individuals. CONCLUSION: Our results indicate that the choice of predictive method is dependent on the question(s) of interest and the data available for use as inputs. Although we

  1. Risk and Management Control: A Partial Least Square Modelling Approach

    DEFF Research Database (Denmark)

    Nielsen, Steen; Pontoppidan, Iens Christian

    Risk and economic theory go many years back (e.g. to Keynes & Knight 1921), and risk/uncertainty belongs among the explanations for the existence of the firm (Coase, 1937). The financial crisis of the past years has re-accentuated risk and the need for coherence... and interrelations between risk and areas within management accounting. The idea is that management accounting should be able to conduct valid feed-forward analyses as well as predictions for decision making that include risk. This study reports the test of a theoretical model using partial least squares (PLS) on survey data... and an external attitude dimension. The results have important implications both for management control research and for the design of management control systems, in terms of how accountants consider the element of risk in their different tasks, both operational and strategic. Specifically, it seems that different risk

  2. The mean squared writhe of alternating random knot diagrams

    Energy Technology Data Exchange (ETDEWEB)

    Diao, Y; Hinson, K [Department of Mathematics and Statistics University of North Carolina at Charlotte, NC 28223 (United States); Ernst, C; Ziegler, U, E-mail: ydiao@uncc.ed [Department of Mathematics and Computer Science, Western Kentucky University, Bowling Green, KY 42101 (United States)

    2010-12-10

    The writhe of a knot diagram is a simple geometric measure of the complexity of the knot diagram. It plays an important role not only in knot theory itself, but also in various applications of knot theory to fields such as molecular biology and polymer physics. The mean squared writhe of any sample of knot diagrams with n crossings is n when for each diagram at each crossing one of the two strands is chosen as the overpass at random with probability one-half. However, such a diagram is usually not minimal. If we restrict ourselves to a minimal knot diagram, then the choice of which strand is the over- or under-strand at each crossing is no longer independent of the neighboring crossings and a larger mean squared writhe is expected for minimal diagrams. This paper explores the effect on the correlation between the mean squared writhe and the diagrams imposed by the condition that diagrams are minimal by studying the writhe of classes of reduced, alternating knot diagrams. We demonstrate that the behavior of the mean squared writhe heavily depends on the underlying space of diagram templates. In particular this is true when the sample space contains only diagrams of a special structure. When the sample space is large enough to contain not only diagrams of a special type, then the mean squared writhe for n crossing diagrams tends to grow linearly with n, but at a faster rate than n, indicating an intrinsic property of alternating knot diagrams. Studying the mean squared writhe of alternating random knot diagrams also provides some insight into the properties of the diagram generating methods used, which is an important area of study in the applications of random knot theory.
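
    A quick numerical illustration of the baseline fact quoted above: if each of the n crossings is independently assigned an over/under choice with probability one-half, the writhe is a sum of n independent ±1 signs, so its mean squared value is n. A small Monte Carlo check of the idealized random-sign model (not a sampler of actual knot diagrams):

        import numpy as np

        rng = np.random.default_rng(0)

        def mean_squared_writhe(n_crossings, n_samples=100_000):
            """Mean squared writhe under independent +/-1 crossing signs with probability 1/2."""
            signs = rng.choice([-1, 1], size=(n_samples, n_crossings))
            writhe = signs.sum(axis=1)
            return np.mean(writhe.astype(float) ** 2)

        for n in (10, 50, 200):
            print(f"n = {n:3d}: mean squared writhe ~ {mean_squared_writhe(n):.1f} (expected {n})")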

  3. Statistical prediction of Late Miocene climate

    Digital Repository Service at National Institute of Oceanography (India)

    Fernandes, A.A; Gupta, S.M.

    by making certain simplifying assumptions; for example, in modelling ocean currents, the geostrophic approximation is made. In the case of statistical prediction no such a priori assumption need be made. Statistical prediction comprises using observed data... the number of equations. In this case the equations are overdetermined, and therefore one has to look for a solution that best fits the sample data in a least squares sense. To this end we express the sample data as follows: y = ȳ + Σ_{i=1}^{n} c_i (x_i ...   (2.1)
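
    A minimal sketch of the overdetermined least squares fit alluded to above, expressing a predictand as its mean plus a linear combination of predictor anomalies and solving the resulting system in the least squares sense; the data are random placeholders:

        import numpy as np

        rng = np.random.default_rng(0)
        m, n = 40, 3                                   # m observations, n predictors (m > n: overdetermined)
        X = rng.normal(size=(m, n))                    # predictor variables (e.g. proxy records)
        y = X @ np.array([1.5, -0.7, 0.3]) + 0.1 * rng.normal(size=m)   # predictand (e.g. temperature)

        # Work with anomalies about the sample means, then solve min ||Xa c - ya||^2.
        Xa = X - X.mean(axis=0)
        ya = y - y.mean()
        c, residuals, rank, _ = np.linalg.lstsq(Xa, ya, rcond=None)

        y_hat = y.mean() + Xa @ c                      # predicted values: mean plus weighted anomalies
        print("coefficients:", np.round(c, 3))
        print("RMS residual:", np.sqrt(np.mean((y - y_hat) ** 2)))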

  4. Improved upper limits on the stochastic gravitational-wave background from 2009-2010 LIGO and Virgo data.

    Science.gov (United States)

    Aasi, J; Abbott, B P; Abbott, R; Abbott, T; Abernathy, M R; Accadia, T; Acernese, F; Ackley, K; Adams, C; Adams, T; Addesso, P; Adhikari, R X; Affeldt, C; Agathos, M; Aggarwal, N; Aguiar, O D; Ain, A; Ajith, P; Alemic, A; Allen, B; Allocca, A; Amariutei, D; Andersen, M; Anderson, R; Anderson, S B; Anderson, W G; Arai, K; Araya, M C; Arceneaux, C; Areeda, J; Aston, S M; Astone, P; Aufmuth, P; Aulbert, C; Austin, L; Aylott, B E; Babak, S; Baker, P T; Ballardin, G; Ballmer, S W; Barayoga, J C; Barbet, M; Barish, B C; Barker, D; Barone, F; Barr, B; Barsotti, L; Barsuglia, M; Barton, M A; Bartos, I; Bassiri, R; Basti, A; Batch, J C; Bauchrowitz, J; Bauer, Th S; Behnke, B; Bejger, M; Beker, M G; Belczynski, C; Bell, A S; Bell, C; Bergmann, G; Bersanetti, D; Bertolini, A; Betzwieser, J; Beyersdorf, P T; Bilenko, I A; Billingsley, G; Birch, J; Biscans, S; Bitossi, M; Bizouard, M A; Black, E; Blackburn, J K; Blackburn, L; Blair, D; Bloemen, S; Blom, M; Bock, O; Bodiya, T P; Boer, M; Bogaert, G; Bogan, C; Bond, C; Bondu, F; Bonelli, L; Bonnand, R; Bork, R; Born, M; Boschi, V; Bose, Sukanta; Bosi, L; Bradaschia, C; Brady, P R; Braginsky, V B; Branchesi, M; Brau, J E; Briant, T; Bridges, D O; Brillet, A; Brinkmann, M; Brisson, V; Brooks, A F; Brown, D A; Brown, D D; Brückner, F; Buchman, S; Bulik, T; Bulten, H J; Buonanno, A; Burman, R; Buskulic, D; Buy, C; Cadonati, L; Cagnoli, G; Bustillo, J Calderón; Calloni, E; Camp, J B; Campsie, P; Cannon, K C; Canuel, B; Cao, J; Capano, C D; Carbognani, F; Carbone, L; Caride, S; Castiglia, A; Caudill, S; Cavaglià, M; Cavalier, F; Cavalieri, R; Celerier, C; Cella, G; Cepeda, C; Cesarini, E; Chakraborty, R; Chalermsongsak, T; Chamberlin, S J; Chao, S; Charlton, P; Chassande-Mottin, E; Chen, X; Chen, Y; Chincarini, A; Chiummo, A; Cho, H S; Chow, J; Christensen, N; Chu, Q; Chua, S S Y; Chung, S; Ciani, G; Clara, F; Clark, J A; Cleva, F; Coccia, E; Cohadon, P-F; Colla, A; Collette, C; Colombini, M; Cominsky, L; Constancio, M; Conte, A; Cook, D; Corbitt, T R; Cordier, M; Cornish, N; Corpuz, A; Corsi, A; Costa, C A; Coughlin, M W; Coughlin, S; Coulon, J-P; Countryman, S; Couvares, P; Coward, D M; Cowart, M; Coyne, D C; Coyne, R; Craig, K; Creighton, J D E; Crowder, S G; Cumming, A; Cunningham, L; Cuoco, E; Dahl, K; Canton, T Dal; Damjanic, M; Danilishin, S L; D'Antonio, S; Danzmann, K; Dattilo, V; Daveloza, H; Davier, M; Davies, G S; Daw, E J; Day, R; Dayanga, T; Debreczeni, G; Degallaix, J; Deléglise, S; Del Pozzo, W; Denker, T; Dent, T; Dereli, H; Dergachev, V; De Rosa, R; DeRosa, R T; DeSalvo, R; Dhurandhar, S; Díaz, M; Di Fiore, L; Di Lieto, A; Di Palma, I; Di Virgilio, A; Donath, A; Donovan, F; Dooley, K L; Doravari, S; Dossa, S; Douglas, R; Downes, T P; Drago, M; Drever, R W P; Driggers, J C; Du, Z; Dwyer, S; Eberle, T; Edo, T; Edwards, M; Effler, A; Eggenstein, H; Ehrens, P; Eichholz, J; Eikenberry, S S; Endrőczi, G; Essick, R; Etzel, T; Evans, M; Evans, T; Factourovich, M; Fafone, V; Fairhurst, S; Fang, Q; Farinon, S; Farr, B; Farr, W M; Favata, M; Fehrmann, H; Fejer, M M; Feldbaum, D; Feroz, F; Ferrante, I; Ferrini, F; Fidecaro, F; Finn, L S; Fiori, I; Fisher, R P; Flaminio, R; Fournier, J-D; Franco, S; Frasca, S; Frasconi, F; Frede, M; Frei, Z; Freise, A; Frey, R; Fricke, T T; Fritschel, P; Frolov, V V; Fulda, P; Fyffe, M; Gair, J; Gammaitoni, L; Gaonkar, S; Garufi, F; Gehrels, N; Gemme, G; Genin, E; Gennai, A; Ghosh, S; Giaime, J A; Giardina, K D; Giazotto, A; Gill, C; Gleason, J; Goetz, E; Goetz, R; Gondan, L; González, G; Gordon, N; Gorodetsky, M L; 
Gossan, S; Gossler, S; Gouaty, R; Gräf, C; Graff, P B; Granata, M; Grant, A; Gras, S; Gray, C; Greenhalgh, R J S; Gretarsson, A M; Groot, P; Grote, H; Grover, K; Grunewald, S; Guidi, G M; Guido, C; Gushwa, K; Gustafson, E K; Gustafson, R; Hammer, D; Hammond, G; Hanke, M; Hanks, J; Hanna, C; Hanson, J; Harms, J; Harry, G M; Harry, I W; Harstad, E D; Hart, M; Hartman, M T; Haster, C-J; Haughian, K; Heidmann, A; Heintze, M; Heitmann, H; Hello, P; Hemming, G; Hendry, M; Heng, I S; Heptonstall, A W; Heurs, M; Hewitson, M; Hild, S; Hoak, D; Hodge, K A; Holt, K; Hooper, S; Hopkins, P; Hosken, D J; Hough, J; Howell, E J; Hu, Y; Huerta, E; Hughey, B; Husa, S; Huttner, S H; Huynh, M; Huynh-Dinh, T; Ingram, D R; Inta, R; Isogai, T; Ivanov, A; Iyer, B R; Izumi, K; Jacobson, M; James, E; Jang, H; Jaranowski, P; Ji, Y; Jiménez-Forteza, F; Johnson, W W; Jones, D I; Jones, R; Jonker, R J G; Ju, L; K, Haris; Kalmus, P; Kalogera, V; Kandhasamy, S; Kang, G; Kanner, J B; Karlen, J; Kasprzack, M; Katsavounidis, E; Katzman, W; Kaufer, H; Kawabe, K; Kawazoe, F; Kéfélian, F; Keiser, G M; Keitel, D; Kelley, D B; Kells, W; Khalaidovski, A; Khalili, F Y; Khazanov, E A; Kim, C; Kim, K; Kim, N; Kim, N G; Kim, Y-M; King, E J; King, P J; Kinzel, D L; Kissel, J S; Klimenko, S; Kline, J; Koehlenbeck, S; Kokeyama, K; Kondrashov, V; Koranda, S; Korth, W Z; Kowalska, I; Kozak, D B; Kremin, A; Kringel, V; Królak, A; Kuehn, G; Kumar, A; Kumar, P; Kumar, R; Kuo, L; Kutynia, A; Kwee, P; Landry, M; Lantz, B; Larson, S; Lasky, P D; Lawrie, C; Lazzarini, A; Lazzaro, C; Leaci, P; Leavey, S; Lebigot, E O; Lee, C-H; Lee, H K; Lee, H M; Lee, J; Leonardi, M; Leong, J R; Le Roux, A; Leroy, N; Letendre, N; Levin, Y; Levine, B; Lewis, J; Li, T G F; Libbrecht, K; Libson, A; Lin, A C; Littenberg, T B; Litvine, V; Lockerbie, N A; Lockett, V; Lodhia, D; Loew, K; Logue, J; Lombardi, A L; Lorenzini, M; Loriette, V; Lormand, M; Losurdo, G; Lough, J; Lubinski, M J; Lück, H; Luijten, E; Lundgren, A P; Lynch, R; Ma, Y; Macarthur, J; Macdonald, E P; MacDonald, T; Machenschalk, B; MacInnis, M; Macleod, D M; Magana-Sandoval, F; Mageswaran, M; Maglione, C; Mailand, K; Majorana, E; Maksimovic, I; Malvezzi, V; Man, N; Manca, G M; Mandel, I; Mandic, V; Mangano, V; Mangini, N; Mantovani, M; Marchesoni, F; Marion, F; Márka, S; Márka, Z; Markosyan, A; Maros, E; Marque, J; Martelli, F; Martin, I W; Martin, R M; Martinelli, L; Martynov, D; Marx, J N; Mason, K; Masserot, A; Massinger, T J; Matichard, F; Matone, L; Matzner, R A; Mavalvala, N; Mazumder, N; Mazzolo, G; McCarthy, R; McClelland, D E; McGuire, S C; McIntyre, G; McIver, J; McLin, K; Meacher, D; Meadors, G D; Mehmet, M; Meidam, J; Meinders, M; Melatos, A; Mendell, G; Mercer, R A; Meshkov, S; Messenger, C; Meyers, P; Miao, H; Michel, C; Mikhailov, E E; Milano, L; Milde, S; Miller, J; Minenkov, Y; Mingarelli, C M F; Mishra, C; Mitra, S; Mitrofanov, V P; Mitselmakher, G; Mittleman, R; Moe, B; Moesta, P; Mohan, M; Mohapatra, S R P; Moraru, D; Moreno, G; Morgado, N; Morriss, S R; Mossavi, K; Mours, B; Mow-Lowry, C M; Mueller, C L; Mueller, G; Mukherjee, S; Mullavey, A; Munch, J; Murphy, D; Murray, P G; Mytidis, A; Nagy, M F; Kumar, D Nanda; Nardecchia, I; Naticchioni, L; Nayak, R K; Necula, V; Nelemans, G; Neri, I; Neri, M; Newton, G; Nguyen, T; Nitz, A; Nocera, F; Nolting, D; Normandin, M E N; Nuttall, L K; Ochsner, E; O'Dell, J; Oelker, E; Oh, J J; Oh, S H; Ohme, F; Oppermann, P; O'Reilly, B; O'Shaughnessy, R; Osthelder, C; Ottaway, D J; Ottens, R S; Overmier, H; Owen, B J; Padilla, C; Pai, A; Palashov, O; 
Palomba, C; Pan, H; Pan, Y; Pankow, C; Paoletti, F; Paoletti, R; Paris, H; Pasqualetti, A; Passaquieti, R; Passuello, D; Pedraza, M; Penn, S; Perreca, A; Phelps, M; Pichot, M; Pickenpack, M; Piergiovanni, F; Pierro, V; Pinard, L; Pinto, I M; Pitkin, M; Poeld, J; Poggiani, R; Poteomkin, A; Powell, J; Prasad, J; Premachandra, S; Prestegard, T; Price, L R; Prijatelj, M; Privitera, S; Prodi, G A; Prokhorov, L; Puncken, O; Punturo, M; Puppo, P; Qin, J; Quetschke, V; Quintero, E; Quiroga, G; Quitzow-James, R; Raab, F J; Rabeling, D S; Rácz, I; Radkins, H; Raffai, P; Raja, S; Rajalakshmi, G; Rakhmanov, M; Ramet, C; Ramirez, K; Rapagnani, P; Raymond, V; Re, V; Read, J; Reed, C M; Regimbau, T; Reid, S; Reitze, D H; Rhoades, E; Ricci, F; Riles, K; Robertson, N A; Robinet, F; Rocchi, A; Rodruck, M; Rolland, L; Rollins, J G; Romano, J D; Romano, R; Romanov, G; Romie, J H; Rosińska, D; Rowan, S; Rüdiger, A; Ruggi, P; Ryan, K; Salemi, F; Sammut, L; Sandberg, V; Sanders, J R; Sannibale, V; Santiago-Prieto, I; Saracco, E; Sassolas, B; Sathyaprakash, B S; Saulson, P R; Savage, R; Scheuer, J; Schilling, R; Schnabel, R; Schofield, R M S; Schreiber, E; Schuette, D; Schutz, B F; Scott, J; Scott, S M; Sellers, D; Sengupta, A S; Sentenac, D; Sequino, V; Sergeev, A; Shaddock, D; Shah, S; Shahriar, M S; Shaltev, M; Shapiro, B; Shawhan, P; Shoemaker, D H; Sidery, T L; Siellez, K; Siemens, X; Sigg, D; Simakov, D; Singer, A; Singer, L; Singh, R; Sintes, A M; Slagmolen, B J J; Slutsky, J; Smith, J R; Smith, M; Smith, R J E; Smith-Lefebvre, N D; Son, E J; Sorazu, B; Souradeep, T; Sperandio, L; Staley, A; Stebbins, J; Steinlechner, J; Steinlechner, S; Stephens, B C; Steplewski, S; Stevenson, S; Stone, R; Stops, D; Strain, K A; Straniero, N; Strigin, S; Sturani, R; Stuver, A L; Summerscales, T Z; Susmithan, S; Sutton, P J; Swinkels, B; Tacca, M; Talukder, D; Tanner, D B; Tarabrin, S P; Taylor, R; Ter Braack, A P M; Thirugnanasambandam, M P; Thomas, M; Thomas, P; Thorne, K A; Thorne, K S; Thrane, E; Tiwari, V; Tokmakov, K V; Tomlinson, C; Toncelli, A; Tonelli, M; Torre, O; Torres, C V; Torrie, C I; Travasso, F; Traylor, G; Tse, M; Ugolini, D; Unnikrishnan, C S; Urban, A L; Urbanek, K; Vahlbruch, H; Vajente, G; Valdes, G; Vallisneri, M; van den Brand, J F J; Van Den Broeck, C; van der Putten, S; van der Sluys, M V; van Heijningen, J; van Veggel, A A; Vass, S; Vasúth, M; Vaulin, R; Vecchio, A; Vedovato, G; Veitch, J; Veitch, P J; Venkateswara, K; Verkindt, D; Verma, S S; Vetrano, F; Viceré, A; Vincent-Finley, R; Vinet, J-Y; Vitale, S; Vo, T; Vocca, H; Vorvick, C; Vousden, W D; Vyachanin, S P; Wade, A; Wade, L; Wade, M; Walker, M; Wallace, L; Wang, M; Wang, X; Ward, R L; Was, M; Weaver, B; Wei, L-W; Weinert, M; Weinstein, A J; Weiss, R; Welborn, T; Wen, L; Wessels, P; West, M; Westphal, T; Wette, K; Whelan, J T; White, D J; Whiting, B F; Wiesner, K; Wilkinson, C; Williams, K; Williams, L; Williams, R; Williams, T; Williamson, A R; Willis, J L; Willke, B; Wimmer, M; Winkler, W; Wipf, C C; Wiseman, A G; Wittel, H; Woan, G; Worden, J; Yablon, J; Yakushin, I; Yamamoto, H; Yancey, C C; Yang, H; Yang, Z; Yoshida, S; Yvert, M; Zadrożny, A; Zanolin, M; Zendri, J-P; Zhang, Fan; Zhang, L; Zhao, C; Zhu, X J; Zucker, M E; Zuraw, S; Zweizig, J

    2014-12-05

    Gravitational waves from a variety of sources are predicted to superpose to create a stochastic background. This background is expected to contain unique information from throughout the history of the Universe that is unavailable through standard electromagnetic observations, making its study of fundamental importance to understanding the evolution of the Universe. We carry out a search for the stochastic background with the latest data from the LIGO and Virgo detectors. Consistent with predictions from most stochastic gravitational-wave background models, the data display no evidence of a stochastic gravitational-wave signal. Assuming a gravitational-wave spectrum of Ω_{GW}(f)=Ω_{α}(f/f_{ref})^{α}, we place 95% confidence level upper limits on the energy density of the background in each of four frequency bands spanning 41.5-1726 Hz. In the frequency band of 41.5-169.25 Hz for a spectral index of α=0, we constrain the energy density of the stochastic background to be Ω_{GW}(f)waves.

  5. Ralph A. Alpher, Robert C. Herman, and the Cosmic Microwave Background Radiation

    Science.gov (United States)

    Alpher, Victor S.

    2012-09-01

    Much of the literature on the history of the prediction and discovery of the Cosmic Microwave Background Radiation (CMBR) is incorrect in some respects. I focus on the early history of the CMBR, from its prediction in 1948 to its measurement in 1964, basing my discussion on the published literature, the private papers of Ralph A. Alpher, and interviews with several of the major figures involved in the prediction and measurement of the CMBR. I show that the early prediction of the CMBR continues to be widely misunderstood.

  6. Background radioactivity in sediments near Los Alamos, New Mexico

    International Nuclear Information System (INIS)

    McLin, Stephen G.

    2004-01-01

    River and reservoir sediments have been collected annually by Los Alamos National Laboratory since 1974 and 1979, respectively. These background samples are collected from five river stations and four reservoirs located throughout northern New Mexico and southern Colorado. Analyses include 3 H, 90 Sr, 137 Cs, total U, 238 Pu, 239,240 Pu, 241 Am, gross alpha, gross beta, and gross gamma radioactivity. Surprisingly, there are no federal or state regulatory standards in the USA that specify how to compute background radioactivity values on sediments. Hence, the sample median (or 0.50 quantile) is proposed for this background because it reflects central data tendency and is distribution-free. Estimates for the upper limit of background radioactivity on river and reservoir sediments are made for sampled analytes using the 0.95 quantile (two-tail). These analyses also show that seven of ten analytes from reservoir sediments are normally distributed, or are normally distributed after a logarithmic or square root transformation. However, only three of ten analytes from river sediments are similarly distributed. In addition, isotope ratios for 137 Cs/ 238 Pu, 137 Cs/ 239,240 Pu, and 239,240 Pu/ 238 Pu from reservoir sediments are independent of clay content, total organic carbon/specific surface area (TOC/SSA) and cation exchange capacity/specific surface area (CEC/SSA) ratios. These TOC/SSA and CEC/SSA ratios reflect sediment organic carbon and surface charge densities that are associated with radionuclide absorption, adsorption, and ion exchange reactions on clay mineral structures. These latter ratio values greatly exceed the availability of background radionuclides in the environment, and insure that measured background levels are a maximum. Since finer-grained reservoir sediments contain larger clay-sized fractions compared to coarser river sediments, they show higher background levels for most analytes. Furthermore, radioactivity values on reservoir sediments have
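
    The quantile-based background estimate described above is straightforward to compute; the following is a minimal Python sketch, assuming a one-dimensional array of activity measurements for a single analyte (the values shown are illustrative, not data from the study).

```python
import numpy as np

# Hypothetical activity measurements for one analyte (e.g., 137Cs in pCi/g);
# the values are illustrative only, not taken from the report.
activity = np.array([0.12, 0.18, 0.09, 0.22, 0.15, 0.11, 0.19, 0.31, 0.14, 0.17])

background = np.quantile(activity, 0.50)   # median as the background value
upper_limit = np.quantile(activity, 0.95)  # upper limit of background

print(f"background estimate: {background:.3f}")
print(f"0.95-quantile upper limit: {upper_limit:.3f}")
```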

  7. Circulating microparticles: square the circle

    Science.gov (United States)

    2013-01-01

    Background The present review summarizes current knowledge about microparticles (MPs) and provides a systematic overview of last 20 years of research on circulating MPs, with particular focus on their clinical relevance. Results MPs are a heterogeneous population of cell-derived vesicles, with sizes ranging between 50 and 1000 nm. MPs are capable of transferring peptides, proteins, lipid components, microRNA, mRNA, and DNA from one cell to another without direct cell-to-cell contact. Growing evidence suggests that MPs present in peripheral blood and body fluids contribute to the development and progression of cancer, and are of pathophysiological relevance for autoimmune, inflammatory, infectious, cardiovascular, hematological, and other diseases. MPs have large diagnostic potential as biomarkers; however, due to current technological limitations in purification of MPs and an absence of standardized methods of MP detection, challenges remain in validating the potential of MPs as a non-invasive and early diagnostic platform. Conclusions Improvements in the effective deciphering of MP molecular signatures will be critical not only for diagnostics, but also for the evaluation of treatment regimens and predicting disease outcomes. PMID:23607880

  8. Prediction of peptide drift time in ion mobility mass spectrometry from sequence-based features

    KAUST Repository

    Wang, Bing; Zhang, Jun; Chen, Peng; Ji, Zhiwei; Deng, Shuping; Li, Chi

    2013-01-01

    Background: Ion mobility-mass spectrometry (IMMS), an analytical technique which combines the features of ion mobility spectrometry (IMS) and mass spectrometry (MS), can rapidly separate ions on a millisecond time-scale. IMMS has become a powerful tool for analyzing complex mixtures, especially for the analysis of peptides in proteomics. The high-throughput nature of this technique poses a challenge for the identification of peptides in complex biological samples. As an important parameter, peptide drift time can be used to enhance downstream data analysis in IMMS-based proteomics. Results: In this paper, a model based on the least squares support vector regression (LS-SVR) method is presented to predict peptide ion drift time in IMMS from sequence-based features of the peptide. Four descriptors were extracted from the peptide sequence to represent peptide ions by a 34-component vector. The parameters of the LS-SVR were selected by a grid-searching strategy, and a 10-fold cross-validation approach was employed for model training and testing. Our proposed method was tested on three datasets with different charge states. The high prediction performance achieved demonstrates the effectiveness and efficiency of the prediction model. Conclusions: Our proposed LS-SVR model can predict peptide drift time from sequence information with relatively high accuracy, as shown by a test on a dataset of 595 peptides. This work can enhance the confidence of protein identification when combined with current protein searching techniques. 2013 Wang et al.; licensee BioMed Central Ltd.
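
    The record gives no implementation details; the sketch below is a minimal numpy implementation of least squares support vector regression with an RBF kernel on synthetic data, intended only to illustrate the LS-SVR fitting step. The feature values, kernel width and regularization constant are illustrative assumptions, and the grid search with 10-fold cross-validation is omitted.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of A and the rows of B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvr_fit(X, y, gamma=10.0, sigma=1.0):
    # Solve the LS-SVR linear system [[0, 1^T], [1, K + I/gamma]] [b, alpha] = [0, y].
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]          # bias b, dual coefficients alpha

def lssvr_predict(X_train, b, alpha, X_new, sigma=1.0):
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b

# Toy demonstration with synthetic "sequence features" and drift times.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 34))                 # 34-component feature vectors
y = X[:, 0] * 2.0 + np.sin(X[:, 1]) + rng.normal(scale=0.1, size=60)
b, alpha = lssvr_fit(X[:50], y[:50], gamma=50.0, sigma=6.0)
pred = lssvr_predict(X[:50], b, alpha, X[50:], sigma=6.0)
print("RMSE on held-out toy data:", np.sqrt(np.mean((pred - y[50:]) ** 2)))
```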

  9. Prediction of peptide drift time in ion mobility mass spectrometry from sequence-based features

    KAUST Repository

    Wang, Bing

    2013-05-09

    Background: Ion mobility-mass spectrometry (IMMS), an analytical technique which combines the features of ion mobility spectrometry (IMS) and mass spectrometry (MS), can rapidly separate ions on a millisecond time-scale. IMMS has become a powerful tool for analyzing complex mixtures, especially for the analysis of peptides in proteomics. The high-throughput nature of this technique poses a challenge for the identification of peptides in complex biological samples. As an important parameter, peptide drift time can be used to enhance downstream data analysis in IMMS-based proteomics. Results: In this paper, a model based on the least squares support vector regression (LS-SVR) method is presented to predict peptide ion drift time in IMMS from sequence-based features of the peptide. Four descriptors were extracted from the peptide sequence to represent peptide ions by a 34-component vector. The parameters of the LS-SVR were selected by a grid-searching strategy, and a 10-fold cross-validation approach was employed for model training and testing. Our proposed method was tested on three datasets with different charge states. The high prediction performance achieved demonstrates the effectiveness and efficiency of the prediction model. Conclusions: Our proposed LS-SVR model can predict peptide drift time from sequence information with relatively high accuracy, as shown by a test on a dataset of 595 peptides. This work can enhance the confidence of protein identification when combined with current protein searching techniques. 2013 Wang et al.; licensee BioMed Central Ltd.

  10. A note on Dupuy's QJM and new square law

    Directory of Open Access Journals (Sweden)

    G. Geldenhuys

    2003-12-01

    Full Text Available T. N. Dupuy has developed various operations research models in an attempt to quantify lessons that can be learned from military history. We discuss two of his models, the Quantified Judgment Model (QJM) and the "new square law". The QJM was developed by Dupuy for the analysis of military operations. We point out mathematical discrepancies in a part of the model and make suggestions to remove them. Dupuy's new square law is an attempt to modify the well-known Lanchester equations for aimed fire, taking into account some results obtained in the QJM. We show that the new square law cannot be accepted as a valid mathematical model of combat attrition.
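
    The "new square law" itself is not reproduced in the abstract. For orientation, the classical Lanchester aimed-fire equations it modifies are dx/dt = -b*y, dy/dt = -a*x, which conserve the quantity a*x^2 - b*y^2 (the original square law). A short numerical check with arbitrary illustrative coefficients:

```python
import numpy as np

# Classical Lanchester aimed-fire model: dx/dt = -b*y, dy/dt = -a*x.
# The quantity a*x**2 - b*y**2 is conserved (the "square law").
a, b = 0.02, 0.03          # illustrative effectiveness coefficients
x, y = 1000.0, 800.0       # initial force sizes
dt, steps = 0.01, 5000

invariant0 = a * x**2 - b * y**2
for _ in range(steps):
    x, y = x - b * y * dt, y - a * x * dt   # simple Euler step
    if x <= 0 or y <= 0:
        break

print("initial invariant:", invariant0)
print("final   invariant:", a * x**2 - b * y**2)   # approximately equal
```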

  11. Modelling by partial least squares the relationship between the HPLC mobile phases and analytes on phenyl column.

    Science.gov (United States)

    Markopoulou, Catherine K; Kouskoura, Maria G; Koundourellis, John E

    2011-06-01

    Twenty-five descriptors and 61 structurally different analytes have been used in a partial least squares (PLS) projection to latent structures technique in order to study chromatographically their interaction mechanism on a phenyl column. According to the model, 240 different retention times of the analytes, expressed as the Y variable (log k), at different % MeOH mobile-phase concentrations have been correlated with their theoretically most important structural or molecular descriptors. The goodness of fit was estimated by the coefficient of multiple determination r² (0.919) and the root mean square error of estimation (RMSEE = 0.1283), with a predictive ability (Q²) of 0.901. The model was further validated using cross-validation (CV), by 20 response permutations (r² (0.0, 0.0146), Q² (0.0, -0.136)) and by external prediction. The contribution of certain interaction mechanisms between the analytes, the mobile phase and the column, proportional or counterbalancing, is also studied. To evaluate the influence on Y of every variable in a PLS model, the VIP (variable importance in the projection) plot provides evidence that lipophilicity (expressed as Log D, Log P), polarizability, refractivity and the eluting power of the mobile phase are dominant in the retention mechanism on a phenyl column. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Penalized linear regression for discrete ill-posed problems: A hybrid least-squares and mean-squared error approach

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag

    2016-12-19

    This paper proposes a new approach to find the regularization parameter for linear least-squares discrete ill-posed problems. In the proposed approach, an artificial perturbation matrix with a bounded norm is forced into the discrete ill-posed model matrix. This perturbation is introduced to enhance the singular-value (SV) structure of the matrix and hence to provide a better solution. The proposed approach is derived to select the regularization parameter in a way that minimizes the mean-squared error (MSE) of the estimator. Numerical results demonstrate that the proposed approach outperforms a set of benchmark methods in most cases when applied to different scenarios of discrete ill-posed problems. Jointly, the proposed approach enjoys the lowest run-time and offers the highest level of robustness amongst all the tested methods.
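
    The MSE-based selection rule of the paper is not spelled out in the abstract. As a generic illustration of the underlying problem, the sketch below forms the Tikhonov-regularized solution x(lambda) = (A^T A + lambda*I)^(-1) A^T b of a synthetic ill-posed system and scans lambda over a grid; it is not the authors' method, only a simple baseline showing the role of the regularization parameter.

```python
import numpy as np

rng = np.random.default_rng(1)

# Build a mildly ill-conditioned model matrix A and noisy data b = A x_true + noise.
n = 50
U, _ = np.linalg.qr(rng.normal(size=(n, n)))
V, _ = np.linalg.qr(rng.normal(size=(n, n)))
s = np.logspace(0, -6, n)                      # rapidly decaying singular values
A = U @ np.diag(s) @ V.T
x_true = rng.normal(size=n)
b = A @ x_true + 1e-4 * rng.normal(size=n)

def tikhonov(A, b, lam):
    # Regularized normal equations: (A^T A + lam*I) x = A^T b
    m = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(m), A.T @ b)

# Scan a grid of regularization parameters and report the reconstruction error
# (x_true is used here purely to illustrate the bias/variance trade-off).
for lam in [0.0, 1e-10, 1e-8, 1e-6, 1e-4, 1e-2]:
    x = tikhonov(A, b, lam)
    print(f"lambda={lam:8.1e}  error vs x_true = {np.linalg.norm(x - x_true):.3e}")
```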

  13. Correlated perturbations from inflation and the cosmic microwave background.

    Science.gov (United States)

    Amendola, Luca; Gordon, Christopher; Wands, David; Sasaki, Misao

    2002-05-27

    We compare the latest cosmic microwave background data with theoretical predictions including correlated adiabatic and cold dark matter (CDM) isocurvature perturbations with a simple power-law dependence. We find that there is a degeneracy between the amplitude of correlated isocurvature perturbations and the spectral tilt. A negative (red) tilt is found to be compatible with a larger isocurvature contribution. Estimates of the baryon and CDM densities are found to be almost independent of the isocurvature amplitude. The main result is that current microwave background data do not exclude a dominant contribution from CDM isocurvature fluctuations on large scales.

  14. El nuevo Madison Square Garden – (EE.UU.

    Directory of Open Access Journals (Sweden)

    Luckman, Ch.

    1971-05-01

    Full Text Available The Madison Square Garden Sports and Amusements Center comprises the following. 1. A circular building, 129.54 m in diameter and 45.72 m high, which houses the New Madison Square Garden and many other facilities. The arena seats 20,250 spectators, who can watch hockey, basketball, cycling, boxing, circus shows, ice skating, special displays, variety shows, meetings and other kinds of performance. 2. An office block on Seventh Avenue, with a useful floor area for office use amounting to 111,500 m2 and a further 4,800 m2 of floor area on the first two floors for commercial and banking activities.

  15. Numerical simulation of turbulent flow through a straight square duct using a near wall linear k – ε model.

    Directory of Open Access Journals (Sweden)

    Ahmed Rechia

    2007-09-01

    Full Text Available The aim of this work is to predict numerically the turbulent flow through a straight square duct using the Reynolds-averaged Navier-Stokes (RANS) equations with the widely used k – ε model and a near-wall k – ε − fμ turbulence model. To handle wall proximity and non-equilibrium effects, the first model is modified by incorporating damping functions fμ via the eddy viscosity relation. The predicted results for the streamwise and spanwise velocities and the Reynolds stress components are compared to those given by the k – ε model and by the direct numerical simulation (DNS) data of Gavrilakis (J. Fluid Mech., 1992). In light of these results, the proposed k – ε − fμ model is found to be generally satisfactory for predicting the considered flow.

  16. Prospects for the direct detection of the cosmic neutrino background

    International Nuclear Information System (INIS)

    Ringwald, Andreas

    2009-01-01

    The existence of a cosmic neutrino background - the analogue of the cosmic microwave background - is a fundamental prediction of standard big bang cosmology. Up to now, the observational evidence for its existence is rather indirect and rests entirely on cosmological observations of, e.g., the light elemental abundances, the anisotropies in the cosmic microwave background, and the large scale distribution of matter. Here, we review more direct, weak interaction based detection techniques for the cosmic neutrino background in the present epoch and in our local neighbourhood. We show that, with current technology, all proposals are still off by some orders of magnitude in sensitivity to lead to a guaranteed detection of the relic neutrinos. The most promising laboratory search, based on neutrino capture on beta decaying nuclei, may be done in future experiments designed to measure the neutrino mass through decay kinematics.

  17. Prospects for the direct detection of the cosmic neutrino background

    International Nuclear Information System (INIS)

    Ringwald, Andreas

    2009-01-01

    The existence of a cosmic neutrino background - the analogue of the cosmic microwave background - is a fundamental prediction of standard big bang cosmology. Up to now, the observational evidence for its existence is rather indirect and rests entirely on cosmological observations of, e.g., the light elemental abundances, the anisotropies in the cosmic microwave background, and the large scale distribution of matter. Here, we review more direct, weak interaction based detection techniques for the cosmic neutrino background in the present epoch and in our local neighbourhood. We show that, with current technology, all proposals are still off by some orders of magnitude in sensitivity to lead to a guaranteed detection of the relic neutrinos. The most promising laboratory search, based on neutrino capture on beta decaying nuclei, may be done in future experiments designed to measure the neutrino mass through decay kinematics. (orig.)

  18. Background Noise Reduction Using Adaptive Noise Cancellation Determined by the Cross-Correlation

    Science.gov (United States)

    Spalt, Taylor B.; Brooks, Thomas F.; Fuller, Christopher R.

    2012-01-01

    Background noise due to flow in wind tunnels contaminates desired data by decreasing the Signal-to-Noise Ratio. The use of Adaptive Noise Cancellation to remove background noise at measurement microphones is compromised when the reference sensor measures both background and desired noise. The proposed technique modifies the classical processing configuration based on the cross-correlation between the reference and primary microphones. Background noise attenuation is achieved using a cross-correlation sample width that encompasses only the background noise and a matched delay for the adaptive processing. A present limitation of the method is that a minimum time delay between the background noise and desired signal must exist in order for the correlated parts of the desired signal to be separated from the background noise in the cross-correlation. A simulation yields primary signal recovery which can be predicted from the coherence of the background noise between the channels. Results are compared with two existing methods.
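
    Beyond the cross-correlation idea, no algorithmic details are given in the abstract. The sketch below is a generic illustration on synthetic signals: the reference-to-primary delay of the background noise is estimated from the peak of the cross-correlation, the reference is shifted by that delay, and a standard LMS adaptive filter then cancels the correlated noise. The signal model and all parameters are assumptions, not the configuration used in the study.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000
noise = rng.normal(size=n)                          # background noise at the reference sensor
delay = 25                                          # unknown propagation delay (samples)
desired = np.sin(2 * np.pi * 0.01 * np.arange(n))   # tonal "desired" signal at the primary mic
primary = desired + np.roll(noise, delay)           # primary mic: signal + delayed noise
reference = noise                                   # reference mic: background noise only (idealized)

# Estimate the delay from the peak of the cross-correlation.
lags = np.arange(-100, 101)
xcorr = np.array([np.dot(primary, np.roll(reference, k)) for k in lags])
est_delay = lags[np.argmax(xcorr)]
ref_aligned = np.roll(reference, est_delay)

# Standard LMS adaptive noise canceller on the delay-matched reference.
taps, mu = 8, 1e-3
w = np.zeros(taps)
out = np.zeros(n)
for i in range(taps - 1, n):
    x = ref_aligned[i - taps + 1:i + 1][::-1]       # current and past reference samples
    e = primary[i] - w @ x                          # error = cleaned output
    w += 2 * mu * e * x
    out[i] = e

print("estimated delay:", est_delay)
print("noise power before cancellation:", np.var(primary - desired))
print("noise power after  cancellation:", np.var(out[5000:] - desired[5000:]))
```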

  19. ELMO Bumpy Square proposal

    International Nuclear Information System (INIS)

    Dory, R.A.; Uckan, N.A.; Ard, W.B.

    1986-10-01

    The ELMO Bumpy Square (EBS) concept consists of four straight magnetic mirror arrays linked by four high-field corner coils. Extensive calculations show that this configuration offers major improvements over the ELMO Bumpy Torus (EBT) in particle confinement, heating, transport, ring production, and stability. The components of the EBT device at Oak Ridge National Laboratory can be reconfigured into a square arrangement having straight sides composed of EBT coils, with new microwave cavities and high-field corners designed and built for this application. The elimination of neoclassical convection, identified as the dominant mechanism for the limited confinement in EBT, will give the EBS device substantially improved confinement and the flexibility to explore the concepts that produce this improvement. The primary goals of the EBS program are twofold: first, to improve the physics of confinement in toroidal systems by developing the concepts of plasma stabilization using the effects of energetic electrons and confinement optimization using magnetic field shaping and electrostatic potential control to limit particle drift, and second, to develop bumpy toroid devices as attractive candidates for fusion reactors. This report presents a brief review of the physics analyses that support the EBS concept, discussions of the design and expected performance of the EBS device, a description of the EBS experimental program, and a review of the reactor potential of bumpy toroid configurations. Detailed information is presented in the appendices

  20. Evaluating Outlier Identification Tests: Mahalanobis "D" Squared and Comrey "Dk."

    Science.gov (United States)

    Rasmussen, Jeffrey Lee

    1988-01-01

    A Monte Carlo simulation was used to compare the Mahalanobis "D" Squared and the Comrey "Dk" methods of detecting outliers in data sets. Under the conditions investigated, the "D" Squared technique was preferable as an outlier removal statistic. (SLD)
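
    The Comrey Dk statistic is less widely implemented, but the Mahalanobis D-squared screening compared in the study can be sketched in a few lines on synthetic data; the chi-square cutoff used below is a common, assumed choice rather than the criterion of the original study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
X = rng.multivariate_normal(mean=[0, 0, 0], cov=np.eye(3), size=200)
X[:3] += 6.0                                  # plant three gross outliers

mu = X.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
diff = X - mu
d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)   # Mahalanobis D^2 per observation

# Under multivariate normality, D^2 is approximately chi-square with p degrees of freedom.
cutoff = stats.chi2.ppf(0.999, df=X.shape[1])
print("flagged outliers:", np.where(d2 > cutoff)[0])
```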

  1. Predicting performance using background characteristics of international medical graduates in an inner-city university-affiliated Internal Medicine residency training program

    Directory of Open Access Journals (Sweden)

    Akhuetie Jane

    2009-07-01

    years, and USMLE Step I & Step II clinical skills scores were 85 (IQR: 80–88) & 82 (IQR: 79–87), respectively. The median aggregate CBE scores during training were: PG1 5.8 (IQR: 5.6–6.3); PG2 6.3 (IQR: 6–6.8) & PG3 6.7 (IQR: 6.7–7.1). 25% of our residents scored consistently above US national median ITE scores in all 3 years of training and 16% pursued a fellowship. Younger residents had a higher aggregate annual CBE score than the program median (p Conclusion: Background IMG features, namely age and USMLE scores, predict performance evaluation and in-training examination scores during residency training. In addition, enhanced research activities during residency training could facilitate fellowship goals among interested IMGs.

  2. Soft sensor modelling by time difference, recursive partial least squares and adaptive model updating

    International Nuclear Information System (INIS)

    Fu, Y; Xu, O; Yang, W; Zhou, L; Wang, J

    2017-01-01

    To investigate time-variant and nonlinear characteristics in industrial processes, a soft sensor modelling method based on time difference, moving-window recursive partial least squares (PLS) and adaptive model updating is proposed. In this method, time difference values of input and output variables are used as training samples to construct the model, which can reduce the effects of the nonlinear characteristic on modelling accuracy and retain the advantages of the recursive PLS algorithm. To address the high updating frequency of the model, a confidence value is introduced, which can be updated adaptively according to the results of the model performance assessment. Once the confidence value is updated, the model can be updated. The proposed method has been used to predict the 4-carboxy-benz-aldehyde (CBA) content in the purified terephthalic acid (PTA) oxidation reaction process. The results show that the proposed soft sensor modelling method can reduce computation effectively, improve prediction accuracy by making use of process information and reflect the process characteristics accurately. (paper)

  3. Clinical prediction in defined populations: a simulation study investigating when and how to aggregate existing models

    Directory of Open Access Journals (Sweden)

    Glen P. Martin

    2017-01-01

    Full Text Available Abstract Background Clinical prediction models (CPMs) are increasingly deployed to support healthcare decisions but they are derived inconsistently, in part due to limited data. An emerging alternative is to aggregate existing CPMs developed for similar settings and outcomes. This simulation study aimed to investigate the impact of between-population-heterogeneity and sample size on aggregating existing CPMs in a defined population, compared with developing a model de novo. Methods Simulations were designed to mimic a scenario in which multiple CPMs for a binary outcome had been derived in distinct, heterogeneous populations, with potentially different predictors available in each. We then generated a new ‘local’ population and compared the performance of CPMs developed for this population by aggregation, using stacked regression, principal component analysis or partial least squares, with redevelopment from scratch using backwards selection and penalised regression. Results While redevelopment approaches resulted in models that were miscalibrated for local datasets of less than 500 observations, model aggregation methods were well calibrated across all simulation scenarios. When the size of local data was less than 1000 observations and between-population-heterogeneity was small, aggregating existing CPMs gave better discrimination and had the lowest mean square error in the predicted risks compared with deriving a new model. Conversely, given greater than 1000 observations and significant between-population-heterogeneity, redevelopment outperformed the aggregation approaches. In all other scenarios, both aggregation and de novo derivation resulted in similar predictive performance. Conclusion This study demonstrates a pragmatic approach to contextualising CPMs to defined populations. When aiming to develop models in defined populations, modellers should consider existing CPMs, with aggregation approaches being a suitable modelling
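
    Of the aggregation approaches mentioned, stacked regression is the simplest to illustrate. The sketch below makes several simplifying assumptions: two hypothetical existing logistic CPMs are represented only by their (made-up) published coefficients, and a logistic stacking model is fitted to the local data on their linear predictors. Real applications often additionally constrain the stacking weights to be non-negative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)

# Local dataset (hypothetical): 3 predictors, binary outcome.
n = 500
X = rng.normal(size=(n, 3))
true_lp = 0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.3 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-true_lp)))

# Two existing CPMs, represented by their published (hypothetical) coefficients.
cpm1 = {"intercept": -0.1, "coef": np.array([0.9, -0.4, 0.0])}
cpm2 = {"intercept": 0.2, "coef": np.array([0.6, -0.6, 0.4])}

def linear_predictor(model, X):
    return model["intercept"] + X @ model["coef"]

# Stacked regression: regress the local outcome on the existing models'
# linear predictors, which re-weights (and recalibrates) the CPMs locally.
Z = np.column_stack([linear_predictor(cpm1, X), linear_predictor(cpm2, X)])
stacker = LogisticRegression().fit(Z, y)
print("stacking weights:", stacker.coef_, "intercept:", stacker.intercept_)

# Predictions for new local patients combine the existing CPMs.
X_new = rng.normal(size=(5, 3))
Z_new = np.column_stack([linear_predictor(cpm1, X_new), linear_predictor(cpm2, X_new)])
print("aggregated risks:", stacker.predict_proba(Z_new)[:, 1])
```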

  4. Partial Least Squares tutorial for analyzing neuroimaging data

    Directory of Open Access Journals (Sweden)

    Patricia Van Roon

    2014-09-01

    Full Text Available Partial least squares (PLS) has become a respected and meaningful soft modeling analysis technique that can be applied to very large datasets where the number of factors or variables is greater than the number of observations. Current biometric studies (e.g., eye movements, EKG, body movements, EEG) are often of this nature. PLS eliminates the multiple linear regression issues of over-fitting data by finding a few underlying or latent variables (factors) that account for most of the variation in the data. In real-world applications, where linear models do not always apply, PLS can model the non-linear relationship well. This tutorial introduces two PLS methods, PLS Correlation (PLSC) and PLS Regression (PLSR), and their applications in data analysis, which are illustrated with neuroimaging examples. Both methods provide straightforward and comprehensible techniques for determining and modeling relationships between two multivariate data blocks by finding latent variables that best describe the relationships. In the examples, the PLSC will analyze the relationship between neuroimaging data such as Event-Related Potential (ERP) amplitude averages from different locations on the scalp with their corresponding behavioural data. Using the same data, the PLSR will be used to model the relationship between neuroimaging and behavioural data. This model will be able to predict future behaviour solely from available neuroimaging data. To find latent variables, Singular Value Decomposition (SVD) for PLSC and Non-linear Iterative PArtial Least Squares (NIPALS) for PLSR are implemented in this tutorial. SVD decomposes the large data block into three manageable matrices containing a diagonal set of singular values, as well as left and right singular vectors. For PLSR, the NIPALS algorithm is used because it provides a more precise estimation of the latent variables. Mathematica notebooks are provided for each PLS method with clearly labeled sections and subsections. The
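
    The tutorial itself provides Mathematica notebooks; for readers working in Python, a compact numpy transcription of the NIPALS idea for a single response variable (PLS1) is sketched below. It is meant for orientation only and omits scaling, cross-validation and the multivariate-Y case.

```python
import numpy as np

def pls1_nipals(X, y, n_components):
    """Minimal NIPALS PLS1 (single response) for illustration only."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    W, P, Q = [], [], []
    Xr, yr = Xc.copy(), yc.copy()
    for _ in range(n_components):
        w = Xr.T @ yr
        w /= np.linalg.norm(w)           # weight vector
        t = Xr @ w                       # scores
        p = Xr.T @ t / (t @ t)           # X loadings
        q = yr @ t / (t @ t)             # y loading
        Xr = Xr - np.outer(t, p)         # deflate X
        yr = yr - t * q                  # deflate y
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    B = W @ np.linalg.solve(P.T @ W, Q)  # regression coefficients (centered scale)
    return B, X.mean(axis=0), y.mean()

def pls1_predict(B, x_mean, y_mean, X_new):
    return y_mean + (X_new - x_mean) @ B

# Toy example: many correlated predictors, few observations.
rng = np.random.default_rng(5)
X = rng.normal(size=(40, 100))
y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=40)
B, xm, ym = pls1_nipals(X[:30], y[:30], n_components=3)
pred = pls1_predict(B, xm, ym, X[30:])
print("test RMSE:", np.sqrt(np.mean((pred - y[30:]) ** 2)))
```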

  5. Lameness detection challenges in automated milking systems addressed with partial least squares discriminant analysis

    DEFF Research Database (Denmark)

    Garcia, Emanuel; Klaas, Ilka Christine; Amigo Rubio, Jose Manuel

    2014-01-01

    Lameness is prevalent in dairy herds. It causes decreased animal welfare and leads to higher production costs. This study explored data from an automatic milking system (AMS) to model on-farm gait scoring from a commercial farm. A total of 88 cows were gait scored once per week, for two 5-wk periods (…). The reference gait scoring error was estimated in the first week of the study and was, on average, 15%. Two partial least squares discriminant analysis models were fitted to parity 1 and parity 2 groups, respectively, to assign the lameness class according to the predicted probability of being lame (score 3...

  6. Characteristics of Fluid Force Reduction for Two Different Square Prisms in a Tandem Arrangement

    Energy Technology Data Exchange (ETDEWEB)

    Ro, Ki Deok; Kang, Chang Whan; Park, Kwon Ho [Gyeongsang Nat’l Univ., Jinju (Korea, Republic of)

    2017-07-15

    The characteristics of the flow fields of a square prism having a small square prism were investigated by measuring the lift and drag on the square prism and visualizing the flow field using PIV. The experimental parameters were the width ratios (H/B = 0.2–0.6) of the small square prism to the prism width and the gap ratios (G/B = 0–3) between the prism and the small square prism. The drag reduction rate of the square prism initially increased and then decreased with the G/B ratio for the same H/B ratio, and increased with the H/B ratio for the same G/B ratio. The maximum drag reduction rate of 98.0% was observed at H/B = 0.6 and G/B = 1.0. The lift reduction rate of the square prism was not affected by the width and gap ratios; the total average value was approximately 66.5%. In the case of a square prism having a small square prism, stagnation regions appeared on the upstream and downstream sides of the square prism.

  7. Improved variable reduction in partial least squares modelling by Global-Minimum Error Uninformative-Variable Elimination.

    Science.gov (United States)

    Andries, Jan P M; Vander Heyden, Yvan; Buydens, Lutgarde M C

    2017-08-22

    The calibration performance of Partial Least Squares regression (PLS) can be improved by eliminating uninformative variables. For PLS, many variable elimination methods have been developed. One is the Uninformative-Variable Elimination for PLS (UVE-PLS). However, the number of variables retained by UVE-PLS is usually still large. In UVE-PLS, variable elimination is repeated as long as the root mean squared error of cross validation (RMSECV) is decreasing. The set of variables in this first local minimum is retained. In this paper, a modification of UVE-PLS is proposed and investigated, in which UVE is repeated until no further reduction in variables is possible, followed by a search for the global RMSECV minimum. The method is called Global-Minimum Error Uninformative-Variable Elimination for PLS, denoted as GME-UVE-PLS or simply GME-UVE. After each iteration, the predictive ability of the PLS model, built with the remaining variable set, is assessed by RMSECV. The variable set with the global RMSECV minimum is then finally selected. The goal is to obtain smaller sets of variables with similar or improved predictability compared with those from the classical UVE-PLS method. The performance of the GME-UVE-PLS method is investigated using four data sets, i.e. a simulated set, NIR and NMR spectra, and a theoretical molecular descriptors set, resulting in twelve profile-response (X-y) calibrations. The selective and predictive performances of the models resulting from GME-UVE-PLS are statistically compared to those from UVE-PLS and 1-step UVE, using one-sided paired t-tests. The results demonstrate that variable reduction with the proposed GME-UVE-PLS method usually eliminates significantly more variables than the classical UVE-PLS, while the predictive abilities of the resulting models are better. With GME-UVE-PLS, a lower number of uninformative variables, without a chemical meaning for the response, may be retained than with UVE-PLS. The selectivity of the classical UVE method
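
    The full GME-UVE procedure, including the artificial noise variables that classical UVE adds to set the reliability threshold, is more involved than the abstract can convey. The sketch below only illustrates the outer loop described above - repeat elimination, record RMSECV at every step, and keep the variable set at the global RMSECV minimum - using coefficient magnitude as a simplified stand-in for the UVE reliability criterion, on synthetic data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(6)
n, p = 80, 60
X = rng.normal(size=(n, p))
y = X[:, :8] @ rng.normal(size=8) + 0.2 * rng.normal(size=n)   # only 8 informative variables

def rmsecv(X, y, n_components=4):
    # Root mean squared error of 10-fold cross-validation for a PLS model.
    pls = PLSRegression(n_components=min(n_components, X.shape[1]))
    pred = cross_val_predict(pls, X, y, cv=10).ravel()
    return np.sqrt(np.mean((pred - y) ** 2))

selected = list(range(p))
history = [(rmsecv(X, y), list(selected))]
while len(selected) > 2:
    pls = PLSRegression(n_components=min(4, len(selected))).fit(X[:, selected], y)
    coefs = np.abs(pls.coef_).ravel()
    drop = int(np.argmin(coefs))              # least "reliable" variable (simplified criterion)
    del selected[drop]
    history.append((rmsecv(X[:, selected], y), list(selected)))

best_rmse, best_vars = min(history, key=lambda h: h[0])   # global RMSECV minimum
print("global RMSECV minimum:", round(best_rmse, 3), "with", len(best_vars), "variables")
```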

  8. Multiples least-squares reverse time migration

    KAUST Repository

    Zhang, Dongliang; Zhan, Ge; Dai, Wei; Schuster, Gerard T.

    2013-01-01

    To enhance the image quality, we propose multiples least-squares reverse time migration (MLSRTM) that transforms each hydrophone into a virtual point source with a time history equal to that of the recorded data. Since each recorded trace is treated

  9. A principal-component and least-squares method for allocating polycyclic aromatic hydrocarbons in sediment to multiple sources

    International Nuclear Information System (INIS)

    Burns, W.A.; Mankiewicz, P.J.; Bence, A.E.; Page, D.S.; Parker, K.R.

    1997-01-01

    A method was developed to allocate polycyclic aromatic hydrocarbons (PAHs) in sediment samples to the PAH sources from which they came. The method uses principal-component analysis to identify possible sources and a least-squares model to find the source mix that gives the best fit of 36 PAH analytes in each sample. The method identified 18 possible PAH sources in a large set of field data collected in Prince William Sound, Alaska, USA, after the 1989 Exxon Valdez oil spill, including diesel oil, diesel soot, spilled crude oil in various weathering states, natural background, creosote, and combustion products from human activities and forest fires. Spill oil was generally found to be a small increment of the natural background in subtidal sediments, whereas combustion products were often the predominant sources for subtidal PAHs near sites of past or present human activity. The method appears to be applicable to other situations, including other spills
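
    A minimal sketch of the least-squares mixing step is given below, under simplified assumptions: the PCA screening is omitted, the source fingerprints and the sample are synthetic, and the source contributions are constrained to be non-negative.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(7)

# Columns of F are hypothetical source "fingerprints": relative abundances
# of 36 PAH analytes for each of 4 candidate sources.
n_analytes, n_sources = 36, 4
F = np.abs(rng.normal(size=(n_analytes, n_sources)))
F /= F.sum(axis=0)                        # normalize each fingerprint

true_mix = np.array([0.6, 0.0, 0.3, 0.1])
sample = F @ true_mix + 0.01 * np.abs(rng.normal(size=n_analytes))

# Non-negative least squares: find the source mix that best reproduces the sample.
mix, residual = nnls(F, sample)
print("estimated source contributions:", np.round(mix / mix.sum(), 3))
print("residual norm:", round(residual, 4))
```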

  10. submitter Study of Backgrounds to Black Hole Events in the ATLAS Detector

    CERN Document Server

    Han, Sang Hee

    large extra dimension model with black hole mass M_BH = √ŝ, where ŝ is the parton-parton Centre of Momentum System (CMS) energy squared. In the large extra dimension model, quantum gravity can become strong at a TeV energy scale in the bulk space-time, and could lead to microscopic black holes being produced and observed by the LHC experiments. Once black holes are produced in the collider, they will decay to the SM particles by Hawking evaporation. Under this scenario, an analysis was carried out to determine the significance of black hole signals above some SM backgrounds in the ATLAS detector. Five event selection criteria were app...

  11. Forecasting of Energy Consumption in China Based on Ensemble Empirical Mode Decomposition and Least Squares Support Vector Machine Optimized by Improved Shuffled Frog Leaping Algorithm

    Directory of Open Access Journals (Sweden)

    Shuyu Dai

    2018-04-01

    Full Text Available For social development, energy is a crucial resource whose consumption affects the stable and sustained development of the natural environment and economy. Currently, China has become the largest energy consumer in the world. Therefore, establishing an appropriate energy consumption prediction model and accurately forecasting energy consumption in China have practical significance, and can provide a scientific basis for China to formulate a reasonable energy production plan and energy-saving and emissions-reduction-related policies to boost sustainable development. For forecasting the energy consumption in China accurately, considering the main driving factors of energy consumption, a novel model, EEMD-ISFLA-LSSVM (Ensemble Empirical Mode Decomposition and Least Squares Support Vector Machine Optimized by Improved Shuffled Frog Leaping Algorithm), is proposed in this article. The prediction accuracy of energy consumption is influenced by various factors. In this article, first considering population, GDP (Gross Domestic Product), industrial structure (the proportion of the secondary industry's added value), energy consumption structure, energy intensity, carbon emissions intensity, total imports and exports and other influencing factors of energy consumption, the main driving factors of energy consumption are screened as the model input according to the sorting of grey relational degrees to realize feature dimension reduction. Then, the original energy consumption sequence of China is decomposed into multiple subsequences by Ensemble Empirical Mode Decomposition for de-noising. Next, the ISFLA-LSSVM (Least Squares Support Vector Machine Optimized by Improved Shuffled Frog Leaping Algorithm) model is adopted to forecast each subsequence, and the prediction sequences are reconstructed to obtain the forecasting result. After that, the data from 1990 to 2009 are taken as the training set, and the data from 2010 to 2016 are taken as the test set to make an

  12. Global Warming and the Microwave Background

    Directory of Open Access Journals (Sweden)

    Robitaille P.-M.

    2009-04-01

    Full Text Available In the work, the importance of assigning the microwave background to the Earth is addressed while emphasizing the consequences for global climate change. Climate models can only produce meaningful forecasts when they consider the real magnitude of all radiative processes. The oceans and continents both contribute to terrestrial emissions. However, the extent of oceanic radiation, particularly in the microwave region, raises concerns. This is not only since the globe is covered with water, but because the oceans themselves are likely to be weaker emitters than currently believed. Should the microwave background truly be generated by the oceans of the Earth, our planet would be a much less efficient emitter of radiation in this region of the electromagnetic spectrum. Furthermore, the oceans would appear unable to increase their emissions in the microwave in response to temperature elevation, as predicted by Stefan’s law. The results are significant relative to the modeling of global warming.

  13. Sparse least-squares reverse time migration using seislets

    KAUST Repository

    Dutta, Gaurav

    2015-08-19

    We propose sparse least-squares reverse time migration (LSRTM) using seislets as a basis for the reflectivity distribution. This basis is used along with a dip-constrained preconditioner that emphasizes image updates only along prominent dips during the iterations. These dips can be estimated from the standard migration image or from the gradient using plane-wave destruction filters or structural tensors. Numerical tests on synthetic datasets demonstrate the benefits of this method for mitigation of aliasing artifacts and crosstalk noise in multisource least-squares migration.

  14. Background Selection in Partially Selfing Populations

    Science.gov (United States)

    Roze, Denis

    2016-01-01

    Self-fertilizing species often present lower levels of neutral polymorphism than their outcrossing relatives. Indeed, selfing automatically increases the rate of coalescence per generation, but also enhances the effects of background selection and genetic hitchhiking by reducing the efficiency of recombination. Approximations for the effect of background selection in partially selfing populations have been derived previously, assuming tight linkage between deleterious alleles and neutral loci. However, loosely linked deleterious mutations may have important effects on neutral diversity in highly selfing populations. In this article, I use a general method based on multilocus population genetics theory to express the effect of a deleterious allele on diversity at a linked neutral locus in terms of moments of genetic associations between loci. Expressions for these genetic moments at equilibrium are then computed for arbitrary rates of selfing and recombination. An extrapolation of the results to the case where deleterious alleles segregate at multiple loci is checked using individual-based simulations. At high selfing rates, the tight linkage approximation underestimates the effect of background selection in genomes with moderate to high map length; however, another simple approximation can be obtained for this situation and provides accurate predictions as long as the deleterious mutation rate is not too high. PMID:27075726

  15. Penalized linear regression for discrete ill-posed problems: A hybrid least-squares and mean-squared error approach

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag; Ballal, Tarig; Kammoun, Abla; Al-Naffouri, Tareq Y.

    2016-01-01

    This paper proposes a new approach to find the regularization parameter for linear least-squares discrete ill-posed problems. In the proposed approach, an artificial perturbation matrix with a bounded norm is forced into the discrete ill-posed model

  16. Big bang nucleosynthesis, cosmic microwave background anisotropies and dark energy

    International Nuclear Information System (INIS)

    Signore, Monique; Puy, Denis

    2002-01-01

    Over the last decade, cosmological observations have attained a level of precision which allows for very detailed comparison with theoretical predictions. We are beginning to learn the answers to some fundamental questions, using information contained in Cosmic Microwave Background Anisotropy (CMBA) data. In this talk, we briefly review some studies of the current and prospected constraints imposed by CMBA measurements on the neutrino physics and on the dark energy. As it was already announced by Scott, we present some possible new physics from the Cosmic Microwave Background (CMB)

  17. Approximate calculation method for integral of mean square value of nonstationary response

    International Nuclear Information System (INIS)

    Aoki, Shigeru; Fukano, Azusa

    2010-01-01

    The response of a structure subjected to nonstationary random vibration, such as earthquake excitation, is itself nonstationary random vibration. Calculating the statistical characteristics of such a response is complicated. The mean square value of the response is usually used to evaluate the random response, and the integral of the mean square value corresponds to the total energy of the response. In this paper, a simplified calculation method to obtain the integral of the mean square value of the response is proposed. As input excitation, nonstationary white noise and nonstationary filtered white noise are used. Integrals of the mean square value of the response are calculated for various parameter values. It is found that the proposed method gives the exact value of the integral of the mean square value of the response.
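
    The paper's closed-form result is not reproduced in the abstract. For orientation, the quantity being approximated can be estimated directly by Monte Carlo: simulate many realizations of a single-degree-of-freedom oscillator driven by envelope-modulated white noise, average x(t)^2 across realizations, and integrate over time. The oscillator parameters, envelope shape and noise scaling below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(8)

# SDOF oscillator: x'' + 2*zeta*omega*x' + omega**2*x = e(t)*w(t),
# with w(t) approximated by discrete white noise and e(t) an envelope function.
omega, zeta = 2 * np.pi, 0.05
dt, T, n_real = 0.002, 10.0, 200
t = np.arange(0.0, T, dt)
envelope = 4.0 * (np.exp(-0.25 * t) - np.exp(-0.5 * t))   # nonstationary modulation

mean_sq = np.zeros_like(t)
for _ in range(n_real):
    x, v = 0.0, 0.0
    xs = np.empty_like(t)
    # discrete white noise: std 1/sqrt(dt) approximates unit spectral intensity
    w = rng.normal(scale=1.0 / np.sqrt(dt), size=t.size)
    for k in range(t.size):
        a = envelope[k] * w[k] - 2 * zeta * omega * v - omega**2 * x
        v += a * dt
        x += v * dt
        xs[k] = x
    mean_sq += xs**2
mean_sq /= n_real

integral = np.sum(mean_sq) * dt    # integral of the mean-square response over [0, T]
print("estimated integral of E[x(t)^2]:", round(integral, 5))
```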

  18. Hierarchical Cluster-based Partial Least Squares Regression (HC-PLSR is an efficient tool for metamodelling of nonlinear dynamic models

    Directory of Open Access Journals (Sweden)

    Omholt Stig W

    2011-06-01

    Full Text Available Abstract Background Deterministic dynamic models of complex biological systems contain a large number of parameters and state variables, related through nonlinear differential equations with various types of feedback. A metamodel of such a dynamic model is a statistical approximation model that maps variation in parameters and initial conditions (inputs) to variation in features of the trajectories of the state variables (outputs) throughout the entire biologically relevant input space. A sufficiently accurate mapping can be exploited both instrumentally and epistemically. Multivariate regression methodology is a commonly used approach for emulating dynamic models. However, when the input-output relations are highly nonlinear or non-monotone, a standard linear regression approach is prone to give suboptimal results. We therefore hypothesised that a more accurate mapping can be obtained by locally linear or locally polynomial regression. We present here a new method for local regression modelling, Hierarchical Cluster-based PLS regression (HC-PLSR), where fuzzy C-means clustering is used to separate the data set into parts according to the structure of the response surface. We compare the metamodelling performance of HC-PLSR with polynomial partial least squares regression (PLSR) and ordinary least squares (OLS) regression on various systems: six different gene regulatory network models with various types of feedback, a deterministic mathematical model of the mammalian circadian clock and a model of the mouse ventricular myocyte function. Results Our results indicate that multivariate regression is well suited for emulating dynamic models in systems biology. The hierarchical approach turned out to be superior to both polynomial PLSR and OLS regression in all three test cases. The advantage, in terms of explained variance and prediction accuracy, was largest in systems with highly nonlinear functional relationships and in systems with positive feedback

  19. Sub-Model Partial Least Squares for Improved Accuracy in Quantitative Laser Induced Breakdown Spectroscopy

    Science.gov (United States)

    Anderson, R. B.; Clegg, S. M.; Frydenvang, J.

    2015-12-01

    One of the primary challenges faced by the ChemCam instrument on the Curiosity Mars rover is developing a regression model that can accurately predict the composition of the wide range of target types encountered (basalts, calcium sulfate, feldspar, oxides, etc.). The original calibration used 69 rock standards to train a partial least squares (PLS) model for each major element. By expanding the suite of calibration samples to >400 targets spanning a wider range of compositions, the accuracy of the model was improved, but some targets with "extreme" compositions (e.g. pure minerals) were still poorly predicted. We have therefore developed a simple method, referred to as "submodel PLS", to improve the performance of PLS across a wide range of target compositions. In addition to generating a "full" (0-100 wt.%) PLS model for the element of interest, we also generate several overlapping submodels (e.g. for SiO2, we generate "low" (0-50 wt.%), "mid" (30-70 wt.%), and "high" (60-100 wt.%) models). The submodels are generally more accurate than the "full" model for samples within their range because they are able to adjust for matrix effects that are specific to that range. To predict the composition of an unknown target, we first predict the composition with the submodels and the "full" model. Then, based on the predicted composition from the "full" model, the appropriate submodel prediction can be used (e.g. if the full model predicts a low composition, use the "low" model result, which is likely to be more accurate). For samples with "full" predictions that occur in a region of overlap between submodels, the submodel predictions are "blended" using a simple linear weighted sum. The submodel PLS method shows improvements in most of the major elements predicted by ChemCam and reduces the occurrence of negative predictions for low wt.% targets. Submodel PLS is currently being used in conjunction with ICA regression for the major element compositions of ChemCam data.
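
    A schematic sketch of the submodel selection and blending logic is given below, using scikit-learn's PLSRegression on synthetic "spectra"; the two-submodel setup, the 40-60 overlap range and the linear blending weight are illustrative simplifications of the approach described above, not the ChemCam calibration itself.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(9)

# Synthetic "spectra": one spectral signature scaled by concentration, plus noise.
n, p = 300, 50
signature = rng.normal(size=p)
concentration = rng.uniform(0, 100, size=n)
X = np.outer(concentration, signature) + 2.0 * rng.normal(size=(n, p))

def fit_range(lo, hi):
    mask = (concentration >= lo) & (concentration <= hi)
    return PLSRegression(n_components=3).fit(X[mask], concentration[mask])

full = PLSRegression(n_components=3).fit(X, concentration)    # "full" 0-100 wt.% model
low, high = fit_range(0, 60), fit_range(40, 100)              # overlapping submodels

def submodel_predict(X_new):
    guess = full.predict(X_new).ravel()             # first pass with the full model
    p_low = low.predict(X_new).ravel()
    p_high = high.predict(X_new).ravel()
    wt = np.clip((guess - 40.0) / 20.0, 0.0, 1.0)   # 0 below 40, 1 above 60, linear between
    return (1.0 - wt) * p_low + wt * p_high         # blended submodel prediction

X_test = np.outer([10.0, 50.0, 90.0], signature) + 2.0 * rng.normal(size=(3, p))
print(np.round(submodel_predict(X_test), 1))        # should be near 10, 50, 90
```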

  20. Application of Least-Squares Spectral Element Methods to Polynomial Chaos

    NARCIS (Netherlands)

    Vos, P.E.J.; Gerritsma, M.I.

    2006-01-01

    This paper describes the application of the Least-Squares Spectral Element Method to polynomial chaos for solving stochastic partial differential equations. The method is described in detail, and a comparison is presented between the least-squares projection and the conventional Galerkin projection.

  1. FC LSEI WNNLS, Least-Square Fitting Algorithms Using B Splines

    International Nuclear Information System (INIS)

    Hanson, R.J.; Haskell, K.H.

    1989-01-01

    1 - Description of problem or function: FC allows a user to fit discrete data, in a weighted least-squares sense, using piece-wise polynomial functions represented by B-Splines on a given set of knots. In addition to the least-squares fitting of the data, equality, inequality, and periodic constraints at a discrete, user-specified set of points can be imposed on the fitted curve or its derivatives. The subprograms LSEI and WNNLS solve the linearly-constrained least-squares problem. LSEI solves the class of problem with general inequality constraints, and, if requested, obtains a covariance matrix of the solution parameters. WNNLS solves the class of problem with non-negativity constraints. It is anticipated that most users will find LSEI suitable for their needs; however, users with inequalities that are single bounds on variables may wish to use WNNLS. 2 - Method of solution: The discrete data are fit by a linear combination of piece-wise polynomial curves which leads to a linear least-squares system of algebraic equations. Additional information is expressed as a discrete set of linear inequality and equality constraints on the fitted curve which leads to a linearly-constrained least-squares system of algebraic equations. The solution of this system is the main computational problem solved
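
    The LSEI/WNNLS constraint machinery is beyond a short example, but the core task - weighted least-squares fitting of discrete data by B-splines on a given set of interior knots - can be illustrated with scipy (unconstrained case only; data and weights are illustrative).

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(10)
x = np.linspace(0.0, 10.0, 200)
y = np.sin(x) + 0.1 * rng.normal(size=x.size)      # noisy discrete data
w = np.ones_like(x)                                # least-squares weights
w[:50] = 4.0                                       # weight the first quarter more heavily

interior_knots = np.linspace(1.0, 9.0, 7)          # user-specified interior knots
spline = LSQUnivariateSpline(x, y, interior_knots, w=w, k=3)  # cubic B-spline fit

print("weighted residual sum of squares:", round(float(spline.get_residual()), 4))
print("fit at x=5.0:", float(spline(5.0)), " true value:", np.sin(5.0))
```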

  2. LSL: a logarithmic least-squares adjustment method

    International Nuclear Information System (INIS)

    Stallmann, F.W.

    1982-01-01

    To meet regulatory requirements, spectral unfolding codes must not only provide reliable estimates for spectral parameters, but must also be able to determine the uncertainties associated with these parameters. The newer codes, which are more appropriately called adjustment codes, use the least squares principle to determine estimates and uncertainties. The principle is simple and straightforward, but there are several different mathematical models to describe the unfolding problem. In addition to a sound mathematical model, ease of use and range of options are important considerations in the construction of adjustment codes. Based on these considerations, a least squares adjustment code for neutron spectrum unfolding has been constructed some time ago and tentatively named LSL

  3. Battery state-of-charge estimation using approximate least squares

    Science.gov (United States)

    Unterrieder, C.; Zhang, C.; Lunglmayr, M.; Priewasser, R.; Marsili, S.; Huemer, M.

    2015-03-01

    In recent years, much effort has been spent to extend the runtime of battery-powered electronic applications. In order to improve the utilization of the available cell capacity, high precision estimation approaches for battery-specific parameters are needed. In this work, an approximate least squares estimation scheme is proposed for the estimation of the battery state-of-charge (SoC). The SoC is determined based on the prediction of the battery's electromotive force. The proposed approach allows for an improved re-initialization of the Coulomb counting (CC) based SoC estimation method. Experimental results for an implementation of the estimation scheme on a fuel gauge system on chip are illustrated. Implementation details and design guidelines are presented. The performance of the presented concept is evaluated for realistic operating conditions (temperature effects, aging, standby current, etc.). For the considered test case of a GSM/UMTS load current pattern of a mobile phone, the proposed method is able to re-initialize the CC-method with a high accuracy, while state-of-the-art methods fail to perform a re-initialization.
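
    The chip-level implementation is not described in the abstract. A simplified sketch of the general idea - fit the battery model V = EMF - I*R to recent voltage and current samples by least squares, then map the estimated electromotive force to SoC through a lookup curve in order to re-initialize Coulomb counting - is given below; the EMF-SoC curve and all numbers are illustrative assumptions.

```python
import numpy as np

# Hypothetical EMF-vs-SoC lookup curve (monotonic), e.g. for a Li-ion cell.
soc_grid = np.linspace(0.0, 1.0, 11)
emf_grid = 3.0 + 1.2 * soc_grid - 0.3 * soc_grid**2    # illustrative shape, volts

# Recent terminal-voltage and load-current samples (synthetic).
rng = np.random.default_rng(11)
true_emf, true_R = 3.75, 0.08
current = rng.uniform(0.1, 1.5, size=50)               # amps (discharge)
voltage = true_emf - true_R * current + 0.005 * rng.normal(size=50)

# Least-squares fit of V = EMF - R*I  ->  unknowns [EMF, R].
A = np.column_stack([np.ones_like(current), -current])
(emf_est, r_est), *_ = np.linalg.lstsq(A, voltage, rcond=None)

# Map the estimated EMF back to SoC and use it to re-initialize Coulomb counting.
soc_est = np.interp(emf_est, emf_grid, soc_grid)
print(f"EMF = {emf_est:.3f} V, R = {r_est:.3f} ohm, SoC ~ {soc_est:.2%}")
```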

  4. A critical heat flux approach for square rod bundles using the 1995 Groeneveld CHF table and bundle data of heat transfer research facility

    International Nuclear Information System (INIS)

    Lee, M.

    2000-01-01

    The critical heat flux (CHF) approach using CHF look-up tables has become a widely accepted CHF prediction technique. In these approaches, the CHF tables are developed based mostly on the data bank for flow in circular tubes. A set of correction factors was proposed by Groeneveld et al. [Groeneveld, D.C., Cheng, S.C., Doan, T. (1986)] to extend the application of the CHF table to other flow situations including flow in rod bundles. The proposed correction factors are based on a limited amount of data not specified in the original paper. The CHF approach of Groeneveld and co-workers is extensively used in the thermal hydraulic analysis of nuclear reactors. In 1996, Groeneveld et al. proposed a new CHF table to predict CHF in circular tubes [Groeneveld, D.C., et al., 1996. The 1995 look-up table for Critical Heat Flux. Nucl. Eng. Des. 163(1), 23]. In the present study, a set of correction factors is developed to extend the applicability of the new CHF table to flow in rod bundles of square array. The correction factors are developed by minimizing the statistical parameters of the ratio of the measured and predicted bundle CHF data from the Heat Transfer Research Facility. The proposed correction factors include: the hydraulic diameter factor (K hy ), the bundle factor (K bf ), the heated length factor (K hl ), the grid spacer factor (K sp ), the axial flux distribution factors (K nu ), the cold wall factor (K cw ) and the radial power distribution factor (K rp ). The value of constants in these correction factors is different when the heat balance method (HBM) and direct substitution method (DSM) are adopted to predict the experimental results of HTRF. With the 1995 Groeneveld CHF Table and the proposed correction factors, the average relative error is 0.1 and 0.0% for HBM and DSM, respectively, and the root mean square (RMS) error is 31.7% in DSM and 17.7% in HBM for 9852 square array data points of HTRF. (orig.)
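
    Schematically, the approach amounts to interpolating a base CHF value from a look-up table in (pressure, mass flux, quality) and multiplying it by the product of the correction factors. The sketch below illustrates only that mechanism; the table values, grid and correction factors are placeholders, not values from the 1995 table or from this paper.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Toy stand-in for a CHF look-up table: axes are pressure (kPa),
# mass flux (kg/m^2/s) and thermodynamic quality; values in kW/m^2.
pressure = np.array([2000.0, 7000.0, 12000.0, 16000.0])
mass_flux = np.array([500.0, 1500.0, 3000.0, 5000.0])
quality = np.array([-0.2, 0.0, 0.2, 0.4, 0.6])
table = 3000.0 + 500.0 * np.random.default_rng(12).random((4, 4, 5))  # placeholder values

chf_table = RegularGridInterpolator((pressure, mass_flux, quality), table)

def bundle_chf(p, g, x, K):
    """Base CHF from the table times the product of correction factors K."""
    base = float(chf_table([[p, g, x]])[0])
    return base * np.prod(list(K.values()))

# Placeholder correction factors (K_hy, K_bf, K_hl, K_sp, K_nu, K_cw, K_rp).
K = {"K_hy": 0.95, "K_bf": 0.90, "K_hl": 1.00, "K_sp": 1.05,
     "K_nu": 1.00, "K_cw": 0.97, "K_rp": 0.92}
print("predicted bundle CHF:", round(bundle_chf(10000.0, 2000.0, 0.1, K), 1), "kW/m^2")
```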

  5. Two Enhancements of the Logarithmic Least-Squares Method for Analyzing Subjective Comparisons

    Science.gov (United States)

    1989-03-25

    error term. For this model, the total sum of squares (SSTO), defined as SSTO = Σ_{i=1}^{n} (y_i − ȳ)², can be partitioned into error and regression sums ... of the regression line around the mean value. Mathematically, for the model given by equation A.4, SSTO = SSE + SSR (A.6), where SSTO is the total sum of squares (i.e., the variance of the y_i's), SSE is the error sum of squares, and SSR is the regression sum of squares. SSTO, SSE, and SSR are given
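
    For reference, the decomposition quoted in this snippet can be computed directly; this is a generic illustration, not code from the cited report.

        import numpy as np

        def sums_of_squares(y, y_hat):
            """Decompose the total sum of squares into regression and error parts.

            SSTO = sum (y_i - y_bar)^2, SSE = sum (y_i - y_hat_i)^2,
            SSR = sum (y_hat_i - y_bar)^2; SSTO = SSE + SSR holds for a
            least-squares fit that includes an intercept.
            """
            y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
            y_bar = y.mean()
            ssto = np.sum((y - y_bar) ** 2)
            sse = np.sum((y - y_hat) ** 2)
            ssr = np.sum((y_hat - y_bar) ** 2)
            return ssto, sse, ssr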

  6. Square Van Atta reflector with conducting mounting plate

    DEFF Research Database (Denmark)

    Nielsen, Erik Dragø

    1970-01-01

    A theoretical and numerical analysis of square Van Atta reflectors has been carried out with or without a conducting plate, used for mounting of the antenna elements. The Van Atta reflector investigated has antenna elements which are parallel half-wave dipoles interconnected in pairs by transmission lines.

  7. Radiation Field of a Square, Helical Beam Antenna

    DEFF Research Database (Denmark)

    Knudsen, Hans Lottrup

    1952-01-01

    square helices are used. Further, in connection with corresponding rigorous formulas for the field from a circular, helical antenna with a uniformly progressing current wave of constant amplitude, the present formulas may be used for an investigation of the magnitude of the error introduced in Kraus' approximate calculation of the field from a circular, helical antenna by replacing this antenna with an "equivalent" square helix. This investigation is carried out by means of a numerical example. The investigation shows that Kraus' approximate method of calculation yields results in fair agreement...

  8. The importance for speech intelligibility of random fluctuations in "steady" background noise.

    Science.gov (United States)

    Stone, Michael A; Füllgrabe, Christian; Mackinnon, Robert C; Moore, Brian C J

    2011-11-01

    Spectrally shaped steady noise is commonly used as a masker of speech. The effects of inherent random fluctuations in amplitude of such a noise are typically ignored. Here, the importance of these random fluctuations was assessed by comparing two cases. For one, speech was mixed with steady speech-shaped noise and N-channel tone vocoded, a process referred to as signal-domain mixing (SDM); this preserved the random fluctuations of the noise. For the second, the envelope of speech alone was extracted for each vocoder channel and a constant was added corresponding to the root-mean-square value of the noise envelope for that channel. This is referred to as envelope-domain mixing (EDM); it removed the random fluctuations of the noise. Sinusoidally modulated noise and a single talker were also used as backgrounds, with both SDM and EDM. Speech intelligibility was measured for N = 12, 19, and 30, with the target-to-background ratio fixed at -7 dB. For SDM, performance was best for the speech background and worst for the steady noise. For EDM, this pattern was reversed. Intelligibility with steady noise was consistently very poor for SDM, but near-ceiling for EDM, demonstrating that the random fluctuations in steady noise have a large effect.

  9. Qualitative values of radioactivity, area and volumetric: Application on phantoms (target and background)

    Energy Technology Data Exchange (ETDEWEB)

    Abdel-Rahman Al-Shakhrah, Issa [Department of Physics, University of Jordan, Queen Rania Street, Amman (Jordan)], E-mail: issashak@yahoo.com

    2009-04-15

    The visualization of a lesion depends on the contrast between the lesion and surrounding background (T/B; target/background ratio). For imaging in vivo not only is the radioactivity in the target organ important, but so too is the ratio of radioactivity in the target versus that in the background. Nearly all studies reported in the literature have dealt with the surface index as a standard factor to study the relationship between the target (tissue or organ) and the background. It is necessary to know the ratio between the volumetric activity of lesions (targets) and normal tissues (background) instead of knowing the ratio between the area activity, the volume index being a more realistic factor than the area index as the targets (tissues or organs) are real volumes that have surfaces. The intention is that this work should aid in approaching a quantitative relationship and differentiation between different tissues (target/background or abnormal/normal tissues). For the background, square regions of interest (ROIs) (11 × 11 pixels in size) were manually drawn by the observer at locations far from the border of the plastic cylinder (simulated organ), while an isocontour region with 50% threshold was drawn automatically over the cylinder. The total number of counts and pixels in each of these regions was calculated. The relationship between different phantom parameters, cylinder (target) depth, area activity ratio (background/target, A(B/T)) and real volumetric activity ratio (background/target, V(B/T)), was demonstrated. Variations in the area and volumetric activity ratio values with respect to the depth were deduced. To find a realistic value of the ratio, calibration charts have been constructed that relate the area and real volumetric ratios as a function of depth of the tissues and organs. Our experiments show that the cross-sectional area of the cylinder (applying a threshold 50% isocontour) has a weak dependence on the activity concentrations of the

  10. Qualitative values of radioactivity, area and volumetric: Application on phantoms (target and background)

    International Nuclear Information System (INIS)

    Abdel-Rahman Al-Shakhrah, Issa

    2009-01-01

    The visualization of a lesion depends on the contrast between the lesion and surrounding background (T/B; target/background ratio). For imaging in vivo not only is the radioactivity in the target organ important, but so too is the ratio of radioactivity in the target versus that in the background. Nearly all studies reported in the literature have dealt with the surface index as a standard factor to study the relationship between the target (tissue or organ) and the background. It is necessary to know the ratio between the volumetric activity of lesions (targets) and normal tissues (background) instead of knowing the ratio between the area activity, the volume index being a more realistic factor than the area index as the targets (tissues or organs) are real volumes that have surfaces. The intention is that this work should aid in approaching a quantitative relationship and differentiation between different tissues (target/background or abnormal/normal tissues). For the background, square regions of interest (ROIs) (11 × 11 pixels in size) were manually drawn by the observer at locations far from the border of the plastic cylinder (simulated organ), while an isocontour region with 50% threshold was drawn automatically over the cylinder. The total number of counts and pixels in each of these regions was calculated. The relationship between different phantom parameters, cylinder (target) depth, area activity ratio (background/target, A(B/T)) and real volumetric activity ratio (background/target, V(B/T)), was demonstrated. Variations in the area and volumetric activity ratio values with respect to the depth were deduced. To find a realistic value of the ratio, calibration charts have been constructed that relate the area and real volumetric ratios as a function of depth of the tissues and organs. Our experiments show that the cross-sectional area of the cylinder (applying a threshold 50% isocontour) has a weak dependence on the activity concentrations of the

  11. Application of least square support vector machine and multivariate adaptive regression spline models in long term prediction of river water pollution

    Science.gov (United States)

    Kisi, Ozgur; Parmar, Kulwinder Singh

    2016-03-01

    This study investigates the accuracy of least square support vector machine (LSSVM), multivariate adaptive regression splines (MARS) and M5 model tree (M5Tree) in modeling river water pollution. Various combinations of water quality parameters, Free Ammonia (AMM), Total Kjeldahl Nitrogen (TKN), Water Temperature (WT), Total Coliform (TC), Fecal Coliform (FC) and Potential of Hydrogen (pH), monitored at Nizamuddin, Delhi, on the Yamuna River in India, were used as inputs to the applied models. Results indicated that the LSSVM and MARS models had almost the same accuracy and both performed better than the M5Tree model in modeling monthly chemical oxygen demand (COD). The average root mean square error (RMSE) of the LSSVM and M5Tree models was decreased by 1.47% and 19.1%, respectively, when the MARS model was used instead. Adding the TC input to the models did not increase their accuracy in modeling COD, while adding the FC and pH inputs generally decreased the accuracy. The overall results indicated that the MARS and LSSVM models could be successfully used in estimating monthly river water pollution level by using the AMM, TKN and WT parameters as inputs.
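
    A hedged sketch of the RMSE-based model comparison described above, using generic scikit-learn regressors as stand-ins (LSSVM, MARS and M5Tree are not part of scikit-learn) and synthetic data in place of the Yamuna River records.

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.tree import DecisionTreeRegressor
        from sklearn.model_selection import train_test_split

        def rmse(y_true, y_pred):
            return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

        # Stand-ins for the water-quality inputs (AMM, TKN, WT) and the COD target.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 3))
        y = 2.0 * X[:, 0] - X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.3, size=200)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

        models = {"kernel SVR (LSSVM-like)": SVR(kernel="rbf"),
                  "regression tree (M5Tree-like)": DecisionTreeRegressor(max_depth=4)}
        for name, model in models.items():
            model.fit(X_tr, y_tr)
            print(name, "RMSE:", rmse(y_te, model.predict(X_te)))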

  12. Large Eddy Simulation of turbulence induced secondary flows in stationary and rotating straight square ducts

    Science.gov (United States)

    Sudjai, W.; Juntasaro, V.; Juttijudata, V.

    2018-01-01

    The accuracy of predicting turbulence induced secondary flows is crucially important in many industrial applications such as turbine blade internal cooling passages in a gas turbine and fuel rod bundles in a nuclear reactor. A straight square duct is popularly used to reveal the characteristic of turbulence induced secondary flows which consists of two counter rotating vortices distributed in each corner of the duct. For a rotating duct, the flow can be divided into the pressure side and the suction side. The turbulence induced secondary flows are converted to the Coriolis force driven two large circulations with a pair of additional vortices on the pressure wall due to the rotational effect. In this paper, the Large Eddy Simulation (LES) of turbulence induced secondary flows in a straight square duct is performed using the ANSYS FLUENT CFD software. A dynamic kinetic energy subgrid-scale model is used to describe the three-dimensional incompressible turbulent flows in the stationary and the rotating straight square ducts. The Reynolds number based on the friction velocity and the hydraulic diameter is 300 with the various rotation numbers for the rotating cases. The flow is assumed fully developed by imposing the constant pressure gradient in the streamwise direction. For the rotating cases, the rotational axis is placed perpendicular to the streamwise direction. The simulation results on the secondary flows and the turbulent statistics are found to be in good agreement with the available Direct Numerical Simulation (DNS) data. Finally, the details of the Coriolis effects are discussed.

  13. Dim point target detection against bright background

    Science.gov (United States)

    Zhang, Yao; Zhang, Qiheng; Xu, Zhiyong; Xu, Junping

    2010-05-01

    For target detection within a large-field cluttered background from a long distance, several difficulties, including low contrast between target and background, small pixel occupancy of the target, illumination non-uniformity caused by vignetting of the lens, and system noise, make it a challenging problem. The existing approaches to dim target detection can be roughly divided into two categories: detection before tracking (DBT) and tracking before detection (TBD). The DBT-based scheme has been widely used in practical applications due to its simplicity, but it often requires a relatively high signal-to-noise ratio (SNR). In contrast, the TBD-based methods can provide impressive detection results even in cases of very low SNR; unfortunately, the large memory requirement and high computational load prevent these methods from being used in real-time tasks. In this paper, we propose a new method for dim target detection. We address this problem by combining the computational efficiency of the DBT-based scheme with the detection capability of the TBD-based methods. Our method first predicts the local background, and then employs energy accumulation and median filtering to remove background clutter. The dim target is finally located by double-window filtering together with an improved high-order correlation which speeds up the convergence. The proposed method is implemented on a hardware platform and performs well in outdoor experiments.
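
    The authors' pipeline is not reproduced here, but the core background-suppression step (predict the local background, subtract it, then median-filter the residual and threshold it) can be sketched as below; the window sizes and detection threshold are arbitrary assumptions.

        import numpy as np
        from scipy.ndimage import median_filter, uniform_filter

        def suppress_background(frame, bg_window=15, med_window=3):
            """Predict the local background with a large mean filter, subtract it,
            then remove residual impulsive clutter with a small median filter."""
            background = uniform_filter(frame.astype(float), size=bg_window)
            residual = frame.astype(float) - background
            return median_filter(residual, size=med_window)

        def detect_candidates(residual, k=5.0):
            """Threshold the residual at k times a robust (MAD-based) noise estimate."""
            sigma = 1.4826 * np.median(np.abs(residual - np.median(residual)))
            return np.argwhere(residual > k * sigma)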

  14. Support-Vector-based Least Squares for learning non-linear dynamics

    NARCIS (Netherlands)

    de Kruif, B.J.; de Vries, Theodorus J.A.

    2002-01-01

    A function approximator is introduced that is based on least squares support vector machines (LSSVM) and on least squares (LS). The potential indicators for the LS method are chosen as the kernel functions of all the training samples similar to LSSVM. By selecting these as indicator functions the
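
    A minimal sketch of the underlying idea: regularized least squares in which every training sample contributes a kernel basis function, as in LS-SVM. The indicator-selection step mentioned in the truncated abstract is not reproduced, and the RBF kernel and regularization value are assumptions.

        import numpy as np

        def rbf_kernel(X, Y, gamma=1.0):
            """Gaussian (RBF) kernel matrix between two sample sets."""
            d2 = (np.sum(X ** 2, axis=1)[:, None]
                  + np.sum(Y ** 2, axis=1)[None, :] - 2.0 * X @ Y.T)
            return np.exp(-gamma * d2)

        def fit_kernel_ls(X, y, gamma=1.0, reg=1e-3):
            """Least-squares fit in the span of kernels centred on all training samples."""
            K = rbf_kernel(X, X, gamma)
            return np.linalg.solve(K + reg * np.eye(len(X)), y)

        def predict_kernel_ls(X_train, alpha, X_new, gamma=1.0):
            return rbf_kernel(X_new, X_train, gamma) @ alpha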

  15. Lateralization of noise bursts in interaurally correlated or uncorrelated background noise using interaural level differences.

    Science.gov (United States)

    Reed, Darrin K; van de Par, Steven

    2015-10-01

    The interaural level difference (ILD) of a lateralized target source may be effectively reduced when the target is presented together with background noise containing zero ILD. It is not certain whether listeners perceive a position congruent with the reduced ILD or the actual target ILD in a lateralization task. Two sets of behavioral experiments revealed that many listeners perceived a position at or even larger than that corresponding to the presented target ILD when a temporal onset/offset asynchrony between the broadband target and the broadband background noise was present. When no temporal asynchrony was present, however, the perceived lateral position indicated a dependency on the coherence of the background noise for several listeners. With interaurally correlated background noise, listeners reported a reduced ILD resulting from the combined target and background noise stimulus. In contrast, several of the listeners made a reasonable estimate of the position corresponding to the target ILD for interaurally uncorrelated, broadband, background noise. No obvious difference in performance was seen between low- or high-frequency stimuli. Extension of a weighting template to the output of a standard equalization-cancellation model was shown to remove a lateral bias on the predicted target ILD resulting from the presence of background noise. Provided that an appropriate weighting template is applied based on knowledge of the background noise coherence, good prediction of the behavioral data is possible.

  16. Evaluating and comparing algorithms for respiratory motion prediction

    International Nuclear Information System (INIS)

    Ernst, F; Dürichen, R; Schlaefer, A; Schweikard, A

    2013-01-01

    In robotic radiosurgery, it is necessary to compensate for systematic latencies arising from target tracking and mechanical constraints. This compensation is usually achieved by means of an algorithm which computes the future target position. In most scientific works on respiratory motion prediction, only one or two algorithms are evaluated on a limited amount of very short motion traces. The purpose of this work is to gain more insight into the real world capabilities of respiratory motion prediction methods by evaluating many algorithms on an unprecedented amount of data. We have evaluated six algorithms, the normalized least mean squares (nLMS), recursive least squares (RLS), multi-step linear methods (MULIN), wavelet-based multiscale autoregression (wLMS), extended Kalman filtering, and ε-support vector regression (SVRpred) methods, on an extensive database of 304 respiratory motion traces. The traces were collected during treatment with the CyberKnife (Accuray, Inc., Sunnyvale, CA, USA) and feature an average length of 71 min. Evaluation was done using a graphical prediction toolkit, which is available to the general public, as is the data we used. The experiments show that the nLMS algorithm—which is one of the algorithms currently used in the CyberKnife—is outperformed by all other methods. This is especially true in the case of the wLMS, the SVRpred, and the MULIN algorithms, which perform much better. The nLMS algorithm produces a relative root mean square (RMS) error of 75% or less (i.e., a reduction in error of 25% or more when compared to not doing prediction) in only 38% of the test cases, whereas the MULIN and SVRpred methods reach this level in more than 77%, the wLMS algorithm in more than 84% of the test cases. Our work shows that the wLMS algorithm is the most accurate algorithm and does not require parameter tuning, making it an ideal candidate for clinical implementation. Additionally, we have seen that the structure of a patient
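
    Of the algorithms named above, the normalized LMS predictor is simple enough to sketch; the filter order and step size below are illustrative assumptions, not the CyberKnife settings.

        import numpy as np

        def nlms_predict(signal, order=4, mu=0.5, eps=1e-6):
            """One-step-ahead prediction of a motion trace with normalized LMS.

            signal: 1-D array of (resampled) respiratory positions
            order:  number of past samples used as the regressor
            mu:     adaptation step size (0 < mu < 2 for stability)
            """
            w = np.zeros(order)
            preds = np.zeros_like(signal, dtype=float)
            for n in range(order, len(signal)):
                x = signal[n - order:n][::-1]      # most recent sample first
                preds[n] = w @ x                   # predicted next position
                e = signal[n] - preds[n]           # prediction error
                w += mu * e * x / (eps + x @ x)    # normalized LMS update
            return preds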

  17. Predicting clinically unrecognized coronary artery disease: use of two- dimensional echocardiography

    Directory of Open Access Journals (Sweden)

    Nagueh Sherif F

    2009-03-01

    Full Text Available Abstract Background 2-D Echo is often performed in patients without a history of coronary artery disease (CAD). We sought to determine echo features predictive of CAD. Methods 2-D Echo studies of 328 patients without known CAD, performed within one year prior to stress myocardial SPECT and angiography, were reviewed. Echo features examined were left ventricular and atrial enlargement, LV hypertrophy, wall motion abnormality (WMA), LV ejection fraction (EF 15% LV perfusion defect or multivessel distribution. Severe coronary artery stenosis (CAS) was defined as left main, 3VD or 2VD involving the proximal LAD. Results The mean age was 62 ± 13 years, 59% were men, 29% were diabetic (DM), and 148 (45%) had >2 risk factors. Pharmacologic stress was performed in 109 patients (33%). MPA was present in 200 patients (60%), of which 137 were high risk. CAS was present in 166 patients (51%); 75 were severe. Of 87 patients with WMA, 83% had MPA and 78% had CAS. Multivariate analysis identified age >65, male sex, inability to exercise, DM, WMA, MAC and AS as independent predictors of MPA and CAS. Independent predictors of high-risk MPA and severe CAS were age, DM, inability to exercise and WMA. 2-D echo findings offered incremental value over clinical information in predicting CAD by angiography (chi-square: 360 vs. 320, p = 0.02). Conclusion 2-D Echo was valuable in predicting the presence of physiological and anatomical CAD in addition to clinical information.

  18. Mechanic properties analysis of quasi-square honeycomb sandwich structure's core

    Directory of Open Access Journals (Sweden)

    Guan TONG

    2017-12-01

    Full Text Available In order to illustrate the relationship between the quasi-square honeycomb structure and the hexagonal honeycomb structure, the quasi-square honeycomb sandwich structure is decomposed into a unique T-shaped cell, and the equivalent elastic constant equations of the T-shaped cell model are derived by applying Euler beam theory and the energy method. In addition, the characteristic structural parameters of the quasi-square honeycomb are substituted into the equivalent elastic constant equations derived by the classical method for a hexagonal honeycomb core, and the same results are obtained as with the two preceding methods. This proves that the quasi-square honeycomb structure is an evolution of the hexagonal honeycomb. The limitations and scope of application of the two classical honeycomb formulas are pointed out. The study of the structural characteristics of the square-shaped honeycomb shows that the classical cellular theoretical formulas become singular and inaccurate when the feature angle equals or approaches zero. This study provides a useful reference for subsequent research on, and improvement of, theories of the mechanical properties of cellular structures.

  19. SUPPLEMENTARY COMPARISON: EUROMET.L-S10 Comparison of squareness measurements

    Science.gov (United States)

    Mokros, Jiri

    2005-01-01

    The idea of performing a comparison of squareness resulted from the need to review the MRA Appendix C, Category 90° square. At its meeting in October 1999 (in Prague) it was decided upon a first comparison of squareness measurements in the framework of EUROMET, numbered #570, starting in 2000, with the Slovak Institute of Metrology (SMU) as the pilot laboratory. During the preparation stage of the project, it was agreed that it should be submitted as a EUROMET supplementary comparison in the framework of the Mutual Recognition Arrangement (MRA) of the Metre Convention and would boost confidence in calibration and measurement certificates issued by the participating national metrology institutes. The aim of the comparison of squareness measurement was to compare and verify the declared calibration measurement capabilities of participating laboratories and to investigate the effect of systematic influences in the measurement process and their elimination. Eleven NMIs from the EUROMET region carried out this project. Two standards were calibrated: granite squareness standard of rectangular shape, cylindrical squareness standard of steel with marked positions for the profile lines. The following parameters had to be calibrated: granite squareness standard: interior angle γB between two lines AB and AC (envelope - LS regression) fitted through the measured profiles, and/or granite squareness standard: interior angle γLS between two LS regression lines AB and AC fitted through the measured profiles, cylindrical squareness standard: interior angles γ0°, γ90°, γ180°, γ270° between the LS regression line fitted through the measurement profiles at 0°, 90°, 180°, 270° and the envelope plane of the basis (resting on a surface plate), local LS straightness deviation for all measured profiles (2 and 4) of both standards. The results of the comparison are the deviations of profiles and angles measured by the individual NMIs from the reference values. These resulted

  20. Figure-of-merit (FOM), an improved criterion over the normalized chi-squared test for assessing goodness-of-fit of gamma-ray spectral peaks

    International Nuclear Information System (INIS)

    Garo Balian, H.; Eddy, N.W.

    1977-01-01

    A careful experimenter knows that in order to choose the best curve fits of peaks from a gamma ray spectrum for such purposes as energy or intensity calibration, half-life determination, etc., the application of the normalized chi-squared test, χ²_N = χ²/(n − m), is insufficient. One must normally verify the goodness-of-fit with plots, detailed scans of residuals, etc. Because of different techniques of application, variations in backgrounds, in peak sizes and shapes, etc., quotation of the χ²_N value associated with an individual peak fit conveys very little information unless accompanied by considerable ancillary data. (This is not to say that the traditional χ² formula should not be used as the source of the normal equations in the least squares fitting procedure. But after the fitting, it is unreliable as a criterion for comparison with other fits.) The authors present a formula designated figure-of-merit (FOM) which greatly improves on the uncertainty and fluctuations of the χ²_N formula. An FOM value of less than 2.5% indicates a good fit (in the authors' judgement) irrespective of background conditions and variations in peak sizes and shapes. Furthermore, the authors feel the FOM formula is less subject to fluctuations resulting from different techniques of application. (Auth.)
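
    For reference, the normalized chi-squared quoted above can be computed as below, with Poisson variances approximated by the observed counts; the FOM formula itself is not given in the abstract and is therefore not reproduced.

        import numpy as np

        def reduced_chi_squared(counts, model, n_params):
            """Normalized chi-squared of a peak fit, chi^2_N = chi^2 / (n - m)."""
            counts = np.asarray(counts, float)
            model = np.asarray(model, float)
            chi2 = np.sum((counts - model) ** 2 / np.clip(counts, 1.0, None))
            return chi2 / (counts.size - n_params)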

  1. A Search for WIMP Dark Matter Using an Optimized Chi-square Technique on the Final Data from the Cryogenic Dark Matter Search Experiment (CDMS II)

    Energy Technology Data Exchange (ETDEWEB)

    Manungu Kiveni, Joseph [Syracuse Univ., NY (United States)

    2012-12-01

    This dissertation describes the results of a WIMP search using CDMS II data sets accumulated at the Soudan Underground Laboratory in Minnesota. Results from the original analysis of these data were published in 2009; two events were observed in the signal region with an expected leakage of 0.9 events. Further investigation revealed an issue with the ionization-pulse reconstruction algorithm, leading to a software upgrade and a subsequent reanalysis of the data. As part of the reanalysis, I performed an advanced discrimination technique to better distinguish (potential) signal events from backgrounds using a 5-dimensional chi-square method. This data-analysis technique combines the event information recorded for each WIMP-search event to derive a background-discrimination parameter capable of reducing the expected background to less than one event, while maintaining high efficiency for signal events. Furthermore, optimizing the cut positions of this 5-dimensional chi-square parameter for the 14 viable germanium detectors yields an improved expected sensitivity to WIMP interactions relative to previous CDMS results. This dissertation describes my improved (and optimized) discrimination technique and the results obtained from a blind application to the reanalyzed CDMS II WIMP-search data.

  2. Square pulse current wave’s effect on electroplated nickel hardness

    Directory of Open Access Journals (Sweden)

    Bibian Alonso Hoyos

    2006-09-01

    Full Text Available The effects of frequency, average current density and duty cycle on the hardness of electroplated nickel were studied in Watts and sulphamate solutions by means of direct and square pulse current. The results in Watts’ solutions revealed greater hardness at low duty cycle, high average current density and high square pulse current frequency. There was little variation in hardness in nickel sulphamate solutions to changes in duty cycle and wave frequency. Hardness values obtained in the Watts’ bath with square pulse current were higher than those achieved with direct current at the same average current density; such difference was not significant in sulphamate bath treatment.

  3. Least-squares methods involving the H^{-1} inner product

    Energy Technology Data Exchange (ETDEWEB)

    Pasciak, J.

    1996-12-31

    Least-squares methods are being shown to be an effective technique for the solution of elliptic boundary value problems. However, the methods differ depending on the norms in which they are formulated. For certain problems, it is much more natural to consider least-squares functionals involving the H^{-1} norm. Such norms give rise to improved convergence estimates and better approximation to problems with low regularity solutions. In addition, fewer new variables need to be added and less stringent boundary conditions need to be imposed. In this talk, I will describe some recent developments involving least-squares methods utilizing the H^{-1} inner product.

  4. The Chaotic Prediction for Aero-Engine Performance Parameters Based on Nonlinear PLS Regression

    Directory of Open Access Journals (Sweden)

    Chunxiao Zhang

    2012-01-01

    Full Text Available The prediction of aero-engine performance parameters is very important for aero-engine condition monitoring and fault diagnosis. In this paper, the chaotic phase space of an engine exhaust temperature (EGT) time series, which comes from actual air-borne ACARS data, is reconstructed through selecting suitable nearby points. Partial least squares (PLS) based on the cubic spline function or the kernel function transformation is adopted to obtain a chaotic predictive function of the EGT series. The experimental results indicate that the proposed PLS chaotic prediction algorithm based on the biweight kernel function transformation has a significant advantage in overcoming multicollinearity of the independent variables and in improving the stability of the regression model. Our predictive NMSE is 16.5 percent less than that of the traditional linear least squares (OLS) method and 10.38 percent less than that of the linear PLS approach. At the same time, the forecast error is less than that of the nonlinear PLS algorithm through bootstrap test screening.
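
    A hedged sketch of the general approach, delay embedding followed by PLS regression for one-step prediction; scikit-learn's linear PLSRegression stands in for the authors' kernel-transformed PLS, and the synthetic series stands in for the ACARS EGT data.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        def delay_embed(series, dim, tau=1):
            """Reconstruct a phase space from a scalar series (delay embedding).
            Returns regressor vectors and the corresponding next value."""
            n = len(series) - (dim - 1) * tau
            X = np.column_stack([series[i * tau:i * tau + n] for i in range(dim)])
            return X[:-1], series[(dim - 1) * tau + 1:]

        # Illustrative EGT-like series (synthetic); the real data are ACARS records.
        t = np.arange(2000)
        egt = 500 + 20 * np.sin(0.05 * t) + np.random.default_rng(1).normal(scale=1.0, size=t.size)

        X, y = delay_embed(egt, dim=5)
        pls = PLSRegression(n_components=3).fit(X[:-200], y[:-200])
        y_hat = pls.predict(X[-200:]).ravel()
        print("one-step NMSE:", np.mean((y[-200:] - y_hat) ** 2) / np.var(y[-200:]))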

  5. Hanford Site background: Part 1, Soil background for nonradioactive analytes

    International Nuclear Information System (INIS)

    1993-04-01

    Volume two contains the following appendices: description of soil sampling sites; sampling narrative; raw soil background data; background data analysis; sitewide background soil sampling plan; and use of soil background data for the detection of contamination at waste management units on the Hanford Site.

  6. Group-wise partial least square regression

    NARCIS (Netherlands)

    Camacho, José; Saccenti, Edoardo

    2018-01-01

    This paper introduces the group-wise partial least squares (GPLS) regression. GPLS is a new sparse PLS technique where the sparsity structure is defined in terms of groups of correlated variables, similarly to what is done in the related group-wise principal component analysis. These groups are

  7. Least squares approach for initial data recovery in dynamic data-driven applications simulations

    KAUST Repository

    Douglas, C.

    2010-12-01

    In this paper, we consider the initial data recovery and the solution update based on the local measured data that are acquired during simulations. Each time new data is obtained, the initial condition, which is a representation of the solution at a previous time step, is updated. The update is performed using the least squares approach. The objective function is set up based on both a measurement error as well as a penalization term that depends on the prior knowledge about the solution at previous time steps (or initial data). Various numerical examples are considered, where the penalization term is varied during the simulations. Numerical examples demonstrate that the predictions are more accurate if the initial data are updated during the simulations. © Springer-Verlag 2011.
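
    A minimal sketch of a least-squares update of this kind, combining a measurement misfit with a quadratic penalty toward the prior initial data; the closed-form solve below is a generic Tikhonov-style illustration under an assumed linear forward map, not the authors' solver.

        import numpy as np

        def update_initial_data(A, d_obs, x_prior, beta):
            """Solve min_x ||A x - d_obs||^2 + beta ||x - x_prior||^2 in closed form.

            A:       linear map from the initial data to the measured quantities
            d_obs:   newly acquired measurements
            x_prior: current estimate of the initial data (previous time step)
            beta:    penalization weight on the prior knowledge
            """
            n = x_prior.size
            lhs = A.T @ A + beta * np.eye(n)
            rhs = A.T @ d_obs + beta * x_prior
            return np.linalg.solve(lhs, rhs)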

  8. Predicting performance using background characteristics of international medical graduates in an inner-city university-affiliated Internal Medicine residency training program

    Science.gov (United States)

    Kanna, Balavenkatesh; Gu, Ying; Akhuetie, Jane; Dimitrov, Vihren

    2009-01-01

    I & step II clinical skills scores were 85 (IQR: 80–88) & 82 (IQR: 79–87), respectively. The median aggregate CBE scores during training were: PG1 5.8 (IQR: 5.6–6.3); PG2 6.3 (IQR: 6–6.8) & PG3 6.7 (IQR: 6.7–7.1). 25% of our residents scored consistently above US national median ITE scores in all 3 years of training and 16% pursued a fellowship. Younger residents had higher aggregate annual CBE score than the program median (p ITE scores, reflecting exam-taking skills. Success in acquiring a fellowship was associated with consistent fellowship interest (p < 0.05) and research publications or presentations (p < 0.05). None of the other characteristics, including visa status, were associated with the outcomes. Conclusion: Background IMG features, namely age and USMLE scores, predict performance evaluation and in-training examination scores during residency training. In addition, enhanced research activities during residency training could facilitate fellowship goals among interested IMGs. PMID:19594918

  9. GW150914: Implications for the Stochastic Gravitational-Wave Background from Binary Black Holes.

    Science.gov (United States)

    Abbott, B P; Abbott, R; Abbott, T D; Abernathy, M R; Acernese, F; Ackley, K; Adams, C; Adams, T; Addesso, P; Adhikari, R X; Adya, V B; Affeldt, C; Agathos, M; Agatsuma, K; Aggarwal, N; Aguiar, O D; Aiello, L; Ain, A; Ajith, P; Allen, B; Allocca, A; Altin, P A; Anderson, S B; Anderson, W G; Arai, K; Araya, M C; Arceneaux, C C; Areeda, J S; Arnaud, N; Arun, K G; Ascenzi, S; Ashton, G; Ast, M; Aston, S M; Astone, P; Aufmuth, P; Aulbert, C; Babak, S; Bacon, P; Bader, M K M; Baker, P T; Baldaccini, F; Ballardin, G; Ballmer, S W; Barayoga, J C; Barclay, S E; Barish, B C; Barker, D; Barone, F; Barr, B; Barsotti, L; Barsuglia, M; Barta, D; Bartlett, J; Bartos, I; Bassiri, R; Basti, A; Batch, J C; Baune, C; Bavigadda, V; Bazzan, M; Behnke, B; Bejger, M; Bell, A S; Bell, C J; Berger, B K; Bergman, J; Bergmann, G; Berry, C P L; Bersanetti, D; Bertolini, A; Betzwieser, J; Bhagwat, S; Bhandare, R; Bilenko, I A; Billingsley, G; Birch, J; Birney, R; Biscans, S; Bisht, A; Bitossi, M; Biwer, C; Bizouard, M A; Blackburn, J K; Blair, C D; Blair, D G; Blair, R M; Bloemen, S; Bock, O; Bodiya, T P; Boer, M; Bogaert, G; Bogan, C; Bohe, A; Bojtos, P; Bond, C; Bondu, F; Bonnand, R; Boom, B A; Bork, R; Boschi, V; Bose, S; Bouffanais, Y; Bozzi, A; Bradaschia, C; Brady, P R; Braginsky, V B; Branchesi, M; Brau, J E; Briant, T; Brillet, A; Brinkmann, M; Brisson, V; Brockill, P; Brooks, A F; Brown, D D; Brown, N M; Buchanan, C C; Buikema, A; Bulik, T; Bulten, H J; Buonanno, A; Buskulic, D; Buy, C; Byer, R L; Cadonati, L; Cagnoli, G; Cahillane, C; Bustillo, J Calderón; Callister, T; Calloni, E; Camp, J B; Cannon, K C; Cao, J; Capano, C D; Capocasa, E; Carbognani, F; Caride, S; Diaz, J Casanueva; Casentini, C; Caudill, S; Cavaglià, M; Cavalier, F; Cavalieri, R; Cella, G; Cepeda, C B; Baiardi, L Cerboni; Cerretani, G; Cesarini, E; Chakraborty, R; Chalermsongsak, T; Chamberlin, S J; Chan, M; Chao, S; Charlton, P; Chassande-Mottin, E; Chen, H Y; Chen, Y; Cheng, C; Chincarini, A; Chiummo, A; Cho, H S; Cho, M; Chow, J H; Christensen, N; Chu, Q; Chua, S; Chung, S; Ciani, G; Clara, F; Clark, J A; Cleva, F; Coccia, E; Cohadon, P-F; Colla, A; Collette, C G; Cominsky, L; Constancio, M; Conte, A; Conti, L; Cook, D; Corbitt, T R; Cornish, N; Corsi, A; Cortese, S; Costa, C A; Coughlin, M W; Coughlin, S B; Coulon, J-P; Countryman, S T; Couvares, P; Cowan, E E; Coward, D M; Cowart, M J; Coyne, D C; Coyne, R; Craig, K; Creighton, J D E; Cripe, J; Crowder, S G; Cumming, A; Cunningham, L; Cuoco, E; Canton, T Dal; Danilishin, S L; D'Antonio, S; Danzmann, K; Darman, N S; Dattilo, V; Dave, I; Daveloza, H P; Davier, M; Davies, G S; Daw, E J; Day, R; DeBra, D; Debreczeni, G; Degallaix, J; De Laurentis, M; Deléglise, S; Del Pozzo, W; Denker, T; Dent, T; Dereli, H; Dergachev, V; DeRosa, R T; De Rosa, R; DeSalvo, R; Dhurandhar, S; Díaz, M C; Di Fiore, L; Di Giovanni, M; Di Lieto, A; Di Pace, S; Di Palma, I; Di Virgilio, A; Dojcinoski, G; Dolique, V; Donovan, F; Dooley, K L; Doravari, S; Douglas, R; Downes, T P; Drago, M; Drever, R W P; Driggers, J C; Du, Z; Ducrot, M; Dwyer, S E; Edo, T B; Edwards, M C; Effler, A; Eggenstein, H-B; Ehrens, P; Eichholz, J; Eikenberry, S S; Engels, W; Essick, R C; Etzel, T; Evans, M; Evans, T M; Everett, R; Factourovich, M; Fafone, V; Fair, H; Fairhurst, S; Fan, X; Fang, Q; Farinon, S; Farr, B; Farr, W M; Favata, M; Fays, M; Fehrmann, H; Fejer, M M; Ferrante, I; Ferreira, E C; Ferrini, F; Fidecaro, F; Fiori, I; Fiorucci, D; Fisher, R P; Flaminio, R; Fletcher, M; Fournier, J-D; Franco, S; Frasca, S; Frasconi, F; 
Frei, Z; Freise, A; Frey, R; Frey, V; Fricke, T T; Fritschel, P; Frolov, V V; Fulda, P; Fyffe, M; Gabbard, H A G; Gair, J R; Gammaitoni, L; Gaonkar, S G; Garufi, F; Gatto, A; Gaur, G; Gehrels, N; Gemme, G; Gendre, B; Genin, E; Gennai, A; George, J; Gergely, L; Germain, V; Ghosh, Archisman; Ghosh, S; Giaime, J A; Giardina, K D; Giazotto, A; Gill, K; Glaefke, A; Goetz, E; Goetz, R; Gondan, L; González, G; Castro, J M Gonzalez; Gopakumar, A; Gordon, N A; Gorodetsky, M L; Gossan, S E; Gosselin, M; Gouaty, R; Graef, C; Graff, P B; Granata, M; Grant, A; Gras, S; Gray, C; Greco, G; Green, A C; Groot, P; Grote, H; Grunewald, S; Guidi, G M; Guo, X; Gupta, A; Gupta, M K; Gushwa, K E; Gustafson, E K; Gustafson, R; Hacker, J J; Hall, B R; Hall, E D; Hammond, G; Haney, M; Hanke, M M; Hanks, J; Hanna, C; Hannam, M D; Hanson, J; Hardwick, T; Haris, K; Harms, J; Harry, G M; Harry, I W; Hart, M J; Hartman, M T; Haster, C-J; Haughian, K; Heidmann, A; Heintze, M C; Heitmann, H; Hello, P; Hemming, G; Hendry, M; Heng, I S; Hennig, J; Heptonstall, A W; Heurs, M; Hild, S; Hoak, D; Hodge, K A; Hofman, D; Hollitt, S E; Holt, K; Holz, D E; Hopkins, P; Hosken, D J; Hough, J; Houston, E A; Howell, E J; Hu, Y M; Huang, S; Huerta, E A; Huet, D; Hughey, B; Husa, S; Huttner, S H; Huynh-Dinh, T; Idrisy, A; Indik, N; Ingram, D R; Inta, R; Isa, H N; Isac, J-M; Isi, M; Islas, G; Isogai, T; Iyer, B R; Izumi, K; Jacqmin, T; Jang, H; Jani, K; Jaranowski, P; Jawahar, S; Jiménez-Forteza, F; Johnson, W W; Jones, D I; Jones, R; Jonker, R J G; Ju, L; Kalaghatgi, C V; Kalogera, V; Kandhasamy, S; Kang, G; Kanner, J B; Karki, S; Kasprzack, M; Katsavounidis, E; Katzman, W; Kaufer, S; Kaur, T; Kawabe, K; Kawazoe, F; Kéfélian, F; Kehl, M S; Keitel, D; Kelley, D B; Kells, W; Kennedy, R; Key, J S; Khalaidovski, A; Khalili, F Y; Khan, I; Khan, S; Khan, Z; Khazanov, E A; Kijbunchoo, N; Kim, C; Kim, J; Kim, K; Kim, Nam-Gyu; Kim, Namjun; Kim, Y-M; King, E J; King, P J; Kinzel, D L; Kissel, J S; Kleybolte, L; Klimenko, S; Koehlenbeck, S M; Kokeyama, K; Koley, S; Kondrashov, V; Kontos, A; Korobko, M; Korth, W Z; Kowalska, I; Kozak, D B; Kringel, V; Królak, A; Krueger, C; Kuehn, G; Kumar, P; Kuo, L; Kutynia, A; Lackey, B D; Landry, M; Lange, J; Lantz, B; Lasky, P D; Lazzarini, A; Lazzaro, C; Leaci, P; Leavey, S; Lebigot, E O; Lee, C H; Lee, H K; Lee, H M; Lee, K; Lenon, A; Leonardi, M; Leong, J R; Leroy, N; Letendre, N; Levin, Y; Levine, B M; Li, T G F; Libson, A; Littenberg, T B; Lockerbie, N A; Logue, J; Lombardi, A L; Lord, J E; Lorenzini, M; Loriette, V; Lormand, M; Losurdo, G; Lough, J D; Lück, H; Lundgren, A P; Luo, J; Lynch, R; Ma, Y; MacDonald, T; Machenschalk, B; MacInnis, M; Macleod, D M; Magaña-Sandoval, F; Magee, R M; Mageswaran, M; Majorana, E; Maksimovic, I; Malvezzi, V; Man, N; Mandel, I; Mandic, V; Mangano, V; Mansell, G L; Manske, M; Mantovani, M; Marchesoni, F; Marion, F; Márka, S; Márka, Z; Markosyan, A S; Maros, E; Martelli, F; Martellini, L; Martin, I W; Martin, R M; Martynov, D V; Marx, J N; Mason, K; Masserot, A; Massinger, T J; Masso-Reid, M; Matichard, F; Matone, L; Mavalvala, N; Mazumder, N; Mazzolo, G; McCarthy, R; McClelland, D E; McCormick, S; McGuire, S C; McIntyre, G; McIver, J; McManus, D J; McWilliams, S T; Meacher, D; Meadors, G D; Meidam, J; Melatos, A; Mendell, G; Mendoza-Gandara, D; Mercer, R A; Merilh, E; Merzougui, M; Meshkov, S; Messenger, C; Messick, C; Meyers, P M; Mezzani, F; Miao, H; Michel, C; Middleton, H; Mikhailov, E E; Milano, L; Miller, J; Millhouse, M; Minenkov, Y; Ming, J; Mirshekari, S; Mishra, 
C; Mitra, S; Mitrofanov, V P; Mitselmakher, G; Mittleman, R; Moggi, A; Mohan, M; Mohapatra, S R P; Montani, M; Moore, B C; Moore, C J; Moraru, D; Moreno, G; Morriss, S R; Mossavi, K; Mours, B; Mow-Lowry, C M; Mueller, C L; Mueller, G; Muir, A W; Mukherjee, Arunava; Mukherjee, D; Mukherjee, S; Mukund, N; Mullavey, A; Munch, J; Murphy, D J; Murray, P G; Mytidis, A; Nardecchia, I; Naticchioni, L; Nayak, R K; Necula, V; Nedkova, K; Nelemans, G; Neri, M; Neunzert, A; Newton, G; Nguyen, T T; Nielsen, A B; Nissanke, S; Nitz, A; Nocera, F; Nolting, D; Normandin, M E N; Nuttall, L K; Oberling, J; Ochsner, E; O'Dell, J; Oelker, E; Ogin, G H; Oh, J J; Oh, S H; Ohme, F; Oliver, M; Oppermann, P; Oram, Richard J; O'Reilly, B; O'Shaughnessy, R; Ottaway, D J; Ottens, R S; Overmier, H; Owen, B J; Pai, A; Pai, S A; Palamos, J R; Palashov, O; Palomba, C; Pal-Singh, A; Pan, H; Pankow, C; Pannarale, F; Pant, B C; Paoletti, F; Paoli, A; Papa, M A; Paris, H R; Parker, W; Pascucci, D; Pasqualetti, A; Passaquieti, R; Passuello, D; Patricelli, B; Patrick, Z; Pearlstone, B L; Pedraza, M; Pedurand, R; Pekowsky, L; Pele, A; Penn, S; Perreca, A; Phelps, M; Piccinni, O; Pichot, M; Piergiovanni, F; Pierro, V; Pillant, G; Pinard, L; Pinto, I M; Pitkin, M; Poggiani, R; Popolizio, P; Post, A; Powell, J; Prasad, J; Predoi, V; Premachandra, S S; Prestegard, T; Price, L R; Prijatelj, M; Principe, M; Privitera, S; Prodi, G A; Prokhorov, L; Puncken, O; Punturo, M; Puppo, P; Pürrer, M; Qi, H; Qin, J; Quetschke, V; Quintero, E A; Quitzow-James, R; Raab, F J; Rabeling, D S; Radkins, H; Raffai, P; Raja, S; Rakhmanov, M; Rapagnani, P; Raymond, V; Razzano, M; Re, V; Read, J; Reed, C M; Regimbau, T; Rei, L; Reid, S; Reitze, D H; Rew, H; Reyes, S D; Ricci, F; Riles, K; Robertson, N A; Robie, R; Robinet, F; Rocchi, A; Rolland, L; Rollins, J G; Roma, V J; Romano, J D; Romano, R; Romanov, G; Romie, J H; Rosińska, D; Rowan, S; Rüdiger, A; Ruggi, P; Ryan, K; Sachdev, S; Sadecki, T; Sadeghian, L; Salconi, L; Saleem, M; Salemi, F; Samajdar, A; Sammut, L; Sanchez, E J; Sandberg, V; Sandeen, B; Sanders, J R; Sassolas, B; Sathyaprakash, B S; Saulson, P R; Sauter, O; Savage, R L; Sawadsky, A; Schale, P; Schilling, R; Schmidt, J; Schmidt, P; Schnabel, R; Schofield, R M S; Schönbeck, A; Schreiber, E; Schuette, D; Schutz, B F; Scott, J; Scott, S M; Sellers, D; Sentenac, D; Sequino, V; Sergeev, A; Serna, G; Setyawati, Y; Sevigny, A; Shaddock, D A; Shah, S; Shahriar, M S; Shaltev, M; Shao, Z; Shapiro, B; Shawhan, P; Sheperd, A; Shoemaker, D H; Shoemaker, D M; Siellez, K; Siemens, X; Sigg, D; Silva, A D; Simakov, D; Singer, A; Singer, L P; Singh, A; Singh, R; Singhal, A; Sintes, A M; Slagmolen, B J J; Smith, J R; Smith, N D; Smith, R J E; Son, E J; Sorazu, B; Sorrentino, F; Souradeep, T; Srivastava, A K; Staley, A; Steinke, M; Steinlechner, J; Steinlechner, S; Steinmeyer, D; Stephens, B C; Stone, R; Strain, K A; Straniero, N; Stratta, G; Strauss, N A; Strigin, S; Sturani, R; Stuver, A L; Summerscales, T Z; Sun, L; Sutton, P J; Swinkels, B L; Szczepańczyk, M J; Tacca, M; Talukder, D; Tanner, D B; Tápai, M; Tarabrin, S P; Taracchini, A; Taylor, R; Theeg, T; Thirugnanasambandam, M P; Thomas, E G; Thomas, M; Thomas, P; Thorne, K A; Thorne, K S; Thrane, E; Tiwari, S; Tiwari, V; Tokmakov, K V; Tomlinson, C; Tonelli, M; Torres, C V; Torrie, C I; Töyrä, D; Travasso, F; Traylor, G; Trifirò, D; Tringali, M C; Trozzo, L; Tse, M; Turconi, M; Tuyenbayev, D; Ugolini, D; Unnikrishnan, C S; Urban, A L; Usman, S A; Vahlbruch, H; Vajente, G; Valdes, G; van Bakel, N; van 
Beuzekom, M; van den Brand, J F J; Van Den Broeck, C; Vander-Hyde, D C; van der Schaaf, L; van Heijningen, J V; van Veggel, A A; Vardaro, M; Vass, S; Vasúth, M; Vaulin, R; Vecchio, A; Vedovato, G; Veitch, J; Veitch, P J; Venkateswara, K; Verkindt, D; Vetrano, F; Viceré, A; Vinciguerra, S; Vine, D J; Vinet, J-Y; Vitale, S; Vo, T; Vocca, H; Vorvick, C; Voss, D; Vousden, W D; Vyatchanin, S P; Wade, A R; Wade, L E; Wade, M; Walker, M; Wallace, L; Walsh, S; Wang, G; Wang, H; Wang, M; Wang, X; Wang, Y; Ward, R L; Warner, J; Was, M; Weaver, B; Wei, L-W; Weinert, M; Weinstein, A J; Weiss, R; Welborn, T; Wen, L; Weßels, P; Westphal, T; Wette, K; Whelan, J T; White, D J; Whiting, B F; Williams, R D; Williamson, A R; Willis, J L; Willke, B; Wimmer, M H; Winkler, W; Wipf, C C; Wittel, H; Woan, G; Worden, J; Wright, J L; Wu, G; Yablon, J; Yam, W; Yamamoto, H; Yancey, C C; Yap, M J; Yu, H; Yvert, M; Zadrożny, A; Zangrando, L; Zanolin, M; Zendri, J-P; Zevin, M; Zhang, F; Zhang, L; Zhang, M; Zhang, Y; Zhao, C; Zhou, M; Zhou, Z; Zhu, X J; Zucker, M E; Zuraw, S E; Zweizig, J

    2016-04-01

    The LIGO detection of the gravitational wave transient GW150914, from the inspiral and merger of two black holes with masses ≳30M_{⊙}, suggests a population of binary black holes with relatively high mass. This observation implies that the stochastic gravitational-wave background from binary black holes, created from the incoherent superposition of all the merging binaries in the Universe, could be higher than previously expected. Using the properties of GW150914, we estimate the energy density of such a background from binary black holes. In the most sensitive part of the Advanced LIGO and Advanced Virgo band for stochastic backgrounds (near 25 Hz), we predict Ω_{GW}(f=25  Hz)=1.1_{-0.9}^{+2.7}×10^{-9} with 90% confidence. This prediction is robustly demonstrated for a variety of formation scenarios with different parameters. The differences between models are small compared to the statistical uncertainty arising from the currently poorly constrained local coalescence rate. We conclude that this background is potentially measurable by the Advanced LIGO and Advanced Virgo detectors operating at their projected final sensitivity.

  10. The slope-background for the near-peak regimen of photoemission spectra

    Energy Technology Data Exchange (ETDEWEB)

    Herrera-Gomez, A., E-mail: aherrera@qro.cinvestav.mx [CINVESTAV-Unidad Queretaro, Queretaro 76230 (Mexico); Bravo-Sanchez, M. [CINVESTAV-Unidad Queretaro, Queretaro 76230 (Mexico); Aguirre-Tostado, F.S. [Centro de Investigación en Materiales Avanzados, Chihuahua, Chihuahua 31109 (Mexico); Vazquez-Lepe, M.O. [Departamento de Ingeniería de Proyectos, Universidad de Guadalajara, Jalisco 44430 (Mexico)

    2013-08-15

    Highlights: •We propose a method that accounts for the change in the background slope of XPS data. •The slope-background can be derived from Tougaard–Sigmund's transport theory. •The total background is composed by Shirley–Sherwood and Tougaard type backgrounds. •The slope-background employs one parameter that can be related to REELS spectra. •The slope, in conjunction with the Shirley–Sherwood background, provides better fits. -- Abstract: Photoemission data typically exhibits a change on the intensity of the background between the two sides of the peaks. This step is usually very well reproduced by the Shirley–Sherwood background. Yet, the change on the slope of the background in the near-peak regime, although usually present, is not always as obvious to the eye. However, the intensity of the background signal associated with the evolution of its slope can be appreciable. The slope-background is designed to empirically reproduce the change on the slope. Resembling the non-iterative Shirley method, the proposed functional form relates the slope of the background to the integrated signal at higher electron kinetic energies. This form can be predicted under Tougaard–Sigmund's electron transport theory in the near-peak regime. To reproduce both the step and slope changes on the background, it is necessary to employ the slope-background in conjunction with the Shirley–Sherwood background under the active-background method. As it is shown for a series of materials, the application of the slope-background provides excellent fits, is transparent to the operator, and is much more independent of the fitting range than other background methods. The total area assessed through the combination of the slope and the Shirley–Sherwood backgrounds is larger than when only the Shirley–Sherwood background is employed, and smaller than when the Tougaard background is employed.
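
    The slope-background itself is not reproduced here, but the Shirley–Sherwood component it is combined with is a standard iterative construction and can be sketched as follows; the endpoint handling and iteration count are assumptions.

        import numpy as np

        def shirley_background(ke, intensity, n_iter=10):
            """Iterative Shirley-Sherwood background for a spectrum ordered by
            increasing kinetic energy: the background at each point is proportional
            to the integrated peak area at higher kinetic energies, pinned to the
            measured intensities at both ends of the fitting range."""
            ke = np.asarray(ke, float)
            I = np.asarray(intensity, float)
            i_lo, i_hi = I[0], I[-1]                  # endpoints: low- and high-KE sides
            B = np.full_like(I, i_hi)
            for _ in range(n_iter):
                peak = np.clip(I - B, 0.0, None)
                seg = (peak[1:] + peak[:-1]) / 2.0 * np.diff(ke)   # trapezoid areas
                area_above = np.concatenate((np.cumsum(seg[::-1])[::-1], [0.0]))
                norm = area_above[0] if area_above[0] > 0 else 1.0
                B = i_hi + (i_lo - i_hi) * area_above / norm
            return B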

  11. Do resting brain dynamics predict oddball evoked-potential?

    Directory of Open Access Journals (Sweden)

    Lee Tien-Wen

    2011-11-01

    Full Text Available Abstract Background The oddball paradigm is widely applied to the investigation of cognitive function in neuroscience and in neuropsychiatry. Whether cortical oscillation in the resting state can predict the elicited oddball event-related potential (ERP is still not clear. This study explored the relationship between resting electroencephalography (EEG and oddball ERPs. The regional powers of 18 electrodes across delta, theta, alpha and beta frequencies were correlated with the amplitude and latency of N1, P2, N2 and P3 components of oddball ERPs. A multivariate analysis based on partial least squares (PLS was applied to further examine the spatial pattern revealed by multiple correlations. Results Higher synchronization in the resting state, especially at the alpha spectrum, is associated with higher neural responsiveness and faster neural propagation, as indicated by the higher amplitude change of N1/N2 and shorter latency of P2. None of the resting quantitative EEG indices predict P3 latency and amplitude. The PLS analysis confirms that the resting cortical dynamics which explains N1/N2 amplitude and P2 latency does not show regional specificity, indicating a global property of the brain. Conclusions This study differs from previous approaches by relating dynamics in the resting state to neural responsiveness in the activation state. Our analyses suggest that the neural characteristics carried by resting brain dynamics modulate the earlier/automatic stage of target detection.

  12. A statistical background noise correction sensitive to the steadiness of background noise.

    Science.gov (United States)

    Oppenheimer, Charles H

    2016-10-01

    A statistical background noise correction is developed for removing background noise contributions from measured source levels, producing a background noise-corrected source level. Like the standard background noise corrections of ISO 3741, ISO 3744, ISO 3745, and ISO 11201, the statistical background correction increases as the background level approaches the measured source level, decreasing the background noise-corrected source level. Unlike the standard corrections, the statistical background correction increases with steadiness of the background and is excluded from use when background fluctuation could be responsible for measured differences between the source and background noise levels. The statistical background noise correction has several advantages over the standard correction: (1) enveloping the true source with known confidence, (2) assuring physical source descriptions when measuring sources in fluctuating backgrounds, (3) reducing background corrected source descriptions by 1 to 8 dB for sources in steady backgrounds, and (4) providing a means to replace standardized background correction caps that incentivize against high precision grade methods.
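
    The statistical, steadiness-sensitive correction proposed in the abstract is not reproduced here; the sketch below shows only the standard energetic background subtraction it is compared against, with an illustrative 3 dB validity cutoff (the exact caps differ between the ISO standards).

        import numpy as np

        def background_corrected_level(l_measured_db, l_background_db):
            """Standard energetic background correction: subtract the background
            mean-square pressure from the measured one.

            Returns None when the source-to-background difference is too small
            for a meaningful correction (here, below 3 dB)."""
            delta = l_measured_db - l_background_db
            if delta < 3.0:
                return None
            return 10.0 * np.log10(10.0 ** (l_measured_db / 10.0)
                                   - 10.0 ** (l_background_db / 10.0))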

  13. Estimating the SM background for supersymmetry searches: challenges and methods

    CERN Document Server

    Besjes, G J; The ATLAS collaboration

    2013-01-01

    Supersymmetry features a broad range of possible signatures at the LHC. If R-parity is conserved the production of squarks and gluinos is accompanied by events with hard jets, possibly leptons or photons and missing transverse momentum. Some Standard Model processes also mimic such events, which, due to their large cross sections, represent backgrounds that can fake or hide supersymmetry. While the normalisation of these backgrounds can be obtained from data in dedicated control regions, Monte Carlo simulation is often used to extrapolate the measured event yields from control to signal regions. Next-to-leading order and multi-parton generators are employed to predict these extrapolations for the dominant processes contributing to the SM background: W/Z boson and top pair production in association with (many) jets. The proper estimate of the associated theoretical uncertainties and testing these with data represent challenges. Other important backgrounds are diboson and top pair plus boson events with additio...
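
    A minimal sketch of the control-region normalisation described above: a Monte Carlo transfer factor carries the measured control-region yield into the signal region. The numbers are illustrative only.

        def signal_region_background(n_data_cr, n_mc_cr, n_mc_sr):
            """Extrapolate a background yield from a control region (CR) to a
            signal region (SR) using the Monte Carlo transfer factor:

                N_pred(SR) = N_data(CR) * N_MC(SR) / N_MC(CR)
            """
            return n_data_cr * (n_mc_sr / n_mc_cr)

        # Illustrative numbers only:
        print(signal_region_background(n_data_cr=1520.0, n_mc_cr=1480.0, n_mc_sr=37.2))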

  14. Heat transfer to water at supercritical pressures in a circular and square annular flow geometry

    International Nuclear Information System (INIS)

    Licht, Jeremy; Anderson, Mark; Corradini, Michael

    2008-01-01

    A supercritical water heat transfer facility has been built at the University of Wisconsin to study heat transfer in a circular and square annular flow channel. Operating conditions included mass velocities of 350-1425 kg/m 2 s, heat fluxes up to 1.0 MW/m 2 , and bulk inlet temperatures up to 400 o C; all at a pressure of 25 MPa. The accuracy and validity of selected heat transfer correlations and buoyancy criterion were compared with heat transfer measurements. Jackson's Nusselt correlation was able to best predict the test data, capturing 86% of the data within 25%. Watts Nusselt correlation showed a similar trend but under predicted measurements by 10% relative to Jackson's. Comparison of experimental results with results of previous investigators has shown general agreement with high mass velocity data. Low mass velocity data have provided some insight into the difficulty in applying these Nusselt correlations to a region of deteriorated heat transfer. Geometrical differences in heat transfer were seen when deterioration was present. Jackson's buoyancy criterion predicted the onset of deterioration while modifications were applied to Seo's Froude number based criterion

  15. Ensemble learned vaccination uptake prediction using web search queries

    DEFF Research Database (Denmark)

    Hansen, Niels Dalum; Lioma, Christina; Mølbak, Kåre

    2016-01-01

    We present a method that uses ensemble learning to combine clinical and web-mined time-series data in order to predict future vaccination uptake. The clinical data is official vaccination registries, and the web data is query frequencies collected from Google Trends. Experiments with official vaccine records show that our method predicts vaccination uptake effectively (4.7 Root Mean Squared Error). Whereas performance is best when combining clinical and web data, using solely web data yields comparable performance. To our knowledge, this is the first study to predict vaccination uptake...

  16. Latin square three dimensional gage master

    Science.gov (United States)

    Jones, Lynn L.

    1982-01-01

    A gage master for coordinate measuring machines has an nxn array of objects distributed in the Z coordinate utilizing the concept of a Latin square experimental design. Using analysis of variance techniques, the invention may be used to identify sources of error in machine geometry and quantify machine accuracy.

  17. Latin-square three-dimensional gage master

    Science.gov (United States)

    Jones, L.

    1981-05-12

    A gage master for coordinate measuring machines has an nxn array of objects distributed in the Z coordinate utilizing the concept of a Latin square experimental design. Using analysis of variance techniques, the invention may be used to identify sources of error in machine geometry and quantify machine accuracy.

  18. Optimistic semi-supervised least squares classification

    DEFF Research Database (Denmark)

    Krijthe, Jesse H.; Loog, Marco

    2017-01-01

    The goal of semi-supervised learning is to improve supervised classifiers by using additional unlabeled training examples. In this work we study a simple self-learning approach to semi-supervised learning applied to the least squares classifier. We show that a soft-label and a hard-label variant ...

  19. Multi-source least-squares migration of marine data

    KAUST Repository

    Wang, Xin

    2012-11-04

    Kirchhoff-based multi-source least-squares migration (MSLSM) is applied to marine streamer data. To suppress the crosstalk noise from the excitation of multiple sources, a dynamic encoding function (including both time-shifts and polarity changes) is applied to the receiver-side traces. Results show that the MSLSM images are of better quality than the standard Kirchhoff migration and reverse time migration images; moreover, the migration artifacts are reduced and image resolution is significantly improved. The computational cost of MSLSM is about the same as conventional least-squares migration, but its I/O cost is significantly decreased.

  20. Wall-resolved Large Eddy Simulation of a flow through a square-edged orifice in a round pipe at Re = 25,000

    Energy Technology Data Exchange (ETDEWEB)

    Benhamadouche, S., E-mail: sofiane.benhamadouche@edf.fr; Arenas, M.; Malouf, W.J.

    2017-02-15

    Highlights: • Wall-resolved LES can predict the flow through a square-edged orifice at Re = 25,000. • LES results are compared with the available experimental data and ISO 5167-2. • Pressure loss and discharge coefficients are in very good agreement with ISO 5167-2. • The present wall-resolved LES could be used as reference data for RANS validation. - Abstract: The orifice plate is a pressure differential device frequently used for flow measurements in pipes across different industries. The present study demonstrates the accuracy obtainable using a wall-resolved Large Eddy Simulation (LES) approach to predict the velocity, the Reynolds stresses, the pressure loss and the discharge coefficient for a flow through a square-edged orifice in a round pipe at a Reynolds number of 25,000. The ratio of the orifice diameter to the pipe diameter is β = 0.62, and the ratio of the orifice thickness to the pipe diameter is 0.11. The mesh is sized using refinement criteria at the wall and preliminary RANS results to ensure that the solution is resolved beyond an estimated Taylor micro-scale. The inlet condition is simulated using a recycling method, and the LES is run with a dynamic Smagorinsky sub-grid scale (SGS) model. The sensitivity to the SGS model and to the pressure–velocity coupling is shown to be small in the present study. The LES is compared with the available experimental data and ISO 5167-2. In general, the LES shows good agreement with the velocity from the experimental data. The profiles of the Reynolds stresses are similar, but an offset is observed in the diagonal stresses. The pressure loss and discharge coefficients are shown to be in very good agreement with the predictions of ISO 5167-2. Therefore, the wall-resolved LES is shown to be highly accurate in simulating the flow across a square-edged orifice.

  1. Wall-resolved Large Eddy Simulation of a flow through a square-edged orifice in a round pipe at Re = 25,000

    International Nuclear Information System (INIS)

    Benhamadouche, S.; Arenas, M.; Malouf, W.J.

    2017-01-01

    Highlights:
    • Wall-resolved LES can predict the flow through a square-edged orifice at Re = 25,000.
    • LES results are compared with the available experimental data and ISO 5167-2.
    • Pressure loss and discharge coefficients are in very good agreement with ISO 5167-2.
    • The present wall-resolved LES could be used as reference data for RANS validation.
    Abstract: The orifice plate is a pressure differential device frequently used for flow measurements in pipes across different industries. The present study demonstrates the accuracy obtainable using a wall-resolved Large Eddy Simulation (LES) approach to predict the velocity, the Reynolds stresses, the pressure loss and the discharge coefficient for a flow through a square-edged orifice in a round pipe at a Reynolds number of 25,000. The ratio of the orifice diameter to the pipe diameter is β = 0.62, and the ratio of the orifice thickness to the pipe diameter is 0.11. The mesh is sized using refinement criteria at the wall and preliminary RANS results to ensure that the solution is resolved beyond an estimated Taylor micro-scale. The inlet condition is simulated using a recycling method, and the LES is run with a dynamic Smagorinsky sub-grid scale (SGS) model. The sensitivity to the SGS model and to the pressure–velocity coupling is shown to be small in the present study. The LES is compared with the available experimental data and ISO 5167-2. In general, the LES shows good agreement with the velocity from the experimental data. The profiles of the Reynolds stresses are similar, but an offset is observed in the diagonal stresses. The pressure loss and discharge coefficients are shown to be in very good agreement with the predictions of ISO 5167-2. Therefore, the wall-resolved LES is shown to be highly accurate in simulating the flow across a square-edged orifice.

  2. Fitting polynomial surfaces to triangular meshes with Voronoi Squared Distance Minimization

    KAUST Repository

    Nivoliers, Vincent; Yan, Dongming; Lévy, Bruno L.

    2011-01-01

    This paper introduces Voronoi Squared Distance Minimization (VSDM), an algorithm that fits a surface to an input mesh. VSDM minimizes an objective function that corresponds to a Voronoi-based approximation of the overall squared distance function

  3. Fitting polynomial surfaces to triangular meshes with Voronoi squared distance minimization

    KAUST Repository

    Nivoliers, Vincent; Yan, Dongming; Lévy, Bruno L.

    2012-01-01

    This paper introduces Voronoi squared distance minimization (VSDM), an algorithm that fits a surface to an input mesh. VSDM minimizes an objective function that corresponds to a Voronoi-based approximation of the overall squared distance function

  4. Stochastic gravitational wave background from the single-degenerate channel of type Ia supernovae

    International Nuclear Information System (INIS)

    Falta, David; Fisher, Robert

    2011-01-01

    We demonstrate that the integrated gravitational wave signal of type Ia supernovae (SNe Ia) in the single-degenerate channel out to cosmological distances gives rise to a continuous background to spaceborne gravitational wave detectors, including the Big Bang Observer and Deci-Hertz Interferometer Gravitational wave Observatory planned missions. This gravitational wave background from SNe Ia acts as a noise background in the frequency range 0.1-10 Hz, which heretofore was thought to be relatively free from astrophysical sources apart from neutron-star and white-dwarf binaries, and therefore a key window in which to study primordial gravitational waves generated by inflation. While inflationary energy scales of ≳ 10^16 GeV yield inflationary gravitational wave backgrounds in excess of our range of predicted backgrounds, for lower energy scales of ∼10^15 GeV, the inflationary gravitational wave background becomes comparable to the noise background from SNe Ia.

  5. Ultralow background germanium gamma-ray spectrometer using superclean materials and cosmic-ray anticoincidence

    International Nuclear Information System (INIS)

    Reeves, J.H.; Hensley, W.K.; Brodzinski, R.L.; Ryge, P.

    1983-10-01

    Efforts to measure the double beta decay of ⁷⁶Ge as predicted by Grand Unified Theories have resulted in the development of a high-resolution germanium diode gamma-ray spectrometer with an exceptionally low background. This paper describes the development of this system and how these techniques can be utilized to significantly reduce the background in high-resolution photon spectrometers at only a moderate cost.

  6. Multi-source least-squares reverse time migration

    KAUST Repository

    Dai, Wei

    2012-06-15

    Least-squares migration has been shown to improve image quality compared to the conventional migration method, but its computational cost is often too high to be practical. In this paper, we develop two numerical schemes to implement least-squares migration with the reverse time migration method and the blended source processing technique to increase computation efficiency. By iterative migration of supergathers, each of which is a sum of many phase-encoded shots, the image quality is enhanced and the crosstalk noise associated with the encoded shots is reduced. Numerical tests on 2D HESS VTI data show that the multisource least-squares reverse time migration (LSRTM) algorithm suppresses migration artefacts, balances the amplitudes, improves image resolution and reduces crosstalk noise associated with the blended shot gathers. For this example, the multisource LSRTM is about three times faster than the conventional RTM method. For the 3D example of the SEG/EAGE salt model, with a comparable computational cost, multisource LSRTM produces images with more accurate amplitudes, better spatial resolution and fewer migration artefacts compared to conventional RTM. The empirical results suggest that multisource LSRTM can produce more accurate reflectivity images than conventional RTM does at a similar or lower computational cost. The caveat is that the LSRTM image is sensitive to large errors in the migration velocity model. © 2012 European Association of Geoscientists & Engineers.

  7. Multi-source least-squares reverse time migration

    KAUST Repository

    Dai, Wei; Fowler, Paul J.; Schuster, Gerard T.

    2012-01-01

    Least-squares migration has been shown to improve image quality compared to the conventional migration method, but its computational cost is often too high to be practical. In this paper, we develop two numerical schemes to implement least-squares migration with the reverse time migration method and the blended source processing technique to increase computation efficiency. By iterative migration of supergathers, each of which is a sum of many phase-encoded shots, the image quality is enhanced and the crosstalk noise associated with the encoded shots is reduced. Numerical tests on 2D HESS VTI data show that the multisource least-squares reverse time migration (LSRTM) algorithm suppresses migration artefacts, balances the amplitudes, improves image resolution and reduces crosstalk noise associated with the blended shot gathers. For this example, the multisource LSRTM is about three times faster than the conventional RTM method. For the 3D example of the SEG/EAGE salt model, with a comparable computational cost, multisource LSRTM produces images with more accurate amplitudes, better spatial resolution and fewer migration artefacts compared to conventional RTM. The empirical results suggest that multisource LSRTM can produce more accurate reflectivity images than conventional RTM does at a similar or lower computational cost. The caveat is that the LSRTM image is sensitive to large errors in the migration velocity model. © 2012 European Association of Geoscientists & Engineers.

  8. Application of support vector regression (SVR) for stream flow prediction on the Amazon basin

    CSIR Research Space (South Africa)

    Du Toit, Melise

    2016-10-01

    A support vector regression technique is used in this study to analyse historical stream flow occurrences and predict stream flow values for the Amazon basin. Predictions up to twelve months ahead are made, and the coefficient of determination and root-mean-square error are used...
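
    A minimal sketch of the kind of workflow described above, using scikit-learn's SVR on lagged monthly flows and scoring with the coefficient of determination and root-mean-square error, is given below; the twelve-month lag structure and the synthetic data are illustrative assumptions, not the study's configuration.

        # Sketch: support vector regression on lagged monthly stream flow,
        # evaluated with R^2 and RMSE on a held-out tail of the series.
        import numpy as np
        from sklearn.svm import SVR
        from sklearn.metrics import r2_score, mean_squared_error

        rng = np.random.default_rng(1)
        flow = 100 + 30 * np.sin(np.arange(240) * 2 * np.pi / 12) + rng.normal(0, 5, 240)

        lags = 12                                  # use the previous 12 months as features
        X = np.array([flow[i:i + lags] for i in range(len(flow) - lags)])
        y = flow[lags:]
        split = int(0.8 * len(y))
        X_train, X_test, y_train, y_test = X[:split], X[split:], y[:split], y[split:]

        model = SVR(kernel="rbf", C=100.0, epsilon=1.0).fit(X_train, y_train)
        pred = model.predict(X_test)
        print("R^2 :", r2_score(y_test, pred))
        print("RMSE:", np.sqrt(mean_squared_error(y_test, pred)))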

  9. The study of the evolution of squares in 3 periods of Safavid, Qajar ...

    African Journals Online (AJOL)

    The study of the evolution of squares in 3 periods of Safavid, Qajar and Pahlavi with historical – evolutionary and form approach (Isfahan and Tehran styles) case study of Naqshe Jahan square in Isfahan, Ganjalikhan square in Kerman, Sabze Meydan and Toop.

  10. Thermal performance in circular tube fitted with coiled square wires

    International Nuclear Information System (INIS)

    Promvonge, Pongjet

    2008-01-01

    The effects of square cross-section wires formed into a coil and used as a turbulator on the heat transfer and turbulent flow friction characteristics in a uniform-heat-flux circular tube are experimentally investigated in the present work. The experiments are performed for flows with Reynolds numbers ranging from 5000 to 25,000. Two different coiled-wire pitches are introduced. The results are also compared with those obtained using a typical coiled circular wire, as well as with the smooth tube. The experimental results reveal that the use of coiled square wire turbulators leads to a considerable increase in heat transfer and friction loss over those of a smooth-walled tube. The Nusselt number increases with rising Reynolds number and decreasing pitch for both circular and square wire coils. The coiled square wire provides higher heat transfer than the circular one under the same conditions. Performance evaluation criteria are also determined to assess the real benefits of using both wire coils in the enhanced tube.
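
    One widely used performance evaluation criterion of this kind weighs the heat-transfer gain against the friction penalty at equal pumping power, η = (Nu/Nu_s)/(f/f_s)^(1/3), where the subscript s denotes the smooth tube. The helper below evaluates it; the particular formula and the sample numbers are common-practice assumptions, not values reported in the experiments above.

        # Thermal performance factor at equal pumping power:
        # eta = (Nu / Nu_smooth) / (f / f_smooth) ** (1/3)
        def thermal_performance_factor(nu, nu_smooth, f, f_smooth):
            return (nu / nu_smooth) / (f / f_smooth) ** (1.0 / 3.0)

        # Illustrative numbers only: a coil that doubles Nu but quadruples friction.
        print(thermal_performance_factor(nu=200.0, nu_smooth=100.0, f=0.12, f_smooth=0.03))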

  11. Prediction-oriented modeling in business research by means of PLS path modeling : Introduction to a JBR special section

    NARCIS (Netherlands)

    Cepeda Carrion, Gabriel; Henseler, Jörg; Ringle, Christian M.; Roldan, Jose Luis

    2016-01-01

    Under the main theme “prediction-oriented modeling in business research by means of partial least squares path modeling” (PLS), the special issue presents 17 papers. Most contributions include content from presentations at the 2nd International Symposium on Partial Least Squares Path Modeling: The

  12. Agorá Acoustics - Effects of arcades on the acoustics of public squares

    DEFF Research Database (Denmark)

    Paini, Dario; Gade, Anders Christian; Rindel, Jens Holger

    2005-01-01

    This paper is part of a PhD work dealing with the acoustics of public squares (‘Agorá Acoustics’), especially when music (amplified or not) is played. Consequently, our approach will be to evaluate public squares using the same set of acoustics concepts for subjective evaluation and objective … measurements as applied for concert halls and theatres. In this paper the acoustical effects of arcades will be studied, in terms of reverberation (EDT and T30), clarity (C80), intelligibility (STI) and other acoustical parameters. For this purpose, the theory of coupled rooms is also applied and compared … with results. An acoustic modelling program, ODEON 7.0, was used for this investigation. Three different sizes of public squares were considered. In order to evaluate the ‘real’ effects of the arcades on the open square, models of all three squares were designed both with and without arcades. The sound source...

  13. Parameter estimation of Monod model by the Least-Squares method for microalgae Botryococcus Braunii sp

    Science.gov (United States)

    See, J. J.; Jamaian, S. S.; Salleh, R. M.; Nor, M. E.; Aman, F.

    2018-04-01

    This research aims to estimate the parameters of the Monod model of microalgae Botryococcus Braunii sp growth by the Least-Squares method. The Monod equation is non-linear but can be transformed into a linear form and solved with the linear Least-Squares regression method. Alternatively, the Gauss-Newton method solves the non-linear Least-Squares problem directly, obtaining the Monod model parameters by minimizing the sum of squared errors (SSE). As a result, the parameters of the Monod model for microalgae Botryococcus Braunii sp can be estimated by the Least-Squares method. However, the parameter estimates obtained by the non-linear Least-Squares method are more accurate than those from the linear Least-Squares method, since the SSE of the non-linear fit is smaller.
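
    Both routes described above — linearising the Monod equation μ = μ_max·S/(K_s + S) and fitting the non-linear form directly — can be sketched as follows. The reciprocal (Lineweaver–Burk style) transformation and the synthetic data are illustrative choices, not necessarily the exact transformation or measurements used in the paper.

        # Monod model mu(S) = mu_max * S / (Ks + S):
        # linearised least squares vs direct non-linear least squares.
        import numpy as np
        from scipy.optimize import curve_fit

        def monod(S, mu_max, Ks):
            return mu_max * S / (Ks + S)

        rng = np.random.default_rng(2)
        S = np.array([0.2, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
        mu = monod(S, mu_max=1.2, Ks=1.5) + rng.normal(0, 0.02, S.size)

        # 1) Linear least squares on the reciprocal form:
        #    1/mu = (Ks/mu_max) * (1/S) + 1/mu_max
        slope, intercept = np.polyfit(1.0 / S, 1.0 / mu, 1)
        mu_max_lin, Ks_lin = 1.0 / intercept, slope / intercept

        # 2) Non-linear least squares (Gauss-Newton/Levenberg-Marquardt via curve_fit),
        #    minimising the sum of squared errors on mu directly.
        (mu_max_nl, Ks_nl), _ = curve_fit(monod, S, mu, p0=[1.0, 1.0])

        print("linearised :", mu_max_lin, Ks_lin)
        print("non-linear :", mu_max_nl, Ks_nl)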

  14. Study of microwave instabilities by means of a square-well potential

    International Nuclear Information System (INIS)

    Kim, K.J.

    1979-01-01

    Microwave instabilities are analyzed in a simple model in which the usual synchrotron oscillation of a particle is replaced by particle motion in a square-well potential. In the usual synchrotron oscillation, a particle moves along an elliptic trajectory, and the most natural coordinates for such a motion are the action and angle variables. On the other hand, the distribution of the particles along the ring is most conveniently described by azimuthal variables. This mismatch between the two descriptions disappears if the synchrotron motion is approximated by motion in a square-well potential. The square-well potential may seem extremely unphysical; however, it should be remarked that the form of the potential with the addition of a Landau cavity looks more or less like a square well. At any rate, the main motivation for introducing the square well here is to simplify the mathematics of, and thereby gain some insight into, microwave instabilities. The model is exactly soluble. The results are in general agreement with the conclusions obtained from qualitative arguments based on coasting-beam theory. However, some of the detailed features of the solution, for example the behavior of ω² as a function of impedance, are surprising.

  15. CUSTOMER SATISFACTION LEVEL WITH SERVICE AT KFC MAKASSAR TOWN SQUARE

    OpenAIRE

    RAMADHANI, IRMA

    2017-01-01

    This study aims to determine the level of customer satisfaction with the service at KFC Makassar Town Square. The study was conducted among KFC Makassar Town Square customers who had made more than one transaction. It is a descriptive quantitative study that describes the level of customer satisfaction with the service. The research sample comprised 83 KFC Makassar Town customers. Data analysis used...

  16. Solution of a Complex Least Squares Problem with Constrained Phase.

    Science.gov (United States)

    Bydder, Mark

    2010-12-30

    The least squares solution of a complex linear equation is in general a complex vector with independent real and imaginary parts. In certain applications in magnetic resonance imaging, a solution is desired such that each element has the same phase. A direct method for obtaining the least squares solution to the phase constrained problem is described.
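
    The paper describes a direct method; as a rough illustration of the problem itself rather than that method, the sketch below recovers a real-valued vector and a single common phase by solving a real least squares problem for each candidate phase on a grid and keeping the best fit.

        # Phase-constrained complex least squares: minimise ||A (u * exp(i*phi)) - b||^2
        # over a real vector u and a single phase phi shared by all elements.
        import numpy as np

        def phase_constrained_lstsq(A, b, n_phases=360):
            best = None
            for phi in np.linspace(0.0, np.pi, n_phases, endpoint=False):
                Arot = A * np.exp(1j * phi)
                # For fixed phi the problem is real: stack real and imaginary parts.
                Areal = np.vstack([Arot.real, Arot.imag])
                breal = np.concatenate([b.real, b.imag])
                u, *_ = np.linalg.lstsq(Areal, breal, rcond=None)
                err = np.linalg.norm(Areal @ u - breal)
                if best is None or err < best[0]:
                    best = (err, u, phi)
            _, u, phi = best
            return u * np.exp(1j * phi), phi

        # Small synthetic example with a known common phase.
        rng = np.random.default_rng(3)
        A = rng.standard_normal((8, 3)) + 1j * rng.standard_normal((8, 3))
        x_true = np.array([1.0, -2.0, 0.5]) * np.exp(1j * 0.7)
        b = A @ x_true
        x_est, phi_est = phase_constrained_lstsq(A, b)
        print(phi_est)   # close to 0.7 up to the grid resolution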

  17. Improved Upper Limits on the Stochastic Gravitational-Wave Background from 2009-2010 LIGO and Virgo Data

    NARCIS (Netherlands)

    Aasi, J.; Agathos, M.; Beker, M.G.; Blom, M.R.; Bulten, H.J.; Del Pozzo, W.; Jonker, R.; Meidam, J.; van den Brand, J.F.J.; LIGO-Virgo Sci, Collaboration

    2014-01-01

    Gravitational waves from a variety of sources are predicted to superpose to create a stochastic background. This background is expected to contain unique information from throughout the history of the Universe that is unavailable through standard electromagnetic observations, making its study of

  18. Direct detection of the inflationary gravitational-wave background

    International Nuclear Information System (INIS)

    Smith, Tristan L.; Kamionkowski, Marc; Cooray, Asantha

    2006-01-01

    Inflation generically predicts a stochastic background of gravitational waves over a broad range of frequencies, from those accessible with cosmic microwave background (CMB) measurements, to those accessible directly with gravitational-wave detectors, like NASA's Big-Bang Observer (BBO) or Japan's Deci-Hertz Interferometer Gravitational-wave Observatory (DECIGO), both currently under study. Here we investigate the detectability of the inflationary gravitational-wave background at BBO/DECIGO frequencies. To do so, we survey a range of slow-roll inflationary models consistent with constraints from the CMB and large-scale structure (LSS). We go beyond the usual assumption of power-law power spectra, which may break down given the 16 orders of magnitude in frequency between the CMB and direct detection, and solve instead the inflationary dynamics for four classes of inflaton potentials. Direct detection is possible in a variety of inflationary models, although probably not in any in which the gravitational-wave signal does not appear in the CMB polarization. However, direct detection by BBO/DECIGO can help discriminate between inflationary models that have the same slow-roll parameters at CMB/LSS scales

  19. THE NEAR-INFRARED BACKGROUND INTENSITY AND ANISOTROPIES DURING THE EPOCH OF REIONIZATION

    Energy Technology Data Exchange (ETDEWEB)

    Cooray, Asantha; Gong Yan; Smidt, Joseph [Department of Physics and Astronomy, University of California, Irvine, CA 92697 (United States); Santos, Mario G. [CENTRA, Instituto Superior Tecnico, Technical University of Lisbon, Lisboa 1049-001 (Portugal)

    2012-09-01

    A fraction of the extragalactic near-infrared (near-IR) background light involves redshifted photons from the ultraviolet (UV) emission from galaxies present during reionization at redshifts above 6. The absolute intensity and the anisotropies of the near-IR background provide an observational probe of the first-light galaxies and their spatial distribution. We estimate the extragalactic background light intensity during reionization by accounting for the stellar and nebular emission from first-light galaxies. We require the UV photon density from these galaxies to generate a reionization history that is consistent with the optical depth to electron scattering from cosmic microwave background measurements. We also require the bright-end luminosity function (LF) of galaxies in our models to reproduce the measured Lyman-dropout LFs at redshifts of 6-8. The absolute intensity is about 0.1-0.4 nW m⁻² sr⁻¹ at the peak of its spectrum at ≈1.1 μm. We also discuss the anisotropy power spectrum of the near-IR background using a halo model to describe the galaxy distribution. We compare our predictions for the anisotropy power spectrum to existing measurements from deep near-IR imaging data from Spitzer/IRAC, Hubble/NICMOS, and AKARI. The predicted rms fluctuations at tens-of-arcminute angular scales are roughly an order of magnitude smaller than the existing measurements. While strong arguments have been made that the measured fluctuations do not have an origin involving faint low-redshift galaxies, we find that the measurements in the literature are also incompatible with galaxies present during the era of reionization. The measured near-IR background anisotropies thus remain unexplained, and their origin is unknown.

  20. EEG Beta Power but Not Background Music Predicts the Recall Scores in a Foreign-Vocabulary Learning Task

    NARCIS (Netherlands)

    Küssner, Mats B.; de Groot, Annette M. B.; Hofman, Winni F.; Hillen, Marij A.

    2016-01-01

    As tantalizing as the idea that background music beneficially affects foreign vocabulary learning may seem, there is-partly due to a lack of theory-driven research-no consistent evidence to support this notion. We investigated inter-individual differences in the effects of background music on