#### Sample records for linear prediction applied

1. Applied linear regression

Weisberg, Sanford

2013-01-01

Praise for the Third Edition: "...this is an excellent book which could easily be used as a course text..." -International Statistical Institute. The Fourth Edition of Applied Linear Regression provides a thorough update of the basic theory and methodology of linear regression modeling. Demonstrating the practical applications of linear regression analysis techniques, the Fourth Edition uses interesting, real-world exercises and examples. Stressing central concepts such as model building, understanding parameters, assessing fit and reliability, and drawing conclusions, the new edition illus…
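The workhorse technique surveyed in this and several later records, ordinary least squares, reduces to a single matrix solve. A minimal sketch with made-up data (not from the book):

```python
import numpy as np

# Toy dataset, invented for illustration: y = 3 + 2x exactly.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 3.0 + 2.0 * x

# Design matrix with an intercept column; solve min ||X b - y||^2.
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

print(beta)  # [intercept, slope]
```

With noise-free data the fit recovers the generating coefficients exactly; real applications add diagnostics for fit and reliability, as the book stresses.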

2. Applied linear algebra

Olver, Peter J

2018-01-01

This textbook develops the essential tools of linear algebra, with the goal of imparting technique alongside contextual understanding. Applications go hand-in-hand with theory, each reinforcing and explaining the other. This approach encourages students to develop not only the technical proficiency needed to go on to further study, but an appreciation for when, why, and how the tools of linear algebra can be used across modern applied mathematics. Providing an extensive treatment of essential topics such as Gaussian elimination, inner products and norms, and eigenvalues and singular values, this text can be used for an in-depth first course, or an application-driven second course in linear algebra. In this second edition, applications have been updated and expanded to include numerical methods, dynamical systems, data analysis, and signal processing, while the pedagogical flow of the core material has been improved. Throughout, the text emphasizes the conceptual connections between each application and the un...

3. Applying linear discriminant analysis to predict groundwater redox conditions conducive to denitrification

Wilson, S. R.; Close, M. E.; Abraham, P.

2018-01-01

Diffuse nitrate losses from agricultural land pollute groundwater resources worldwide, but can be attenuated under reducing subsurface conditions. In New Zealand, the ability to predict where groundwater denitrification occurs is important for understanding the linkage between land use and discharges of nitrate-bearing groundwater to streams. This study assesses the application of linear discriminant analysis (LDA) for predicting groundwater redox status for Southland, a major dairy farming region in New Zealand. Data cases were developed by assigning a redox status to samples derived from a regional groundwater quality database. Pre-existing regional-scale geospatial databases were used as training variables for the discriminant functions. The predictive accuracy of the discriminant functions was slightly improved by optimising the thresholds between sample depth classes. The models predict 23% of the region as being reducing at shallow depths, … water table, and low-permeability clastic sediments. The coastal plains are an area of widespread groundwater discharge, and the soil and hydrology characteristics require the land to be artificially drained to render the land suitable for farming. For the improvement of water quality in coastal areas, it is therefore important that land and water management efforts focus on understanding hydrological bypassing that may occur via artificial drainage systems.
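For readers unfamiliar with LDA itself, a two-class Fisher discriminant of the kind the study applies can be sketched in a few lines; the two clusters here are synthetic stand-ins for "oxidised" vs "reduced" cases, not the Southland database:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-feature training data for two well-separated classes.
X0 = rng.normal([0.0, 0.0], 0.3, size=(50, 2))   # class 0 ("oxidised")
X1 = rng.normal([2.0, 2.0], 0.3, size=(50, 2))   # class 1 ("reduced")

mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
# Pooled within-class covariance.
S = (np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)) / (len(X0) + len(X1) - 2)

w = np.linalg.solve(S, mu1 - mu0)   # discriminant direction
c = w @ (mu0 + mu1) / 2.0           # midpoint decision threshold

def predict(x):
    """Return 1 if the projection of x falls on the class-1 side."""
    return int(x @ w > c)
```

The study's real discriminant functions are trained on geospatial predictors and tuned per depth class; this sketch shows only the core projection-and-threshold step.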

4. Similarities and Differences Between Warped Linear Prediction and Laguerre Linear Prediction

Brinker, Albertus C. den; Krishnamoorthi, Harish; Verbitskiy, Evgeny A.

2011-01-01

Linear prediction has been successfully applied in many speech and audio processing systems. This paper presents the similarities and differences between two classes of linear prediction schemes, namely, Warped Linear Prediction (WLP) and Laguerre Linear Prediction (LLP). It is shown that both

5. The Theory of Linear Prediction

Vaidyanathan, PP

2007-01-01

Linear prediction theory has had a profound impact in the field of digital signal processing. Although the theory dates back to the early 1940s, its influence can still be seen in applications today. The theory is based on very elegant mathematics and leads to many beautiful insights into statistical signal processing. Although prediction is only a part of the more general topics of linear estimation, filtering, and smoothing, this book focuses on linear prediction. This has enabled detailed discussion of a number of issues that are normally not found in texts. For example, the theory of vecto
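The basic calculation behind linear prediction, choosing coefficients that best predict each sample from its predecessors, is a least-squares problem. A minimal sketch on a toy AR(2) signal (illustrative numbers only, using the covariance method rather than the autocorrelation method the book also covers):

```python
import numpy as np

# Toy AR(2) signal: x[n] = 1.2*x[n-1] - 0.5*x[n-2] (stable recursion).
x = np.zeros(40)
x[0], x[1] = 1.0, 0.5
for n in range(2, len(x)):
    x[n] = 1.2 * x[n - 1] - 0.5 * x[n - 2]

# Linear prediction: least-squares fit of each sample as a
# combination of its two predecessors.
X = np.column_stack([x[1:-1], x[:-2]])   # regressors x[n-1], x[n-2]
y = x[2:]                                # targets x[n]
a_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

print(a_hat)  # recovers the generating coefficients
```

For stationary signals the same normal equations have Toeplitz structure and are usually solved with the Levinson-Durbin recursion, one of the elegant results the book develops.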

6. Predicting the effect of ionising radiation on biological populations: testing of a non-linear Leslie model applied to a small mammal population

Monte, Luigi

2013-01-01

7. Experimental Confirmation of Nonlinear-Model- Predictive Control Applied Offline to a Permanent Magnet Linear Generator for Ocean-Wave Energy Conversion

Tom, Nathan; Yeung, Ronald W.

2015-01-01

To further maximize power absorption in both regular and irregular ocean wave environments, nonlinear-model-predictive control (NMPC) was applied to a model-scale point absorber developed at the University of California Berkeley, Berkeley, CA, USA. The NMPC strategy requires a power-takeoff (PTO) unit that could be turned on and off, as the generator would be inactive for up to 60% of the wave period. To confirm the effectiveness of this NMPC strategy, an in-house-designed permanent magnet linear generator (PMLG) was chosen as the PTO. The time-varying performance of the PMLG was first characterized by dry-bench tests, using mechanical relays to control the electromagnetic conversion process. The on/off sequencing of the PMLG was tested under regular and irregular wave excitation to validate NMPC simulations using control inputs obtained from running the choice optimizer offline. Experimental results indicate that successful implementation was achieved and absorbed power using NMPC was up to 50% greater than the passive system, which utilized no controller. Previous investigations into MPC applied to wave energy converters have lacked the experimental results to confirm the reported gains in power absorption. However, after considering the PMLG mechanical-to-electrical conversion efficiency, the electrical power output was not consistently maximized. To improve output power, a mathematical relation between the efficiency and damping magnitude of the PMLG was inserted in the system model to maximize the electrical power output through continued use of NMPC which helps separate this work from previous investigators. Of significance, results from latter simulations provided a damping time series that was active over a larger portion of the wave period requiring the actuation of the applied electrical load, rather than on/off control.

8. Applied predictive control

Sunan, Huang; Heng, Lee Tong

2002-01-01

The presence of considerable time delays in the dynamics of many industrial processes, leading to difficult problems in the associated closed-loop control systems, is a well-recognized phenomenon. The performance achievable in conventional feedback control systems can be significantly degraded if an industrial process has a relatively large time delay compared with the dominant time constant. Under these circumstances, advanced predictive control is necessary to improve the performance of the control system significantly. The book is a focused treatment of the subject matter, including the fundamentals and some state-of-the-art developments in the field of predictive control. Three main schemes for advanced predictive control are addressed in this book: • Smith Predictive Control; • Generalised Predictive Control; • a form of predictive control based on Finite Spectrum Assignment. A substantial part of the book addresses application issues in predictive control, providing several interesting case studie...

9. Applied linear algebra and matrix analysis

Shores, Thomas S

2018-01-01

In its second edition, this textbook offers a fresh approach to matrix and linear algebra. Its blend of theory, computational exercises, and analytical writing projects is designed to highlight the interplay between these aspects of an application. This approach places special emphasis on linear algebra as an experimental science that provides tools for solving concrete problems. The second edition’s revised text discusses applications of linear algebra like graph theory and network modeling methods used in Google’s PageRank algorithm. Other new materials include modeling examples of diffusive processes, linear programming, image processing, digital signal processing, and Fourier analysis. These topics are woven into the core material of Gaussian elimination and other matrix operations; eigenvalues, eigenvectors, and discrete dynamical systems; and the geometrical aspects of vector spaces. Intended for a one-semester undergraduate course without a strict calculus prerequisite, Applied Linear Algebra and M...

10. Linear regression crash prediction models : issues and proposed solutions.

2010-05-01

The paper develops a linear regression model approach that can be applied to : crash data to predict vehicle crashes. The proposed approach involves novice data aggregation : to satisfy linear regression assumptions; namely error structure normality ...

11. The Use of Linear Programming for Prediction.

Schnittjer, Carl J.

The purpose of the study was to develop a linear programming model to be used for prediction, test the accuracy of the predictions, and compare the accuracy with that produced by curvilinear multiple regression analysis. (Author)

12. Linear zonal atmospheric prediction for adaptive optics

McGuire, Patrick C.; Rhoadarmer, Troy A.; Coy, Hanna A.; Angel, J. Roger P.; Lloyd-Hart, Michael

2000-07-01

We compare linear zonal predictors of atmospheric turbulence for adaptive optics. Zonal prediction has the possible advantage of being able to interpret and utilize wind-velocity information from the wavefront sensor better than modal prediction. For simulated open-loop atmospheric data for a 2-meter 16-subaperture AO telescope with 5-millisecond prediction and a lookback of 4 slope-vectors, we find that Widrow-Hoff Delta-Rule training of linear nets and Back-Propagation training of non-linear multilayer neural networks are quite slow, getting stuck on plateaus or in local minima. Recursive Least Squares training of linear predictors is two orders of magnitude faster and it also converges to the solution with global minimum error. We have successfully implemented Amari's Adaptive Natural Gradient Learning (ANGL) technique for a linear zonal predictor, which premultiplies the Delta-Rule gradients with a matrix that orthogonalizes the parameter space and speeds up the training by two orders of magnitude, like the Recursive Least Squares predictor. This shows that the simple Widrow-Hoff Delta-Rule's slow convergence is not a fluke. In the case of bright guidestars, the ANGL, RLS, and standard matrix-inversion least-squares (MILS) algorithms all converge to the same global minimum linear total phase error (approximately 0.18 rad²), which is only approximately 5% higher than the spatial phase error (approximately 0.17 rad²), and is approximately 33% lower than the total 'naive' phase error without prediction (approximately 0.27 rad²). ANGL can, in principle, also be extended to make non-linear neural network training feasible for these large networks, with the potential to lower the predictor error below the linear predictor error. We will soon scale our linear work to the approximately 108-subaperture MMT AO system, both with simulations and real wavefront sensor data from prime focus.
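The Recursive Least Squares predictor the authors found fastest is compact to state. This toy version predicts a synthetic AR(2) series rather than wavefront slopes; the signal, order, and initialization are invented for illustration:

```python
import numpy as np

# Toy perfectly-predictable AR(2) series standing in for slope data.
x = np.zeros(40)
x[0], x[1] = 1.0, 0.5
for n in range(2, len(x)):
    x[n] = 1.2 * x[n - 1] - 0.5 * x[n - 2]

lam = 1.0                 # forgetting factor (1.0 = ordinary least squares)
w = np.zeros(2)           # predictor weights
P = np.eye(2) * 1e6       # inverse-correlation estimate (vague prior)

for n in range(2, len(x)):
    u = np.array([x[n - 1], x[n - 2]])   # regressor: past samples
    e = x[n] - w @ u                     # a priori prediction error
    k = P @ u / (lam + u @ P @ u)        # RLS gain vector
    w = w + k * e
    P = (P - np.outer(k, u @ P)) / lam

print(w)  # approaches the generating coefficients [1.2, -0.5]
```

Unlike the Delta-Rule, each RLS update uses second-order (correlation) information, which is why it reaches the global least-squares minimum in far fewer iterations.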

13. Modelling and Predicting Backstroke Start Performance Using Non-Linear and Linear Models.

de Jesus, Karla; Ayala, Helon V H; de Jesus, Kelly; Coelho, Leandro Dos S; Medeiros, Alexandre I A; Abraldes, José A; Vaz, Mário A P; Fernandes, Ricardo J; Vilas-Boas, João Paulo

2018-03-01

Our aim was to compare non-linear and linear mathematical model responses for backstroke start performance prediction. Ten swimmers randomly completed eight 15 m backstroke starts with feet over the wedge, four with hands on the highest horizontal and four on the vertical handgrip. Swimmers were videotaped using a dual media camera set-up, with the starts being performed over an instrumented block with four force plates. Artificial neural networks were applied to predict 5 m start time using kinematic and kinetic variables and to determine the accuracy of the mean absolute percentage error. Artificial neural networks predicted start time more robustly than the linear model with respect to changing training to the validation dataset for the vertical handgrip (3.95 ± 1.67 vs. 5.92 ± 3.27%). Artificial neural networks obtained a smaller mean absolute percentage error than the linear model in the horizontal (0.43 ± 0.19 vs. 0.98 ± 0.19%) and vertical handgrip (0.45 ± 0.19 vs. 1.38 ± 0.30%) using all input data. The best artificial neural network validation revealed a smaller mean absolute error than the linear model for the horizontal (0.007 vs. 0.04 s) and vertical handgrip (0.01 vs. 0.03 s). Artificial neural networks should be used for backstroke 5 m start time prediction due to the quite small differences among the elite level performances.

14. Linear Prediction Using Refined Autocorrelation Function

M. Shahidur Rahman

2007-07-01

This paper proposes a new technique for improving the performance of linear prediction analysis by utilizing a refined version of the autocorrelation function. Problems in analyzing voiced speech using linear prediction often occur due to the harmonic structure of the excitation source, which causes the autocorrelation function to be an aliased version of that of the vocal tract impulse response. To estimate the vocal tract characteristics accurately, however, the effect of aliasing must be eliminated. In this paper, we employ a homomorphic deconvolution technique in the autocorrelation domain to eliminate the aliasing effect caused by periodicity. The resulting autocorrelation function of the vocal tract impulse response is found to produce significant improvement in estimating formant frequencies. The accuracy of formant estimation is verified on synthetic vowels for a wide range of pitch frequencies typical of male and female speakers. The validity of the proposed method is also illustrated by inspecting the spectral envelopes of natural speech spoken by a high-pitched female speaker. The synthesis filter obtained by the current method is guaranteed to be stable, which makes the method superior to many of its alternatives.

15. Linear algebraic methods applied to intensity modulated radiation therapy.

Crooks, S M; Xing, L

2001-10-01

Methods of linear algebra are applied to the choice of beam weights for intensity modulated radiation therapy (IMRT). It is shown that the physical interpretation of the beam weights, target homogeneity and ratios of deposited energy can be given in terms of matrix equations and quadratic forms. The methodology of fitting using linear algebra as applied to IMRT is examined. Results are compared with IMRT plans that had been prepared using a commercially available IMRT treatment planning system and previously delivered to cancer patients.

16. Improved Methods for Pitch Synchronous Linear Prediction Analysis of Speech

劉, 麗清

2015-01-01

Linear prediction (LP) analysis has been applied to speech systems over the last few decades. The LP technique is well-suited to speech analysis due to its ability to approximately model the speech production process. Hence LP analysis has been widely used for speech enhancement, low-bit-rate speech coding in cellular telephony, speech recognition, and characteristic parameter extraction (vocal tract resonance frequencies, and the fundamental frequency, called pitch). However, the performance of the co...

17. Sparsity in Linear Predictive Coding of Speech

Giacobello, Daniele

… one with direct applications to coding but also consistent with the speech production model of voiced speech, where the excitation of the all-pole filter can be modeled as an impulse train, i.e., a sparse sequence. Introducing sparsity in the LP framework will also bring to develop the concept … of the effectiveness of their application in audio processing. The second part of the thesis deals with introducing sparsity directly in the linear prediction analysis-by-synthesis (LPAS) speech coding paradigm. We first propose a novel near-optimal method to look for a sparse approximate excitation using a compressed sensing formulation. Furthermore, we define a novel re-estimation procedure to adapt the predictor coefficients to the given sparse excitation, balancing the two representations in the context of speech coding. Finally, the advantages of the compact parametric representation of a segment of speech, given …

18. Adaptive prediction applied to seismic event detection

Clark, G.A.; Rodgers, P.W.

1981-01-01

Adaptive prediction was applied to the problem of detecting small seismic events in microseismic background noise. The Widrow-Hoff LMS adaptive filter used in a prediction configuration is compared with two standard seismic filters as an onset indicator. Examples demonstrate the technique's usefulness with both synthetic and actual seismic data.
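The Widrow-Hoff LMS predictor described here takes only a few lines (shown in its normalized variant for step-size robustness). The sinusoidal test signal is a toy stand-in for predictable background noise; once the filter has adapted, a sudden rise in prediction error is what signals an event onset:

```python
import numpy as np

# Perfectly predictable toy "background" signal.
n = np.arange(1000)
x = np.sin(0.2 * n)

order, mu, eps = 4, 0.5, 1e-8
w = np.zeros(order)           # adaptive predictor weights
err = np.zeros(len(x))
for t in range(order, len(x)):
    u = x[t - order:t][::-1]                  # most recent samples first
    err[t] = x[t] - w @ u                     # one-step prediction error
    w += mu * err[t] * u / (u @ u + eps)      # normalized LMS update

early = np.mean(err[order:order + 100] ** 2)
late = np.mean(err[-100:] ** 2)
print(early, late)  # error shrinks as the filter adapts
```

On this deterministic signal the error converges toward zero; in the seismic setting the residual instead settles at the unpredictable noise floor, and an onset pushes it back up.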

19. Adaptive prediction applied to seismic event detection

Clark, G.A.; Rodgers, P.W.

1981-09-01

Adaptive prediction was applied to the problem of detecting small seismic events in microseismic background noise. The Widrow-Hoff LMS adaptive filter used in a prediction configuration is compared with two standard seismic filters as an onset indicator. Examples demonstrate the technique's usefulness with both synthetic and actual seismic data.

20. Linear mixing model applied to coarse resolution satellite data

Holben, Brent N.; Shimabukuro, Yosio E.

1992-01-01

A linear mixing model typically applied to high resolution data such as Airborne Visible/Infrared Imaging Spectrometer, Thematic Mapper, and Multispectral Scanner System is applied to the NOAA Advanced Very High Resolution Radiometer coarse resolution satellite data. The reflective portion extracted from the middle IR channel 3 (3.55 - 3.93 microns) is used with channels 1 (0.58 - 0.68 microns) and 2 (0.725 - 1.1 microns) to run the Constrained Least Squares model to generate fraction images for an area in the west central region of Brazil. The derived fraction images are compared with an unsupervised classification and with fraction images derived from Landsat TM data acquired on the same day. In addition, the relationship between these fraction images and the well-known NDVI images is presented. The results show the great potential of unmixing techniques when applied to coarse resolution data for global studies.
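The unmixing step itself can be sketched as a constrained least-squares solve. The endmember spectra and fractions below are invented for illustration, and the sum-to-one constraint is imposed softly via a heavily weighted extra equation (real CLS implementations typically also enforce nonnegative fractions):

```python
import numpy as np

# Hypothetical endmember spectra over 3 bands
# (columns: vegetation, soil, shade).
E = np.array([[0.05, 0.30, 0.02],
              [0.45, 0.35, 0.03],
              [0.30, 0.40, 0.04]])
f_true = np.array([0.6, 0.3, 0.1])
pixel = E @ f_true                  # synthetic mixed pixel

# Append a heavily weighted row of ones so sum(f) = 1 is honored.
w = 1e3
A = np.vstack([E, w * np.ones(3)])
b = np.append(pixel, w * 1.0)
f_hat, *_ = np.linalg.lstsq(A, b, rcond=None)

print(f_hat)  # per-component fraction estimates, summing to ~1
```

Applied per pixel, the recovered fractions form the vegetation, soil, and shade fraction images the paper compares against classification and NDVI results.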

1. Linear filtering applied to Monte Carlo criticality calculations

Morrison, G.W.; Pike, D.H.; Petrie, L.M.

1975-01-01

A significant improvement in the acceleration of the convergence of the eigenvalue computed by Monte Carlo techniques has been developed by applying linear filtering theory to Monte Carlo calculations for multiplying systems. A Kalman filter was applied to a KENO Monte Carlo calculation of an experimental critical system consisting of eight interacting units of fissile material. A comparison of the filter estimate and the Monte Carlo realization was made. The Kalman filter converged in five iterations to 0.9977. After 95 iterations, the average k-eff from the Monte Carlo calculation was 0.9981. This demonstrates that the Kalman filter has the potential to reduce the computational effort for multiplying systems. Other examples and results are discussed.
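A scalar Kalman filter of the kind applied here can be sketched with a constant-state model, in which the update reduces to optimal averaging of the noisy iterates (all numbers below are toy values, not the KENO data):

```python
import numpy as np

# Toy noisy sequence standing in for Monte Carlo k-eff realizations.
rng = np.random.default_rng(1)
k_true = 0.998
meas = k_true + rng.normal(0.0, 0.01, size=50)

R = 0.01 ** 2          # measurement-noise variance (assumed known)
x_hat, P = 0.0, 1e6    # vague prior; constant-state model (Q = 0)
for z in meas:
    K = P / (P + R)                  # Kalman gain
    x_hat = x_hat + K * (z - x_hat)  # measurement update
    P = (1.0 - K) * P                # posterior variance shrinks

print(x_hat)  # filtered k-eff estimate
```

With this degenerate model the filter reproduces the running mean, but the same recursion accepts a nontrivial state model and per-iteration variances, which is what lets it converge in far fewer iterations than plain averaging.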

2. A Lagrangian meshfree method applied to linear and nonlinear elasticity.

2017-01-01

The repeated replacement method (RRM) is a Lagrangian meshfree method which we have previously applied to the Euler equations for compressible fluid flow. In this paper we present new enhancements to RRM, and we apply the enhanced method to both linear and nonlinear elasticity. We compare the results of ten test problems to those of analytic solvers, to demonstrate that RRM can successfully simulate these elastic systems without many of the requirements of traditional numerical methods such as numerical derivatives, equation system solvers, or Riemann solvers. We also show the relationship between error and computational effort for RRM on these systems, and compare RRM to other methods to highlight its strengths and weaknesses. And to further explain the two elastic equations used in the paper, we demonstrate the mathematical procedure used to create Riemann and Sedov-Taylor solvers for them, and detail the numerical techniques needed to embody those solvers in code.

3. Linear model applied to the evaluation of pharmaceutical stability data

Renato Cesar Souza

2013-09-01

The expiry date on the packaging of a product gives the consumer confidence that the product will retain its identity, content, quality and purity throughout the period of validity of the drug. In the pharmaceutical industry, this period is defined on the basis of stability data obtained during product registration. Accordingly, this work applies linear regression, following guideline ICH Q1E (2003), to evaluate some aspects of a product in the registration phase in Brazil. The evaluation was carried out with the development center of a multinational company in Brazil, on samples of three different batches composed of two active pharmaceutical ingredients in two different packages. Based on the preliminary results obtained, it was possible to observe the different degradation tendencies of the product in the two packages and the relationship between the variables studied, adding knowledge so that new linear models can be applied and developed for other products.

4. Linear mixing model applied to AVHRR LAC data

Holben, Brent N.; Shimabukuro, Yosio E.

1993-01-01

A linear mixing model was applied to coarse spatial resolution data from the NOAA Advanced Very High Resolution Radiometer. The reflective component of the 3.55 - 3.93 microns channel was extracted and used with the two reflective channels 0.58 - 0.68 microns and 0.725 - 1.1 microns to run a Constrained Least Squares model to generate vegetation, soil, and shade fraction images for an area in the Western region of Brazil. The Landsat Thematic Mapper data covering the Emas National park region was used for estimating the spectral response of the mixture components and for evaluating the mixing model results. The fraction images were compared with an unsupervised classification derived from Landsat TM data acquired on the same day. The relationship between the fraction images and normalized difference vegetation index images shows the potential of the unmixing techniques when using coarse resolution data for global studies.

5. Deformation Prediction Using Linear Polynomial Functions ...

By deformation we mean a change in the shape of a structure from its original shape; by monitoring over time using geodetic means, the change in shape and size and the overall structural dynamic behaviour of the structure can be detected. Prediction is therefore based on the epoch measurements obtained during monitoring, ...

6. Linear and Non-Linear Control Techniques Applied to Actively Lubricated Journal Bearings

Nicoletti, Rodrigo; Santos, Ilmar

2003-01-01

The main objectives of actively lubricated bearings are the simultaneous reduction of wear and vibration between rotating and stationary machinery parts. For reducing wear and dissipating vibration energy up to certain limits, one can count on conventional hydrodynamic lubrication. For further reduction of shaft vibrations one can count on the active lubrication action, which is based on injecting pressurised oil into the bearing gap through orifices machined in the bearing sliding surface. The design and efficiency of some linear (PD, PI and PID) and non-linear controllers, applied … vibration reduction of unbalance response of a rigid rotor, where the PD and the non-linear P controllers show better performance for the frequency range of study (0 to 80 Hz). The feasibility of eliminating rotor-bearing instabilities (the phenomenon of whirl) by using active lubrication is also investigated …

7. Model Predictive Control for Linear Complementarity and Extended Linear Complementarity Systems

Bambang Riyanto

2005-11-01

In this paper, we propose a model predictive control method for linear complementarity and extended linear complementarity systems by formulating the optimization along the prediction horizon as a mixed integer quadratic program. Such systems contain interaction between continuous dynamics and discrete event systems, and can therefore be categorized as hybrid systems. As linear complementarity and extended linear complementarity systems find applications in different research areas, such as impact mechanical systems, traffic control and process control, this work will contribute to the development of control design methods for those areas as well, as illustrated by three examples.

8. Linear Invariant Tensor Interpolation Applied to Cardiac Diffusion Tensor MRI

Gahm, Jin Kyu; Wisniewski, Nicholas; Kindlmann, Gordon; Kung, Geoffrey L.; Klug, William S.; Garfinkel, Alan; Ennis, Daniel B.

2015-01-01

Purpose Various methods exist for interpolating diffusion tensor fields, but none of them linearly interpolate tensor shape attributes. Linear interpolation is expected not to introduce spurious changes in tensor shape. Methods Herein we define a new linear invariant (LI) tensor interpolation method that linearly interpolates components of tensor shape (tensor invariants) and recapitulates the interpolated tensor from the linearly interpolated tensor invariants and the eigenvectors of a linearly interpolated tensor. The LI tensor interpolation method is compared to the Euclidean (EU), affine-invariant Riemannian (AI), log-Euclidean (LE) and geodesic-loxodrome (GL) interpolation methods using both a synthetic tensor field and three experimentally measured cardiac DT-MRI datasets. Results EU, AI, and LE introduce significant microstructural bias, which can be avoided through the use of GL or LI. Conclusion GL introduces the least microstructural bias, but LI tensor interpolation performs very similarly and at substantially reduced computational cost. PMID:23286085

9. Large-scale linear programs in planning and prediction.

2017-06-01

Large-scale linear programs are at the core of many traffic-related optimization problems in both planning and prediction. Moreover, many of these involve significant uncertainty, and hence are modeled using either chance constraints, or robust optim...

10. Who Will Win?: Predicting the Presidential Election Using Linear Regression

Lamb, John H.

2007-01-01

This article outlines a linear regression activity that engages learners, uses technology, and fosters cooperation. Students generated least-squares linear regression equations using TI-83 Plus[TM] graphing calculators, Microsoft[C] Excel, and paper-and-pencil calculations using derived normal equations to predict the 2004 presidential election.…

11. Predicting birth weight with conditionally linear transformation models.

Möst, Lisa; Schmid, Matthias; Faschingbauer, Florian; Hothorn, Torsten

2016-12-01

Low and high birth weight (BW) are important risk factors for neonatal morbidity and mortality. Gynecologists must therefore accurately predict BW before delivery. Most prediction formulas for BW are based on prenatal ultrasound measurements carried out within one week prior to birth. Although successfully used in clinical practice, these formulas focus on point predictions of BW but do not systematically quantify uncertainty of the predictions, i.e. they result in estimates of the conditional mean of BW but do not deliver prediction intervals. To overcome this problem, we introduce conditionally linear transformation models (CLTMs) to predict BW. Instead of focusing only on the conditional mean, CLTMs model the whole conditional distribution function of BW given prenatal ultrasound parameters. Consequently, the CLTM approach delivers both point predictions of BW and fetus-specific prediction intervals. Prediction intervals constitute an easy-to-interpret measure of prediction accuracy and allow identification of fetuses subject to high prediction uncertainty. Using a data set of 8712 deliveries at the Perinatal Centre at the University Clinic Erlangen (Germany), we analyzed variants of CLTMs and compared them to standard linear regression estimation techniques used in the past and to quantile regression approaches. The best-performing CLTM variant was competitive with quantile regression and linear regression approaches in terms of conditional coverage and average length of the prediction intervals. We propose that CLTMs be used because they are able to account for possible heteroscedasticity, kurtosis, and skewness of the distribution of BWs. © The Author(s) 2014.
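The contrast the authors draw, point predictions versus full prediction intervals, can be illustrated with a simple residual-quantile interval around an ordinary linear fit. The data below are synthetic stand-ins for the birth-weight setting, and CLTMs themselves are considerably more flexible (they model the whole conditional distribution, including heteroscedasticity and skewness):

```python
import numpy as np

# Synthetic toy data: one "ultrasound" predictor, homoscedastic noise.
rng = np.random.default_rng(2)
x = rng.uniform(0, 10, 500)
y = 1000.0 + 300.0 * x + rng.normal(0, 150.0, 500)

# Ordinary least-squares fit for the point prediction.
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# 90% interval from the empirical residual quantiles.
lo, hi = np.quantile(resid, [0.05, 0.95])

def predict_interval(x_new):
    """Point prediction and a 90% interval for a new predictor value."""
    m = beta[0] + beta[1] * x_new
    return m, (m + lo, m + hi)

# In-sample coverage of the interval should be close to 90%.
cover = np.mean((resid >= lo) & (resid <= hi))
print(cover)
```

Because the interval width here is constant in x, this sketch cannot flag individual cases with unusually high uncertainty, which is precisely the gap CLTMs address.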

12. Comparison of linear and non-linear models for predicting energy expenditure from raw accelerometer data.

Montoye, Alexander H K; Begum, Munni; Henning, Zachary; Pfeiffer, Karin A

2017-02-01

This study had three purposes, all related to evaluating energy expenditure (EE) prediction accuracy from body-worn accelerometers: (1) compare linear regression to linear mixed models, (2) compare linear models to artificial neural network (ANN) models, and (3) compare accuracy of accelerometers placed on the hip, thigh, and wrists. Forty individuals performed 13 activities in a 90 min semi-structured, laboratory-based protocol. Participants wore accelerometers on the right hip, right thigh, and both wrists and a portable metabolic analyzer (EE criterion). Four EE prediction models were developed for each accelerometer: linear regression, linear mixed, and two ANN models. EE prediction accuracy was assessed using correlations, root mean square error (RMSE), and bias and was compared across models and accelerometers using repeated-measures analysis of variance. For all accelerometer placements, there were no significant differences in correlations or RMSE between linear regression and linear mixed models (correlations: r = 0.71-0.88, RMSE: 1.11-1.61 METs; p > 0.05). For the thigh-worn accelerometer, there were no differences in correlations or RMSE between linear and ANN models (ANN correlations: r = 0.89, RMSE: 1.07-1.08 METs; linear model correlations: r = 0.88, RMSE: 1.10-1.11 METs; p > 0.05). Conversely, one ANN had higher correlations and lower RMSE than both linear models for the hip (ANN correlation: r = 0.88, RMSE: 1.12 METs; linear model correlations: r = 0.86, RMSE: 1.18-1.19 METs; p < 0.05), as did the ANN models relative to both linear models for the wrist-worn accelerometers (ANN correlations: r = 0.82-0.84, RMSE: 1.26-1.32 METs; linear model correlations: r = 0.71-0.73, RMSE: 1.55-1.61 METs; p < 0.05), suggesting ANN models offer a significant improvement in EE prediction accuracy over linear models at these placements. Conversely, linear models showed similar EE prediction accuracy to machine learning models for hip- and thigh…

13. Implementation of neural network based non-linear predictive

Sørensen, Paul Haase; Nørgård, Peter Magnus; Ravn, Ole

1998-01-01

The paper describes a control method for non-linear systems based on generalized predictive control. Generalized predictive control (GPC) was developed to control linear systems, including open-loop unstable and non-minimum phase systems, but has also been extended to the control of non-linear systems. GPC is model-based, and in this paper we propose the use of a neural network for the modeling of the system. Based on the neural network model, a controller with extended control horizon is developed and the implementation issues are discussed, with particular emphasis on an efficient Quasi-Newton optimization algorithm. The performance is demonstrated on a pneumatic servo system.

14. An online re-linearization scheme suited for Model Predictive and Linear Quadratic Control

Henriksen, Lars Christian; Poulsen, Niels Kjølstad

This technical note documents the equations for a primal-dual interior-point quadratic programming solver used for MPC. The algorithm exploits the special structure of the MPC problem and is able to reduce the computational burden such that it scales linearly with the prediction horizon length rather than cubically, which would be the case if the structure were not exploited. It is also shown how models used for the design of model-based controllers, e.g. linear quadratic and model predictive, can be linearized both at equilibrium and non-equilibrium points, making...

15. Non linear identification applied to PWR steam generators

Poncet, B.

1982-11-01

For the specific industrial purpose of water level control in PWR nuclear power plant steam generators, a new method is developed where classical techniques do not seem efficient enough. Starting from this essentially non-linear practical problem, an input-output identification of dynamic systems is proposed. Through homodynamic systems, characterized by a regularity property found in most industrial processes with a balance set, state-form realizations are built which resolve the exact joining of local dynamic behaviours, in both the discrete- and continuous-time cases, avoiding any load parameter. Specifically non-linear analytical modelling means, which have no influence on the locally joined behaviours, are also pointed out. Non-linear autoregressive realizations allow indirect adaptive control to be performed under the constraint of an admissible given dynamic family [fr

16. Applied Research of Enterprise Cost Control Based on Linear Programming

Yu Shuo

2015-01-01

This paper studies enterprise cost control through a linear programming model, analysing the restrictions that labour, raw materials, processing equipment, sales price, and other factors place on enterprise income, in order to obtain an enterprise cost control model based on linear programming. The model can calculate a rational production mode under limited resources and achieve optimal enterprise income. The production guiding program and scheduling arrangement of the enterprise can be derived from the calculation results, providing scientific and effective guidance for production. The paper adds a sensitivity analysis to the linear programming model in order to assess the stability of the cost control model, verify its rationality, and indicate the direction for enterprise cost control. The calculation results of the model can serve as a reference for enterprise planning in a market economy environment and have strong practical significance for enterprise cost control.
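A toy version of the production-planning LP described above can make the setup concrete. The products, resource limits, and profits below are invented; a tiny two-variable problem is solved by enumerating constraint-intersection vertices (real models would use an LP solver):

```python
import numpy as np
from itertools import combinations

# Hypothetical LP: maximize profit c.x subject to resource limits A x <= b, x >= 0.
c = np.array([40.0, 30.0])            # assumed profit per unit of products 1 and 2
A = np.array([[1.0, 1.0],             # labour hours per unit
              [2.0, 1.0]])            # raw material per unit
b = np.array([40.0, 60.0])            # available labour and material

# stack the x >= 0 constraints as -x <= 0
A_all = np.vstack([A, -np.eye(2)])
b_all = np.concatenate([b, np.zeros(2)])

best_x, best_val = None, -np.inf
for i, j in combinations(range(len(A_all)), 2):
    M = A_all[[i, j]]
    if abs(np.linalg.det(M)) < 1e-12:         # parallel constraints: no vertex
        continue
    x = np.linalg.solve(M, b_all[[i, j]])     # candidate vertex
    if np.all(A_all @ x <= b_all + 1e-9):     # keep only feasible vertices
        val = c @ x
        if val > best_val:
            best_x, best_val = x, val
```

The optimum sits at the intersection of the two resource constraints, which is exactly the kind of binding-constraint information a sensitivity analysis then probes.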

17. System theory as applied differential geometry. [linear system

Hermann, R.

1979-01-01

The invariants of input-output systems under the action of the feedback group were examined. The approach used the theory of Lie groups and concepts of modern differential geometry, and illustrated how the latter provides a basis for discussing the analytic structure of systems. Finite-dimensional linear systems in a single independent variable are considered. Lessons for more general situations (e.g., distributed-parameter and multidimensional systems), which are increasingly encountered as technology advances, are presented.

18. Multicollinearity in applied economics research and the Bayesian linear regression

EISENSTAT, Eric

2016-01-01

This article revisits the popular issue of collinearity amongst explanatory variables in the context of multiple linear regression analysis, particularly in empirical studies within social-science-related fields. Some important interpretations and explanations are highlighted from the econometrics literature with respect to the effects of multicollinearity on statistical inference, as well as the general shortcomings of the once fervent search for methods intended to detect and mitigate these...

19. Non-linear punctual kinetics applied to PWR reactors simulation

Cysne, F.S.

1978-11-01

In order to study certain kinds of nuclear reactor accidents, a simulation is made using the point kinetics model for the reactor core. The following integration methods are used: Hansen's method, in which a linearization is made, and CSMP, using a variable-interval fourth-order Runge-Kutta method. The results were good and were compared with those obtained by the code Dinamica I, which uses a finite-difference integration method of the backward kind. (Author) [pt

20. Frequency prediction by linear stability analysis around mean flow

Bengana, Yacine; Tuckerman, Laurette

2017-11-01

The frequency of certain limit cycles resulting from a Hopf bifurcation, such as the von Karman vortex street, can be predicted by linear stability analysis around their mean flows. Barkley (2006) has shown this to yield an eigenvalue whose real part is zero and whose imaginary part matches the nonlinear frequency. This property was named RZIF by Turton et al. (2015); moreover they found that the traveling waves (TW) of thermosolutal convection have the RZIF property. They explained this as a consequence of the fact that the temporal Fourier spectrum is dominated by the mean flow and first harmonic. We could therefore consider that only the first mode is important in the saturation of the mean flow as presented in the Self-Consistent Model (SCM) of Mantic-Lugo et al. (2014). We have implemented a full Newton's method to solve the SCM for thermosolutal convection. We show that while the RZIF property is satisfied far from the threshold, the SCM model reproduces the exact frequency only very close to the threshold. Thus, the nonlinear interaction of only the first mode with itself is insufficiently accurate to estimate the mean flow. Our next step will be to take into account higher harmonics and to apply this analysis to the standing waves, for which RZIF does not hold.

1. Adaptive feedback linearization applied to steering of ships

Thor I. Fossen

1993-10-01

Full Text Available This paper describes the application of feedback linearization to the automatic steering of ships. The flexibility of the design procedure allows the autopilot to be optimized for both course-keeping and course-changing manoeuvres. Direct adaptive versions of both the course-keeping and the turning controller are derived. The advantages of the adaptive controllers are improved performance and reduced fuel consumption. The application of nonlinear control theory also allows the designer to compensate for nonlinearities in the control design in a systematic manner.

2. Applied research of quantum information based on linear optics

Xu, Xiao-Ye

2016-01-01

This thesis reports on outstanding work in two main subfields of quantum information science: one involves the quantum measurement problem, and the other concerns quantum simulation. The thesis proposes using a polarization-based displaced Sagnac-type interferometer to achieve partial collapse measurement and its reversal, and presents the first experimental verification of the nonlocality of the partial collapse measurement and its reversal. All of the experiments are carried out in the linear optical system, one of the earliest experimental systems to employ quantum communication and quantum information processing. The thesis argues that quantum measurement can yield quantum entanglement recovery, which is demonstrated by using the frequency freedom to simulate the environment. Based on the weak measurement theory, the author proposes that white light can be used to precisely estimate phase, and effectively demonstrates that the imaginary part of the weak value can be introduced by means of weak measurement evolution. Lastly, a nine-order polarization-based displaced Sagnac-type interferometer employing bulk optics is constructed to perform quantum simulation of the Landau-Zener evolution, and by tuning the system Hamiltonian, the first experiment to research the Kibble-Zurek mechanism in non-equilibrium kinetics processes is carried out in the linear optical system.

3. Applied research of quantum information based on linear optics

Xu, Xiao-Ye

2016-08-01

This thesis reports on outstanding work in two main subfields of quantum information science: one involves the quantum measurement problem, and the other concerns quantum simulation. The thesis proposes using a polarization-based displaced Sagnac-type interferometer to achieve partial collapse measurement and its reversal, and presents the first experimental verification of the nonlocality of the partial collapse measurement and its reversal. All of the experiments are carried out in the linear optical system, one of the earliest experimental systems to employ quantum communication and quantum information processing. The thesis argues that quantum measurement can yield quantum entanglement recovery, which is demonstrated by using the frequency freedom to simulate the environment. Based on the weak measurement theory, the author proposes that white light can be used to precisely estimate phase, and effectively demonstrates that the imaginary part of the weak value can be introduced by means of weak measurement evolution. Lastly, a nine-order polarization-based displaced Sagnac-type interferometer employing bulk optics is constructed to perform quantum simulation of the Landau-Zener evolution, and by tuning the system Hamiltonian, the first experiment to research the Kibble-Zurek mechanism in non-equilibrium kinetics processes is carried out in the linear optical system.

4. EPMLR: sequence-based linear B-cell epitope prediction method using multiple linear regression.

Lian, Yao; Ge, Meng; Pan, Xian-Ming

2014-12-19

B-cell epitopes have been studied extensively due to their immunological applications, such as peptide-based vaccine development, antibody production, and disease diagnosis and therapy. Despite several decades of research, the accurate prediction of linear B-cell epitopes has remained a challenging task. In this work, based on the antigen's primary sequence information, a novel linear B-cell epitope prediction model was developed using multiple linear regression (MLR). A 10-fold cross-validation test on a large non-redundant dataset was performed to evaluate the performance of our model. To alleviate the problem caused by the noise of the negative dataset, 300 experiments utilizing 300 sub-datasets were performed. We achieved an overall sensitivity of 81.8%, a precision of 64.1%, and an area under the receiver operating characteristic curve (AUC) of 0.728. We have presented a reliable method for the identification of linear B-cell epitopes using the antigen's primary sequence information. Moreover, a web server, EPMLR, has been developed for linear B-cell epitope prediction: http://www.bioinfo.tsinghua.edu.cn/epitope/EPMLR/ .

5. Nonstandard Finite Difference Method Applied to a Linear Pharmacokinetics Model

Oluwaseun Egbelowo

2017-05-01

Full Text Available We extend the nonstandard finite difference method of solution to the study of pharmacokinetic-pharmacodynamic models. Pharmacokinetic (PK) models are commonly used to predict drug concentrations that drive controlled intravenous (I.V.) transfers (or infusion) and oral transfers, while pharmacokinetic and pharmacodynamic (PD) interaction models are used to provide predictions of drug concentrations affecting the response to these clinical drugs. We structure a nonstandard finite difference (NSFD) scheme for the relevant system of equations which models this pharmacokinetic process. We compare the results obtained to standard methods. The scheme is dynamically consistent and reliable in replicating complex dynamic properties of the relevant continuous models for varying step sizes. This study provides assistance in understanding the long-term behavior of the drug in the system, and validates the efficiency of the nonstandard finite difference scheme as the method of choice.

6. Validation of Individual Non-Linear Predictive Pharmacokinetic ...

3Department of Veterinary Medicine, Faculty of Agriculture, University of Novi Sad, Novi Sad, Republic of Serbia ... Purpose: To evaluate the predictive performance of phenytoin multiple dosing non-linear pharmacokinetic ... status epilepticus affects an estimated 152,000 ..... causal factors, i.e., infection, inflammation, tissue.

7. Neural Generalized Predictive Control of a non-linear Process

Sørensen, Paul Haase; Nørgård, Peter Magnus; Ravn, Ole

1998-01-01

The use of neural networks in non-linear control is made difficult by the fact that stability and robustness are not guaranteed and that the implementation in real time is non-trivial. In this paper we introduce a predictive controller, based on a neural network model, which has promising stability qualities. The controller is a non-linear version of the well-known generalized predictive controller developed in linear control theory. It involves minimization of a cost function which in the present case has to be done numerically. Therefore, we develop the necessary numerical algorithms in substantial detail and discuss the implementation difficulties. The neural generalized predictive controller is tested on a pneumatic servo system.

8. Implementation of neural network based non-linear predictive control

Sørensen, Paul Haase; Nørgård, Peter Magnus; Ravn, Ole

1999-01-01

This paper describes a control method for non-linear systems based on generalized predictive control. Generalized predictive control (GPC) was developed to control linear systems, including open-loop unstable and non-minimum-phase systems, but has also been proposed to be extended for the control of non-linear systems. GPC is model based and in this paper we propose the use of a neural network for the modeling of the system. Based on the neural network model, a controller with an extended control horizon is developed and the implementation issues are discussed, with particular emphasis on an efficient quasi-Newton algorithm. The performance is demonstrated on a pneumatic servo system.
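The GPC idea in the two records above (predict the output with a model, then minimize a cost over future controls) can be sketched in one step. The linear toy model and weights below are assumptions: the papers' model is a neural network and the minimization is done numerically with a quasi-Newton method, whereas this toy admits a closed form, cross-checked here by grid search:

```python
import numpy as np

# Toy one-step GPC: for an assumed linear model y[k+1] = a*y[k] + b*u[k],
# choose u minimizing (r - y[k+1])^2 + lam*u^2 (tracking error + control effort).
a, b, lam = 0.9, 0.5, 0.1          # hypothetical model and weighting
y, r = 1.0, 2.0                    # current output and reference

# closed-form minimizer of the quadratic cost
u_closed = b * (r - a * y) / (b**2 + lam)

# numerical minimization over a fine grid, standing in for the papers'
# quasi-Newton search over the neural-network cost surface
us = np.linspace(-10, 10, 200001)
cost = (r - (a * y + b * us))**2 + lam * us**2
u_grid = us[np.argmin(cost)]
```

With a neural-network model the cost is non-quadratic, which is exactly why the papers emphasize an efficient numerical optimizer.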

9. A Decomposition Algorithm for Mean-Variance Economic Model Predictive Control of Stochastic Linear Systems

Sokoler, Leo Emil; Dammann, Bernd; Madsen, Henrik

2014-01-01

This paper presents a decomposition algorithm for solving the optimal control problem (OCP) that arises in Mean-Variance Economic Model Predictive Control of stochastic linear systems. The algorithm applies the alternating direction method of multipliers to a reformulation of the OCP...

10. Linear and Non-linear Multi-Input Multi-Output Model Predictive Control of Continuous Stirred Tank Reactor

2015-02-01

Full Text Available In this article, a multi-input multi-output (MIMO) linear model predictive controller (LMPC) based on a state space model and a nonlinear model predictive controller based on a neural network (NNMPC) are applied to a continuous stirred tank reactor (CSTR). The idea is to have a good control system that will be able to give optimal performance, reject high load disturbances, and track set point changes. In order to study the performance of the two model predictive controllers, a MIMO proportional-integral-derivative (PID) control strategy is used as a benchmark. The LMPC, NNMPC, and PID strategies are used for controlling the residual concentration (CA) and reactor temperature (T). NNMPC shows a superior performance over the LMPC and PID controllers, presenting a smaller overshoot and shorter settling time.

11. Linear and nonlinear dynamic systems in financial time series prediction

Salim Lahmiri

2012-10-01

Full Text Available An autoregressive moving average (ARMA) process and dynamic neural networks, namely the nonlinear autoregressive moving average with exogenous inputs (NARX), are compared by evaluating their ability to predict financial time series; for instance, the S&P500 returns. Two classes of ARMA are considered. The first one is the standard ARMA model, which is a linear static system. The second one uses a Kalman filter (KF) to estimate and predict the ARMA coefficients. This model is a linear dynamic system. The forecasting ability of each system is evaluated by means of mean absolute error (MAE) and mean absolute deviation (MAD) statistics. Simulation results indicate that the ARMA-KF system performs better than the standard ARMA alone. Thus, introducing dynamics into the ARMA process improves the forecasting accuracy. In addition, the ARMA-KF outperformed the NARX. This result may suggest that the linear component found in the S&P500 return series is more dominant than the nonlinear part. In sum, we conclude that introducing dynamics into the ARMA process provides an effective system for S&P500 time series prediction.
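The ARMA-KF idea, estimating an autoregressive coefficient recursively with a scalar Kalman filter, can be sketched as follows; the data are synthetic and the noise settings are tuning assumptions, not the paper's:

```python
import numpy as np

# Treat the AR(1) coefficient phi as a slowly varying state and update it with
# a scalar Kalman filter, where the observation equation is y[t] = phi*y[t-1] + e[t].
rng = np.random.default_rng(1)
phi_true, n = 0.7, 2000
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi_true * y[t - 1] + rng.normal(0, 1.0)   # synthetic AR(1) series

phi, P = 0.0, 1.0        # coefficient estimate and its variance
q, r_var = 1e-5, 1.0     # process and measurement noise (tuning assumptions)
for t in range(1, n):
    h = y[t - 1]                        # observation "matrix" is the lagged value
    P += q                              # time update: allow the coefficient to drift
    k = P * h / (h * h * P + r_var)     # Kalman gain
    phi += k * (y[t] - h * phi)         # measurement update
    P *= (1 - k * h)
```

Letting the coefficient drift (the `q` term) is what gives the ARMA-KF model its "linear dynamic system" character, compared with a one-shot static fit.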

12. Prediction of minimum temperatures in an alpine region by linear and non-linear post-processing of meteorological models

R. Barbiero

2007-05-01

Full Text Available Model Output Statistics (MOS) refers to a method of post-processing the direct outputs of numerical weather prediction (NWP) models in order to reduce the biases introduced by a coarse horizontal resolution. This technique is especially useful in orographically complex regions, where large differences can be found between the NWP elevation model and the true orography. This study carries out a comparison of linear and non-linear MOS methods, aimed at the prediction of minimum temperatures in a fruit-growing region of the Italian Alps, based on the output of two different NWPs (ECMWF T511-L60 and LAMI-3). Temperature, of course, is a particularly important NWP output; among other roles it drives the local frost forecast, which is of great interest to agriculture. The mechanisms of cold air drainage, a distinctive aspect of mountain environments, are often unsatisfactorily captured by global circulation models. The simplest post-processing technique applied in this work was a correction for the mean bias, assessed at individual model grid points. We also implemented a multivariate linear regression on the output at the grid points surrounding the target area, and two non-linear models based on machine learning techniques: Neural Networks and Random Forest (RF). We compare the performance of all these techniques on four different NWP data sets. Downscaling the temperatures clearly improved the temperature forecasts with respect to the raw NWP output, and also with respect to the basic mean bias correction. Multivariate methods generally yielded better results, but the advantage of using non-linear algorithms was small if not negligible. RF, the best-performing method, was implemented on ECMWF prognostic output at 06:00 UTC over the 9 grid points surrounding the target area. Mean absolute errors in the prediction of 2 m temperature at 06:00 UTC were approximately 1.2°C, close to the natural variability inside the area itself.
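The simplest MOS step named above, a mean-bias correction at a single grid point, can be sketched directly; all numbers below are synthetic, not from the study:

```python
import numpy as np

# Estimate the mean bias of an NWP minimum-temperature forecast on a training
# period, then subtract it from new forecasts (the basic MOS correction).
rng = np.random.default_rng(2)
obs_train = rng.normal(2.0, 3.0, 365)                    # observed minima (deg C)
fcst_train = obs_train + 1.8 + rng.normal(0, 1.0, 365)   # NWP runs ~1.8 C warm

bias = np.mean(fcst_train - obs_train)                   # estimated mean bias

fcst_new = np.array([5.0, -1.0, 3.2])                    # new raw NWP forecasts
corrected = fcst_new - bias                              # bias-corrected forecasts
```

The multivariate-regression and Random Forest MOS variants in the study replace this single scalar offset with a function of many surrounding grid-point predictors.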

13. Applications of Kalman filters based on non-linear functions to numerical weather predictions

G. Galanis

2006-10-01

Full Text Available This paper investigates the use of non-linear functions in classical Kalman filter algorithms for the improvement of regional weather forecasts. The main aim is the implementation of non-linear polynomial mappings in a usual linear Kalman filter in order to better simulate non-linear problems in numerical weather prediction. In addition, the optimal order of the polynomials applied in such a filter is identified. This work is based on observations and corresponding numerical weather predictions of two meteorological parameters characterized by essential differences in their evolution in time, namely air temperature and wind speed. It is shown that in both cases a polynomial of low order is adequate for eliminating any systematic error, while higher-order functions lead to instabilities in the filtered results while having, at the same time, a trivial contribution to the sensitivity of the filter. It is further demonstrated that the filter is independent of the time period and the geographic location of application.

14. Applications of Kalman filters based on non-linear functions to numerical weather predictions

G. Galanis

2006-10-01

Full Text Available This paper investigates the use of non-linear functions in classical Kalman filter algorithms for the improvement of regional weather forecasts. The main aim is the implementation of non-linear polynomial mappings in a usual linear Kalman filter in order to better simulate non-linear problems in numerical weather prediction. In addition, the optimal order of the polynomials applied in such a filter is identified. This work is based on observations and corresponding numerical weather predictions of two meteorological parameters characterized by essential differences in their evolution in time, namely air temperature and wind speed. It is shown that in both cases a polynomial of low order is adequate for eliminating any systematic error, while higher-order functions lead to instabilities in the filtered results while having, at the same time, a trivial contribution to the sensitivity of the filter. It is further demonstrated that the filter is independent of the time period and the geographic location of application.

15. Non-linear aeroelastic prediction for aircraft applications

de C. Henshaw, M. J.; Badcock, K. J.; Vio, G. A.; Allen, C. B.; Chamberlain, J.; Kaynes, I.; Dimitriadis, G.; Cooper, J. E.; Woodgate, M. A.; Rampurawala, A. M.; Jones, D.; Fenwick, C.; Gaitonde, A. L.; Taylor, N. V.; Amor, D. S.; Eccles, T. A.; Denley, C. J.

2007-05-01

Current industrial practice for the prediction and analysis of flutter relies heavily on linear methods and this has led to overly conservative design and envelope restrictions for aircraft. Although the methods have served the industry well, it is clear that for a number of reasons the inclusion of non-linearity in the mathematical and computational aeroelastic prediction tools is highly desirable. The increase in available and affordable computational resources, together with major advances in algorithms, mean that non-linear aeroelastic tools are now viable within the aircraft design and qualification environment. The Partnership for Unsteady Methods in Aerodynamics (PUMA) Defence and Aerospace Research Partnership (DARP) was sponsored in 2002 to conduct research into non-linear aeroelastic prediction methods and an academic, industry, and government consortium collaborated to address the following objectives: To develop useable methodologies to model and predict non-linear aeroelastic behaviour of complete aircraft. To evaluate the methodologies on real aircraft problems. To investigate the effect of non-linearities on aeroelastic behaviour and to determine which have the greatest effect on the flutter qualification process. These aims have been very effectively met during the course of the programme and the research outputs include: New methods available to industry for use in the flutter prediction process, together with the appropriate coaching of industry engineers. Interesting results in both linear and non-linear aeroelastics, with comprehensive comparison of methods and approaches for challenging problems. Additional embryonic techniques that, with further research, will further improve aeroelastics capability. This paper describes the methods that have been developed and how they are deployable within the industrial environment. We present a thorough review of the PUMA aeroelastics programme together with a comprehensive review of the relevant research

16. Technical note: A linear model for predicting δ13 Cprotein.

Pestle, William J; Hubbe, Mark; Smith, Erin K; Stevenson, Joseph M

2015-08-01

Development of a model for the prediction of δ13Cprotein from δ13Ccollagen and Δ13Cap-co. Model-generated values could, in turn, serve as "consumer" inputs for multisource mixture modeling of paleodiet. Linear regression analysis of previously published controlled-diet data facilitated the development of a mathematical model for predicting δ13Cprotein (and an experimentally generated error term) from isotopic data routinely generated during the analysis of osseous remains (δ13Cco and Δ13Cap-co). Regression analysis resulted in a two-term linear model, δ13Cprotein (‰) = 0.78 × δ13Cco − 0.58 × Δ13Cap-co − 4.7, possessing a high correlation (R = 0.93, r² = 0.86) when applied to the analysis of human osseous remains. These predicted values are ideal for use in multisource mixture modeling of dietary protein source contribution. © 2015 Wiley Periodicals, Inc.
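The abstract's two-term model translates directly into code. The coefficients are copied from the text above; the input values in the usage note are hypothetical:

```python
def predict_d13c_protein(d13c_co, delta13c_ap_co):
    """Two-term linear model from the abstract (values in per mil):
    d13Cprotein = 0.78 * d13Cco - 0.58 * D13Cap-co - 4.7
    """
    return 0.78 * d13c_co - 0.58 * delta13c_ap_co - 4.7
```

For example, `predict_d13c_protein(-20.0, 4.0)` evaluates the model at hypothetical collagen and apatite-collagen spacing values and returns -22.62 ‰.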

17. Linear predictions of supercritical flow instability in two parallel channels

Shah, M.

2008-01-01

A steady-state linear code that can predict thermo-hydraulic instability boundaries in a two-parallel-channel system under supercritical conditions has been developed. Linear and non-linear solutions of the instability boundary in a two-parallel-channel system are also compared. The effect of gravity on the instability boundary has been analyzed by changing the orientation of the system flow from horizontal flow to vertical up-flow and vertical down-flow. Vertical up-flow is found to be more unstable than horizontal flow, and vertical down-flow is found to be the most unstable configuration. The type of instability present in each flow orientation of a parallel-channel system has been checked: the density-wave oscillation type is observed in horizontal flow and vertical up-flow, while the static type of instability is observed in vertical down-flow for the cases studied here. The parameters affecting the instability boundary, such as the heating power, inlet temperature, and inlet and outlet K-factors, are varied to assess their effects. This study is important for the design of future Generation IV nuclear reactors in which supercritical light water is proposed as the primary coolant. (author)

18. Predicting Madura cattle growth curve using non-linear model

Widyas, N.; Prastowo, S.; Widi, T. S. M.; Baliarti, E.

2018-03-01

19. Prediction of Complex Human Traits Using the Genomic Best Linear Unbiased Predictor

de los Campos, Gustavo; Vazquez, Ana I; Fernando, Rohan

2013-01-01

Despite important advances from Genome-Wide Association Studies (GWAS), for most complex human traits and diseases a sizable proportion of genetic variance remains unexplained and prediction accuracy (PA) is usually low. Evidence suggests that PA can be improved using Whole-Genome Regression (WGR) models, where phenotypes are regressed on hundreds of thousands of variants simultaneously. The Genomic Best Linear Unbiased Prediction (G-BLUP, a ridge-regression-type method) is a commonly used WGR method and has shown good predictive performance when applied to plant and animal breeding populations. However, breeding and human populations differ greatly in a number of factors that can affect the predictive performance of G-BLUP. Using theory, simulations, and real data analysis, we study the performance of G-BLUP when applied to data from related and unrelated human subjects. Under perfect linkage...
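G-BLUP's ridge-regression character can be sketched with a synthetic genotype matrix. The dimensions, the relationship-matrix construction, and the shrinkage parameter below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

# One common G-BLUP form: build a genomic relationship matrix G from centred
# genotypes, then predict genetic values as G (G + lam*I)^{-1} y.
rng = np.random.default_rng(3)
n, p = 60, 500
Z = rng.integers(0, 3, size=(n, p)).astype(float)   # 0/1/2 genotype codes
Zc = Z - Z.mean(axis=0)                             # centre each marker
G = Zc @ Zc.T / p                                   # genomic relationship matrix

beta = rng.normal(0, 0.1, p)                        # synthetic marker effects
signal = Zc @ beta
y = signal + rng.normal(0, 0.5, n)                  # phenotype = genetics + noise

lam = 1.0                                           # assumed noise/genetic variance ratio
g_hat = G @ np.linalg.solve(G + lam * np.eye(n), y) # BLUP of genetic values
```

The shrinkage toward relatives encoded in G is exactly where related and unrelated samples behave differently, which is the contrast the paper studies.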

20. Comparison of Linear Prediction Models for Audio Signals

2009-03-01

Full Text Available While linear prediction (LP) has become immensely popular in speech modeling, it does not seem to provide a good approach for modeling audio signals. This is somewhat surprising, since a tonal signal consisting of a number of sinusoids can be perfectly predicted based on an (all-pole) LP model with a model order that is twice the number of sinusoids. We provide an explanation why this result cannot simply be extrapolated to LP of audio signals. If noise is taken into account in the tonal signal model, a low-order all-pole model appears to be appropriate only when the tonal components are uniformly distributed in the Nyquist interval. Based on this observation, different alternatives to the conventional LP model can be suggested. Either the model should be changed to a pole-zero, a high-order all-pole, or a pitch prediction model, or the conventional LP model should be preceded by an appropriate frequency transform, such as frequency warping or downsampling. By comparing these alternative LP models to the conventional LP model in terms of frequency estimation accuracy, residual spectral flatness, and perceptual frequency resolution, we obtain several new and promising approaches to LP-based audio modeling.
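The claim above, that a noiseless tonal signal is perfectly predicted by an all-pole model of order twice the number of sinusoids, is easy to verify numerically for one sinusoid (order 2), since cos(wn+φ) satisfies x[n] = 2cos(w)·x[n-1] − x[n-2] exactly:

```python
import numpy as np

# One noiseless sinusoid; an order-2 LP model should predict it exactly.
n = np.arange(200)
x = np.cos(0.3 * n + 0.5)

# least-squares fit of x[t] ~ a1*x[t-1] + a2*x[t-2]
Y = x[2:]
M = np.column_stack([x[1:-1], x[:-2]])
a, *_ = np.linalg.lstsq(M, Y, rcond=None)

residual = np.max(np.abs(Y - M @ a))   # essentially zero for a noiseless tone
```

The fitted coefficients come out as a1 = 2cos(0.3) and a2 = −1; it is the addition of noise, as the abstract explains, that breaks this exact low-order predictability.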

1. Genomic prediction based on data from three layer lines: a comparison between linear methods

Calus, M.P.L.; Huang, H.; Vereijken, J.; Visscher, J.; Napel, ten J.; Windig, J.J.

2014-01-01

Background: The prediction accuracy of several linear genomic prediction models, which have previously been used for within-line genomic prediction, was evaluated for multi-line genomic prediction. Methods: Compared to a conventional BLUP (best linear unbiased prediction) model using pedigree data, we

2. Predicting recovery of cognitive function soon after stroke: differential modeling of logarithmic and linear regression.

Suzuki, Makoto; Sugimura, Yuko; Yamada, Sumio; Omori, Yoshitsugu; Miyamoto, Masaaki; Yamamoto, Jun-ichi

2013-01-01

Cognitive disorders in the acute stage of stroke are common and are important independent predictors of adverse outcome in the long term. Despite the impact of cognitive disorders on both patients and their families, it is still difficult to predict the extent or duration of cognitive impairments. The objective of the present study was, therefore, to provide data on predicting the recovery of cognitive function soon after stroke by differential modeling with logarithmic and linear regression. This study included two rounds of data collection: 57 stroke patients were enrolled in the first round to identify the time course of cognitive recovery from the early-phase group data, and 43 stroke patients in the second round to confirm that the correlation found in the early-phase group data applied to the prediction of each individual's degree of cognitive recovery. In the first round, Mini-Mental State Examination (MMSE) scores were assessed three times during hospitalization, and the scores were regressed on the logarithm and on the linear value of time. In the second round, MMSE scores from the first two scoring times after admission were used to tailor the structures of the logarithmic and linear regression formulae to an individual's degree of functional recovery. The time course of early-phase recovery of cognitive function resembled both logarithmic and linear functions. However, MMSE scores sampled at two baseline points with logarithmic regression modeling estimated cognitive recovery more accurately than linear regression modeling (logarithmic modeling: R² = 0.676). Logarithmic modeling based on MMSE scores could accurately predict the recovery of cognitive function soon after the occurrence of stroke. This logarithmic modeling is simple enough to be adopted in daily clinical practice.

3. Multispectral code excited linear prediction coding and its application in magnetic resonance images.

Hu, J H; Wang, Y; Cahill, P T

1997-01-01

This paper reports a multispectral code excited linear prediction (MCELP) method for the compression of multispectral images. Different linear prediction models and adaptation schemes have been compared. The method that uses a forward-adaptive autoregressive (AR) model has been proven to achieve a good compromise between performance, complexity, and robustness. This approach is referred to as the MFCELP method. Given a set of multispectral images, the linear predictive coefficients are updated over nonoverlapping three-dimensional (3-D) macroblocks. Each macroblock is further divided into several 3-D microblocks, and the best excitation signal for each microblock is determined through an analysis-by-synthesis procedure. The MFCELP method has been applied to multispectral magnetic resonance (MR) images. To satisfy the high quality requirement for medical images, the error between the original image set and the synthesized one is further specified using a vector quantizer. The method has been applied to images from 26 clinical MR neuro studies (20 slices/study, three spectral bands/slice, 256x256 pixels/band, 12 b/pixel). The MFCELP method provides a significant visual improvement over the discrete cosine transform (DCT) based Joint Photographic Experts Group (JPEG) method, the wavelet-transform-based embedded zerotree wavelet (EZW) coding method, and the vector tree (VT) coding method, as well as the multispectral segmented autoregressive moving average (MSARMA) method we developed previously.

4. The application of sparse linear prediction dictionary to compressive sensing in speech signals

YOU Hanxu

2016-04-01

Full Text Available Applying compressive sensing (CS), which theoretically guarantees that signal sampling and signal compression can be achieved simultaneously, to audio and speech signal processing has been one of the most popular research topics in recent years. In this paper, the K-SVD algorithm was employed to learn a sparse linear prediction dictionary serving as the sparse basis of the underlying speech signals. Compressed signals were obtained by applying a random Gaussian matrix to sample the original speech frames. Orthogonal matching pursuit (OMP) and compressive sampling matching pursuit (CoSaMP) were adopted to recover the original signals from the compressed ones. A number of experiments were carried out to investigate the impact of speech frame length, compression ratio, sparse basis and reconstruction algorithm on CS performance. Results show that a sparse linear prediction dictionary can improve the performance of speech signal reconstruction compared with a discrete cosine transform (DCT) matrix.
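The recovery step described above can be sketched with a bare-bones orthogonal matching pursuit. In this toy setup the signal is sparse in the canonical basis and only the random Gaussian sensing matrix is used, rather than a learned K-SVD sparse linear prediction dictionary:

```python
import numpy as np

def omp(Phi, y, sparsity):
    """Orthogonal matching pursuit: greedily pick the column most
    correlated with the residual, then re-fit by least squares on the
    selected support."""
    residual = y.copy()
    support = []
    x = np.zeros(Phi.shape[1])
    for _ in range(sparsity):
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x[support] = coef
    return x

# Toy CS setup: random Gaussian sensing of a 3-sparse length-64 vector
# from 32 measurements.
rng = np.random.default_rng(1)
n, m, k = 64, 32, 3
x_true = np.zeros(n)
x_true[[5, 20, 40]] = [1.0, -2.0, 1.5]
Phi = rng.normal(size=(m, n)) / np.sqrt(m)
y = Phi @ x_true
x_hat = omp(Phi, y, sparsity=k)
```

In the noiseless, well-conditioned case OMP recovers the sparse vector exactly; CoSaMP plays the same role with a different (batch) support-selection rule.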

5. Quantifying the predictive consequences of model error with linear subspace analysis

White, Jeremy T.; Doherty, John E.; Hughes, Joseph D.

2014-01-01

All computer models are simplified and imperfect simulators of complex natural systems. The discrepancy arising from simplification induces bias in model predictions, which may be amplified by the process of model calibration. This paper presents a new method to identify and quantify the predictive consequences of calibrating a simplified computer model. The method is based on linear theory, and it scales efficiently to the large numbers of parameters and observations characteristic of groundwater and petroleum reservoir models. The method is applied to a range of predictions made with a synthetic integrated surface-water/groundwater model with thousands of parameters. Several different observation processing strategies and parameterization/regularization approaches are examined in detail, including use of the Karhunen-Loève parameter transformation. Predictive bias arising from model error is shown to be prediction specific and often invisible to the modeler. The amount of calibration-induced bias is influenced by several factors, including how expert knowledge is applied in the design of parameterization schemes, the number of parameters adjusted during calibration, how observations and model-generated counterparts are processed, and the level of fit with observations achieved through calibration. Failure to properly implement any of these factors in a prediction-specific manner may increase the potential for predictive bias in ways that are not visible to the calibration and uncertainty analysis process.

6. Prediction of Mind-Wandering with Electroencephalogram and Non-linear Regression Modeling.

Kawashima, Issaku; Kumano, Hiroaki

2017-01-01

Mind-wandering (MW), task-unrelated thought, has been examined by researchers in an increasing number of articles using models to predict whether subjects are in MW, using numerous physiological variables. However, these models are not applicable in general situations. Moreover, they output only a binary classification. The current study suggests that the combination of electroencephalogram (EEG) variables and non-linear regression modeling can be a good indicator of MW intensity. We recorded EEGs of 50 subjects during the performance of a Sustained Attention to Response Task, including a thought sampling probe that inquired about the focus of attention. We calculated power and coherence values and prepared 35 patterns of variable combinations, applying Support Vector machine Regression (SVR) to them. Finally, we chose four SVR models: two of them non-linear models and the other two linear models; two of the four models are composed of a limited number of electrodes to satisfy model usefulness. Examination using the held-out data indicated that all models had robust predictive precision and provided significantly better estimations than a linear regression model using single-electrode EEG variables. Furthermore, in the limited-electrode condition, the non-linear SVR model showed significantly better precision than the linear SVR model. The method proposed in this study helps investigations into MW in various little-examined situations. Further, by measuring MW with high-temporal-resolution EEG, unclear aspects of MW, such as time-series variation, are expected to be revealed. Furthermore, our suggestion that a few electrodes can also predict MW contributes to the development of neuro-feedback studies.

8. QSAR models for prediction study of HIV protease inhibitors using support vector machines, neural networks and multiple linear regression

Rachid Darnag

2017-02-01

Full Text Available Support vector machines (SVMs) represent one of the most promising machine learning (ML) tools that can be applied to develop predictive quantitative structure–activity relationship (QSAR) models using molecular descriptors. Multiple linear regression (MLR) and artificial neural networks (ANNs) were also utilized to construct quantitative linear and non-linear models to compare with the results obtained by SVM. The prediction results are in good agreement with the experimental values of HIV activity; moreover, the results reveal the superiority of the SVM over the MLR and ANN models. The contribution of each descriptor to the structure–activity relationship was evaluated.

9. Flow discharge prediction in compound channels using linear genetic programming

Azamathulla, H. Md.; Zahiri, A.

2012-08-01

Flow discharge determination in rivers is one of the key elements in mathematical modelling for the design of river engineering projects. Because of the inundation of floodplains and sudden changes in river geometry, flow resistance equations are not applicable to compound channels. Therefore, many approaches have been developed to modify flow discharge computations. Most of these methods give satisfactory results only in laboratory flumes. Due to their ability to model complex phenomena, artificial intelligence methods have recently been employed for wide applications in various fields of water engineering. Linear genetic programming (LGP), a branch of artificial intelligence methods, is able to optimise the model structure and its components and to derive an explicit equation based on the variables of the phenomena. In this paper, a precise dimensionless equation has been derived for prediction of flood discharge using LGP. The proposed model was developed using 394 published stage-discharge data sets from laboratory and field studies of 30 compound channels. The results indicate that the LGP model performs better than the existing models.

10. Comparison of the Tangent Linear Properties of Tracer Transport Schemes Applied to Geophysical Problems.

Kent, James; Holdaway, Daniel

2015-01-01

A number of geophysical applications require the use of the linearized version of the full model. One such example is in numerical weather prediction, where the tangent linear and adjoint versions of the atmospheric model are required for the 4DVAR inverse problem. The part of the model that represents the resolved scale processes of the atmosphere is known as the dynamical core. Advection, or transport, is performed by the dynamical core. It is a central process in many geophysical applications and is a process that often has a quasi-linear underlying behavior. However, over the decades since the advent of numerical modelling, significant effort has gone into developing many flavors of high-order, shape preserving, nonoscillatory, positive definite advection schemes. These schemes are excellent in terms of transporting the quantities of interest in the dynamical core, but they introduce nonlinearity through the use of nonlinear limiters. The linearity of the transport schemes used in Goddard Earth Observing System version 5 (GEOS-5), as well as a number of other schemes, is analyzed using a simple 1D setup. The linearized version of GEOS-5 is then tested using a linear third order scheme in the tangent linear version.
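The linearity test applied to the transport schemes can be reproduced in a 1D toy setting: a first-order upwind scheme is exactly linear, while adding a limiter-like clipping step breaks the tangent linear property. A sketch (the clipping is a crude stand-in for the shape-preserving limiters discussed above, not the GEOS-5 schemes):

```python
import numpy as np

def upwind_step(q, c=0.5):
    """First-order upwind advection on a periodic grid: a linear scheme."""
    return q - c * (q - np.roll(q, 1))

def limited_step(q, c=0.5):
    """Upwind step followed by clipping to [0, 1], which makes the
    scheme nonlinear (a crude stand-in for a positivity limiter)."""
    return np.clip(upwind_step(q, c), 0.0, 1.0)

def linearity_error(step, q, dq):
    """For a perfectly linear scheme,
    step(q + dq) - step(q) == step(dq) - step(0)."""
    full = step(q + dq) - step(q)
    tlm = step(dq) - step(np.zeros_like(q))
    return np.max(np.abs(full - tlm))

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 50, endpoint=False)
q = 0.5 + 0.5 * np.sin(2 * np.pi * x)   # smooth field in [0, 1]
dq = 0.2 * rng.normal(size=50)          # finite perturbation
err_linear = linearity_error(upwind_step, q, dq)
err_limited = linearity_error(limited_step, q, dq)
```

The linear scheme's perturbation response matches its own tangent linear model to round-off, while the limited scheme shows an O(1) discrepancy wherever the clipping activates.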

11. Digital linear control theory applied to automatic stepsize control in electrical circuit simulation

Verhoeven, A.; Beelen, T.G.J.; Hautus, M.L.J.; Maten, ter E.J.W.; Di Bucchianico, A.; Mattheij, R.M.M.; Peletier, M.A.

2006-01-01

Adaptive stepsize control is used to control the local errors of the numerical solution. For optimization purposes smoother stepsize controllers are wanted, such that the errors and stepsizes also behave smoothly. We consider approaches from digital linear control theory applied to multistep
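A smoother stepsize controller of the kind referred to above can be written as a digital PI law acting on the per-step error estimates. A sketch with illustrative gains (the kI, kP values are assumptions, not the paper's):

```python
def pi_stepsize(h, err, err_prev, tol, order, kI=0.3, kP=0.4):
    """Gustafsson-style PI step size controller. Compared with the
    classical rule h * (tol/err)**(1/(order+1)), the proportional term
    (ratio of successive errors) damps oscillations, giving a smoother
    stepsize sequence."""
    k = order + 1
    return h * (tol / err) ** (kI / k) * (err_prev / err) ** (kP / k)

# Error well below tolerance -> the step grows; error above -> it shrinks.
h_grow = pi_stepsize(0.1, err=1e-4, err_prev=1e-4, tol=1e-3, order=1)
h_shrink = pi_stepsize(0.1, err=1e-2, err_prev=1e-2, tol=1e-3, order=1)
```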

13. Modified linear predictive coding approach for moving target tracking by Doppler radar

Ding, Yipeng; Lin, Xiaoyi; Sun, Ke-Hui; Xu, Xue-Mei; Liu, Xi-Yao

2016-07-01

Doppler radar is a cost-effective tool for moving target tracking, which can support a large range of civilian and military applications. A modified linear predictive coding (LPC) approach is proposed to increase the target localization accuracy of the Doppler radar. Based on the time-frequency analysis of the received echo, the proposed approach first estimates the noise statistics in real time and constructs an adaptive filter to suppress the noise interference intelligently. Then, a linear predictive model is applied to extend the available data, which helps improve the resolution of the target localization result. Compared with the traditional LPC method, which decides the extension data length empirically, the proposed approach develops an error array to evaluate the prediction accuracy and thus adjust the optimum extension data length intelligently. Finally, the prediction error array is superimposed on the predictor output to correct the prediction error. A series of experiments is conducted to illustrate the validity and performance of the proposed techniques.
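The data-extension idea can be illustrated without the radar front end: fit forward prediction coefficients, then iterate the predictor to extend the record (the adaptive noise filter and the error-array correction of the proposed method are omitted, and the extension length is fixed rather than chosen adaptively):

```python
import numpy as np

def lpc_fit(x, order):
    """Least-squares forward linear predictor coefficients."""
    A = np.array([x[i:i + order][::-1] for i in range(len(x) - order)])
    a, *_ = np.linalg.lstsq(A, x[order:], rcond=None)
    return a

def lpc_extend(x, a, n_extra):
    """Extend a data record by iterating the fitted predictor."""
    out = list(map(float, x))
    order = len(a)
    for _ in range(n_extra):
        out.append(float(np.dot(a, out[-1:-order - 1:-1])))
    return np.array(out)

# A noiseless sinusoid obeys an exact order-2 recurrence, so the
# extension continues the oscillation almost perfectly.
t = np.arange(64)
x = np.sin(2 * np.pi * t / 16)
a = lpc_fit(x, order=2)
ext = lpc_extend(x, a, n_extra=32)
truth = np.sin(2 * np.pi * np.arange(96) / 16)
```

A longer synthetic record of this kind is what sharpens the subsequent frequency (and hence Doppler velocity) estimate.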

14. Fuzzy model predictive control algorithm applied in nuclear power plant

2006-01-01

The aim of this paper is to design a predictive controller based on a fuzzy model. The Takagi-Sugeno fuzzy model with an Adaptive B-splines neuro-fuzzy implementation is used and incorporated as a predictor in a predictive controller. An optimization approach with a simplified gradient technique is used to calculate predictions of the future control actions. In this approach, adaptation of the fuzzy model using dynamic process information is carried out to build the predictive controller. The easy description of the fuzzy model and the easy computation of the gradient vector during the optimization procedure are the main advantages of the computation algorithm. The algorithm is applied to the control of a U-tube steam generator (UTSG) used for electricity generation. (author)

15. Model output statistics applied to wind power prediction

Joensen, A; Giebel, G; Landberg, L [Risoe National Lab., Roskilde (Denmark); Madsen, H; Nielsen, H A [The Technical Univ. of Denmark, Dept. of Mathematical Modelling, Lyngby (Denmark)

1999-03-01

Being able to predict the output of a wind farm online for a day or two in advance has significant advantages for utilities, such as better possibility to schedule fossil fuelled power plants and a better position on electricity spot markets. In this paper prediction methods based on Numerical Weather Prediction (NWP) models are considered. The spatial resolution used in NWP models implies that these predictions are not valid locally at a specific wind farm. Furthermore, due to the non-stationary nature and complexity of the processes in the atmosphere, and occasional changes of NWP models, the deviation between the predicted and the measured wind will be time dependent. If observational data are available, and if the deviation between the predictions and the observations exhibits systematic behavior, this should be corrected for; if statistical methods are used, this approach is usually referred to as MOS (Model Output Statistics). The influence of atmospheric turbulence intensity, topography, prediction horizon length and auto-correlation of wind speed and power is considered, and to take the time-variations into account, adaptive estimation methods are applied. Three estimation techniques are considered and compared: Extended Kalman Filtering, recursive least squares and a new modified recursive least squares algorithm. (au) EU-JOULE-3. 11 refs.
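Of the three estimators compared, recursive least squares with exponential forgetting is the easiest to sketch. A minimal stand-alone version, run on a hypothetical drifting NWP-to-observation relation (the paper's modified algorithm and the Kalman variant are not reproduced here):

```python
import numpy as np

class RecursiveLS:
    """Exponentially forgetting recursive least squares: the kind of
    adaptive estimator used to track a time-varying MOS correction."""
    def __init__(self, dim, lam=0.98, delta=100.0):
        self.w = np.zeros(dim)          # current coefficient estimate
        self.P = delta * np.eye(dim)    # inverse-information matrix
        self.lam = lam                  # forgetting factor

    def update(self, phi, y):
        phi = np.asarray(phi, float)
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)        # gain
        self.w = self.w + k * (y - phi @ self.w)  # correct
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return self.w

# Hypothetical MOS correction: observed power = a * NWP-predicted + b,
# with (a, b) changing halfway through; RLS tracks the current values.
rng = np.random.default_rng(3)
est = RecursiveLS(dim=2)
for t in range(400):
    a, b = (1.2, 0.5) if t < 200 else (0.8, 1.0)
    pred = rng.uniform(0, 10)
    obs = a * pred + b + rng.normal(scale=0.05)
    w = est.update([pred, 1.0], obs)
```

The forgetting factor trades tracking speed against noise sensitivity, which is exactly the time-variation issue the abstract raises.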

16. Econometrics analysis of consumer behaviour: a linear expenditure system applied to energy

Giansante, C.; Ferrari, V.

1996-12-01

In the economics literature the specification of expenditure systems is a well-known subject. The problem is to define a coherent representation of consumer behaviour through functional forms that are easy to estimate. This work uses the Stone-Geary Linear Expenditure System and its multi-level decision process version. The Linear Expenditure System is characterized by an easily computed estimation procedure, and its multi-level specification allows substitution and complementarity relations between goods. Moreover, the separability condition on the utility function, on which the Utility Tree Approach is based, justifies using an estimation procedure in two or more steps. This allows a high degree of disaggregation of expenditure categories, impossible to reach with the plain Linear Expenditure System. The analysis is applied to the energy sector.
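The Stone-Geary system itself is compact enough to state in a few lines: demand for each good is its subsistence quantity plus a fixed share of "supernumerary" income (income left after buying all subsistence quantities). A sketch with made-up prices and parameters:

```python
def les_demand(prices, income, gamma, beta):
    """Stone-Geary linear expenditure system:
    q_i = gamma_i + (beta_i / p_i) * (income - sum_j p_j * gamma_j)."""
    assert abs(sum(beta) - 1.0) < 1e-12     # marginal shares sum to 1
    supernumerary = income - sum(p * g for p, g in zip(prices, gamma))
    return [g + b * supernumerary / p
            for p, g, b in zip(prices, gamma, beta)]

# Hypothetical three-good example (energy, food, other):
prices = [2.0, 1.0, 4.0]
gamma = [1.0, 3.0, 0.5]    # subsistence quantities
beta = [0.2, 0.3, 0.5]     # marginal budget shares
q = les_demand(prices, income=20.0, gamma=gamma, beta=beta)
spending = sum(p * x for p, x in zip(prices, q))
```

Because the marginal shares sum to one, the implied spending exhausts the budget exactly, which is the adding-up property that makes the system coherent.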

17. Linear and nonlinear schemes applied to pitch control of wind turbines.

Geng, Hua; Yang, Geng

2014-01-01

Linear controllers have been employed in industrial applications for many years, but sometimes they are noneffective on the system with nonlinear characteristics. This paper discusses the structure, performance, implementation cost, advantages, and disadvantages of different linear and nonlinear schemes applied to the pitch control of the wind energy conversion systems (WECSs). The linear controller has the simplest structure and is easily understood by the engineers and thus is widely accepted by the industry. In contrast, nonlinear schemes are more complicated, but they can provide better performance. Although nonlinear algorithms can be implemented in a powerful digital processor nowadays, they need time to be accepted by the industry and their reliability needs to be verified in the commercial products. More information about the system nonlinear feature is helpful to simplify the controller design. However, nonlinear schemes independent of the system model are more robust to the uncertainties or deviations of the system parameters.

18. Machine learning applied to the prediction of citrus production

Díaz, I.; Mazza, S.M.; Combarro, E.F.; Giménez, L.I.; Gaiad, J.E.

2017-07-01

An in-depth knowledge about variables affecting production is required in order to predict global production and take decisions in agriculture. Machine learning is a technique used in agricultural planning and precision agriculture. This work (i) studies the effectiveness of machine learning techniques for predicting orchards production; and (ii) variables affecting this production were also identified. Data from 964 orchards of lemon, mandarin, and orange in Corrientes, Argentina are analysed. Graphic and analytical descriptive statistics, correlation coefficients, principal component analysis and Biplot were performed. Production was predicted via M5-Prime, a model regression tree constructor which produces a classification based on piecewise linear functions. For all the species studied, the most informative variable was the trees' age; in mandarin and orange orchards, age was followed by between and within row distances; irrigation also affected mandarin production. Also, the performance of M5-Prime in the prediction of production is adequate, as shown when measured with correlation coefficients (~0.8) and relative mean absolute error (~0.1). These results show that M5-Prime is an appropriate method to classify citrus orchards according to production and, in addition, it allows for identifying the most informative variables affecting production by tree.

1. Applying model predictive control to power system frequency control

Ersdal, AM; Imsland, L; Cecilio, IM; Fabozzi, D; Thornhill, NF

2013-01-01

Model predictive control (MPC) is investigated as a control method which may offer advantages over the control methods applied today for frequency control of power systems, especially in the presence of increased renewable energy penetration. The MPC includes constraints on both generation amount and generation rate of change, and it is tested on a one-area system. The proposed MPC is tested against a conventional proportional-integral (PI) cont...

2. SCADA system with predictive controller applied to irrigation canals

Figueiredo, João; Botto, Miguel; Rijo, Manuel

2013-01-01

This paper applies a model predictive controller (MPC) to an automatic water canal with sensors and actuators controlled by a network (programmable logic controller), and supervised by a SCADA system (supervisory control and a data acquisition). This canal is composed by a set of distributed sub-systems that control the water level in each canal pool, constrained by discharge gates (control variables) and water off-takes (disturbances). All local controllers are available through an industria...

3. Dynamics and control of quadcopter using linear model predictive control approach

Islam, M.; Okasha, M.; Idres, M. M.

2017-12-01

This paper investigates the dynamics and control of a quadcopter using the Model Predictive Control (MPC) approach. The dynamic model is of high fidelity and nonlinear, with six degrees of freedom that include disturbances and model uncertainties. The control approach is developed based on MPC to track different reference trajectories ranging from simple ones such as circular to complex helical trajectories. In this control technique, a linearized model is derived and the receding horizon method is applied to generate the optimal control sequence. Although MPC is computationally expensive, it is highly effective in dealing with the different types of nonlinearities and constraints such as actuators' saturation and model uncertainties. The MPC parameters (control and prediction horizons) are selected by a trial-and-error approach. Several simulation scenarios are performed to examine and evaluate the performance of the proposed control approach using the MATLAB and Simulink environment. Simulation results show that this control approach is highly effective in tracking a given reference trajectory.
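The receding-horizon step described above can be sketched for a toy linear plant: a discrete double integrator rather than the linearized quadcopter, and without the actuator constraints (an unconstrained finite-horizon least-squares problem replaces the constrained QP):

```python
import numpy as np

# Discrete double integrator (position, velocity), dt = 0.1 s.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])

def mpc_step(x, x_ref, N=20, r=0.01):
    """One receding-horizon step: build batch prediction matrices over
    horizon N, minimise sum ||x_k - x_ref||^2 + r*u_k^2 by least
    squares, and apply only the first input."""
    n = 2
    F = np.zeros((N * n, n))      # free response: x_{k+1} = A^{k+1} x + ...
    G = np.zeros((N * n, N))      # forced response from inputs u_0..u_{N-1}
    Ak = np.eye(n)
    for k in range(N):
        Ak = A @ Ak
        F[k * n:(k + 1) * n] = Ak
        for j in range(k + 1):
            G[k * n:(k + 1) * n, j:j + 1] = np.linalg.matrix_power(A, k - j) @ B
    target = np.tile(x_ref, N) - F @ x
    # Regularised least squares: [G; sqrt(r) I] u ~= [target; 0]
    H = np.vstack([G, np.sqrt(r) * np.eye(N)])
    rhs = np.concatenate([target, np.zeros(N)])
    u, *_ = np.linalg.lstsq(H, rhs, rcond=None)
    return float(u[0])            # receding horizon: apply first input only

# Closed loop: drive the state from (1, 0) to the origin.
x = np.array([1.0, 0.0])
for _ in range(100):
    u = mpc_step(x, np.zeros(2))
    x = A @ x + B.flatten() * u
```

Re-solving the horizon problem at every step and discarding all but the first input is the receding-horizon mechanism the abstract refers to; the full method adds input/state constraints, turning each step into a QP.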

4. Fast Algorithms for High-Order Sparse Linear Prediction with Applications to Speech Processing

Jensen, Tobias Lindstrøm; Giacobello, Daniele; van Waterschoot, Toon

2016-01-01

In speech processing applications, imposing sparsity constraints on high-order linear prediction coefficients and prediction residuals has proven successful in overcoming some of the limitation of conventional linear predictive modeling. However, this modeling scheme, named sparse linear prediction...... problem with lower accuracy than in previous work. In the experimental analysis, we clearly show that a solution with lower accuracy can achieve approximately the same performance as a high accuracy solution both objectively, in terms of prediction gain, as well as with perceptual relevant measures, when...... evaluated in a speech reconstruction application....

5. The log-linear return approximation, bubbles, and predictability

Engsted, Tom; Pedersen, Thomas Quistgaard; Tanggaard, Carsten

We study in detail the log-linear return approximation introduced by Campbell and Shiller (1988a). First, we derive an upper bound for the mean approximation error, given stationarity of the log dividendprice ratio. Next, we simulate various rational bubbles which have explosive conditional expec...

6. The Log-Linear Return Approximation, Bubbles, and Predictability

Engsted, Tom; Pedersen, Thomas Quistgaard; Tanggaard, Carsten

2012-01-01

We study in detail the log-linear return approximation introduced by Campbell and Shiller (1988a). First, we derive an upper bound for the mean approximation error, given stationarity of the log dividend-price ratio. Next, we simulate various rational bubbles which have explosive conditional expe...

7. Considering linear generator copper losses on model predictive control for a point absorber wave energy converter

Montoya Andrade, Dan-El; Villa Jaén, Antonio de la; García Santana, Agustín

2014-01-01

Highlights: • We considered the linear generator copper losses in the proposed MPC strategy. • We maximized the power transferred to the generator side power converter. • The proposed MPC increases the useful average power injected into the grid. • The stress level of the PTO system can be reduced by the proposed MPC. - Abstract: The amount of energy that a wave energy converter can extract depends strongly on the control strategy applied to the power take-off system. It is well known that, ideally, the reactive control allows for maximum energy extraction from waves. However, the reactive control is intrinsically noncausal in practice and requires some kind of causal approach to be applied. Moreover, this strategy does not consider physical constraints and this could be a problem because the system could achieve unacceptable dynamic values. These, and other control techniques have focused on the wave energy extraction problem in order to maximize the energy absorbed by the power take-off device without considering the possible losses in intermediate devices. In this sense, a reactive control that considers the linear generator copper losses has been recently proposed to increase the useful power injected into the grid. Among the control techniques that have emerged recently, the model predictive control represents a promising strategy. This approach performs an optimization process on a time prediction horizon incorporating dynamic constraints associated with the physical features of the power take-off system. This paper proposes a model predictive control technique that considers the copper losses in the control optimization process of point absorbers with direct drive linear generators. This proposal makes the most of reactive control as it considers the copper losses, and it makes the most of the model predictive control, as it considers the system constraints. This means that the useful power transferred from the linear generator to the power

8. Linear genetic programming application for successive-station monthly streamflow prediction

Danandeh Mehr, Ali; Kahya, Ercan; Yerdelen, Cahit

2014-09-01

In recent decades, artificial intelligence (AI) techniques have emerged as a branch of computer science for modelling a wide range of hydrological phenomena. A number of studies are still comparing these techniques in order to find more effective approaches in terms of accuracy and applicability. In this study, we examined the ability of the linear genetic programming (LGP) technique to model the successive-station monthly streamflow process, as an applied alternative for streamflow prediction. A comparative efficiency study between LGP and three different artificial neural network algorithms, namely feed forward back propagation (FFBP), generalized regression neural networks (GRNN), and radial basis function (RBF), has also been presented in this study. To this end, we first put forward six different successive-station monthly streamflow prediction scenarios subjected to training by LGP and FFBP using the field data recorded at two gauging stations on Çoruh River, Turkey. Based on Nash-Sutcliffe and root mean squared error measures, we then compared the efficiency of these techniques and selected the best prediction scenario. Eventually, GRNN and RBF algorithms were utilized to restructure the selected scenario and to compare with the corresponding FFBP and LGP. Our results indicated the promising role of LGP for successive-station monthly streamflow prediction, providing more accurate results than those of all the ANN algorithms. We found an explicit LGP-based expression evolved by only the basic arithmetic functions as the best prediction model for the river, which uses the records of both the target and upstream stations.

9. Dependence of total dose response of bipolar linear microcircuits on applied dose rate

McClure, S.; Will, W.; Perry, G.; Pease, R.L.

1994-01-01

The effect of dose rate on the total dose radiation hardness of three commercial bipolar linear microcircuits is investigated. Total dose tests of linear bipolar microcircuits show larger degradation at 0.167 rad/s than at 90 rad/s even after the high dose rate test is followed by a room temperature plus a 100 C anneal. No systematic correlation could be found for degradation at low dose rate versus high dose rate and anneal. Comparison of the low dose rate with the high dose rate anneal data indicates that MIL-STD-883, method 1019.4 is not a worst-case test method when applied to bipolar microcircuits for low dose rate space applications

10. Enhanced PID vs model predictive control applied to BLDC motor

Gaya, M. S.; Muhammad, Auwal; Aliyu Abdulkadir, Rabiu; Salim, S. N. S.; Madugu, I. S.; Tijjani, Aminu; Aminu Yusuf, Lukman; Dauda Umar, Ibrahim; Khairi, M. T. M.

2018-01-01

BrushLess Direct Current (BLDC) motor is a multivariable and highly complex nonlinear system. Variation of internal parameter values with environment or reference signal increases the difficulty in controlling the BLDC effectively. Advanced control strategies (like model predictive control) often have to be integrated to satisfy the control desires. Enhancing or properly tuning a conventional algorithm results in achieving the desired performance. This paper presents a performance comparison of Enhanced PID and Model Predictive Control (MPC) applied to a brushless direct current motor. The simulation results demonstrated that the PSO-PID is slightly better than the PID and MPC in tracking the trajectory of the reference signal. The proposed schemes could be useful algorithms for the system.
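For contrast with MPC, the conventional side of the comparison is a discrete PID loop. A sketch on a hypothetical first-order speed plant (the gains are illustrative, not the paper's PSO tuning, and the plant is not a BLDC model):

```python
class PID:
    """Textbook discrete PID with output clamping."""
    def __init__(self, kp, ki, kd, dt, u_min=-10.0, u_max=10.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0
        self.u_min, self.u_max = u_min, u_max

    def step(self, ref, meas):
        err = ref - meas
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        u = self.kp * err + self.ki * self.integral + self.kd * deriv
        return min(self.u_max, max(self.u_min, u))

# Toy first-order speed loop: w' = -w + u, discretised with Euler steps.
pid = PID(kp=4.0, ki=2.0, kd=0.05, dt=0.01)
w = 0.0
for _ in range(2000):
    u = pid.step(ref=1.0, meas=w)
    w += 0.01 * (-w + u)
```

The integral term removes the steady-state error; what PID cannot do, and what motivates MPC in the abstract, is anticipate the reference or respect constraints by prediction rather than by clamping.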

11. Classical linear-control analysis applied to business-cycle dynamics and stability

Wingrove, R. C.

1983-01-01

Linear control analysis is applied as an aid in understanding the fluctuations of business cycles in the past, and to examine monetary policies that might improve stabilization. The analysis shows how different policies change the frequency and damping of the economic system dynamics, and how they modify the amplitude of the fluctuations that are caused by random disturbances. Examples are used to show how policy feedbacks and policy lags can be incorporated, and how different monetary strategies for stabilization can be analytically compared. Representative numerical results are used to illustrate the main points.

12. Linear-quadratic model predictions for tumor control probability

Yaes, R.J.

1987-01-01

Sigmoid dose-response curves for tumor control are calculated from the linear-quadratic model parameters α and β, obtained from human epidermoid carcinoma cell lines, and are much steeper than the clinical dose-response curves for head and neck cancers. One possible explanation is the presence of small radiation-resistant clones arising from mutations in an initially homogeneous tumor. Using the mutation theory of Delbruck and Luria and of Goldie and Coldman, the authors discuss the implications of such radiation-resistant clones for clinical radiation therapy.
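The dose-response calculation described above can be sketched as follows: the linear-quadratic model gives a per-fraction surviving fraction exp(-(αd + βd²)), and Poisson statistics turn the expected number of surviving clonogens into a tumor control probability. The parameter values below are illustrative defaults, not taken from the paper.

```python
import math

def tcp(total_dose, n_fractions, alpha, beta, n_clonogens):
    """Tumor control probability from the linear-quadratic model.

    Surviving fraction for one fraction of dose d is exp(-(alpha*d + beta*d**2));
    Poisson statistics then give TCP = exp(-expected surviving clonogens).
    """
    d = total_dose / n_fractions                       # dose per fraction (Gy)
    sf_per_fraction = math.exp(-(alpha * d + beta * d * d))
    surviving_cells = n_clonogens * sf_per_fraction ** n_fractions
    return math.exp(-surviving_cells)

# Illustrative (hypothetical) parameters: alpha = 0.3 /Gy, beta = 0.03 /Gy^2,
# 1e7 clonogens, conventional 2 Gy fractions.
curve = {dose: tcp(dose, dose // 2, 0.3, 0.03, 1e7) for dose in (50, 60, 70)}
```

Sweeping the total dose traces out the sigmoid: TCP rises steeply once the expected number of survivors drops below about one.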

13. Genomic prediction based on data from three layer lines using non-linear regression models.

Huang, Heyun; Windig, Jack J; Vereijken, Addie; Calus, Mario P L

2014-11-06

Most studies on genomic prediction with reference populations that include multiple lines or breeds have used linear models. Data heterogeneity due to using multiple populations may conflict with model assumptions used in linear regression methods. In an attempt to alleviate potential discrepancies between assumptions of linear models and multi-population data, two types of alternative models were used: (1) a multi-trait genomic best linear unbiased prediction (GBLUP) model that modelled trait by line combinations as separate but correlated traits and (2) non-linear models based on kernel learning. These models were compared to conventional linear models for genomic prediction for two lines of brown layer hens (B1 and B2) and one line of white hens (W1). The three lines each had 1004 to 1023 training and 238 to 240 validation animals. Prediction accuracy was evaluated by estimating the correlation between observed phenotypes and predicted breeding values. When the training dataset included only data from the evaluated line, non-linear models yielded at best a similar accuracy as linear models. In some cases, when adding a distantly related line, the linear models showed a slight decrease in performance, while non-linear models generally showed no change in accuracy. When only information from a closely related line was used for training, linear models and non-linear radial basis function (RBF) kernel models performed similarly. The multi-trait GBLUP model took advantage of the estimated genetic correlations between the lines. Combining linear and non-linear models improved the accuracy of multi-line genomic prediction. Linear models and non-linear RBF models performed very similarly for genomic prediction, despite the expectation that non-linear models could deal better with the heterogeneous multi-population data. This heterogeneity of the data can be overcome by modelling trait by line combinations as separate but correlated traits, which avoids the occasional

14. Non-linear predictive control of a LEGO mobile robot

Merabti, H.; Bouchemal, B.; Belarbi, K.; Boucherma, D.; Amouri, A.

2014-10-01

Metaheuristics are general-purpose heuristics that have shown great potential for the solution of difficult optimization problems. In this work, we apply a metaheuristic, namely particle swarm optimization (PSO), to the optimization problem arising in NLMPC. This algorithm is easy to code and may be considered an alternative to the more classical solution procedures. The PSO-NLMPC is applied to control a mobile robot for trajectory tracking and obstacle avoidance. Experimental results show the strength of this approach.
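A minimal PSO of the kind used here can be sketched as follows. The toy quadratic cost stands in for the NMPC tracking objective, and all parameters (inertia w, acceleration constants c1, c2, swarm size) are conventional defaults, not the paper's settings.

```python
import random

def pso(cost, dim, n_particles=30, iters=100, bounds=(-5.0, 5.0),
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best particle swarm optimizer (minimization)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # personal best positions
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # Inertia + cognitive pull (pbest) + social pull (gbest).
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

# Toy sphere cost standing in for the receding-horizon tracking cost.
best, best_cost = pso(lambda x: sum(xi * xi for xi in x), dim=2)
```

In an NMPC setting, `cost` would evaluate the predicted tracking error over the horizon for a candidate control sequence, and `best` would supply the control inputs to apply.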

15. Prediction of linear B-cell epitopes of hepatitis C virus for vaccine development

2015-01-01

This study proposes an interpretable rule-mining system, IRMS-BE, for extracting interpretable rules using informative physicochemical properties, and a web server, Bcell-HCV, for predicting linear B-cell epitopes of HCV. IRMS-BE may also be applied to predict B-cell epitopes for other viruses, which benefits vaccine development for these viruses without significant modification. Bcell-HCV is useful for identifying B-cell epitopes of HCV antigens to help vaccine development, and is available at http://e045.life.nctu.edu.tw/BcellHCV. PMID:26680271

16. Warped Linear Prediction of Physical Model Excitations with Applications in Audio Compression and Instrument Synthesis

Glass, Alexis; Fukudome, Kimitoshi

2004-12-01

A sound recording of a plucked string instrument is encoded and resynthesized using two stages of prediction. In the first stage of prediction, a simple physical model of a plucked string is estimated and the instrument excitation is obtained. The second stage of prediction compensates for the simplicity of the model in the first stage by encoding either the instrument excitation or the model error using warped linear prediction. These two methods of compensation are compared with each other and with single-stage warped linear prediction, adjustments are introduced, and their applications to instrument synthesis and MPEG-4 audio compression within the structured audio format are discussed.

17. Bayesian techniques for fatigue life prediction and for inference in linear time dependent PDEs

Scavino, Marco

2016-01-08

In this talk we introduce first the main characteristics of a systematic statistical approach to model calibration, model selection and model ranking when stress-life data are drawn from a collection of records of fatigue experiments. Focusing on Bayesian prediction assessment, we consider fatigue-limit models and random fatigue-limit models under different a priori assumptions. In the second part of the talk, we present a hierarchical Bayesian technique for the inference of the coefficients of time dependent linear PDEs, under the assumption that noisy measurements are available in both the interior of a domain of interest and from boundary conditions. We present a computational technique based on the marginalization of the contribution of the boundary parameters and apply it to inverse heat conduction problems.

18. Force prediction in permanent magnet flat linear motors (abstract)

Eastham, J.F.; Akmese, R.

1991-01-01

The advent of neodymium iron boron rare-earth permanent magnet material has afforded the opportunity to construct linear machines of high force-to-weight ratio. The paper describes the design and construction of an axial-flux machine and rotating drum test rig. The machine occupies an arc of 45° on a drum 1.22 m in diameter. The excitation is provided by blocks of NdFeB material which are skewed in order to minimize the force variations due to slotting. The stator carries a three-phase short-chorded double-layer winding of four poles. The machine is supplied by a PWM inverter, the fundamental component of which is phase-locked to the rotor position so that a 'dc brushless' drive system is produced. Electromagnetic forces including ripple forces are measured at supply frequencies up to 100 Hz. They are compared with finite-element analysis which calculates the force variation over the time period. The paper then considers some of the causes of ripple torque. In particular, the force production due solely to the permanent magnet excitation is considered. This has two important components, each acting along the line of motion of the machine: one is due to slotting and the other is due to the finite length of the primary. In the practical machine the excitation poles are skewed to minimize the slotting force, and the effectiveness of this is confirmed by both results from the experiments and the finite-element analysis. The end-effect force is shown to have a space period of twice that of the excitation. The amplitude of this force and its period are again confirmed by practical results.

19. Hourly predictive Levenberg-Marquardt ANN and multiple linear regression models for predicting dew point temperature

2012-08-01

In this study, the ability of two models, multiple linear regression (MLR) and a Levenberg-Marquardt (LM) feed-forward neural network, to estimate the hourly dew point temperature was examined. Dew point temperature is the temperature at which water vapor in the air condenses into liquid. This temperature can be useful in estimating meteorological variables such as fog, rain, snow, dew, and evapotranspiration, and in investigating agronomical issues such as stomatal closure in plants. The availability of hourly records of climatic data (air temperature, relative humidity and pressure) which could be used to predict dew point temperature initiated the modeling exercise. Additionally, the wind vector (wind speed magnitude and direction) and a conceptual input of weather condition were employed as other input variables. Three quantitative standard statistical performance evaluation measures, i.e. the root mean squared error, mean absolute error, and absolute logarithmic Nash-Sutcliffe efficiency coefficient (|Log(NS)|), were employed to evaluate the performances of the developed models. The results showed that applying the wind vector and weather condition as input vectors along with meteorological variables could slightly increase the ANN and MLR predictive accuracy. The results also revealed that LM-NN was superior to the MLR model and that the best performance was obtained by considering all potential input variables in terms of the different evaluation criteria.
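The MLR baseline compared above can be sketched with ordinary least squares via the normal equations, evaluated with the same RMSE/MAE measures. The synthetic "dew point" relation and its coefficients below are invented for illustration only, not the study's data.

```python
import math
import random

def fit_mlr(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y.

    X is a list of rows; an intercept column of 1s should be included.
    Solved with Gaussian elimination and partial pivoting.
    """
    n, p = len(X), len(X[0])
    A = [[sum(X[i][j] * X[i][k] for i in range(n)) for k in range(p)] for j in range(p)]
    c = [sum(X[i][j] * y[i] for i in range(n)) for j in range(p)]
    for j in range(p):
        piv = max(range(j, p), key=lambda r: abs(A[r][j]))
        A[j], A[piv] = A[piv], A[j]
        c[j], c[piv] = c[piv], c[j]
        for r in range(j + 1, p):
            f = A[r][j] / A[j][j]
            for k in range(j, p):
                A[r][k] -= f * A[j][k]
            c[r] -= f * c[j]
    b = [0.0] * p
    for j in reversed(range(p)):
        b[j] = (c[j] - sum(A[j][k] * b[k] for k in range(j + 1, p))) / A[j][j]
    return b

# Synthetic hourly records: dew point ~ a + b1*T + b2*RH (illustrative only).
rng = random.Random(1)
rows, target = [], []
for _ in range(200):
    T, RH = rng.uniform(0, 35), rng.uniform(20, 100)
    rows.append([1.0, T, RH])
    target.append(-10.0 + 0.9 * T + 0.12 * RH + rng.gauss(0, 0.3))
b = fit_mlr(rows, target)
pred = [sum(bi * xi for bi, xi in zip(b, row)) for row in rows]
rmse = math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, target)) / len(target))
mae = sum(abs(p - t) for p, t in zip(pred, target)) / len(target)
```

With the true relation linear in the inputs, the fit recovers the generating coefficients up to noise, and RMSE approaches the noise standard deviation.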

20. Linear mixing model applied to coarse spatial resolution data from multispectral satellite sensors

Holben, Brent N.; Shimabukuro, Yosio E.

1993-01-01

A linear mixing model was applied to coarse spatial resolution data from the NOAA Advanced Very High Resolution Radiometer. The reflective component of the 3.55-3.95 micron channel was used with the two reflective channels 0.58-0.68 micron and 0.725-1.1 micron to run a constrained least squares model to generate fraction images for an area in the west central region of Brazil. The fraction images were compared with an unsupervised classification derived from Landsat TM data acquired on the same day. The relationship between the fraction images and normalized difference vegetation index images show the potential of the unmixing techniques when using coarse spatial resolution data for global studies.
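With two endmembers, the constrained least-squares unmixing step used above reduces to a closed form once the sum-to-one constraint is substituted in. The endmember reflectances below are illustrative placeholders, not AVHRR values.

```python
def unmix_two_endmembers(pixel, e1, e2):
    """Least-squares fraction f of endmember e1 in a pixel spectrum, under
    the sum-to-one constraint: pixel ≈ f*e1 + (1-f)*e2.

    Substituting the constraint leaves a 1-D least-squares problem with
    closed form f = <r - e2, e1 - e2> / <e1 - e2, e1 - e2>.
    """
    num = sum((r - b) * (a - b) for r, a, b in zip(pixel, e1, e2))
    den = sum((a - b) ** 2 for a, b in zip(e1, e2))
    f = num / den
    return min(1.0, max(0.0, f))   # clip to a physically meaningful fraction

# Hypothetical endmember reflectances in three bands:
vegetation = [0.05, 0.08, 0.45]
soil = [0.15, 0.20, 0.30]
mixed = [0.5 * v + 0.5 * s for v, s in zip(vegetation, soil)]
f_veg = unmix_two_endmembers(mixed, vegetation, soil)
```

Applying this per pixel produces a fraction image; with more than two endmembers the same idea becomes a constrained least-squares solve rather than a closed form.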

1. Applying Monte Carlo Concept and Linear Programming in Modern Portfolio Theory to Obtain Best Weighting Structure

Tumpal Sihombing

2013-01-01

The world is entering an era of recession: the trend is bearish and the market is not favorable. The capital markets in every major country experienced great losses, and people suffered in their investments. The Jakarta Composite Index (JCI) has shown a great downturn over the past year, and the trend remains bearish. Therefore, rational investors should consider restructuring their portfolios to set a bigger proportion in bonds and cash instead of stocks. Investors can apply modern portfolio theory by Harry Markowitz to find the optimum asset allocation for their portfolio. Higher return is always associated with higher risk. This study shows investors how to find the lowest-risk portfolio investment by providing several structures of portfolio weighting. In this way, investors can compare and make decisions based on risk-return considerations as well as opportunity cost. Keywords: Modern portfolio theory, Monte Carlo, linear programming
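The Monte Carlo weighting idea can be sketched by sampling random long-only weight vectors that sum to one and keeping the minimum-variance candidate. The asset statistics below (three asset classes: stocks, bonds, cash) are invented for illustration, not market data.

```python
import random

def portfolio_stats(w, mean_returns, cov):
    """Expected return and variance of a weighted portfolio."""
    n = len(w)
    ret = sum(wi * m for wi, m in zip(w, mean_returns))
    var = sum(w[i] * w[j] * cov[i][j] for i in range(n) for j in range(n))
    return ret, var

def monte_carlo_weights(mean_returns, cov, n_trials=5000, seed=0):
    """Sample random long-only weight vectors and keep the minimum-variance one."""
    rng = random.Random(seed)
    n = len(mean_returns)
    best_w, best_ret, best_var = None, 0.0, float("inf")
    for _ in range(n_trials):
        raw = [rng.random() for _ in range(n)]
        s = sum(raw)
        w = [r / s for r in raw]           # normalize so weights sum to 1
        ret, var = portfolio_stats(w, mean_returns, cov)
        if var < best_var:
            best_w, best_ret, best_var = w, ret, var
    return best_w, best_ret, best_var

# Hypothetical annual statistics: [stocks, bonds, cash].
mu = [0.08, 0.04, 0.01]
cov = [[0.040, 0.002, 0.000],
       [0.002, 0.010, 0.000],
       [0.000, 0.000, 0.0001]]
w, ret, var = monte_carlo_weights(mu, cov)
```

As expected in a bearish setting, the minimum-variance weighting found this way leans heavily toward the low-volatility asset; comparing several such weightings against their returns is the risk-return trade-off the study describes.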

2. Linear Elastic Waves - Series: Cambridge Texts in Applied Mathematics (No. 26)

Harris, John G.

2001-10-01

Wave propagation and scattering are among the most fundamental processes that we use to comprehend the world around us. While these processes are often very complex, one way to begin to understand them is to study wave propagation in the linear approximation. This is a book describing such propagation using, as a context, the equations of elasticity. Two unifying themes are used. The first is that an understanding of plane wave interactions is fundamental to understanding more complex wave interactions. The second is that waves are best understood in an asymptotic approximation where they are free of the complications of their excitation and are governed primarily by their propagation environments. The topics covered include reflection, refraction, the propagation of interfacial waves, integral representations, radiation and diffraction, and propagation in closed and open waveguides. Linear Elastic Waves is an advanced-level textbook directed at applied mathematicians, seismologists, and engineers. Aimed at beginning graduate students, it includes examples and exercises and has applications in a wide range of disciplines.

3. Predicting Fuel Ignition Quality Using 1H NMR Spectroscopy and Multiple Linear Regression

Abdul Jameel, Abdul Gani; Naser, Nimal; Emwas, Abdul-Hamid M.; Dooley, Stephen; Sarathy, Mani

2016-01-01

An improved model for the prediction of ignition quality of hydrocarbon fuels has been developed using 1H nuclear magnetic resonance (NMR) spectroscopy and multiple linear regression (MLR) modeling. Cetane number (CN) and derived cetane number (DCN)

4. A novel simple QSAR model for the prediction of anti-HIV activity using multiple linear regression analysis.

Afantitis, Antreas; Melagraki, Georgia; Sarimveis, Haralambos; Koutentis, Panayiotis A; Markopoulos, John; Igglessi-Markopoulou, Olga

2006-08-01

A quantitative structure-activity relationship was obtained by applying multiple linear regression analysis to a series of 80 1-[2-hydroxyethoxy-methyl]-6-(phenylthio)thymine (HEPT) derivatives with significant anti-HIV activity. For the selection of the best among 37 different descriptors, the Elimination Selection Stepwise Regression Method (ES-SWR) was utilized. The resulting QSAR model (R²(CV) = 0.8160; S(PRESS) = 0.5680) proved to be very accurate in both the training and predictive stages.

5. Nonlinear Model-Based Predictive Control applied to Large Scale Cryogenic Facilities

Blanco Vinuela, Enrique; de Prada Moraga, Cesar

2001-01-01

The thesis addresses the study, analysis, development, and finally the real implementation of an advanced control system for the 1.8 K cooling loop of the LHC (Large Hadron Collider) accelerator. The LHC is the next accelerator being built at CERN (European Center for Nuclear Research); it will use superconducting magnets operating below a temperature of 1.9 K along a circumference of 27 kilometers. The temperature of these magnets is a control parameter with strict operating constraints. The first control implementations applied a procedure that included linear identification, modelling and regulation using a linear predictive controller. It largely improved the overall performance of the plant with respect to a classical PID regulator, but the nature of the cryogenic processes pointed out the need for a more adequate technique, such as a nonlinear methodology. This thesis is a first step towards a global regulation strategy for the overall control of the LHC cells when they operate simultaneously....

6. Genomic prediction based on data from three layer lines using non-linear regression models

Huang, H.; Windig, J.J.; Vereijken, A.; Calus, M.P.L.

2014-01-01

Background - Most studies on genomic prediction with reference populations that include multiple lines or breeds have used linear models. Data heterogeneity due to using multiple populations may conflict with model assumptions used in linear regression methods. Methods - In an attempt to alleviate

7. Integrating genomics and proteomics data to predict drug effects using binary linear programming.

Ji, Zhiwei; Su, Jing; Liu, Chenglin; Wang, Hongyan; Huang, Deshuang; Zhou, Xiaobo

2014-01-01

The Library of Integrated Network-Based Cellular Signatures (LINCS) project aims to create a network-based understanding of biology by cataloging changes in gene expression and signal transduction that occur when cells are exposed to a variety of perturbations. It is helpful for understanding cell pathways and facilitating drug discovery. Here, we developed a novel approach to infer cell-specific pathways and identify a compound's effects using gene expression and phosphoproteomics data under treatments with different compounds. Gene expression data were employed to infer potential targets of compounds and create a generic pathway map. Binary linear programming (BLP) was then developed to optimize the generic pathway topology based on the mid-stage signaling response of phosphorylation. To demonstrate effectiveness of this approach, we built a generic pathway map for the MCF7 breast cancer cell line and inferred the cell-specific pathways by BLP. The first group of 11 compounds was utilized to optimize the generic pathways, and then 4 compounds were used to identify effects based on the inferred cell-specific pathways. Cross-validation indicated that the cell-specific pathways reliably predicted a compound's effects. Finally, we applied BLP to re-optimize the cell-specific pathways to predict the effects of 4 compounds (trichostatin A, MS-275, staurosporine, and digoxigenin) according to compound-induced topological alterations. Trichostatin A and MS-275 (both HDAC inhibitors) inhibited the downstream pathway of HDAC1 and caused cell growth arrest via activation of p53 and p21; the effects of digoxigenin were totally opposite. Staurosporine blocked the cell cycle via p53 and p21, but also promoted cell growth via activated HDAC1 and its downstream pathway. Our approach was also applied to the PC3 prostate cancer cell line, and the cross-validation analysis showed very good accuracy in predicting effects of 4 compounds. In summary, our computational model can be

8. An acceleration technique for the Gauss-Seidel method applied to symmetric linear systems

Jesús Cajigas

2014-06-01

A preconditioning technique is proposed to improve the convergence of the Gauss-Seidel method applied to symmetric linear systems while preserving symmetry. The preconditioner is of the form I + K and can be applied an arbitrary number of times. It is shown that under certain conditions, applying the preconditioner a finite number of times reduces the preconditioned matrix to a diagonal. A series of numerical experiments using matrices from spatial discretizations of partial differential equations demonstrates that both versions of the preconditioner, the point and the block version, exhibit lower iteration counts than the non-symmetric version.
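For context, the plain (unpreconditioned) Gauss-Seidel sweep that the paper accelerates looks like the following; the I + K preconditioner itself is not reproduced here. The test system is a small symmetric positive definite matrix of the 1-D Poisson type, for which Gauss-Seidel is known to converge.

```python
def gauss_seidel(A, b, iters=100, x0=None):
    """Gauss-Seidel iteration for A x = b.

    Each sweep updates x_i <- (b_i - sum_{j != i} a_ij x_j) / a_ii in place,
    so newly updated components are used immediately within the sweep.
    Converges for symmetric positive definite systems.
    """
    n = len(b)
    x = list(x0) if x0 is not None else [0.0] * n
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

# Symmetric positive definite test system (1-D Poisson-type matrix);
# the exact solution is [1, 1, 1].
A = [[ 2.0, -1.0,  0.0],
     [-1.0,  2.0, -1.0],
     [ 0.0, -1.0,  2.0]]
b = [1.0, 0.0, 1.0]
x = gauss_seidel(A, b)
```

The preconditioned variant in the paper would apply the I + K transformation to the system before (or between) such sweeps to reduce the iteration count.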

9. Applying linear programming model to aggregate production planning of coated peanut products

Rohmah, W. G.; Purwaningsih, I.; Santoso, EF S. M.

2018-03-01

The aim of this study was to set the overall production level for each grade of coated peanut product to meet market demand at minimum production cost. A linear programming model was applied in this study. The proposed model was used to minimize the total production cost subject to the limited demand for coated peanuts. The demand values applied to the method were previously forecasted using a time series method, and production capacity was used to plan the aggregate production for the next 6-month period. The results indicated that production planning using the proposed model fits the customer demand better than the company's policy. The production capacity of product families A, B, and C was relatively stable for the first 3 months of the planning period, then began to fluctuate over the next 3 months, while the production capacity of product families D and E fluctuated over the whole 6-month planning period, with values in the ranges of 10,864-32,580 kg and 255-5,069 kg, respectively. The total production cost for all products was 27.06% lower than the production cost calculated using the company's policy-based method.

10. [A study of linearity and reciprocity during shock applied with a hammer to human dry skull].

Kumazawa, Y; Sekiguchi, J; Saito, M; Honma, K; Toyoda, M; Matsuo, E

1990-09-01

The authors used a human dry skull on which the cranial bone and mandible had been joined with an artificial articular disk to form a single unit. Impact acceleration corresponding to weak and strong tapping was considered a dynamic load in examining the vibration transfer characteristics of the facial cranial bone when impact was applied from the mentum section in a situation designed to be closer to reality. Flexion injection type (resonance frequency f0 = 100 to 150 Hz, produced by GC Corp.) was applied to the human dry skull as an artificial periodontal membrane at a thickness of 0.3 mm. In addition, Exaflex heavy body type (f0 = 400 Hz, produced by GC Corp.) was applied as an artificial disk. This was then placed on a damper produced by spreading a rubber dam sheet with a thickness of 35 microns on a tire tube with a diameter of 35 cm and an air pressure of 35 kg/cm2. Investigations were then made concerning linearity and reciprocity to determine whether an experimental system could be achieved. This was then followed by modal analysis. As a result, the following matters were ascertained: (1) The resonating area differed according to the extent of the force. (2) An increase in the viscoelastic elements of the silicon was accompanied by attenuation of force. (3) Directionality of force attenuation was caused by the complexity of bone structure. (4) A tapping force of 0.3G or 1G was sufficiently attenuated by the facial cranial bone. (5) The transfer function at the bone seams and thinner areas of the bones was insufficient for modal analysis of the facial region and total cranial bone of the human dry skull.

11. A linear programming computational framework integrates phospho-proteomics and prior knowledge to predict drug efficacy.

Ji, Zhiwei; Wang, Bing; Yan, Ke; Dong, Ligang; Meng, Guanmin; Shi, Lei

2017-12-21

In recent years, the integration of 'omics' technologies, high-performance computation, and mathematical modeling of biological processes marks that systems biology has started to fundamentally impact the way drug discovery is approached. The LINCS public data warehouse provides detailed information about cell responses to various genetic and environmental stressors. It can be greatly helpful in developing new drugs and therapeutics, as well as in addressing the lack of effective drugs, drug resistance, and relapse in cancer therapies. In this study, we developed a Ternary status based Integer Linear Programming (TILP) method to infer cell-specific signaling pathway networks and predict compounds' treatment efficacy. The novelty of our study is that phospho-proteomic data and prior knowledge are combined for modeling and optimizing the signaling network. To test the power of our approach, a generic pathway network was constructed for the human breast cancer cell line MCF7, and the TILP model was used to infer MCF7-specific pathways with a set of phospho-proteomic data collected from ten representative small-molecule chemical compounds (most of them have been studied in breast cancer treatment). Cross-validation indicated that the MCF7-specific pathway network inferred by TILP was reliable in predicting a compound's efficacy. Finally, we applied TILP to re-optimize the inferred cell-specific pathways and predict the outcomes of five small compounds (carmustine, doxorubicin, GW-8510, daunorubicin, and verapamil), which are rarely used in the clinic for breast cancer. In the simulation, the proposed approach allows us to assess a compound's treatment efficacy qualitatively and quantitatively, and the cross-validation analysis indicated good accuracy in predicting the effects of the five compounds. In summary, the TILP model is useful for discovering new drugs for clinical use, and also for elucidating the potential mechanisms of a compound on its targets.

12. Comparison between linear and non-parametric regression models for genome-enabled prediction in wheat.

Pérez-Rodríguez, Paulino; Gianola, Daniel; González-Camacho, Juan Manuel; Crossa, José; Manès, Yann; Dreisigacker, Susanne

2012-12-01

In genome-enabled prediction, parametric, semi-parametric, and non-parametric regression models have been used. This study assessed the predictive ability of linear and non-linear models using dense molecular markers. The linear models were linear on marker effects and included the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B. The non-linear models (this refers to non-linearity on markers) were reproducing kernel Hilbert space (RKHS) regression, Bayesian regularized neural networks (BRNN), and radial basis function neural networks (RBFNN). These statistical models were compared using 306 elite wheat lines from CIMMYT genotyped with 1717 diversity array technology (DArT) markers and two traits, days to heading (DTH) and grain yield (GY), measured in each of 12 environments. It was found that the three non-linear models had better overall prediction accuracy than the linear regression specification. Results showed a consistent superiority of RKHS and RBFNN over the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B models.

13. A unified frame of predicting side effects of drugs by using linear neighborhood similarity.

Zhang, Wen; Yue, Xiang; Liu, Feng; Chen, Yanlin; Tu, Shikui; Zhang, Xining

2017-12-14

Drug side effects are one of the main concerns in drug discovery and have gained wide attention. Investigating drug side effects is of great importance, and computational prediction can help to guide wet experiments. As far as we know, a great number of computational methods have been proposed for side effect prediction. The assumption that similar drugs may induce the same side effects is usually employed for modeling, and how to calculate drug-drug similarity is critical in side effect prediction. In this paper, we present a novel measure of drug-drug similarity named "linear neighborhood similarity", which is calculated in a drug feature space by exploring linear neighborhood relationships. Then, we transfer the similarity from the feature space into the side effect space and predict drug side effects by propagating known side effect information through a similarity-based graph. Under a unified frame based on the linear neighborhood similarity, we propose the method "LNSM" and its extension "LNSM-SMI" to predict side effects of new drugs, and propose the method "LNSM-MSE" to predict unobserved side effects of approved drugs. We evaluate the performances of LNSM and LNSM-SMI in predicting side effects of new drugs, and evaluate the performance of LNSM-MSE in predicting missing side effects of approved drugs. The results demonstrate that the linear neighborhood similarity can improve the performance of side effect prediction, and that the linear neighborhood similarity-based methods can outperform existing side effect prediction methods. More importantly, the proposed methods can predict side effects of new drugs as well as unobserved side effects of approved drugs under a unified frame.

14. Performance prediction of mechanical excavators from linear cutter tests on Yucca Mountain welded tuffs

Gertsch, R.; Ozdemir, L.

1992-09-01

The performances of mechanical excavators are predicted for excavations in welded tuff. Emphasis is given to tunnel boring machine evaluations based on linear cutting machine test data obtained on samples of Topopah Spring welded tuff. The tests involve measurement of forces as cutters are applied to the rock surface at certain spacing and penetrations. Two disc and two point-attack cutters representing currently available technology are thus evaluated. The performance predictions based on these direct experimental measurements are believed to be more accurate than any previous values for mechanical excavation of welded tuff. The calculations of performance are predicated on minimizing the amount of energy required to excavate the welded tuff. Specific energy decreases with increasing spacing and penetration, and reaches its lowest at the widest spacing and deepest penetration used in this test program. Using the force, spacing, and penetration data from this experimental program, the thrust, torque, power, and rate of penetration are calculated for several types of mechanical excavators. The results of this study show that the candidate excavators will require higher torque and power than heretofore estimated
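The specific-energy criterion used in such linear-cutter programs is commonly computed as rolling force divided by the groove cross-section, SE = F_r / (s·p), i.e. cutting energy per unit volume of rock removed. The disc-cutter numbers below are illustrative, not the paper's measurements.

```python
def specific_energy_kwh_per_m3(rolling_force_kn, spacing_mm, penetration_mm):
    """Specific energy SE = F_r / (s * p) from a linear cutting test.

    Rolling force in kN, cutter spacing and penetration in mm.
    kN/mm^2 is numerically GJ/m^3; 1 kWh = 3.6 MJ, so multiply
    by 1e9 / 3.6e6 to express SE in kWh/m^3.
    """
    se_kn_per_mm2 = rolling_force_kn / (spacing_mm * penetration_mm)
    return se_kn_per_mm2 * 1e9 / 3.6e6

# Hypothetical disc-cutter test point: 10 kN rolling force,
# 76 mm spacing, 6 mm penetration.
se = specific_energy_kwh_per_m3(10.0, 76.0, 6.0)   # roughly 6 kWh/m^3
```

Minimizing SE over the tested spacing/penetration grid is what drives the choice of the widest spacing and deepest penetration noted in the abstract, since deeper, wider cuts remove more volume per unit of cutting force.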

15. Applied Distributed Model Predictive Control for Energy Efficient Buildings and Ramp Metering

Koehler, Sarah Muraoka

suited for nonlinear optimization problems. The parallel computation of the algorithm exploits iterative linear algebra methods for the main linear algebra computations in the algorithm. We show that the splitting of the algorithm is flexible and can thus be applied to various distributed platform configurations. The two proposed algorithms are applied to two main energy and transportation control problems. The first application is energy efficient building control. Buildings represent 40% of energy consumption in the United States. Thus, it is significant to improve the energy efficiency of buildings. The goal is to minimize energy consumption subject to the physics of the building (e.g. heat transfer laws), the constraints of the actuators as well as the desired operating constraints (thermal comfort of the occupants), and heat load on the system. In this thesis, we describe the control systems of forced air building systems in practice. We discuss the "Trim and Respond" algorithm which is a distributed control algorithm that is used in practice, and show that it performs similarly to a one-step explicit DMPC algorithm. Then, we apply the novel distributed primal-dual active-set method and provide extensive numerical results for the building MPC problem. The second main application is the control of ramp metering signals to optimize traffic flow through a freeway system. This application is particularly important since urban congestion has more than doubled in the past few decades. The ramp metering problem is to maximize freeway throughput subject to freeway dynamics (derived from mass conservation), actuation constraints, freeway capacity constraints, and predicted traffic demand. In this thesis, we develop a hybrid model predictive controller for ramp metering that is guaranteed to be persistently feasible and stable. This contrasts to previous work on MPC for ramp metering where such guarantees are absent. 
We apply a smoothing method to the hybrid model predictive

16. Pixel-Level Decorrelation and BiLinearly Interpolated Subpixel Sensitivity applied to WASP-29b

Challener, Ryan; Harrington, Joseph; Cubillos, Patricio; Blecic, Jasmina; Deming, Drake

2017-10-01

Measured exoplanet transit and eclipse depths can vary significantly depending on the methodology used, especially at the low S/N levels in Spitzer eclipses. BiLinearly Interpolated Subpixel Sensitivity (BLISS) models a physical, spatial effect, which is independent of any astrophysical effects. Pixel-Level Decorrelation (PLD) uses the relative variations in pixels near the target to correct for flux variations due to telescope motion. PLD is being widely applied to all Spitzer data without a thorough understanding of its behavior. It is a mathematical method derived from a Taylor expansion, and many of its parameters do not have a physical basis. PLD also relies heavily on binning the data to remove short time-scale variations, which can artificially smooth the data. We applied both methods to 4 eclipse observations of WASP-29b, a Saturn-sized planet, which was observed twice with the 3.6 µm and twice with the 4.5 µm channels of Spitzer's IRAC in 2010, 2011 and 2014 (programs 60003, 70084, and 10054, respectively). We compare the resulting eclipse depths and midpoints from each model, assess each method's ability to remove correlated noise, and discuss how to choose or combine the best data analysis methods. We also refined the orbit from eclipse timings, detecting a significant nonzero eccentricity, and we used our Bayesian Atmospheric Radiative Transfer (BART) code to retrieve the planet's atmosphere, which is consistent with a blackbody. Spitzer is operated by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA. This work was supported by NASA Planetary Atmospheres grant NNX12AI69G and NASA Astrophysics Data Analysis Program grant NNX13AF38G.

17. Structural Dynamic Analyses And Test Predictions For Spacecraft Structures With Non-Linearities

Vergniaud, Jean-Baptiste; Soula, Laurent; Newerla, Alfred

2012-07-01

The overall objective of the mechanical development and verification process is to ensure that the spacecraft structure is able to sustain the mechanical environments encountered during launch. In general, spacecraft structures are a priori assumed to behave linearly, i.e. the responses to a static load or dynamic excitation, respectively, will increase or decrease proportionally to the amplitude of the load or excitation induced. However, past experience has shown that various non-linearities might exist in spacecraft structures, and the consequences of their dynamic effects can significantly affect the development and verification process. Current processes are mainly adapted to linear spacecraft structure behaviour. No clear rules exist for dealing with major structural non-linearities. They are handled outside the process by individual analysis and margin policy, and by analyses after tests to justify the CLA coverage. Non-linearities can primarily affect the current spacecraft development and verification process in two respects. Prediction of flight loads by launcher/satellite coupled loads analyses (CLA): only linear satellite models are delivered for performing CLA, and no well-established rules exist for how to properly linearize a model when non-linearities are present. The potential impact of the linearization on the results of the CLA has not yet been properly analyzed. It is thus difficult to assess that CLA results will cover actual flight levels. Management of satellite verification tests: the CLA results generated with a linear satellite FEM are assumed flight representative. If internal non-linearities are present in the tested satellite, then it may be difficult to determine which input level must be applied to cover satellite internal loads. The non-linear behaviour can also disturb the shaker control, putting the satellite at risk by potentially imposing too high levels. This paper presents the results of a test campaign performed in

18. The Inverse System Method Applied to the Derivation of Power System Non-linear Control Laws

Donghai Li; Xuezhi Jiang; et al.

1997-01-01

The differential geometric method has been applied effectively to a series of power system non-linear control problems. However, a set of differential equations must be solved to obtain the required diffeomorphic transformation, so the derivation of control laws is very complicated. In fact, because of the specific structure of power system models, the required diffeomorphic transformation may be obtained directly, so it is unnecessary to solve a set of differential equations. In addition, the inverse system method is in reality equivalent to the differential geometric method and is not limited to affine nonlinear systems. Its physical meaning can be viewed directly, and its deduction requires only algebraic operations and differentiation, so control laws can be obtained easily and the application to engineering is very convenient. The authors of this paper take steam valving control of a power system as a typical case study. It is demonstrated that the control law deduced by the inverse system method is the same as the one obtained by the differential geometric method. This conclusion simplifies the derivation of control laws for steam valving, excitation, converter and static var compensator control by the differential geometric method, and may be suited to similar control problems in other areas.

19. Improvement of Bragg peak shift estimation using dimensionality reduction techniques and predictive linear modeling

Xing, Yafei; Macq, Benoit

2017-11-01

With the emergence of clinical prototypes and first patient acquisitions for proton therapy, research on prompt gamma imaging is aiming at making the most use of the prompt gamma data for in vivo estimation of any shift from the expected Bragg peak (BP). The simple problem of matching the measured prompt gamma profile of each pencil beam with a reference simulation from the treatment plan is actually made complex by uncertainties which can translate into distortions during treatment. We will illustrate this challenge and demonstrate the robustness of a predictive linear model we proposed for BP shift estimation based on the principal component analysis (PCA) method. It considered the first clinical knife-edge slit camera design in use with anthropomorphic phantom CT data. In particular, 4115 error scenarios were simulated for the learning model. PCA was applied to the training input, randomly chosen from 500 scenarios, to eliminate data collinearities. A total variance of 99.95% was used for representing the testing input from 3615 scenarios. This model improved the BP shift estimation by an average of 63+/-19% in a range between -2.5% and 86%, compared to our previous profile shift (PS) method. The robustness of our method was demonstrated by a comparative study conducted by applying Poisson noise 1000 times to each profile. 67% of the cases obtained by the learning model had lower prediction errors than those obtained by the PS method. The estimation accuracy ranged between 0.31 +/- 0.22 mm and 1.84 +/- 8.98 mm for the learning model, while for the PS method it ranged between 0.3 +/- 0.25 mm and 20.71 +/- 8.38 mm.

20. Method for pulse to pulse dose reproducibility applied to electron linear accelerators

Ighigeanu, D.; Martin, D.; Oproiu, C.; Cirstea, E.; Craciun, G.

2002-01-01

An original method for obtaining programmed beam single shots and pulse trains with programmed pulse number, pulse repetition frequency, pulse duration and pulse dose is presented. It is particularly useful for automatic control of the absorbed dose rate level, irradiation process control, pulse radiolysis studies, single pulse dose measurement, and research experiments where pulse-to-pulse dose reproducibility is required. This method is applied to the electron linear accelerators ALIN-10, of 6.23 MeV and 82 W, and ALID-7, of 5.5 MeV and 670 W, built in NILPRP. In order to implement this method, the accelerator triggering system (ATS) consists of two branches: the gun branch and the magnetron branch. The ATS, which synchronizes all the system units, delivers trigger pulses at a programmed repetition rate (up to 250 pulses/s) to the gun (80 kV, 10 A and 4 ms) and the magnetron (45 kV, 100 A, and 4 ms). The existence of the accelerated electron beam is determined by the overlapping of the electron gun and magnetron pulses. The method consists of controlling the overlapping of pulses in order to deliver the beam in the desired sequence. This control is implemented by a discrete pulse position modulation of the gun and/or magnetron pulses. The instabilities of the gun and magnetron transient regimes are avoided by operating the accelerator with no accelerated beam for a certain time. At the operator's 'beam start' command, the ATS controls the electron gun and magnetron pulse overlapping and the linac beam is generated. The pulse-to-pulse absorbed dose variation is thus considerably reduced. Programmed absorbed dose, irradiation time, beam pulse number or other external events may interrupt the coincidence between the gun and magnetron pulses. Slow absorbed dose variation is compensated by the control of the pulse duration and repetition frequency. Two methods are reported in the electron linear accelerators' development for obtaining the pulse to pulse dose reproducibility: the method
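
The overlap gating described above can be sketched in a few lines of Python; the pulse widths, delays, and function names are illustrative assumptions, not the actual ATS implementation:

```python
# Toy sketch of the pulse-overlap gating idea: a beam pulse exists only when
# the gun and magnetron pulses overlap in time. All timing numbers are
# hypothetical; the real ATS works with kV/A pulses at up to 250 pulses/s.

def overlaps(a, b):
    """True if two (start, end) pulse intervals overlap."""
    return a[0] < b[1] and b[0] < a[1]

def beam_sequence(gun_delays_us, pulse_us=4.0, period_us=4000.0):
    """For each trigger period, decide whether a beam pulse is produced.

    The magnetron pulse is fixed at the start of each period; the gun pulse
    is shifted by a programmed delay (discrete pulse position modulation).
    A delay larger than the pulse width destroys the overlap, so no beam."""
    out = []
    for k, delay in enumerate(gun_delays_us):
        t0 = k * period_us
        magnetron = (t0, t0 + pulse_us)
        gun = (t0 + delay, t0 + delay + pulse_us)
        out.append(overlaps(gun, magnetron))
    return out

# A zero delay keeps the pulses coincident (beam on); a 10 us shift exceeds
# the 4 us pulse width, so that period produces no beam.
print(beam_sequence([0.0, 10.0, 0.0, 2.0]))  # [True, False, True, True]
```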

1. The use of artificial neural networks and multiple linear regression to predict rate of medical waste generation

2009-01-01

Prediction of the amount of hospital waste production will be helpful in the storage, transportation and disposal aspects of hospital waste management. Based on this fact, two predictor models, artificial neural networks (ANNs) and multiple linear regression (MLR), were applied to predict the rate of medical waste generation, both in total and by type (sharp, infectious and general). In this study, a 5-fold cross-validation procedure on a database containing a total of 50 hospitals of Fars province (Iran) was used to verify the performance of the models. Three performance measures, MAR, RMSE and R², were used to evaluate the models. MLR, as a conventional model, obtained poor prediction performance measure values. However, MLR distinguished hospital capacity and bed occupancy as the more significant parameters. On the other hand, the ANN, as a more powerful model that has not previously been introduced for predicting the rate of medical waste generation, showed high performance measure values, especially an R² value of 0.99, confirming the good fit of the data. Such satisfactory results could be attributed to the non-linear nature of ANNs in problem solving, which provides the opportunity to relate independent variables to dependent ones non-linearly. In conclusion, the obtained results showed that our ANN-based model approach is very promising and may play a useful role in developing a better cost-effective strategy for waste management in the future.
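
Two of the performance measures named above, RMSE and R², can be sketched in plain Python (the hospital data are not reproduced; the sample values below are invented):

```python
# Minimal sketch of RMSE and R^2 as used to compare regression models.
import math

def rmse(y_true, y_pred):
    """Root mean squared error between observed and predicted values."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - residual SS / total SS."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Invented waste-generation rates (observed vs predicted by some model).
y_true = [2.0, 4.0, 6.0, 8.0]
y_pred = [2.1, 3.9, 6.2, 7.8]
print(round(rmse(y_true, y_pred), 3))       # 0.158
print(round(r_squared(y_true, y_pred), 3))  # 0.995
```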

2. Machine learning-based methods for prediction of linear B-cell epitopes.

Wang, Hsin-Wei; Pai, Tun-Wen

2014-01-01

B-cell epitope prediction assists immunologists in designing peptide-based vaccines, diagnostic tests, disease prevention and treatment, and antibody production. In comparison with T-cell epitope prediction, the performance of variable-length B-cell epitope prediction is still unsatisfactory. Fortunately, owing to increasingly available verified epitope databases, bioinformaticians can apply machine learning-based algorithms to all curated data to design improved prediction tools for biomedical researchers. Here, we have reviewed related epitope prediction papers, especially those for linear B-cell epitope prediction. It should be noted that a combination of selected propensity scales and statistics of epitope residues with machine learning-based tools formulates a general way of constructing linear B-cell epitope prediction systems. It is also observed from most of the comparison results that the kernel method of the support vector machine (SVM) classifier outperformed other machine learning-based approaches. Hence, in this chapter, besides reviewing recently published papers, we have introduced the fundamentals of B-cell epitopes and SVM techniques. In addition, an example of a linear B-cell prediction system based on physicochemical features and amino acid combinations is illustrated in detail.

3. Performance Characteristics and Prediction of Bodyweight using Linear Body Measurements in Four Strains of Broiler Chicken

I. Udeh; J.O. Isikwenu and G. Ukughere

2011-01-01

The objectives of this study were to compare the performance characteristics of four strains of broiler chicken from 2 to 8 weeks of age and predict body weight of the broilers using linear body measurements. The four strains of broiler chicken used were Anak, Arbor Acre, Ross and Marshall. The parameters recorded were bodyweight, weight gain, total feed intake, feed conversion ratio, mortality and some linear body measurements (body length, body width, breast width, drumstick length, shank l...

4. Predicting musically induced emotions from physiological inputs: linear and neural network models.

Russo, Frank A; Vempala, Naresh N; Sandstrom, Gillian M

2013-01-01

Listening to music often leads to physiological responses. Do these physiological responses contain sufficient information to infer the emotion induced in the listener? The current study explores this question by attempting to predict judgments of "felt" emotion from physiological responses alone, using linear and neural network models. We measured five channels of peripheral physiology from 20 participants: heart rate (HR), respiration, galvanic skin response, and activity in the corrugator supercilii and zygomaticus major facial muscles. Using valence and arousal (VA) dimensions, participants rated their felt emotion after listening to each of 12 classical music excerpts. After extracting features from the five channels, we examined their correlation with VA ratings, and then performed multiple linear regression to see whether a linear function of the physiological responses could account for the ratings. Although linear models predicted a significant amount of variance in arousal ratings, they were unable to do so with valence ratings. We then used a neural network to provide a non-linear account of the ratings. The network was trained on the mean ratings of eight of the 12 excerpts and tested on the remainder. Performance of the neural network confirms that physiological responses alone can be used to predict musically induced emotion. The non-linear model derived from the neural network was more accurate than the linear models derived from multiple linear regression, particularly along the valence dimension. A secondary analysis allowed us to quantify the relative contributions of inputs to the non-linear model. The study represents a novel approach to understanding the complex relationship between physiological responses and musically induced emotion.

5. Iterated non-linear model predictive control based on tubes and contractive constraints.

Murillo, M; Sánchez, G; Giovanini, L

2016-05-01

6. To Apply Microdosing or Not? Recommendations to Single Out Compounds with Non-Linear Pharmacokinetics

Bosgra, S.; Vlaming, M.L.H.; Vaes, W.H.J.

2015-01-01

Non-linearities occur no more frequently between microdose and therapeutic dose studies than in therapeutic range ascending-dose studies. Most non-linearities are due to known saturable processes, and can be foreseen by integrating commonly available preclinical data. The guidance presented here may

7. Sixth SIAM conference on applied linear algebra: Final program and abstracts. Final technical report

NONE

1997-12-31

Linear algebra plays a central role in mathematics and applications. The analysis and solution of problems from an amazingly wide variety of disciplines depend on the theory and computational techniques of linear algebra. In turn, the diversity of disciplines depending on linear algebra also serves to focus and shape its development. Some problems have special properties (numerical, structural) that can be exploited. Some are simply so large that conventional approaches are impractical. New computer architectures motivate new algorithms, and fresh ways to look at old ones. The pervasive nature of linear algebra in analyzing and solving problems means that people from a wide spectrum--universities, industrial and government laboratories, financial institutions, and many others--share an interest in current developments in linear algebra. This conference aims to bring them together for their mutual benefit. Abstracts of papers presented are included.

8. Machine learning applied to the prediction of citrus production

Díaz Rodríguez, Susana Irene; Mazza, Silvia M.; Fernández-Combarro Álvarez, Elías; Giménez, Laura I.; Gaiad, José E.

2017-01-01

An in-depth knowledge about variables affecting production is required in order to predict global production and take decisions in agriculture. Machine learning is a technique used in agricultural planning and precision agriculture. This work (i) studies the effectiveness of machine learning techniques for predicting orchards production; and (ii) variables affecting this production were also identified. Data from 964 orchards of lemon, mandarin, and orange in Corrientes, Argentina are analyse...

9. Trend analysis by a piecewise linear regression model applied to surface air temperatures in Southeastern Spain (1973–2014)

Campra, Pablo; Morales, Maria

2016-01-01

The magnitude of the trends of environmental and climatic changes is mostly derived from the slopes of linear trends using ordinary least-squares fitting. An alternative flexible fitting model, piecewise regression, has been applied here to surface air temperature records in southeastern Spain for the recent warming period (1973–2014) to gain accuracy in the description of the inner structure of change, dividing the time series into linear segments with different slopes. Breakpoint y...
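
A two-segment piecewise fit can be sketched as an exhaustive breakpoint search, assuming a single breakpoint and ordinary least squares on each segment (an illustration, not the authors' fitting procedure):

```python
# Hypothetical sketch of two-segment piecewise linear regression: for each
# candidate breakpoint, fit ordinary least squares on the left and right
# segments and keep the split with the smallest total squared error.

def ols(xs, ys):
    """Slope and intercept of a simple least-squares line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def sse(xs, ys, slope, intercept):
    """Sum of squared residuals of a fitted line."""
    return sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))

def best_breakpoint(xs, ys, min_seg=3):
    """Return (error, breakpoint x, left fit, right fit) of the best split."""
    best = None
    for k in range(min_seg, len(xs) - min_seg + 1):
        lf = ols(xs[:k], ys[:k])
        rf = ols(xs[k:], ys[k:])
        err = sse(xs[:k], ys[:k], *lf) + sse(xs[k:], ys[k:], *rf)
        if best is None or err < best[0]:
            best = (err, xs[k], lf, rf)
    return best

# Synthetic series: flat at 1.0 until x = 5, then a jump and a warming
# trend of 0.5 per step (a stand-in for a temperature regime change).
xs = list(range(10))
ys = [1.0] * 5 + [5.0 + 0.5 * (x - 5) for x in range(5, 10)]
err, bp, left, right = best_breakpoint(xs, ys)
print(bp, round(right[0], 2))  # 5 0.5  (breakpoint and post-break slope)
```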

10. Data Normalization to Accelerate Training for Linear Neural Net to Predict Tropical Cyclone Tracks

Jian Jin

2015-01-01

When a pure linear neural network (PLNN) is used to predict tropical cyclone tracks (TCTs) in the South China Sea, whether the data are normalized or not greatly affects the training process. In this paper, the min.-max. method and the normal distribution method, instead of the standard normal distribution, are applied to TCT data before modeling. We propose experimental schemes in which, with the min.-max. method, the min.-max. value pair of each variable is mapped to (−1, 1) and (0, 1); with the normal distribution method, each variable's mean and standard deviation pair is set to (0, 1) and (100, 1). We present the following results: (1) data scaled to similar intervals have similar effects, no matter whether the min.-max. or the normal distribution method is used; (2) mapping data to around 0 gains much faster training speed than mapping them to intervals far away from 0 or using unnormalized raw data, although all of them can approach the same lower error level after a certain number of steps, as seen from their training error curves. This could be useful for deciding on a data normalization method when PLNN is used individually.
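
The two scalings compared above can be sketched in plain Python; the latitude values below are invented inputs, not actual TCT data:

```python
# Sketch of the two normalizations discussed above: min-max mapping onto a
# target interval, and mean/std mapping onto a target (mean, std) pair.

def min_max_scale(xs, lo=-1.0, hi=1.0):
    """Map min(xs)..max(xs) linearly onto lo..hi."""
    x_min, x_max = min(xs), max(xs)
    return [lo + (hi - lo) * (x - x_min) / (x_max - x_min) for x in xs]

def mean_std_scale(xs, mean=0.0, std=1.0):
    """Shift and scale xs so it has the requested mean and standard deviation."""
    n = len(xs)
    mu = sum(xs) / n
    sigma = (sum((x - mu) ** 2 for x in xs) / n) ** 0.5
    return [mean + std * (x - mu) / sigma for x in xs]

latitudes = [15.2, 18.7, 21.3, 24.9]  # hypothetical cyclone-track inputs
print([round(v, 2) for v in min_max_scale(latitudes)])            # into (-1, 1)
print([round(v, 2) for v in mean_std_scale(latitudes, 0.0, 1.0)]) # mean 0, std 1
```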

11. Wheel slip control with torque blending using linear and nonlinear model predictive control

Basrah, M. Sofian; Siampis, Efstathios; Velenis, Efstathios; Cao, Dongpu; Longo, Stefano

2017-11-01

Modern hybrid electric vehicles employ electric braking to recuperate energy during deceleration. However, currently anti-lock braking system (ABS) functionality is delivered solely by friction brakes. Hence regenerative braking is typically deactivated at a low deceleration threshold in case high slip develops at the wheels and ABS activation is required. If blending of friction and electric braking can be achieved during ABS events, there would be no need to impose conservative thresholds for deactivation of regenerative braking and the recuperation capacity of the vehicle would increase significantly. In addition, electric actuators are typically significantly faster responding and would deliver better control of wheel slip than friction brakes. In this work we present a control strategy for ABS on a fully electric vehicle with each wheel independently driven by an electric machine and friction brake independently applied at each wheel. In particular we develop linear and nonlinear model predictive control strategies for optimal performance and enforcement of critical control and state constraints. The capability for real-time implementation of these controllers is assessed and their performance is validated in high fidelity simulation.

12. A turbulent mixing Reynolds stress model fitted to match linear interaction analysis predictions

Griffond, J; Soulard, O; Souffland, D

2010-01-01

To predict the evolution of turbulent mixing zones developing in shock tube experiments with different gases, a turbulence model must be able to reliably evaluate the production due to the shock-turbulence interaction. In the limit of homogeneous weak turbulence, 'linear interaction analysis' (LIA) can be applied. This theory relies on Kovasznay's decomposition and allows the computation of waves transmitted or produced at the shock front. With assumptions about the composition of the upstream turbulent mixture, one can connect the second-order moments downstream from the shock front to those upstream through a transfer matrix, depending on shock strength. The purpose of this work is to provide a turbulence model that matches LIA results for the shock-turbulent mixture interaction. Reynolds stress models (RSMs) with additional equations for the density-velocity correlation and the density variance are considered here. The turbulent states upstream and downstream from the shock front calculated with these models can also be related through a transfer matrix, provided that the numerical implementation is based on a pseudo-pressure formulation. Then, the RSM should be modified in such a way that its transfer matrix matches the LIA one. Using the pseudo-pressure to introduce ad hoc production terms, we are able to obtain a close agreement between LIA and RSM matrices for any shock strength and thus improve the capabilities of the RSM.

13. Model for predicting non-linear crack growth considering load sequence effects (LOSEQ)

Fuehring, H.

1982-01-01

A new analytical model for predicting non-linear crack growth is presented which takes into account retardation as well as acceleration effects due to irregular loading. It considers not only the maximum peak of a load sequence as affecting crack growth, but also all other loads of the history, according to a generalised memory criterion. Comparisons between crack growth predicted using the LOSEQ programme and experimentally observed data are presented. (orig.)

14. Performance prediction of gas turbines by solving a system of non-linear equations

Kaikko, J

1998-09-01

This study presents a novel method for implementing the performance prediction of gas turbines from the component models. It is based on solving the non-linear set of equations that corresponds to the process equations, and the mass and energy balances for the engine. General models have been presented for determining the steady state operation of single components. Single and multiple shaft arrangements have been examined, with consideration also being given to heat regeneration and intercooling. Emphasis has been placed upon axial gas turbines of an industrial scale. Applying the models requires no information about the structural dimensions of the gas turbines. In comparison with the commonly applied component matching procedures, this method incorporates several advantages. The application of the models for providing results is facilitated, as less attention needs to be paid to calculation sequences and routines. Solving the set of equations is based on zeroing co-ordinate functions that are directly derived from the modelling equations. Therefore, controlling the accuracy of the results is easy. This method gives more freedom for the selection of the modelling parameters since, unlike for the matching procedures, exchanging these criteria does not itself affect the algorithms. Implicit relationships between the variables are of no significance, thus increasing the freedom for the modelling equations as well. The mathematical models developed in this thesis will provide facilities to optimise the operation of any major gas turbine configuration with respect to the desired process parameters. The computational methods used in this study may also be adapted to any other modelling problems arising in industry. (orig.) 36 refs.
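
The underlying idea, zeroing coordinate functions derived from the modelling equations, can be illustrated with a toy Newton iteration on a hypothetical two-equation balance system (the actual gas-turbine equations are far larger):

```python
# Toy sketch: solve a non-linear 2-equation system f(x, y) = 0 by Newton's
# method with a hand-coded Jacobian. The "balances" here are invented and
# only stand in for the process, mass and energy equations of the engine.

def newton_2d(f, jac, x0, tol=1e-10, max_iter=50):
    """Solve f(x, y) = (0, 0) by Newton iteration on a 2-vector."""
    x, y = x0
    for _ in range(max_iter):
        f1, f2 = f(x, y)
        if abs(f1) < tol and abs(f2) < tol:
            return x, y
        a, b, c, d = jac(x, y)           # Jacobian [[a, b], [c, d]]
        det = a * d - b * c
        # Solve J * delta = -f by Cramer's rule.
        dx = (-f1 * d + f2 * b) / det
        dy = (-f2 * a + f1 * c) / det
        x, y = x + dx, y + dy
    return x, y

# Invented residuals: a "mass balance" x*y - 6 = 0, an "energy balance" x + y - 5 = 0.
f = lambda x, y: (x * y - 6.0, x + y - 5.0)
jac = lambda x, y: (y, x, 1.0, 1.0)

x, y = newton_2d(f, jac, (1.0, 4.0))
print(round(x, 6), round(y, 6))  # converges to the root x = 2, y = 3
```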

15. Substituting random forest for multiple linear regression improves binding affinity prediction of scoring functions: Cyscore as a case study.

Li, Hongjian; Leung, Kwong-Sak; Wong, Man-Hon; Ballester, Pedro J

2014-08-27

State-of-the-art protein-ligand docking methods are generally limited by the traditionally low accuracy of their scoring functions, which are used to predict binding affinity and thus vital for discriminating between active and inactive compounds. Despite intensive research over the years, classical scoring functions have reached a plateau in their predictive performance. These assume a predetermined additive functional form for some sophisticated numerical features, and use standard multivariate linear regression (MLR) on experimental data to derive the coefficients. In this study we show that such a simple functional form is detrimental to the prediction performance of a scoring function, and replacing linear regression by machine learning techniques like random forest (RF) can improve prediction performance. We investigate the conditions of applying RF under various contexts and find that given sufficient training samples RF manages to comprehensively capture the non-linearity between structural features and measured binding affinities. Incorporating more structural features and training with more samples can both boost RF performance. In addition, we analyze the importance of structural features to binding affinity prediction using the RF variable importance tool. Lastly, we use Cyscore, a top performing empirical scoring function, as a baseline for a comparative study. Machine-learning scoring functions are fundamentally different from classical scoring functions because the former circumvent the fixed functional form relating structural features with binding affinities. RF, but not MLR, can effectively exploit more structural features and more training samples, leading to higher prediction performance. The future availability of more X-ray crystal structures will further widen the performance gap between RF-based and MLR-based scoring functions. This further stresses the importance of substituting RF for MLR in scoring function development.
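
The MLR-versus-RF substitution can be sketched on synthetic data with scikit-learn; the features and the non-linear target below are invented stand-ins for the structural features and measured binding affinities (Cyscore is not used):

```python
# Sketch: multivariate linear regression vs random forest on a target with
# a non-linear (interaction) term that a linear model cannot capture.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(400, 3))
# Invented "affinity": the x0*x1 interaction defeats any linear fit.
y = X[:, 0] * X[:, 1] + 0.5 * X[:, 2]

X_train, X_test, y_train, y_test = X[:300], X[300:], y[:300], y[300:]

mlr = LinearRegression().fit(X_train, y_train)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

print("MLR R^2:", round(mlr.score(X_test, y_test), 2))  # low: misses x0*x1
print("RF  R^2:", round(rf.score(X_test, y_test), 2))   # much higher
```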

16. Bayesian prediction of spatial count data using generalized linear mixed models

Christensen, Ole Fredslund; Waagepetersen, Rasmus Plenge

2002-01-01

Spatial weed count data are modeled and predicted using a generalized linear mixed model combined with a Bayesian approach and Markov chain Monte Carlo. Informative priors for a data set with sparse sampling are elicited using a previously collected data set with extensive sampling. Furthermore, ...

17. Using Hierarchical Linear Modelling to Examine Factors Predicting English Language Students' Reading Achievement

Fung, Karen; ElAtia, Samira

2015-01-01

Using Hierarchical Linear Modelling (HLM), this study aimed to identify factors such as ESL/ELL/EAL status that would predict students' reading performance in an English language arts exam taken across Canada. Using data from the 2007 administration of the Pan-Canadian Assessment Program (PCAP) along with the accompanying surveys for students and…

18. An improved robust model predictive control for linear parameter-varying input-output models

Abbas, H.S.; Hanema, J.; Tóth, R.; Mohammadpour, J.; Meskin, N.

2018-01-01

This paper describes a new robust model predictive control (MPC) scheme to control the discrete-time linear parameter-varying input-output models subject to input and output constraints. Closed-loop asymptotic stability is guaranteed by including a quadratic terminal cost and an ellipsoidal terminal

19. Variables Predicting Foreign Language Reading Comprehension and Vocabulary Acquisition in a Linear Hypermedia Environment

Akbulut, Yavuz

2007-01-01

Factors predicting vocabulary learning and reading comprehension of advanced language learners of English in a linear multimedia text were investigated in the current study. Predictor variables of interest were multimedia type, reading proficiency, learning styles, topic interest and background knowledge about the topic. The outcome variables of…

20. Convergence Guaranteed Nonlinear Constraint Model Predictive Control via I/O Linearization

Xiaobing Kong

2013-01-01

Constructing a reliable optimal solution is a key issue for nonlinear constrained model predictive control. Input-output feedback linearization is a popular method in nonlinear control. When an input-output feedback linearizing controller is used, the original linear input constraints become nonlinear constraints, and sometimes the constraints are state dependent. This paper presents an iterative quadratic program (IQP) routine for the continuous-time system. To guarantee its convergence, another iterative approach is incorporated. The proposed algorithm can reach a feasible solution over the entire prediction horizon. Simulation results on both a numerical example and the continuous stirred tank reactor (CSTR) demonstrate the effectiveness of the proposed method.

1. A study on two phase flows of linear compressors for the prediction of refrigerant leakage

Hwang, Il Sun; Lee, Young Lim; Oh, Won Sik; Park, Kyeong Bae

2015-01-01

Usage of linear compressors is on the rise due to their high efficiency. In this paper, leakage of a linear compressor has been studied through numerical analysis and experiments. First, nitrogen leakage for a stagnant piston with fixed cylinder pressure, as well as for a moving piston with fixed cylinder pressure, was analyzed to verify the validity of the two-phase flow analysis model. Next, refrigerant leakage of a linear compressor in operation was predicted through 3-dimensional unsteady, two-phase flow CFD (computational fluid dynamics). According to the research results, the numerical analyses for the fixed cylinder pressure models were in good agreement with the experimental results. The refrigerant leakage of the linear compressor in operation mainly occurred through the oil exit, and the leakage became negligible after about 0.4 s of operation, falling below 2.0×10⁻⁴ kg/s.

2. Biochemical methane potential prediction of plant biomasses: Comparing chemical composition versus near infrared methods and linear versus non-linear models.

Godin, Bruno; Mayer, Frédéric; Agneessens, Richard; Gerin, Patrick; Dardenne, Pierre; Delfosse, Philippe; Delcarte, Jérôme

2015-01-01

The reliability of different models to predict the biochemical methane potential (BMP) of various plant biomasses using a multispecies dataset was compared. The most reliable prediction models of the BMP were those based on the near infrared (NIR) spectrum, compared to those based on chemical composition. The NIR predictions of local (specific regression and non-linear) models were able to estimate the BMP quantitatively, rapidly, cheaply and easily. Such a model could be further used for biomethanation plant management and optimization. The predictions of non-linear models were more reliable than those of linear models. The presentation form (green-dried, silage-dried and silage-wet) of biomasses to the NIR spectrometer did not influence the performance of the NIR prediction models. The accuracy of the BMP method should be improved to further enhance the BMP prediction models.

3. A national-scale model of linear features improves predictions of farmland biodiversity.

Sullivan, Martin J P; Pearce-Higgins, James W; Newson, Stuart E; Scholefield, Paul; Brereton, Tom; Oliver, Tom H

2017-12-01

Modelling species distribution and abundance is important for many conservation applications, but it is typically performed using relatively coarse-scale environmental variables such as the area of broad land-cover types. Fine-scale environmental data capturing the most biologically relevant variables have the potential to improve these models. For example, field studies have demonstrated the importance of linear features, such as hedgerows, for multiple taxa, but the absence of large-scale datasets of their extent prevents their inclusion in large-scale modelling studies. We assessed whether a novel spatial dataset mapping linear and woody-linear features across the UK improves the performance of abundance models of 18 bird and 24 butterfly species across 3723 and 1547 UK monitoring sites, respectively. Although improvements in explanatory power were small, the inclusion of linear features data significantly improved model predictive performance for many species. For some species, the importance of linear features depended on landscape context, with greater importance in agricultural areas. Synthesis and applications. This study demonstrates that a national-scale model of the extent and distribution of linear features improves predictions of farmland biodiversity. The ability to model spatial variability in the role of linear features such as hedgerows will be important in targeting agri-environment schemes to maximally deliver biodiversity benefits. Although this study focuses on farmland, data on the extent of different linear features are likely to improve species distribution and abundance models in a wide range of systems and can also potentially be used to assess habitat connectivity.

4. Drug-Target Interaction Prediction through Label Propagation with Linear Neighborhood Information.

Zhang, Wen; Chen, Yanlin; Li, Dingfang

2017-11-25

Interactions between drugs and target proteins provide important information for drug discovery. Currently, experiments have identified only a small number of drug-target interactions. Therefore, the development of computational methods for drug-target interaction prediction is an urgent task of theoretical interest and practical significance. In this paper, we propose a label propagation method with linear neighborhood information (LPLNI) for predicting unobserved drug-target interactions. First, we calculate drug-drug linear neighborhood similarity in the feature spaces by considering how to reconstruct data points from their neighbors. Then, we take these similarities as the manifold of drugs, and assume the manifold is unchanged in the interaction space. Finally, we predict unobserved interactions between known drugs and targets by using drug-drug linear neighborhood similarity and known drug-target interactions. The experiments show that LPLNI can utilize only known drug-target interactions to make high-accuracy predictions on four benchmark datasets. Furthermore, we consider incorporating chemical structures into LPLNI models. Experimental results demonstrate that the model with integrated information (LPLNI-II) produces improved performance, better than other state-of-the-art methods. The known drug-target interactions are an important information source for computational predictions. The usefulness of the proposed method is demonstrated by cross-validation and a case study.
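The propagation step described in this record can be sketched in a few lines. The toy similarity matrix, the damping parameter alpha and the fixed iteration count below are illustrative assumptions, not the exact LPLNI formulation from the paper:

```python
# Minimal sketch of label propagation over a drug-drug similarity matrix,
# in the spirit of LPLNI. The similarity values, alpha and the iteration
# count are invented for illustration.

def propagate(W, y0, alpha=0.5, iters=50):
    """Iterate y <- alpha * W y + (1 - alpha) * y0; converges for
    alpha < 1 when W is row-normalized."""
    y = list(y0)
    for _ in range(iters):
        y = [alpha * sum(W[i][j] * y[j] for j in range(len(y)))
             + (1 - alpha) * y0[i]
             for i in range(len(y))]
    return y

# Row-normalized neighborhood similarities among three hypothetical drugs
W = [[0.0, 0.7, 0.3],
     [0.7, 0.0, 0.3],
     [0.5, 0.5, 0.0]]
y0 = [1.0, 0.0, 0.0]   # only drug 0 has a known interaction with the target

scores = propagate(W, y0)
# Drug 1, most similar to the labeled drug, inherits the higher score.
```

At the fixed point the unlabeled drug most similar to the labeled one receives the largest propagated score, which is the intuition behind predicting unobserved interactions from neighborhood similarity.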

5. Linear, Transfinite and Weighted Method for Interpolation from Grid Lines Applied to OCT Images

Lindberg, Anne-Sofie Wessel; Jørgensen, Thomas Martini; Dahl, Vedrana Andersen

2018-01-01

of a square grid, but are unknown inside each square. To view these values as an image, intensities need to be interpolated at regularly spaced pixel positions. In this paper we evaluate three methods for interpolation from grid lines: linear, transfinite and weighted. The linear method does not preserve...... and the stability of the linear method further away. An important parameter influencing the performance of the interpolation methods is the upsampling rate. We perform an extensive evaluation of the three interpolation methods across a range of upsampling rates. Our statistical analysis shows significant difference...... in the performance of the three methods. We find that the transfinite interpolation works well for small upsampling rates and the proposed weighted interpolation method performs very well for all upsampling rates typically used in practice. On the basis of these findings we propose an approach for combining two OCT...

6. Linear and nonlinear models for predicting fish bioconcentration factors for pesticides.

Yuan, Jintao; Xie, Chun; Zhang, Ting; Sun, Jinfang; Yuan, Xuejie; Yu, Shuling; Zhang, Yingbiao; Cao, Yunyuan; Yu, Xingchen; Yang, Xuan; Yao, Wu

2016-08-01

This work is devoted to the applications of the multiple linear regression (MLR), multilayer perceptron neural network (MLP NN) and projection pursuit regression (PPR) to quantitative structure-property relationship analysis of bioconcentration factors (BCFs) of pesticides tested on Bluegill (Lepomis macrochirus). Molecular descriptors of a total of 107 pesticides were calculated with the DRAGON Software and selected by inverse enhanced replacement method. Based on the selected DRAGON descriptors, a linear model was built by MLR, nonlinear models were developed using MLP NN and PPR. The robustness of the obtained models was assessed by cross-validation and external validation using test set. Outliers were also examined and deleted to improve predictive power. Comparative results revealed that PPR achieved the most accurate predictions. This study offers useful models and information for BCF prediction, risk assessment, and pesticide formulation. Copyright © 2016 Elsevier Ltd. All rights reserved.

7. Non-linear effects in electron cyclotron current drive applied for the stabilization of neoclassical tearing modes

Ayten, B.; Westerhof, E.; ASDEX Upgrade team,

2014-01-01

Due to the smallness of the volumes associated with the flux surfaces around the O-point of a magnetic island, the electron cyclotron power density applied inside the island for the stabilization of neoclassical tearing modes (NTMs) can exceed the threshold for non-linear effects as derived

8. A Multiobjective Approach Applied to the Protein Structure Prediction Problem

2002-03-07

local conformations [38]. Moreover, all these models have the same theme in trying to define the properties a real protein has when folding. Today, it...attempted to solve the PSP problem with a real-valued GA and found better results than a competitor (Scheraga, et al.) [50]; however, today we know that...ACM Symposium on Applied Computing (SAC01) (March 11-14, 2001), Las Vegas, Nevada. [22] Derrida, B. "Random Energy Model: Limit of a Family of

9. Improving the Prediction of Total Surgical Procedure Time Using Linear Regression Modeling.

Edelman, Eric R; van Kuijk, Sander M J; Hamaekers, Ankie E W; de Korte, Marcel J M; van Merode, Godefridus G; Buhre, Wolfgang F F A

2017-01-01

For efficient utilization of operating rooms (ORs), accurate schedules of assigned block time and sequences of patient cases need to be made. The quality of these planning tools is dependent on the accurate prediction of total procedure time (TPT) per case. In this paper, we attempt to improve the accuracy of TPT predictions by using linear regression models based on estimated surgeon-controlled time (eSCT) and other variables relevant to TPT. We extracted data from a Dutch benchmarking database of all surgeries performed in six academic hospitals in The Netherlands from 2012 till 2016. The final dataset consisted of 79,983 records, describing 199,772 h of total OR time. Potential predictors of TPT that were included in the subsequent analysis were eSCT, patient age, type of operation, American Society of Anesthesiologists (ASA) physical status classification, and type of anesthesia used. First, we computed the predicted TPT based on a previously described fixed ratio model for each record, multiplying eSCT by 1.33. This number is based on the research performed by van Veen-Berkx et al., which showed that 33% of SCT is generally a good approximation of anesthesia-controlled time (ACT). We then systematically tested all possible linear regression models to predict TPT using eSCT in combination with the other available independent variables. In addition, all regression models were again tested without eSCT as a predictor to predict ACT separately (which leads to TPT by adding SCT). TPT was most accurately predicted using a linear regression model based on the independent variables eSCT, type of operation, ASA classification, and type of anesthesia. This model performed significantly better than the fixed ratio model and the method of predicting ACT separately. Making use of these more accurate predictions in planning and sequencing algorithms may enable an increase in utilization of ORs, leading to significant financial and productivity related benefits.
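The two approaches compared in this record can be illustrated with a minimal sketch: the fixed-ratio model (TPT ≈ 1.33 × eSCT) versus a fitted simple linear regression. The data points and helper names are invented for illustration, not taken from the Dutch benchmarking database:

```python
# Illustrative comparison of a fixed-ratio TPT model with a fitted
# simple linear regression. All numbers are invented.

def fit_simple_regression(x, y):
    """Closed-form least squares for y = a + b * x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

def fixed_ratio_tpt(esct, ratio=1.33):
    """Fixed-ratio model: ACT approximated as 33% of surgeon-controlled time."""
    return ratio * esct

# Invented eSCT (hours) and observed TPT (hours)
esct = [1.0, 2.0, 3.0, 4.0]
tpt = [1.5, 2.8, 4.1, 5.4]

a, b = fit_simple_regression(esct, tpt)   # a = 0.2, b = 1.3 on these data
```

Unlike the fixed ratio, the regression admits an intercept (and, in the study, further covariates such as operation type and ASA class), which is why it can track TPT more closely.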

10. Improving the Prediction of Total Surgical Procedure Time Using Linear Regression Modeling

Eric R. Edelman

2017-06-01

Full Text Available For efficient utilization of operating rooms (ORs), accurate schedules of assigned block time and sequences of patient cases need to be made. The quality of these planning tools is dependent on the accurate prediction of total procedure time (TPT) per case. In this paper, we attempt to improve the accuracy of TPT predictions by using linear regression models based on estimated surgeon-controlled time (eSCT) and other variables relevant to TPT. We extracted data from a Dutch benchmarking database of all surgeries performed in six academic hospitals in The Netherlands from 2012 till 2016. The final dataset consisted of 79,983 records, describing 199,772 h of total OR time. Potential predictors of TPT that were included in the subsequent analysis were eSCT, patient age, type of operation, American Society of Anesthesiologists (ASA) physical status classification, and type of anesthesia used. First, we computed the predicted TPT based on a previously described fixed ratio model for each record, multiplying eSCT by 1.33. This number is based on the research performed by van Veen-Berkx et al., which showed that 33% of SCT is generally a good approximation of anesthesia-controlled time (ACT). We then systematically tested all possible linear regression models to predict TPT using eSCT in combination with the other available independent variables. In addition, all regression models were again tested without eSCT as a predictor to predict ACT separately (which leads to TPT by adding SCT). TPT was most accurately predicted using a linear regression model based on the independent variables eSCT, type of operation, ASA classification, and type of anesthesia. This model performed significantly better than the fixed ratio model and the method of predicting ACT separately. Making use of these more accurate predictions in planning and sequencing algorithms may enable an increase in utilization of ORs, leading to significant financial and productivity related benefits.

11. Possibility logic applied to pressure vessel residual lifetime prediction

Garribba, S.; Lucia, A.C.; Volta, G.

1985-01-01

The adequacy of a probability measure to deal with the different types of uncertainty affecting any pressure vessel lifetime prediction is discussed. A more comprehensive framework, derived from fuzzy set theory and including possibility and probability measures as particular cases, is considered. With reference to the most critical step of lifetime assessment (the ND inspection), the paper compares the results obtained adopting a possibility measure or a probability measure in the representation models (fault tree-event tree) and in the decision models.

12. Predicting musically induced emotions from physiological inputs: Linear and neural network models

Frank A. Russo

2013-08-01

Full Text Available Listening to music often leads to physiological responses. Do these physiological responses contain sufficient information to infer emotion induced in the listener? The current study explores this question by attempting to predict judgments of 'felt' emotion from physiological responses alone using linear and neural network models. We measured five channels of peripheral physiology from 20 participants – heart rate, respiration, galvanic skin response, and activity in corrugator supercilii and zygomaticus major facial muscles. Using valence and arousal (VA) dimensions, participants rated their felt emotion after listening to each of 12 classical music excerpts. After extracting features from the five channels, we examined their correlation with VA ratings, and then performed multiple linear regression to see if a linear relationship between the physiological responses could account for the ratings. Although linear models predicted a significant amount of variance in arousal ratings, they were unable to do so with valence ratings. We then used a neural network to provide a nonlinear account of the ratings. The network was trained on the mean ratings of eight of the 12 excerpts and tested on the remainder. Performance of the neural network confirms that physiological responses alone can be used to predict musically induced emotion. The nonlinear model derived from the neural network was more accurate than linear models derived from multiple linear regression, particularly along the valence dimension. A secondary analysis allowed us to quantify the relative contributions of inputs to the nonlinear model. The study represents a novel approach to understanding the complex relationship between physiological responses and musically induced emotion.

13. Straight line fitting and predictions: On a marginal likelihood approach to linear regression and errors-in-variables models

Christiansen, Bo

2015-04-01

Linear regression methods are without doubt the most used approaches to describe and predict data in the physical sciences. They are often good first order approximations and they are in general easier to apply and interpret than more advanced methods. However, even the properties of univariate regression can lead to debate over the appropriateness of various models as witnessed by the recent discussion about climate reconstruction methods. Before linear regression is applied important choices have to be made regarding the origins of the noise terms and regarding which of the two variables under consideration that should be treated as the independent variable. These decisions are often not easy to make but they may have a considerable impact on the results. We seek to give a unified probabilistic - Bayesian with flat priors - treatment of univariate linear regression and prediction by taking, as starting point, the general errors-in-variables model (Christiansen, J. Clim., 27, 2014-2031, 2014). Other versions of linear regression can be obtained as limits of this model. We derive the likelihood of the model parameters and predictands of the general errors-in-variables model by marginalizing over the nuisance parameters. The resulting likelihood is relatively simple and easy to analyze and calculate. The well known unidentifiability of the errors-in-variables model is manifested as the absence of a well-defined maximum in the likelihood. However, this does not mean that probabilistic inference can not be made; the marginal likelihoods of model parameters and the predictands have, in general, well-defined maxima. We also include a probabilistic version of classical calibration and show how it is related to the errors-in-variables model. The results are illustrated by an example from the coupling between the lower stratosphere and the troposphere in the Northern Hemisphere winter.
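One point from this abstract, that the choice of which variable is treated as independent matters, can be shown with a small sketch: regressing y on x and inverting an x-on-y regression give different slopes whenever the fit is imperfect. The data below are illustrative:

```python
# Small sketch: the fitted slope depends on which variable is treated
# as independent. The data points are invented.

def ols_slope(x, y):
    """Least-squares slope of y regressed on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [1.2, 1.9, 3.2, 3.8, 5.1]   # noisy linear relation

b_yx = ols_slope(x, y)           # treat x as independent
b_xy = 1.0 / ols_slope(y, x)     # treat y as independent, then invert
# b_xy >= b_yx, with equality only for a perfect linear fit
```

The gap between the two slopes grows with the noise level, which is exactly the regression-direction ambiguity that the errors-in-variables treatment in this record addresses probabilistically.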

14. Linearized and Kernelized Sparse Multitask Learning for Predicting Cognitive Outcomes in Alzheimer’s Disease

Xiaoli Liu

2018-01-01

Full Text Available Alzheimer’s disease (AD has been not only the substantial financial burden to the health care system but also the emotional burden to patients and their families. Predicting cognitive performance of subjects from their magnetic resonance imaging (MRI measures and identifying relevant imaging biomarkers are important research topics in the study of Alzheimer’s disease. Recently, the multitask learning (MTL methods with sparsity-inducing norm (e.g., l2,1-norm have been widely studied to select the discriminative feature subset from MRI features by incorporating inherent correlations among multiple clinical cognitive measures. However, these previous works formulate the prediction tasks as a linear regression problem. The major limitation is that they assumed a linear relationship between the MRI features and the cognitive outcomes. Some multikernel-based MTL methods have been proposed and shown better generalization ability due to the nonlinear advantage. We quantify the power of existing linear and nonlinear MTL methods by evaluating their performance on cognitive score prediction of Alzheimer’s disease. Moreover, we extend the traditional l2,1-norm to a more general lql1-norm (q≥1. Experiments on the Alzheimer’s Disease Neuroimaging Initiative database showed that the nonlinear l2,1lq-MKMTL method not only achieved better prediction performance than the state-of-the-art competitive methods but also effectively fused the multimodality data.

15. Predicting respiratory motion signals for image-guided radiotherapy using multi-step linear methods (MULIN)

Ernst, Floris; Schweikard, Achim

2008-01-01

Forecasting of respiration motion in image-guided radiotherapy requires algorithms that can accurately and efficiently predict target location. Improved methods for respiratory motion forecasting were developed and tested. MULIN, a new family of prediction algorithms based on linear expansions of the prediction error, was developed and tested. Computer-generated data with a prediction horizon of 150 ms was used for testing in simulation experiments. MULIN was compared to Least Mean Squares-based predictors (LMS; normalized LMS, nLMS; wavelet-based multiscale autoregression, wLMS) and a multi-frequency Extended Kalman Filter (EKF) approach. The in vivo performance of the algorithms was tested on data sets of patients who underwent radiotherapy. The new MULIN methods are highly competitive, outperforming the LMS and the EKF prediction algorithms in real-world settings and performing similarly to optimized nLMS and wLMS prediction algorithms. On simulated, periodic data the MULIN algorithms are outperformed only by the EKF approach due to its inherent advantage in predicting periodic signals. In the presence of noise, the MULIN methods significantly outperform all other algorithms. The MULIN family of algorithms is a feasible tool for the prediction of respiratory motion, performing as well as or better than conventional algorithms while requiring significantly lower computational complexity. The MULIN algorithms are of special importance wherever high-speed prediction is required. (orig.)
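A minimal sketch of one of the baseline predictors named in this record, the normalized LMS (nLMS) one-step-ahead predictor. The tap count, step size and test signal are illustrative choices, and this is not the MULIN algorithm itself:

```python
import math

# Normalized LMS (nLMS) one-step-ahead predictor, a baseline MULIN is
# compared against. Taps, step size and signal are illustrative.

def nlms_predict(signal, taps=3, mu=0.5, eps=1e-6):
    """Predict signal[t] from the previous `taps` samples, adapting weights."""
    w = [0.0] * taps
    preds = []
    for t in range(len(signal)):
        x = [signal[t - 1 - k] if t - 1 - k >= 0 else 0.0 for k in range(taps)]
        p = sum(wi * xi for wi, xi in zip(w, x))   # a priori prediction
        preds.append(p)
        e = signal[t] - p                          # prediction error
        norm = sum(xi * xi for xi in x) + eps
        w = [wi + mu * e * xi / norm for wi, xi in zip(w, x)]
    return preds

# A breathing-like periodic signal; after adaptation the error shrinks.
sig = [math.sin(0.3 * t) for t in range(300)]
preds = nlms_predict(sig)
tail_err = max(abs(p - s) for p, s in zip(preds[250:], sig[250:]))
```

On a clean periodic signal like this, the adaptive filter locks on and the late prediction error is far smaller than the initial transient, which is the behaviour the in vivo comparisons in the record quantify on real respiration traces.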

16. Predicting respiratory motion signals for image-guided radiotherapy using multi-step linear methods (MULIN)

Ernst, Floris; Schweikard, Achim [University of Luebeck, Institute for Robotics and Cognitive Systems, Luebeck (Germany)

2008-06-15

Forecasting of respiration motion in image-guided radiotherapy requires algorithms that can accurately and efficiently predict target location. Improved methods for respiratory motion forecasting were developed and tested. MULIN, a new family of prediction algorithms based on linear expansions of the prediction error, was developed and tested. Computer-generated data with a prediction horizon of 150 ms was used for testing in simulation experiments. MULIN was compared to Least Mean Squares-based predictors (LMS; normalized LMS, nLMS; wavelet-based multiscale autoregression, wLMS) and a multi-frequency Extended Kalman Filter (EKF) approach. The in vivo performance of the algorithms was tested on data sets of patients who underwent radiotherapy. The new MULIN methods are highly competitive, outperforming the LMS and the EKF prediction algorithms in real-world settings and performing similarly to optimized nLMS and wLMS prediction algorithms. On simulated, periodic data the MULIN algorithms are outperformed only by the EKF approach due to its inherent advantage in predicting periodic signals. In the presence of noise, the MULIN methods significantly outperform all other algorithms. The MULIN family of algorithms is a feasible tool for the prediction of respiratory motion, performing as well as or better than conventional algorithms while requiring significantly lower computational complexity. The MULIN algorithms are of special importance wherever high-speed prediction is required. (orig.)

17. The acoustic Doppler effect applied to the study of linear motions

Gómez-Tejedor, José A; Castro-Palacio, Juan C; Monsoriu, Juan A

2014-01-01

In this work, the change of frequency of a sound wave due to the Doppler effect has been measured using a smartphone. For this purpose, a speaker at rest and a smartphone placed on a cart on an air track were used. The change in frequency was measured by using an application for Android™, ‘Frequency Analyzer’, which was developed by us specifically for this work. This made it possible to analyze four types of mechanical motions: uniform linear motion, uniform accelerated linear motion, harmonic oscillations and damped harmonic oscillations. These experiments are suitable for undergraduate students. The main novelty of this work was the possibility of measuring the instantaneous frequency as a function of time with high precision. The results were compared with alternative measurements yielding good agreement. (paper)

18. Genomic prediction applied to high-biomass sorghum for bioenergy production.

de Oliveira, Amanda Avelar; Pastina, Maria Marta; de Souza, Vander Filipe; da Costa Parrella, Rafael Augusto; Noda, Roberto Willians; Simeone, Maria Lúcia Ferreira; Schaffert, Robert Eugene; de Magalhães, Jurandir Vieira; Damasceno, Cynthia Maria Borges; Margarido, Gabriel Rodrigues Alves

2018-01-01

The increasing cost of energy and finite oil and gas reserves have created a need to develop alternative fuels from renewable sources. Due to its abiotic stress tolerance and annual cultivation, high-biomass sorghum (Sorghum bicolor L. Moench) shows potential as a bioenergy crop. Genomic selection is a useful tool for accelerating genetic gains and could restructure plant breeding programs by enabling early selection and reducing breeding cycle duration. This work aimed at predicting breeding values via genomic selection models for 200 sorghum genotypes comprising landrace accessions and breeding lines from biomass and saccharine groups. These genotypes were divided into two sub-panels, according to breeding purpose. We evaluated the following phenotypic biomass traits: days to flowering, plant height, fresh and dry matter yield, and fiber, cellulose, hemicellulose, and lignin proportions. Genotyping by sequencing yielded more than 258,000 single-nucleotide polymorphism markers, which revealed population structure between sub-panels. We then fitted and compared genomic selection models BayesA, BayesB, BayesCπ, BayesLasso, Bayes Ridge Regression and random regression best linear unbiased predictor. The resulting predictive abilities varied little between the different models, but substantially between traits. Different scenarios of prediction showed the potential of using genomic selection results between sub-panels and years, although the genotype by environment interaction negatively affected accuracies. Functional enrichment analyses performed with the marker-predicted effects suggested several interesting associations, with potential for revealing biological processes relevant to the studied quantitative traits. This work shows that genomic selection can be successfully applied in biomass sorghum breeding programs.

19. Predictive analysis of photodynamic therapy applied to esophagus cancer

Fanjul-Vélez, F.; del Campo-Gutiérrez, M.; Ortega-Quijano, N.; Arce-Diego, J. L.

2008-04-01

The use of optical techniques in medicine has in many cases revolutionized medical praxis, providing new tools for practitioners or improving existing ones in the fight against disease. The application of this technology comprises mainly two branches, characterization and treatment of biological tissues. Photodynamic Therapy (PDT) provides a solution for malignant tissue destruction, by means of the inoculation of a photosensitizer and irradiation by an optical source. The key factor of the procedure is the localization of the damage to avoid collateral harmful effects. The volume of tissue destroyed depends on the type of photosensitizer inoculated, both on its reactive characteristics and its distribution inside the tissue, and also on the specific properties of the optical source, that is, the optical power, wavelength and exposition time. In this work, a model for PDT based on the one-dimensional diffusion equation, extensible to 3D, to estimate the optical distribution in tissue, and on photosensitizer parameters to take into account the photobleaching effect is proposed. The application to esophagus cancer allows the selection of the right optical source parameters, like irradiance, wavelength or exposition time, in order to predict the area of tissue destruction.

20. Prediction of applied forces in handrim wheelchair propulsion.

Lin, Chien-Ju; Lin, Po-Chou; Guo, Lan-Yuen; Su, Fong-Chin

2011-02-03

Researchers of wheelchair propulsion have usually suggested that a wheelchair can be properly designed using anthropometrics to reduce high mechanical load and thus reduce pain and damage to joints. A model based on physiological features and biomechanical principles can be used to determine anthropometric relationships for wheelchair fitting. To improve the understanding of man-machine interaction and the mechanism through which propulsion performance can be enhanced, this study develops and validates an energy model for wheelchair propulsion. Kinematic data obtained from ten able-bodied and ten wheelchair-dependent users during level propulsion at an average velocity of 1 m/s were used as the input of a planar model with the criteria of increasing efficiency and reducing joint load. Results demonstrate that for both experienced and inexperienced users, predicted handrim contact forces agree with experimental data through an extensive range of the push. Significant deviations that were mostly observed in the early stage of the push phase might result from the lack of consideration of muscle dynamics and wrist joint biomechanics. The proposed model effectively verified the handrim contact force patterns during dynamic propulsion. Users do not aim to generate mechanically most effective forces, so as to avoid high loadings on the joints. Copyright © 2010 Elsevier Ltd. All rights reserved.

1. A Dantzig-Wolfe decomposition algorithm for linear economic model predictive control of dynamically decoupled subsystems

Sokoler, Leo Emil; Standardi, Laura; Edlund, Kristian

2014-01-01

This paper presents a warm-started Dantzig–Wolfe decomposition algorithm tailored to economic model predictive control of dynamically decoupled subsystems. We formulate the constrained optimal control problem solved at each sampling instant as a linear program with state space constraints, input...... limits, input rate limits, and soft output limits. The objective function of the linear program is related directly to the cost of operating the subsystems, and the cost of violating the soft output constraints. Simulations for large-scale economic power dispatch problems show that the proposed algorithm...... is significantly faster than both state-of-the-art linear programming solvers, and a structure exploiting implementation of the alternating direction method of multipliers. It is also demonstrated that the control strategy presented in this paper can be tuned using a weighted ℓ1-regularization term...

2. Predicting non-linear dynamics by stable local learning in a recurrent spiking neural network.

2017-11-27

The brain needs to predict how the body reacts to motor commands, but how a network of spiking neurons can learn non-linear body dynamics using local, online and stable learning rules is unclear. Here, we present a supervised learning scheme for the feedforward and recurrent connections in a network of heterogeneous spiking neurons. The error in the output is fed back through fixed random connections with a negative gain, causing the network to follow the desired dynamics. The rule for Feedback-based Online Local Learning Of Weights (FOLLOW) is local in the sense that weight changes depend on the presynaptic activity and the error signal projected onto the postsynaptic neuron. We provide examples of learning linear, non-linear and chaotic dynamics, as well as the dynamics of a two-link arm. Under reasonable approximations, we show, using the Lyapunov method, that FOLLOW learning is uniformly stable, with the error going to zero asymptotically.

3. Financial Distress Prediction using Linear Discriminant Analysis and Support Vector Machine

Santoso, Noviyanti; Wibowo, Wahyu

2018-03-01

Financial distress is the early stage before bankruptcy. Bankruptcies caused by financial distress can be seen from the financial statements of the company. The ability to predict financial distress has become an important research topic because it can provide early warning for the company. In addition, predicting financial distress is also beneficial for investors and creditors. This research builds a prediction model of financial distress for industrial companies in Indonesia by comparing the performance of Linear Discriminant Analysis (LDA) and Support Vector Machine (SVM) combined with a variable selection technique. The result of this research is that the prediction model based on hybrid Stepwise-SVM obtains a better balance among fitting ability, generalization ability and model stability than the other models.
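A toy sketch of the LDA side of this comparison: with a single feature, equal priors and equal class variances, linear discriminant analysis reduces to thresholding at the midpoint of the class means. The financial ratios below are invented for illustration:

```python
# 1-D linear discriminant analysis sketch for distress classification.
# Equal-variance, equal-prior LDA with one feature reduces to the
# midpoint of the two class means. All ratio values are invented.

def lda_threshold(healthy, distressed):
    """Decision threshold: midpoint of the class means."""
    m0 = sum(healthy) / len(healthy)
    m1 = sum(distressed) / len(distressed)
    return (m0 + m1) / 2.0

# Illustrative debt-to-asset ratios for the two classes
healthy = [0.2, 0.3, 0.25, 0.35]
distressed = [0.7, 0.8, 0.75, 0.65]

t = lda_threshold(healthy, distressed)

def predict(ratio):
    return "distressed" if ratio > t else "healthy"
```

A real LDA (and the SVM it is compared against) works in many dimensions and estimates covariances, but the thresholding intuition carries over: both methods fit a linear decision boundary between the two classes.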

4. Application of genetic algorithm - multiple linear regressions to predict the activity of RSK inhibitors

Avval Zhila Mohajeri

2015-01-01

Full Text Available This paper deals with developing a linear quantitative structure-activity relationship (QSAR) model for predicting the RSK inhibition activity of some new compounds. A dataset consisting of 62 pyrazino[1,2-a]indole, diazepino[1,2-a]indole, and imidazole derivatives with known inhibitory activities was used. The multiple linear regression (MLR) technique combined with the stepwise (SW) and the genetic algorithm (GA) methods as variable selection tools was employed. To further check the stability, robustness and predictability of the proposed models, internal and external validation techniques were used. Comparison of the results obtained indicates that the GA-MLR model is superior to the SW-MLR model and that it is applicable for designing novel RSK inhibitors.

5. Computationally Efficient Amplitude Modulated Sinusoidal Audio Coding using Frequency-Domain Linear Prediction

Christensen, M. G.; Jensen, Søren Holdt

2006-01-01

A method for amplitude modulated sinusoidal audio coding is presented that has low complexity and low delay. This is based on a subband processing system, where, in each subband, the signal is modeled as an amplitude modulated sum of sinusoids. The envelopes are estimated using frequency......-domain linear prediction and the prediction coefficients are quantized. As a proof of concept, we evaluate different configurations in a subjective listening test, and this shows that the proposed method offers significant improvements in sinusoidal coding. Furthermore, the properties of the frequency...

6. Comparison of Linear and Nonlinear Model Predictive Control for Optimization of Spray Dryer Operation

Petersen, Lars Norbert; Poulsen, Niels Kjølstad; Niemann, Hans Henrik

2015-01-01

In this paper, we compare the performance of an economically optimizing Nonlinear Model Predictive Controller (E-NMPC) to a linear tracking Model Predictive Controller (MPC) for a spray drying plant. We find in this simulation study, that the economic performance of the two controllers are almost...... equal. We evaluate the economic performance with an industrially recorded disturbance scenario, where unmeasured disturbances and model mismatch are present. The state of the spray dryer, used in the E-NMPC and MPC, is estimated using Kalman Filters with noise covariances estimated by a maximum...

7. Predicting haemodynamic networks using electrophysiology: The role of non-linear and cross-frequency interactions

Tewarie, P.; Bright, M.G.; Hillebrand, A.; Robson, S.E.; Gascoyne, L.E.; Morris, P.G.; Meier, J.; Van Mieghem, P.; Brookes, M.J.

2016-01-01

Understanding the electrophysiological basis of resting state networks (RSNs) in the human brain is a critical step towards elucidating how inter-areal connectivity supports healthy brain function. In recent years, the relationship between RSNs (typically measured using haemodynamic signals) and electrophysiology has been explored using functional Magnetic Resonance Imaging (fMRI) and magnetoencephalography (MEG). Significant progress has been made, with similar spatial structure observable in both modalities. However, there is a pressing need to understand this relationship beyond simple visual similarity of RSN patterns. Here, we introduce a mathematical model to predict fMRI-based RSNs using MEG. Our unique model, based upon a multivariate Taylor series, incorporates both phase and amplitude based MEG connectivity metrics, as well as linear and non-linear interactions within and between neural oscillations measured in multiple frequency bands. We show that including non-linear interactions, multiple frequency bands and cross-frequency terms significantly improves fMRI network prediction. This shows that fMRI connectivity is not only the result of direct electrophysiological connections, but is also driven by the overlap of connectivity profiles between separate regions. Our results indicate that a complete understanding of the electrophysiological basis of RSNs goes beyond simple frequency-specific analysis, and further exploration of non-linear and cross-frequency interactions will shed new light on distributed network connectivity, and its perturbation in pathology. PMID:26827811

8. [Prediction model of health workforce and beds in county hospitals of Hunan by multiple linear regression].

Ling, Ru; Liu, Jiawang

2011-12-01

To construct prediction models for the health workforce and hospital beds in county hospitals of Hunan by multiple linear regression. We surveyed 16 counties in Hunan with stratified random sampling using uniform questionnaires, and performed multiple linear regression analysis with 20 indicators selected by literature review. Independent variables in the multiple linear regression model on medical personnel in county hospitals included the counties' urban residents' income, crude death rate, medical beds, business occupancy, professional equipment value, the number of devices valued above 10,000 yuan, fixed assets, long-term debt, medical income, medical expenses, outpatient and emergency visits, hospital visits, actual available bed days, and utilization rate of hospital beds. Independent variables in the multiple linear regression model on county hospital beds included the population aged 65 and above in the counties, disposable income of urban residents, medical personnel of medical institutions in the county area, business occupancy, the total value of professional equipment, fixed assets, long-term debt, medical income, medical expenses, outpatient and emergency visits, hospital visits, actual available bed days, utilization rate of hospital beds, and length of hospitalization. The prediction models show good explanatory power and fit, and may be used for short- and mid-term forecasting.
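A minimal sketch of building such a multiple linear regression prediction model with ordinary least squares (the indicator values and coefficients below are invented for illustration, not the study's data):

```python
import numpy as np

# Illustrative only: 16 "counties" with 3 hypothetical indicators each
rng = np.random.default_rng(0)
X = rng.normal(size=(16, 3))
true_coef = np.array([2.0, -1.0, 0.5])
y = X @ true_coef + 10.0            # noise-free "workforce" values for the sketch

# Ordinary least squares with an explicit intercept column
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(x_new):
    """Predict the outcome for new rows of indicator values."""
    return np.column_stack([np.ones(len(x_new)), x_new]) @ coef
```

On noise-free data the fit recovers the generating coefficients exactly; with real survey data one would also inspect residuals and collinearity among the indicators.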

9. MULTIPLE LINEAR REGRESSION ANALYSIS FOR PREDICTION OF BOILER LOSSES AND BOILER EFFICIENCY

Chayalakshmi C.L

2018-01-01

Calculation of boiler efficiency is essential if its parameters need to be controlled for either maintaining or enhancing efficiency. But determining boiler efficiency by the conventional method is time-consuming and very expensive, so it is not practical to determine boiler efficiency frequently. The work presented in this paper deals with establishing the statistical mo...

10. Integrating piecewise linear representation and ensemble neural network for stock price prediction

Asaduzzaman, Md.; Shahjahan, Md.; Ahmed, Fatema Johera; Islam, Md. Monirul; Murase, Kazuyuki

2014-01-01

Stock prices are considered to be very dynamic and susceptible to quick changes because of the underlying nature of the financial domain, and in part because of the interplay between known parameters and unknown factors. Of late, several researchers have used Piecewise Linear Representation (PLR) to predict stock market prices. However, improvements are needed in choosing the appropriate threshold for the trading decision, choosing the input index, and improving the overall perf...

11. Linear free energy relationship applied to trivalent cations with lanthanum and actinium oxide and hydroxide structure

Ragavan, Anpalaki J.

2006-01-01

Linear free energy relationships for trivalent cations with crystalline M2O3 and M(OH)3 phases of lanthanides and actinides were developed from known thermodynamic properties of the aqueous trivalent cations, modifying the Sverjensky and Molling equation. The linear free energy relationship for trivalent cations is ΔG0_f,MvX = a_MvX · ΔG0_n,M3+ + b_MvX + β_MvX · r_M3+, where the coefficients a_MvX, b_MvX, and β_MvX characterize a particular structural family of MvX, r_M3+ is the ionic radius of the M3+ cation, ΔG0_f,MvX is the standard Gibbs free energy of formation of MvX, and ΔG0_n,M3+ is the standard non-solvation free energy of the cation. The coefficients for the oxide family are: a_MvX = 0.2705, b_MvX = -1984.75 kJ/mol, and β_MvX = 197.24 kJ/(mol·nm). The coefficients for the hydroxide family are: a_MvX = 0.1587, b_MvX = -1474.09 kJ/mol, and β_MvX = 791.70 kJ/(mol·nm).
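Plugging the reported oxide-family coefficients into the relationship can be sketched as follows; the cation's non-solvation free energy and ionic radius below are hypothetical inputs for illustration, not values from the paper:

```python
# Linear free energy relationship for the M2O3 (oxide) family:
#   dG_f = a * dG_n + b + beta * r
# Coefficients as reported in the abstract:
a_ox, b_ox, beta_ox = 0.2705, -1984.75, 197.24   # (-, kJ/mol, kJ/(mol*nm))

def gibbs_formation(dG_n, r_ionic, a, b, beta):
    """Standard Gibbs free energy of formation of MvX (kJ/mol)."""
    return a * dG_n + b + beta * r_ionic

# Hypothetical trivalent cation: dG_n = -1400 kJ/mol, r = 0.10 nm
dG = gibbs_formation(-1400.0, 0.10, a_ox, b_ox, beta_ox)
# dG = 0.2705*(-1400) - 1984.75 + 197.24*0.10 = -2343.726 kJ/mol
```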

12. Optimization of a neutron transmission beamline applied to materials science for the CAB linear accelerator

Ramirez, S; Santisteban, J.R

2009-01-01

The Neutrons and Reactors Laboratory (NYR) of CAB (Centro Atomico Bariloche) is equipped with a linear electron accelerator (LINAC). This LINAC is used as a neutron source from which two beams are extracted to perform neutron transmission and scattering experiments. Through these experiments, structural and dynamic properties of materials can be studied. A neutron transmission experiment consists of a collimated neutron beam that interacts with a sample, with a detector placed behind the sample. Important information about the microstructural characteristics of the material can be obtained by comparing the neutron spectra before and after interaction with the sample. In the NYR Laboratory, cylindrical samples of one inch in diameter have traditionally been studied. Nonetheless, there is strong motivation for systematic research on smaller samples with different geometries, particularly sheets and tensile-test specimens. Hence, the NYR Laboratory has considered incorporating a neutron guide into the existing transmission line. Accordingly, the main objective of this work was to optimize the neutron optics of the transmission flight tube. This optimization not only improved the existing line but also provided a selection criterion for the acquisition of the neutron guide.

13. APPLYING ROBUST RANKING METHOD IN TWO PHASE FUZZY OPTIMIZATION LINEAR PROGRAMMING PROBLEMS (FOLPP)

Monalisha Pattnaik

2014-12-01

Background: This paper explores solutions to fuzzy optimization linear programming problems (FOLPP) in which some parameters are fuzzy numbers. In practice, there are many problems in which all decision parameters are fuzzy numbers, and such problems are usually solved by either probabilistic programming or multi-objective programming methods. Methods: In this paper, using the concept of comparison of fuzzy numbers, a very effective method is introduced for solving these problems. This paper extends linear programming to a fuzzy environment. Under the problem assumptions, the optimal solution can still be obtained using the two-phase simplex-based method in the fuzzy environment. The fuzzy decision variables can be initially generated and then solved and improved sequentially using the fuzzy decision approach with a robust ranking technique. Results and conclusions: The model is illustrated with an application, and a post-optimal analysis approach is presented. The proposed procedure was programmed in MATLAB (R2009a) to plot the four-dimensional slice diagram for the application. Finally, a numerical example is presented to illustrate the effectiveness of the theoretical results and to gain additional managerial insights.

14. Wireless control system for two-axis linear oscillating motion applying CBR technology

Kuzyakov, O. N.; Andreeva, M. A.

2018-03-01

The paper presents aspects of developing a movement control system. The system determines the movement characteristics of a controlled object that performs oscillating linear motion along two axes. The system has an electro-optical principle of operation: light receivers are attached to the controlled object, and a laser emitter is attached to a static structure. While the object moves along the structure, the emitter signal is registered by the light receivers, from which the object's position and movement characteristics are determined. An algorithm for the system implementation is elaborated. Signal processing is performed on the basis of the case-based reasoning (CBR) method. The system is intended for use in the machine-building industry for controlling relative displacement of a dynamic object or its assembly.

15. Linear filters as a method of real-time prediction of geomagnetic activity

McPherron, R.L.; Baker, D.N.; Bargatze, L.F.

1985-01-01

Important factors controlling geomagnetic activity include the solar wind velocity, the strength of the interplanetary magnetic field (IMF), and the field orientation. Because these quantities change so much in transit through the solar wind, real-time monitoring immediately upstream of the earth provides the best input for any technique of real-time prediction. One such technique is linear prediction filtering which utilizes past histories of the input and output of a linear system to create a time-invariant filter characterizing the system. Problems of nonlinearity or temporal changes of the system can be handled by appropriate choice of input parameters and piecewise approximation in various ranges of the input. We have created prediction filters for all the standard magnetic indices and tested their efficiency. The filters show that the initial response of the magnetosphere to a southward turning of the IMF peaks in 20 minutes and then again in 55 minutes. After a northward turning, auroral zone indices and the midlatitude ASYM index return to background within 2 hours, while Dst decays exponentially with a time constant of about 8 hours. This paper describes a simple, real-time system utilizing these filters which could predict a substantial fraction of the variation in magnetic activity indices 20 to 50 minutes in advance
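The filter-estimation step described above, deriving a time-invariant impulse response from past input and output histories, can be sketched with ordinary least squares; the synthetic input and impulse response below are illustrative stand-ins, not geomagnetic data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "driver" input and a known impulse response that peaks after a delay
x = rng.normal(size=500)
h_true = np.array([0.0, 0.5, 0.3, 0.1])
y = np.convolve(x, h_true)[: len(x)]       # "index" output of the linear system

# Build a design matrix of past inputs (one row per time step) and solve
# least squares for the filter coefficients h[0..L-1].
L = len(h_true)
rows = [x[n - L + 1 : n + 1][::-1] for n in range(L - 1, len(x))]
X = np.array(rows)                          # each row: x[n], x[n-1], ..., x[n-L+1]
h_est, *_ = np.linalg.lstsq(X, y[L - 1 :], rcond=None)
```

With noise-free data the estimated filter matches the true impulse response; on real solar-wind/index data the same construction yields the prediction filters the paper describes, with piecewise fits handling nonlinearity.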

16. Linear Model-Based Predictive Control of the LHC 1.8 K Cryogenic Loop

1999-01-01

The LHC accelerator will employ 1800 superconducting magnets (for guidance and focusing of the particle beams) in a pressurized superfluid helium bath at 1.9 K. This temperature is a severely constrained control parameter in order to avoid the transition from the superconducting to the normal state. Cryogenic processes are difficult to regulate due to their highly non-linear physical parameters (heat capacity, thermal conductance, etc.) and undesirable peculiarities like non self-regulating process, inverse response and variable dead time. To reduce the requirements on either temperature sensor or cryogenic system performance, various control strategies have been investigated on a reduced-scale LHC prototype built at CERN (String Test). Model Based Predictive Control (MBPC) is a regulation algorithm based on the explicit use of a process model to forecast the plant output over a certain prediction horizon. This predicted controlled variable is used in an on-line optimization procedure that minimizes an approp...

17. Robust Model Predictive Control Using Linear Matrix Inequalities for the Treatment of Asymmetric Output Constraints

Mariana Santos Matos Cavalca

2012-01-01

One of the main advantages of predictive control approaches is the capability of dealing explicitly with constraints on the manipulated and output variables. However, if the predictive control formulation does not consider model uncertainties, then constraint satisfaction may be compromised. A solution for this inconvenience is to use robust model predictive control (RMPC) strategies based on linear matrix inequalities (LMIs). However, LMI-based RMPC formulations typically consider only symmetric constraints. This paper proposes a method based on pseudoreferences to treat asymmetric output constraints in integrating SISO systems. The technique guarantees robust constraint satisfaction and convergence of the state to the desired equilibrium point. A case study using numerical simulation indicates that satisfactory results can be achieved.

18. Evaluation of accuracy of linear regression models in predicting urban stormwater discharge characteristics.

2014-06-01

19. Improved prediction of genetic predisposition to psychiatric disorders using genomic feature best linear unbiased prediction models

Rohde, Palle Duun; Demontis, Ditte; Børglum, Anders

...is enriched for causal variants. Here we apply the GFBLUP model to a small schizophrenia case-control study to test the promise of this model on psychiatric disorders, and hypothesize that the performance will be increased when applying the model to a larger ADHD case-control study if the genomic feature contains the causal variants. Materials and Methods: The schizophrenia study consisted of 882 controls and 888 schizophrenia cases genotyped for 520,000 SNPs. The ADHD study contained 25,954 controls and 16,663 ADHD cases with 8.4 million imputed genotypes. Results: The predictive ability for schizophrenia ... (...6% for the null model). Conclusion: The improvement in predictive ability for schizophrenia was marginal; however, greater improvement is expected for the larger ADHD data.

20. Non-linear multivariable predictive control of an alcoholic fermentation process using functional link networks

Luiz Augusto da Cruz Meleiro

2005-06-01

1. Applying moving bed biofilm reactor for removing linear alkylbenzene sulfonate using synthetic media

Jalaleddin Mollaei

2015-01-01

Detergents and their discharge into water and wastewater cause various problems, such as foam production, abnormal algal growth, and accumulation and dispersion in aqueous environments. One reactor was designed with a media filling rate of 30% and operated under exactly the same conditions as the other, which had a filling rate of about 10%, in order to compare the two. The standard methylene blue active substances method was used to measure the anionic surfactant. The linear alkylbenzene sulfonate concentrations examined were 50, 100, 200, 300 and 400 mg/l, at hydraulic retention times (HRT) of 72, 24 and 8 hrs. At the beginning of operation, at a pollutant concentration of 50 mg/l, the removal percentages of the two reactors differed little; as the pollutant concentration was gradually increased and the hydraulic retention time decreased, the difference in removal percentage became significant, with the reactor with the 30% filling rate performing better than the reactor with the 10% filling rate. The best condition in this experiment was reached at a hydraulic retention time of 72 hrs and a pollutant concentration of 200 mg/l, with 99.2% removal by the reactor with the 30% filling rate, while the best condition for the reactor with the 10% filling rate, at the same hydraulic retention time and a pollutant concentration of 100 mg/l, was about 99.4% removal. Given that the anionic surfactant standard in Iran is 1.5 mg/l for surface water discharge, this process is suitable for treating municipal and industrial wastewater with pollutant concentrations in the range of 100-200 mg/l. However, for industries producing detergent products whose wastewater contains more than 200 mg/l of surfactants, a secondary treatment process is required to achieve the discharge standard.

2. Applying non-linear dynamics to atrial appendage flow data to understand and characterize atrial arrhythmia

Chandra, S.; Grimm, R.A.; Katz, R.; Thomas, J.D.

1996-01-01

The aim of this study was to better understand and characterize left atrial appendage flow in atrial fibrillation. Atrial fibrillation and flutter are the most common cardiac arrhythmias, affecting 15% of the older population. Pulsed Doppler velocity profile data were recorded from the left atrial appendage of patients using transesophageal echocardiography. The data were analyzed using Fourier analysis and nonlinear dynamical tools. Fourier analysis showed that the appendage mechanical frequency (f_f) for patients in sinus rhythm was always lower (around 1 Hz) than that in atrial fibrillation (5-8 Hz). Among patients with atrial fibrillation, spectral power below f_f was significantly different, suggesting variability within this group of patients. Results that suggested the presence of nonlinear dynamics were: a) the existence of two arbitrary peak frequencies f_1 and f_2, with other peak frequencies appearing as linear combinations thereof (m·f_1 ± n·f_2), and b) the similarity between the spectrum of patient data and that obtained using the Lorenz equations. Nonlinear analysis tools, including phase plots and differential radial plots, were also generated from the velocity data using a delay of 10. In the phase plots, some patients displayed a torus-like structure, while others had a more random-like pattern. In the differential radial plots, the first set of patients (with torus-like phase plots) showed fewer values crossing an arbitrary threshold of 10 than did the second set (8 vs. 27 in one typical example). The outcome of cardioversion was different for these two sets of patients. Fourier analysis helped to differentiate between sinus rhythm and atrial fibrillation, understand the characteristics of the wide range of atrial fibrillation patients, and provide hints that atrial fibrillation could be a nonlinear process. Nonlinear dynamical tools helped to further characterize and sub-classify atrial fibrillation.

3. BFLCRM: A BAYESIAN FUNCTIONAL LINEAR COX REGRESSION MODEL FOR PREDICTING TIME TO CONVERSION TO ALZHEIMER'S DISEASE.

Lee, Eunjee; Zhu, Hongtu; Kong, Dehan; Wang, Yalin; Giovanello, Kelly Sullivan; Ibrahim, Joseph G

2015-12-01

The aim of this paper is to develop a Bayesian functional linear Cox regression model (BFLCRM) with both functional and scalar covariates. This new development is motivated by establishing the likelihood of conversion to Alzheimer's disease (AD) in 346 patients with mild cognitive impairment (MCI) enrolled in the Alzheimer's Disease Neuroimaging Initiative 1 (ADNI-1) and the early markers of conversion. These 346 MCI patients were followed over 48 months, with 161 MCI participants progressing to AD at 48 months. The functional linear Cox regression model was used to establish that functional covariates including hippocampus surface morphology and scalar covariates including brain MRI volumes, cognitive performance (ADAS-Cog), and APOE status can accurately predict time to onset of AD. Posterior computation proceeds via an efficient Markov chain Monte Carlo algorithm. A simulation study is performed to evaluate the finite sample performance of BFLCRM.

4. Robust distributed model predictive control of linear systems with structured time-varying uncertainties

Zhang, Langwen; Xie, Wei; Wang, Jingcheng

2017-11-01

In this work, synthesis of robust distributed model predictive control (MPC) is presented for a class of linear systems subject to structured time-varying uncertainties. By decomposing a global system into smaller dimensional subsystems, a set of distributed MPC controllers, instead of a centralised controller, are designed. To ensure the robust stability of the closed-loop system with respect to model uncertainties, distributed state feedback laws are obtained by solving a min-max optimisation problem. The design of robust distributed MPC is then transformed into solving a minimisation optimisation problem with linear matrix inequality constraints. An iterative online algorithm with adjustable maximum iteration is proposed to coordinate the distributed controllers to achieve a global performance. The simulation results show the effectiveness of the proposed robust distributed MPC algorithm.

5. Predictive IP controller for robust position control of linear servo system.

Lu, Shaowu; Zhou, Fengxing; Ma, Yajie; Tang, Xiaoqi

2016-07-01

Position control is a typical application of a linear servo system. In this paper, to reduce the system overshoot, an integral plus proportional (IP) controller is used in the position control implementation. To further improve the control performance, a gain-tuning IP controller based on a generalized predictive control (GPC) law is proposed. Firstly, to represent the dynamics of the position loop, a second-order linear model is used and its model parameters are estimated on-line using a recursive least squares method. Secondly, based on the GPC law, an optimal control sequence is obtained using a receding horizon, which then directly supplies the IP controller with the corresponding control parameters in real operation. Finally, simulation and experimental results are presented to show the efficiency of the proposed scheme. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
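The on-line parameter estimation step can be sketched with a standard recursive least squares update; the second-order model and its coefficients below are hypothetical, not those of the servo system in the paper:

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=1.0):
    """One recursive least squares update; returns updated (theta, P)."""
    K = P @ phi / (lam + phi @ P @ phi)      # gain vector
    theta = theta + K * (y - phi @ theta)    # correct estimate by prediction error
    P = (P - np.outer(K, phi @ P)) / lam     # shrink covariance
    return theta, P

# Identify y[k] = a1*y[k-1] + a2*y[k-2] + b*u[k-1]  (hypothetical 2nd-order loop)
a1, a2, b = 1.2, -0.5, 0.3
rng = np.random.default_rng(2)
u = rng.normal(size=300)
y = np.zeros(300)
for k in range(2, 300):
    y[k] = a1 * y[k-1] + a2 * y[k-2] + b * u[k-1]

theta = np.zeros(3)
P = np.eye(3) * 1e3                          # large initial covariance = weak prior
for k in range(2, 300):
    phi = np.array([y[k-1], y[k-2], u[k-1]])
    theta, P = rls_step(theta, P, phi, y[k])
```

With a forgetting factor lam below 1, the same update tracks slowly drifting parameters, which is what makes it suitable for on-line gain tuning.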

6. A simple method for HPLC retention time prediction: linear calibration using two reference substances.

Sun, Lei; Jin, Hong-Yu; Tian, Run-Tao; Wang, Ming-Juan; Liu, Li-Na; Ye, Liu-Ping; Zuo, Tian-Tian; Ma, Shuang-Cheng

2017-01-01

Analysis of related substances in pharmaceutical chemicals and of multiple components in traditional Chinese medicines requires a large number of reference substances to identify the chromatographic peaks accurately. But reference substances are costly. Thus, the relative retention (RR) method has been widely adopted in pharmacopoeias and the literature for characterizing the HPLC behavior of reference substances that are unavailable. The problem is that it is difficult to reproduce the RR on different columns, owing to the error between measured retention time (t_R) and predicted t_R in some cases. Therefore, it is useful to develop an alternative, simple method for predicting t_R accurately. In the present study, based on the thermodynamic theory of HPLC, a method named linear calibration using two reference substances (LCTRS) was proposed. The method includes three steps: two-point prediction, validation by multiple-point regression, and sequential matching. The t_R of compounds on an HPLC column can be calculated from standard retention times and a linear relationship. The method was validated on two medicines on 30 columns. It was demonstrated that the LCTRS method is simple, yet more accurate and more robust on different HPLC columns than the RR method. Hence quality standards using the LCTRS method are easy to reproduce in different laboratories, with a lower cost of reference substances.
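The two-point prediction step can be sketched as a straight-line map between columns. The retention times below are hypothetical, and the formula is an assumption about the simplest linear calibration consistent with the description, not necessarily the exact LCTRS equation:

```python
def lctrs_predict(t_ref_std, t_ref_new, t_std):
    """Two-point linear calibration between columns (illustrative sketch).

    t_ref_std: (t1, t2) retention times of the two reference substances on
               the standard column; t_ref_new: the same pair on the column
               in hand; t_std: the analyte's standard retention time.
    All values here are hypothetical, in minutes.
    """
    (s1, s2), (n1, n2) = t_ref_std, t_ref_new
    slope = (n2 - n1) / (s2 - s1)
    return n1 + slope * (t_std - s1)

# References elute at 5.0 and 20.0 min on the standard column,
# and at 6.0 and 24.0 min on the column in hand; analyte t_std = 10.0 min.
t_pred = lctrs_predict((5.0, 20.0), (6.0, 24.0), 10.0)   # -> 12.0 min
```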

7. Generating linear regression model to predict motor functions by use of laser range finder during TUG.

Adachi, Daiki; Nishiguchi, Shu; Fukutani, Naoto; Hotta, Takayuki; Tashiro, Yuto; Morino, Saori; Shirooka, Hidehiko; Nozaki, Yuma; Hirata, Hinako; Yamaguchi, Moe; Yorozu, Ayanori; Takahashi, Masaki; Aoyama, Tomoki

2017-05-01

The purpose of this study was to investigate which spatial and temporal parameters of the Timed Up and Go (TUG) test are associated with motor function in elderly individuals. The study included 99 community-dwelling women aged 72.9 ± 6.3 years. Step length, step width, single support time, variability of the aforementioned parameters, gait velocity, cadence, reaction time from starting signal to first step, and minimum distance between the foot and a marker placed in front of the chair were measured using our analysis system. The 10-m walk test, five times sit-to-stand (FTSTS) test, and one-leg standing (OLS) test were used to assess motor function. Stepwise multivariate linear regression analysis was used to determine which TUG test parameters were associated with each motor function test. Finally, we calculated a predictive model for each motor function test using each regression coefficient. In the stepwise linear regression analysis, step length and cadence were significantly associated with the 10-m walk test, FTSTS test and OLS test. Reaction time was associated with the FTSTS test, and step width was associated with the OLS test. Each predictive model showed a strong correlation with the 10-m walk test and OLS test. Moreover, the TUG test time, regarded as reflecting lower extremity function and mobility, had strong predictive ability for each motor function test. Copyright © 2017 The Japanese Orthopaedic Association. Published by Elsevier B.V. All rights reserved.

8. Prediction of protein interaction hot spots using rough set-based multiple criteria linear programming.

Chen, Ruoying; Zhang, Zhiwang; Wu, Di; Zhang, Peng; Zhang, Xinyang; Wang, Yong; Shi, Yong

2011-01-21

Protein-protein interactions are fundamentally important in many biological processes, and there is a pressing need to understand the principles of protein-protein interactions. Mutagenesis studies have found that only a small fraction of surface residues, known as hot spots, are responsible for the physical binding in protein complexes. However, revealing hot spots by mutagenesis experiments is usually time-consuming and expensive. In order to complement the experimental efforts, we propose a new computational approach in this paper to predict hot spots. Our method, Rough Set-based Multiple Criteria Linear Programming (RS-MCLP), integrates rough set theory and multiple criteria linear programming to choose dominant features and computationally predict hot spots. Our approach is benchmarked on a dataset of 904 alanine-mutated residues, and the results show that our RS-MCLP method performs better than other methods, e.g., MCLP, Decision Tree, Bayes Net, and the existing HotSprint database. In addition, we reveal several biological insights based on our analysis. We find that four features (the change of accessible surface area, percentage change of accessible surface area, size of a residue, and atomic contacts) are critical in predicting hot spots. Furthermore, by analyzing the distribution of amino acids, we find that three residues (Tyr, Trp, and Phe) are abundant in hot spots. Copyright © 2010 Elsevier Ltd. All rights reserved.

9. Improving sub-pixel imperviousness change prediction by ensembling heterogeneous non-linear regression models

Drzewiecki, Wojciech

2016-12-01

In this work, nine non-linear regression models were compared for sub-pixel impervious surface area mapping from Landsat images. The comparison was done in three study areas, both for the accuracy of imperviousness coverage evaluation at individual points in time and for the accuracy of imperviousness change assessment. The performance of individual machine learning algorithms (Cubist, Random Forest, stochastic gradient boosting of regression trees, k-nearest neighbors regression, random k-nearest neighbors regression, Multivariate Adaptive Regression Splines, averaged neural networks, and support vector machines with polynomial and radial kernels) was also compared with the performance of heterogeneous model ensembles constructed from the best models trained using particular techniques. The results proved that in the case of sub-pixel evaluation the most accurate prediction of change may not necessarily be based on the most accurate individual assessments. When single methods are considered, based on the obtained results the Cubist algorithm may be advised for Landsat-based mapping of imperviousness for single dates. However, Random Forest may be endorsed when the most reliable evaluation of imperviousness change is the primary goal: it gave lower accuracies for individual assessments, but better prediction of change, due to more correlated errors of the individual predictions. Heterogeneous model ensembles performed at least as well as the best individual models for individual time-point assessments. In the case of imperviousness change assessment, the ensembles always outperformed single-model approaches. This means that it is possible to improve the accuracy of sub-pixel imperviousness change assessment using ensembles of heterogeneous non-linear regression models.
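A minimal sketch of a heterogeneous regression ensemble in the spirit described above, averaging two hand-rolled base learners on synthetic data; the averaging rule, the base learners, and the data are illustrative assumptions, not the authors' ensemble construction:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(0, 1, size=200)
y = np.sin(2 * np.pi * x)                  # stand-in for an imperviousness fraction

train, test = x[:150], x[150:]
y_tr, y_te = y[:150], y[150:]

# Base model 1: ordinary least squares on [x^2, x, 1]
A = np.vander(train, 3)
coef, *_ = np.linalg.lstsq(A, y_tr, rcond=None)
pred_poly = np.vander(test, 3) @ coef

# Base model 2: k-nearest-neighbours regression (k = 5)
def knn_predict(x_tr, y_tr, x_new, k=5):
    idx = np.argsort(np.abs(x_tr[None, :] - x_new[:, None]), axis=1)[:, :k]
    return y_tr[idx].mean(axis=1)

pred_knn = knn_predict(train, y_tr, test)

# Heterogeneous ensemble: simple average of the two base model outputs
pred_ens = 0.5 * (pred_poly + pred_knn)

rmse = lambda p: float(np.sqrt(np.mean((p - y_te) ** 2)))
```

Even this naive average is guaranteed (by the triangle inequality) to beat the weaker base model, which illustrates why mixing dissimilar learners can stabilize change estimates.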

10. Predicting recycling behaviour: Comparison of a linear regression model and a fuzzy logic model.

Vesely, Stepan; Klöckner, Christian A; Dohnal, Mirko

2016-03-01

In this paper we demonstrate that fuzzy logic can provide a better tool for predicting recycling behaviour than the customarily used linear regression. To show this, we take a set of empirical data on recycling behaviour (N=664), which we randomly divide into two halves. The first half is used to estimate a linear regression model of recycling behaviour, and to develop a fuzzy logic model of recycling behaviour. As the first comparison, the fit of both models to the data included in estimation of the models (N=332) is evaluated. As the second comparison, predictive accuracy of both models for "new" cases (hold-out data not included in building the models, N=332) is assessed. In both cases, the fuzzy logic model significantly outperforms the regression model in terms of fit. To conclude, when accurate predictions of recycling and possibly other environmental behaviours are needed, fuzzy logic modelling seems to be a promising technique. Copyright © 2015 Elsevier Ltd. All rights reserved.

11. A comparison of random forest regression and multiple linear regression for prediction in neuroscience.

Smith, Paul F; Ganesh, Siva; Liu, Ping

2013-10-30

Regression is a common statistical tool for prediction in neuroscience. However, linear regression is by far the most common form of regression used, with regression trees receiving comparatively little attention. In this study, the results of conventional multiple linear regression (MLR) were compared with those of random forest regression (RFR) in predicting the concentrations of 9 neurochemicals in the vestibular nucleus complex and cerebellum that are part of the L-arginine biochemical pathway (agmatine, putrescine, spermidine, spermine, L-arginine, L-ornithine, L-citrulline, glutamate and γ-aminobutyric acid (GABA)). The R² values for the MLRs were higher than the proportion-of-variance-explained values for the RFRs: 6/9 of them were ≥ 0.70, compared to 4/9 for the RFRs. Even the variables that had the lowest R² values for the MLRs, e.g. ornithine (0.50) and glutamate (0.61), had much lower proportion-of-variance-explained values for the RFRs (0.27 and 0.49, respectively). The RSE values for the MLRs were lower than those for the RFRs in all but two cases. For this data set, MLR appeared to be superior to RFR in terms of explanatory value and error. This result suggests that MLR may have advantages over RFR for prediction in neuroscience with this kind of data set, but that RFR can still have good predictive value in some cases. Copyright © 2013 Elsevier B.V. All rights reserved.

12. A Non-linear Model for Predicting Tip Position of a Pliable Robot Arm Segment Using Bending Sensor Data

Elizabeth I. SKLAR

2016-04-01

Using pliable materials for the construction of robot bodies presents new and interesting challenges for the robotics community. Within the EU project STIFFness controllable Flexible & Learnable manipulator for surgical Operations (STIFF-FLOP), a bendable, segmented robot arm has been developed. The exterior of the arm is composed of a soft material (silicone) encasing an internal structure that contains air-chamber actuators and a variety of sensors for monitoring applied force, position and shape of the arm as it bends. Due to the physical characteristics of the arm, a proper model of robot kinematics and dynamics is difficult to infer from the sensor data. Here we propose a non-linear approach to predicting the robot arm posture, by training a feed-forward neural network with a structured series of pressure values applied to the arm's actuators. The model is developed across a set of seven different experiments. Because the STIFF-FLOP arm is intended for use in surgical procedures, traditional methods for position estimation (based on visual information or electromagnetic tracking) will not be possible to implement. Thus the ability to estimate pose based on data from a custom fiber-optic bending sensor and an accompanying model is a valuable contribution. Results are presented which demonstrate the utility of our non-linear modelling approach across a range of data collection procedures.

13. Towards Automated Binding Affinity Prediction Using an Iterative Linear Interaction Energy Approach

C. Ruben Vosmeer

2014-01-01

Binding affinity prediction of potential drugs to target and off-target proteins is an essential asset in drug development. These predictions require the calculation of binding free energies. In such calculations, it is a major challenge to properly account for both the dynamic nature of the protein and the possible variety of ligand-binding orientations, while keeping computational costs tractable. Recently, an iterative Linear Interaction Energy (LIE) approach was introduced, in which results from multiple simulations of a protein-ligand complex are combined into a single binding free energy using a Boltzmann-weighting-based scheme. This method was shown to reach experimental accuracy for flexible proteins while retaining the computational efficiency of the general LIE approach. Here, we show that the iterative LIE approach can be used to predict binding affinities in an automated way. A workflow was designed using preselected protein conformations, automated ligand docking and clustering, and a (semi-)automated molecular dynamics simulation setup. We show that using this workflow, binding affinities of aryloxypropanolamines to the malleable Cytochrome P450 2D6 enzyme can be predicted without a priori knowledge of dominant protein-ligand conformations. In addition, we provide an outlook for an approach to assess the quality of the LIE predictions, based on simulation outcomes only.
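One common Boltzmann-weighting scheme for combining per-simulation free energy estimates can be sketched as follows; the exact weighting used in the iterative LIE method may differ in detail, and the input values are hypothetical:

```python
import numpy as np

def boltzmann_combine(dG, T=298.15):
    """Combine per-simulation binding free energy estimates (kcal/mol)
    into one value via Boltzmann weighting: lower-energy simulations
    dominate the combined estimate (illustrative scheme)."""
    kT = 0.0019872041 * T                 # gas constant in kcal/(mol*K) times T
    dG = np.asarray(dG, dtype=float)
    return -kT * np.log(np.mean(np.exp(-dG / kT)))

# Three hypothetical per-conformation estimates for one ligand:
dG_combined = boltzmann_combine([-7.2, -6.5, -8.1])
```

By Jensen's inequality the combined value lies between the most favorable estimate and the plain arithmetic mean, pulled toward the former.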

14. The Dangers of Estimating V˙O2max Using Linear, Nonexercise Prediction Models.

Nevill, Alan M; Cooke, Carlton B

2017-05-01

This study aimed to compare the accuracy and goodness of fit of two competing models (linear vs allometric) when estimating V˙O2max (mL·kg⁻¹·min⁻¹) using nonexercise prediction models. The two competing models were fitted to the V˙O2max (mL·kg⁻¹·min⁻¹) data taken from two previously published studies. Study 1 (the Allied Dunbar National Fitness Survey) recruited 1732 randomly selected healthy participants, 16 yr and older, from 30 English parliamentary constituencies. Estimates of V˙O2max were obtained using a progressive incremental test on a motorized treadmill. In study 2, maximal oxygen uptake was measured directly during a fatigue-limited treadmill test in older men (n = 152) and women (n = 146) 55 to 86 yr old. In both studies, the quality of fit associated with estimating V˙O2max (mL·kg⁻¹·min⁻¹) was superior using allometric rather than linear (additive) models based on all criteria (R, maximum log-likelihood, and Akaike information criteria). Results suggest that linear models will systematically overestimate V˙O2max for participants in their 20s and underestimate V˙O2max for participants in their 60s and older. The residuals saved from the linear models were neither normally distributed nor independent of the predicted values or age. This probably explains the absence of a key quadratic age term in the linear models, crucially identified using allometric models. Not only does the curvilinear age decline within an exponential function follow a more realistic age decline (the right-hand side of a bell-shaped curve), but the allometric models identified either a stature-to-body mass ratio (study 1) or a fat-free mass-to-body mass ratio (study 2), both associated with leanness when estimating V˙O2max. Adopting allometric models will provide more accurate predictions of V˙O2max (mL·kg⁻¹·min⁻¹) using plausible, biologically sound, and interpretable models.
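The linear-vs-allometric comparison described in this record can be sketched numerically. The following Python example is purely illustrative (not the authors' code or data): synthetic V˙O2max values are generated from an invented power-law ground truth, then an additive linear model and an allometric (log-linear) model are fitted by least squares and compared by AIC computed from residuals on the original scale.

```python
import numpy as np

# Synthetic data from an assumed power-law model (coefficients invented)
rng = np.random.default_rng(0)
n = 200
age = rng.uniform(20, 80, n)
mass = rng.uniform(50, 100, n)
vo2 = 120 * mass**-0.33 * np.exp(-0.012 * age) * np.exp(rng.normal(0, 0.03, n))

def aic(rss, n, k):
    # Gaussian log-likelihood up to constants: n*log(rss/n) + 2k
    return n * np.log(rss / n) + 2 * k

# Linear (additive) model: vo2 ~ b0 + b1*age + b2*mass
X_lin = np.column_stack([np.ones(n), age, mass])
beta_lin, *_ = np.linalg.lstsq(X_lin, vo2, rcond=None)
resid_lin = vo2 - X_lin @ beta_lin

# Allometric model: log(vo2) ~ b0 + b1*age + b2*log(mass)
X_allo = np.column_stack([np.ones(n), age, np.log(mass)])
beta_allo, *_ = np.linalg.lstsq(X_allo, np.log(vo2), rcond=None)
resid_allo = vo2 - np.exp(X_allo @ beta_allo)

# Compare both fits on the same (original) scale
aic_lin = aic(np.sum(resid_lin**2), n, 3)
aic_allo = aic(np.sum(resid_allo**2), n, 3)
print(aic_lin, aic_allo)
```

Because the synthetic data are multiplicative in mass and age, the additive model misses the interaction and curvature, so the allometric fit yields the lower AIC, mirroring the study's qualitative finding.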

15. Automatic Offline Formulation of Robust Model Predictive Control Based on Linear Matrix Inequalities Method

Longge Zhang

2013-01-01

Full Text Available Two automatic robust model predictive control strategies are presented for uncertain polytopic linear plants with input and output constraints. A sequence of nested geometric proportion asymptotically stable ellipsoids and controllers is constructed offline first. Then the feedback controllers are automatically selected with the receding horizon online in the first strategy. Finally, a modified automatic offline robust MPC approach is constructed to improve the closed-loop system's performance. The newly proposed strategies not only reduce the conservatism but also decrease the online computation. Numerical examples are given to illustrate their effectiveness.

16. Effects of applied electromagnetic fields on the linear and nonlinear optical properties in an inverse parabolic quantum well

Ungan, F.; Yesilgul, U.; Kasapoglu, E.; Sari, H.; Sökmen, I.

2012-01-01

In the present work, we have theoretically investigated the effects of applied electric and magnetic fields on the linear and nonlinear optical properties in a GaAs/AlxGa1−xAs inverse parabolic quantum well for different Al concentrations at the well center. The Al concentration at the barriers was always xmax = 0.3. The energy levels and wave functions are calculated within the effective mass approximation and the envelope function approach. The analytical expressions of the optical properties are obtained by using the compact density-matrix approach. The linear, third-order nonlinear and total absorption and refractive index changes depending on the Al concentration at the well center are investigated as a function of the incident photon energy for different values of the applied electric and magnetic fields. The results show that the applied electric and magnetic fields have a great effect on these optical quantities. - Highlights: ► The xc concentration has a great effect on the optical characteristics of these structures. ► The EM fields have a great effect on the optical properties of these structures. ► The total absorption coefficients increase as the electric and magnetic fields increase. ► The RICs are reduced as the electric and magnetic fields increase.

17. Enhancement of Visual Field Predictions with Pointwise Exponential Regression (PER) and Pointwise Linear Regression (PLR).

Morales, Esteban; de Leon, John Mark S; Abdollahi, Niloufar; Yu, Fei; Nouri-Mahdavi, Kouros; Caprioli, Joseph

2016-03-01

The study was conducted to evaluate threshold smoothing algorithms to enhance prediction of the rates of visual field (VF) worsening in glaucoma. We studied 798 patients with primary open-angle glaucoma and 6 or more years of follow-up who underwent 8 or more VF examinations. Thresholds at each VF location for the first 4 years or the first half of the follow-up time (whichever was greater) were smoothed with clusters defined by the nearest neighbor (NN), Garway-Heath, Glaucoma Hemifield Test (GHT), and weighting by the correlation of rates at all other VF locations. Thresholds were regressed with a pointwise exponential regression (PER) model and a pointwise linear regression (PLR) model. Smaller root mean square error (RMSE) values of the differences between the observed and the predicted thresholds at the last two follow-ups indicated better model predictions. The mean (SD) follow-up times for the smoothing and prediction phases were 5.3 (1.5) and 10.5 (3.9) years. The mean RMSE values for the PER and PLR models were: unsmoothed data, 6.09 and 6.55; NN, 3.40 and 3.42; Garway-Heath, 3.47 and 3.48; GHT, 3.57 and 3.74; and correlation of rates, 3.59 and 3.64. Smoothed VF data predicted better than unsmoothed data. Nearest neighbor provided the best predictions; PER also predicted consistently more accurately than PLR. Smoothing algorithms should be used when forecasting VF results with PER or PLR. The application of smoothing algorithms to VF data can improve forecasting at VF points to assist in treatment decisions.
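The PER and PLR models named in this record can be sketched on a single VF location's threshold series. The example below is a hedged illustration (invented data, not the study's): PLR is an ordinary linear fit of threshold against time, while PER fits a linear model to log-thresholds, and both are scored by RMSE on the two held-out last visits.

```python
import numpy as np

# Made-up threshold series (dB) for one visual-field location over 5 years
years = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0])
dB = np.array([30.0, 29.0, 28.5, 27.0, 26.0, 24.5, 23.0, 21.5, 20.0, 18.5, 17.0])

# Hold out the last two follow-ups, as in the study's evaluation scheme
fit_t, hold_t = years[:-2], years[-2:]
fit_y, hold_y = dB[:-2], dB[-2:]

# PLR: threshold = a + b*t
b_lin = np.polyfit(fit_t, fit_y, 1)
pred_lin = np.polyval(b_lin, hold_t)

# PER: threshold = A * exp(B*t), fit as a linear model on log-thresholds
b_exp = np.polyfit(fit_t, np.log(fit_y), 1)
pred_exp = np.exp(np.polyval(b_exp, hold_t))

rmse = lambda p: float(np.sqrt(np.mean((p - hold_y) ** 2)))
print(rmse(pred_lin), rmse(pred_exp))
```

In the actual study these pointwise fits are applied per location after cluster-based smoothing; the sketch omits the smoothing step.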

18. Linear and nonlinear stability analysis in BWRs applying a reduced order model

Olvera G, O. A.; Espinosa P, G.; Prieto G, A., E-mail: omar_olverag@hotmail.com [Universidad Autonoma Metropolitana, Unidad Iztapalapa, San Rafael Atlixco No. 186, Col. Vicentina, 09340 Ciudad de Mexico (Mexico)

2016-09-15

Boiling Water Reactor (BWR) stability studies are generally conducted through nonlinear reduced order models (Rom) employing various techniques such as bifurcation analysis and time domain numerical integration. One of the models used for these studies is the March-Leuba Rom. Such a model represents qualitatively the dynamic behavior of a BWR through one-point reactor kinetics, a one-node representation of the heat transfer process in the fuel, and a two-node representation of the channel thermal hydraulics to account for the void reactivity feedback. Here, we study the effect of this higher order model on the overall stability of the BWR. The change in the stability boundaries is determined by evaluating the eigenvalues of the Jacobian matrix. The nonlinear model is also integrated numerically to show that in the nonlinear region the system evolves to stable limit cycles when operating close to the stability boundary. We also applied a new technique based on the Empirical Mode Decomposition (Emd) to estimate a parameter linked with stability in a BWR. This instability parameter is not exactly the classical Decay Ratio (Dr), but it is linked with it. The proposed method decomposes the analyzed signal into different levels or mono-component functions known as intrinsic mode functions (Imf). One or more of these modes can be associated with the instability problem in BWRs. By tracking the instantaneous frequencies (calculated through the Hilbert-Huang Transform (HHT)) and the autocorrelation function (Acf) of the Imf linked to instability, the proposed parameter can be estimated. The current methodology was validated with simulated signals of the studied model. (Author)

20. Linear Multivariable Regression Models for Prediction of Eddy Dissipation Rate from Available Meteorological Data

MCKissick, Burnell T. (Technical Monitor); Plassman, Gerald E.; Mall, Gerald H.; Quagliano, John R.

2005-01-01

Linear multivariable regression models for predicting day and night Eddy Dissipation Rate (EDR) from available meteorological data sources are defined and validated. Model definition is based on a combination of 1997-2000 Dallas/Fort Worth (DFW) data sources, EDR from Aircraft Vortex Spacing System (AVOSS) deployment data, and regression variables primarily from corresponding Automated Surface Observation System (ASOS) data. Model validation is accomplished through EDR predictions on a similar combination of 1994-1995 Memphis (MEM) AVOSS and ASOS data. Model forms include an intercept plus a single term of fixed optimal power for each of these regression variables: 30-minute forward-averaged mean and variance of near-surface wind speed and temperature, variance of wind direction, and a discrete cloud cover metric. Distinct day and night models, regressing on EDR and the natural log of EDR respectively, yield the best performance and avoid model discontinuity over day/night data boundaries.

1. Flexible non-linear predictive models for large-scale wind turbine diagnostics

Bach-Andersen, Martin; Rømer-Odgaard, Bo; Winther, Ole

2017-01-01

We demonstrate how flexible non-linear models can provide accurate and robust predictions on turbine component temperature sensor data using data-driven principles and only a minimum of system modeling. The merits of different model architectures are evaluated using data from a large set...... of turbines operating under diverse conditions. We then go on to test the predictive models in a diagnostic setting, where the output of the models are used to detect mechanical faults in rotor bearings. Using retrospective data from 22 actual rotor bearing failures, the fault detection performance...... of the models are quantified using a structured framework that provides the metrics required for evaluating the performance in a fleet wide monitoring setup. It is demonstrated that faults are identified with high accuracy up to 45 days before a warning from the hard-threshold warning system....

2. TBM performance prediction in Yucca Mountain welded tuff from linear cutter tests

Gertsch, R.; Ozdemir, L.; Gertsch, L.

1992-01-01

This paper discusses performance predictions which were developed for tunnel boring machines operating in welded tuff for the construction of the experimental study facility and the potential nuclear waste repository at Yucca Mountain. The predictions were based on test data obtained from an extensive series of linear cutting tests performed on samples of Topopah Spring welded tuff from the Yucca Mountain Project site. Using the cutter force, spacing, and penetration data from the experimental program, the thrust, torque, power, and rate of penetration were estimated for a 25-ft-diameter tunnel boring machine (TBM) operating in welded tuff. The results show that the Topopah Spring welded tuff (TSw2) can be excavated at relatively high rates of advance with state-of-the-art TBMs. The results also show, however, that the TBM torque and power requirements will be higher than estimated based on rock physical properties and past tunneling experience in rock formations of similar strength

3. Daily Suspended Sediment Discharge Prediction Using Multiple Linear Regression and Artificial Neural Network

Uca; Toriman, Ekhwan; Jaafar, Othman; Maru, Rosmini; Arfan, Amal; Saleh Ahmar, Ansari

2018-01-01

Prediction of suspended sediment discharge in a catchment area is very important because it can be used to evaluate the erosion hazard, to manage water resources, water quality, and hydrology projects (dams, reservoirs, and irrigation), and to determine the extent of damage in the catchment. Multiple linear regression analysis and artificial neural networks can be used to predict the amount of daily suspended sediment discharge. The regression analysis uses the least squares method, whereas the artificial neural networks use a Radial Basis Function (RBF) network and a feedforward multilayer perceptron with three learning algorithms, namely Levenberg-Marquardt (LM), Scaled Conjugate Gradient (SCG) and Broyden-Fletcher-Goldfarb-Shanno (BFGS) Quasi-Newton. The number of neurons in the hidden layer ranges from three to sixteen, while the output layer has a single neuron because there is only one output target. Among the multiple linear regression (MLRg) models, Model 2 (6 independent input variables) has the lowest mean absolute error (MAE) and root mean square error (RMSE) (0.0000002 and 13.6039) and the highest coefficient of determination (R2) and coefficient of efficiency (CE) (0.9971 and 0.9971). When compared with LM, SCG and RBF, the BFGS model with structure 3-7-1 is the more accurate for predicting suspended sediment discharge in the Jenderam catchment. Its performance values in the testing process are best: MAE and RMSE (13.5769 and 17.9011) are the smallest, while R2 and CE (0.9999 and 0.9998) are the highest compared with the other BFGS Quasi-Newton models (6-3-1, 9-10-1 and 12-12-1). Based on the performance statistics, MLRg, LM, SCG, BFGS and RBF are suitable and accurate for prediction by modeling the non-linear complex behavior of suspended sediment responses to rainfall, water depth and discharge. In the comparison between artificial neural networks (ANN) and MLRg, the MLRg Model 2 accurately predicts suspended sediment discharge (kg
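The four performance statistics quoted in this record (MAE, RMSE, R2, and the Nash-Sutcliffe coefficient of efficiency CE) can be computed as follows. This is a generic sketch on made-up observed/predicted discharge values, not the study's data.

```python
import math

obs = [12.0, 30.5, 7.2, 55.1, 21.3, 14.8]   # made-up observed discharges
pred = [11.4, 32.0, 8.0, 51.9, 22.5, 13.9]  # made-up model predictions
n = len(obs)
mo = sum(obs) / n
mp = sum(pred) / n

# Mean absolute error and root mean square error
mae = sum(abs(o - p) for o, p in zip(obs, pred)) / n
rmse = math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / n)

# Coefficient of efficiency (Nash-Sutcliffe): 1 - SS_res / SS_tot
ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
ss_tot = sum((o - mo) ** 2 for o in obs)
ce = 1.0 - ss_res / ss_tot

# R2 as the squared Pearson correlation between observed and predicted
cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
r2 = cov ** 2 / (ss_tot * sum((p - mp) ** 2 for p in pred))
print(mae, rmse, ce, r2)
```

Note that MAE is always less than or equal to RMSE, and CE approaches 1 as predictions approach the observations, which is why the study reports values near 0.999 for its best network.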

4. Genomic-Enabled Prediction Based on Molecular Markers and Pedigree Using the Bayesian Linear Regression Package in R

Paulino Pérez

2010-09-01

Full Text Available The availability of dense molecular markers has made possible the use of genomic selection in plant and animal breeding. However, models for genomic selection pose several computational and statistical challenges and require specialized computer programs, not always available to the end user and not yet implemented in standard statistical software. The R package BLR (Bayesian Linear Regression) implements several statistical procedures (e.g., Bayesian Ridge Regression, the Bayesian LASSO) in a unified framework that allows including marker genotypes and pedigree data jointly. This article describes the classes of models implemented in the BLR package and illustrates their use through examples. Some challenges faced when applying genomic-enabled selection, such as model choice, evaluation of predictive ability through cross-validation, and choice of hyper-parameters, are also addressed.

5. Current error vector based prediction control of the section winding permanent magnet linear synchronous motor

Hong Junjie, E-mail: hongjjie@mail.sysu.edu.cn [School of Engineering, Sun Yat-Sen University, Guangzhou 510006 (China); Li Liyi, E-mail: liliyi@hit.edu.cn [Dept. Electrical Engineering, Harbin Institute of Technology, Harbin 150000 (China); Zong Zhijian; Liu Zhongtu [School of Engineering, Sun Yat-Sen University, Guangzhou 510006 (China)

2011-10-15

Highlights: ► The structure of the permanent magnet linear synchronous motor (SW-PMLSM) is new. ► A new current control method, CEVPC, is employed in this motor. ► The sectional power supply method is different from the others and effective. ► The performance worsens under voltage and current limitations. - Abstract: To achieve features such as greater thrust density and higher efficiency without reducing thrust stability, this paper proposes a section winding permanent magnet linear synchronous motor (SW-PMLSM), whose iron core is continuous, whereas the winding is divided. The discrete system model of the motor is derived. With the definition of the current error vector and selection of the value function, the theory of current error vector based prediction control (CEVPC) for the motor currents is explained clearly. According to the winding section feature, the motion region of the mover is divided into five zones, in which the implementation of the current predictive control method is proposed. Finally, the experimental platform is constructed and experiments are carried out. The results show that the current control has good dynamic response, and the thrust on the mover remains essentially constant.

6. A review of model predictive control: moving from linear to nonlinear design methods

Nandong, J.; Samyudia, Y.; Tade, M.O.

2006-01-01

Linear model predictive control (LMPC) has now been considered as an industrial control standard in process industry. Its extension to nonlinear cases however has not yet gained wide acceptance due to many reasons, e.g. excessively heavy computational load and effort, thus, preventing its practical implementation in real-time control. The application of nonlinear MPC (NMPC) is advantageous for processes with strong nonlinearity or when the operating points are frequently moved from one set point to another due to, for instance, changes in market demands. Much effort has been dedicated towards improving the computational efficiency of NMPC as well as its stability analysis. This paper provides a review on alternative ways of extending linear MPC to the nonlinear one. We also highlight the critical issues pertinent to the applications of NMPC and discuss possible solutions to address these issues. In addition, we outline the future research trend in the area of model predictive control by emphasizing on the potential applications of multi-scale process model within NMPC

7. Using NCAP to predict RFI effects in linear bipolar integrated circuits

Fang, T.-F.; Whalen, J. J.; Chen, G. K. C.

1980-11-01

Applications of the Nonlinear Circuit Analysis Program (NCAP) to calculate RFI effects in electronic circuits containing discrete semiconductor devices have been reported upon previously. The objective of this paper is to demonstrate that the computer program NCAP can also be used to calculate RFI effects in linear bipolar integrated circuits (ICs). The ICs reported upon are the μA741 operational amplifier (op amp), which is one of the most widely used ICs, and a differential pair, which is a basic building block in many linear ICs. The μA741 op amp was used as the active component in a unity-gain buffer amplifier. The differential pair was used in a broad-band cascode amplifier circuit. The computer program NCAP was used to predict how amplitude-modulated RF signals are demodulated in the ICs to cause undesired low-frequency responses. The predicted and measured results for radio frequencies in the 0.050-60-MHz range are in good agreement.

8. Robust entry guidance using linear covariance-based model predictive control

Jianjun Luo

2017-02-01

Full Text Available For atmospheric entry vehicles, guidance design can be accomplished by solving an optimal control problem using optimal control theories. However, traditional design methods generally focus on the nominal performance and do not include considerations of robustness in the design process. This paper proposes a linear covariance-based model predictive control method for robust entry guidance design. First, linear covariance analysis is employed to directly incorporate the robustness into the guidance design. The closed-loop covariance with the feedback-updated control command is initially formulated to provide the expected errors of the nominal state variables in the presence of uncertainties. Then, the closed-loop covariance is used as a component of the cost function to guarantee robustness and reduce the sensitivity to uncertainties. After that, model predictive control is used to solve the optimal problem, and the control commands (bank angles) are calculated. Finally, a series of simulations for different missions were completed to demonstrate the high precision and the robustness with respect to initial perturbations as well as uncertainties in the entry process. The 3σ confidence region results in the presence of uncertainties show that the robustness of the guidance has been improved, and the errors of the state variables are decreased by approximately 35%.

9. Multiple linear regression models for predicting chronic aluminum toxicity to freshwater aquatic organisms and developing water quality guidelines.

DeForest, David K; Brix, Kevin V; Tear, Lucinda M; Adams, William J

2018-01-01

The bioavailability of aluminum (Al) to freshwater aquatic organisms varies as a function of several water chemistry parameters, including pH, dissolved organic carbon (DOC), and water hardness. We evaluated the ability of multiple linear regression (MLR) models to predict chronic Al toxicity to a green alga (Pseudokirchneriella subcapitata), a cladoceran (Ceriodaphnia dubia), and a fish (Pimephales promelas) as a function of varying DOC, pH, and hardness conditions. The MLR models predicted toxicity values that were within a factor of 2 of observed values in 100% of the cases for P. subcapitata (10 and 20% effective concentrations [EC10s and EC20s]), 91% of the cases for C. dubia (EC10s and EC20s), and 95% (EC10s) and 91% (EC20s) of the cases for P. promelas. The MLR models were then applied to all species with Al toxicity data to derive species and genus sensitivity distributions that could be adjusted as a function of varying DOC, pH, and hardness conditions (the P. subcapitata model was applied to algae and macrophytes, the C. dubia model was applied to invertebrates, and the P. promelas model was applied to fish). Hazardous concentrations for 5% of the species or genera were then derived in 2 ways: 1) fitting a log-normal distribution to species-mean EC10s for all species (following the European Union methodology), and 2) fitting a triangular distribution to genus-mean EC20s for animals only (following the US Environmental Protection Agency methodology). Overall, MLR-based models provide a viable approach for deriving Al water quality guidelines that vary as a function of DOC, pH, and hardness conditions and are a significant improvement over bioavailability corrections based on single parameters. Environ Toxicol Chem 2018;37:80-90. © 2017 SETAC.
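An MLR bioavailability model of the general form used in this record can be sketched as follows. The coefficients, data, and the regression of ln(EC10) on ln(DOC), ln(hardness), and pH are all invented for illustration; they are not the paper's fitted models. The sketch also shows the "within a factor of 2" check on the concentration scale.

```python
import numpy as np

# Invented synthetic chemistry/toxicity data (not the study's dataset)
rng = np.random.default_rng(1)
n = 60
doc = rng.uniform(0.5, 10, n)        # dissolved organic carbon, mg/L
hardness = rng.uniform(10, 400, n)   # mg/L as CaCO3
ph = rng.uniform(6.0, 8.5, n)
ln_ec10 = (1.5 + 0.8 * np.log(doc) + 0.4 * np.log(hardness)
           + 0.3 * ph + rng.normal(0, 0.2, n))

# MLR fit of ln(EC10) on ln(DOC), ln(hardness), pH
X = np.column_stack([np.ones(n), np.log(doc), np.log(hardness), ph])
beta, *_ = np.linalg.lstsq(X, ln_ec10, rcond=None)
pred = X @ beta

# "Within a factor of 2" check on the original concentration scale
ratio = np.exp(np.abs(pred - ln_ec10))
frac_within_2x = float(np.mean(ratio <= 2.0))
print(beta, frac_within_2x)
```

Working on the log scale is what lets a multiplicative "factor of 2" criterion become a simple absolute-residual threshold (ln 2 ≈ 0.693).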

10. Multi input single output model predictive control of non-linear bio-polymerization process

Arumugasamy, Senthil Kumar; Ahmad, Z. [School of Chemical Engineering, Universiti Sains Malaysia, Engineering Campus, Seri Ampangan, 14300 Nibong Tebal, Seberang Perai Selatan, Pulau Pinang (Malaysia)

2015-05-15

This paper focuses on Multi Input Single Output (MISO) Model Predictive Control of a bio-polymerization process, in which a mechanistic model is developed and linked with a feedforward neural network model to obtain a hybrid model (Mechanistic-FANN) of lipase-catalyzed ring-opening polymerization of ε-caprolactone (ε-CL) for Poly(ε-caprolactone) production. In this research, a state space model was used, in which the inputs to the model were the reactor temperatures and reactor impeller speeds and the outputs were the molecular weight of the polymer (Mn) and the polymer polydispersity index. The state space model for MISO was created using the System Identification Toolbox of Matlab™ and is used in the MISO MPC. Model predictive control (MPC) has been applied to predict the molecular weight of the biopolymer and consequently control it. The results show that MPC is able to track the reference trajectory and gives optimum movement of the manipulated variable.

11. COMPARATIVE STUDY OF THREE LINEAR SYSTEM SOLVER APPLIED TO FAST DECOUPLED LOAD FLOW METHOD FOR CONTINGENCY ANALYSIS

Syafii

2017-03-01

Full Text Available This paper presents an assessment of fast decoupled load flow computation using three linear system solver schemes. The full matrix version of the fast decoupled load flow based on XB methods is used in this study. The numerical investigations are carried out on small and large test systems. The execution times for small systems such as IEEE 14, 30, and 57 are very short, so the computation times cannot be compared for these cases. The other cases, IEEE 118, IEEE 300 and TNB 664, produced significant execution speedups. The superLU factorization sparse matrix solver has the best performance and speedup for the load flow solution as well as in contingency analysis. The inverse full matrix solver could solve only the IEEE 118 bus test system, in 3.715 seconds, and took too long for the other cases. The superLU factorization linear solver, however, solved all of the test systems, requiring 7.832 seconds for the largest. Therefore the superLU factorization linear solver is a viable alternative for contingency analysis.

12. Predicting Fuel Ignition Quality Using 1H NMR Spectroscopy and Multiple Linear Regression

Abdul Jameel, Abdul Gani

2016-09-14

An improved model for the prediction of ignition quality of hydrocarbon fuels has been developed using 1H nuclear magnetic resonance (NMR) spectroscopy and multiple linear regression (MLR) modeling. Cetane number (CN) and derived cetane number (DCN) of 71 pure hydrocarbons and 54 hydrocarbon blends were utilized as a data set to study the relationship between ignition quality and molecular structure. CN and DCN are functional equivalents and collectively referred to as D/CN, herein. The effect of molecular weight and weight percent of structural parameters such as paraffinic CH3 groups, paraffinic CH2 groups, paraffinic CH groups, olefinic CH–CH2 groups, naphthenic CH–CH2 groups, and aromatic C–CH groups on D/CN was studied. A particular emphasis on the effect of branching (i.e., methyl substitution) on the D/CN was studied, and a new parameter denoted as the branching index (BI) was introduced to quantify this effect. A new formula was developed to calculate the BI of hydrocarbon fuels using 1H NMR spectroscopy. Multiple linear regression (MLR) modeling was used to develop an empirical relationship between D/CN and the eight structural parameters. This was then used to predict the DCN of many hydrocarbon fuels. The developed model has a high correlation coefficient (R2 = 0.97) and was validated with experimentally measured DCN of twenty-two real fuel mixtures (e.g., gasolines and diesels) and fifty-nine blends of known composition, and the predicted values matched well with the experimental data.

13. A Riccati Based Homogeneous and Self-Dual Interior-Point Method for Linear Economic Model Predictive Control

Sokoler, Leo Emil; Frison, Gianluca; Edlund, Kristian

2013-01-01

In this paper, we develop an efficient interior-point method (IPM) for the linear programs arising in economic model predictive control of linear systems. The novelty of our algorithm is that it combines a homogeneous and self-dual model, and a specialized Riccati iteration procedure. We test...

14. Performance Prediction Modelling for Flexible Pavement on Low Volume Roads Using Multiple Linear Regression Analysis

C. Makendran

2015-01-01

15. Prediction of Depression in Cancer Patients With Different Classification Criteria, Linear Discriminant Analysis versus Logistic Regression.

Shayan, Zahra; Mohammad Gholi Mezerji, Naser; Shayan, Leila; Naseri, Parisa

2015-11-03

Logistic regression (LR) and linear discriminant analysis (LDA) are two popular statistical models for prediction of group membership. Although they are very similar, LDA makes more assumptions about the data. When categorical and continuous variables are used simultaneously, the optimal choice between the two models is questionable. In most studies, classification error (CE) is used to discriminate between subjects in several groups, but this index is not suitable to predict the accuracy of the outcome. The present study compared LR and LDA models using classification indices. This cross-sectional study selected 243 cancer patients. Sample sets of different sizes (n = 50, 100, 150, 200, 220) were randomly selected and the CE, B, and Q classification indices were calculated by the LR and LDA models. CE revealed a lack of superiority for one model over the other, but the results showed that LR performed better than LDA for the B and Q indices in all situations. No significant effect of sample size on CE was noted in selecting an optimal model. Assessment of the accuracy of prediction on real data indicated that the B and Q indices are appropriate for selection of an optimal model. The results of this study showed that LR performs better in some cases and LDA in others when based on CE. The CE index is not appropriate for classification, whereas the B and Q indices performed better and offered more efficient criteria for comparison and discrimination between groups.
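The LR-vs-LDA comparison can be illustrated with a small numeric experiment. The sketch below is hedged (invented two-class Gaussian data, not the study's 243-patient dataset): LDA uses a pooled-covariance linear discriminant, logistic regression is fitted by plain batch gradient descent, and CE is the fraction misclassified.

```python
import numpy as np

# Invented two-class Gaussian data with overlapping classes
rng = np.random.default_rng(2)
n = 200
X0 = rng.normal([0, 0], 1.0, (n, 2))
X1 = rng.normal([2, 2], 1.0, (n, 2))
X = np.vstack([X0, X1])
y = np.r_[np.zeros(n), np.ones(n)]

# LDA: linear discriminant from class means and pooled covariance
m0, m1 = X0.mean(0), X1.mean(0)
S = (np.cov(X0.T) + np.cov(X1.T)) / 2
w = np.linalg.solve(S, m1 - m0)
c = w @ (m0 + m1) / 2
pred_lda = (X @ w > c).astype(float)

# Logistic regression by batch gradient ascent on the log-likelihood
Xb = np.column_stack([np.ones(2 * n), X])
beta = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-Xb @ beta))
    beta += 0.1 * Xb.T @ (y - p) / (2 * n)
pred_lr = (Xb @ beta > 0).astype(float)

# Classification error: fraction of misclassified cases
ce_lda = float(np.mean(pred_lda != y))
ce_lr = float(np.mean(pred_lr != y))
print(ce_lda, ce_lr)
```

On Gaussian classes with a shared covariance, LDA's assumptions hold exactly, so the two classifiers land on nearly the same boundary and nearly the same CE, consistent with the study's finding that CE alone does not separate the two models.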

16. A hybrid genetic algorithm and linear regression for prediction of NOx emission in power generation plant

Bunyamin, Muhammad Afif; Yap, Keem Siah; Aziz, Nur Liyana Afiqah Abdul; Tiong, Sheih Kiong; Wong, Shen Yuong; Kamal, Md Fauzan

2013-01-01

This paper presents a new approach to gas emission estimation in a power generation plant using a hybrid Genetic Algorithm (GA) and Linear Regression (LR) (denoted GA-LR). LR is one of the approaches that model the relationship between an output dependent variable, y, and one or more explanatory variables or inputs, denoted x. It is able to estimate unknown model parameters from input data. On the other hand, GA searches for an optimal solution until specific termination criteria are met; it can provide multiple good solutions, rather than a single optimal solution, for complex problems, and is therefore widely used for feature selection. By combining LR and GA (GA-LR), this new technique is able to select the most important input features as well as give more accurate predictions by minimizing the prediction errors. The technique produces more consistent gas emission estimates, which may help in reducing pollution of the environment. In this paper, the study's interest is focused on nitrogen oxides (NOx) prediction. The results of the experiment are encouraging.

17. Stability theory and transition prediction applied to a general aviation fuselage

Spall, R. E.; Wie, Y.-S.

1993-01-01

The linear stability of a fully three-dimensional boundary layer formed over a general aviation fuselage was investigated. The location of the onset of transition was estimated using the N-factor method. The results were compared with existing experimental data and indicate N-factors of approximately 8.5 on the side of the fuselage and 3.0 near the top. Considerable crossflow existed along the side of the body, which significantly affected the unstable modes present in the boundary layer. Fair agreement was found between the predicted frequency range of linear instability modes and available experimental data concerning the spectral content of the boundary layer.
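
The N-factor method mentioned above can be sketched as follows: integrate the streamwise modal growth rate to obtain N(x) = ln(A/A0), and flag transition where N reaches a calibrated threshold (the study quotes N ≈ 8.5 on the fuselage side). The growth-rate curve here is invented for illustration.

```python
def n_factor(xs, growth):
    """Accumulate N(x) by trapezoidal integration of the spatial growth rate -alpha_i(x)."""
    n = [0.0]
    for i in range(1, len(xs)):
        n.append(n[-1] + 0.5 * (growth[i] + growth[i - 1]) * (xs[i] - xs[i - 1]))
    return n

xs = [0.1 * i for i in range(51)]                       # streamwise stations (arbitrary units)
growth = [max(0.0, 8.0 * x * (2.0 - x)) for x in xs]    # hypothetical -alpha_i(x)
N = n_factor(xs, growth)
transition_x = next((x for x, n in zip(xs, N) if n >= 8.5), None)
print(transition_x)
```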

18. Geographical identification of saffron (Crocus sativus L.) by linear discriminant analysis applied to the UV-visible spectra of aqueous extracts.

D'Archivio, Angelo Antonio; Maggi, Maria Anna

2017-03-15

We attempted geographical classification of saffron using UV-visible spectroscopy, conventionally adopted for quality grading according to the ISO 3632 standard. We investigated 81 saffron samples produced in L'Aquila, Città della Pieve, Cascia, and Sardinia (Italy) and commercial products purchased in various supermarkets. Exploratory principal component analysis applied to the UV-vis spectra of saffron aqueous extracts revealed a clear differentiation of the samples belonging to different quality categories, but a poor separation according to the geographical origin of the spices. On the other hand, linear discriminant analysis based on 8 selected absorbance values, concentrated near 279, 305 and 328 nm, allowed a good distinction of the spices coming from different sites. Under severe validation conditions (30% and 50% of saffron samples in the evaluation set), correct predictions were 85% and 83%, respectively. Copyright © 2016 Elsevier Ltd. All rights reserved.

19. Comparing artificial neural networks, general linear models and support vector machines in building predictive models for small interfering RNAs.

Kyle A McQuisten

2009-10-01

Exogenous short interfering RNAs (siRNAs) induce a gene knockdown effect in cells by interacting with naturally occurring RNA processing machinery. However, not all siRNAs induce this effect equally. Several heterogeneous kinds of machine learning techniques and feature sets have been applied to modeling siRNAs and their abilities to induce knockdown. There is some growing agreement as to which techniques produce maximally predictive models, and yet there is little consensus on methods to compare among predictive models. Also, there are few comparative studies that address the effect that the choice of learning technique, feature set or cross-validation approach has on finding and discriminating among predictive models. Three learning techniques were used to develop predictive models for effective siRNA sequences: Artificial Neural Networks (ANNs), General Linear Models (GLMs) and Support Vector Machines (SVMs). Five feature mapping methods were also used to generate models of siRNA activities. The 2 factors of learning technique and feature mapping were evaluated by complete 3x5 factorial ANOVA. Overall, both learning technique and feature mapping contributed significantly to the observed variance in predictive models, but to differing degrees for precision and accuracy as well as across different kinds and levels of model cross-validation. The methods presented here provide a robust statistical framework to compare among models developed under distinct learning techniques and feature sets for siRNAs. Further comparisons among current or future modeling approaches should apply these or other suitable statistically equivalent methods to critically evaluate the performance of proposed models. ANN and GLM techniques tend to be more sensitive to the inclusion of noisy features, but the SVM technique is more robust under large numbers of features for measures of model precision and accuracy. Features found to result in maximally predictive models are…

20. Predicting oropharyngeal tumor volume throughout the course of radiation therapy from pretreatment computed tomography data using general linear models.

2014-05-01

The purpose of this work was to develop and evaluate the accuracy of several predictive models of variation in tumor volume throughout the course of radiation therapy. Nineteen patients with oropharyngeal cancers were imaged daily with CT-on-rails for image-guided alignment per an institutional protocol. The daily volumes of 35 tumors in these 19 patients were determined and used to generate (1) a linear model in which tumor volume changed at a constant rate, (2) a general linear model that utilized the power fit relationship between the daily and initial tumor volumes, and (3) a functional general linear model that identified and exploited the primary modes of variation between time series describing the changing tumor volumes. Primary and nodal tumor volumes were examined separately. The accuracy of these models in predicting daily tumor volumes was compared with that of static and linear reference models using leave-one-out cross-validation. In predicting the daily volume of primary tumors, the general linear model and the functional general linear model were more accurate than the static reference model by 9.9% (range: -11.6% to 23.8%) and 14.6% (range: -7.3% to 27.5%), respectively, and were more accurate than the linear reference model by 14.2% (range: -6.8% to 40.3%) and 13.1% (range: -1.5% to 52.5%), respectively. In predicting the daily volume of nodal tumors, only the 14.4% (range: -11.1% to 20.5%) improvement in accuracy of the functional general linear model compared to the static reference model was statistically significant. A general linear model and a functional general linear model trained on data from a small population of patients can predict the primary tumor volume throughout the course of radiation therapy with greater accuracy than standard reference models. These more accurate models may increase the prognostic value of information about the tumor garnered from pretreatment computed tomography images and facilitate improved treatment management.

1. Comparison of height-diameter models based on geographically weighted regressions and linear mixed modelling applied to large scale forest inventory data

Quirós Segovia, M.; Condés Ruiz, S.; Drápela, K.

2016-07-01

Aim of the study: The main objective of this study was to test Geographically Weighted Regression (GWR) for developing height-diameter curves for forests on a large scale and to compare it with Linear Mixed Models (LMM). Area of study: Monospecific stands of Pinus halepensis Mill. located in the region of Murcia (Southeast Spain). Materials and Methods: The dataset consisted of 230 sample plots (2582 trees) from the Third Spanish National Forest Inventory (SNFI) randomly split into training data (152 plots) and validation data (78 plots). Two different methodologies were used for modelling local (Petterson) and generalized height-diameter relationships (Cañadas I): GWR, with different bandwidths, and linear mixed models. Finally, the quality of the estimated models was compared through statistical analysis. Main results: In general, both LMM and GWR provide better prediction capability when applied to a generalized height-diameter function than when applied to a local one, with R2 values increasing from around 0.6 to 0.7 in the model validation. Bias and RMSE were also lower for the generalized function. However, error analysis showed that there were no large differences between these two methodologies, evidencing that GWR provides results which are as good as the more frequently used LMM methodology, at least when no additional measurements are available for calibrating. Research highlights: GWR is a type of spatial analysis for exploring spatially heterogeneous processes. GWR can model spatial variation in the tree height-diameter relationship and its regression quality is comparable to LMM. The advantage of GWR over LMM is the possibility to determine the spatial location of every parameter without additional measurements. Abbreviations: GWR (Geographically Weighted Regression); LMM (Linear Mixed Model); SNFI (Spanish National Forest Inventory). (Author)
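
A bare-bones GWR-style local fit (illustrative only, with invented data): a height-diameter line is fitted by weighted least squares, with Gaussian kernel weights that decay with the distance between each plot and the target location, so the fitted slope can vary over space.

```python
import math

def gwr_fit(coords, diam, height, target, bandwidth):
    """One local weighted least-squares line height = a + b * diameter."""
    w = [math.exp(-((cx - target[0]) ** 2 + (cy - target[1]) ** 2)
                  / (2.0 * bandwidth ** 2)) for cx, cy in coords]
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, diam)) / sw
    my = sum(wi * yi for wi, yi in zip(w, height)) / sw
    sxx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, diam))
    sxy = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, diam, height))
    b = sxy / sxx
    return my - b * mx, b

# Two hypothetical plot clusters with different local height-diameter slopes.
coords = [(0, 0), (0, 1), (1, 0), (1, 1), (10, 10), (10, 11), (11, 10), (11, 11)]
diam   = [10, 20, 30, 40, 10, 20, 30, 40]
height = [20, 40, 60, 80, 5, 10, 15, 20]     # slope 2 near the origin, 0.5 far away
a0, b0 = gwr_fit(coords, diam, height, (0.5, 0.5), 2.0)
a1, b1 = gwr_fit(coords, diam, height, (10.5, 10.5), 2.0)
print(b0, b1)   # each local fit recovers its own cluster's slope
```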

2. A State-Space Approach to Optimal Level-Crossing Prediction for Linear Gaussian Processes

Martin, Rodney Alexander

2009-01-01

In many complex engineered systems, the ability to give an alarm prior to impending critical events is of great importance. These critical events may have varying degrees of severity, and in fact they may occur during normal system operation. In this article, we investigate approximations to theoretically optimal methods of designing alarm systems for the prediction of level-crossings by a zero-mean stationary linear dynamic system driven by Gaussian noise. An optimal alarm system is designed to elicit the fewest false alarms for a fixed detection probability. This work introduces the use of Kalman filtering in tandem with the optimal level-crossing problem. It is shown that there is a negligible loss in overall accuracy when using approximations to the theoretically optimal predictor, at the advantage of greatly reduced computational complexity.
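
An illustrative fragment (not the paper's optimal alarm design): given a Gaussian k-step-ahead prediction with mean mu and standard deviation sigma, as a Kalman filter would supply, a simple alarm rule thresholds the predicted probability of crossing the critical level; the threshold p_crit trades false alarms against detection probability.

```python
from statistics import NormalDist

def exceed_probability(mu, sigma, level):
    """P(X > level) for the Gaussian prediction X ~ N(mu, sigma^2)."""
    return 1.0 - NormalDist(mu, sigma).cdf(level)

def alarm(mu, sigma, level, p_crit=0.5):
    """Alarm iff the predicted crossing probability reaches p_crit; raising
    p_crit reduces false alarms at the cost of detection probability."""
    return exceed_probability(mu, sigma, level) >= p_crit

print(exceed_probability(0.0, 1.0, 2.0))    # ~0.0228 for a 2-sigma level
print(alarm(1.8, 0.5, 2.0, p_crit=0.3))
```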

3. Real-time detection of musical onsets with linear prediction and sinusoidal modeling

Glover, John; Lazzarini, Victor; Timoney, Joseph

2011-12-01

Real-time musical note onset detection plays a vital role in many audio analysis processes, such as score following, beat detection and various sound synthesis by analysis methods. This article provides a review of some of the most commonly used techniques for real-time onset detection. We suggest ways to improve these techniques by incorporating linear prediction as well as presenting a novel algorithm for real-time onset detection using sinusoidal modelling. We provide comprehensive results for both the detection accuracy and the computational performance of all of the described techniques, evaluated using Modal, our new open source library for musical onset detection, which comes with a free database of samples with hand-labelled note onsets.
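
To make the linear-prediction ingredient concrete, here is a self-contained sketch (not the Modal library): LP coefficients are fitted on one frame via the Levinson-Durbin recursion, and a jump in the LP residual energy on a later frame signals a likely onset. The test signal is synthetic.

```python
import math

def autocorr(frame, lag):
    return sum(frame[i] * frame[i - lag] for i in range(lag, len(frame)))

def lpc(frame, order):
    """Levinson-Durbin recursion: LP coefficients from frame autocorrelations."""
    r = [autocorr(frame, k) for k in range(order + 1)]
    a, e = [0.0] * order, r[0]
    for i in range(order):
        acc = r[i + 1] - sum(a[j] * r[i - j] for j in range(i))
        k = acc / e
        new_a = a[:]
        new_a[i] = k
        for j in range(i):
            new_a[j] = a[j] - k * a[i - 1 - j]
        a, e = new_a, e * (1.0 - k * k)
    return a

def residual_energy(frame, a):
    """Energy of the LP prediction error x[n] - sum_j a[j] * x[n-1-j]."""
    p = len(a)
    return sum((frame[n] - sum(a[j] * frame[n - 1 - j] for j in range(p))) ** 2
               for n in range(p, len(frame)))

w1, w2 = 0.3, 0.9                                   # two synthetic "notes"
note1 = [math.cos(w1 * n) for n in range(200)]
note2 = [math.cos(w2 * n) for n in range(200)]
a = lpc(note1, 2)                                   # model fitted on the first note
e_same, e_onset = residual_energy(note1, a), residual_energy(note2, a)
print(e_same, e_onset)                              # the residual jumps at the new note
```

A sinusoid satisfies an exact order-2 recursion, so the model fitted on the first note predicts it almost perfectly, while the frequency change at the second note produces a large residual spike, the onset cue.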

4. Ensemble Linear Neighborhood Propagation for Predicting Subchloroplast Localization of Multi-Location Proteins.

Wan, Shibiao; Mak, Man-Wai; Kung, Sun-Yuan

2016-12-02

In the postgenomic era, the number of unreviewed protein sequences is remarkably larger and grows tremendously faster than that of reviewed ones. However, existing methods for protein subchloroplast localization often ignore the information from these unlabeled proteins. This paper proposes a multi-label predictor based on ensemble linear neighborhood propagation (LNP), namely, LNP-Chlo, which leverages hybrid sequence-based feature information from both labeled and unlabeled proteins for predicting localization of both single- and multi-label chloroplast proteins. Experimental results on a stringent benchmark dataset and a novel independent dataset suggest that LNP-Chlo performs at least 6% (absolute) better than state-of-the-art predictors. This paper also demonstrates that ensemble LNP significantly outperforms LNP based on individual features. For readers' convenience, the online Web server LNP-Chlo is freely available at http://bioinfo.eie.polyu.edu.hk/LNPChloServer/.

5. Real time implementation of a linear predictive coding algorithm on digital signal processor DSP32C

Sheikh, N.M.; Usman, S.R.; Fatima, S.

2002-01-01

Pulse Code Modulation (PCM) has been widely used in speech coding. However, due to its high bit rate, PCM has severe limitations in applications where high spectral efficiency is desired, for example, in mobile communication, CD-quality broadcasting systems, etc. These limitations have motivated research in bit rate reduction techniques. Linear predictive coding (LPC) is one of the most powerful, if complex, techniques for bit rate reduction. With the introduction of powerful digital signal processors (DSPs) it is possible to implement the complex LPC algorithm in real time. In this paper we present a real time implementation of the LPC algorithm on AT&T's DSP32C at a sampling frequency of 8192 Hz. Application of the LPC algorithm to two speech signals is discussed. Using this implementation, a bit rate reduction of 1:3 is achieved for better than toll quality speech, while a reduction of 1:16 is possible for speech quality required in military applications. (author)

6. Analysis of infant cry through weighted linear prediction cepstral coefficients and Probabilistic Neural Network.

Hariharan, M; Chee, Lim Sin; Yaacob, Sazali

2012-06-01

Acoustic analysis of infant cry signals has been proven to be an excellent tool in the area of automatic detection of the pathological status of an infant. This paper investigates the application of parameter weighting for linear prediction cepstral coefficients (LPCCs) to provide a robust representation of infant cry signals. Three classes of infant cry signals were considered: normal cry signals, cry signals from deaf babies, and cries of babies with asphyxia. A Probabilistic Neural Network (PNN) is suggested to classify the infant cry signals into normal and pathological cries. The PNN is trained with different spread factors, or smoothing parameters, to obtain better classification accuracy. The experimental results demonstrate that the suggested features and classification algorithms give a very promising classification accuracy of above 98%, indicating that the suggested method can be used to help medical professionals diagnose the pathological status of an infant from cry signals.
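
A minimal PNN (Parzen-window) classifier sketch under invented 2-D "cry features" (the real system uses weighted LPCC vectors): each training pattern contributes a Gaussian kernel, the class score is the average kernel response, and the spread parameter plays the smoothing role described above.

```python
import math

def pnn_classify(train, x, spread=0.5):
    """Probabilistic Neural Network: one Gaussian kernel per training pattern;
    class score = mean kernel response; 'spread' is the smoothing parameter."""
    scores = {}
    for label, patterns in train.items():
        s = 0.0
        for p in patterns:
            d2 = sum((a - b) ** 2 for a, b in zip(p, x))
            s += math.exp(-d2 / (2.0 * spread ** 2))
        scores[label] = s / len(patterns)
    return max(scores, key=scores.get)

# Hypothetical 2-D feature vectors standing in for cry-signal features.
train = {
    "normal":    [(0.0, 0.0), (0.2, 0.1), (-0.1, 0.3)],
    "pathology": [(2.0, 2.0), (1.8, 2.2), (2.1, 1.7)],
}
print(pnn_classify(train, (0.1, 0.2)))    # near the "normal" cluster
print(pnn_classify(train, (1.9, 2.0)))    # near the "pathology" cluster
```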

7. The Spike-and-Slab Lasso Generalized Linear Models for Prediction and Associated Genes Detection.

Tang, Zaixiang; Shen, Yueping; Zhang, Xinyan; Yi, Nengjun

2017-01-01

Large-scale "omics" data have been increasingly used as an important resource for prognostic prediction of diseases and detection of associated genes. However, there are considerable challenges in analyzing high-dimensional molecular data, including the large number of potential molecular predictors, limited number of samples, and small effect of each predictor. We propose new Bayesian hierarchical generalized linear models, called spike-and-slab lasso GLMs, for prognostic prediction and detection of associated genes using large-scale molecular data. The proposed model employs a spike-and-slab mixture double-exponential prior for coefficients that can induce weak shrinkage on large coefficients, and strong shrinkage on irrelevant coefficients. We have developed a fast and stable algorithm to fit large-scale hierarchical GLMs by incorporating expectation-maximization (EM) steps into the fast cyclic coordinate descent algorithm. The proposed approach integrates nice features of two popular methods, i.e., penalized lasso and Bayesian spike-and-slab variable selection. The performance of the proposed method is assessed via extensive simulation studies. The results show that the proposed approach can provide not only more accurate estimates of the parameters, but also better prediction. We demonstrate the proposed procedure on two cancer data sets: a well-known breast cancer data set consisting of 295 tumors, and expression data of 4919 genes; and the ovarian cancer data set from TCGA with 362 tumors, and expression data of 5336 genes. Our analyses show that the proposed procedure can generate powerful models for predicting outcomes and detecting associated genes. The methods have been implemented in a freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/). Copyright © 2017 by the Genetics Society of America.
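
The coordinate-descent-plus-shrinkage machinery can be illustrated with plain lasso (a building block of the spike-and-slab lasso, not the BhGLM implementation): each coefficient is updated by soft-thresholding its partial-residual correlation, which shrinks irrelevant coefficients exactly to zero.

```python
def soft_threshold(z, g):
    """The lasso shrinkage operator S(z, g)."""
    if z > g:
        return z - g
    if z < -g:
        return z + g
    return 0.0

def lasso_cd(X, y, lam, iters=100):
    """Cyclic coordinate descent for min 0.5*||y - X beta||^2 + lam*||beta||_1."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # Partial residuals excluding feature j.
            r = [y[i] - sum(beta[k] * X[i][k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))
            zj = sum(X[i][j] ** 2 for i in range(n))
            beta[j] = soft_threshold(rho, lam) / zj
    return beta

# Invented data: the outcome depends on feature 0 only; feature 1 is noise.
X = [[1.0, 0.5], [2.0, -0.3], [-1.0, 0.2], [0.5, 1.0], [-2.0, -0.8]]
y = [3.0 * row[0] for row in X]
beta = lasso_cd(X, y, lam=0.5)
print(beta)   # feature 1 is shrunk exactly to zero
```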

8. Neural network-based nonlinear model predictive control vs. linear quadratic Gaussian control

Cho, C.; Vance, R.; Mardi, N.; Qian, Z.; Prisbrey, K.

1997-01-01

One problem with the application of neural networks to the multivariable control of mineral and extractive processes is determining whether and how to use them. The objective of this investigation was to compare neural network control to more conventional strategies and to determine if there are any advantages in using neural network control in terms of set-point tracking, rise time, settling time, disturbance rejection and other criteria. The procedure involved developing neural network controllers using both historical plant data and simulation models. Various control patterns were tried, including both inverse and direct neural network plant models. These were compared to state space controllers that are, by nature, linear. For grinding and leaching circuits, a nonlinear neural network-based model predictive control strategy was superior to a state space-based linear quadratic Gaussian controller. The investigation pointed out the importance of incorporating state space into neural networks by making them recurrent, i.e., feeding certain output state variables into input nodes in the neural network. It was concluded that neural network controllers can have better disturbance rejection, set-point tracking, rise time, settling time and lower set-point overshoot, and it was also concluded that neural network controllers can be more reliable and easy to implement in complex, multivariable plants.

9. Predictive inference for best linear combination of biomarkers subject to limits of detection.

Coolen-Maturi, Tahani

2017-08-15

Measuring the accuracy of diagnostic tests is crucial in many application areas including medicine, machine learning and credit scoring. The receiver operating characteristic (ROC) curve is a useful tool to assess the ability of a diagnostic test to discriminate between two classes or groups. In practice, multiple diagnostic tests or biomarkers are combined to improve diagnostic accuracy. Often, biomarker measurements are undetectable either below or above the so-called limits of detection (LoD). In this paper, nonparametric predictive inference (NPI) for best linear combination of two or more biomarkers subject to limits of detection is presented. NPI is a frequentist statistical method that is explicitly aimed at using few modelling assumptions, enabled through the use of lower and upper probabilities to quantify uncertainty. The NPI lower and upper bounds for the ROC curve subject to limits of detection are derived, where the objective function to maximize is the area under the ROC curve. In addition, the paper discusses the effect of restriction on the linear combination's coefficients on the analysis. Examples are provided to illustrate the proposed method. Copyright © 2017 John Wiley & Sons, Ltd.
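
The AUC objective named above is, for empirical data, the Mann-Whitney statistic: the probability that a randomly chosen diseased-group score exceeds a randomly chosen healthy-group score, with ties counting one half. A tiny sketch with invented scores (a real LoD analysis would first censor values outside the detection limits, creating exactly such ties):

```python
def empirical_auc(neg, pos):
    """Mann-Whitney AUC: P(random positive score > random negative score)."""
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5      # ties (e.g. values censored at a shared LoD)
    return wins / (len(pos) * len(neg))

healthy  = [0.1, 0.4, 0.35, 0.8]     # invented combined-biomarker scores
diseased = [0.9, 0.8, 0.75, 0.6]
auc = empirical_auc(healthy, diseased)
print(auc)
```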

10. Non-linear Model Predictive Control for cooling strings of superconducting magnets using superfluid helium

Blanco Viñuela, Enrique

In each of eight arcs of the 27 km circumference Large Hadron Collider (LHC), 2.5 km long strings of super-conducting magnets are cooled with superfluid Helium II at 1.9 K. The temperature stabilisation is a challenging control problem due to complex non-linear dynamics of the magnets temperature and presence of multiple operational constraints. Strong nonlinearities and variable dead-times of the dynamics originate at strongly heat-flux dependent effective heat conductivity of superfluid that varies three orders of magnitude over the range of possible operational conditions. In order to improve the temperature stabilisation, a proof of concept on-line economic output-feedback Non-linear Model Predictive Controller (NMPC) is presented in this thesis. The controller is based on a novel complex first-principles distributed parameters numerical model of the temperature dynamics over a 214 m long sub-sector of the LHC that is characterized by very low computational cost of simulation needed in real-time optimizat...

11. A Homogeneous and Self-Dual Interior-Point Linear Programming Algorithm for Economic Model Predictive Control

Sokoler, Leo Emil; Frison, Gianluca; Skajaa, Anders

2015-01-01

We develop an efficient homogeneous and self-dual interior-point method (IPM) for the linear programs arising in economic model predictive control of constrained linear systems with linear objective functions. The algorithm is based on a Riccati iteration procedure, which is adapted to the linear system of equations solved in homogeneous and self-dual IPMs. Fast convergence is further achieved using a warm-start strategy. We implement the algorithm in MATLAB and C. Its performance is tested using a conceptual power management case study. Closed loop simulations show that 1) the proposed algorithm...

12. Prediction of pork quality parameters by applying fractals and data mining on MRI

Caballero, Daniel; Pérez-Palacios, Trinidad; Caro, Andrés

2017-01-01

This work firstly investigates the use of MRI, fractal algorithms and data mining techniques to determine pork quality parameters non-destructively. The main objective was to evaluate the capability of fractal algorithms (Classical Fractal Algorithm, CFA; Fractal Texture Algorithm, FTA; and One Point Fractal Texture Algorithm, OPFTA) to analyse MRI in order to predict quality parameters of loin. In addition, the effects of the MRI acquisition sequence (Gradient echo, GE; Spin echo, SE; and Turbo 3D, T3D) and the predictive data mining technique (Isotonic regression, IR, and Multiple linear regression, MLR) were analysed. Both fractal algorithms FTA and OPFTA are appropriate to analyse MRI of loins. The acquisition sequence, the fractal algorithm and the data mining technique seem to influence the prediction results. For most physico-chemical parameters, prediction equations with moderate...

13. Two-step algorithm of generalized PAPA method applied to linear programming solution of dynamic matrix control

Shimizu, Yoshiaki

1991-01-01

In recent complicated nuclear systems, there are increasing demands for highly advanced procedures for solving various problems. Among them, keen interest has been paid to man-machine communication to improve both safety and economy factors. Many optimization methods have proven good enough to elaborate on these points. In this preliminary note, we are concerned with the application of linear programming (LP) for this purpose. First we present a new, superior version of the generalized PAPA method (GEPAPA) to solve LP problems. We then examine its effectiveness when applied to deriving dynamic matrix control (DMC) as the LP solution. The approach aims at the above goal through quality control of processes that may appear in the system. (author)

14. Projection-reduction method applied to deriving non-linear optical conductivity for an electron-impurity system

Nam Lyong Kang

2013-07-01

The projection-reduction method introduced by the present authors is known to give a validated theory for optical transitions in systems of electrons interacting with phonons. In this work, using this method, we derive the linear and first-order nonlinear optical conductivities for an electron-impurity system and examine whether the expressions faithfully satisfy the quantum mechanical philosophy, in the same way as for the electron-phonon systems. The result shows that the Fermi distribution function for electrons, energy denominators, and electron-impurity coupling factors are contained properly, in an organized manner, along with absorption of photons for each electron transition process in the final expressions. Furthermore, the result is shown to be represented properly by schematic diagrams, as in the formulation of electron-phonon interaction. Therefore, in conclusion, we claim that this method can be applied in modeling optical transitions of electrons interacting with both impurities and phonons.

15. Comparing a single case to a control group - Applying linear mixed effects models to repeated measures data.

Huber, Stefan; Klein, Elise; Moeller, Korbinian; Willmes, Klaus

2015-10-01

In neuropsychological research, a single case is often compared with a small control sample. Crawford and colleagues developed inferential methods (i.e., the modified t-test) for such a research design. In the present article, we suggest an extension of the methods of Crawford and colleagues employing linear mixed models (LMM). We first show that a t-test for the significance of a dummy coded predictor variable in a linear regression is equivalent to the modified t-test of Crawford and colleagues. As an extension to this idea, we then generalized the modified t-test to repeated measures data by using LMMs to compare the performance difference in two conditions observed in a single participant to that of a small control group. The performance of LMMs regarding Type I error rates and statistical power was tested based on Monte Carlo simulations. We found that, starting with about 15-20 participants in the control sample, Type I error rates were close to the nominal Type I error rate using the Satterthwaite approximation for the degrees of freedom. Moreover, statistical power was acceptable. Therefore, we conclude that LMMs can be applied successfully to statistically evaluate performance differences between a single case and a control sample. Copyright © 2015 Elsevier Ltd. All rights reserved.
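
For reference, the Crawford-Howell modified t that the LMM approach generalizes can be computed in a few lines (illustrative scores, not the article's data):

```python
import math
import statistics

def modified_t(case_score, controls):
    """Crawford-Howell t for one case vs. a small control sample:
    t = (x - mean) / (sd * sqrt(1 + 1/n)), with n - 1 degrees of freedom."""
    n = len(controls)
    m = statistics.mean(controls)
    sd = statistics.stdev(controls)              # sample SD (n - 1 denominator)
    return (case_score - m) / (sd * math.sqrt(1.0 + 1.0 / n)), n - 1

controls = [52, 48, 55, 50, 49, 51, 47, 53]      # invented control scores
t, df = modified_t(30, controls)
print(t, df)
```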

16. Mamdani-Fuzzy Modeling Approach for Quality Prediction of Non-Linear Laser Lathing Process

Sivaraos; Khalim, A. Z.; Salleh, M. S.; Sivakumar, D.; Kadirgama, K.

2018-03-01

Lathing is a process for fashioning stock materials into desired cylindrical shapes, usually performed on a traditional lathe machine. However, recent rapid advancements in engineering materials and precision demands pose a great challenge to the traditional method. The main drawback of a conventional lathe is its mechanical contact, which brings undesirable tool wear, a heat affected zone, and problems of finishing and dimensional accuracy, especially taper quality, in machining of stock with a high length-to-diameter ratio. Therefore, a novel approach has been devised to investigate transforming a 2D flatbed CO2 laser cutting machine into a 3D laser lathing capability as an alternative solution. Three significant design parameters were selected for this experiment, namely cutting speed, spinning speed, and depth of cut. A total of 24 experiments were performed in eight (8) sequential runs, which were then replicated three (3) times. The experimental results were then used to establish a Mamdani-Fuzzy predictive model which yields an accuracy of more than 95%. Thus, the proposed Mamdani-Fuzzy modelling approach is found very suitable and practical for quality prediction of the non-linear laser lathing process for cylindrical stocks of 10 mm diameter.
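
A toy Mamdani inference step (invented membership functions and rules, not the paper's fitted model): triangular fuzzy sets, min implication, max aggregation and centroid defuzzification over a discretised 0-10 quality universe.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def mamdani_quality(speed):
    """Min implication, max aggregation, centroid defuzzification."""
    slow = max(0.0, min(1.0, (10.0 - speed) / 10.0))     # antecedent memberships
    fast = max(0.0, min(1.0, speed / 10.0))
    num = den = 0.0
    for i in range(101):
        q = i * 0.1                                      # quality universe 0..10
        mu = max(min(slow, tri(q, 0.0, 2.0, 5.0)),       # rule 1: slow -> poor quality
                 min(fast, tri(q, 5.0, 8.0, 10.0)))      # rule 2: fast -> good quality
        num += mu * q
        den += mu
    return num / den if den else 0.0

q_hi = mamdani_quality(9.0)    # high speed -> centroid lands in the "good" region
q_lo = mamdani_quality(1.0)    # low speed  -> centroid lands in the "poor" region
print(q_hi, q_lo)
```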

17. Chaos and loss of predictability in the periodically kicked linear oscillator

Luna-Acosta, G.A.; Cantoral, E.

1989-01-01

Chernikov et al. [2] have discovered new features in the dynamics of a periodically kicked LHO, x'' + ω₀² x = (K/k₀T²) sin(k₀x) Σ_n δ(t/T − n). They report that its phase space motion under exact resonance (p ω₀ = (2π/T) q; p, q integers), and with initial conditions on the separatrix of the average Hamiltonian, accelerates unboundedly along a fractal stochastic web with q-fold symmetry. Here we investigate with numerical experiments the effects of small deviations from exact resonance on the diffusion and symmetry patterns. We show graphically that the stochastic webs are (topologically) unstable and thus the unbounded motion becomes considerably truncated. Moreover, we analyze numerically and analytically a simpler (integrable) version. We give its exact closed-form solution in complex numbers, realize that it accelerates unboundedly only when ω₀ = (2π/T) q (q = ±1, 2, ...), and show that for small uncertainties in these frequencies, total predictability is lost as time evolves. That is, trajectories of a set of systems, initially described by close neighboring points in phase space, strongly diverge in a non-linear way. The great loss of predictability in the integrable model is due to the combination of translational and rotational symmetries inherent in these systems. (Author)

18. TBM performance prediction in Yucca Mountain welded tuff from linear cutter tests

Gertsch, R.; Ozdemir, L.; Gertsch, L.

1992-01-01

Performance predictions were developed for tunnel boring machines operating in welded tuff for the construction of the Exploratory Studies Facility and the potential nuclear waste repository at Yucca Mountain. The predictions were based on test data obtained from an extensive series of linear cutting tests performed on samples of Topopah Spring welded tuff from the Yucca Mountain Project site. Using the cutter force, spacing, and penetration data from the experimental program, the thrust, torque, power, and rate of penetration were estimated for a 25 ft diameter tunnel boring machine (TBM) operating in welded tuff. Guidelines were developed for the optimal design of the TBM cutterhead to achieve high production rates at the lowest possible excavation costs. The results show that the Topopah Spring welded tuff (TSw2) can be excavated at relatively high rates of advance with state-of-the-art TBMs. The results also show, however, that the TBM torque and power requirements will be higher than estimated based on rock physical properties and past tunneling experience in rock formations of similar strength.

19. Predicting microRNA-disease associations using label propagation based on linear neighborhood similarity.

Li, Guanghui; Luo, Jiawei; Xiao, Qiu; Liang, Cheng; Ding, Pingjian

2018-05-12

Interactions between microRNAs (miRNAs) and diseases can yield important information for uncovering novel prognostic markers. Since experimental determination of disease-miRNA associations is time-consuming and costly, attention has been given to designing efficient and robust computational techniques for identifying undiscovered interactions. In this study, we present a label propagation model with linear neighborhood similarity, called LPLNS, to predict unobserved miRNA-disease associations. Additionally, a preprocessing step is performed to derive new interaction likelihood profiles that will contribute to the prediction since new miRNAs and diseases lack known associations. Our results demonstrate that the LPLNS model based on the known disease-miRNA associations could achieve impressive performance with an AUC of 0.9034. Furthermore, we observed that the LPLNS model based on new interaction likelihood profiles could improve the performance to an AUC of 0.9127. This was better than other comparable methods. In addition, case studies also demonstrated our method's outstanding performance for inferring undiscovered interactions between miRNAs and diseases, especially for novel diseases. Copyright © 2018. Published by Elsevier Inc.
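
A stripped-down label-propagation sketch in the spirit of LPLNS (the real method builds the similarity matrix from linear neighborhood reconstruction weights; here a small similarity matrix is given directly):

```python
def label_propagation(S, y, alpha=0.5, iters=100):
    """Iterate f <- alpha * W f + (1 - alpha) * y on the row-normalised
    similarity matrix W; unlabeled nodes inherit scores from neighbours."""
    n = len(S)
    W = [[S[i][j] / (sum(S[i]) or 1.0) for j in range(n)] for i in range(n)]
    f = y[:]
    for _ in range(iters):
        f = [alpha * sum(W[i][j] * f[j] for j in range(n)) + (1 - alpha) * y[i]
             for i in range(n)]
    return f

# Tiny toy graph: node 0 is a known association (label 1), node 1 is strongly
# similar to node 0, node 2 is only weakly connected to the labeled part.
S = [[0.0, 0.9, 0.0],
     [0.9, 0.0, 0.1],
     [0.0, 0.1, 0.0]]
y = [1.0, 0.0, 0.0]
scores = label_propagation(S, y)
print(scores)   # node 1 scores higher than node 2
```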

1. Predicting stem borer density in maize using RapidEye data and generalized linear models

Abdel-Rahman, Elfatih M.; Landmann, Tobias; Kyalo, Richard; Ong'amo, George; Mwalusepo, Sizah; Sulieman, Saad; Ru, Bruno Le

2017-05-01

Average maize yield in eastern Africa is 2.03 t ha⁻¹, compared to a global average of 6.06 t ha⁻¹, due to biotic and abiotic constraints. Amongst the biotic production constraints in Africa, stem borers are the most injurious. In eastern Africa, maize yield losses due to stem borers are currently estimated at between 12% and 21% of total production. The objective of the present study was to explore the potential of RapidEye spectral data for assessing stem borer larva densities in maize fields at two study sites in Kenya. RapidEye images were acquired for the Bomet (western Kenya) site on 9 December 2014 and 27 January 2015, and for Machakos (eastern Kenya) on 3 January 2015. Five RapidEye spectral bands as well as 30 spectral vegetation indices (SVIs) were utilized to predict per-field maize stem borer larva densities using generalized linear models (GLMs), assuming Poisson ('Po') and negative binomial ('NB') distributions. Root mean square error (RMSE) and ratio of prediction to deviation (RPD) statistics were used to assess the models' performance using a leave-one-out cross-validation approach. The zero-inflated NB ('ZINB') models outperformed the 'NB' models, and stem borer larva densities could only be predicted during the mid growing season, in December and early January, in the two study sites respectively (RMSE = 0.69-1.06 and RPD = 8.25-19.57). Overall, all models performed similarly whether all 30 SVIs (non-nested) or only the significant (nested) SVIs were used. The models developed could improve decision making regarding the control of maize stem borers within integrated pest management (IPM) interventions.
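The core of such count-data modeling is a Poisson GLM with a log link, which can be fit by Newton/IRLS in a few lines. This is a numpy-only sketch on synthetic data; the study's negative-binomial and zero-inflated variants require additional machinery not shown here.

```python
import numpy as np

def fit_poisson_glm(X, y, n_iter=25):
    """Fit a Poisson GLM with log link by Newton/IRLS:
    beta <- beta + (X'WX)^{-1} X'(y - mu), with mu = exp(X beta)
    and W = diag(mu)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        beta = beta + np.linalg.solve(X.T @ (mu[:, None] * X),
                                      X.T @ (y - mu))
    return beta

# Synthetic check: counts generated from a known linear predictor
# (a stand-in for larva counts vs. a vegetation index).
rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([0.5, 0.8])
y = rng.poisson(np.exp(X @ beta_true))
beta_hat = fit_poisson_glm(X, y)
```

With enough data the IRLS estimate recovers the generating coefficients closely, which is the sanity check one would run before applying the model to field counts.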

2. Multiple Linear Regression and Artificial Neural Network to Predict Blood Glucose in Overweight Patients.

Wang, J; Wang, F; Liu, Y; Xu, J; Lin, H; Jia, B; Zuo, W; Jiang, Y; Hu, L; Lin, F

2016-01-01

Overweight individuals are at higher risk for developing type II diabetes than the general population. We conducted this study to analyze the correlation between blood glucose and biochemical parameters, and developed a blood glucose prediction model tailored to overweight patients. A total of 346 overweight Chinese patients aged 18-81 years were involved in this study. Their levels of fasting glucose (fs-GLU), blood lipids, and hepatic and renal functions were measured and analyzed by multiple linear regression (MLR). Based on the MLR results, we developed a back-propagation artificial neural network (BP-ANN) model by selecting tansig as the transfer function of the hidden layer nodes and purelin for the output layer nodes, with a training goal of 0.5×10⁻⁵. There was significant correlation between fs-GLU and age, BMI, and blood biochemical indexes (P<0.05). The results of MLR analysis indicated that age, fasting alanine transaminase (fs-ALT), blood urea nitrogen (fs-BUN), total protein (fs-TP), uric acid (fs-UA), and BMI are 6 independent variables related to fs-GLU. Based on these parameters, the BP-ANN model performed well and reached high prediction accuracy when trained for 1000 epochs (R=0.9987). The level of fs-GLU was predictable using the proposed BP-ANN model based on 6 related parameters (age, fs-ALT, fs-BUN, fs-TP, fs-UA and BMI) in overweight patients.
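The MLR step of such a study can be sketched as ordinary least squares with an intercept. The data below are synthetic stand-ins, not the study's measurements.

```python
import numpy as np

def mlr_fit(X, y):
    """Ordinary least-squares multiple linear regression with an
    intercept; returns coefficients [b0, b1, ..., bk]."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def mlr_predict(coef, X):
    return np.column_stack([np.ones(len(X)), X]) @ coef

# Toy stand-ins for three predictors (e.g. age, BMI, a liver enzyme)
# of a glucose-like response.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 3))
y = (5.0 + 0.3 * X[:, 0] + 0.2 * X[:, 1] - 0.1 * X[:, 2]
     + rng.normal(scale=0.1, size=300))
coef = mlr_fit(X, y)
```

The recovered coefficients indicate which predictors carry independent information, which is how the study narrowed 6 variables for the subsequent BP-ANN model.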

3. Predicting heat stress index in Sasso hens using automatic linear modeling and artificial neural network

Yakubu, A.; Oluremi, O. I. A.; Ekpo, E. I.

2018-03-01

There is an increasing use of robust analytical algorithms in the prediction of heat stress. The present investigation, therefore, was carried out to forecast the heat stress index (HSI) in Sasso laying hens. One hundred and sixty-seven records on the thermo-physiological parameters of the birds were utilized. The birds were reared on deep litter and battery cage systems. Data were collected when the birds were 42 and 52 weeks of age. The independent variables fitted were housing system, age of birds, rectal temperature (RT), pulse rate (PR), and respiratory rate (RR). The response variable was HSI. Data were analyzed using automatic linear modeling (ALM) and artificial neural network (ANN) procedures. The ALM model-building method involved forward stepwise selection using the F-statistic criterion. For the ANN, a multilayer perceptron (MLP) with a back-propagation network was used. The ANN was trained with 90% of the data set, while 10% was reserved for testing and model validation. RR and PR were the two parameters of utmost importance in the prediction of HSI. However, the fractional importance of RR was higher than that of PR in both the ALM (0.947 versus 0.053) and ANN (0.677 versus 0.274) models. The two models also predicted HSI effectively with a high degree of accuracy [r = 0.980, R² = 0.961, adjusted R² = 0.961, and RMSE = 0.05168 (ALM); r = 0.983, R² = 0.966, adjusted R² = 0.966, and RMSE = 0.04806 (ANN)]. The present information may be exploited in the development of a heat stress chart based largely on RR. This may aid detection of thermal discomfort in a poultry house under tropical and subtropical conditions.

4. Dual-Source Linear Energy Prediction (LINE-P) Model in the Context of WSNs.

Ahmed, Faisal; Tamberg, Gert; Le Moullec, Yannick; Annus, Paul

2017-07-20

Energy harvesting technologies such as miniature power solar panels and micro wind turbines are increasingly used to help power wireless sensor network nodes. However, a major drawback of energy harvesting is its varying and intermittent characteristic, which can negatively affect the quality of service. This calls for careful design and operation of the nodes, possibly by means of, e.g., dynamic duty cycling and/or dynamic frequency and voltage scaling. In this context, various energy prediction models have been proposed in the literature; however, they are typically compute-intensive or only suitable for a single type of energy source. In this paper, we propose Linear Energy Prediction "LINE-P", a lightweight, yet relatively accurate model based on approximation and sampling theory; LINE-P is suitable for dual-source energy harvesting. Simulations and comparisons against existing similar models have been conducted with low and medium resolutions (i.e., 60 and 22 min intervals/24 h) for the solar energy source (low variations) and with high resolutions (15 min intervals/24 h) for the wind energy source. The results show that the accuracy of the solar-based and wind-based predictions is up to approximately 98% and 96%, respectively, while requiring a lower complexity and memory than the other models. For the cases where LINE-P's accuracy is lower than that of other approaches, it still has the advantage of lower computing requirements, making it more suitable for embedded implementation, e.g., in wireless sensor network coordinator nodes or gateways.
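The approximation-and-sampling idea behind such lightweight predictors can be illustrated with piecewise-linear reconstruction of a coarsely sampled energy profile. This is a sketch of the general principle, not the published LINE-P algorithm, and the solar-like profile below is synthetic.

```python
import numpy as np

def linear_profile_predict(t_samples, e_samples, t_query):
    """Piecewise-linear reconstruction of an energy-harvest profile
    from coarse samples: cheap enough for a sensor-node MCU, since
    prediction reduces to one interpolation per query."""
    return np.interp(t_query, t_samples, e_samples)

# Hypothetical normalized solar profile sampled every 60 min,
# queried at 15 min resolution (times in minutes of the day).
t_coarse = np.arange(0, 24 * 60 + 1, 60.0)
profile = np.maximum(0.0, np.sin((t_coarse - 360) * np.pi / (12 * 60)))
t_fine = np.arange(0, 24 * 60 + 1, 15.0)
pred = linear_profile_predict(t_coarse, profile, t_fine)
```

The trade-off is exactly the one the paper targets: a low-resolution sample set keeps memory and computation small, at the cost of interpolation error between samples.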

5. Non-linear dynamical signal characterization for prediction of defibrillation success through machine learning

2012-10-01

.6% and 60.9%, respectively. Conclusion We report the development and first-use of a nontraditional non-linear method of analyzing the VF ECG signal, yielding high predictive accuracies of defibrillation success. Furthermore, incorporation of features from the PetCO2 signal noticeably increased model robustness. These predictive capabilities should further improve with the availability of a larger database.

6. Application of linear and non-linear low-Re k-ε models in two-dimensional predictions of convective heat transfer in passages with sudden contractions

Raisee, M.; Hejazi, S.H.

2007-01-01

This paper presents comparisons between heat transfer predictions and measurements for developing turbulent flow through straight rectangular channels with sudden contractions at the mid-channel section. The present numerical results were obtained using a two-dimensional finite-volume code which solves the governing equations in a vertical plane located at the lateral mid-point of the channel. The pressure field is obtained with the well-known SIMPLE algorithm. The hybrid scheme was employed for the discretization of convection in all transport equations. For modeling of the turbulence, a zonal low-Reynolds-number k-ε model and linear and non-linear low-Reynolds-number k-ε models with the 'Yap' and 'NYP' length-scale correction terms were employed. The main objective of the present study is to examine the ability of these turbulence models to predict convective heat transfer in channels with a sudden contraction at the mid-channel section. The results of this study show that a sudden contraction creates a relatively small recirculation bubble immediately downstream of the channel contraction. This separation bubble influences the distribution of the local heat transfer coefficient and increases heat transfer levels by a factor of three. Computational results indicate that all the turbulence models employed produce similar flow fields. The zonal k-ε model produces the wrong Nusselt number distribution, underpredicting heat transfer levels in the recirculation bubble and overpredicting them in the developing region. The linear low-Re k-ε model, on the other hand, returns the correct Nusselt number distribution in the recirculation region, although it somewhat overpredicts heat transfer levels in the developing region downstream of the separation bubble. The replacement of the 'Yap' term with the 'NYP' term in the linear low-Re k-ε model results in a more accurate local Nusselt number distribution. Moreover, the application of the non-linear k

7. The practice of prediction: What can ecologists learn from applied, ecology-related fields?

Pennekamp, Frank; Adamson, Matthew; Petchey, Owen L; Poggiale, Jean-Christophe; Aguiar, Maira; Kooi, Bob W.; Botkin, Daniel B.; DeAngelis, Donald L.

2017-01-01

The pervasive influence of human-induced global environmental change affects biodiversity across the globe, and there is great uncertainty as to how the biosphere will react on short and longer time scales. To adapt to what the future holds and to manage the impacts of global change, scientists need to predict the expected effects with some confidence and communicate these predictions to policy makers. However, recent reviews found that we currently lack a clear understanding of how predictable ecology is, with views ranging from seeing it as mostly unpredictable to potentially predictable, at least over short time frames. In applied, ecology-related fields, however, predictions are more commonly formulated and reported, as well as evaluated in hindsight, potentially allowing one to define baselines of predictive proficiency in these fields. We searched the literature for representative case studies in these fields and collected information about modeling approaches, target variables of prediction, predictive proficiency achieved, and the availability of data to parameterize predictive models. We find that some fields, such as epidemiology, achieve high predictive proficiency, but even in the more predictive fields proficiency is evaluated in different ways. Both phenomenological and mechanistic approaches are used in most fields, but differences are often small, with no clear superiority of one approach over the other. Data availability is limiting in most fields, with long-term studies being rare and detailed data for parameterizing mechanistic models being in short supply. We suggest that ecologists adopt a more rigorous approach to reporting and assessing predictive proficiency, and embrace the challenges of real-world decision making to strengthen the practice of prediction in ecology.

8. Prediction of high airway pressure using a non-linear autoregressive model of pulmonary mechanics.

Langdon, Ruby; Docherty, Paul D; Schranz, Christoph; Chase, J Geoffrey

2017-11-02

For mechanically ventilated patients with acute respiratory distress syndrome (ARDS), suboptimal PEEP levels can cause ventilator-induced lung injury (VILI). In particular, high PEEP and high peak inspiratory pressures (PIP) can cause overdistension of alveoli that is associated with VILI. However, PEEP must also be sufficient to maintain recruitment in ARDS lungs. A lung model that accurately and precisely predicts the outcome of an increase in PEEP may allow dangerously high PIP to be avoided, and reduce the incidence of VILI. Sixteen pressure-flow data sets were collected from nine mechanically ventilated ARDS patients who underwent one or more recruitment manoeuvres. A nonlinear autoregressive (NARX) model was identified on one or more adjacent PEEP steps, and extrapolated to predict PIP at 2, 4, and 6 cmH₂O PEEP horizons. The analysis considered whether the predicted and measured PIP exceeded a threshold of 40 cmH₂O. A direct comparison of the method was made using the first-order model of pulmonary mechanics (FOM(I)). Additionally, a further, more clinically appropriate method for the FOM was tested, in which the FOM was trained on a single PEEP prior to prediction (FOM(II)). The NARX model exhibited very high sensitivity (> 0.96) in all cases, and a high specificity (> 0.88). While both FOM methods had a high specificity (> 0.96), the sensitivity was much lower, with a mean of 0.68 for FOM(I) and 0.82 for FOM(II). Clinically, false negatives are more harmful than false positives, as a high PIP may result in distension and VILI. Thus, the NARX model may be more effective than the FOM in allowing clinicians to reduce the risk of applying a PEEP that results in dangerously high airway pressures.
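A NARX-type model can be identified by least squares on lagged input-output regressors. The sketch below uses synthetic data and an illustrative regressor basis (lagged outputs, lagged input, one quadratic term); it is not the model structure identified in the paper.

```python
import numpy as np

def narx_features(y, u, lag):
    """Regressor matrix with lagged outputs/inputs and a quadratic
    output term: one simple NARX structure."""
    rows = []
    for k in range(lag, len(y)):
        rows.append([1.0, y[k-1], y[k-2], u[k], u[k-1], y[k-1]**2])
    return np.array(rows)

def narx_fit_predict(y, u, lag=2):
    """Identify coefficients by least squares, return one-step
    predictions over the training record and the coefficients."""
    Phi = narx_features(y, u, lag)
    theta, *_ = np.linalg.lstsq(Phi, y[lag:], rcond=None)
    return Phi @ theta, theta

# Synthetic "pressure" driven by a known nonlinear difference equation.
rng = np.random.default_rng(2)
u = rng.uniform(-1, 1, 400)
y = np.zeros(400)
for k in range(2, 400):
    y[k] = 0.5 * y[k-1] - 0.2 * y[k-2] + 0.8 * u[k] + 0.1 * y[k-1]**2
y_hat, theta = narx_fit_predict(y, u)
```

Because the generating system lies inside the chosen basis, the fit is essentially exact here; on clinical data the same machinery is trained on low-PEEP steps and extrapolated to higher PEEP.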

9. On predicting the results of applying workflow management in a healthcare context

Chermin, B.; Frey, I.A.; Reijers, H.A.; Smeets, H.J.

2012-01-01

Even though workflow management systems are currently not being applied on a wide scale in healthcare settings, their benefits with respect to operational efficiency and reducing patient risk seem enticing. The authors show how an approach that is rooted in simulation can be useful to predict the

10. Applying deep bidirectional LSTM and mixture density network for basketball trajectory prediction

Zhao, Yu; Yang, Rennong; Chevalier, Guillaume; Shah, Rajiv C.; Romijnders, Rob

2018-01-01

Data analytics helps basketball teams to create tactics. However, manual data collection and analytics are costly and ineffective. Therefore, we applied a deep bidirectional long short-term memory (BLSTM) and mixture density network (MDN) approach. This model is not only capable of predicting a

11. Conscientiousness at the workplace: Applying mixture IRT to investigate scalability and predictive validity

Egberink, I.J.L.; Meijer, R.R.; Veldkamp, Bernard P.

2010-01-01

Mixture item response theory (IRT) models have been used to assess multidimensionality of the construct being measured and to detect different response styles for different groups. In this study a mixture version of the graded response model was applied to investigate scalability and predictive

13. C code generation applied to nonlinear model predictive control for an artificial pancreas

Boiroux, Dimitri; Jørgensen, John Bagterp

2017-01-01

This paper presents a method to generate C code from MATLAB code applied to a nonlinear model predictive control (NMPC) algorithm. The C code generation uses the MATLAB Coder Toolbox. It can drastically reduce the time required for development compared to a manual porting of code from MATLAB to C...

14. Dynamic segmentation and linear prediction for maternal ECG removal in antenatal abdominal recordings

Vullings, R; Sluijter, R J; Mischi, M; Bergmans, J W M; Peters, C H L; Oei, S G

2009-01-01

Monitoring the fetal heart rate (fHR) and fetal electrocardiogram (fECG) during pregnancy is important to support medical decision making. Before labor, the fHR is usually monitored using Doppler ultrasound. This method is inaccurate and therefore of limited clinical value. During labor, the fHR can be monitored more accurately using an invasive electrode; this method also enables monitoring of the fECG. Antenatally, the fECG and fHR can also be monitored using electrodes on the maternal abdomen. The signal-to-noise ratio of these recordings is, however, low, with the maternal electrocardiogram (mECG) being the main interference. Existing techniques to remove the mECG from these non-invasive recordings are insufficiently accurate or do not provide all spatial information of the fECG. In this paper, a new technique for mECG removal in antenatal abdominal recordings is presented. This technique operates by linear prediction of each separate wave in the mECG. Its performance in mECG removal and fHR detection is evaluated by comparison with spatial filtering, adaptive filtering, template subtraction, and independent component analysis techniques. The new technique outperforms the other techniques in both mECG removal and fHR detection (by more than 3%).
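Generic linear prediction, the building block of such techniques, estimates each sample as a fixed linear combination of past samples; the paper applies the idea per ECG wave, but the sample-level version below shows the mechanism on a synthetic signal.

```python
import numpy as np

def linear_predict(x, order=4):
    """Least-squares linear prediction: estimate coefficients a so
    that x[n] ~ sum_i a_i * x[n-i], and return the in-sample
    prediction together with the coefficients."""
    rows = [x[n - order:n][::-1] for n in range(order, len(x))]
    A = np.array(rows)
    a, *_ = np.linalg.lstsq(A, x[order:], rcond=None)
    return A @ a, a

# A noise-free sum of two sinusoids satisfies an exact 4th-order
# linear recurrence (two poles per real sinusoid), so it is
# perfectly predictable with order >= 4.
n = np.arange(600)
x = np.sin(0.07 * n) + 0.5 * np.sin(0.19 * n)
pred, a = linear_predict(x, order=4)
```

In the mECG-removal setting, the predicted maternal waveform is subtracted from the abdominal recording; whatever the predictor cannot explain (including the fECG) remains in the residual.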

15. Seasonal Variability of Aragonite Saturation State in the North Pacific Ocean Predicted by Multiple Linear Regression

Kim, T. W.; Park, G. H.

2014-12-01

Seasonal variation of the aragonite saturation state (Ωarag) in the North Pacific Ocean (NPO) was investigated using multiple linear regression (MLR) models produced from the PACIFICA (Pacific Ocean interior carbon) dataset. Data within depth ranges of 50-1200 m were used to derive the MLR models, and three parameters (potential temperature, nitrate, and apparent oxygen utilization (AOU)) were chosen as predictor variables because these parameters are associated with vertical mixing and with DIC (dissolved inorganic carbon) removal and release, all of which affect Ωarag in the water column directly or indirectly. The PACIFICA dataset was divided into 5° × 5° grids, and an MLR model was produced in each grid, giving a total of 145 independent MLR models over the NPO. Mean RMSE (root mean square error) and r² (coefficient of determination) of all derived MLR models were approximately 0.09 and 0.96, respectively. The obtained MLR coefficients for each of the predictor variables and an intercept were then interpolated over the study area, making it possible to allocate MLR coefficients to data-sparse ocean regions. Predictability from the interpolated coefficients was evaluated using Hawaiian time-series data; the mean residual between measured and predicted Ωarag values was approximately 0.08, which is less than the mean RMSE of our MLR models. The interpolated MLR coefficients were combined with the seasonal climatology of World Ocean Atlas 2013 (1° × 1°) to produce seasonal Ωarag distributions over various depths. Large seasonal variability in Ωarag was manifested in the mid-latitude Western NPO (24-40°N, 130-180°E) and low-latitude Eastern NPO (0-12°N, 115-150°W). In the Western NPO, seasonal fluctuations of water-column stratification appeared to be responsible for the seasonal variation in Ωarag (~0.5 at 50 m), because it closely followed temperature variations in the 0-75 m layer. In contrast, remineralization of organic matter was the main cause for the seasonal

16. Predicting oropharyngeal tumor volume throughout the course of radiation therapy from pretreatment computed tomography data using general linear models

2014-01-01

Purpose: The purpose of this work was to develop and evaluate the accuracy of several predictive models of variation in tumor volume throughout the course of radiation therapy. Methods: Nineteen patients with oropharyngeal cancers were imaged daily with CT-on-rails for image-guided alignment per an institutional protocol. The daily volumes of 35 tumors in these 19 patients were determined and used to generate (1) a linear model in which tumor volume changed at a constant rate, (2) a general linear model that utilized the power-fit relationship between the daily and initial tumor volumes, and (3) a functional general linear model that identified and exploited the primary modes of variation between time series describing the changing tumor volumes. Primary and nodal tumor volumes were examined separately. The accuracy of these models in predicting daily tumor volumes was compared with that of static and linear reference models using leave-one-out cross-validation. Results: In predicting the daily volume of primary tumors, the general linear model and the functional general linear model were more accurate than the static reference model by 9.9% (range: −11.6%–23.8%) and 14.6% (range: −7.3%–27.5%), respectively, and were more accurate than the linear reference model by 14.2% (range: −6.8%–40.3%) and 13.1% (range: −1.5%–52.5%), respectively. In predicting the daily volume of nodal tumors, only the 14.4% (range: −11.1%–20.5%) improvement in accuracy of the functional general linear model compared to the static reference model was statistically significant. Conclusions: A general linear model and a functional general linear model trained on data from a small population of patients can predict the primary tumor volume throughout the course of radiation therapy with greater accuracy than standard reference models. These more accurate models may increase the prognostic value of information about the tumor garnered from pretreatment computed tomography
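A power-fit relation between daily and initial volumes, of the form V_day = a·V₀^b, can be fit by least squares in log space. The volumes below are synthetic and the coefficients illustrative; this sketches the kind of relationship model (2) exploits, not the study's fitted model.

```python
import numpy as np

def power_fit(v0, v_day):
    """Fit v_day = a * v0**b by ordinary least squares on
    log(v_day) = log(a) + b*log(v0); returns (a, b)."""
    b, loga = np.polyfit(np.log(v0), np.log(v_day), 1)
    return np.exp(loga), b

# Hypothetical initial volumes and day-15 volumes following a power
# law with mild multiplicative noise (35 tumors, volumes in cm^3).
rng = np.random.default_rng(5)
v0 = rng.uniform(5.0, 60.0, 35)
v15 = 0.7 * v0 ** 1.1 * np.exp(rng.normal(0, 0.02, 35))
a, b = power_fit(v0, v15)
```

Once (a, b) are estimated per treatment day from a training cohort, a new patient's pretreatment volume alone yields a predicted trajectory.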

17. Applying Machine Learning and High Performance Computing to Water Quality Assessment and Prediction

Ruijian Zhang; Deren Li

2017-01-01

Water quality assessment and prediction is an increasingly important issue. Traditional methods either take a long time or can only perform assessments. In this research, by applying a machine learning algorithm to a long period of water-attribute data, we can generate a decision tree that predicts the next day's water quality in an easy and efficient way. The idea is to combine traditional methods and computer algorithms. Using machine learning algorithms, the ass...

18. Comparison of the Predictive Performance and Interpretability of Random Forest and Linear Models on Benchmark Data Sets.

Marchese Robinson, Richard L; Palczewska, Anna; Palczewski, Jan; Kidley, Nathan

2017-08-28

The ability to interpret the predictions made by quantitative structure-activity relationships (QSARs) offers a number of advantages. While QSARs built using nonlinear modeling approaches, such as the popular Random Forest algorithm, might sometimes be more predictive than those built using linear modeling approaches, their predictions have been perceived as difficult to interpret. However, a growing number of approaches have been proposed for interpreting nonlinear QSAR models in general and Random Forest in particular. In the current work, we compare the performance of Random Forest to those of two widely used linear modeling approaches: linear Support Vector Machines (SVMs) (or Support Vector Regression (SVR)) and partial least-squares (PLS). We compare their performance in terms of their predictivity as well as the chemical interpretability of the predictions using novel scoring schemes for assessing heat map images of substructural contributions. We critically assess different approaches for interpreting Random Forest models as well as for obtaining predictions from the forest. We assess the models on a large number of widely employed public-domain benchmark data sets corresponding to regression and binary classification problems of relevance to hit identification and toxicology. We conclude that Random Forest typically yields comparable or possibly better predictive performance than the linear modeling approaches and that its predictions may also be interpreted in a chemically and biologically meaningful way. In contrast to earlier work looking at interpretation of nonlinear QSAR models, we directly compare two methodologically distinct approaches for interpreting Random Forest models. The approaches for interpreting Random Forest assessed in our article were implemented using open-source programs that we have made available to the community. These programs are the rfFC package ( https://r-forge.r-project.org/R/?group_id=1725 ) for the R statistical
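The headline comparison can be reproduced in miniature with scikit-learn on synthetic data that has a deliberately nonlinear "structure-activity" relationship. This is illustrative only; the benchmark data sets, SVM/PLS baselines, and interpretation scoring of the paper are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Synthetic target with a strongly nonlinear dependence on two of
# five descriptors, where a forest should beat a linear model.
rng = np.random.default_rng(3)
X = rng.uniform(-2, 2, size=(600, 5))
y = np.sin(2 * X[:, 0]) * X[:, 1] + 0.1 * rng.normal(size=600)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
lin = LinearRegression().fit(X_tr, y_tr)
r2_rf = r2_score(y_te, rf.predict(X_te))
r2_lin = r2_score(y_te, lin.predict(X_te))
```

On a genuinely linear target the gap closes, which mirrors the paper's finding that Random Forest is "comparable or possibly better" rather than uniformly superior.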

19. Linear reaction norm models for genetic merit prediction of Angus cattle under genotype by environment interaction.

Cardoso, F F; Tempelman, R J

2012-07-01

The objectives of this work were to assess alternative linear reaction norm (RN) models for genetic evaluation of Angus cattle in Brazil. That is, we investigated the interaction between genotypes and continuous descriptors of the environmental variation to examine evidence of genotype by environment interaction (G×E) in post-weaning BW gain (PWG) and to compare the environmental sensitivity of national and imported Angus sires. Data were collected by the Brazilian Angus Improvement Program from 1974 to 2005 and consisted of 63,098 records and a pedigree file with 95,896 animals. Six models were implemented using Bayesian inference and compared using the Deviance Information Criterion (DIC). The simplest model was M(1), a traditional animal model, which showed the largest DIC and hence the poorest fit when compared with the 4 alternative RN specifications accounting for G×E. In M(2), a 2-step procedure was implemented using the contemporary group posterior means of M(1) as the environmental gradient, ranging from -92.6 to +265.5 kg. Moreover, the benefits of jointly estimating all parameters in a 1-step approach were demonstrated by M(3). Additionally, we extended M(3) to allow for residual heteroskedasticity using an exponential function (M(4)) and the best-fitting (smallest DIC) environmental classification model (M(5)) specification. Finally, M(6) added just heteroskedastic residual variance to M(1). Heritabilities were lower in harsh environments and increased with the improvement of production conditions for all RN models. Rank correlations among genetic merit predictions obtained by M(1) and by the best-fitting RN models M(3) (homoskedastic) and M(5) (heteroskedastic) at different environmental levels ranged from 0.79 to 0.81, suggesting biological importance of G×E in Brazilian Angus PWG. These results suggest that selection progress could be optimized by adopting environment-specific genetic merit predictions. The PWG environmental sensitivity of

20. Ensemble prediction of floods – catchment non-linearity and forecast probabilities

C. Reszler

2007-07-01

Quantifying the uncertainty of flood forecasts by ensemble methods is becoming increasingly important for operational purposes. The aim of this paper is to examine how the ensemble distribution of precipitation forecasts propagates in the catchment system, and to interpret the flood forecast probabilities relative to the forecast errors. We use the 622 km² Kamp catchment in Austria as an example, where a comprehensive data set, including a 500 yr and a 1000 yr flood, is available. A spatially-distributed continuous rainfall-runoff model is used along with ensemble and deterministic precipitation forecasts that combine rain gauge data, radar data and the forecast fields of the ALADIN and ECMWF numerical weather prediction models. The analyses indicate that, for long lead times, the variability of the precipitation ensemble is amplified as it propagates through the catchment system as a result of non-linear catchment response. In contrast, for lead times shorter than the catchment lag time (e.g. 12 h and less), the variability of the precipitation ensemble is decreased as the forecasts are mainly controlled by observed upstream runoff and observed precipitation. Assuming that all ensemble members are equally likely, the statistical analyses for five flood events at the Kamp showed that the ensemble spread of the flood forecasts is always narrower than the distribution of the forecast errors. This is because the ensemble forecasts focus on the uncertainty in forecast precipitation as the dominant source of uncertainty, and other sources of uncertainty are not accounted for. However, a number of analyses, including Relative Operating Characteristic diagrams, indicate that the ensemble spread is a useful indicator to assess potential forecast errors for lead times larger than 12 h.

1. EVALUATING PREDICTIVE ERRORS OF A COMPLEX ENVIRONMENTAL MODEL USING A GENERAL LINEAR MODEL AND LEAST SQUARE MEANS

A General Linear Model (GLM) was used to evaluate the deviation of predicted values from expected values for a complex environmental model. For this demonstration, we used the default level interface of the Regional Mercury Cycling Model (R-MCM) to simulate epilimnetic total mer...

2. Applying Machine Learning and High Performance Computing to Water Quality Assessment and Prediction

Ruijian Zhang

2017-12-01

Water quality assessment and prediction is an increasingly important issue. Traditional methods either take a long time or can only perform assessments. In this research, by applying a machine learning algorithm to a long period of water-attribute data, we can generate a decision tree that predicts the next day's water quality in an easy and efficient way. The idea is to combine traditional methods and computer algorithms. Using machine learning algorithms, the assessment of water quality will be far more efficient, and by generating the decision tree, the prediction will be quite accurate. The drawback of machine learning modeling is that the execution takes quite a long time, especially when we employ a more accurate but more time-consuming clustering algorithm. Therefore, we applied a high performance computing (HPC) system to deal with this problem. Up to now, the pilot experiments have achieved very promising preliminary results. The visualized water quality assessment and prediction obtained from this project will be published on an interactive website so that the public and environmental managers can use the information for their decision making.

3. Efficient Implementation of Solvers for Linear Model Predictive Control on Embedded Devices

Frison, Gianluca; Kwame Minde Kufoalor, D.; Imsland, Lars

2014-01-01

This paper proposes a novel approach for the efficient implementation of solvers for linear MPC on embedded devices. The main focus is to explain in detail the approach used to optimize the linear algebra for selected low-power embedded devices, and to show how the high-performance implementation...

4. Linear regressive model structures for estimation and prediction of compartmental diffusive systems

Vries, D; Keesman, K.J.; Zwart, Heiko J.

In input-output relations of (compartmental) diffusive systems, physical parameters appear non-linearly, resulting in the use of (constrained) non-linear parameter estimation techniques with their shortcomings regarding global optimality and computational effort. Given a LTI system in state space

5. Linear regressive model structures for estimation and prediction of compartmental diffusive systems

Vries, D.; Keesman, K.J.; Zwart, H.

2006-01-01

In input-output relations of (compartmental) diffusive systems, physical parameters appear non-linearly, resulting in the use of (constrained) non-linear parameter estimation techniques with their shortcomings regarding global optimality and computational effort. Given a LTI system in state

6. Cancellation of OpAmp virtual ground imperfections by a negative conductance applied to improve RF receiver linearity

Mahrof, D.H.; Klumperink, Eric A.M.; Ru, Z.; Oude Alink, M.S.; Nauta, Bram

2014-01-01

High linearity CMOS radio receivers often exploit linear V-I conversion at RF, followed by passive down-mixing and an OpAmp-based Transimpedance Amplifier at baseband. Due to nonlinearity and finite gain in the OpAmp, virtual ground is imperfect, inducing distortion currents. This paper proposes a

7. Prediction of failures in linear systems with the use of tolerance ranges

1993-01-01

The problem of predicting the technical state of an object can be stated, in the general case, as predicting potential failures on the basis of a quantitative evaluation of the predicted parameters relative to the set of tolerances on those parameters. The main stages of prediction are collecting and preparing source data on the history of the predicted phenomenon, forming a mathematical model of this phenomenon, working out the prediction algorithm, and adopting a solution based on the prediction results. The final two stages of prediction are considered in this article. The proposed prediction algorithm is based on constructing a tolerance range for the error signal between the output coordinates of the system and those of its mathematical model. A conclusion about the possible occurrence of a failure in the system is reached by comparing the tolerance range with the computed confidence interval.
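A toy sketch of that decision rule, with invented tolerances and error samples: build a normal-theory confidence interval for the model-vs-system error signal and flag a possible failure when the interval leaves the tolerance range.

```python
import statistics

def failure_predicted(errors, tol_low, tol_high, z=1.96):
    # 95% confidence interval for the mean error signal
    m = statistics.mean(errors)
    s = statistics.stdev(errors)
    half = z * s / len(errors) ** 0.5
    ci_low, ci_high = m - half, m + half
    # failure is flagged when the interval escapes the tolerance range
    return ci_low < tol_low or ci_high > tol_high

drift_free = [0.10, -0.05, 0.00, 0.08, -0.02]   # error stays near zero
drifting = [1.40, 1.50, 1.60, 1.45, 1.55]       # error has drifted upward
```

With a tolerance range of (-1, 1), the first record passes while the second triggers a failure prediction.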

8. Validation of Lifetime Prediction of IGBT Modules Based on Linear Damage Accumulation by Means of Superimposed Power Cycling Tests

Choi, Ui-Min; Ma, Ke; Blaabjerg, Frede

2018-01-01

In this paper, the lifetime prediction of power device modules based on linear damage accumulation is studied in conjunction with simple mission profiles of converters. Superimposed power cycling conditions, which are called simple mission profiles in this paper, are made based on a lifetime model in respect to junction temperature swing duration. This model has been built based on 39 power cycling test results of 600-V 30-A three-phase-molded IGBT modules. Six tests are performed under three superimposed power cycling conditions using an advanced power cycling test setup. The experimental ... prediction of IGBT modules under power converter applications.

9. On the relevance of the micromechanics approach for predicting the linear viscoelastic behavior of semi-crystalline poly(ethylene)terephtalates (PET)

Diani, J.; Bedoui, F.; Regnier, G.

2008-01-01

The relevance of micromechanics modeling to the linear viscoelastic behavior of semi-crystalline polymers is studied. For this purpose, the linear viscoelastic behaviors of amorphous and semi-crystalline PETs are characterized. Then, two micromechanics modeling methods, which have been proven in a previous work to apply to the PET elastic behavior, are used to predict the viscoelastic behavior of three semi-crystalline PETs. The microstructures of the crystalline PETs are clearly defined using WAXS techniques. Since microstructures and mechanical properties of both constitutive phases (the crystalline and the amorphous) are defined, the simulations are run without adjustable parameters. Results show that the models are unable to reproduce the substantial decrease of viscosity induced by the increase of crystallinity. Unlike the real materials, for moderate crystallinity, both models show materials of viscosity nearly identical to the amorphous material

10. Trends of Abutment-Scour Prediction Equations Applied to 144 Field Sites in South Carolina

2006-01-01

The U.S. Geological Survey conducted a study in cooperation with the Federal Highway Administration in which predicted abutment-scour depths computed with selected predictive equations were compared with field measurements of abutment-scour depth made at 144 bridges in South Carolina. The assessment used five equations published in the Fourth Edition of 'Evaluating Scour at Bridges,' (Hydraulic Engineering Circular 18), including the original Froehlich, the modified Froehlich, the Sturm, the Maryland, and the HIRE equations. An additional unpublished equation also was assessed. Comparisons between predicted and observed scour depths are intended to illustrate general trends and order-of-magnitude differences for the prediction equations. Field measurements were taken during non-flood conditions when the hydraulic conditions that caused the scour generally are unknown. The predicted scour depths are based on hydraulic conditions associated with the 100-year flow at all sites and the flood of record for 35 sites. Comparisons showed that predicted scour depths frequently overpredict observed scour and at times were excessive. The comparison also showed that underprediction occurred, but with less frequency. The performance of these equations indicates that they are poor predictors of abutment-scour depth in South Carolina, and it is probable that poor performance will occur when the equations are applied in other geographic regions. Extensive data and graphs used to compare predicted and observed scour depths in this study were compiled into spreadsheets and are included in digital format with this report. In addition to the equation-comparison data, Water-Surface Profile Model tube-velocity data, soil-boring data, and selected abutment-scour data are included in digital format with this report. The digital database was developed as a resource for future researchers and is especially valuable for evaluating the reasonableness of future equations that may be developed.

11. Augmented chaos-multiple linear regression approach for prediction of wave parameters

M.A. Ghorbani

2017-06-01

The inter-comparisons demonstrated that the Chaos-MLR and pure MLR models yield almost the same accuracy in predicting the significant wave heights and the zero-up-crossing wave periods, whereas the augmented Chaos-MLR model gives better results in terms of prediction accuracy vis-à-vis previous prediction applications to the same case study.

12. An EM Algorithm for Double-Pareto-Lognormal Generalized Linear Model Applied to Heavy-Tailed Insurance Claims

Enrique Calderín-Ojeda

2017-11-01

Generalized linear models might not be appropriate when the probability of extreme events is higher than that implied by the normal distribution. Extending the method for estimating the parameters of a double-Pareto lognormal (DPLN) distribution in Reed and Jorgensen (2004), we develop an EM algorithm for the heavy-tailed double-Pareto-lognormal generalized linear model. The DPLN distribution is obtained as a mixture of a lognormal distribution with a double Pareto distribution. In this paper the associated generalized linear model has its location parameter equal to a linear predictor, which is used to model insurance claim amounts for various data sets. The performance is compared with those of the generalized beta (of the second kind) and lognormal distributions.

13. Confirmation of linear system theory prediction: Changes in Herrnstein's k as a function of changes in reinforcer magnitude.

McDowell, J J; Wood, H M

1984-03-01

Eight human subjects pressed a lever on a range of variable-interval schedules for 0.25 cent to 35.0 cent per reinforcement. Herrnstein's hyperbola described seven of the eight subjects' response-rate data well. For all subjects, the y-asymptote of the hyperbola increased with increasing reinforcer magnitude and its reciprocal was a linear function of the reciprocal of reinforcer magnitude. These results confirm predictions made by linear system theory; they contradict formal properties of Herrnstein's account and of six other mathematical accounts of single-alternative responding.
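The reciprocal relationship reported above can be illustrated numerically. Herrnstein's hyperbola is R = k·r/(r + r_e); taking reciprocals gives 1/R = 1/k + (r_e/k)·(1/r), a straight line in 1/r, so k and r_e fall out of an ordinary linear fit. The rates and parameter values below are synthetic, not the study's data.

```python
def fit_hyperbola(reinforce_rates, response_rates):
    # simple linear regression on the reciprocals of both variables
    xs = [1.0 / r for r in reinforce_rates]
    ys = [1.0 / R for R in response_rates]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    k = 1.0 / intercept        # y-asymptote of the hyperbola
    re = slope * k             # half-saturation reinforcement rate
    return k, re

rates = [10.0, 20.0, 50.0, 100.0, 200.0]       # reinforcements per hour
k_true, re_true = 100.0, 50.0                   # synthetic "true" parameters
responses = [k_true * r / (r + re_true) for r in rates]
k_hat, re_hat = fit_hyperbola(rates, responses)
```

On noiseless synthetic data the reciprocal fit recovers both parameters exactly, which is why the linearity of 1/R in 1/r is such a convenient diagnostic.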

14. The estimation and prediction of the inventories for the liquid and gaseous radwaste systems using the linear regression analysis

Kim, J. Y.; Shin, C. H.; Kim, J. K.; Lee, J. K.; Park, Y. J.

2003-01-01

The variation trends of the inventories for the liquid radwaste system and for the radioactive gas released in containment, and their predicted values according to the operation histories of Yonggwang (YGN) 3 and 4, were analyzed by linear regression analysis. The results show that the inventories of those systems increase linearly with operating history, but the inventories released to the environment are considerably lower than the recommended values based on the FSAR. It is considered that some conservatism was present in the estimation methodology used in the preparation stage of the FSAR.

15. Improved prediction of residue flexibility by embedding optimized amino acid grouping into RSA-based linear models.

Zhang, Hua; Kurgan, Lukasz

2014-12-01

Knowledge of protein flexibility is vital for deciphering the corresponding functional mechanisms. This knowledge would help, for instance, in improving computational drug design and refinement in homology-based modeling. We propose a new predictor of residue flexibility, which is expressed by B-factors, from protein chains that uses local (in the chain) predicted (or native) relative solvent accessibility (RSA) and custom-derived amino acid (AA) alphabets. Our predictor is implemented as a two-stage linear regression model that uses an RSA-based space in a local sequence window in the first stage and a reduced AA pair-based space in the second stage as the inputs. The method has an easy-to-comprehend, explicit linear form in both stages. Particle swarm optimization was used to find an optimal reduced AA alphabet to simplify the input space and improve the prediction performance. The average correlation coefficients between the native and predicted B-factors measured on a large benchmark dataset improve from 0.65 to 0.67 when using the native RSA values and from 0.55 to 0.57 when using the predicted RSA values. Blind tests performed on two independent datasets show consistent improvements in the average correlation coefficients by a modest value of 0.02 for both native and predicted RSA-based predictions.
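The two-stage structure can be sketched in a few lines of least squares. Everything below is synthetic: the "RSA window" and "AA feature" matrices and the weights generating the target are invented stand-ins, not the paper's features or data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
rsa_window = rng.random((n, 5))      # stand-in for a 5-residue RSA window
aa_feature = rng.random((n, 1))      # stand-in for a reduced-AA-alphabet feature
true_b = rsa_window @ np.array([1.0, 0.5, 2.0, 0.5, 1.0]) + 3.0 * aa_feature[:, 0]

# stage 1: linear regression of B-factors on the RSA window (plus intercept)
X1 = np.hstack([rsa_window, np.ones((n, 1))])
w1, *_ = np.linalg.lstsq(X1, true_b, rcond=None)
resid = true_b - X1 @ w1

# stage 2: regress the stage-1 residuals on the AA-based feature
X2 = np.hstack([aa_feature, np.ones((n, 1))])
w2, *_ = np.linalg.lstsq(X2, resid, rcond=None)

pred = X1 @ w1 + X2 @ w2
corr = np.corrcoef(pred, true_b)[0, 1]
```

Fitting the second stage on residuals keeps both stages as explicit, separately interpretable linear models, which is the property the abstract emphasizes.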

16. Comparison of multiple linear regression, partial least squares and artificial neural networks for prediction of gas chromatographic relative retention times of trimethylsilylated anabolic androgenic steroids.

Fragkaki, A G; Farmaki, E; Thomaidis, N; Tsantili-Kakoulidou, A; Angelis, Y S; Koupparis, M; Georgakopoulos, C

2012-09-21

The comparison among different modelling techniques, such as multiple linear regression, partial least squares and artificial neural networks, has been performed in order to construct and evaluate models for prediction of gas chromatographic relative retention times of trimethylsilylated anabolic androgenic steroids. The performance of the quantitative structure-retention relationship study, using the multiple linear regression and partial least squares techniques, has been previously conducted. In the present study, artificial neural networks models were constructed and used for the prediction of relative retention times of anabolic androgenic steroids, while their efficiency is compared with that of the models derived from the multiple linear regression and partial least squares techniques. For overall ranking of the models, a novel procedure [Trends Anal. Chem. 29 (2010) 101-109] based on sum of ranking differences was applied, which permits the best model to be selected. The suggested models are considered useful for the estimation of relative retention times of designer steroids for which no analytical data are available.
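The sum-of-ranking-differences idea used for model ranking can be sketched directly: rank each model's predictions and a reference ranking, then sum the absolute rank differences (smaller means closer to the reference). Ties are ignored here for brevity; the values are illustrative.

```python
def ranks(values):
    # rank positions of each element (0 = smallest); ties not handled
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, idx in enumerate(order):
        r[idx] = rank
    return r

def srd(model_values, reference_values):
    # sum of absolute ranking differences between model and reference
    return sum(abs(a - b)
               for a, b in zip(ranks(model_values), ranks(reference_values)))
```

A model that orders the cases exactly like the reference scores 0; a fully reversed ordering scores the maximum.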

17. Model predictive control based on reduced order models applied to belt conveyor system.

Chen, Wei; Li, Xin

2016-11-01

In the paper, a model predictive controller based on a reduced-order model is proposed to control a belt conveyor system, which is a complex electro-mechanical system with a long visco-elastic body. Firstly, in order to design a low-order controller, the balanced truncation method is used for belt conveyor model reduction. Secondly, an MPC algorithm based on the reduced-order model of the belt conveyor system is presented. Because of the error bound between the full-order model and the reduced-order model, two Kalman state estimators are applied in the control scheme to achieve better system performance. Finally, the simulation experiments show that the balanced truncation method can significantly reduce the model order with high accuracy, and that model predictive control based on the reduced model performs well in controlling the belt conveyor system.
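The Kalman-estimator ingredient can be shown in its simplest scalar form. This is a generic illustration, not the paper's estimator design: a random-walk state (say, a belt speed held near constant) is tracked through noisy measurements, with invented noise levels.

```python
def kalman_1d(measurements, q=1e-4, r=0.1, x0=0.0, p0=1.0):
    # q: process-noise variance, r: measurement-noise variance
    x, p = x0, p0
    for z in measurements:
        p += q                    # predict: random-walk state model
        k = p / (p + r)           # Kalman gain
        x += k * (z - x)          # update with the measurement innovation
        p *= (1.0 - k)            # posterior variance
    return x

# noisy readings around a true value of 2.0 (units arbitrary)
est = kalman_1d([2.1, 1.9, 2.05, 2.0, 1.95, 2.02])
```

After a few updates the estimate settles near the true value; in the paper's scheme two such estimators (in vector form) reconcile the reduced-order model with the plant.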

18. Performance assessment of a non-linear eddy-viscosity turbulence model applied to the anisotropic wake flow of a low-pressure turbine blade

Vlahostergios, Z.; Sideridis, A.; Yakinthos, K.; Goulas, A.

2012-01-01

Highlights: ► We model the wake flow produced by a LPT blade using a non-linear turbulence model. ► We use two interpolation schemes for the convection terms with different accuracy. ► We investigate the effect of each term of the non-linear constitutive expression. ► The results are compared with available experimental measurements. ► The model predicts the velocity and stress distributions with good accuracy. - Abstract: The wake flow produced by a low-pressure turbine blade is modeled using a non-linear eddy-viscosity turbulence model. The theoretical benefit of using a non-linear eddy-viscosity model is strongly related to its capability of resolving highly anisotropic flows, in contrast to linear turbulence models, which are unable to correctly predict anisotropy. The main aim of the present work is to practically assess the performance of the model by examining its ability to capture the anisotropic behavior of the wake flow, mainly focusing on the measured velocity and Reynolds-stress distributions, and to provide accurate results for the turbulent kinetic energy balance terms. Additionally, the contribution of each term of its non-linear constitutive expression for the Reynolds stresses is investigated, in order to examine their direct effect on the modeling of the wake flow. The assessment is based on experimental measurements carried out by the same group in Thessaloniki, Sideridis et al. (2011). The computational results show that the non-linear eddy-viscosity model is capable of predicting all the flow and turbulence parameters with good accuracy, while it is easy to program in a computer code, thus meeting the expectations of its originators.

19. Polaron effects on the linear and the nonlinear optical absorption coefficients and refractive index changes in cylindrical quantum dots with applied magnetic field

Wu Qingjie; Guo Kangxian; Liu Guanghui; Wu Jinghe

2013-01-01

Polaron effects on the linear and the nonlinear optical absorption coefficients and refractive index changes in cylindrical quantum dots with the radial parabolic potential and the z-direction linear potential with applied magnetic field are theoretically investigated. The optical absorption coefficients and refractive index changes are presented by using the compact-density-matrix approach and iterative method. Numerical calculations are presented for GaAs/AlGaAs. It is found that taking into account the electron-LO-phonon interaction, not only are the linear, the nonlinear and the total optical absorption coefficients and refractive index changes enhanced, but also the total optical absorption coefficients are more sensitive to the incident optical intensity. It is also found that no matter whether the electron-LO-phonon interaction is considered or not, the absorption coefficients and refractive index changes above are strongly dependent on the radial frequency, the magnetic field and the linear potential coefficient.

20. Predicting linear and nonlinear time series with applications in nuclear safeguards and nonproliferation

Burr, T.L.

1994-04-01

This report is a primer on the analysis of both linear and nonlinear time series with applications in nuclear safeguards and nonproliferation. We analyze eight simulated and two real time series using both linear and nonlinear modeling techniques. The theoretical treatment is brief but references to pertinent theory are provided. Forecasting is our main goal. However, because our most common approach is to fit models to the data, we also emphasize checking model adequacy by analyzing forecast errors for serial correlation or nonconstant variance
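The model-adequacy check described above (examining forecast errors for serial correlation) can be illustrated on a simulated series: fit an AR(1) model by least squares, then verify that the one-step forecast errors show little lag-1 autocorrelation. The series is simulated here, not one of the report's ten.

```python
import random

random.seed(1)
phi_true = 0.8
x = [0.0]
for _ in range(500):
    x.append(phi_true * x[-1] + random.gauss(0.0, 1.0))

# least-squares AR(1) coefficient (no intercept): x[t] ≈ phi * x[t-1]
phi_hat = sum(a * b for a, b in zip(x[:-1], x[1:])) / \
          sum(a * a for a in x[:-1])
resid = [b - phi_hat * a for a, b in zip(x[:-1], x[1:])]

def lag1_autocorr(v):
    m = sum(v) / len(v)
    num = sum((v[i] - m) * (v[i + 1] - m) for i in range(len(v) - 1))
    den = sum((u - m) ** 2 for u in v)
    return num / den

r1 = lag1_autocorr(resid)   # near zero when the model is adequate
```

If the residual autocorrelation were large, the fitted model would be missing structure and its forecasts could not be trusted, which is exactly the diagnostic logic the report emphasizes.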

1. Plateletpheresis efficiency and mathematical correction of software-derived platelet yield prediction: A linear regression and ROC modeling approach.

Jaime-Pérez, José Carlos; Jiménez-Castillo, Raúl Alberto; Vázquez-Hernández, Karina Elizabeth; Salazar-Riojas, Rosario; Méndez-Ramírez, Nereida; Gómez-Almaguer, David

2017-10-01

Advances in automated cell separators have improved the efficiency of plateletpheresis and the possibility of obtaining double products (DP). We assessed cell processor accuracy of predicted platelet (PLT) yields with the goal of a better prediction of DP collections. This retrospective proof-of-concept study included 302 plateletpheresis procedures performed on a Trima Accel v6.0 at the apheresis unit of a hematology department. Donor variables, software-predicted yield and actual PLT yield were statistically evaluated. Software prediction was optimized by linear regression analysis and its optimal cut-off to obtain a DP assessed by receiver operating characteristic (ROC) curve modeling. Three hundred and two plateletpheresis procedures were performed; on 271 (89.7%) occasions the donors were men and on 31 (10.3%) women. Pre-donation PLT count had the best direct correlation with actual PLT yield (r = 0.486). A simple correction derived from linear regression analysis accurately corrected this underestimation, and ROC analysis identified a precise cut-off to reliably predict a DP.
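A sketch of the ROC cut-off step, using Youden's J (sensitivity + specificity - 1) as the optimality criterion; the scores and DP labels below are invented, not the study's predicted yields.

```python
def youden_cutoff(scores, labels):
    # return the threshold maximizing Youden's J over candidate cut-offs
    pos = sum(labels)
    neg = len(labels) - pos
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        j = tp / pos - fp / neg   # sensitivity - (1 - specificity)
        if j > best_j:
            best_t, best_j = t, j
    return best_t

# invented corrected-yield scores and whether a DP was actually obtained
cut = youden_cutoff([0.1, 0.2, 0.3, 0.6, 0.7, 0.8], [0, 0, 0, 1, 1, 1])
```

On this toy data the classes separate cleanly, so the selected cut-off sits at the lowest score of the positive class.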

2. Quasi-closed phase forward-backward linear prediction analysis of speech for accurate formant detection and estimation.

Gowda, Dhananjaya; Airaksinen, Manu; Alku, Paavo

2017-09-01

Recently, a quasi-closed phase (QCP) analysis of speech signals for accurate glottal inverse filtering was proposed. However, the QCP analysis which belongs to the family of temporally weighted linear prediction (WLP) methods uses the conventional forward type of sample prediction. This may not be the best choice especially in computing WLP models with a hard-limiting weighting function. A sample selective minimization of the prediction error in WLP reduces the effective number of samples available within a given window frame. To counter this problem, a modified quasi-closed phase forward-backward (QCP-FB) analysis is proposed, wherein each sample is predicted based on its past as well as future samples thereby utilizing the available number of samples more effectively. Formant detection and estimation experiments on synthetic vowels generated using a physical modeling approach as well as natural speech utterances show that the proposed QCP-FB method yields statistically significant improvements over the conventional linear prediction and QCP methods.
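A minimal numeric illustration of the forward-backward idea (an order-1, Burg-style coefficient, not the proposed QCP-FB method): each sample is predicted from its neighbour in both time directions, and the forward and backward squared errors are minimized jointly.

```python
import math

def fb_lp1(x):
    # order-1 forward-backward linear prediction coefficient:
    # minimizes sum of (x[t] - a*x[t-1])^2 + (x[t-1] - a*x[t])^2
    num = 2.0 * sum(x[t] * x[t - 1] for t in range(1, len(x)))
    den = sum(x[t] ** 2 + x[t - 1] ** 2 for t in range(1, len(x)))
    return num / den

signal = [math.cos(0.5 * t) for t in range(200)]
a = fb_lp1(signal)   # for a pure sinusoid of frequency w, a ≈ cos(w)
```

Using both prediction directions doubles the number of error terms per window, which is the sample-efficiency argument the abstract makes for QCP-FB.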

3. Bayesian integration of sensor information and a multivariate dynamic linear model for prediction of dairy cow mastitis.

Jensen, Dan B; Hogeveen, Henk; De Vries, Albert

2016-09-01

Rapid detection of dairy cow mastitis is important so corrective action can be taken as soon as possible. Automatically collected sensor data used to monitor the performance and the health state of the cow could be useful for rapid detection of mastitis while reducing the labor needs for monitoring. The state of the art in combining sensor data to predict clinical mastitis still does not perform well enough to be applied in practice. Our objective was to combine a multivariate dynamic linear model (DLM) with a naïve Bayesian classifier (NBC) in a novel method using sensor and nonsensor data to detect clinical cases of mastitis. We also evaluated reductions in the number of sensors for detecting mastitis. With the DLM, we co-modeled 7 sources of sensor data (milk yield, fat, protein, lactose, conductivity, blood, body weight) collected at each milking for individual cows to produce one-step-ahead forecasts for each sensor. The observations were subsequently categorized according to the errors of the forecasted values and the estimated forecast variance. The categorized sensor data were combined with other data pertaining to the cow (week in milk, parity, mastitis history, somatic cell count category, and season) using Bayes' theorem, which produced a combined probability of the cow having clinical mastitis. If this probability was above a set threshold, the cow was classified as mastitis positive. To illustrate the performance of our method, we used sensor data from 1,003,207 milkings from the University of Florida Dairy Unit collected from 2008 to 2014. Of these, 2,907 milkings were associated with recorded cases of clinical mastitis. Using the DLM/NBC method, we reached an area under the receiver operating characteristic curve of 0.89, with a specificity of 0.81 when the sensitivity was set at 0.80. Specificities with omissions of sensor data ranged from 0.58 to 0.81. These results are comparable to other studies, but differences in data quality, definitions of
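The Bayes'-theorem combination step (the NBC half of the DLM/NBC method) can be sketched with invented likelihoods: each categorized observation contributes P(evidence | mastitis) and P(evidence | healthy), assumed conditionally independent given the cow's state.

```python
def mastitis_posterior(prior, lik_mastitis, lik_healthy):
    # naive-Bayes combination of per-source likelihoods with a prior
    num = prior
    den = 1.0 - prior
    for lm, lh in zip(lik_mastitis, lik_healthy):
        num *= lm
        den *= lh
    return num / (num + den)

# three "alarming" sensor categories, each 8x more likely under mastitis
# (prior and likelihood values are illustrative, not from the study)
p = mastitis_posterior(0.01, [0.8, 0.8, 0.8], [0.1, 0.1, 0.1])
```

Even with a low prior, several concordant sensor deviations drive the posterior well above a practical alarm threshold; with no evidence the posterior stays at the prior.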

4. An Application of Non-Linear Autoregressive Neural Networks to Predict Energy Consumption in Public Buildings

Luis Gonzaga Baca Ruiz

2016-08-01

This paper addresses the problem of energy consumption prediction using neural networks over a set of public buildings. Since energy consumption in the public sector comprises a substantial share of overall consumption, the prediction of such consumption represents a decisive issue in the achievement of energy savings. In our experiments, we use the data provided by an energy consumption monitoring system in a compound of faculties and research centers at the University of Granada, and provide a methodology to predict future energy consumption using nonlinear autoregressive (NAR) and nonlinear autoregressive with exogenous inputs (NARX) neural networks, respectively. Results reveal that NAR and NARX neural networks are both suitable for performing energy consumption prediction, but also that exogenous data may help to improve the accuracy of predictions.

5. High-performance small-scale solvers for linear Model Predictive Control

Frison, Gianluca; Sørensen, Hans Henrik Brandenborg; Dammann, Bernd

2014-01-01

, with the two main research areas of explicit MPC and tailored on-line MPC. State-of-the-art solvers in this second class can outperform optimized linear-algebra libraries (BLAS) only for very small problems, and do not explicitly exploit the hardware capabilities, relying on compilers for that. This approach...

6. INTRODUCTION TO A COMBINED MULTIPLE LINEAR REGRESSION AND ARMA MODELING APPROACH FOR BEACH BACTERIA PREDICTION

Due to the complexity of the processes contributing to beach bacteria concentrations, many researchers rely on statistical modeling, among which multiple linear regression (MLR) modeling is most widely used. Despite its ease of use and interpretation, there may be time dependence...

7. Predicting Longitudinal Change in Language Production and Comprehension in Individuals with Down Syndrome: Hierarchical Linear Modeling.

Chapman, Robin S.; Hesketh, Linda J.; Kistler, Doris J.

2002-01-01

Longitudinal change in syntax comprehension and production skill, measured over six years, was modeled in 31 individuals (ages 5-20) with Down syndrome. The best fitting Hierarchical Linear Modeling model of comprehension uses age and visual and auditory short-term memory as predictors of initial status, and age for growth trajectory. (Contains…

8. Linking linear programming and spatial simulation models to predict landscape effects of forest management alternatives

Eric J. Gustafson; L. Jay Roberts; Larry A. Leefers

2006-01-01

Forest management planners require analytical tools to assess the effects of alternative strategies on the sometimes disparate benefits from forests such as timber production and wildlife habitat. We assessed the spatial patterns of alternative management strategies by linking two models that were developed for different purposes. We used a linear programming model (...

9. Applying a radiomics approach to predict prognosis of lung cancer patients

Emaminejad, Nastaran; Yan, Shiju; Wang, Yunzhi; Qian, Wei; Guan, Yubao; Zheng, Bin

2016-03-01

Radiomics is an emerging technology to decode tumor phenotype based on quantitative analysis of image features computed from radiographic images. In this study, we applied Radiomics concept to investigate the association among the CT image features of lung tumors, which are either quantitatively computed or subjectively rated by radiologists, and two genomic biomarkers namely, protein expression of the excision repair cross-complementing 1 (ERCC1) genes and a regulatory subunit of ribonucleotide reductase (RRM1), in predicting disease-free survival (DFS) of lung cancer patients after surgery. An image dataset involving 94 patients was used. Among them, 20 had cancer recurrence within 3 years, while 74 patients remained DFS. After tumor segmentation, 35 image features were computed from CT images. Using the Weka data mining software package, we selected 10 non-redundant image features. Applying a SMOTE algorithm to generate synthetic data to balance case numbers in two DFS ("yes" and "no") groups and a leave-one-case-out training/testing method, we optimized and compared a number of machine learning classifiers using (1) quantitative image (QI) features, (2) subjective rated (SR) features, and (3) genomic biomarkers (GB). Data analyses showed relatively lower correlation among the QI, SR and GB prediction results (with Pearson correlation coefficients 0.5). Among them, using QI yielded the highest performance.

10. A linear regression model for predicting PNW estuarine temperatures in a changing climate

Pacific Northwest coastal regions, estuaries, and associated ecosystems are vulnerable to the potential effects of climate change, especially to changes in nearshore water temperature. While predictive climate models simulate future air temperatures, no such projections exist for...

11. U.S. Army Armament Research, Development and Engineering Center Grain Evaluation Software to Numerically Predict Linear Burn Regression for Solid Propellant Grain Geometries

2017-10-01

Report from the U.S. Army Armament Research, Development and Engineering Center (Munitions Engineering Technology Center, Picatinny Arsenal) describing grain evaluation software to numerically predict linear burn regression for solid propellant grain geometries. Approved for public release; distribution is unlimited.

12. On the predictability of extreme events in records with linear and nonlinear long-range memory: Efficiency and noise robustness

Bogachev, Mikhail I.; Bunde, Armin

2011-06-01

We study the predictability of extreme events in records with linear and nonlinear long-range memory in the presence of additive white noise using two different approaches: (i) the precursory pattern recognition technique (PRT) that exploits solely the information about short-term precursors, and (ii) the return interval approach (RIA) that exploits the long-range memory incorporated in the elapsed time after the last extreme event. We find that the PRT always performs better when only linear memory is present. In the presence of nonlinear memory, both methods demonstrate comparable efficiency in the absence of white noise. When additional white noise is present in the record (which is the case in most observational records), the efficiency of the PRT decreases monotonically with increasing noise level. In contrast, the RIA shows an abrupt transition between a phase of low-level noise, where the prediction is as good as in the absence of noise, and a phase of high-level noise, where the prediction becomes poor. In the phase of low and intermediate noise the RIA predicts considerably better than the PRT, which explains our recent findings in physiological and financial records.

13. On the Use of Linearized Euler Equations in the Prediction of Jet Noise

Mankbadi, Reda R.; Hixon, R.; Shih, S.-H.; Povinelli, L. A.

1995-01-01

Linearized Euler equations are used to simulate supersonic jet noise generation and propagation. Special attention is given to boundary treatment. The resulting solution is stable and nearly free from boundary reflections without the need for artificial dissipation, filtering, or a sponge layer. The computed solution is in good agreement with theory and observation and is much less CPU-intensive as compared to large-eddy simulations.

14. Non-Linear Non Stationary Analysis of Two-Dimensional Time-Series Applied to GRACE Data, Phase II

National Aeronautics and Space Administration — The proposed innovative two-dimensional (2D) empirical mode decomposition (EMD) analysis was applied to NASA's Gravity Recovery and Climate Experiment (GRACE)...

15. Predicting the multi-domain progression of Parkinson's disease: a Bayesian multivariate generalized linear mixed-effect model.

Wang, Ming; Li, Zheng; Lee, Eun Young; Lewis, Mechelle M; Zhang, Lijun; Sterling, Nicholas W; Wagner, Daymond; Eslinger, Paul; Du, Guangwei; Huang, Xuemei

2017-09-25

It is challenging for current statistical models to predict clinical progression of Parkinson's disease (PD) because of the involvement of multiple domains and longitudinal data. Past univariate longitudinal or multivariate analyses from cross-sectional trials have limited power to predict individual outcomes or a single moment. The multivariate generalized linear mixed-effect model (GLMM) under the Bayesian framework was proposed to study multi-domain longitudinal outcomes obtained at baseline, 18, and 36 months. The outcomes included motor, non-motor, and postural instability scores from the MDS-UPDRS, and demographic and standardized clinical data were utilized as covariates. The dynamic prediction was performed for both internal and external subjects using the samples from the posterior distributions of the parameter estimates and random effects, and the predictive accuracy was evaluated based on the root mean square error (RMSE), absolute bias (AB) and the area under the receiver operating characteristic (ROC) curve. First, our prediction model identified clinical data that were differentially associated with motor, non-motor, and postural stability scores. Second, the predictive accuracy of our model for the training data was assessed, and improved prediction was gained, particularly for non-motor scores (RMSE and AB: 2.89 and 2.20), compared to univariate analysis (RMSE and AB: 3.04 and 2.35). Third, individual-level predictions of longitudinal trajectories for the testing data were performed, with ~80% of observed values falling within the 95% credible intervals. Multivariate generalized linear mixed models hold promise for predicting clinical progression of individual outcomes in PD. The data were obtained from Dr. Xuemei Huang's NIH grant R01 NS060722 , part of the NINDS PD Biomarker Program (PDBP). All data were entered within 24 h of collection into the Data Management Repository (DMR), which is publicly available ( https://pdbp.ninds.nih.gov/data-management ).

16. Genomic predictions across Nordic Holstein and Nordic Red using the genomic best linear unbiased prediction model with different genomic relationship matrices.

Zhou, L; Lund, M S; Wang, Y; Su, G

2014-08-01

This study investigated genomic predictions across Nordic Holstein and Nordic Red using various genomic relationship matrices. Different sources of information, such as consistencies of linkage disequilibrium (LD) phase and marker effects, were used to construct the genomic relationship matrices (G-matrices) across these two breeds. A single-trait genomic best linear unbiased prediction (GBLUP) model and a two-trait GBLUP model were used for single-breed and two-breed genomic predictions. The data included 5215 Nordic Holstein bulls and 4361 Nordic Red bulls; the latter breed comprises three populations: Danish Red, Swedish Red and Finnish Ayrshire. The bulls were genotyped with a 50,000-marker SNP chip. Using the two-breed predictions with a joint Nordic Holstein and Nordic Red reference population, accuracies increased slightly for all traits in Nordic Red, but only for some traits in Nordic Holstein. Among the three subpopulations of Nordic Red, accuracies increased more for Danish Red than for Swedish Red and Finnish Ayrshire, because closer genetic relationships exist between Danish Red and Nordic Holstein. Among Danish Red, individuals with higher genomic relationship coefficients with Nordic Holstein showed larger accuracy gains in the two-breed predictions. Weighting the two-breed G-matrices by LD phase consistencies, marker effects or both did not further improve the accuracy of the two-breed predictions. © 2014 Blackwell Verlag GmbH.
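A common single-breed G-matrix construction is VanRaden's method 1; the study combined and weighted such matrices across breeds, but the basic building block and the GBLUP mixed-model equations can be sketched as follows. Everything here is simulated, and the variance ratio `lam` is an assumed value rather than an estimate:

```python
import numpy as np

rng = np.random.default_rng(0)
n_animals, n_snps = 50, 200

# Simulated SNP genotypes coded 0/1/2 copies of the counted allele.
M = rng.integers(0, 3, size=(n_animals, n_snps)).astype(float)
p = M.mean(axis=0) / 2.0                       # observed allele frequencies

# VanRaden method-1 genomic relationship matrix.
Z = M - 2.0 * p
G = Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))
G += 1e-3 * np.eye(n_animals)                  # guard against singularity

# Single-trait GBLUP mixed-model equations for y = X*mu + g + e,
# with lam = sigma_e^2 / sigma_g^2 (assumed here, estimated in practice).
y = rng.normal(size=n_animals)                 # stand-in phenotypes
lam = 1.0
X = np.ones((n_animals, 1))
Ginv = np.linalg.inv(G)
lhs = np.block([[X.T @ X, X.T],
                [X, np.eye(n_animals) + lam * Ginv]])
rhs = np.concatenate([X.T @ y, y])
sol = np.linalg.solve(lhs, rhs)
mu, gebv = sol[0], sol[1:]                     # mean and genomic breeding values
print(gebv.shape)                              # (50,)
```

A two-breed prediction would stack both reference populations into one joint G-matrix, which is where the cross-breed relationship coefficients discussed above enter.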

17. Jellyfish prediction of occurrence from remote sensing data and a non-linear pattern recognition approach

Albajes-Eizagirre, Anton; Romero, Laia; Soria-Frisch, Aureli; Vanhellemont, Quinten

2011-11-01

The impact of jellyfish on human activities has been increasingly reported worldwide in recent years. Sectors such as tourism, water sports and leisure, fisheries and aquaculture are commonly damaged when facing blooms of gelatinous zooplankton. Hence the prediction of the appearance and disappearance of jellyfish on our coasts, which is not fully understood from a biological point of view, has been approached as a pattern recognition problem in the paper presented herein, where a set of potential ecological cues was selected to test their usefulness for prediction. Remote sensing data were used to describe environmental conditions that could support the occurrence of jellyfish blooms, with the aim of capturing physical-biological interactions: forcing, coastal morphology, food availability, and water mass characteristics are some of the variables that seem to exert an effect on jellyfish accumulation on the shoreline, under specific spatial and temporal windows. A data-driven model based on computational intelligence techniques has been designed and implemented to predict jellyfish events on the beach area as a function of environmental conditions. Data from 2009 over the NW Mediterranean continental shelf have been used to train and test this prediction protocol. Standard level 2 products are used from MODIS (NASA OceanColor) and MERIS (ESA - FRS data). The procedure for designing the analysis system can be described as follows. The aforementioned satellite data have been used as the feature set for the performance evaluation. Ground truth has been extracted from visual observations by human agents at different beach sites along the Catalan coast. After collecting the evaluation data set, the performance of different computational intelligence approaches has been compared. The best-performing one in terms of generalization capability has been selected for prediction recall. Different tests have been conducted in order to assess the prediction capability of the

18. Three dimensional force prediction in a model linear brushless dc motor

Moghani, J.S.; Eastham, J.F.; Akmese, R.; Hill-Cottingham, R.J. (Univ. of Bath (United Kingdom). School of Electronic and Electric Engineering)

1994-11-01

Practical results are presented for the three axes forces produced on the primary of a linear brushless dc machine which is supplied from a three-phase delta-modulated inverter. Conditions of both lateral alignment and lateral displacement are considered. Finite element analysis using both two and three dimensional modeling is compared with the practical results. It is shown that a modified two dimensional model is adequate, where it can be used, in the aligned position and that the full three dimensional method gives good results when the machine is axially misaligned.

19. Improving sub-pixel imperviousness change prediction by ensembling heterogeneous non-linear regression models

Drzewiecki Wojciech

2016-12-01

In this work nine non-linear regression models were compared for sub-pixel impervious surface area mapping from Landsat images. The comparison was done in three study areas, both for the accuracy of imperviousness coverage evaluation at individual points in time and for the accuracy of imperviousness change assessment. The performance of individual machine learning algorithms (Cubist, Random Forest, stochastic gradient boosting of regression trees, k-nearest neighbors regression, random k-nearest neighbors regression, Multivariate Adaptive Regression Splines, averaged neural networks, and support vector machines with polynomial and radial kernels) was also compared with the performance of heterogeneous model ensembles constructed from the best models trained using particular techniques.

20. Predicting the aquatic toxicity mode of action using logistic regression and linear discriminant analysis.

Ren, Y Y; Zhou, L C; Yang, L; Liu, P Y; Zhao, B W; Liu, H X

2016-09-01

The paper highlights the use of the logistic regression (LR) method in the construction of acceptable, statistically significant, robust and predictive models for the classification of chemicals according to their aquatic toxic modes of action. The essentials of a reliable model were all considered carefully. The model predictors were selected by stepwise forward linear discriminant analysis (LDA) from a combined pool of experimental data and chemical structure-based descriptors calculated by the CODESSA and DRAGON software packages. Model predictive ability was validated both internally and externally. The applicability domain was checked by the leverage approach to verify prediction reliability. The obtained models are simple and easy to interpret. In general, LR performs much better than LDA and seems to be more attractive for the prediction of the more toxic compounds, i.e. compounds that exhibit excess toxicity versus non-polar narcotic compounds and more reactive compounds versus less reactive compounds. In addition, model fit and regression diagnostics were assessed through the influence plot, which reflects the hat-values, studentized residuals, and Cook's distance statistics of each sample. Overdispersion was also checked for the LR model. The relationships between the descriptors and the aquatic toxic behaviour of the compounds are also discussed.
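The descriptor pools above come from proprietary software, but the two classifiers being compared are standard; a self-contained numpy sketch of Fisher LDA and gradient-descent logistic regression on synthetic two-class data (all data, dimensions, and class separations here are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic two-class "descriptor" data standing in for CODESSA/DRAGON output.
X0 = rng.normal(loc=0.0, size=(100, 4))   # class 0, e.g. non-polar narcotics
X1 = rng.normal(loc=1.5, size=(100, 4))   # class 1, e.g. reactive compounds
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)

# --- Fisher LDA: project onto w = Sw^-1 (mu1 - mu0), threshold at midpoint ---
mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
Sw = np.cov(X0.T) + np.cov(X1.T)          # pooled within-class scatter
w = np.linalg.solve(Sw, mu1 - mu0)
thr = w @ (mu0 + mu1) / 2.0
lda_pred = (X @ w > thr).astype(int)

# --- Logistic regression fitted by gradient ascent on the log-likelihood ---
Xb = np.hstack([np.ones((200, 1)), X])    # prepend intercept column
beta = np.zeros(5)
for _ in range(2000):
    prob = 1.0 / (1.0 + np.exp(-Xb @ beta))
    beta += 0.1 * Xb.T @ (y - prob) / 200
lr_pred = (Xb @ beta > 0).astype(int)

print("LDA accuracy:", (lda_pred == y).mean())
print("LR accuracy:", (lr_pred == y).mean())
```

On Gaussian classes with equal covariances the two methods give near-identical boundaries; the paper's point is that they diverge on real, non-Gaussian descriptor data.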

1. Acquaintance Rape: Applying Crime Scene Analysis to the Prediction of Sexual Recidivism.

Lehmann, Robert J B; Goodwill, Alasdair M; Hanson, R Karl; Dahle, Klaus-Peter

2016-10-01

The aim of the current study was to enhance the assessment and predictive accuracy of risk assessments for sexual offenders by utilizing detailed crime scene analysis (CSA). CSA was conducted on a sample of 247 male acquaintance rapists from Berlin (Germany) using a nonmetric, multidimensional scaling (MDS) Behavioral Thematic Analysis (BTA) approach. The age of the offenders at the time of the index offense ranged from 14 to 64 years (M = 32.3; SD = 11.4). The BTA procedure revealed three behavioral themes of hostility, criminality, and pseudo-intimacy, consistent with previous CSA research on stranger rape. The construct validity of the three themes was demonstrated through correlational analyses with known sexual offending measures and criminal histories. The themes of hostility and pseudo-intimacy were significant predictors of sexual recidivism. In addition, the pseudo-intimacy theme led to a significant increase in the incremental validity of the Static-99 actuarial risk assessment instrument for the prediction of sexual recidivism. The results indicate the potential utility and validity of crime scene behaviors in the applied risk assessment of sexual offenders. © The Author(s) 2015.

2. Clinical usefulness of the clock drawing test applying rasch analysis in predicting of cognitive impairment.

Yoo, Doo Han; Lee, Jae Shin

2016-07-01

[Purpose] This study examined the clinical usefulness of the clock drawing test applying Rasch analysis for predicting the level of cognitive impairment. [Subjects and Methods] A total of 187 stroke patients with cognitive impairment were enrolled in this study. The 187 patients were evaluated with the clock drawing test developed through Rasch analysis along with the mini-mental state examination as the cognitive evaluation tool. An analysis of variance was performed to examine the significance of the mini-mental state examination and the clock drawing test according to the general characteristics of the subjects. Receiver operating characteristic analysis was performed to determine the cutoff point for cognitive impairment and to calculate the sensitivity and specificity values. [Results] The comparison of the clock drawing test with the mini-mental state examination showed significant differences according to gender, age, education, and affected side. A total CDT score of 10.5, which was selected as the cutoff point to identify cognitive impairment, showed sensitivity, specificity, Youden index, positive predictive value, and negative predictive value of 86.4%, 91.5%, 0.8, 95%, and 88.2%, respectively. [Conclusion] The clock drawing test is believed to be useful in assessments and interventions based on its excellent ability to identify cognitive disorders.
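Selecting a cutoff by maximising the Youden index, as done above, amounts to a scan over candidate thresholds on the ROC curve; a minimal numpy sketch with invented scores and labels (the study's actual data are not reproduced here):

```python
import numpy as np

def youden_cutoff(scores, labels):
    """Choose the score cutoff maximising Youden's J = sensitivity + specificity - 1.
    Here a score at or below the cutoff flags impairment (label 1)."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, int)
    best_j, best_cut = -1.0, None
    for cut in np.unique(scores):
        pred = scores <= cut                  # low drawing score -> impaired
        tp = np.sum(pred & (labels == 1))
        fn = np.sum(~pred & (labels == 1))
        tn = np.sum(~pred & (labels == 0))
        fp = np.sum(pred & (labels == 0))
        j = tp / (tp + fn) + tn / (tn + fp) - 1.0
        if j > best_j:
            best_j, best_cut = j, cut
    return best_cut, best_j

# Hypothetical CDT scores; 1 = cognitively impaired, 0 = not impaired.
scores = [4, 6, 7, 9, 10, 11, 12, 13, 14, 15]
labels = [1, 1, 1, 1, 1, 0, 1, 0, 0, 0]
cut, j = youden_cutoff(scores, labels)
print(cut, round(j, 3))                       # 10.0 0.833
```

Sensitivity and specificity at the chosen cutoff then follow directly from the same confusion-matrix counts.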

3. Applying theory of planned behavior to predict exercise maintenance in sarcopenic elderly

Ahmad, Mohamad Hasnan; Shahar, Suzana; Teng, Nur Islami Mohd Fahmi; Manaf, Zahara Abdul; Sakian, Noor Ibrahim Mohd; Omar, Baharudin

2014-01-01

This study aimed to determine the factors associated with exercise behavior based on the theory of planned behavior (TPB) among the sarcopenic elderly people in Cheras, Kuala Lumpur. A total of 65 subjects with mean ages of 67.5±5.2 (men) and 66.1±5.1 (women) years participated in this study. Subjects were divided into two groups: 1) exercise group (n=34; 25 men, nine women); and 2) the control group (n=31; 22 men, nine women). Structural equation modeling, based on TPB components, was applied to determine specific factors that most contribute to and predict actual behavior toward exercise. Based on the TPB’s model, attitude (β=0.60) and perceived behavioral control (β=0.24) were the major predictors of intention to exercise among men at the baseline. Among women, the subjective norm (β=0.82) was the major predictor of intention to perform the exercise at the baseline. After 12 weeks, attitude (men’s, β=0.68; women’s, β=0.24) and subjective norm (men’s, β=0.12; women’s, β=0.87) were the predictors of the intention to perform the exercise. “Feels healthier with exercise” was the specific factor to improve the intention to perform and to maintain exercise behavior in men (β=0.36) and women (β=0.49). “Not motivated to perform exercise” was the main barrier among men’s intention to exercise. The intention to perform the exercise was able to predict actual behavior regarding exercise at the baseline and at 12 weeks of an intervention program. As a conclusion, TPB is a useful model to determine and to predict maintenance of exercise in the sarcopenic elderly. PMID:25258524

4. Identifiability of large-scale non-linear dynamic network models applied to the ADM1-case study.

Nimmegeers, Philippe; Lauwers, Joost; Telen, Dries; Logist, Filip; Impe, Jan Van

2017-06-01

In this work, both the structural and practical identifiability of the Anaerobic Digestion Model no. 1 (ADM1) is investigated, which serves as a relevant case study of large non-linear dynamic network models. The structural identifiability is investigated using the probabilistic algorithm, adapted to deal with the specifics of the case study (i.e., a large-scale non-linear dynamic system of differential and algebraic equations). The practical identifiability is analyzed using a Monte Carlo parameter estimation procedure for a 'non-informative' and 'informative' experiment, which are heuristically designed. The model structure of ADM1 has been modified by replacing parameters by parameter combinations, to provide a generally locally structurally identifiable version of ADM1. This means that in an idealized theoretical situation, the parameters can be estimated accurately. Furthermore, the generally positive structural identifiability results can be explained from the large number of interconnections between the states in the network structure. This interconnectivity, however, is also observed in the parameter estimates, making uncorrelated parameter estimations in practice difficult. Copyright © 2017. Published by Elsevier Inc.

5. Comparison of Damage Models for Predicting the Non-Linear Response of Laminates Under Matrix Dominated Loading Conditions

Schuecker, Clara; Davila, Carlos G.; Rose, Cheryl A.

2010-01-01

Five models for matrix damage in fiber reinforced laminates are evaluated for matrix-dominated loading conditions under plane stress and are compared both qualitatively and quantitatively. The emphasis of this study is on a comparison of the response of embedded plies subjected to a homogeneous stress state. Three of the models are specifically designed for modeling the non-linear response due to distributed matrix cracking under homogeneous loading, and also account for non-linear (shear) behavior prior to the onset of cracking. The remaining two models are localized damage models intended for predicting local failure at stress concentrations. The modeling approaches of distributed vs. localized cracking as well as the different formulations of damage initiation and damage progression are compared and discussed.

6. Integration of Attributes from Non-Linear Characterization of Cardiovascular Time-Series for Prediction of Defibrillation Outcomes.

The timing of defibrillation is mostly at arbitrary intervals during cardio-pulmonary resuscitation (CPR), rather than during intervals when the out-of-hospital cardiac arrest (OOH-CA) patient is physiologically primed for successful countershock. Interruptions to CPR may negatively impact defibrillation success. Multiple defibrillations can be associated with decreased post-resuscitation myocardial function. We hypothesize that a more complete picture of the cardiovascular system can be gained through non-linear dynamics and integration of multiple physiologic measures from biomedical signals. Retrospective analysis of 153 anonymized OOH-CA patients who received at least one defibrillation for ventricular fibrillation (VF) was undertaken. A machine learning model, termed the Multiple Domain Integrative (MDI) model, was developed to predict defibrillation success. We explore the rationale for non-linear dynamics and statistically validate heuristics involved in feature extraction for model development. The performance of MDI is then compared to the amplitude spectrum area (AMSA) technique. 358 defibrillations were evaluated (218 unsuccessful and 140 successful). Non-linear properties (Lyapunov exponent > 0) of the ECG signals indicate a chaotic nature and validate the use of novel non-linear dynamic methods for feature extraction. Classification using MDI yielded an ROC-AUC of 83.2% and accuracy of 78.8% for the model built with ECG data only. Utilizing 10-fold cross-validation, at the 80% specificity level, MDI (74% sensitivity) outperformed AMSA (53.6% sensitivity). At the 90% specificity level, MDI had 68.4% sensitivity while AMSA had 43.3% sensitivity. Integrating available end-tidal carbon dioxide features into MDI, for the available 48 defibrillations, boosted the ROC-AUC to 93.8% and accuracy to 83.3% at 80% sensitivity. At clinically relevant sensitivity thresholds, the MDI provides improved performance as compared to AMSA, yielding fewer unsuccessful defibrillations

7. Explorative methods in linear models

Høskuldsson, Agnar

2004-01-01

The author has developed the H-method of mathematical modeling, which builds up the model by parts, where each part is optimized with respect to prediction. Besides providing better predictions than traditional methods, these methods provide graphic procedures for analyzing different features in data. These graphic methods extend the well-known methods and results of Principal Component Analysis to any linear model. Here the graphic procedures are applied to linear regression and Ridge Regression.

8. Bridging the gap between the linear and nonlinear predictive control: Adaptations for efficient building climate control

Pčolka, M.; Žáčeková, E.; Robinett, R.; Čelikovský, Sergej; Šebek, M.

2016-01-01

Roč. 53, č. 1 (2016), s. 124-138 ISSN 0967-0661 R&D Projects: GA ČR GA13-20433S Institutional support: RVO:67985556 Keywords: Model predictive control * Identification for control * Building climate control Subject RIV: BC - Control Systems Theory Impact factor: 2.602, year: 2016 http://library.utia.cas.cz/separaty/2016/TR/celikovsky-0460306.pdf

9. 10 km running performance predicted by a multiple linear regression model with allometrically adjusted variables.

Abad, Cesar C C; Barros, Ronaldo V; Bertuzzi, Romulo; Gagliardi, João F L; Lima-Silva, Adriano E; Lambert, Mike I; Pires, Flavio O

2016-06-01

The aim of this study was to verify the power of VO2max, peak treadmill running velocity (PTV), and running economy (RE), unadjusted or allometrically adjusted, in predicting 10 km running performance. Eighteen male endurance runners performed: 1) an incremental test to exhaustion to determine VO2max and PTV; 2) a constant submaximal run at 12 km·h-1 on an outdoor track for RE determination; and 3) a 10 km running race. Unadjusted (VO2max, PTV and RE) and adjusted variables (VO2max^0.72, PTV^0.72 and RE^0.60) were investigated through independent multiple regression models to predict 10 km running race time. There were no significant correlations between 10 km running time and either the adjusted or unadjusted VO2max. Significant correlations (p 0.84 and power > 0.88. The allometrically adjusted predictive model was composed of PTV^0.72 and RE^0.60 and explained 83% of the variance in 10 km running time with a standard error of the estimate (SEE) of 1.5 min. The unadjusted model, composed of a single PTV, accounted for 72% of the variance in 10 km running time (SEE of 1.9 min). Both regression models provided powerful estimates of 10 km running time; however, the unadjusted PTV may provide an uncomplicated estimation.
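The allometric adjustment and the multiple regression fit can be sketched with ordinary least squares; all data below are simulated, the exponents 0.72 and 0.60 are taken from the abstract, and the body-mass scaling form is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 18                                        # sample size as in the study; data simulated

body_mass = rng.normal(70, 8, n)              # kg (hypothetical)
ptv = rng.normal(19, 1.5, n)                  # peak treadmill velocity, km/h
re = rng.normal(210, 15, n)                   # running economy, ml/kg/km
# Invented relationship: faster/economical runners finish sooner, plus noise.
time_10k = 45 - 1.2 * (ptv - 19) + 0.02 * (re - 210) + rng.normal(0, 0.8, n)

# Allometric adjustment: express each predictor relative to mass^exponent.
ptv_adj = ptv / body_mass ** 0.72
re_adj = re / body_mass ** 0.60

# Ordinary least squares with both adjusted predictors.
X = np.column_stack([np.ones(n), ptv_adj, re_adj])
beta, *_ = np.linalg.lstsq(X, time_10k, rcond=None)
resid = time_10k - X @ beta
r2 = 1 - np.sum(resid**2) / np.sum((time_10k - time_10k.mean())**2)
see = np.sqrt(np.sum(resid**2) / (n - X.shape[1]))  # standard error of estimate
print(f"R^2 = {r2:.2f}, SEE = {see:.2f} min")
```

With the study's real data this pipeline yields the reported R^2 of 0.83 and SEE of 1.5 min for the adjusted model; the simulated numbers here will differ.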

10. Communication: Predictive partial linearized path integral simulation of condensed phase electron transfer dynamics

Huo, Pengfei; Miller, Thomas F. III; Coker, David F.

2013-01-01

A partial linearized path integral approach is used to calculate the condensed phase electron transfer (ET) rate by directly evaluating the flux-flux/flux-side quantum time correlation functions. We demonstrate for a simple ET model that this approach can reliably capture the transition between non-adiabatic and adiabatic regimes as the electronic coupling is varied, while other commonly used semi-classical methods are less accurate over the broad range of electronic couplings considered. Further, we show that the approach reliably recovers the Marcus turnover as a function of thermodynamic driving force, giving highly accurate rates over four orders of magnitude from the normal to the inverted regimes. We also demonstrate that the approach yields accurate rate estimates over five orders of magnitude of inverse temperature. Finally, the approach outlined here accurately captures the electronic coherence in the flux-flux correlation function that is responsible for the decreased rate in the inverted regime

11. The Linear Predictive Coding (LPC) Method in Hidden Markov Model (HMM) Classification of Arabic Words by Indonesian Speakers

Ririn Kusumawati

2016-05-01

In the classification using Hidden Markov Models, the voice signal is analyzed to find the maximum likelihood among the values that can be recognized. The parameters obtained from the modeling are used for comparison with the speech of Arabic speakers. From the classification test results, the Hidden Markov Model with Linear Predictive Coding feature extraction achieved an average accuracy of 78.6% for test data at a sampling frequency of 8,000 Hz, 80.2% for test data at a sampling frequency of 22,050 Hz, and 79% for test data at a sampling frequency of 44,100 Hz.
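LPC feature extraction fits an all-pole model to each speech frame; a minimal autocorrelation-method implementation with the Levinson-Durbin recursion is sketched below. The decaying-exponential test signal is illustrative only, chosen because its first-order predictor coefficient is known in closed form:

```python
import numpy as np

def lpc(x, order):
    """LPC coefficients by the autocorrelation method (Levinson-Durbin).
    Returns a = [1, a1, ..., a_order] and the final prediction error."""
    x = np.asarray(x, float)
    n = len(x)
    r = np.array([x[: n - k] @ x[k:] for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0], err = 1.0, r[0]
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / err                        # reflection coefficient
        a_new = a.copy()
        for j in range(1, i):
            a_new[j] = a[j] + k * a[i - j]
        a_new[i] = k
        a = a_new
        err *= 1.0 - k * k
    return a, err

# A decaying exponential behaves like an AR(1) signal with pole at 0.9,
# so the first-order predictor coefficient comes out near -0.9.
a, err = lpc(0.9 ** np.arange(200), order=1)
print(a)                                      # a ≈ [1, -0.9]
```

In a full recognizer, frames of the speech signal are windowed, LPC (or cepstral) coefficients are extracted per frame, and the resulting feature sequences are scored against per-word HMMs.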

12. Real-time prediction of extreme ambient carbon monoxide concentrations due to vehicular exhaust emissions using univariate linear stochastic models

Sharma, P.; Khare, M.

2000-01-01

Historical data of the carbon monoxide (CO) concentration time series were analysed using the Box-Jenkins modelling approach. Univariate Linear Stochastic Models (ULSMs) were developed to examine the degree of prediction possible in situations where only a limited data set, restricted to the past record of pollutant data, is available. The developed models can be used to provide short-term, real-time forecasts of extreme CO concentrations for an Air Quality Control Region (AQCR) comprising a major traffic intersection in a Central Business District of Delhi City, India. (author)
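In the purely autoregressive case, a univariate linear stochastic model reduces to fitting AR coefficients from the pollutant's own past record and iterating them forward; a numpy sketch on a simulated CO-like series (the AR(1) parameters are invented, not from the study):

```python
import numpy as np

def fit_ar(y, p):
    """Least-squares fit of y_t = c + phi_1*y_(t-1) + ... + phi_p*y_(t-p)."""
    y = np.asarray(y, float)
    X = np.column_stack([np.ones(len(y) - p)] +
                        [y[p - i: len(y) - i] for i in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return coef                                # [c, phi_1, ..., phi_p]

def forecast(y, coef, steps):
    """Iterate the fitted model forward for short-term prediction."""
    y = list(y)
    p = len(coef) - 1
    for _ in range(steps):
        y.append(coef[0] + sum(coef[i] * y[-i] for i in range(1, p + 1)))
    return y[-steps:]

# Simulated hourly CO-like series: AR(1) with intercept 2.0 and phi = 0.8.
rng = np.random.default_rng(4)
co = [10.0]
for _ in range(499):
    co.append(2.0 + 0.8 * co[-1] + rng.normal(0, 0.1))

coef = fit_ar(co, p=1)
print(np.round(coef, 2))                      # close to [2.0, 0.8]
print(forecast(co, coef, steps=3))            # next three predicted values
```

Full Box-Jenkins modelling adds differencing and moving-average terms with diagnostic checking, but the forecasting mechanics are the same recursion shown here.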

13. Development of a predictive model for lead, cadmium and fluorine soil-water partition coefficients using sparse multiple linear regression analysis.

Nakamura, Kengo; Yasutaka, Tetsuo; Kuwatani, Tatsu; Komai, Takeshi

2017-11-01

14. The Comparison Study of Short-Term Prediction Methods to Enhance the Model Predictive Controller Applied to Microgrid Energy Management

César Hernández-Hernández

2017-06-01

Electricity load forecasting, optimal power system operation and energy management play key roles that can bring significant operational advantages to microgrids. This paper studies how methods based on time series and neural networks can be used to predict energy demand and production, allowing them to be combined with model predictive control. Comparisons of different prediction methods and different optimum energy distribution scenarios are provided, permitting us to determine when short-term energy prediction models should be used. The proposed prediction models in addition to the model predictive control strategy appear as a promising solution to energy management in microgrids. The controller has the task of performing the management of electricity purchase and sale to the power grid, maximizing the use of renewable energy sources and managing the use of the energy storage system. Simulations were performed with different weather conditions of solar irradiation. The obtained results are encouraging for future practical implementation.

15. Hybrid model predictive control applied to switching control of burner load for a compact marine boiler design

Solberg, Brian; Andersen, Palle; Maciejowski, Jan

2008-01-01

This paper discusses the application of hybrid model predictive control to control switching between different burner modes in a novel compact marine boiler design. A further purpose of the present work is to point out problems with finite horizon model predictive control applied to systems for w...

16. Structure-Dependent Water-Induced Linear Reduction Model for Predicting Gas Diffusivity and Tortuosity in Repacked and Intact Soil

Møldrup, Per; Chamindu, T. K. K. Deepagoda; Hamamoto, S.

2013-01-01

The soil-gas diffusion is a primary driver of transport, reactions, emissions, and uptake of vadose zone gases, including oxygen, greenhouse gases, fumigants, and spilled volatile organics. The soil-gas diffusion coefficient, Dp, depends not only on soil moisture content, texture, and compaction...... but also on the local-scale variability of these. Different predictive models have been developed to estimate Dp in intact and repacked soil, but clear guidelines for model choice at a given soil state are lacking. In this study, the water-induced linear reduction (WLR) model for repacked soil is made...... air) in repacked soils containing between 0 and 54% clay. With Cm = 2.1, the SWLR model on average gave excellent predictions for 290 intact soils, performing well across soil depths, textures, and compactions (dry bulk densities). The SWLR model generally outperformed similar, simple Dp/Do models...

17. A Family of High-Performance Solvers for Linear Model Predictive Control

Frison, Gianluca; Sokoler, Leo Emil; Jørgensen, John Bagterp

2014-01-01

In Model Predictive Control (MPC), an optimization problem has to be solved at each sampling time, and this has traditionally limited the use of MPC to systems with slow dynamics. In this paper, we propose an efficient solution strategy for the unconstrained sub-problems that give the search-direction in Interior-Point (IP) methods for MPC, and that usually are the computational bottle-neck. This strategy combines a Riccati-like solver with the use of high-performance computing techniques: in particular, in this paper we explore the performance boost given by the use of single precision computation...

18. Development of Bundle Position-Wise Linear Model for Predicting the Pressure Tube Diametral Creep in CANDU Reactors

Lee, Jae Yong; Na, Man Gyun

2011-01-01

Diametral creep of the pressure tube (PT) is one of the principal aging mechanisms governing the heat transfer and hydraulic degradation of a heat transport system. PT diametral creep leads to diametral expansion that affects the thermal hydraulic characteristics of the coolant channels and the critical heat flux. Therefore, it is essential to predict PT diametral creep in CANDU reactors, which is caused mainly by fast neutron irradiation, reactor coolant temperature and so forth. The currently used PT diametral creep prediction model considers the complex interactions between the effects of temperature and fast neutron flux on the deformation of PT zirconium alloys. The model assumes that long-term steady-state deformation consists of separable, additive components from thermal creep, irradiation creep and irradiation growth. This is a mechanistic model based on measured data. However, this model has high prediction uncertainty. Recently, a statistical error modeling method was developed using plant inspection data from the Bruce B CANDU reactor. The aim of this study was to develop a bundle position-wise linear model (BPLM) to predict PT diametral creep employing previously measured PT diameters and HTS operating conditions. There are twelve bundles in a fuel channel, and for each bundle a linear model was developed using dependent variables such as the fast neutron fluxes and the bundle temperatures. The training data set was selected using the subtractive clustering method. The data of 39 channels, which constitute 80 percent of a total of 49 measured channels from Units 2, 3 and 4, were used to develop the BPLM models. The remaining 10 channels' data were used to test the developed BPLM models. The BPLM was optimized by the maximum likelihood estimation method. The developed BPLM to predict PT diametral creep was verified using the operating data gathered from Units 2, 3 and 4 in Korea. Two error components for the BPLM, which are the epistemic
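The per-bundle structure of the BPLM (one linear model per axial bundle position, fitted over channels) can be sketched as follows; the flux and temperature ranges, the linear creep relation, and the noise level are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n_channels, n_bundles = 39, 12                # 39 training channels, 12 bundle positions

# Hypothetical per-bundle covariates: fast-neutron flux (units of 1e17 n/cm^2/s)
# and bundle coolant temperature (degrees C).
flux = rng.uniform(1.0, 4.0, (n_channels, n_bundles))
temp = rng.uniform(520.0, 580.0, (n_channels, n_bundles))

# Synthetic diametral creep (mm), linear in both covariates plus noise.
creep = 0.01 * flux + 0.002 * (temp - 520.0) + rng.normal(0, 0.005, (n_channels, n_bundles))

# One linear model per bundle position, as in a bundle position-wise linear model.
models = []
for b in range(n_bundles):
    X = np.column_stack([np.ones(n_channels), flux[:, b], temp[:, b]])
    beta, *_ = np.linalg.lstsq(X, creep[:, b], rcond=None)
    models.append(beta)                       # [intercept, flux coef, temp coef]

print(len(models))                            # 12 fitted bundle models
```

The actual BPLM was fitted by maximum likelihood with clustering-based training-set selection, which this least-squares sketch does not attempt to reproduce.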

19. Alignment of Electrospun Nanofibers and Prediction of Electrospinning Linear Speed Using a Rotating Jet

M. Khamforoush

2009-12-01

A new and effective electrospinning method has been developed for producing aligned polymer nanofibers. The conventional electrospinning technique has been modified to fabricate nanofibers as a uniaxially aligned array. The key to the success of this technique is the creation of a rotating jet by using a cylindrical collector with the needle tip located at its center. The unique advantage of this method among current methods is the ability of the apparatus to weave nanofibers continuously in uniaxially aligned form. Fibers produced by this method are well aligned, several meters in length, and can be spread over a large area. We have employed a voltage range of 6-16 kV, a collector diameter in the range of 20-50 cm and various concentrations of PAN solutions ranging from 15 wt% to 19 wt%. The electrospun nanofibers could be conveniently formed onto the surface of any thin substrate, such as a glass sampling plate, for subsequent treatments and other applications. Therefore, the linear speed of the electrospinning process is determined experimentally as a function of cylindrical collector diameter, polymer concentration and field potential difference.

20. Design and evaluation of antimalarial peptides derived from prediction of short linear motifs in proteins related to erythrocyte invasion.

Alessandra Bianchin

The purpose of this study was to investigate the blood stage of the malaria-causing parasite, Plasmodium falciparum, to predict potential protein interactions between the parasite merozoite and the host erythrocyte, and to design peptides that could interrupt these predicted interactions. We screened the P. falciparum and human proteomes for computationally predicted short linear motifs (SLiMs) in cytoplasmic portions of transmembrane proteins that could play roles in the invasion of the erythrocyte by the merozoite, an essential step in malarial pathogenesis. We tested thirteen peptides predicted to contain SLiMs, twelve of them palmitoylated to enhance membrane targeting, and found three that blocked parasite growth in culture by inhibiting the initiation of new infections in erythrocytes. Scrambled peptides for two of the most promising peptides suggested that their activity may be reflective of amino acid properties, in particular, positive charge. However, one peptide, derived from human red blood cell glycophorin-B, showed effects which were stronger than those of its scrambled counterparts. We concluded that proteome-wide computational screening of the intracellular regions of both host and pathogen adhesion proteins provides potential lead peptides for the development of anti-malarial compounds.

1. Group spike-and-slab lasso generalized linear models for disease prediction and associated genes detection by incorporating pathway information.

Tang, Zaixiang; Shen, Yueping; Li, Yan; Zhang, Xinyan; Wen, Jia; Qian, Chen'ao; Zhuang, Wenzhuo; Shi, Xinghua; Yi, Nengjun

2018-03-15

Large-scale molecular data have been increasingly used as an important resource for prognostic prediction of diseases and detection of associated genes. However, standard approaches for omics data analysis ignore the group structure among genes encoded in functional relationships or pathway information. We propose new Bayesian hierarchical generalized linear models, called group spike-and-slab lasso GLMs, for predicting disease outcomes and detecting associated genes by incorporating large-scale molecular data and group structures. The proposed model employs a mixture double-exponential prior for coefficients that induces self-adaptive shrinkage amount on different coefficients. The group information is incorporated into the model by setting group-specific parameters. We have developed a fast and stable deterministic algorithm to fit the proposed hierarchical GLMs, which can perform variable selection within groups. We assess the performance of the proposed method on several simulated scenarios, by varying the overlap among groups, group size, number of non-null groups, and the correlation within groups. Compared with existing methods, the proposed method provides not only more accurate estimates of the parameters but also better prediction. We further demonstrate the application of the proposed procedure on three cancer datasets by utilizing pathway structures of genes. Our results show that the proposed method generates powerful models for predicting disease outcomes and detecting associated genes. The methods have been implemented in a freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/). nyi@uab.edu. Supplementary data are available at Bioinformatics online.

2. Applying theory of planned behavior to predict exercise maintenance in sarcopenic elderly

2014-09-01

Full Text Available Mohamad Hasnan Ahmad,1 Suzana Shahar,2 Nur Islami Mohd Fahmi Teng,2 Zahara Abdul Manaf,2 Noor Ibrahim Mohd Sakian,3 Baharudin Omar4 1Centre of Nutrition Epidemiology Research, Institute of Public Health, Ministry of Health, Kuala Lumpur, Malaysia; 2Dietetics Program, 3Occupational Therapy Program, 4Department of Biomedical Sciences, Faculty of Health Sciences, Universiti Kebangsaan Malaysia, Kuala Lumpur, Malaysia Abstract: This study aimed to determine the factors associated with exercise behavior, based on the theory of planned behavior (TPB), among sarcopenic elderly people in Cheras, Kuala Lumpur. A total of 65 subjects with mean ages of 67.5±5.2 (men) and 66.1±5.1 (women) years participated in this study. Subjects were divided into two groups: 1) an exercise group (n=34; 25 men, nine women); and 2) a control group (n=31; 22 men, nine women). Structural equation modeling, based on TPB components, was applied to determine the specific factors that most contribute to and predict actual exercise behavior. Based on the TPB model, attitude (β=0.60) and perceived behavioral control (β=0.24) were the major predictors of intention to exercise among men at baseline. Among women, the subjective norm (β=0.82) was the major predictor of intention to exercise at baseline. After 12 weeks, attitude (men: β=0.68; women: β=0.24) and subjective norm (men: β=0.12; women: β=0.87) were the predictors of the intention to exercise. “Feels healthier with exercise” was the specific factor improving the intention to perform and to maintain exercise behavior in men (β=0.36) and women (β=0.49). “Not motivated to perform exercise” was the main barrier to men's intention to exercise. The intention to exercise was able to predict actual exercise behavior at baseline and at 12 weeks of an intervention program. As a conclusion, TPB is a useful model to determine and

3. Limited Sampling Strategy for Accurate Prediction of Pharmacokinetics of Saroglitazar: A 3-point Linear Regression Model Development and Successful Prediction of Human Exposure.

Joshi, Shuchi N; Srinivas, Nuggehally R; Parmar, Deven V

2018-03-01

Our aim was to develop and validate the extrapolative performance of a regression model using a limited sampling strategy for accurate estimation of the area under the plasma concentration versus time curve for saroglitazar. Healthy-subject pharmacokinetic data from a well-powered food-effect study (fasted vs fed treatments; n = 50) were used in this work. The first 25 subjects' serial plasma concentration data up to 72 hours and corresponding AUC0-t (ie, 72 hours) from the fasting group comprised a training dataset to develop the limited sampling model. The internal datasets for prediction included the remaining 25 subjects from the fasting group and all 50 subjects from the fed condition of the same study. The external datasets included pharmacokinetic data for saroglitazar from previous single-dose clinical studies. Limited sampling models correlated 1, 2, and 3 concentration-time points with the AUC0-t of saroglitazar. Only models with regression coefficients (R²) >0.90 were screened for further evaluation. The best R² model was validated for its utility based on mean prediction error, mean absolute prediction error, and root mean square error. Both correlation between predicted and observed AUC0-t of saroglitazar and verification of precision and bias using a Bland-Altman plot were carried out. None of the evaluated 1- and 2-concentration-time-point models achieved R² > 0.90. Among the various 3-concentration-time-point models, only 4 equations passed the predefined criterion of R² > 0.90. Limited sampling models with time points 0.5, 2, and 8 hours (R² = 0.9323) and 0.75, 2, and 8 hours (R² = 0.9375) were validated. Mean prediction error, mean absolute prediction error, and root mean square error were prediction of saroglitazar. The same models, when applied to the AUC0-t prediction of saroglitazar sulfoxide, showed mean prediction error, mean absolute prediction error, and root mean square error model predicts the exposure of
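The limited sampling idea reduces to an ordinary least-squares fit of AUC0-t against a few concentration-time points. The sketch below uses the 0.5, 2, and 8 hour sampling times from the validated model, but the concentration and AUC values are entirely hypothetical training data, not the study's.

```python
import numpy as np

# Hypothetical training data: plasma concentrations at 0.5, 2 and 8 h
# (one row per subject) and each subject's observed AUC0-t.
C = np.array([[1.2, 3.4, 0.9],
              [0.8, 2.9, 0.7],
              [1.5, 4.1, 1.1],
              [1.0, 3.0, 0.8],
              [1.3, 3.8, 1.0]])
auc = np.array([21.0, 17.5, 25.2, 18.9, 23.4])

# Fit AUC ≈ b0 + b1*C(0.5 h) + b2*C(2 h) + b3*C(8 h) by least squares
X = np.column_stack([np.ones(len(auc)), C])
coef, *_ = np.linalg.lstsq(X, auc, rcond=None)

def predict_auc(c05, c2, c8):
    # Apply the fitted 3-point limited sampling equation
    return coef[0] + coef[1] * c05 + coef[2] * c2 + coef[3] * c8
```

In practice the fitted equation would then be validated on held-out subjects via mean prediction error, mean absolute prediction error, and root mean square error, as the abstract describes.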

4. Adaptive LINE-P: An Adaptive Linear Energy Prediction Model for Wireless Sensor Network Nodes.

Ahmed, Faisal; Tamberg, Gert; Le Moullec, Yannick; Annus, Paul

2018-04-05

In the context of wireless sensor networks, energy prediction models are increasingly useful tools that can facilitate the power management of wireless sensor network (WSN) nodes. However, most existing models suffer from a so-called fixed weighting parameter, which limits their applicability when it comes to, e.g., solar energy harvesters with varying characteristics. Thus, in this article we propose the Adaptive LINE-P (all cases) model, which calculates adaptive weighting parameters based on the stored energy profiles. Furthermore, we also present a profile compression method to reduce the memory requirements. To determine the performance of our proposed model, we have used real data for the solar and wind energy profiles. The simulation results show that our model achieves 90-94% accuracy and that the compression method reduces memory overhead by 50% compared with state-of-the-art models.
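As a rough illustration of adaptive weighting (a toy sketch, not the published LINE-P formulation), one can blend the most recent stored profile with the mean of all stored profiles and adapt the weight according to which component tracked the last profile better:

```python
def adaptive_energy_predict(profile_history, alpha=0.5):
    """Toy adaptive linear energy predictor (illustrative only).

    profile_history: list of equal-length energy profiles (one per day).
    Returns a predicted next profile as an adaptively weighted mix of the
    most recent profile and the mean of all stored profiles.
    """
    mean_profile = [sum(vals) / len(vals) for vals in zip(*profile_history)]
    last = profile_history[-1]
    if len(profile_history) >= 2:
        prev = profile_history[-2]
        # How well did "repeat yesterday" vs. "use the mean" track the last profile?
        err_persist = sum(abs(a - b) for a, b in zip(last, prev))
        err_mean = sum(abs(a - b) for a, b in zip(last, mean_profile))
        total = err_persist + err_mean
        if total:
            alpha = err_mean / total  # lean toward the better-tracking component
    return [alpha * l + (1 - alpha) * m for l, m in zip(last, mean_profile)]
```

The point of adapting the weight from the stored profiles is exactly what distinguishes the model from fixed-weight predictors criticized in the abstract.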

5. Confirmation of linear system theory prediction: Rate of change of Herrnstein's κ as a function of response-force requirement

McDowell, J. J; Wood, Helena M.

1985-01-01

Four human subjects worked on all combinations of five variable-interval schedules and five reinforcer magnitudes (¢/reinforcer) in each of two phases of the experiment. In one phase the force requirement on the operandum was low (1 or 11 N) and in the other it was high (25 or 146 N). Estimates of Herrnstein's κ were obtained at each reinforcer magnitude. The results were: (1) response rate was more sensitive to changes in reinforcement rate at the high than at the low force requirement, (2) κ increased from the beginning to the end of the magnitude range for all subjects at both force requirements, (3) the reciprocal of κ was a linear function of the reciprocal of reinforcer magnitude for seven of the eight data sets, and (4) the rate of change of κ was greater at the high than at the low force requirement by an order of magnitude or more. The second and third findings confirm predictions made by linear system theory, and replicate the results of an earlier experiment (McDowell & Wood, 1984). The fourth finding confirms a further prediction of the theory and supports the theory's interpretation of conflicting data on the constancy of Herrnstein's κ. PMID:16812408

6. Confirmation of linear system theory prediction: Rate of change of Herrnstein's kappa as a function of response-force requirement.

McDowell, J J; Wood, H M

1985-01-01

Four human subjects worked on all combinations of five variable-interval schedules and five reinforcer magnitudes (¢/reinforcer) in each of two phases of the experiment. In one phase the force requirement on the operandum was low (1 or 11 N) and in the other it was high (25 or 146 N). Estimates of Herrnstein's kappa were obtained at each reinforcer magnitude. The results were: (1) response rate was more sensitive to changes in reinforcement rate at the high than at the low force requirement, (2) kappa increased from the beginning to the end of the magnitude range for all subjects at both force requirements, (3) the reciprocal of kappa was a linear function of the reciprocal of reinforcer magnitude for seven of the eight data sets, and (4) the rate of change of kappa was greater at the high than at the low force requirement by an order of magnitude or more. The second and third findings confirm predictions made by linear system theory, and replicate the results of an earlier experiment (McDowell & Wood, 1984). The fourth finding confirms a further prediction of the theory and supports the theory's interpretation of conflicting data on the constancy of Herrnstein's kappa.

7. COMSAT: Residue contact prediction of transmembrane proteins based on support vector machines and mixed integer linear programming.

Zhang, Huiling; Huang, Qingsheng; Bei, Zhendong; Wei, Yanjie; Floudas, Christodoulos A

2016-03-01

In this article, we present COMSAT, a hybrid framework for residue contact prediction of transmembrane (TM) proteins, integrating a support vector machine (SVM) method and a mixed integer linear programming (MILP) method. COMSAT consists of two modules: COMSAT_SVM, which is trained mainly on position-specific scoring matrix features, and COMSAT_MILP, which is an ab initio method based on optimization models. Contacts predicted by the SVM model are ranked by SVM confidence scores, and a threshold is trained to improve the reliability of the predicted contacts. For TM proteins with no contacts above the threshold, COMSAT_MILP is used. The proposed hybrid contact prediction scheme was tested on two independent TM protein sets based on the contact definition of 14 Å between Cα-Cα atoms. First, using a rigorous leave-one-protein-out cross validation on the training set of 90 TM proteins, an accuracy of 66.8%, a coverage of 12.3%, a specificity of 99.3% and a Matthews' correlation coefficient (MCC) of 0.184 were obtained for residue pairs that are at least six amino acids apart. Second, when tested on a test set of 87 TM proteins, the proposed method showed a prediction accuracy of 64.5%, a coverage of 5.3%, a specificity of 99.4% and an MCC of 0.106. COMSAT shows satisfactory results when compared with 12 other state-of-the-art predictors, and is more robust in terms of prediction accuracy as the length and complexity of TM proteins increase. COMSAT is freely accessible at http://hpcc.siat.ac.cn/COMSAT/. © 2016 Wiley Periodicals, Inc.

8. Linear Interaction Energy Based Prediction of Cytochrome P450 1A2 Binding Affinities with Reliability Estimation.

Luigi Capoferri

Full Text Available Prediction of human Cytochrome P450 (CYP) binding affinities of small ligands, i.e., substrates and inhibitors, represents an important task for predicting drug-drug interactions. A quantitative assessment of the ligand binding affinity towards different CYPs can provide an estimate of inhibitory activity or an indication of isoforms prone to interact with the substrate or inhibitor. However, the accuracy of global quantitative models for CYP substrate binding or inhibition based on traditional molecular descriptors can be limited, because of the lack of information on the structure and flexibility of the catalytic site of CYPs. Here we describe the application of a method that combines protein-ligand docking, Molecular Dynamics (MD) simulations and Linear Interaction Energy (LIE) theory to allow for quantitative CYP affinity prediction. Using this combined approach, a LIE model for human CYP 1A2 was developed and evaluated, based on a structurally diverse dataset for which the estimated experimental uncertainty was 3.3 kJ mol-1. For the computed CYP 1A2 binding affinities, the model showed a root mean square error (RMSE) of 4.1 kJ mol-1 and a standard error in prediction (SDEP) in cross-validation of 4.3 kJ mol-1. A novel approach that includes information on both structural ligand description and protein-ligand interaction was developed for estimating the reliability of predictions, and was able to identify compounds from an external test set with an SDEP for the predicted affinities of 4.6 kJ mol-1 (corresponding to 0.8 pKi units).
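The LIE estimate itself is a simple linear combination of MD-averaged interaction-energy differences between the bound and free states. The helper below is a sketch with illustrative default coefficients; α, β, and γ are fitted empirically per model and the values here are not the paper's CYP 1A2 parameters.

```python
def lie_binding_energy(d_vdw, d_elec, alpha=0.18, beta=0.5, gamma=0.0):
    """Linear Interaction Energy estimate of binding free energy (kJ/mol).

    d_vdw, d_elec: differences in the MD-averaged van der Waals and
    electrostatic ligand-surrounding interaction energies between the
    protein-bound and free (solvated) states. alpha, beta, gamma are
    empirical coefficients; the defaults are illustrative assumptions.
    """
    return alpha * d_vdw + beta * d_elec + gamma
```

A more negative result corresponds to stronger predicted binding, which is how such estimates are ranked against experimental affinities.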

9. Chaos in balance: non-linear measures of postural control predict individual variations in visual illusions of motion.

Deborah Apthorp

Full Text Available Visually-induced illusions of self-motion (vection) can be compelling for some people, but they are subject to large individual variations in strength. Do these variations depend, at least in part, on the extent to which people rely on vision to maintain their postural stability? We investigated this by comparing physical posture measures to subjective vection ratings. Using a Bertec balance plate in a brightly-lit room, we measured 13 participants' excursions of the centre of foot pressure (CoP) over a 60-second period with eyes open and with eyes closed during quiet stance. Subsequently, we collected vection strength ratings for large optic flow displays while seated, using both verbal ratings and online throttle measures. We also collected measures of postural sway (changes in anterior-posterior CoP) in response to the same visual motion stimuli while standing on the plate. The magnitude of standing sway in response to expanding optic flow (in comparison to blank fixation periods) was predictive of both verbal and throttle measures for seated vection. In addition, the ratio between eyes-open and eyes-closed CoP excursions during quiet stance (using the area of postural sway) significantly predicted seated vection for both measures. Interestingly, these relationships were weaker for contracting optic flow displays, though these produced both stronger vection and more sway. Next we used a non-linear analysis (recurrence quantification analysis, RQA) of the fluctuations in anterior-posterior position during quiet stance (both with eyes closed and eyes open); this was a much stronger predictor of seated vection for both expanding and contracting stimuli. Given the complex multisensory integration involved in postural control, our study adds to the growing evidence that non-linear measures drawn from complexity theory may provide a more informative measure of postural sway than the conventional linear measures.

10. Applying deep bidirectional LSTM and mixture density network for basketball trajectory prediction

Zhao, Yu; Yang, Rennong; Chevalier, Guillaume; Shah, Rajiv C.; Romijnders, Rob

2018-04-01

Data analytics helps basketball teams to create tactics. However, manual data collection and analytics are costly and ineffective. Therefore, we applied a deep bidirectional long short-term memory (BLSTM) and mixture density network (MDN) approach. This model is not only capable of predicting a basketball trajectory based on real data, but it also can generate new trajectory samples. It is an excellent application to help coaches and players decide when and where to shoot. Its structure is particularly suitable for dealing with time series problems. BLSTM receives forward and backward information at the same time, while stacking multiple BLSTMs further increases the learning ability of the model. Combined with BLSTMs, MDN is used to generate a multi-modal distribution of outputs. Thus, the proposed model can, in principle, represent arbitrary conditional probability distributions of output variables. We tested our model with two experiments on three-pointer datasets from NBA SportVu data. In the hit-or-miss classification experiment, the proposed model outperformed other models in terms of the convergence speed and accuracy. In the trajectory generation experiment, eight model-generated trajectories at a given time closely matched real trajectories.

11. Predicting biopharmaceutical performance of oral drug candidates - Extending the volume to dissolve applied dose concept.

Muenster, Uwe; Mueck, Wolfgang; van der Mey, Dorina; Schlemmer, Karl-Heinz; Greschat-Schade, Susanne; Haerter, Michael; Pelzetter, Christian; Pruemper, Christian; Verlage, Joerg; Göller, Andreas H; Ohm, Andreas

2016-05-01

The purpose of the study was to experimentally deduce pH-dependent critical volumes to dissolve applied dose (VDAD) that determine whether a drug candidate can be developed as immediate release (IR) tablet containing crystalline API, or if solubilization technology is needed to allow for sufficient oral bioavailability. pH-dependent VDADs of 22 and 83 compounds were plotted vs. the relative oral bioavailability (AUC solid vs. AUC solution formulation, Frel) in humans and rats, respectively. Furthermore, in order to investigate to what extent Frel rat may predict issues with solubility limited absorption in human, Frel rat was plotted vs. Frel human. Additionally, the impact of bile salts and lecithin on in vitro dissolution of poorly soluble compounds was tested and data compared to Frel rat and human. Respective in vitro - in vivo and in vivo - in vivo correlations were generated and used to build developability criteria. As a result, based on pH-dependent VDAD, Frel rat and in vitro dissolution in simulated intestinal fluid the IR formulation strategy within Pharmaceutical Research and Development organizations can be already set at late stage of drug discovery. Copyright © 2016 Elsevier B.V. All rights reserved.

12. A model for preemptive maintenance of medical linear accelerators—predictive maintenance

Able, Charles M.; Baydush, Alan H.; Nguyen, Callistus; Gersh, Jacob; Ndlovu, Alois; Rebo, Igor; Booth, Jeremy; Perez, Mario; Sintay, Benjamin; Munley, Michael T.

2016-01-01

Unscheduled accelerator downtime can negatively impact the quality of life of patients during their struggle against cancer. Currently digital data accumulated in the accelerator system is not being exploited in a systematic manner to assist in more efficient deployment of service engineering resources. The purpose of this study is to develop an effective process for detecting unexpected deviations in accelerator system operating parameters and/or performance that predicts component failure or system dysfunction and allows maintenance to be performed prior to the actuation of interlocks. The proposed predictive maintenance (PdM) model is as follows: 1) deliver a daily quality assurance (QA) treatment; 2) automatically transfer and interrogate the resulting log files; 3) once baselines are established, subject daily operating and performance values to statistical process control (SPC) analysis; 4) determine if any alarms have been triggered; and 5) alert facility and system service engineers. A robust volumetric modulated arc QA treatment is delivered to establish mean operating values and perform continuous sampling and monitoring using SPC methodology. Chart limits are calculated using a hybrid technique that includes the use of the standard SPC 3σ limits and an empirical factor based on the parameter/system specification. There are 7 accelerators currently under active surveillance. Currently 45 parameters plus each MLC leaf (120) are analyzed using Individual and Moving Range (I/MR) charts. The initial warning and alarm rule is as follows: warning (2 out of 3 consecutive values ≥ 2σ_hybrid) and alarm (2 out of 3 consecutive values or 3 out of 5 consecutive values ≥ 3σ_hybrid). A customized graphical user interface provides a means to review the SPC charts for each parameter and a visual color code to alert the reviewer of parameter status. Forty-five synthetic errors/changes were introduced to test the effectiveness of our initial chart limits. Forty
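The I/MR chart limits described above can be sketched as follows. This uses the standard SPC estimate sigma ≈ (mean moving range) / 1.128 (the d2 constant for subgroups of size 2) and deliberately omits the paper's empirical hybrid factor:

```python
def imr_limits(values, sigma_mult=3.0):
    """Individual & Moving Range (I/MR) control-chart limits.

    Standard SPC construction: sigma is estimated from the mean moving
    range divided by d2 = 1.128 (subgroup size 2). Returns the lower
    control limit, center line, and upper control limit for the
    Individuals chart. The paper's empirical hybrid widening factor
    is not modeled here.
    """
    n = len(values)
    moving_ranges = [abs(values[i] - values[i - 1]) for i in range(1, n)]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    center = sum(values) / n
    sigma = mr_bar / 1.128
    return center - sigma_mult * sigma, center, center + sigma_mult * sigma
```

With limits in hand, the warning/alarm rules quoted above reduce to counting how many of the last few daily values fall beyond the 2σ and 3σ bands.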

13. A Non-linear Predictive Model of Borderline Personality Disorder Based on Multilayer Perceptron.

Maldonato, Nelson M; Sperandeo, Raffaele; Moretto, Enrico; Dell'Orco, Silvia

2018-01-01

Borderline Personality Disorder is a serious mental disease, classified in Cluster B of the DSM IV-TR personality disorders. People with this syndrome present an anamnesis of traumatic experiences and show dissociative symptoms. Since not all subjects who have been victims of trauma develop Borderline Personality Disorder, the emergence of this serious disease seems to have fragility of character as a predisposing condition. In fact, numerous studies show that subjects positive for a diagnosis of Borderline Personality Disorder had extremely high or extremely low scores on some temperamental dimensions (harm avoidance and reward dependence) and character dimensions (cooperativeness and self-directedness). In a sample of 602 subjects who had consecutive access to an Outpatient Mental Health Service, the presence of Borderline Personality Disorder was evaluated using the semi-structured interview for the DSM IV-TR personality disorders. In this population we assessed the presence of dissociative symptoms with the Dissociative Experiences Scale and the personality traits with the Temperament and Character Inventory developed by Cloninger. To assess the weight and the predictive value of these psychopathological dimensions in relation to the Borderline Personality Disorder diagnosis, a neural network statistical model called a "multilayer perceptron" was implemented. This model was developed with a dichotomous dependent variable, consisting of the presence or absence of the diagnosis of borderline personality disorder, and with five covariates. The first is the taxonomic subscale of the Dissociative Experiences Scale; the others are temperamental and character traits: Novelty-Seeking, Harm-Avoidance, Self-Directedness and Cooperativeness. The statistical model, whose results are satisfactory, showed a significant capacity (89%) to predict the presence of borderline personality disorder. Furthermore, the dissociative symptoms seem to have a greater influence than

14. A Non-linear Predictive Model of Borderline Personality Disorder Based on Multilayer Perceptron

Nelson M. Maldonato

2018-04-01

Full Text Available Borderline Personality Disorder is a serious mental disease, classified in Cluster B of the DSM IV-TR personality disorders. People with this syndrome present an anamnesis of traumatic experiences and show dissociative symptoms. Since not all subjects who have been victims of trauma develop Borderline Personality Disorder, the emergence of this serious disease seems to have fragility of character as a predisposing condition. In fact, numerous studies show that subjects positive for a diagnosis of Borderline Personality Disorder had extremely high or extremely low scores on some temperamental dimensions (harm avoidance and reward dependence) and character dimensions (cooperativeness and self-directedness). In a sample of 602 subjects who had consecutive access to an Outpatient Mental Health Service, the presence of Borderline Personality Disorder was evaluated using the semi-structured interview for the DSM IV-TR personality disorders. In this population we assessed the presence of dissociative symptoms with the Dissociative Experiences Scale and the personality traits with the Temperament and Character Inventory developed by Cloninger. To assess the weight and the predictive value of these psychopathological dimensions in relation to the Borderline Personality Disorder diagnosis, a neural network statistical model called a “multilayer perceptron” was implemented. This model was developed with a dichotomous dependent variable, consisting of the presence or absence of the diagnosis of borderline personality disorder, and with five covariates. The first is the taxonomic subscale of the Dissociative Experiences Scale; the others are temperamental and character traits: Novelty-Seeking, Harm-Avoidance, Self-Directedness and Cooperativeness. The statistical model, whose results are satisfactory, showed a significant capacity (89%) to predict the presence of borderline personality disorder. Furthermore, the dissociative symptoms seem to have a
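At prediction time, a multilayer perceptron of the kind described reduces to a forward pass through sigmoid units. The sketch below is a one-hidden-layer illustration: the five inputs mirror the study's covariates in shape only, and all weights are placeholder values, not the fitted network.

```python
import math

def mlp_predict(x, w_hidden, b_hidden, w_out, b_out):
    """Forward pass of a one-hidden-layer perceptron with sigmoid units.

    x: list of covariate values (here, five, mirroring the study's inputs).
    w_hidden/b_hidden: per-hidden-unit weight vectors and biases.
    w_out/b_out: output-layer weights and bias. All weights used with this
    sketch are illustrative, not the fitted model. Returns the predicted
    probability of a positive diagnosis.
    """
    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
    hidden = [sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
              for w, b in zip(w_hidden, b_hidden)]
    return sigmoid(sum(wo * h for wo, h in zip(w_out, hidden)) + b_out)
```

Training would adjust the weights to maximize classification accuracy on the dichotomous diagnosis variable; the 89% figure quoted above refers to the fitted network's predictive capacity.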

15. A model for preemptive maintenance of medical linear accelerators-predictive maintenance.

Able, Charles M; Baydush, Alan H; Nguyen, Callistus; Gersh, Jacob; Ndlovu, Alois; Rebo, Igor; Booth, Jeremy; Perez, Mario; Sintay, Benjamin; Munley, Michael T

2016-03-10

Unscheduled accelerator downtime can negatively impact the quality of life of patients during their struggle against cancer. Currently digital data accumulated in the accelerator system is not being exploited in a systematic manner to assist in more efficient deployment of service engineering resources. The purpose of this study is to develop an effective process for detecting unexpected deviations in accelerator system operating parameters and/or performance that predicts component failure or system dysfunction and allows maintenance to be performed prior to the actuation of interlocks. The proposed predictive maintenance (PdM) model is as follows: 1) deliver a daily quality assurance (QA) treatment; 2) automatically transfer and interrogate the resulting log files; 3) once baselines are established, subject daily operating and performance values to statistical process control (SPC) analysis; 4) determine if any alarms have been triggered; and 5) alert facility and system service engineers. A robust volumetric modulated arc QA treatment is delivered to establish mean operating values and perform continuous sampling and monitoring using SPC methodology. Chart limits are calculated using a hybrid technique that includes the use of the standard SPC 3σ limits and an empirical factor based on the parameter/system specification. There are 7 accelerators currently under active surveillance. Currently 45 parameters plus each MLC leaf (120) are analyzed using Individual and Moving Range (I/MR) charts. The initial warning and alarm rule is as follows: warning (2 out of 3 consecutive values ≥ 2σ_hybrid) and alarm (2 out of 3 consecutive values or 3 out of 5 consecutive values ≥ 3σ_hybrid). A customized graphical user interface provides a means to review the SPC charts for each parameter and a visual color code to alert the reviewer of parameter status. Forty-five synthetic errors/changes were introduced to test the effectiveness of our initial chart limits. Forty

16. Applied predictive analytics principles and techniques for the professional data analyst

Abbott, Dean

2014-01-01

Learn the art and science of predictive analytics - techniques that get results Predictive analytics is what translates big data into meaningful, usable business information. Written by a leading expert in the field, this guide examines the science of the underlying algorithms as well as the principles and best practices that govern the art of predictive analytics. It clearly explains the theory behind predictive analytics, teaches the methods, principles, and techniques for conducting predictive analytics projects, and offers tips and tricks that are essential for successful predictive mode

17. Explicit/multi-parametric model predictive control (MPC) of linear discrete-time systems by dynamic and multi-parametric programming

Kouramas, K.I.; Faísca, N.P.; Panos, C.; Pistikopoulos, E.N.

2011-01-01

This work presents a new algorithm for solving the explicit/multi-parametric model predictive control (or mp-MPC) problem for linear, time-invariant discrete-time systems, based on dynamic programming and multi-parametric programming techniques.

18. A fast linear predictive adaptive model of packed bed coupled with UASB reactor treating onion waste to produce biofuel.

Milquez-Sanabria, Harvey; Blanco-Cocom, Luis; Alzate-Gaviria, Liliana

2016-10-03

Agro-industrial wastes are an energy source for different industries; however, their use has not reached small industries. Previous and current research on the acidogenic phase of two-phase anaerobic digestion deals particularly with process optimization of acid-phase reactors operating with a wide variety of substrates, both soluble and complex in nature. Mathematical models of anaerobic digestion have been developed to understand and improve the efficient operation of the process. More recently, linear models have been developed, with the advantages of requiring less data, predicting future behavior, and updating when a new set of data becomes available. The aim of this research was to contribute to the reduction of organic solid waste, generate biogas, and develop a simple but accurate mathematical model to predict the behavior of the UASB reactor. The system was kept separate for 14 days, during which hydrolytic and acetogenic bacteria broke down onion waste and produced and accumulated volatile fatty acids. On this day, the two reactors were coupled and the system continued for 16 more days. The biogas and methane yields and volatile solids reduction were 0.6 ± 0.05 m³ (kg VS removed)⁻¹, 0.43 ± 0.06 m³ (kg VS removed)⁻¹ and 83.5 ± 9.8%, respectively. The model gave good predictions of all the defined process parameters; the maximum error between experimental and predicted values was 1.84%, for the alkalinity profile. A linear predictive adaptive model for anaerobic digestion of onion waste in a two-stage process was determined under batch-fed conditions. The organic loading rate (OLR) was kept constant for the entire operation by modifying the hydrolysis-reactor effluent fed to the UASB reactor. This condition avoids intoxication of the UASB reactor and also limits external buffer addition.

19. Predicting health-promoting self-care behaviors in people with pre-diabetes by applying Bandura social learning theory.

Chen, Mei-Fang; Wang, Ruey-Hsia; Hung, Shu-Ling

2015-11-01

The aim of this study was to apply Bandura social learning theory in a model for identifying personal and environmental factors that predict health-promoting self-care behaviors in people with pre-diabetes. The theoretical basis of health-promoting self-care behaviors must be examined to obtain evidence-based knowledge that can help improve the effectiveness of pre-diabetes care. However, such behaviors are rarely studied in people with pre-diabetes. This quantitative, cross-sectional survey study was performed in a convenience sample of two hospitals in southern Taiwan. Two hundred people diagnosed with pre-diabetes at a single health examination center were recruited. A questionnaire survey was performed to collect data regarding personal factors (i.e., participant characteristics, pre-diabetes knowledge, and self-efficacy) and data regarding environmental factors (i.e., social support and perceptions of empowerment process) that may have associations with health-promoting self-care behaviors in people with pre-diabetes. Multiple linear regression showed that the factors that had the largest influence on the practice of health-promoting self-care behaviors were self-efficacy, diabetes history, perceptions of empowerment process, and pre-diabetes knowledge. These factors explained 59.3% of the variance in health-promoting self-care behaviors. To prevent the development of diabetes in people with pre-diabetes, healthcare professionals should consider both the personal and the environmental factors identified in this study when assessing health promoting self-care behaviors in patients with pre-diabetes and when selecting the appropriate interventions. Copyright © 2015 Elsevier Inc. All rights reserved.

20. Applying machine learning to predict patient-specific current CD4 ...

This work shows the application of machine learning to predict current CD4 cell count of an HIV-positive patient using genome sequences, viral load and time. A regression model predicting actual CD4 cell counts and a classification model predicting if a patient's CD4 cell count is less than 200 was built using a support ...

1. Real-time axial motion detection and correction for single photon emission computed tomography using a linear prediction filter

Saba, V.; Setayeshi, S.; Ghannadi-Maragheh, M.

2011-01-01

We have developed an algorithm for real-time detection and complete correction of patient motion effects during single photon emission computed tomography. The algorithm is based on a linear prediction filter (LPC). The new prediction of projection data algorithm (PPDA) detects most motions, such as those of the head, legs, and hands, by comparing the predicted and measured frame data. When the data acquisition for a specific frame is completed, the accuracy of the acquired data is evaluated by the PPDA. If patient motion is detected, the scanning procedure is stopped. After the patient rests in his or her true position, data acquisition is repeated only for the corrupted frame and the scanning procedure is continued. Various experimental data were used to validate the motion detection algorithm; on the whole, the proposed method was tested with approximately 100 test cases. The PPDA shows promising results. Using the PPDA enables us to prevent the scanner from collecting disturbed data during the scan and to replace them with motion-free data by real-time rescanning of the corrupted frames. As a result, the effects of patient motion are corrected in real time. (author)
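The detection step can be sketched as follows: predict the expected frame statistic from recent frames with a simple linear predictor, then flag motion when the measured frame deviates beyond a threshold. Both the fixed predictor weights and the relative threshold below are hypothetical stand-ins for trained linear prediction coefficients:

```python
def predict_next(frames, order=3):
    # Simple fixed-coefficient linear predictor: extrapolate the next
    # frame's summary statistic as a weighted sum of recent frames,
    # favouring newer ones. The weights are illustrative, not trained.
    recent = frames[-order:]
    weights = range(1, len(recent) + 1)
    return sum(w * f for w, f in zip(weights, recent)) / sum(weights)

def motion_detected(frames, measured, threshold=0.2):
    # Flag motion when the measured frame deviates from the prediction
    # by more than a relative threshold (hypothetical criterion).
    predicted = predict_next(frames)
    return abs(measured - predicted) / predicted > threshold
```

On a flagged frame, the scanner would stop, rescan only that frame once the patient is still, and continue, which is the correction loop the abstract describes.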

2. KINETIC-J: A computational kernel for solving the linearized Vlasov equation applied to calculations of the kinetic, configuration space plasma current for time harmonic wave electric fields

Green, David L.; Berry, Lee A.; Simpson, Adam B.; Younkin, Timothy R.

2018-04-01

We present the KINETIC-J code, a computational kernel for evaluating the linearized Vlasov equation with application to calculating the kinetic plasma response (current) to an applied time harmonic wave electric field. This code addresses the need for a configuration space evaluation of the plasma current to enable kinetic full-wave solvers for waves in hot plasmas to move beyond the limitations of the traditional Fourier spectral methods. We benchmark the kernel via comparison with the standard k-space forms of the hot plasma conductivity tensor.

3. Comparison of Principal Component Analysis and Linear Discriminant Analysis applied to classification of excitation-emission matrices of the selected biological material

Maciej Leśkiewicz

2016-03-01

The quality of two linear methods (PCA and LDA) applied to reduce the dimensionality of feature analysis is compared, and the efficiency of their algorithms in classifying selected biological materials according to their excitation-emission fluorescence matrices is examined. It was found that the LDA method reduces the dimensions (the number of significant variables) more effectively than the PCA method. Relatively good discrimination within the examined biological material was obtained with the LDA algorithm. Keywords: Feature Analysis, Fluorescence Spectroscopy, Biological Material Classification
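
For two classes with two features, the Fisher/LDA discriminant direction has a closed form, w ∝ Sw⁻¹(μa − μb), where Sw is the pooled within-class scatter matrix. A self-contained sketch on made-up 2-D data (not the paper's fluorescence matrices):

```python
def lda_direction(class_a, class_b):
    """Fisher discriminant direction for two classes of 2-D points:
    w = Sw^-1 (mu_a - mu_b), with a hand-coded 2x2 inverse."""
    def mean(pts):
        return [sum(p[i] for p in pts) / len(pts) for i in (0, 1)]
    ma, mb = mean(class_a), mean(class_b)
    # Pooled within-class scatter matrix Sw
    S = [[0.0, 0.0], [0.0, 0.0]]
    for pts, mu in ((class_a, ma), (class_b, mb)):
        for p in pts:
            d = [p[0] - mu[0], p[1] - mu[1]]
            S[0][0] += d[0] * d[0]; S[0][1] += d[0] * d[1]
            S[1][0] += d[1] * d[0]; S[1][1] += d[1] * d[1]
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    diff = [ma[0] - mb[0], ma[1] - mb[1]]
    return [(S[1][1] * diff[0] - S[0][1] * diff[1]) / det,
            (-S[1][0] * diff[0] + S[0][0] * diff[1]) / det]

a = [(1.0, 2.0), (2.0, 3.0), (3.0, 3.0)]   # toy class A
b = [(6.0, 5.0), (7.0, 8.0), (8.0, 7.0)]   # toy class B
w = lda_direction(a, b)
proj = lambda p: w[0] * p[0] + w[1] * p[1]
# Projections of the two classes onto w do not overlap for this toy data
print(min(proj(p) for p in a) > max(proj(p) for p in b))  # → True
```

Projecting onto w reduces the two features to a single discriminant variable, which is the dimensionality reduction the abstract credits LDA with.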

4. Transient Vibration Prediction for Rotors on Ball Bearings Using Load-dependent Non-linear Bearing Stiffness

Fleming, David P.; Poplawski, J. V.

2002-01-01

Rolling-element bearing forces vary nonlinearly with bearing deflection. Thus an accurate rotordynamic transient analysis requires bearing forces to be determined at each step of the transient solution. Analyses have been carried out to show the effect of accurate bearing transient forces (accounting for non-linear speed- and load-dependent bearing stiffness) as compared to the conventional use of an average rolling-element bearing stiffness. Bearing forces were calculated by COBRA-AHS (Computer Optimized Ball and Roller Bearing Analysis - Advanced High Speed) and supplied to the rotordynamics code ARDS (Analysis of Rotor Dynamic Systems) for accurate simulation of rotor transient behavior. COBRA-AHS is a fast-running 5 degree-of-freedom computer code able to calculate high speed rolling-element bearing load-displacement data for radial and angular contact ball bearings and also for cylindrical and tapered roller bearings. Results show that use of nonlinear bearing characteristics is essential for accurate prediction of rotordynamic behavior.

5. pKa prediction for acidic phosphorus-containing compounds using multiple linear regression with computational descriptors.

Yu, Donghai; Du, Ruobing; Xiao, Ji-Chang

2016-07-05

Ninety-six acidic phosphorus-containing molecules with pKa values from 1.88 to 6.26 were collected and divided into training and test sets by random sampling. Structural parameters were obtained by density functional theory calculations of the molecules. The relationship between the experimental pKa values and the structural parameters was obtained by multiple linear regression fitting on the training set and evaluated on the test set; the R² values were 0.974 and 0.966 for the training and test sets, respectively. This regression equation, which quantitatively describes the influence of structural parameters on pKa and can be used to predict pKa values of similar structures, is significant for the design of new acidic phosphorus-containing extractants. © 2016 Wiley Periodicals, Inc.
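
Multiple linear regression of the kind used here fits coefficients by solving the normal equations (XᵀX)β = Xᵀy. A minimal self-contained sketch with toy data (not the paper's DFT descriptors):

```python
def ols_fit(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y,
    solved by Gaussian elimination with partial pivoting.
    Each row of X should start with a 1 for the intercept."""
    n, p = len(X), len(X[0])
    A = [[sum(X[k][i] * X[k][j] for k in range(n)) for j in range(p)]
         for i in range(p)]
    b = [sum(X[k][i] * y[k] for k in range(n)) for i in range(p)]
    for col in range(p):                       # forward elimination
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * p                            # back substitution
    for i in range(p - 1, -1, -1):
        beta[i] = (b[i] - sum(A[i][j] * beta[j]
                              for j in range(i + 1, p))) / A[i][i]
    return beta

# Toy data generated exactly from y = 1 + 2*x1 - 0.5*x2 (no noise)
X = [[1, x1, x2] for x1, x2 in [(0, 0), (1, 0), (0, 1), (1, 1), (2, 3)]]
y = [1 + 2 * x1 - 0.5 * x2 for _, x1, x2 in X]
coefs = ols_fit(X, y)
print([round(c, 6) for c in coefs])
```

With noise-free data the fit recovers the generating coefficients [1.0, 2.0, -0.5] up to floating-point error; real descriptor data would of course give an approximate fit with R² < 1.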

6. A comparative in silico linear B-cell epitope prediction and characterization for South American and African Trypanosoma vivax strains.

Guedes, Rafael Lucas Muniz; Rodrigues, Carla Monadeli Filgueira; Coatnoan, Nicolas; Cosson, Alain; Cadioli, Fabiano Antonio; Garcia, Herakles Antonio; Gerber, Alexandra Lehmkuhl; Machado, Rosangela Zacarias; Minoprio, Paola Marcella Camargo; Teixeira, Marta Maria Geraldes; de Vasconcelos, Ana Tereza Ribeiro

2018-02-27

Trypanosoma vivax is a parasite widespread across Africa and South America. Immunological methods using recombinant antigens have been developed aiming at specific and sensitive detection of infections caused by T. vivax. Here, we sequenced for the first time the transcriptome of a virulent T. vivax strain (Lins), isolated from an outbreak of severe disease in South America (Brazil) and performed a computational integrated analysis of genome, transcriptome and in silico predictions to identify and characterize putative linear B-cell epitopes from African and South American T. vivax. A total of 2278, 3936 and 4062 linear B-cell epitopes were respectively characterized for the transcriptomes of T. vivax LIEM-176 (Venezuela), T. vivax IL1392 (Nigeria) and T. vivax Lins (Brazil) and 4684 for the genome of T. vivax Y486 (Nigeria). The results presented are a valuable theoretical source that may pave the way for highly sensitive and specific diagnostic tools. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

7. Applying Least Absolute Shrinkage Selection Operator and Akaike Information Criterion Analysis to Find the Best Multiple Linear Regression Models between Climate Indices and Components of Cow's Milk.

Marami Milani, Mohammad Reza; Hense, Andreas; Rahmani, Elham; Ploeger, Angelika

2016-07-23

This study focuses on multiple linear regression models relating six climate indices (temperature-humidity index THI, environmental stress index ESI, equivalent temperature index ETI, heat load index HLI, modified HLI (HLInew), and respiratory rate predictor RRP) with three main components of cow's milk (yield, fat, and protein) for cows in Iran. The least absolute shrinkage and selection operator (LASSO) and the Akaike information criterion (AIC) are applied to select the best model for the milk predictands with the smallest number of climate predictors. Uncertainty is estimated by bootstrapping through resampling, and cross-validation is used to avoid over-fitting. Climatic parameters are calculated from the NASA-MERRA global atmospheric reanalysis. Milk data for the months April to September, 2002 to 2010, are used. The best linear regression models are found in spring between milk yield as the predictand and THI, ESI, ETI, HLI, and RRP as predictors, with p-value < 0.001 and R² values of 0.50 and 0.49, respectively. In summer, milk yield with the independent variables THI, ETI, and ESI shows the strongest relation (p-value < 0.001, R² = 0.69). For fat and protein the results are only marginal. This method is suggested for impact studies of climate variability/change in the agriculture and food science fields when short time series or data with large uncertainty are available.
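
AIC-based model selection of the kind described here trades goodness of fit against model size. For a Gaussian least-squares fit a common form is AIC = n·ln(RSS/n) + 2k; a sketch with hypothetical residual sums of squares (not the study's actual fits):

```python
import math

def aic(rss, n, k):
    """Akaike information criterion for a Gaussian least-squares fit:
    AIC = n*ln(RSS/n) + 2k, where k counts estimated parameters."""
    return n * math.log(rss / n) + 2 * k

# Hypothetical residual sums of squares for two candidate milk-yield models
n = 50
full = aic(rss=12.0, n=n, k=6)    # e.g. five climate predictors + intercept
small = aic(rss=13.0, n=n, k=3)   # e.g. two predictors + intercept
print(small < full)  # → True: the smaller model wins despite a slightly worse fit
```

The model with the lower AIC is preferred; the 2k penalty is what pushes the selection toward the smallest number of climate predictors.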

8. Prediction of the antimicrobial activity of walnut (Juglans regia L.) kernel aqueous extracts using artificial neural network and multiple linear regression.

Kavuncuoglu, Hatice; Kavuncuoglu, Erhan; Karatas, Seyda Merve; Benli, Büsra; Sagdic, Osman; Yalcin, Hasan

2018-04-09

The mathematical model was established to determine the diameter of inhibition zone of the walnut extract on the twelve bacterial species. Type of extraction, concentration, and pathogens were taken as input variables. Two models were used with the aim of designing this system. One of them was developed with artificial neural networks (ANN), and the other was formed with multiple linear regression (MLR). Four common training algorithms were used. Levenberg-Marquardt (LM), Bayesian regulation (BR), scaled conjugate gradient (SCG) and resilient back propagation (RP) were investigated, and the algorithms were compared. Root mean squared error and correlation coefficient were evaluated as performance criteria. When these criteria were analyzed, ANN showed high prediction performance, while MLR showed low prediction performance. As a result, it is seen that when the different input values are provided to the system developed with ANN, the most accurate inhibition zone (IZ) estimates were obtained. The results of this study could offer new perspectives, particularly in the field of microbiology, because these could be applied to other type of extraction, concentrations, and pathogens, without resorting to experiments. Copyright © 2018 Elsevier B.V. All rights reserved.

9. State space model extraction of thermohydraulic systems – Part II: A linear graph approach applied to a Brayton cycle-based power conversion unit

Uren, Kenneth Richard; Schoor, George van

2013-01-01

This second paper in a two part series presents the application of a developed state space model extraction methodology applied to a Brayton cycle-based PCU (power conversion unit) of a PBMR (pebble bed modular reactor). The goal is to investigate if the state space extraction methodology can cope with larger and more complex thermohydraulic systems. In Part I the state space model extraction methodology for the purpose of control was described in detail and a state space representation was extracted for a U-tube system to illustrate the concept. In this paper a 25th order nonlinear state space representation in terms of the different energy domains is extracted. This state space representation is solved and the responses of a number of important states are compared with results obtained from a PBMR PCU Flownex® model. Flownex® is a validated thermo-fluid simulation software package. The results show that the state space model closely resembles the dynamics of the PBMR PCU. This kind of model may be used for nonlinear MIMO (multi-input, multi-output) types of control strategies. However, there is still a need for linear state space models since many control system design and analysis techniques require a linear state space model. This issue is also addressed in this paper by showing how a linear state space model can be derived from the extracted nonlinear state space model. The linearised state space model is also validated by comparing the state space model to an existing linear Simulink® model of the PBMR PCU system. - Highlights: • State space model extraction of a pebble bed modular reactor PCU (power conversion unit). • A 25th order nonlinear time varying state space model is obtained. • Linearisation of a nonlinear state space model for use in power output control. • Non-minimum phase characteristic that is challenging in terms of control. • Models derived are useful for MIMO control strategies

10. Linear regression

Olive, David J

2017-01-01

This text covers both multiple linear regression and some experimental design models. The text uses the response plot to visualize the model and to detect outliers, does not assume that the error distribution has a known parametric distribution, develops prediction intervals that work when the error distribution is unknown, suggests bootstrap hypothesis tests that may be useful for inference after variable selection, and develops prediction regions and large sample theory for the multivariate linear regression model that has m response variables. A relationship between multivariate prediction regions and confidence regions provides a simple way to bootstrap confidence regions. These confidence regions often provide a practical method for testing hypotheses. There is also a chapter on generalized linear models and generalized additive models. There are many R functions to produce response and residual plots, to simulate prediction intervals and hypothesis tests, to detect outliers, and to choose response trans...

11. Whole-genome regression and prediction methods applied to plant and animal breeding

de los Campos, G.; Hickey, J.M.; Pong-Wong, R.; Daetwyler, H.D.; Calus, M.P.L.

2013-01-01

Genomic-enabled prediction is becoming increasingly important in animal and plant breeding, and is also receiving attention in human genetics. Deriving accurate predictions of complex traits requires implementing whole-genome regression (WGR) models where phenotypes are regressed on thousands of

12. Predicting University Preference and Attendance: Applied Marketing in Higher Education Administration.

Cook, Robert W.; Zallocco, Ronald L.

1983-01-01

A multi-attribute attitude model was used to determine whether a multicriteria scale can be used to predict student preferences for and attendance at universities. Data were gathered from freshmen attending five state universities in Ohio. The results indicate a high level of predictability. (Author/MLW)

13. Robust predictive control strategy applied for propofol dosing using BIS as a controlled variable during anesthesia

Ionescu, Clara A.; De Keyser, Robin; Torrico, Bismark Claure; De Smet, Tom; Struys, Michel M. R. F.; Normey-Rico, Julio E.

This paper presents the application of predictive control to drug dosing during anesthesia in patients undergoing surgery. The performance of a generic predictive control strategy in drug dosing control, with a previously reported anesthesia-specific control algorithm, has been evaluated. The

14. Spatial measurement error and correction by spatial SIMEX in linear regression models when using predicted air pollution exposures.

Alexeeff, Stacey E; Carroll, Raymond J; Coull, Brent

2016-04-01

Spatial modeling of air pollution exposures is widespread in air pollution epidemiology research as a way to improve exposure assessment. However, there are key sources of exposure model uncertainty when air pollution is modeled, including estimation error and model misspecification. We examine the use of predicted air pollution levels in linear health effect models under a measurement error framework. For the prediction of air pollution exposures, we consider a universal Kriging framework, which may include land-use regression terms in the mean function and a spatial covariance structure for the residuals. We derive the bias induced by estimation error and by model misspecification in the exposure model, and we find that a misspecified exposure model can induce asymptotic bias in the effect estimate of air pollution on health. We propose a new spatial simulation extrapolation (SIMEX) procedure, and we demonstrate that the procedure has good performance in correcting this asymptotic bias. We illustrate spatial SIMEX in a study of air pollution and birthweight in Massachusetts. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

15. Prediction of the retention of s-triazines in reversed-phase high-performance liquid chromatography under linear gradient-elution conditions.

D'Archivio, Angelo Antonio; Maggi, Maria Anna; Ruggieri, Fabrizio

2014-08-01

In this paper, a multilayer artificial neural network is used to model simultaneously the effect of solute structure and eluent concentration profile on the retention of s-triazines in reversed-phase high-performance liquid chromatography under linear gradient elution. The retention data of 24 triazines, including common herbicides and their metabolites, are collected under 13 different elution modes, covering the following experimental domain: starting acetonitrile volume fraction ranging between 40 and 60% and gradient slope ranging between 0 and 1% acetonitrile/min. The gradient parameters together with five selected molecular descriptors, identified by quantitative structure-retention relationship modelling applied to individual separation conditions, are the network inputs. Predictive performance of this model is evaluated on six external triazines and four unseen separation conditions. For comparison, retention of triazines is modelled by both quantitative structure-retention relationships and response surface methodology, which describe separately the effect of molecular structure and gradient parameters on the retention. Although applied to a wider variable domain, the network provides a performance comparable to that of the above "local" models and retention times of triazines are modelled with accuracy generally better than 7%. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

16. A generic method for assignment of reliability scores applied to solvent accessibility predictions

Nielsen Morten

2009-07-01

Abstract Background Estimation of the reliability of specific real-value predictions is nontrivial, and the efficacy of such estimates is often questionable. It is important to know whether a given prediction can be trusted, and therefore the best methods associate a prediction with a reliability score or index. For discrete qualitative predictions, the reliability is conventionally estimated as the difference between the output scores of selected classes. Such an approach is not feasible for methods that predict a biological feature as a single real value rather than a classification. As a solution to this challenge, we have implemented a method that predicts the relative surface accessibility of an amino acid and simultaneously predicts the reliability of each prediction, in the form of a Z-score. Results An ensemble of artificial neural networks has been trained on a set of experimentally solved protein structures to predict the relative exposure of the amino acids. The method assigns a reliability score to each surface accessibility prediction as an inherent part of the training process. This is in contrast to the most commonly used procedures, where reliabilities are obtained by post-processing the output. Conclusion The performance of the neural networks was evaluated on a commonly used set of sequences known as the CB513 set. An overall Pearson's correlation coefficient of 0.72 was obtained, which is comparable to the performance of the currently best publicly available method, Real-SPINE. Both methods associate a reliability score with the individual predictions. However, our implementation of reliability scores in the form of a Z-score is shown to be the more informative measure for discriminating good predictions from bad ones in the entire range from completely buried to fully exposed amino acids. This is evident when comparing the Pearson's correlation coefficient for the upper 20% of predictions sorted according to reliability. For this subset, values of 0

17. Best linear unbiased prediction of genomic breeding values using a trait-specific marker-derived relationship matrix.

Zhe Zhang

2010-09-01

With the availability of high-density whole-genome single nucleotide polymorphism chips, genomic selection has become a promising method to estimate genetic merit with potentially high accuracy for animal, plant and aquaculture species of economic importance. With markers covering the entire genome, the genetic merit of genotyped individuals can be predicted directly within the framework of mixed model equations, using a matrix of relationships among individuals that is derived from the markers. Here we extend that approach by deriving a marker-based relationship matrix specifically for the trait of interest. In the framework of mixed model equations, a new best linear unbiased prediction (BLUP) method including a trait-specific relationship matrix (TA) was presented and termed TABLUP. The TA matrix was constructed on the basis of marker genotypes and their weights in relation to the trait of interest. A simulation study with 1,000 individuals as the training population and five successive generations as the candidate population was carried out to validate the proposed method. The proposed TABLUP method outperformed ridge regression BLUP (RRBLUP) and BLUP with a realized relationship matrix (GBLUP). It performed slightly worse than BayesB, with an accuracy of 0.79 in the standard scenario. The proposed TABLUP method is an improvement over the RRBLUP and GBLUP methods. It might be equivalent to the BayesB method, but it has additional benefits, such as the calculation of accuracies for individual breeding values. The results also showed that the TA matrix performs better in predictive ability than the classical numerator relationship matrix and the realized relationship matrix, which are derived solely from pedigree or markers without regard to the trait. This is because the TA matrix not only accounts for the Mendelian sampling term, but also puts greater emphasis on those markers that explain more of the genetic variance in the trait.
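
The contrast between a realized (GBLUP-style) and a trait-weighted (TA-style) relationship matrix comes down to per-marker weights in G[i][j] = Σₖ wₖ·mᵢₖ·mⱼₖ. A minimal sketch with made-up centred genotype codes; real pipelines apply specific centring and scaling conventions omitted here:

```python
def relationship_matrix(M, weights=None):
    """Marker-derived relationship matrix G[i][j] = sum_k w_k * M[i][k] * M[j][k].
    Uniform weights give a GBLUP-like realized matrix; weights reflecting each
    marker's contribution to the trait give a TA-style matrix (sketch only)."""
    n, m = len(M), len(M[0])
    w = weights if weights is not None else [1.0 / m] * m
    return [[sum(w[k] * M[i][k] * M[j][k] for k in range(m))
             for j in range(n)] for i in range(n)]

# Centred genotype codes (-1/0/1) for 3 individuals x 4 markers (illustrative)
M = [[1, -1, 0, 1],
     [1, 0, 1, -1],
     [-1, 1, 0, 1]]
G_uniform = relationship_matrix(M)                      # GBLUP-like
G_trait = relationship_matrix(M, [0.4, 0.3, 0.2, 0.1])  # hypothetical trait weights
print(G_uniform[0][0], G_trait[0][1])
```

Either matrix can then replace the pedigree-based numerator relationship matrix in the mixed model equations; only the weighting differs.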

18. On the comparison of stochastic model predictive control strategies applied to a hydrogen-based microgrid

Velarde, P.; Valverde, L.; Maestre, J. M.; Ocampo-Martinez, C.; Bordons, C.

2017-03-01

In this paper, a performance comparison among three well-known stochastic model predictive control approaches, namely, multi-scenario, tree-based, and chance-constrained model predictive control is presented. To this end, three predictive controllers have been designed and implemented in a real renewable-hydrogen-based microgrid. The experimental set-up includes a PEM electrolyzer, lead-acid batteries, and a PEM fuel cell as main equipment. The real experimental results show significant differences from the plant components, mainly in terms of use of energy, for each implemented technique. Effectiveness, performance, advantages, and disadvantages of these techniques are extensively discussed and analyzed to give some valid criteria when selecting an appropriate stochastic predictive controller.

19. Applying Spatial-Temporal Model and Game Theory to Asymmetric Threat Prediction

Wei, Mo; Chen, Genshe; Cruz, Jr., Jose B; Haynes, Leonard; Kruger, Martin

2007-01-01

... In most Command and Control (C2) applications, existing techniques, such as spatial-temporal point models for ECOA prediction or the Discrete Choice Model (DCM), assume that insurgent attack features...

1. Predictive Maintenance--An Effective Money Saving Tool Being Applied in Industry Today.

Smyth, Tom

2000-01-01

Looks at preventive/predictive maintenance as it is used in industry. Discusses core preventive maintenance tools that must be understood to prepare students. Includes a list of websites related to the topic. (JOW)

2. Predicting Causal Relationships from Biological Data: Applying Automated Causal Discovery on Mass Cytometry Data of Human Immune Cells

Triantafillou, Sofia; Lagani, Vincenzo; Heinze-Deml, Christina; Schmidt, Angelika; Tegner, Jesper; Tsamardinos, Ioannis

2017-01-01

Learning the causal relationships that define a molecular system allows us to predict how the system will respond to different interventions. Distinguishing causality from mere association typically requires randomized experiments. Methods for automated causal discovery from limited experiments exist, but have so far rarely been tested in systems biology applications. In this work, we apply state-of-the art causal discovery methods on a large collection of public mass cytometry data sets, measuring intra-cellular signaling proteins of the human immune system and their response to several perturbations. We show how different experimental conditions can be used to facilitate causal discovery, and apply two fundamental methods that produce context-specific causal predictions. Causal predictions were reproducible across independent data sets from two different studies, but often disagree with the KEGG pathway databases. Within this context, we discuss the caveats we need to overcome for automated causal discovery to become a part of the routine data analysis in systems biology.

3. Predicting Causal Relationships from Biological Data: Applying Automated Causal Discovery on Mass Cytometry Data of Human Immune Cells

Triantafillou, Sofia

2017-03-31

Learning the causal relationships that define a molecular system allows us to predict how the system will respond to different interventions. Distinguishing causality from mere association typically requires randomized experiments. Methods for automated causal discovery from limited experiments exist, but have so far rarely been tested in systems biology applications. In this work, we apply state-of-the art causal discovery methods on a large collection of public mass cytometry data sets, measuring intra-cellular signaling proteins of the human immune system and their response to several perturbations. We show how different experimental conditions can be used to facilitate causal discovery, and apply two fundamental methods that produce context-specific causal predictions. Causal predictions were reproducible across independent data sets from two different studies, but often disagree with the KEGG pathway databases. Within this context, we discuss the caveats we need to overcome for automated causal discovery to become a part of the routine data analysis in systems biology.

4. Predictive modelling of chromium removal using multiple linear and nonlinear regression with special emphasis on operating parameters of bioelectrochemical reactor.

More, Anand Govind; Gupta, Sunil Kumar

2018-03-24

Bioelectrochemical system (BES) is a novel, self-sustaining metal removal technology functioning on the utilization of the chemical energy of organic matter with the help of microorganisms. Experimental trials of a two-chambered BES reactor were conducted with varying substrate concentration using sodium acetate (500 mg/L to 2000 mg/L COD) and different initial chromium concentrations (Cri) (10-100 mg/L) at different cathode pH (pH 1-7). In the current study, mathematical models based on multiple linear regression (MLR) and non-linear regression (NLR) approaches were developed using laboratory experimental data for determining the chromium removal efficiency (CRE) in the cathode chamber of the BES. Substrate concentration, rate of substrate consumption, Cri, pH, temperature and hydraulic retention time (HRT) were the operating process parameters of the reactor considered for development of the proposed models. MLR showed a better correlation coefficient (0.972) as compared to NLR (0.952). Validation of the models using t-test analysis revealed unbiasedness of both models, with the t critical value (2.04) greater than the t-calculated values for MLR (-0.708) and NLR (-0.86). The root-mean-square errors (RMSE) for MLR and NLR were 5.06% and 7.45%, respectively. Comparison between both models suggested MLR to be the best-suited model for predicting the chromium removal behavior using the BES technology and for specifying a set of operating conditions for the BES. Modelling the behavior of CRE will be helpful for scale-up of the BES technology at the industrial level. Copyright © 2018 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.

5. Sensitivity, specificity and predictive values of linear and nonlinear indices of heart rate variability in stable angina patients

Pivatelli Flávio

2012-10-01

Abstract Background Decreased heart rate variability (HRV) is related to higher morbidity and mortality. In this study we evaluated linear and nonlinear indices of HRV in stable angina patients submitted to coronary angiography. Methods We studied 77 unselected patients referred for elective coronary angiography, who were divided into two groups: coronary artery disease (CAD) and non-CAD. For analysis of HRV indices, HRV was recorded beat by beat with the volunteers in the supine position for 40 minutes. We analyzed linear indices in the time domain (SDNN [standard deviation of normal-to-normal intervals], NN50 [total number of adjacent RR intervals with a difference in duration greater than 50 ms] and RMSSD [root mean square of successive differences]) and in the frequency domain: ultra-low frequency (ULF, ≤ 0.003 Hz), very low frequency (VLF, 0.003-0.04 Hz), low frequency (LF, 0.04-0.15 Hz), and high frequency (HF, 0.15-0.40 Hz), as well as the ratio between the LF and HF components (LF/HF). Among the nonlinear indices we evaluated SD1, SD2, SD1/SD2, approximate entropy (ApEn), α1, α2, the Lyapunov exponent, the Hurst exponent, autocorrelation and correlation dimension. Cutoff points of the variables for predictive tests were obtained from the receiver operating characteristic (ROC) curve. The area under the ROC curve was calculated by the extended trapezoidal rule, considering areas under the curve ≥ 0.650 as relevant. Results Coronary artery disease patients presented reduced values of SDNN, RMSSD, NN50, HF, SD1, SD2 and ApEn. HF ≤ 66 ms², RMSSD ≤ 23.9 ms, ApEn ≤ -0.296 and NN50 ≤ 16 presented the best discriminatory power for the presence of significant coronary obstruction. Conclusion We suggest the use of heart rate variability analysis in the linear and nonlinear domains for prognostic purposes in patients with stable angina pectoris, in view of their overall impairment.
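
The time-domain indices named above (SDNN, RMSSD, NN50) are simple to compute from an RR-interval series. A minimal sketch with made-up RR values in milliseconds (population-SD convention for SDNN):

```python
import math

def sdnn(rr):
    """Standard deviation of normal-to-normal RR intervals (ms)."""
    mean = sum(rr) / len(rr)
    return math.sqrt(sum((x - mean) ** 2 for x in rr) / len(rr))

def rmssd(rr):
    """Root mean square of successive RR-interval differences (ms)."""
    diffs = [b - a for a, b in zip(rr, rr[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def nn50(rr):
    """Count of adjacent RR intervals differing by more than 50 ms."""
    return sum(1 for a, b in zip(rr, rr[1:]) if abs(b - a) > 50)

rr = [800, 810, 790, 850, 795, 805]  # illustrative RR series (ms)
print(round(sdnn(rr), 1), round(rmssd(rr), 1), nn50(rr))
```

In practice these would be computed over a long ectopic-free recording (40 minutes in the study), and lower values of all three indices would point toward the CAD group.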

6. A practical approach to parameter estimation applied to model predicting heart rate regulation

Olufsen, Mette; Ottesen, Johnny T.

2013-01-01

Mathematical models have long been used for prediction of dynamics in biological systems. Recently, several efforts have been made to render these models patient specific. One way to do so is to employ techniques to estimate parameters that enable model-based prediction of observed quantities. Knowledge of the variation in parameters within and between groups of subjects has the potential to provide insight into biological function. Often it is not possible to estimate all parameters in a given model, in particular if the model is complex and the data are sparse. However, it may be possible to estimate a subset of model parameters, reducing the complexity of the problem. In this study, we compare three methods that allow identification of parameter subsets that can be estimated given a model and a set of data. These methods will be used to estimate patient-specific parameters in a model predicting...

7. Prediction of SO2 pollution incidents near a power station using partially linear models and an historical matrix of predictor-response vectors

Prada-Sanchez, J.M.; Febrero-Bande, M.; Gonzalez-Manteiga, W. [Universidad de Santiago de Compostela, Dept. de Estadistica e Investigacion Operativa, Santiago de Compostela (Spain); Costos-Yanez, T. [Universidad de Vigo, Dept. de Estadistica e Investigacion Operativa, Orense (Spain); Bermudez-Cela, J.L.; Lucas-Dominguez, T. [Laboratorio, Central Termica de As Pontes, La Coruna (Spain)

2000-07-01

Atmospheric SO2 concentrations at sampling stations near the fossil fuel fired power station at As Pontes (La Coruna, Spain) were predicted using a model for the corresponding time series consisting of a self-explicative term and a linear combination of exogenous variables. In a supplementary simulation study, models of this kind behaved better than the corresponding pure self-explicative or pure linear regression models. (Author)

8. Prediction of SO2 pollution incidents near a power station using partially linear models and an historical matrix of predictor-response vectors

Prada-Sanchez, J.M.; Febrero-Bande, M.; Gonzalez-Manteiga, W.; Costos-Yanez, T.; Bermudez-Cela, J.L.; Lucas-Dominguez, T.

2000-01-01

Atmospheric SO2 concentrations at sampling stations near the fossil fuel fired power station at As Pontes (La Coruna, Spain) were predicted using a model for the corresponding time series consisting of a self-explicative term and a linear combination of exogenous variables. In a supplementary simulation study, models of this kind behaved better than the corresponding pure self-explicative or pure linear regression models. (Author)

9. Applying geographic profiling used in the field of criminology for predicting the nest locations of bumble bees.

Suzuki-Ohno, Yukari; Inoue, Maki N; Ohno, Kazunori

2010-07-21

We tested whether geographic profiling (GP) can predict multiple nest locations of bumble bees. GP was originally developed in the field of criminology for predicting the area where an offender most likely resides on the basis of the actual crime sites and a predefined probability of crime interaction. The predefined probability of crime interaction in the GP model depends on the distance of a site from an offender's residence. We applied GP to predicting nest locations, assuming that foraging sites and nest sites were the crime sites and the offenders' residences, respectively. We identified the foraging and nest sites of the invasive species Bombus terrestris in 2004, 2005, and 2006. We fitted GP model coefficients to the field data of the foraging and nest sites, and used GP with the fitted coefficients. GP succeeded in predicting about 10-30% of actual nests. Sensitivity analysis showed that the predictability of the GP model mainly depended on the coefficient value of the buffer zone, the distance at the mode of the foraging probability. GP should be able to predict the nest locations of bumble bees in other areas by using the coefficient values fitted in this study. It may be possible to further improve the predictability of the GP model by considering food site preference and nest density. (c) 2010 Elsevier Ltd. All rights reserved.
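A minimal sketch of a Rossmo-style GP score, assuming a made-up ring of foraging sites and arbitrary coefficient values (buffer radius B, decay exponents f and g):

```python
import numpy as np

def rossmo_score(grid_xy, sites, B=1.0, f=1.2, g=1.2, phi=0.5):
    """Rossmo-style geographic profile: high score = likely 'residence'
    (here: nest). Inside the buffer radius B the score *decreases*
    toward the site, reflecting the buffer-zone effect."""
    score = np.zeros(len(grid_xy))
    for s in sites:
        d = np.linalg.norm(grid_xy - s, axis=1) + 1e-9
        near = d <= B
        score[near] += (1 - phi) * B ** (g - f) / (2 * B - d[near]) ** g
        score[~near] += phi / d[~near] ** f
    return score

# Hypothetical foraging sites ringed around a true nest at (0, 0)
angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
sites = 3.0 * np.column_stack([np.cos(angles), np.sin(angles)])

xs = np.linspace(-5, 5, 41)
grid = np.array([(x, y) for x in xs for y in xs])
# With B equal to the foraging-ring radius, the score peaks at the nest
best = grid[np.argmax(rossmo_score(grid, sites, B=3.0))]
print(best)
```

The buffer-zone coefficient the sensitivity analysis highlights corresponds to B here: it sets the distance at which the foraging (or offending) probability peaks.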

10. Is Romantic Desire Predictable? Machine Learning Applied to Initial Romantic Attraction.

Joel, Samantha; Eastwick, Paul W; Finkel, Eli J

2017-10-01

Matchmaking companies and theoretical perspectives on close relationships suggest that initial attraction is, to some extent, a product of two people's self-reported traits and preferences. We used machine learning to test how well such measures predict people's overall tendencies to romantically desire other people (actor variance) and to be desired by other people (partner variance), as well as people's desire for specific partners above and beyond actor and partner variance (relationship variance). In two speed-dating studies, romantically unattached individuals completed more than 100 self-report measures about traits and preferences that past researchers have identified as being relevant to mate selection. Each participant met each opposite-sex participant attending a speed-dating event for a 4-min speed date. Random forests models predicted 4% to 18% of actor variance and 7% to 27% of partner variance; crucially, however, they were unable to predict relationship variance using any combination of traits and preferences reported before the dates. These results suggest that compatibility elements of human mating are challenging to predict before two people meet.
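The actor/partner/relationship split the authors try to predict comes from round-robin (Social Relations Model) designs. A bare-bones version of the variance decomposition, with simulated ratings and made-up variance magnitudes, looks like:

```python
import numpy as np

rng = np.random.default_rng(11)
n = 60                                # hypothetical round-robin of 60 daters
actor = rng.normal(0, 1.0, n)         # tendency to desire others
partner = rng.normal(0, 1.5, n)       # tendency to be desired
rel = rng.normal(0, 0.5, (n, n))      # dyad-specific "chemistry"
D = actor[:, None] + partner[None, :] + rel   # D[i, j]: i's desire for j
np.fill_diagonal(D, np.nan)           # no self-ratings

grand = np.nanmean(D)
a_hat = np.nanmean(D, axis=1) - grand     # actor effects (row means)
p_hat = np.nanmean(D, axis=0) - grand     # partner effects (column means)
resid = D - grand - a_hat[:, None] - p_hat[None, :]
comp = np.array([a_hat.var(), p_hat.var(), np.nanvar(resid)])
print((comp / comp.sum()).round(2))   # actor / partner / relationship shares
```

The study's negative result is that traits measured before the dates predicted some actor and partner variance but none of the dyad-specific residual, which is the component computed in the last step above.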

11. Applying a new mammographic imaging marker to predict breast cancer risk

Aghaei, Faranak; Danala, Gopichandh; Hollingsworth, Alan B.; Stoug, Rebecca G.; Pearce, Melanie; Liu, Hong; Zheng, Bin

2018-02-01

Identifying and developing new mammographic imaging markers to assist prediction of breast cancer risk has attracted extensive research interest recently. Although mammographic density is considered an important breast cancer risk factor, it has lower discriminatory power for predicting short-term breast cancer risk, which is a prerequisite to establishing a more effective personalized breast cancer screening paradigm. In this study, we presented a new interactive computer-aided detection (CAD) scheme to generate a new quantitative mammographic imaging marker based on bilateral mammographic tissue density asymmetry to predict the risk of cancer detection in the next subsequent mammography screening. An image database involving 1,397 women was retrospectively assembled and tested. Each woman had two digital mammography screenings, namely the "current" and "prior" screenings, with a time interval from 365 to 600 days. All "prior" images were originally interpreted as negative. In the "current" screenings, these cases were divided into 3 groups: 402 positive, 643 negative, and 352 biopsy-proven benign cases, respectively. There was no significant difference in BIRADS-based mammographic density ratings between the 3 case groups, whereas the new imaging marker was associated with the risk of cancer detection in the "current" screening. The study demonstrated that this new imaging marker has the potential to yield significantly higher discriminatory power to predict short-term breast cancer risk.

12. The relevance of social media as it applies in South Africa to crime prediction

Featherstone, Coral

2013-05-01

Full Text Available … of data gathering, prediction and spotting broader patterns. An assessment is done to determine whether South African people are already using Twitter to report crime and to find out what information they are sharing, with the goal of establishing whether...

13. Prediction of turbulent heat transfer with surface blowing using a non-linear algebraic heat flux model

Bataille, F.; Younis, B.A.; Bellettre, J.; Lallemand, A.

2003-01-01

The paper reports on the prediction of the effects of blowing on the evolution of the thermal and velocity fields in a flat-plate turbulent boundary layer developing over a porous surface. Closure of the time-averaged equations governing the transport of momentum and thermal energy is achieved using a complete Reynolds-stress transport model for the turbulent stresses and a non-linear, algebraic and explicit model for the turbulent heat fluxes. The latter model accounts explicitly for the dependence of the turbulent heat fluxes on the gradients of mean velocity. Results are reported for the case of a heated boundary layer which is first developed into equilibrium over a smooth impervious wall before encountering a porous section through which cooler fluid is continuously injected. Comparisons are made with LDA measurements for an injection rate of 1%. The reduction of the wall shear stress with increase in injection rate is obtained in the calculations, and the computed rates of heat transfer between the hot flow and the wall are found to agree well with the published data

14. Applying mathematical models to predict resident physician performance and alertness on traditional and novel work schedules.

Klerman, Elizabeth B; Beckett, Scott A; Landrigan, Christopher P

2016-09-13

15. Seasonal variation of benzo(a)pyrene in the Spanish airborne PM10. Multivariate linear regression model applied to estimate BaP concentrations.

Callén, M S; López, J M; Mastral, A M

2010-08-15

The estimation of benzo(a)pyrene (BaP) concentrations in ambient air is very important from an environmental point of view, especially with the introduction of Directive 2004/107/EC and due to the carcinogenic character of this pollutant. A sampling campaign of particulate matter less than or equal to 10 microns (PM10), carried out during 2008-2009 in four locations in Spain, was collected to determine BaP concentrations experimentally by gas chromatography with tandem mass spectrometry (GC-MS-MS). Multivariate linear regression models (MLRM) were used to predict BaP air concentrations at two sampling places, taking PM10 and meteorological variables as possible predictors. The model obtained with data from two sampling sites (all-sites model) (R(2)=0.817, PRESS/SSY=0.183) included significant variables such as PM10, temperature, solar radiation and wind speed, and was internally and externally validated. The first validation was performed by cross-validation and the second by BaP concentrations from previous campaigns carried out in Zaragoza from 2001-2004. The proposed model constitutes a first approximation to estimate BaP concentrations in urban atmospheres, with very good internal prediction (Q(CV)(2)=0.813, PRESS/SSY=0.187) and with the maximal external prediction for the 2001-2002 campaign (Q(ext)(2)=0.679, PRESS/SSY=0.321) versus the 2001-2004 campaign (Q(ext)(2)=0.551, PRESS/SSY=0.449). Copyright 2010 Elsevier B.V. All rights reserved.
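The internal-validation statistics quoted above (PRESS/SSY and Q²) can be reproduced for an ordinary least-squares model using the leave-one-out hat-matrix shortcut; the data below are synthetic stand-ins for the PM10 and meteorological predictors:

```python
import numpy as np

def press_q2(X, y):
    """Leave-one-out PRESS and Q^2 for an OLS model, using the hat-matrix
    shortcut e_(-i) = e_i / (1 - h_ii), so no refitting is needed."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    H = X1 @ np.linalg.pinv(X1.T @ X1) @ X1.T   # hat (projection) matrix
    press = np.sum((resid / (1 - np.diag(H))) ** 2)
    ssy = np.sum((y - y.mean()) ** 2)
    return press, 1 - press / ssy               # Q^2 = 1 - PRESS/SSY

rng = np.random.default_rng(1)
n = 80
# Made-up predictors standing in for PM10, temperature, radiation, wind
X = rng.normal(size=(n, 4))
y = X @ np.array([1.5, -0.8, 0.6, -0.3]) + rng.normal(0, 0.5, n)
press, q2 = press_q2(X, y)
print(round(float(q2), 2))
```

Q² computed this way is exactly the cross-validated counterpart of R², which is why the abstract reports both PRESS/SSY and Q(CV)².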

17. Complex terrain wind resource estimation with the wind-atlas method: Prediction errors using linearized and nonlinear CFD micro-scale models

Troen, Ib; Bechmann, Andreas; Kelly, Mark C.

2014-01-01

Using the Wind Atlas methodology to predict the average wind speed at one location from measured climatological wind frequency distributions at another nearby location, we analyse the relative prediction errors using a linearized flow model (IBZ) and a more physically correct, fully non-linear 3D flow model (CFD) for a number of sites in very complex terrain (large terrain slopes). We first briefly describe the Wind Atlas methodology as implemented in WAsP, the specifics of the "classical" model setup, and the new setup allowing the use of the CFD computation engine. We discuss some known …

18. Double linearization theory applied to three-dimensional cascades oscillating under supersonic axial flow condition. Choonsoku jikuryu sokudo de sadosuru sanjigen shindo yokuretsu no niju senkei riron ni yoru hiteijo kukiryoku kaiseki

Toshimitsu, K; Nanba, M [Kyushu University, Fukuoka (Japan). Faculty of Engineering]; Iwai, S [Mitsubishi Heavy Industries, Ltd., Tokyo (Japan)]

1993-11-25

19. Whole-Genome Regression and Prediction Methods Applied to Plant and Animal Breeding

de los Campos, Gustavo; Hickey, John M.; Pong-Wong, Ricardo; Daetwyler, Hans D.; Calus, Mario P. L.

2013-01-01

Genomic-enabled prediction is becoming increasingly important in animal and plant breeding and is also receiving attention in human genetics. Deriving accurate predictions of complex traits requires implementing whole-genome regression (WGR) models where phenotypes are regressed on thousands of markers concurrently. Methods exist that allow implementing these large-p, small-n regressions, and genome-enabled selection (GS) is being implemented in several plant and animal breeding programs. The list of available methods is long, and the relationships between them have not been fully addressed. In this article we provide an overview of available methods for implementing parametric WGR models, discuss selected topics that emerge in applications, and present a general discussion of lessons learned from simulation and empirical data analysis in the last decade. PMID:22745228
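A common WGR baseline for these large-p, small-n settings is ridge regression (SNP-BLUP); the marker matrix, effect sizes, and shrinkage value below are simulated assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
n, p = 60, 500                     # many more markers than phenotypes
M = rng.integers(0, 3, size=(n, p)).astype(float)  # 0/1/2 genotype codes
M -= M.mean(axis=0)                                 # centre marker columns

true_eff = np.zeros(p)
true_eff[rng.choice(p, 20, replace=False)] = rng.normal(0, 0.3, 20)
y = M @ true_eff + rng.normal(0, 0.5, n)            # simulated phenotypes

# Ridge (SNP-BLUP) solution: all p effects shrunk jointly; lam plays the
# role of the noise-to-marker-effect variance ratio.
lam = 50.0
beta = np.linalg.solve(M.T @ M + lam * np.eye(p), M.T @ y)
yhat = M @ beta
corr = float(np.corrcoef(y, yhat)[0, 1])
print(round(corr, 2))   # in-sample accuracy; cross-validation would be lower
```

The shrinkage makes the p-by-p system well posed even though p >> n, which is the core trick shared by the parametric WGR methods the review surveys.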

20. Evaluation of Filtering Methods Applied to the Unstructured Datasets in the Predictive Learning Services

Ilin Dmitry

2017-01-01

Full Text Available Predictive learning services perform aggregation and homogenization of open data from public sources, in particular from online recruitment agencies. However, the sample of vacancies may contain a varying percentage of noise due to the frequent occurrence of homonyms. This article considers two approaches to noise reduction: the first is based on cosine similarity and the second on contextual words.
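The cosine-similarity approach can be sketched as a bag-of-words filter; the reference text, vacancy strings, and 0.3 threshold are invented for illustration:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity of two texts as bag-of-words vectors."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical reference text for the job category being collected
reference = "develop python software write unit tests deploy services"
vacancies = [
    "python developer to write software and unit tests",
    "python snake handler for the city zoo",      # homonym noise
]
kept = [v for v in vacancies if cosine(reference, v) > 0.3]
print(kept)
```

The homonym ("python" the animal vs. the language) shares only one token with the reference, so its similarity falls below the threshold and it is filtered out.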

1. Applying a health action model to predict and improve healthy behaviors in coal miners.

2018-05-01

One of the most important ways to prevent work-related diseases in occupations such as mining is to promote healthy behaviors among miners. This study aimed to predict and promote healthy behaviors among coal miners by using a health action model (HAM). The study was conducted on 200 coal miners in Iran in two steps. In the first step, a descriptive study was implemented to determine predictive constructs and effectiveness of HAM on behavioral intention. The second step involved a quasi-experimental study to determine the effect of an HAM-based education intervention. This intervention was implemented by the researcher and the head of the safety unit based on the predictive construct specified in the first step over 12 sessions of 60 min. The data was collected using an HAM questionnaire and a checklist of healthy behavior. The results of the first step of the study showed that attitude, belief, and normative constructs were meaningful predictors of behavioral intention. Also, the results of the second step revealed that the mean score of attitude and behavioral intention increased significantly after conducting the intervention in the experimental group, while the mean score of these constructs decreased significantly in the control group. The findings of this study showed that HAM-based educational intervention could improve the healthy behaviors of mine workers. Therefore, it is recommended to extend the application of this model to other working groups to improve healthy behaviors.

2. Effects of DeOrbitSail as applied to Lifetime predictions of Low Earth Orbit Satellites

Afful, Andoh; Opperman, Ben; Steyn, Herman

2016-07-01

Orbit lifetime prediction is an important component of satellite mission design and post-launch space operations. Throughout its lifetime in space, a spacecraft is exposed to risk of collision with orbital debris or operational satellites. This risk is especially high within the Low Earth Orbit (LEO) region where the highest density of space debris is accumulated. This paper investigates orbital decay of some LEO micro-satellites and accelerating orbit decay by using a deorbitsail. The Semi-Analytical Liu Theory (SALT) and the Satellite Toolkit was employed to determine the mean elements and expressions for the time rates of change. Test cases of observed decayed satellites (Iridium-85 and Starshine-1) are used to evaluate the predicted theory. Results for the test cases indicated that the theory fitted observational data well within acceptable limits. Orbit decay progress of the SUNSAT micro-satellite was analysed using relevant orbital parameters derived from historic Two Line Element (TLE) sets and comparing with decay and lifetime prediction models. This paper also explored the deorbit date and time for a 1U CubeSat (ZACUBE-01). The use of solar sails as devices to speed up the deorbiting of LEO satellites is considered. In a drag sail mode, the deorbitsail technique significantly increases the effective cross-sectional area of a satellite, subsequently increasing atmospheric drag and accelerating orbit decay. The concept proposed in this study introduced a very useful technique of orbit decay as well as deorbiting of spacecraft.

3. ANALYSIS OF THE PREDICTIVE DMC CONTROLLER PERFORMANCE APPLIED TO A FEED-BATCH BIOREACTOR

J. A. D. RODRIGUES

1997-12-01

Full Text Available Two control algorithms were implemented to stabilize the dissolved oxygen concentration during the penicillin production phase. A deterministic, non-structured mathematical model was used that considered the balances of cell, substrate, dissolved oxygen and product formation, as well as the kinetics of growth, respiration, product inhibition due to excess substrate, penicillin hydrolysis, and yield factors among cell growth, substrate consumption and dissolved oxygen consumption. The bioreactor was operated in fed-batch mode using an optimal strategy for the operational policy. The agitation speed was used as the manipulated variable to control dissolved oxygen because it was found to be the most sensitive. Two control configurations were implemented: first, PID feedback control with parameters estimated through the Modified Simplex optimization method using the IAE index; and second, DMC predictive control, whose tuning parameters were the model, prediction and control horizons, as well as the suppression factor and the trajectory parameter. A sensitivity analysis of the two control algorithms was performed using the sample time and dead time as indices for stability evaluation. Both configurations showed stable performance; however, the predictive one was found to be more robust to variations in sample time as well as dead time. This is a very important characteristic to consider when implementing a control scheme in a real fermentation process

4. Moving beyond regression techniques in cardiovascular risk prediction: applying machine learning to address analytic challenges.

Goldstein, Benjamin A; Navar, Ann Marie; Carter, Rickey E

2017-06-14

Risk prediction plays an important role in clinical cardiology research. Traditionally, most risk models have been based on regression models. While useful and robust, these statistical methods are limited to using a small number of predictors, which operate in the same way on everyone and uniformly throughout their range. The purpose of this review is to illustrate the use of machine-learning methods for the development of risk prediction models. Typically presented as black-box approaches, most machine-learning methods are aimed at solving particular challenges that arise in data analysis and that are not well addressed by typical regression approaches. To illustrate these challenges, as well as how different methods can address them, we consider trying to predict mortality after diagnosis of acute myocardial infarction. We use data derived from our institution's electronic health record and abstract data on 13 regularly measured laboratory markers. We walk through different challenges that arise in modelling these data and then introduce different machine-learning approaches. Finally, we discuss general issues in the application of machine-learning methods including tuning parameters, loss functions, variable importance, and missing data. Overall, this review serves as an introduction for those working on risk modelling to approach the diffuse field of machine learning. © The Author 2016. Published by Oxford University Press on behalf of the European Society of Cardiology.

5. Prediction of octanol-water partition coefficients of organic compounds by multiple linear regression, partial least squares, and artificial neural network.

2009-11-30

A quantitative structure-property relationship (QSPR) study was performed to develop models that relate the structures of 141 organic compounds to their octanol-water partition coefficients (log P(o/w)). A genetic algorithm was applied as a variable selection tool. Modeling of log P(o/w) of these compounds as a function of theoretically derived descriptors was established by multiple linear regression (MLR), partial least squares (PLS), and artificial neural network (ANN). The best selected descriptors that appear in the models are: atomic charge weighted partial positively charged surface area (PPSA-3), fractional atomic charge weighted partial positive surface area (FPSA-3), minimum atomic partial charge (Qmin), molecular volume (MV), total dipole moment of the molecule (mu), maximum antibonding contribution of a molecular orbital in the molecule (MAC), and maximum free valency of a C atom in the molecule (MFV). The results obtained showed the ability of the developed artificial neural network to predict the partition coefficients of organic compounds. The results also revealed the superiority of ANN over the MLR and PLS models. Copyright 2009 Wiley Periodicals, Inc.

6. Accurate electrostatic and van der Waals pull-in prediction for fully clamped nano/micro-beams using linear universal graphs of pull-in instability

2014-09-01

Although pull-in instability of electrically actuated nano/micro-beams has been investigated by many researchers to date, no explicit formula has yet been presented which can predict pull-in voltage based on a geometrically non-linear and distributed-parameter model. The objective of the present paper is to introduce a simple and accurate formula to predict this value for a fully clamped electrostatically actuated nano/micro-beam. To this end, a non-linear Euler-Bernoulli beam model is employed, which accounts for the axial residual stress, geometric non-linearity of mid-plane stretching, distributed electrostatic force and the van der Waals (vdW) attraction. The non-linear boundary value governing equation of equilibrium is non-dimensionalized and solved iteratively through a single-term Galerkin-based reduced-order model (ROM). The solutions are validated through direct comparison with experimental and other existing results reported in previous studies. Pull-in instability under electrical and vdW loads is also investigated using universal graphs. Based on the results of these graphs, the non-dimensional pull-in and vdW parameters, which are defined in the text, vary linearly versus the other dimensionless parameters of the problem. Using this fact, some linear equations are presented to predict pull-in voltage, the maximum allowable length, the so-called detachment length, and the minimum allowable gap for a nano/micro-system. These linear equations are also reduced to a couple of universal pull-in formulas for systems with a small initial gap. The accuracy of the universal pull-in formulas is also validated by comparing their results with available experimental and previous geometrically linear and closed-form findings published in the literature.

7. Predictive performance for population models using stochastic differential equations applied on data from an oral glucose tolerance test

Møller, Jonas Bech; Overgaard, R.V.; Madsen, Henrik

2010-01-01

Several articles have investigated stochastic differential equations (SDEs) in PK/PD models, but few have quantitatively investigated the benefits to predictive performance of models based on real data. Estimation of first-phase insulin secretion, which reflects beta-cell function, using models of … obtained from the glucose tolerance tests. Since the estimation time of the extended models was not heavily increased compared to the basic models, the applied method is concluded to have high relevance not only in theory but also in practice.

8. A generic method for assignment of reliability scores applied to solvent accessibility predictions

Petersen, Bent; Petersen, Thomas Nordahl; Andersen, Pernille

2009-01-01

… The performance of the neural networks was evaluated on a commonly used set of sequences known as the CB513 set. An overall Pearson's correlation coefficient of 0.72 was obtained, which is comparable to the performance of the currently best publicly available method, Real-SPINE. Both methods associate a reliability … comparing the Pearson's correlation coefficient for the upper 20% of predictions sorted according to reliability. For this subset, values of 0.79 and 0.74 are obtained using our method and the compared method, respectively. This tendency holds for any selected subset.
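The reliability-sorted evaluation described above can be sketched as follows; the heteroscedastic predictions and the inverse-noise reliability score are simulated assumptions:

```python
import numpy as np

def topk_correlation(pred, target, reliability, frac=0.2):
    """Pearson r computed on the top `frac` of predictions ranked by a
    per-prediction reliability score."""
    k = max(2, int(len(pred) * frac))
    idx = np.argsort(reliability)[::-1][:k]
    return float(np.corrcoef(pred[idx], target[idx])[0, 1])

rng = np.random.default_rng(3)
n = 1000
target = rng.normal(size=n)
noise_sd = rng.uniform(0.2, 2.0, n)        # heteroscedastic prediction error
pred = target + rng.normal(0, 1, n) * noise_sd
reliability = 1 / noise_sd                  # high reliability = low expected error

r_all = float(np.corrcoef(pred, target)[0, 1])
r_top = topk_correlation(pred, target, reliability, 0.2)
print(round(r_all, 2), round(r_top, 2))     # top-20% subset correlates better
```

A useful reliability score is exactly one for which the top-ranked subset shows a higher correlation than the full set, mirroring the 0.79-vs-0.72 pattern in the abstract.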

9. Applying an extended theory of planned behaviour to predict breakfast consumption in adolescents.

Kennedy, S; Davies, E L; Ryan, L; Clegg, M E

2017-05-01

Breakfast skipping increases during adolescence and is associated with lower levels of physical activity and weight gain. Theory-based interventions promoting breakfast consumption in adolescents report mixed findings, potentially because of limited research identifying which determinants to target. This study aimed to: (i) utilise the Theory of Planned Behaviour (TPB) to identify the relative contribution of attitudes (affective, cognitive and behavioural) to predict intention to eat breakfast and breakfast consumption in adolescents and (ii) determine whether demographic factors moderate the relationship between TPB variables, intention and behaviour. Questionnaires were completed by 434 students (mean age 14±0.9 years) measuring breakfast consumption (0-2, 3-6 or 7 days), physical activity levels and TPB measures. Data were analysed by breakfast frequency and demographics using hierarchical and multinomial regression analyses. Breakfast was consumed every day by 57% of students, with boys more likely than girls to eat a regular breakfast, report higher activity levels and report more positive attitudes towards breakfast. TPB variables predicted breakfast intentions and behaviours, and demographic factors moderated the intention-behaviour relationship for girls. Findings confirm that the TPB is a successful model for predicting breakfast intentions and behaviours in adolescents. The potential for a direct effect of attitudes on behaviours should be considered in the implementation and design of breakfast interventions.

10. Applying a multi-replication framework to support dynamic situation assessment and predictive capabilities

Lammers, Craig; McGraw, Robert M.; Steinman, Jeffrey S.

2005-05-01

Technological advances and emerging threats reduce the time between target detection and action to an order of a few minutes. To effectively assist with the decision-making process, C4I decision support tools must quickly and dynamically predict and assess alternative Courses Of Action (COAs) to assist Commanders in anticipating potential outcomes. These capabilities can be provided through faster-than-real-time predictive simulation of plans that is continuously re-calibrated with the real-time picture. This capability allows decision-makers to assess the effects of re-tasking opportunities, providing the decision-maker with tremendous freedom to make time-critical, mid-course decisions. This paper presents an overview and demonstrates the use of a software infrastructure that supports dynamic situation assessment and prediction (DSAP) capabilities. These DSAP capabilities are demonstrated through the use of a Multi-Replication Framework that supports (1) predictive simulations using JSAF (Joint Semi-Automated Forces); (2) real-time simulation, also using JSAF, as a state estimation mechanism; and (3) real-time C4I data updates through TBMCS (Theater Battle Management Core Systems). This infrastructure allows multiple replications of a simulation to be executed simultaneously over a grid faster than real time, calibrated with live data feeds. A cost evaluator mechanism analyzes potential outcomes and prunes simulations that diverge from the real-time picture. In particular, this paper primarily serves to walk a user through the process of using the Multi-Replication Framework, providing an enhanced decision aid.

11. Applying FDTD to the Coverage Prediction of WiMAX Femtocells

Valcarce Alvaro

2009-01-01

Full Text Available Femtocells, or home base stations, are a potential future solution for operators to increase indoor coverage and reduce network cost. In a real WiMAX femtocell deployment in residential areas covered by WiMAX macrocells, interference is very likely to occur both in the streets and in certain indoor regions. Propagation models that take into account both the outdoor and indoor channel characteristics are thus necessary for the purpose of WiMAX network planning in the presence of femtocells. In this paper, the finite-difference time-domain (FDTD) method is adapted for the computation of radiowave propagation predictions at WiMAX frequencies. This model is particularly suitable for the study of hybrid indoor/outdoor scenarios and thus well adapted to the case of WiMAX femtocells in residential environments. Two optimization methods are proposed for the reduction of the FDTD simulation time: the reduction of the simulation frequency for problem simplification, and a parallel graphics processing unit (GPU) implementation. The calibration of the model is then thoroughly described. First, the calibration of the absorbing boundary condition, necessary for proper coverage predictions, is presented. Then a calibration of the material parameters that minimizes the error function between simulation and real measurements is proposed. Finally, some mobile WiMAX system-level simulations that make use of the presented propagation model are presented to illustrate the applicability of the model to the study of femto-to-macro interference.
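As a toy illustration of the FDTD update scheme at the heart of such models, here is a one-dimensional sketch; the grid size, source position, and pulse width are arbitrary, and real coverage prediction is 2-D/3-D with absorbing boundaries and calibrated materials:

```python
import numpy as np

# Minimal 1-D FDTD (Yee) sketch: E and H live on staggered grids and are
# updated in a leapfrog fashion.
nx, nt = 400, 900
c, dx = 3e8, 0.05
dt = dx / c                      # "magic" time step (Courant number = 1)
ez = np.zeros(nx)                # electric field
hy = np.zeros(nx - 1)            # magnetic field, staggered half a cell
for t in range(nt):
    hy += np.diff(ez) * dt / (dx * 4e-7 * np.pi)     # mu0 = 4e-7 * pi
    ez[1:-1] += np.diff(hy) * dt / (dx * 8.854e-12)  # eps0
    ez[50] += np.exp(-((t - 60) / 20.0) ** 2)        # soft Gaussian source
print(round(float(np.abs(ez).max()), 3))
```

The two `np.diff` updates are the discrete curl equations; lowering the simulation frequency, one of the paper's optimizations, coarsens dx and dt and so shrinks the grid that loops like this must sweep.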

12. Evaluation of icing drag coefficient correlations applied to iced propeller performance prediction

Miller, Thomas L.; Shaw, R. J.; Korkan, K. D.

1987-01-01

Evaluation of three empirical icing drag coefficient correlations is accomplished through application to a set of propeller icing data. The various correlations represent the best means currently available for relating drag rise to various flight and atmospheric conditions for both fixed-wing and rotating airfoils, and the work presented here illustrates and evaluates one such application of the latter case. The origins of each of the correlations are discussed, and their apparent capabilities and limitations are summarized. These correlations have been made an integral part of a computer code, ICEPERF, which has been designed to calculate iced propeller performance. Comparison with experimental propeller icing data shows generally good agreement, with the quality of the predicted results seen to be directly related to the radial icing extent of each case. The code's capability to properly predict thrust coefficient, power coefficient, and propeller efficiency is shown to be strongly dependent on the choice of correlation selected, as well as upon proper specification of radial icing extent.

14. Non-linear finite element analysis for prediction of seismic response of buildings considering soil-structure interaction

E. Çelebi

2012-11-01

Full Text Available The objective of this paper is primarily the numerical approach, based on the two-dimensional (2-D) finite element method, to analysis of the seismic response of an infinite soil-structure interaction (SSI) system. This study is performed by a series of different scenarios involving comprehensive parametric analyses, including the effects of realistic material properties of the underlying soil on the structural response quantities. Viscous artificial boundaries, simulating the process of wave transmission along the truncated interface of the semi-infinite space, are adopted in the non-linear finite element formulation in the time domain, along with Newmark's integration. The slenderness ratio of the superstructure and the local soil conditions, as well as the characteristics of the input excitations, are important parameters for the numerical simulation in this research. The mechanical behavior of the underlying soil medium considered in this prediction model is simulated by an undrained elasto-plastic Mohr-Coulomb model under plane-strain conditions. To emphasize the important findings of this type of problem for civil engineers, systematic calculations with different controlling parameters are carried out to evaluate directly the structural response of the vibrating soil-structure system. When the underlying soil becomes stiffer, the frequency content of the seismic motion plays a major role in altering the seismic response. The sudden increase of the dynamic response is more pronounced in the resonance case, when the frequency content of the seismic ground motion is close to that of the SSI system. The SSI effects under different seismic inputs differ for all considered soil conditions and structural types.
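Newmark's integration, mentioned above, can be illustrated on a single-degree-of-freedom oscillator (average-acceleration variant); the mass, stiffness, and load history are arbitrary test values, not from the paper:

```python
import numpy as np

def newmark_sdof(m, c, k, p, dt, u0=0.0, v0=0.0, beta=0.25, gamma=0.5):
    """Newmark time integration (average acceleration for the default
    beta/gamma) of m*u'' + c*u' + k*u = p(t). Returns displacements."""
    n = len(p)
    u, v, a = np.zeros(n), np.zeros(n), np.zeros(n)
    u[0], v[0] = u0, v0
    a[0] = (p[0] - c * v0 - k * u0) / m
    keff = k + gamma * c / (beta * dt) + m / (beta * dt ** 2)
    for i in range(n - 1):
        rhs = p[i + 1] \
            + m * (u[i] / (beta * dt ** 2) + v[i] / (beta * dt)
                   + (1 / (2 * beta) - 1) * a[i]) \
            + c * (gamma * u[i] / (beta * dt) + (gamma / beta - 1) * v[i]
                   + dt * (gamma / (2 * beta) - 1) * a[i])
        u[i + 1] = rhs / keff
        v[i + 1] = (gamma / (beta * dt) * (u[i + 1] - u[i])
                    + (1 - gamma / beta) * v[i]
                    + dt * (1 - gamma / (2 * beta)) * a[i])
        a[i + 1] = ((u[i + 1] - u[i]) / (beta * dt ** 2)
                    - v[i] / (beta * dt) - (1 / (2 * beta) - 1) * a[i])
    return u

# Sanity check: undamped free vibration of a 1 Hz oscillator released from
# u0 = 1 should return to u = 1 after one full period (t = 1 s).
w = 2 * np.pi
u = newmark_sdof(m=1.0, c=0.0, k=w ** 2, p=np.zeros(2001), dt=0.0005, u0=1.0)
print(round(float(u[-1]), 3))  # ≈ 1.0
```

In the paper's setting the scalars m, c, k become the assembled finite element mass, damping, and stiffness matrices, and the solve for u[i+1] becomes a (non-linear) system solve at each step.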

15. Factors Predictive of Symptomatic Radiation Injury After Linear Accelerator-Based Stereotactic Radiosurgery for Intracerebral Arteriovenous Malformations

Herbert, Christopher; Moiseenko, Vitali; McKenzie, Michael; Redekop, Gary; Hsu, Fred; Gete, Ermias; Gill, Brad; Lee, Richard; Luchka, Kurt; Haw, Charles; Lee, Andrew; Toyota, Brian; Martin, Montgomery

2012-01-01

16. Applying a machine learning model using a locally preserving projection based feature regeneration algorithm to predict breast cancer risk

Heidari, Morteza; Zargari Khuzani, Abolfazl; Danala, Gopichandh; Mirniaharikandehei, Seyedehnafiseh; Qian, Wei; Zheng, Bin

2018-03-01

Both conventional and deep machine learning have been used to develop decision-support tools in medical imaging informatics. In order to take advantage of both approaches, this study investigates the feasibility of applying a locally preserving projection (LPP) based feature regeneration algorithm to build a new machine learning classifier model to predict short-term breast cancer risk. First, a computer-aided image processing scheme was used to segment and quantify breast fibro-glandular tissue volume. Next, 44 initially computed image features related to bilateral mammographic tissue density asymmetry were extracted. Then, an LPP-based feature combination method was applied to regenerate a new operational feature vector using a maximal variance approach. Last, a k-nearest neighbor (KNN) machine learning classifier using the LPP-generated feature vectors was developed to predict breast cancer risk. A testing dataset involving negative mammograms acquired from 500 women was used. Among them, 250 became positive and 250 remained negative in the next subsequent mammography screening. Applied to this dataset, the LPP-generated feature vector reduced the number of features from 44 to 4. Using a leave-one-case-out validation method, the area under the ROC curve produced by the KNN classifier significantly increased from 0.62 to 0.68 for predicting breast cancer detected in the next subsequent mammography screening.
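The leave-one-case-out KNN evaluation described in this record can be sketched as follows. The LPP feature-regeneration step is omitted; this is a minimal illustration assuming NumPy, with hypothetical names and binary labels (1 = positive at the next screening).

```python
import numpy as np

def knn_loo_predict(X, y, k=5):
    """Leave-one-case-out k-NN: each case is classified by majority vote of
    its k nearest neighbours among the remaining cases (labels are 0/1)."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    # full pairwise Euclidean distance matrix
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(dist, np.inf)          # never use the held-out case itself
    nearest = np.argsort(dist, axis=1)[:, :k]
    return (y[nearest].mean(axis=1) > 0.5).astype(int)
```

In practice the fraction of positive labels among the k neighbours can be kept as a continuous risk score, which is what an ROC analysis such as the one in this study operates on.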

17. Artificial neural networks applied to the prediction of spot prices in the market of electric energy

Rodrigues, Alcantaro Lemes; Grimoni, Jose Aquiles Baesso

2010-01-01

The commercialization of electricity in Brazil, as in the rest of the world, has undergone several changes over the past 20 years. In order to achieve an economic balance between supply and demand of the good called electricity, stakeholders in this market follow both rules set by society (government, companies and consumers) and constraints set by the laws of nature (hydrology). To deal with such complex issues, various studies have been conducted in the area of computational heuristics. This work aims to develop software to forecast spot market prices using artificial neural networks (ANNs). ANNs are widely used in various applications, especially where non-linear systems pose computational challenges that are difficult to overcome because of the effect named the 'curse of dimensionality'. This effect is due to the fact that current computational power is not enough to handle problems with such a high combination of variables. The challenge of forecasting prices depends on factors such as: (a) forecasting the demand evolution (electric load); (b) forecasting supply (reservoirs, hydrology and climate) and the capacity factor; and (c) the balance of the economy (pricing, auctions, foreign market influence, economic policy, government budget and government policy). These factors are considered in the forecasting model for spot market prices, and the results of its effectiveness are tested and presented. (author)

18. A consistency-based feature selection method allied with linear SVMs for HIV-1 protease cleavage site prediction.

Orkun Oztürk

Full Text Available BACKGROUND: Predicting the type-1 Human Immunodeficiency Virus (HIV-1) protease cleavage site in protein molecules and determining its specificity is an important task which has attracted considerable attention in the research community. Achievements in this area are expected to result in effective drug design (especially of HIV-1 protease inhibitors) against this life-threatening virus. However, some drawbacks (like the shortage of available training data and the high dimensionality of the feature space) turn this task into a difficult classification problem. Thus, various machine learning techniques, and specifically several classification methods, have been proposed in order to increase the accuracy of the classification model. In addition, for several classification problems characterized by few samples and many features, selecting the most relevant features is a major factor for increasing classification accuracy. RESULTS: We propose for HIV-1 data a consistency-based feature selection approach in conjunction with recursive feature elimination of support vector machines (SVMs). We used various classifiers for evaluating the results obtained from the feature selection process. We further demonstrated the effectiveness of our proposed method by comparing it with a state-of-the-art feature selection method applied on HIV-1 data, and we evaluated the reported results based on attributes which have been selected from different combinations. CONCLUSION: Applying feature selection to training data before the classification task seems to be a reasonable data-mining process when working with types of data similar to HIV-1. On HIV-1 data, some feature selection or extraction operations in conjunction with different classifiers have been tested and noteworthy outcomes have been reported. These facts motivate the work presented in this paper. SOFTWARE AVAILABILITY: The software is available at http

19. FPGA/NIOS Implementation of an Adaptive FIR Filter Using Linear Prediction to Reduce Narrow-Band RFI for Radio Detection of Cosmic Rays

2013-01-01

We present the FPGA/NIOS implementation of an adaptive finite impulse response (FIR) filter based on linear prediction to suppress radio frequency interference (RFI). This technique will be used for experiments that observe coherent radio emission from extensive air showers induced by
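An adaptive FIR filter based on linear prediction, as named in this record, is commonly realized as a one-step-ahead predictor whose weights are adapted with LMS: a narrow-band carrier is highly predictable and is cancelled in the prediction error, while a broadband air-shower pulse survives in the output. The sketch below is a hedged software illustration of that principle (the record does not specify the actual update rule or FPGA architecture); it assumes NumPy, and all names are hypothetical.

```python
import numpy as np

def lms_linear_predictor(x, order=32, mu=1e-3):
    """One-step-ahead adaptive FIR linear predictor trained with LMS.
    The prediction error e[n] is the RFI-suppressed output signal."""
    w = np.zeros(order)
    e = np.zeros(len(x))
    for n in range(order, len(x)):
        past = x[n - order:n][::-1]   # most recent sample first
        pred = w @ past               # linear prediction of x[n]
        e[n] = x[n] - pred            # predictable (narrow-band) part removed
        w += 2.0 * mu * e[n] * past   # LMS weight update
    return e
```

After convergence, the residual power for a pure narrow-band input drops far below the input power, which is the desired RFI-suppression behaviour.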

20. A model of integration among prediction tools: applied study to road freight transportation

Henrique Dias Blois

Full Text Available Abstract This study developed a scenario analysis model that integrates decision-making tools for investments: prospective scenarios (Grumbach Method) and systems dynamics (hard modeling), with an innovative multivariate analysis of experts. It was designed through the analysis and simulation of scenarios, showed which events are most striking in the object of study, and highlighted actions that could redirect the future of the analyzed system. Moreover, predictions can be developed from the generated scenarios. The model was validated empirically with road freight transport data from the state of Rio Grande do Sul, Brazil. The results showed that the model contributes to the analysis of investment because it identifies probabilities of events that impact decision making and identifies priorities for action, reducing uncertainty about the future. Moreover, it allows an interdisciplinary discussion that correlates different areas of knowledge, which is essential when more consistency in creating scenarios is desired.

1. Applying predictive analytics to develop an intelligent risk detection application for healthcare contexts.

Moghimi, Fatemeh Hoda; Cheung, Michael; Wickramasinghe, Nilmini

2013-01-01

Healthcare is an information-rich industry where successful outcomes require the processing of multi-spectral data and sound decision making. The exponential growth of data and big data issues, coupled with a rapid increase of service demands in healthcare contexts today, requires a robust framework enabled by IT (information technology) solutions as well as real-time service handling in order to ensure superior decision making and successful healthcare outcomes. Such a context is appropriate for the application of real-time intelligent risk detection decision support systems using predictive analytic techniques such as data mining. To illustrate the power and potential of data science technologies in healthcare decision-making scenarios, the use of an intelligent risk detection (IRD) model is proffered for the context of Congenital Heart Disease (CHD) in children, an area which requires complex high-risk decisions that need to be made expeditiously and accurately in order to ensure successful healthcare outcomes.

2. Applying an extended theory of planned behavior to predicting violations at automated railroad crossings.

Palat, Blazej; Paran, Françoise; Delhomme, Patricia

2017-01-01

3. Seismic rupture modelling, strong motion prediction and seismic hazard assessment: fundamental and applied approaches

Berge-Thierry, C.

2007-05-01

The defence to obtain the 'Habilitation à Diriger des Recherches' is a synthesis of the research work performed since the end of my PhD thesis in 1997. This synthesis covers the two years as a post-doctoral researcher at the Bureau d'Evaluation des Risques Sismiques at the Institut de Protection (BERSSIN), and the seven consecutive years as seismologist and head of the BERSSIN team. This work and the research project are presented in the framework of the seismic risk topic, and particularly with respect to seismic hazard assessment. Seismic risk combines seismic hazard and vulnerability. Vulnerability combines the strength of building structures and the human and economic consequences in case of structural failure. Seismic hazard is usually defined in terms of plausible seismic motion (soil acceleration or velocity) at a site for a given time period. Whether for the regulatory context or a structure's specificity (conventional structure or high-risk construction), seismic hazard assessment needs to identify and locate the seismic sources (zones or faults), to characterize their activity, and to evaluate the seismic motion the structure has to resist (including site effects). I specialized in the field of numerical strong-motion prediction using high-frequency seismic source modelling, and being part of IRSN allowed me to work rapidly on the different tasks of seismic hazard assessment. Thanks to expert practice and participation in the evolution of regulations (nuclear power plants, conventional and chemical structures), I have been able to work on empirical strong-motion prediction, including site effects. Specific questions related to the interface between seismologists and structural engineers are also presented, especially the quantification of uncertainties. This is part of the research work initiated to improve the selection of the input ground motion when designing or verifying the stability of structures. (author)

4. Genetic Gain Increases by Applying the Usefulness Criterion with Improved Variance Prediction in Selection of Crosses.

Lehermeier, Christina; Teyssèdre, Simon; Schön, Chris-Carolin

2017-12-01

A crucial step in plant breeding is the selection and combination of parents to form new crosses. Genome-based prediction guides the selection of high-performing parental lines in many crop breeding programs, which ensures a high mean performance of progeny. To warrant maximum selection progress, a new cross should also provide a large progeny variance. The usefulness concept, as a measure of the gain that can be obtained from a specific cross, accounts for variation in progeny variance. Here, it is shown that genetic gain can be considerably increased when crosses are selected based on their genomic usefulness criterion rather than on mean genomic estimated breeding values. An efficient and improved method to predict the genetic variance of a cross based on Markov chain Monte Carlo samples of marker effects from a whole-genome regression model is suggested. In simulations representing selection procedures in crop breeding programs, the performance of this novel approach is compared with existing methods, such as selection based on mean genomic estimated breeding values and optimal haploid values. In all cases, higher genetic gain was obtained than with previously suggested methods. When 1% of progenies per cross were selected, the genetic gain based on the estimated usefulness criterion increased by 0.14 genetic standard deviations compared to a selection based on mean genomic estimated breeding values. Analytical derivations of the progeny genotypic variance-covariance matrix based on parental genotypes and genetic map information make simulations of progeny dispensable and allow fast implementation in large-scale breeding programs. Copyright © 2017 by the Genetics Society of America.
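The classical usefulness criterion underlying this record, U = mu + i * h * sigma (progeny mean plus selection intensity times accuracy times progeny standard deviation), can be computed directly once a cross's progeny variance has been predicted. A minimal sketch using only the Python standard library follows; the names are hypothetical, and the paper's MCMC-based variance prediction is not reproduced here.

```python
from statistics import NormalDist

def usefulness(mu, progeny_var, selected_fraction=0.01, h=1.0):
    """Usefulness criterion U = mu + i * h * sigma of a cross, where i is the
    standardized selection intensity for truncation selection on a normal trait."""
    nd = NormalDist()
    p = selected_fraction
    z = nd.inv_cdf(1.0 - p)        # truncation point for the selected fraction
    i = nd.pdf(z) / p              # mean of the selected upper tail
    return mu + i * h * progeny_var ** 0.5
```

With 1% selected (as in the study's comparison), i is about 2.67, so a cross with larger predicted progeny variance can outrank one with a higher mean genomic estimated breeding value.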

5. Applying the theory of planned behavior to predict dairy product consumption by older adults.

Kim, Kyungwon; Reicks, Marla; Sjoberg, Sara

2003-01-01

The purpose of this study was to explain intention to consume dairy products and consumption of dairy products by older adults using the Theory of Planned Behavior (TPB). The factors examined were attitudes, subjective norms, and perceived behavioral control. A cross-sectional questionnaire was administered at community centers with congregate dining programs, group classes, and recreational events for older adults. One hundred sixty-two older adults (mean age 75 years) completed the questionnaire. Subjects were mostly women (76%) and white (65%), with about half having a high school education or less. Variables based on the TPB were assessed through questionnaire items constructed to form scales measuring attitudes, subjective norms, perceived behavioral control, and intention to consume dairy products. Dairy product consumption was measured using a food frequency questionnaire. Regression analyses were used to determine the association between the scales for the 3 variables proposed in the TPB and intention to consume and consumption of dairy products; the alpha level was set at .05 to determine the statistical significance of results. Attitudes toward eating dairy products and perceived behavioral control contributed to the model for predicting intention, whereas subjective norms did not. Attitudes toward eating dairy products were slightly more important than perceived behavioral control in predicting intention. In turn, intention was strongly related to dairy product consumption, and perceived behavioral control was independently associated with dairy product consumption. These results suggest the utility of the TPB in explaining dairy product consumption for older adults. Nutrition education should focus on improving attitudes and removing barriers to consumption of dairy products for older adults.

6. Improving Air Quality (and Weather) Predictions using Advanced Data Assimilation Techniques Applied to Coupled Models during KORUS-AQ

Carmichael, G. R.; Saide, P. E.; Gao, M.; Streets, D. G.; Kim, J.; Woo, J. H.

2017-12-01

Ambient aerosols are important air pollutants with direct impacts on human health and on the Earth's weather and climate systems through their interactions with radiation and clouds. Their role is dependent on their distributions of size, number, phase and composition, which vary significantly in space and time. There remain large uncertainties in simulated aerosol distributions due to uncertainties in emission estimates and in chemical and physical processes associated with their formation and removal. These uncertainties lead to large uncertainties in weather and air quality predictions and in estimates of health and climate change impacts. Despite these uncertainties and challenges, regional-scale coupled chemistry-meteorological models such as WRF-Chem have significant capabilities in predicting aerosol distributions and explaining aerosol-weather interactions. We explore the hypothesis that new advances in on-line, coupled atmospheric chemistry/meteorological models, and new emission inversion and data assimilation techniques applicable to such coupled models, can be applied in innovative ways using current and evolving observation systems to improve predictions of aerosol distributions at regional scales. We investigate the impacts of assimilating AOD from geostationary satellite (GOCI) and surface PM2.5 measurements on predictions of AOD and PM in Korea during KORUS-AQ through a series of experiments. The results suggest assimilating datasets from multiple platforms can improve the predictions of aerosol temporal and spatial distributions.

7. Comparison of Linear Induction Motor Theories for the LIMRV and TLRV Motors

1978-01-01

The Oberretl, Yamamura, and Mosebach theories of the linear induction motor are described and also applied to predict performance characteristics of the TLRV & LIMRV linear induction motors. The effect of finite motor width and length on performance ...

8. APPLYING SPARSE CODING TO SURFACE MULTIVARIATE TENSOR-BASED MORPHOMETRY TO PREDICT FUTURE COGNITIVE DECLINE.

Zhang, Jie; Stonnington, Cynthia; Li, Qingyang; Shi, Jie; Bauer, Robert J; Gutman, Boris A; Chen, Kewei; Reiman, Eric M; Thompson, Paul M; Ye, Jieping; Wang, Yalin

2016-04-01

Alzheimer's disease (AD) is a progressive brain disease. Accurate diagnosis of AD and its prodromal stage, mild cognitive impairment, is crucial for clinical trial design. There is also growing interest in identifying brain imaging biomarkers that help evaluate AD risk presymptomatically. Here, we applied a recently developed multivariate tensor-based morphometry (mTBM) method to extract features from hippocampal surfaces derived from anatomical brain MRI. For such surface-based features, the feature dimension is usually much larger than the number of subjects. We used dictionary learning and sparse coding to effectively reduce the feature dimensions. With the new features, an Adaboost classifier was employed for binary group classification. In tests on publicly available data from the Alzheimer's Disease Neuroimaging Initiative, the new framework outperformed several standard imaging measures in classifying different stages of AD. The new approach combines the efficiency of sparse coding with the sensitivity of surface mTBM, and boosts classification performance.

9. Comprehensive applied mathematical modeling in the natural and engineering sciences theoretical predictions compared with data

Wollkind, David J

2017-01-01

This text demonstrates the process of comprehensive applied mathematical modeling through the introduction of various case studies. The case studies are arranged in increasing order of complexity based on the mathematical methods required to analyze the models. The development of these methods is also included, providing a self-contained presentation. To reinforce and supplement the material introduced, original problem sets are offered involving case studies closely related to the ones presented. With this style, the text's perspective, scope, and completeness of the subject matter are considered unique. Having grown out of four self-contained courses taught by the authors, this text will be of use in a two-semester sequence for advanced undergraduate and beginning graduate students, requiring rudimentary knowledge of advanced calculus and differential equations, along with a basic understanding of some simple physical and biological scientific principles.

10. Linear algebra

Liesen, Jörg

2015-01-01

This self-contained textbook takes a matrix-oriented approach to linear algebra and presents a complete theory, including all details and proofs, culminating in the Jordan canonical form and its proof. Throughout the development, the applicability of the results is highlighted. Additionally, the book presents special topics from applied linear algebra including matrix functions, the singular value decomposition, the Kronecker product and linear matrix equations. The matrix-oriented approach to linear algebra leads to a better intuition and a deeper understanding of the abstract concepts, and therefore simplifies their use in real world applications. Some of these applications are presented in detailed examples. In several ‘MATLAB-Minutes’ students can comprehend the concepts and results using computational experiments. Necessary basics for the use of MATLAB are presented in a short introduction. Students can also actively work with the material and practice their mathematical skills in more than 300 exerc...

11. Area under the curve predictions of dalbavancin, a new lipoglycopeptide agent, using the end of intravenous infusion concentration data point by regression analyses such as linear, log-linear and power models.

Bhamidipati, Ravi Kanth; Syed, Muzeeb; Mullangi, Ramesh; Srinivas, Nuggehally

2018-02-01

1. Dalbavancin, a lipoglycopeptide, is approved for treating gram-positive bacterial infections. The area under the plasma concentration versus time curve (AUCinf) of dalbavancin is a key parameter, and the AUCinf/MIC ratio is a critical pharmacodynamic marker. 2. Using the end of intravenous infusion concentration (i.e. Cmax), the Cmax versus AUCinf relationship for dalbavancin was established by regression analyses (i.e. linear, log-log, log-linear and power models) using 21 pairs of subject data. 3. Predictions of AUCinf were performed using published Cmax data by application of the regression equations. The quotient of observed/predicted values rendered the fold difference. The mean absolute error (MAE), root mean square error (RMSE) and correlation coefficient (r) were used in the assessment. 4. MAE and RMSE values for the various models were comparable. Cmax versus AUCinf exhibited excellent correlation (r > 0.9488). The internal data evaluation showed narrow confinement (0.84-1.14-fold difference); the models predicted AUCinf with a RMSE of 3.02-27.46%, with the fold difference largely contained within 0.64-1.48. 5. Regardless of the regression model, a single time point strategy of using Cmax (i.e. end of 30-min infusion) is amenable as a prospective tool for predicting the AUCinf of dalbavancin in patients.
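The power model named in this record (AUC = a * Cmax**b) reduces to ordinary linear regression after a log-log transform, which is how such single-time-point predictors are typically fitted. A minimal sketch assuming NumPy, with hypothetical names and synthetic data (the record's 21 subject pairs are not reproduced):

```python
import numpy as np

def fit_power_model(cmax, auc):
    """Fit AUC = a * Cmax**b by least squares on log-transformed data."""
    b, log_a = np.polyfit(np.log(cmax), np.log(auc), 1)
    return np.exp(log_a), b

def predict_auc(cmax, a, b):
    """Predict AUC from an end-of-infusion concentration via the power model."""
    return a * np.asarray(cmax, dtype=float) ** b
```

The linear and log-linear models in the record differ only in which of the two variables is log-transformed before the same least-squares fit.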

12. Applying the Theory of Planned Behavior in Predicting Proenvironmental Behaviour: The Case of Energy Conservation

Octav-Ionuţ Macovei

2015-08-01

Full Text Available This paper aims to propose and validate a model based on the Theory of Planned Behavior in order to explain consumers' pro-environmental behaviour regarding energy conservation. The model was constructed using the five variables from Ajzen's Theory of Planned Behavior (TPB) (behaviour, intention, perceived behavioural control, subjective norms and attitude), to which a variable adapted from Schwartz's Norm Activation Theory (NAT) was added ("awareness of the consequences and the need") in order to create a unique model adapted to the special case of energy conservation behaviour. A survey was then conducted and the data collected were analysed using structural equation modelling. The first step of data analysis confirmed that all the constructs have good reliability, internal consistency and validity. The results of the structural equation analysis validated the proposed model, with all the model fit and quality indices having very good values. In the analysis of consumers' pro-environmental behaviour regarding energy conservation and their intention to behave in a pro-environmental manner, this model proved to have strong predictive power. Five of seven hypotheses were validated, with the newly introduced variable proving to be a success. The proposed model is unique and offers companies and organizations a valuable green marketing tool which can be used in the fight for environmental protection and energy conservation.

13. Verifying the performance of artificial neural network and multiple linear regression in predicting the mean seasonal municipal solid waste generation rate: A case study of Fars province, Iran.

2016-02-01

Predicting the mass of solid waste generation plays an important role in integrated solid waste management plans. In this study, the performance of two predictive models, Artificial Neural Network (ANN) and Multiple Linear Regression (MLR), was verified for predicting the mean Seasonal Municipal Solid Waste Generation (SMSWG) rate. The accuracy of the proposed models is illustrated through a case study of 20 cities located in Fars Province, Iran. Four performance measures, MAE, MAPE, RMSE and R, were used to evaluate the performance of these models. The MLR, as a conventional model, showed poor prediction performance. On the other hand, the results indicated that the ANN model, as a non-linear model, has higher predictive accuracy for the mean SMSWG rate. As a result, in order to develop a more cost-effective strategy for waste management in the future, the ANN model could be used to predict the mean SMSWG rate. Copyright © 2015 Elsevier Ltd. All rights reserved.
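The four performance measures used in this record are simple to compute from paired observed and predicted values; a minimal sketch assuming NumPy (the function name is hypothetical):

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """MAE, MAPE (%), RMSE and Pearson R for a set of predictions."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_pred - y_true
    return {
        "MAE": float(np.mean(np.abs(err))),
        "MAPE": float(100.0 * np.mean(np.abs(err / y_true))),  # needs y_true != 0
        "RMSE": float(np.sqrt(np.mean(err ** 2))),
        "R": float(np.corrcoef(y_true, y_pred)[0, 1]),
    }
```

RMSE penalizes large errors more heavily than MAE, while MAPE is scale-free, which is why studies like this one report all four together rather than relying on a single measure.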

14. Predicting Causal Relationships from Biological Data: Applying Automated Causal Discovery on Mass Cytometry Data of Human Immune Cells

Triantafillou, Sofia; Lagani, Vincenzo; Heinze-Deml, Christina; Schmidt, Angelika; Tegner, Jesper; Tsamardinos, Ioannis

2017-01-01

Learning the causal relationships that define a molecular system allows us to predict how the system will respond to different interventions. Distinguishing causality from mere association typically requires randomized experiments. Methods for automated causal discovery from limited experiments exist, but have so far rarely been tested in systems biology applications. In this work, we apply state-of-the-art causal discovery methods on a large collection of public mass cytometry data sets, measuring intra-cellular signaling proteins of the human immune system and their response to several perturbations. We show how different experimental conditions can be used to facilitate causal discovery, and apply two fundamental methods that produce context-specific causal predictions. Causal predictions were reproducible across independent data sets from two different studies, but often disagreed with the KEGG pathway databases. Within this context, we discuss the caveats we need to overcome for automated causal discovery to become a part of routine data analysis in systems biology.

15. Applying a computer-aided scheme to detect a new radiographic image marker for prediction of chemotherapy outcome

Wang, Yunzhi; Qiu, Yuchen; Thai, Theresa; Moore, Kathleen; Liu, Hong; Zheng, Bin

2016-01-01

To investigate the feasibility of automated segmentation of visceral and subcutaneous fat areas from computed tomography (CT) images of ovarian cancer patients and of applying the computed adiposity-related image features to predict chemotherapy outcome. A computerized image processing scheme was developed to segment visceral and subcutaneous fat areas and compute adiposity-related image features. Then, logistic regression models were applied to analyze the association between the scheme-generated assessment scores and progression-free survival (PFS) of patients using a leave-one-case-out cross-validation method and a dataset involving 32 patients. The correlation coefficients between automated and radiologists' manual segmentation of visceral and subcutaneous fat areas were 0.76 and 0.89, respectively. The scheme-generated prediction scores using adiposity-related radiographic image features were significantly associated with patients' PFS (p < 0.01). Using a computerized scheme enables more efficient and robust segmentation of visceral and subcutaneous fat areas. The computed adiposity-related image features also have potential to improve accuracy in predicting chemotherapy outcome

16. Predicting Causal Relationships from Biological Data: Applying Automated Causal Discovery on Mass Cytometry Data of Human Immune Cells

Triantafillou, Sofia

2017-09-29

Learning the causal relationships that define a molecular system allows us to predict how the system will respond to different interventions. Distinguishing causality from mere association typically requires randomized experiments. Methods for automated causal discovery from limited experiments exist, but have so far rarely been tested in systems biology applications. In this work, we apply state-of-the-art causal discovery methods on a large collection of public mass cytometry data sets, measuring intra-cellular signaling proteins of the human immune system and their response to several perturbations. We show how different experimental conditions can be used to facilitate causal discovery, and apply two fundamental methods that produce context-specific causal predictions. Causal predictions were reproducible across independent data sets from two different studies, but often disagreed with the KEGG pathway databases. Within this context, we discuss the caveats we need to overcome for automated causal discovery to become a part of routine data analysis in systems biology.

17. Benefits of Applying Predictive Intelligence to the Space Situational Awareness (SSA) Mission

Lane, B.; Mann, B.; Millard, C.

Recent events have heightened interest in providing improved Space Situational Awareness (SSA) to the warfighter using novel techniques that are affordable and effective. The current Space Surveillance Network (SSN) detects, tracks, catalogs and identifies artificial objects orbiting Earth and provides information on Resident Space Objects (RSOs) as well as new foreign launch (NFL) satellites. The reactive nature of the SSN provides little to no warning of changes to the expected states of these RSOs or NFLs. This paper will detail the use of the historical data collected on RSOs to characterize their steady state, proactively identify when changes or anomalies have occurred using a pattern-of-life, activity-based intelligence approach, and apply dynamic, adaptive mission planning to the observables that lead up to an NFL. Multiple hypotheses will be carried, along with the intent of the changes to the steady state, to assist the SSN in tasking the various sensors in the network to collect the relevant data needed to prune the number of hypotheses by assigning a likelihood to each of those activities. Depending on the hypothesis and thresholds set, these likelihoods will then be used in turn to alert the SSN operator to changes to the steady state, prioritize additional data collections, and provide a watch list of likely next activities.

18. Adaptive Neuro-Fuzzy Inference System Applied QSAR with Quantum Chemical Descriptors for Predicting Radical Scavenging Activities of Carotenoids.

Jhin, Changho; Hwang, Keum Taek

2015-01-01

One of the physiological characteristics of carotenoids is their radical scavenging activity. In this study, the relationship between radical scavenging activities and quantum chemical descriptors of carotenoids was determined. Adaptive neuro-fuzzy inference system (ANFIS) applied quantitative structure-activity relationship models (QSAR) were also developed for predicting and comparing radical scavenging activities of carotenoids. Semi-empirical PM6 and PM7 quantum chemical calculations were done by MOPAC. Ionisation energies of neutral and monovalent cationic carotenoids and the product of chemical potentials of neutral and monovalent cationic carotenoids were significantly correlated with the radical scavenging activities, and consequently these descriptors were used as independent variables for the QSAR study. The ANFIS applied QSAR models were developed with two triangular-shaped input membership functions made for each of the independent variables and optimised by a backpropagation method. High prediction efficiencies were achieved by the ANFIS applied QSAR. The R-square values of the developed QSAR models with the variables calculated by PM6 and PM7 methods were 0.921 and 0.902, respectively. The results of this study demonstrated reliabilities of the selected quantum chemical descriptors and the significance of QSAR models.

19. Adaptive Neuro-Fuzzy Inference System Applied QSAR with Quantum Chemical Descriptors for Predicting Radical Scavenging Activities of Carotenoids.

Changho Jhin

Full Text Available One of the physiological characteristics of carotenoids is their radical scavenging activity. In this study, the relationship between radical scavenging activities and quantum chemical descriptors of carotenoids was determined. Adaptive neuro-fuzzy inference system (ANFIS) applied quantitative structure-activity relationship (QSAR) models were also developed for predicting and comparing radical scavenging activities of carotenoids. Semi-empirical PM6 and PM7 quantum chemical calculations were done by MOPAC. Ionisation energies of neutral and monovalent cationic carotenoids and the product of chemical potentials of neutral and monovalent cationic carotenoids were significantly correlated with the radical scavenging activities, and consequently these descriptors were used as independent variables for the QSAR study. The ANFIS applied QSAR models were developed with two triangular-shaped input membership functions made for each of the independent variables and optimised by a backpropagation method. High prediction efficiencies were achieved by the ANFIS applied QSAR. The R-square values of the developed QSAR models with the variables calculated by PM6 and PM7 methods were 0.921 and 0.902, respectively. The results of this study demonstrated reliabilities of the selected quantum chemical descriptors and the significance of QSAR models.

20. Generalising better: Applying deep learning to integrate deleteriousness prediction scores for whole-exome SNV studies.

Korvigo, Ilia; Afanasyev, Andrey; Romashchenko, Nikolay; Skoblov, Mikhail

2018-01-01

…even relatively simple modern neural networks can significantly improve both prediction accuracy and coverage. We provide open access to our best model via the website: http://score.generesearch.ru/services/badmut/.

1. Evaluation of standardized and applied variables in predicting treatment outcomes of polytrauma patients.

Aksamija, Goran; Mulabdic, Adi; Rasic, Ismar; Muhovic, Samir; Gavric, Igor

2011-01-01

Polytrauma is defined as an injury in which at least two different organ systems or body regions are affected, with at least one of the injuries being life-threatening. Given the multilevel model of care for polytrauma patients within KCUS, weaknesses in the management of this category of patients are inevitable. The aims were to determine the dynamics of existing procedures in the treatment of polytrauma patients on admission to KCUS and, based on statistical analysis of the applied variables, to determine and define the factors that influence the final outcome of treatment and their mutual relationship, which may result in eliminating the flaws in the approach to the problem. The study was based on 263 polytrauma patients. Parametric and non-parametric statistical methods were used. Basic statistics were calculated; based on the calculated parameters, multicollinearity analysis, image analysis, discriminant analysis and multifactorial analysis were used to achieve the research objectives. From the universe of variables we selected a sample of n = 25 variables, of which the first two were modular; the others belong to the common measurement space (n = 23) and are defined in this paper as a system of variables for methods, procedures and assessments of polytrauma patients. After the multicollinearity analysis, since the image analysis gave reliable measurement results, we proceeded to the analysis of eigenvalues, that is, to defining the factors from which information is obtained about how the existing model solves the problem and about its correlation with treatment outcome. The study singled out the essential factors that determine the current organizational model of care, which may affect treatment and a better outcome of polytrauma patients. The analysis showed the maximum correlative relationships between these practices and contributed to the development of guidelines defined by the isolated factors.
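
The discriminant-analysis step listed among the methods above can be illustrated with a minimal two-class Fisher discriminant on two features. This is a generic Python sketch with invented sample values, not the study's actual clinical variables: the discriminant direction is w = S_w⁻¹(m₁ − m₀), and a new case is classified by comparing its projection to the midpoint of the projected class means.

```python
def mean(rows):
    n = len(rows)
    return [sum(r[j] for r in rows) / n for j in range(len(rows[0]))]

def fisher_lda(class0, class1):
    """Return the Fisher discriminant direction w and midpoint threshold."""
    m0, m1 = mean(class0), mean(class1)
    # pooled within-class scatter matrix (2x2)
    s = [[0.0, 0.0], [0.0, 0.0]]
    for rows, m in ((class0, m0), (class1, m1)):
        for r in rows:
            d = [r[0] - m[0], r[1] - m[1]]
            for a in range(2):
                for b in range(2):
                    s[a][b] += d[a] * d[b]
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    inv = [[s[1][1] / det, -s[0][1] / det], [-s[1][0] / det, s[0][0] / det]]
    dm = [m1[0] - m0[0], m1[1] - m0[1]]
    w = [inv[0][0] * dm[0] + inv[0][1] * dm[1],
         inv[1][0] * dm[0] + inv[1][1] * dm[1]]
    thr = sum(wi * (a + b) / 2 for wi, a, b in zip(w, m0, m1))
    return w, thr

class0 = [[1.0, 2.0], [1.5, 1.8], [2.0, 2.2]]   # e.g. favourable outcomes
class1 = [[3.0, 4.0], [3.5, 3.8], [4.0, 4.4]]   # e.g. unfavourable outcomes
w, thr = fisher_lda(class0, class1)
new_case = [3.2, 4.1]
score = w[0] * new_case[0] + w[1] * new_case[1]
print("class 1" if score > thr else "class 0")  # -> class 1
```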

2. A new method for class prediction based on signed-rank algorithms applied to Affymetrix® microarray experiments

Vassal Aurélien

2008-01-01

Full Text Available Abstract Background The huge amount of data generated by DNA chips is a powerful basis to classify various pathologies. However, constant evolution of microarray technology makes it difficult to mix data from different chip types for class prediction of limited sample populations. Affymetrix® technology provides both a quantitative fluorescence signal and a decision (detection call: absent or present) based on signed-rank algorithms applied to several hybridization repeats of each gene, with a per-chip normalization. We developed a new prediction method for class belonging based on the detection call only from recent Affymetrix chip types. Biological data were obtained by hybridization on U133A, U133B and U133Plus 2.0 microarrays of purified normal B cells and cells from three independent groups of multiple myeloma (MM) patients. Results After a call-based data reduction step to filter out non-class-discriminative probe sets, the gene list obtained was reduced to a predictor with correction for multiple testing by iterative deletion of probe sets that sequentially improve inter-class comparisons and their significance. The error rate of the method was determined using leave-one-out and 5-fold cross-validation. It was successfully applied to (i) determine a sex predictor with the normal donor group, classifying gender with no error in all patient groups except for male MM samples with a Y chromosome deletion, (ii) predict the immunoglobulin light and heavy chains expressed by the malignant myeloma clones of the validation group and (iii) predict sex, light and heavy chain nature for every new patient. Finally, this method was shown to be powerful when compared to the popular classification method Prediction Analysis of Microarray (PAM). Conclusion This normalization-free method is routinely used for quality control and correction of collection errors in patient reports to clinicians. It can be easily extended to multiple class prediction suitable with…

3. Applying quantitative adiposity feature analysis models to predict benefit of bevacizumab-based chemotherapy in ovarian cancer patients

Wang, Yunzhi; Qiu, Yuchen; Thai, Theresa; More, Kathleen; Ding, Kai; Liu, Hong; Zheng, Bin

2016-03-01

How to rationally identify epithelial ovarian cancer (EOC) patients who will benefit from bevacizumab or other antiangiogenic therapies is a critical issue in EOC treatments. The motivation of this study is to quantitatively measure adiposity features from CT images and investigate the feasibility of predicting potential benefit of EOC patients with or without receiving bevacizumab-based chemotherapy treatment using multivariate statistical models built on quantitative adiposity image features. A dataset involving CT images from 59 advanced EOC patients was included. Among them, 32 patients received maintenance bevacizumab after primary chemotherapy and the remaining 27 patients did not. We developed a computer-aided detection (CAD) scheme to automatically segment subcutaneous fat areas (SFA) and visceral fat areas (VFA) and then extracted 7 adiposity-related quantitative features. Three multivariate data analysis models (linear regression, logistic regression and Cox proportional hazards regression) were applied respectively to investigate the potential association between the model-generated prediction results and the patients' progression-free survival (PFS) and overall survival (OS). The results show that using all 3 statistical models, a statistically significant association was detected between the model-generated results and both of the two clinical outcomes in the group of patients receiving maintenance bevacizumab (p<0.01), while there was no significant association for either PFS or OS in the group of patients not receiving maintenance bevacizumab. Therefore, this study demonstrated the feasibility of using statistical prediction models based on quantitative adiposity-related CT image features to generate a new clinical marker and predict the clinical outcome of EOC patients receiving maintenance bevacizumab-based chemotherapy.

4. Prediction of spontaneous ureteral stone passage: Automated 3D-measurements perform equal to radiologists, and linear measurements equal to volumetric.

Jendeberg, Johan; Geijer, Håkan; Alshamari, Muhammed; Lidén, Mats

2018-01-24

To compare the ability of different size estimates to predict spontaneous passage of ureteral stones using 3D segmentation, and to investigate the impact of manual measurement variability on the prediction of stone passage. We retrospectively included 391 consecutive patients with ureteral stones on non-contrast-enhanced CT (NECT). Three-dimensional segmentation size estimates were compared to the mean of three radiologists' measurements. Receiver-operating characteristic (ROC) analysis was performed for the prediction of spontaneous passage for each estimate. The difference in predicted passage probability between the manual estimates for upper and lower stones was compared. The area under the ROC curve (AUC) for the measurements ranged from 0.88 to 0.90. Between the automated 3D algorithm and the manual measurements, the 95% limits of agreement were 0.2 ± 1.4 mm for the width. The manual bone-window measurements resulted in a > 20 percentage point (ppt) difference between the readers in the predicted passage probability in 44% of the upper and 6% of the lower ureteral stones. All automated 3D algorithm size estimates independently predicted spontaneous stone passage with accuracy as high as the mean of three readers' manual linear measurements. Manual size estimation of upper stones showed large inter-reader variation in spontaneous passage prediction. • An automated 3D technique predicts spontaneous stone passage with high accuracy. • Linear, areal and volumetric measurements performed similarly in predicting stone passage. • Reader variability has a large impact on the predicted prognosis for stone passage.
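
The ROC analysis above reduces to a simple rank statistic: the AUC equals the probability that a randomly chosen passed stone receives a higher passage score than a randomly chosen retained stone, with ties counted as half. A minimal Python sketch with invented labels and scores:

```python
def roc_auc(labels, scores):
    """AUC via the rank (Mann-Whitney) formulation; labels are 0/1."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]               # 1 = spontaneous passage
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]   # e.g. derived from stone width
print(round(roc_auc(labels, scores), 3))  # -> 0.889
```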

5. Relationship between neighbourhood socioeconomic position and neighbourhood public green space availability: An environmental inequality analysis in a large German city applying generalized linear models.

Schüle, Steffen Andreas; Gabriel, Katharina M A; Bolte, Gabriele

2017-06-01

6. Applying a new computer-aided detection scheme generated imaging marker to predict short-term breast cancer risk

Mirniaharikandehei, Seyedehnafiseh; Hollingsworth, Alan B.; Patel, Bhavika; Heidari, Morteza; Liu, Hong; Zheng, Bin

2018-05-01

This study aims to investigate the feasibility of identifying a new quantitative imaging marker based on false-positives generated by a computer-aided detection (CAD) scheme to help predict short-term breast cancer risk. An image dataset including four-view mammograms acquired from 1044 women was retrospectively assembled. All mammograms were originally interpreted as negative by radiologists. In the next subsequent mammography screening, 402 women were diagnosed with breast cancer and 642 remained negative. An existing CAD scheme was applied ‘as is’ to process each image. From CAD-generated results, four detection features including the total number of (1) initial detection seeds and (2) the final detected false-positive regions, (3) average and (4) sum of detection scores, were computed from each image. Then, by combining the features computed from two bilateral images of left and right breasts from either craniocaudal or mediolateral oblique view, two logistic regression models were trained and tested using a leave-one-case-out cross-validation method to predict the likelihood of each testing case being positive in the next subsequent screening. The new prediction model yielded the maximum prediction accuracy with an area under the ROC curve of AUC = 0.65 ± 0.017 and a maximum adjusted odds ratio of 4.49 with a 95% confidence interval of (2.95, 6.83). The results also showed an increasing trend in the adjusted odds ratio and risk prediction scores (p  breast cancer risk.
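
The pipeline described above, a logistic model on image-derived features evaluated with leave-one-case-out cross-validation, can be sketched generically. The two toy features stand in for the CAD-derived detection counts and scores and all values are invented; the model is fitted by plain stochastic gradient descent rather than whatever solver the study used.

```python
import math

def sigmoid(z):
    z = max(-30.0, min(30.0, z))  # clamp to avoid overflow
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.3, epochs=500):
    """Fit bias + weights by per-sample gradient descent on log-loss."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))
            g = p - yi                      # gradient of the log-loss
            w[0] -= lr * g
            for j, xj in enumerate(xi):
                w[j + 1] -= lr * g * xj
    return w

def predict(w, xi):
    return sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))

X = [[2, 0.1], [3, 0.2], [8, 0.9], [7, 0.8], [1, 0.2], [9, 0.7]]
y = [0, 0, 1, 1, 0, 1]                      # 1 = positive at next screening

# Leave-one-case-out: train on n-1 cases, score the held-out case.
correct = 0
for i in range(len(X)):
    w = fit_logistic(X[:i] + X[i+1:], y[:i] + y[i+1:])
    correct += int((predict(w, X[i]) >= 0.5) == bool(y[i]))
print(f"LOO accuracy: {correct}/{len(X)}")
```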

7. The spatial prediction of landslide susceptibility applying artificial neural network and logistic regression models: A case study of Inje, Korea

Saro, Lee; Woo, Jeon Seong; Kwan-Young, Oh; Moung-Jin, Lee

2016-02-01

The aim of this study is to predict landslide susceptibility using spatial analysis and the application of a statistical methodology based on GIS. Logistic regression models along with an artificial neural network were applied and validated to analyze landslide susceptibility in Inje, Korea. Landslide occurrence areas in the study were identified from interpretations of optical remote sensing data (aerial photographs) followed by field surveys. A spatial database covering forest, geophysical, soil and topographic data was built for the study area using a Geographical Information System (GIS). These factors were analysed using an artificial neural network (ANN) and logistic regression models to generate a landslide susceptibility map. The study validates the landslide susceptibility map by comparing it with landslide occurrence areas. The locations of landslide occurrence were divided randomly into a training set (50%) and a test set (50%). The training set was used to build the landslide susceptibility map with the artificial neural network and logistic regression models, and the test set was retained to validate the prediction map. The validation results revealed that the artificial neural network model (with an accuracy of 80.10%) was better at predicting landslides than the logistic regression model (with an accuracy of 77.05%). Of the weights used in the artificial neural network model, `slope' yielded the highest weight value (1.330), and `aspect' yielded the lowest value (1.000). This research applied two statistical analysis methods in a GIS and compared their results. Based on the findings, we were able to derive a more effective method for analyzing landslide susceptibility.

8. The spatial prediction of landslide susceptibility applying artificial neural network and logistic regression models: A case study of Inje, Korea

Saro Lee

2016-02-01

Full Text Available The aim of this study is to predict landslide susceptibility using spatial analysis and the application of a statistical methodology based on GIS. Logistic regression models along with an artificial neural network were applied and validated to analyze landslide susceptibility in Inje, Korea. Landslide occurrence areas in the study were identified from interpretations of optical remote sensing data (aerial photographs) followed by field surveys. A spatial database covering forest, geophysical, soil and topographic data was built for the study area using a Geographical Information System (GIS). These factors were analysed using an artificial neural network (ANN) and logistic regression models to generate a landslide susceptibility map. The study validates the landslide susceptibility map by comparing it with landslide occurrence areas. The locations of landslide occurrence were divided randomly into a training set (50%) and a test set (50%). The training set was used to build the landslide susceptibility map with the artificial neural network and logistic regression models, and the test set was retained to validate the prediction map. The validation results revealed that the artificial neural network model (with an accuracy of 80.10%) was better at predicting landslides than the logistic regression model (with an accuracy of 77.05%). Of the weights used in the artificial neural network model, ‘slope’ yielded the highest weight value (1.330), and ‘aspect’ yielded the lowest value (1.000). This research applied two statistical analysis methods in a GIS and compared their results. Based on the findings, we were able to derive a more effective method for analyzing landslide susceptibility.

9. Prediction of radical scavenging activities of anthocyanins applying adaptive neuro-fuzzy inference system (ANFIS) with quantum chemical descriptors.

Jhin, Changho; Hwang, Keum Taek

2014-08-22

Radical scavenging activity of anthocyanins is well known, but only a few studies have been conducted by quantum chemical approach. The adaptive neuro-fuzzy inference system (ANFIS) is an effective technique for solving problems with uncertainty. The purpose of this study was to construct and evaluate quantitative structure-activity relationship (QSAR) models for predicting radical scavenging activities of anthocyanins with good prediction efficiency. ANFIS-applied QSAR models were developed by using quantum chemical descriptors of anthocyanins calculated by semi-empirical PM6 and PM7 methods. Electron affinity (A) and electronegativity (χ) of flavylium cation, and ionization potential (I) of quinoidal base were significantly correlated with radical scavenging activities of anthocyanins. These descriptors were used as independent variables for QSAR models. ANFIS models with two triangular-shaped input fuzzy functions for each independent variable were constructed and optimized by 100 learning epochs. The constructed models using descriptors calculated by both PM6 and PM7 had good prediction efficiency with Q-square of 0.82 and 0.86, respectively.

10. Prediction of Radical Scavenging Activities of Anthocyanins Applying Adaptive Neuro-Fuzzy Inference System (ANFIS with Quantum Chemical Descriptors

Changho Jhin

2014-08-01

Full Text Available Radical scavenging activity of anthocyanins is well known, but only a few studies have been conducted by quantum chemical approach. The adaptive neuro-fuzzy inference system (ANFIS) is an effective technique for solving problems with uncertainty. The purpose of this study was to construct and evaluate quantitative structure-activity relationship (QSAR) models for predicting radical scavenging activities of anthocyanins with good prediction efficiency. ANFIS-applied QSAR models were developed by using quantum chemical descriptors of anthocyanins calculated by semi-empirical PM6 and PM7 methods. Electron affinity (A) and electronegativity (χ) of flavylium cation, and ionization potential (I) of quinoidal base were significantly correlated with radical scavenging activities of anthocyanins. These descriptors were used as independent variables for QSAR models. ANFIS models with two triangular-shaped input fuzzy functions for each independent variable were constructed and optimized by 100 learning epochs. The constructed models using descriptors calculated by both PM6 and PM7 had good prediction efficiency with Q-square of 0.82 and 0.86, respectively.

11. Predicting hyperketonemia by logistic and linear regression using test-day milk and performance variables in early-lactation Holstein and Jersey cows.

Chandler, T L; Pralle, R S; Dórea, J R R; Poock, S E; Oetzel, G R; Fourdraine, R H; White, H M

2018-03-01

Although cowside testing strategies for diagnosing hyperketonemia (HYK) are available, many are labor intensive and costly, and some lack sufficient accuracy. Predicting milk ketone bodies by Fourier transform infrared spectrometry during routine milk sampling may offer a more practical monitoring strategy. The objectives of this study were to (1) develop linear and logistic regression models using all available test-day milk and performance variables for predicting HYK and (2) compare prediction methods (Fourier transform infrared milk ketone bodies, linear regression models, and logistic regression models) to determine which is the most predictive of HYK. Given the data available, a secondary objective was to evaluate differences in test-day milk and performance variables (continuous measurements) between Holsteins and Jerseys and between cows with or without HYK within breed. Blood samples were collected on the same day as milk sampling from 658 Holstein and 468 Jersey cows between 5 and 20 d in milk (DIM). Diagnosis of HYK was at a serum β-hydroxybutyrate (BHB) concentration ≥1.2 mmol/L. Concentrations of milk BHB and acetone were predicted by Fourier transform infrared spectrometry (Foss Analytical, Hillerød, Denmark). Thresholds of milk BHB and acetone were tested for diagnostic accuracy, and logistic models were built from continuous variables to predict HYK in primiparous and multiparous cows within breed. Linear models were constructed from continuous variables for primiparous and multiparous cows within breed that were 5 to 11 DIM or 12 to 20 DIM. Milk ketone body thresholds diagnosed HYK with 64.0 to 92.9% accuracy in Holsteins and 59.1 to 86.6% accuracy in Jerseys. Logistic models predicted HYK with 82.6 to 97.3% accuracy. Internally cross-validated multiple linear regression models diagnosed HYK of Holstein cows with 97.8% accuracy for primiparous and 83.3% accuracy for multiparous cows. Accuracy of Jersey models was 81.3% in primiparous and 83…
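
The threshold-testing step described above can be sketched directly: sweep candidate milk BHB cutoffs and score each against the serum BHB ≥ 1.2 mmol/L gold standard. All concentrations below are invented for illustration; the study's actual thresholds and data are not reproduced here.

```python
def threshold_accuracy(milk_bhb, serum_bhb, cutoff, gold=1.2):
    """Fraction of cows where the milk-BHB cutoff agrees with the serum diagnosis."""
    hits = sum((m >= cutoff) == (s >= gold) for m, s in zip(milk_bhb, serum_bhb))
    return hits / len(milk_bhb)

milk  = [0.05, 0.08, 0.15, 0.22, 0.04, 0.30, 0.12, 0.07]  # mmol/L (invented)
serum = [0.6,  0.9,  1.4,  1.8,  0.5,  2.1,  1.3,  0.8]   # mmol/L (invented)
for cutoff in (0.08, 0.10, 0.15):
    print(cutoff, threshold_accuracy(milk, serum, cutoff))
```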

12. Creep-fatigue life prediction for different heats of Type 304 stainless steel by linear-damage rule, strain-range partitioning method, and damage-rate approach

Maiya, P.S.

1978-07-01

The creep-fatigue life results for five different heats of Type 304 stainless steel at 593°C (1100°F), generated under push-pull conditions in the axial strain-control mode, are presented. The life predictions for the various heats based on the linear-damage rule, strain-range partitioning method, and damage-rate approach are discussed. The appropriate material properties required for computation of fatigue life are also included.
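
The linear-damage (Miner's) rule mentioned above sums damage fractions n_i/N_i over loading blocks and predicts failure when the total reaches 1; the strain-range partitioning and damage-rate approaches are refinements of this idea. A minimal sketch with illustrative cycle counts and design lives:

```python
def linear_damage(blocks):
    """Miner's rule: blocks is a list of (cycles_applied, cycles_to_failure)."""
    return sum(n / N for n, N in blocks)

# e.g. fatigue at two strain ranges plus a creep-hold contribution (invented)
blocks = [(1000, 10000), (500, 2000), (200, 1000)]
D = linear_damage(blocks)
print(round(D, 3), "failure predicted" if D >= 1.0 else "within predicted life")
```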

13. Persistent ectopic pregnancy after linear salpingotomy: a non-predictable complication to conservative surgery for tubal gestation

Lund, Claus Otto; Nilas, Lisbeth; Bangsgaard, Nannie

2002-01-01

…hCG levels had a low diagnostic sensitivity (0.38-0.66) and specificity (0.74-0.77) for predicting PEP. In multivariate logistic analysis, none of the following clinical variables were predictive of PEP: duration of surgery, laparoscopic approach, history of previous EP, history of previous lower abdominal...

14. A Warm-Started Homogeneous and Self-Dual Interior-Point Method for Linear Economic Model Predictive Control

Sokoler, Leo Emil; Skajaa, Anders; Frison, Gianluca

2013-01-01

…algorithm in MATLAB and its performance is analyzed based on a smart grid power management case study. Closed-loop simulations show that 1) our algorithm is significantly faster than state-of-the-art IPMs based on sparse linear algebra routines, and 2) warm-starting reduces the number of iterations...

15. Deliberate practice predicts performance throughout time in adolescent chess players and dropouts: A linear mixed models analysis.

de Bruin, A.B.H.; Smits, N.; Rikers, R.M.J.P.; Schmidt, H.G.

2008-01-01

In this study, the longitudinal relation between deliberate practice and performance in chess was examined using a linear mixed models analysis. The practice activities and performance ratings of young elite chess players, who were either in, or had dropped out of, the Dutch national chess training…

16. Predicting sediment sorption coefficients for linear alkylbenzenesulfonate congeners from polyacrylate-water partition coefficients at different salinities.

Rico Rico, A.; Droge, S.T.J.|info:eu-repo/dai/nl/304834017; Hermens, J.L.M.|info:eu-repo/dai/nl/069681384

2010-01-01

The effect of the molecular structure and the salinity on the sorption of the anionic surfactant linear alkylbenzenesulfonate (LAS) to marine sediment has been studied. The analysis of several individual LAS congeners in seawater and of one specific LAS congener at different dilutions of seawater…

17. Newborn length predicts early infant linear growth retardation and disproportionately high weight gain in a low-income population.

Berngard, Samuel Clark; Berngard, Jennifer Bishop; Krebs, Nancy F; Garcés, Ana; Miller, Leland V; Westcott, Jamie; Wright, Linda L; Kindem, Mark; Hambidge, K Michael

2013-12-01

Stunting is prevalent by the age of 6 months in the indigenous population of the Western Highlands of Guatemala. The objective of this study was to determine the time course and predictors of linear growth failure and weight-for-age in early infancy. One hundred and forty-eight term newborns had measurements of length and weight taken in their homes, repeated at 3 and 6 months. Maternal measurements were also obtained. Mean ± SD length-for-age Z-score (LAZ) declined from newborn -1.0 ± 1.01 to -2.20 ± 1.05 and -2.26 ± 1.01 at 3 and 6 months respectively. Stunting rates for newborn, 3 and 6 months were 47%, 53% and 56% respectively. A multiple regression model (R² = 0.64) demonstrated that the major predictor of LAZ at 3 months was newborn LAZ, with the other predictors being newborn weight-for-age Z-score (WAZ), gender and a maternal education × maternal age interaction. Because WAZ remained essentially constant and LAZ declined during the same period, weight-for-length Z-score (WLZ) increased from -0.44 to +1.28 from birth to 3 months. The more severe the linear growth failure, the greater WAZ was in proportion to LAZ. The primary conclusion is that impaired fetal linear growth is the major predictor of early infant linear growth failure, indicating that prevention needs to start with maternal interventions.

18. Applying psychological theories to evidence-based clinical practice: identifying factors predictive of placing preventive fissure sealants.

Bonetti, Debbie; Johnston, Marie; Clarkson, Jan E; Grimshaw, Jeremy; Pitts, Nigel B; Eccles, Martin; Steen, Nick; Thomas, Ruth; Maclennan, Graeme; Glidewell, Liz; Walker, Anne

2010-04-08

Psychological models are used to understand and predict behaviour in a wide range of settings, but have not been consistently applied to health professional behaviours, and the contribution of differing theories is not clear. This study explored the usefulness of a range of models to predict an evidence-based behaviour -- the placing of fissure sealants. Measures were collected by postal questionnaire from a random sample of general dental practitioners (GDPs) in Scotland. Outcomes were behavioural simulation (scenario decision-making), and behavioural intention. Predictor variables were from the Theory of Planned Behaviour (TPB), Social Cognitive Theory (SCT), Common Sense Self-regulation Model (CS-SRM), Operant Learning Theory (OLT), Implementation Intention (II), Stage Model, and knowledge (a non-theoretical construct). Multiple regression analysis was used to examine the predictive value of each theoretical model individually. Significant constructs from all theories were then entered into a 'cross theory' stepwise regression analysis to investigate their combined predictive value. Behavioural simulation - theory level variance explained was: TPB 31%; SCT 29%; II 7%; OLT 30%. Neither CS-SRM nor stage explained significant variance. In the cross theory analysis, habit (OLT), timeline acute (CS-SRM), and outcome expectancy (SCT) entered the equation, together explaining 38% of the variance. Behavioural intention - theory level variance explained was: TPB 30%; SCT 24%; OLT 58%, CS-SRM 27%. GDPs in the action stage had significantly higher intention to place fissure sealants. In the cross theory analysis, habit (OLT) and attitude (TPB) entered the equation, together explaining 68% of the variance in intention. The study provides evidence that psychological models can be useful in understanding and predicting clinical behaviour. Taking a theory-based approach enables the creation of a replicable methodology for identifying factors that may predict clinical behaviour

19. Applying psychological theories to evidence-based clinical practice: identifying factors predictive of placing preventive fissure sealants

Maclennan Graeme

2010-04-01

Full Text Available Abstract Background Psychological models are used to understand and predict behaviour in a wide range of settings, but have not been consistently applied to health professional behaviours, and the contribution of differing theories is not clear. This study explored the usefulness of a range of models to predict an evidence-based behaviour -- the placing of fissure sealants. Methods Measures were collected by postal questionnaire from a random sample of general dental practitioners (GDPs) in Scotland. Outcomes were behavioural simulation (scenario decision-making) and behavioural intention. Predictor variables were from the Theory of Planned Behaviour (TPB), Social Cognitive Theory (SCT), Common Sense Self-regulation Model (CS-SRM), Operant Learning Theory (OLT), Implementation Intention (II), Stage Model, and knowledge (a non-theoretical construct). Multiple regression analysis was used to examine the predictive value of each theoretical model individually. Significant constructs from all theories were then entered into a 'cross theory' stepwise regression analysis to investigate their combined predictive value. Results Behavioural simulation - theory level variance explained was: TPB 31%; SCT 29%; II 7%; OLT 30%. Neither CS-SRM nor stage explained significant variance. In the cross theory analysis, habit (OLT), timeline acute (CS-SRM), and outcome expectancy (SCT) entered the equation, together explaining 38% of the variance. Behavioural intention - theory level variance explained was: TPB 30%; SCT 24%; OLT 58%, CS-SRM 27%. GDPs in the action stage had significantly higher intention to place fissure sealants. In the cross theory analysis, habit (OLT) and attitude (TPB) entered the equation, together explaining 68% of the variance in intention. Summary The study provides evidence that psychological models can be useful in understanding and predicting clinical behaviour. Taking a theory-based approach enables the creation of a replicable methodology for…

20. The development of a practical and uncomplicated predictive equation to determine liver volume from simple linear ultrasound measurements of the liver

Childs, Jessie T.; Thoirs, Kerry A.; Esterman, Adrian J.

2016-01-01

This study sought to develop a practical and uncomplicated predictive equation that could accurately calculate liver volumes, using multiple simple linear ultrasound measurements combined with measurements of body size. Penalized (lasso) regression was used to develop a new model and compare it to the ultrasonic linear measurements currently used clinically. A Bland–Altman analysis showed that the large limits of agreement of the new model render it too inaccurate to be of clinical use for estimating liver volume per se, but it holds value in tracking disease progress or response to treatment over time in individuals, and is certainly substantially better as an indicator of overall liver size than the ultrasonic linear measurements currently being used clinically. Highlights: • A new model to calculate liver volumes from simple linear ultrasound measurements. • This model was compared to the linear measurements currently used clinically. • The new model holds value in tracking disease progress or response to treatment. • This model is better as an indicator of overall liver size.
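
Penalized (lasso) regression as used above can be sketched with cyclic coordinate descent and soft-thresholding. The two toy predictors stand in for linear ultrasound and body-size measurements and are invented; no intercept or standardization step is shown, so this is a bare-bones illustration of the shrinkage mechanism, not the study's fitted model.

```python
def soft_threshold(rho, lam):
    """Shrink rho toward zero by lam; the core lasso update."""
    if rho > lam:
        return rho - lam
    if rho < -lam:
        return rho + lam
    return 0.0

def lasso(X, y, lam, iters=200):
    """Cyclic coordinate descent for min (1/2n)||y - Xb||^2 + lam*||b||_1."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # correlation of feature j with the partial residual excluding j
            rho = sum(X[i][j] * (y[i] - sum(X[i][k] * beta[k]
                                            for k in range(p) if k != j))
                      for i in range(n)) / n
            zj = sum(X[i][j] ** 2 for i in range(n)) / n
            beta[j] = soft_threshold(rho, lam) / zj
    return beta

X = [[1.0, 0.5], [2.0, 1.1], [3.0, 1.4], [4.0, 2.2]]  # two collinear predictors
y = [2.1, 4.0, 6.2, 7.9]                               # roughly 2 * x1
print([round(b, 2) for b in lasso(X, y, lam=0.5)])     # -> [1.94, 0.0]
```

Note how the penalty drops the redundant second predictor entirely, which is the variable-selection behaviour that motivates using the lasso over ordinary least squares here.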

1. Binding free energy predictions of farnesoid X receptor (FXR) agonists using a linear interaction energy (LIE) approach with reliability estimation: application to the D3R Grand Challenge 2

Rifai, Eko Aditya; van Dijk, Marc; Vermeulen, Nico P. E.; Geerke, Daan P.

2018-01-01

Computational protein binding affinity prediction can play an important role in drug research but performing efficient and accurate binding free energy calculations is still challenging. In the context of phase 2 of the Drug Design Data Resource (D3R) Grand Challenge 2 we used our automated eTOX ALLIES approach to apply the (iterative) linear interaction energy (LIE) method and we evaluated its performance in predicting binding affinities for farnesoid X receptor (FXR) agonists. Efficiency was obtained by our pre-calibrated LIE models and molecular dynamics (MD) simulations at the nanosecond scale, while predictive accuracy was obtained for a small subset of compounds. Using our recently introduced reliability estimation metrics, we could classify predictions with higher confidence by featuring an applicability domain (AD) analysis in combination with protein-ligand interaction profiling. The outcomes of and agreement between our AD and interaction-profile analyses to distinguish and rationalize the performance of our predictions highlighted the relevance of sufficiently exploring protein-ligand interactions during training and it demonstrated the possibility to quantitatively and efficiently evaluate if this is achieved by using simulation data only.
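
The LIE method referenced above has a simple closed form: the binding free energy is estimated from differences in MD ensemble-averaged ligand-surroundings interaction energies between the bound and free states, ΔG ≈ α·Δ⟨V_vdW⟩ + β·Δ⟨V_el⟩ + γ. The coefficients and energies below are illustrative stand-ins, not the study's calibrated values.

```python
def lie_dg(vdw_bound, vdw_free, el_bound, el_free,
           alpha=0.18, beta=0.5, gamma=0.0):
    """Linear interaction energy estimate of binding free energy (kcal/mol)."""
    return alpha * (vdw_bound - vdw_free) + beta * (el_bound - el_free) + gamma

# ensemble-averaged ligand-surroundings interaction energies (invented)
dg = lie_dg(vdw_bound=-45.0, vdw_free=-20.0, el_bound=-30.0, el_free=-22.0)
print(round(dg, 2))  # -> -8.5
```

The α, β, γ parameters are what the pre-calibrated models mentioned in the abstract supply; the applicability-domain analysis then flags compounds for which that calibration may not transfer.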

2. Linear Water Waves

Kuznetsov, N.; Maz'ya, V.; Vainberg, B.

2002-08-01

This book gives a self-contained and up-to-date account of mathematical results in the linear theory of water waves. The study of waves has many applications, including the prediction of behavior of floating bodies (ships, submarines, tension-leg platforms etc.), the calculation of wave-making resistance in naval architecture, and the description of wave patterns over bottom topography in geophysical hydrodynamics. The first section deals with time-harmonic waves. Three linear boundary value problems serve as the approximate mathematical models for these types of water waves. The next section uses a plethora of mathematical techniques in the investigation of these three problems. The techniques used in the book include integral equations based on Green's functions, various inequalities between the kinetic and potential energy and integral identities which are indispensable for proving the uniqueness theorems. The so-called inverse procedure is applied to constructing examples of non-uniqueness, usually referred to as 'trapped modes.'

3. Applying psychological theories to evidence-based clinical practice: Identifying factors predictive of managing upper respiratory tract infections without antibiotics

Glidewell Elizabeth

2007-08-01

Abstract Background Psychological models can be used to understand and predict behaviour in a wide range of settings. However, they have not been consistently applied to health professional behaviours, and the contribution of differing theories is not clear. The aim of this study was to explore the usefulness of a range of psychological theories to predict health professional behaviour relating to management of upper respiratory tract infections (URTIs) without antibiotics. Methods Psychological measures were collected by postal questionnaire survey from a random sample of general practitioners (GPs) in Scotland. The outcome measures were clinical behaviour (using antibiotic prescription rates as a proxy indicator), behavioural simulation (scenario-based decisions to manage URTI with or without antibiotics) and behavioural intention (general intention to manage URTI without antibiotics). Explanatory variables were the constructs within the following theories: Theory of Planned Behaviour (TPB), Social Cognitive Theory (SCT), Common Sense Self-Regulation Model (CS-SRM), Operant Learning Theory (OLT), Implementation Intention (II), Stage Model (SM), and knowledge (a non-theoretical construct). For each outcome measure, multiple regression analysis was used to examine the predictive value of each theoretical model individually. Following this 'theory level' analysis, a 'cross theory' analysis was conducted to investigate the combined predictive value of all significant individual constructs across theories. Results All theories were tested, but only significant results are presented. When predicting behaviour, at the theory level, OLT explained 6% of the variance and, in a cross theory analysis, OLT 'evidence of habitual behaviour' also explained 6%. When predicting behavioural simulation, at the theory level, the proportion of variance explained was: TPB, 31%; SCT, 26%; II, 6%; OLT, 24%. GPs who reported having already decided to change their management to

4. A comparative study between the use of artificial neural networks and multiple linear regression for caustic concentration prediction in a stage of alumina production

Giovanni Leopoldo Rozza

2015-09-01

With the world becoming a global village, enterprises continuously seek to optimize their internal processes to hold or improve their competitiveness and make better use of natural resources. In this context, decision support tools are an underlying requirement. Such tools are helpful for predicting operational issues, avoiding cost increases, loss of productivity, work-related accident leaves or environmental disasters. This paper focuses on the prediction of spent liquor caustic concentration in the Bayer process for alumina production. Measuring caustic concentration is essential to keep it at expected levels, otherwise quality issues might arise. The organization requests caustic concentration from the chemical analysis laboratory once a day; such information is not enough to issue preventive actions against process inefficiencies, which become known only after a new measurement on the next day. This paper therefore proposes a mathematical model, using Multiple Linear Regression and Artificial Neural Network techniques, to predict the spent liquor's caustic concentration, so that preventive actions can occur in real time. The models were built using a software tool for numerical computation (MATLAB) and a statistical analysis software package (SPSS). The model outputs (predicted caustic concentration) were compared with the real lab data. We found evidence suggesting superior results with the use of Artificial Neural Networks over the Multiple Linear Regression model. The results demonstrate that replacing laboratory analysis with the forecasting model to support technical staff in decision making could be feasible.

5. Using multiple linear regression and physicochemical changes of amino acid mutations to predict antigenic variants of influenza A/H3N2 viruses.

Cui, Haibo; Wei, Xiaomei; Huang, Yu; Hu, Bin; Fang, Yaping; Wang, Jia

2014-01-01

Among human influenza viruses, strain A/H3N2 accounts for over a quarter of a million deaths annually. Antigenic variants of these viruses often render current vaccinations ineffective and lead to repeated infections. In this study, a computational model was developed to predict antigenic variants of the A/H3N2 strain. First, 18 critical antigenic amino acids in the hemagglutinin (HA) protein were recognized using a scoring method combining the phi (ϕ) coefficient and information entropy. Next, a prediction model was developed by integrating the multiple linear regression method with eight types of physicochemical changes in critical amino acid positions. When compared with three other known models, our prediction model achieved the best performance not only on the training dataset but also on the commonly-used testing dataset composed of 31878 antigenic relationships of the H3N2 influenza virus.
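The two scoring ingredients named above, the phi coefficient and information entropy, are standard quantities and can be computed directly. The 2×2 table and the amino-acid counts below are invented examples, not the study's data.

```python
import numpy as np

def phi_coefficient(table):
    """Phi coefficient of association for a 2x2 table [[a, b], [c, d]]."""
    (a, b), (c, d) = table
    denom = np.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return (a * d - b * c) / denom if denom else 0.0

def shannon_entropy(counts):
    """Shannon entropy (bits) of an empirical count distribution."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log2(p)).sum())

# hypothetical 2x2 table: mutation present/absent at an HA position vs.
# antigenic-variant / non-variant outcome (counts are invented)
table = [[30, 5], [8, 27]]
phi = phi_coefficient(table)         # association strength for the position
H = shannon_entropy([12, 4, 3, 1])   # amino-acid diversity at the position
print(phi, H)
```

Positions scoring high on both association and diversity are the kind the study's combined criterion would flag as antigenically critical.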

6. Applying a new criterion to predict glass forming alloys in the Zr–Ni–Cu ternary system

Déo, L.P., E-mail: leonardopratavieira@gmail.com [Universidade de São Paulo, EESC, SMM - Av. Trabalhador São Carlense, 400 – São Carlos, SP 13566-590 (Brazil); Mendes, M.A.B., E-mail: marcio.andreato@gmail.com [Universidade Federal de São Carlos, DEMa - Rod. Washington Luiz, Km 235 – São Carlos, SP 13565-905 (Brazil); Costa, A.M.S., E-mail: alexmatos1980@gmail.com [Universidade de São Paulo, DEMAR, EEL – Polo Urbo-Industrial Gleba AI-6, s/n – Lorena, SP 12600-970 (Brazil); Campos Neto, N.D., E-mail: nelsonddcn@gmail.com [Universidade de São Paulo, EESC, SMM - Av. Trabalhador São Carlense, 400 – São Carlos, SP 13566-590 (Brazil); Oliveira, M.F. de, E-mail: falcao@sc.usp.br [Universidade de São Paulo, EESC, SMM - Av. Trabalhador São Carlense, 400 – São Carlos, SP 13566-590 (Brazil)

2013-03-15

Highlights: ► Calculation to predict and select the glass forming ability (GFA) of metallic alloys in the Zr–Ni–Cu system. ► Good correlation between theoretical and experimental GFA samples. ► Combination of X-ray diffraction (XRD) and differential scanning calorimetry (DSC) techniques mainly to characterize the samples. ► Oxygen impurity dramatically reduced the GFA. ► The selection criterion used opens the possibility to obtain new amorphous alloys, reducing the experimental procedures of trial and error. -- Abstract: A new criterion has been recently proposed to predict and select the glass forming ability (GFA) of metallic alloys. It was found that the critical cooling rate for glass formation (R_c) correlates well with a proper combination of two factors, the minimum topological instability (λ_min) and the thermodynamic parameter (Δh). The λ_min criterion is based on the concept of topological instability of stable crystalline structures, and Δh depends on the average work function difference (Δϕ) and the average electron density difference (Δn_ws^(1/3)) among the constituent elements of the alloy. In the present work, the selection criterion was applied to the Zr–Ni–Cu system and its predictability was analyzed experimentally. Ribbon-shaped and splat-shaped samples were produced by melt-spinning and splat-cooling techniques, respectively. The crystallization content and behavior were analyzed by X-ray diffraction (XRD) and differential scanning calorimetry (DSC), respectively. The results showed a good correlation between the theoretical GFA values and the amorphous phase percentages found in different alloy compositions.

7. Artificial neural networks applied for soil class prediction in mountainous landscape of the Serra do Mar¹

Braz Calderano Filho

2014-12-01

Soil information is needed for managing the agricultural environment. The aim of this study was to apply artificial neural networks (ANNs) for the prediction of soil classes, using orbital remote sensing products, terrain attributes derived from a digital elevation model, and local geology information as data sources. This approach to digital soil mapping was evaluated in an area with a high degree of lithologic diversity in the Serra do Mar. The neural network simulator used in this study was JavaNNS, with the backpropagation learning algorithm. For soil class prediction, different combinations of the selected discriminant variables were tested: elevation, declivity, aspect, curvature, plan curvature, profile curvature, topographic index, solar radiation, LS topographic factor, local geology information, and clay mineral indices, iron oxides and the normalized difference vegetation index (NDVI) derived from an image of a Landsat-7 Enhanced Thematic Mapper Plus (ETM+) sensor. Among the tested sets, the best results were obtained when all discriminant variables were associated with geological information (overall accuracy 93.2-95.6%, Kappa index 0.924-0.951, for set 13). Excluding the variable profile curvature (set 12), overall accuracy ranged from 93.9 to 95.4% and the Kappa index from 0.932 to 0.948. The maps based on the neural network classifier were consistent and similar to conventional soil maps drawn for the study area, although with more spatial detail. The results show the potential of ANNs for soil class prediction in mountainous areas with lithological diversity.

8. Use of non-linear mixed-effects modelling and regression analysis to predict the number of somatic coliphages by plaque enumeration after 3 hours of incubation.

Mendez, Javier; Monleon-Getino, Antonio; Jofre, Juan; Lucena, Francisco

2017-10-01

The present study aimed to establish the kinetics of the appearance of coliphage plaques using the double agar layer titration technique, to evaluate the feasibility of using traditional coliphage plaque forming unit (PFU) enumeration as a rapid quantification method. Repeated measurements of the appearance of plaques of coliphages titrated according to ISO 10705-2 at different times were analysed using non-linear mixed-effects regression to determine the most suitable model of their appearance kinetics. Although this model is adequate, to simplify its applicability two linear models were developed to predict the numbers of coliphages reliably, using the PFU counts determined by the ISO method after only 3 hours of incubation. One linear model, for counts between 4 and 26 PFU after 3 hours, had the fit (1.48 × Counts_3h + 1.97); the other, for counts >26 PFU, had the fit (1.18 × Counts_3h + 2.95). If the number of plaques detected was <4 PFU after 3 hours, we recommend incubation for (18 ± 3) hours. The study indicates that the traditional coliphage plating technique has reasonable potential to provide results in a single working day without the need to invest in additional laboratory equipment.
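The two reported linear fits can be wrapped into a single piecewise predictor. The coefficients are the ones quoted in the abstract; the exact handling of the lower boundary (below 4 PFU) is an assumption, since the abstract only recommends full overnight incubation there.

```python
def predict_overnight_pfu(counts_3h):
    """Estimate the standard (18 +/- 3) h PFU count from a 3 h reading,
    using the two linear fits quoted in the abstract. The behaviour below
    4 PFU is an assumption: the authors recommend full incubation there."""
    if counts_3h > 26:
        return 1.18 * counts_3h + 2.95
    if counts_3h >= 4:
        return 1.48 * counts_3h + 1.97
    return None  # too few plaques at 3 h: incubate for (18 +/- 3) h instead

print(predict_overnight_pfu(10))  # 1.48 * 10 + 1.97 = 16.77
print(predict_overnight_pfu(40))  # 1.18 * 40 + 2.95 = 50.15
```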

9. A linear model to predict with a multi-spectral radiometer the amount of nitrogen in winter wheat

Reyniers, M.; Walvoort, D.J.J.; Baardemaaker, De J.

2006-01-01

The objective was to develop an optimal vegetation index (VIopt) to predict nitrogen in the wheat crop (kg[N] ha-1) with a multi-spectral radiometer. Optimality means that nitrogen in the crop can be measured accurately in the field during the growing season. It also means that the measurements are

10. The established mega watt linear programming-based optimal power flow model applied to the real power 56-bus system in eastern province of Saudi Arabia

Al-Muhawesh, Tareq A.; Qamber, Isa S.

2008-01-01

A current trend in the electric power industry is deregulation around the world. One of the questions that arises during any deregulation process is: where will the future generation expansion be? In the present paper, the study concentrates on the wheeling computational method as part of the mega watt (MW) linear programming-based optimal power flow (LP-based OPF) method. To observe the effects of power wheeling on power system operations, the paper uses the Linear INteractive and Discrete Optimizer (LINDO) software, a powerful tool for solving linear programming problems, to evaluate the influence of power wheeling. The paper also uses the optimization tool to solve the economic generation dispatch and transmission management problems. The transmission line flow was taken into consideration, with some constraints discussed in this paper. The complete linear model of the MW LP-based OPF, which can be used to identify potential areas for future generation in any utility, is proposed. The paper also explains the available economic load dispatch (ELD) as the basic optimization tool to dispatch the power system. It can be concluded from the present study that accuracy is expensive in terms of money and time, and in a competitive market enough accuracy is needed without paying much
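The core of an MW LP-based dispatch can be sketched as a textbook linear program. The three generators, their costs, limits, and the 250 MW demand below are hypothetical; real OPF adds network (line-flow) constraints, which are omitted here.

```python
import numpy as np
from scipy.optimize import linprog

# Toy MW economic dispatch as a linear program: three generators with
# linear costs ($/MWh) and capacity limits must jointly meet 250 MW of
# demand. All numbers are invented; no network constraints are modelled.
costs = [20.0, 25.0, 32.0]               # $/MWh for generators G1..G3
bounds = [(0, 120), (0, 100), (0, 150)]  # MW capacity limits
A_eq = [[1.0, 1.0, 1.0]]                 # total generation balances demand
b_eq = [250.0]

res = linprog(c=costs, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print(res.x, res.fun)  # cheapest generators are loaded first
```

The solver loads G1 and G2 to their limits and covers the remainder with G3, the classic merit-order result that ELD formalizes.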

11. The AMC Linear Disability Score (ALDS): a cross-sectional study with a new generic instrument to measure disability applied to patients with peripheral arterial disease

Met, R.; Reekers, J.A.; Koelemay, M.J.W.; Legemate, D.A.; de Haan, R.J.

2009-01-01

Background: The AMC Linear Disability Score (ALDS) is a calibrated generic itembank to measure the level of physical disability in patients with chronic diseases. The ALDS has already been validated in different patient populations suffering from chronic diseases. The aim of this study was to assess

12. An analysis of the relationship between the linear hammer speed and the thrower applied forces during the hammer throw for male and female throwers.

Brice, Sara M; Ness, Kevin F; Rosemond, Doug

2011-09-01

The purpose of this study was to investigate the relationship between the cable force and linear hammer speed in the hammer throw and to identify how the magnitude and direction of the cable force affects the fluctuations in linear hammer speed. Five male (height: 1.88 +/- 0.06 m; body mass: 106.23 +/- 4.83 kg) and five female (height: 1.69 +/- 0.05 m; body mass: 101.60 +/- 20.92 kg) throwers participated and were required to perform 10 throws each. The hammer's linear velocity and the cable force and its tangential component were calculated via hammer head positional data. As expected, a strong correlation was observed between decreases in the linear hammer speed and decreases in the cable force (normalised for hammer weight). A strong correlation was also found to exist between the angle by which the cable force lags the radius of rotation at its maximum (when tangential force is at its most negative) and the size of the decreases in hammer speed. These findings indicate that the most effective way to minimise the effect of the negative tangential force is to reduce the size of the lag angle.

13. The Effects Of Gender, Engineering Identification, and Engineering Program Expectancy On Engineering Career Intentions: Applying Hierarchical Linear Modeling (HLM) In Engineering Education Research

Tendhar, Chosang; Paretti, Marie C.; Jones, Brett D.

2017-01-01

This study had three purposes, and four hypotheses were tested. The three purposes were: (1) To use hierarchical linear modeling (HLM) to investigate whether students' perceptions of their engineering career intentions changed over time; (2) To use HLM to test the effects of gender, engineering identification (the degree to which an individual values a…
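A two-level HLM growth model of the kind described, repeated intention measures nested within students, can be fit with statsmodels' mixed linear model. The data below are simulated with invented effect sizes; only the model structure reflects the study's design.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_students, n_waves = 60, 4
sid = np.repeat(np.arange(n_students), n_waves)
time = np.tile(np.arange(n_waves), n_students)
gender = np.repeat(rng.integers(0, 2, n_students), n_waves)
u = np.repeat(rng.normal(0.0, 1.0, n_students), n_waves)  # random intercepts
# simulated career intentions: growth of 0.3 per wave (invented numbers)
intent = 4.0 + 0.3 * time + 0.4 * gender + u + rng.normal(0.0, 0.5, sid.size)

df = pd.DataFrame({"sid": sid, "time": time, "gender": gender,
                   "intent": intent})
# Level 1: repeated measures over time; Level 2: students (random intercept)
model = smf.mixedlm("intent ~ time + gender", df, groups=df["sid"]).fit()
print(model.params[["time", "gender"]])
```

The `time` coefficient is the average within-student growth in intentions, the quantity the study's first purpose targets.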

14. Non-linear model predictive supervisory controller for building, air handling unit with recuperator and refrigeration system with heat waste recovery

Minko, Tomasz; Wisniewski, Rafal; Bendtsen, Jan Dimon

2016-01-01

… The retrieved excess heat can be stored in a water tank; for this purpose, charging and discharging water loops have been designed. We present the non-linear model of the system described above and a non-linear model predictive supervisory controller that, according to the received price signal, occupancy information and ambient temperature, minimizes the operation cost of the whole system and distributes set points to local controllers of the supermarket's subsystems. We find that when reliable information about the high-price period is available, it is profitable to use the refrigeration system to generate heat during the low-price period, store it, and use it to substitute for the conventional heater during the high-price period.

15. A Fisher’s Criterion-Based Linear Discriminant Analysis for Predicting the Critical Values of Coal and Gas Outbursts Using the Initial Gas Flow in a Borehole

Xiaowei Li

2017-01-01

The risk of coal and gas outbursts can be predicted using a method that is linear and continuous and based on the initial gas flow in the borehole (IGFB); this method is significantly superior to the traditional point prediction method. Acquiring accurate critical values is the key to ensuring accurate predictions. Based on an ideal rock cross-cut coal uncovering model, an IGFB measurement device was developed. The present study measured the initial gas flow over 3 min in a 1 m long borehole with a diameter of 42 mm in the laboratory. A total of 48 sets of data were obtained. These data were fuzzy and chaotic. Fisher's discrimination method was able to transform these spatial data, which were multidimensional due to the factors influencing the IGFB, into a one-dimensional function and determine its critical value. Then, after processing the data into a normal distribution, the critical values of the outbursts were analyzed using linear discriminant analysis with Fisher's criterion. The weak and strong outbursts had critical values of 36.63 L and 80.85 L, respectively, and the accuracy of the back-discriminant analysis for the weak and strong outbursts was 94.74% and 92.86%, respectively. Eight outburst tests were simulated in the laboratory; the reverse verification accuracy was 100%, and the accuracy of the critical values was verified.
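The projection step the abstract describes, collapsing multidimensional measurements onto a one-dimensional Fisher discriminant and reading a critical value off it, can be sketched directly. The two classes and their feature distributions below are synthetic stand-ins for the weak/strong outburst data.

```python
import numpy as np

rng = np.random.default_rng(3)
# synthetic two-class data standing in for IGFB-derived features (invented)
weak = rng.normal([30.0, 5.0], [4.0, 1.0], size=(40, 2))
strong = rng.normal([80.0, 9.0], [6.0, 1.5], size=(40, 2))

# Fisher's criterion: choose w maximizing between-class scatter over
# within-class scatter; w is proportional to Sw^-1 (m1 - m0)
m0, m1 = weak.mean(axis=0), strong.mean(axis=0)
Sw = np.cov(weak.T) * (len(weak) - 1) + np.cov(strong.T) * (len(strong) - 1)
w = np.linalg.solve(Sw, m1 - m0)

# project onto one dimension; the midpoint of the projected class means
# serves as the critical (threshold) value
critical = 0.5 * (m0 @ w + m1 @ w)
scores = np.vstack([weak, strong]) @ w
labels = np.array([0] * len(weak) + [1] * len(strong))
acc = ((scores > critical).astype(int) == labels).mean()
print(f"critical value: {critical:.3f}, back-discrimination acc: {acc:.2%}")
```

The back-discrimination accuracy computed here plays the same role as the 94.74%/92.86% figures reported in the study.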

16. Evaluation of heat transfer mathematical models and multiple linear regression to predict the inside variables in semi-solar greenhouse

M Taki

2017-05-01

Introduction Controlling the greenhouse microclimate not only influences the growth of plants, but is also critical to the spread of diseases inside the greenhouse. The microclimate parameters were inside air, greenhouse roof and soil temperature, relative humidity and solar radiation intensity. Predicting the microclimate conditions inside a greenhouse and enabling the use of automatic control systems are the two main objectives of a greenhouse climate model. The microclimate inside a greenhouse can be predicted by conducting experiments or by simulation. Static and dynamic models are used for this purpose as a function of the meteorological conditions and the parameters of the greenhouse components. Several studies up to 2015 simulated and predicted the inside variables in different greenhouse structures; however, simulation often struggles to predict the inside climate of a greenhouse, and the errors reported in the literature are high. The main objective of this paper is to compare heat transfer and regression models for predicting inside air and roof temperature in a semi-solar greenhouse at Tabriz University. Materials and Methods In this study, a semi-solar greenhouse was designed and constructed in the North-West of Iran, in Azerbaijan Province (geographical location 38°10′ N and 46°18′ E, at an elevation of 1364 m above sea level). The shape and orientation of the greenhouse were selected from common greenhouse shapes so as to receive maximum solar radiation throughout the year. An internal thermal screen and a cement north wall were used to store heat and prevent heat loss during the cold period of the year; hence we call this structure a 'semi-solar' greenhouse. It was covered with glass (4 mm thickness) and occupies a floor area of approximately 15.36 m2 and a volume of 26.4 m3. The orientation of the greenhouse was East-West, perpendicular to the direction of the prevailing wind

17. Analysis and monitoring of energy security and prediction of indicator values using conventional non-linear mathematical programming

Elena Vital'evna Bykova

2011-09-01

This paper describes the concept of energy security and a system of indicators for its monitoring. The indicator system includes more than 40 parameters that reflect the structure and state of the fuel and energy complex sectors (fuel, electricity, and heat and power), and takes into account economic, environmental and social aspects. A brief description of the structure of the computer system used to monitor and analyze energy security is given. The complex contains informational, analytical and calculation modules, and provides applications for forecasting and modeling energy scenarios, modeling threats and determining levels of energy security. Its application to predicting indicator values, and the methods developed for it, are described. This paper presents a method based on conventional non-linear mathematical programming, needed to address several energy problems and, in particular, the problem of predicting security indicator values. An example of its use, and the implementation of this method in the application 'Prognosis', is also given.

18. High peer popularity longitudinally predicts adolescent health risk behavior, or does it?: an examination of linear and quadratic associations.

Prinstein, Mitchell J; Choukas-Bradley, Sophia C; Helms, Sarah W; Brechwald, Whitney A; Rancourt, Diana

2011-10-01

In contrast to prior work, recent theory suggests that high, not low, levels of adolescent peer popularity may be associated with health risk behavior. This study examined (a) whether popularity may be uniquely associated with cigarette use, marijuana use, and sexual risk behavior, beyond the predictive effects of aggression; (b) whether the longitudinal association between popularity and health risk behavior may be curvilinear; and (c) gender moderation. A total of 336 adolescents, initially in 10-11th grades, reported cigarette use, marijuana use, and number of sexual intercourse partners at two time points 18 months apart. Sociometric peer nominations were used to examine popularity and aggression. Longitudinal quadratic effects and gender moderation suggest that both high and low levels of popularity predict some, but not all, health risk behaviors. New theoretical models can be useful for understanding the complex manner in which health risk behaviors may be reinforced within the peer context.

19. Evaluation of prediction capability, robustness, and sensitivity in non-linear landslide susceptibility models, Guantánamo, Cuba

Melchiorre, C.; Castellanos Abella, E. A.; van Westen, C. J.; Matteucci, M.

2011-04-01

This paper describes a procedure for landslide susceptibility assessment based on artificial neural networks, and focuses on the estimation of the prediction capability, robustness, and sensitivity of susceptibility models. The study is carried out in the Guantanamo Province of Cuba, where 186 landslides were mapped using photo-interpretation. Twelve conditioning factors were mapped, including geomorphology, geology, soils, land use, slope angle, slope direction, internal relief, drainage density, distance from roads and faults, rainfall intensity, and ground peak acceleration. A methodology was used that subdivided the database into 3 subsets. A training set was used for updating the weights. A validation set was used to stop the training procedure when the network started losing generalization capability, and a test set was used to calculate the performance of the network. A 10-fold cross-validation was performed in order to show that the results are repeatable. The prediction capability, the robustness analysis, and the sensitivity analysis were tested on 10 mutually exclusive datasets. The results show that by means of artificial neural networks it is possible to obtain models with high prediction capability and high robustness, and that an exploration of the effect of the individual variables is possible, even if they are considered as a black-box model.

20. Patient-specific non-linear finite element modelling for predicting soft organ deformation in real-time: application to non-rigid neuroimage registration.

Wittek, Adam; Joldes, Grand; Couton, Mathieu; Warfield, Simon K; Miller, Karol

2010-12-01

Long computation times of non-linear (i.e. accounting for geometric and material non-linearity) biomechanical models have been regarded as one of the key factors preventing application of such models in predicting organ deformation for image-guided surgery. This contribution presents real-time patient-specific computation of the deformation field within the brain for six cases of brain shift induced by craniotomy (i.e. surgical opening of the skull) using specialised non-linear finite element procedures implemented on a graphics processing unit (GPU). In contrast to commercial finite element codes that rely on an updated Lagrangian formulation and implicit integration in time domain for steady state solutions, our procedures utilise the total Lagrangian formulation with explicit time stepping and dynamic relaxation. We used patient-specific finite element meshes consisting of hexahedral and non-locking tetrahedral elements, together with realistic material properties for the brain tissue and appropriate contact conditions at the boundaries. The loading was defined by prescribing deformations on the brain surface under the craniotomy. Application of the computed deformation fields to register (i.e. align) the preoperative and intraoperative images indicated that the models very accurately predict the intraoperative deformations within the brain. For each case, computing the brain deformation field took less than 4 s using an NVIDIA Tesla C870 GPU, which is two orders of magnitude reduction in computation time in comparison to our previous study in which the brain deformation was predicted using a commercial finite element solver executed on a personal computer. Copyright © 2010 Elsevier Ltd. All rights reserved.

1. Predictive performance for population models using stochastic differential equations applied on data from an oral glucose tolerance test.

Møller, Jonas B; Overgaard, Rune V; Madsen, Henrik; Hansen, Torben; Pedersen, Oluf; Ingwersen, Steen H

2010-02-01

Several articles have investigated stochastic differential equations (SDEs) in PK/PD models, but few have quantitatively investigated the benefits to the predictive performance of models based on real data. Estimation of first-phase insulin secretion, which reflects beta-cell function, using models of the OGTT is a difficult problem in need of further investigation. The present work aimed at investigating the power of SDEs to predict the first-phase insulin secretion (AIR(0-8)) in the IVGTT based on parameters obtained from the minimal model of the OGTT, published by Breda et al. (Diabetes 50(1):150-158, 2001). In total 174 subjects underwent both an OGTT and a tolbutamide-modified IVGTT. Estimation of parameters in the oral minimal model (OMM) was performed using the FOCE method in NONMEM VI on insulin and C-peptide measurements. The suggested SDE models were based on a continuous AR(1) process, i.e. the Ornstein-Uhlenbeck process, and the extended Kalman filter was implemented in order to estimate the parameters of the models. Inclusion of the Ornstein-Uhlenbeck (OU) process improved the description of the variation in the data, as measured by the autocorrelation function (ACF) of one-step prediction errors. A main result was that application of SDE models improved the correlation between the individual first-phase indexes obtained from the OGTT and AIR(0-8) (r = 0.36 to r = 0.49 and r = 0.32 to r = 0.47 with C-peptide and insulin measurements, respectively). In addition to the increased correlation, the indexes obtained using the SDE models also more correctly reflected the properties of the first-phase indexes obtained from the IVGTT. In general it is concluded that the presented SDE approach not only caused the autocorrelation of errors to decrease but also improved the estimation of clinical measures obtained from the glucose tolerance tests. Since the estimation time of the extended models was not heavily increased compared to the basic models, the applied method
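The link the abstract relies on, that the continuous Ornstein-Uhlenbeck process sampled on a regular grid is exactly a discrete AR(1) process, can be verified numerically. The parameter values below are arbitrary illustration choices.

```python
import numpy as np

rng = np.random.default_rng(4)
theta, sigma, dt, n = 1.0, 0.5, 0.1, 20000  # arbitrary illustrative values

# The Ornstein-Uhlenbeck SDE  dX = -theta * X dt + sigma dW, observed at
# spacing dt, is an AR(1) process with coefficient phi = exp(-theta * dt)
phi = np.exp(-theta * dt)
noise_sd = sigma * np.sqrt((1.0 - phi ** 2) / (2.0 * theta))
x = np.empty(n)
x[0] = 0.0
for t in range(1, n):
    x[t] = phi * x[t - 1] + noise_sd * rng.normal()

# the empirical lag-1 autocorrelation should approach phi
lag1 = np.corrcoef(x[:-1], x[1:])[0, 1]
print(f"lag-1 autocorrelation: {lag1:.3f} (theory: {phi:.3f})")
```

This is why adding the OU term lets the model absorb the autocorrelation of one-step prediction errors that a purely white-noise residual cannot.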

2. Development of a predictive model for the distribution coefficient (Kd) of 137Cs and 60Co in marine sediments using multiple linear regression analysis

Kumar, Ajay; Ravi, P.M.; Guneshwar, S.L.; Rout, Sabyasachi; Mishra, Manish K.; Pulhani, Vandana; Tripathi, R.M.

2018-01-01

Numerous common methods (the batch laboratory method, the column laboratory method, the field-batch method, field modeling, and the Koc method) are used frequently for the determination of Kd values. Recently, multiple regression models have come to be considered the best estimates for predicting the Kd of radionuclides in the environment. It is also a well-known fact that the Kd value is highly influenced by the physico-chemical properties of the sediment. Due to the significant variability in the influencing parameters, measured Kd values can range over several orders of magnitude under different environmental conditions. The aim of this study is to develop a predictive model for the Kd values of 137Cs and 60Co based on sediment properties, using multiple linear regression analysis

3. Deliberate practice predicts performance over time in adolescent chess players and drop-outs: a linear mixed models analysis.

de Bruin, Anique B H; Smits, Niels; Rikers, Remy M J P; Schmidt, Henk G

2008-11-01

In this study, the longitudinal relation between deliberate practice and performance in chess was examined using a linear mixed models analysis. The practice activities and performance ratings of young elite chess players, who were either in, or had dropped out of, the Dutch national chess training, were analysed since they had started playing chess seriously. The results revealed that deliberate practice (i.e. serious chess study alone and serious chess play) strongly contributed to chess performance. The influence of deliberate practice was not only observable in current performance, but also over chess players' careers. Moreover, although the drop-outs' chess ratings developed more slowly over time, both the persistent and drop-out chess players benefited to the same extent from investments in deliberate practice. Finally, the effect of gender on chess performance proved to be much smaller than the effect of deliberate practice. This study provides longitudinal support for the monotonic benefits assumption of deliberate practice, by showing that over chess players' careers, deliberate practice has a significant effect on performance, and to the same extent for chess players of different ultimate performance levels. The results of this study are not in line with the critique raised against deliberate practice theory, namely that the factors deliberate practice and talent could be confounded.

4. Towards the prediction of multiple necking during dynamic extension of round bar: linear stability approach versus finite element calculations

Maï, S El; Petit, J; Mercier, S; Molinari, A

2014-01-01

The fragmentation of structures subjected to dynamic loading is a matter of interest for civil industries as well as for Defence institutions. Dynamic expansions of structures, such as cylinders or rings, have been performed to obtain crucial information on fragment distributions. Many authors have proposed to capture by FEA the experimental distribution of fragment sizes by introducing a perturbation into the FE model. Stability and bifurcation analyses have also been proposed to describe the evolution of the perturbation growth rate. In the present contribution, the multiple necking of a round bar under dynamic tensile loading is analysed by the FE method. A perturbation of the initial flow stress is introduced into the numerical model to trigger instabilities. The onset time and the dominant mode of necking have been characterized precisely and show power-law evolutions with the loading velocity and, more moderately, with the amplitudes and cell sizes of the perturbations. In the second part of the paper, the development of a linear stability analysis and the use of salient criteria in terms of the growth rate of perturbations enable comparisons with the numerical results. A good correlation in terms of onset time of instabilities and number of necks is shown.

5. Full radius linear and nonlinear gyrokinetic simulations for tokamaks and stellarators: Zonal flows, applied E x B flows, trapped electrons and finite beta

Villard, L.; Allfrey, S.J.; Bottino, A.

2003-01-01

The aim of this paper is to report on recent advances made on global gyrokinetic simulations of Ion Temperature Gradient modes (ITG) and other microinstabilities. The nonlinear development and saturation of ITG modes and the role of E x B zonal flows are studied with a global nonlinear δf formulation that retains parallel nonlinearity and thus allows for a check of the energy conservation property as a means to verify the quality of the numerical simulation. Due to an optimised loading technique the conservation property is satisfied with an unprecedented quality well into the nonlinear stage. The zonal component of the perturbation establishes a quasi-steady state with regions of ITG suppression, strongly reduced radial energy flux and steepened effective temperature profile alternating with regions of higher ITG mode amplitudes, larger radial energy flux and flattened effective temperature profile. A semi-Lagrangian approach free of statistical noise is proposed as an alternative to the nonlinear δf formulation. An ASDEX-Upgrade experiment with an Internal Transport Barrier (ITB) is analysed with a global gyrokinetic code that includes trapped electron dynamics. The weakly destabilizing effect of trapped electron dynamics on ITG modes in an axisymmetric bumpy configuration modelling W7-X is shown in global linear simulations that retain the full electron dynamics. Finite β effects on microinstabilities are investigated with a linear global spectral electromagnetic gyrokinetic formulation. The radial global structure of electromagnetic modes shows a resonant behaviour with rational q values. (author)

6. Two generations of selection on restricted best linear unbiased prediction breeding values for income minus feed cost in laying hens.

Hagger, C

1992-07-01

Two generations of selection on restricted BLUP breeding values were applied in an experiment with laying hens. Selection had been on phenotype of income minus feed cost (IFC) between 21 and 40 wk of age in the previous five generations. The restriction of no genetic change in egg weight was included in the EBV for power-transformed IFC (i.e., IFCt, with t-values of 3.7 and 3.6 in the two generations, respectively). The experiment consisted of two selection lines plus a randomly bred control of 20 male and 80 female breeders each. Observations on 8,844 survivors to 40 wk were available. Relative to the base population average, the restriction reduced genetic gain in IFC from 4.1 and 3.9% to 2.0 and 2.2% per generation in the two selection lines, respectively. Average EBV for egg weight remained nearly constant after a strong increase in the previous five generations. Rates of genetic gain for egg number, body weight, and feed conversion (feed/egg mass) were not affected significantly. In the seventh generation, a genetic gain in feed conversion of 10.3% relative to the phenotypic mean of the base population was obtained.

7. Ab initio optical potentials applied to low-energy e-H2 and e-N2 collisions in the linear-algebraic approach

Schneider, B.I.; Collins, L.A.

1983-01-01

We propose a method for constructing an effective optical potential through which correlation effects can be introduced into the electron-molecule scattering formulation. The optical potential is based on a nonperturbative, Feshbach projection-operator procedure and is evaluated on an L2 basis. The optical potential is incorporated into the scattering equations by means of a separable expansion, and the resulting scattering equations are solved by a linear-algebraic method based on the integral-equation formulation. We report the results of scattering calculations, which include polarization effects, for low-energy e-H2 and e-N2 collisions. The agreement with other theoretical and with experimental results is quite good.

8. Usefulness of semi-automatic volumetry compared to established linear measurements in predicting lymph node metastases in MSCT

Buerke, Boris; Puesken, Michael; Heindel, Walter; Wessling, Johannes (Dept. of Clinical Radiology, Univ. of Muenster (Germany)), email: buerkeb@uni-muenster.de; Gerss, Joachim (Dept. of Medical Informatics and Biomathematics, Univ. of Muenster (Germany)); Weckesser, Matthias (Dept. of Nuclear Medicine, Univ. of Muenster (Germany))

2011-06-15

9. Dynamics of unsymmetric piecewise-linear/non-linear systems using finite elements in time

Wang, Yu

1995-08-01

The dynamic response and stability of a single-degree-of-freedom system with unsymmetric piecewise-linear/non-linear stiffness are analyzed using the finite element method in the time domain. Based on Hamilton's weak principle, this method provides a simple and efficient approach for predicting all possible fundamental and sub-periodic responses. The stability of the steady state response is determined by using Floquet's theory without any special effort for calculating transition matrices. This method is applied to a number of examples, demonstrating its effectiveness even for a strongly non-linear problem involving both clearance and continuous stiffness non-linearities. Close agreement is found between available published findings and the predictions of the finite element in time approach, which appears to be an efficient and reliable alternative technique for non-linear dynamic response and stability analysis of periodic systems.

10. Prediction of the thermal expansion coefficients of bio diesels from several sources through the application of linear regression; Predicao dos coeficientes de expansao termica de biodieseis de diversas origens atraves da aplicacao da regressa linear

Canciam, Cesar Augusto [Universidade Tecnologica Federal do Parana (UTFPR), Campus Ponta Grossa, PR (Brazil)], e-mail: canciam@utfpr.edu.br

2012-07-01

When evaluating the consumption of bio fuels, knowledge of the density is of great importance for correcting for the effect of temperature. The thermal expansion coefficient is a thermodynamic property that provides a measure of the density variation in response to temperature variation at constant pressure. This study aimed to predict the thermal expansion coefficients of ethyl bio diesels from castor bean, soybean, sunflower seed and Mabea fistulifera Mart. oils, and of methyl bio diesels from soybean, sunflower seed, souari nut, cotton, coconut, castor bean and palm oils, from beef tallow, chicken fat and residual hydrogenated vegetable fat. For this purpose, a linear regression analysis of the density of each bio diesel as a function of temperature was performed. These data were obtained from other works. The thermal expansion coefficients for the bio diesels lie between 6.3729×10⁻⁴ and 1.0410×10⁻³ °C⁻¹. In all cases, the correlation coefficients were over 0.99. (author)
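The procedure this abstract describes — fitting density against temperature by linear regression and deriving the thermal expansion coefficient from the slope — can be sketched as follows. The density values and the reference temperature below are synthetic illustrations, not data from the cited study.

```python
# Sketch of the procedure described above: fit density vs. temperature by
# ordinary least squares and derive the thermal expansion coefficient
# beta = -(1/rho) * (d rho / d T), evaluated at a reference temperature.
# The density values below are synthetic, not from the cited study.

def linear_fit(x, y):
    """Ordinary least-squares fit y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

# Synthetic biodiesel densities (kg/m^3) at several temperatures (deg C),
# generated with beta ~ 8.0e-4 /degC around rho = 880 kg/m^3 at 20 degC.
temps = [15.0, 20.0, 25.0, 30.0, 35.0, 40.0]
rhos  = [883.5, 880.0, 876.5, 873.0, 869.4, 865.9]

a, b = linear_fit(temps, rhos)   # rho ≈ a + b*T, with b < 0
t_ref = 20.0
rho_ref = a + b * t_ref
beta = -b / rho_ref              # thermal expansion coefficient
print(f"beta ≈ {beta:.2e} per degC")
```

With exactly linear synthetic data the recovered coefficient falls inside the 6.4×10⁻⁴ to 1.0×10⁻³ °C⁻¹ range the abstract reports for real biodiesels.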

11. Explicit/multi-parametric model predictive control (MPC) of linear discrete-time systems by dynamic and multi-parametric programming

Kouramas, K.I.

2011-08-01

This work presents a new algorithm for solving the explicit/multi- parametric model predictive control (or mp-MPC) problem for linear, time-invariant discrete-time systems, based on dynamic programming and multi-parametric programming techniques. The algorithm features two key steps: (i) a dynamic programming step, in which the mp-MPC problem is decomposed into a set of smaller subproblems in which only the current control, state variables, and constraints are considered, and (ii) a multi-parametric programming step, in which each subproblem is solved as a convex multi-parametric programming problem, to derive the control variables as an explicit function of the states. The key feature of the proposed method is that it overcomes potential limitations of previous methods for solving multi-parametric programming problems with dynamic programming, such as the need for global optimization for each subproblem of the dynamic programming step. © 2011 Elsevier Ltd. All rights reserved.

12. Stability and amplitude ranges of two dimensional non-linear oscillations with periodical Hamiltonian applied to betatron oscillations in circular particle accelerators: Part 1 and Part 2

Hagedorn, R

1957-03-07

A mechanical system of two degrees of freedom is considered which can be described by a system of canonical differential equations. The Hamiltonian is assumed to be explicitly time-dependent with period 2. The aim is to bring this system by a sequence of canonical and periodical transformations into a form where the new Hamiltonian is constant and as simple as possible. The general theory is then brought to a stage where it becomes immediately applicable to given particular cases, particularly to circular particle accelerators. More general results are given on exciting strengths of different subresonance lines of equal order, on symmetry relations and on the one-dimensional case. An example is also given where the theory is overstressed and its predictions become wrong.

13. Use of linear free energy relationship to predict Gibbs free energies of formation of pyrochlore phases (CaMTi2O7)

Xu, H.; Wang, Y.

1999-01-01

In this letter, a linear free energy relationship is used to predict the Gibbs free energies of formation of crystalline phases of the pyrochlore and zirconolite families with stoichiometry MCaTi2O7 (or CaMTi2O7) from the known thermodynamic properties of aqueous tetravalent cations (M4+). The linear free energy relationship for tetravalent cations is expressed as ΔG⁰_f,MvX = a_MvX · ΔG⁰_n,M4+ + b_MvX + β_MvX · r_M4+, where the coefficients a_MvX, b_MvX, and β_MvX characterize a particular structural family MvX, r_M4+ is the ionic radius of the M4+ cation, ΔG⁰_f,MvX is the standard Gibbs free energy of formation of MvX, and ΔG⁰_n,M4+ is the standard non-solvation energy of the cation M4+. The coefficients for the structural family of zirconolite with stoichiometry M4+CaTi2O7 are estimated to be a_MvX = 0.5717, b_MvX = -4284.67 kJ/mol, and β_MvX = 27.2 kJ/(mol·nm). The coefficients for the structural family of pyrochlore with stoichiometry M4+CaTi2O7 are estimated to be a_MvX = 0.5717, b_MvX = -4174.25 kJ/mol, and β_MvX = 13.4 kJ/(mol·nm). Using the linear free energy relationship, the Gibbs free energies of formation of various zirconolite and pyrochlore phases are calculated. (orig.)
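The linear free energy relationship above is a one-line formula once the family coefficients are fixed; a minimal sketch using the coefficients quoted in the abstract follows. The example non-solvation energy and ionic radius are hypothetical placeholders, not values from the paper.

```python
# Minimal sketch of the linear free energy relationship quoted above:
#   dG_f = a * dG_n + b + beta * r
# Coefficients are those quoted in the abstract; the example non-solvation
# energy and ionic radius below are hypothetical placeholders.

# (a, b [kJ/mol], beta [kJ/(mol*nm)]) per structural family
COEFFS = {
    "zirconolite": (0.5717, -4284.67, 27.2),
    "pyrochlore":  (0.5717, -4174.25, 13.4),
}

def gibbs_formation(family, dG_n, radius_nm):
    """Predicted standard Gibbs free energy of formation (kJ/mol)."""
    a, b, beta = COEFFS[family]
    return a * dG_n + b + beta * radius_nm

# Hypothetical M4+ cation: dG_n = -500 kJ/mol, ionic radius 0.08 nm.
for fam in COEFFS:
    print(fam, round(gibbs_formation(fam, -500.0, 0.08), 2))
```

Because a_MvX is identical for both families here, the two predictions differ only through the intercept b and the radius term β·r, which is exactly how the paper separates the zirconolite and pyrochlore structures.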

14. Multiple Linear Regression Modeling To Predict the Stability of Polymer-Drug Solid Dispersions: Comparison of the Effects of Polymers and Manufacturing Methods on Solid Dispersion Stability.

Fridgeirsdottir, Gudrun A; Harris, Robert J; Dryden, Ian L; Fischer, Peter M; Roberts, Clive J

2018-03-29

Solid dispersions can be a successful way to enhance the bioavailability of poorly soluble drugs. Here 60 solid dispersion formulations were produced using ten chemically diverse, neutral, poorly soluble drugs, three commonly used polymers, and two manufacturing techniques, spray-drying and melt extrusion. Each formulation underwent a six-month stability study at accelerated conditions, 40 °C and 75% relative humidity (RH). Significant differences in times to crystallization (onset of crystallization) were observed between both the different polymers and the two processing methods. Stability from zero days to over one year was observed. The extensive experimental data set obtained from this stability study was used to build multiple linear regression models to correlate physicochemical properties of the active pharmaceutical ingredients (API) with the stability data. The purpose of these models is to indicate which combination of processing method and polymer carrier is most likely to give a stable solid dispersion. Six quantitative mathematical multiple linear regression-based models were produced based on selection of the most influential independent physical and chemical parameters from a set of 33 possible factors, one model for each combination of polymer and processing method, with good predictability of stability. Three general rules are proposed from these models for the formulation development of suitably stable solid dispersions. Namely, increased stability is correlated with increased glass transition temperature (Tg) of solid dispersions, as well as decreased number of H-bond donors and increased molecular flexibility (such as rotatable bonds and ring count) of the drug molecule.

15. Application of single-step genomic best linear unbiased prediction with a multiple-lactation random regression test-day model for Japanese Holsteins.

Baba, Toshimi; Gotoh, Yusaku; Yamaguchi, Satoshi; Nakagawa, Satoshi; Abe, Hayato; Masuda, Yutaka; Kawahara, Takayoshi

2017-08-01

This study aimed to evaluate the validation reliability of single-step genomic best linear unbiased prediction (ssGBLUP) with a multiple-lactation random regression test-day model and to investigate the effect of adding genotyped cows on the reliability. Two data sets of test-day records from the first three lactations were used: full data from February 1975 to December 2015 (60 850 534 records from 2 853 810 cows) and reduced data cut off in 2011 (53 091 066 records from 2 502 307 cows). We used marker genotypes of 4480 bulls and 608 cows. Genomic enhanced breeding values (GEBV) of 305-day milk yield in all lactations were estimated for at least 535 young bulls using two marker data sets: bull genotypes only, and both bull and cow genotypes. The realized reliability (R²) from linear regression analysis was used as an indicator of validation reliability. Using only genotyped bulls, R² ranged from 0.41 to 0.46 and was always higher than that of parent averages. Very similar R² values were observed when genotyped cows were added. An application of ssGBLUP to a multiple-lactation random regression model is feasible, and adding a limited number of genotyped cows has no significant effect on the reliability of GEBV for genotyped bulls. © 2016 Japanese Society of Animal Science.
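The validation measure used above, the realized reliability, is the R² of a linear regression of later-observed performance on the earlier genomic predictions. A stdlib-only sketch with synthetic numbers (a simplified stand-in for the deregressed-proof regression used in such validations):

```python
# Realized reliability as described above: R^2 of regressing the proofs
# observed after the cut-off year on the GEBV predicted from reduced data.
# All numbers are synthetic illustrations.

def r_squared(x, y):
    """R^2 of the ordinary least-squares regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

gebv   = [1.2, 0.4, -0.3, 2.1, -1.0, 0.8]   # predictions from reduced data
proofs = [1.0, 0.7, -0.1, 1.8, -1.2, 0.5]   # later realized proofs
print(round(r_squared(gebv, proofs), 3))
```

In the study the same statistic, computed over at least 535 young bulls, gave values between 0.41 and 0.46.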

16. Prediction of retention indices for frequently reported compounds of plant essential oils using multiple linear regression, partial least squares, and support vector machine.

Yan, Jun; Huang, Jian-Hua; He, Min; Lu, Hong-Bing; Yang, Rui; Kong, Bo; Xu, Qing-Song; Liang, Yi-Zeng

2013-08-01

Retention indices for frequently reported compounds of plant essential oils on three different stationary phases were investigated. Multivariate linear regression, partial least squares, and support vector machine, combined with a new variable selection approach called random-frog recently proposed by our group, were employed to model quantitative structure-retention relationships. Internal and external validations were performed to ensure stability and predictive ability. All three methods yielded acceptable models, with the optimal results obtained by support vector machine based on a small number of informative descriptors, giving squared cross-validation correlation coefficients of 0.9726, 0.9759, and 0.9331 on the dimethylsilicone stationary phase, the dimethylsilicone phase with 5% phenyl groups, and the PEG stationary phase, respectively. The performances of the two variable selection approaches, random-frog and genetic algorithm, were compared. The importance of the variables was found to be consistent when estimated from correlation coefficients in multivariate linear regression equations and from selection probability in model spaces. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

17. Modeling and simulation of protein elution in linear pH and salt gradients on weak, strong and mixed cation exchange resins applying an extended Donnan ion exchange model.

Wittkopp, Felix; Peeck, Lars; Hafner, Mathias; Frech, Christian

2018-04-13

Process development and characterization based on mathematic modeling provides several advantages and has been applied more frequently over the last few years. In this work, a Donnan equilibrium ion exchange (DIX) model is applied for modelling and simulation of ion exchange chromatography of a monoclonal antibody in linear chromatography. Four different cation exchange resin prototypes consisting of weak, strong and mixed ligands are characterized using pH and salt gradient elution experiments applying the extended DIX model. The modelling results are compared with the results using a classic stoichiometric displacement model. The Donnan equilibrium model is able to describe all four prototype resins while the stoichiometric displacement model fails for the weak and mixed weak/strong ligands. Finally, in silico chromatogram simulations of pH and pH/salt dual gradients are performed to verify the results and to show the consistency of the developed model. Copyright © 2018 Elsevier B.V. All rights reserved.

18. The Predictive Validity of the Torrance Figural Test (Form B) of Creative Thinking in the College of Fine and Applied Arts.

Stallings, William M.

In an effort to improve the predictability of course grades in the College of Fine and Applied Arts the Torrance Figural Test (Form B) of Creative Thinking was administered to entering 1968 freshmen. Four figural creativity variables (Fluency, Flexibility, Originality, and Elaboration) were correlated with course grades, American College Testing…

19. Linear algebra

Shilov, Georgi E

1977-01-01

Covers determinants, linear spaces, systems of linear equations, linear functions of a vector argument, coordinate transformations, the canonical form of the matrix of a linear operator, bilinear and quadratic forms, Euclidean spaces, unitary spaces, quadratic forms in Euclidean and unitary spaces, finite-dimensional space. Problems with hints and answers.

20. Heterogeneity index evaluated by slope of linear regression on 18F-FDG PET/CT as a prognostic marker for predicting tumor recurrence in pancreatic ductal adenocarcinoma

Kim, Yong-il; Kim, Yong Joong; Paeng, Jin Chul; Cheon, Gi Jeong; Lee, Dong Soo; Chung, June-Key; Kang, Keon Wook

2017-01-01

18F-Fluorodeoxyglucose (FDG) positron emission tomography (PET)/computed tomography (CT) has been investigated as a method to predict pancreatic cancer recurrence after pancreatic surgery. We evaluated the recently introduced heterogeneity indices of 18F-FDG PET/CT used for predicting pancreatic cancer recurrence after surgery and compared them with current clinicopathologic and 18F-FDG PET/CT parameters. A total of 93 pancreatic ductal adenocarcinoma patients (M:F = 60:33, mean age = 64.2 ± 9.1 years) who underwent preoperative 18F-FDG PET/CT following pancreatic surgery were retrospectively enrolled. The standardized uptake values (SUVs) and tumor-to-background ratios (TBR) were measured on each 18F-FDG PET/CT, as metabolic parameters. Metabolic tumor volume (MTV) and total lesion glycolysis (TLG) were examined as volumetric parameters. The coefficient of variance (heterogeneity index-1; SUVmean divided by the standard deviation) and linear regression slopes (heterogeneity index-2) of the MTV, according to SUV thresholds of 2.0, 2.5 and 3.0, were evaluated as heterogeneity indices. Predictive values of clinicopathologic and 18F-FDG PET/CT parameters and heterogeneity indices were compared in terms of pancreatic cancer recurrence. Seventy patients (75.3%) showed recurrence after pancreatic cancer surgery (mean time to recurrence = 9.4 ± 8.4 months). Comparing the recurrence and no-recurrence patients, all of the 18F-FDG PET/CT parameters and heterogeneity indices demonstrated significant differences. In univariate Cox-regression analyses, MTV (P = 0.013), TLG (P = 0.007), and heterogeneity index-2 (P = 0.027) were significant. Among the clinicopathologic parameters, CA19-9 (P = 0.025) and venous invasion (P = 0.002) were selected as significant parameters. In multivariate Cox-regression analyses, MTV (P = 0.005), TLG (P = 0.004), and heterogeneity index-2 (P = 0.016) with venous invasion (P < 0.001, 0.001, and 0.001, respectively) demonstrated significant results.

1. Development of multiple linear regression models as predictive tools for fecal indicator concentrations in a stretch of the lower Lahn River, Germany.

Herrig, Ilona M; Böer, Simone I; Brennholt, Nicole; Manz, Werner

2015-11-15

Since rivers are typically subject to rapid changes in microbiological water quality, tools are needed to allow timely water quality assessment. A promising approach is the application of predictive models. In our study, we developed multiple linear regression (MLR) models in order to predict the abundance of the fecal indicator organisms Escherichia coli (EC), intestinal enterococci (IE) and somatic coliphages (SC) in the Lahn River, Germany. The models were developed on the basis of an extensive set of environmental parameters collected during a 12-month monitoring period. Two models were developed for each type of indicator: 1) an extended model including the maximum number of variables significantly explaining variations in indicator abundance and 2) a simplified model reduced to the three most influential explanatory variables, thus obtaining a model which is less resource-intensive with regard to required data. Both approaches have the ability to model multiple sites within one river stretch. The three most important predictive variables in the optimized models for the bacterial indicators were NH4-N, turbidity and global solar irradiance, whereas chlorophyll a content, discharge and NH4-N were reliable model variables for somatic coliphages. Depending on indicator type, the extended models also included the additional variables rainfall, O2 content, pH and chlorophyll a. The extended models could explain 69% (EC), 74% (IE) and 72% (SC) of the observed variance in fecal indicator concentrations. The optimized models explained the observed variance in fecal indicator concentrations to 65% (EC), 70% (IE) and 68% (SC). Site-specific efficiencies ranged up to 82% (EC) and 81% (IE, SC). Our results suggest that MLR models are a promising tool for a timely water quality assessment in the Lahn area. Copyright © 2015 Elsevier Ltd. All rights reserved.
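The simplified three-variable MLR this abstract describes can be illustrated with a self-contained least-squares fit via the normal equations. Everything below is synthetic: the predictor names follow the abstract (NH4-N, turbidity, irradiance), but the numbers and fitted coefficients are invented for illustration, not values from the Lahn study.

```python
# Sketch of a three-variable MLR as described above: log10(E. coli)
# modelled from NH4-N, turbidity and global solar irradiance.
# Data and coefficients are synthetic illustrations.

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_mlr(X, y):
    """Least squares via the normal equations (X^T X) beta = X^T y."""
    Xd = [[1.0] + row for row in X]          # add intercept column
    p = len(Xd[0])
    XtX = [[sum(r[i] * r[j] for r in Xd) for j in range(p)] for i in range(p)]
    Xty = [sum(r[i] * yi for r, yi in zip(Xd, y)) for i in range(p)]
    return solve(XtX, Xty)

# columns: NH4-N (mg/L), turbidity (NTU), irradiance (kWh/m^2/day)
X = [[0.10, 5.0, 4.0], [0.30, 12.0, 2.5], [0.05, 3.0, 5.5],
     [0.20, 8.0, 3.0], [0.40, 15.0, 2.0], [0.15, 6.0, 4.5]]
# synthetic log10 E. coli generated as 2.0 + 2.0*NH4 + 0.05*turb - 0.1*irr
y = [2.0 + 2.0 * a + 0.05 * b - 0.1 * c for a, b, c in X]

beta = fit_mlr(X, y)
print([round(v, 3) for v in beta])  # → [2.0, 2.0, 0.05, -0.1]
```

The normal-equations route is fine for a three-predictor sketch like this; for real monitoring data a QR- or SVD-based solver is numerically safer, especially when predictors such as NH4-N and turbidity are strongly correlated.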

2. Cosmological constraints from the CFHTLenS shear measurements using a new, accurate, and flexible way of predicting non-linear mass clustering

Angulo, Raul E.; Hilbert, Stefan

2015-03-01

We explore the cosmological constraints from cosmic shear using a new way of modelling the non-linear matter correlation functions. The new formalism extends the method of Angulo & White, which manipulates outputs of N-body simulations to represent the 3D non-linear mass distribution in different cosmological scenarios. We show that predictions from our approach for shear two-point correlations at 1-300 arcmin separations are accurate at the ~10 per cent level, even for extreme changes in cosmology. For moderate changes, with target cosmologies similar to that preferred by analyses of recent Planck data, the accuracy is close to ~5 per cent. We combine this approach with a Monte Carlo Markov chain sampler to explore constraints on a Λ cold dark matter model from the shear correlation functions measured in the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS). We obtain constraints on the parameter combination σ8(Ωm/0.27)^0.6 = 0.801 ± 0.028. Combined with results from cosmic microwave background data, we obtain marginalized constraints on σ8 = 0.81 ± 0.01 and Ωm = 0.29 ± 0.01. These results are statistically compatible with previous analyses, which supports the validity of our approach. We discuss the advantages of our method and the potential it offers, including a path to model in detail (i) the effects of baryons, (ii) high-order shear correlation functions, and (iii) galaxy-galaxy lensing, among others, in future high-precision cosmological analyses.

3. The Nature and Predictive Value of Mothers’ Beliefs Regarding Infants’ and Toddlers’ TV/Video Viewing: Applying the Integrative Model of Behavioral Prediction

Vaala, Sarah E.

2014-01-01

Viewing television and video programming has become a normative behavior among US infants and toddlers. Little is understood about parents’ decision-making about the extent of their young children’s viewing, though numerous organizations are interested in reducing time spent viewing among infants and toddlers. Prior research has examined parents’ belief in the educational value of TV/videos for young children and the predictive value of this belief for understanding infant/toddler viewing rates, though other possible salient beliefs remain largely unexplored. This study employs the integrative model of behavioral prediction (Fishbein & Ajzen, 2010) to examine 30 maternal beliefs about infants’ and toddlers’ TV/video viewing which were elicited from a prior sample of mothers. Results indicate that mothers tend to hold more positive than negative beliefs about the outcomes associated with young children’s TV/video viewing, and that the nature of the aggregate set of beliefs is predictive of their general attitudes and intentions to allow their children to view, as well as children’s estimated viewing rates. Analyses also uncover multiple dimensions within the full set of beliefs, which explain more variance in mothers’ attitudes and intentions and children’s viewing than the uni-dimensional index. The theoretical and practical implications of the findings are discussed. PMID:25431537

4. A Combined Hydrological and Hydraulic Model for Flood Prediction in Vietnam Applied to the Huong River Basin as a Test Case Study

Dang Thanh Mai

2017-11-01

A combined hydrological and hydraulic model is presented for flood prediction in Vietnam. This model is applied to the Huong river basin as a test case study. Observed flood flows and water surface levels of the 2002–2005 flood seasons are used for model calibration, and those of the 2006–2007 flood seasons for validation of the model. The physically based distributed hydrologic model WetSpa is used for predicting the generation and propagation of flood flows in the mountainous upper sub-basins, and proves to predict flood flows accurately. The Hydrologic Engineering Center River Analysis System (HEC-RAS) hydraulic model is applied to simulate flood flows and inundation levels in the downstream floodplain, and also proves to predict water levels accurately. The predicted water profiles are used for mapping of inundations in the floodplain. The model may be useful in developing flood forecasting and early warning systems to mitigate losses due to flooding in Vietnam.

5. SU-E-T-131: Artificial Neural Networks Applied to Overall Survival Prediction for Patients with Periampullary Carcinoma

Gong, Y; Yu, J; Yeung, V; Palmer, J; Yu, Y; Lu, B; Babinsky, L; Burkhart, R; Leiby, B; Siow, V; Lavu, H; Rosato, E; Winter, J; Lewis, N; Sama, A; Mitchell, E; Anne, P; Hurwitz, M; Yeo, C; Bar-Ad, V [Thomas Jefferson University Hospital, Philadelphia, PA (United States); and others

2015-06-15

Purpose: Artificial neural networks (ANN) can be used to discover complex relations within datasets to help with medical decision making. This study aimed to develop an ANN method to predict two-year overall survival of patients with peri-ampullary cancer (PAC) following resection. Methods: Data were collected from 334 patients with PAC following resection treated in our institutional pancreatic tumor registry between 2006 and 2012. The dataset contains 14 variables including age, gender, T-stage, tumor differentiation, positive-lymph-node ratio, positive resection margins, chemotherapy, radiation therapy, and tumor histology. After censoring for two-year survival analysis, 309 patients were left, of which 44 patients (∼15%) were randomly selected to form the testing set. The remaining 265 cases were randomly divided into a training set (211 cases, ∼80% of 265) and a validation set (54 cases, ∼20% of 265) 20 times to build 20 ANN models. Each ANN has one hidden layer with 5 units. The 20 ANN models were ranked according to their concordance index (c-index) of prediction on the validation sets. To further improve prediction, the top 10% of ANN models were selected and their outputs averaged for prediction on the testing set. Results: By random division, the 44 cases in the testing set and the remaining 265 cases have approximately equal two-year survival rates, 36.4% and 35.5% respectively. The 20 ANN models, which were trained and validated on the 265 cases, yielded mean c-indexes of 0.59 and 0.63 on the validation sets and the testing set, respectively. The c-index was 0.72 when the two best ANN models (top 10%) were used for prediction on the testing set. The c-index of Cox regression analysis was 0.63. Conclusion: ANN improved survival prediction for patients with PAC. More patient data and further analysis of additional factors may be needed for a more robust model, which will help guide physicians in providing optimal post-operative care. This project was supported by a PA CURE Grant.
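The model-ranking criterion used above, the concordance index, is the fraction of usable patient pairs whose predicted risks are ordered consistently with their observed survival times. A simplified stdlib-only sketch (ties counted as 0.5, censored patients usable only as the longer-lived member of a pair; the data are synthetic):

```python
# Simplified concordance index (c-index), the criterion used above to rank
# the 20 ANN models on the validation sets. Synthetic data for illustration.

def c_index(times, events, risks):
    """times: survival times; events: 1 = death observed, 0 = censored;
    risks: model-predicted risk scores (higher = worse prognosis)."""
    concordant, usable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # a pair is usable if patient i died and patient j outlived i
            if events[i] == 1 and times[i] < times[j]:
                usable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / usable

times  = [5, 12, 20, 26, 30]         # months
events = [1, 1, 0, 1, 0]             # 0 = censored
risks  = [0.9, 0.7, 0.6, 0.3, 0.2]   # perfectly anti-ordered with time
print(c_index(times, events, risks))  # 1.0 for perfectly ordered risks
```

A c-index of 0.5 corresponds to random ordering and 1.0 to perfect ordering, which is why the ensemble's 0.72 on the testing set is a meaningful improvement over Cox regression's 0.63.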

6. SU-E-T-131: Artificial Neural Networks Applied to Overall Survival Prediction for Patients with Periampullary Carcinoma

Gong, Y; Yu, J; Yeung, V; Palmer, J; Yu, Y; Lu, B; Babinsky, L; Burkhart, R; Leiby, B; Siow, V; Lavu, H; Rosato, E; Winter, J; Lewis, N; Sama, A; Mitchell, E; Anne, P; Hurwitz, M; Yeo, C; Bar-Ad, V

2015-01-01

Purpose: Artificial neural networks (ANN) can be used to discover complex relations within datasets to help with medical decision making. This study aimed to develop an ANN method to predict two-year overall survival of patients with peri-ampullary cancer (PAC) following resection. Methods: Data were collected from 334 patients with PAC following resection treated in our institutional pancreatic tumor registry between 2006 and 2012. The dataset contains 14 variables including age, gender, T-stage, tumor differentiation, positive-lymph-node ratio, positive resection margins, chemotherapy, radiation therapy, and tumor histology. After censoring for two-year survival analysis, 309 patients were left, of which 44 patients (∼15%) were randomly selected to form the testing set. The remaining 265 cases were randomly divided into a training set (211 cases, ∼80% of 265) and a validation set (54 cases, ∼20% of 265) 20 times to build 20 ANN models. Each ANN has one hidden layer with 5 units. The 20 ANN models were ranked according to their concordance index (c-index) of prediction on the validation sets. To further improve prediction, the top 10% of ANN models were selected and their outputs averaged for prediction on the testing set. Results: By random division, the 44 cases in the testing set and the remaining 265 cases have approximately equal two-year survival rates, 36.4% and 35.5% respectively. The 20 ANN models, which were trained and validated on the 265 cases, yielded mean c-indexes of 0.59 and 0.63 on the validation sets and the testing set, respectively. The c-index was 0.72 when the two best ANN models (top 10%) were used for prediction on the testing set. The c-index of Cox regression analysis was 0.63. Conclusion: ANN improved survival prediction for patients with PAC. More patient data and further analysis of additional factors may be needed for a more robust model, which will help guide physicians in providing optimal post-operative care. This project was supported by a PA CURE Grant.

7. Non-linear feeding functional responses in the Greater Flamingo (Phoenicopterus roseus) predict immediate negative impact of wetland degradation on this flagship species

Deville, Anne-Sophie; Grémillet, David; Gauthier-Clerc, Michel; Guillemain, Matthieu; Von Houwald, Friederike; Gardelli, Bruno; Béchet, Arnaud

2013-01-01

Accurate knowledge of the functional response of predators to prey density is essential for understanding food web dynamics, for parameterizing mechanistic models of animal responses to environmental change, and for designing appropriate conservation measures. Greater flamingos (Phoenicopterus roseus), a flagship species of Mediterranean wetlands, primarily feed on Artemias (Artemia spp.) in commercial salt pans, an industry which may collapse for economic reasons. Flamingos also feed on alternative prey such as Chironomid larvae (Chironomid spp.) and rice seeds (Oryza sativa). However, the profitability of these food items for flamingos remains unknown. We determined the functional responses of flamingos feeding on Artemias, Chironomids, or rice. Experiments were conducted on 11 captive flamingos. For each food item, we offered different ranges of food densities, up to 13 times natural abundance. Video footage allowed us to estimate intake rates. Contrary to theoretical predictions for filter feeders, intake rates did not increase linearly with increasing food density (type I). Rather, intake rates increased asymptotically with increasing food density (type II) or followed a sigmoid shape (type III). Hence, flamingos were not able to ingest food in direct proportion to its abundance, possibly because their unique bill structure results in limited filtering capabilities. Overall, flamingos foraged more efficiently on Artemias. When feeding on Chironomids, birds had lower instantaneous rates of food discovery and required more time to extract food from the sediment and ingest it than when filtering Artemias from the water column. However, feeding on rice was energetically more profitable for flamingos than feeding on Artemias or Chironomids, explaining their attraction to rice fields. Crucially, we found that the food densities required for flamingos to reach asymptotic intake rates are rarely met under natural conditions. This allows us to predict an immediate
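The three response shapes the study contrasts (type I linear, type II asymptotic, type III sigmoid) are the classical Holling forms. A minimal sketch, with a generic attack rate `a` and handling time `h` rather than values estimated for flamingos:

```python
def type_i(n, a):
    """Type I: intake rises linearly with prey density n (attack rate a)."""
    return a * n

def type_ii(n, a, h):
    """Type II (Holling disc equation): handling time h caps intake,
    so the curve rises steeply and then saturates at 1/h."""
    return a * n / (1.0 + a * h * n)

def type_iii(n, a, h):
    """Type III: sigmoid; attack success itself increases with density,
    giving a slow start, an inflection, then the same asymptote 1/h."""
    return a * n**2 / (1.0 + a * h * n**2)
```

As `n` grows, both `type_ii` and `type_iii` approach the asymptote `1/h`, which is why intake cannot track prey abundance proportionally — the pattern the study observed instead of the type I response predicted for filter feeders.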

8. Crop Yield Predictions - High Resolution Statistical Model for Intra-season Forecasts Applied to Corn in the US

Cai, Y.

2017-12-01

Accurately forecasting crop yields has broad implications for economic trading, food production monitoring, and global food security. However, the variation of environmental variables makes it challenging to model yields accurately, especially when the lack of highly accurate measurements creates difficulties in building models that succeed across space and time. In 2016, we developed a sequence of machine-learning-based models forecasting end-of-season corn yields for the US at both the county and national levels. We combined machine learning algorithms in a hierarchical way, and used an understanding of physiological processes in temporal feature selection, to achieve high precision in our intra-season forecasts, including in very anomalous seasons. During the live run, we predicted the national corn yield within 1.40% of the final USDA number as early as August. In backtesting over the 2000-2015 period, our model predicts the national yield within 2.69% of the actual yield on average already by mid-August. At the county level, our model predicts 77% of the variation in final yield using data through the beginning of August, improving to 80% by the beginning of October, with the percentage of counties predicted within 10% of the average yield increasing from 68% to 73%. Further, the lowest errors are in the most significant producing regions, resulting in very high precision national-level forecasts. In addition, we identify how the important variables change throughout the season: early-season land surface temperature, and mid-season land surface temperature and vegetation index. For the 2017 season, we fed 2016 data into the training set, together with additional geospatial data sources, aiming to make the current model even more precise. We will show how our 2017 US corn yield forecasts converge in time and which factors affect the yield the most, and present our plans for 2018 model adjustments.

9. Improvement of predictive tools for vapor-liquid equilibrium based on group contribution methods applied to lipid technology

Damaceno, Daniela S.; Perederic, Olivia A.; Ceriani, Roberta

2017-01-01

…structures that the first-order functional groups are unable to handle. In the particular case of fatty systems, these models are not able to adequately predict the non-ideality in the liquid phase. Consequently, a new set of functional groups is proposed to represent the lipid compounds, requiring thereby… There are rather small differences between the models, and no single model is the best in all cases.

10. Applying Probability Theory for the Quality Assessment of a Wildfire Spread Prediction Framework Based on Genetic Algorithms

2013-01-01

This work presents a framework for assessing how the constraints existing at the time of attending an ongoing forest fire affect simulation results, both in terms of the quality (accuracy) obtained and the time needed to make a decision. In the wildfire spread simulation and prediction area, it is essential to properly exploit the computational power offered by new computing advances. For this purpose, we rely on a two-stage prediction process to enhance the quality of traditional predictions, taking advantage of parallel computing. This strategy is based on an adjustment stage which is carried out by a well-known evolutionary technique: Genetic Algorithms. The core of this framework is evaluated according to probability theory principles. Thus, a rigorous statistical study is presented, oriented towards the characterization of this adjustment technique, in order to help operation managers deal with the two aspects previously mentioned: time and quality. The experimental work in this paper is based on a region of Spain which is among the most prone to forest fires: El Cap de Creus.

11. Determining quantitative road safety targets by applying statistical prediction techniques and a multi-stage adjustment procedure.

Wittenberg, P; Sever, K; Knoth, S; Sahin, N; Bondarenko, J

2013-01-01

12. Linear models with R

Faraway, Julian J

2014-01-01

A Hands-On Way to Learning Data Analysis. Part of the core of statistics, linear models are used to make predictions and to explain the relationship between the response and the predictors. Understanding linear models is crucial to a broader competence in the practice of statistics. Linear Models with R, Second Edition explains how to use linear models in physical science, engineering, social science, and business applications. The book incorporates several improvements that reflect how the world of R has greatly expanded since the publication of the first edition. New to the Second Edition: Reorganiz…
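The blurb's central point — a linear model predicts a response from predictors — can be illustrated outside R (the book's language) with a least-squares fit via the normal equations; the data here are made up:

```python
import numpy as np

# Toy data: response y lies exactly on the line y = 3 + 2x.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = 3.0 + 2.0 * x

# Design matrix with an intercept column; lstsq solves the
# least-squares problem min ||X beta - y|| in a numerically stable way.
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# beta[0] is the fitted intercept (3), beta[1] the fitted slope (2);
# predictions for any design matrix are X @ beta.
y_hat = X @ beta
```

The same two lines — build a design matrix, solve for the coefficients — underlie the multi-predictor models the book covers; only the number of columns in `X` changes.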

13. Predictive Control Applied to a Solar Desalination Plant Connected to a Greenhouse with Daily Variation of Irrigation Water Demand

Lidia Roca

2016-03-01

The water deficit in the Mediterranean area is a well-known problem that severely affects agriculture. One way to avoid over-exploitation of the aquifers is to supply water to crops by using thermal desalination processes. Moreover, in order to guarantee long-term sustainability, the thermal energy required for the desalination process can be provided by solar energy. This paper shows simulations for a case study in which a solar multi-effect distillation plant produces water for irrigation purposes. Detailed models of the systems involved form the basis of a predictive controller that operates the desalination plant and fulfils the water demand of the crops.

14. Linear integrated circuits

Carr, Joseph

1996-01-01

The linear IC market is large and growing, as is the demand for well-trained technicians and engineers who understand how these devices work and how to apply them. Linear Integrated Circuits provides in-depth coverage of the devices and their operation, but not at the expense of practical applications in which linear devices figure prominently. This book is written for a wide readership, from FE and first-degree students to hobbyists and professionals. Chapter 1 offers a general introduction that will provide students with the foundations of linear IC technology. From chapter 2 onwa…

15. Concept of ecological corridors and agroforestal systems applied for the implementation of PETROBRAS punctual and linear projects: case study of COMPERJ (Rio de Janeiro Petrochemical Complex); Conceito de corredores ecologicos e sistemas agroflorestais aplicados a implantacao de empreendimentos pontuais e lineares em ambito PETROBRAS: estudo de caso do COMPERJ (Complexo Petroquimico do Rio de Janeiro)

Secron, Marcelo B; Mesquita, Ivan D; Soares, Luiz Felipe R; Almeida, Ronaldo Bento G. de; Fernandes, Renato; Dellamea, Giovani S [PETROBRAS, Rio de Janeiro, RJ (Brazil); Nunes, Rodrigo T; Pereira, Junior, Edson Rodrigues [SEEBLA, Servicos de Engenharia Emilio Baumgart Ltda., Rio de Janeiro, RJ (Brazil)

2008-07-01

Indiscriminate land use and human occupation across many parts of the world, including Brazil, have destroyed large amounts of forest and green areas. These actions isolate remaining forest fragments, which over time become weak and debilitated, leading to a general loss of biodiversity or, in the worst case, its extinction. This study presents basic concepts of ecological corridors and agroforestal systems through a case study to be implemented at COMPERJ (Rio de Janeiro Petrochemical Complex), pointing out the aspects that PETROBRAS can apply to offset the impacts (biodiversity offsets concept) of punctual and linear projects. (author)

16. Optimization the Initial Weights of Artificial Neural Networks via Genetic Algorithm Applied to Hip Bone Fracture Prediction

Yu-Tzu Chang

2012-01-01

This paper aims to find the optimal set of initial weights to enhance the accuracy of artificial neural networks (ANNs) by using genetic algorithms (GAs). The sample in this study included 228 patients with a first low-trauma hip fracture and 215 patients without hip fracture, all of whom were interviewed with 78 questions. We used logistic regression to select 5 important factors (i.e., bone mineral density, experience of fracture, average hand grip strength, intake of coffee, and peak expiratory flow rate) for building artificial neural networks to predict the probabilities of hip fractures. Three-layer (one hidden layer) ANN models with back-propagation training algorithms were adopted. The purpose of this paper is to find the optimal initial weights of neural networks via a genetic algorithm in order to improve predictability. The area under the ROC curve (AUC) was used to assess the performance of the neural networks. The study results showed that the genetic algorithm obtained an AUC of 0.858 ± 0.00493 on modeling data and 0.802 ± 0.03318 on testing data. These were slightly better than the results of our previous study (0.868 ± 0.00387 and 0.796 ± 0.02559, respectively). Thus, this preliminary study using only a simple GA has proved effective for improving the accuracy of artificial neural networks.
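The selection/crossover/mutation loop such a GA runs over candidate weight vectors can be sketched as below. The `fitness` function here is a toy stand-in (negative squared distance to a known optimum), not the ANN-AUC fitness the study optimized, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(w):
    """Toy stand-in for 'AUC of an ANN initialized with weights w':
    higher is better, maximized at the (assumed) target vector."""
    target = np.array([0.5, -1.0, 2.0])
    return -np.sum((w - target) ** 2)

def genetic_search(pop_size=40, n_genes=3, gens=100, mut=0.1):
    """Evolve real-valued 'initial weight' vectors: keep the best half
    (elitist selection), refill with one-point crossover children
    perturbed by Gaussian mutation."""
    pop = rng.normal(size=(pop_size, n_genes))
    for _ in range(gens):
        scores = np.array([fitness(w) for w in pop])
        order = np.argsort(scores)[::-1]
        parents = pop[order[: pop_size // 2]]            # selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_genes)               # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child += rng.normal(scale=mut, size=n_genes) # mutation
            children.append(child)
        pop = np.vstack([parents, children])
    scores = np.array([fitness(w) for w in pop])
    return pop[np.argmax(scores)]

best = genetic_search()
```

In the study's setting, the returned vector would seed the ANN's weights before back-propagation; here it simply converges toward the toy optimum.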

17. Virtual In-Silico Modeling Guided Catheter Ablation Predicts Effective Linear Ablation Lesion Set for Longstanding Persistent Atrial Fibrillation: Multicenter Prospective Randomized Study.

Shim, Jaemin; Hwang, Minki; Song, Jun-Seop; Lim, Byounghyun; Kim, Tae-Hoon; Joung, Boyoung; Kim, Sung-Hwan; Oh, Yong-Seog; Nam, Gi-Byung; On, Young Keun; Oh, Seil; Kim, Young-Hoon; Pak, Hui-Nam

2017-01-01

Objective: Radiofrequency catheter ablation for persistent atrial fibrillation (PeAF) still has a substantial recurrence rate. This study aims to investigate whether an AF ablation lesion set chosen using in-silico ablation (V-ABL) is clinically feasible and more effective than an empirically chosen ablation lesion set (Em-ABL) in patients with PeAF. Methods: We prospectively included 108 patients with antiarrhythmic drug-resistant PeAF (77.8% men, age 60.8 ± 9.9 years), and randomly assigned them to the V-ABL (n = 53) and Em-ABL (n = 55) groups. Five different in-silico ablation lesion sets [1 pulmonary vein isolation (PVI), 3 linear ablations, and 1 electrogram-guided ablation] were compared using heart-CT integrated AF modeling. We evaluated the feasibility, safety, and efficacy of V-ABL compared with those of Em-ABL. Results: The pre-procedural computing time for the five different ablation strategies was 166 ± 11 min. In the Em-ABL group, the earliest-terminating blinded in-silico lesion set matched the Em-ABL lesion set in 21.8% of cases. V-ABL was not inferior to Em-ABL in terms of procedure time (p = 0.403), ablation time (p = 0.510), and major complication rate (p = 0.900). During 12.6 ± 3.8 months of follow-up, the clinical recurrence rate was 14.0% in the V-ABL group and 18.9% in the Em-ABL group (p = 0.538). In the Em-ABL group, the clinical recurrence rate was significantly lower after PVI + posterior box + anterior linear ablation, which showed the most frequent termination during in-silico ablation (log-rank p = 0.027). Conclusions: V-ABL was feasible in clinical practice, was not inferior to Em-ABL, and predicts the most effective ablation lesion set in patients who underwent PeAF ablation.

18. Making oxidation potentials predictable: Coordination of additives applied to the electronic fine tuning of an iron(II) complex

Haslinger, Stefan

2014-11-03

This work examines the impact of axially coordinating additives on the electronic structure of a bioinspired octahedral low-spin iron(II) N-heterocyclic carbene (Fe-NHC) complex. Bearing two labile trans-acetonitrile ligands, the Fe-NHC complex, which is also an excellent oxidation catalyst, is prone to axial ligand exchange. Phosphine- and pyridine-based additives are used for substitution of the acetonitrile ligands. On the basis of the resulting defined complexes, predictability of the oxidation potentials is demonstrated, based on a correlation between cyclic voltammetry experiments and density functional theory calculated molecular orbital energies. Fundamental insights into changes of the electronic properties upon axial ligand exchange and the impact on related attributes will finally lead to target-oriented manipulation of the electronic properties and consequently to the effective tuning of the reactivity of bioinspired systems.

19. Making oxidation potentials predictable: Coordination of additives applied to the electronic fine tuning of an iron(II) complex

Haslinger, Stefan; Kück, Jens W.; Hahn, Eva M.; Cokoja, Mirza; Pöthig, Alexander; Basset, Jean-Marie; Kühn, Fritz

2014-01-01

This work examines the impact of axially coordinating additives on the electronic structure of a bioinspired octahedral low-spin iron(II) N-heterocyclic carbene (Fe-NHC) complex. Bearing two labile trans-acetonitrile ligands, the Fe-NHC complex, which is also an excellent oxidation catalyst, is prone to axial ligand exchange. Phosphine- and pyridine-based additives are used for substitution of the acetonitrile ligands. On the basis of the resulting defined complexes, predictability of the oxidation potentials is demonstrated, based on a correlation between cyclic voltammetry experiments and density functional theory calculated molecular orbital energies. Fundamental insights into changes of the electronic properties upon axial ligand exchange and the impact on related attributes will finally lead to target-oriented manipulation of the electronic properties and consequently to the effective tuning of the reactivity of bioinspired systems.

20. Strong ground motion prediction applying dynamic rupture simulations for Beppu-Haneyama Active Fault Zone, southwestern Japan

Yoshimi, M.; Matsushima, S.; Ando, R.; Miyake, H.; Imanishi, K.; Hayashida, T.; Takenaka, H.; Suzuki, H.; Matsuyama, H.

2017-12-01

We conducted strong ground motion prediction for the active Beppu-Haneyama Fault Zone (BHFZ), Kyushu island, southwestern Japan. Since the BHFZ runs through Oita and Beppu cities, strong ground motion as well as fault displacement may severely affect the cities. We constructed a 3-dimensional velocity structure of a sedimentary basin, the Beppu Bay basin, through which the fault zone runs and where Oita and Beppu cities are located. The minimum shear wave velocity of the 3D model is 500 m/s. An additional 1D structure is modeled for sites with softer sediment (Holocene plain areas). We observed, collected, and compiled data obtained from microtremor surveys, ground motion observations, boreholes, etc., including phase velocity and H/V ratio. A finer structure of the Oita Plain is modeled, as a 250 m mesh model, with an empirical relation among N-value, lithology, depth, and Vs using borehole data, and then validated with the phase velocity data obtained by the dense microtremor array observation (Yoshimi et al., 2016). Synthetic ground motion has been calculated with a hybrid technique composed of a stochastic Green's function method (for the high-frequency wave), a 3D finite difference method (low-frequency wave), and a 1D amplification calculation. The fault geometry has been determined based on reflection surveys and an active fault map. The rake angles are calculated with a dynamic rupture simulation considering three fault segments under a stress field estimated from the source mechanisms of earthquakes around the faults (Ando et al., JpGU-AGU 2017). Fault parameters such as the average stress drop and the size of the asperity are determined based on an empirical relation proposed by Irikura and Miyake (2001). As a result, strong ground motion stronger than 100 cm/s is predicted on the hanging wall side of the Oita Plain. This work is supported by the Comprehensive Research on the Beppu-Haneyama Fault Zone funded by the Ministry of Education, Culture, Sports, Science, and Technology (MEXT), Japan.