Determination of regression laws: Linear and nonlinear
International Nuclear Information System (INIS)
Onishchenko, A.M.
1994-01-01
A detailed mathematical determination of regression laws is presented in the article. Particular emphasis is placed on determining the regression laws of X_j on X_l to account for source-nucleus decay and detector errors in nuclear physics instrumentation. Both linear and nonlinear relations are presented. The linearization of 19 functions is tabulated, including graph, relation, variable substitution, the resulting linear function, and remarks. 6 refs., 1 tab
The Theory of Linear Prediction
Vaidyanathan, PP
2007-01-01
Linear prediction theory has had a profound impact in the field of digital signal processing. Although the theory dates back to the early 1940s, its influence can still be seen in applications today. The theory is based on very elegant mathematics and leads to many beautiful insights into statistical signal processing. Although prediction is only a part of the more general topics of linear estimation, filtering, and smoothing, this book focuses on linear prediction. This has enabled detailed discussion of a number of issues that are normally not found in texts. For example, the theory of vecto
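The classical machinery behind such texts is the Levinson-Durbin recursion, which solves the Toeplitz normal equations of linear prediction in O(n²) operations. A minimal illustrative sketch (not code from the book; the AR(1) test signal is made up):

```python
def levinson_durbin(r, order):
    """Solve the Toeplitz normal equations of linear prediction.

    r     : autocorrelation sequence r[0..order]
    order : prediction order
    returns (predictor coefficients a[1..order], final prediction error)
    """
    a = [0.0] * (order + 1)
    e = r[0]
    for i in range(1, order + 1):
        # reflection coefficient for order i
        acc = r[i] - sum(a[j] * r[i - j] for j in range(1, i))
        k = acc / e
        new_a = a[:]
        new_a[i] = k
        for j in range(1, i):
            new_a[j] = a[j] - k * a[i - j]
        a = new_a
        e *= (1.0 - k * k)
    return a[1:], e

# Autocorrelation of a first-order AR process x[n] = 0.9·x[n-1] + noise
rho = 0.9
r = [rho ** k for k in range(3)]
coeffs, err = levinson_durbin(r, 2)
print(coeffs)  # ≈ [0.9, 0.0] — recovers the AR(1) coefficient
```

For the autocorrelation of a first-order autoregressive process, the order-2 predictor correctly puts all its weight on the first lag.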
Power laws from linear neuronal cable theory
DEFF Research Database (Denmark)
Pettersen, Klas H; Lindén, Henrik Anders; Tetzlaff, Tom
2014-01-01
…suggested to be at the root of this phenomenon, we here demonstrate a possible origin of such power laws in the biophysical properties of single neurons described by the standard cable equation. Taking advantage of the analytical tractability of the so-called ball-and-stick neuron model, we derive general … are homogeneously distributed across the neural membranes and themselves exhibit pink (1/f) noise distributions. While the PSD noise spectra at low frequencies may be dominated by synaptic noise, our findings suggest that the high-frequency power laws may originate in noise from intrinsic ion…
Similarities and Differences Between Warped Linear Prediction and Laguerre Linear Prediction
Brinker, Albertus C. den; Krishnamoorthi, Harish; Verbitskiy, Evgeny A.
2011-01-01
Linear prediction has been successfully applied in many speech and audio processing systems. This paper presents the similarities and differences between two classes of linear prediction schemes, namely, Warped Linear Prediction (WLP) and Laguerre Linear Prediction (LLP). It is shown that both
The Use of Linear Programming for Prediction.
Schnittjer, Carl J.
The purpose of the study was to develop a linear programming model to be used for prediction, test the accuracy of the predictions, and compare the accuracy with that produced by curvilinear multiple regression analysis. (Author)
Decay properties of linear thermoelastic plates: Cattaneo versus Fourier law
Said-Houari, Belkacem
2013-02-01
In this article, we investigate the decay properties of the linear thermoelastic plate equations in the whole space for both Fourier's and Cattaneo's laws of heat conduction. We point out that while the paradox of infinite propagation speed inherent in Fourier's law is removed by changing to the Cattaneo law, the latter always leads to a loss of regularity of the solution. The main tool used to prove our results is the energy method in the Fourier space together with some integral estimates. We prove decay estimates for initial data U0 ∈ H^s(ℝ) ∩ L^1(ℝ). In addition, by restricting the initial data to U0 ∈ H^s(ℝ) ∩ L^{1,γ}(ℝ) with γ ∈ [0, 1], we can derive faster decay estimates, with the decay rate improved by a factor of t^{-γ/2}. © 2013 Copyright Taylor and Francis Group, LLC.
Linear zonal atmospheric prediction for adaptive optics
McGuire, Patrick C.; Rhoadarmer, Troy A.; Coy, Hanna A.; Angel, J. Roger P.; Lloyd-Hart, Michael
2000-07-01
We compare linear zonal predictors of atmospheric turbulence for adaptive optics. Zonal prediction has the possible advantage of being able to interpret and utilize wind-velocity information from the wavefront sensor better than modal prediction. For simulated open-loop atmospheric data for a 2-meter 16-subaperture AO telescope with 5 millisecond prediction and a lookback of 4 slope-vectors, we find that Widrow-Hoff Delta-Rule training of linear nets and Back-Propagation training of non-linear multilayer neural networks are quite slow, getting stuck on plateaus or in local minima. Recursive Least Squares training of linear predictors is two orders of magnitude faster, and it also converges to the solution with global minimum error. We have successfully implemented Amari's Adaptive Natural Gradient Learning (ANGL) technique for a linear zonal predictor, which premultiplies the Delta-Rule gradients with a matrix that orthogonalizes the parameter space and speeds up the training by two orders of magnitude, like the Recursive Least Squares predictor. This shows that the simple Widrow-Hoff Delta-Rule's slow convergence is not a fluke. In the case of bright guidestars, the ANGL, RLS, and standard matrix-inversion least-squares (MILS) algorithms all converge to the same global minimum linear total phase error (approximately 0.18 rad²), which is only approximately 5% higher than the spatial phase error (approximately 0.17 rad²), and is approximately 33% lower than the total 'naive' phase error without prediction (approximately 0.27 rad²). ANGL can, in principle, also be extended to make non-linear neural network training feasible for these large networks, with the potential to lower the predictor error below the linear predictor error. We will soon scale our linear work to the approximately 108-subaperture MMT AO system, both with simulations and real wavefront sensor data from prime focus.
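The Recursive Least Squares (RLS) update that the abstract credits with fast convergence can be sketched in its simplest scalar form, a one-tap predictor (illustrative only, with a made-up deterministic signal rather than wavefront-sensor data):

```python
# One-tap recursive least squares: learn w in x[n] ≈ w·x[n-1].
# p is the scalar inverse data covariance; delta sets the initial prior.
def rls_scalar(data, lam=1.0, delta=1e6):
    w, p = 0.0, delta
    for n in range(1, len(data)):
        u, d = data[n - 1], data[n]     # regressor, desired output
        k = p * u / (lam + u * p * u)   # Kalman-style gain
        e = d - w * u                   # a-priori prediction error
        w += k * e
        p = (p - k * u * p) / lam
    return w

# Signal generated by x[n] = 0.5·x[n-1]; RLS recovers the coefficient
x = [1.0]
for _ in range(20):
    x.append(0.5 * x[-1])
print(round(rls_scalar(x), 4))  # → 0.5
```

Unlike gradient descent with a fixed step, each RLS update solves the running least-squares problem exactly, which is the source of the speedup over Delta-Rule training reported above.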
Infinite sets of conservation laws for linear and non-linear field equations
International Nuclear Information System (INIS)
Niederle, J.
1984-01-01
The work was motivated by a desire to understand group-theoretically the existence of an infinite set of conservation laws for non-interacting fields and to carry over these conservation laws to the case of interacting fields. The relation between an infinite set of conservation laws of a linear field equation and the enveloping algebra of its space-time symmetry group was established. It is shown that in the case of the Korteweg-de Vries (KdV) equation, to each symmetry of the corresponding linear equation ∂₀u = u_xxx determined by an element of the enveloping algebra of the space translation algebra, there corresponds a symmetry of the full KdV equation.
Modeling and analysis of linear hyperbolic systems of balance laws
Bartecki, Krzysztof
2016-01-01
This monograph focuses on the mathematical modeling of distributed parameter systems in which mass/energy transport or wave propagation phenomena occur and which are described by partial differential equations of hyperbolic type. The case of linear (or linearized) 2 × 2 hyperbolic systems of balance laws is considered, i.e., systems described by two coupled linear partial differential equations with two variables representing physical quantities, depending on both time and a one-dimensional spatial variable. Based on practical examples of a double-pipe heat exchanger and a transportation pipeline, two typical configurations of boundary input signals are analyzed: collocated, wherein both signals affect the system at the same spatial point, and anti-collocated, in which the input signals are applied to the two different end points of the system. The results of this book emerge from the practical experience of the author gained during his studies conducted in the experimental installation of a heat exchange cente…
On the universality of power laws for tokamak plasma predictions
Garcia, J.; Cambon, D.; JET Contributors
2018-02-01
Significant deviations from well-established power laws for the thermal energy confinement time, such as IPB98(y,2), obtained from extensive database analyses, have recently been reported in dedicated power scans. In order to illuminate the adequacy, validity and universality of power laws as tools for predicting plasma performance, a simplified analysis has been carried out in the framework of a minimal model of heat transport which is nevertheless able to account for the interplay between turbulence and effects collinear with the input power, known to play a role in experiments with significant deviations from such power laws. Whereas at low powers the usual scaling laws are recovered with little influence from other plasma parameters, resulting in a robust power-law exponent, at high power the exponents obtained are shown to be extremely sensitive to the heating deposition, the q-profile, and even the sampling or the number of points considered, owing to the highly non-linear behavior of the heat transport. In particular circumstances, the thermal energy confinement time can even exhibit a minimum with respect to the input power, which means that representing the energy confinement time as a power law may be intrinsically invalid. Therefore, plasma predictions based on a power-law approximation with a constant exponent, obtained from a regression over a broad range of powers and other plasma parameters that can non-linearly affect and suppress heat transport, can lead to misleading results. This approach should be taken cautiously and its results continuously compared with modeling that can properly capture the underlying physics, such as gyrokinetic simulations.
Scaling laws for e+/e- linear colliders
International Nuclear Information System (INIS)
Delahaye, J.P.; Guignard, G.; Raubenheimer, T.; Wilson, I.
1999-01-01
Design studies of a future TeV e⁺e⁻ Linear Collider (TLC) are presently being made by five major laboratories within the framework of a world-wide collaboration. A figure of merit is defined which enables an objective comparison of these different designs. This figure of merit is shown to depend only on a small number of parameters. General scaling laws for the main beam parameters and linac parameters are derived and prove to be very effective when used as guidelines to optimize the linear collider design. By adopting appropriate parameters for beam stability, the figure of merit becomes nearly independent of the accelerating gradient and RF frequency of the accelerating structures. In spite of the strong dependence of the wake fields on frequency, the single-bunch emittance blow-up during acceleration along the linac is also shown to be independent of the RF frequency when equivalent trajectory correction schemes are used. In this situation, beam acceleration using high-frequency structures becomes very advantageous because it enables high accelerating fields to be obtained, which reduces the overall length and consequently the total cost of the linac. (Copyright (c) 1999 Elsevier Science B.V., Amsterdam. All rights reserved.)
Linear Prediction Using Refined Autocorrelation Function
Directory of Open Access Journals (Sweden)
M. Shahidur Rahman
2007-07-01
This paper proposes a new technique for improving the performance of linear prediction analysis by utilizing a refined version of the autocorrelation function. Problems in analyzing voiced speech using linear prediction often occur due to the harmonic structure of the excitation source, which causes the autocorrelation function to be an aliased version of that of the vocal tract impulse response. To estimate the vocal tract characteristics accurately, however, the effect of aliasing must be eliminated. In this paper, we employ a homomorphic deconvolution technique in the autocorrelation domain to eliminate the aliasing effect caused by periodicity. The resulting autocorrelation function of the vocal tract impulse response is found to produce significant improvement in estimating formant frequencies. The accuracy of formant estimation is verified on synthetic vowels for a wide range of pitch frequencies typical of male and female speakers. The validity of the proposed method is also illustrated by inspecting the spectral envelopes of natural speech spoken by a high-pitched female speaker. The synthesis filter obtained by the current method is guaranteed to be stable, which makes the method superior to many of its alternatives.
Sparsity in Linear Predictive Coding of Speech
DEFF Research Database (Denmark)
Giacobello, Daniele
…of the effectiveness of their application in audio processing. The second part of the thesis deals with introducing sparsity directly in the linear prediction analysis-by-synthesis (LPAS) speech coding paradigm. We first propose a novel near-optimal method to look for a sparse approximate excitation using a compressed … one with direct applications to coding but also consistent with the speech production model of voiced speech, where the excitation of the all-pole filter can be modeled as an impulse train, i.e., a sparse sequence. Introducing sparsity in the LP framework will also lead to the development of the concept … sensing formulation. Furthermore, we define a novel re-estimation procedure to adapt the predictor coefficients to the given sparse excitation, balancing the two representations in the context of speech coding. Finally, the advantages of the compact parametric representation of a segment of speech, given…
Nonlinear and linear wave equations for propagation in media with frequency power law losses
Szabo, Thomas L.
2003-10-01
The Burgers, KZK, and Westervelt wave equations used for simulating wave propagation in nonlinear media are based on absorption that has a quadratic dependence on frequency. Unfortunately, most lossy media, such as tissue, follow a more general frequency power law. The author's first research involved measurements of loss and dispersion associated with a modification to Blackstock's solution to the linear thermoviscous wave equation [J. Acoust. Soc. Am. 41, 1312 (1967)]. A second paper by Blackstock [J. Acoust. Soc. Am. 77, 2050 (1985)] showed the loss term in the Burgers equation for plane waves could be modified for other known instances of loss. The author's work eventually led to comprehensive time-domain convolutional operators that accounted for both dispersion and general frequency power law absorption [Szabo, J. Acoust. Soc. Am. 96, 491 (1994)]. Versions of appropriate loss terms were developed to extend the standard three nonlinear wave equations to these more general losses. Extensive experimental data have verified the predicted phase velocity dispersion for different power exponents in the linear case. Other groups are now working on methods suitable for solving wave equations numerically for these types of loss directly in the time domain for both linear and nonlinear media.
Conservation laws for multidimensional systems and related linear algebra problems
International Nuclear Information System (INIS)
Igonin, Sergei
2002-01-01
We consider multidimensional systems of PDEs of generalized evolution form with t-derivatives of arbitrary order on the left-hand side and with the right-hand side dependent on lower-order t-derivatives and arbitrary space derivatives. For such systems we find an explicit necessary condition for the existence of higher conservation laws in terms of the system's symbol. For systems that violate this condition we give an effective upper bound on the order of conservation laws. Using this result, we completely describe conservation laws for viscous transonic equations, for the Brusselator model, and for the Belousov-Zhabotinskii system. To achieve this, we solve over an arbitrary field the matrix equations SA = AᵗS and SA = −AᵗS for a square matrix A and its transpose Aᵗ, which may be of independent interest.
Linear regression crash prediction models : issues and proposed solutions.
2010-05-01
The paper develops a linear regression model approach that can be applied to crash data to predict vehicle crashes. The proposed approach involves novel data aggregation to satisfy linear regression assumptions, namely error structure normality ...
Perruisseau-Carrier, A; Bahlouli, N; Bierry, G; Vernet, P; Facca, S; Liverneaux, P
2017-12-01
Augmented reality could help the identification of nerve structures in brachial plexus surgery. The goal of this study was to determine which law of mechanical behavior was better suited by comparing the results of Hooke's isotropic linear elastic law to those of Ogden's isotropic hyperelastic law, applied to a biomechanical model of the brachial plexus. A finite element model was created using ABAQUS® from a 3D model of the brachial plexus acquired by segmentation and meshing of MRI images at 0°, 45° and 135° of shoulder abduction of a healthy subject. The offset between the reconstructed model and the deformed model was evaluated quantitatively by the Hausdorff distance and qualitatively by the identification of 3 anatomical landmarks. In every case the Hausdorff distance was shorter with Ogden's law than with Hooke's law. Qualitatively, the model deformed by Ogden's law followed the concavity of the reconstructed model, whereas the model deformed by Hooke's law remained convex. In conclusion, the results of this study demonstrate that Ogden's isotropic hyperelastic mechanical model was better suited to modeling the deformations of the brachial plexus. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
Deformation Prediction Using Linear Polynomial Functions ...
African Journals Online (AJOL)
By deformation we mean a change in the shape of a structure from its original shape. By monitoring over time using geodetic means, the change in shape and size and the overall structural dynamic behavior of a structure can be detected. Prediction is therefore based on the epoch measurements obtained during monitoring, ...
Infinite sets of conservation laws for linear and nonlinear field equations
International Nuclear Information System (INIS)
Mickelsson, J.
1984-01-01
The relation between an infinite set of conservation laws of a linear field equation and the enveloping algebra of the space-time symmetry group is established. It is shown that each symmetric element of the enveloping algebra of the space-time symmetry group of a linear field equation generates a one-parameter group of symmetries of the field equation. The cases of the Maxwell and Dirac equations are studied in detail. Then it is shown that (at least in the sense of a power series in the 'coupling constant') the conservation laws of the linear case can be deformed to conservation laws of a nonlinear field equation which is obtained from the linear one by adding a nonlinear term invariant under the group of space-time symmetries. As an example, our method is applied to the Korteweg-de Vries equation and to the massless Thirring model. (orig.)
Model Predictive Control for Linear Complementarity and Extended Linear Complementarity Systems
Directory of Open Access Journals (Sweden)
Bambang Riyanto
2005-11-01
In this paper, we propose a model predictive control method for linear complementarity and extended linear complementarity systems by formulating the optimization along the prediction horizon as a mixed integer quadratic program. Such systems contain interaction between continuous dynamics and discrete event systems, and can therefore be categorized as hybrid systems. As linear complementarity and extended linear complementarity systems find applications in different research areas, such as impact mechanical systems, traffic control and process control, this work will contribute to the development of control design methods for those areas as well, as shown by three given examples.
Approximate analytical relationships for linear optimal aeroelastic flight control laws
Kassem, Ayman Hamdy
1998-09-01
This dissertation introduces new methods to uncover functional relationships between design parameters of a contemporary control design technique and the resulting closed-loop properties. Three new methods are developed for generating such relationships through analytical expressions: the Direct Eigen-Based Technique, the Order of Magnitude Technique, and the Cost Function Imbedding Technique. Efforts concentrated on the linear-quadratic state-feedback control-design technique applied to an aeroelastic flight control task. For this specific application, simple and accurate analytical expressions for the closed-loop eigenvalues and zeros in terms of basic parameters such as stability and control derivatives, structural vibration damping and natural frequency, and cost function weights are generated. These expressions explicitly indicate how the weights augment the short period and aeroelastic modes, as well as the closed-loop zeros, and by what physical mechanism. The analytical expressions are used to address topics such as damping, nonminimum phase behavior, stability, and performance with robustness considerations, and design modifications. This type of knowledge is invaluable to the flight control designer and would be more difficult to formulate when obtained from numerical-based sensitivity analysis.
Large-scale linear programs in planning and prediction.
2017-06-01
Large-scale linear programs are at the core of many traffic-related optimization problems in both planning and prediction. Moreover, many of these involve significant uncertainty, and hence are modeled using either chance constraints, or robust optim...
Who Will Win?: Predicting the Presidential Election Using Linear Regression
Lamb, John H.
2007-01-01
This article outlines a linear regression activity that engages learners, uses technology, and fosters cooperation. Students generated least-squares linear regression equations using TI-83 Plus™ graphing calculators, Microsoft® Excel, and paper-and-pencil calculations using derived normal equations to predict the 2004 presidential election.
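The "derived normal equations" mentioned in the abstract reduce, for a single predictor, to closed-form slope and intercept formulas. A minimal sketch with hypothetical approval/vote-share numbers (not the article's 2004 data):

```python
# Least-squares fit of y = b0 + b1·x via the normal equations (XᵀX)b = Xᵀy
def fit_line(xs, ys):
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
    b0 = (sy - b1 * sx) / n                         # intercept
    return b0, b1

# Hypothetical data: incumbent approval (%) vs. incumbent vote share (%)
approval = [40, 45, 50, 55, 60]
vote = [45.0, 47.5, 50.0, 52.5, 55.0]
b0, b1 = fit_line(approval, vote)
print(b0, b1)        # → 25.0 0.5
print(b0 + b1 * 52)  # predicted vote share at 52% approval → 51.0
```

The same arithmetic is what the graphing calculator and spreadsheet perform behind the scenes.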
Predicting birth weight with conditionally linear transformation models.
Möst, Lisa; Schmid, Matthias; Faschingbauer, Florian; Hothorn, Torsten
2016-12-01
Low and high birth weight (BW) are important risk factors for neonatal morbidity and mortality. Gynecologists must therefore accurately predict BW before delivery. Most prediction formulas for BW are based on prenatal ultrasound measurements carried out within one week prior to birth. Although successfully used in clinical practice, these formulas focus on point predictions of BW but do not systematically quantify uncertainty of the predictions, i.e. they result in estimates of the conditional mean of BW but do not deliver prediction intervals. To overcome this problem, we introduce conditionally linear transformation models (CLTMs) to predict BW. Instead of focusing only on the conditional mean, CLTMs model the whole conditional distribution function of BW given prenatal ultrasound parameters. Consequently, the CLTM approach delivers both point predictions of BW and fetus-specific prediction intervals. Prediction intervals constitute an easy-to-interpret measure of prediction accuracy and allow identification of fetuses subject to high prediction uncertainty. Using a data set of 8712 deliveries at the Perinatal Centre at the University Clinic Erlangen (Germany), we analyzed variants of CLTMs and compared them to standard linear regression estimation techniques used in the past and to quantile regression approaches. The best-performing CLTM variant was competitive with quantile regression and linear regression approaches in terms of conditional coverage and average length of the prediction intervals. We propose that CLTMs be used because they are able to account for possible heteroscedasticity, kurtosis, and skewness of the distribution of BWs. © The Author(s) 2014.
Montoye, Alexander H K; Begum, Munni; Henning, Zachary; Pfeiffer, Karin A
2017-02-01
This study had three purposes, all related to evaluating energy expenditure (EE) prediction accuracy from body-worn accelerometers: (1) compare linear regression to linear mixed models, (2) compare linear models to artificial neural network (ANN) models, and (3) compare accuracy of accelerometers placed on the hip, thigh, and wrists. Forty individuals performed 13 activities in a 90 min semi-structured, laboratory-based protocol. Participants wore accelerometers on the right hip, right thigh, and both wrists and a portable metabolic analyzer (EE criterion). Four EE prediction models were developed for each accelerometer: linear regression, linear mixed, and two ANN models. EE prediction accuracy was assessed using correlations, root mean square error (RMSE), and bias and was compared across models and accelerometers using repeated-measures analysis of variance. For all accelerometer placements, there were no significant differences for correlations or RMSE between linear regression and linear mixed models (correlations: r = 0.71-0.88, RMSE: 1.11-1.61 METs; p > 0.05). For the thigh-worn accelerometer, there were no differences in correlations or RMSE between linear and ANN models (ANN correlations: r = 0.89, RMSE: 1.07-1.08 METs; linear model correlations: r = 0.88, RMSE: 1.10-1.11 METs; p > 0.05). Conversely, one ANN had higher correlations and lower RMSE than both linear models for the hip (ANN correlation: r = 0.88, RMSE: 1.12 METs; linear model correlations: r = 0.86, RMSE: 1.18-1.19 METs; p < 0.05), and the ANNs outperformed both linear models for the wrist-worn accelerometers (ANN correlations: r = 0.82-0.84, RMSE: 1.26-1.32 METs; linear model correlations: r = 0.71-0.73, RMSE: 1.55-1.61 METs; p < 0.05). For wrist-worn accelerometers, ANN models offer a significant improvement in EE prediction accuracy over linear models. Conversely, linear models showed similar EE prediction accuracy to machine learning models for hip- and thigh-worn accelerometers.
Bianchi-Baecklund transformations, conservation laws, and linearization of various field theories
International Nuclear Information System (INIS)
Chau Wang, L.L.
1980-01-01
The discussion includes: the Sine-Gordon equation, parametric Bianchi-Baecklund transformations and the derivation of local conservation laws; chiral fields, parametric Bianchi-Baecklund transformations, local and non-local conservation laws, and linearization; super chiral fields, a parallel development similar to the chiral field; and self-dual Yang-Mills fields in 4-dimensional Euclidean space, loop-space chiral equations, a parallel development but with subtlety.
Implementation of neural network based non-linear predictive control
DEFF Research Database (Denmark)
Sørensen, Paul Haase; Nørgård, Peter Magnus; Ravn, Ole
1998-01-01
The paper describes a control method for non-linear systems based on generalized predictive control. Generalized predictive control (GPC) was developed to control linear systems, including open-loop unstable and non-minimum phase systems, but has also been proposed extended for the control of non-linear systems. GPC is model-based, and in this paper we propose the use of a neural network for the modeling of the system. Based on the neural network model, a controller with extended control horizon is developed and the implementation issues are discussed, with particular emphasis on an efficient Quasi-Newton optimization algorithm. The performance is demonstrated on a pneumatic servo system.
An online re-linearization scheme suited for Model Predictive and Linear Quadratic Control
DEFF Research Database (Denmark)
Henriksen, Lars Christian; Poulsen, Niels Kjølstad
This technical note documents the equations for a primal-dual interior-point quadratic programming solver used for MPC. The algorithm exploits the special structure of the MPC problem and is able to reduce the computational burden such that it scales linearly with prediction horizon length rather than cubically, which would be the case if the structure were not exploited. It is also shown how models used for the design of model-based controllers, e.g. linear quadratic and model predictive, can be linearized both at equilibrium and non-equilibrium points, making…
A uniform law for convergence to the local times of linear fractional stable motions
Duffy, James A.
2016-01-01
We provide a uniform law for the weak convergence of additive functionals of partial sum processes to the local times of linear fractional stable motions, in a setting sufficiently general for statistical applications. Our results are fundamental to the analysis of the global properties of nonparametric estimators of nonlinear statistical models that involve such processes as covariates.
Modelling and Predicting Backstroke Start Performance Using Non-Linear and Linear Models.
de Jesus, Karla; Ayala, Helon V H; de Jesus, Kelly; Coelho, Leandro Dos S; Medeiros, Alexandre I A; Abraldes, José A; Vaz, Mário A P; Fernandes, Ricardo J; Vilas-Boas, João Paulo
2018-03-01
Our aim was to compare non-linear and linear mathematical model responses for backstroke start performance prediction. Ten swimmers randomly completed eight 15 m backstroke starts with their feet over the wedge, four with their hands on the highest horizontal handgrip and four on the vertical handgrip. Swimmers were videotaped using a dual-media camera set-up, with the starts being performed over an instrumented block with four force plates. Artificial neural networks were applied to predict 5 m start time using kinematic and kinetic variables, with accuracy determined by the mean absolute percentage error. Artificial neural networks predicted start time more robustly than the linear model with respect to changing from the training to the validation dataset for the vertical handgrip (3.95 ± 1.67 vs. 5.92 ± 3.27%). Artificial neural networks obtained a smaller mean absolute percentage error than the linear model for the horizontal (0.43 ± 0.19 vs. 0.98 ± 0.19%) and vertical handgrip (0.45 ± 0.19 vs. 1.38 ± 0.30%) using all input data. The best artificial neural network validation revealed a smaller mean absolute error than the linear model for the horizontal (0.007 vs. 0.04 s) and vertical handgrip (0.01 vs. 0.03 s). Artificial neural networks should be used for backstroke 5 m start time prediction due to the quite small differences among elite level performances.
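The mean absolute percentage error (MAPE) used above to compare the two model families is a one-liner; a minimal sketch with illustrative numbers (not the study's start times):

```python
# Mean absolute percentage error between measured and predicted values
def mape(actual, predicted):
    return 100.0 * sum(abs((a - p) / a)
                       for a, p in zip(actual, predicted)) / len(actual)

# Two hypothetical 5 m start times (s) and corresponding model predictions
print(mape([2.0, 4.0], [1.5, 5.0]))  # → 25.0
```

Because each error is normalized by the measured value, MAPE lets models be compared across swimmers with different absolute start times.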
Nonlinear Dynamic Inversion Baseline Control Law: Architecture and Performance Predictions
Miller, Christopher J.
2011-01-01
A model reference dynamic inversion control law has been developed to provide a baseline control law for research into adaptive elements and other advanced flight control law components. This controller has been implemented and tested in a hardware-in-the-loop simulation; the simulation results show excellent handling qualities throughout the limited flight envelope. A simple angular momentum formulation was chosen because it can be included in the stability proofs for many basic adaptive theories, such as model reference adaptive control. Many design choices and implementation details reflect the requirements placed on the system by the nonlinear flight environment and the desire to keep the system as basic as possible to simplify the addition of the adaptive elements. Those design choices are explained, along with their predicted impact on the handling qualities.
Predictive IP controller for robust position control of linear servo system.
Lu, Shaowu; Zhou, Fengxing; Ma, Yajie; Tang, Xiaoqi
2016-07-01
Position control is a typical application of linear servo systems. In this paper, to reduce the system overshoot, an integral plus proportional (IP) controller is used in the position control implementation. To further improve the control performance, a gain-tuning IP controller based on a generalized predictive control (GPC) law is proposed. Firstly, to represent the dynamics of the position loop, a second-order linear model is used and its model parameters are estimated on-line using a recursive least squares method. Secondly, based on the GPC law, an optimal control sequence is obtained using a receding horizon, which then directly supplies the IP controller with the corresponding control parameters in real operation. Finally, simulation and experimental results are presented to show the efficiency of the proposed scheme. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Fall with linear drag and Wien's displacement law: approximate solution and Lambert function
International Nuclear Information System (INIS)
Vial, Alexandre
2012-01-01
We present an approximate solution for the downward time of travel in the case of a mass falling with a linear drag force. We show how a quasi-analytical solution implying the Lambert function can be found. We also show that solving the previous problem is equivalent to the search for Wien's displacement law. These results can be of interest for undergraduate students, as they show that some transcendental equations found in physics may be solved without purely numerical methods. Moreover, as will be seen in the case of Wien's displacement law, solutions based on series expansion can be very accurate even with few terms. (paper)
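The Wien's-law connection can be made concrete: the maximum of Planck's spectral radiance in wavelength satisfies the transcendental equation x = 5(1 - e^(-x)), whose non-trivial root is expressible with the Lambert function. A short sketch using SciPy's `lambertw` (the physical constants are the standard CODATA values):

```python
import numpy as np
from scipy.special import lambertw

# Planck's law in wavelength peaks where x = hc/(lambda*k_B*T) solves
# x = 5*(1 - exp(-x)); the non-trivial root uses the Lambert W function.
x = 5 + lambertw(-5 * np.exp(-5)).real
h, c, kB = 6.62607015e-34, 2.99792458e8, 1.380649e-23
b = h * c / (x * kB)  # Wien displacement constant in m*K
print(x, b)  # x ≈ 4.965114, b ≈ 2.898e-3 m*K
```

This reproduces the familiar displacement constant lambda_max * T ≈ 2.898 mm·K without any purely numerical root finding.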
EPMLR: sequence-based linear B-cell epitope prediction method using multiple linear regression.
Lian, Yao; Ge, Meng; Pan, Xian-Ming
2014-12-19
B-cell epitopes have been studied extensively due to their immunological applications, such as peptide-based vaccine development, antibody production, and disease diagnosis and therapy. Despite several decades of research, the accurate prediction of linear B-cell epitopes has remained a challenging task. In this work, based on the antigen's primary sequence information, a novel linear B-cell epitope prediction model was developed using multiple linear regression (MLR). A 10-fold cross-validation test on a large non-redundant dataset was performed to evaluate the performance of our model. To alleviate the problem caused by the noise of the negative dataset, 300 experiments utilizing 300 sub-datasets were performed. We achieved an overall sensitivity of 81.8%, precision of 64.1% and area under the receiver operating characteristic curve (AUC) of 0.728. We have presented a reliable method for the identification of linear B-cell epitopes using the antigen's primary sequence information. Moreover, a web server, EPMLR, has been developed for linear B-cell epitope prediction: http://www.bioinfo.tsinghua.edu.cn/epitope/EPMLR/.
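The core of such an MLR predictor, mapping per-window numeric features to a propensity score by ordinary least squares, can be sketched on synthetic data (the feature encoding below is hypothetical, not the paper's actual sequence representation):

```python
import numpy as np

# Hypothetical setup: 300 residue windows, each described by 20 numeric
# features; a linear propensity score is recovered by least squares.
rng = np.random.default_rng(7)
X = rng.random((300, 20))                          # feature matrix
w_true = rng.normal(0.0, 1.0, 20)                  # "true" weights
y = X @ w_true + 0.05 * rng.standard_normal(300)   # noisy propensity scores
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.abs(w - w_true).max())  # small: MLR recovers the weights
```

In the real method the fitted scores are thresholded to call epitope versus non-epitope residues, and the 300 sub-dataset repetitions average out noise in the negative set.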
Validation of Individual Non-Linear Predictive Pharmacokinetic ...
African Journals Online (AJOL)
3Department of Veterinary Medicine, Faculty of Agriculture, University of Novi Sad, Novi Sad, Republic of Serbia ... Purpose: To evaluate the predictive performance of phenytoin multiple-dosing non-linear pharmacokinetic ... status epilepticus affects an estimated 152,000 ... causal factors, i.e., infection, inflammation, tissue ...
Neural Generalized Predictive Control of a non-linear Process
DEFF Research Database (Denmark)
Sørensen, Paul Haase; Nørgård, Peter Magnus; Ravn, Ole
1998-01-01
The use of neural networks in non-linear control is made difficult by the fact that stability and robustness are not guaranteed and that the implementation in real time is non-trivial. In this paper we introduce a predictive controller, based on a neural network model, which has promising stability qualities. The controller is a non-linear version of the well-known generalized predictive controller developed in linear control theory. It involves minimization of a cost function which, in the present case, has to be done numerically. Therefore, we develop the necessary numerical algorithms in substantial detail and discuss the implementation difficulties. The neural generalized predictive controller is tested on a pneumatic servo system.
Implementation of neural network based non-linear predictive control
DEFF Research Database (Denmark)
Sørensen, Paul Haase; Nørgård, Peter Magnus; Ravn, Ole
1999-01-01
This paper describes a control method for non-linear systems based on generalized predictive control. Generalized predictive control (GPC) was developed to control linear systems, including open-loop unstable and non-minimum-phase systems, but has also been proposed for extension to the control of non-linear systems. GPC is model based, and in this paper we propose the use of a neural network for modelling the system. Based on the neural network model, a controller with an extended control horizon is developed, and the implementation issues are discussed, with particular emphasis on an efficient quasi-Newton algorithm. The performance is demonstrated on a pneumatic servo system.
ORACLS: A system for linear-quadratic-Gaussian control law design
Armstrong, E. S.
1978-01-01
A modern control theory design package (ORACLS) for constructing controllers and optimal filters for systems modeled by linear time-invariant differential or difference equations is described. Numerical linear-algebra procedures are used to implement the linear-quadratic-Gaussian (LQG) methodology of modern control theory. Algorithms are included for computing eigensystems of real matrices, the relative stability of a matrix, factored forms for nonnegative definite matrices, the solutions and least squares approximations to the solutions of certain linear matrix algebraic equations, the controllability properties of a linear time-invariant system, and the steady state covariance matrix of an open-loop stable system forced by white noise. Subroutines are provided for solving both the continuous and discrete optimal linear regulator problems with noise free measurements and the sampled-data optimal linear regulator problem. For measurement noise, duality theory and the optimal regulator algorithms are used to solve the continuous and discrete Kalman-Bucy filter problems. Subroutines are also included which give control laws causing the output of a system to track the output of a prescribed model.
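The discrete optimal linear regulator computation that packages like ORACLS provide can be sketched by fixed-point iteration of the Riccati equation (a toy double-integrator plant, not an ORACLS subroutine):

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Discrete LQR gain via fixed-point iteration of the Riccati equation."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # feedback gain
        P = Q + A.T @ P @ (A - B @ K)                      # Riccati update
    return K, P

# Double-integrator plant sampled at dt = 0.1 (illustrative)
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
K, P = dlqr(A, B, np.eye(2), np.array([[1.0]]))
rho = max(abs(np.linalg.eigvals(A - B @ K)))  # closed-loop spectral radius
print(K, rho)  # rho < 1: the regulator stabilizes the plant
```

Production code (as in ORACLS) uses numerically more careful factorizations, but the fixed point reached is the same steady-state Riccati solution.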
Improved Methods for Pitch Synchronous Linear Prediction Analysis of Speech
劉, 麗清
2015-01-01
Linear prediction (LP) analysis has been applied to speech systems over the last few decades. The LP technique is well suited to speech analysis due to its ability to approximately model the speech production process. Hence LP analysis has been widely used for speech enhancement, low-bit-rate speech coding in cellular telephony, speech recognition, characteristic parameter extraction (vocal tract resonance frequencies, fundamental frequency, called pitch) and so on. However, the performance of the co...
Linear and nonlinear dynamic systems in financial time series prediction
Directory of Open Access Journals (Sweden)
Salim Lahmiri
2012-10-01
The autoregressive moving average (ARMA) process and dynamic neural networks, namely the nonlinear autoregressive moving average with exogenous inputs (NARX), are compared by evaluating their ability to predict financial time series; for instance, the S&P500 returns. Two classes of ARMA are considered. The first one is the standard ARMA model, which is a linear static system. The second one uses a Kalman filter (KF) to estimate and predict the ARMA coefficients. This model is a linear dynamic system. The forecasting ability of each system is evaluated by means of mean absolute error (MAE) and mean absolute deviation (MAD) statistics. Simulation results indicate that the ARMA-KF system performs better than the standard ARMA alone. Thus, introducing dynamics into the ARMA process improves the forecasting accuracy. In addition, the ARMA-KF outperformed the NARX. This result may suggest that the linear component found in the S&P500 return series is more dominant than the nonlinear part. In sum, we conclude that introducing dynamics into the ARMA process provides an effective system for S&P500 time series prediction.
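The idea of using a Kalman filter to estimate and track autoregressive coefficients, turning a static fit into a linear dynamic system, can be sketched as follows (synthetic AR(2) data and assumed noise covariances, not the paper's exact setup):

```python
import numpy as np

# Synthetic AR(2) series with coefficients [0.6, 0.3] (illustrative)
rng = np.random.default_rng(1)
a_true = np.array([0.6, 0.3])
n = 2000
y = np.zeros(n)
for k in range(2, n):
    y[k] = a_true[0] * y[k-1] + a_true[1] * y[k-2] + 0.1 * rng.standard_normal()

# Kalman filter with the AR coefficients as a random-walk state
theta, P = np.zeros(2), np.eye(2)
Qw, Rv = 1e-6 * np.eye(2), 0.01      # assumed process / measurement noise
for k in range(2, n):
    H = np.array([y[k-1], y[k-2]])   # observation row
    P = P + Qw                       # predict step (random-walk state model)
    S = H @ P @ H + Rv               # innovation variance
    K = P @ H / S                    # Kalman gain
    theta = theta + K * (y[k] - H @ theta)
    P = P - np.outer(K, H @ P)
print(theta)  # tracks [0.6, 0.3]
```

The non-zero process noise Qw is what lets the coefficient estimates drift with the data, which is the "dynamics" the abstract credits for the improved forecasts.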
Characterizing and predicting the robustness of power-law networks
International Nuclear Information System (INIS)
LaRocca, Sarah; Guikema, Seth D.
2015-01-01
Power-law networks such as the Internet, terrorist cells, species relationships, and cellular metabolic interactions are susceptible to node failures, yet maintaining network connectivity is essential for network functionality. Disconnection of the network leads to fragmentation and, in some cases, collapse of the underlying system. However, the influences of the topology of networks on their ability to withstand node failures are poorly understood. Based on a study of the response of 2000 randomly-generated power-law networks to node failures, we find that networks with higher nodal degree and clustering coefficient, lower betweenness centrality, and lower variability in path length and clustering coefficient maintain their cohesion better during such events. We also find that network robustness, i.e., the ability to withstand node failures, can be accurately predicted a priori for power-law networks across many fields. These results provide a basis for designing new, more robust networks, improving the robustness of existing networks such as the Internet and cellular metabolic pathways, and efficiently degrading networks such as terrorist cells. - Highlights: • Examine relationship between network topology and robustness to failures. • Relationship is statistically significant for scale-free networks. • Use statistical models to estimate robustness to failures for real-world networks
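A minimal illustration of the kind of experiment described, generating a power-law (preferential-attachment) network and measuring the giant-component share after random node failures (a simplified sketch, not the authors' 2000-network study):

```python
import random
from collections import defaultdict

def barabasi_albert(n, m, seed=0):
    """Preferential-attachment graph with a power-law degree distribution."""
    rng = random.Random(seed)
    adj = defaultdict(set)
    targets = list(range(m))
    repeated = []                      # node list weighted by degree
    for v in range(m, n):
        for t in targets:
            adj[v].add(t); adj[t].add(v)
        repeated.extend(targets); repeated.extend([v] * m)
        # sample up to m distinct, degree-biased targets for the next node
        targets = list({rng.choice(repeated) for _ in range(m)})
    return adj

def giant_fraction(adj, frac=0.2, seed=1):
    """Giant-component share of survivors after removing a random node fraction."""
    rng = random.Random(seed)
    nodes = list(adj)
    dead = set(rng.sample(nodes, int(frac * len(nodes))))
    alive = [v for v in nodes if v not in dead]
    seen, best = set(), 0
    for s in alive:
        if s in seen:
            continue
        stack, comp = [s], 0
        seen.add(s)
        while stack:                   # depth-first component traversal
            u = stack.pop(); comp += 1
            for w in adj[u]:
                if w not in dead and w not in seen:
                    seen.add(w); stack.append(w)
        best = max(best, comp)
    return best / len(alive)

adj = barabasi_albert(1000, 3)
gf = giant_fraction(adj)
print(gf)  # close to 1: robust to random failures
```

Repeating the measurement over many generated networks, and regressing the outcome on topology statistics such as mean degree and clustering, is the essence of the predictive models the abstract describes.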
Non-linear aeroelastic prediction for aircraft applications
de C. Henshaw, M. J.; Badcock, K. J.; Vio, G. A.; Allen, C. B.; Chamberlain, J.; Kaynes, I.; Dimitriadis, G.; Cooper, J. E.; Woodgate, M. A.; Rampurawala, A. M.; Jones, D.; Fenwick, C.; Gaitonde, A. L.; Taylor, N. V.; Amor, D. S.; Eccles, T. A.; Denley, C. J.
2007-05-01
Current industrial practice for the prediction and analysis of flutter relies heavily on linear methods and this has led to overly conservative design and envelope restrictions for aircraft. Although the methods have served the industry well, it is clear that for a number of reasons the inclusion of non-linearity in the mathematical and computational aeroelastic prediction tools is highly desirable. The increase in available and affordable computational resources, together with major advances in algorithms, mean that non-linear aeroelastic tools are now viable within the aircraft design and qualification environment. The Partnership for Unsteady Methods in Aerodynamics (PUMA) Defence and Aerospace Research Partnership (DARP) was sponsored in 2002 to conduct research into non-linear aeroelastic prediction methods and an academic, industry, and government consortium collaborated to address the following objectives: To develop useable methodologies to model and predict non-linear aeroelastic behaviour of complete aircraft. To evaluate the methodologies on real aircraft problems. To investigate the effect of non-linearities on aeroelastic behaviour and to determine which have the greatest effect on the flutter qualification process. These aims have been very effectively met during the course of the programme and the research outputs include: New methods available to industry for use in the flutter prediction process, together with the appropriate coaching of industry engineers. Interesting results in both linear and non-linear aeroelastics, with comprehensive comparison of methods and approaches for challenging problems. Additional embryonic techniques that, with further research, will further improve aeroelastics capability. This paper describes the methods that have been developed and how they are deployable within the industrial environment. We present a thorough review of the PUMA aeroelastics programme together with a comprehensive review of the relevant research
The Inverse System Method Applied to the Derivation of Power System Non—linear Control Laws
Institute of Scientific and Technical Information of China (English)
Donghai LI; Xuezhi JIANG; et al.
1997-01-01
The differential geometric method has been applied effectively to a series of power system non-linear control problems. However, a set of differential equations must be solved to obtain the required diffeomorphic transformation, so the derivation of control laws is very complicated. In fact, because of the specific structure of power system models, the required diffeomorphic transformation may be obtained directly, making it unnecessary to solve a set of differential equations. In addition, the inverse system method is in reality equivalent to the differential geometric method and is not limited to affine nonlinear systems; its physical meaning can be viewed directly, and its deduction needs only algebraic operations and differentiation, so control laws can be obtained easily and applied conveniently in engineering. The authors take steam valving control of a power system as a typical case study. It is demonstrated that the control law deduced by the inverse system method is exactly the same as that obtained by the differential geometric method. This conclusion will simplify the derivation of control laws for steam valving, excitation, converters and static var compensators by the differential geometric method, and may suit similar control problems in other areas.
A kinetic approach to some quasi-linear laws of macroeconomics
Gligor, M.; Ignat, M.
2002-11-01
Some previous works have presented data on wealth and income distributions in developed countries and have found that the great majority of the population is described by an exponential distribution, which suggests that a kinetic approach could be adequate to describe this empirical evidence. The aim of our paper is to extend this framework by developing a systematic kinetic treatment of socio-economic systems and to explain how linear laws, modelling correlations between macroeconomic variables, may arise in this context. First we construct the Boltzmann kinetic equation for an idealised system composed of many individuals (workers, officers, businessmen, etc.), each of them receiving a certain income and spending money on their needs. To each individual a certain time-varying amount of money is associated, this amount being his/her phase-space coordinate. In this way the exponential distribution of money in a closed economy is explicitly found. The extension of this result to states near equilibrium gives us the possibility of taking into account the regular increase of the total amount of money, in accordance with modern economic theories. The Kubo-Green-Onsager linear response theory leads us to a set of linear equations between some macroeconomic variables. Finally, the validity of such laws is discussed in relation to time-reversal symmetry and is tested empirically using some macroeconomic time series.
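The exponential money distribution in a closed economy can be reproduced with a standard kinetic exchange simulation (a Dragulescu-Yakovenko-style model chosen here for illustration; the pairing rule is an assumption, not the paper's exact kinetic equation):

```python
import random

# Kinetic exchange: N agents, conserved total money M, random pairwise trades
# in which a pair pools its money and splits it at a random fraction.
rng = random.Random(0)
N, M = 1000, 1000.0
money = [M / N] * N
for _ in range(200000):
    i, j = rng.randrange(N), rng.randrange(N)
    if i == j:
        continue
    pool = money[i] + money[j]
    eps = rng.random()
    money[i], money[j] = eps * pool, (1 - eps) * pool

# Equilibrium is exponential, P(m) ~ exp(-m/T) with "temperature" T = M/N,
# for which the fraction of agents below the mean is 1 - 1/e ≈ 0.63.
mean = sum(money) / N
frac_below = sum(m < mean for m in money) / N
print(round(mean, 6), frac_below)
```

Total money is conserved exactly by each trade, which is the closed-economy assumption behind the exponential (Boltzmann-Gibbs-like) equilibrium.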
Four-dimensional Hooke's law can encompass linear elasticity and inertia
International Nuclear Information System (INIS)
Antoci, S.; Mihich, L.
1999-01-01
The question is examined whether the formally straightforward extension of Hooke's time-honoured stress-strain relation to the four dimensions of special and of general relativity can make physical sense. The four-dimensional Hooke's law is found to be able to account for the inertia of matter; in the flat-space, slow-motion approximation the field equations for the displacement four-vector field ξi can encompass both linear elasticity and inertia. In this limit one just recovers the equations of motion of the classical theory of elasticity.
Technical note: A linear model for predicting δ13Cprotein.
Pestle, William J; Hubbe, Mark; Smith, Erin K; Stevenson, Joseph M
2015-08-01
Development of a model for the prediction of δ13Cprotein from δ13Ccollagen and Δ13Cap-co. Model-generated values could, in turn, serve as "consumer" inputs for multisource mixture modelling of paleodiet. Linear regression analysis of previously published controlled-diet data facilitated the development of a mathematical model for predicting δ13Cprotein (and an experimentally generated error term) from isotopic data routinely generated during the analysis of osseous remains (δ13Cco and Δ13Cap-co). Regression analysis resulted in a two-term linear model, δ13Cprotein (‰) = (0.78 × δ13Cco) − (0.58 × Δ13Cap-co) − 4.7, possessing a high R-value of 0.93 (r² = 0.86, P …) when applied to the analysis of human osseous remains. These predicted values are ideal for use in multisource mixture modelling of dietary protein source contribution. © 2015 Wiley Periodicals, Inc.
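The reported two-term model is straightforward to apply; the input values below are hypothetical, chosen only to show the arithmetic:

```python
def predict_d13C_protein(d13C_co, D13C_ap_co):
    """Two-term linear model from the abstract; inputs and output in per mil."""
    return 0.78 * d13C_co - 0.58 * D13C_ap_co - 4.7

# Hypothetical collagen value and apatite-collagen spacing
print(predict_d13C_protein(-20.0, 5.0))  # -23.2
```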
Zhang, Langwen; Xie, Wei; Wang, Jingcheng
2017-11-01
In this work, synthesis of robust distributed model predictive control (MPC) is presented for a class of linear systems subject to structured time-varying uncertainties. By decomposing a global system into smaller dimensional subsystems, a set of distributed MPC controllers, instead of a centralised controller, are designed. To ensure the robust stability of the closed-loop system with respect to model uncertainties, distributed state feedback laws are obtained by solving a min-max optimisation problem. The design of robust distributed MPC is then transformed into solving a minimisation optimisation problem with linear matrix inequality constraints. An iterative online algorithm with adjustable maximum iteration is proposed to coordinate the distributed controllers to achieve a global performance. The simulation results show the effectiveness of the proposed robust distributed MPC algorithm.
Linear predictions of supercritical flow instability in two parallel channels
International Nuclear Information System (INIS)
Shah, M.
2008-01-01
A steady state linear code that can predict thermo-hydraulic instability boundaries in a two parallel channel system under supercritical conditions has been developed. Linear and non-linear solutions of the instability boundary in a two parallel channel system are also compared. The effect of gravity on the instability boundary in a two parallel channel system, by changing the orientation of the system flow from horizontal flow to vertical up-flow and vertical down-flow has been analyzed. Vertical up-flow is found to be more unstable than horizontal flow and vertical down flow is found to be the most unstable configuration. The type of instability present in each flow-orientation of a parallel channel system has been checked and the density wave oscillation type is observed in horizontal flow and vertical up-flow, while the static type of instability is observed in a vertical down-flow for the cases studied here. The parameters affecting the instability boundary, such as the heating power, inlet temperature, inlet and outlet K-factors are varied to assess their effects. This study is important for the design of future Generation IV nuclear reactors in which supercritical light water is proposed as the primary coolant. (author)
Predicting Madura cattle growth curve using non-linear model
Widyas, N.; Prastowo, S.; Widi, T. S. M.; Baliarti, E.
2018-03-01
Madura cattle are an Indonesian native breed, a composite that has undergone hundreds of years of selection and domestication to reach its present remarkable uniformity. Crossbreeding has reached the isle of Madura, and the Madrasin, a cross between Madura cows and Limousin semen, has emerged. This paper aimed to compare the growth curves of Madrasin and one type of pure Madura cow, the common Madura cattle (Madura), using non-linear models. Madura cattle are kept traditionally, so reliable records are hardly available. Data were collected from smallholder farmers in Madura. Cows from different age classes (5 years) were observed, and body measurements (chest girth, body length and wither height) were taken. In total, 63 Madura and 120 Madrasin records were obtained. A linear model was built with cattle sub-populations and age as explanatory variables. Body weights were estimated based on the chest girth. Growth curves were built using logistic regression. Results showed that, within the same age, Madrasin has a significantly larger body than Madura (p …). Logistic models fit better for the Madura and Madrasin cattle data; the estimated MSE for these models was 39.09 and 759.28, with prediction accuracies of 99 and 92% for Madura and Madrasin, respectively. Prediction of the growth curve using the logistic regression model performed well in both types of Madura cattle. However, efforts to obtain accurate data on Madura cattle are necessary to better characterize and study these cattle.
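Fitting a logistic growth curve of the kind used here can be sketched with SciPy's `curve_fit` (synthetic weight-for-age data; the parameter values are illustrative, not the Madura estimates):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, A, k, t0):
    """Logistic growth: asymptote A, rate k, inflection age t0."""
    return A / (1.0 + np.exp(-k * (t - t0)))

# Synthetic weight-for-age data (hypothetical: kg versus months)
t = np.linspace(1, 60, 30)
w = logistic(t, 350.0, 0.12, 24.0) + np.random.default_rng(3).normal(0, 5, t.size)
(A, k, t0), _ = curve_fit(logistic, t, w, p0=[300.0, 0.1, 20.0])
print(A, k, t0)  # close to the generating values 350, 0.12, 24
```

Model comparison as in the abstract then reduces to comparing the mean squared error of competing growth functions on the same records.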
Comparison of Linear Prediction Models for Audio Signals
Directory of Open Access Journals (Sweden)
2009-03-01
While linear prediction (LP) has become immensely popular in speech modeling, it does not seem to provide a good approach for modeling audio signals. This is somewhat surprising, since a tonal signal consisting of a number of sinusoids can be perfectly predicted based on an (all-pole) LP model with a model order that is twice the number of sinusoids. We provide an explanation why this result cannot simply be extrapolated to LP of audio signals. If noise is taken into account in the tonal signal model, a low-order all-pole model appears to be appropriate only when the tonal components are uniformly distributed in the Nyquist interval. Based on this observation, different alternatives to the conventional LP model can be suggested. Either the model should be changed to a pole-zero, a high-order all-pole, or a pitch prediction model, or the conventional LP model should be preceded by an appropriate frequency transform, such as a frequency warping or downsampling. By comparing these alternative LP models to the conventional LP model in terms of frequency estimation accuracy, residual spectral flatness, and perceptual frequency resolution, we obtain several new and promising approaches to LP-based audio modeling.
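The claim that a tonal signal of N sinusoids is perfectly predicted by an all-pole model of order 2N can be checked with the standard autocorrelation (Levinson-Durbin) method (a generic textbook implementation, assumed here rather than taken from the paper):

```python
import numpy as np

def lp_coeffs(x, order):
    """All-pole LP coefficients via the autocorrelation (Levinson-Durbin) method."""
    r = np.array([x[:len(x) - k] @ x[k:] for k in range(order + 1)])
    a, err = np.zeros(order), r[0]
    for i in range(order):
        k = (r[i + 1] - a[:i] @ r[i:0:-1]) / err   # reflection coefficient
        a_new = a.copy()
        a_new[i] = k
        a_new[:i] = a[:i] - k * a[:i][::-1]        # order-update of predictor
        a, err = a_new, err * (1 - k * k)          # residual energy update
    return a, err

# One sinusoid: the order-2 predictor x[n] = 2cos(w)x[n-1] - x[n-2] is exact
n = np.arange(4000)
x = np.sin(0.3 * n)
a, err = lp_coeffs(x, 2)
print(a, err / (x @ x))  # a ≈ [2cos(0.3), -1], near-zero residual
```

Adding broadband noise to `x` is exactly the situation discussed in the abstract: the low-order all-pole fit then degrades unless the tonal components cover the Nyquist interval uniformly.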
Genomic prediction based on data from three layer lines: a comparison between linear methods
Calus, M.P.L.; Huang, H.; Vereijken, J.; Visscher, J.; Napel, ten J.; Windig, J.J.
2014-01-01
Background The prediction accuracy of several linear genomic prediction models, which have previously been used for within-line genomic prediction, was evaluated for multi-line genomic prediction. Methods Compared to a conventional BLUP (best linear unbiased prediction) model using pedigree data, we
Li, Yanning; Canepa, Edward S.; Claudel, Christian G.
2013-10-01
This article presents a new robust control framework for transportation problems in which the state is modeled by a first order scalar conservation law. Using an equivalent formulation based on a Hamilton-Jacobi equation, we pose the problem of controlling the state of the system on a network link, using boundary flow control, as a Linear Program. Unlike many previously investigated transportation control schemes, this method yields a globally optimal solution and is capable of handling shocks (i.e. discontinuities in the state of the system). We also demonstrate that the same framework can handle robust control problems, in which the uncontrollable components of the initial and boundary conditions are encoded in intervals on the right hand side of inequalities in the linear program. The lower bound of the interval which defines the smallest feasible solution set is used to solve the robust LP (or MILP if the objective function depends on boolean variables). Since this framework leverages the intrinsic properties of the Hamilton-Jacobi equation used to model the state of the system, it is extremely fast. Several examples are given to demonstrate the performance of the robust control solution and the trade-off between the robustness and the optimality. © 2013 IEEE.
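The structure of the resulting Linear Program can be illustrated with a heavily simplified single-link toy problem, maximizing admitted boundary inflow subject to a capacity and a cumulative storage bound (the constraint set below is an illustrative stand-in for the Hamilton-Jacobi formulation, not the authors' actual LP):

```python
import numpy as np
from scipy.optimize import linprog

# Single-link toy: choose boundary inflows q[0..T-1] to maximize throughput
# subject to a per-step capacity and a cumulative storage (density) bound.
T, cap, store0, store_max, outflow = 8, 1.0, 0.2, 1.0, 0.6
c = -np.ones(T)                         # maximize sum(q) == minimize -sum(q)
A_ub = np.tril(np.ones((T, T)))         # rows give prefix sums of q
b_ub = np.array([store_max - store0 + (t + 1) * outflow for t in range(T)])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, cap)] * T)
print(res.x.sum())  # 5.6: the storage bound, not capacity, is binding
```

Robustness in the paper's sense enters by widening the right-hand sides `b_ub` into intervals for the uncontrollable boundary data and keeping the smallest feasible set.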
On the structure of non-local conservation laws in the two-dimensional non-linear sigma-model
International Nuclear Information System (INIS)
Zamolodchikov, Al.B.
1978-01-01
The non-local conserved charges are supposed to satisfy a special multiplicative law in the space of asymptotic states of the non-linear sigma-model. This supposition leads to factorization equations for two-particle scattering matrix elements and determines to some extent the action of these charges in the asymptotic space. Their conservation turns out to be consistent with the factorized S-matrix of the non-linear sigma-model. It is shown also that the factorized sine-Gordon S-matrix is consistent with a similar family of conservation laws
Frequency prediction by linear stability analysis around mean flow
Bengana, Yacine; Tuckerman, Laurette
2017-11-01
The frequency of certain limit cycles resulting from a Hopf bifurcation, such as the von Karman vortex street, can be predicted by linear stability analysis around their mean flows. Barkley (2006) has shown this to yield an eigenvalue whose real part is zero and whose imaginary part matches the nonlinear frequency. This property was named RZIF by Turton et al. (2015); moreover they found that the traveling waves (TW) of thermosolutal convection have the RZIF property. They explained this as a consequence of the fact that the temporal Fourier spectrum is dominated by the mean flow and first harmonic. We could therefore consider that only the first mode is important in the saturation of the mean flow as presented in the Self-Consistent Model (SCM) of Mantic-Lugo et al. (2014). We have implemented a full Newton's method to solve the SCM for thermosolutal convection. We show that while the RZIF property is satisfied far from the threshold, the SCM model reproduces the exact frequency only very close to the threshold. Thus, the nonlinear interaction of only the first mode with itself is insufficiently accurate to estimate the mean flow. Our next step will be to take into account higher harmonics and to apply this analysis to the standing waves, for which RZIF does not hold.
Flow discharge prediction in compound channels using linear genetic programming
Azamathulla, H. Md.; Zahiri, A.
2012-08-01
Flow discharge determination in rivers is one of the key elements of mathematical modelling in the design of river engineering projects. Because of the inundation of floodplains and sudden changes in river geometry, flow resistance equations are not applicable to compound channels. Therefore, many approaches have been developed to modify flow discharge computations. Most of these methods give satisfactory results only in laboratory flumes. Owing to their ability to model complex phenomena, artificial intelligence methods have recently found wide application in various fields of water engineering. Linear genetic programming (LGP), a branch of artificial intelligence methods, is able to optimise the model structure and its components and to derive an explicit equation based on the variables of the phenomenon. In this paper, a precise dimensionless equation has been derived for the prediction of flood discharge using LGP. The proposed model was developed using published stage-discharge data sets for 394 laboratory and 30 field compound channels. The results indicate that the LGP model performs better than the existing models.
Tutcuoglu, A.; Majidi, C.
2014-12-01
Using principles of damped harmonic oscillation with continuous media, we examine electrostatic energy harvesting with a "soft-matter" array of dielectric elastomer (DE) transducers. The array is composed of infinitely thin and deformable electrodes separated by layers of insulating elastomer. During vibration, it deforms longitudinally, resulting in a change in the capacitance and electrical enthalpy of the charged electrodes. Depending on the phase of electrostatic loading, the DE array can function as either an actuator that amplifies small vibrations or a generator that converts these external excitations into electrical power. Both cases are addressed with a comprehensive theory that accounts for the influence of viscoelasticity, dielectric breakdown, and electromechanical coupling induced by Maxwell stress. In the case of a linearized Kelvin-Voigt model of the dielectric, we obtain a closed-form estimate for the electrical power output and a scaling law for DE generator design. For the complete nonlinear model, we obtain the optimal electrostatic voltage input for maximum electrical power output.
Non-linear laws of echoic memory and auditory change detection in humans.
Inui, Koji; Urakawa, Tomokazu; Yamashiro, Koya; Otsuru, Naofumi; Nishihara, Makoto; Takeshima, Yasuyuki; Keceli, Sumru; Kakigi, Ryusuke
2010-07-03
The detection of any abrupt change in the environment is important to survival. Since memory of preceding sensory conditions is necessary for detecting changes, such a change-detection system relates closely to the memory system. Here we used an auditory change-related N1 subcomponent (change-N1) of event-related brain potentials to investigate cortical mechanisms underlying change detection and echoic memory. Change-N1 was elicited by a simple paradigm with two tones, a standard followed by a deviant, while subjects watched a silent movie. The amplitude of change-N1 elicited by a fixed sound pressure deviance (70 dB vs. 75 dB) was negatively correlated with the logarithm of the interval between the standard sound and deviant sound (1, 10, 100, or 1000 ms), while positively correlated with the logarithm of the duration of the standard sound (25, 100, 500, or 1000 ms). The amplitude of change-N1 elicited by a deviance in sound pressure, sound frequency, and sound location was correlated with the logarithm of the magnitude of physical differences between the standard and deviant sounds. The present findings suggest that temporal representation of echoic memory is non-linear and Weber-Fechner law holds for the automatic cortical response to sound changes within a suprathreshold range. Since the present results show that the behavior of echoic memory can be understood through change-N1, change-N1 would be a useful tool to investigate memory systems.
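The Weber-Fechner-type relation reported here, response amplitude linear in the logarithm of the physical change magnitude, amounts to a simple regression on a log axis (the numbers below are hypothetical, not the study's data):

```python
import numpy as np

# Hypothetical response amplitudes versus physical change magnitude
mag = np.array([1.0, 2.0, 4.0, 8.0, 16.0])   # e.g. deviance steps (a.u.)
amp = np.array([0.9, 1.6, 2.4, 3.1, 3.8])    # e.g. change-N1 amplitude (a.u.)
slope, intercept = np.polyfit(np.log(mag), amp, 1)
r = np.corrcoef(np.log(mag), amp)[0, 1]
print(slope, r)  # positive slope; r close to 1 on the log axis
```

A correlation near 1 against log(magnitude), together with a clearly worse fit against raw magnitude, is the signature of the logarithmic coding the abstract describes.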
Li, Yanning; Canepa, Edward S.; Claudel, Christian
2014-03-01
This article presents a new optimal control framework for transportation networks in which the state is modeled by a first order scalar conservation law. Using an equivalent formulation based on a Hamilton-Jacobi (H-J) equation and the commonly used triangular fundamental diagram, we pose the problem of controlling the state of the system on a network link, in a finite horizon, as a Linear Program (LP). We then show that this framework can be extended to an arbitrary transportation network, resulting in an LP or a Quadratic Program. Unlike many previously investigated transportation network control schemes, this method yields a globally optimal solution and is capable of handling shocks (i.e., discontinuities in the state of the system). As it leverages the intrinsic properties of the H-J equation used to model the state of the system, it does not require any approximation, unlike classical methods that are based on discretizations of the model. The computational efficiency of the method is illustrated on a transportation network. © 2014 IEEE.
Li, Yanning; Canepa, Edward S.; Claudel, Christian
2014-01-01
This article presents a new optimal control framework for transportation networks in which the state is modeled by a first order scalar conservation law. Using an equivalent formulation based on a Hamilton-Jacobi (H-J) equation and the commonly used triangular fundamental diagram, we pose the problem of controlling the state of the system on a network link, in a finite horizon, as a Linear Program (LP). We then show that this framework can be extended to an arbitrary transportation network, resulting in an LP or a Quadratic Program. Unlike many previously investigated transportation network control schemes, this method yields a globally optimal solution and is capable of handling shocks (i.e., discontinuities in the state of the system). As it leverages the intrinsic properties of the H-J equation used to model the state of the system, it does not require any approximation, unlike classical methods that are based on discretizations of the model. The computational efficiency of the method is illustrated on a transportation network. © 2014 IEEE.
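The "triangular fundamental diagram" used in the two abstracts above is a piecewise-linear flux-density relation; its concavity is what makes the Hamilton-Jacobi control problem reducible to a Linear Program. A minimal sketch (all parameter values are illustrative assumptions, not taken from the paper):

```python
def triangular_flux(rho, v_free=30.0, w=5.0, rho_jam=0.2):
    """Triangular fundamental diagram: traffic flux (veh/s) as a
    function of density rho (veh/m). The free-flow branch v_free*rho
    meets the congestion branch w*(rho_jam - rho) at the critical
    density w*rho_jam/(v_free + w). Parameter values are illustrative.
    """
    return min(v_free * rho, w * (rho_jam - rho))

# critical density where the two branches intersect (flux maximum)
rho_crit = 5.0 * 0.2 / (30.0 + 5.0)
```

Flux vanishes at zero density and at jam density, and peaks at the critical density, which is the concave shape the LP formulation exploits.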
Non-linear laws of echoic memory and auditory change detection in humans
Directory of Open Access Journals (Sweden)
Takeshima Yasuyuki
2010-07-01
Full Text Available Abstract Background The detection of any abrupt change in the environment is important to survival. Since memory of preceding sensory conditions is necessary for detecting changes, such a change-detection system relates closely to the memory system. Here we used an auditory change-related N1 subcomponent (change-N1) of event-related brain potentials to investigate cortical mechanisms underlying change detection and echoic memory. Results Change-N1 was elicited by a simple paradigm with two tones, a standard followed by a deviant, while subjects watched a silent movie. The amplitude of change-N1 elicited by a fixed sound pressure deviance (70 dB vs. 75 dB) was negatively correlated with the logarithm of the interval between the standard sound and deviant sound (1, 10, 100, or 1000 ms), while positively correlated with the logarithm of the duration of the standard sound (25, 100, 500, or 1000 ms). The amplitude of change-N1 elicited by a deviance in sound pressure, sound frequency, and sound location was correlated with the logarithm of the magnitude of physical differences between the standard and deviant sounds. Conclusions The present findings suggest that temporal representation of echoic memory is non-linear and that the Weber-Fechner law holds for the automatic cortical response to sound changes within a suprathreshold range. Since the present results show that the behavior of echoic memory can be understood through change-N1, change-N1 would be a useful tool to investigate memory systems.
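The Weber-Fechner relation reported above says the cortical response grows with the logarithm of the physical deviance, so equal ratios of stimulus magnitude produce equal increments of response. A minimal numerical sketch (the gain k and reference deviance delta_ref are illustrative constants, not values fitted in the study):

```python
import math

def change_n1_amplitude(delta, delta_ref=1.0, k=1.0):
    """Weber-Fechner-style response: amplitude proportional to the
    logarithm of the deviance magnitude delta relative to a reference.
    k and delta_ref are hypothetical illustration values."""
    return k * math.log10(delta / delta_ref)
```

Under this law, the amplitude step from a 10-fold to a 100-fold deviance equals the step from a 100-fold to a 1000-fold deviance.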
Law Enforcement Use of Threat Assessments to Predict Violence
Wood, Tracey Michelle
2016-01-01
The purpose of this qualitative, descriptive multiple case study was to explore what process, policies and procedures, or set of empirically supported norms governed law enforcement officers in a selected county in the southwest region of the United States when threat assessments were conducted on potentially violent subjects threatening mass…
Fast Algorithms for High-Order Sparse Linear Prediction with Applications to Speech Processing
DEFF Research Database (Denmark)
Jensen, Tobias Lindstrøm; Giacobello, Daniele; van Waterschoot, Toon
2016-01-01
In speech processing applications, imposing sparsity constraints on high-order linear prediction coefficients and prediction residuals has proven successful in overcoming some of the limitations of conventional linear predictive modeling. However, this modeling scheme, named sparse linear prediction...... problem with lower accuracy than in previous work. In the experimental analysis, we clearly show that a solution with lower accuracy can achieve approximately the same performance as a high-accuracy solution both objectively, in terms of prediction gain, as well as with perceptually relevant measures, when...... evaluated in a speech reconstruction application....
The log-linear return approximation, bubbles, and predictability
DEFF Research Database (Denmark)
Engsted, Tom; Pedersen, Thomas Quistgaard; Tanggaard, Carsten
We study in detail the log-linear return approximation introduced by Campbell and Shiller (1988a). First, we derive an upper bound for the mean approximation error, given stationarity of the log dividend-price ratio. Next, we simulate various rational bubbles which have explosive conditional expec...
The Log-Linear Return Approximation, Bubbles, and Predictability
DEFF Research Database (Denmark)
Engsted, Tom; Pedersen, Thomas Quistgaard; Tanggaard, Carsten
2012-01-01
We study in detail the log-linear return approximation introduced by Campbell and Shiller (1988a). First, we derive an upper bound for the mean approximation error, given stationarity of the log dividend-price ratio. Next, we simulate various rational bubbles which have explosive conditional expe...
Bond lengths in Cd1-xZnxTe beyond linear laws revisited
International Nuclear Information System (INIS)
Koteski, V.; Haas, H.; Holub-Krappe, E.; Ivanovic, N.; Mahnke, H.-E.
2004-01-01
We have investigated the development of local bond lengths with composition in the Cd1-xZnxTe mixed system by measuring the fine structure in X-ray absorption (EXAFS) at all three constituent atoms. The bond strength is found to dominate over the averaging of the bulk so that the local bond length deviates only slightly from its natural value determined for the pure binary components ZnTe and CdTe, respectively. The deviations are significantly less than predicted by a simple radial force constant model for tetrahedrally co-ordinated binary systems, and the bond-length variation with concentration is significantly non-linear. For the second shell, bimodal anion-anion distances are found while the cation-cation distances can already be described by the virtual crystal approximation. In the diluted regime close to the end-point compounds, we have complemented our experimental work by ab initio calculations based on density functional theory with the WIEN97 program using the linearised augmented plane wave method. Equilibrium atomic lattice positions have been calculated for the substitutional isovalent metal atom in a 32-atom super cell, Zn in the CdTe lattice or Cd in the ZnTe lattice, respectively, yielding good agreement with the atomic distances as determined in our EXAFS experiments.
Directory of Open Access Journals (Sweden)
Yan Zhang
2011-01-01
Full Text Available The problem of steady, laminar, thermal Marangoni convection flow of non-Newtonian power law fluid along a horizontal surface with variable surface temperature is studied. The partial differential equations are transformed into ordinary differential equations by using a suitable similarity transformation and analytical approximate solutions are obtained by an efficient transformation, asymptotic expansion and Padé approximants technique. The effects of power law index and Marangoni number on velocity and temperature profiles are examined and discussed.
Linear-quadratic model predictions for tumor control probability
International Nuclear Information System (INIS)
Yaes, R.J.
1987-01-01
Sigmoid dose-response curves for tumor control are calculated from the linear-quadratic model parameters α and β, obtained from human epidermoid carcinoma cell lines, and are much steeper than the clinical dose-response curves for head and neck cancers. One possible explanation is the presence of small radiation-resistant clones arising from mutations in an initially homogeneous tumor. Using the mutation theory of Delbrück and Luria and of Goldie and Coldman, the authors discuss the implications of such radiation-resistant clones for clinical radiation therapy.
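The sigmoid curves referred to above come from combining linear-quadratic cell survival with Poisson statistics for the number of surviving clonogens. A minimal sketch (the α, β, and clonogen-number values are illustrative assumptions, not the cell-line values used in the paper):

```python
import math

def tcp(dose_per_fraction, n_fractions, alpha=0.3, beta=0.03,
        n_clonogens=1e9):
    """Poisson tumor control probability under the linear-quadratic
    model: each fraction of dose d leaves a surviving fraction
    exp(-(alpha*d + beta*d**2)); TCP = exp(-expected survivors).
    All parameter values are illustrative."""
    d = dose_per_fraction
    survivors = n_clonogens * math.exp(
        -n_fractions * (alpha * d + beta * d * d))
    return math.exp(-survivors)
```

Plotting tcp against total dose produces the characteristic sigmoid: near zero at low dose, rising steeply around the dose where the expected number of survivors passes through one.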
Genomic prediction based on data from three layer lines using non-linear regression models.
Huang, Heyun; Windig, Jack J; Vereijken, Addie; Calus, Mario P L
2014-11-06
Most studies on genomic prediction with reference populations that include multiple lines or breeds have used linear models. Data heterogeneity due to using multiple populations may conflict with model assumptions used in linear regression methods. In an attempt to alleviate potential discrepancies between assumptions of linear models and multi-population data, two types of alternative models were used: (1) a multi-trait genomic best linear unbiased prediction (GBLUP) model that modelled trait by line combinations as separate but correlated traits and (2) non-linear models based on kernel learning. These models were compared to conventional linear models for genomic prediction for two lines of brown layer hens (B1 and B2) and one line of white hens (W1). The three lines each had 1004 to 1023 training and 238 to 240 validation animals. Prediction accuracy was evaluated by estimating the correlation between observed phenotypes and predicted breeding values. When the training dataset included only data from the evaluated line, non-linear models yielded at best a similar accuracy as linear models. In some cases, when adding a distantly related line, the linear models showed a slight decrease in performance, while non-linear models generally showed no change in accuracy. When only information from a closely related line was used for training, linear models and non-linear radial basis function (RBF) kernel models performed similarly. The multi-trait GBLUP model took advantage of the estimated genetic correlations between the lines. Combining linear and non-linear models improved the accuracy of multi-line genomic prediction. Linear models and non-linear RBF models performed very similarly for genomic prediction, despite the expectation that non-linear models could deal better with the heterogeneous multi-population data. This heterogeneity of the data can be overcome by modelling trait by line combinations as separate but correlated traits, which avoids the occasional
Construction of local and non-local conservation laws for non-linear field equations
International Nuclear Information System (INIS)
Vladimirov, V.S.; Volovich, I.V.
1984-08-01
A method of constructing conserved currents for non-linear field equations is presented. More explicitly for non-linear equations, which can be derived from compatibility conditions of some linear system with a parameter, a procedure of obtaining explicit expressions for local and non-local currents is developed. Some examples such as the classical Heisenberg spin chain and supersymmetric Yang-Mills theory are considered. (author)
Case Studies of Predictive Analysis Applications in Law Enforcement
2015-12-01
Glass, Alexis; Fukudome, Kimitoshi
2004-12-01
A sound recording of a plucked string instrument is encoded and resynthesized using two stages of prediction. In the first stage of prediction, a simple physical model of a plucked string is estimated and the instrument excitation is obtained. The second stage of prediction compensates for the simplicity of the model in the first stage by encoding either the instrument excitation or the model error using warped linear prediction. These two methods of compensation are compared with each other, and to the case of single-stage warped linear prediction, adjustments are introduced, and their applications to instrument synthesis and MPEG4's audio compression within the structured audio format are discussed.
Directory of Open Access Journals (Sweden)
R. Barbiero
2007-05-01
Full Text Available Model Output Statistics (MOS) refers to a method of post-processing the direct outputs of numerical weather prediction (NWP) models in order to reduce the biases introduced by a coarse horizontal resolution. This technique is especially useful in orographically complex regions, where large differences can be found between the NWP elevation model and the true orography. This study carries out a comparison of linear and non-linear MOS methods, aimed at the prediction of minimum temperatures in a fruit-growing region of the Italian Alps, based on the output of two different NWPs (ECMWF T511–L60 and LAMI-3). Temperature, of course, is a particularly important NWP output; among other roles it drives the local frost forecast, which is of great interest to agriculture. The mechanisms of cold air drainage, a distinctive aspect of mountain environments, are often unsatisfactorily captured by global circulation models. The simplest post-processing technique applied in this work was a correction for the mean bias, assessed at individual model grid points. We also implemented a multivariate linear regression on the output at the grid points surrounding the target area, and two non-linear models based on machine learning techniques: Neural Networks and Random Forest (RF). We compare the performance of all these techniques on four different NWP data sets. Downscaling the temperatures clearly improved the temperature forecasts with respect to the raw NWP output, and also with respect to the basic mean bias correction. Multivariate methods generally yielded better results, but the advantage of using non-linear algorithms was small if not negligible. RF, the best performing method, was implemented on ECMWF prognostic output at 06:00 UTC over the 9 grid points surrounding the target area. Mean absolute errors in the prediction of 2 m temperature at 06:00 UTC were approximately 1.2°C, close to the natural variability inside the area itself.
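The simplest post-processing step described above, mean-bias correction at a grid point, can be sketched in a few lines (the training data here are made-up numbers for illustration only):

```python
def mean_bias_correction(forecasts, observations, new_forecast):
    """Simplest MOS post-processing: estimate the mean bias
    (forecast minus observation) over a training period at one grid
    point, then subtract it from a new raw NWP forecast."""
    bias = sum(f - o for f, o in zip(forecasts, observations)) / len(forecasts)
    return new_forecast - bias
```

If the model systematically runs 2 degrees warm over the training period, every new forecast is cooled by 2 degrees; multivariate and machine-learning MOS methods generalize this constant offset to a fitted mapping.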
Force prediction in permanent magnet flat linear motors (abstract)
International Nuclear Information System (INIS)
Eastham, J.F.; Akmese, R.
1991-01-01
The advent of neodymium iron boron rare-earth permanent magnet material has afforded the opportunity to construct linear machines of high force to weight ratio. The paper describes the design and construction of an axial flux machine and rotating drum test rig. The machine occupies an arc of 45 degrees on a drum 1.22 m in diameter. The excitation is provided by blocks of NdFeB material which are skewed in order to minimize the force variations due to slotting. The stator carries a three-phase short-chorded double-layer winding of four poles. The machine is supplied by a PWM inverter, the fundamental component of which is phase-locked to the rotor position so that a "dc brushless" drive system is produced. Electromagnetic forces including ripple forces are measured at supply frequencies up to 100 Hz. They are compared with finite-element analysis which calculates the force variation over the time period. The paper then considers some of the causes of ripple torque. In particular, the force production due solely to the permanent magnet excitation is considered. This has two important components each acting along the line of motion of the machine: one is due to slotting and the other is due to the finite length of the primary. In the practical machine the excitation poles are skewed to minimize the slotting force, and the effectiveness of this is confirmed both by results from the experiments and by the finite-element analysis. The end effect force is shown to have a space period of twice that of the excitation. The amplitude of this force and its period are again confirmed by practical results.
Pozo, Carlos; Marín-Sanguino, Alberto; Alves, Rui; Guillén-Gosálbez, Gonzalo; Jiménez, Laureano; Sorribas, Albert
2011-08-25
Design of newly engineered microbial strains for biotechnological purposes would greatly benefit from the development of realistic mathematical models for the processes to be optimized. Such models can then be analyzed and, with the development and application of appropriate optimization techniques, one could identify the modifications that need to be made to the organism in order to achieve the desired biotechnological goal. As appropriate models to perform such an analysis are necessarily non-linear and typically non-convex, finding their global optimum is a challenging task. Canonical modeling techniques, such as Generalized Mass Action (GMA) models based on the power-law formalism, offer a possible solution to this problem because they have a mathematical structure that enables the development of specific algorithms for global optimization. Based on the GMA canonical representation, we have developed in previous works a highly efficient optimization algorithm and a set of related strategies for understanding the evolution of adaptive responses in cellular metabolism. Here, we explore the possibility of recasting kinetic non-linear models into an equivalent GMA model, so that global optimization on the recast GMA model can be performed. With this technique, optimization is greatly facilitated and the results are transposable to the original non-linear problem. This procedure is straightforward for a particular class of non-linear models known as Saturable and Cooperative (SC) models that extend the power-law formalism to deal with saturation and cooperativity. Our results show that recasting non-linear kinetic models into GMA models is indeed an appropriate strategy that helps overcoming some of the numerical difficulties that arise during the global optimization task.
Directory of Open Access Journals (Sweden)
Sorribas Albert
2011-08-01
Full Text Available Abstract Background Design of newly engineered microbial strains for biotechnological purposes would greatly benefit from the development of realistic mathematical models for the processes to be optimized. Such models can then be analyzed and, with the development and application of appropriate optimization techniques, one could identify the modifications that need to be made to the organism in order to achieve the desired biotechnological goal. As appropriate models to perform such an analysis are necessarily non-linear and typically non-convex, finding their global optimum is a challenging task. Canonical modeling techniques, such as Generalized Mass Action (GMA) models based on the power-law formalism, offer a possible solution to this problem because they have a mathematical structure that enables the development of specific algorithms for global optimization. Results Based on the GMA canonical representation, we have developed in previous works a highly efficient optimization algorithm and a set of related strategies for understanding the evolution of adaptive responses in cellular metabolism. Here, we explore the possibility of recasting kinetic non-linear models into an equivalent GMA model, so that global optimization on the recast GMA model can be performed. With this technique, optimization is greatly facilitated and the results are transposable to the original non-linear problem. This procedure is straightforward for a particular class of non-linear models known as Saturable and Cooperative (SC) models that extend the power-law formalism to deal with saturation and cooperativity. Conclusions Our results show that recasting non-linear kinetic models into GMA models is indeed an appropriate strategy that helps overcoming some of the numerical difficulties that arise during the global optimization task.
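Each flux in a GMA model is a single power-law term: a rate constant times a product of metabolite concentrations raised to real-valued kinetic orders. A minimal sketch of evaluating one such term (the numbers are arbitrary illustration values):

```python
def gma_rate(gamma, x, f):
    """One power-law flux term of a Generalized Mass Action model:
    v = gamma * prod(x_i ** f_i), with rate constant gamma and
    real-valued kinetic orders f_i (which is what lets the formalism
    approximate saturation and cooperativity)."""
    v = gamma
    for xi, fi in zip(x, f):
        v *= xi ** fi
    return v
```

Because every term has this fixed monomial structure, taking logarithms turns each flux into a linear expression in log-concentrations, which is the property the abstract's global-optimization algorithms exploit.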
Predicting Fuel Ignition Quality Using 1H NMR Spectroscopy and Multiple Linear Regression
Abdul Jameel, Abdul Gani; Naser, Nimal; Emwas, Abdul-Hamid M.; Dooley, Stephen; Sarathy, Mani
2016-01-01
An improved model for the prediction of ignition quality of hydrocarbon fuels has been developed using 1H nuclear magnetic resonance (NMR) spectroscopy and multiple linear regression (MLR) modeling. Cetane number (CN) and derived cetane number (DCN)
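A multiple linear regression of the kind described above fits cetane number as a linear combination of spectroscopic features plus an intercept. A minimal sketch with synthetic data (the features, coefficients, and intercept are invented for illustration, not the paper's fitted model):

```python
import numpy as np

# Hypothetical design matrix: rows are fuels, columns are 1H NMR-derived
# features (e.g. fractions of different hydrogen types). The target is a
# synthetic cetane number generated from known coefficients.
rng = np.random.default_rng(0)
X = rng.random((20, 3))
true_coef = np.array([40.0, -15.0, 5.0])
y = X @ true_coef + 30.0                    # intercept of 30 (noise-free)

# Ordinary least squares with an explicit intercept column.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
```

On noise-free synthetic data the least-squares fit recovers the intercept and coefficients exactly, which is a useful sanity check before fitting real NMR data.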
Directory of Open Access Journals (Sweden)
Muayad Al-Qaisy
2015-02-01
Full Text Available In this article, a multi-input multi-output (MIMO) linear model predictive controller (LMPC) based on a state-space model and a nonlinear model predictive controller based on a neural network (NNMPC) are applied to a continuous stirred tank reactor (CSTR). The idea is to have a good control system that will be able to give optimal performance, reject high load disturbances, and track set-point changes. In order to study the performance of the two model predictive controllers, a MIMO Proportional-Integral-Derivative (PID) control strategy is used as a benchmark. The LMPC, NNMPC, and PID strategies are used for controlling the residual concentration (CA) and reactor temperature (T). NNMPC shows superior performance over the LMPC and PID controllers, presenting a smaller overshoot and a shorter settling time.
Predicting law enforcement officer job performance with the Personality Assessment Inventory.
Lowmaster, Sara E; Morey, Leslie C
2012-01-01
This study examined the descriptive and predictive characteristics of the Personality Assessment Inventory (PAI; Morey, 1991) in a sample of 85 law enforcement officer candidates. Descriptive results indicate that mean PAI full-scale and subscale scores are consistently lower than normative community sample scores, with some exceptions noted typically associated with defensive responding. Predictive validity was examined by relating PAI full-scale and subscale scores to supervisor ratings in the areas of job performance, integrity problems, and abuse of disability status. Modest correlations were observed for all domains; however, predictive validity was moderated by defensive response style, with greater predictive validity observed among less defensive responders. These results suggest that the PAI's full scales and subscales are able to predict law enforcement officers' performance, but their utility is appreciably improved when taken in the context of indicators of defensive responding.
Genomic prediction based on data from three layer lines using non-linear regression models
Huang, H.; Windig, J.J.; Vereijken, A.; Calus, M.P.L.
2014-01-01
Background - Most studies on genomic prediction with reference populations that include multiple lines or breeds have used linear models. Data heterogeneity due to using multiple populations may conflict with model assumptions used in linear regression methods. Methods - In an attempt to alleviate
International Nuclear Information System (INIS)
Segal, I.E.
1989-01-01
The directly observed average apparent magnitude (or in one case, angular diameter) as a function of redshift in each of a number of large complete galaxy samples is compared with the predictions of hypothetical redshift-distance power laws, as a systematic statistical question. Due account is taken of observational flux limits by an entirely objective and reproducible optimal statistical procedure, and no assumptions are made regarding the distribution of the galaxies in space. The laws considered are of the form z ∝ r^p, where r denotes the distance, for p = 1, 2, and 3. The comparative fits of the various redshift-distance laws are similar in all the samples. Overall, the cubic law fits better than the linear law, but each shows substantial systematic deviations from observation. The quadratic law fits extremely well except at high redshifts in some of the samples, where no power law fits closely and the correlation of apparent magnitude with redshift is small or negative. In all cases, the luminosity function required for theoretical prediction was estimated from the sample by the non-parametric procedure ROBUST, whose intrinsic neutrality as programmed was checked by comprehensive computer simulations. (author)
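A power law z ∝ r^p becomes a straight line of slope p in log-log coordinates, so the exponent can be estimated by a linear fit to logarithms. A minimal sketch on synthetic data with p = 2, the quadratic law favoured in the abstract (the constant 0.05 and the distance range are arbitrary illustration values, not from the samples analysed):

```python
import numpy as np

# Synthetic (distance, redshift) pairs obeying z = c * r**p with p = 2.
r = np.linspace(1.0, 10.0, 50)
z = 0.05 * r ** 2

# Log-log least-squares fit: slope estimates p, intercept estimates log(c).
p_hat, log_c = np.polyfit(np.log(r), np.log(z), 1)
```

Note the abstract's actual procedure is more involved: it works from flux-limited apparent magnitudes and a non-parametrically estimated luminosity function rather than from known distances.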
International Nuclear Information System (INIS)
Lima, M.L.; Mignaco, J.A.
1983-01-01
The power-law potentials in the Schroedinger equation solved recently are shown to come from the classical treatment of the singularities of a linear, second-order differential equation. This allows one to enlarge the class of solvable power-law potentials. (Author) [pt]
Pérez-Rodríguez, Paulino; Gianola, Daniel; González-Camacho, Juan Manuel; Crossa, José; Manès, Yann; Dreisigacker, Susanne
2012-12-01
In genome-enabled prediction, parametric, semi-parametric, and non-parametric regression models have been used. This study assessed the predictive ability of linear and non-linear models using dense molecular markers. The linear models were linear on marker effects and included the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B. The non-linear models (this refers to non-linearity on markers) were reproducing kernel Hilbert space (RKHS) regression, Bayesian regularized neural networks (BRNN), and radial basis function neural networks (RBFNN). These statistical models were compared using 306 elite wheat lines from CIMMYT genotyped with 1717 diversity array technology (DArT) markers and two traits, days to heading (DTH) and grain yield (GY), measured in each of 12 environments. It was found that the three non-linear models had better overall prediction accuracy than the linear regression specification. Results showed a consistent superiority of RKHS and RBFNN over the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B models.
A unified frame of predicting side effects of drugs by using linear neighborhood similarity.
Zhang, Wen; Yue, Xiang; Liu, Feng; Chen, Yanlin; Tu, Shikui; Zhang, Xining
2017-12-14
Drug side effects are one of the main concerns in drug discovery and have gained wide attention. Investigating drug side effects is of great importance, and computational prediction can help to guide wet experiments. As far as we know, a great number of computational methods have been proposed for side effect prediction. The assumption that similar drugs may induce the same side effects is usually employed for modeling, and how to calculate drug-drug similarity is critical in side effect prediction. In this paper, we present a novel measure of drug-drug similarity named "linear neighborhood similarity", which is calculated in a drug feature space by exploring linear neighborhood relationships. Then, we transfer the similarity from the feature space into the side effect space, and predict drug side effects by propagating known side effect information through a similarity-based graph. Under a unified frame based on the linear neighborhood similarity, we propose the method "LNSM" and its extension "LNSM-SMI" to predict side effects of new drugs, and the method "LNSM-MSE" to predict unobserved side effects of approved drugs. We evaluate the performance of LNSM and LNSM-SMI in predicting side effects of new drugs, and the performance of LNSM-MSE in predicting missing side effects of approved drugs. The results demonstrate that the linear neighborhood similarity can improve the performance of side effect prediction, and that linear neighborhood similarity-based methods can outperform existing side effect prediction methods. More importantly, the proposed methods can predict side effects of new drugs as well as unobserved side effects of approved drugs under a unified frame.
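The core idea above, reconstructing each drug from its nearest neighbours in feature space and using the reconstruction weights as similarities for label propagation, can be sketched as follows. This is a simplified illustration: the paper's method adds constraints (such as non-negative weights summing to one) that are omitted here:

```python
import numpy as np

def linear_neighborhood_weights(X, k=2):
    """For each drug (row of feature matrix X), find its k nearest
    neighbours and compute least-squares weights that best reconstruct
    the drug from those neighbours; the weights act as similarities.
    Simplified sketch of the 'linear neighborhood similarity' idea."""
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        dist = np.linalg.norm(X - X[i], axis=1)
        nbrs = np.argsort(dist)[1:k + 1]     # nearest rows, excluding i
        w, *_ = np.linalg.lstsq(X[nbrs].T, X[i], rcond=None)
        W[i, nbrs] = w
    return W

def propagate(W, Y):
    """One propagation step: each drug's predicted side-effect profile
    is the similarity-weighted combination of its neighbours' known
    labels Y (drugs x side effects)."""
    return W @ Y
```

A drug that is an exact linear combination of its neighbours (such as the corner point below) receives exactly those combination coefficients as weights.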
Linear drag law for high-Reynolds-number flow past an oscillating body
Agre, Natalie; Childress, Stephen; Zhang, Jun; Ristroph, Leif
2016-07-01
An object immersed in a fast flow typically experiences fluid forces that increase with the square of speed. Here we explore how this high-Reynolds-number force-speed relationship is affected by unsteady motions of a body. Experiments on disks that are driven to oscillate while progressing through air reveal two distinct regimes: a conventional quadratic relationship for slow oscillations and an anomalous scaling for fast flapping in which the time-averaged drag increases linearly with flow speed. In the linear regime, flow visualization shows that a pair of counterrotating vortices is shed with each oscillation and a model that views a train of such dipoles as a momentum jet reproduces the linearity. We also show that appropriate scaling variables collapse the experimental data from both regimes and for different oscillatory motions into a single drag-speed relationship. These results could provide insight into the aerodynamic resistance incurred by oscillating wings in flight and they suggest that vibrations can be an effective means to actively control the drag on an object.
Structural Dynamic Analyses And Test Predictions For Spacecraft Structures With Non-Linearities
Vergniaud, Jean-Baptiste; Soula, Laurent; Newerla, Alfred
2012-07-01
The overall objective of the mechanical development and verification process is to ensure that the spacecraft structure is able to sustain the mechanical environments encountered during launch. In general, spacecraft structures are a priori assumed to behave linearly, i.e. the responses to a static load or dynamic excitation, respectively, will increase or decrease proportionally to the amplitude of the load or excitation induced. However, past experience has shown that various non-linearities may exist in spacecraft structures, and the consequences of their dynamic effects can significantly affect the development and verification process. Current processes are mainly adapted to linear spacecraft structural behaviour. No clear rules exist for dealing with major structural non-linearities. They are handled outside the process by individual analysis and margin policy, and by analyses after tests to justify the CLA coverage. Non-linearities primarily affect the current spacecraft development and verification process in two respects. Prediction of flight loads by launcher/satellite coupled loads analyses (CLA): only linear satellite models are delivered for performing CLA, and no well-established rules exist on how to properly linearize a model when non-linearities are present. The potential impact of the linearization on the results of the CLA has not yet been properly analyzed. It is thus difficult to assess that CLA results will cover actual flight levels. Management of satellite verification tests: the CLA results generated with a linear satellite FEM are assumed flight representative. If internal non-linearities are present in the tested satellite, then there might be difficulties in determining which input level must be applied to cover satellite internal loads. The non-linear behaviour can also disturb the shaker control, putting the satellite at risk by potentially imposing too high levels. This paper presents the results of a test campaign performed in
From Newton's Law to the Linear Boltzmann Equation Without Cut-Off
Ayi, Nathalie
2017-03-01
We provide a rigorous derivation of the linear Boltzmann equation without cut-off, starting from a system of particles interacting via a potential with infinite range, as the number of particles N goes to infinity under the Boltzmann-Grad scaling. More particularly, we describe the motion of a tagged particle in a gas close to global equilibrium. The main difficulty in our context is that, due to the infinite range of the potential, a non-integrable singularity appears in the angular collision kernel, so that Lanford's strategy alone is no longer sufficient. Our proof then relies on a combination of Lanford's strategy, of tools developed recently by Bodineau, Gallagher and Saint-Raymond to study the collision process, and of new duality arguments to study the additional terms associated with the long-range interaction, leading to some explicit weak estimates.
A Review of Darcy's Law: Limitations and Alternatives for Predicting Solute Transport
Steenhuis, Tammo; Kung, K.-J. Sam; Jaynes, Dan; Helling, Charles S.; Gish, Tim; Kladivko, Eileen
2016-04-01
Darcy's Law, originally derived empirically 160 years ago, has been used successfully to calculate the (Darcy) flux in porous media throughout the world. However, field and laboratory experiments have demonstrated that the Darcy flux employed in the convection-dispersion equation can successfully predict solute transport only under two conditions: (1) uniformly or densely packed porous media; and (2) field soils under relatively dry conditions. Employing the Darcy flux for solute transport in porous media with preferential flow pathways is problematic. In this paper we examine the theoretical background behind these field and laboratory observations and then provide an alternative for predicting solute movement. By examining the characteristics of the momentum conservation principles on which Darcy's Law is based, we show under what conditions the Darcy flux can predict solute transport in porous media of various complexity. We find, based on several case studies with capillary pores, that Darcy's Law inherently merges momentum and in that way erases information on pore-scale velocities. For that reason the Darcy flux cannot predict flow in media with preferential flow conduits, where individual pore velocities are essential for predicting the shape of the breakthrough curve and especially the "early arrival" of solutes. To overcome the limitations of the assumptions in Darcy's Law, we use Jury's conceptualization and employ the measured chemical breakthrough curve as input to characterize the impact of individual preferential flow pathways on chemical transport. Specifically, we discuss how best to take advantage of Jury's conceptualization to extract pore-scale flow velocities to accurately predict chemical transport through soils with preferential flow pathways.
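Darcy's Law itself is a one-line relation: flux proportional to the hydraulic gradient. A minimal sketch (the conductivity and head values below are arbitrary illustration numbers), which also makes the review's central point concrete, since the single averaged flux it returns carries no information about individual pore velocities:

```python
def darcy_flux(K, h1, h2, L):
    """Darcy's Law: specific discharge q = -K * (h2 - h1) / L, with
    hydraulic conductivity K (m/s), hydraulic heads h1, h2 (m) at the
    two ends of the flow path, and path length L (m). The sign makes
    flow run from high head to low head."""
    return -K * (h2 - h1) / L
```

For K = 1e-5 m/s and a head drop from 12 m to 10 m over 10 m, the flux is positive (toward the lower head), but two media with this same q can have radically different pore-scale velocity distributions and hence breakthrough curves.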
Machine learning-based methods for prediction of linear B-cell epitopes.
Wang, Hsin-Wei; Pai, Tun-Wen
2014-01-01
B-cell epitope prediction helps immunologists design peptide-based vaccines, diagnostic tests, disease prevention and treatment strategies, and antibody production. Compared with T-cell epitope prediction, the performance of variable-length B-cell epitope prediction is still unsatisfactory. Fortunately, thanks to increasingly available verified epitope databases, bioinformaticians can apply machine learning-based algorithms to all curated data to design improved prediction tools for biomedical researchers. Here, we have reviewed related epitope prediction papers, especially those on linear B-cell epitope prediction. It should be noted that combining selected propensity scales and statistics of epitope residues with machine learning-based tools has become a general way of constructing linear B-cell epitope prediction systems. It is also observed in most comparisons that the kernel-based support vector machine (SVM) classifier outperformed other machine learning-based approaches. Hence, in this chapter, besides reviewing recently published papers, we introduce the fundamentals of B-cell epitopes and SVM techniques. In addition, an example of a linear B-cell epitope prediction system based on physicochemical features and amino acid combinations is illustrated in detail.
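A minimal sketch of the feature-construction step shared by the reviewed predictors (a toy stand-in, not the code of any specific tool): a candidate peptide is turned into a fixed-length amino-acid composition vector of the kind an SVM could consume.

```python
# Toy sketch: amino-acid composition features for a peptide window.
# The 20-letter alphabet and the plain composition encoding are common
# choices in this literature, but the exact feature set is an assumption.

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition_features(peptide):
    """Fraction of each of the 20 standard amino acids in the peptide."""
    peptide = peptide.upper()
    n = len(peptide) or 1
    return [peptide.count(aa) / n for aa in AMINO_ACIDS]
```

Real systems concatenate such composition vectors with propensity-scale statistics before training the classifier.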
Applications of Kalman filters based on non-linear functions to numerical weather predictions
Directory of Open Access Journals (Sweden)
G. Galanis
2006-10-01
This paper investigates the use of non-linear functions in classical Kalman filter algorithms to improve regional weather forecasts. The main aim is the implementation of non-linear polynomial mappings in a standard linear Kalman filter in order to better handle non-linear problems in numerical weather prediction. In addition, the optimal order of the polynomials applied in such a filter is identified. This work is based on observations and corresponding numerical weather predictions of two meteorological parameters characterized by essential differences in their evolution in time, namely air temperature and wind speed. It is shown that in both cases a polynomial of low order is adequate for eliminating any systematic error, while higher-order functions lead to instabilities in the filtered results while contributing trivially to the sensitivity of the filter. It is further demonstrated that the filter is independent of the time period and the geographic location of application.
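The idea can be sketched as a Kalman filter whose state holds the coefficients of a low-order polynomial bias model for the forecast. This is a minimal order-1 sketch with assumed noise levels `q` and `r`; the paper's actual filter, polynomial orders and tuning differ.

```python
# Sketch (assumed setup): state x = [a0, a1] parameterizes the systematic
# forecast bias as a0 + a1 * forecast; the observation is the realized value.

def kalman_bias_filter(forecasts, observations, q=1e-3, r=1.0):
    """Return debiased forecasts using an order-1 polynomial bias state."""
    x = [0.0, 0.0]                    # polynomial coefficients [a0, a1]
    P = [[1.0, 0.0], [0.0, 1.0]]      # state covariance
    corrected = []
    for f, y in zip(forecasts, observations):
        # Prediction step: random-walk model for the coefficients.
        P = [[P[0][0] + q, P[0][1]],
             [P[1][0], P[1][1] + q]]
        H = [1.0, f]                  # observation row: bias = a0 + a1*f
        err = (y - f) - (x[0] + x[1] * f)   # innovation on the forecast error
        PHt = [P[0][0] * H[0] + P[0][1] * H[1],
               P[1][0] * H[0] + P[1][1] * H[1]]
        S = H[0] * PHt[0] + H[1] * PHt[1] + r   # innovation variance (scalar)
        K = [PHt[0] / S, PHt[1] / S]            # Kalman gain
        x = [x[0] + K[0] * err, x[1] + K[1] * err]
        # Covariance update: P = (I - K H) P
        IKH = [[1 - K[0] * H[0], -K[0] * H[1]],
               [-K[1] * H[0], 1 - K[1] * H[1]]]
        P = [[IKH[0][0] * P[0][0] + IKH[0][1] * P[1][0],
              IKH[0][0] * P[0][1] + IKH[0][1] * P[1][1]],
             [IKH[1][0] * P[0][0] + IKH[1][1] * P[1][0],
              IKH[1][0] * P[0][1] + IKH[1][1] * P[1][1]]]
        corrected.append(f + x[0] + x[1] * f)   # debiased forecast
    return corrected
```

With a constant additive bias, the filter learns the offset within a few steps and the corrected forecasts converge to the observations.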
I. Udeh; J.O. Isikwenu and G. Ukughere
2011-01-01
The objectives of this study were to compare the performance characteristics of four strains of broiler chicken from 2 to 8 weeks of age and to predict the body weight of the broilers using linear body measurements. The four strains of broiler chicken used were Anak, Arbor Acre, Ross and Marshall. The parameters recorded were body weight, weight gain, total feed intake, feed conversion ratio, mortality and some linear body measurements (body length, body width, breast width, drumstick length, shank l...
Predicting musically induced emotions from physiological inputs: linear and neural network models.
Russo, Frank A; Vempala, Naresh N; Sandstrom, Gillian M
2013-01-01
Listening to music often leads to physiological responses. Do these physiological responses contain sufficient information to infer the emotion induced in the listener? The current study explores this question by attempting to predict judgments of "felt" emotion from physiological responses alone using linear and neural network models. We measured five channels of peripheral physiology from 20 participants: heart rate (HR), respiration, galvanic skin response, and activity in the corrugator supercilii and zygomaticus major facial muscles. Using valence and arousal (VA) dimensions, participants rated their felt emotion after listening to each of 12 classical music excerpts. After extracting features from the five channels, we examined their correlation with VA ratings, and then performed multiple linear regression to see if a linear relationship between the physiological responses could account for the ratings. Although linear models predicted a significant amount of variance in arousal ratings, they were unable to do so with valence ratings. We then used a neural network to provide a non-linear account of the ratings. The network was trained on the mean ratings of eight of the 12 excerpts and tested on the remainder. The performance of the neural network confirms that physiological responses alone can be used to predict musically induced emotion. The non-linear model derived from the neural network was more accurate than linear models derived from multiple linear regression, particularly along the valence dimension. A secondary analysis allowed us to quantify the relative contributions of inputs to the non-linear model. The study represents a novel approach to understanding the complex relationship between physiological responses and musically induced emotion.
Iterated non-linear model predictive control based on tubes and contractive constraints.
Murillo, M; Sánchez, G; Giovanini, L
2016-05-01
This paper presents a predictive control algorithm for non-linear systems based on successive linearizations of the non-linear dynamics around a given trajectory. A linear time-varying model is obtained, and the non-convex constrained optimization problem is transformed into a sequence of locally convex ones. The robustness of the proposed algorithm is addressed by adding a convex contractive constraint. To account for linearization errors and to obtain more accurate results, an inner iteration loop is added to the algorithm. A simple methodology to obtain an outer bounding tube for state trajectories is also presented. The convergence of the iterative process and the stability of the closed-loop system are analyzed. The simulation results show the effectiveness of the proposed algorithm in controlling a quadcopter-type unmanned aerial vehicle. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Pettermann, Heinz E.; DeSimone, Antonio
2017-09-01
A constitutive material law for linear thermo-viscoelasticity in the time domain is presented. The time-dependent relaxation formulation is given for full anisotropy, i.e., both the elastic and the viscous properties are anisotropic. Thereby, each element of the relaxation tensor is described by its own independent Prony series expansion. Going beyond common viscoelasticity, time-dependent thermal expansion relaxation/creep is treated as inherent material behavior. The pertinent equations are derived and an incremental, implicit time integration scheme is presented. The developments are implemented in implicit FEM software for orthotropic material symmetry under the plane stress assumption. Although this is a reduced setting, all essential features are present and allow for full verification and validation of the approach. Various simulations on isotropic and orthotropic problems are carried out to demonstrate the material behavior under investigation.
Predicting the long tail of book sales: Unearthing the power-law exponent
Fenner, Trevor; Levene, Mark; Loizou, George
2010-06-01
The concept of the long tail has recently been used to explain the phenomenon in e-commerce where the total volume of sales of the items in the tail is comparable to that of the most popular items. In the case of online book sales, the proportion of tail sales has been estimated using regression techniques on the assumption that the data obeys a power-law distribution. Here we propose a different technique for estimation based on a generative model of book sales that results in an asymptotic power-law distribution of sales, but which does not suffer from the problems related to power-law regression techniques. We show that the proportion of tail sales predicted is very sensitive to the estimated power-law exponent. In particular, if we assume that the power-law exponent of the cumulative distribution is closer to 1.1 rather than to 1.2 (estimates published in 2003, calculated using regression by two groups of researchers), then our computations suggest that the tail sales of Amazon.com, rather than being 40% as estimated by Brynjolfsson, Hu and Smith in 2003, are actually closer to 20%, the proportion estimated by its CEO.
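The sensitivity claim can be illustrated numerically. This sketch uses a toy catalogue size and head cutoff, not the paper's generative model: if the cumulative sales distribution is a power law with exponent b, sales fall off with rank roughly as r**(-1/b), and the tail share can be compared for b = 1.1 versus b = 1.2.

```python
# Illustrative sketch (all numbers assumed): share of total sales coming
# from titles beyond an assumed "head" of top-ranked books, as a function
# of the cumulative power-law exponent b.

def tail_share(b, n_titles=200_000, head=20_000):
    """Fraction of total sales from ranks beyond `head` (toy values)."""
    total = sum(r ** (-1.0 / b) for r in range(1, n_titles + 1))
    tail = sum(r ** (-1.0 / b) for r in range(head + 1, n_titles + 1))
    return tail / total
```

Even this toy computation reproduces the direction of the effect: a flatter distribution (b = 1.2) yields a markedly larger tail share than b = 1.1.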
Prediction of the creep properties of discontinuous fibre composites from the matrix creep law
International Nuclear Information System (INIS)
Bilde-Soerensen, J.B.; Boecker Pedersen, O.; Lilholt, H.
1975-02-01
Existing theories for predicting the creep properties of discontinuous fibre composites with non-creeping fibres from matrix creep properties, originally based on a power law, are extended to include an exponential law and, in principle, a general matrix law. An analysis shows that the composite creep curve can be obtained by a simple displacement of the matrix creep curve in a log sigma vs. log epsilon diagram. This principle, that each point on the matrix curve has a corresponding point on the composite curve, is given a physical interpretation. The direction of displacement is such that the transition from a power law to an exponential law occurs at a lower strain rate for the composite than for the unreinforced matrix. This emphasizes the importance of the exponential creep range in the creep of fibre composites. The combined use of matrix and composite data may allow the creep phenomenon to be studied over a larger range of strain rates than otherwise possible. A method for constructing generalized composite creep diagrams is suggested. Creep properties predicted from matrix data by the present analysis are compared with experimental data from the literature. (author)
Model for predicting non-linear crack growth considering load sequence effects (LOSEQ)
International Nuclear Information System (INIS)
Fuehring, H.
1982-01-01
A new analytical model for predicting non-linear crack growth is presented which takes into account retardation as well as acceleration effects due to irregular loading. It considers not only the maximum peak of a load sequence but all loads of the history to affect crack growth, according to a generalised memory criterion. Comparisons between crack growth predicted using the LOSEQ programme and experimentally observed data are presented. (orig.)
Bayesian prediction of spatial count data using generalized linear mixed models
DEFF Research Database (Denmark)
Christensen, Ole Fredslund; Waagepetersen, Rasmus Plenge
2002-01-01
Spatial weed count data are modeled and predicted using a generalized linear mixed model combined with a Bayesian approach and Markov chain Monte Carlo. Informative priors for a data set with sparse sampling are elicited using a previously collected data set with extensive sampling. Furthermore, ...
Fung, Karen; ElAtia, Samira
2015-01-01
Using Hierarchical Linear Modelling (HLM), this study aimed to identify factors such as ESL/ELL/EAL status that would predict students' reading performance in an English language arts exam taken across Canada. Using data from the 2007 administration of the Pan-Canadian Assessment Program (PCAP) along with the accompanying surveys for students and…
An improved robust model predictive control for linear parameter-varying input-output models
Abbas, H.S.; Hanema, J.; Tóth, R.; Mohammadpour, J.; Meskin, N.
2018-01-01
This paper describes a new robust model predictive control (MPC) scheme to control discrete-time linear parameter-varying input-output models subject to input and output constraints. Closed-loop asymptotic stability is guaranteed by including a quadratic terminal cost and an ellipsoidal terminal...
DEFF Research Database (Denmark)
Sokoler, Leo Emil; Dammann, Bernd; Madsen, Henrik
2014-01-01
This paper presents a decomposition algorithm for solving the optimal control problem (OCP) that arises in Mean-Variance Economic Model Predictive Control of stochastic linear systems. The algorithm applies the alternating direction method of multipliers to a reformulation of the OCP...
Akbulut, Yavuz
2007-01-01
Factors predicting vocabulary learning and reading comprehension of advanced language learners of English in a linear multimedia text were investigated in the current study. Predictor variables of interest were multimedia type, reading proficiency, learning styles, topic interest and background knowledge about the topic. The outcome variables of…
Convergence Guaranteed Nonlinear Constraint Model Predictive Control via I/O Linearization
Directory of Open Access Journals (Sweden)
Xiaobing Kong
2013-01-01
Constructing a reliable optimal solution is a key issue for nonlinear constrained model predictive control. Input-output feedback linearization is a popular method in nonlinear control. When an input-output feedback linearizing controller is used, the original linear input constraints become nonlinear constraints that are sometimes state dependent. This paper presents an iterative quadratic program (IQP) routine for the continuous-time system. To guarantee its convergence, another iterative approach is incorporated. The proposed algorithm can reach a feasible solution over the entire prediction horizon. Simulation results on both a numerical example and a continuous stirred tank reactor (CSTR) demonstrate the effectiveness of the proposed method.
Cohen, Joel E; Xu, Meng; Schuster, William S F
2012-09-25
Two widely tested empirical patterns in ecology are combined here to predict how the variation of population density relates to the average body size of organisms. Taylor's law (TL) asserts that the variance of the population density of a set of populations is a power-law function of the mean population density. Density-mass allometry (DMA) asserts that the mean population density of a set of populations is a power-law function of the mean individual body mass. Combined, DMA and TL predict that the variance of the population density is a power-law function of mean individual body mass. We call this relationship "variance-mass allometry" (VMA). We confirmed the theoretically predicted power-law form and the theoretically predicted parameters of VMA, using detailed data on individual oak trees (Quercus spp.) of Black Rock Forest, Cornwall, New York. These results connect the variability of population density to the mean body mass of individuals.
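The combination described above is plain power-law algebra; this numerical sketch uses arbitrary coefficients, not the Black Rock Forest estimates.

```python
# Sketch of variance-mass allometry (VMA): Taylor's law (TL) gives
# variance = a * mean**b and density-mass allometry (DMA) gives
# mean = c * mass**d, so variance = (a * c**b) * mass**(b*d).
# All parameter values below are illustrative assumptions.

a, b = 2.0, 1.8      # TL coefficient and exponent (assumed)
c, d = 50.0, -0.75   # DMA coefficient and exponent (assumed)

def variance_from_mass(mass):
    mean_density = c * mass ** d     # DMA
    return a * mean_density ** b     # TL

# The combined relation is a power law in body mass with exponent b*d:
vma_coeff = a * c ** b
vma_exponent = b * d
```

Chaining the two laws reproduces the predicted single power law, whose exponent is the product of the TL and DMA exponents.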
A study on two phase flows of linear compressors for the prediction of refrigerant leakage
International Nuclear Information System (INIS)
Hwang, Il Sun; Lee, Young Lim; Oh, Won Sik; Park, Kyeong Bae
2015-01-01
Usage of linear compressors is on the rise due to their high efficiency. In this paper, leakage in a linear compressor is studied through numerical analysis and experiments. First, nitrogen leakage for a stagnant piston with fixed cylinder pressure, as well as for a moving piston with fixed cylinder pressure, was analyzed to verify the validity of the two-phase flow analysis model. Next, refrigerant leakage in a linear compressor in operation was predicted through three-dimensional, unsteady, two-phase flow CFD (computational fluid dynamics). According to the results, the numerical analyses for the fixed-cylinder-pressure models were in good agreement with the experimental results. The refrigerant leakage of the linear compressor in operation occurred mainly through the oil exit, and the leakage became negligible after about 0.4 s of operation, falling below 2.0x10^-4 kg/s.
Prediction of Accurate Mixed Mode Fatigue Crack Growth Curves using the Paris' Law
Sajith, S.; Krishna Murthy, K. S. R.; Robi, P. S.
2017-12-01
Accurate information about crack growth times and structural strength as a function of crack size is mandatory in damage tolerance analysis. Various equivalent stress intensity factor (SIF) models are available for predicting mixed-mode fatigue life using the Paris' law. In the present investigation these models have been compared to assess their efficacy in predicting life close to the experimental findings, as no guidelines/suggestions are available on selecting among these models for accurate and/or conservative predictions of fatigue life. Within the limitations of the available experimental data and current numerical simulation techniques, the present study attempts to outline the models that provide accurate and conservative life predictions.
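A minimal numerical life integration under the mode I Paris' law, using illustrative constants rather than any of the paper's equivalent-SIF models: da/dN = C(ΔK)^m with ΔK = YΔσ√(πa), integrated from an initial to a final crack length.

```python
# Sketch (units and constants assumed): crack lengths in metres, stress
# range dS in MPa, Paris constants C and m in MPa*sqrt(m) units.
import math

def paris_life(a0, af, dS, C=1e-11, m=3.0, Y=1.0, steps=10_000):
    """Cycles to grow a crack from a0 to af under constant-amplitude loading."""
    da = (af - a0) / steps
    N = 0.0
    a = a0
    for _ in range(steps):
        am = a + 0.5 * da                      # midpoint crack length
        dK = Y * dS * math.sqrt(math.pi * am)  # stress intensity range
        N += da / (C * dK ** m)                # cycles for this increment
        a += da
    return N
```

Because da/dN scales as ΔK^m, halving the stress range multiplies the predicted life by 2^m, which the sketch reproduces exactly.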
Godin, Bruno; Mayer, Frédéric; Agneessens, Richard; Gerin, Patrick; Dardenne, Pierre; Delfosse, Philippe; Delcarte, Jérôme
2015-01-01
The reliability of different models for predicting the biochemical methane potential (BMP) of various plant biomasses was compared using a multispecies dataset. The most reliable BMP prediction models were those based on the near infrared (NIR) spectrum rather than on the chemical composition. The NIR predictions of local (specific regression and non-linear) models were able to estimate the BMP quantitatively, rapidly, cheaply and easily. Such a model could be used further for biomethanation plant management and optimization. The predictions of non-linear models were more reliable than those of linear models. The presentation form (green-dried, silage-dried and silage-wet) of biomasses to the NIR spectrometer did not influence the performance of the NIR prediction models. The accuracy of the BMP method should be improved to further enhance the BMP prediction models. Copyright © 2014 Elsevier Ltd. All rights reserved.
Suzuki, Makoto; Sugimura, Yuko; Yamada, Sumio; Omori, Yoshitsugu; Miyamoto, Masaaki; Yamamoto, Jun-ichi
2013-01-01
Cognitive disorders in the acute stage of stroke are common and are important independent predictors of adverse long-term outcome. Despite the impact of cognitive disorders on both patients and their families, it is still difficult to predict the extent or duration of cognitive impairments. The objective of the present study was, therefore, to provide data on predicting the recovery of cognitive function soon after stroke by differential modeling with logarithmic and linear regression. This study included two rounds of data collection: 57 stroke patients were enrolled in the first round to identify the time course of cognitive recovery from early-phase group data, and 43 stroke patients in the second round to ensure that the correlation in the early-phase group data applied to the prediction of each individual's degree of cognitive recovery. In the first round, Mini-Mental State Examination (MMSE) scores were assessed three times during hospitalization, and the scores were regressed on the logarithm and on the linear value of time. In the second round, MMSE scores from the first two scoring times after admission were used to tailor the structures of the logarithmic and linear regression formulae to each individual's degree of functional recovery. The time course of early-phase recovery of cognitive functions resembled both logarithmic and linear functions. However, MMSE scores sampled at two baseline points with logarithmic regression modeling predicted cognitive recovery more accurately than linear regression modeling (logarithmic modeling, R(2) = 0.676, P…). Logarithmic modeling based on MMSE scores could accurately predict the recovery of cognitive function soon after the occurrence of stroke. This logarithmic modeling involves mathematical procedures simple enough to be adopted in daily clinical practice.
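The two-point tailoring of the logarithmic formula can be sketched as follows. Function and variable names are assumed, not taken from the paper; the MMSE ceiling of 30 points is applied as a cap.

```python
# Sketch: fit MMSE(t) = a + b*ln(t) through the first two (day, score)
# samples after admission, then extrapolate to a later time point.
import math

def fit_log_recovery(t1, s1, t2, s2):
    """Return (a, b) of s = a + b*ln(t) through two (time, MMSE) samples."""
    b = (s2 - s1) / (math.log(t2) - math.log(t1))
    a = s1 - b * math.log(t1)
    return a, b

def predict_mmse(a, b, t, cap=30.0):
    """Extrapolated MMSE at time t, capped at the scale maximum."""
    return min(cap, a + b * math.log(t))
```

With only two baseline scores, the logarithmic curve is fully determined, which is what makes the approach practical at the bedside.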
A national-scale model of linear features improves predictions of farmland biodiversity.
Sullivan, Martin J P; Pearce-Higgins, James W; Newson, Stuart E; Scholefield, Paul; Brereton, Tom; Oliver, Tom H
2017-12-01
Modelling species distribution and abundance is important for many conservation applications, but it is typically performed using relatively coarse-scale environmental variables such as the area of broad land-cover types. Fine-scale environmental data capturing the most biologically relevant variables have the potential to improve these models. For example, field studies have demonstrated the importance of linear features, such as hedgerows, for multiple taxa, but the absence of large-scale datasets of their extent prevents their inclusion in large-scale modelling studies. We assessed whether a novel spatial dataset mapping linear and woody-linear features across the UK improves the performance of abundance models of 18 bird and 24 butterfly species across 3723 and 1547 UK monitoring sites, respectively. Although improvements in explanatory power were small, the inclusion of linear features data significantly improved model predictive performance for many species. For some species, the importance of linear features depended on landscape context, with greater importance in agricultural areas. Synthesis and applications. This study demonstrates that a national-scale model of the extent and distribution of linear features improves predictions of farmland biodiversity. The ability to model spatial variability in the role of linear features such as hedgerows will be important in targeting agri-environment schemes to maximally deliver biodiversity benefits. Although this study focuses on farmland, data on the extent of different linear features are likely to improve species distribution and abundance models in a wide range of systems and also can potentially be used to assess habitat connectivity.
Drug-Target Interaction Prediction through Label Propagation with Linear Neighborhood Information.
Zhang, Wen; Chen, Yanlin; Li, Dingfang
2017-11-25
Interactions between drugs and target proteins provide important information for drug discovery. Currently, experiments have identified only a small number of drug-target interactions. Therefore, the development of computational methods for drug-target interaction prediction is an urgent task of theoretical interest and practical significance. In this paper, we propose a label propagation method with linear neighborhood information (LPLNI) for predicting unobserved drug-target interactions. First, we calculate drug-drug linear neighborhood similarity in the feature spaces by considering how to reconstruct data points from their neighbors. Then, we take the similarities as the manifold of drugs and assume the manifold unchanged in the interaction space. Finally, we predict unobserved interactions between known drugs and targets using drug-drug linear neighborhood similarity and known drug-target interactions. The experiments show that LPLNI can use only known drug-target interactions to make high-accuracy predictions on four benchmark datasets. Furthermore, we consider incorporating chemical structures into LPLNI models. Experimental results demonstrate that the model with integrated information (LPLNI-II) produces improved performance, better than other state-of-the-art methods. The known drug-target interactions are an important information source for computational predictions. The usefulness of the proposed method is demonstrated by cross-validation and a case study.
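A simplified sketch of the propagation step, on toy data: a given drug-drug similarity matrix stands in for the paper's linear-neighborhood reconstruction weights, and labels for one target are spread over the similarity graph.

```python
# Sketch (assumed simplification of LPLNI): iterate the standard label
# propagation update f <- alpha * W_norm f + (1 - alpha) * y0, where y0
# holds the known drug-target interaction labels for one target.

def propagate(W, y0, alpha=0.5, iters=100):
    """Propagate interaction scores over a row-normalized similarity matrix."""
    n = len(y0)
    Wn = [[w / (sum(row) or 1.0) for w in row] for row in W]  # row-normalize
    f = list(y0)
    for _ in range(iters):
        f = [alpha * sum(Wn[i][j] * f[j] for j in range(n))
             + (1 - alpha) * y0[i] for i in range(n)]
    return f
```

Drugs similar to a known interactor receive higher predicted scores than dissimilar ones, which is the intuition behind the method.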
Linear and nonlinear models for predicting fish bioconcentration factors for pesticides.
Yuan, Jintao; Xie, Chun; Zhang, Ting; Sun, Jinfang; Yuan, Xuejie; Yu, Shuling; Zhang, Yingbiao; Cao, Yunyuan; Yu, Xingchen; Yang, Xuan; Yao, Wu
2016-08-01
This work is devoted to the applications of the multiple linear regression (MLR), multilayer perceptron neural network (MLP NN) and projection pursuit regression (PPR) to quantitative structure-property relationship analysis of bioconcentration factors (BCFs) of pesticides tested on Bluegill (Lepomis macrochirus). Molecular descriptors of a total of 107 pesticides were calculated with the DRAGON Software and selected by inverse enhanced replacement method. Based on the selected DRAGON descriptors, a linear model was built by MLR, nonlinear models were developed using MLP NN and PPR. The robustness of the obtained models was assessed by cross-validation and external validation using test set. Outliers were also examined and deleted to improve predictive power. Comparative results revealed that PPR achieved the most accurate predictions. This study offers useful models and information for BCF prediction, risk assessment, and pesticide formulation. Copyright © 2016 Elsevier Ltd. All rights reserved.
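As a reduced sketch of the regression step above (a single toy descriptor standing in for the selected DRAGON set; the MLP NN and PPR models are not reproduced here), ordinary least squares can be written directly:

```python
# Sketch: univariate ordinary least squares, e.g. log BCF regressed on one
# hydrophobicity-like descriptor. All data in the test are toy values.

def ols(xs, ys):
    """Return (intercept, slope) minimizing the sum of squared errors."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return my - slope * mx, slope
```

The full MLR model is the multi-descriptor generalization of this fit, solved over the descriptors chosen by the enhanced replacement method.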
Improving the Prediction of Total Surgical Procedure Time Using Linear Regression Modeling.
Edelman, Eric R; van Kuijk, Sander M J; Hamaekers, Ankie E W; de Korte, Marcel J M; van Merode, Godefridus G; Buhre, Wolfgang F F A
2017-01-01
For efficient utilization of operating rooms (ORs), accurate schedules of assigned block time and sequences of patient cases need to be made. The quality of these planning tools is dependent on the accurate prediction of total procedure time (TPT) per case. In this paper, we attempt to improve the accuracy of TPT predictions by using linear regression models based on estimated surgeon-controlled time (eSCT) and other variables relevant to TPT. We extracted data from a Dutch benchmarking database of all surgeries performed in six academic hospitals in The Netherlands from 2012 to 2016. The final dataset consisted of 79,983 records, describing 199,772 h of total OR time. Potential predictors of TPT that were included in the subsequent analysis were eSCT, patient age, type of operation, American Society of Anesthesiologists (ASA) physical status classification, and type of anesthesia used. First, we computed the predicted TPT based on a previously described fixed ratio model for each record, multiplying eSCT by 1.33. This number is based on the research performed by van Veen-Berkx et al., which showed that 33% of SCT is generally a good approximation of anesthesia-controlled time (ACT). We then systematically tested all possible linear regression models to predict TPT using eSCT in combination with the other available independent variables. In addition, all regression models were again tested without eSCT as a predictor to predict ACT separately (which leads to TPT by adding SCT). TPT was most accurately predicted using a linear regression model based on the independent variables eSCT, type of operation, ASA classification, and type of anesthesia. This model performed significantly better than the fixed ratio model and the method of predicting ACT separately. Making use of these more accurate predictions in planning and sequencing algorithms may enable an increase in utilization of ORs, leading to significant financial and productivity related benefits.
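The fixed-ratio baseline described above reduces to one line; it is sketched here with minutes assumed as the unit. The regression alternative adds operation type, ASA class and anesthesia type as covariates on top of eSCT.

```python
# Sketch of the fixed-ratio baseline: anesthesia-controlled time is taken
# as 33% of surgeon-controlled time, so predicted TPT = eSCT * 1.33.

def tpt_fixed_ratio(esct_minutes):
    """Predicted total procedure time from estimated surgeon-controlled time."""
    return esct_minutes * 1.33
```

The paper's result is precisely that case-level covariates let a regression model beat this constant multiplier.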
Prediction of Mind-Wandering with Electroencephalogram and Non-linear Regression Modeling.
Kawashima, Issaku; Kumano, Hiroaki
2017-01-01
Mind-wandering (MW), task-unrelated thought, has been examined in an increasing number of articles using models that predict whether subjects are mind-wandering from numerous physiological variables. However, these models are not applicable in general situations, and they output only a binary classification. The current study suggests that the combination of electroencephalogram (EEG) variables and non-linear regression modeling can be a good indicator of MW intensity. We recorded EEGs of 50 subjects during a Sustained Attention to Response Task that included a thought-sampling probe inquiring about the focus of attention. We calculated power and coherence values, prepared 35 patterns of variable combinations, and applied support vector regression (SVR) to them. Finally, we chose four SVR models: two non-linear models and two linear models; two of the four models are composed of a limited number of electrodes to keep the models practical. Examination using the held-out data indicated that all models had robust predictive precision and provided significantly better estimations than a linear regression model using single-electrode EEG variables. Furthermore, in the limited-electrode condition, the non-linear SVR model showed significantly better precision than the linear SVR model. The method proposed in this study supports investigations of MW in various little-examined situations. Further, by measuring MW with high-temporal-resolution EEG, unclear aspects of MW, such as time-series variation, are expected to be revealed. Furthermore, our finding that a few electrodes can also predict MW contributes to the development of neuro-feedback studies.
Predicting musically induced emotions from physiological inputs: Linear and neural network models
Directory of Open Access Journals (Sweden)
Frank A. Russo
2013-08-01
Full Text Available Listening to music often leads to physiological responses. Do these physiological responses contain sufficient information to infer the emotion induced in the listener? The current study explores this question by attempting to predict judgments of 'felt' emotion from physiological responses alone, using linear and neural network models. We measured five channels of peripheral physiology from 20 participants: heart rate, respiration, galvanic skin response, and activity in the corrugator supercilii and zygomaticus major facial muscles. Using valence and arousal (VA) dimensions, participants rated their felt emotion after listening to each of 12 classical music excerpts. After extracting features from the five channels, we examined their correlation with VA ratings, and then performed multiple linear regression to see if a linear combination of the physiological responses could account for the ratings. Although linear models predicted a significant amount of variance in arousal ratings, they were unable to do so with valence ratings. We then used a neural network to provide a nonlinear account of the ratings. The network was trained on the mean ratings of eight of the 12 excerpts and tested on the remainder. Performance of the neural network confirms that physiological responses alone can be used to predict musically induced emotion. The nonlinear model derived from the neural network was more accurate than linear models derived from multiple linear regression, particularly along the valence dimension. A secondary analysis allowed us to quantify the relative contributions of inputs to the nonlinear model. The study represents a novel approach to understanding the complex relationship between physiological responses and musically induced emotion.
Prediction of Mind-Wandering with Electroencephalogram and Non-linear Regression Modeling
Directory of Open Access Journals (Sweden)
Issaku Kawashima
2017-07-01
Full Text Available Mind-wandering (MW), task-unrelated thought, has been examined by researchers in an increasing number of articles using models that predict whether subjects are in MW from numerous physiological variables. However, these models are not applicable in general situations. Moreover, they output only a binary classification. The current study suggests that the combination of electroencephalogram (EEG) variables and non-linear regression modeling can be a good indicator of MW intensity. We recorded EEGs of 50 subjects during the performance of a Sustained Attention to Response Task, including a thought sampling probe that inquired about the focus of attention. We calculated power and coherence values, prepared 35 patterns of variable combinations, and applied Support Vector Machine Regression (SVR) to them. Finally, we chose four SVR models: two non-linear models and two linear models; two of the four models are composed of a limited number of electrodes to satisfy model usefulness. Examination using the held-out data indicated that all models had robust predictive precision and provided significantly better estimations than a linear regression model using single-electrode EEG variables. Furthermore, in the limited-electrode condition, the non-linear SVR model showed significantly better precision than the linear SVR model. The method proposed in this study helps investigations into MW in various little-examined situations. Further, by measuring MW with high-temporal-resolution EEG, unclear aspects of MW, such as time-series variation, are expected to be revealed. Furthermore, our suggestion that a few electrodes can also predict MW contributes to the development of neuro-feedback studies.
Darcy’s law predicts widespread forest mortality under climate warming
McDowell, Nate G.; Allen, Craig D.
2015-01-01
Drought and heat-induced tree mortality is accelerating in many forest biomes as a consequence of a warming climate, resulting in a threat to global forests unlike any in recorded history. Forests store the majority of terrestrial carbon, thus their loss may have significant and sustained impacts on the global carbon cycle. We use a hydraulic corollary to Darcy’s law, a core principle of vascular plant physiology, to predict characteristics of plants that will survive and die during drought under warmer future climates. Plants that are tall with isohydric stomatal regulation, low hydraulic conductance, and high leaf area are most likely to die from future drought stress. Thus, tall trees of old-growth forests are at the greatest risk of loss, which has ominous implications for terrestrial carbon storage. This application of Darcy’s law indicates today’s forests generally should be replaced by shorter and more xeric plants, owing to future warmer droughts and associated wildfires and pest attacks. The corollary to Darcy’s law also provides a simple, robust framework for informing forest management interventions needed to promote the survival of current forests. Given the robustness of Darcy’s law for predictions of vascular plant function, we conclude with high certainty that today’s forests are going to be subject to continued increases in mortality rates that will result in substantial reorganization of their structure and carbon storage.
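The qualitative argument can be sketched with a schematic Darcy-type supply calculation; the functional form and all values below are illustrative assumptions, not the paper's exact corollary:

```python
# Water supply per unit leaf area under a Darcy-type relation: proportional to
# sapwood conductance k, sapwood area a_sapwood, and the soil-to-leaf water
# potential difference dpsi, and inversely proportional to leaf area a_leaf
# and path length (height). All values are hypothetical.
def supply_per_leaf_area(k, a_sapwood, a_leaf, dpsi, height):
    return k * a_sapwood * dpsi / (a_leaf * height)

short_tree = supply_per_leaf_area(k=1.0, a_sapwood=0.01, a_leaf=10.0, dpsi=1.0, height=10.0)
tall_tree = supply_per_leaf_area(k=1.0, a_sapwood=0.01, a_leaf=30.0, dpsi=1.0, height=40.0)
print(short_tree, tall_tree)
```

Under this toy relation, a taller tree with more leaf area receives less water per unit leaf area, consistent with the abstract's prediction that tall, high-leaf-area trees are most at risk.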
Directory of Open Access Journals (Sweden)
Xiaoli Liu
2018-01-01
Full Text Available Alzheimer’s disease (AD) has been not only a substantial financial burden to the health care system but also an emotional burden to patients and their families. Predicting cognitive performance of subjects from their magnetic resonance imaging (MRI) measures and identifying relevant imaging biomarkers are important research topics in the study of Alzheimer’s disease. Recently, multitask learning (MTL) methods with a sparsity-inducing norm (e.g., the l2,1-norm) have been widely studied to select the discriminative feature subset from MRI features by incorporating inherent correlations among multiple clinical cognitive measures. However, these previous works formulate the prediction tasks as a linear regression problem. The major limitation is that they assumed a linear relationship between the MRI features and the cognitive outcomes. Some multikernel-based MTL methods have been proposed and have shown better generalization ability due to their nonlinear advantage. We quantify the power of existing linear and nonlinear MTL methods by evaluating their performance on cognitive score prediction of Alzheimer’s disease. Moreover, we extend the traditional l2,1-norm to a more general lq,1-norm (q≥1). Experiments on the Alzheimer’s Disease Neuroimaging Initiative database showed that the nonlinear lq,1-MKMTL method not only achieved better prediction performance than the state-of-the-art competitive methods but also effectively fused the multimodality data.
International Nuclear Information System (INIS)
Ernst, Floris; Schweikard, Achim
2008-01-01
Forecasting of respiration motion in image-guided radiotherapy requires algorithms that can accurately and efficiently predict target location. Improved methods for respiratory motion forecasting were developed and tested. MULIN, a new family of prediction algorithms based on linear expansions of the prediction error, was developed and tested. Computer-generated data with a prediction horizon of 150 ms was used for testing in simulation experiments. MULIN was compared to Least Mean Squares-based predictors (LMS; normalized LMS, nLMS; wavelet-based multiscale autoregression, wLMS) and a multi-frequency Extended Kalman Filter (EKF) approach. The in vivo performance of the algorithms was tested on data sets of patients who underwent radiotherapy. The new MULIN methods are highly competitive, outperforming the LMS and the EKF prediction algorithms in real-world settings and performing similarly to optimized nLMS and wLMS prediction algorithms. On simulated, periodic data the MULIN algorithms are outperformed only by the EKF approach due to its inherent advantage in predicting periodic signals. In the presence of noise, the MULIN methods significantly outperform all other algorithms. The MULIN family of algorithms is a feasible tool for the prediction of respiratory motion, performing as well as or better than conventional algorithms while requiring significantly lower computational complexity. The MULIN algorithms are of special importance wherever high-speed prediction is required. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Ernst, Floris; Schweikard, Achim [University of Luebeck, Institute for Robotics and Cognitive Systems, Luebeck (Germany)
2008-06-15
Forecasting of respiration motion in image-guided radiotherapy requires algorithms that can accurately and efficiently predict target location. Improved methods for respiratory motion forecasting were developed and tested. MULIN, a new family of prediction algorithms based on linear expansions of the prediction error, was developed and tested. Computer-generated data with a prediction horizon of 150 ms was used for testing in simulation experiments. MULIN was compared to Least Mean Squares-based predictors (LMS; normalized LMS, nLMS; wavelet-based multiscale autoregression, wLMS) and a multi-frequency Extended Kalman Filter (EKF) approach. The in vivo performance of the algorithms was tested on data sets of patients who underwent radiotherapy. The new MULIN methods are highly competitive, outperforming the LMS and the EKF prediction algorithms in real-world settings and performing similarly to optimized nLMS and wLMS prediction algorithms. On simulated, periodic data the MULIN algorithms are outperformed only by the EKF approach due to its inherent advantage in predicting periodic signals. In the presence of noise, the MULIN methods significantly outperform all other algorithms. The MULIN family of algorithms is a feasible tool for the prediction of respiratory motion, performing as well as or better than conventional algorithms while requiring significantly lower computational complexity. The MULIN algorithms are of special importance wherever high-speed prediction is required. (orig.)
DEFF Research Database (Denmark)
Sokoler, Leo Emil; Standardi, Laura; Edlund, Kristian
2014-01-01
This paper presents a warm-started Dantzig–Wolfe decomposition algorithm tailored to economic model predictive control of dynamically decoupled subsystems. We formulate the constrained optimal control problem solved at each sampling instant as a linear program with state space constraints, input limits, input rate limits, and soft output limits. The objective function of the linear program is related directly to the cost of operating the subsystems, and the cost of violating the soft output constraints. Simulations for large-scale economic power dispatch problems show that the proposed algorithm is significantly faster than both state-of-the-art linear programming solvers, and a structure exploiting implementation of the alternating direction method of multipliers. It is also demonstrated that the control strategy presented in this paper can be tuned using a weighted ℓ1-regularization term
Predicting non-linear dynamics by stable local learning in a recurrent spiking neural network.
Gilra, Aditya; Gerstner, Wulfram
2017-11-27
The brain needs to predict how the body reacts to motor commands, but how a network of spiking neurons can learn non-linear body dynamics using local, online and stable learning rules is unclear. Here, we present a supervised learning scheme for the feedforward and recurrent connections in a network of heterogeneous spiking neurons. The error in the output is fed back through fixed random connections with a negative gain, causing the network to follow the desired dynamics. The rule for Feedback-based Online Local Learning Of Weights (FOLLOW) is local in the sense that weight changes depend on the presynaptic activity and the error signal projected onto the postsynaptic neuron. We provide examples of learning linear, non-linear and chaotic dynamics, as well as the dynamics of a two-link arm. Under reasonable approximations, we show, using the Lyapunov method, that FOLLOW learning is uniformly stable, with the error going to zero asymptotically.
Financial Distress Prediction using Linear Discriminant Analysis and Support Vector Machine
Santoso, Noviyanti; Wibowo, Wahyu
2018-03-01
Financial distress is an early stage before bankruptcy. Bankruptcy caused by financial distress can be seen in the financial statements of the company. The ability to predict financial distress has become an important research topic because it can provide early warning for the company. In addition, predicting financial distress is also beneficial for investors and creditors. This research builds a prediction model of financial distress for industrial companies in Indonesia by comparing the performance of Linear Discriminant Analysis (LDA) and Support Vector Machine (SVM) combined with a variable selection technique. The result of this research is that the prediction model based on hybrid Stepwise-SVM obtains a better balance among fitting ability, generalization ability, and model stability than the other models.
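The LDA-versus-SVM comparison can be sketched with scikit-learn on synthetic data (a hypothetical stand-in for the Indonesian financial-ratio dataset, which is not reproduced here):

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Hypothetical financial-ratio features and binary distress labels.
X, y = make_classification(n_samples=400, n_features=8, n_informative=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Fit both classifiers and compare held-out accuracy.
acc_lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr).score(X_te, y_te)
acc_svm = SVC(kernel="rbf").fit(X_tr, y_tr).score(X_te, y_te)
print(acc_lda, acc_svm)
```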
Prediction of Complex Human Traits Using the Genomic Best Linear Unbiased Predictor
DEFF Research Database (Denmark)
de los Campos, Gustavo; Vazquez, Ana I; Fernando, Rohan
2013-01-01
Despite important advances from Genome Wide Association Studies (GWAS), for most complex human traits and diseases, a sizable proportion of genetic variance remains unexplained and prediction accuracy (PA) is usually low. Evidence suggests that PA can be improved using Whole-Genome Regression (WGR) models where phenotypes are regressed on hundreds of thousands of variants simultaneously. The Genomic Best Linear Unbiased Prediction (G-BLUP, a ridge-regression-type method) is a commonly used WGR method and has shown good predictive performance when applied to plant and animal breeding populations. However, breeding and human populations differ greatly in a number of factors that can affect the predictive performance of G-BLUP. Using theory, simulations, and real data analysis, we study the performance of G-BLUP when applied to data from related and unrelated human subjects. Under perfect linkage
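G-BLUP's ridge-regression character can be sketched on toy genotype data; the dimensions, variance ratio, and marker model below are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: n subjects, p marker genotypes coded 0/1/2, then standardized.
n, p = 100, 500
X = rng.choice([0.0, 1.0, 2.0], size=(n, p))
X = (X - X.mean(0)) / (X.std(0) + 1e-12)
beta = rng.normal(0, 0.1, size=p)          # true marker effects
y = X @ beta + rng.normal(0, 1.0, size=n)  # phenotype = genetic value + noise

# G-BLUP: genomic relationship matrix G = XX'/p; genetic values predicted by
# the ridge-type shrinkage estimator u_hat = G (G + lambda*I)^{-1} y.
G = X @ X.T / p
lam = 1.0  # assumed residual-to-genetic variance ratio
u_hat = G @ np.linalg.solve(G + lam * np.eye(n), y)

# Accuracy: correlation of predicted and true genetic values.
corr = float(np.corrcoef(u_hat, X @ beta)[0, 1])
print(corr)
```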
Directory of Open Access Journals (Sweden)
Avval Zhila Mohajeri
2015-01-01
Full Text Available This paper deals with developing a linear quantitative structure-activity relationship (QSAR) model for predicting the RSK inhibition activity of some new compounds. A dataset consisting of 62 pyrazino [1,2-α] indole, diazepino [1,2-α] indole, and imidazole derivatives with known inhibitory activities was used. The multiple linear regression (MLR) technique combined with the stepwise (SW) and the genetic algorithm (GA) methods as variable selection tools was employed. To further check the stability, robustness, and predictability of the proposed models, internal and external validation techniques were used. Comparison of the results obtained indicates that the GA-MLR model is superior to the SW-MLR model and that it is applicable for designing novel RSK inhibitors.
The application of sparse linear prediction dictionary to compressive sensing in speech signals
Directory of Open Access Journals (Sweden)
YOU Hanxu
2016-04-01
Full Text Available Applying compressive sensing (CS), which theoretically guarantees that signal sampling and signal compression can be achieved simultaneously, to audio and speech signal processing is one of the most popular research topics in recent years. In this paper, the K-SVD algorithm was employed to learn a sparse linear prediction dictionary serving as the sparse basis of the underlying speech signals. Compressed signals were obtained by applying a random Gaussian matrix to sample the original speech frames. Orthogonal matching pursuit (OMP) and compressive sampling matching pursuit (CoSaMP) were adopted to recover the original signals from the compressed ones. A number of experiments were carried out to investigate the impact of speech frame length, compression ratio, sparse basis, and reconstruction algorithm on CS performance. Results show that the sparse linear prediction dictionary can advance the performance of speech signal reconstruction compared with the discrete cosine transform (DCT) matrix.
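A compact OMP recovery sketch (random Gaussian sensing, with an identity sparsity basis standing in for a learned K-SVD sparse linear prediction dictionary):

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(3)

# A k-sparse signal of length n, observed through m << n random projections.
n, m, k = 128, 64, 5
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.normal(0, 1, size=k)

Phi = rng.normal(0, 1 / np.sqrt(m), size=(m, n))  # Gaussian sensing matrix
y = Phi @ x                                        # compressed measurements

# Recover the sparse coefficients with orthogonal matching pursuit (OMP).
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(Phi, y)
err = float(np.linalg.norm(x - omp.coef_) / np.linalg.norm(x))
print(err)
```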
DEFF Research Database (Denmark)
Christensen, M. G.; Jensen, Søren Holdt
2006-01-01
A method for amplitude modulated sinusoidal audio coding is presented that has low complexity and low delay. This is based on a subband processing system, where, in each subband, the signal is modeled as an amplitude modulated sum of sinusoids. The envelopes are estimated using frequency-domain linear prediction and the prediction coefficients are quantized. As a proof of concept, we evaluate different configurations in a subjective listening test, and this shows that the proposed method offers significant improvements in sinusoidal coding. Furthermore, the properties of the frequency
DEFF Research Database (Denmark)
Petersen, Lars Norbert; Poulsen, Niels Kjølstad; Niemann, Hans Henrik
2015-01-01
In this paper, we compare the performance of an economically optimizing Nonlinear Model Predictive Controller (E-NMPC) to a linear tracking Model Predictive Controller (MPC) for a spray drying plant. We find in this simulation study that the economic performance of the two controllers is almost equal. We evaluate the economic performance with an industrially recorded disturbance scenario, where unmeasured disturbances and model mismatch are present. The state of the spray dryer, used in the E-NMPC and MPC, is estimated using Kalman Filters with noise covariances estimated by a maximum
Modified linear predictive coding approach for moving target tracking by Doppler radar
Ding, Yipeng; Lin, Xiaoyi; Sun, Ke-Hui; Xu, Xue-Mei; Liu, Xi-Yao
2016-07-01
Doppler radar is a cost-effective tool for moving target tracking, which can support a large range of civilian and military applications. A modified linear predictive coding (LPC) approach is proposed to increase the target localization accuracy of the Doppler radar. Based on the time-frequency analysis of the received echo, the proposed approach first estimates the noise statistics in real time and constructs an adaptive filter to intelligently suppress the noise interference. Then, a linear predictive model is applied to extend the available data, which can help improve the resolution of the target localization result. Compared with the traditional LPC method, which empirically decides the extension data length, the proposed approach develops an error array to evaluate the prediction accuracy and thus adjusts the optimum extension data length intelligently. Finally, the prediction error array is superimposed on the predictor output to correct the prediction error. A series of experiments are conducted to illustrate the validity and performance of the proposed techniques.
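The data-extension step of LPC (without the paper's adaptive noise filter and error-array correction) can be sketched as follows; the test signal and model order are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy "echo": a lightly noisy sinusoid standing in for the radar return.
t = np.arange(200)
sig = np.sin(2 * np.pi * 0.05 * t) + 0.001 * rng.normal(size=t.size)

# Fit an order-p predictor s[n] ~ sum_j a[j] * s[n-p+j] by least squares.
p = 8
rows = np.array([sig[i:i + p] for i in range(len(sig) - p)])
a, *_ = np.linalg.lstsq(rows, sig[p:], rcond=None)

# Extend the record by iterating the predictor on its own output.
ext = list(sig)
for _ in range(30):
    ext.append(float(np.dot(a, ext[-p:])))

# Compare the extension against the noise-free continuation.
true_future = np.sin(2 * np.pi * 0.05 * np.arange(200, 230))
err = float(np.max(np.abs(np.array(ext[200:]) - true_future)))
print(err)
```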
Tewarie, P.; Bright, M.G.; Hillebrand, A.; Robson, S.E.; Gascoyne, L.E.; Morris, P.G.; Meier, J.; Van Mieghem, P.; Brookes, M.J.
2016-01-01
Understanding the electrophysiological basis of resting state networks (RSNs) in the human brain is a critical step towards elucidating how inter-areal connectivity supports healthy brain function. In recent years, the relationship between RSNs (typically measured using haemodynamic signals) and electrophysiology has been explored using functional Magnetic Resonance Imaging (fMRI) and magnetoencephalography (MEG). Significant progress has been made, with similar spatial structure observable in both modalities. However, there is a pressing need to understand this relationship beyond simple visual similarity of RSN patterns. Here, we introduce a mathematical model to predict fMRI-based RSNs using MEG. Our unique model, based upon a multivariate Taylor series, incorporates both phase and amplitude based MEG connectivity metrics, as well as linear and non-linear interactions within and between neural oscillations measured in multiple frequency bands. We show that including non-linear interactions, multiple frequency bands and cross-frequency terms significantly improves fMRI network prediction. This shows that fMRI connectivity is not only the result of direct electrophysiological connections, but is also driven by the overlap of connectivity profiles between separate regions. Our results indicate that a complete understanding of the electrophysiological basis of RSNs goes beyond simple frequency-specific analysis, and further exploration of non-linear and cross-frequency interactions will shed new light on distributed network connectivity, and its perturbation in pathology. PMID:26827811
Ling, Ru; Liu, Jiawang
2011-12-01
To construct prediction models for the health workforce and hospital beds in county hospitals of Hunan by multiple linear regression, we surveyed 16 counties in Hunan with stratified random sampling according to uniform questionnaires, and performed multiple linear regression analysis with 20 quotas selected by literature review. Independent variables in the multiple linear regression model on medical personnel in county hospitals included the counties' urban residents' income, crude death rate, medical beds, business occupancy, professional equipment value, the number of devices valued above 10,000 yuan, fixed assets, long-term debt, medical income, medical expenses, outpatient and emergency visits, hospital visits, actual available bed days, and utilization rate of hospital beds. Independent variables in the multiple linear regression model on county hospital beds included the population aged 65 and above in the counties, disposable income of urban residents, medical personnel of medical institutions in the county area, business occupancy, the total value of professional equipment, fixed assets, long-term debt, medical income, medical expenses, outpatient and emergency visits, hospital visits, actual available bed days, utilization rate of hospital beds, and length of hospitalization. The prediction models show good explanatory power and fit, and may be used for short- and mid-term forecasting.
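A minimal multiple-linear-regression sketch in the same spirit, with three hypothetical county-level predictors standing in for the surveyed quotas:

```python
import numpy as np

rng = np.random.default_rng(5)

# 16 counties, 3 hypothetical predictors (e.g., elderly population share,
# disposable income, outpatient visits), all scaled to [0, 1].
n = 16
X = rng.uniform(0, 1, size=(n, 3))
beds = 100 + X @ np.array([50.0, 30.0, 80.0]) + rng.normal(0, 2, size=n)

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, beds, rcond=None)
pred = A @ coef
r2 = float(1 - np.sum((beds - pred) ** 2) / np.sum((beds - beds.mean()) ** 2))
print(coef, r2)
```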
MULTIPLE LINEAR REGRESSION ANALYSIS FOR PREDICTION OF BOILER LOSSES AND BOILER EFFICIENCY
Chayalakshmi C.L
2018-01-01
Calculation of boiler efficiency is essential if its parameters need to be controlled for either maintaining or enhancing its efficiency. But determination of boiler efficiency using the conventional method is time consuming and very expensive. Hence, it is not recommended to find boiler efficiency frequently. The work presented in this paper deals with establishing the statistical mo...
Integrating piecewise linear representation and ensemble neural network for stock price prediction
Asaduzzaman, Md.; Shahjahan, Md.; Ahmed, Fatema Johera; Islam, Md. Monirul; Murase, Kazuyuki
2014-01-01
Stock prices are considered to be very dynamic and susceptible to quick changes because of the underlying nature of the financial domain, and in part because of the interchange between known parameters and unknown factors. Of late, several researchers have used Piecewise Linear Representation (PLR) to predict stock market pricing. However, some improvements are needed in choosing the appropriate threshold for the trading decision and the input index, as well as in improving the overall perf...
Hu, J H; Wang, Y; Cahill, P T
1997-01-01
This paper reports a multispectral code excited linear prediction (MCELP) method for the compression of multispectral images. Different linear prediction models and adaptation schemes have been compared. The method that uses a forward adaptive autoregressive (AR) model has been proven to achieve a good compromise between performance, complexity, and robustness. This approach is referred to as the MFCELP method. Given a set of multispectral images, the linear predictive coefficients are updated over nonoverlapping three-dimensional (3-D) macroblocks. Each macroblock is further divided into several 3-D microblocks, and the best excitation signal for each microblock is determined through an analysis-by-synthesis procedure. The MFCELP method has been applied to multispectral magnetic resonance (MR) images. To satisfy the high quality requirement for medical images, the error between the original image set and the synthesized one is further encoded using a vector quantizer. This method has been applied to images from 26 clinical MR neuro studies (20 slices/study, three spectral bands/slice, 256x256 pixels/band, 12 bits/pixel). The MFCELP method provides a significant visual improvement over the discrete cosine transform (DCT) based Joint Photographic Experts Group (JPEG) method, the wavelet transform based embedded zero-tree wavelet (EZW) coding method, and the vector tree (VT) coding method, as well as the multispectral segmented autoregressive moving average (MSARMA) method we developed previously.
Quantifying the predictive consequences of model error with linear subspace analysis
White, Jeremy T.; Doherty, John E.; Hughes, Joseph D.
2014-01-01
All computer models are simplified and imperfect simulators of complex natural systems. The discrepancy arising from simplification induces bias in model predictions, which may be amplified by the process of model calibration. This paper presents a new method to identify and quantify the predictive consequences of calibrating a simplified computer model. The method is based on linear theory, and it scales efficiently to the large numbers of parameters and observations characteristic of groundwater and petroleum reservoir models. The method is applied to a range of predictions made with a synthetic integrated surface-water/groundwater model with thousands of parameters. Several different observation processing strategies and parameterization/regularization approaches are examined in detail, including use of the Karhunen-Loève parameter transformation. Predictive bias arising from model error is shown to be prediction specific and often invisible to the modeler. The amount of calibration-induced bias is influenced by several factors, including how expert knowledge is applied in the design of parameterization schemes, the number of parameters adjusted during calibration, how observations and model-generated counterparts are processed, and the level of fit with observations achieved through calibration. Failure to properly implement any of these factors in a prediction-specific manner may increase the potential for predictive bias in ways that are not visible to the calibration and uncertainty analysis process.
Predicting and understanding law-making with word vectors and an ensemble model.
Nay, John J
2017-01-01
Out of nearly 70,000 bills introduced in the U.S. Congress from 2001 to 2015, only 2,513 were enacted. We developed a machine learning approach to forecasting the probability that any bill will become law. Starting in 2001 with the 107th Congress, we trained models on data from previous Congresses, predicted all bills in the current Congress, and repeated until the 113th Congress served as the test. For prediction we scored each sentence of a bill with a language model that embeds legislative vocabulary into a high-dimensional, semantic-laden vector space. This language representation enables our investigation into which words increase the probability of enactment for any topic. To test the relative importance of text and context, we compared the text model to a context-only model that uses variables such as whether the bill's sponsor is in the majority party. To test the effect of changes to bills after their introduction on our ability to predict their final outcome, we compared using the bill text and meta-data available at the time of introduction with using the most recent data. At the time of introduction context-only predictions outperform text-only, and with the newest data text-only outperforms context-only. Combining text and context always performs best. We conducted a global sensitivity analysis on the combined model to determine important variables predicting enactment.
Linear filters as a method of real-time prediction of geomagnetic activity
International Nuclear Information System (INIS)
McPherron, R.L.; Baker, D.N.; Bargatze, L.F.
1985-01-01
Important factors controlling geomagnetic activity include the solar wind velocity, the strength of the interplanetary magnetic field (IMF), and the field orientation. Because these quantities change so much in transit through the solar wind, real-time monitoring immediately upstream of the earth provides the best input for any technique of real-time prediction. One such technique is linear prediction filtering which utilizes past histories of the input and output of a linear system to create a time-invariant filter characterizing the system. Problems of nonlinearity or temporal changes of the system can be handled by appropriate choice of input parameters and piecewise approximation in various ranges of the input. We have created prediction filters for all the standard magnetic indices and tested their efficiency. The filters show that the initial response of the magnetosphere to a southward turning of the IMF peaks in 20 minutes and then again in 55 minutes. After a northward turning, auroral zone indices and the midlatitude ASYM index return to background within 2 hours, while Dst decays exponentially with a time constant of about 8 hours. This paper describes a simple, real-time system utilizing these filters which could predict a substantial fraction of the variation in magnetic activity indices 20 to 50 minutes in advance
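Linear prediction filtering of this kind amounts to estimating an impulse response from past input/output histories; a least-squares sketch on a synthetic linearly coupled system (the coupling below is illustrative, not a real solar-wind/index relation):

```python
import numpy as np

rng = np.random.default_rng(6)

n, L = 2000, 10
u = rng.normal(size=n)                # upstream driver time series (input)
h_true = np.exp(-np.arange(L) / 3.0)  # assumed impulse response
y = np.convolve(u, h_true)[:n] + 0.05 * rng.normal(size=n)  # "index" output

# Solve y[i] ~ sum_k h[k] * u[i-k] for the filter h by least squares
# over the whole record.
rows = np.array([u[i - L + 1:i + 1][::-1] for i in range(L - 1, n)])
h_est, *_ = np.linalg.lstsq(rows, y[L - 1:], rcond=None)

err = float(np.max(np.abs(h_est - h_true)))
print(err)
```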
An atomistic vision of the Mass Action Law: Prediction of carbon/oxygen defects in silicon
Energy Technology Data Exchange (ETDEWEB)
Brenet, G.; Timerkaeva, D.; Caliste, D.; Pochet, P. [CEA, INAC-SP2M, Atomistic Simulation Laboratory, F-38000 Grenoble (France); Univ. Grenoble Alpes, INAC-SP2M, L-Sim, F-38000 Grenoble (France); Sgourou, E. N.; Londos, C. A. [University of Athens, Solid State Physics Section, Panepistimiopolis Zografos, Athens 157 84 (Greece)
2015-09-28
We introduce an atomistic description of the kinetic Mass Action Law to predict concentrations of defects and complexes. We demonstrate in this paper that this approach accurately predicts carbon/oxygen related defect concentrations in silicon upon annealing. The model requires binding and migration energies of the impurities and complexes, here obtained from density functional theory (DFT) calculations. Vacancy-oxygen complex kinetics are studied as a model system during both isochronal and isothermal annealing. Results are in good agreement with experimental data, confirming the success of the methodology. More importantly, it gives access to the sequence of chain reactions by which oxygen and carbon related complexes are created in silicon. Beside the case of silicon, the understanding of such intricate reactions is a key to develop point defect engineering strategies to control defects and thus semiconductors properties.
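The kinetic Mass Action Law reduces to coupled rate equations; below is a schematic single-reaction sketch (V + O -> VO, i.e., vacancy capture by interstitial oxygen) with an assumed Arrhenius barrier and capture volume, not the paper's DFT values:

```python
import numpy as np

kB = 8.617e-5  # Boltzmann constant, eV/K

def jump_rate(T, Em=1.0, nu=1e13):
    # Arrhenius jump rate with assumed migration barrier Em (eV) and
    # attempt frequency nu (1/s).
    return nu * np.exp(-Em / (kB * T))

T = 350.0                      # isothermal annealing temperature, K
k = jump_rate(T) * 1e-21       # capture coefficient (assumed volume), cm^3/s
V, O, VO = 1e14, 1e17, 0.0     # vacancy, oxygen, VO concentrations, cm^-3

dt = 1.0
for _ in range(3600):          # one hour of annealing, forward Euler
    dV = -k * V * O * dt       # mass action: d[V]/dt = -k [V][O]
    V, O, VO = V + dV, O + dV, VO - dV

print(V, VO)
```

The forward-Euler updates conserve V + VO exactly, which mirrors the bookkeeping a full chain-reaction model must maintain across all complexes.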
Linear Model-Based Predictive Control of the LHC 1.8 K Cryogenic Loop
Blanco-Viñuela, E; De Prada-Moraga, C
1999-01-01
The LHC accelerator will employ 1800 superconducting magnets (for guidance and focusing of the particle beams) in a pressurized superfluid helium bath at 1.9 K. This temperature is a severely constrained control parameter in order to avoid the transition from the superconducting to the normal state. Cryogenic processes are difficult to regulate due to their highly non-linear physical parameters (heat capacity, thermal conductance, etc.) and undesirable peculiarities such as non-self-regulating behavior, inverse response, and variable dead time. To reduce the requirements on either temperature sensor or cryogenic system performance, various control strategies have been investigated on a reduced-scale LHC prototype built at CERN (String Test). Model Based Predictive Control (MBPC) is a regulation algorithm based on the explicit use of a process model to forecast the plant output over a certain prediction horizon. This predicted controlled variable is used in an on-line optimization procedure that minimizes an approp...
Directory of Open Access Journals (Sweden)
Mariana Santos Matos Cavalca
2012-01-01
Full Text Available One of the main advantages of predictive control approaches is the capability of dealing explicitly with constraints on the manipulated and output variables. However, if the predictive control formulation does not consider model uncertainties, then constraint satisfaction may be compromised. A solution to this inconvenience is to use robust model predictive control (RMPC) strategies based on linear matrix inequalities (LMIs). However, LMI-based RMPC formulations typically consider only symmetric constraints. This paper proposes a method based on pseudo-references to treat asymmetric output constraints in integrating SISO systems. Such a technique guarantees robust constraint satisfaction and convergence of the state to the desired equilibrium point. A case study using numerical simulation indicates that satisfactory results can be achieved.
Frehner, Marcel; Amschwand, Dominik; Gärtner-Roer, Isabelle
2016-04-01
Rockglaciers consist of unconsolidated rock fragments (silt/sand-rock boulders) with interstitial ice; hence their creep behavior (i.e., rheology) may deviate from the simple and well-known flow laws for pure ice. Here we constrain the non-linear viscous flow law that governs rockglacier creep based on geomorphological observations. We use the Murtèl rockglacier (upper Engadin valley, SE Switzerland) as a case study, for which high-resolution digital elevation models (DEM), time-lapse borehole deformation data, and geophysical soundings exist that reveal the exterior and interior architecture and dynamics of the landform. Rockglaciers often feature a prominent furrow-and-ridge topography. For the Murtèl rockglacier, Frehner et al. (2015) reproduced the wavelength, amplitude, and distribution of the furrow-and-ridge morphology using a linear viscous (Newtonian) flow model. Arenson et al. (2002) presented borehole deformation data, which highlight the basal shear zone at about 30 m depth and a curved deformation profile above the shear zone. Similarly, the furrow-and-ridge morphology also exhibits a curved geometry in map view. Hence, the surface morphology and the borehole deformation data together describe a curved 3D geometry, which is close to, but not quite, parabolic. We use a high-resolution DEM to quantify the curved geometry of the Murtèl furrow-and-ridge morphology. We then calculate theoretical 3D flow geometries using different non-linear viscous flow laws. By comparing them to the measured curved 3D geometry (i.e., both surface morphology and borehole deformation data), we can determine the most adequate flow law that fits the natural data best. Linear viscous models result in perfectly parabolic flow geometries; non-linear creep leads to localized deformation at the sides and bottom of the rockglacier while the deformation in the interior and at the top is less intense. In other words, non-linear creep results in non-parabolic flow geometries. Both the
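The linear-vs-non-linear distinction drawn here can be made concrete with the classic velocity profile of a power-law viscous material: for stress exponent n, the normalized velocity falls off as 1 − (y/H)^(n+1), parabolic for n = 1 (Newtonian) and increasingly plug-like for larger n. A small illustrative sketch, not the authors' model:

```python
def velocity_profile(y, H, n):
    """Normalized downslope velocity at depth-fraction y/H below the
    surface, for a power-law viscous material with stress exponent n
    deforming above a basal shear zone at depth H. n = 1 (Newtonian)
    gives a parabola; larger n localizes shear near the base/margins
    and flattens the interior, as described in the abstract."""
    return 1.0 - (y / H) ** (n + 1)

# Mid-depth velocities: the non-linear profile is much flatter ("plug-like")
newtonian = velocity_profile(0.5, 1.0, 1)  # 1 - 0.5**2 = 0.75
glen_like = velocity_profile(0.5, 1.0, 3)  # 1 - 0.5**4 = 0.9375
```

Comparing such theoretical profiles against the measured borehole deformation curve is, in essence, how the exponent n is constrained from the field data.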
Madarang, Krish J; Kang, Joo-Hyon
2014-06-01
Stormwater runoff has been identified as a source of pollution for the environment, especially for receiving waters. In order to quantify and manage the impacts of stormwater runoff on the environment, predictive mathematical models have been developed. Predictive tools such as regression models have been widely used to predict stormwater discharge characteristics. Storm event characteristics, such as antecedent dry days (ADD), have been related to response variables, such as pollutant loads and concentrations. However, whether ADD is an important variable for predicting stormwater discharge characteristics has remained controversial across studies. In this study, we examined the accuracy of general linear regression models in predicting discharge characteristics of roadway runoff. A total of 17 storm events were monitored in two highway segments located in Gwangju, Korea. Data from the monitoring were used to calibrate the United States Environmental Protection Agency's Storm Water Management Model (SWMM). The calibrated SWMM was simulated for 55 storm events, and the results for total suspended solid (TSS) discharge loads and event mean concentrations (EMCs) were extracted. From these data, linear regression models were developed. R(2) and p-values of the regression of ADD for both TSS loads and EMCs were investigated. Results showed that pollutant loads were better predicted than pollutant EMCs in the multiple regression models. Regression may not capture the true effect of site-specific characteristics, due to uncertainty in the data. Copyright © 2014 The Research Centre for Eco-Environmental Sciences, Chinese Academy of Sciences. Published by Elsevier B.V. All rights reserved.
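The core statistical step here is an ordinary least-squares regression of a discharge characteristic (e.g. TSS load) on a storm descriptor such as ADD, judged by R² and slope significance. A self-contained sketch on hypothetical (not the study's) numbers:

```python
def ols_fit(x, y):
    """Ordinary least squares for y = a + b*x; returns (a, b, R^2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1.0 - ss_res / ss_tot

# Hypothetical antecedent-dry-day values vs. per-event TSS loads (kg)
add_days = [1, 3, 5, 8, 12, 20]
tss_load = [2.1, 3.0, 3.8, 5.2, 6.9, 10.5]
a, b, r2 = ols_fit(add_days, tss_load)
```

A high R² with a positive slope would support ADD as a useful predictor of event loads; the study's point is that such fits tend to work better for loads than for EMCs.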
Directory of Open Access Journals (Sweden)
Luiz Augusto da Cruz Meleiro
2005-06-01
Full Text Available In this work a MIMO non-linear predictive controller was developed for an extractive alcoholic fermentation process. The internal model of the controller was represented by two MISO Functional Link Networks (FLNs), identified using simulated data generated from a deterministic mathematical model whose kinetic parameters were determined experimentally. The FLN structure presents as advantages fast training and guaranteed convergence, since the estimation of the weights is a linear optimization problem. Besides, the elimination of non-significant weights generates parsimonious models, which allows for fast execution in an MPC-based algorithm. The proposed algorithm showed good potential for the identification and control of non-linear processes.
Lee, Eunjee; Zhu, Hongtu; Kong, Dehan; Wang, Yalin; Giovanello, Kelly Sullivan; Ibrahim, Joseph G
2015-12-01
The aim of this paper is to develop a Bayesian functional linear Cox regression model (BFLCRM) with both functional and scalar covariates. This new development is motivated by establishing the likelihood of conversion to Alzheimer's disease (AD) in 346 patients with mild cognitive impairment (MCI) enrolled in the Alzheimer's Disease Neuroimaging Initiative 1 (ADNI-1) and the early markers of conversion. These 346 MCI patients were followed over 48 months, with 161 MCI participants progressing to AD at 48 months. The functional linear Cox regression model was used to establish that functional covariates including hippocampus surface morphology and scalar covariates including brain MRI volumes, cognitive performance (ADAS-Cog), and APOE status can accurately predict time to onset of AD. Posterior computation proceeds via an efficient Markov chain Monte Carlo algorithm. A simulation study is performed to evaluate the finite sample performance of BFLCRM.
The Application of the Law of Large Numbers to Predict the Amount of Actual Loss in Life Insurance
Tinungki, Georgina Maria
2018-03-01
The law of large numbers is a statistical concept that calculates the average number of events or risks in a sample or population to predict something. The larger the population considered, the more accurate the predictions. In the field of insurance, the law of large numbers is used to predict the risk of loss or claims of some participants so that the premium can be calculated appropriately. For example, if on average one of every 100 insurance participants files an accident claim, then the premiums of 100 participants should be able to provide the Sum Assured for at least one accident claim. The larger the number of insurance participants considered, the more precise the prediction of claims and the calculation of the premium. Life insurance, as a tool for spreading risk, can only work if a life insurance company is able to bear the same risk in large numbers. Here applies what is called the law of large numbers. The law of large numbers states that if the amount of exposure to losses increases, then the predicted loss will be closer to the actual loss. The use of the law of large numbers allows the number of losses to be predicted better.
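The claim-frequency argument above is easy to verify by simulation: the observed claim rate of a large pool converges to the true claim probability, so the premium computed from it stabilizes. A sketch with an assumed 1-in-100 claim probability (illustrative, not from the paper):

```python
import random

def observed_claim_rate(n_policies, p_claim, seed=1):
    """Simulate n_policies independent insureds, each filing a claim
    with probability p_claim; return the observed claim frequency."""
    rng = random.Random(seed)
    claims = sum(1 for _ in range(n_policies) if rng.random() < p_claim)
    return claims / n_policies

P_TRUE = 0.01  # assumed 1-in-100 claim probability (illustrative)
small_pool = observed_claim_rate(100, P_TRUE)        # noisy estimate
large_pool = observed_claim_rate(1_000_000, P_TRUE)  # close to 0.01

# Pure premium per policy for a 100,000 Sum Assured, from the large pool
pure_premium = 100_000 * large_pool
```

With a million policies the sampling error of the claim rate is of order 10⁻⁴, so the pure premium lands very close to the expected-loss value of 1,000 per policy, which is exactly the "predicted loss approaches actual loss" statement of the abstract.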
Sun, Lei; Jin, Hong-Yu; Tian, Run-Tao; Wang, Ming-Juan; Liu, Li-Na; Ye, Liu-Ping; Zuo, Tian-Tian; Ma, Shuang-Cheng
2017-01-01
Analysis of related substances in pharmaceutical chemicals and of multiple components in traditional Chinese medicines requires a large number of reference substances to identify the chromatographic peaks accurately. But reference substances are costly. Thus, the relative retention (RR) method has been widely adopted in pharmacopoeias and the literature for characterizing the HPLC behavior of reference substances that are unavailable. The problem is that the RR is difficult to reproduce on different columns, owing to the error between the measured retention time (tR) and the predicted tR in some cases. Therefore, it is useful to develop an alternative, simple method for predicting tR accurately. In the present study, based on the thermodynamic theory of HPLC, a method named linear calibration using two reference substances (LCTRS) was proposed. The method includes three steps: two-point prediction, validation by multiple-point regression, and sequential matching. The tR of compounds on an HPLC column can be calculated from standard retention times and a linear relationship. The method was validated with two medicines on 30 columns. It was demonstrated that the LCTRS method is simple, yet more accurate and more robust across different HPLC columns than the RR method. Hence quality standards using the LCTRS method are easy to reproduce in different laboratories, with a lower cost of reference substances.
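The two-point step of LCTRS amounts to fitting a straight line t_new = a + b·t_standard through the retention times of the two reference substances on the standard and new columns, then mapping any other compound's standard retention time through it. A sketch with hypothetical retention times (minutes):

```python
def lctrs_calibrate(tR_standard_refs, tR_new_refs):
    """Two-point LCTRS step: fit t_new = a + b * t_standard from two
    reference substances whose retention times are known on both the
    standard column and the new column; return the predictor."""
    (x1, x2), (y1, y2) = tR_standard_refs, tR_new_refs
    b = (y2 - y1) / (x2 - x1)
    a = y1 - b * x1
    return lambda t_std: a + b * t_std

# Hypothetical retention times (min) of two reference substances on
# the standard column (5.2, 14.8) and on a new column (5.9, 16.1)
predict_tR = lctrs_calibrate((5.2, 14.8), (5.9, 16.1))
# Predicted tR on the new column for a compound eluting at 9.7 min
t_new = predict_tR(9.7)
```

The validation step described in the abstract would then regress many such predicted-vs-measured pairs to confirm the linearity holds on the new column.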
Adachi, Daiki; Nishiguchi, Shu; Fukutani, Naoto; Hotta, Takayuki; Tashiro, Yuto; Morino, Saori; Shirooka, Hidehiko; Nozaki, Yuma; Hirata, Hinako; Yamaguchi, Moe; Yorozu, Ayanori; Takahashi, Masaki; Aoyama, Tomoki
2017-05-01
The purpose of this study was to investigate which spatial and temporal parameters of the Timed Up and Go (TUG) test are associated with motor function in elderly individuals. This study included 99 community-dwelling women aged 72.9 ± 6.3 years. Step length, step width, single support time, variability of the aforementioned parameters, gait velocity, cadence, reaction time from the starting signal to the first step, and the minimum distance between the foot and a marker placed 3 in front of the chair were measured using our analysis system. The 10-m walk test, five times sit-to-stand (FTSTS) test, and one-leg standing (OLS) test were used to assess motor function. Stepwise multivariate linear regression analysis was used to determine which TUG test parameters were associated with each motor function test. Finally, we calculated a predictive model for each motor function test using each regression coefficient. In stepwise linear regression analysis, step length and cadence were significantly associated with the 10-m walk test, FTSTS test, and OLS test. Reaction time was associated with the FTSTS test, and step width was associated with the OLS test. Each predictive model showed a strong correlation with the 10-m walk test and OLS test. Moreover, the TUG test time, regarded as reflecting lower extremity function and mobility, has strong predictive ability for each motor function test. Copyright © 2017 The Japanese Orthopaedic Association. Published by Elsevier B.V. All rights reserved.
Chen, Ruoying; Zhang, Zhiwang; Wu, Di; Zhang, Peng; Zhang, Xinyang; Wang, Yong; Shi, Yong
2011-01-21
Protein-protein interactions are fundamentally important in many biological processes, and there is a pressing need to understand the principles of protein-protein interactions. Mutagenesis studies have found that only a small fraction of surface residues, known as hot spots, are responsible for the physical binding in protein complexes. However, revealing hot spots by mutagenesis experiments is usually time-consuming and expensive. In order to complement the experimental efforts, we propose a new computational approach in this paper to predict hot spots. Our method, Rough Set-based Multiple Criteria Linear Programming (RS-MCLP), integrates rough set theory and multiple criteria linear programming to choose dominant features and computationally predict hot spots. Our approach is benchmarked on a dataset of 904 alanine-mutated residues, and the results show that our RS-MCLP method performs better than other methods, e.g., MCLP, Decision Tree, Bayes Net, and the existing HotSprint database. In addition, we reveal several biological insights based on our analysis. We find that four features (the change of accessible surface area, the percentage change of accessible surface area, the size of a residue, and atomic contacts) are critical in predicting hot spots. Furthermore, by analyzing the distribution of amino acids we find that three residues (Tyr, Trp, and Phe) are abundant in hot spots. Copyright © 2010 Elsevier Ltd. All rights reserved.
Drzewiecki, Wojciech
2016-12-01
In this work nine non-linear regression models were compared for sub-pixel impervious surface area mapping from Landsat images. The comparison was done in three study areas, both for the accuracy of imperviousness coverage evaluation at individual points in time and for the accuracy of imperviousness change assessment. The performance of individual machine learning algorithms (Cubist, Random Forest, stochastic gradient boosting of regression trees, k-nearest neighbors regression, random k-nearest neighbors regression, Multivariate Adaptive Regression Splines, averaged neural networks, and support vector machines with polynomial and radial kernels) was also compared with the performance of heterogeneous model ensembles constructed from the best models trained using particular techniques. The results proved that in the case of sub-pixel evaluation the most accurate prediction of change may not necessarily be based on the most accurate individual assessments. When single methods are considered, based on the obtained results the Cubist algorithm may be advised for Landsat-based mapping of imperviousness for single dates. However, Random Forest may be endorsed when the most reliable evaluation of imperviousness change is the primary goal. It gave lower accuracies for individual assessments, but better prediction of change due to more correlated errors of the individual predictions. Heterogeneous model ensembles performed, for individual time point assessments, at least as well as the best individual models. In the case of imperviousness change assessment the ensembles always outperformed single-model approaches. This means that it is possible to improve the accuracy of sub-pixel imperviousness change assessment using ensembles of heterogeneous non-linear regression models.
Predicting recycling behaviour: Comparison of a linear regression model and a fuzzy logic model.
Vesely, Stepan; Klöckner, Christian A; Dohnal, Mirko
2016-03-01
In this paper we demonstrate that fuzzy logic can provide a better tool for predicting recycling behaviour than the customarily used linear regression. To show this, we take a set of empirical data on recycling behaviour (N=664), which we randomly divide into two halves. The first half is used to estimate a linear regression model of recycling behaviour, and to develop a fuzzy logic model of recycling behaviour. As the first comparison, the fit of both models to the data included in estimation of the models (N=332) is evaluated. As the second comparison, predictive accuracy of both models for "new" cases (hold-out data not included in building the models, N=332) is assessed. In both cases, the fuzzy logic model significantly outperforms the regression model in terms of fit. To conclude, when accurate predictions of recycling and possibly other environmental behaviours are needed, fuzzy logic modelling seems to be a promising technique. Copyright © 2015 Elsevier Ltd. All rights reserved.
Smith, Paul F; Ganesh, Siva; Liu, Ping
2013-10-30
Regression is a common statistical tool for prediction in neuroscience. However, linear regression is by far the most common form of regression used, with regression trees receiving comparatively little attention. In this study, the results of conventional multiple linear regression (MLR) were compared with those of random forest regression (RFR) in predicting the concentrations of 9 neurochemicals in the vestibular nucleus complex and cerebellum that are part of the l-arginine biochemical pathway (agmatine, putrescine, spermidine, spermine, l-arginine, l-ornithine, l-citrulline, glutamate, and γ-aminobutyric acid (GABA)). The R(2) values for the MLRs were higher than the proportion-of-variance-explained values for the RFRs: 6/9 of them were ≥ 0.70, compared to 4/9 for the RFRs. Even the variables that had the lowest R(2) values for the MLRs, e.g. ornithine (0.50) and glutamate (0.61), had much lower proportion-of-variance-explained values for the RFRs (0.27 and 0.49, respectively). The RSE values for the MLRs were lower than those for the RFRs in all but two cases. For this data set, MLR appeared to be superior to RFR in terms of explanatory value and error. This result suggests that MLR may have advantages over RFR for prediction in neuroscience with this kind of data set, but that RFR can still have good predictive value in some cases. Copyright © 2013 Elsevier B.V. All rights reserved.
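The MLR side of the comparison above is an ordinary least-squares fit whose R(2) is reported against RFR's proportion of variance explained. A self-contained sketch for two predictors, solving the normal equations by Cramer's rule on synthetic (not the study's) data:

```python
def mlr_two_predictors(X, y):
    """Multiple linear regression y = b0 + b1*x1 + b2*x2 via the
    normal equations (Cramer's rule on the 3x3 system); returns the
    coefficients and the in-sample R^2."""
    n = len(y)
    cols = [[1.0] * n, [r[0] for r in X], [r[1] for r in X]]
    A = [[sum(u * v for u, v in zip(cols[i], cols[j])) for j in range(3)]
         for i in range(3)]
    t = [sum(c * yi for c, yi in zip(cols[i], y)) for i in range(3)]

    def det3(M):
        return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
                - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
                + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

    D = det3(A)
    beta = []
    for j in range(3):
        Aj = [row[:] for row in A]
        for i in range(3):
            Aj[i][j] = t[i]
        beta.append(det3(Aj) / D)
    yhat = [beta[0] + beta[1] * r[0] + beta[2] * r[1] for r in X]
    ybar = sum(y) / n
    r2 = 1.0 - (sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
                / sum((yi - ybar) ** 2 for yi in y))
    return beta, r2

# Synthetic data generated from y = 2 + 0.5*x1 - 0.3*x2 (exactly linear)
X = [(1, 2), (2, 1), (3, 5), (4, 3), (5, 8), (6, 2)]
y = [2 + 0.5 * x1 - 0.3 * x2 for x1, x2 in X]
beta, r2 = mlr_two_predictors(X, y)
```

On exactly linear data the fit recovers the generating coefficients and R² ≈ 1; on noisy neurochemical data the same R² is what the study compares against RFR's out-of-bag variance explained.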
Towards Automated Binding Affinity Prediction Using an Iterative Linear Interaction Energy Approach
Directory of Open Access Journals (Sweden)
C. Ruben Vosmeer
2014-01-01
Full Text Available Binding affinity prediction of potential drugs to target and off-target proteins is an essential asset in drug development. These predictions require the calculation of binding free energies. In such calculations, it is a major challenge to properly account for both the dynamic nature of the protein and the possible variety of ligand-binding orientations, while keeping computational costs tractable. Recently, an iterative Linear Interaction Energy (LIE) approach was introduced, in which results from multiple simulations of a protein-ligand complex are combined into a single binding free energy using a Boltzmann-weighting-based scheme. This method was shown to reach experimental accuracy for flexible proteins while retaining the computational efficiency of the general LIE approach. Here, we show that the iterative LIE approach can be used to predict binding affinities in an automated way. A workflow was designed using preselected protein conformations, automated ligand docking and clustering, and a (semi-)automated molecular dynamics simulation setup. We show that using this workflow, binding affinities of aryloxypropanolamines to the malleable Cytochrome P450 2D6 enzyme can be predicted without a priori knowledge of the dominant protein-ligand conformations. In addition, we provide an outlook for an approach to assess the quality of the LIE predictions, based on simulation outcomes only.
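The Boltzmann-weighting idea can be sketched in a few lines: per-simulation free-energy estimates are averaged with weights exp(−ΔG/RT), so poses that predict stronger binding dominate. This is a generic weighting sketch under that assumption, not necessarily the exact scheme of the cited iterative LIE method:

```python
import math

R_KJ = 8.314e-3  # gas constant, kJ/(mol*K)

def boltzmann_combine(dG_estimates, T=300.0):
    """Combine per-simulation binding free-energy estimates (kJ/mol)
    into one value using Boltzmann weights w_i = exp(-dG_i / RT):
    simulations predicting stronger binding (more negative dG)
    dominate the weighted average."""
    w = [math.exp(-g / (R_KJ * T)) for g in dG_estimates]
    Z = sum(w)
    return sum(wi * gi for wi, gi in zip(w, dG_estimates)) / Z

# Estimates from three hypothetical docked/clustered ligand poses;
# the result is pulled toward the strongest-binding (-28 kJ/mol) pose
dG_combined = boltzmann_combine([-20.0, -28.0, -15.0])
```

A single-simulation input trivially returns its own estimate, which is the sanity check for the weighting.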
Linear genetic programming application for successive-station monthly streamflow prediction
Danandeh Mehr, Ali; Kahya, Ercan; Yerdelen, Cahit
2014-09-01
In recent decades, artificial intelligence (AI) techniques have emerged as a branch of computer science for modeling a wide range of hydrological phenomena. A number of studies have compared these techniques in order to find more effective approaches in terms of accuracy and applicability. In this study, we examined the ability of the linear genetic programming (LGP) technique to model the successive-station monthly streamflow process, as an applied alternative for streamflow prediction. A comparative efficiency study between LGP and three different artificial neural network algorithms, namely feed-forward back propagation (FFBP), generalized regression neural networks (GRNN), and radial basis function (RBF) networks, has also been presented in this study. To this aim, we first put forward six different successive-station monthly streamflow prediction scenarios subjected to training by LGP and FFBP using the field data recorded at two gauging stations on the Çoruh River, Turkey. Based on Nash-Sutcliffe and root mean squared error measures, we then compared the efficiency of these techniques and selected the best prediction scenario. Eventually, the GRNN and RBF algorithms were utilized to restructure the selected scenario and to compare with the corresponding FFBP and LGP results. Our results indicated the promising role of LGP for successive-station monthly streamflow prediction, providing more accurate results than those of all the ANN algorithms. We found an explicit LGP-based expression, evolved using only the basic arithmetic functions, to be the best prediction model for the river; it uses the records of both the target and upstream stations.
The Dangers of Estimating V˙O2max Using Linear, Nonexercise Prediction Models.
Nevill, Alan M; Cooke, Carlton B
2017-05-01
This study aimed to compare the accuracy and goodness of fit of two competing models (linear vs allometric) when estimating V˙O2max (mL·kg⁻¹·min⁻¹) using nonexercise prediction models. The two competing models were fitted to the V˙O2max (mL·kg⁻¹·min⁻¹) data taken from two previously published studies. Study 1 (the Allied Dunbar National Fitness Survey) recruited 1732 randomly selected healthy participants, 16 yr and older, from 30 English parliamentary constituencies. Estimates of V˙O2max were obtained using a progressive incremental test on a motorized treadmill. In study 2, maximal oxygen uptake was measured directly during a fatigue-limited treadmill test in older men (n = 152) and women (n = 146) 55 to 86 yr old. In both studies, the quality of fit associated with estimating V˙O2max (mL·kg⁻¹·min⁻¹) was superior using allometric rather than linear (additive) models based on all criteria (R², maximum log-likelihood, and Akaike information criteria). Results suggest that linear models will systematically overestimate V˙O2max for participants in their 20s and underestimate V˙O2max for participants in their 60s and older. The residuals saved from the linear models were neither normally distributed nor independent of the predicted values or age. This probably explains the absence of a key quadratic age term in the linear models, crucially identified using allometric models. Not only does the curvilinear age decline within an exponential function follow a more realistic age decline (the right-hand side of a bell-shaped curve), but the allometric models identified either a stature-to-body-mass ratio (study 1) or a fat-free-mass-to-body-mass ratio (study 2), both associated with leanness, when estimating V˙O2max. Adopting allometric models will provide more accurate predictions of V˙O2max (mL·kg⁻¹·min⁻¹) using plausible, biologically sound, and interpretable models.
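The key technical contrast is that an allometric model y = a·xᵇ becomes linear after a log transform, so it can be fitted by ordinary least squares on log-transformed data. A minimal sketch with synthetic data from a known power law (the coefficients a = 12, b = 0.75 are illustrative, not the studies' estimates):

```python
import math

def fit_power_law(x, y):
    """Fit the allometric model y = a * x**b by least squares on the
    log-transformed data: log y = log a + b * log x."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))
    a = math.exp(my - b * mx)
    return a, b

# Synthetic VO2max-vs-body-mass data generated from y = 12 * m**0.75
mass = [50.0, 60.0, 70.0, 80.0, 90.0]
vo2max = [12.0 * m ** 0.75 for m in mass]
a, b = fit_power_law(mass, vo2max)  # recovers a = 12, b = 0.75
```

The full models in the paper multiply in further terms (e.g. an exponential age decline), but each factor stays additive on the log scale, which is why the allometric form can capture curvature that a purely additive linear model misses.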
International Nuclear Information System (INIS)
Montoya Andrade, Dan-El; Villa Jaén, Antonio de la; García Santana, Agustín
2014-01-01
Highlights: • We considered the linear generator copper losses in the proposed MPC strategy. • We maximized the power transferred to the generator side power converter. • The proposed MPC increases the useful average power injected into the grid. • The stress level of the PTO system can be reduced by the proposed MPC. - Abstract: The amount of energy that a wave energy converter can extract depends strongly on the control strategy applied to the power take-off system. It is well known that, ideally, the reactive control allows for maximum energy extraction from waves. However, the reactive control is intrinsically noncausal in practice and requires some kind of causal approach to be applied. Moreover, this strategy does not consider physical constraints and this could be a problem because the system could achieve unacceptable dynamic values. These, and other control techniques have focused on the wave energy extraction problem in order to maximize the energy absorbed by the power take-off device without considering the possible losses in intermediate devices. In this sense, a reactive control that considers the linear generator copper losses has been recently proposed to increase the useful power injected into the grid. Among the control techniques that have emerged recently, the model predictive control represents a promising strategy. This approach performs an optimization process on a time prediction horizon incorporating dynamic constraints associated with the physical features of the power take-off system. This paper proposes a model predictive control technique that considers the copper losses in the control optimization process of point absorbers with direct drive linear generators. This proposal makes the most of reactive control as it considers the copper losses, and it makes the most of the model predictive control, as it considers the system constraints. This means that the useful power transferred from the linear generator to the power
Prediction of linear B-cell epitopes of hepatitis C virus for vaccine development
2015-01-01
Background High genetic heterogeneity in the hepatitis C virus (HCV) is the major challenge in the development of an effective vaccine. Existing studies toward developing HCV vaccines have mainly focused on the T-cell immune response. However, identification of linear B-cell epitopes that can stimulate a B-cell response is one of the major tasks of peptide-based vaccine development. Owing to the variability in B-cell epitope length, the prediction of B-cell epitopes is much more complex than that of T-cell epitopes. Furthermore, the motifs of linear B-cell epitopes differ considerably between pathogens (e.g., HCV and hepatitis B virus). To cope with this challenge, this work proposes an HCV-customized, sequence-based prediction method to identify B-cell epitopes of HCV. Results This work establishes an experimentally verified dataset of the B-cell response to HCV, consisting of 774 linear B-cell epitopes and 774 non-B-cell epitopes from the Immune Epitope Database. An interpretable rule-mining system for B-cell epitopes (IRMS-BE) is proposed to select informative physicochemical properties (PCPs) and then extract several if-then rule-based pieces of knowledge for identifying B-cell epitopes. A web server, Bcell-HCV, was implemented using an SVM with the 34 informative PCPs, which achieved a training accuracy of 79.7% and a test accuracy of 70.7%, better than the SVM-based methods for identifying B-cell epitopes of HCV and the two general-purpose methods. This work performs advanced analysis of the 34 informative properties, and the results indicate that the most effective property is the alpha-helix structure of epitopes, which influences the connection between host cells and the E2 proteins of HCV. Furthermore, 12 interpretable rules are acquired from the top five PCPs and achieve a sensitivity of 75.6% and a specificity of 71.3%. Finally, a conserved promising vaccine candidate, PDREMVLYQE, is identified for inclusion in a vaccine against HCV. Conclusions This work
Directory of Open Access Journals (Sweden)
Longge Zhang
2013-01-01
Full Text Available Two automatic robust model predictive control strategies are presented for uncertain polytopic linear plants with input and output constraints. First, a sequence of nested, asymptotically stable ellipsoids of geometrically decreasing size, together with the corresponding feedback controllers, is constructed offline. Then, in the first strategy, the feedback controllers are automatically selected online with the receding horizon. Finally, a modified automatic offline robust MPC approach is constructed to improve the closed-loop system's performance. The new proposed strategies not only reduce conservatism but also decrease the online computation. Numerical examples are given to illustrate their effectiveness.
Morales, Esteban; de Leon, John Mark S; Abdollahi, Niloufar; Yu, Fei; Nouri-Mahdavi, Kouros; Caprioli, Joseph
2016-03-01
The study was conducted to evaluate threshold smoothing algorithms to enhance prediction of the rates of visual field (VF) worsening in glaucoma. We studied 798 patients with primary open-angle glaucoma and 6 or more years of follow-up who underwent 8 or more VF examinations. Thresholds at each VF location for the first 4 years or first half of the follow-up time (whichever was greater) were smoothed with clusters defined by the nearest neighbor (NN), Garway-Heath, Glaucoma Hemifield Test (GHT), and weighting by the correlation of rates at all other VF locations. Thresholds were regressed with a pointwise exponential regression (PER) model and a pointwise linear regression (PLR) model. Smaller root mean square error (RMSE) values of the differences between the observed and the predicted thresholds at last two follow-ups indicated better model predictions. The mean (SD) follow-up times for the smoothing and prediction phase were 5.3 (1.5) and 10.5 (3.9) years. The mean RMSE values for the PER and PLR models were unsmoothed data, 6.09 and 6.55; NN, 3.40 and 3.42; Garway-Heath, 3.47 and 3.48; GHT, 3.57 and 3.74; and correlation of rates, 3.59 and 3.64. Smoothed VF data predicted better than unsmoothed data. Nearest neighbor provided the best predictions; PER also predicted consistently more accurately than PLR. Smoothing algorithms should be used when forecasting VF results with PER or PLR. The application of smoothing algorithms on VF data can improve forecasting in VF points to assist in treatment decisions.
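The pointwise exponential regression (PER) used above fits log(threshold) against time at each VF location and extrapolates; PLR does the same on the raw thresholds. A minimal PER sketch on hypothetical thresholds (dB) at one location, decaying roughly 5% per year:

```python
import math

def per_forecast(times, thresholds, t_future):
    """Pointwise exponential regression (PER): fit
    log(threshold) = a + b*t by least squares, then extrapolate
    the threshold expected at t_future."""
    ly = [math.log(v) for v in thresholds]
    n = len(times)
    mt, my = sum(times) / n, sum(ly) / n
    b = (sum((t - mt) * (v - my) for t, v in zip(times, ly))
         / sum((t - mt) ** 2 for t in times))
    a = my - b * mt
    return math.exp(a + b * t_future)

# Hypothetical thresholds (dB) at one VF location over five yearly exams;
# forecast the value expected at year 6
pred_db = per_forecast([0, 1, 2, 3, 4], [30.0, 28.5, 27.1, 25.7, 24.4], 6.0)
```

In the study, the smoothing step replaces each location's raw thresholds with cluster-weighted values (e.g. nearest-neighbor averages) before this per-location fit, which is what reduces the RMSE of the forecasts.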
MCKissick, Burnell T. (Technical Monitor); Plassman, Gerald E.; Mall, Gerald H.; Quagliano, John R.
2005-01-01
Linear multivariable regression models for predicting day and night Eddy Dissipation Rate (EDR) from available meteorological data sources are defined and validated. Model definition is based on a combination of 1997-2000 Dallas/Fort Worth (DFW) data sources, EDR from Aircraft Vortex Spacing System (AVOSS) deployment data, and regression variables primarily from corresponding Automated Surface Observation System (ASOS) data. Model validation is accomplished through EDR predictions on a similar combination of 1994-1995 Memphis (MEM) AVOSS and ASOS data. Model forms include an intercept plus a single term of fixed optimal power for each of these regression variables: 30-minute forward-averaged mean and variance of near-surface wind speed and temperature, variance of wind direction, and a discrete cloud cover metric. Distinct day and night models, regressing on EDR and the natural log of EDR respectively, yield the best performance and avoid model discontinuity over day/night data boundaries.
Flexible non-linear predictive models for large-scale wind turbine diagnostics
DEFF Research Database (Denmark)
Bach-Andersen, Martin; Rømer-Odgaard, Bo; Winther, Ole
2017-01-01
We demonstrate how flexible non-linear models can provide accurate and robust predictions on turbine component temperature sensor data using data-driven principles and only a minimum of system modeling. The merits of different model architectures are evaluated using data from a large set...... of turbines operating under diverse conditions. We then go on to test the predictive models in a diagnostic setting, where the output of the models are used to detect mechanical faults in rotor bearings. Using retrospective data from 22 actual rotor bearing failures, the fault detection performance...... of the models are quantified using a structured framework that provides the metrics required for evaluating the performance in a fleet wide monitoring setup. It is demonstrated that faults are identified with high accuracy up to 45 days before a warning from the hard-threshold warning system....
Dynamics and control of quadcopter using linear model predictive control approach
Islam, M.; Okasha, M.; Idres, M. M.
2017-12-01
This paper investigates the dynamics and control of a quadcopter using the Model Predictive Control (MPC) approach. The dynamic model is of high fidelity and nonlinear, with six degrees of freedom that include disturbances and model uncertainties. The control approach is developed based on MPC to track different reference trajectories, ranging from simple circular to complex helical trajectories. In this control technique, a linearized model is derived and the receding horizon method is applied to generate the optimal control sequence. Although MPC is computationally expensive, it is highly effective at dealing with different types of nonlinearities and constraints, such as actuator saturation and model uncertainties. The MPC parameters (control and prediction horizons) are selected by a trial-and-error approach. Several simulation scenarios are performed to examine and evaluate the performance of the proposed control approach in the MATLAB and Simulink environment. Simulation results show that this control approach is highly effective at tracking a given reference trajectory.
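The receding-horizon loop described above can be sketched on a far simpler plant than the quadcopter: a one-dimensional double integrator tracking a position reference. This is a hedged illustration of unconstrained linear MPC, not the paper's model — the horizon N, the weights q and r, and the plant itself are all assumptions chosen for clarity.

```python
def solve(Amat, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(Amat)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def mpc_step(p, v, p_ref, dt=0.1, N=10, q=1.0, r=0.01):
    """One receding-horizon step for the double integrator p'' = u:
    minimise sum_k q*(p_k - p_ref)^2 + r*u_k^2 over the horizon."""
    # Exact discrete prediction: p_k = p + k*dt*v + sum_{j<k} dt^2*(k - j - 0.5)*u_j
    M = [[dt * dt * (k - j - 0.5) if j < k else 0.0 for j in range(N)]
         for k in range(1, N + 1)]
    d = [p_ref - p - k * dt * v for k in range(1, N + 1)]
    # Unconstrained QP -> normal equations: (q*M'M + r*I) u = q*M'd
    H = [[q * sum(M[k][i] * M[k][j] for k in range(N)) + (r if i == j else 0.0)
          for j in range(N)] for i in range(N)]
    g = [q * sum(M[k][i] * d[k] for k in range(N)) for i in range(N)]
    return solve(H, g)[0]          # apply only the first control input

def simulate(steps=150):
    p, v, dt = 0.0, 0.0, 0.1       # illustrative initial state (assumption)
    for _ in range(steps):
        u = mpc_step(p, v, p_ref=1.0)
        p, v = p + dt * v + 0.5 * dt * dt * u, v + dt * u
    return p, v
```

At each step only the first element of the optimal control sequence is applied and the problem is re-solved from the new state — the defining feature of the receding-horizon method.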
TBM performance prediction in Yucca Mountain welded tuff from linear cutter tests
International Nuclear Information System (INIS)
Gertsch, R.; Ozdemir, L.; Gertsch, L.
1992-01-01
This paper discusses performance predictions that were developed for tunnel boring machines operating in welded tuff for the construction of the experimental study facility and the potential nuclear waste repository at Yucca Mountain. The predictions were based on test data obtained from an extensive series of linear cutting tests performed on samples of Topopah Spring welded tuff from the Yucca Mountain Project site. Using the cutter force, spacing, and penetration data from the experimental program, the thrust, torque, power, and rate of penetration were estimated for a 25 ft diameter tunnel boring machine (TBM) operating in welded tuff. The results show that the Topopah Spring welded tuff (TSw2) can be excavated at relatively high rates of advance with state-of-the-art TBMs. The results also show, however, that the TBM torque and power requirements will be higher than estimated based on rock physical properties and past tunneling experience in rock formations of similar strength.
Uca; Toriman, Ekhwan; Jaafar, Othman; Maru, Rosmini; Arfan, Amal; Saleh Ahmar, Ansari
2018-01-01
Prediction of suspended sediment discharge in a catchment area is very important because it can be used to evaluate erosion hazard, to support the management of water resources, water quality, and hydrology projects (dams, reservoirs, and irrigation), and to determine the extent of the damage that has occurred in the catchment. Multiple linear regression analysis and artificial neural networks can be used to predict the amount of daily suspended sediment discharge. The regression analysis uses the least squares method, whereas the artificial neural networks use a Radial Basis Function (RBF) network and feedforward multilayer perceptrons with three learning algorithms, namely Levenberg-Marquardt (LM), Scaled Conjugate Gradient (SCG), and the Broyden-Fletcher-Goldfarb-Shanno (BFGS) Quasi-Newton method. The number of neurons in the hidden layer ranges from three to sixteen, while the output layer has a single neuron because there is only one output target. In terms of the mean absolute error (MAE), root mean square error (RMSE), coefficient of determination (R2), and coefficient of efficiency (CE), the multiple linear regression (MLRg) Model 2 (with 6 independent input variables) has the lowest MAE and RMSE (0.0000002 and 13.6039) and the highest R2 and CE (0.9971 and 0.9971). Compared with the LM, SCG, and RBF models, the BFGS model with structure 3-7-1 is more accurate for predicting suspended sediment discharge in the Jenderam catchment. Its performance in the testing process is the best: the MAE and RMSE (13.5769 and 17.9011) are the smallest, while the R2 and CE (0.9999 and 0.9998) are the highest compared with the other BFGS Quasi-Newton models (6-3-1, 9-10-1, and 12-12-1). Based on the performance statistics, the MLRg, LM, SCG, BFGS, and RBF models are suitable and accurate for modeling the non-linear, complex behavior of suspended sediment responses to rainfall, water depth, and discharge. In the comparison between the artificial neural networks (ANN) and MLRg, the MLRg Model 2 accurately predicts suspended sediment discharge (kg
International Nuclear Information System (INIS)
Lima, M.L.; Mignaco, J.A.
1985-01-01
It is shown that the rational power law potentials in the two-body radial Schrödinger equations admit a systematic treatment available from the classical theory of ordinary linear differential equations of the second order. The resulting potentials come into families evolved from equations having a fixed number of elementary regular singularities. As a consequence, relations are found and discussed among the several potentials in a family. (Author) [pt
International Nuclear Information System (INIS)
Lima, M.L.; Mignaco, J.A.
1985-01-01
It is shown that the rational power law potentials in the two-body radial Schrödinger equation admit a systematic treatment available from the classical theory of ordinary linear differential equations of the second order. The admissible potentials come into families evolved from equations having a fixed number of elementary singularities. As a consequence, relations are found and discussed among the several potentials in a family. (Author) [pt
Wilson, S. R.; Close, M. E.; Abraham, P.
2018-01-01
Diffuse nitrate losses from agricultural land pollute groundwater resources worldwide, but can be attenuated under reducing subsurface conditions. In New Zealand, the ability to predict where groundwater denitrification occurs is important for understanding the linkage between land use and discharges of nitrate-bearing groundwater to streams. This study assesses the application of linear discriminant analysis (LDA) for predicting groundwater redox status for Southland, a major dairy farming region in New Zealand. Data cases were developed by assigning a redox status to samples derived from a regional groundwater quality database. Pre-existing regional-scale geospatial databases were used as training variables for the discriminant functions. The predictive accuracy of the discriminant functions was slightly improved by optimising the thresholds between sample depth classes. The models predict 23% of the region as being reducing at shallow depths below the water table and in low-permeability clastic sediments. The coastal plains are an area of widespread groundwater discharge, and the soil and hydrology characteristics require the land to be artificially drained to render it suitable for farming. For the improvement of water quality in coastal areas, it is therefore important that land and water management efforts focus on understanding the hydrological bypassing that may occur via artificial drainage systems.
Energy Technology Data Exchange (ETDEWEB)
Hong Junjie, E-mail: hongjjie@mail.sysu.edu.cn [School of Engineering, Sun Yat-Sen University, Guangzhou 510006 (China); Li Liyi, E-mail: liliyi@hit.edu.cn [Dept. Electrical Engineering, Harbin Institute of Technology, Harbin 150000 (China); Zong Zhijian; Liu Zhongtu [School of Engineering, Sun Yat-Sen University, Guangzhou 510006 (China)
2011-10-15
Highlights: → The structure of the permanent magnet linear synchronous motor (SW-PMLSM) is new. → A new current control method, CEVPC, is employed in this motor. → The sectional power supply method is different from the others and effective. → The performance degrades under voltage and current limitations. - Abstract: To achieve features such as greater thrust density and higher efficiency without reducing thrust stability, this paper proposes a section winding permanent magnet linear synchronous motor (SW-PMLSM), whose iron core is continuous, whereas the winding is divided. The discrete system model of the motor is derived. With the definition of the current error vector and selection of the value function, the theory of the current error vector based prediction control (CEVPC) for the motor currents is explained clearly. According to the winding section feature, the motion region of the mover is divided into five zones, in which the implementation of the current predictive control method is proposed. Finally, the experimental platform is constructed and experiments are carried out. The results show that the current control has good dynamic response, and the thrust on the mover remains essentially constant.
A review of model predictive control: moving from linear to nonlinear design methods
International Nuclear Information System (INIS)
Nandong, J.; Samyudia, Y.; Tade, M.O.
2006-01-01
Linear model predictive control (LMPC) is now considered an industrial control standard in the process industry. Its extension to nonlinear cases, however, has not yet gained wide acceptance for many reasons, e.g. the excessively heavy computational load and effort, which prevent its practical implementation in real-time control. The application of nonlinear MPC (NMPC) is advantageous for processes with strong nonlinearity or when the operating points are frequently moved from one set point to another due to, for instance, changes in market demands. Much effort has been dedicated towards improving the computational efficiency of NMPC as well as its stability analysis. This paper provides a review of alternative ways of extending linear MPC to the nonlinear case. We also highlight the critical issues pertinent to the applications of NMPC and discuss possible solutions to address these issues. In addition, we outline the future research trend in the area of model predictive control by emphasizing the potential applications of multi-scale process models within NMPC.
Using NCAP to predict RFI effects in linear bipolar integrated circuits
Fang, T.-F.; Whalen, J. J.; Chen, G. K. C.
1980-11-01
Applications of the Nonlinear Circuit Analysis Program (NCAP) to calculate RFI effects in electronic circuits containing discrete semiconductor devices have been reported upon previously. The objective of this paper is to demonstrate that the computer program NCAP can also be used to calculate RFI effects in linear bipolar integrated circuits (ICs). The ICs reported upon are the μA741 operational amplifier (op amp), which is one of the most widely used ICs, and a differential pair, which is a basic building block in many linear ICs. The μA741 op amp was used as the active component in a unity-gain buffer amplifier. The differential pair was used in a broad-band cascode amplifier circuit. The computer program NCAP was used to predict how amplitude-modulated RF signals are demodulated in the ICs to cause undesired low-frequency responses. The predicted and measured results for radio frequencies in the 0.050-60-MHz range are in good agreement.
Robust entry guidance using linear covariance-based model predictive control
Directory of Open Access Journals (Sweden)
Jianjun Luo
2017-02-01
For atmospheric entry vehicles, guidance design can be accomplished by solving an optimal control problem using optimal control theories. However, traditional design methods generally focus on the nominal performance and do not include considerations of robustness in the design process. This paper proposes a linear covariance-based model predictive control method for robust entry guidance design. Firstly, linear covariance analysis is employed to directly incorporate robustness into the guidance design. The closed-loop covariance with the feedback-updated control command is initially formulated to provide the expected errors of the nominal state variables in the presence of uncertainties. Then, the closed-loop covariance is used as a component of the cost function to guarantee robustness by reducing the sensitivity to uncertainties. After that, model predictive control is used to solve the optimal problem, and the control commands (bank angles) are calculated. Finally, a series of simulations for different missions has been completed to demonstrate the high precision and the robustness with respect to initial perturbations as well as uncertainties in the entry process. The 3σ confidence-region results in the presence of uncertainties show that the robustness of the guidance has been improved, and the errors of the state variables are decreased by approximately 35%.
International Nuclear Information System (INIS)
Wang, L.C.
1980-01-01
Baecklund Transformations (BT) and the derivation of local conservation laws are first reviewed in the classic case of the Sine-Gordon equation. The BT, conservation laws (local and nonlocal), and the inverse-scattering formulation are discussed for the chiral and the self-dual Yang-Mills fields. Their possible applications to the loop formulation for the Yang-Mills fields are mentioned. 55 references, 1 figure
International Nuclear Information System (INIS)
Pfirsch, D.; Duechs, D.F.
1985-01-01
A number of statistical implications of empirical scaling laws in the form of power products obtained by linear regression are analysed. The sensitivity of the error to a change of exponents is described by a sensitivity factor, and the uncertainty of predictions by a ''range of predictions factor''. Inner relations in the statistical material are discussed, as well as the consequences of discarding variables. A recipe is given for the computations to be done. The whole is exemplified by considering scaling laws for the electron energy confinement time of ohmically heated tokamak plasmas. (author)
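The power-product form reduces to ordinary linear regression after taking logarithms: a law tau = C * x**a becomes ln(tau) = ln(C) + a*ln(x), so the exponent is the slope of a straight-line fit in log-log space. A one-variable sketch (the data here are synthetic, not tokamak confinement times):

```python
import math

def fit_power_law(x, y):
    """Fit y = C * x**a by ordinary least squares in log-log space.
    Returns (C, a); the slope a is the empirical scaling exponent."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    a = sum((u - mx) * (v - my) for u, v in zip(lx, ly)) / \
        sum((u - mx) ** 2 for u in lx)
    C = math.exp(my - a * mx)
    return C, a
```

The exponent uncertainty discussed in the abstract corresponds to the standard error of this slope; because the fit is done in log space, prediction uncertainty is multiplicative in the original units.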
Predicting Fuel Ignition Quality Using 1H NMR Spectroscopy and Multiple Linear Regression
Abdul Jameel, Abdul Gani
2016-09-14
An improved model for the prediction of ignition quality of hydrocarbon fuels has been developed using 1H nuclear magnetic resonance (NMR) spectroscopy and multiple linear regression (MLR) modeling. Cetane number (CN) and derived cetane number (DCN) of 71 pure hydrocarbons and 54 hydrocarbon blends were utilized as a data set to study the relationship between ignition quality and molecular structure. CN and DCN are functional equivalents and collectively referred to as D/CN, herein. The effect of molecular weight and weight percent of structural parameters such as paraffinic CH3 groups, paraffinic CH2 groups, paraffinic CH groups, olefinic CH–CH2 groups, naphthenic CH–CH2 groups, and aromatic C–CH groups on D/CN was studied. A particular emphasis on the effect of branching (i.e., methyl substitution) on the D/CN was studied, and a new parameter denoted as the branching index (BI) was introduced to quantify this effect. A new formula was developed to calculate the BI of hydrocarbon fuels using 1H NMR spectroscopy. Multiple linear regression (MLR) modeling was used to develop an empirical relationship between D/CN and the eight structural parameters. This was then used to predict the DCN of many hydrocarbon fuels. The developed model has a high correlation coefficient (R2 = 0.97) and was validated with experimentally measured DCN of twenty-two real fuel mixtures (e.g., gasolines and diesels) and fifty-nine blends of known composition, and the predicted values matched well with the experimental data.
Zounemat-Kermani, Mohammad
2012-08-01
In this study, the ability of two models, multiple linear regression (MLR) and a Levenberg-Marquardt (LM) feed-forward neural network, was examined to estimate the hourly dew point temperature. Dew point temperature is the temperature at which water vapor in the air condenses into liquid. This temperature can be useful in estimating meteorological variables such as fog, rain, snow, dew, and evapotranspiration and in investigating agronomical issues such as stomatal closure in plants. The availability of hourly records of climatic data (air temperature, relative humidity and pressure) which could be used to predict dew point temperature initiated the practice of modeling. Additionally, the wind vector (wind speed magnitude and direction) and a conceptual input of weather condition were employed as other input variables. Three quantitative standard statistical performance evaluation measures, i.e. the root mean squared error, mean absolute error, and absolute logarithmic Nash-Sutcliffe efficiency coefficient (|Log(NS)|), were employed to evaluate the performances of the developed models. The results showed that applying the wind vector and weather condition as input vectors along with meteorological variables could slightly increase the ANN and MLR predictive accuracy. The results also revealed that LM-NN was superior to the MLR model and the best performance was obtained by considering all potential input variables in terms of different evaluation criteria.
DEFF Research Database (Denmark)
Sokoler, Leo Emil; Frison, Gianluca; Edlund, Kristian
2013-01-01
In this paper, we develop an efficient interior-point method (IPM) for the linear programs arising in economic model predictive control of linear systems. The novelty of our algorithm is that it combines a homogeneous and self-dual model, and a specialized Riccati iteration procedure. We test...
Sparse Power-Law Network Model for Reliable Statistical Predictions Based on Sampled Data
Directory of Open Access Journals (Sweden)
Alexander P. Kartun-Giles
2018-04-01
A projective network model is a model that enables predictions to be made based on a subsample of the network data, with the predictions remaining unchanged if a larger sample is taken into consideration. An exchangeable model is a model that does not depend on the order in which nodes are sampled. Despite a large variety of non-equilibrium (growing) and equilibrium (static) sparse complex network models that are widely used in network science, how to reconcile sparseness (constant average degree) with the desired statistical properties of projectivity and exchangeability is currently an outstanding scientific problem. Here we propose a network process with hidden variables which is projective and can generate sparse power-law networks. Despite the model not being exchangeable, it can be closely related to exchangeable uncorrelated networks, as indicated by its information-theoretic characterization and its network entropy. The use of the proposed network process as a null model is here tested on real data, indicating that the model offers a promising avenue for statistical network modelling.
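A generic hidden-variable construction along these lines (not necessarily the authors' exact process) draws a power-law weight per node and connects each pair with probability proportional to the product of their weights. Normalising by the network size keeps the expected average degree constant as the network grows, which is the sparseness property the abstract refers to. All parameter values below are assumptions for illustration:

```python
import random

def sample_network(n, gamma=3.5, theta_min=2.0, seed=1):
    """Sparse hidden-variable network: node weights theta are Pareto
    distributed, p(theta) ~ theta**(-gamma); nodes i, j are linked with
    probability min(1, theta_i*theta_j / (mean_theta * n)), so the
    expected degree of node i is approximately theta_i."""
    rng = random.Random(seed)
    # inverse-CDF sampling from a Pareto distribution with exponent gamma
    theta = [theta_min * (1.0 - rng.random()) ** (-1.0 / (gamma - 1.0))
             for _ in range(n)]
    mean = sum(theta) / n
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < min(1.0, theta[i] * theta[j] / (mean * n)):
                edges.append((i, j))
    return theta, edges
```

Because the linking probability scales as 1/n, doubling n roughly doubles the number of edges while the average degree stays near the mean of the weight distribution.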
Directory of Open Access Journals (Sweden)
C. Makendran
2015-01-01
Prediction models for low volume village roads in India are developed to evaluate the progression of different types of distress such as roughness, cracking, and potholes. Even though the Government of India is investing large sums of money in road construction every year, poor control over the quality of road construction and its subsequent maintenance is leading to faster road deterioration. In this regard, it is essential that scientific maintenance procedures be evolved on the basis of the performance of low volume flexible pavements. Considering the above, an attempt has been made in this research endeavor to develop prediction models to understand the progression of roughness, cracking, and potholes in flexible pavements exposed to little or no routine maintenance. Distress data were collected from low volume rural roads covering about 173 stretches spread across Tamil Nadu state in India. Based on the collected data, distress prediction models have been developed using multiple linear regression analysis. Further, the models have been validated using independent field data. It can be concluded that the models developed in this study can serve as useful tools for practicing engineers maintaining flexible pavements on low volume roads.
Shayan, Zahra; Mohammad Gholi Mezerji, Naser; Shayan, Leila; Naseri, Parisa
2015-11-03
Logistic regression (LR) and linear discriminant analysis (LDA) are two popular statistical models for prediction of group membership. Although they are very similar, LDA makes more assumptions about the data. When categorical and continuous variables are used simultaneously, the optimal choice between the two models is questionable. In most studies, classification error (CE) is used to discriminate between subjects in several groups, but this index is not suitable to predict the accuracy of the outcome. The present study compared the LR and LDA models using classification indices. This cross-sectional study selected 243 cancer patients. Sample sets of different sizes (n = 50, 100, 150, 200, 220) were randomly selected and the CE, B, and Q classification indices were calculated by the LR and LDA models. CE revealed a lack of superiority of one model over the other, but the results showed that LR performed better than LDA for the B and Q indices in all situations. No significant effect of sample size on CE was noted for selection of an optimal model. Assessment of the accuracy of prediction on real data indicated that the B and Q indices are appropriate for selection of an optimal model. The results of this study showed that, based on CE, LR performs better in some cases and LDA in others. The CE index is not appropriate for classification, whereas the B and Q indices performed better and offered more efficient criteria for comparison and discrimination between groups.
International Nuclear Information System (INIS)
Bunyamin, Muhammad Afif; Yap, Keem Siah; Aziz, Nur Liyana Afiqah Abdul; Tiong, Sheih Kiong; Wong, Shen Yuong; Kamal, Md Fauzan
2013-01-01
This paper presents a new approach for gas emission estimation in a power generation plant using a hybrid Genetic Algorithm (GA) and Linear Regression (LR) (denoted as GA-LR). LR is one of the approaches that model the relationship between an output dependent variable, y, and one or more explanatory variables or inputs, denoted as x. It is able to estimate unknown model parameters from input data. On the other hand, GA is used to search for the optimal solution until a specific termination criterion is met. GA can provide multiple good solutions to complex problems, rather than a single optimal one, and is therefore widely used for feature selection. By combining LR and GA (GA-LR), this new technique is able to select the most important input features as well as to give more accurate predictions by minimizing the prediction errors. It produces more consistent gas emission estimates, which may help in reducing pollution of the environment. In this paper, the study's interest is focused on nitrogen oxides (NOx) prediction. The results of the experiment are encouraging.
Cuppo, F L S; Gómez, S L; Figueiredo Neto, A M
2004-04-01
This paper reports a systematic experimental study of the linear optical absorption coefficient of ferrofluid-doped isotropic lyotropic mixtures as a function of the magnetic-grain concentration. The linear optical absorption of ferrolyomesophases increases in a nonlinear manner with the concentration of magnetic grains, deviating from the usual Beer-Lambert law. This behavior is associated with the presence of correlated micelles in the mixture, which favors the formation of small-scale aggregates of magnetic grains (dimers) that have a higher absorption coefficient than isolated grains. We propose that the indirect heating of the micelles via the ferrofluid grains (hyperthermia) could account for this nonlinear increase of the linear optical absorption coefficient as a function of the grain concentration.
Yock, Adam D; Rao, Arvind; Dong, Lei; Beadle, Beth M; Garden, Adam S; Kudchadker, Rajat J; Court, Laurence E
2014-05-01
The purpose of this work was to develop and evaluate the accuracy of several predictive models of variation in tumor volume throughout the course of radiation therapy. Nineteen patients with oropharyngeal cancers were imaged daily with CT-on-rails for image-guided alignment per an institutional protocol. The daily volumes of 35 tumors in these 19 patients were determined and used to generate (1) a linear model in which tumor volume changed at a constant rate, (2) a general linear model that utilized the power fit relationship between the daily and initial tumor volumes, and (3) a functional general linear model that identified and exploited the primary modes of variation between time series describing the changing tumor volumes. Primary and nodal tumor volumes were examined separately. The accuracy of these models in predicting daily tumor volumes was compared with that of static and linear reference models using leave-one-out cross-validation. In predicting the daily volume of primary tumors, the general linear model and the functional general linear model were more accurate than the static reference model by 9.9% (range: -11.6%-23.8%) and 14.6% (range: -7.3%-27.5%), respectively, and were more accurate than the linear reference model by 14.2% (range: -6.8%-40.3%) and 13.1% (range: -1.5%-52.5%), respectively. In predicting the daily volume of nodal tumors, only the 14.4% (range: -11.1%-20.5%) improvement in accuracy of the functional general linear model compared to the static reference model was statistically significant. A general linear model and a functional general linear model trained on data from a small population of patients can predict the primary tumor volume throughout the course of radiation therapy with greater accuracy than standard reference models. These more accurate models may increase the prognostic value of information about the tumor garnered from pretreatment computed tomography images and facilitate improved treatment management.
A State-Space Approach to Optimal Level-Crossing Prediction for Linear Gaussian Processes
Martin, Rodney Alexander
2009-01-01
In many complex engineered systems, the ability to give an alarm prior to impending critical events is of great importance. These critical events may have varying degrees of severity, and in fact they may occur during normal system operation. In this article, we investigate approximations to theoretically optimal methods of designing alarm systems for the prediction of level-crossings by a zero-mean stationary linear dynamic system driven by Gaussian noise. An optimal alarm system is designed to elicit the fewest false alarms for a fixed detection probability. This work introduces the use of Kalman filtering in tandem with the optimal level-crossing problem. It is shown that there is a negligible loss in overall accuracy when using approximations to the theoretically optimal predictor, at the advantage of greatly reduced computational complexity.
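The tandem of Kalman filtering and level-crossing prediction can be sketched for a scalar AR(1) Gaussian process: filter the measurements, then raise an alarm when the predicted probability of the next state exceeding a level passes a threshold. All parameter values below are assumptions for illustration, not the article's alarm design:

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def kalman_alarm(ys, a=0.95, q=0.1, r=0.5, level=2.0, p_alarm=0.5):
    """Scalar Kalman filter for x_{k+1} = a*x_k + w (var q), observed
    through y = x + v (var r). An alarm is raised whenever the one-step
    predicted probability of crossing `level` exceeds `p_alarm`."""
    xhat, P = 0.0, 1.0              # prior mean and variance (assumption)
    alarms = []
    for y in ys:
        # measurement update
        K = P / (P + r)
        xhat += K * (y - xhat)
        P *= (1.0 - K)
        # one-step-ahead prediction and crossing probability
        xpred, Ppred = a * xhat, a * a * P + q
        p_up = 1.0 - phi((level - xpred) / math.sqrt(Ppred))
        alarms.append(p_up > p_alarm)
        # time update for the next iteration
        xhat, P = xpred, Ppred
    return alarms
```

Lowering `p_alarm` trades more false alarms for higher detection probability, which is exactly the operating-point trade-off an optimal alarm system tunes.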
Real-time detection of musical onsets with linear prediction and sinusoidal modeling
Glover, John; Lazzarini, Victor; Timoney, Joseph
2011-12-01
Real-time musical note onset detection plays a vital role in many audio analysis processes, such as score following, beat detection and various sound synthesis by analysis methods. This article provides a review of some of the most commonly used techniques for real-time onset detection. We suggest ways to improve these techniques by incorporating linear prediction as well as presenting a novel algorithm for real-time onset detection using sinusoidal modelling. We provide comprehensive results for both the detection accuracy and the computational performance of all of the described techniques, evaluated using Modal, our new open source library for musical onset detection, which comes with a free database of samples with hand-labelled note onsets.
Bayesian techniques for fatigue life prediction and for inference in linear time dependent PDEs
Scavino, Marco
2016-01-08
In this talk we introduce first the main characteristics of a systematic statistical approach to model calibration, model selection and model ranking when stress-life data are drawn from a collection of records of fatigue experiments. Focusing on Bayesian prediction assessment, we consider fatigue-limit models and random fatigue-limit models under different a priori assumptions. In the second part of the talk, we present a hierarchical Bayesian technique for the inference of the coefficients of time dependent linear PDEs, under the assumption that noisy measurements are available in both the interior of a domain of interest and from boundary conditions. We present a computational technique based on the marginalization of the contribution of the boundary parameters and apply it to inverse heat conduction problems.
Wan, Shibiao; Mak, Man-Wai; Kung, Sun-Yuan
2016-12-02
In the postgenomic era, the number of unreviewed protein sequences is remarkably larger and grows tremendously faster than that of reviewed ones. However, existing methods for protein subchloroplast localization often ignore the information from these unlabeled proteins. This paper proposes a multi-label predictor based on ensemble linear neighborhood propagation (LNP), namely, LNP-Chlo, which leverages hybrid sequence-based feature information from both labeled and unlabeled proteins for predicting localization of both single- and multi-label chloroplast proteins. Experimental results on a stringent benchmark dataset and a novel independent dataset suggest that LNP-Chlo performs at least 6% (absolute) better than state-of-the-art predictors. This paper also demonstrates that ensemble LNP significantly outperforms LNP based on individual features. For readers' convenience, the online Web server LNP-Chlo is freely available at http://bioinfo.eie.polyu.edu.hk/LNPChloServer/.
Real time implementation of a linear predictive coding algorithm on digital signal processor DSP32C
International Nuclear Information System (INIS)
Sheikh, N.M.; Usman, S.R.; Fatima, S.
2002-01-01
Pulse Code Modulation (PCM) has been widely used in speech coding. However, due to its high bit rate, PCM has severe limitations in applications where high spectral efficiency is desired, for example, in mobile communication, CD-quality broadcasting systems, etc. These limitations have motivated research in bit rate reduction techniques. Linear predictive coding (LPC) is one of the most powerful complex techniques for bit rate reduction. With the introduction of powerful digital signal processors (DSPs) it is possible to implement the complex LPC algorithm in real time. In this paper we present a real-time implementation of the LPC algorithm on AT&T's DSP32C at a sampling frequency of 8192 Hz. Application of the LPC algorithm to two speech signals is discussed. Using this implementation, a bit rate reduction of 1:3 is achieved for better than toll-quality speech, while a reduction of 1:16 is possible for the speech quality required in military applications. (author)
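The core of the LPC algorithm — computing the predictor coefficients from the autocorrelation sequence via the Levinson-Durbin recursion — can be sketched in a few lines. This is a plain-Python illustration of the standard autocorrelation method, not the fixed-point DSP32C implementation described in the paper:

```python
def lpc(signal, order):
    """Linear predictive coding by the autocorrelation method with the
    Levinson-Durbin recursion. Returns (predictor coefficients a, where
    s[i] is predicted as sum_j a[j]*s[i-1-j], and residual energy E)."""
    n = len(signal)
    R = [sum(signal[i] * signal[i + k] for i in range(n - k))
         for k in range(order + 1)]
    a = [0.0] * order
    E = R[0]
    for m in range(order):
        # reflection coefficient for stage m
        k = (R[m + 1] - sum(a[j] * R[m - j] for j in range(m))) / E
        new_a = a[:]
        new_a[m] = k
        for j in range(m):
            new_a[j] = a[j] - k * a[m - 1 - j]
        a = new_a
        E *= (1.0 - k * k)      # prediction error shrinks at each stage
    return a, E
```

Transmitting the coefficients and a coarse residual, rather than the samples themselves, is what yields the bit rate reductions quoted in the abstract.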
Hariharan, M; Chee, Lim Sin; Yaacob, Sazali
2012-06-01
Acoustic analysis of infant cry signals has been proven to be an excellent tool in the area of automatic detection of the pathological status of an infant. This paper investigates the application of parameter weighting for linear prediction cepstral coefficients (LPCCs) to provide a robust representation of infant cry signals. Three classes of infant cry signals were considered: normal cry signals, cry signals from deaf babies, and cries from babies with asphyxia. A Probabilistic Neural Network (PNN) is suggested to classify the infant cry signals into normal and pathological cries. The PNN is trained with different spread factors, or smoothing parameters, to obtain better classification accuracy. The experimental results demonstrate that the suggested features and classification algorithms give a very promising classification accuracy of above 98%, showing that the suggested method can be used to help medical professionals diagnose the pathological status of an infant from cry signals.
Directory of Open Access Journals (Sweden)
Rachid Darnag
2017-02-01
Full Text Available Support vector machines (SVMs) represent one of the most promising machine learning (ML) tools for developing predictive quantitative structure–activity relationship (QSAR) models from molecular descriptors. Multiple linear regression (MLR) and artificial neural networks (ANNs) were also utilized to construct quantitative linear and nonlinear models for comparison with the results obtained by SVM. The prediction results are in good agreement with the experimental values of HIV activity; moreover, the results reveal the superiority of the SVM over the MLR and ANN models. The contribution of each descriptor to the structure–activity relationships was evaluated.
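The linear-versus-nonlinear comparison can be reproduced in miniature with scikit-learn (assumed available). The descriptors and "activity" below are synthetic stand-ins with a deliberately nonlinear structure, not the paper's HIV data:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
X = rng.normal(size=(150, 3))                                  # mock descriptors
y = X[:, 0] ** 2 + X[:, 1] ** 2 + 0.1 * rng.normal(size=150)   # nonlinear "activity"

Xtr, Xte, ytr, yte = X[:100], X[100:], y[:100], y[100:]
mlr = LinearRegression().fit(Xtr, ytr)          # linear QSAR baseline
svm = SVR(kernel="rbf", C=10.0).fit(Xtr, ytr)   # kernel SVM regressor
r2_mlr = r2_score(yte, mlr.predict(Xte))
r2_svm = r2_score(yte, svm.predict(Xte))
```

On a target that is quadratic in the descriptors, the RBF-kernel SVM should clearly outperform MLR on the held-out set, which is the qualitative pattern the abstract reports.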
The Spike-and-Slab Lasso Generalized Linear Models for Prediction and Associated Genes Detection.
Tang, Zaixiang; Shen, Yueping; Zhang, Xinyan; Yi, Nengjun
2017-01-01
Large-scale "omics" data have been increasingly used as an important resource for prognostic prediction of diseases and detection of associated genes. However, there are considerable challenges in analyzing high-dimensional molecular data, including the large number of potential molecular predictors, limited number of samples, and small effect of each predictor. We propose new Bayesian hierarchical generalized linear models, called spike-and-slab lasso GLMs, for prognostic prediction and detection of associated genes using large-scale molecular data. The proposed model employs a spike-and-slab mixture double-exponential prior for coefficients that can induce weak shrinkage on large coefficients and strong shrinkage on irrelevant coefficients. We have developed a fast and stable algorithm to fit large-scale hierarchical GLMs by incorporating expectation-maximization (EM) steps into the fast cyclic coordinate descent algorithm. The proposed approach integrates nice features of two popular methods, i.e., penalized lasso and Bayesian spike-and-slab variable selection. The performance of the proposed method is assessed via extensive simulation studies. The results show that the proposed approach can provide not only more accurate estimates of the parameters, but also better prediction. We demonstrate the proposed procedure on two cancer data sets: a well-known breast cancer data set consisting of 295 tumors, and expression data of 4919 genes; and the ovarian cancer data set from TCGA with 362 tumors, and expression data of 5336 genes. Our analyses show that the proposed procedure can generate powerful models for predicting outcomes and detecting associated genes. The methods have been implemented in a freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/). Copyright © 2017 by the Genetics Society of America.
Ji, Zhiwei; Wang, Bing; Yan, Ke; Dong, Ligang; Meng, Guanmin; Shi, Lei
2017-12-21
In recent years, the integration of 'omics' technologies, high-performance computation, and mathematical modeling of biological processes marks that systems biology has started to fundamentally change the way drug discovery is approached. The LINCS public data warehouse provides detailed information about cell responses to various genetic and environmental stressors. It can be greatly helpful in developing new drugs and therapeutics, as well as in addressing the lack of effective drugs, drug resistance, and relapse in cancer therapies. In this study, we developed a Ternary-status-based Integer Linear Programming (TILP) method to infer cell-specific signaling pathway networks and predict compounds' treatment efficacy. The novelty of our study is that phosphoproteomic data and prior knowledge are combined for modeling and optimizing the signaling network. To test the power of our approach, a generic pathway network was constructed for the human breast cancer cell line MCF7, and the TILP model was used to infer MCF7-specific pathways from a set of phosphoproteomic data collected for ten representative small-molecule compounds (most of them studied in breast cancer treatment). Cross-validation indicated that the MCF7-specific pathway network inferred by TILP was reliable in predicting a compound's efficacy. Finally, we applied TILP to re-optimize the inferred cell-specific pathways and predict the outcomes of five small compounds (carmustine, doxorubicin, GW-8510, daunorubicin, and verapamil) rarely used in the clinic for breast cancer. In the simulation, the proposed approach enables us to identify a compound's treatment efficacy qualitatively and quantitatively, and the cross-validation analysis indicated good accuracy in predicting the effects of the five compounds. In summary, the TILP model is useful for discovering new drugs for clinical use, and also for elucidating the potential mechanisms by which a compound acts on its targets.
Neural network-based nonlinear model predictive control vs. linear quadratic Gaussian control
Cho, C.; Vance, R.; Mardi, N.; Qian, Z.; Prisbrey, K.
1997-01-01
One problem with the application of neural networks to the multivariable control of mineral and extractive processes is determining whether and how to use them. The objective of this investigation was to compare neural network control to more conventional strategies and to determine if there are any advantages in using neural network control in terms of set-point tracking, rise time, settling time, disturbance rejection and other criteria. The procedure involved developing neural network controllers using both historical plant data and simulation models. Various control patterns were tried, including both inverse and direct neural network plant models. These were compared to state space controllers that are, by nature, linear. For grinding and leaching circuits, a nonlinear neural network-based model predictive control strategy was superior to a state space-based linear quadratic gaussian controller. The investigation pointed out the importance of incorporating state space into neural networks by making them recurrent, i.e., feeding certain output state variables into input nodes in the neural network. It was concluded that neural network controllers can have better disturbance rejection, set-point tracking, rise time, settling time and lower set-point overshoot, and it was also concluded that neural network controllers can be more reliable and easy to implement in complex, multivariable plants.
Predictive inference for best linear combination of biomarkers subject to limits of detection.
Coolen-Maturi, Tahani
2017-08-15
Measuring the accuracy of diagnostic tests is crucial in many application areas including medicine, machine learning and credit scoring. The receiver operating characteristic (ROC) curve is a useful tool to assess the ability of a diagnostic test to discriminate between two classes or groups. In practice, multiple diagnostic tests or biomarkers are combined to improve diagnostic accuracy. Often, biomarker measurements are undetectable either below or above the so-called limits of detection (LoD). In this paper, nonparametric predictive inference (NPI) for best linear combination of two or more biomarkers subject to limits of detection is presented. NPI is a frequentist statistical method that is explicitly aimed at using few modelling assumptions, enabled through the use of lower and upper probabilities to quantify uncertainty. The NPI lower and upper bounds for the ROC curve subject to limits of detection are derived, where the objective function to maximize is the area under the ROC curve. In addition, the paper discusses the effect of restriction on the linear combination's coefficients on the analysis. Examples are provided to illustrate the proposed method. Copyright © 2017 John Wiley & Sons, Ltd.
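The quantity being maximized here, the area under the ROC curve, has a simple empirical form: it is the Mann-Whitney probability that a diseased score exceeds a healthy one. A minimal sketch with made-up scores (no LoD handling, which is the paper's actual contribution):

```python
import numpy as np

def empirical_auc(healthy, diseased):
    """Empirical AUC = P(X < Y) + 0.5 * P(X = Y) over all pairs, where X are
    biomarker scores from the healthy group and Y from the diseased group."""
    x = np.asarray(healthy)[:, None]   # shape (n, 1) for pairwise broadcasting
    y = np.asarray(diseased)[None, :]  # shape (1, m)
    return float(np.mean((x < y) + 0.5 * (x == y)))

auc = empirical_auc([1.0, 2.0, 3.0], [2.0, 3.0, 4.0])
```

A "best linear combination" search would wrap this function in an optimizer over the combination coefficients applied to each biomarker before scoring.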
Blanco Viñuela, Enrique
In each of the eight arcs of the 27 km circumference Large Hadron Collider (LHC), 2.5 km long strings of superconducting magnets are cooled with superfluid helium II at 1.9 K. The temperature stabilisation is a challenging control problem due to the complex non-linear dynamics of the magnet temperatures and the presence of multiple operational constraints. Strong nonlinearities and variable dead-times of the dynamics originate in the strongly heat-flux-dependent effective heat conductivity of the superfluid, which varies by three orders of magnitude over the range of possible operational conditions. In order to improve the temperature stabilisation, a proof-of-concept on-line economic output-feedback Non-linear Model Predictive Controller (NMPC) is presented in this thesis. The controller is based on a novel first-principles distributed-parameter numerical model of the temperature dynamics over a 214 m long sub-sector of the LHC that is characterized by the very low computational cost of simulation needed in real-time optimization...
A Zero-One Dichotomy Theorem for r-Semi-Stable Laws on Infinite Dimensional Linear Spaces.
1978-10-01
Semistable laws, like stable ones, are continuous: i.e., they assign zero mass to singletons.
DEFF Research Database (Denmark)
Sokoler, Leo Emil; Frison, Gianluca; Skajaa, Anders
2015-01-01
We develop an efficient homogeneous and self-dual interior-point method (IPM) for the linear programs arising in economic model predictive control of constrained linear systems with linear objective functions. The algorithm is based on a Riccati iteration procedure, which is adapted to the linear system of equations solved in homogeneous and self-dual IPMs. Fast convergence is further achieved using a warm-start strategy. We implement the algorithm in MATLAB and C. Its performance is tested using a conceptual power management case study. Closed-loop simulations show that 1) the proposed algorithm...
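The LPs arising in linear economic MPC have this generic shape: a linear cost over the inputs subject to linear dynamics and bounds. A toy instance follows, solved with SciPy's general-purpose `linprog` rather than the paper's tailored Riccati-based IPM; the prices and limits are invented:

```python
import numpy as np
from scipy.optimize import linprog

# Toy economic MPC subproblem: schedule charging u_k of a storage unit with
# dynamics x[k+1] = x[k] + u[k], x[0] = 0, so that x[N] >= 8 at minimum cost.
# A real controller would re-solve this LP at every sample (receding horizon).
N = 6
price = np.array([5.0, 1.0, 4.0, 1.0, 6.0, 2.0])  # time-varying unit cost
A_ub = -np.ones((1, N))                           # -sum(u) <= -8  <=>  sum(u) >= 8
b_ub = np.array([-8.0])
res = linprog(price, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, 2.0)] * N)
u_opt = res.x                                     # cheapest feasible schedule
```

The optimizer fills the cheapest periods first up to the input bound, which is exactly the economic (cost-minimizing, not setpoint-tracking) behaviour such controllers are built for.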
Mamdani-Fuzzy Modeling Approach for Quality Prediction of Non-Linear Laser Lathing Process
Sivaraos; Khalim, A. Z.; Salleh, M. S.; Sivakumar, D.; Kadirgama, K.
2018-03-01
Lathing is a process for fashioning stock materials into desired cylindrical shapes, usually performed on a traditional lathe machine. However, recent rapid advancements in engineering materials and precision demands pose a great challenge to the traditional method. The main drawback of the conventional lathe is its mechanical contact, which leads to undesirable tool wear, a heat-affected zone, and poor finishing and dimensional accuracy, especially taper quality, when machining stock with a high length-to-diameter ratio. Therefore, a novel approach has been devised to investigate transforming a 2D flatbed CO2 laser cutting machine into a 3D laser lathing capability as an alternative solution. Three significant design parameters were selected for this experiment, namely cutting speed, spinning speed, and depth of cut. A total of 24 experiments were performed, with eight (8) sequential runs replicated three (3) times. The experimental results were then used to establish a Mamdani fuzzy predictive model, which yields an accuracy of more than 95%. Thus, the proposed Mamdani fuzzy modelling approach is found to be very suitable and practical for quality prediction of the non-linear laser lathing process for cylindrical stocks of 10 mm diameter.
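Mamdani inference itself is compact enough to sketch end to end: fuzzify the input, clip each rule's consequent (min implication), aggregate (max), and defuzzify by centroid. The rules, ranges, and single input below are illustrative, not the paper's fitted three-input model:

```python
import numpy as np

def mamdani_taper(speed):
    """Tiny one-input Mamdani system (hypothetical rules and ranges):
        IF cutting speed is LOW  THEN taper error is HIGH
        IF cutting speed is HIGH THEN taper error is LOW
    using min implication, max aggregation and discrete centroid
    defuzzification over a normalised output universe."""
    z = np.linspace(0.0, 1.0, 1001)              # output universe (taper error)
    low = np.clip(1.0 - speed / 50.0, 0.0, 1.0)  # firing strength of rule 1
    high = np.clip(speed / 50.0, 0.0, 1.0)       # firing strength of rule 2
    agg = np.maximum(np.minimum(low, z),         # clip each consequent, then OR
                     np.minimum(high, 1.0 - z))
    return float((agg * z).sum() / agg.sum())    # centroid of the aggregate
```

Calling `mamdani_taper` at low, mid, and high speeds traces the smooth interpolation between rules that makes Mamdani models attractive for nonlinear process quality prediction.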
International Nuclear Information System (INIS)
Gertsch, R.; Ozdemir, L.
1992-09-01
The performances of mechanical excavators are predicted for excavations in welded tuff. Emphasis is given to tunnel boring machine evaluations based on linear cutting machine test data obtained on samples of Topopah Spring welded tuff. The tests involve measurement of forces as cutters are applied to the rock surface at certain spacing and penetrations. Two disc and two point-attack cutters representing currently available technology are thus evaluated. The performance predictions based on these direct experimental measurements are believed to be more accurate than any previous values for mechanical excavation of welded tuff. The calculations of performance are predicated on minimizing the amount of energy required to excavate the welded tuff. Specific energy decreases with increasing spacing and penetration, and reaches its lowest at the widest spacing and deepest penetration used in this test program. Using the force, spacing, and penetration data from this experimental program, the thrust, torque, power, and rate of penetration are calculated for several types of mechanical excavators. The results of this study show that the candidate excavators will require higher torque and power than heretofore estimated
Chaos and loss of predictability in the periodically kicked linear oscillator
International Nuclear Information System (INIS)
Luna-Acosta, G.A.; Cantoral, E.
1989-01-01
Chernikov et al. [2] have discovered new features in the dynamics of a periodically kicked LHO: x″ + ω₀²x = (K/k₀T²) sin(k₀x) × Σₙ δ(t/T − n). They report that its phase-space motion under exact resonance (pω₀ = (2π/T)q; p, q integers), and with initial conditions on the separatrix of the average Hamiltonian, accelerates unboundedly along a fractal stochastic web with q-fold symmetry. Here we investigate with numerical experiments the effects of small deviations from exact resonance on the diffusion and symmetry patterns. We show graphically that the stochastic webs are (topologically) unstable and thus the unbounded motion becomes considerably truncated. Moreover, we analyze numerically and analytically a simpler (integrable) version. We give its exact closed-form solution in complex numbers, observe that it accelerates unboundedly only when ω₀ = (2π/T)q (q = ±1, 2, ...), and show that for small uncertainties in these frequencies, total predictability is lost as time evolves. That is, trajectories of a set of systems initially described by close neighboring points in phase space strongly diverge in a non-linear way. The great loss of predictability in the integrable model is due to the combination of translational and rotational symmetries inherent in these systems. (Author)
TBM performance prediction in Yucca Mountain welded tuff from linear cutter tests
International Nuclear Information System (INIS)
Gertsch, R.; Ozdemir, L.; Gertsch, L.
1992-01-01
Performance predictions were developed for tunnel boring machines operating in welded tuff for the construction of the experimental study facility and the potential nuclear waste repository at Yucca Mountain. The predictions were based on test data obtained from an extensive series of linear cutting tests performed on samples of Topopah Spring welded tuff from the Yucca Mountain Project site. Using the cutter force, spacing, and penetration data from the experimental program, the thrust, torque, power, and rate of penetration were estimated for a 25 ft diameter tunnel boring machine (TBM) operating in welded tuff. Guidelines were developed for the optimal design of the TBM cutterhead to achieve high production rates at the lowest possible excavation costs. The results show that the Topopah Spring welded tuff (TSw2) can be excavated at relatively high rates of advance with state-of-the-art TBMs. The results also show, however, that the TBM torque and power requirements will be higher than estimated based on rock physical properties and past tunneling experience in rock formations of similar strength
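The mapping from linear-cutter forces to machine-level estimates follows simple engineering rules of thumb. The sketch below uses commonly quoted approximations (e.g., mean effective cutter radius of roughly 0.3 times the head diameter) with invented numbers, not the report's measured data:

```python
def tbm_performance(n_cutters, f_normal_kN, f_rolling_kN,
                    penetration_mm, rpm, diameter_m, spacing_mm):
    """Back-of-envelope TBM estimates from linear-cutter test forces."""
    thrust = n_cutters * f_normal_kN                         # kN
    torque = 0.3 * diameter_m * n_cutters * f_rolling_kN     # kN*m
    power = 2.0 * 3.141592653589793 * (rpm / 60.0) * torque  # kW
    rop = penetration_mm * rpm * 60.0 / 1000.0               # m/h advance rate
    # specific energy in MJ/m^3: rolling force per unit excavated volume
    se = 1000.0 * f_rolling_kN / (spacing_mm * penetration_mm)
    return thrust, torque, power, rop, se

# Illustrative inputs for a mid-size machine (not the study's values)
thrust, torque, power, rop, se = tbm_performance(
    n_cutters=40, f_normal_kN=200.0, f_rolling_kN=25.0,
    penetration_mm=6.0, rpm=8.0, diameter_m=7.6, spacing_mm=76.0)
```

Minimizing `se` over the tested spacing/penetration grid is exactly the optimization criterion both tuff studies describe: the widest spacing and deepest penetration that the cutters and machine can sustain.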
Li, Guanghui; Luo, Jiawei; Xiao, Qiu; Liang, Cheng; Ding, Pingjian
2018-05-12
Interactions between microRNAs (miRNAs) and diseases can yield important information for uncovering novel prognostic markers. Since experimental determination of disease-miRNA associations is time-consuming and costly, attention has been given to designing efficient and robust computational techniques for identifying undiscovered interactions. In this study, we present a label propagation model with linear neighborhood similarity, called LPLNS, to predict unobserved miRNA-disease associations. Additionally, a preprocessing step is performed to derive new interaction likelihood profiles that will contribute to the prediction since new miRNAs and diseases lack known associations. Our results demonstrate that the LPLNS model based on the known disease-miRNA associations could achieve impressive performance with an AUC of 0.9034. Furthermore, we observed that the LPLNS model based on new interaction likelihood profiles could improve the performance to an AUC of 0.9127. This was better than other comparable methods. In addition, case studies also demonstrated our method's outstanding performance for inferring undiscovered interactions between miRNAs and diseases, especially for novel diseases. Copyright © 2018. Published by Elsevier Inc.
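The propagation step of such models has a standard iterative form; a generic numpy sketch on a toy graph follows. Note LPLNS additionally *learns* the similarity matrix from linear neighborhood reconstruction weights, which is omitted here; the graph and seed labels are invented:

```python
import numpy as np

def propagate(W, y0, alpha=0.8, iters=200):
    """Generic graph label propagation: F <- alpha * S @ F + (1 - alpha) * y0,
    where S is the row-normalised similarity matrix. Converges because the
    iteration matrix alpha * S has spectral radius below 1."""
    S = W / W.sum(axis=1, keepdims=True)
    F = y0.astype(float)
    for _ in range(iters):
        F = alpha * (S @ F) + (1.0 - alpha) * y0
    return F

# Two tight clusters {0,1} and {2,3}; node 0 carries the only known label,
# standing in for one confirmed miRNA-disease association
W = np.array([[1.0, 0.9, 0.01, 0.01],
              [0.9, 1.0, 0.01, 0.01],
              [0.01, 0.01, 1.0, 0.9],
              [0.01, 0.01, 0.9, 1.0]])
y0 = np.array([1.0, 0.0, 0.0, 0.0])
scores = propagate(W, y0)   # node 1 should score far above nodes 2 and 3
```

Ranking the unlabeled nodes by `scores` is the association-prediction step; the AUCs quoted in the abstract come from comparing such rankings against held-out known associations.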
Chaos and loss of predictability in the periodically kicked linear oscillator
Energy Technology Data Exchange (ETDEWEB)
Luna-Acosta, G.A. [Universidad Autonoma de Puebla (Mexico). Inst. de Ciencias]; Cantoral, E. [Universidad Autonoma de Puebla (Mexico). Escuela de Fisica]
1989-01-01
Chernikov et al. [2] have discovered new features in the dynamics of a periodically kicked LHO: x″ + ω₀²x = (K/k₀T²) sin(k₀x) × Σₙ δ(t/T − n). They report that its phase-space motion under exact resonance (pω₀ = (2π/T)q; p, q integers), and with initial conditions on the separatrix of the average Hamiltonian, accelerates unboundedly along a fractal stochastic web with q-fold symmetry. Here we investigate with numerical experiments the effects of small deviations from exact resonance on the diffusion and symmetry patterns. We show graphically that the stochastic webs are (topologically) unstable and thus the unbounded motion becomes considerably truncated. Moreover, we analyze numerically and analytically a simpler (integrable) version. We give its exact closed-form solution in complex numbers, observe that it accelerates unboundedly only when ω₀ = (2π/T)q (q = ±1, 2, ...), and show that for small uncertainties in these frequencies, total predictability is lost as time evolves. That is, trajectories of a set of systems initially described by close neighboring points in phase space strongly diverge in a non-linear way. The great loss of predictability in the integrable model is due to the combination of translational and rotational symmetries inherent in these systems. (Author).
Gobrecht, Alexia; Bendoula, Ryad; Roger, Jean-Michel; Bellon-Maurel, Véronique
2015-01-01
Visible and near-infrared (Vis-NIR) spectroscopy is a powerful non-destructive analytical method used to analyze major compounds in bulk materials and products, requiring no sample preparation. It is widely used in routine analysis and also in-line in industry, in vivo in biomedical applications, and in-field in agricultural and environmental applications. However, highly scattering samples subvert the Beer-Lambert law's linear relationship between spectral absorbance and concentration. Instead of the spectral pre-processing commonly used by Vis-NIR spectroscopists to mitigate the scattering effect, we put forward an optical method, based on polarized light spectroscopy, to improve the absorbance signal measurement on highly scattering samples. This method selects the part of the signal that is less impacted by scattering. The resulting signal is combined into the absorption/remission function defined in Dahm's representative layer theory to compute an absorbance signal fulfilling the Beer-Lambert law, i.e. being linearly related to the concentrations of the chemicals composing the sample. The underpinning theories have been experimentally evaluated on scattering samples in liquid and powdered form. The method produced more accurate spectra, and the Pearson coefficient assessing the linearity between the absorbance spectra and the concentration of the added dye improved from 0.94 to 0.99 for liquid samples and from 0.84 to 0.97 for powdered samples. Copyright © 2014 Elsevier B.V. All rights reserved.
Predicting stem borer density in maize using RapidEye data and generalized linear models
Abdel-Rahman, Elfatih M.; Landmann, Tobias; Kyalo, Richard; Ong'amo, George; Mwalusepo, Sizah; Sulieman, Saad; Ru, Bruno Le
2017-05-01
Average maize yield in eastern Africa is 2.03 t ha⁻¹, compared to the global average of 6.06 t ha⁻¹, due to biotic and abiotic constraints. Amongst the biotic production constraints in Africa, stem borers are the most injurious. In eastern Africa, maize yield losses due to stem borers are currently estimated at between 12% and 21% of total production. The objective of the present study was to explore the potential of RapidEye spectral data for assessing stem borer larva densities in maize fields at two study sites in Kenya. RapidEye images were acquired for the Bomet (western Kenya) site on the 9th of December 2014 and the 27th of January 2015, and for Machakos (eastern Kenya) on the 3rd of January 2015. Five RapidEye spectral bands as well as 30 spectral vegetation indices (SVIs) were utilized to predict per-field maize stem borer larva densities using generalized linear models (GLMs), assuming Poisson ('Po') and negative binomial ('NB') distributions. Root mean square error (RMSE) and ratio of prediction to deviation (RPD) statistics were used to assess model performance using a leave-one-out cross-validation approach. The zero-inflated NB ('ZINB') models outperformed the 'NB' models, and stem borer larva densities could only be predicted during the mid growing season, in December and early January, at the two study sites respectively (RMSE = 0.69-1.06 and RPD = 8.25-19.57). Overall, all models performed similarly whether all 30 SVIs (non-nested) or only the significant (nested) SVIs were used. The models developed could improve decision making regarding the control of maize stem borers within integrated pest management (IPM) interventions.
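The count-GLM backbone of this analysis (before the zero-inflation extension) fits cleanly in a few lines of IRLS. The "vegetation index" and its coefficients below are simulated, not the RapidEye data:

```python
import numpy as np

def poisson_glm(X, y, iters=30):
    """Poisson GLM with log link fitted by IRLS (Fisher scoring).
    X must include an intercept column; returns the coefficient vector."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)          # mean under the log link
        z = X @ beta + (y - mu) / mu   # working response
        XtW = X.T * mu                 # Poisson variance equals the mean
        beta = np.linalg.solve(XtW @ X, XtW @ z)
    return beta

# Simulated per-field larva counts driven by one mock vegetation index
rng = np.random.default_rng(42)
svi = rng.normal(size=500)
counts = rng.poisson(np.exp(0.5 + 0.8 * svi))
X = np.column_stack([np.ones(500), svi])
beta = poisson_glm(X, counts)          # should recover roughly [0.5, 0.8]
```

A negative-binomial or zero-inflated variant replaces the variance function and adds a mixture component for structural zeros, which is what made the 'ZINB' models win in the study.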
Wang, J; Wang, F; Liu, Y; Xu, J; Lin, H; Jia, B; Zuo, W; Jiang, Y; Hu, L; Lin, F
2016-01-01
Overweight individuals are at higher risk of developing type II diabetes than the general population. We conducted this study to analyze the correlation between blood glucose and biochemical parameters, and developed a blood glucose prediction model tailored to overweight patients. A total of 346 overweight Chinese patients aged 18-81 years were involved in this study. Their levels of fasting glucose (fs-GLU), blood lipids, and hepatic and renal function markers were measured and analyzed by multiple linear regression (MLR). Based on the MLR results, we developed a back-propagation artificial neural network (BP-ANN) model, selecting tansig as the transfer function of the hidden-layer nodes and purelin for the output-layer nodes, with a training goal of 0.5×10⁻⁵. There was significant correlation between fs-GLU and age, BMI, and blood biochemical indexes (P<0.05). The results of the MLR analysis indicated that age, fasting alanine transaminase (fs-ALT), blood urea nitrogen (fs-BUN), total protein (fs-TP), uric acid (fs-UA), and BMI are 6 independent variables related to fs-GLU. Based on these parameters, the BP-ANN model performed well and reached high prediction accuracy after training for 1000 epochs (R=0.9987). The level of fs-GLU was predictable using the proposed BP-ANN model based on the 6 related parameters (age, fs-ALT, fs-BUN, fs-TP, fs-UA and BMI) in overweight patients. © Georg Thieme Verlag KG Stuttgart · New York.
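The variable-screening MLR step amounts to ordinary least squares on the six retained predictors. A sketch with synthetic standardized data follows; the weights, scales, and noise level are invented, not the study's fit:

```python
import numpy as np

# Columns stand in for age, fs-ALT, fs-BUN, fs-TP, fs-UA and BMI (standardised)
rng = np.random.default_rng(7)
n = 346
X = rng.normal(size=(n, 6))
w_true = np.array([0.30, 0.20, 0.25, -0.10, 0.15, 0.40])  # hypothetical weights
y = 5.5 + X @ w_true + 0.2 * rng.normal(size=n)           # mock fs-GLU values

# Ordinary least squares with an intercept column
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
intercept, weights = coef[0], coef[1:]
```

In the study's pipeline, the predictors that survive this linear screen are then fed to the BP-ANN, which can pick up whatever nonlinearity the linear model leaves in the residuals.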
Integrating genomics and proteomics data to predict drug effects using binary linear programming.
Ji, Zhiwei; Su, Jing; Liu, Chenglin; Wang, Hongyan; Huang, Deshuang; Zhou, Xiaobo
2014-01-01
The Library of Integrated Network-Based Cellular Signatures (LINCS) project aims to create a network-based understanding of biology by cataloging changes in gene expression and signal transduction that occur when cells are exposed to a variety of perturbations. It is helpful for understanding cell pathways and facilitating drug discovery. Here, we developed a novel approach to infer cell-specific pathways and identify a compound's effects using gene expression and phosphoproteomics data under treatments with different compounds. Gene expression data were employed to infer potential targets of compounds and create a generic pathway map. Binary linear programming (BLP) was then developed to optimize the generic pathway topology based on the mid-stage signaling response of phosphorylation. To demonstrate effectiveness of this approach, we built a generic pathway map for the MCF7 breast cancer cell line and inferred the cell-specific pathways by BLP. The first group of 11 compounds was utilized to optimize the generic pathways, and then 4 compounds were used to identify effects based on the inferred cell-specific pathways. Cross-validation indicated that the cell-specific pathways reliably predicted a compound's effects. Finally, we applied BLP to re-optimize the cell-specific pathways to predict the effects of 4 compounds (trichostatin A, MS-275, staurosporine, and digoxigenin) according to compound-induced topological alterations. Trichostatin A and MS-275 (both HDAC inhibitors) inhibited the downstream pathway of HDAC1 and caused cell growth arrest via activation of p53 and p21; the effects of digoxigenin were totally opposite. Staurosporine blocked the cell cycle via p53 and p21, but also promoted cell growth via activated HDAC1 and its downstream pathway. Our approach was also applied to the PC3 prostate cancer cell line, and the cross-validation analysis showed very good accuracy in predicting effects of 4 compounds. In summary, our computational model can be
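At its core, a binary linear program selects a {0,1} assignment (here, which pathway edges are active) minimizing a linear cost under linear constraints. The exhaustive toy solver below conveys the formulation only; problems of the paper's size need a proper MILP solver, and the instance is invented:

```python
import itertools
import numpy as np

def solve_blp(c, A_ub, b_ub):
    """Exhaustive binary linear program: minimise c @ x subject to
    A_ub @ x <= b_ub with x in {0,1}^n. Viable only for small n."""
    best_x, best_val = None, float("inf")
    for bits in itertools.product((0, 1), repeat=len(c)):
        x = np.array(bits)
        if np.all(A_ub @ x <= b_ub):          # feasibility check
            val = float(c @ x)
            if val < best_val:
                best_x, best_val = x, val
    return best_x, best_val

# Toy instance: keep at most two of three candidate edges, maximising
# agreement with phosphorylation data (negated weights => minimisation)
c = np.array([-1.0, -2.0, -1.0])
A_ub = np.array([[1.0, 1.0, 1.0]])
b_ub = np.array([2.0])
x_opt, val = solve_blp(c, A_ub, b_ub)
```

In the paper's setting, each binary variable encodes whether a signaling edge is consistent with the measured mid-stage phosphorylation response, and the optimum defines the cell-specific pathway topology.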
Yakubu, A.; Oluremi, O. I. A.; Ekpo, E. I.
2018-03-01
There is an increasing use of robust analytical algorithms in the prediction of heat stress. The present investigation therefore, was carried out to forecast heat stress index (HSI) in Sasso laying hens. One hundred and sixty seven records on the thermo-physiological parameters of the birds were utilized. They were reared on deep litter and battery cage systems. Data were collected when the birds were 42- and 52-week of age. The independent variables fitted were housing system, age of birds, rectal temperature (RT), pulse rate (PR), and respiratory rate (RR). The response variable was HSI. Data were analyzed using automatic linear modeling (ALM) and artificial neural network (ANN) procedures. The ALM model building method involved Forward Stepwise using the F Statistic criterion. As regards ANN, multilayer perceptron (MLP) with back-propagation network was used. The ANN network was trained with 90% of the data set while 10% were dedicated to testing for model validation. RR and PR were the two parameters of utmost importance in the prediction of HSI. However, the fractional importance of RR was higher than that of PR in both ALM (0.947 versus 0.053) and ANN (0.677 versus 0.274) models. The two models also predicted HSI effectively with high degree of accuracy [r = 0.980, R 2 = 0.961, adjusted R 2 = 0.961, and RMSE = 0.05168 (ALM); r = 0.983, R 2 = 0.966; adjusted R 2 = 0.966, and RMSE = 0.04806 (ANN)]. The present information may be exploited in the development of a heat stress chart based largely on RR. This may aid detection of thermal discomfort in a poultry house under tropical and subtropical conditions.
Dual-Source Linear Energy Prediction (LINE-P) Model in the Context of WSNs.
Ahmed, Faisal; Tamberg, Gert; Le Moullec, Yannick; Annus, Paul
2017-07-20
Energy harvesting technologies such as miniature power solar panels and micro wind turbines are increasingly used to help power wireless sensor network nodes. However, a major drawback of energy harvesting is its varying and intermittent characteristic, which can negatively affect the quality of service. This calls for careful design and operation of the nodes, possibly by means of, e.g., dynamic duty cycling and/or dynamic frequency and voltage scaling. In this context, various energy prediction models have been proposed in the literature; however, they are typically compute-intensive or only suitable for a single type of energy source. In this paper, we propose Linear Energy Prediction "LINE-P", a lightweight, yet relatively accurate model based on approximation and sampling theory; LINE-P is suitable for dual-source energy harvesting. Simulations and comparisons against existing similar models have been conducted with low and medium resolutions (i.e., 60 and 22 min intervals/24 h) for the solar energy source (low variations) and with high resolutions (15 min intervals/24 h) for the wind energy source. The results show that the accuracy of the solar-based and wind-based predictions is up to approximately 98% and 96%, respectively, while requiring a lower complexity and memory than the other models. For the cases where LINE-P's accuracy is lower than that of other approaches, it still has the advantage of lower computing requirements, making it more suitable for embedded implementation, e.g., in wireless sensor network coordinator nodes or gateways.
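The lightweight flavour of such a predictor can be conveyed with piecewise-linear reconstruction from sparse harvest samples, which is the kind of approximation-and-sampling scheme the abstract describes. This is a simplified stand-in for LINE-P, with invented sample values:

```python
import numpy as np

def line_forecast(t_samples, e_samples, t_query):
    """Piecewise-linear energy forecast from sparse harvest samples
    (minutes -> joules): cheap enough for a sensor-node MCU, since each
    query is one interpolation between two stored samples."""
    return np.interp(t_query, t_samples, e_samples)

# Hypothetical stored samples for the two sources (dual-source setting)
solar = line_forecast([0, 60, 120], [0.0, 10.0, 30.0], [30, 90])  # low-rate samples
wind = line_forecast([0, 15, 30], [2.0, 4.0, 3.0], [10, 20])      # high-rate samples
total = solar + wind   # combined energy forecast at the query times
```

A duty-cycling policy would compare `total` against the node's projected consumption and scale its activity accordingly.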
Bhutwala, Krish; Beg, Farhat; Mariscal, Derek; Wilks, Scott; Ma, Tammy
2017-10-01
The Advanced Radiographic Capability (ARC) laser at the National Ignition Facility (NIF) at Lawrence Livermore National Laboratory is the world's most energetic short-pulse laser. It comprises four beamlets, each of substantial energy (1.5 kJ), extended short-pulse duration (10-30 ps), and large focal spot (≥50% of the energy in a 150 µm spot). This allows ARC to achieve proton and light-ion acceleration via the Target Normal Sheath Acceleration (TNSA) mechanism, but it is as yet unknown how proton beam characteristics scale with ARC-regime laser parameters. As theory has also not yet been validated for laser-generated protons at ARC-regime laser parameters, we attempt to formulate the scaling physics of proton beam characteristics as a function of laser energy, intensity, focal spot size, pulse length, target geometry, etc., through a review of relevant proton acceleration experiments from laser facilities across the world. These predicted scaling laws should then guide target design and future diagnostics for desired proton beam experiments on the NIF ARC. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and funded by the LLNL LDRD program under tracking code 17-ERD-039.
Directory of Open Access Journals (Sweden)
Shandilya Sharad
2012-10-01
.6% and 60.9%, respectively. Conclusion: We report the development and first use of a nontraditional non-linear method of analyzing the VF ECG signal, yielding high predictive accuracy for defibrillation success. Furthermore, incorporation of features from the PetCO2 signal noticeably increased model robustness. These predictive capabilities should further improve with the availability of a larger database.
International Nuclear Information System (INIS)
Raisee, M.; Hejazi, S.H.
2007-01-01
This paper presents comparisons between heat transfer predictions and measurements for developing turbulent flow through straight rectangular channels with sudden contractions at the mid-channel section. The present numerical results were obtained using a two-dimensional finite-volume code which solves the governing equations in a vertical plane located at the lateral mid-point of the channel. The pressure field is obtained with the well-known SIMPLE algorithm. The hybrid scheme was employed for the discretization of convection in all transport equations. For modeling of the turbulence, a zonal low-Reynolds number k-ε model and the linear and non-linear low-Reynolds number k-ε models with the 'Yap' and 'NYP' length-scale correction terms have been employed. The main objective of present study is to examine the ability of the above turbulence models in the prediction of convective heat transfer in channels with sudden contraction at a mid-channel section. The results of this study show that a sudden contraction creates a relatively small recirculation bubble immediately downstream of the channel contraction. This separation bubble influences the distribution of local heat transfer coefficient and increases the heat transfer levels by a factor of three. Computational results indicate that all the turbulence models employed produce similar flow fields. The zonal k-ε model produces the wrong Nusselt number distribution by underpredicting heat transfer levels in the recirculation bubble and overpredicting them in the developing region. The linear low-Re k-ε model, on the other hand, returns the correct Nusselt number distribution in the recirculation region, although it somewhat overpredicts heat transfer levels in the developing region downstream of the separation bubble. The replacement of the 'Yap' term with the 'NYP' term in the linear low-Re k-ε model results in a more accurate local Nusselt number distribution. Moreover, the application of the non-linear k
Xing, Yafei; Macq, Benoit
2017-11-01
With the emergence of clinical prototypes and first patient acquisitions for proton therapy, research on prompt gamma imaging is aiming at making the most use of the prompt gamma data for in vivo estimation of any shift from the expected Bragg peak (BP). The simple problem of matching the measured prompt gamma profile of each pencil beam with a reference simulation from the treatment plan is actually made complex by uncertainties which can translate into distortions during treatment. We illustrate this challenge and demonstrate the robustness of a predictive linear model we proposed for BP shift estimation based on the principal component analysis (PCA) method. It considered the first clinical knife-edge slit camera design in use with anthropomorphic phantom CT data. In particular, 4115 error scenarios were simulated for the learning model. PCA was applied to the training input, randomly chosen from 500 scenarios, to eliminate data collinearities. A total variance of 99.95% was used for representing the testing input from 3615 scenarios. This model improved the BP shift estimation by an average of 63+/-19%, in a range between -2.5% and 86%, compared to our previous profile shift (PS) method. The robustness of our method was demonstrated by a comparative study conducted by applying Poisson noise 1000 times to each profile. 67% of the cases obtained by the learning model had lower prediction errors than those obtained by the PS method. The estimation accuracy ranged between 0.31 +/- 0.22 mm and 1.84 +/- 8.98 mm for the learning model, while for the PS method it ranged between 0.3 +/- 0.25 mm and 20.71 +/- 8.38 mm.
International Nuclear Information System (INIS)
Li Fan; Pan Jingzhe; Guillon, Olivier; Cocks, Alan
2010-01-01
Sintering of ceramic films on a solid substrate is an important technology for fabricating a range of products, including solid oxide fuel cells, micro-electronic PZT films and protective coatings. There is clear evidence that the constrained sintering process is anisotropic in nature. This paper presents a study of the constrained sintering deformation using an anisotropic constitutive law. The state of the material is described using the sintering strains rather than the relative density. In the limiting case of free sintering, the constitutive law reduces to a conventional isotropic constitutive law. The anisotropic constitutive law is used to calculate sintering deformation of a constrained film bonded to a rigid substrate and the compressive stress required in a sinter-forging experiment to achieve zero lateral shrinkage. The results are compared with experimental data in the literature. It is shown that the anisotropic constitutive law can capture the behaviour of the materials observed in the sintering experiments.
Data Normalization to Accelerate Training for Linear Neural Net to Predict Tropical Cyclone Tracks
Directory of Open Access Journals (Sweden)
Jian Jin
2015-01-01
When a pure linear neural network (PLNN) is used to predict tropical cyclone tracks (TCTs) in the South China Sea, whether or not the data are normalized greatly affects the training process. In this paper, the min-max method and the normal distribution method, instead of the standard normal distribution, are applied to TCT data before modeling. We propose experimental schemes in which, with the min-max method, the min-max value pair of each variable is mapped to (−1, 1) and (0, 1); with the normal distribution method, each variable's mean and standard deviation pair is set to (0, 1) and (100, 1). We present the following results: (1) data scaled to similar intervals have similar effects, whether the min-max or the normal distribution method is used; (2) mapping data to around 0 yields much faster training than mapping them to intervals far away from 0 or using unnormalized raw data, although all of them can approach the same lower error level after a certain number of steps, as seen from their training error curves. This could be useful for deciding on a data normalization method when PLNN is used individually.
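The two normalization schemes compared above can be sketched in a few lines of Python. This is an illustrative sketch with made-up numbers, not the authors' code: one function maps each variable's min-max pair onto a target interval, the other shifts and scales so the sample mean and standard deviation take target values.

```python
import statistics

def min_max_scale(xs, lo=-1.0, hi=1.0):
    """Map the min and max of xs onto (lo, hi) linearly."""
    x_min, x_max = min(xs), max(xs)
    return [lo + (hi - lo) * (x - x_min) / (x_max - x_min) for x in xs]

def z_scale(xs, mean=0.0, std=1.0):
    """Shift and scale xs so its sample mean and standard deviation become (mean, std)."""
    mu = statistics.fmean(xs)
    sigma = statistics.pstdev(xs)
    return [mean + std * (x - mu) / sigma for x in xs]

raw = [1005.2, 998.7, 1010.4, 992.3, 1001.8]  # hypothetical raw values
scaled = min_max_scale(raw)            # spans exactly [-1, 1]
z = z_scale(raw, 100.0, 1.0)           # mean 100, standard deviation 1
```

The `(100, 1)` target from the paper's scheme is obtained simply by passing a different `(mean, std)` pair.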
Wheel slip control with torque blending using linear and nonlinear model predictive control
Basrah, M. Sofian; Siampis, Efstathios; Velenis, Efstathios; Cao, Dongpu; Longo, Stefano
2017-11-01
Modern hybrid electric vehicles employ electric braking to recuperate energy during deceleration. However, anti-lock braking system (ABS) functionality is currently delivered solely by friction brakes. Hence regenerative braking is typically deactivated at a low deceleration threshold in case high slip develops at the wheels and ABS activation is required. If blending of friction and electric braking can be achieved during ABS events, there would be no need to impose conservative thresholds for deactivation of regenerative braking, and the recuperation capacity of the vehicle would increase significantly. In addition, electric actuators typically respond significantly faster and would deliver better control of wheel slip than friction brakes. In this work we present a control strategy for ABS on a fully electric vehicle with each wheel independently driven by an electric machine and a friction brake independently applied at each wheel. In particular, we develop linear and nonlinear model predictive control strategies for optimal performance and enforcement of critical control and state constraints. The capability for real-time implementation of these controllers is assessed and their performance is validated in high-fidelity simulation.
A turbulent mixing Reynolds stress model fitted to match linear interaction analysis predictions
International Nuclear Information System (INIS)
Griffond, J; Soulard, O; Souffland, D
2010-01-01
To predict the evolution of turbulent mixing zones developing in shock tube experiments with different gases, a turbulence model must be able to reliably evaluate the production due to the shock-turbulence interaction. In the limit of homogeneous weak turbulence, 'linear interaction analysis' (LIA) can be applied. This theory relies on Kovasznay's decomposition and allows the computation of waves transmitted or produced at the shock front. With assumptions about the composition of the upstream turbulent mixture, one can connect the second-order moments downstream from the shock front to those upstream through a transfer matrix, depending on shock strength. The purpose of this work is to provide a turbulence model that matches LIA results for the shock-turbulent mixture interaction. Reynolds stress models (RSMs) with additional equations for the density-velocity correlation and the density variance are considered here. The turbulent states upstream and downstream from the shock front calculated with these models can also be related through a transfer matrix, provided that the numerical implementation is based on a pseudo-pressure formulation. Then, the RSM should be modified in such a way that its transfer matrix matches the LIA one. Using the pseudo-pressure to introduce ad hoc production terms, we are able to obtain a close agreement between LIA and RSM matrices for any shock strength and thus improve the capabilities of the RSM.
International Nuclear Information System (INIS)
Vullings, R; Sluijter, R J; Mischi, M; Bergmans, J W M; Peters, C H L; Oei, S G
2009-01-01
Monitoring the fetal heart rate (fHR) and fetal electrocardiogram (fECG) during pregnancy is important to support medical decision making. Before labor, the fHR is usually monitored using Doppler ultrasound. This method is inaccurate and therefore of limited clinical value. During labor, the fHR can be monitored more accurately using an invasive electrode; this method also enables monitoring of the fECG. Antenatally, the fECG and fHR can also be monitored using electrodes on the maternal abdomen. The signal-to-noise ratio of these recordings is, however, low, the maternal electrocardiogram (mECG) being the main interference. Existing techniques to remove the mECG from these non-invasive recordings are insufficiently accurate or do not provide all spatial information of the fECG. In this paper a new technique for mECG removal in antenatal abdominal recordings is presented. This technique operates by the linear prediction of each separate wave in the mECG. Its performance in mECG removal and fHR detection is evaluated by comparison with spatial filtering, adaptive filtering, template subtraction and independent component analysis techniques. The new technique outperforms the other techniques in both mECG removal and fHR detection (by more than 3%).
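The core idea of predicting and removing a maternal wave can be sketched as follows: scale a template of the wave to the observed segment by least squares, then subtract it. This is a deliberately simplified, hypothetical illustration with a single wave and made-up numbers, not the paper's actual per-wave algorithm.

```python
def subtract_scaled_template(segment, template):
    """Least-squares scale a template wave to the observed segment, then subtract it.
    Returns (residual, gain)."""
    num = sum(s * t for s, t in zip(segment, template))
    den = sum(t * t for t in template)
    gain = num / den
    residual = [s - gain * t for s, t in zip(segment, template)]
    return residual, gain

# Toy example: a "maternal" wave template buried in a mixed signal (made-up numbers)
template = [0.0, 1.0, 4.0, 1.0, 0.0]
fetal    = [0.1, -0.2, 0.15, -0.1, 0.05]
mixed    = [2.0 * t + f for t, f in zip(template, fetal)]
cleaned, gain = subtract_scaled_template(mixed, template)
```

In this noiseless toy case the recovered gain is close to the true scaling of 2, and the residual is much smaller than the mixed signal at the wave's peak.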
Kim, T. W.; Park, G. H.
2014-12-01
The seasonal variation of the aragonite saturation state (Ωarag) in the North Pacific Ocean (NPO) was investigated using multiple linear regression (MLR) models produced from the PACIFICA (Pacific Ocean interior carbon) dataset. Data within depth ranges of 50-1200 m were used to derive the MLR models, and three parameters (potential temperature, nitrate, and apparent oxygen utilization (AOU)) were chosen as predictor variables because these parameters are associated with vertical mixing and with DIC (dissolved inorganic carbon) removal and release, which all affect Ωarag in the water column directly or indirectly. The PACIFICA dataset was divided into 5° × 5° grids, and an MLR model was produced in each grid, giving a total of 145 independent MLR models over the NPO. The mean RMSE (root mean square error) and r2 (coefficient of determination) of all derived MLR models were approximately 0.09 and 0.96, respectively. The obtained MLR coefficients for each of the predictor variables and an intercept were then interpolated over the study area, thereby making it possible to allocate MLR coefficients to data-sparse ocean regions. Predictability from the interpolated coefficients was evaluated using Hawaiian time-series data, and the resulting mean residual between measured and predicted Ωarag values was approximately 0.08, which is less than the mean RMSE of our MLR models. The interpolated MLR coefficients were combined with the seasonal climatology of the World Ocean Atlas 2013 (1° × 1°) to produce seasonal Ωarag distributions over various depths. Large seasonal variability in Ωarag was manifested in the mid-latitude Western NPO (24-40°N, 130-180°E) and the low-latitude Eastern NPO (0-12°N, 115-150°W). In the Western NPO, seasonal fluctuations of water column stratification appeared to be responsible for the seasonal variation in Ωarag (~ 0.5 at 50 m) because it closely followed temperature variations in a layer of 0-75 m. In contrast, remineralization of organic matter was the main cause for the seasonal
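Fitting one such MLR model (intercept plus potential temperature, nitrate and AOU as predictors) can be sketched by ordinary least squares via the normal equations. The data and the plain normal-equations solver below are illustrative assumptions, not the study's actual implementation.

```python
import random

def fit_mlr(X, y):
    """Ordinary least squares via the normal equations (A^T A) b = (A^T y),
    solved by Gaussian elimination with partial pivoting.
    Each row of X is one observation; an intercept column is prepended."""
    A = [[1.0] + list(row) for row in X]
    n = len(A[0])
    ata = [[sum(A[k][i] * A[k][j] for k in range(len(A))) for j in range(n)]
           for i in range(n)]
    aty = [sum(A[k][i] * y[k] for k in range(len(A))) for i in range(n)]
    for col in range(n):                      # forward elimination
        piv = max(range(col, n), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, n):
            f = ata[r][col] / ata[col][col]
            for c in range(col, n):
                ata[r][c] -= f * ata[col][c]
            aty[r] -= f * aty[col]
    b = [0.0] * n                             # back substitution
    for i in reversed(range(n)):
        b[i] = (aty[i] - sum(ata[i][j] * b[j] for j in range(i + 1, n))) / ata[i][i]
    return b  # [intercept, coef_theta, coef_no3, coef_aou]

# Synthetic demonstration with made-up "true" coefficients
rng = random.Random(0)
true = [2.0, 0.10, -0.02, -0.005]   # intercept, theta, NO3, AOU (hypothetical)
X = [[rng.uniform(0, 25), rng.uniform(0, 45), rng.uniform(0, 300)] for _ in range(200)]
y = [true[0] + true[1] * t + true[2] * n + true[3] * a for t, n, a in X]
coefs = fit_mlr(X, y)
```

On noiseless synthetic data the fit recovers the generating coefficients to machine precision; real oceanographic fits would of course carry the RMSE levels quoted above.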
Bower, Dan J.; Sanan, Patrick; Wolf, Aaron S.
2018-01-01
The energy balance of a partially molten rocky planet can be expressed as a non-linear diffusion equation using mixing length theory to quantify heat transport by both convection and mixing of the melt and solid phases. Crucially, in this formulation the effective or eddy diffusivity depends on the entropy gradient, ∂S/∂r, as well as entropy itself. First we present a simplified model with semi-analytical solutions that highlights the large dynamic range of ∂S/∂r (around 12 orders of magnitude) for physically relevant parameters. It also elucidates the thermal structure of a magma ocean during the earliest stage of crystal formation. This motivates the development of a simple yet stable numerical scheme able to capture the large dynamic range of ∂S/∂r and hence provide a flexible and robust method for time-integrating the energy equation. Using insight gained from the simplified model, we consider a full model, which includes energy fluxes associated with convection, mixing, gravitational separation, and conduction that all depend on the thermophysical properties of the melt and solid phases. This model is discretised and evolved by applying the finite volume method (FVM), allowing for extended precision calculations and using ∂S/∂r as the solution variable. The FVM is well-suited to this problem since it is naturally energy conserving, flexible, and intuitive to incorporate arbitrary non-linear fluxes that rely on lookup data. Special attention is given to the numerically challenging scenario in which crystals first form in the centre of a magma ocean. The computational framework we devise is immediately applicable to modelling high melt fraction phenomena in Earth and planetary science research. Furthermore, it provides a template for solving similar non-linear diffusion equations that arise in other science and engineering disciplines, particularly for non-linear functional forms of the diffusion coefficient.
International Nuclear Information System (INIS)
Yock, Adam D.; Kudchadker, Rajat J.; Rao, Arvind; Dong, Lei; Beadle, Beth M.; Garden, Adam S.; Court, Laurence E.
2014-01-01
Purpose: The purpose of this work was to develop and evaluate the accuracy of several predictive models of variation in tumor volume throughout the course of radiation therapy. Methods: Nineteen patients with oropharyngeal cancers were imaged daily with CT-on-rails for image-guided alignment per an institutional protocol. The daily volumes of 35 tumors in these 19 patients were determined and used to generate (1) a linear model in which tumor volume changed at a constant rate, (2) a general linear model that utilized the power fit relationship between the daily and initial tumor volumes, and (3) a functional general linear model that identified and exploited the primary modes of variation between time series describing the changing tumor volumes. Primary and nodal tumor volumes were examined separately. The accuracy of these models in predicting daily tumor volumes was compared with that of static and linear reference models using leave-one-out cross-validation. Results: In predicting the daily volume of primary tumors, the general linear model and the functional general linear model were more accurate than the static reference model by 9.9% (range: −11.6%–23.8%) and 14.6% (range: −7.3%–27.5%), respectively, and were more accurate than the linear reference model by 14.2% (range: −6.8%–40.3%) and 13.1% (range: −1.5%–52.5%), respectively. In predicting the daily volume of nodal tumors, only the 14.4% (range: −11.1%–20.5%) improvement in accuracy of the functional general linear model compared to the static reference model was statistically significant. Conclusions: A general linear model and a functional general linear model trained on data from a small population of patients can predict the primary tumor volume throughout the course of radiation therapy with greater accuracy than standard reference models. These more accurate models may increase the prognostic value of information about the tumor garnered from pretreatment computed tomography.
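The power-fit relationship underlying model (2), relating daily volume to initial volume, can be sketched generically as a least-squares line fit in log-log space. The volumes below are illustrative made-up numbers, not patient data.

```python
import math

def power_fit(x, y):
    """Fit y = a * x**b by linear least squares on (ln x, ln y)."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))
    a = math.exp(my - b * mx)
    return a, b

# Hypothetical initial volumes (cm^3) and later volumes following a power law
v0     = [5.0, 12.0, 20.0, 33.0, 48.0]
v_late = [0.9 * v ** 0.85 for v in v0]
a, b = power_fit(v0, v_late)
```

Because the toy data follow the power law exactly, the fit recovers the generating parameters; with real daily volumes the fit would be approximate.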
Ding, Anxin; Li, Shuxin; Wang, Jihui; Ni, Aiqing; Sun, Liangliang; Chang, Lei
2016-10-01
In this paper, the corner spring-in angles of AS4/8552 L-shaped composite profiles with different thicknesses are predicted using a path-dependent constitutive law that accounts for the variation of material properties due to phase change during curing. The prediction accuracy mainly depends on the properties in the rubbery and glassy states, obtained by a homogenization method rather than by experimental measurements. Both analytical and finite element (FE) homogenization methods are applied to predict the overall properties of the AS4/8552 composite. The effect of fiber volume fraction on the properties is investigated for both the rubbery and glassy states using both methods. The predicted results are compared with experimental measurements for the glassy state. Good agreement is achieved between the predicted results and available experimental data, showing the reliability of the homogenization method. Furthermore, the corner spring-in angles of the L-shaped composite profiles are measured experimentally, validating the path-dependent constitutive law as well as the property predictions of the FE homogenization method.
Marchese Robinson, Richard L; Palczewska, Anna; Palczewski, Jan; Kidley, Nathan
2017-08-28
The ability to interpret the predictions made by quantitative structure-activity relationships (QSARs) offers a number of advantages. While QSARs built using nonlinear modeling approaches, such as the popular Random Forest algorithm, might sometimes be more predictive than those built using linear modeling approaches, their predictions have been perceived as difficult to interpret. However, a growing number of approaches have been proposed for interpreting nonlinear QSAR models in general and Random Forest in particular. In the current work, we compare the performance of Random Forest to those of two widely used linear modeling approaches: linear Support Vector Machines (SVMs) (or Support Vector Regression (SVR)) and partial least-squares (PLS). We compare their performance in terms of their predictivity as well as the chemical interpretability of the predictions using novel scoring schemes for assessing heat map images of substructural contributions. We critically assess different approaches for interpreting Random Forest models as well as for obtaining predictions from the forest. We assess the models on a large number of widely employed public-domain benchmark data sets corresponding to regression and binary classification problems of relevance to hit identification and toxicology. We conclude that Random Forest typically yields comparable or possibly better predictive performance than the linear modeling approaches and that its predictions may also be interpreted in a chemically and biologically meaningful way. In contrast to earlier work looking at interpretation of nonlinear QSAR models, we directly compare two methodologically distinct approaches for interpreting Random Forest models. The approaches for interpreting Random Forest assessed in our article were implemented using open-source programs that we have made available to the community. These programs are the rfFC package ( https://r-forge.r-project.org/R/?group_id=1725 ) for the R statistical
Predicted and verified deviations from Zipf's law in ecology of competing products.
Hisano, Ryohei; Sornette, Didier; Mizuno, Takayuki
2011-08-01
Zipf's power-law distribution is a generic empirical statistical regularity found in many complex systems. However, rather than universality with a single power-law exponent (equal to 1 for Zipf's law), there are many reported deviations that remain unexplained. A recently developed theory finds that the interplay between (i) one of the most universal ingredients, namely stochastic proportional growth, and (ii) birth and death processes, leads to a generic power-law distribution with an exponent that depends on the characteristics of each ingredient. Here, we report the first complete empirical test of the theory and its application, based on the empirical analysis of the dynamics of market shares in the product market. We estimate directly the average growth rate of market shares and its standard deviation, the birth rates and the "death" (hazard) rate of products. We find that temporal variations and product differences of the observed power-law exponents can be fully captured by the theory with no adjustable parameters. Our results can be generalized to many systems for which the statistical properties revealed by power-law exponents are directly linked to the underlying generating mechanism.
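Estimating a power-law exponent from empirical data, as done above for market shares, is commonly performed with the continuous maximum-likelihood (Hill-type) estimator. The snippet below is a generic sketch on synthetic samples with a known exponent, not the authors' market-share analysis.

```python
import math
import random

def powerlaw_sample(alpha, x_min, n, rng):
    """Inverse-transform sampling from p(x) ~ x**(-alpha) for x >= x_min."""
    # 1 - u is in (0, 1], avoiding a zero base for the negative exponent
    return [x_min * (1.0 - rng.random()) ** (-1.0 / (alpha - 1.0)) for _ in range(n)]

def mle_exponent(xs, x_min):
    """Continuous maximum-likelihood estimator: alpha = 1 + n / sum(ln(x / x_min))."""
    return 1.0 + len(xs) / sum(math.log(x / x_min) for x in xs)

rng = random.Random(42)
data = powerlaw_sample(2.0, 1.0, 50000, rng)  # synthetic data, true exponent 2
alpha_hat = mle_exponent(data, 1.0)
```

With 50,000 samples the estimate falls within a few hundredths of the true exponent; deviations of the kind discussed in the abstract would appear as systematic shifts of this estimate over time or across products.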
Cardoso, F F; Tempelman, R J
2012-07-01
The objectives of this work were to assess alternative linear reaction norm (RN) models for the genetic evaluation of Angus cattle in Brazil. That is, we investigated the interaction between genotypes and continuous descriptors of the environmental variation to examine evidence of genotype by environment interaction (G×E) in post-weaning BW gain (PWG) and to compare the environmental sensitivity of national and imported Angus sires. Data were collected by the Brazilian Angus Improvement Program from 1974 to 2005 and consisted of 63,098 records and a pedigree file with 95,896 animals. Six models were implemented using Bayesian inference and compared using the Deviance Information Criterion (DIC). The simplest model was M(1), a traditional animal model, which showed the largest DIC and hence the poorest fit when compared with the 4 alternative RN specifications accounting for G×E. In M(2), a 2-step procedure was implemented using the contemporary group posterior means of M(1) as the environmental gradient, ranging from -92.6 to +265.5 kg. Moreover, the benefits of jointly estimating all parameters in a 1-step approach were demonstrated by M(3). Additionally, we extended M(3) to allow for residual heteroskedasticity using an exponential function (M(4)) and the best fitting (smallest DIC) environmental classification model (M(5)) specification. Finally, M(6) added just heteroskedastic residual variance to M(1). Heritabilities were lower in harsh environments and increased with the improvement of production conditions for all RN models. Rank correlations among genetic merit predictions obtained by M(1) and by the best fitting RN models M(3) (homoskedastic) and M(5) (heteroskedastic) at different environmental levels ranged between 0.79 and 0.81, suggesting the biological importance of G×E in Brazilian Angus PWG. These results suggest that selection progress could be optimized by adopting environment-specific genetic merit predictions. The PWG environmental sensitivity of
Performance prediction of gas turbines by solving a system of non-linear equations
Energy Technology Data Exchange (ETDEWEB)
Kaikko, J
1998-09-01
This study presents a novel method for implementing the performance prediction of gas turbines from the component models. It is based on solving the non-linear set of equations that corresponds to the process equations, and the mass and energy balances for the engine. General models have been presented for determining the steady state operation of single components. Single and multiple shaft arrangements have been examined with consideration also being given to heat regeneration and intercooling. Emphasis has been placed upon axial gas turbines of an industrial scale. Applying the models requires no information about the structural dimensions of the gas turbines. In comparison with the commonly applied component matching procedures, this method incorporates several advantages. The application of the models for providing results is facilitated as less attention needs to be paid to calculation sequences and routines. Solving the set of equations is based on zeroing co-ordinate functions that are directly derived from the modelling equations. Therefore, controlling the accuracy of the results is easy. This method gives more freedom for the selection of the modelling parameters since, unlike for the matching procedures, exchanging these criteria does not itself affect the algorithms. Implicit relationships between the variables are of no significance, thus increasing the freedom for the modelling equations as well. The mathematical models developed in this thesis will provide facilities to optimise the operation of any major gas turbine configuration with respect to the desired process parameters. The computational methods used in this study may also be adapted to any other modelling problems arising in industry. (orig.) 36 refs.
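Solving a set of non-linear balance equations by zeroing coordinate functions can be sketched with a Newton iteration. The two-equation system below is a generic stand-in with made-up equations, not the thesis' gas-turbine model; the Jacobian is approximated by forward differences and the 2x2 update is solved by Cramer's rule.

```python
def newton2(F, x0, tol=1e-12, max_iter=50, h=1e-7):
    """Newton iteration for two equations in two unknowns.
    F maps (x, y) to the pair of residuals to be zeroed."""
    x, y = x0
    for _ in range(max_iter):
        f1, f2 = F(x, y)
        if abs(f1) < tol and abs(f2) < tol:
            break
        a = (F(x + h, y)[0] - f1) / h   # df1/dx
        b = (F(x, y + h)[0] - f1) / h   # df1/dy
        c = (F(x + h, y)[1] - f2) / h   # df2/dx
        d = (F(x, y + h)[1] - f2) / h   # df2/dy
        det = a * d - b * c
        x -= (d * f1 - b * f2) / det    # Cramer's rule for the Newton step
        y -= (a * f2 - c * f1) / det
    return x, y

# Toy "balance" system: x^2 + y^2 = 4 and x*y = 1
F = lambda x, y: (x * x + y * y - 4.0, x * y - 1.0)
xs, ys = newton2(F, (2.0, 0.5))
```

Because the residuals themselves are the convergence criterion, controlling the accuracy of the solution is straightforward, which mirrors the advantage claimed in the abstract.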
Ensemble prediction of floods – catchment non-linearity and forecast probabilities
Directory of Open Access Journals (Sweden)
C. Reszler
2007-07-01
Quantifying the uncertainty of flood forecasts by ensemble methods is becoming increasingly important for operational purposes. The aim of this paper is to examine how the ensemble distribution of precipitation forecasts propagates in the catchment system, and to interpret the flood forecast probabilities relative to the forecast errors. We use the 622 km2 Kamp catchment in Austria as an example, where a comprehensive data set, including a 500 yr and a 1000 yr flood, is available. A spatially-distributed continuous rainfall-runoff model is used along with ensemble and deterministic precipitation forecasts that combine rain gauge data, radar data and the forecast fields of the ALADIN and ECMWF numerical weather prediction models. The analyses indicate that, for long lead times, the variability of the precipitation ensemble is amplified as it propagates through the catchment system as a result of non-linear catchment response. In contrast, for lead times shorter than the catchment lag time (e.g. 12 h and less), the variability of the precipitation ensemble is decreased as the forecasts are mainly controlled by observed upstream runoff and observed precipitation. Assuming that all ensemble members are equally likely, the statistical analyses for five flood events at the Kamp showed that the ensemble spread of the flood forecasts is always narrower than the distribution of the forecast errors. This is because the ensemble forecasts focus on the uncertainty in forecast precipitation as the dominant source of uncertainty, and other sources of uncertainty are not accounted for. However, a number of analyses, including Relative Operating Characteristic diagrams, indicate that the ensemble spread is a useful indicator to assess potential forecast errors for lead times larger than 12 h.
A General Linear Model (GLM) was used to evaluate the deviation of predicted values from expected values for a complex environmental model. For this demonstration, we used the default level interface of the Regional Mercury Cycling Model (R-MCM) to simulate epilimnetic total mer...
Efficient Implementation of Solvers for Linear Model Predictive Control on Embedded Devices
DEFF Research Database (Denmark)
Frison, Gianluca; Kwame Minde Kufoalor, D.; Imsland, Lars
2014-01-01
This paper proposes a novel approach for the efficient implementation of solvers for linear MPC on embedded devices. The main focus is to explain in detail the approach used to optimize the linear algebra for selected low-power embedded devices, and to show how the high-performance implementation...
Linear regressive model structures for estimation and prediction of compartmental diffusive systems
Vries, D.; Keesman, K.J.; Zwart, H.
2006-01-01
In input-output relations of (compartmental) diffusive systems, physical parameters appear non-linearly, resulting in the use of (constrained) non-linear parameter estimation techniques with their shortcomings regarding global optimality and computational effort. Given an LTI system in state
Prediction of failures in linear systems with the use of tolerance ranges
International Nuclear Information System (INIS)
Gadzhiev, Ch.M.
1993-01-01
The problem of predicting the technical state of an object can be stated in the general case as that of predicting potential failures on the basis of a quantitative evaluation of the predicted parameters in relation to the set of tolerances on these parameters. The main stages in the prediction are collecting and preparing source data on the prehistory of the predicted phenomenon, forming a mathematical model of this phenomenon, working out the algorithm for the prediction, and reaching a decision from the prediction results. The final two stages of prediction are considered in this article. The proposed prediction algorithm is based on the construction of a tolerance range for the error signal between the output coordinates of the system and those of its mathematical model. A decision regarding the possible occurrence of a failure in the system is made by comparing the tolerance range with the computed confidence interval. 5 refs
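The tolerance-range idea can be sketched as follows: build a band from a fault-free history of the model-vs-system error signal, then flag samples that leave the band. This is a hedged simplification with made-up error values; the article's confidence-interval comparison is reduced here to a per-sample check.

```python
import statistics

def tolerance_range(history, k=3.0):
    """Tolerance band from fault-free error history: mean +/- k standard deviations."""
    mu = statistics.fmean(history)
    sigma = statistics.pstdev(history)
    return mu - k * sigma, mu + k * sigma

def predict_failure(errors, band):
    """Return indices where the model-vs-system error signal leaves the band."""
    lo, hi = band
    return [i for i, e in enumerate(errors) if not (lo <= e <= hi)]

healthy = [0.02, -0.01, 0.03, 0.00, -0.02, 0.01, -0.03, 0.02]  # hypothetical fault-free errors
band = tolerance_range(healthy)
new_errors = [0.01, -0.02, 0.25, 0.04]   # third sample drifts out of tolerance
flagged = predict_failure(new_errors, band)
```

The choice of `k` trades off false alarms against missed failures; `k = 3` is a conventional default, not a value taken from the article.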
DEFF Research Database (Denmark)
Choi, Ui-Min; Ma, Ke; Blaabjerg, Frede
2018-01-01
In this paper, the lifetime prediction of power device modules based on linear damage accumulation is studied in conjunction with simple mission profiles of converters. Superimposed power cycling conditions, which are called simple mission profiles in this paper, are made based on a lifetime model with respect to junction temperature swing duration. This model has been built based on 39 power cycling test results of 600-V 30-A three-phase-molded IGBT modules. Six tests are performed under three superimposed power cycling conditions using an advanced power cycling test setup. The experimental results support the lifetime prediction of IGBT modules under power converter applications.
Augmented chaos-multiple linear regression approach for prediction of wave parameters
Directory of Open Access Journals (Sweden)
M.A. Ghorbani
2017-06-01
The inter-comparisons demonstrated that the Chaos-MLR and pure MLR models yield almost the same accuracy in predicting the significant wave heights and the zero-up-crossing wave periods. The augmented Chaos-MLR model, however, performed better in terms of prediction accuracy than the previous prediction applications for the same case study.
McDowell, J J; Wood, H M
1984-03-01
Eight human subjects pressed a lever on a range of variable-interval schedules for 0.25 cent to 35.0 cent per reinforcement. Herrnstein's hyperbola described seven of the eight subjects' response-rate data well. For all subjects, the y-asymptote of the hyperbola increased with increasing reinforcer magnitude and its reciprocal was a linear function of the reciprocal of reinforcer magnitude. These results confirm predictions made by linear system theory; they contradict formal properties of Herrnstein's account and of six other mathematical accounts of single-alternative responding.
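The reciprocal-linearity property reported above can be illustrated with a small sketch: Herrnstein's hyperbola R = k*r/(r + re) implies that 1/R is linear in 1/r, so k and re can be recovered from a straight-line fit on the reciprocals. The numbers below are synthetic, not the subjects' data.

```python
def fit_hyperbola(r_rates, R_rates):
    """Fit Herrnstein's hyperbola R = k*r/(r + re) via its linearized form
    1/R = 1/k + (re/k)*(1/r), using simple least squares on the reciprocals."""
    xs = [1.0 / r for r in r_rates]
    ys = [1.0 / R for R in R_rates]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    k = 1.0 / intercept
    return k, slope * k   # (k, re)

# Synthetic data generated from k = 100 responses/min, re = 20 reinforcers/hr
r = [10.0, 20.0, 40.0, 80.0, 160.0]
R = [100.0 * v / (v + 20.0) for v in r]
k_hat, re_hat = fit_hyperbola(r, R)
```

On noiseless synthetic data the fit returns the generating k and re exactly; the abstract's finding is that the fitted asymptote k, and the reciprocal relation itself, vary systematically with reinforcer magnitude.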
International Nuclear Information System (INIS)
Kim, J. Y.; Shin, C. H.; Kim, J. K.; Lee, J. K.; Park, Y. J.
2003-01-01
The variation trends of the inventories of the liquid radwaste system and of the radioactive gas released in containment, and their predicted values according to the operating histories of Yonggwang (YGN) 3 and 4, were analyzed by linear regression analysis methodology. The results show that the inventories of those systems increase linearly with operating history, but the inventories released to the environment are considerably lower than the recommended values based on the FSAR suggestions. It is considered that some conservatism was present in the estimation methodology used in the preparation stage of the FSAR.
Zhang, Hua; Kurgan, Lukasz
2014-12-01
Knowledge of protein flexibility is vital for deciphering the corresponding functional mechanisms. This knowledge would help, for instance, in improving computational drug design and refinement in homology-based modeling. We propose a new predictor of residue flexibility, which is expressed by B-factors, from protein chains that uses local (in the chain) predicted (or native) relative solvent accessibility (RSA) and custom-derived amino acid (AA) alphabets. Our predictor is implemented as a two-stage linear regression model that uses an RSA-based space in a local sequence window in the first stage and a reduced AA pair-based space in the second stage as the inputs. The method has an easy-to-comprehend, explicit linear form in both stages. Particle swarm optimization was used to find an optimal reduced AA alphabet to simplify the input space and improve the prediction performance. The average correlation coefficients between the native and predicted B-factors measured on a large benchmark dataset are improved from 0.65 to 0.67 when using the native RSA values and from 0.55 to 0.57 when using the predicted RSA values. Blind tests that were performed on two independent datasets show consistent improvements in the average correlation coefficients by a modest value of 0.02 for both native and predicted RSA-based predictions.
International Nuclear Information System (INIS)
Jahandideh, Sepideh; Jahandideh, Samad; Asadabadi, Ebrahim Barzegari; Askarian, Mehrdad; Movahedi, Mohammad Mehdi; Hosseini, Somayyeh; Jahandideh, Mina
2009-01-01
Prediction of the amount of hospital waste production will be helpful in the storage, transportation and disposal stages of hospital waste management. Based on this fact, two predictive models, artificial neural networks (ANNs) and multiple linear regression (MLR), were applied to predict the rate of medical waste generation in total and by type (sharp, infectious and general). In this study, a 5-fold cross-validation procedure on a database containing a total of 50 hospitals of Fars province (Iran) was used to verify the performance of the models. Three performance measures, MAR, RMSE and R², were used to evaluate the models. The MLR, as a conventional model, obtained poor prediction performance values. However, MLR identified hospital capacity and bed occupancy as the more significant parameters. On the other hand, ANNs, as a more powerful model that had not previously been applied to predicting the rate of medical waste generation, showed high performance values, especially an R² of 0.99, confirming the good fit of the data. Such satisfactory results can be attributed to the non-linear nature of ANNs, which provides the opportunity to relate independent variables to dependent ones non-linearly. In conclusion, the obtained results show that our ANN-based approach is very promising and may play a useful role in developing a better cost-effective strategy for waste management in the future.
Li, Hongjian; Leung, Kwong-Sak; Wong, Man-Hon; Ballester, Pedro J
2014-08-27
State-of-the-art protein-ligand docking methods are generally limited by the traditionally low accuracy of their scoring functions, which are used to predict binding affinity and thus vital for discriminating between active and inactive compounds. Despite intensive research over the years, classical scoring functions have reached a plateau in their predictive performance. These assume a predetermined additive functional form for some sophisticated numerical features, and use standard multivariate linear regression (MLR) on experimental data to derive the coefficients. In this study we show that such a simple functional form is detrimental for the prediction performance of a scoring function, and replacing linear regression by machine learning techniques like random forest (RF) can improve prediction performance. We investigate the conditions of applying RF under various contexts and find that given sufficient training samples RF manages to comprehensively capture the non-linearity between structural features and measured binding affinities. Incorporating more structural features and training with more samples can both boost RF performance. In addition, we analyze the importance of structural features to binding affinity prediction using the RF variable importance tool. Lastly, we use Cyscore, a top performing empirical scoring function, as a baseline for comparison study. Machine-learning scoring functions are fundamentally different from classical scoring functions because the former circumvents the fixed functional form relating structural features with binding affinities. RF, but not MLR, can effectively exploit more structural features and more training samples, leading to higher prediction performance. The future availability of more X-ray crystal structures will further widen the performance gap between RF-based and MLR-based scoring functions. This further stresses the importance of substituting RF for MLR in scoring function development.
International Nuclear Information System (INIS)
Burr, T.L.
1994-04-01
This report is a primer on the analysis of both linear and nonlinear time series with applications in nuclear safeguards and nonproliferation. We analyze eight simulated and two real time series using both linear and nonlinear modeling techniques. The theoretical treatment is brief but references to pertinent theory are provided. Forecasting is our main goal. However, because our most common approach is to fit models to the data, we also emphasize checking model adequacy by analyzing forecast errors for serial correlation or nonconstant variance
Jaime-Pérez, José Carlos; Jiménez-Castillo, Raúl Alberto; Vázquez-Hernández, Karina Elizabeth; Salazar-Riojas, Rosario; Méndez-Ramírez, Nereida; Gómez-Almaguer, David
2017-10-01
Advances in automated cell separators have improved the efficiency of plateletpheresis and the possibility of obtaining double products (DP). We assessed cell processor accuracy of predicted platelet (PLT) yields with the goal of better predicting DP collections. This retrospective proof-of-concept study included 302 plateletpheresis procedures performed on a Trima Accel v6.0 at the apheresis unit of a hematology department. Donor variables, software-predicted yield and actual PLT yield were statistically evaluated. The software prediction was optimized by linear regression analysis and its optimal cut-off for obtaining a DP was assessed by receiver operating characteristic (ROC) curve modeling. Three hundred and two plateletpheresis procedures were performed; on 271 (89.7%) occasions the donors were men and on 31 (10.3%), women. Pre-donation PLT count had the best direct correlation with actual PLT yield (r = 0.486, P …). A simple correction derived from linear regression analysis accurately corrected this underestimation, and ROC analysis identified a precise cut-off to reliably predict a DP. © 2016 Wiley Periodicals, Inc.
Gowda, Dhananjaya; Airaksinen, Manu; Alku, Paavo
2017-09-01
Recently, a quasi-closed phase (QCP) analysis of speech signals for accurate glottal inverse filtering was proposed. However, the QCP analysis, which belongs to the family of temporally weighted linear prediction (WLP) methods, uses the conventional forward type of sample prediction. This may not be the best choice, especially in computing WLP models with a hard-limiting weighting function. A sample-selective minimization of the prediction error in WLP reduces the effective number of samples available within a given window frame. To counter this problem, a modified quasi-closed phase forward-backward (QCP-FB) analysis is proposed, wherein each sample is predicted from its past as well as its future samples, thereby utilizing the available samples more effectively. Formant detection and estimation experiments on synthetic vowels generated using a physical modeling approach as well as natural speech utterances show that the proposed QCP-FB method yields statistically significant improvements over the conventional linear prediction and QCP methods.
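A minimal sketch of forward versus forward-backward (Burg-style) order-1 linear prediction on a synthetic AR(1) signal; this illustrates only the forward-backward error minimization idea, not the full QCP-FB temporal weighting:

```python
import random

random.seed(5)

# AR(1) test signal with known coefficient 0.9.
n, a_true = 2000, 0.9
x = [0.0]
for _ in range(n - 1):
    x.append(a_true * x[-1] + random.gauss(0.0, 1.0))

# Forward-only least-squares estimate of the order-1 predictor.
fwd = sum(x[t] * x[t - 1] for t in range(1, n)) / sum(v * v for v in x[:-1])

# Forward-backward (Burg-style) estimate: minimizes the summed forward
# and backward prediction errors, so each sample is used in both
# directions, increasing the effective number of samples.
num = 2.0 * sum(x[t] * x[t - 1] for t in range(1, n))
den = sum(x[t] ** 2 + x[t - 1] ** 2 for t in range(1, n))
fb = num / den

print(round(fwd, 3), round(fb, 3))  # both should be close to 0.9
```

At higher model orders and with hard-limiting temporal weights, the benefit of the bidirectional error grows, which is the motivation given in the abstract.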
Afantitis, Antreas; Melagraki, Georgia; Sarimveis, Haralambos; Koutentis, Panayiotis A; Markopoulos, John; Igglessi-Markopoulou, Olga
2006-08-01
A quantitative structure-activity relationship was obtained by applying multiple linear regression analysis to a series of 80 1-[2-hydroxyethoxy-methyl]-6-(phenylthio)thymine (HEPT) derivatives with significant anti-HIV activity. For the selection of the best among 37 different descriptors, the Elimination Selection Stepwise Regression Method (ES-SWR) was utilized. The resulting QSAR model (R²(CV) = 0.8160; S(PRESS) = 0.5680) proved to be very accurate in both the training and predictive stages.
Selvam, A. M.
2017-01-01
Dynamical systems in nature exhibit self-similar fractal space-time fluctuations on all scales indicating long-range correlations and, therefore, the statistical normal distribution with its implicit assumptions of independence, fixed mean and standard deviation cannot be used for description and quantification of fractal data sets. The author has developed a general systems theory based on classical statistical physics for fractal fluctuations which predicts the following. (1) The fractal fluctuations signify an underlying eddy continuum, the larger eddies being the integrated mean of enclosed smaller-scale fluctuations. (2) The probability distribution of eddy amplitudes and the variance (square of eddy amplitude) spectrum of fractal fluctuations follow the universal Boltzmann inverse power law expressed as a function of the golden mean. (3) Fractal fluctuations are signatures of quantum-like chaos since the additive amplitudes of eddies when squared represent probability densities analogous to the sub-atomic dynamics of quantum systems such as the photon or electron. (4) The model-predicted distribution is very close to the statistical normal distribution for moderate events within two standard deviations from the mean but exhibits a fat long tail associated with hazardous extreme events. Continuous periodogram power spectral analyses of available GHCN annual total rainfall time series for the period 1900-2008 for Indian and USA stations show that the power spectra and the corresponding probability distributions follow the model-predicted universal inverse power law form, signifying an eddy continuum structure underlying the observed inter-annual variability of rainfall. On a global scale, man-made greenhouse gas related atmospheric warming would result in intensification of natural climate variability, seen first in high-frequency fluctuations such as the QBO and ENSO and at even shorter timescales. Model concepts and results of analyses are discussed with reference
Directory of Open Access Journals (Sweden)
Luis Gonzaga Baca Ruiz
2016-08-01
This paper addresses the problem of energy consumption prediction using neural networks over a set of public buildings. Since energy consumption in the public sector comprises a substantial share of overall consumption, the prediction of such consumption represents a decisive issue in the achievement of energy savings. In our experiments, we use the data provided by an energy consumption monitoring system in a compound of faculties and research centers at the University of Granada, and provide a methodology to predict future energy consumption using nonlinear autoregressive (NAR) and nonlinear autoregressive with exogenous inputs (NARX) neural networks. Results reveal that NAR and NARX neural networks are both suitable for performing energy consumption prediction, but also that exogenous data may help to improve the accuracy of predictions.
High-performance small-scale solvers for linear Model Predictive Control
DEFF Research Database (Denmark)
Frison, Gianluca; Sørensen, Hans Henrik Brandenborg; Dammann, Bernd
2014-01-01
, with the two main research areas of explicit MPC and tailored on-line MPC. State-of-the-art solvers in this second class can outperform optimized linear-algebra libraries (BLAS) only for very small problems, and do not explicitly exploit the hardware capabilities, relying on compilers for that. This approach...
Due to the complexity of the processes contributing to beach bacteria concentrations, many researchers rely on statistical modeling, among which multiple linear regression (MLR) modeling is most widely used. Despite its ease of use and interpretation, there may be time dependence...
Chapman, Robin S.; Hesketh, Linda J.; Kistler, Doris J.
2002-01-01
Longitudinal change in syntax comprehension and production skill, measured over six years, was modeled in 31 individuals (ages 5-20) with Down syndrome. The best fitting Hierarchical Linear Modeling model of comprehension uses age and visual and auditory short-term memory as predictors of initial status, and age for growth trajectory.
Eric J. Gustafson; L. Jay Roberts; Larry A. Leefers
2006-01-01
Forest management planners require analytical tools to assess the effects of alternative strategies on the sometimes disparate benefits from forests such as timber production and wildlife habitat. We assessed the spatial patterns of alternative management strategies by linking two models that were developed for different purposes. We used a linear programming model (...
DEFF Research Database (Denmark)
Rohde, Palle Duun; Demontis, Ditte; Børglum, Anders
is enriched for causal variants. Here we apply the GFBLUP model to a small schizophrenia case-control study to test the promise of this model on psychiatric disorders, and hypothesize that the performance will be increased when applying the model to a larger ADHD case-control study if the genomic feature contains the causal variants. Materials and Methods: The schizophrenia study consisted of 882 controls and 888 schizophrenia cases genotyped for 520,000 SNPs. The ADHD study contained 25,954 controls and 16,663 ADHD cases with 8.4 million imputed genotypes. Results: The predictive ability for schizophrenia […] (…6% for the null model). Conclusion: The improvement in predictive ability for schizophrenia was marginal; however, greater improvement is expected for the larger ADHD data.
Automated Prediction of Catalytic Mechanism and Rate Law Using Graph-Based Reaction Path Sampling.
Habershon, Scott
2016-04-12
In a recent article [ J. Chem. Phys. 2015 , 143 , 094106 ], we introduced a novel graph-based sampling scheme which can be used to generate chemical reaction paths in many-atom systems in an efficient and highly automated manner. The main goal of this work is to demonstrate how this approach, when combined with direct kinetic modeling, can be used to determine the mechanism and phenomenological rate law of a complex catalytic cycle, namely cobalt-catalyzed hydroformylation of ethene. Our graph-based sampling scheme generates 31 unique chemical products and 32 unique chemical reaction pathways; these sampled structures and reaction paths enable automated construction of a kinetic network model of the catalytic system when combined with density functional theory (DFT) calculations of free energies and resultant transition-state theory rate constants. Direct simulations of this kinetic network across a range of initial reactant concentrations enables determination of both the reaction mechanism and the associated rate law in an automated fashion, without the need for either presupposing a mechanism or making steady-state approximations in kinetic analysis. Most importantly, we find that the reaction mechanism which emerges from these simulations is exactly that originally proposed by Heck and Breslow; furthermore, the simulated rate law is also consistent with previous experimental and computational studies, exhibiting a complex dependence on carbon monoxide pressure. While the inherent errors of using DFT simulations to model chemical reactivity limit the quantitative accuracy of our calculated rates, this work confirms that our automated simulation strategy enables direct analysis of catalytic mechanisms from first principles.
A linear regression model for predicting PNW estuarine temperatures in a changing climate
Pacific Northwest coastal regions, estuaries, and associated ecosystems are vulnerable to the potential effects of climate change, especially to changes in nearshore water temperature. While predictive climate models simulate future air temperatures, no such projections exist for...
2017-10-01
Grain Evaluation Software to Numerically Predict Linear Burn Regression for Solid Propellant Grain Geometries. Brian … U.S. Army Armament Research, Development and Engineering Center, Munitions Engineering Technology Center, Picatinny. Distribution is unlimited.
Bogachev, Mikhail I.; Bunde, Armin
2011-06-01
We study the predictability of extreme events in records with linear and nonlinear long-range memory in the presence of additive white noise using two different approaches: (i) the precursory pattern recognition technique (PRT) that exploits solely the information about short-term precursors, and (ii) the return interval approach (RIA) that exploits long-range memory incorporated in the elapsed time after the last extreme event. We find that the PRT always performs better when only linear memory is present. In the presence of nonlinear memory, both methods demonstrate comparable efficiency in the absence of white noise. When additional white noise is present in the record (which is the case in most observational records), the efficiency of the PRT decreases monotonously with increasing noise level. In contrast, the RIA shows an abrupt transition between a phase of low level noise where the prediction is as good as in the absence of noise, and a phase of high level noise where the prediction becomes poor. In the phase of low and intermediate noise the RIA predicts considerably better than the PRT, which explains our recent findings in physiological and financial records.
On the Use of Linearized Euler Equations in the Prediction of Jet Noise
Mankbadi, Reda R.; Hixon, R.; Shih, S.-H.; Povinelli, L. A.
1995-01-01
Linearized Euler equations are used to simulate supersonic jet noise generation and propagation. Special attention is given to boundary treatment. The resulting solution is stable and nearly free from boundary reflections without the need for artificial dissipation, filtering, or a sponge layer. The computed solution is in good agreement with theory and observation and is much less CPU-intensive as compared to large-eddy simulations.
Energy Technology Data Exchange (ETDEWEB)
Jawad, Abdul [COMSATS Institute of Information Technology, Department of Mathematics, Lahore (Pakistan); Videla, Nelson [FCFM, Universidad de Chile, Departamento de Fisica, Santiago (Chile); Gulshan, Faiza [Lahore Leads University, Department of Mathematics, Lahore (Pakistan)
2017-05-15
In the present work, we study the consequences of considering a new family of single-field inflation models, called power-law plateau inflation, in the warm inflation framework. We consider that the inflationary expansion is driven by a standard scalar field with a decay ratio Γ having a generic power-law dependence on the scalar field φ and the temperature of the thermal bath T, given by Γ(φ,T) = C_φ T^a / φ^(a-1). Assuming that our model evolves in the strong dissipative regime, we study the background and perturbative dynamics, obtaining the most relevant inflationary observables: the scalar power spectrum, the scalar spectral index and its running, and the tensor-to-scalar ratio. The free parameters characterizing our model are constrained by the essential condition for warm inflation, the conditions for the model to evolve in the strong dissipative regime, and the 2015 Planck results through the n_s-r plane. For completeness, we study the predictions in the n_s-dn_s/d ln k plane. The model is consistent with strong dissipative dynamics and predicts values for the tensor-to-scalar ratio and for the running of the scalar spectral index consistent with current bounds imposed by Planck, and we conclude that the model is viable. (orig.)
Multivariate power-law models for streamflow prediction in the Mekong Basin
Directory of Open Access Journals (Sweden)
Guillaume Lacombe
2014-11-01
New hydrological insights for the region: A combination of 3-6 explanatory variables, chosen among annual rainfall, drainage area, perimeter, elevation, slope, drainage density and latitude, is sufficient to predict a range of flow metrics with a prediction R-squared ranging from 84 to 95%. The inclusion of forest or paddy percentage coverage as an additional explanatory variable led to slight improvements in the predictive power of some of the low-flow models (lowest prediction R-squared = 89%). A physical interpretation of the model structure was possible for most of the resulting relationships. Compared to regional regression models developed in other parts of the world, this new set of equations performs reasonably well.
Wang, Ming; Li, Zheng; Lee, Eun Young; Lewis, Mechelle M; Zhang, Lijun; Sterling, Nicholas W; Wagner, Daymond; Eslinger, Paul; Du, Guangwei; Huang, Xuemei
2017-09-25
It is challenging for current statistical models to predict clinical progression of Parkinson's disease (PD) because of the involvement of multiple domains and longitudinal data. Past univariate longitudinal or multivariate analyses from cross-sectional trials have limited power to predict individual outcomes or a single moment. A multivariate generalized linear mixed-effect model (GLMM) under the Bayesian framework was proposed to study multi-domain longitudinal outcomes obtained at baseline, 18, and 36 months. The outcomes included motor, non-motor, and postural instability scores from the MDS-UPDRS, and demographic and standardized clinical data were utilized as covariates. Dynamic prediction was performed for both internal and external subjects using samples from the posterior distributions of the parameter estimates and random effects, and predictive accuracy was evaluated based on the root mean square error (RMSE), absolute bias (AB) and the area under the receiver operating characteristic (ROC) curve. First, our prediction model identified clinical data that were differentially associated with motor, non-motor, and postural stability scores. Second, the predictive accuracy of our model for the training data was assessed, and improved prediction was gained, particularly for non-motor scores (RMSE and AB: 2.89 and 2.20), compared to univariate analysis (RMSE and AB: 3.04 and 2.35). Third, individual-level predictions of longitudinal trajectories for the testing data were performed, with ~80% of observed values falling within the 95% credible intervals. Multivariate generalized mixed models hold promise for predicting the clinical progression of individual outcomes in PD. The data were obtained from Dr. Xuemei Huang's NIH grant R01 NS060722, part of the NINDS PD Biomarker Program (PDBP). All data were entered within 24 h of collection into the Data Management Repository (DMR), which is publicly available (https://pdbp.ninds.nih.gov/data-management).
Multi input single output model predictive control of non-linear bio-polymerization process
Energy Technology Data Exchange (ETDEWEB)
Arumugasamy, Senthil Kumar; Ahmad, Z. [School of Chemical Engineering, Univerisiti Sains Malaysia, Engineering Campus, Seri Ampangan,14300 Nibong Tebal, Seberang Perai Selatan, Pulau Pinang (Malaysia)
2015-05-15
This paper focuses on Multi Input Single Output (MISO) model predictive control of a bio-polymerization process, in which a mechanistic model is developed and linked with a feedforward artificial neural network model to obtain a hybrid model (Mechanistic-FANN) of the lipase-catalyzed ring-opening polymerization of ε-caprolactone (ε-CL) for poly(ε-caprolactone) production. In this research a state-space model was used, in which the inputs were the reactor temperatures and reactor impeller speeds and the outputs were the molecular weight of the polymer (M_n) and the polymer polydispersity index. The state-space model for the MISO system was created using the System Identification Toolbox of Matlab™ and used in the MISO MPC. Model predictive control (MPC) was applied to predict the molecular weight of the biopolymer and consequently to control it. The results show that the MPC is able to track the reference trajectory and gives optimal movement of the manipulated variables.
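A minimal receding-horizon sketch of the MPC idea on a scalar linear plant; the plant, weights and reference are hypothetical, unrelated to the polymerization model, and the optimization is a brute-force search over a one-step input grid purely for illustration:

```python
# Minimal receding-horizon sketch for a scalar linear plant x+ = a*x + b*u,
# tracking a reference by searching over admissible inputs at each step.
# (Illustrative only; values and the plant are hypothetical, not the
# bio-polymerization model from the paper.)
a_coef, b_coef = 0.9, 0.5
ref = 10.0
u_grid = [i * 0.1 for i in range(-100, 101)]  # admissible inputs [-10, 10]

def mpc_step(x):
    # One-step horizon: pick u minimizing tracking error plus input effort.
    return min(u_grid, key=lambda u: (a_coef * x + b_coef * u - ref) ** 2
                                     + 0.01 * u ** 2)

x = 0.0
traj = []
for _ in range(30):          # receding horizon: re-optimize at every step
    u = mpc_step(x)
    x = a_coef * x + b_coef * u
    traj.append(x)
print(round(traj[-1], 2))    # should settle near the reference
```

A practical MPC replaces the grid search with a quadratic program over a multi-step horizon and a multi-state model, but the loop structure (predict, optimize, apply first input, repeat) is the same.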
Zhou, L; Lund, M S; Wang, Y; Su, G
2014-08-01
This study investigated genomic predictions across Nordic Holstein and Nordic Red using various genomic relationship matrices. Different sources of information, such as consistencies of linkage disequilibrium (LD) phase and marker effects, were used to construct the genomic relationship matrices (G-matrices) across these two breeds. A single-trait genomic best linear unbiased prediction (GBLUP) model and a two-trait GBLUP model were used for single-breed and two-breed genomic predictions. The data included 5215 Nordic Holstein bulls and 4361 Nordic Red bulls, the latter composed of three populations: Danish Red, Swedish Red and Finnish Ayrshire. The bulls were genotyped with a 50,000-SNP chip. Using the two-breed predictions with a joint Nordic Holstein and Nordic Red reference population, accuracies increased slightly for all traits in Nordic Red, but only for some traits in Nordic Holstein. Among the three subpopulations of Nordic Red, accuracies increased more for Danish Red than for Swedish Red and Finnish Ayrshire. This is because closer genetic relationships exist between Danish Red and Nordic Holstein. Among Danish Red, individuals with higher genomic relationship coefficients with Nordic Holstein showed more increased accuracies in the two-breed predictions. Weighting the two-breed G-matrices by LD phase consistencies, marker effects or both did not further improve accuracies of the two-breed predictions. © 2014 Blackwell Verlag GmbH.
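Construction of a genomic relationship matrix, the core ingredient of GBLUP, can be sketched with VanRaden's first method (a common choice; the toy genotype matrix here is hypothetical, not the Nordic data):

```python
# Toy genotype matrix (individuals x markers), coded 0/1/2 copies of the
# reference allele (hypothetical values for illustration).
geno = [
    [0, 1, 2, 1, 0, 2, 1, 1],
    [1, 1, 2, 0, 0, 2, 1, 2],
    [2, 0, 0, 1, 2, 0, 1, 0],
    [2, 1, 0, 1, 2, 0, 2, 0],
]

m = len(geno[0])
# Allele frequency per marker.
p = [sum(row[j] for row in geno) / (2 * len(geno)) for j in range(m)]
# Centered genotypes: Z = M - 2p.
z = [[row[j] - 2 * p[j] for j in range(m)] for row in geno]
denom = 2 * sum(pj * (1 - pj) for pj in p)

# VanRaden's genomic relationship matrix G = ZZ' / (2 * sum p(1-p)).
G = [[sum(zi[j] * zk[j] for j in range(m)) / denom for zk in z] for zi in z]

for row in G:
    print([round(v, 2) for v in row])
```

GBLUP then treats G as the covariance structure of the individuals' genomic breeding values; the two-breed weighting schemes in the study amount to modifying the across-breed blocks of this matrix.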
Albajes-Eizagirre, Anton; Romero, Laia; Soria-Frisch, Aureli; Vanhellemont, Quinten
2011-11-01
Impact of jellyfish on human activities has been increasingly reported worldwide in recent years. Segments such as tourism, water sports and leisure, fisheries and aquaculture are commonly damaged when facing blooms of gelatinous zooplankton. Hence the prediction of the appearance and disappearance of jellyfish on our coasts, which is not fully understood from the biological point of view, has been approached as a pattern recognition problem in the paper presented herein, where a set of potential ecological cues was selected to test their usefulness for prediction. Remote sensing data were used to describe environmental conditions that could support the occurrence of jellyfish blooms with the aim of capturing physical-biological interactions: forcing, coastal morphology, food availability, and water mass characteristics are some of the variables that seem to exert an effect on jellyfish accumulation on the shoreline, under specific spatial and temporal windows. A data-driven model based on computational intelligence techniques has been designed and implemented to predict jellyfish events on the beach area as a function of environmental conditions. Data from 2009 over the NW Mediterranean continental shelf have been used to train and test this prediction protocol. Standard level 2 products are used from MODIS (NASA OceanColor) and MERIS (ESA - FRS data). The procedure for designing the analysis system can be described as follows. The aforementioned satellite data have been used as the feature set for the performance evaluation. Ground truth has been extracted from visual observations by human agents at different beach sites along the Catalan coast. After collecting the evaluation data set, the performance of different computational intelligence approaches has been compared. The one outperforming in terms of generalization capability has been selected for prediction recall. Different tests have been conducted in order to assess the prediction capability of the
Three dimensional force prediction in a model linear brushless dc motor
Energy Technology Data Exchange (ETDEWEB)
Moghani, J.S.; Eastham, J.F.; Akmese, R.; Hill-Cottingham, R.J. (Univ. of Bath (United Kingdom). School of Electronic and Electric Engineering)
1994-11-01
Practical results are presented for the three axes forces produced on the primary of a linear brushless dc machine which is supplied from a three-phase delta-modulated inverter. Conditions of both lateral alignment and lateral displacement are considered. Finite element analysis using both two and three dimensional modeling is compared with the practical results. It is shown that a modified two dimensional model is adequate, where it can be used, in the aligned position and that the full three dimensional method gives good results when the machine is axially misaligned.
Directory of Open Access Journals (Sweden)
Drzewiecki Wojciech
2016-12-01
In this work nine non-linear regression models were compared for sub-pixel impervious surface area mapping from Landsat images. The comparison was done in three study areas, both for the accuracy of imperviousness coverage evaluation at individual points in time and for the accuracy of imperviousness change assessment. The performance of individual machine learning algorithms (Cubist, Random Forest, stochastic gradient boosting of regression trees, k-nearest neighbors regression, random k-nearest neighbors regression, Multivariate Adaptive Regression Splines, averaged neural networks, and support vector machines with polynomial and radial kernels) was also compared with the performance of heterogeneous model ensembles constructed from the best models trained using particular techniques.
Malik, Abish; Maciejewski, Ross; Towers, Sherry; McCullough, Sean; Ebert, David S
2014-12-01
In this paper, we present a visual analytics approach that provides decision makers with a proactive and predictive environment in order to assist them in making effective resource allocation and deployment decisions. The challenges involved with such predictive analytics processes include end-users' understanding, and the application of the underlying statistical algorithms at the right spatiotemporal granularity levels so that good prediction estimates can be established. In our approach, we provide analysts with a suite of natural scale templates and methods that enable them to focus and drill down to appropriate geospatial and temporal resolution levels. Our forecasting technique is based on the Seasonal Trend decomposition based on Loess (STL) method, which we apply in a spatiotemporal visual analytics context to provide analysts with predicted levels of future activity. We also present a novel kernel density estimation technique we have developed, in which the prediction process is influenced by the spatial correlation of recent incidents at nearby locations. We demonstrate our techniques by applying our methodology to Criminal, Traffic and Civil (CTC) incident datasets.
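A classical moving-average seasonal-trend decomposition, a simple stand-in for the STL method the authors build on, can be sketched on a synthetic series (the period-7 incident pattern below is invented for illustration):

```python
import statistics

# Hypothetical daily incident counts with period-7 seasonality and a trend.
period = 7
series = [10 + 0.2 * t + [0, 1, 3, 5, 4, 8, 6][t % period]
          for t in range(8 * period)]

# Classical decomposition: a centred moving average of one full period
# estimates the trend, then per-phase means of the detrended series
# estimate the seasonal component (STL refines both steps with loess).
half = period // 2
trend = [None] * len(series)
for t in range(half, len(series) - half):
    trend[t] = sum(series[t - half : t + half + 1]) / period

detrended = [(series[t] - trend[t], t % period)
             for t in range(len(series)) if trend[t] is not None]
seasonal = [statistics.mean(v for v, ph in detrended if ph == k)
            for k in range(period)]

print([round(s, 2) for s in seasonal])
```

Forecasts are then formed by extrapolating the trend and adding back the seasonal component for the target phase, which is the same recomposition step an STL-based forecaster performs.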
Ren, Y Y; Zhou, L C; Yang, L; Liu, P Y; Zhao, B W; Liu, H X
2016-09-01
The paper highlights the use of the logistic regression (LR) method in the construction of acceptable, statistically significant, robust and predictive models for the classification of chemicals according to their aquatic toxic modes of action. The essentials of a reliable model were all considered carefully. The model predictors were selected by stepwise forward discriminant analysis (LDA) from a combined pool of experimental data and chemical structure-based descriptors calculated by the CODESSA and DRAGON software packages. Model predictive ability was validated both internally and externally. The applicability domain was checked by the leverage approach to verify prediction reliability. The obtained models are simple and easy to interpret. In general, LR performs much better than LDA and seems to be more attractive for the prediction of the more toxic compounds, i.e. compounds that exhibit excess toxicity versus non-polar narcotic compounds and more reactive compounds versus less reactive compounds. In addition, model fit and regression diagnostics were assessed through the influence plot, which reflects the hat-values, studentized residuals, and Cook's distance statistics of each sample. Overdispersion was also checked for the LR model. The relationships between the descriptors and the aquatic toxic behaviour of the compounds are also discussed.
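A minimal logistic-regression classifier trained by batch gradient descent on hypothetical one-descriptor data (not the CODESSA/DRAGON descriptors) shows the mechanics of the LR method:

```python
import math
import random

random.seed(8)

# Toy two-class data: one descriptor separating "excess toxicity" (1)
# from baseline narcosis (0); values are hypothetical.
x = ([random.gauss(-1.0, 1.0) for _ in range(200)]
     + [random.gauss(1.0, 1.0) for _ in range(200)])
y = [0] * 200 + [1] * 200

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Plain batch gradient descent on the logistic log-loss.
w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    gw = gb = 0.0
    for xi, yi in zip(x, y):
        e = sigmoid(w * xi + b) - yi   # prediction error
        gw += e * xi
        gb += e
    w -= lr * gw / len(x)
    b -= lr * gb / len(x)

acc = sum((sigmoid(w * xi + b) > 0.5) == (yi == 1)
          for xi, yi in zip(x, y)) / len(x)
print(round(w, 2), round(acc, 2))
```

In a real QSAR setting one would use many descriptors, maximum-likelihood fitting, and the external validation and influence diagnostics the abstract describes; this sketch only illustrates the underlying model.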
Prediction of strong ground motion based on scaling law of earthquake
International Nuclear Information System (INIS)
Kamae, Katsuhiro; Irikura, Kojiro; Fukuchi, Yasunaga.
1991-01-01
In order to predict strong ground motion more practically, it is important to study how to use a semi-empirical method when no appropriate observation records of actual small events are available for use as empirical Green's functions. We propose a prediction procedure using artificially simulated small ground motions as a substitute for the actual motions. First, we simulate small-event motion by means of the stochastic simulation method proposed by Boore (1983), empirically taking into account path effects such as attenuation and broadening of the waveform envelope in the target region. Then, we attempt to predict the strong ground motion due to a future large earthquake (M 7, Δ = 13 km) using the same summation procedure as the empirical Green's function method. We found that the characteristics of the synthetic motion based on the M 5 motion were in good agreement with those obtained by the empirical Green's function method. (author)
Schuecker, Clara; Davila, Carlos G.; Rose, Cheryl A.
2010-01-01
Five models for matrix damage in fiber reinforced laminates are evaluated for matrix-dominated loading conditions under plane stress and are compared both qualitatively and quantitatively. The emphasis of this study is on a comparison of the response of embedded plies subjected to a homogeneous stress state. Three of the models are specifically designed for modeling the non-linear response due to distributed matrix cracking under homogeneous loading, and also account for non-linear (shear) behavior prior to the onset of cracking. The remaining two models are localized damage models intended for predicting local failure at stress concentrations. The modeling approaches of distributed vs. localized cracking as well as the different formulations of damage initiation and damage progression are compared and discussed.
Christiansen, Bo
2015-04-01
Linear regression methods are without doubt the most used approaches to describe and predict data in the physical sciences. They are often good first-order approximations and they are in general easier to apply and interpret than more advanced methods. However, even the properties of univariate regression can lead to debate over the appropriateness of various models, as witnessed by the recent discussion about climate reconstruction methods. Before linear regression is applied, important choices have to be made regarding the origins of the noise terms and regarding which of the two variables under consideration should be treated as the independent variable. These decisions are often not easy to make, but they may have a considerable impact on the results. We seek to give a unified probabilistic - Bayesian with flat priors - treatment of univariate linear regression and prediction by taking as a starting point the general errors-in-variables model (Christiansen, J. Clim., 27, 2014-2031, 2014). Other versions of linear regression can be obtained as limits of this model. We derive the likelihood of the model parameters and predictands of the general errors-in-variables model by marginalizing over the nuisance parameters. The resulting likelihood is relatively simple and easy to analyze and calculate. The well-known unidentifiability of the errors-in-variables model is manifested as the absence of a well-defined maximum in the likelihood. However, this does not mean that probabilistic inference cannot be made; the marginal likelihoods of model parameters and the predictands have, in general, well-defined maxima. We also include a probabilistic version of classical calibration and show how it is related to the errors-in-variables model. The results are illustrated by an example from the coupling between the lower stratosphere and the troposphere in the Northern Hemisphere winter.
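The practical motivation for the errors-in-variables treatment can be illustrated numerically: when the independent variable is itself observed with noise, ordinary least squares attenuates the slope by the factor var(ξ)/(var(ξ)+var(δ)). A minimal sketch with synthetic data (all values assumed for illustration, not taken from the paper):

```python
import random

random.seed(1)
true_slope, n = 2.0, 20000
xi = [random.gauss(0.0, 1.0) for _ in range(n)]          # latent predictor, var = 1
x  = [v + random.gauss(0.0, 1.0) for v in xi]            # observed predictor, noise var = 1
y  = [true_slope * v + random.gauss(0.0, 0.3) for v in xi]

def ols_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    mx = sum(x) / len(x)
    my = sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

slope = ols_slope(x, y)
# Attenuation: E[slope] = true_slope * var(xi) / (var(xi) + var(noise)) = 2 * 0.5 = 1.0
```

The fitted slope clusters around 1.0 rather than the true 2.0, which is exactly the regression-dilution ambiguity that the errors-in-variables likelihood makes explicit.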
Directory of Open Access Journals (Sweden)
Sharad Shandilya
The timing of defibrillation is mostly at arbitrary intervals during cardio-pulmonary resuscitation (CPR), rather than during intervals when the out-of-hospital cardiac arrest (OOH-CA) patient is physiologically primed for successful countershock. Interruptions to CPR may negatively impact defibrillation success. Multiple defibrillations can be associated with decreased post-resuscitation myocardial function. We hypothesize that a more complete picture of the cardiovascular system can be gained through non-linear dynamics and integration of multiple physiologic measures from biomedical signals. Retrospective analysis of 153 anonymized OOH-CA patients who received at least one defibrillation for ventricular fibrillation (VF) was undertaken. A machine learning model, termed the Multiple Domain Integrative (MDI) model, was developed to predict defibrillation success. We explore the rationale for non-linear dynamics and statistically validate heuristics involved in feature extraction for model development. Performance of MDI is then compared to the amplitude spectrum area (AMSA) technique. 358 defibrillations were evaluated (218 unsuccessful and 140 successful). Non-linear properties (Lyapunov exponent > 0) of the ECG signals indicate a chaotic nature and validate the use of novel non-linear dynamic methods for feature extraction. Classification using MDI yielded an ROC-AUC of 83.2% and accuracy of 78.8% for the model built with ECG data only. Utilizing 10-fold cross-validation, at the 80% specificity level, MDI (74% sensitivity) outperformed AMSA (53.6% sensitivity). At the 90% specificity level, MDI had 68.4% sensitivity while AMSA had 43.3% sensitivity. Integrating available end-tidal carbon dioxide features into MDI for the available 48 defibrillations boosted the ROC-AUC to 93.8% and accuracy to 83.3% at the 80% sensitivity level. At clinically relevant sensitivity thresholds, the MDI provides improved performance compared to AMSA, yielding fewer unsuccessful defibrillations.
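For readers unfamiliar with the AMSA baseline, it is essentially an amplitude-weighted sum of the VF waveform spectrum over a clinically relevant band. The sketch below uses a naive DFT and an assumed 4-48 Hz band on synthetic signals; it illustrates the idea, not the study's implementation:

```python
import math

def amsa(signal, fs, f_lo=4.0, f_hi=48.0):
    """Amplitude spectrum area: sum of |X(f)| * f over a VF-relevant band (naive DFT)."""
    n = len(signal)
    total = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(signal))
            im = sum(-s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(signal))
            total += math.hypot(re, im) / n * f   # amplitude weighted by frequency
    return total

fs, n = 250, 250                                                     # 1 s of "ECG" at 250 Hz
in_band  = [math.sin(2 * math.pi * 10 * i / fs) for i in range(n)]   # 10 Hz: inside band
off_band = [math.sin(2 * math.pi * 2  * i / fs) for i in range(n)]   # 2 Hz: below band
```

A coarse, high-frequency-rich waveform scores high, while slow drift contributes nothing, which is why AMSA correlates with shock success.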
Czech Academy of Sciences Publication Activity Database
Pčolka, M.; Žáčeková, E.; Robinett, R.; Čelikovský, Sergej; Šebek, M.
2016-01-01
Roč. 53, č. 1 (2016), s. 124-138 ISSN 0967-0661 R&D Projects: GA ČR GA13-20433S Institutional support: RVO:67985556 Keywords: Model predictive control * Identification for control * Building climate control Subject RIV: BC - Control Systems Theory Impact factor: 2.602, year: 2016 http://library.utia.cas.cz/separaty/2016/TR/celikovsky-0460306.pdf
Prediction of high airway pressure using a non-linear autoregressive model of pulmonary mechanics.
Langdon, Ruby; Docherty, Paul D; Schranz, Christoph; Chase, J Geoffrey
2017-11-02
For mechanically ventilated patients with acute respiratory distress syndrome (ARDS), suboptimal PEEP levels can cause ventilator induced lung injury (VILI). In particular, high PEEP and high peak inspiratory pressures (PIP) can cause overdistension of alveoli that is associated with VILI. However, PEEP must also be sufficient to maintain recruitment in ARDS lungs. A lung model that accurately and precisely predicts the outcome of an increase in PEEP may allow dangerously high PIP to be avoided, and reduce the incidence of VILI. Sixteen pressure-flow data sets were collected from nine mechanically ventilated ARDS patients that underwent one or more recruitment manoeuvres. A nonlinear autoregressive (NARX) model was identified on one or more adjacent PEEP steps, and extrapolated to predict PIP at 2, 4, and 6 cmH2O PEEP horizons. The analysis considered whether the predicted and measured PIP exceeded a threshold of 40 cmH2O. A direct comparison of the method was made using the first order model of pulmonary mechanics (FOM(I)). Additionally, a further, more clinically appropriate method for the FOM was tested, in which the FOM was trained on a single PEEP prior to prediction (FOM(II)). The NARX model exhibited very high sensitivity (> 0.96) in all cases, and a high specificity (> 0.88). While both FOM methods had a high specificity (> 0.96), the sensitivity was much lower, with a mean of 0.68 for FOM(I), and 0.82 for FOM(II). Clinically, false negatives are more harmful than false positives, as a high PIP may result in distension and VILI. Thus, the NARX model may be more effective than the FOM in allowing clinicians to reduce the risk of applying a PEEP that results in dangerously high airway pressures.
Abad, Cesar C C; Barros, Ronaldo V; Bertuzzi, Romulo; Gagliardi, João F L; Lima-Silva, Adriano E; Lambert, Mike I; Pires, Flavio O
2016-06-01
The aim of this study was to verify the power of VO2max, peak treadmill running velocity (PTV), and running economy (RE), unadjusted or allometrically adjusted, in predicting 10 km running performance. Eighteen male endurance runners performed: 1) an incremental test to exhaustion to determine VO2max and PTV; 2) a constant submaximal run at 12 km·h⁻¹ on an outdoor track for RE determination; and 3) a 10 km running race. Unadjusted (VO2max, PTV and RE) and adjusted variables (VO2max^0.72, PTV^0.72 and RE^0.60) were investigated through independent multiple regression models to predict 10 km running race time. There were no significant correlations between 10 km running time and either the adjusted or unadjusted VO2max. Significant correlations (p < 0.05) were found, with r > 0.84 and power > 0.88. The allometrically adjusted predictive model was composed of PTV^0.72 and RE^0.60 and explained 83% of the variance in 10 km running time with a standard error of the estimate (SEE) of 1.5 min. The unadjusted model, composed of PTV alone, accounted for 72% of the variance in 10 km running time (SEE of 1.9 min). Both regression models provided powerful estimates of 10 km running time; however, the unadjusted PTV may provide an uncomplicated estimation.
International Nuclear Information System (INIS)
Huo, Pengfei; Miller, Thomas F. III; Coker, David F.
2013-01-01
A partial linearized path integral approach is used to calculate the condensed phase electron transfer (ET) rate by directly evaluating the flux-flux/flux-side quantum time correlation functions. We demonstrate for a simple ET model that this approach can reliably capture the transition between non-adiabatic and adiabatic regimes as the electronic coupling is varied, while other commonly used semi-classical methods are less accurate over the broad range of electronic couplings considered. Further, we show that the approach reliably recovers the Marcus turnover as a function of thermodynamic driving force, giving highly accurate rates over four orders of magnitude from the normal to the inverted regimes. We also demonstrate that the approach yields accurate rate estimates over five orders of magnitude of inverse temperature. Finally, the approach outlined here accurately captures the electronic coherence in the flux-flux correlation function that is responsible for the decreased rate in the inverted regime
Directory of Open Access Journals (Sweden)
Ririn Kusumawati
2016-05-01
In the classification step, using a Hidden Markov Model, the voice signal is analyzed to find the most probable value that can be recognized. The parameters obtained from the modeling are used for comparison with the speech of Arabic speakers. From the test results, classification using Hidden Markov Models with Linear Predictive Coding feature extraction achieved an average accuracy of 78.6% for test data with a sampling frequency of 8,000 Hz, 80.2% for test data with a sampling frequency of 22,050 Hz, and 79% for test data sampled at 44,100 Hz.
International Nuclear Information System (INIS)
Sharma, P.; Khare, M.
2000-01-01
Historical data of the time-series of carbon monoxide (CO) concentration were analysed using the Box-Jenkins modelling approach. Univariate Linear Stochastic Models (ULSMs) were developed to examine the degree of prediction possible for situations where only a limited data set, restricted to past records of the pollutant, is available. The developed models can be used to provide short-term, real-time forecasts of extreme CO concentrations for an Air Quality Control Region (AQCR), comprising a major traffic intersection in a Central Business District of Delhi City, India. (author)
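The simplest Box-Jenkins building block, an AR(1) model, illustrates the kind of short-term forecast a ULSM provides. A sketch on a synthetic CO-like series (the series and all parameters are invented for illustration):

```python
import random

def fit_ar1(series):
    """Estimate the AR(1) coefficient via the lag-1 autocorrelation (Yule-Walker)."""
    m = sum(series) / len(series)
    dev = [v - m for v in series]
    num = sum(dev[i] * dev[i + 1] for i in range(len(dev) - 1))
    den = sum(d * d for d in dev)
    return num / den, m

def forecast_ar1(last, phi, mean):
    """One-step-ahead forecast: the next deviation shrinks toward the mean by phi."""
    return mean + phi * (last - mean)

random.seed(7)
phi_true = 0.8
co = [5.0]                                  # synthetic hourly CO series (ppm), mean 5
for _ in range(5000):
    co.append(5.0 + phi_true * (co[-1] - 5.0) + random.gauss(0.0, 0.5))

phi_hat, mean_hat = fit_ar1(co)
next_co = forecast_ar1(co[-1], phi_hat, mean_hat)
```

A full ULSM would also difference the series and add moving-average terms where the autocorrelation structure requires them; this fragment only shows the identification-and-forecast loop in miniature.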
Wosnitza, Jan Henrik; Denz, Cornelia
2013-09-01
We employ the log-periodic power law (LPPL) to analyze the late-2000s financial crisis from the perspective of critical phenomena. The main purpose of this study is to examine whether LPPL structures in the development of credit default swap (CDS) spreads can be used for default classification. Based on the different triggers of Bear Stearns’ near bankruptcy during the late-2000s financial crisis and Ford’s insolvency in 2009, this study provides a quantitative description of the mechanism behind bank runs. We apply the Johansen-Ledoit-Sornette (JLS) positive feedback model to explain the rise of financial institutions’ CDS spreads during the global financial crisis of 2007-2009. This investigation is based on CDS spreads of 40 major banks over the period from June 2007 to April 2009, which includes a significant CDS spread increase. The qualitative data analysis indicates that the CDS spread variations followed LPPL patterns during the global financial crisis. Furthermore, the univariate classification performances of seven LPPL parameters as default indicators are measured by Mann-Whitney U tests. The present study supports the hypothesis that discrete scale-invariance governs the dynamics of financial markets and suggests the application of new and fast updateable default indicators to capture the buildup of long-range correlations between creditors.
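The LPPL functional form referred to above can be written down directly; the parameter values below are hypothetical, chosen only to show the faster-than-exponential trend decorated with log-periodic oscillations:

```python
import math

def lppl(t, tc, A, B, m, C, omega, phi):
    """Log-periodic power law:
    ln p(t) = A + B*(tc - t)^m * (1 + C*cos(omega*ln(tc - t) + phi))."""
    dt = tc - t
    return A + B * dt ** m * (1.0 + C * math.cos(omega * math.log(dt) + phi))

# Hypothetical parameters: critical time tc = 100 (days); B < 0 and 0 < m < 1 give
# accelerating growth toward tc, decorated with log-periodic oscillations.
params = dict(tc=100.0, A=5.0, B=-1.0, m=0.5, C=0.1, omega=8.0, phi=0.0)
curve = [lppl(t, **params) for t in range(0, 100)]
```

In the cited study the seven parameters (tc, A, B, m, C, omega, phi) are fitted to each bank's CDS spread series and then used individually as candidate default indicators.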
Wamala, Robert
2016-01-01
Purpose: Prospective students of law are required to demonstrate competence in certain disciplines to attain admission to law school. The grounding in the disciplines is expected to demonstrate competencies required to excel academically in law school. The purpose of this study is to investigate the relevance of the law school admission test to…
DEFF Research Database (Denmark)
Møldrup, Per; Chamindu, T. K. K. Deepagoda; Hamamoto, S.
2013-01-01
The soil-gas diffusion is a primary driver of transport, reactions, emissions, and uptake of vadose zone gases, including oxygen, greenhouse gases, fumigants, and spilled volatile organics. The soil-gas diffusion coefficient, Dp, depends not only on soil moisture content, texture, and compaction...... but also on the local-scale variability of these. Different predictive models have been developed to estimate Dp in intact and repacked soil, but clear guidelines for model choice at a given soil state are lacking. In this study, the water-induced linear reduction (WLR) model for repacked soil is made...... air) in repacked soils containing between 0 and 54% clay. With Cm = 2.1, the SWLR model on average gave excellent predictions for 290 intact soils, performing well across soil depths, textures, and compactions (dry bulk densities). The SWLR model generally outperformed similar, simple Dp/Do models...
A Family of High-Performance Solvers for Linear Model Predictive Control
DEFF Research Database (Denmark)
Frison, Gianluca; Sokoler, Leo Emil; Jørgensen, John Bagterp
2014-01-01
In Model Predictive Control (MPC), an optimization problem has to be solved at each sampling time, and this has traditionally limited the use of MPC to systems with slow dynamics. In this paper, we propose an efficient solution strategy for the unconstrained sub-problems that give the search-direction in Interior-Point (IP) methods for MPC, and that usually are the computational bottleneck. This strategy combines a Riccati-like solver with the use of high-performance computing techniques: in particular, in this paper we explore the performance boost given by the use of single precision computation...
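The unconstrained subproblem mentioned above is classically solved by a backward Riccati recursion. A scalar sketch (an illustration of the structure, not the paper's high-performance implementation) shows the idea:

```python
def riccati_fixed_point(a, b, q, r, iters=200):
    """Backward Riccati recursion for the scalar LQR subproblem
    x+ = a*x + b*u with stage cost q*x^2 + r*u^2."""
    p = q
    for _ in range(iters):
        p = q + a * a * p - (a * p * b) ** 2 / (r + b * b * p)
    k = a * p * b / (r + b * b * p)   # optimal state feedback u = -k*x
    return p, k

p, k = riccati_fixed_point(a=1.2, b=1.0, q=1.0, r=0.1)

# At convergence p satisfies the scalar discrete algebraic Riccati equation,
# so the residual below should vanish to machine precision.
residual = abs(p - (1.0 + 1.2 ** 2 * p - (1.2 * p * 1.0) ** 2 / (0.1 + 1.0 * p)))
```

In the MPC setting the same recursion runs backward over the horizon with matrix-valued P, and exploiting its block structure (rather than solving one large sparse system) is what makes Riccati-based IP solvers fast.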
International Nuclear Information System (INIS)
Lee, Jae Yong; Na, Man Gyun
2011-01-01
Diametral creep of the pressure tube (PT) is one of the principal aging mechanisms governing the heat transfer and hydraulic degradation of a heat transport system (HTS). PT diametral creep leads to diametral expansion that affects the thermal hydraulic characteristics of the coolant channels and the critical heat flux. Therefore, it is essential to predict the PT diametral creep in CANDU reactors, which is caused mainly by fast neutron irradiation, reactor coolant temperature and so forth. The currently used PT diametral creep prediction model considers the complex interactions between the effects of temperature and fast neutron flux on the deformation of PT zirconium alloys. The model assumes that long-term steady-state deformation consists of separable, additive components from thermal creep, irradiation creep and irradiation growth. This is a mechanistic model based on measured data. However, this model has high prediction uncertainty. Recently, a statistical error modeling method was developed using plant inspection data from the Bruce B CANDU reactor. The aim of this study was to develop a bundle position-wise linear model (BPLM) to predict PT diametral creep employing previously measured PT diameters and HTS operating conditions. There are twelve bundles in a fuel channel, and for each bundle a linear model was developed by using dependent variables such as the fast neutron fluxes and the bundle temperatures. The training data set was selected using the subtractive clustering method. Data from 39 channels, comprising 80 percent of the 49 measured channels from Units 2, 3 and 4, were used to develop the BPLM models. The remaining 10 channels' data were used to test the developed BPLM models. The BPLM was optimized by the maximum likelihood estimation method. The developed BPLM to predict PT diametral creep was verified using the operating data gathered from Units 2, 3 and 4 in Korea. Two error components for the BPLM, which are the epistemic
Energy Technology Data Exchange (ETDEWEB)
Bause, Rainer; Schulz, Woldemar [Amprion GmbH, Dortmund (Germany); Buehler, Holger [EnBW TNG, Stuttgart (Germany); Hodurek, Claus [50Hertz Transmission GmbH, Berlin (Germany); Kiessling, Axel [TenneT TSO GmbH, Bayreuth (Germany)
2011-10-15
By paying their monthly electricity bill, German consumers are promoting the transition to tomorrow's resource-efficient electricity supply system, in which the largest part of the electricity used is to come from renewable energy resources. Every year Germany's transmission system operators publish what is referred to as the "EEG-Umlage" (the Electricity Feed Law levy, or EEG levy), which shows how much every German household will be paying for the promotion of renewable energies in the coming year. The determination of the EEG levy involves uncertainties and imponderabilities which have to be taken into account in its calculation. The crucial task is to find a suitable systematic scheme for predicting the renewable energy yield.
Directory of Open Access Journals (Sweden)
M. Khamforoush
2009-12-01
A new and effective electrospinning method has been developed for producing aligned polymer nanofibers. The conventional electrospinning technique has been modified to fabricate nanofibers as a uniaxially aligned array. The key to the success of this technique is the creation of a rotating jet by using a cylindrical collector at whose center the needle tip is located. The unique advantage of this method among current methods is the ability of the apparatus to continuously weave nanofibers in uniaxially aligned form. Fibers produced by this method are well-aligned, several meters in length, and can be spread over a large area. We have employed a voltage range of 6-16 kV, a collector diameter in the range of 20-50 cm, and various concentrations of PAN solutions ranging from 15 wt% to 19 wt%. The electrospun nanofibers could be conveniently formed onto the surface of any thin substrate, such as a glass sampling plate, for subsequent treatments and other applications. Therefore, the linear speed of the electrospinning process is determined experimentally as a function of cylindrical collector diameter, polymer concentration, and field potential difference.
Directory of Open Access Journals (Sweden)
Alessandra Bianchin
The purpose of this study was to investigate the blood stage of the malaria causing parasite, Plasmodium falciparum, to predict potential protein interactions between the parasite merozoite and the host erythrocyte and design peptides that could interrupt these predicted interactions. We screened the P. falciparum and human proteomes for computationally predicted short linear motifs (SLiMs) in cytoplasmic portions of transmembrane proteins that could play roles in the invasion of the erythrocyte by the merozoite, an essential step in malarial pathogenesis. We tested thirteen peptides predicted to contain SLiMs, twelve of them palmitoylated to enhance membrane targeting, and found three that blocked parasite growth in culture by inhibiting the initiation of new infections in erythrocytes. Scrambled peptides for two of the most promising peptides suggested that their activity may be reflective of amino acid properties, in particular, positive charge. However, one peptide showed effects which were stronger than those of scrambled peptides. This was derived from human red blood cell glycophorin-B. We concluded that proteome-wide computational screening of the intracellular regions of both host and pathogen adhesion proteins provides potential lead peptides for the development of anti-malarial compounds.
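SLiMs are conventionally expressed as short regular expressions, so a proteome screen of cytoplasmic tails reduces to pattern scanning. A toy sketch follows; the motif patterns and the tail sequence are hypothetical examples in ELM-style notation, not the SLiMs tested in the study:

```python
import re

def scan_slims(sequence, motifs):
    """Return (motif_name, start_position, matched_substring) for every SLiM hit."""
    hits = []
    for name, pattern in motifs.items():
        for m in re.finditer(pattern, sequence):
            hits.append((name, m.start(), m.group()))
    return hits

# Hypothetical motifs (illustrative consensus patterns, not the paper's SLiMs).
motifs = {
    "SH3_ligand": r"P..P",         # PxxP polyproline motif
    "TRAF2_binding": r"[PSAT].QE", # example acidic consensus
}
tail = "MKRTLPVVPSVQEGAPQAP"       # toy cytoplasmic tail sequence
hits = scan_slims(tail, motifs)
```

Note that `re.finditer` reports non-overlapping matches only; a real screen would also weight hits by conservation and disorder propensity before nominating peptides for testing.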
Tang, Zaixiang; Shen, Yueping; Li, Yan; Zhang, Xinyan; Wen, Jia; Qian, Chen'ao; Zhuang, Wenzhuo; Shi, Xinghua; Yi, Nengjun
2018-03-15
Large-scale molecular data have been increasingly used as an important resource for prognostic prediction of diseases and detection of associated genes. However, standard approaches for omics data analysis ignore the group structure among genes encoded in functional relationships or pathway information. We propose new Bayesian hierarchical generalized linear models, called group spike-and-slab lasso GLMs, for predicting disease outcomes and detecting associated genes by incorporating large-scale molecular data and group structures. The proposed model employs a mixture double-exponential prior for coefficients that induces a self-adaptive shrinkage amount on different coefficients. The group information is incorporated into the model by setting group-specific parameters. We have developed a fast and stable deterministic algorithm to fit the proposed hierarchical GLMs, which can perform variable selection within groups. We assess the performance of the proposed method on several simulated scenarios, by varying the overlap among groups, group size, number of non-null groups, and the correlation within groups. Compared with existing methods, the proposed method provides not only more accurate estimates of the parameters but also better prediction. We further demonstrate the application of the proposed procedure on three cancer datasets by utilizing pathway structures of genes. Our results show that the proposed method generates powerful models for predicting disease outcomes and detecting associated genes. The methods have been implemented in a freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/). nyi@uab.edu. Supplementary data are available at Bioinformatics online.
Adaptive LINE-P: An Adaptive Linear Energy Prediction Model for Wireless Sensor Network Nodes.
Ahmed, Faisal; Tamberg, Gert; Le Moullec, Yannick; Annus, Paul
2018-04-05
In the context of wireless sensor networks, energy prediction models are increasingly useful tools that can facilitate the power management of the wireless sensor network (WSN) nodes. However, most of the existing models suffer from the so-called fixed weighting parameter, which limits their applicability when it comes to, e.g., solar energy harvesters with varying characteristics. Thus, in this article we propose the Adaptive LINE-P (all cases) model that calculates adaptive weighting parameters based on the stored energy profiles. Furthermore, we also present a profile compression method to reduce the memory requirements. To determine the performance of our proposed model, we have used real data for the solar and wind energy profiles. The simulation results show that our model achieves 90-94% accuracy and that the compressed method reduces memory overheads by 50% as compared to state-of-the-art models.
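One way to picture an adaptive weighting parameter is as a blend between a stored energy profile and the most recent observation, with the weight re-fitted from the stored data rather than fixed in advance. This is a simplified sketch under assumed profiles, not the actual Adaptive LINE-P equations:

```python
def best_weight(profile, observed, grid=101):
    """Pick the weight that best blends the stored profile with the previous
    slot's observation, by minimizing squared error over the slots seen so far."""
    best_w, best_err = 0.0, float("inf")
    for i in range(grid):
        w = i / (grid - 1)
        err = 0.0
        for t in range(1, len(observed)):
            pred = w * profile[t] + (1.0 - w) * observed[t - 1]
            err += (observed[t] - pred) ** 2
        if err < best_err:
            best_w, best_err = w, err
    return best_w

# Toy solar profile (arbitrary units per time slot); today repeats yesterday
# exactly, so the adaptive weight should lock fully onto the stored profile.
profile = [0, 1, 3, 6, 8, 9, 8, 6, 3, 1, 0]
observed = list(profile)
w = best_weight(profile, observed)
```

On a day whose harvest deviates from the stored profile, the fitted weight shifts toward recent observations, which is the behaviour that a fixed weighting parameter cannot reproduce.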
McDowell, J. J; Wood, Helena M.
1985-01-01
Four human subjects worked on all combinations of five variable-interval schedules and five reinforcer magnitudes (¢/reinforcer) in each of two phases of the experiment. In one phase the force requirement on the operandum was low (1 or 11 N) and in the other it was high (25 or 146 N). Estimates of Herrnstein's κ were obtained at each reinforcer magnitude. The results were: (1) response rate was more sensitive to changes in reinforcement rate at the high than at the low force requirement, (2) κ increased from the beginning to the end of the magnitude range for all subjects at both force requirements, (3) the reciprocal of κ was a linear function of the reciprocal of reinforcer magnitude for seven of the eight data sets, and (4) the rate of change of κ was greater at the high than at the low force requirement by an order of magnitude or more. The second and third findings confirm predictions made by linear system theory, and replicate the results of an earlier experiment (McDowell & Wood, 1984). The fourth finding confirms a further prediction of the theory and supports the theory's interpretation of conflicting data on the constancy of Herrnstein's κ. PMID:16812408
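The third finding - that the reciprocal of κ is a linear function of the reciprocal of reinforcer magnitude - follows if κ saturates hyperbolically with magnitude. A sketch with assumed constants shows how the linearity can be checked by an ordinary least-squares fit (the constants are invented, not the subjects' values):

```python
def linfit(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
            sum((a - mx) ** 2 for a in x)
    return slope, my - slope * mx

# If kappa = k_max * m / (m + c), then 1/kappa = 1/k_max + (c/k_max) * (1/m),
# i.e. exactly linear in reciprocal magnitude.
k_max, c = 120.0, 2.0                      # hypothetical asymptote and half-magnitude
mags = [0.5, 1.0, 2.0, 4.0, 8.0]           # reinforcer magnitudes (cents)
kappa = [k_max * m / (m + c) for m in mags]

slope, intercept = linfit([1.0 / m for m in mags], [1.0 / k for k in kappa])
# Recovered intercept ~ 1/k_max and slope ~ c/k_max.
```

Plotting 1/κ against 1/magnitude and checking linearity is exactly the diagnostic the abstract reports holding for seven of the eight data sets.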
Zhang, Huiling; Huang, Qingsheng; Bei, Zhendong; Wei, Yanjie; Floudas, Christodoulos A
2016-03-01
In this article, we present COMSAT, a hybrid framework for residue contact prediction of transmembrane (TM) proteins, integrating a support vector machine (SVM) method and a mixed integer linear programming (MILP) method. COMSAT consists of two modules: COMSAT_SVM which is trained mainly on position-specific scoring matrix features, and COMSAT_MILP which is an ab initio method based on optimization models. Contacts predicted by the SVM model are ranked by SVM confidence scores, and a threshold is trained to improve the reliability of the predicted contacts. For TM proteins with no contacts above the threshold, COMSAT_MILP is used. The proposed hybrid contact prediction scheme was tested on two independent TM protein sets based on the contact definition of 14 Å between Cα-Cα atoms. First, using a rigorous leave-one-protein-out cross validation on the training set of 90 TM proteins, an accuracy of 66.8%, a coverage of 12.3%, a specificity of 99.3% and a Matthews' correlation coefficient (MCC) of 0.184 were obtained for residue pairs that are at least six amino acids apart. Second, when tested on a test set of 87 TM proteins, the proposed method showed a prediction accuracy of 64.5%, a coverage of 5.3%, a specificity of 99.4% and a MCC of 0.106. COMSAT shows satisfactory results when compared with 12 other state-of-the-art predictors, and is more robust in terms of prediction accuracy as the length and complexity of TM protein increase. COMSAT is freely accessible at http://hpcc.siat.ac.cn/COMSAT/. © 2016 Wiley Periodicals, Inc.
Directory of Open Access Journals (Sweden)
Luigi Capoferri
Prediction of human Cytochrome P450 (CYP) binding affinities of small ligands, i.e., substrates and inhibitors, represents an important task for predicting drug-drug interactions. A quantitative assessment of the ligand binding affinity towards different CYPs can provide an estimate of inhibitory activity or an indication of isoforms prone to interact with a given substrate or inhibitor. However, the accuracy of global quantitative models for CYP substrate binding or inhibition based on traditional molecular descriptors can be limited, because of the lack of information on the structure and flexibility of the catalytic site of CYPs. Here we describe the application of a method that combines protein-ligand docking, Molecular Dynamics (MD) simulations and Linear Interaction Energy (LIE) theory, to allow for quantitative CYP affinity prediction. Using this combined approach, a LIE model for human CYP 1A2 was developed and evaluated, based on a structurally diverse dataset for which the estimated experimental uncertainty was 3.3 kJ mol⁻¹. For the computed CYP 1A2 binding affinities, the model showed a root mean square error (RMSE) of 4.1 kJ mol⁻¹ and a standard error in prediction (SDEP) in cross-validation of 4.3 kJ mol⁻¹. A novel approach that includes information on both structural ligand description and protein-ligand interaction was developed for estimating the reliability of predictions, and was able to identify compounds from an external test set with a SDEP for the predicted affinities of 4.6 kJ mol⁻¹ (corresponding to 0.8 pKi units).
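After the MD averaging, the LIE method reduces to a linear regression, ΔG ≈ α⟨ΔV_vdW⟩ + β⟨ΔV_el⟩ + γ. The sketch below uses invented interaction-energy averages and illustrative coefficients, not the paper's fitted CYP 1A2 model:

```python
def lie_affinity(dv_vdw, dv_el, alpha=0.18, beta=0.5, gamma=0.0):
    """Linear Interaction Energy estimate of binding free energy (kJ/mol).
    dv_vdw, dv_el: bound-minus-free averages of the van der Waals and
    electrostatic ligand-surrounding interaction energies."""
    return alpha * dv_vdw + beta * dv_el + gamma

def rmse(pred, obs):
    return (sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(pred)) ** 0.5

# Toy ligand set: (dV_vdW, dV_el) averages from hypothetical MD runs, in kJ/mol.
ligands = [(-60.0, -20.0), (-45.0, -10.0), (-80.0, -30.0)]
observed = [-19.5, -14.0, -30.1]           # hypothetical experimental dG values
predicted = [lie_affinity(v, e) for v, e in ligands]
error = rmse(predicted, observed)
```

In practice α, β and γ are calibrated against a training set per protein target, and the RMSE/SDEP figures quoted in the abstract are exactly this kind of error statistic computed over the calibration and external sets.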
Directory of Open Access Journals (Sweden)
Deborah Apthorp
Visually-induced illusions of self-motion (vection) can be compelling for some people, but they are subject to large individual variations in strength. Do these variations depend, at least in part, on the extent to which people rely on vision to maintain their postural stability? We investigated this by comparing physical posture measures to subjective vection ratings. Using a Bertec balance plate in a brightly-lit room, we measured 13 participants' excursions of the centre of foot pressure (CoP) over a 60-second period with eyes open and with eyes closed during quiet stance. Subsequently, we collected vection strength ratings for large optic flow displays while seated, using both verbal ratings and online throttle measures. We also collected measures of postural sway (changes in anterior-posterior CoP) in response to the same visual motion stimuli while standing on the plate. The magnitude of standing sway in response to expanding optic flow (in comparison to blank fixation periods) was predictive of both verbal and throttle measures for seated vection. In addition, the ratio between eyes-open and eyes-closed CoP excursions during quiet stance (using the area of postural sway) significantly predicted seated vection for both measures. Interestingly, these relationships were weaker for contracting optic flow displays, though these produced both stronger vection and more sway. Next we used a non-linear analysis (recurrence quantification analysis, RQA) of the fluctuations in anterior-posterior position during quiet stance (both with eyes closed and eyes open); this was a much stronger predictor of seated vection for both expanding and contracting stimuli. Given the complex multisensory integration involved in postural control, our study adds to the growing evidence that non-linear measures drawn from complexity theory may provide a more informative measure of postural sway than the conventional linear measures.
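The simplest RQA measure, the recurrence rate, is the fraction of point pairs in a time series that fall within a small radius of each other; repetitive sway produces many recurrences, a drifting trace few. A sketch on toy sway traces (all values invented):

```python
def recurrence_rate(series, radius):
    """Fraction of (i, j) pairs whose values lie within `radius` of each other,
    i.e. the density of the (unembedded) recurrence matrix."""
    n = len(series)
    recurrent = sum(
        1
        for i in range(n)
        for j in range(n)
        if abs(series[i] - series[j]) < radius
    )
    return recurrent / (n * n)

quiet = [0.0, 0.1, 0.0, -0.1, 0.0, 0.1, 0.0, -0.1]   # regular, repetitive sway
drift = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5]     # monotone drift, few recurrences
rr_quiet = recurrence_rate(quiet, radius=0.05)
rr_drift = recurrence_rate(drift, radius=0.05)
```

Full RQA additionally embeds the series in a delay-coordinate space and derives measures such as determinism and laminarity from diagonal and vertical line structures in the recurrence matrix; the recurrence rate above is only the entry point.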
A model for preemptive maintenance of medical linear accelerators—predictive maintenance
International Nuclear Information System (INIS)
Able, Charles M.; Baydush, Alan H.; Nguyen, Callistus; Gersh, Jacob; Ndlovu, Alois; Rebo, Igor; Booth, Jeremy; Perez, Mario; Sintay, Benjamin; Munley, Michael T.
2016-01-01
Unscheduled accelerator downtime can negatively impact the quality of life of patients during their struggle against cancer. Currently digital data accumulated in the accelerator system is not being exploited in a systematic manner to assist in more efficient deployment of service engineering resources. The purpose of this study is to develop an effective process for detecting unexpected deviations in accelerator system operating parameters and/or performance that predicts component failure or system dysfunction and allows maintenance to be performed prior to the actuation of interlocks. The proposed predictive maintenance (PdM) model is as follows: 1) deliver a daily quality assurance (QA) treatment; 2) automatically transfer and interrogate the resulting log files; 3) once baselines are established, subject daily operating and performance values to statistical process control (SPC) analysis; 4) determine if any alarms have been triggered; and 5) alert facility and system service engineers. A robust volumetric modulated arc QA treatment is delivered to establish mean operating values and perform continuous sampling and monitoring using SPC methodology. Chart limits are calculated using a hybrid technique that includes the use of the standard SPC 3σ limits and an empirical factor based on the parameter/system specification. There are 7 accelerators currently under active surveillance. Currently 45 parameters plus each MLC leaf (120) are analyzed using Individual and Moving Range (I/MR) charts. The initial warning and alarm rule is as follows: warning (2 out of 3 consecutive values ≥ 2σ_hybrid) and alarm (2 out of 3 consecutive values or 3 out of 5 consecutive values ≥ 3σ_hybrid). A customized graphical user interface provides a means to review the SPC charts for each parameter and a visual color code to alert the reviewer of parameter status. Forty-five synthetic errors/changes were introduced to test the effectiveness of our initial chart limits. Forty
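The warning/alarm run rules quoted above are simple to state in code. The sketch below assumes each monitored parameter is reduced to a sequence of daily values with a known baseline mean and hybrid sigma; the function name and interface are hypothetical, not part of the described system.

```python
def spc_flags(values, mean, sigma):
    """Run-rule check: warning if 2 of 3 consecutive points are >= 2 sigma
    from the mean; alarm if 2 of 3 or 3 of 5 consecutive points are >= 3 sigma."""
    z = [abs(v - mean) / sigma for v in values]
    warning = any(sum(s >= 2 for s in z[i:i + 3]) >= 2
                  for i in range(len(z) - 2))
    alarm = (any(sum(s >= 3 for s in z[i:i + 3]) >= 2
                 for i in range(len(z) - 2))
             or any(sum(s >= 3 for s in z[i:i + 5]) >= 3
                    for i in range(len(z) - 4)))
    return warning, alarm
```

In practice such a check would run once per parameter (and per MLC leaf) after each daily QA delivery.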
A Non-linear Predictive Model of Borderline Personality Disorder Based on Multilayer Perceptron.
Maldonato, Nelson M; Sperandeo, Raffaele; Moretto, Enrico; Dell'Orco, Silvia
2018-01-01
Borderline Personality Disorder is a serious mental disease, classified in Cluster B of the DSM-IV-TR personality disorders. People with this syndrome present an anamnesis of traumatic experiences and show dissociative symptoms. Since not all subjects who have been victims of trauma develop a Borderline Personality Disorder, the emergence of this serious disease seems to have fragility of character as a predisposing condition. In fact, numerous studies show that subjects positive for a diagnosis of Borderline Personality Disorder had extremely high or extremely low scores on some temperamental dimensions (Harm Avoidance and Reward Dependence) and character dimensions (Cooperativeness and Self-Directedness). In a sample of 602 subjects with consecutive access to an Outpatient Mental Health Service, the presence of Borderline Personality Disorder was evaluated using the semi-structured interview for the DSM-IV-TR personality disorders. In this population we assessed the presence of dissociative symptoms with the Dissociative Experiences Scale and the personality traits with the Temperament and Character Inventory developed by Cloninger. To assess the weight and the predictive value of these psychopathological dimensions in relation to the Borderline Personality Disorder diagnosis, a neural network statistical model called a "multilayer perceptron" was implemented. This model was developed with a dichotomous dependent variable, consisting of the presence or absence of the diagnosis of borderline personality disorder, and with five covariates. The first is the taxonomic subscale of the Dissociative Experiences Scale; the others are temperament and character traits: Novelty Seeking, Harm Avoidance, Self-Directedness and Cooperativeness. The statistical model, which proved satisfactory, showed a significant capacity (89%) to predict the presence of borderline personality disorder. Furthermore, the dissociative symptoms seem to have a greater influence than
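As a rough illustration of the kind of model described, here is a minimal one-hidden-layer perceptron trained by gradient descent on synthetic data standing in for the five covariates; none of the numbers correspond to the study's data, its network architecture, or its fitted weights.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for the five covariates (dissociation subscale plus
# four temperament/character traits) and a binary diagnosis label.
X = rng.standard_normal((400, 5))
true_w = np.array([1.5, -1.0, 0.8, -0.6, 0.4])
y = (X @ true_w + 0.3 * rng.standard_normal(400) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 8 tanh units, logistic output, full-batch gradient descent.
W1 = rng.standard_normal((5, 8)) * 0.5
b1 = np.zeros(8)
W2 = rng.standard_normal(8) * 0.5
b2 = 0.0
lr = 0.5
for _ in range(300):
    H = np.tanh(X @ W1 + b1)        # hidden activations
    p = sigmoid(H @ W2 + b2)        # predicted probability of diagnosis
    g = (p - y) / len(y)            # gradient of cross-entropy w.r.t. output logit
    W2 -= lr * (H.T @ g)
    b2 -= lr * g.sum()
    dH = np.outer(g, W2) * (1.0 - H ** 2)
    W1 -= lr * (X.T @ dH)
    b1 -= lr * dH.sum(axis=0)

H = np.tanh(X @ W1 + b1)
p = sigmoid(H @ W2 + b2)
accuracy = float(((p > 0.5) == y).mean())
```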
A Non-linear Predictive Model of Borderline Personality Disorder Based on Multilayer Perceptron
Directory of Open Access Journals (Sweden)
Nelson M. Maldonato
2018-04-01
Full Text Available Borderline Personality Disorder is a serious mental disease, classified in Cluster B of the DSM-IV-TR personality disorders. People with this syndrome present an anamnesis of traumatic experiences and show dissociative symptoms. Since not all subjects who have been victims of trauma develop a Borderline Personality Disorder, the emergence of this serious disease seems to have fragility of character as a predisposing condition. In fact, numerous studies show that subjects positive for a diagnosis of Borderline Personality Disorder had extremely high or extremely low scores on some temperamental dimensions (Harm Avoidance and Reward Dependence) and character dimensions (Cooperativeness and Self-Directedness). In a sample of 602 subjects with consecutive access to an Outpatient Mental Health Service, the presence of Borderline Personality Disorder was evaluated using the semi-structured interview for the DSM-IV-TR personality disorders. In this population we assessed the presence of dissociative symptoms with the Dissociative Experiences Scale and the personality traits with the Temperament and Character Inventory developed by Cloninger. To assess the weight and the predictive value of these psychopathological dimensions in relation to the Borderline Personality Disorder diagnosis, a neural network statistical model called a "multilayer perceptron" was implemented. This model was developed with a dichotomous dependent variable, consisting of the presence or absence of the diagnosis of borderline personality disorder, and with five covariates. The first is the taxonomic subscale of the Dissociative Experiences Scale; the others are temperament and character traits: Novelty Seeking, Harm Avoidance, Self-Directedness and Cooperativeness. The statistical model, which proved satisfactory, showed a significant capacity (89%) to predict the presence of borderline personality disorder. Furthermore, the dissociative symptoms seem to have a
A model for preemptive maintenance of medical linear accelerators-predictive maintenance.
Able, Charles M; Baydush, Alan H; Nguyen, Callistus; Gersh, Jacob; Ndlovu, Alois; Rebo, Igor; Booth, Jeremy; Perez, Mario; Sintay, Benjamin; Munley, Michael T
2016-03-10
Unscheduled accelerator downtime can negatively impact the quality of life of patients during their struggle against cancer. Currently digital data accumulated in the accelerator system is not being exploited in a systematic manner to assist in more efficient deployment of service engineering resources. The purpose of this study is to develop an effective process for detecting unexpected deviations in accelerator system operating parameters and/or performance that predicts component failure or system dysfunction and allows maintenance to be performed prior to the actuation of interlocks. The proposed predictive maintenance (PdM) model is as follows: 1) deliver a daily quality assurance (QA) treatment; 2) automatically transfer and interrogate the resulting log files; 3) once baselines are established, subject daily operating and performance values to statistical process control (SPC) analysis; 4) determine if any alarms have been triggered; and 5) alert facility and system service engineers. A robust volumetric modulated arc QA treatment is delivered to establish mean operating values and perform continuous sampling and monitoring using SPC methodology. Chart limits are calculated using a hybrid technique that includes the use of the standard SPC 3σ limits and an empirical factor based on the parameter/system specification. There are 7 accelerators currently under active surveillance. Currently 45 parameters plus each MLC leaf (120) are analyzed using Individual and Moving Range (I/MR) charts. The initial warning and alarm rule is as follows: warning (2 out of 3 consecutive values ≥ 2σ_hybrid) and alarm (2 out of 3 consecutive values or 3 out of 5 consecutive values ≥ 3σ_hybrid). A customized graphical user interface provides a means to review the SPC charts for each parameter and a visual color code to alert the reviewer of parameter status. Forty-five synthetic errors/changes were introduced to test the effectiveness of our initial chart limits. Forty
The Protein Cost of Metabolic Fluxes: Prediction from Enzymatic Rate Laws and Cost Minimization.
Directory of Open Access Journals (Sweden)
Elad Noor
2016-11-01
Full Text Available Bacterial growth depends crucially on metabolic fluxes, which are limited by the cell's capacity to maintain metabolic enzymes. The necessary enzyme amount per unit flux is a major determinant of metabolic strategies both in evolution and bioengineering. It depends on enzyme parameters (such as kcat and KM constants), but also on metabolite concentrations. Moreover, similar amounts of different enzymes might incur different costs for the cell, depending on enzyme-specific properties such as protein size and half-life. Here, we developed enzyme cost minimization (ECM), a scalable method for computing enzyme amounts that support a given metabolic flux at a minimal protein cost. The complex interplay of enzyme and metabolite concentrations, e.g. through thermodynamic driving forces and enzyme saturation, would make it hard to solve this optimization problem directly. By treating enzyme cost as a function of metabolite levels, we formulated ECM as a numerically tractable, convex optimization problem. Its tiered approach allows for building models at different levels of detail, depending on the amount of available data. Validating our method with measured metabolite and protein levels in E. coli central metabolism, we found typical prediction fold errors of 4.1 and 2.6, respectively, for the two kinds of data. The cost-optimized metabolic state matches the data significantly better than randomly sampled metabolite profiles, supporting the hypothesis that enzyme cost is important for the fitness of E. coli. ECM can be used to predict enzyme levels and protein cost in natural and engineered pathways, and could be a valuable computational tool to assist metabolic engineering projects. Furthermore, it establishes a direct connection between protein cost and thermodynamics, and provides a physically plausible and computationally tractable way to include enzyme kinetics into constraint-based metabolic models, where kinetics have usually been ignored or
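A toy version of the trade-off at the heart of ECM can be sketched for a two-step pathway: one enzyme's demand grows as the intermediate approaches equilibrium, the other's grows as the intermediate becomes scarce. All constants below are illustrative, not fitted to any E. coli data, and the grid search stands in for the convex solver the method actually uses.

```python
import numpy as np

# Toy two-step pathway A -> M -> B carrying flux v. The first enzyme's
# demand rises as the intermediate M approaches equilibrium (m_eq); the
# second enzyme's demand rises as M becomes scarce relative to its K_M.
v, kcat1, kcat2, m_eq, K_M = 1.0, 10.0, 8.0, 5.0, 1.0

def enzyme_cost(m):
    e1 = v / (kcat1 * (1.0 - m / m_eq))   # thermodynamic penalty near equilibrium
    e2 = v / (kcat2 * (m / (m + K_M)))    # saturation penalty at low concentration
    return e1 + e2                        # total enzyme demand supporting flux v

# The cost is convex in the metabolite level, so a simple search suffices here.
m_grid = np.linspace(0.01, 4.99, 2000)
m_opt = float(m_grid[np.argmin(enzyme_cost(m_grid))])
```

The optimum sits strictly between the two penalty regimes, which is the qualitative point of treating enzyme cost as a function of metabolite levels.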
Kouramas, K.I.; Faísca, N.P.; Panos, C.; Pistikopoulos, E.N.
2011-01-01
This work presents a new algorithm for solving the explicit/multi-parametric model predictive control (or mp-MPC) problem for linear, time-invariant discrete-time systems, based on dynamic programming and multi-parametric programming techniques.
International Nuclear Information System (INIS)
Maï, S El; Petit, J; Mercier, S; Molinari, A
2014-01-01
The fragmentation of structures subject to dynamic conditions is a matter of interest for civil industries as well as for Defence institutions. Dynamic expansions of structures, such as cylinders or rings, have been performed to obtain crucial information on fragment distributions. Many authors have proposed to capture the experimental fragment-size distribution by FEA, introducing a perturbation into the FE model. Stability and bifurcation analyses have also been proposed to describe the evolution of the perturbation growth rate. In the present contribution, the multiple necking of a round bar under dynamic tensile loading is analysed by the FE method. A perturbation of the initial flow stress is introduced in the numerical model to trigger instabilities. The onset time and the dominant mode of necking have been characterized precisely and show power-law evolutions with the loading velocity, and more moderate ones with the amplitude and cell size of the perturbation. In the second part of the paper, the development of a linear stability analysis and the use of salient criteria in terms of the growth rate of perturbations enabled comparisons with the numerical results. A good correlation in terms of the onset time of instabilities and of the number of necks is shown.
Milquez-Sanabria, Harvey; Blanco-Cocom, Luis; Alzate-Gaviria, Liliana
2016-10-03
Agro-industrial wastes are an energy source for different industries. However, their application has not reached small industries. Previous and current research activities performed on the acidogenic phase of two-phase anaerobic digestion processes deal particularly with process optimization of the acid-phase reactors operating with a wide variety of substrates, both soluble and complex in nature. Mathematical models for anaerobic digestion have been developed to understand and improve the efficient operation of the process. At present, linear models, with the advantages of requiring less data, predicting future behavior and updating when a new set of data becomes available, have been developed. The aim of this research was to contribute to the reduction of organic solid waste, generate biogas and develop a simple but accurate mathematical model to predict the behavior of the UASB reactor. The system was maintained separate for 14 days, during which hydrolytic and acetogenic bacteria broke down onion waste and produced and accumulated volatile fatty acids. On this day, the two reactors were coupled and the system continued for 16 days more. The biogas and methane yields and volatile solid reduction were 0.6 ± 0.05 m³ (kg VS removed)⁻¹, 0.43 ± 0.06 m³ (kg VS removed)⁻¹ and 83.5 ± 9.8%, respectively. The model application showed a good prediction of all process parameters defined; the maximum error between experimental and predicted values was 1.84% for the alkalinity profile. A linear predictive adaptive model for anaerobic digestion of onion waste in a two-stage process was determined under batch-fed conditions. The organic load rate (OLR) was maintained constant for the entire operation, modifying the effluent hydrolysis reactor feed to the UASB reactor. This condition avoids intoxication of the UASB reactor and also limits external buffer addition.
International Nuclear Information System (INIS)
Saba, V.; Setayeshi, S.; Ghannadi-Maragheh, M.
2011-01-01
We have developed an algorithm for real-time detection and complete correction of patient motion effects during single photon emission computed tomography. The algorithm is based on a linear prediction filter (LPC). The new prediction of projection data algorithm (PPDA) detects most motions, such as those of the head, legs, and hands, by comparing the predicted and measured frame data. When the data acquisition for a specific frame is completed, the accuracy of the acquired data is evaluated by the PPDA. If patient motion is detected, the scanning procedure is stopped. After the patient rests in his or her true position, data acquisition is repeated only for the corrupted frame and the scanning procedure is continued. Various experimental data were used to validate the motion detection algorithm; on the whole, the proposed method was tested with approximately 100 test cases. The PPDA shows promising results. Using the PPDA enables us to prevent the scanner from collecting disturbed data during the scan and to replace them with motion-free data by real-time rescanning of the corrupted frames. As a result, the effects of patient motion are corrected in real time. (author)
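The prediction-versus-measurement test at the core of such a scheme can be illustrated with a scalar linear predictor. The sketch below reduces each frame to a single summary value (e.g. total counts), which is a simplification of the per-frame comparison described; the function names and the threshold are assumptions, not the PPDA's actual interface.

```python
import numpy as np

def lp_coefficients(signal, order):
    """Least-squares linear-prediction coefficients of a 1-D signal."""
    signal = np.asarray(signal, dtype=float)
    n = len(signal)
    A = np.array([signal[t:t + order] for t in range(n - order)])
    b = signal[order:]
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coef

def motion_detected(frame_counts, order=3, threshold=0.5):
    """Flag the newest frame when it deviates from its linear prediction.

    `frame_counts` is a 1-D summary of the projection data (e.g. total
    counts per frame); the real algorithm compares full frame data.
    """
    hist = np.asarray(frame_counts, dtype=float)
    coef = lp_coefficients(hist[:-1], order)   # fit on past frames only
    predicted = hist[-1 - order:-1] @ coef     # predict the newest frame
    return bool(abs(hist[-1] - predicted) > threshold)
```

When the flag is raised, the scan for that frame would be repeated rather than corrected after the fact.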
Sitdikova, Lyalya; Izotov, Victor; Berthault, Gi; Lalomov, Alexander
2010-05-01
expectation of rock type content; k is a constant of proportionality. Determining the constants of these functions allowed the Golovkinskii law to be expressed as a system of exponential equations for the Kazanian deposits of the studied region. Principles of facies transition in the interrelation zone of marine gray and red formations were established, so that the fields of development of hydrocarbon reservoir rocks and impermeable layers of the region can be predicted.
Directory of Open Access Journals (Sweden)
Paulino Pérez
2010-09-01
Full Text Available The availability of dense molecular markers has made possible the use of genomic selection in plant and animal breeding. However, models for genomic selection pose several computational and statistical challenges and require specialized computer programs, not always available to the end user and not implemented in standard statistical software yet. The R-package BLR (Bayesian Linear Regression implements several statistical procedures (e.g., Bayesian Ridge Regression, Bayesian LASSO in a unified framework that allows including marker genotypes and pedigree data jointly. This article describes the classes of models implemented in the BLR package and illustrates their use through examples. Some challenges faced when applying genomic-enabled selection, such as model choice, evaluation of predictive ability through cross-validation, and choice of hyper-parameters, are also addressed.
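BLR itself is an R package; as a language-neutral illustration of the shrinkage idea behind Bayesian Ridge Regression, the sketch below fits a closed-form ridge (random-regression BLUP) estimate to synthetic marker data. The genotype coding, dimensions, and the `lam` value are all assumptions, not defaults of the package.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic marker matrix (individuals x markers) and phenotypes; the
# scalar lam plays the role of the ridge variance-ratio hyper-parameter.
n, p = 100, 500
X = rng.integers(0, 3, size=(n, p)).astype(float)   # 0/1/2 genotype codes
beta_true = np.zeros(p)
beta_true[:20] = 0.3 * rng.standard_normal(20)      # 20 causal markers
y = X @ beta_true + rng.standard_normal(n)

# Ridge (random-regression BLUP) estimate: (X'X + lam I)^-1 X'y
lam = 50.0
beta_hat = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
fit_corr = float(np.corrcoef(y, X @ beta_hat)[0, 1])
```

The Bayesian treatment in BLR replaces the fixed `lam` with a prior and samples it, and the LASSO variant swaps the quadratic penalty for an absolute-value one.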
Fleming, David P.; Poplawski, J. V.
2002-01-01
Rolling-element bearing forces vary nonlinearly with bearing deflection. Thus, an accurate rotordynamic transient analysis requires bearing forces to be determined at each step of the transient solution. Analyses have been carried out to show the effect of accurate bearing transient forces (accounting for nonlinear, speed- and load-dependent bearing stiffness) as compared to the conventional use of average rolling-element bearing stiffness. Bearing forces were calculated by COBRA-AHS (Computer Optimized Ball and Roller Bearing Analysis - Advanced High Speed) and supplied to the rotordynamics code ARDS (Analysis of Rotor Dynamic Systems) for accurate simulation of rotor transient behavior. COBRA-AHS is a fast-running 5 degree-of-freedom computer code able to calculate high-speed rolling-element bearing load-displacement data for radial and angular contact ball bearings and also for cylindrical and tapered roller bearings. Results show that use of nonlinear bearing characteristics is essential for accurate prediction of rotordynamic behavior.
Yu, Donghai; Du, Ruobing; Xiao, Ji-Chang
2016-07-05
Ninety-six acidic phosphorus-containing molecules with pKa values from 1.88 to 6.26 were collected and divided into training and test sets by random sampling. Structural parameters were obtained by density functional theory calculation of the molecules. The relationship between the experimental pKa values and structural parameters was obtained by multiple linear regression fitting for the training set and tested with the test set; the R² values were 0.974 and 0.966 for the training and test sets, respectively. This regression equation, which quantitatively describes the influence of structural parameters on pKa and can be used to predict pKa values of similar structures, is significant for the design of new acidic phosphorus-containing extractants. © 2016 Wiley Periodicals, Inc.
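The fitting-and-validation scheme described (multiple linear regression of pKa on computed descriptors, judged by R²) can be sketched as follows, with synthetic descriptors standing in for the DFT-derived structural parameters of the actual molecules.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative stand-ins for DFT-derived structural descriptors and pKa
# values; these are not the paper's 96 molecules or its descriptors.
n = 60
descriptors = rng.standard_normal((n, 3))
pka = 4.0 + descriptors @ np.array([0.9, -0.5, 0.3]) + 0.1 * rng.standard_normal(n)

A = np.column_stack([np.ones(n), descriptors])   # intercept + descriptors
coef, *_ = np.linalg.lstsq(A, pka, rcond=None)   # multiple linear regression

pred = A @ coef
r2 = 1.0 - np.sum((pka - pred) ** 2) / np.sum((pka - pka.mean()) ** 2)
```

A proper replication would compute R² separately on a held-out test set, as the paper does.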
Guedes, Rafael Lucas Muniz; Rodrigues, Carla Monadeli Filgueira; Coatnoan, Nicolas; Cosson, Alain; Cadioli, Fabiano Antonio; Garcia, Herakles Antonio; Gerber, Alexandra Lehmkuhl; Machado, Rosangela Zacarias; Minoprio, Paola Marcella Camargo; Teixeira, Marta Maria Geraldes; de Vasconcelos, Ana Tereza Ribeiro
2018-02-27
Trypanosoma vivax is a parasite widespread across Africa and South America. Immunological methods using recombinant antigens have been developed aiming at specific and sensitive detection of infections caused by T. vivax. Here, we sequenced for the first time the transcriptome of a virulent T. vivax strain (Lins), isolated from an outbreak of severe disease in South America (Brazil) and performed a computational integrated analysis of genome, transcriptome and in silico predictions to identify and characterize putative linear B-cell epitopes from African and South American T. vivax. A total of 2278, 3936 and 4062 linear B-cell epitopes were respectively characterized for the transcriptomes of T. vivax LIEM-176 (Venezuela), T. vivax IL1392 (Nigeria) and T. vivax Lins (Brazil) and 4684 for the genome of T. vivax Y486 (Nigeria). The results presented are a valuable theoretical source that may pave the way for highly sensitive and specific diagnostic tools. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Directory of Open Access Journals (Sweden)
Elizabeth I. SKLAR
2016-04-01
Full Text Available Using pliable materials for the construction of robot bodies presents new and interesting challenges for the robotics community. Within the EU project entitled STIFFness controllable Flexible & Learnable manipulator for surgical Operations (STIFF-FLOP, a bendable, segmented robot arm has been developed. The exterior of the arm is composed of a soft material (silicone, encasing an internal structure that contains air-chamber actuators and a variety of sensors for monitoring applied force, position and shape of the arm as it bends. Due to the physical characteristics of the arm, a proper model of robot kinematics and dynamics is difficult to infer from the sensor data. Here we propose a non-linear approach to predicting the robot arm posture, by training a feed-forward neural network with a structured series of pressures values applied to the arm's actuators. The model is developed across a set of seven different experiments. Because the STIFF-FLOP arm is intended for use in surgical procedures, traditional methods for position estimation (based on visual information or electromagnetic tracking will not be possible to implement. Thus the ability to estimate pose based on data from a custom fiber-optic bending sensor and accompanying model is a valuable contribution. Results are presented which demonstrate the utility of our non-linear modelling approach across a range of data collection procedures.
Directory of Open Access Journals (Sweden)
Kyle A McQuisten
2009-10-01
Full Text Available Exogenous short interfering RNAs (siRNAs induce a gene knockdown effect in cells by interacting with naturally occurring RNA processing machinery. However not all siRNAs induce this effect equally. Several heterogeneous kinds of machine learning techniques and feature sets have been applied to modeling siRNAs and their abilities to induce knockdown. There is some growing agreement to which techniques produce maximally predictive models and yet there is little consensus for methods to compare among predictive models. Also, there are few comparative studies that address what the effect of choosing learning technique, feature set or cross validation approach has on finding and discriminating among predictive models.Three learning techniques were used to develop predictive models for effective siRNA sequences including Artificial Neural Networks (ANNs, General Linear Models (GLMs and Support Vector Machines (SVMs. Five feature mapping methods were also used to generate models of siRNA activities. The 2 factors of learning technique and feature mapping were evaluated by complete 3x5 factorial ANOVA. Overall, both learning techniques and feature mapping contributed significantly to the observed variance in predictive models, but to differing degrees for precision and accuracy as well as across different kinds and levels of model cross-validation.The methods presented here provide a robust statistical framework to compare among models developed under distinct learning techniques and feature sets for siRNAs. Further comparisons among current or future modeling approaches should apply these or other suitable statistically equivalent methods to critically evaluate the performance of proposed models. ANN and GLM techniques tend to be more sensitive to the inclusion of noisy features, but the SVM technique is more robust under large numbers of features for measures of model precision and accuracy. Features found to result in maximally predictive models are
Olive, David J
2017-01-01
This text covers both multiple linear regression and some experimental design models. The text uses the response plot to visualize the model and to detect outliers, does not assume that the error distribution has a known parametric distribution, develops prediction intervals that work when the error distribution is unknown, suggests bootstrap hypothesis tests that may be useful for inference after variable selection, and develops prediction regions and large sample theory for the multivariate linear regression model that has m response variables. A relationship between multivariate prediction regions and confidence regions provides a simple way to bootstrap confidence regions. These confidence regions often provide a practical method for testing hypotheses. There is also a chapter on generalized linear models and generalized additive models. There are many R functions to produce response and residual plots, to simulate prediction intervals and hypothesis tests, to detect outliers, and to choose response trans...
Alexeeff, Stacey E; Carroll, Raymond J; Coull, Brent
2016-04-01
Spatial modeling of air pollution exposures is widespread in air pollution epidemiology research as a way to improve exposure assessment. However, there are key sources of exposure model uncertainty when air pollution is modeled, including estimation error and model misspecification. We examine the use of predicted air pollution levels in linear health effect models under a measurement error framework. For the prediction of air pollution exposures, we consider a universal Kriging framework, which may include land-use regression terms in the mean function and a spatial covariance structure for the residuals. We derive the bias induced by estimation error and by model misspecification in the exposure model, and we find that a misspecified exposure model can induce asymptotic bias in the effect estimate of air pollution on health. We propose a new spatial simulation extrapolation (SIMEX) procedure, and we demonstrate that the procedure has good performance in correcting this asymptotic bias. We illustrate spatial SIMEX in a study of air pollution and birthweight in Massachusetts. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
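The SIMEX idea (deliberately inflating the measurement error at several levels, tracking how the slope attenuates, and extrapolating back to the no-error case) can be sketched for a simple errors-in-variables regression. The quadratic extrapolant and all constants below are illustrative; the paper's spatial SIMEX handles correlated prediction error from the Kriging model, which this scalar toy does not.

```python
import numpy as np

rng = np.random.default_rng(4)

# True exposure x, error-prone measurement w = x + u, health outcome y.
n, beta, sigma_u = 2000, 1.0, 0.8
x = rng.standard_normal(n)
w = x + sigma_u * rng.standard_normal(n)
y = beta * x + 0.3 * rng.standard_normal(n)

def slope(a, b):
    return np.cov(a, b)[0, 1] / np.var(a)

naive = slope(w, y)   # attenuated by measurement error

# SIMEX: add extra error so the total error variance is (1 + lam) sigma_u^2,
# record the attenuated slope, and extrapolate back to lam = -1 (no error).
lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
est = [np.mean([slope(w + np.sqrt(lam) * sigma_u * rng.standard_normal(n), y)
                for _ in range(50)])
       for lam in lams]
beta_simex = float(np.polyval(np.polyfit(lams, est, 2), -1.0))
```

The extrapolated estimate recovers much, though not all, of the attenuation bias; the choice of extrapolant controls how much.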
Directory of Open Access Journals (Sweden)
Nicholas J. Sammut
2007-08-01
Full Text Available A superconducting particle accelerator like the LHC (Large Hadron Collider at CERN, can only be controlled well if the effects of the magnetic field multipoles on the beam are compensated. The demands on a control system solely based on beam feedback may be too high for the requirements to be reached at the specified bandwidth and accuracy. Therefore, we designed a suitable field description for the LHC (FIDEL as part of the machine control baseline to act as a feed-forward magnetic field prediction system. FIDEL consists of a physical and empirical parametric field model based on magnetic measurements at warm and in cryogenic conditions. The performance of FIDEL is particularly critical at injection when the field decays, and in the initial part of the acceleration when the field snaps back. These dynamic components are both current and time dependent and are not reproducible from cycle to cycle since they also depend on the magnet powering history. In this paper a qualitative and quantitative description of the dynamic field behavior substantiated by a set of scaling laws is presented.
Directory of Open Access Journals (Sweden)
Zhe Zhang
2010-09-01
Full Text Available With the availability of high density whole-genome single nucleotide polymorphism chips, genomic selection has become a promising method to estimate genetic merit with potentially high accuracy for animal, plant and aquaculture species of economic importance. With markers covering the entire genome, the genetic merit of genotyped individuals can be predicted directly within the framework of mixed model equations, by using a matrix of relationships among individuals that is derived from the markers. Here we extend that approach by deriving a marker-based relationship matrix specifically for the trait of interest. In the framework of mixed model equations, a new best linear unbiased prediction (BLUP) method including a trait-specific relationship matrix (TA) was presented and termed TABLUP. The TA matrix was constructed on the basis of marker genotypes and their weights in relation to the trait of interest. A simulation study with 1,000 individuals as the training population and five successive generations as the candidate population was carried out to validate the proposed method. The proposed TABLUP method outperformed the ridge regression BLUP (RRBLUP) and BLUP with the realized relationship matrix (GBLUP). It performed slightly worse than BayesB, with an accuracy of 0.79 in the standard scenario. The proposed TABLUP method is an improvement over the RRBLUP and GBLUP methods. It might be equivalent to the BayesB method, but it has additional benefits like the calculation of accuracies for individual breeding values. The results also showed that the TA matrix performs better in predictive ability than the classical numerator relationship matrix and the realized relationship matrix, which are derived solely from pedigree or markers without regard to the trait. This is because the TA matrix not only accounts for the Mendelian sampling term, but also puts greater emphasis on those markers that explain more of the genetic variance in the trait.
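The trait-specific relationship matrix can be illustrated directly: weight each marker's contribution to the relationship matrix by a trait-relevant weight (e.g. its estimated share of genetic variance). The weights below are random placeholders, not estimated marker effects, and the scaling conventions are simplified relative to the published construction.

```python
import numpy as np

rng = np.random.default_rng(5)

# Centered marker matrix Z (individuals x markers) and per-marker weights,
# e.g. each marker's estimated share of the genetic variance for the trait.
n, p = 30, 200
Z = rng.integers(0, 3, size=(n, p)).astype(float)
Z -= Z.mean(axis=0)
weights = rng.uniform(0.0, 1.0, size=p)
weights /= weights.sum()

# Trait-specific relationship matrix TA = Z diag(w) Z', versus the
# unweighted realized relationship matrix G = Z Z' / p.
TA = (Z * weights) @ Z.T
G = Z @ Z.T / p
```

With uniform weights `TA` reduces to `G` up to scaling; non-uniform weights shift emphasis toward the markers that matter for the trait.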
More, Anand Govind; Gupta, Sunil Kumar
2018-03-24
Bioelectrochemical system (BES) is a novel, self-sustaining metal removal technology functioning on the utilization of the chemical energy of organic matter with the help of microorganisms. Experimental trials in a two-chambered BES reactor were conducted with varying substrate concentration using sodium acetate (500 mg/L to 2000 mg/L COD) and different initial chromium concentrations (Cr_i) (10-100 mg/L) at different cathode pH (pH 1-7). In the current study, mathematical models based on multiple linear regression (MLR) and non-linear regression (NLR) approaches were developed using laboratory experimental data for determining chromium removal efficiency (CRE) in the cathode chamber of the BES. Substrate concentration, rate of substrate consumption, Cr_i, pH, temperature and hydraulic retention time (HRT) were the operating process parameters of the reactor considered for development of the proposed models. MLR showed a better correlation coefficient (0.972) as compared to NLR (0.952). Validation of the models using t-test analysis revealed unbiasedness of both models, with the t critical value (2.04) greater than the t-calculated values for MLR (-0.708) and NLR (-0.86). The root-mean-square errors (RMSE) for MLR and NLR were 5.06% and 7.45%, respectively. Comparison between both models suggested MLR to be the best suited model for predicting the chromium removal behavior using the BES technology and for specifying a set of operating conditions for the BES. Modelling the behavior of CRE will be helpful for scale-up of the BES technology at an industrial level. Copyright © 2018 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.
Directory of Open Access Journals (Sweden)
Pivatelli Flávio
2012-10-01
Full Text Available Abstract Background Decreased heart rate variability (HRV) is related to higher morbidity and mortality. In this study we evaluated the linear and nonlinear indices of HRV in stable angina patients submitted to coronary angiography. Methods We studied 77 unselected patients for elective coronary angiography, who were divided into two groups: coronary artery disease (CAD) and non-CAD groups. For analysis of HRV indices, HRV was recorded beat by beat with the volunteers in the supine position for 40 minutes. We analyzed the linear indices in the time domain (SDNN [standard deviation of normal-to-normal intervals], NN50 [total number of adjacent RR intervals with a difference of duration greater than 50 ms] and RMSSD [root-mean square of successive differences]) and in the frequency domain: ultra-low frequency (ULF, ≤ 0.003 Hz), very low frequency (VLF, 0.003–0.04 Hz), low frequency (LF, 0.04–0.15 Hz), and high frequency (HF, 0.15–0.40 Hz), as well as the ratio between the LF and HF components (LF/HF). In relation to the nonlinear indices we evaluated SD1, SD2, SD1/SD2, approximate entropy (ApEn), α1, α2, the Lyapunov exponent, the Hurst exponent, autocorrelation and correlation dimension. The cutoff point of the variables for predictive tests was obtained from the Receiver Operating Characteristic (ROC) curve. The area under the ROC curve was calculated by the extended trapezoidal rule, assuming as relevant areas under the curve ≥ 0.650. Results Coronary artery disease patients presented reduced values of SDNN, RMSSD, NN50, HF, SD1, SD2 and ApEn. HF ≤ 66 ms², RMSSD ≤ 23.9 ms, ApEn ≤ −0.296 and NN50 ≤ 16 presented the best discriminatory power for the presence of significant coronary obstruction. Conclusion We suggest the use of heart rate variability analysis in the linear and nonlinear domains for prognostic purposes in patients with stable angina pectoris, in view of their overall impairment.
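The time-domain indices named above (SDNN, RMSSD, NN50) have standard definitions that are straightforward to compute from an RR-interval series; the helper below is a generic sketch, not the study's analysis pipeline.

```python
import numpy as np

def hrv_time_domain(rr_ms):
    """SDNN, RMSSD and NN50 from a series of RR intervals in milliseconds."""
    rr = np.asarray(rr_ms, dtype=float)
    diff = np.diff(rr)
    sdnn = rr.std(ddof=1)                    # overall variability
    rmssd = np.sqrt(np.mean(diff ** 2))      # beat-to-beat variability
    nn50 = int(np.sum(np.abs(diff) > 50.0))  # adjacent intervals differing > 50 ms
    return sdnn, rmssd, nn50
```

The frequency-domain indices (ULF/VLF/LF/HF) would instead integrate a power spectral density of the same series over the stated bands.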
DeForest, David K; Brix, Kevin V; Tear, Lucinda M; Adams, William J
2018-01-01
The bioavailability of aluminum (Al) to freshwater aquatic organisms varies as a function of several water chemistry parameters, including pH, dissolved organic carbon (DOC), and water hardness. We evaluated the ability of multiple linear regression (MLR) models to predict chronic Al toxicity to a green alga (Pseudokirchneriella subcapitata), a cladoceran (Ceriodaphnia dubia), and a fish (Pimephales promelas) as a function of varying DOC, pH, and hardness conditions. The MLR models predicted toxicity values that were within a factor of 2 of observed values in 100% of the cases for P. subcapitata (10 and 20% effective concentrations [EC10s and EC20s]), 91% of the cases for C. dubia (EC10s and EC20s), and 95% (EC10s) and 91% (EC20s) of the cases for P. promelas. The MLR models were then applied to all species with Al toxicity data to derive species and genus sensitivity distributions that could be adjusted as a function of varying DOC, pH, and hardness conditions (the P. subcapitata model was applied to algae and macrophytes, the C. dubia model was applied to invertebrates, and the P. promelas model was applied to fish). Hazardous concentrations to 5% of the species or genera were then derived in 2 ways: 1) fitting a log-normal distribution to species-mean EC10s for all species (following the European Union methodology), and 2) fitting a triangular distribution to genus-mean EC20s for animals only (following the US Environmental Protection Agency methodology). Overall, MLR-based models provide a viable approach for deriving Al water quality guidelines that vary as a function of DOC, pH, and hardness conditions and are a significant improvement over bioavailability corrections based on single parameters. Environ Toxicol Chem 2018;37:80-90. © 2017 SETAC.
Energy Technology Data Exchange (ETDEWEB)
Prada-Sanchez, J.M.; Febrero-Bande, M.; Gonzalez-Manteiga, W. [Universidad de Santiago de Compostela, Dept. de Estadistica e Investigacion Operativa, Santiago de Compostela (Spain); Costos-Yanez, T. [Universidad de Vigo, Dept. de Estadistica e Investigacion Operativa, Orense (Spain); Bermudez-Cela, J.L.; Lucas-Dominguez, T. [Laboratorio, Central Termica de As Pontes, La Coruna (Spain)
2000-07-01
Atmospheric SO2 concentrations at sampling stations near the fossil fuel fired power station at As Pontes (La Coruna, Spain) were predicted using a model for the corresponding time series consisting of a self-explicative term and a linear combination of exogenous variables. In a supplementary simulation study, models of this kind behaved better than the corresponding pure self-explicative or pure linear regression models. (Author)
International Nuclear Information System (INIS)
Bataille, F.; Younis, B.A.; Bellettre, J.; Lallemand, A.
2003-01-01
The paper reports on the prediction of the effects of blowing on the evolution of the thermal and velocity fields in a flat-plate turbulent boundary layer developing over a porous surface. Closure of the time-averaged equations governing the transport of momentum and thermal energy is achieved using a complete Reynolds-stress transport model for the turbulent stresses and a non-linear, algebraic and explicit model for the turbulent heat fluxes. The latter model accounts explicitly for the dependence of the turbulent heat fluxes on the gradients of mean velocity. Results are reported for the case of a heated boundary layer which is first developed into equilibrium over a smooth impervious wall before encountering a porous section through which cooler fluid is continuously injected. Comparisons are made with LDA measurements for an injection rate of 1%. The reduction of the wall shear stress with increase in injection rate is obtained in the calculations, and the computed rates of heat transfer between the hot flow and the wall are found to agree well with the published data
International Nuclear Information System (INIS)
Bender, B.; Sparwasser, R.
1988-01-01
Environmental law is discussed exhaustively in this book. Legal and scientific fundamentals are taken into account, a systematic orientation is given, and hints for further information are presented. The book covers general environmental law, plan approval procedures, protection against nuisances, atomic law and radiation protection law, water protection law, waste management law, laws on chemical substances, conservation law. (HSCH) [de
DEFF Research Database (Denmark)
Troen, Ib; Bechmann, Andreas; Kelly, Mark C.
2014-01-01
Using the Wind Atlas methodology to predict the average wind speed at one location from measured climatological wind frequency distributions at another nearby location, we analyse the relative prediction errors using a linearized flow model (IBZ) and a more physically correct, fully non-linear 3D flow model (CFD) for a number of sites in very complex terrain (large terrain slopes). We first briefly describe the Wind Atlas methodology as implemented in WAsP and the specifics of the "classical" model setup and the new setup allowing the use of the CFD computation engine. We discuss some known…
Tahani, Masoud; Askari, Amir R.
2014-09-01
In spite of the fact that pull-in instability of electrically actuated nano/micro-beams has been investigated by many researchers to date, no explicit formula has yet been presented that can predict pull-in voltage based on a geometrically non-linear, distributed-parameter model. The objective of the present paper is to introduce a simple and accurate formula to predict this value for a fully clamped, electrostatically actuated nano/micro-beam. To this end, a non-linear Euler-Bernoulli beam model is employed, which accounts for the axial residual stress, the geometric non-linearity of mid-plane stretching, the distributed electrostatic force and the van der Waals (vdW) attraction. The non-linear boundary value governing equation of equilibrium is non-dimensionalized and solved iteratively through a single-term Galerkin-based reduced order model (ROM). The solutions are validated through direct comparison with experimental and other existing results reported in previous studies. Pull-in instability under electrical and vdW loads is also investigated using universal graphs. Based on the results of these graphs, the non-dimensional pull-in and vdW parameters, which are defined in the text, vary linearly versus the other dimensionless parameters of the problem. Using this fact, some linear equations are presented to predict pull-in voltage, the maximum allowable length, the so-called detachment length, and the minimum allowable gap for a nano/micro-system. These linear equations are also reduced to a couple of universal pull-in formulas for systems with a small initial gap. The accuracy of the universal pull-in formulas is also validated by comparing their results with available experimental and some previous geometrically linear and closed-form findings published in the literature.
Jensen, Dan B; Hogeveen, Henk; De Vries, Albert
2016-09-01
Rapid detection of dairy cow mastitis is important so corrective action can be taken as soon as possible. Automatically collected sensor data used to monitor the performance and the health state of the cow could be useful for rapid detection of mastitis while reducing the labor needs for monitoring. The state of the art in combining sensor data to predict clinical mastitis still does not perform well enough to be applied in practice. Our objective was to combine a multivariate dynamic linear model (DLM) with a naïve Bayesian classifier (NBC) in a novel method using sensor and nonsensor data to detect clinical cases of mastitis. We also evaluated reductions in the number of sensors for detecting mastitis. With the DLM, we co-modeled 7 sources of sensor data (milk yield, fat, protein, lactose, conductivity, blood, body weight) collected at each milking for individual cows to produce one-step-ahead forecasts for each sensor. The observations were subsequently categorized according to the errors of the forecasted values and the estimated forecast variance. The categorized sensor data were combined with other data pertaining to the cow (week in milk, parity, mastitis history, somatic cell count category, and season) using Bayes' theorem, which produced a combined probability of the cow having clinical mastitis. If this probability was above a set threshold, the cow was classified as mastitis positive. To illustrate the performance of our method, we used sensor data from 1,003,207 milkings from the University of Florida Dairy Unit collected from 2008 to 2014. Of these, 2,907 milkings were associated with recorded cases of clinical mastitis. Using the DLM/NBC method, we reached an area under the receiver operating characteristic curve of 0.89, with a specificity of 0.81 when the sensitivity was set at 0.80. Specificities with omissions of sensor data ranged from 0.58 to 0.81. These results are comparable to other studies, but differences in data quality, definitions of
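The final step of the DLM/NBC method, merging categorized sensor evidence with cow-level factors via Bayes' theorem and thresholding the resulting posterior, can be sketched as a naive-Bayes combination. The prior and likelihood tables below are invented for illustration, not estimates from the study:

```python
# Hypothetical probability tables (illustrative numbers, not from the study):
# a low prior for clinical mastitis at any given milking, and per-category
# likelihoods for categorized forecast errors / cow-level factors.
prior = {"mastitis": 0.003, "healthy": 0.997}
lik = {
    "yield_drop_large":  {"mastitis": 0.60, "healthy": 0.05},
    "conductivity_high": {"mastitis": 0.70, "healthy": 0.08},
    "scc_category_high": {"mastitis": 0.50, "healthy": 0.10},
}

def mastitis_probability(evidence):
    """Naive-Bayes combination of (assumed independent) evidence items,
    mirroring how the abstract merges sensor and nonsensor data."""
    p_m = prior["mastitis"]
    p_h = prior["healthy"]
    for e in evidence:
        p_m *= lik[e]["mastitis"]
        p_h *= lik[e]["healthy"]
    return p_m / (p_m + p_h)   # normalized posterior P(mastitis | evidence)

p = mastitis_probability(["yield_drop_large", "conductivity_high"])
alarm = p > 0.5   # threshold classification, as in the abstract
print(f"P(mastitis | evidence) = {p:.3f}, alarm = {alarm}")
```

In practice the threshold is tuned on the ROC curve to trade sensitivity against specificity, which is how the abstract arrives at specificity 0.81 at sensitivity 0.80.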
Energy Technology Data Exchange (ETDEWEB)
Herbert, Christopher, E-mail: cherbert@bccancer.bc.ca [Department of Radiation Oncology, British Columbia Cancer Agency, Vancouver, BC (Canada); Moiseenko, Vitali [Department of Medical Physics, British Columbia Cancer Agency, Vancouver, BC (Canada); McKenzie, Michael [Department of Radiation Oncology, British Columbia Cancer Agency, Vancouver, BC (Canada); Redekop, Gary [Division of Neurosurgery, Vancouver General Hospital, University of British Columbia, Vancouver, BC (Canada); Hsu, Fred [Department of Radiation Oncology, British Columbia Cancer Agency, Abbotsford, BC (Canada); Gete, Ermias; Gill, Brad; Lee, Richard; Luchka, Kurt [Department of Medical Physics, British Columbia Cancer Agency, Vancouver, BC (Canada); Haw, Charles [Division of Neurosurgery, Vancouver General Hospital, University of British Columbia, Vancouver, BC (Canada); Lee, Andrew [Department of Neurosurgery, Royal Columbian Hospital, New Westminster, BC (Canada); Toyota, Brian [Division of Neurosurgery, Vancouver General Hospital, University of British Columbia, Vancouver, BC (Canada); Martin, Montgomery [Department of Medical Imaging, British Columbia Cancer Agency, Vancouver, BC (Canada)
2012-07-01
Purpose: To investigate predictive factors in the development of symptomatic radiation injury after treatment with linear accelerator-based stereotactic radiosurgery for intracerebral arteriovenous malformations and relate the findings to the conclusions drawn by Quantitative Analysis of Normal Tissue Effects in the Clinic (QUANTEC). Methods and Materials: Archived plans for 73 patients who were treated at the British Columbia Cancer Agency were studied. Actuarial estimates of freedom from radiation injury were calculated using the Kaplan-Meier method. Univariate and multivariate Cox proportional hazards models were used for analysis of incidence of radiation injury. The log-rank test was used to search for dosimetric parameters associated with freedom from radiation injury. Results: Symptomatic radiation injury was exhibited by 14 of 73 patients (19.2%). The actuarial rate of symptomatic radiation injury was 23.0% at 4 years. Most patients (78.5%) had mild to moderate deficits according to Common Terminology Criteria for Adverse Events, version 4.0. On univariate analysis, lesion volume and diameter, dose to isocenter, and V_x for doses ≥8 Gy showed statistical significance. Only lesion diameter showed statistical significance (p < 0.05) in a multivariate model. According to the log-rank test, AVM volumes >5 cm³ and diameters >30 mm were significantly associated with the risk of radiation injury (p < 0.01). The V12 also showed strong association with the incidence of radiation injury. Actuarial incidence of radiation injury was 16.8% if V12 was <28 cm³ and 53.2% if >28 cm³ (log-rank test, p = 0.001). Conclusions: This study confirms that the risk of developing symptomatic radiation injury after radiosurgery is related to lesion diameter and volume and irradiated volume. Results suggest a higher tolerance than proposed by QUANTEC. The widely differing findings reported in the literature, however, raise considerable uncertainties.
Directory of Open Access Journals (Sweden)
E. Çelebi
2012-11-01
Full Text Available This paper focuses primarily on a numerical approach based on the two-dimensional (2-D) finite element method for analysis of the seismic response of an infinite soil-structure interaction (SSI) system. The study covers a series of different scenarios involving comprehensive parametric analyses, including the effects of realistic material properties of the underlying soil on the structural response quantities. Viscous artificial boundaries, simulating the process of wave transmission along the truncated interface of the semi-infinite space, are adopted in the non-linear finite element formulation in the time domain, along with Newmark's integration. The slenderness ratio of the superstructure and the local soil conditions, as well as the characteristics of the input excitations, are important parameters for the numerical simulation in this research. The mechanical behavior of the underlying soil medium considered in this prediction model is simulated by an undrained elasto-plastic Mohr-Coulomb model under plane-strain conditions. To emphasize the important findings of this type of problem for civil engineers, systematic calculations with different controlling parameters are carried out to evaluate directly the structural response of the vibrating soil-structure system. When the underlying soil becomes stiffer, the frequency content of the seismic motion has a major role in altering the seismic response. The sudden increase of the dynamic response is more pronounced in the resonance case, when the frequency content of the seismic ground motion is close to that of the SSI system. The SSI effects under different seismic inputs differ for all considered soil conditions and structural types.
Szadkowski, Zbigniew; Fraenkel, E. D.; van den Berg, Ad M.
2013-01-01
We present the FPGA/NIOS implementation of an adaptive finite impulse response (FIR) filter based on linear prediction to suppress radio frequency interference (RFI). This technique will be used for experiments that observe coherent radio emission from extensive air showers induced by
DEFF Research Database (Denmark)
Föh, Kennet Fischer; Mandøe, Lene; Tinten, Bjarke
Business Law is a translation of the 2nd edition of Erhvervsjura - videregående uddannelser. It is an educational textbook for the subject of business law. The textbook covers all important topics within business law, such as the Legal System, Private International Law, Insolvency Law, Contract Law, Instruments of debt and other claims, Sale of Goods and real estate, Charges, mortgages and pledges, Guarantees, Credit agreements, Tort Law, Product liability and Insurance, Company law, Market law, Labour Law, Family Law and Law of Inheritance.
Bhamidipati, Ravi Kanth; Syed, Muzeeb; Mullangi, Ramesh; Srinivas, Nuggehally
2018-02-01
1. Dalbavancin, a lipoglycopeptide, is approved for treating gram-positive bacterial infections. The area under the plasma concentration versus time curve (AUCinf) of dalbavancin is a key parameter, and the AUCinf/MIC ratio is a critical pharmacodynamic marker. 2. Using the end-of-intravenous-infusion concentration (i.e. Cmax), the Cmax versus AUCinf relationship for dalbavancin was established by regression analyses (i.e. linear, log-log, log-linear and power models) using 21 pairs of subject data. 3. Predictions of AUCinf were performed from published Cmax data by application of the regression equations. The quotient of observed/predicted values rendered the fold difference. The mean absolute error (MAE), root mean square error (RMSE) and correlation coefficient (r) were used in the assessment. 4. MAE and RMSE values for the various models were comparable. Cmax versus AUCinf exhibited excellent correlation (r > 0.9488). The internal data evaluation showed narrow confinement (0.84-1.14-fold difference), and the models predicted AUCinf with a RMSE of 3.02-27.46%, with the fold difference largely contained within 0.64-1.48. 5. Regardless of the regression model, a single-time-point strategy using Cmax (i.e. end of the 30-min infusion) is amenable as a prospective tool for predicting the AUCinf of dalbavancin in patients.
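The regression strategy can be sketched as follows: fit linear and power (log-log) models of AUCinf against Cmax, then check the observed/predicted fold difference. The simulated concentration pairs and the proportionality constant below are assumptions for illustration, not the dalbavancin dataset:

```python
import math
import random

random.seed(5)

# Hypothetical Cmax (mg/L) / AUCinf (mg*h/L) pairs mimicking a roughly
# proportional PK relationship with log-normal scatter; these are not the
# 21 dalbavancin subjects.
cmax = [random.uniform(200.0, 400.0) for _ in range(21)]
auc  = [38.0 * c * math.exp(random.gauss(0.0, 0.08)) for c in cmax]

def fit_line(xs, ys):
    """Simple least-squares line y = a + b*x; returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Linear model: AUCinf = a + b * Cmax
a_lin, b_lin = fit_line(cmax, auc)
# Power model via log-log regression: log AUCinf = log k + p * log Cmax
log_k, p_pow = fit_line([math.log(c) for c in cmax], [math.log(v) for v in auc])

# Fold difference observed/predicted: a usable single-time-point strategy
# keeps this quotient near 1.
fold = [v / (a_lin + b_lin * c) for c, v in zip(cmax, auc)]
print(f"power exponent={p_pow:.3f}  fold range=({min(fold):.2f}, {max(fold):.2f})")
```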
Azadi, Sama; Karimi-Jashni, Ayoub
2016-02-01
Predicting the mass of solid waste generation plays an important role in integrated solid waste management plans. In this study, the performance of two predictive models, Artificial Neural Network (ANN) and Multiple Linear Regression (MLR) was verified to predict mean Seasonal Municipal Solid Waste Generation (SMSWG) rate. The accuracy of the proposed models is illustrated through a case study of 20 cities located in Fars Province, Iran. Four performance measures, MAE, MAPE, RMSE and R were used to evaluate the performance of these models. The MLR, as a conventional model, showed poor prediction performance. On the other hand, the results indicated that the ANN model, as a non-linear model, has a higher predictive accuracy when it comes to prediction of the mean SMSWG rate. As a result, in order to develop a more cost-effective strategy for waste management in the future, the ANN model could be used to predict the mean SMSWG rate. Copyright © 2015 Elsevier Ltd. All rights reserved.
Friction laws at the nanoscale.
Mo, Yifei; Turner, Kevin T; Szlufarska, Izabela
2009-02-26
Macroscopic laws of friction do not generally apply to nanoscale contacts. Although continuum mechanics models have been predicted to break down at the nanoscale, they continue to be applied for lack of a better theory. An understanding of how friction force depends on applied load and contact area at these scales is essential for the design of miniaturized devices with optimal mechanical performance. Here we use large-scale molecular dynamics simulations with realistic force fields to establish friction laws in dry nanoscale contacts. We show that friction force depends linearly on the number of atoms that chemically interact across the contact. By defining the contact area as being proportional to this number of interacting atoms, we show that the macroscopically observed linear relationship between friction force and contact area can be extended to the nanoscale. Our model predicts that as the adhesion between the contacting surfaces is reduced, a transition takes place from nonlinear to linear dependence of friction force on load. This transition is consistent with the results of several nanoscale friction experiments. We demonstrate that the breakdown of continuum mechanics can be understood as a result of the rough (multi-asperity) nature of the contact, and show that roughness theories of friction can be applied at the nanoscale.
Jendeberg, Johan; Geijer, Håkan; Alshamari, Muhammed; Lidén, Mats
2018-01-24
To compare the ability of different size estimates to predict spontaneous passage of ureteral stones using 3D segmentation, and to investigate the impact of manual measurement variability on the prediction of stone passage. We retrospectively included 391 consecutive patients with ureteral stones on non-contrast-enhanced CT (NECT). Three-dimensional segmentation size estimates were compared to the mean of three radiologists' measurements. Receiver operating characteristic (ROC) analysis was performed for the prediction of spontaneous passage for each estimate. The difference in predicted passage probability between the manual estimates for upper and lower stones was compared. The area under the ROC curve (AUC) for the measurements ranged from 0.88 to 0.90. Between the automated 3D algorithm and the manual measurements, the 95% limits of agreement were 0.2 ± 1.4 mm for the width. The manual bone-window measurements resulted in a >20 percentage point (ppt) difference between the readers in the predicted passage probability in 44% of the upper and 6% of the lower ureteral stones. All automated 3D algorithm size estimates independently predicted spontaneous stone passage with accuracy similar to the mean of three readers' manual linear measurements. Manual size estimation of upper stones showed large inter-reader variations for spontaneous passage prediction. • An automated 3D technique predicts spontaneous stone passage with high accuracy. • Linear, areal and volumetric measurements performed similarly in predicting stone passage. • Reader variability has a large impact on the predicted prognosis for stone passage.
Linear colliders - prospects 1985
International Nuclear Information System (INIS)
Rees, J.
1985-06-01
We discuss the scaling laws of linear colliders and their consequences for accelerator design. We then report on the SLAC Linear Collider project and comment on experience gained on that project and its application to future colliders. 9 refs., 2 figs
Chandler, T L; Pralle, R S; Dórea, J R R; Poock, S E; Oetzel, G R; Fourdraine, R H; White, H M
2018-03-01
Although cowside testing strategies for diagnosing hyperketonemia (HYK) are available, many are labor intensive and costly, and some lack sufficient accuracy. Predicting milk ketone bodies by Fourier transform infrared spectrometry during routine milk sampling may offer a more practical monitoring strategy. The objectives of this study were to (1) develop linear and logistic regression models using all available test-day milk and performance variables for predicting HYK and (2) compare prediction methods (Fourier transform infrared milk ketone bodies, linear regression models, and logistic regression models) to determine which is the most predictive of HYK. Given the data available, a secondary objective was to evaluate differences in test-day milk and performance variables (continuous measurements) between Holsteins and Jerseys and between cows with or without HYK within breed. Blood samples were collected on the same day as milk sampling from 658 Holstein and 468 Jersey cows between 5 and 20 d in milk (DIM). Diagnosis of HYK was at a serum β-hydroxybutyrate (BHB) concentration ≥1.2 mmol/L. Concentrations of milk BHB and acetone were predicted by Fourier transform infrared spectrometry (Foss Analytical, Hillerød, Denmark). Thresholds of milk BHB and acetone were tested for diagnostic accuracy, and logistic models were built from continuous variables to predict HYK in primiparous and multiparous cows within breed. Linear models were constructed from continuous variables for primiparous and multiparous cows within breed that were 5 to 11 DIM or 12 to 20 DIM. Milk ketone body thresholds diagnosed HYK with 64.0 to 92.9% accuracy in Holsteins and 59.1 to 86.6% accuracy in Jerseys. Logistic models predicted HYK with 82.6 to 97.3% accuracy. Internally cross-validated multiple linear regression models diagnosed HYK of Holstein cows with 97.8% accuracy for primiparous and 83.3% accuracy for multiparous cows. Accuracy of Jersey models was 81.3% in primiparous and 83
International Nuclear Information System (INIS)
Maiya, P.S.
1978-07-01
The creep-fatigue life results for five different heats of Type 304 stainless steel at 593°C (1100°F), generated under push-pull conditions in the axial strain-control mode, are presented. The life predictions for the various heats based on the linear-damage rule, the strain-range partitioning method, and the damage-rate approach are discussed. The appropriate material properties required for computation of fatigue life are also included.
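The linear-damage rule mentioned above assumes failure when the cycle fraction plus the time fraction reaches unity, n/N_f + t/t_r = 1. A minimal sketch under illustrative (not heat-specific) material properties:

```python
# Linear damage (time- and cycle-fraction) rule sketch: failure is assumed
# when cycle damage plus creep damage sums to 1,
#     n/N_f + t/t_r = 1,
# where N_f is the pure-fatigue life (cycles) and t_r the creep-rupture
# time (hours). The numbers below are illustrative assumptions, not
# properties of any particular heat of Type 304 stainless steel.
def cycles_to_failure(nf_pure, hold_time_h, rupture_time_h):
    """Allowable cycles n solving n/N_f + n*t_hold/t_r = 1."""
    return 1.0 / (1.0 / nf_pure + hold_time_h / rupture_time_h)

# A 0.1-h tensile hold per cycle adds creep damage and shortens the life
# below the pure-fatigue value of 10,000 cycles.
n = cycles_to_failure(nf_pure=10_000, hold_time_h=0.1, rupture_time_h=5_000)
print(round(n))  # → 8333
```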
DEFF Research Database (Denmark)
Lund, Claus Otto; Nilas, Lisbeth; Bangsgaard, Nannie
2002-01-01
CG levels had a low diagnostic sensitivity (0.38-0.66) and specificity (0.74-0.77) for predicting PEP. In multivariate logistic analysis, none of the following clinical variables were predictive of PEP: duration of surgery, laparoscopic approach, history of previous EP, history of previous lower abdominal...
DEFF Research Database (Denmark)
Sokoler, Leo Emil; Skajaa, Anders; Frison, Gianluca
2013-01-01
algorithm in MATLAB and its performance is analyzed based on a smart grid power management case study. Closed loop simulations show that 1) our algorithm is significantly faster than state-of-the-art IPMs based on sparse linear algebra routines, and 2) warm-starting reduces the number of iterations...
de Bruin, A.B.H.; Smits, N.; Rikers, R.M.J.P.; Schmidt, H.G.
2008-01-01
In this study, the longitudinal relation between deliberate practice and performance in chess was examined using a linear mixed models analysis. The practice activities and performance ratings of young elite chess players, who were either in, or had dropped out of the Dutch national chess training,
Rico Rico, A.; Droge, S.T.J.|info:eu-repo/dai/nl/304834017; Hermens, J.L.M.|info:eu-repo/dai/nl/069681384
2010-01-01
The effect of the molecular structure and the salinity on the sorption of the anionic surfactant linear alkylbenzenesulfonate (LAS) to marine sediment has been studied. The analysis of several individual LAS congeners in seawater and of one specific LAS congener at different dilutions of seawater
Berngard, Samuel Clark; Berngard, Jennifer Bishop; Krebs, Nancy F; Garcés, Ana; Miller, Leland V; Westcott, Jamie; Wright, Linda L; Kindem, Mark; Hambidge, K Michael
2013-12-01
Stunting is prevalent by the age of 6 months in the indigenous population of the Western Highlands of Guatemala. The objective of this study was to determine the time course and predictors of linear growth failure and weight-for-age in early infancy. One hundred and forty-eight term newborns had measurements of length and weight in their homes, repeated at 3 and 6 months. Maternal measurements were also obtained. Mean ± SD length-for-age Z-score (LAZ) declined from -1.0 ± 1.01 at birth to -2.20 ± 1.05 and -2.26 ± 1.01 at 3 and 6 months, respectively. Stunting rates for newborn, 3 and 6 months were 47%, 53% and 56%, respectively. A multiple regression model (R² = 0.64) demonstrated that the major predictor of LAZ at 3 months was newborn LAZ, with the other predictors being newborn weight-for-age Z-score (WAZ), gender and a maternal education × maternal age interaction. Because WAZ remained essentially constant while LAZ declined during the same period, weight-for-length Z-score (WLZ) increased from -0.44 to +1.28 from birth to 3 months. The more severe the linear growth failure, the greater WAZ was in proportion to LAZ. The primary conclusion is that impaired fetal linear growth is the major predictor of early infant linear growth failure, indicating that prevention needs to start with maternal interventions. © 2013.
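A LAZ value is simply the measured length expressed in SD units relative to a growth-reference median for the child's age and sex, which makes the stunting cutoff (LAZ < -2) a one-line check. The reference median and SD below are invented for illustration, not WHO reference data:

```python
# Length-for-age Z-score (LAZ) sketch: the measurement in SD units relative
# to a growth-reference median. The reference values here are illustrative
# assumptions, not WHO growth-standard numbers.
def laz(length_cm, ref_median_cm, ref_sd_cm):
    return (length_cm - ref_median_cm) / ref_sd_cm

z = laz(length_cm=57.0, ref_median_cm=61.4, ref_sd_cm=2.0)
stunted = z < -2.0   # conventional stunting cutoff used in the abstract
print(f"LAZ = {z:.2f}, stunted = {stunted}")  # → LAZ = -2.20, stunted = True
```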
International Nuclear Information System (INIS)
Childs, Jessie T.; Thoirs, Kerry A.; Esterman, Adrian J.
2016-01-01
This study sought to develop a practical and uncomplicated predictive equation that could accurately calculate liver volumes, using multiple simple linear ultrasound measurements combined with measurements of body size. Penalized (lasso) regression was used to develop a new model and compare it to the ultrasonic linear measurements currently used clinically. A Bland–Altman analysis showed that the large limits of agreement of the new model render it too inaccurate to be of clinical use for estimating liver volume per se, but it holds value in tracking disease progress or response to treatment over time in individuals, and is certainly substantially better as an indicator of overall liver size than the ultrasonic linear measurements currently being used clinically. - Highlights: • A new model to calculate liver volumes from simple linear ultrasound measurements. • This model was compared to the linear measurements currently used clinically. • The new model holds value in tracking disease progress or response to treatment. • This model is better as an indicator of overall liver size.
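Penalized (lasso) regression of the kind used here can be sketched with cyclic coordinate descent and soft-thresholding; predictors that carry no information are shrunk toward (often exactly to) zero, which is how the method selects among candidate measurements. The data below are synthetic, not the study's ultrasound measurements:

```python
import math
import random

random.seed(11)

# Hypothetical standardized predictors (e.g., linear liver measurements and
# body-size terms) where the response truly depends on only two of them;
# synthetic data, not the study's measurements.
n, p = 200, 5
X = [[random.gauss(0.0, 1.0) for _ in range(p)] for _ in range(n)]
y = [2.0 * row[0] + 1.0 * row[1] + random.gauss(0.0, 0.5) for row in X]

def lasso(X, y, lam, sweeps=100):
    """Lasso by cyclic coordinate descent with soft-thresholding."""
    n, p = len(X), len(X[0])
    b = [0.0] * p
    for _ in range(sweeps):
        for j in range(p):
            # Partial residuals excluding feature j's contribution.
            r = [yi - sum(bk * xk
                          for k, (bk, xk) in enumerate(zip(b, row)) if k != j)
                 for yi, row in zip(y, X)]
            rho = sum(row[j] * ri for row, ri in zip(X, r)) / n
            z = sum(row[j] ** 2 for row in X) / n
            # Soft-threshold: coefficients with |rho| <= lam collapse to 0.
            b[j] = math.copysign(max(abs(rho) - lam, 0.0), rho) / z
    return b

b = lasso(X, y, lam=0.4)
print([round(v, 3) for v in b])
```

As in the abstract, the fitted model would then be judged by Bland-Altman limits of agreement against a reference (e.g., measured liver volume), not by the fit alone.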
Joshi, Shuchi N; Srinivas, Nuggehally R; Parmar, Deven V
2018-03-01
Our aim was to develop and validate the extrapolative performance of a regression model using a limited sampling strategy for accurate estimation of the area under the plasma concentration versus time curve for saroglitazar. Healthy subject pharmacokinetic data from a well-powered food-effect study (fasted vs fed treatments; n = 50) was used in this work. The first 25 subjects' serial plasma concentration data up to 72 hours and the corresponding AUC(0-t) (ie, 72 hours) from the fasting group comprised a training dataset to develop the limited sampling model. The internal datasets for prediction included the remaining 25 subjects from the fasting group and all 50 subjects from the fed condition of the same study. The external datasets included pharmacokinetic data for saroglitazar from previous single-dose clinical studies. Limited sampling models were composed of 1-, 2-, and 3-concentration-time points' correlation with AUC(0-t) of saroglitazar. Only models with regression coefficients (R²) >0.90 were screened for further evaluation. The best R² model was validated for its utility based on mean prediction error, mean absolute prediction error, and root mean square error. Both correlations between predicted and observed AUC(0-t) of saroglitazar and verification of precision and bias using a Bland-Altman plot were carried out. None of the evaluated 1- and 2-concentration-time points models achieved R² > 0.90. Among the various 3-concentration-time points models, only 4 equations passed the predefined criterion of R² > 0.90. Limited sampling models with time points 0.5, 2, and 8 hours (R² = 0.9323) and 0.75, 2, and 8 hours (R² = 0.9375) were validated. Mean prediction error, mean absolute prediction error, and root mean square error were prediction of saroglitazar. The same models, when applied to the AUC(0-t) prediction of saroglitazar sulfoxide, showed mean prediction error, mean absolute prediction error, and root mean square error model predicts the exposure of
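A limited sampling model of this kind reduces to a linear combination of a few concentration-time points, scored by the same metrics the abstract names (mean prediction error, mean absolute prediction error, RMSE). The coefficients, time points, and concentrations below are invented for illustration, not the validated saroglitazar equations:

```python
import math

# Limited-sampling sketch: predict AUC(0-t) from concentrations at three
# hypothetical time points (0.5, 2, and 8 h). Coefficients and data are
# illustrative assumptions, not the study's fitted model.
def predict_auc(c_05, c_2, c_8, coef=(0.0, 0.5, 1.5, 2.0)):
    b0, b1, b2, b3 = coef
    return b0 + b1 * c_05 + b2 * c_2 + b3 * c_8

observed  = [52.0, 61.5, 48.3, 70.2]                 # "true" AUC(0-t) values
samples   = [(8.0, 12.0, 15.0), (10.0, 14.0, 18.0),
             (7.5, 11.0, 14.0), (11.0, 16.0, 21.0)]  # (c_0.5h, c_2h, c_8h)
predicted = [predict_auc(*s) for s in samples]

# Percentage prediction errors and the abstract's three summary metrics.
errors = [(pr - ob) / ob * 100.0 for pr, ob in zip(predicted, observed)]
mpe  = sum(errors) / len(errors)                     # bias, %
mape = sum(abs(e) for e in errors) / len(errors)     # accuracy, %
rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
print(f"MPE={mpe:.2f}%  MAPE={mape:.2f}%  RMSE={rmse:.2f}%")
```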
International Nuclear Information System (INIS)
Ketteler, G.; Kippels, K.
1988-01-01
In section I 'Basic principles' the following topics are considered: Constitutional-legal aspects of environmental protection, e.g. nuclear hazards and the remaining risk; European environmental law; international environmental law; administrative law, private law and criminal law relating to the environment; basic principles of environmental law, the instruments of public environmental law. Section II 'Special areas of law' is concerned with the law on water and waste, prevention of air pollution, nature conservation and care of the countryside. Legal decisions and literature up to June 1988 have been taken into consideration. (orig./RST)
Hesselink, M.W.; Gibbons, M.T.
2014-01-01
The concept of civil law has two distinct meanings. First, it denotes the law governing disputes between private parties (individuals, corporations), as opposed to other branches of the law, such as administrative law or criminal law, which relate to disputes between individuals and the state. Second, the term civil law is
Directory of Open Access Journals (Sweden)
Giovanni Leopoldo Rozza
2015-09-01
With the world becoming a global village, enterprises continuously seek to optimize their internal processes to maintain or improve their competitiveness and make better use of natural resources. In this context, decision support tools are an underlying requirement. Such tools help predict operational issues, avoiding rising costs, loss of productivity, work-related accident leave, or environmental disasters. This paper focuses on predicting the caustic concentration of spent liquor in the Bayer process for alumina production. Measuring caustic concentration is essential to keep it at expected levels; otherwise, quality issues may arise. The organization obtains caustic concentration from the chemical analysis laboratory once a day, which is not enough to issue preventive actions against process inefficiencies that become known only after the next day's measurement. This paper therefore proposes a mathematical model, built using multiple linear regression and artificial neural network techniques, to predict the spent liquor's caustic concentration, so that preventive actions can occur in real time. The models were built using a numerical computation tool (MATLAB) and a statistical analysis package (SPSS). The models' outputs (predicted caustic concentration) were compared with real laboratory data. We found evidence of superior results with artificial neural networks over the multiple linear regression model. The results suggest that replacing laboratory analysis with the forecasting model to support technical staff in decision making could be feasible.
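The multiple-linear-regression half of the approach described above can be sketched in a few lines of numpy. The process variables, coefficients, and noise below are hypothetical placeholders (real plant data would replace them); the point is the workflow: fit on historical samples, predict on held-out samples, and compare the error against a naive baseline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical process readings standing in for plant measurements.
n = 200
temp = rng.normal(100, 5, n)      # liquor temperature
flow = rng.normal(50, 4, n)       # feed flow
dens = rng.normal(1.3, 0.05, n)   # liquor density
caustic = 150 + 0.8 * temp - 0.5 * flow + 40 * dens + rng.normal(0, 2, n)

X = np.column_stack([np.ones(n), temp, flow, dens])
train, test = slice(0, 150), slice(150, None)

# Ordinary least squares on the training window only.
beta, *_ = np.linalg.lstsq(X[train], caustic[train], rcond=None)
pred = X[test] @ beta
rmse = np.sqrt(np.mean((pred - caustic[test]) ** 2))

# Naive baseline: always predict the training mean.
baseline = np.sqrt(np.mean((caustic[train].mean() - caustic[test]) ** 2))
```

An ANN model would be benchmarked against the same held-out window in the same way.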
Directory of Open Access Journals (Sweden)
M. M. Rashidi
2014-04-01
The variability of specific heats, internal irreversibility, and heat and frictional losses are neglected in air-standard analysis of internal combustion engine cycles. In this paper, the performance of an air-standard Diesel cycle is investigated using finite-time thermodynamics, with internal irreversibility described by compression and expansion efficiencies, variable specific heats, and losses due to heat transfer and friction. An artificial neural network (ANN) is proposed for predicting the thermal efficiency and power output versus the minimum and maximum temperatures of the cycle and the compression ratio. Results show that the first-law efficiency and the output power reach their maximum at a critical compression ratio for specific fixed parameters. The first-law efficiency increases as the heat leakage decreases; however, the heat leakage has no direct effect on the output power. The results also show that irreversibilities have depressing effects on the performance of the cycle. Finally, a comparison between the results of the thermodynamic analysis and the ANN prediction shows a maximum difference of 0.181% and 0.194% in estimating the thermal efficiency and the output power, respectively. The results obtained in this paper can be useful for evaluating and improving the performance of practical Diesel engines.
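For reference, the baseline that the finite-time analysis above relaxes is the ideal air-standard Diesel-cycle efficiency (constant specific heats, no irreversibility or losses). A minimal sketch, with illustrative compression-ratio and cutoff-ratio values:

```python
def diesel_efficiency(r, cutoff, gamma=1.4):
    """Ideal air-standard Diesel-cycle thermal efficiency.

    r      -- compression ratio
    cutoff -- cutoff (load) ratio, > 1
    gamma  -- specific-heat ratio of air
    """
    return 1.0 - (cutoff**gamma - 1.0) / (r**(gamma - 1.0) * gamma * (cutoff - 1.0))

# Illustrative operating point, not taken from the paper.
eta = diesel_efficiency(r=18.0, cutoff=2.0)
```

The ideal efficiency rises with compression ratio; the paper's point is that once heat leakage, friction, and internal irreversibility are included, efficiency and power instead peak at a critical compression ratio.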
Cui, Haibo; Wei, Xiaomei; Huang, Yu; Hu, Bin; Fang, Yaping; Wang, Jia
2014-01-01
Among human influenza viruses, strain A/H3N2 accounts for over a quarter of a million deaths annually. Antigenic variants of these viruses often render current vaccinations ineffective and lead to repeated infections. In this study, a computational model was developed to predict antigenic variants of the A/H3N2 strain. First, 18 critical antigenic amino acids in the hemagglutinin (HA) protein were recognized using a scoring method combining the phi (ϕ) coefficient and information entropy. Next, a prediction model was developed by integrating the multiple linear regression method with eight types of physicochemical changes in critical amino acid positions. When compared to three other known models, our prediction model achieved the best performance not only on the training dataset but also on the commonly used testing dataset composed of 31878 antigenic relationships of the H3N2 influenza virus.
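The two scoring ingredients named above are standard quantities and easy to sketch. Below, the phi coefficient is computed for a 2x2 contingency table and Shannon entropy for one alignment column; the table layout, toy counts, and toy column are illustrative assumptions, not the study's data.

```python
import math
from collections import Counter

def phi_coefficient(a, b, c, d):
    """Phi for a 2x2 table; here rows = residue changed / unchanged and
    columns = antigenically distinct / similar (an assumed layout)."""
    denom = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return (a * d - b * c) / denom if denom else 0.0

def column_entropy(column):
    """Shannon entropy (bits) of the amino-acid distribution at one HA position."""
    counts = Counter(column)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Toy alignment column and toy 2x2 counts.
ent = column_entropy("KKKNNKKT")
phi = phi_coefficient(30, 5, 4, 41)
```

Positions scoring high on both measures would be candidates for the 18 critical antigenic sites.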
Laso Carbajo, Manuel; Karayiannis, Nikos Ch.
2008-01-01
We present predictions for the static scaling exponents and for the cross-over polymer volumetric fractions in the marginal and concentrated solution regimes. Corrections for finite chain length are made. Predictions are based on an analysis of correlated fluctuations in density and chain length, in a semigrand ensemble in which mers and solvent sites exchange identities. Cross-over volumetric fractions are found to be chain length independent to first order, although reciprocal-N corrections...
Perdigão, R. A. P.
2017-12-01
Predictability assessments are traditionally made on a case-by-case basis, often by running the particular model of interest with randomly perturbed initial/boundary conditions and parameters, producing computationally expensive ensembles. These approaches provide a lumped statistical view of uncertainty evolution, without eliciting the fundamental processes and interactions at play in the uncertainty dynamics. In order to address these limitations, we introduce a systematic dynamical framework for predictability assessment and forecast, by analytically deriving governing equations of predictability in terms of the fundamental architecture of dynamical systems, independent of any particular problem under consideration. The framework further relates multiple uncertainty sources along with their coevolutionary interplay, enabling a comprehensive and explicit treatment of uncertainty dynamics along time, without requiring the actual model to be run. In doing so, computational resources are freed and a quick and effective a-priori systematic dynamic evaluation is made of predictability evolution and its challenges, including aspects in the model architecture and intervening variables that may require optimization ahead of initiating any model runs. It further brings out universal dynamic features in the error dynamics elusive to any case specific treatment, ultimately shedding fundamental light on the challenging issue of predictability. The formulated approach, framed with broad mathematical physics generality in mind, is then implemented in dynamic models of nonlinear geophysical systems with various degrees of complexity, in order to evaluate their limitations and provide informed assistance on how to optimize their design and improve their predictability in fundamental dynamical terms.
DEFF Research Database (Denmark)
Langsted, Lars Bo; Garde, Peter; Greve, Vagn
The book contains a thorough description of Danish substantive criminal law, criminal procedure and execution of sanctions. The book was originally published as a monograph in the International Encyclopaedia of Laws/Criminal Law.
Fragkaki, A G; Farmaki, E; Thomaidis, N; Tsantili-Kakoulidou, A; Angelis, Y S; Koupparis, M; Georgakopoulos, C
2012-09-21
The comparison among different modelling techniques, such as multiple linear regression, partial least squares and artificial neural networks, has been performed in order to construct and evaluate models for prediction of gas chromatographic relative retention times of trimethylsilylated anabolic androgenic steroids. A quantitative structure-retention relationship study using the multiple linear regression and partial least squares techniques had been conducted previously. In the present study, artificial neural network models were constructed and used for the prediction of relative retention times of anabolic androgenic steroids, and their efficiency is compared with that of the models derived from the multiple linear regression and partial least squares techniques. For overall ranking of the models, a novel procedure [Trends Anal. Chem. 29 (2010) 101-109] based on sum of ranking differences was applied, which permits the best model to be selected. The suggested models are considered useful for the estimation of relative retention times of designer steroids for which no analytical data are available. Copyright © 2012 Elsevier B.V. All rights reserved.
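The sum-of-ranking-differences procedure used for model ranking can be sketched compactly: rank the objects by each model's predictions and by a reference (here, the measured values), and sum the absolute rank differences; the smallest sum wins. The retention-time numbers below are invented for illustration.

```python
import numpy as np

def srd(values, reference):
    """Sum of ranking differences between the ranking induced by a model's
    values and the ranking induced by the reference values."""
    r_model = np.argsort(np.argsort(values))
    r_ref = np.argsort(np.argsort(reference))
    return int(np.abs(r_model - r_ref).sum())

# Toy relative retention times for five steroids; the reference is "measured".
measured = np.array([1.00, 1.12, 1.25, 1.40, 1.55])
mlr = np.array([1.02, 1.32, 1.30, 1.38, 1.50])   # one pair ranked in the wrong order
pls = np.array([1.05, 1.24, 1.20, 1.45, 1.48])   # likewise
ann = np.array([1.00, 1.13, 1.24, 1.41, 1.56])   # preserves the measured ordering

scores = {m: srd(v, measured) for m, v in
          {"MLR": mlr, "PLS": pls, "ANN": ann}.items()}
best = min(scores, key=scores.get)
```

An SRD of zero means the model reproduces the reference ordering exactly; the published procedure additionally validates the scores against the SRD distribution of random rankings.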
Gazzaniga, Michael S
2008-11-06
Some of the implications for law of recent discoveries in neuroscience are considered in a new program established by the MacArthur Foundation. A group of neuroscientists, lawyers, philosophers, and jurists are examining issues in criminal law and, in particular, problems in responsibility and prediction and problems in legal decision making.
Directory of Open Access Journals (Sweden)
Harold J. Berman
1999-03-01
In the third millennium of the Christian era, which is characterised by the emergence of a world economy and eventually a world society, the concept of world law is needed to embrace not only the traditional disciplines of public international law and comparative law, but also the common underlying legal principles applicable in world trade, world finance, transnational transfer of technology and other fields of world economic law, as well as in such emerging fields as the protection of the world's environment and the protection of universal human rights. World law combines inter-state law with the common law of humanity and the customary law of various world communities.
International Nuclear Information System (INIS)
Joubert, H.D.; Terblans, J.J.; Swart, H.C.
2009-01-01
Classical inter-diffusion studies assume a constant time of annealing when samples are annealed in a furnace. It is assumed that the sample temperature reaches the annealing temperature immediately after insertion, while the sample temperature immediately drops to room temperature after removal, the annealing time being taken as the time between insertion and removal. Using the above assumption, the diffusion coefficient can be calculated in a number of ways. In reality, the sample temperature does not immediately reach the annealing temperature; instead it rises at a rate governed by several heat transfer mechanisms, depending on the annealing procedure. For short annealing times, the sample temperature may not attain the annealing temperature, while for extended annealing times the sample temperature may reach the annealing temperature only for a fraction of the annealing time. To eliminate the effect of heat transfer mechanisms, a linear temperature ramping regime is proposed. Used in conjunction with a suitable profile reconstructing technique and a numerical solution of Fick's second law, the inter-diffusion parameters obtained from a linear ramping of Ni/Cu thin film samples can be compared to those obtained from calculations performed with the so-called Mixing-Roughness-Information model or any other suitable method used to determine classical diffusion coefficients.
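The core of the proposed scheme, a numerical solution of Fick's second law under a linear temperature ramp, can be sketched with an explicit finite-difference update and an Arrhenius diffusivity. The prefactor D0, activation energy Ea, ramp rate, and film geometry below are illustrative placeholders, not fitted Ni/Cu inter-diffusion parameters.

```python
import numpy as np

kB = 8.617e-5                  # Boltzmann constant, eV/K
D0, Ea = 1e-14, 0.8            # m^2/s, eV (hypothetical Arrhenius parameters)

nx, dx = 100, 1e-9             # 100 nm bilayer on a 1 nm grid
c = np.zeros(nx)
c[:50] = 1.0                   # sharp initial interface (e.g. Ni on Cu)

dt, t_end = 0.5, 600.0         # time step (s); anneal duration (s)
t, T0, ramp = 0.0, 300.0, 1.0  # start at 300 K, ramp linearly at 1 K/s
while t < t_end:
    T = T0 + ramp * t
    D = D0 * np.exp(-Ea / (kB * T))          # Arrhenius diffusivity at ramped T
    assert D * dt / dx**2 < 0.5              # explicit-scheme stability bound
    c[1:-1] += D * dt / dx**2 * (c[2:] - 2.0 * c[1:-1] + c[:-2])
    c[0], c[-1] = c[1], c[-2]                # zero-flux boundaries
    t += dt
```

Because the diffusivity is negligible at low temperature, almost all the broadening happens near the top of the ramp, which is exactly the heat-transfer artifact the linear-ramping regime is designed to make explicit.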
Mendez, Javier; Monleon-Getino, Antonio; Jofre, Juan; Lucena, Francisco
2017-10-01
The present study aimed to establish the kinetics of the appearance of coliphage plaques using the double agar layer titration technique to evaluate the feasibility of using traditional coliphage plaque forming unit (PFU) enumeration as a rapid quantification method. Repeated measurements of the appearance of plaques of coliphages titrated according to ISO 10705-2 at different times were analysed using non-linear mixed-effects regression to determine the most suitable model of their appearance kinetics. Although this model is adequate, to simplify its applicability two linear models were developed to predict the numbers of coliphages reliably, using the PFU counts as determined by the ISO after only 3 hours of incubation. One linear model, for when the number of plaques detected after 3 hours was between 4 and 26 PFU, had a linear fit of (1.48 × Counts3h + 1.97); the other, for values >26 PFU, had a fit of (1.18 × Counts3h + 2.95). If the number of plaques detected was fewer than 4 PFU after 3 hours, we recommend incubation for (18 ± 3) hours. The study indicates that the traditional coliphage plating technique has a reasonable potential to provide results in a single working day without the need to invest in additional laboratory equipment.
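The two published linear fits combine into a simple piecewise predictor. The function below applies them exactly as reported in the abstract; the cutoff for "too few plaques" is taken to be 4 PFU, the lower bound of the first model's range.

```python
def predict_final_pfu(counts_3h):
    """Predict the final coliphage count from the 3 h PFU count using the
    abstract's two linear fits; return None when the count is too low for
    either fit (full 18 +/- 3 h incubation recommended instead)."""
    if counts_3h < 4:
        return None
    if counts_3h <= 26:
        return 1.48 * counts_3h + 1.97   # low-count fit (4-26 PFU at 3 h)
    return 1.18 * counts_3h + 2.95       # high-count fit (>26 PFU at 3 h)

low = predict_final_pfu(10)    # 1.48*10 + 1.97 = 16.77
high = predict_final_pfu(40)   # 1.18*40 + 2.95 = 50.15
```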
A linear model to predict with a multi-spectral radiometer the amount of nitrogen in winter wheat
Reyniers, M.; Walvoort, D.J.J.; Baardemaaker, De J.
2006-01-01
The objective was to develop an optimal vegetation index (VIopt) to predict, with a multi-spectral radiometer, the amount of nitrogen in the wheat crop (kg [N] ha-1). Optimality means that nitrogen in the crop can be measured accurately in the field during the growing season. It also means that the measurements are
DEFF Research Database (Denmark)
Minko, Tomasz; Wisniewski, Rafal; Bendtsen, Jan Dimon
2016-01-01
The retrieved heat excess can be stored in the water tank. For this purpose, the charging and discharging water loops have been designed. We present the non-linear model of the system described above and a non-linear model predictive supervisory controller that, according to the received price signal, occupancy information and ambient temperature, minimizes the operation cost of the whole system and distributes set points to the local controllers of the supermarket's subsystems. We find that when reliable information about the high-price period is available, it is profitable to use the refrigeration system to generate heat during the low-price period, store it, and use it to substitute the conventional heater during the high-price period.
International Nuclear Information System (INIS)
Diani, J.; Bedoui, F.; Regnier, G.
2008-01-01
The relevance of micromechanics modeling to the linear viscoelastic behavior of semi-crystalline polymers is studied. For this purpose, the linear viscoelastic behaviors of amorphous and semi-crystalline PETs are characterized. Then, two micromechanics modeling methods, which have been proven in a previous work to apply to the PET elastic behavior, are used to predict the viscoelastic behavior of three semi-crystalline PETs. The microstructures of the crystalline PETs are clearly defined using WAXS techniques. Since microstructures and mechanical properties of both constitutive phases (the crystalline and the amorphous) are defined, the simulations are run without adjustable parameters. Results show that the models are unable to reproduce the substantial decrease of viscosity induced by the increase of crystallinity. Unlike the real materials, for moderate crystallinity, both models show materials of viscosity nearly identical to the amorphous material
Directory of Open Access Journals (Sweden)
Xiaowei Li
2017-01-01
The risk of coal and gas outbursts can be predicted using a method that is linear and continuous and based on the initial gas flow in the borehole (IGFB); this method is significantly superior to the traditional point prediction method. Acquiring accurate critical values is the key to ensuring accurate predictions. Based on an ideal rock cross-cut coal uncovering model, the IGFB measurement device was developed. The present study measured the initial gas flow over 3 min in a 1 m long borehole with a diameter of 42 mm in the laboratory. A total of 48 sets of data were obtained. These data were fuzzy and chaotic. Fisher's discrimination method was able to transform these spatial data, which were multidimensional due to the factors influencing the IGFB, into a one-dimensional function and determine its critical value. Then, by processing the data into a normal distribution, the critical values of the outbursts were analyzed using linear discriminant analysis with Fisher's criterion. The weak and strong outbursts had critical values of 36.63 L and 80.85 L, respectively, and the accuracy of the back-discriminant analysis for the weak and strong outbursts was 94.74% and 92.86%, respectively. Eight outburst tests were simulated in the laboratory, the reverse verification accuracy was 100%, and the accuracy of the critical value was verified.
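Fisher's criterion, as used above, projects multidimensional measurements onto one axis and places a critical value on that axis. The sketch below does this for two synthetic classes standing in for weak and strong outburst data (the feature values are invented, not IGFB measurements).

```python
import numpy as np

rng = np.random.default_rng(2)

# Two synthetic classes of 2-D measurements (e.g. gas volume, flow rate).
weak = rng.normal([30.0, 5.0], [4.0, 1.0], size=(24, 2))
strong = rng.normal([80.0, 9.0], [6.0, 1.5], size=(24, 2))

# Fisher discriminant direction: w maximizes between-class over
# within-class scatter, i.e. w = Sw^{-1} (mu_strong - mu_weak).
sw = np.cov(weak.T) * (len(weak) - 1) + np.cov(strong.T) * (len(strong) - 1)
w = np.linalg.solve(sw, strong.mean(0) - weak.mean(0))

# Project to one dimension; take the critical value as the midpoint of
# the projected class means (one common convention).
z_weak, z_strong = weak @ w, strong @ w
critical = 0.5 * (z_weak.mean() + z_strong.mean())

accuracy = (np.sum(z_weak < critical) + np.sum(z_strong >= critical)) / 48
```

The back-discriminant accuracy reported in the abstract corresponds to re-classifying the training data against the derived critical value, as in the last line.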
Relativistic Linear Restoring Force
Clark, D.; Franklin, J.; Mann, N.
2012-01-01
We consider two different forms for a relativistic version of a linear restoring force. The pair comes from taking Hooke's law to be the force appearing on the right-hand side of the relativistic expressions dp/dt or dp/dτ. Either formulation recovers Hooke's law in the non-relativistic limit. In addition to these two forces, we…
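In the notation of the abstract, the two candidate equations of motion can be written out explicitly; a brief sketch, with m the rest mass and τ the proper time:

```latex
% relativistic momentum
p = \gamma m v, \qquad \gamma = \left(1 - v^2/c^2\right)^{-1/2}

% formulation 1: Hooke's law as a coordinate-time force
\frac{dp}{dt} = -kx

% formulation 2: Hooke's law as a proper-time force (dt = \gamma\, d\tau)
\frac{dp}{d\tau} = \gamma\,\frac{dp}{dt} = -kx

% non-relativistic limit v \ll c: \gamma \to 1, and both reduce to
m\,\frac{dv}{dt} = -kx
```

Since the two right-hand sides differ by a factor of γ, the formulations coincide only as v/c → 0, which is why both recover Hooke's law in the non-relativistic limit.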
Directory of Open Access Journals (Sweden)
M Taki
2017-05-01
Introduction Controlling greenhouse microclimate not only influences the growth of plants, but is also critical in the spread of diseases inside the greenhouse. The microclimate parameters were inside air, greenhouse roof and soil temperature, relative humidity and solar radiation intensity. Predicting the microclimate conditions inside a greenhouse and enabling the use of automatic control systems are the two main objectives of a greenhouse climate model. The microclimate inside a greenhouse can be predicted by conducting experiments or by using simulation. Static and dynamic models are used for this purpose as a function of the meteorological conditions and the parameters of the greenhouse components. Work up to 2015 simulated and predicted the inside variables in different greenhouse structures. Simulation often struggles to predict the inside climate of a greenhouse, and the simulation errors reported in the literature are high. The main objective of this paper is a comparison between heat transfer and regression models, evaluating their ability to predict inside air and roof temperature in a semi-solar greenhouse at Tabriz University. Materials and Methods In this study, a semi-solar greenhouse was designed and constructed in the North-West of Iran, in Azerbaijan Province (geographical location of 38°10′ N and 46°18′ E, with elevation of 1364 m above sea level). In this research, the shape and orientation of the greenhouse were selected from common greenhouse shapes so as to receive maximum solar radiation throughout the year. An internal thermal screen and a cement north wall were also used to store heat and prevent heat loss during the cold period of the year; hence we call this structure a 'semi-solar' greenhouse. It was covered with glass (4 mm thickness) and occupies a surface of approximately 15.36 m2 and a volume of 26.4 m3. The orientation of this greenhouse was East–West, perpendicular to the direction of the prevailing wind
Directory of Open Access Journals (Sweden)
Elena Vital'evna Bykova
2011-09-01
This paper describes the concept of energy security and a system of indicators for its monitoring. The indicator system includes more than 40 parameters that reflect the structure and state of fuel and energy complex sectors (fuel, electricity and heat & power), as well as taking into account economic, environmental and social aspects. A brief description of the structure of the computer system to monitor and analyze energy security is given. The complex contains informational, analytical and calculation modules, and provides applications for forecasting and modeling energy scenarios, modeling threats and determining levels of energy security. Its application to predicting the values of the indicators, and the methods developed for it, are described. This paper presents a method, based on conventional nonlinear mathematical programming, for addressing several energy problems and, in particular, the problem of predicting energy security. An example of its use and its implementation in the application 'Prognosis' is also given.
Prinstein, Mitchell J; Choukas-Bradley, Sophia C; Helms, Sarah W; Brechwald, Whitney A; Rancourt, Diana
2011-10-01
In contrast to prior work, recent theory suggests that high, not low, levels of adolescent peer popularity may be associated with health risk behavior. This study examined (a) whether popularity may be uniquely associated with cigarette use, marijuana use, and sexual risk behavior, beyond the predictive effects of aggression; (b) whether the longitudinal association between popularity and health risk behavior may be curvilinear; and (c) gender moderation. A total of 336 adolescents, initially in 10-11th grades, reported cigarette use, marijuana use, and number of sexual intercourse partners at two time points 18 months apart. Sociometric peer nominations were used to examine popularity and aggression. Longitudinal quadratic effects and gender moderation suggest that both high and low levels of popularity predict some, but not all, health risk behaviors. New theoretical models can be useful for understanding the complex manner in which health risk behaviors may be reinforced within the peer context.
Melchiorre, C.; Castellanos Abella, E. A.; van Westen, C. J.; Matteucci, M.
2011-04-01
This paper describes a procedure for landslide susceptibility assessment based on artificial neural networks, and focuses on the estimation of the prediction capability, robustness, and sensitivity of susceptibility models. The study is carried out in the Guantanamo Province of Cuba, where 186 landslides were mapped using photo-interpretation. Twelve conditioning factors were mapped including geomorphology, geology, soils, landuse, slope angle, slope direction, internal relief, drainage density, distance from roads and faults, rainfall intensity, and ground peak acceleration. A methodology was used that subdivided the database in 3 subsets. A training set was used for updating the weights. A validation set was used to stop the training procedure when the network started losing generalization capability, and a test set was used to calculate the performance of the network. A 10-fold cross-validation was performed in order to show that the results are repeatable. The prediction capability, the robustness analysis, and the sensitivity analysis were tested on 10 mutually exclusive datasets. The results show that by means of artificial neural networks it is possible to obtain models with high prediction capability and high robustness, and that an exploration of the effect of the individual variables is possible, even if they are considered as a black-box model.
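The three-way data split and 10-fold cross-validation described above are mostly index bookkeeping, which can be sketched without any actual network training. Here each fold takes a turn as the test set, the next fold serves as the early-stopping validation set, and the rest form the training set; the 186 samples mirror the mapped landslides, but the indices are synthetic.

```python
import numpy as np

rng = np.random.default_rng(6)

n_samples, k = 186, 10                  # 186 mapped landslides, 10 folds
idx = rng.permutation(n_samples)
folds = np.array_split(idx, k)

splits = []
for i in range(k):
    test = folds[i]                     # held out for performance estimation
    val = folds[(i + 1) % k]            # held out to stop training early
    train = np.concatenate(
        [f for j, f in enumerate(folds) if j not in (i, (i + 1) % k)]
    )
    splits.append((train, val, test))
```

Repeating training over all 10 splits and reporting the spread of test performance is what demonstrates that the results are repeatable.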
Wittek, Adam; Joldes, Grand; Couton, Mathieu; Warfield, Simon K; Miller, Karol
2010-12-01
Long computation times of non-linear (i.e. accounting for geometric and material non-linearity) biomechanical models have been regarded as one of the key factors preventing application of such models in predicting organ deformation for image-guided surgery. This contribution presents real-time patient-specific computation of the deformation field within the brain for six cases of brain shift induced by craniotomy (i.e. surgical opening of the skull) using specialised non-linear finite element procedures implemented on a graphics processing unit (GPU). In contrast to commercial finite element codes that rely on an updated Lagrangian formulation and implicit integration in time domain for steady state solutions, our procedures utilise the total Lagrangian formulation with explicit time stepping and dynamic relaxation. We used patient-specific finite element meshes consisting of hexahedral and non-locking tetrahedral elements, together with realistic material properties for the brain tissue and appropriate contact conditions at the boundaries. The loading was defined by prescribing deformations on the brain surface under the craniotomy. Application of the computed deformation fields to register (i.e. align) the preoperative and intraoperative images indicated that the models very accurately predict the intraoperative deformations within the brain. For each case, computing the brain deformation field took less than 4 s using an NVIDIA Tesla C870 GPU, which is two orders of magnitude reduction in computation time in comparison to our previous study in which the brain deformation was predicted using a commercial finite element solver executed on a personal computer. Copyright © 2010 Elsevier Ltd. All rights reserved.
International Nuclear Information System (INIS)
Kumar, Ajay; Ravi, P.M.; Guneshwar, S.L.; Rout, Sabyasachi; Mishra, Manish K.; Pulhani, Vandana; Tripathi, R.M.
2018-01-01
Numerous common methods (the batch laboratory method, the column laboratory method, the field-batch method, field modeling and the Koc method) are used frequently for determination of Kd values. Recently, multiple regression models have been considered the new best estimates for predicting the Kd of radionuclides in the environment. It is also a well-known fact that the Kd value is highly influenced by the physico-chemical properties of sediment. Due to the significant variability in influencing parameters, measured Kd values can range over several orders of magnitude under different environmental conditions. The aim of this study is to develop a predictive model for the Kd values of 137Cs and 60Co based on sediment properties, using multiple linear regression analysis
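Because Kd spans orders of magnitude, a common modelling choice is to regress log10(Kd) on the sediment properties and report multiplicative prediction error. The sketch below does exactly that on synthetic data; the chosen predictors (pH, organic matter, clay fraction) and coefficients are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical sediment properties for 60 samples.
n = 60
ph = rng.uniform(5.5, 8.5, n)
om = rng.uniform(0.5, 8.0, n)       # organic matter, %
clay = rng.uniform(5.0, 40.0, n)    # clay fraction, %

# Synthetic log10(Kd); the generating coefficients are illustrative only.
log_kd = 0.4 * ph + 0.15 * om + 0.02 * clay - 1.0 + rng.normal(0, 0.1, n)

# Multiple linear regression on the log scale.
X = np.column_stack([np.ones(n), ph, om, clay])
beta, *_ = np.linalg.lstsq(X, log_kd, rcond=None)

pred_kd = 10 ** (X @ beta)
ratio = pred_kd / 10 ** log_kd      # multiplicative prediction error
```

Back-transforming to the linear scale is what makes "predicted within a factor of X of measured" statements, like those in the SMLR abstract below, natural to compute.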
Ansari, Hamid Reza
2014-09-01
In this paper we propose a new method for predicting rock porosity based on a combination of several artificial intelligence systems. The method focuses on one of the Iranian carbonate fields in the Persian Gulf. Because there is strong heterogeneity in carbonate formations, estimation of rock properties poses more of a challenge than in sandstone. For this purpose, seismic colored inversion (SCI) and a new approach of committee machine are used in order to improve porosity estimation. The study comprises three major steps. First, a series of sample-based attributes is calculated from the 3D seismic volume. Acoustic impedance is an important attribute that is obtained by the SCI method in this study. Second, the porosity log is predicted from seismic attributes using common intelligent computation systems including: probabilistic neural network (PNN), radial basis function network (RBFN), multi-layer feed forward network (MLFN), ε-support vector regression (ε-SVR) and adaptive neuro-fuzzy inference system (ANFIS). Finally, a power law committee machine (PLCM) is constructed based on the imperialist competitive algorithm (ICA) to combine the results of all previous predictions in a single solution. This technique is called PLCM-ICA in this paper. The results show that the PLCM-ICA model improved the results of the neural networks, support vector machine and neuro-fuzzy system.
Wang, Tiening; Takayasu, Makoto; Bordini, Bernardo
2014-01-01
Superconducting Nb3Sn Powder-In-Tube (PIT) strands could be used for the superconducting magnets of the next generation Large Hadron Collider. The strands are cabled into the typical flat Rutherford cable configuration. During the assembly of a magnet and its operation the strands experience not only longitudinal but also transverse load due to the pre-compression applied during the assembly and the Lorentz load felt when the magnets are energized. To properly design the magnets and guarantee their safe operation, mechanical load effects on the strand superconducting properties are studied extensively; particularly, many scaling laws based on tensile load experiments have been established to predict the critical current dependence on strain. However, the dependence of the superconducting properties on transverse load has not been extensively studied so far. One of the reasons is that transverse loading experiments are difficult to conduct due to the small diameter of the strand (about 1 mm) and the data ...
Hubble's Law Implies Benford's Law for Distances to Galaxies ...
Indian Academy of Sciences (India)
in both time and space, predicts that conformity to Benford's law will improve as more data on distances to galaxies become available. Conversely, with the logical derivation of this law presented here, the recent empirical observations may be viewed as independent evidence of the validity of Hubble's law.
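The qualitative claim is easy to demonstrate numerically: a set of distances spread log-uniformly over several decades, a crude stand-in for the ever-widening range of galaxy distances the article associates with Hubble expansion, has leading digits that follow Benford's distribution log10(1 + 1/d). The decade range and sample size below are arbitrary.

```python
import math
import numpy as np

rng = np.random.default_rng(4)

# Distances spread log-uniformly over six decades (units arbitrary).
d = 10 ** rng.uniform(0, 6, 100_000)

# Leading significant digit of each distance.
first = np.array([int(str(x)[0]) for x in d])

observed = np.array([(first == k).mean() for k in range(1, 10)])
benford = np.array([math.log10(1 + 1 / k) for k in range(1, 10)])
max_dev = np.abs(observed - benford).max()
```

Digit 1 leads about 30.1% of the time and digit 9 about 4.6%, matching Benford's prediction to within sampling noise.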
de Bruin, Anique B H; Smits, Niels; Rikers, Remy M J P; Schmidt, Henk G
2008-11-01
In this study, the longitudinal relation between deliberate practice and performance in chess was examined using a linear mixed models analysis. The practice activities and performance ratings of young elite chess players, who were either in, or had dropped out of the Dutch national chess training, were analysed since they had started playing chess seriously. The results revealed that deliberate practice (i.e. serious chess study alone and serious chess play) strongly contributed to chess performance. The influence of deliberate practice was not only observable in current performance, but also over chess players' careers. Moreover, although the drop-outs' chess ratings developed more slowly over time, both the persistent and drop-out chess players benefited to the same extent from investments in deliberate practice. Finally, the effect of gender on chess performance proved to be much smaller than the effect of deliberate practice. This study provides longitudinal support for the monotonic benefits assumption of deliberate practice, by showing that over chess players' careers, deliberate practice has a significant effect on performance, and to the same extent for chess players of different ultimate performance levels. The results of this study are not in line with critique raised against the deliberate practice theory that the factors deliberate practice and talent could be confounded.
Directory of Open Access Journals (Sweden)
G. P. Tolstopiatenko
2014-01-01
At the origin of the International Law Department were such eminent scientists, diplomats and teachers as V.N. Durdenevsky, S.B. Krylov and F.I. Kozhevnikov. International law studies in the USSR and Russia during the second half of the XX century were largely shaped by the lawyers of MGIMO. They had a large influence on education in international law in the whole USSR, and since the 1990s in Russia and other CIS countries. The prominence of the research of MGIMO international lawyers was due to close connections with international practice, involving international negotiations in the United Nations and other international fora, diplomatic conferences and international scientific conferences. This experience is represented in the MGIMO handbooks on international law, which are still in demand. The Faculty of International Law at MGIMO consists of seven departments: Department of International Law; Department of Private International and Comparative Law; Department of European Law; Department of Comparative Constitutional Law; Department of Administrative and Financial Law; Department of Criminal Law; and Department of Criminal Procedure and Criminalistics. Many Russian lawyers famous at home and abroad work at the Faculty, contributing to domestic and international law studies. In 1947 the Academy of Sciences of the USSR published the "International Law" textbook, which was the first textbook on the subject in the USSR. S.B. Krylov and V.N. Durdenevsky were the authors and editors of the textbook. The first generations of MGIMO students studied international law according to this textbook. All subsequent books on international law published in the USSR were based on the approach to the teaching of international law developed in the textbook by S.B. Krylov and V.N. Durdenevsky. The first textbook of international law with the stamp of MGIMO, edited by F.I. Kozhevnikov, was published in 1964. This textbook later went through five editions in 1966, 1972
Nakamura, Kengo; Yasutaka, Tetsuo; Kuwatani, Tatsu; Komai, Takeshi
2017-11-01
In this study, we applied sparse multiple linear regression (SMLR) analysis to clarify the relationships between soil properties and adsorption characteristics for a range of soils across Japan and to identify easily obtained physical and chemical soil properties that could be used to predict the K and n values of cadmium, lead and fluorine. A model was first constructed that can easily predict the K and n values from nine soil parameters (pH, cation exchange capacity, specific surface area, total carbon, soil organic matter from loss on ignition, water holding capacity, and the ratios of sand, silt and clay). The K and n values of cadmium, lead and fluorine of 17 soil samples were used to verify the SMLR models by the root mean square error values obtained from 512 combinations of soil parameters. The SMLR analysis indicated that fluorine adsorption to soil may be associated with organic matter, whereas cadmium or lead adsorption to soil is more likely to be influenced by soil pH and loss on ignition (IL). We found that an accurate K value can be predicted from more than three soil parameters for most soils. Approximately 65% of the predicted values were between 33% and 300% of their measured values for the K value; 76% of the predicted values were within ±30% of their measured values for the n value. Our findings suggest that the adsorption properties of lead, cadmium and fluorine to soil can be predicted from soil physical and chemical properties using the presented models. Copyright © 2017 Elsevier Ltd. All rights reserved.
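As a rough illustration of the combination search described above, the following pure-Python sketch fits ordinary least squares to every subset of a small hypothetical soil-parameter set and keeps the lowest-RMSE model. All data values and parameter names are invented placeholders, not the study's measurements.

```python
import itertools
import math

def ols_fit(X, y):
    """Ordinary least squares via normal equations, solved by Gaussian
    elimination (pure Python; fine for the small models sketched here)."""
    n, p = len(X), len(X[0])
    A = [[sum(X[i][j] * X[i][k] for i in range(n)) for k in range(p)] for j in range(p)]
    b = [sum(X[i][j] * y[i] for i in range(n)) for j in range(p)]
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))  # partial pivoting
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * p
    for r in range(p - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, p))) / A[r][r]
    return beta

def rmse(X, y, beta):
    return math.sqrt(sum((sum(bj * xj for bj, xj in zip(beta, row)) - yi) ** 2
                         for row, yi in zip(X, y)) / len(y))

def best_subset(X, y, names):
    """Score every non-empty predictor subset by RMSE and keep the best,
    mirroring the abstract's search over combinations of soil parameters."""
    best = None
    for k in range(1, len(names) + 1):
        for idx in itertools.combinations(range(len(names)), k):
            Xs = [[1.0] + [row[i] for i in idx] for row in X]  # intercept + subset
            beta = ols_fit(Xs, y)
            err = rmse(Xs, y, beta)
            if best is None or err < best[0]:
                best = (err, [names[i] for i in idx], beta)
    return best

# Hypothetical soil data: here log K depends only on pH by construction,
# so the subset search should single out pH.
names = ["pH", "CEC", "clay"]
X = [[5.0, 10.0, 20.0], [6.0, 12.0, 25.0], [7.0, 9.0, 22.0],
     [5.5, 11.0, 30.0], [6.5, 14.0, 18.0]]
y = [2.0 + 0.5 * row[0] for row in X]
err, chosen, beta = best_subset(X, y, names)
```

With 9 parameters this exhaustive search visits the 2^9 = 512 combinations mentioned in the abstract (here only 2^3 - 1 subsets, for brevity).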
An Offline Formulation of MPC for LPV Systems Using Linear Matrix Inequalities
Directory of Open Access Journals (Sweden)
P. Bumroongsri
2014-01-01
Full Text Available An offline model predictive control (MPC algorithm for linear parameter varying (LPV systems is presented. The main contribution is to develop an offline MPC algorithm for LPV systems that can deal with both time-varying scheduling parameter and persistent disturbance. The norm-bounding technique is used to derive an offline MPC algorithm based on the parameter-dependent state feedback control law and the parameter-dependent Lyapunov functions. The online computational time is reduced by solving offline the linear matrix inequality (LMI optimization problems to find the sequences of explicit state feedback control laws. At each sampling instant, a parameter-dependent state feedback control law is computed by linear interpolation between the precomputed state feedback control laws. The algorithm is illustrated with two examples. The results show that robust stability can be ensured in the presence of both time-varying scheduling parameter and persistent disturbance.
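The online step described above, linear interpolation between precomputed state-feedback gains at the current scheduling parameter, can be sketched in a few lines. The gain matrices, grid points and state vector below are invented placeholders, not values from the paper.

```python
# Hypothetical offline gains K_i (1x2 matrices), precomputed for scheduling
# parameter values theta_i; both the values and the grid are illustrative.
thetas = [0.0, 0.5, 1.0]
gains = [[[-1.0, -0.2]], [[-1.6, -0.5]], [[-2.4, -0.9]]]

def interpolated_gain(theta, thetas, gains):
    """Entry-wise linear interpolation between the two bracketing offline
    state-feedback gains, as done online at each sampling instant."""
    theta = min(max(theta, thetas[0]), thetas[-1])  # clamp to the offline grid
    for i in range(len(thetas) - 1):
        lo, hi = thetas[i], thetas[i + 1]
        if lo <= theta <= hi:
            w = (theta - lo) / (hi - lo)
            return [[(1 - w) * a + w * b for a, b in zip(ra, rb)]
                    for ra, rb in zip(gains[i], gains[i + 1])]

def control(K, x):
    """u = K x for a gain matrix K and state vector x."""
    return [sum(k * xi for k, xi in zip(row, x)) for row in K]

K = interpolated_gain(0.25, thetas, gains)  # halfway between the first two gains
u = control(K, [1.0, 0.0])
```

The expensive LMI optimization happens offline; at run time only this interpolation and one matrix-vector product remain, which is the computational saving the abstract describes.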
International Nuclear Information System (INIS)
Monte, Luigi
2013-01-01
The present work describes the application of a non-linear Leslie model for predicting the effects of ionising radiation on wild populations. The model assumes that, for protracted chronic irradiation, the effect-dose relationship is linear. In particular, the effects of radiation are modelled by relating the increase in the mortality rates of the individuals to the dose rates through a proportionality factor C. The model was tested using independent data and information from a series of experiments that were aimed at assessing the response to radiation of wild populations of meadow voles and whose results were described in the international literature. The comparison of the model results with the data selected from the above mentioned experiments showed that the model overestimated the detrimental effects of radiation on the size of irradiated populations when the values of C were within the range derived from the median lethal dose (L50) for small mammals. The described non-linear model suggests that the non-expressed biotic potential of the species whose growth is limited by processes of environmental resistance, such as the competition among the individuals of the same or of different species for the exploitation of the available resources, can be a factor that determines a more effective response of population to the radiation effects. -- Highlights: • A model to assess the radiation effects on wild population is described. • The model is based on non-linear Leslie matrix. • The model is applied to small mammals living in an irradiated meadow. • Model output is conservative if effect-dose factor estimated from L50 is used. • Systemic response to stress of populations in competitive conditions may be more effective
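A minimal sketch of the modelling idea above: a Leslie projection whose survival terms are reduced by C times the dose rate, i.e. the linear effect-dose assumption. All vital rates and the values of C and the dose rate are illustrative assumptions, not the experiments' data.

```python
def project(fecundity, survival, n0, steps, dose_rate=0.0, C=0.0):
    """Project an age-structured population with a Leslie matrix whose
    survival entries are reduced by radiation-induced mortality C * dose_rate
    (the abstract's linear effect-dose assumption)."""
    n = list(n0)
    for _ in range(steps):
        births = sum(f * ni for f, ni in zip(fecundity, n))
        aged = [max(0.0, s - C * dose_rate) * ni for s, ni in zip(survival, n)]
        n = [births] + aged
    return n

# Hypothetical vole-like vital rates (illustrative, not the experiments' data):
fec = [0.0, 1.2, 0.8]      # age-class fecundities
surv = [0.6, 0.4]          # survival from age 0->1 and from age 1->2
control_pop = project(fec, surv, [100.0, 50.0, 20.0], 5)
irradiated = project(fec, surv, [100.0, 50.0, 20.0], 5, dose_rate=2.0, C=0.1)
```

Comparing the irradiated trajectory with the unirradiated control is the kind of population-size comparison the abstract makes against the meadow-vole data.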
Directory of Open Access Journals (Sweden)
Orkun Oztürk
Full Text Available BACKGROUND: Predicting the type-1 Human Immunodeficiency Virus (HIV-1) protease cleavage site in protein molecules and determining its specificity is an important task which has attracted considerable attention in the research community. Achievements in this area are expected to result in effective drug design (especially for HIV-1 protease inhibitors) against this life-threatening virus. However, some drawbacks (like the shortage of available training data and the high dimensionality of the feature space) turn this task into a difficult classification problem. Thus, various machine learning techniques, and specifically several classification methods, have been proposed in order to increase the accuracy of the classification model. In addition, for several classification problems which are characterized by having few samples and many features, selecting the most relevant features is a major factor for increasing classification accuracy. RESULTS: We propose for HIV-1 data a consistency-based feature selection approach in conjunction with recursive feature elimination of support vector machines (SVMs). We used various classifiers for evaluating the results obtained from the feature selection process. We further demonstrated the effectiveness of our proposed method by comparing it with a state-of-the-art feature selection method applied on HIV-1 data, and we evaluated the reported results based on attributes which have been selected from different combinations. CONCLUSION: Applying feature selection on training data before realizing the classification task seems to be a reasonable data-mining process when working with types of data similar to HIV-1. On HIV-1 data, some feature selection or extraction operations in conjunction with different classifiers have been tested and noteworthy outcomes have been reported. These facts motivate the work presented in this paper. SOFTWARE AVAILABILITY: The software is available at http
Shaw, Malcolm N
2017-01-01
International Law is the definitive and authoritative text on the subject, offering Shaw's unbeatable combination of clarity of expression and academic rigour and ensuring both understanding and critical analysis in an engaging and authoritative style. Encompassing the leading principles, practice and cases, and retaining and developing the detailed references which encourage and assist the reader in further study, this new edition motivates and challenges students and professionals while remaining accessible and engaging. Fully updated to reflect recent case law and treaty developments, this edition contains an expanded treatment of the relationship between international and domestic law, the principles of international humanitarian law, and international criminal law alongside additional material on international economic law.
International Nuclear Information System (INIS)
Anon.
1980-01-01
This pocketbook contains the major federal regulations on environmental protection. They serve to protect and cultivate mankind's natural foundations of life and to preserve the environment. The environmental law is divided as follows: constitutional law on the environment, common administrative law on the environment, special administrative law on the environment including conservation of nature and preservation of rural amenities, protection of waters, waste management, protection against nuisances, nuclear energy and radiation protection, energy conservation, protection against dangerous substances, private law relating to the environment, criminal law relating to the environment. (HSCH) [de
Real-time Non-linear Target Tracking Control of Wheeled Mobile Robots
Institute of Scientific and Technical Information of China (English)
YU Wenyong
2006-01-01
A control strategy for real-time target tracking for wheeled mobile robots is presented. Using a modified Kalman filter for environment perception, a novel tracking control law derived from Lyapunov stability theory is introduced. The linear and angular velocities are tuned subject to the robot's mechanical constraints. The proposed control system can simultaneously solve the target trajectory prediction, real-time tracking, and posture regulation problems of a wheeled mobile robot. Experimental results illustrate the effectiveness of the proposed tracking control law.
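The abstract does not give the control law itself; as a hedged illustration, the sketch below uses a classic Lyapunov-based tracking law of the same family (a Kanayama-style law for a unicycle robot), with invented gains. It is a stand-in for, not a reconstruction of, the paper's law.

```python
import math

def tracking_control(error, v_ref, w_ref, kx=1.0, ky=8.0, kt=4.0):
    """Classic Lyapunov-based tracking law for a unicycle-type robot
    (Kanayama-style; illustrative, not the paper's exact law).
    error = (ex, ey, et) is the reference pose error in the robot frame."""
    ex, ey, et = error
    v = v_ref * math.cos(et) + kx * ex                 # linear velocity command
    w = w_ref + v_ref * (ky * ey + kt * math.sin(et))  # angular velocity command
    return v, w

# With zero tracking error the commands reduce to the reference velocities.
v0, w0 = tracking_control((0.0, 0.0, 0.0), 0.5, 0.1)
# A robot lagging behind the target (ex > 0) is commanded to speed up.
v1, _ = tracking_control((0.2, 0.0, 0.0), 0.5, 0.1)
```

In practice the commanded (v, w) would then be clipped to the robot's mechanical velocity limits, the tuning step mentioned in the abstract.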
Energy Technology Data Exchange (ETDEWEB)
Buerke, Boris; Puesken, Michael; Heindel, Walter; Wessling, Johannes (Dept. of Clinical Radiology, Univ. of Muenster (Germany)), email: buerkeb@uni-muenster.de; Gerss, Joachim (Dept. of Medical Informatics and Biomathematics, Univ. of Muenster (Germany)); Weckesser, Matthias (Dept. of Nuclear Medicine, Univ. of Muenster (Germany))
2011-06-15
Background: Volumetry of lymph nodes potentially better reflects asymmetric size alterations, independently of lymph node orientation, than metric parameters (e.g. long-axis diameter). Purpose: To distinguish between benign and malignant lymph nodes by comparing 2D and semi-automatic 3D measurements in MSCT. Material and Methods: FDG-18 PET-CT was performed in 33 patients prior to therapy for malignant melanoma at stage III/IV. One hundred and eighty-six cervico-axillary, abdominal and inguinal lymph nodes were evaluated independently by two radiologists, both manually and with the use of semi-automatic segmentation software. Long axis (LAD), short axis (SAD), maximal 3D diameter, volume and elongation were obtained. PET-CT, PET-CT follow-up and/or histology served as a combined reference standard. Statistics encompassed intra-class correlation coefficients and ROC curves. Results: Compared to manual assessment, semi-automatic inter-observer variability was found to be lower, e.g. at 2.4% (95% CI 0.05-4.8) for LAD. The standard of reference revealed metastases in 90 (48%) of 186 lymph nodes. Semi-automatic prediction of lymph node metastases revealed the highest areas under the ROC curves for volume (reader 1: 0.77, 95% CI 0.64-0.90; reader 2: 0.76, 95% CI 0.59-0.86) and SAD (reader 1: 0.76, 95% CI 0.64-0.88; reader 2: 0.75, 95% CI 0.62-0.89). The findings for LAD (reader 1: 0.73, 95% CI 0.60-0.86; reader 2: 0.71, 95% CI 0.57-0.85) and maximal 3D diameter (reader 1: 0.70, 95% CI 0.53-0.86; reader 2: 0.76, 95% CI 0.50-0.80) were substantially lower, and those for elongation (reader 1: 0.65, 95% CI 0.50-0.79; reader 2: 0.66, 95% CI 0.52-0.81) significantly lower (p < 0.05). Conclusion: Semi-automatic analysis of lymph nodes in malignant melanoma is supported by high segmentation quality and reproducibility. As compared to the established SAD, semi-automatic lymph node volumetry does not have an additive role for categorizing lymph nodes as normal or metastatic in malignant melanoma
Energy Technology Data Exchange (ETDEWEB)
Canciam, Cesar Augusto [Universidade Tecnologica Federal do Parana (UTFPR), Campus Ponta Grossa, PR (Brazil)], e-mail: canciam@utfpr.edu.br
2012-07-01
When evaluating the consumption of biofuels, knowledge of the density is of great importance in order to correct for the effect of temperature. The thermal expansion coefficient is a thermodynamic property that provides a measure of the density variation in response to temperature variation at constant pressure. This study aimed to predict the thermal expansion coefficients of ethyl biodiesels from castor bean, soybean, sunflower seed and Mabea fistulifera Mart. oils, and of methyl biodiesels from soybean, sunflower seed, souari nut, cotton, coconut, castor bean and palm oils, from beef tallow, chicken fat and residual hydrogenated vegetable fat. For this purpose, a linear regression analysis of the density of each biodiesel as a function of temperature was carried out, using data obtained from other works. The thermal expansion coefficients for the biodiesels lie between 6.3729×10⁻⁴ and 1.0410×10⁻³ °C⁻¹. In all cases, the correlation coefficients were over 0.99. (author)
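The procedure described, fitting density linearly against temperature and converting the slope into a thermal expansion coefficient via β = −(1/ρ)·dρ/dT, can be sketched as follows. The density data are invented, chosen only to land inside the quoted coefficient range.

```python
def linfit(x, y):
    """Least-squares slope and intercept of y against x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

def expansion_coefficient(temps, densities):
    """Volumetric thermal expansion coefficient from a linear fit of density
    vs. temperature: beta = -(1/rho_ref) * d(rho)/dT, with the reference
    density taken at the first temperature."""
    slope, intercept = linfit(temps, densities)
    rho_ref = intercept + slope * temps[0]
    return -slope / rho_ref

# Hypothetical biodiesel densities in kg/m^3 (illustrative, not the paper's data)
T = [20.0, 30.0, 40.0, 50.0]
rho = [880.0, 873.8, 867.6, 861.4]   # slope of -0.62 kg/m^3 per deg C
beta = expansion_coefficient(T, rho)
```

For these numbers β ≈ 7.0×10⁻⁴ °C⁻¹, within the 6.37×10⁻⁴ to 1.04×10⁻³ °C⁻¹ range reported in the abstract.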
Kouramas, K.I.
2011-08-01
This work presents a new algorithm for solving the explicit/multi-parametric model predictive control (or mp-MPC) problem for linear, time-invariant discrete-time systems, based on dynamic programming and multi-parametric programming techniques. The algorithm features two key steps: (i) a dynamic programming step, in which the mp-MPC problem is decomposed into a set of smaller subproblems in which only the current control, state variables, and constraints are considered, and (ii) a multi-parametric programming step, in which each subproblem is solved as a convex multi-parametric programming problem, to derive the control variables as an explicit function of the states. The key feature of the proposed method is that it overcomes potential limitations of previous methods for solving multi-parametric programming problems with dynamic programming, such as the need for global optimization for each subproblem of the dynamic programming step. © 2011 Elsevier Ltd. All rights reserved.
Relating Cohesive Zone Model to Linear Elastic Fracture Mechanics
Wang, John T.
2010-01-01
The conditions required for a cohesive zone model (CZM) to predict a failure load of a cracked structure similar to that obtained by a linear elastic fracture mechanics (LEFM) analysis are investigated in this paper. This study clarifies why many different phenomenological cohesive laws can produce similar fracture predictions. Analytical results for five cohesive zone models are obtained, using five different cohesive laws that have the same cohesive work rate (CWR, the area under the traction-separation curve) but different maximum tractions. The effect of the maximum traction on the predicted cohesive zone length and the remote applied load at fracture is presented. Similar to the small scale yielding condition for an LEFM analysis to be valid, the cohesive zone length also needs to be much smaller than the crack length. This is a necessary condition for a CZM to obtain a fracture prediction equivalent to an LEFM result.
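The idea of different cohesive laws sharing one cohesive work rate can be illustrated with symmetric triangular traction-separation laws; the symmetric shape and all numbers below are assumptions for illustration (the paper itself compares five different law shapes).

```python
def triangular_law(t_max, cwr):
    """Symmetric triangular traction-separation law with peak traction t_max
    and area (cohesive work rate) cwr. Returns the critical separation and
    the traction function. The symmetric shape is an illustrative assumption."""
    delta_c = 2.0 * cwr / t_max        # area = 0.5 * t_max * delta_c = cwr
    delta_p = 0.5 * delta_c            # separation at peak traction
    def traction(d):
        if d <= 0.0 or d >= delta_c:
            return 0.0
        if d <= delta_p:
            return t_max * d / delta_p
        return t_max * (delta_c - d) / (delta_c - delta_p)
    return delta_c, traction

def area(traction, delta_c, n=10000):
    """Numerical area under the traction-separation curve (Riemann sum)."""
    h = delta_c / n
    return sum(traction(i * h) * h for i in range(n))

cwr = 100.0                              # J/m^2, same cohesive work rate for both
d1, law1 = triangular_law(10.0e6, cwr)   # 10 MPa peak traction
d2, law2 = triangular_law(40.0e6, cwr)   # 40 MPa peak traction
```

Both laws enclose the same area, yet the lower-peak law stretches over a larger critical separation, mirroring the abstract's point that maximum traction controls the cohesive zone length while the CWR is held fixed.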
International Nuclear Information System (INIS)
Xu, H.; Wang, Y.
1999-01-01
In this letter, a linear free energy relationship is used to predict the Gibbs free energies of formation of crystalline phases of the pyrochlore and zirconolite families with stoichiometry MCaTi2O7 (or CaMTi2O7) from the known thermodynamic properties of aqueous tetravalent cations (M4+). The linear free energy relationship for tetravalent cations is expressed as ΔG°f(MvX) = a(MvX)·ΔG°n(M4+) + b(MvX) + β(MvX)·r(M4+), where the coefficients a(MvX), b(MvX), and β(MvX) characterize a particular structural family MvX, r(M4+) is the ionic radius of the M4+ cation, ΔG°f(MvX) is the standard Gibbs free energy of formation of MvX, and ΔG°n(M4+) is the standard non-solvation energy of the cation M4+. The coefficients for the zirconolite structural family with stoichiometry M4+CaTi2O7 are estimated to be a = 0.5717, b = -4284.67 kJ/mol, and β = 27.2 kJ/(mol·nm). The coefficients for the pyrochlore structural family with stoichiometry M4+CaTi2O7 are estimated to be a = 0.5717, b = -4174.25 kJ/mol, and β = 13.4 kJ/(mol·nm). Using the linear free energy relationship, the Gibbs free energies of formation of various zirconolite and pyrochlore phases are calculated. (orig.)
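The relationship and the quoted coefficients can be exercised directly; the cation's non-solvation energy and ionic radius below are placeholders for illustration, not tabulated values.

```python
def gibbs_formation(dG_nonsolv, r_nm, a, b, beta):
    """Linear free energy relationship of the abstract:
    dG_f = a * dG_n + b + beta * r, energies in kJ/mol, radius r in nm."""
    return a * dG_nonsolv + b + beta * r_nm

# Coefficients quoted in the abstract for the M(4+)CaTi2O7 stoichiometry
ZIRCONOLITE = (0.5717, -4284.67, 27.2)
PYROCHLORE = (0.5717, -4174.25, 13.4)

# Hypothetical cation properties (placeholders, not measured data)
dG_n, r = -500.0, 0.10
dG_zir = gibbs_formation(dG_n, r, *ZIRCONOLITE)
dG_pyr = gibbs_formation(dG_n, r, *PYROCHLORE)
```

Because the two families share the slope a, their predicted difference for a given cation is set by the b and β·r terms alone.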
Fridgeirsdottir, Gudrun A; Harris, Robert J; Dryden, Ian L; Fischer, Peter M; Roberts, Clive J
2018-03-29
Solid dispersions can be a successful way to enhance the bioavailability of poorly soluble drugs. Here 60 solid dispersion formulations were produced using ten chemically diverse, neutral, poorly soluble drugs, three commonly used polymers, and two manufacturing techniques, spray-drying and melt extrusion. Each formulation underwent a six-month stability study at accelerated conditions, 40 °C and 75% relative humidity (RH). Significant differences in times to crystallization (onset of crystallization) were observed between both the different polymers and the two processing methods. Stability from zero days to over one year was observed. The extensive experimental data set obtained from this stability study was used to build multiple linear regression models to correlate physicochemical properties of the active pharmaceutical ingredients (API) with the stability data. The purpose of these models is to indicate which combination of processing method and polymer carrier is most likely to give a stable solid dispersion. Six quantitative mathematical multiple linear regression-based models were produced based on selection of the most influential independent physical and chemical parameters from a set of 33 possible factors, one model for each combination of polymer and processing method, with good predictability of stability. Three general rules are proposed from these models for the formulation development of suitably stable solid dispersions. Namely, increased stability is correlated with increased glass transition temperature (Tg) of solid dispersions, as well as decreased number of H-bond donors and increased molecular flexibility (such as rotatable bonds and ring count) of the drug molecule.
Baba, Toshimi; Gotoh, Yusaku; Yamaguchi, Satoshi; Nakagawa, Satoshi; Abe, Hayato; Masuda, Yutaka; Kawahara, Takayoshi
2017-08-01
This study aimed to evaluate the validation reliability of single-step genomic best linear unbiased prediction (ssGBLUP) with a multiple-lactation random regression test-day model and to investigate the effect of adding genotyped cows on the reliability. Two data sets of test-day records from the first three lactations were used: full data from February 1975 to December 2015 (60 850 534 records from 2 853 810 cows) and reduced data cut off in 2011 (53 091 066 records from 2 502 307 cows). We used marker genotypes of 4480 bulls and 608 cows. Genomic enhanced breeding values (GEBV) of 305-day milk yield in all the lactations were estimated for at least 535 young bulls using two marker data sets: bull genotypes only, and both bull and cow genotypes. The realized reliability (R²) from linear regression analysis was used as an indicator of validation reliability. Using only genotyped bulls, R² ranged from 0.41 to 0.46 and was always higher than that of parent averages. Very similar R² values were observed when genotyped cows were added. An application of ssGBLUP to a multiple-lactation random regression model is feasible, and adding a limited number of genotyped cows has no significant effect on the reliability of GEBV for genotyped bulls. © 2016 Japanese Society of Animal Science.
Yan, Jun; Huang, Jian-Hua; He, Min; Lu, Hong-Bing; Yang, Rui; Kong, Bo; Xu, Qing-Song; Liang, Yi-Zeng
2013-08-01
Retention indices for frequently reported compounds of plant essential oils on three different stationary phases were investigated. Multivariate linear regression, partial least squares, and support vector machine, combined with a new variable selection approach called random-frog recently proposed by our group, were employed to model quantitative structure-retention relationships. Internal and external validations were performed to ensure stability and predictive ability. All three methods yielded acceptable models, with the optimal results obtained by support vector machine based on a small number of informative descriptors, giving squared cross-validation correlation coefficients of 0.9726, 0.9759, and 0.9331 on the dimethylsilicone stationary phase, the dimethylsilicone phase with 5% phenyl groups, and the PEG stationary phase, respectively. The performances of the two variable selection approaches, random-frog and genetic algorithm, are compared. The importance of the variables was found to be consistent whether estimated from correlation coefficients in multivariate linear regression equations or from selection probability in model spaces. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Shilov, Georgi E
1977-01-01
Covers determinants, linear spaces, systems of linear equations, linear functions of a vector argument, coordinate transformations, the canonical form of the matrix of a linear operator, bilinear and quadratic forms, Euclidean spaces, unitary spaces, quadratic forms in Euclidean and unitary spaces, finite-dimensional space. Problems with hints and answers.
International Nuclear Information System (INIS)
Kloepfer, M.
1989-01-01
This comprehensive reference book on environmental law and practice also is a valuable textbook for students specializing in the field. The entire law on pollution control and environmental protection is presented in an intelligent system, covering the latest developments in the Federal and Land legislation, public environmental law, and the related provisions in the fields of civil law and criminal law. The national survey is rounded up by information concerning the international environmental law, environmental law of the European Communities, and of other foreign countries as e.g. Austria and Switzerland. The author also reviews conditions in neighbouring fields such as technology and labour law, environmental economy, environmental policy. Special attention is given to current topics, as e.g. relating to genetic engineering, disused landfills or industrial sites, soil protection, transport of hazardous goods, liability for damage to forests, atomic energy law, and radiation protection law. The latest publishing dates of literature and court decisions considered in the book are in the first months of 1989. (RST) [de
Kavuncuoglu, Hatice; Kavuncuoglu, Erhan; Karatas, Seyda Merve; Benli, Büsra; Sagdic, Osman; Yalcin, Hasan
2018-04-09
A mathematical model was established to determine the diameter of the inhibition zone of walnut extract on twelve bacterial species. Type of extraction, concentration, and pathogen were taken as input variables. Two models were developed for this system: one with artificial neural networks (ANN) and the other with multiple linear regression (MLR). Four common training algorithms, Levenberg-Marquardt (LM), Bayesian regulation (BR), scaled conjugate gradient (SCG) and resilient back propagation (RP), were investigated and compared. Root mean squared error and correlation coefficient were evaluated as performance criteria. When these criteria were analyzed, ANN showed high prediction performance, while MLR showed low prediction performance. As a result, when different input values are provided to the system developed with ANN, the most accurate inhibition zone (IZ) estimates are obtained. The results of this study could offer new perspectives, particularly in the field of microbiology, because they could be applied to other types of extraction, concentrations, and pathogens, without resorting to experiments. Copyright © 2018 Elsevier B.V. All rights reserved.
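The two performance criteria named above, RMSE and the correlation coefficient, can be sketched as follows; the inhibition-zone values are invented stand-ins for an ANN-like close fit and a weaker MLR-like fit, not the study's data.

```python
import math

def rmse(pred, obs):
    """Root mean squared error between predictions and observations."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def pearson(pred, obs):
    """Pearson correlation coefficient between predictions and observations."""
    n = len(obs)
    mp, mo = sum(pred) / n, sum(obs) / n
    cov = sum((p - mp) * (o - mo) for p, o in zip(pred, obs))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    return cov / (sp * so)

# Hypothetical inhibition-zone diameters in mm (illustrative only)
observed = [10.0, 12.0, 15.0, 18.0, 22.0]
model_a = [10.2, 11.8, 15.3, 17.6, 22.4]   # mimics a close ANN-like fit
model_b = [14.0, 12.0, 15.0, 15.5, 18.0]   # mimics a weaker linear fit
```

A lower RMSE together with a higher correlation coefficient is exactly the pattern by which the abstract ranks ANN above MLR.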
International Nuclear Information System (INIS)
Kim, Yong-il; Kim, Yong Joong; Paeng, Jin Chul; Cheon, Gi Jeong; Lee, Dong Soo; Chung, June-Key; Kang, Keon Wook
2017-01-01
18F-Fluorodeoxyglucose (FDG) positron emission tomography (PET)/computed tomography (CT) has been investigated as a method to predict pancreatic cancer recurrence after pancreatic surgery. We evaluated the recently introduced heterogeneity indices of 18F-FDG PET/CT for predicting pancreatic cancer recurrence after surgery and compared them with current clinicopathologic and 18F-FDG PET/CT parameters. A total of 93 pancreatic ductal adenocarcinoma patients (M:F = 60:33, mean age = 64.2 ± 9.1 years) who underwent preoperative 18F-FDG PET/CT followed by pancreatic surgery were retrospectively enrolled. The standardized uptake values (SUVs) and tumor-to-background ratios (TBR) were measured on each 18F-FDG PET/CT as metabolic parameters. Metabolic tumor volume (MTV) and total lesion glycolysis (TLG) were examined as volumetric parameters. The coefficient of variance (heterogeneity index-1; SUVmean divided by the standard deviation) and linear regression slopes (heterogeneity index-2) of the MTV, according to SUV thresholds of 2.0, 2.5 and 3.0, were evaluated as heterogeneity indices. Predictive values of clinicopathologic and 18F-FDG PET/CT parameters and heterogeneity indices were compared in terms of pancreatic cancer recurrence. Seventy patients (75.3%) showed recurrence after pancreatic cancer surgery (mean time to recurrence = 9.4 ± 8.4 months). Comparing the recurrence and no-recurrence patients, all of the 18F-FDG PET/CT parameters and heterogeneity indices demonstrated significant differences. In univariate Cox-regression analyses, MTV (P = 0.013), TLG (P = 0.007), and heterogeneity index-2 (P = 0.027) were significant. Among the clinicopathologic parameters, CA19-9 (P = 0.025) and venous invasion (P = 0.002) were selected as significant parameters. In multivariate Cox-regression analyses, MTV (P = 0.005), TLG (P = 0.004), and heterogeneity index-2 (P = 0.016) with venous invasion (P < 0.001, 0.001, and 0.001, respectively) demonstrated significant results
Herrig, Ilona M; Böer, Simone I; Brennholt, Nicole; Manz, Werner
2015-11-15
Since rivers are typically subject to rapid changes in microbiological water quality, tools are needed to allow timely water quality assessment. A promising approach is the application of predictive models. In our study, we developed multiple linear regression (MLR) models in order to predict the abundance of the fecal indicator organisms Escherichia coli (EC), intestinal enterococci (IE) and somatic coliphages (SC) in the Lahn River, Germany. The models were developed on the basis of an extensive set of environmental parameters collected during a 12-month monitoring period. Two models were developed for each type of indicator: 1) an extended model including the maximum number of variables significantly explaining variations in indicator abundance, and 2) a simplified model reduced to the three most influential explanatory variables, thus obtaining a model which is less resource-intensive with regard to required data. Both approaches have the ability to model multiple sites within one river stretch. The three most important predictive variables in the optimized models for the bacterial indicators were NH4-N, turbidity and global solar irradiance, whereas chlorophyll a content, discharge and NH4-N were reliable model variables for somatic coliphages. Depending on indicator type, the extended models also included the additional variables rainfall, O2 content, pH and chlorophyll a. The extended models could explain 69% (EC), 74% (IE) and 72% (SC) of the observed variance in fecal indicator concentrations. The optimized models explained the observed variance in fecal indicator concentrations to 65% (EC), 70% (IE) and 68% (SC). Site-specific efficiencies ranged up to 82% (EC) and 81% (IE, SC). Our results suggest that MLR models are a promising tool for a timely water quality assessment in the Lahn area. Copyright © 2015 Elsevier Ltd. All rights reserved.
International Nuclear Information System (INIS)
Triffterer, O.
1980-01-01
Based on the draft proposed by the legal advisory board, the law for the control of environmental criminality was promulgated on 28 March 1980. The present commentary therefore, as seen from the results, corresponds in essence to the original assessment of the governmental draft. However, an introduction into the problems of environmental law precedes this commentary, for the better understanding of all those not acquainted with pollution law and the legal matter as a whole. (orig./HP) [de
Eliazar, Iddo
2017-11-01
Aging means that as things grow old their remaining expected lifetimes lessen. Either faster or slower, most of the things we encounter in our everyday lives age with time. However, there are things that do quite the opposite - they anti-age: as they grow old their remaining expected lifetimes increase rather than decrease. A quantitative formulation of anti-aging is given by the so-called "Lindy's Law". In this paper we explore Lindy's Law and its connections to Pareto's Law, to Zipf's Law, and to socioeconomic inequality.
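Lindy-style anti-aging can be checked numerically for Pareto lifetimes, where the mean residual life E[X − t | X > t] = t/(α − 1) grows linearly with age; the shape parameter and sample size below are illustrative choices, not taken from the paper.

```python
import random

def pareto_sample(alpha, xm, rng):
    """Inverse-transform sample from a Pareto(alpha, xm) distribution;
    1 - U lies in (0, 1], avoiding division by zero."""
    return xm / (1.0 - rng.random()) ** (1.0 / alpha)

def mean_residual_life(samples, age):
    """Average remaining lifetime among items that survived past `age`."""
    survivors = [x for x in samples if x > age]
    return sum(x - age for x in survivors) / len(survivors)

# For Pareto(alpha = 3) lifetimes, theory gives E[X - t | X > t] = t / 2:
# remaining life *grows* with age, the anti-aging behavior of Lindy's Law.
rng = random.Random(42)
lifetimes = [pareto_sample(3.0, 1.0, rng) for _ in range(200000)]
mrl_young = mean_residual_life(lifetimes, 2.0)  # expected approx. 1.0
mrl_old = mean_residual_life(lifetimes, 4.0)    # expected approx. 2.0
```

An exponential lifetime, by contrast, would give a constant mean residual life (memorylessness), and most light-tailed lifetimes give a decreasing one: ordinary aging.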
Deviations from Vegard’s law in ternary III-V alloys
Murphy, S. T.
2010-08-03
Vegard’s law states that, at a constant temperature, the volume of an alloy can be determined from a linear interpolation of its constituents’ volumes. Deviations from this description occur such that volumes are both greater and smaller than the linear relationship would predict. Here we use special quasirandom structures and density functional theory to investigate such deviations for MxN1−xAs ternary alloys, where M and N are group III species (B, Al, Ga, and In). Our simulations predict a tendency, with the exception of AlxGa1−xAs, for the volume of the ternary alloys to be smaller than that determined from the linear interpolation of the volumes of the MAs and NAs binary compounds. Importantly, we establish a simple relationship linking the relative size of the group III atoms in the alloy and the predicted magnitude of the deviation from Vegard’s law.
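Vegard's linear interpolation and the signed deviation from it can be sketched directly; the volumes below are invented placeholders, not the paper's DFT results.

```python
def vegard_volume(x, v_m, v_n):
    """Vegard's law: volume of MxN(1-x)As as a linear interpolation of the
    constituent binary volumes."""
    return x * v_m + (1.0 - x) * v_n

def vegard_deviation(v_observed, x, v_m, v_n):
    """Signed deviation of an observed alloy volume from the Vegard
    prediction; negative means smaller than the linear interpolation,
    the tendency the paper predicts for most of these ternaries."""
    return v_observed - vegard_volume(x, v_m, v_n)

# Hypothetical volumes per formula unit in cubic angstroms (illustrative)
v_InAs, v_GaAs = 56.1, 45.2
v_linear = vegard_volume(0.5, v_InAs, v_GaAs)      # In0.5Ga0.5As, Vegard estimate
dev = vegard_deviation(50.4, 0.5, v_InAs, v_GaAs)  # a smaller-than-linear volume
```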
Angulo, Raul E.; Hilbert, Stefan
2015-03-01
We explore the cosmological constraints from cosmic shear using a new way of modelling the non-linear matter correlation functions. The new formalism extends the method of Angulo & White, which manipulates outputs of N-body simulations to represent the 3D non-linear mass distribution in different cosmological scenarios. We show that predictions from our approach for shear two-point correlations at 1-300 arcmin separations are accurate at the ˜10 per cent level, even for extreme changes in cosmology. For moderate changes, with target cosmologies similar to that preferred by analyses of recent Planck data, the accuracy is close to ˜5 per cent. We combine this approach with a Monte Carlo Markov chain sampler to explore constraints on a Λ cold dark matter model from the shear correlation functions measured in the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS). We obtain constraints on the parameter combination σ8(Ωm/0.27)^0.6 = 0.801 ± 0.028. Combined with results from cosmic microwave background data, we obtain marginalized constraints on σ8 = 0.81 ± 0.01 and Ωm = 0.29 ± 0.01. These results are statistically compatible with previous analyses, which supports the validity of our approach. We discuss the advantages of our method and the potential it offers, including a path to model in detail (i) the effects of baryons, (ii) high-order shear correlation functions, and (iii) galaxy-galaxy lensing, among others, in future high-precision cosmological analyses.
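The degenerate lensing constraint quoted above can be inverted for σ8 at any assumed Ωm; this is a sketch of the one-line algebra, not an analysis step from the paper.

```python
def sigma8_from_combination(S, omega_m, alpha=0.6, pivot=0.27):
    """Invert the degenerate lensing constraint
    S = sigma8 * (omega_m / pivot)**alpha for sigma8."""
    return S / (omega_m / pivot) ** alpha

# Reported CFHTLenS constraint: sigma8 * (Omega_m / 0.27)**0.6 = 0.801
s8_at_027 = sigma8_from_combination(0.801, 0.27)  # recovers 0.801 by definition
s8_at_029 = sigma8_from_combination(0.801, 0.29)  # lower sigma8 at higher Omega_m
```

This degeneracy is why cosmic shear alone pins down only the combination, and why combining with CMB data is needed to separate σ8 and Ωm individually.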
International Nuclear Information System (INIS)
2016-01-01
This section covers the following case law: 1 - Case Law France: Conseil d'etat decision, 22 February 2016, EDF v. Republic and Canton of Geneva relative to the Bugey nuclear power plant (No. 373516); United States: Brodsky v. US Nuclear Regulatory Commission, 650 Fed. Appx. 804 (2. Cir. 2016)
Manitoba Dept. of Education, Winnipeg.
This publication outlines a law course intended as part of a business education program in the secondary schools of Manitoba, Canada. The one credit course of study should be taught over a period of 110-120 hours of instruction. It provides students with an introduction to the principles, practices, and consequences of law with regard to torts,…
Deville, Anne-Sophie; Grémillet, David; Gauthier-Clerc, Michel; Guillemain, Matthieu; Von Houwald, Friederike; Gardelli, Bruno; Béchet, Arnaud
2013-01-01
Accurate knowledge of the functional response of predators to prey density is essential for understanding food web dynamics, for parameterizing mechanistic models of animal responses to environmental change, and for designing appropriate conservation measures. Greater flamingos (Phoenicopterus roseus), a flagship species of Mediterranean wetlands, primarily feed on Artemias (Artemia spp.) in commercial salt pans, an industry which may collapse for economic reasons. Flamingos also feed on alternative prey such as Chironomid larvae (Chironomidae) and rice seeds (Oryza sativa). However, the profitability of these food items for flamingos remains unknown. We determined the functional responses of flamingos feeding on Artemias, Chironomids, or rice. Experiments were conducted on 11 captive flamingos. For each food item, we offered different ranges of food densities, up to 13 times natural abundance. Video footage allowed us to estimate intake rates. Contrary to theoretical predictions for filter feeders, intake rates did not increase linearly with increasing food density (type I). Intake rates rather increased asymptotically with increasing food density (type II) or followed a sigmoid shape (type III). Hence, flamingos were not able to ingest food in direct proportion to its abundance, possibly because of their unique bill structure resulting in limited filtering capabilities. Overall, flamingos foraged more efficiently on Artemias. When feeding on Chironomids, birds had lower instantaneous rates of food discovery and required more time to extract food from the sediment and ingest it than when filtering Artemias from the water column. However, feeding on rice was energetically more profitable for flamingos than feeding on Artemias or Chironomids, explaining their attraction to rice fields. Crucially, we found that food densities required for flamingos to reach asymptotic intake rates are rarely met under natural conditions. This allows us to predict an immediate
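The type I-III functional responses contrasted in the study are commonly written in Holling's form. A hedged sketch (parameter values hypothetical, not fitted to the flamingo data):

```python
def type_I(density, a):
    # Type I: intake rises linearly with prey density
    # (the theoretical expectation for an ideal filter feeder).
    return a * density

def type_II(density, a, h):
    # Type II (Holling disc equation): handling time h
    # imposes an asymptotic intake rate of 1/h.
    return a * density / (1.0 + a * h * density)

def type_III(density, a, h):
    # Type III: sigmoid response; effective attack rate
    # itself increases with density at low densities.
    return a * density**2 / (1.0 + a * h * density**2)

# Hypothetical attack rate a and handling time h:
for d in (0.1, 1.0, 10.0, 100.0):
    print(d, type_II(d, a=0.5, h=2.0))
```

The key qualitative finding above is that observed intake followed type II or III rather than the linear type I expected for filter feeders.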
Linear positivity and virtual probability
International Nuclear Information System (INIS)
Hartle, James B.
2004-01-01
We investigate the quantum theory of closed systems based on the linear positivity decoherence condition of Goldstein and Page. The objective of any quantum theory of a closed system, most generally the universe, is the prediction of probabilities for the individual members of sets of alternative coarse-grained histories of the system. Quantum interference between members of a set of alternative histories is an obstacle to assigning probabilities that are consistent with the rules of probability theory. A quantum theory of closed systems therefore requires two elements: (1) a condition specifying which sets of histories may be assigned probabilities and (2) a rule for those probabilities. The linear positivity condition of Goldstein and Page is the weakest of the general conditions proposed so far. Its general properties relating to exact probability sum rules, time neutrality, and conservation laws are explored. Its inconsistency with the usual notion of independent subsystems in quantum mechanics is reviewed. Its relation to the stronger condition of medium decoherence necessary for classicality is discussed. The linear positivity of histories in a number of simple model systems is investigated with the aim of exhibiting linearly positive sets of histories that are not decoherent. The utility of extending the notion of probability to include values outside the range of 0-1 is described. Alternatives with such virtual probabilities cannot be measured or recorded, but can be used in the intermediate steps of calculations of real probabilities. Extended probabilities give a simple and general way of formulating quantum theory. The various decoherence conditions are compared in terms of their utility for characterizing classicality and the role they might play in further generalizations of quantum mechanics
Faraway, Julian J
2014-01-01
A Hands-On Way to Learning Data Analysis. Part of the core of statistics, linear models are used to make predictions and explain the relationship between the response and the predictors. Understanding linear models is crucial to a broader competence in the practice of statistics. Linear Models with R, Second Edition explains how to use linear models in physical science, engineering, social science, and business applications. The book incorporates several improvements that reflect how the world of R has greatly expanded since the publication of the first edition. New to the Second Edition: Reorganiz...
D'Archivio, Angelo Antonio; Maggi, Maria Anna; Ruggieri, Fabrizio
2014-08-01
In this paper, a multilayer artificial neural network is used to model simultaneously the effect of solute structure and eluent concentration profile on the retention of s-triazines in reversed-phase high-performance liquid chromatography under linear gradient elution. The retention data of 24 triazines, including common herbicides and their metabolites, are collected under 13 different elution modes, covering the following experimental domain: starting acetonitrile volume fraction ranging between 40 and 60% and gradient slope ranging between 0 and 1% acetonitrile/min. The gradient parameters together with five selected molecular descriptors, identified by quantitative structure-retention relationship modelling applied to individual separation conditions, are the network inputs. Predictive performance of this model is evaluated on six external triazines and four unseen separation conditions. For comparison, retention of triazines is modelled by both quantitative structure-retention relationships and response surface methodology, which describe separately the effect of molecular structure and gradient parameters on the retention. Although applied to a wider variable domain, the network provides a performance comparable to that of the above "local" models and retention times of triazines are modelled with accuracy generally better than 7%. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Golmohammadi, Hassan
2009-11-30
A quantitative structure-property relationship (QSPR) study was performed to develop models that relate the structures of 141 organic compounds to their octanol-water partition coefficients (log P(o/w)). A genetic algorithm was applied as a variable selection tool. Modeling of log P(o/w) of these compounds as a function of theoretically derived descriptors was established by multiple linear regression (MLR), partial least squares (PLS), and artificial neural networks (ANN). The best selected descriptors that appear in the models are: atomic charge weighted partial positively charged surface area (PPSA-3), fractional atomic charge weighted partial positive surface area (FPSA-3), minimum atomic partial charge (Qmin), molecular volume (MV), total dipole moment of the molecule (mu), maximum antibonding contribution of a molecular orbital in the molecule (MAC), and maximum free valency of a C atom in the molecule (MFV). The results obtained showed the ability of the developed artificial neural network to predict the partition coefficients of organic compounds. The results also revealed the superiority of the ANN over the MLR and PLS models. Copyright 2009 Wiley Periodicals, Inc.
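The MLR step of such a QSPR workflow amounts to ordinary least squares on a descriptor matrix. A self-contained pure-Python sketch (toy descriptor values; not the paper's fitted model):

```python
def mlr_fit(X, y):
    """Ordinary least squares via the normal equations (X^T X) b = X^T y.
    X: list of descriptor-value rows (an intercept column of 1s is added).
    Returns the coefficient list b, solved by Gaussian elimination."""
    rows = [[1.0] + list(r) for r in X]
    p = len(rows[0])
    # Build the normal equations.
    A = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    v = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(p)]
    # Gaussian elimination with partial pivoting.
    for c in range(p):
        piv = max(range(c, p), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        v[c], v[piv] = v[piv], v[c]
        for r in range(c + 1, p):
            f = A[r][c] / A[c][c]
            for k in range(c, p):
                A[r][k] -= f * A[c][k]
            v[r] -= f * v[c]
    b = [0.0] * p
    for c in reversed(range(p)):
        b[c] = (v[c] - sum(A[c][k] * b[k] for k in range(c + 1, p))) / A[c][c]
    return b

# Toy example with two hypothetical descriptors:
X = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 3), (3, 1)]
y = [2 + 3 * a - b for a, b in X]      # exact linear relationship
print(mlr_fit(X, y))                   # recovers intercept and slopes
```

PLS and ANN models would replace this fitting step while consuming the same descriptor matrix.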
DEFF Research Database (Denmark)
Sadl, Urska
2013-01-01
Reasoning of the Court of Justice of the European Union – Construction of arguments in the case-law of the Court – Citation technique – The use of formulas to transform case-law into ‘law’ – ‘Formulaic style’ – European citizenship as a fundamental status – Ruiz Zambrano – Reasoning from...
International Nuclear Information System (INIS)
Pascal, Maurice.
1979-01-01
This book on nuclear law is the first of a series of analytical studies to be published by the French Energy Commission (CEA) concerning all the various nuclear activities. It describes national and international legislation applicable in France covering the following main sectors: the licensing procedure for nuclear installations, the law of the sea and nuclear law, the legal system governing radioisotopes, the transport of radioactive materials, third party liability and insurance and radiation protection. In each chapter, the overall analysis is supplemented by the relevant regulatory texts and by organisation charts in annex. (NEA) [fr
Second Law of Thermodynamics Applied to Metabolic Networks
Nigam, R.; Liang, S.
2003-01-01
We present a simple algorithm based on linear programming that combines Kirchhoff's flux and potential laws and applies them to metabolic networks to predict thermodynamically feasible reaction fluxes. These laws represent mass conservation and energy feasibility, and are widely used in electrical circuit analysis. Formulating Kirchhoff's potential law around a reaction loop in terms of the null space of the stoichiometric matrix leads to a simple representation of the law of entropy that can be readily incorporated into traditional flux balance analysis without resorting to non-linear optimization. Our technique is new in that it can easily check the fluxes obtained by flux balance analysis for thermodynamic feasibility and modify them if they are infeasible so that they satisfy the law of entropy. We illustrate our method by applying it to the network describing the central metabolism of Escherichia coli. Due to its simplicity, this algorithm will be useful in studying large-scale complex metabolic networks in the cells of different organisms.
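Kirchhoff's flux law used here is mass conservation at each internal metabolite, S·v = 0. A minimal illustration on a toy pathway (stoichiometry ours, not the E. coli network):

```python
def flux_law_residuals(S, v):
    """Kirchhoff's flux law (mass conservation): for each internal
    metabolite, producing and consuming fluxes must cancel, i.e. S.v = 0.
    S is a list of rows (one per metabolite), v is the flux vector."""
    return [sum(sij * vj for sij, vj in zip(row, v)) for row in S]

# Toy linear pathway A -> B -> C with uptake and secretion.
# Columns: uptake(A), A->B, B->C, secretion(C)
S = [
    [1, -1,  0,  0],   # metabolite A
    [0,  1, -1,  0],   # metabolite B
    [0,  0,  1, -1],   # metabolite C
]
v = [2.0, 2.0, 2.0, 2.0]          # balanced: every residual is zero
print(flux_law_residuals(S, v))
```

The paper's additional entropy constraint would then be checked on loops in the null space of S; a full LP formulation is beyond this sketch.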
Shim, Jaemin; Hwang, Minki; Song, Jun-Seop; Lim, Byounghyun; Kim, Tae-Hoon; Joung, Boyoung; Kim, Sung-Hwan; Oh, Yong-Seog; Nam, Gi-Byung; On, Young Keun; Oh, Seil; Kim, Young-Hoon; Pak, Hui-Nam
2017-01-01
Objective: Radiofrequency catheter ablation for persistent atrial fibrillation (PeAF) still has a substantial recurrence rate. This study aims to investigate whether an AF ablation lesion set chosen using in-silico ablation (V-ABL) is clinically feasible and more effective than an empirically chosen ablation lesion set (Em-ABL) in patients with PeAF. Methods: We prospectively included 108 patients with antiarrhythmic drug-resistant PeAF (77.8% men, age 60.8 ± 9.9 years), and randomly assigned them to the V-ABL (n = 53) and Em-ABL (n = 55) groups. Five different in-silico ablation lesion sets [1 pulmonary vein isolation (PVI), 3 linear ablations, and 1 electrogram-guided ablation] were compared using heart-CT integrated AF modeling. We evaluated the feasibility, safety, and efficacy of V-ABL compared with that of Em-ABL. Results: The pre-procedural computing time for five different ablation strategies was 166 ± 11 min. In the Em-ABL group, the earliest terminating blinded in-silico lesion set matched the Em-ABL lesion set in 21.8% of cases. V-ABL was not inferior to Em-ABL in terms of procedure time (p = 0.403), ablation time (p = 0.510), and major complication rate (p = 0.900). During 12.6 ± 3.8 months of follow-up, the clinical recurrence rate was 14.0% in the V-ABL group and 18.9% in the Em-ABL group (p = 0.538). In the Em-ABL group, the clinical recurrence rate was significantly lower after PVI + posterior box + anterior linear ablation, which showed the most frequent termination during in-silico ablation (log-rank p = 0.027). Conclusions: V-ABL was feasible in clinical practice, not inferior to Em-ABL, and predicts the most effective ablation lesion set in patients who underwent PeAF ablation.
Doranda Maracineanu
2009-01-01
The law system of a State represents the body of rules passed or recognized by that State in order to regulate social relationships, rules that must be freely obeyed by their recipients, the State otherwise intervening with its coercive power. Throughout the development of society, scholars have been particularly interested in the issue of law systems, each supporting various classifications; the classification that has remained is the one distinguishing between the Anglo-Saxon, the Roman-...
International Nuclear Information System (INIS)
Suwono.
1978-01-01
A linear gate providing a variable gate duration from 0.40 μs to 4 μs was developed. The electronic circuitry consists of a linear circuit and an enable circuit. The input signal can be either unipolar or bipolar. If the input signal is bipolar, the negative portion will be filtered out. The operation of the linear gate is controlled by the application of a positive enable pulse. (author)
International Nuclear Information System (INIS)
Vretenar, M
2014-01-01
The main features of radio-frequency linear accelerators are introduced, reviewing the different types of accelerating structures and presenting the main characteristic aspects of linac beam dynamics
LINEAR2007, Linear-Linear Interpolation of ENDF Format Cross-Sections
International Nuclear Information System (INIS)
2007-01-01
1 - Description of program or function: LINEAR converts evaluated cross sections in the ENDF/B format into a tabular form that is subject to linear-linear interpolation in energy and cross section. The code also thins tables of cross sections already in that form. Codes used subsequently thus need to consider only linear-linear data. IAEA1311/15: This version includes the updates up to January 30, 2007. Changes in ENDF/B-VII Format and procedures, as well as the evaluations themselves, make it impossible for versions of the ENDF/B pre-processing codes earlier than PREPRO 2007 (2007 Version) to accurately process current ENDF/B-VII evaluations. The present code can handle all existing ENDF/B-VI evaluations through release 8, which will be the last release of ENDF/B-VI. Modifications from previous versions: - Linear VERS. 2007-1 (JAN. 2007): checked against all ENDF/B-VII; increased page size from 60,000 to 600,000 points. 2 - Method of solution: Each section of data is considered separately. Each section of File 3, 23, and 27 data consists of a table of cross section versus energy with any of five interpolation laws. LINEAR will replace each section with a new table of energy versus cross section data in which the interpolation law is always linear in energy and cross section. The histogram (constant cross section between two energies) interpolation law is converted to linear-linear by substituting two points for each initial point. The linear-linear law is not altered. For the log-linear, linear-log and log-log laws, the cross section data are converted to linear by an interval-halving algorithm. Each interval is divided in half until the value at the middle of the interval can be approximated by linear-linear interpolation to within a given accuracy. The LINEAR program uses a multipoint fractional-error thinning algorithm to minimize the size of each cross section table
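The interval-halving step described above can be sketched in a few lines (a simplified stand-in for LINEAR's actual implementation; the thinning pass and the ENDF/B file handling are omitted):

```python
def linearize(f, x1, x2, tol=1e-3):
    """Tabulate f on [x1, x2] so that linear-linear interpolation of the
    table reproduces f to within relative tolerance tol, by interval
    halving: split each interval until its midpoint is well approximated.
    Assumes f does not vanish on the interval (relative test)."""
    pts = [(x1, f(x1))]

    def halve(a, fa, b, fb):
        m = 0.5 * (a + b)
        fm = f(m)
        if abs(0.5 * (fa + fb) - fm) <= tol * abs(fm):
            pts.append((b, fb))        # midpoint passes: keep endpoint
        else:
            halve(a, fa, m, fm)        # otherwise subdivide both halves
            halve(m, fm, b, fb)

    halve(x1, f(x1), x2, f(x2))
    return pts

# Example: a power-law "cross section" sigma(E) = E**-2, which a log-log
# interpolation law would represent exactly with two points.
table = linearize(lambda e: e ** -2.0, 1.0, 100.0)
print(len(table), "points")
```

The adaptive table is dense where the function curves strongly (near the low-energy end here) and sparse elsewhere, which is the point of the scheme.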
Nachev, Vladislav; Stich, Kai Petra; Winter, York
2013-01-01
Weber's law quantifies the perception of difference between stimuli. For instance, it can explain why we are less likely to detect the removal of three nuts from a bowl if the bowl is full than if it is nearly empty. This is an example of the magnitude effect - the phenomenon that the subjective perception of a linear difference between a pair of stimuli progressively diminishes when the average magnitude of the stimuli increases. Although discrimination performances of both human and animal subjects in various sensory modalities exhibit the magnitude effect, results sometimes systematically deviate from the quantitative predictions based on Weber's law. An attempt to reformulate the law to better fit data from acoustic discrimination tasks has been dubbed the "near-miss to Weber's law". Here, we tested the gustatory discrimination performance of nectar-feeding bats (Glossophaga soricina), in order to investigate whether the original version of Weber's law accurately predicts choice behavior in a two-alternative forced choice task. As expected, bats either preferred the sweeter of the two options or showed no preference. In 4 out of 6 bats the near-miss to Weber's law provided a better fit and Weber's law underestimated the magnitude effect. In order to test the generality of this observation in nectar-feeders, we reviewed previously published data on bats, hummingbirds, honeybees, and bumblebees. In all groups of animals the near-miss to Weber's law provided better fits than Weber's law. Furthermore, whereas the magnitude effect was stronger than predicted by Weber's law in vertebrates, it was weaker than predicted in insects. Thus nectar-feeding vertebrates and insects seem to differ in how their choice behavior changes as sugar concentration is increased. We discuss the ecological and evolutionary implications of the observed patterns of sugar concentration discrimination.
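The two competing threshold laws contrasted in the study can be stated in a few lines (parameter values hypothetical):

```python
def jnd_weber(intensity, k):
    # Weber's law: the just-noticeable difference (JND) grows
    # linearly with stimulus magnitude, so delta-I / I is constant.
    return k * intensity

def jnd_near_miss(intensity, k, a):
    # "Near-miss to Weber's law": a power-law generalization.
    # a = 1 recovers Weber's law; a != 1 changes the magnitude effect.
    return k * intensity ** a

# Hypothetical Weber fraction k = 0.05 for a sugar-concentration stimulus:
for conc in (5.0, 10.0, 20.0):
    print(conc, jnd_weber(conc, 0.05), jnd_near_miss(conc, 0.05, 0.9))
```

In the study's terms, vertebrates behaved as if the magnitude effect were stronger than Weber's linear prediction, insects as if it were weaker; the exponent a is the single fitted parameter separating the two regimes.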
Terrill, Kasia; Nesbitt, David J
2010-08-01
Ab initio anharmonic transition frequencies are calculated for strongly coupled (i) asymmetric and (ii) symmetric proton stretching modes in the X-H(+)-X linear ionic hydrogen bonded complexes for OCHCO(+) and N(2)HN(2)(+). The optimized potential surface is calculated in these two coordinates for each molecular ion at CCSD(T)/aug-cc-pVnZ (n = 2-4) levels and extrapolated to the complete-basis-set limit (CBS). Slices through both 2D surfaces reveal a relatively soft potential in the asymmetric proton stretching coordinate at near equilibrium geometries, which rapidly becomes a double minimum potential with increasing symmetric proton acceptor center of mass separation. Eigenvalues are obtained by solution of the 2D Schrödinger equation with potential/kinetic energy coupling explicitly taken into account, converged in a distributed Gaussian basis set as a function of grid density. The asymmetric proton stretch fundamental frequency for N(2)HN(2)(+) is predicted at 848 cm(-1), with strong negative anharmonicity in the progression characteristic of a shallow "particle in a box" potential. The corresponding proton stretch fundamental for OCHCO(+) is anomalously low at 386 cm(-1), but with a strong alternation in the vibrational spacing due to the presence of a shallow D(infinityh) transition state barrier (Delta = 398 cm(-1)) between the two equivalent minimum geometries. Calculation of a 2D dipole moment surface and transition matrix elements reveals surprisingly strong combination and difference bands with appreciable intensity throughout the 300-1500 cm(-1) region. Corrected for zero point (DeltaZPE) and thermal vibrational excitation (DeltaE(vib)) at 300 K, the single and double dissociation energies in these complexes are in excellent agreement with thermochemical gas phase ion data.
Tom, Nathan; Yeung, Ronald W.
2015-01-01
To further maximize power absorption in both regular and irregular ocean-wave environments, nonlinear model-predictive control (NMPC) was applied to a model-scale point absorber developed at the University of California Berkeley, Berkeley, CA, USA. The NMPC strategy requires a power-takeoff (PTO) unit that can be turned on and off, as the generator would be inactive for up to 60% of the wave period. To confirm the effectiveness of this NMPC strategy, an in-house-designed permanent magnet linear generator (PMLG) was chosen as the PTO. The time-varying performance of the PMLG was first characterized by dry-bench tests, using mechanical relays to control the electromagnetic conversion process. The on/off sequencing of the PMLG was tested under regular and irregular wave excitation to validate NMPC simulations using control inputs obtained by running the optimizer offline. Experimental results indicate that implementation was successful and that absorbed power using NMPC was up to 50% greater than in the passive system, which used no controller. Previous investigations into MPC applied to wave energy converters have lacked the experimental results to confirm the reported gains in power absorption. However, after accounting for the PMLG mechanical-to-electrical conversion efficiency, the electrical power output was not consistently maximized. To improve output power, a mathematical relation between the efficiency and damping magnitude of the PMLG was inserted into the system model to maximize the electrical power output through continued use of NMPC, which distinguishes this work from previous investigations. Significantly, results from the latter simulations provided a damping time series that was active over a larger portion of the wave period, requiring actuation of the applied electrical load rather than on/off control.
Linearization Method and Linear Complexity
Tanaka, Hidema
We focus on the relationship between the linearization method and linear complexity and show that the linearization method is another effective technique for calculating linear complexity. We analyze its effectiveness by comparison with the logic circuit method, and we compare the relevant conditions and necessary computational cost with those of the Berlekamp-Massey algorithm and the Games-Chan algorithm. The significant property of the linearization method is that it needs no output sequence from a pseudo-random number generator (PRNG), because it calculates linear complexity from the algebraic expression of the generator's algorithm. When a PRNG has n-bit stages (registers or internal states), the necessary computational cost is smaller than O(2^n). The Berlekamp-Massey algorithm, by contrast, needs O(N^2), where N (≈ 2^n) denotes the period. Since existing methods calculate from the output sequence, the initial value of the PRNG influences the resulting linear complexity, which is therefore generally given as an estimate. Because the linearization method calculates from the algorithm of the PRNG itself, it can determine the lower bound of the linear complexity.
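For reference, the Berlekamp-Massey algorithm mentioned above can be sketched as follows (standard GF(2) formulation, O(N^2) in the sequence length, computing the linear complexity from an output sequence):

```python
def berlekamp_massey(s):
    """Linear complexity of a binary sequence s (list of 0/1 ints):
    the length L of the shortest LFSR that generates s, found by the
    Berlekamp-Massey algorithm over GF(2)."""
    n = len(s)
    c, b = [0] * n, [0] * n   # current and previous connection polynomials
    c[0] = b[0] = 1
    L, m = 0, -1              # complexity so far, position of last update
    for i in range(n):
        # Discrepancy between s[i] and the current LFSR's prediction.
        d = s[i]
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]
        if d:
            t = c[:]
            for j in range(n - i + m):
                c[i - m + j] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L

# A maximal-length LFSR of degree 3 (s[i] = s[i-1] XOR s[i-3]):
seq = [1, 0, 0]
for _ in range(11):
    seq.append(seq[-1] ^ seq[-3])
print(berlekamp_massey(seq))
```

This is exactly the output-sequence-based approach the paper contrasts with: the result depends on having enough output bits (at least 2L), whereas the linearization method works from the generator's algebraic description.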
Said-Houari, Belkacem
2017-01-01
This self-contained, clearly written textbook on linear algebra is easily accessible for students. It begins with the simple linear equation and generalizes several notions from this equation for the system of linear equations and introduces the main ideas using matrices. It then offers a detailed chapter on determinants and introduces the main ideas with detailed proofs. The third chapter introduces the Euclidean spaces using very simple geometric ideas and discusses various major inequalities and identities. These ideas offer a solid basis for understanding general Hilbert spaces in functional analysis. The following two chapters address general vector spaces, including some rigorous proofs to all the main results, and linear transformation: areas that are ignored or are poorly explained in many textbooks. Chapter 6 introduces the idea of matrices using linear transformation, which is easier to understand than the usual theory of matrices approach. The final two chapters are more advanced, introducing t...
Malykh, O V; Golub, A Yu; Teplyakov, V V
2011-05-11
Membrane gas separation technologies (air separation, hydrogen recovery from dehydrogenation processes, etc.) traditionally use glassy polymer membranes with dominant permeability for "small" gas molecules. For this purpose, membranes based on low-free-volume glassy polymers (e.g., polysulfone, tetrabromopolycarbonate and polyimides) are used. On the other hand, the application of membrane methods for VOC and toxic gas recovery from air, and for separation of mixtures containing the lower hydrocarbons (in petrochemistry and oil refining), needs membranes with preferential permeation of components with relatively larger molecular sizes. In general, this kind of permeability is characteristic of rubbers and of high-free-volume glassy polymers. The accumulated data files (more than 1500 polymeric materials) cover the region of parameters "inside" these "boundaries." Two main approaches to the prediction of gas permeability of polymers are considered in this paper: (1) statistical treatment of published transport parameters of polymers, and (2) prediction using a "diffusion jump" model that takes into account the key properties of the diffusing molecule and the polymeric matrix. Within approach (1), the paper presents N-dimensional methods of gas permeability estimation for polymers using "selectivity/permeability" correlations. It is found that the optimal accuracy of prediction is provided at n=4. Within the solution-diffusion mechanism (2), the key properties include the effective molecular cross-section of the penetrating species, which is responsible for molecular transport in the polymeric matrix, and the well-known force constant (ε/k)(eff i) of the {6-12} potential for gas-gas interaction. A set of corrected effective molecular cross-sections of penetrants includes noble gases (He, Ne, Ar, Kr, Xe), permanent gases (H(2), O(2), N(2), CO), ballast and toxic gases (CO(2), NO(,) NO(2), SO(2), H(2)S) and linear lower hydrocarbons (CH(4
DEFF Research Database (Denmark)
working and researching in the key areas of law, security and privacy in IT, international trade and private law. Now, in 2010 and some seven conferences later, the event moves to Barcelona and embraces for the first time the three conference tracks just described. The papers in this work have all been blind reviewed and edited for quality. They represent the contributions of leading academics, early career researchers and others from an increasing number of countries, universities and institutions around the world. They set a benchmark for discussion of the current issues arising in the subject area and continue to offer an informed and relevant contribution to the policy-making agenda. As Chair of the Conference Committee, I am once more very proud to endorse this work "Private Law: Rights, Duties & Conflicts" to all those seeking an up-to-date and informed evaluation of the leading issues. This work...
Stoll, R R
1968-01-01
Linear Algebra is intended to be used as a text for a one-semester course in linear algebra at the undergraduate level. The treatment of the subject will be both useful to students of mathematics and those interested primarily in applications of the theory. The major prerequisite for mastering the material is the readiness of the student to reason abstractly. Specifically, this calls for an understanding of the fact that axioms are assumptions and that theorems are logical consequences of one or more axioms. Familiarity with calculus and linear differential equations is required for understand
Modified circular velocity law
Djeghloul, Nazim
2018-05-01
A modified circular velocity law is presented for a test body orbiting a spherically symmetric mass. This law exhibits a distance-scale parameter and allows one to recover both the usual Newtonian behaviour at small distances and a constant velocity limit at large scale. Application to the Galaxy predicts the known behaviour and also leads to a galactic mass in accordance with the measured visible stellar mass, so that additional dark matter inside the Galaxy can be avoided. It is also shown that this circular velocity law can be embedded in a geometrical description of spacetime within the standard general relativity framework upon relaxing the usual asymptotic flatness condition. This formulation allows the introduced Newtonian scale limit to be redefined in terms of the central mass exclusively. Moreover, a satisfactory answer to the galactic escape-speed problem can be provided, indicating the possibility that one can also get rid of the dark matter halo outside the Galaxy.
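The abstract does not give the law's functional form, so the following is only an illustrative toy law exhibiting the two stated limits (Newtonian at small r, flat at large r, controlled by a distance-scale parameter r0); the paper's actual expression may differ:

```python
import math

G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

def v_circ(r_kpc, mass_msun, r0_kpc):
    """Toy circular-velocity law with a distance-scale parameter r0:
    v^2 = G M (1/r + 1/r0).  For r << r0 this is Newtonian
    (v ~ r^-1/2); for r >> r0 it flattens to sqrt(G M / r0).
    Illustrative only: not the law derived in the paper."""
    return math.sqrt(G * mass_msun * (1.0 / r_kpc + 1.0 / r0_kpc))

# Hypothetical galaxy: 1e11 Msun of visible mass, r0 = 15 kpc.
for r in (1.0, 5.0, 15.0, 50.0, 200.0):
    print(r, round(v_circ(r, 1e11, 15.0), 1))
```

The toy law shows how a single scale parameter can produce a flat rotation curve from the visible mass alone, which is the qualitative claim of the abstract.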
Dynamic Algorithm for LQGPC Predictive Control
DEFF Research Database (Denmark)
Hangstrup, M.; Ordys, A.W.; Grimble, M.J.
1998-01-01
In this paper the optimal control law is derived for a multivariable state-space Linear Quadratic Gaussian Predictive Controller (LQGPC). A dynamic performance index is utilized, resulting in an optimal steady-state controller. Knowledge of future reference values is incorporated into the controller design, and the solution is derived using the method of Lagrange multipliers. It is shown how the well-known GPC controller can be obtained as a special case of the LQGPC design. An important advantage of using the LQGPC framework for designing predictive controllers, e.g. GPC, is that LQGPC enables...
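The idea of incorporating known future reference values into an optimal control law can be illustrated on a scalar toy plant (a drastic simplification of the multivariable LQGPC derivation; names and values are ours):

```python
def predictive_control(x, r_next, a, b, lam):
    """One-step LQ predictive control for the scalar plant
    x[k+1] = a*x[k] + b*u[k]: choose u minimizing
    (x[k+1] - r[k+1])^2 + lam*u^2, using the previewed reference
    r[k+1].  Setting the derivative with respect to u to zero gives
    the closed-form law returned below."""
    return b * (r_next - a * x) / (b * b + lam)

# With no control penalty the plant tracks the previewed reference exactly.
x, a, b = 0.0, 0.9, 0.5
for r_next in (1.0, 1.5, 0.5):
    u = predictive_control(x, r_next, a, b, lam=0.0)
    x = a * x + b * u
    print(r_next, x)
```

A nonzero penalty `lam` trades tracking accuracy against control effort; extending the horizon and the state dimension leads to the Lagrange-multiplier machinery of the paper.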
Solow, Daniel
2014-01-01
This text covers the basic theory and computation for a first course in linear programming, including substantial material on mathematical proof techniques and sophisticated computation methods. Includes Appendix on using Excel. 1984 edition.
Liesen, Jörg
2015-01-01
This self-contained textbook takes a matrix-oriented approach to linear algebra and presents a complete theory, including all details and proofs, culminating in the Jordan canonical form and its proof. Throughout the development, the applicability of the results is highlighted. Additionally, the book presents special topics from applied linear algebra including matrix functions, the singular value decomposition, the Kronecker product and linear matrix equations. The matrix-oriented approach to linear algebra leads to a better intuition and a deeper understanding of the abstract concepts, and therefore simplifies their use in real world applications. Some of these applications are presented in detailed examples. In several ‘MATLAB-Minutes’ students can comprehend the concepts and results using computational experiments. Necessary basics for the use of MATLAB are presented in a short introduction. Students can also actively work with the material and practice their mathematical skills in more than 300 exerc...
Berberian, Sterling K
2014-01-01
Introductory treatment covers basic theory of vector spaces and linear maps - dimension, determinants, eigenvalues, and eigenvectors - plus more advanced topics such as the study of canonical forms for matrices. 1992 edition.
Searle, Shayle R
2012-01-01
This 1971 classic on linear models is once again available--as a Wiley Classics Library Edition. It features material that can be understood by any statistician who understands matrix algebra and basic statistical methods.
Christofilos, N.C.; Polk, I.J.
1959-02-17
Improvements in linear particle accelerators are described. A drift tube system for a linear ion accelerator reduces gap capacity between adjacent drift tube ends. This is accomplished by reducing the ratio of the diameter of the drift tube to the diameter of the resonant cavity. Concentration of magnetic field intensity at the longitudinal midpoint of the external surface of each drift tube is reduced by increasing the external drift tube diameter at the longitudinal center region.
Recent publications on environmental law
International Nuclear Information System (INIS)
Lohse, S.
1991-01-01
The bibliography contains references to publications covering the following subject fields: General environmental law; environmental law in relation to constitutional law, administrative law, procedural law, revenue law, criminal law, private law, industrial law; law of regional development; nature conservation law; law on water protection; waste management law; law on protection against harmful effects on the environment; atomic energy law and radiation protection law; law of the power industry and the mining industry; laws and regulations on hazardous material and environmental hygiene. (orig.) [de
Quantum dissipation from power-law memory
International Nuclear Information System (INIS)
Tarasov, Vasily E.
2012-01-01
A new quantum dissipation model based on a memory mechanism is suggested. The dynamics of open and closed quantum systems with power-law memory is considered. Processes with power-law memory are described using integration and differentiation of non-integer orders, by the methods of fractional calculus. An example of a quantum oscillator with linear friction and power-law memory is considered. - Highlights: ► A new quantum dissipation model based on a memory mechanism is suggested. ► A generalization of the Lindblad equation is considered. ► An exact solution of the generalized Lindblad equation for a quantum oscillator with linear friction and power-law memory is derived.
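One standard numerical realization of power-law memory (an assumption here; the paper works analytically with fractional operators) is the Grünwald-Letnikov discretization, whose history weights decay as a power law:

```python
# Sketch: Grünwald-Letnikov weights for a fractional derivative of order
# alpha. The weight on the state k steps in the past decays ~ k**(-1-alpha),
# so the whole history contributes: that is power-law memory.

def gl_weights(alpha, n):
    """w_k = (-1)**k * binomial(alpha, k) for k = 0..n-1, via a recurrence."""
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (k - 1 - alpha) / k)
    return w

# For alpha = 1 the memory collapses: weights are 1, -1, 0, 0, ... (a plain
# first difference, i.e. an ordinary derivative with no long-range memory).
assert gl_weights(1.0, 4)[:2] == [1.0, -1.0]
assert abs(gl_weights(1.0, 4)[2]) == 0.0

# For fractional alpha = 0.5 the weights decay as a power law k**(-1.5):
w = gl_weights(0.5, 1001)
print(w[1000] / w[500])   # ≈ 2**(-1.5) ≈ 0.354: power-law decay
```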
Conservation Laws in Biochemical Reaction Networks
DEFF Research Database (Denmark)
Mahdi, Adam; Ferragut, Antoni; Valls, Claudia
2017-01-01
We study the existence of linear and nonlinear conservation laws in biochemical reaction networks with mass-action kinetics. It is straightforward to compute the linear conservation laws, as they are related to the left null-space of the stoichiometry matrix. The nonlinear conservation laws are difficult to identify and have rarely been considered in the context of mass-action reaction networks. Here, using the Darboux theory of integrability, we provide necessary structural (i.e., parameter-independent) conditions on a reaction network to guarantee the existence of nonlinear conservation laws...
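The statement about linear conservation laws can be made concrete with a toy mass-action network (A + B <-> C, an illustrative example not taken from the paper): a vector l is a linear conservation law exactly when it lies in the left null-space of the stoichiometry matrix N, i.e. lᵀN = 0.

```python
# Species order: [A, B, C]; reaction columns: forward A+B->C, reverse C->A+B.
N = [[-1, 1],
     [-1, 1],
     [ 1, -1]]

def is_conservation_law(l, N):
    """l is a linear conservation law iff l . (column j of N) = 0 for all j."""
    return all(sum(l[i] * N[i][j] for i in range(len(N))) == 0
               for j in range(len(N[0])))

# Total A+C and total B+C are conserved; total A alone is not.
assert is_conservation_law([1, 0, 1], N)
assert is_conservation_law([0, 1, 1], N)
assert not is_conservation_law([1, 0, 0], N)

# Sanity check along a mass-action trajectory (forward Euler, rates k1=k2=1):
x = [1.0, 2.0, 0.5]            # initial concentrations of A, B, C
total_AC = x[0] + x[2]
dt = 1e-3
for _ in range(5000):
    v1 = x[0] * x[1]           # mass-action rate of A+B->C
    v2 = x[2]                  # mass-action rate of C->A+B
    x = [x[i] + dt * (N[i][0] * v1 + N[i][1] * v2) for i in range(3)]
print(abs(x[0] + x[2] - total_AC) < 1e-9)   # conserved up to roundoff
```

Because every state increment is a combination of stoichiometry columns, any left null-space vector stays constant along the dynamics regardless of the rate constants, which is why these laws are parameter-independent.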
International Nuclear Information System (INIS)
Anon.
1999-01-01
This paper gives and analyses three examples of case law: decision rejecting application to close down Tomari nuclear power plant (Japan); judgement by the Supreme Administrative Court on the closing of Barsebaeck (Sweden); litigation relating to the Department of Energy's obligations under the Nuclear Waste Policy Act to accept spent nuclear fuel and high-level radioactive waste (United States). (A.L.B.)
International Nuclear Information System (INIS)
2015-01-01
This section treats of the two following case laws: Slovak Republic: Further developments in cases related to the challenge by Greenpeace Slovakia to the Mochovce nuclear power plant; United States: Judgment of the Nuclear Regulatory Commission denying requests from petitioners to suspend final reactor licensing decisions pending the issuance of a final determination of reasonable assurance of permanent disposal of spent fuel
Marson, James; Ferris, Katy
2016-01-01
Marson & Ferris provide a thorough account of the subject for students. Essential topics are introduced by exploring current and pertinent examples and the relevance of the law in a business environment is considered throughout. This pack includes a supplement which considers the effects of the Consumer Rights Act 2015.
Zhou, Shaona; Wang, Yanlin; Zhang, Chunbin
2016-01-01
There is widespread agreement that science learning always builds upon students' existing ideas and that science teachers should possess knowledge of learners. This study aims at investigating pre-service science teachers' knowledge of student misconceptions and difficulties, a crucial component of PCK, on Newton's Third Law. A questionnaire was…
Nuclear Energy Law and Arbo Law/Safety Law
International Nuclear Information System (INIS)
Eijnde, J.G. van den
1986-01-01
The legal aspects of radiation protection in the Netherlands are described. Radiation protection is regulated mainly in the Nuclear Energy Law. The Arbo Law also has some sections about radiation protection. The interaction between both laws is discussed. (Auth.)
Dark Energy and the Hubble Law
Chernin, A. D.; Dolgachev, V. P.; Domozhilova, L. M.
The Big Bang predicted by Friedmann could not be empirically discovered in the 1920s, since global cosmological distances (more than 300-1000 Mpc) were not available for observations at that time. Lemaitre and Hubble studied receding motions of galaxies at local distances of less than 20-30 Mpc and found that the motions followed the (nearly) linear velocity-distance relation, known now as Hubble's law. For decades, the real nature of this phenomenon has remained a mystery, in Sandage's words. After the discovery of dark energy, it was suggested that the dynamics of local expansion flows is dominated by omnipresent dark energy, and that it is the dark energy antigravity that is able to introduce the linear velocity-distance relation to the flows. This implies that Hubble's law observed at local distances was in fact the first observational manifestation of dark energy. If this is the case, the commonly accepted criteria of scientific discovery lead to the conclusion: in 1927, Lemaitre discovered dark energy, and Hubble confirmed this in 1929.
A geomorphic process law for detachment-limited hillslopes
Turowski, Jens
2015-04-01
Geomorphic process laws are used to assess the shape evolution of structures at the Earth's surface over geological time scales, and are routinely used in landscape evolution models. There are two currently available concepts on which process laws for hillslope evolution rely. In the transport-limited concept, the evolution of a hillslope is described by a linear or a non-linear diffusion equation. In contrast, in the threshold slope concept, the hillslope is assumed to collapse to a slope equal to the internal friction angle of the material when the load due to the relief exceeds the material strength. Many mountains feature bedrock slopes, especially the high mountains, and material transport along the slope is limited by the erosion of the material from the bedrock. Here, I suggest a process law for detachment-limited or threshold-dominated hillslopes, in which the erosion rate is a function of the applied stress minus the surface stress due to structural loading. The process law leads to the prediction of an equilibrium form that compares well to the shape of many mountain domes.
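A minimal sketch of a threshold process law of this kind (the functional form and rate constant below are illustrative assumptions, not the paper's calibration):

```python
# Detachment-limited sketch: erosion occurs only once the applied stress
# exceeds the strength/loading term; below that threshold nothing erodes.

def erosion_rate(applied_stress, strength, k=1e-3):
    """Excess-stress law: E = k * max(applied_stress - strength, 0)."""
    return k * max(applied_stress - strength, 0.0)

assert erosion_rate(5.0, 10.0) == 0.0      # below threshold: no erosion
assert erosion_rate(15.0, 10.0) == 5e-3    # above threshold: linear in excess
```

The qualitative consequence is the one the abstract describes: slopes relax toward an equilibrium form at which the applied stress just balances the strength term, after which erosion shuts off.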
Directory of Open Access Journals (Sweden)
Doranda Maracineanu
2009-06-01
Full Text Available The law system of a State represents the body of rules passed or recognized by that State in order to regulate the social relationships, rules that must be freely obeyed by their recipients, otherwise the State intervening with its coercive power. Throughout the development of society, scholars have been particularly interested in the issue of law systems, each supporting various classifications; the classification that has remained is the one distinguishing between the Anglo-Saxon, the Roman-German, the religious and the communist law systems. The third main international law system is the Muslim one, founded on the Muslim religion - the Islam. The Islam promotes the idea that Allah created the law and therefore it must be preserved and observed as such. Etymologically, the Arabian word "Islam" means "to be wanted, to obey", implying the fact that this law system promotes total and unconditioned submission to Allah. The Islamic law is not built on some body of laws or leading cases, but has as source. The Islam is meant as a universal religion, the Koran promoting the idea of the unity of mankind; thus, one of the precepts in the Koran asserts that "all men are equal (...), there is no difference between a white man and a black man, between one who is Arabian and one who is not, except for the measure in which they fear God." The Koran is founded mainly on the Talmud, Hebrew source of inspiration, and only on very few Christian sources. The Islam does not forward ideas which cannot be materialized; on the contrary, its ideas are purely practical, easy to be observed by the common man, ideas subordinated to the principle of monotheism. The uncertainties and gaps of the Koran, which have been felt along the years, imposed the need for another set of rules, meant to supplement it - that is Sunna. Sunna represents a body of laws and, consequently, the second source of the Koran. Sunna narrates the life of the prophet Mohamed, the model to
International Nuclear Information System (INIS)
Alcaraz, J.
2001-01-01
After several years of study, e+e- linear colliders in the TeV range have emerged as the major and optimal high-energy physics projects for the post-LHC era. These notes summarize the present status, from the main accelerator and detector features to their physics potential. The LHC is expected to provide first discoveries in the new energy domain, whereas an e+e- linear collider in the 500 GeV-1 TeV range will be able to complement it to an unprecedented level of precision in all possible areas: Higgs, signals beyond the SM and electroweak measurements. It is evident that the Linear Collider program will constitute a major step in the understanding of the nature of the new physics beyond the Standard Model. (Author) 22 refs
Edwards, Harold M
1995-01-01
In his new undergraduate textbook, Harold M. Edwards proposes a radically new and thoroughly algorithmic approach to linear algebra. Originally inspired by the constructive philosophy of mathematics championed in the 19th century by Leopold Kronecker, the approach is well suited to students in the computer-dominated late 20th century. Each proof is an algorithm described in English that can be translated into the computer language the class is using and put to work solving problems and generating new examples, making the study of linear algebra a truly interactive experience. Designed for a one-semester course, this text adopts an algorithmic approach to linear algebra, giving the student many examples to work through and copious exercises to test their skills and extend their knowledge of the subject. Students at all levels will find much interactive instruction in this text, while teachers will find stimulating examples and methods of approach to the subject.
International Nuclear Information System (INIS)
Silva, J.M. da.
1979-01-01
Facts concerning the application of atomic energy are presented, along with those aspects which should be under tutelage, the nature and guilt of the nuclear offenses, and the agent's peril. The need for a specific chapter in criminal law, with adequate legislation concerning the principles of atomic energy, is inferred. The bases for the future elaboration of this legislation are fixed. (A.L.S.L.) [pt
Lam, Desmond; Mizerski, Richard
2017-06-01
The objective of this study is to explore the gambling participation and game purchase duplication of light regular, heavy regular and pathological gamblers by applying the Duplication of Purchase Law. The current study uses data collected by the Australian Productivity Commission for eight different types of games. Key behavioral statistics on light regular, heavy regular, and pathological gamblers were computed and compared. The key finding is that pathological gambling, just like regular gambling, follows the Duplication of Purchase Law, which states that the dominant factor of purchase duplication between two brands is their market shares. This means that gambling between any two games at the pathological level, like any regular consumer purchases, exhibits "law-like" regularity based on the pathological gamblers' participation rate in each game. Additionally, pathological gamblers tend to gamble more frequently across all games except lotteries and instant games, as well as make greater cross-purchases compared to heavy regular gamblers. A better understanding of the behavioral traits of regular (particularly heavy regular) and pathological gamblers can be useful to public policy makers and social marketers in order to more accurately identify such gamblers and better manage the negative impacts of gambling.
Carlson, H. W.
1979-01-01
A new linearized-theory pressure-coefficient formulation was studied. The new formulation is intended to provide more accurate estimates of detailed pressure loadings for improved stability analysis and for analysis of critical structural design conditions. The approach is based on the use of oblique-shock and Prandtl-Meyer expansion relationships for accurate representation of the variation of pressures with surface slopes in two-dimensional flow and linearized-theory perturbation velocities for evaluation of local three-dimensional aerodynamic interference effects. The applicability and limitations of the modification to linearized theory are illustrated through comparisons with experimental pressure distributions for delta wings covering a Mach number range from 1.45 to 4.60 and angles of attack from 0 to 25 degrees.
Scaling laws and fluctuations in the statistics of word frequencies
Gerlach, Martin; Altmann, Eduardo G.
2014-11-01
In this paper, we combine statistical analysis of written texts and simple stochastic models to explain the appearance of scaling laws in the statistics of word frequencies. The average vocabulary of an ensemble of fixed-length texts is known to scale sublinearly with the total number of words (Heaps' law). Analyzing the fluctuations around this average in three large databases (Google-ngram, English Wikipedia, and a collection of scientific articles), we find that the standard deviation scales linearly with the average (Taylor's law), in contrast to the prediction of decaying fluctuations obtained using simple sampling arguments. We explain both scaling laws (Heaps' and Taylor's) by modeling the usage of words using a Poisson process with a fat-tailed distribution of word frequencies (Zipf's law) and topic-dependent frequencies of individual words (as in topic models). Considering topical variations leads to quenched averages, turns the vocabulary size into a non-self-averaging quantity, and explains the empirical observations. For the numerous practical applications relying on estimations of vocabulary size, our results show that uncertainties remain large even for long texts. We show how to account for these uncertainties in measurements of lexical richness of texts with different lengths.
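The sublinear vocabulary growth (Heaps' law) that the paper starts from can be reproduced with a few lines of sampling from a Zipf distribution (a simple i.i.d. sketch; the paper's Poisson and topic-dependent model is more refined):

```python
# Sketch: draw word tokens i.i.d. from a Zipf distribution and watch the
# vocabulary V(n) grow sublinearly with the number of tokens n (Heaps' law).
import random

random.seed(0)
W = 50_000                                     # number of word types
weights = [1.0 / r for r in range(1, W + 1)]   # Zipf's law: f_r ~ 1/r

def vocabulary_size(n_tokens):
    """Number of distinct types in a text of n_tokens Zipf-sampled words."""
    return len(set(random.choices(range(W), weights=weights, k=n_tokens)))

v1 = vocabulary_size(10_000)
v2 = vocabulary_size(100_000)
# Sublinear growth: a 10x longer text has far fewer than 10x the types.
print(v2 / v1)   # well below 10
```

The paper's point is precisely that such simple sampling arguments get the average right but underestimate the fluctuations around it once topical variation is taken into account.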
National Research Council Canada - National Science Library
2007-01-01
...), human rights, rules of engagement, emergency essential civilians supporting military operations, contingency contractor personnel, foreign and deployment, criminal law, environmental law, fiscal law...
International Nuclear Information System (INIS)
Bringuier, P.
2009-01-01
The object of this report is to present the evolution of nuclear law during the period from 2006 to 2008, a period that was characterized in France by a thorough rewriting of the rules, starting with the establishment of a control authority. The regulatory framework of nuclear activities has been deeply changed by numerous texts. This first part presents: (1) institutional aspects; (2) openness and public information; (7) radioactive wastes; and (9) liability and insurance. A subsequent publication will treat: (3) safety and radiation protection; (4) nuclear material, inspection, physical protection; (5) transport; (6) trade, non-proliferation; (8) radiological accidents. (N.C.)
International Nuclear Information System (INIS)
2016-01-01
This section treats of the following case laws: 1 - Canada: Decision of the Canadian Federal Court of Appeal dismissing an appeal related to an environmental assessment of a project to refurbish and extend the life of an Ontario nuclear power plant; 2 - Poland: Decision of the Masovian Voivod of 28 December 2015 concerning the legality of the resolution on holding a local referendum in the Commune of Rozan regarding a new radioactive waste repository (2015); 3 - United States: Commission authorises issuance of construction permit for the Shine Medical Isotope Facility in Janesville, Wisconsin; 4 - United States: Commission authorises issuance of combined licences for the South Texas Project site in Matagorda County, Texas
International Nuclear Information System (INIS)
2012-01-01
This section gathers the following case laws: 1 - Canada: Judicial review of the Darlington new nuclear power plant project; appeal decision upholding criminal convictions related to an attempt to export nuclear-related dual-use items to Iran: Her Majesty the Queen v. Yadegari; 2 - European Commission: Greenland cases; 3 - France: Chernobyl accident - decision of dismissal of the Court of Appeal of Paris; 4 - Slovak Republic: Aarhus Convention compliance update; 5 - United States: Judgement of a US court of appeals upholding the NRC's dismissal of challenges to the renewal of the operating licence for Oyster Creek Nuclear Generating Station; re-examination of the project of a high-level waste disposal site at Yucca Mountain
Karloff, Howard
1991-01-01
To this reviewer’s knowledge, this is the first book accessible to the upper division undergraduate or beginning graduate student that surveys linear programming from the Simplex Method…via the Ellipsoid algorithm to Karmarkar’s algorithm. Moreover, its point of view is algorithmic and thus it provides both a history and a case history of work in complexity theory. The presentation is admirable; Karloff's style is informal (even humorous at times) without sacrificing anything necessary for understanding. Diagrams (including horizontal brackets that group terms) aid in providing clarity. The end-of-chapter notes are helpful...Recommended highly for acquisition, since it is not only a textbook, but can also be used for independent reading and study. —Choice Reviews The reader will be well served by reading the monograph from cover to cover. The author succeeds in providing a concise, readable, understandable introduction to modern linear programming. —Mathematics of Computing This is a textbook intend...
International Nuclear Information System (INIS)
Anon.
2002-01-01
Several judgements are reported: Supreme Administrative Court judgement rejecting an application to prevent construction of a new nuclear power plant (Finland); judgement of the Council of State specifying the law applicable to storage facilities for depleted uranium (France); Supreme Court decision overturning for foreign spent fuel (Russian Federation); Court of Appeal judgement on the government decision to allow the start-up of a MOX fuel plant (United Kingdom); judgement on the lawfulness of authorizations granted by the Environment Agency: Marchiori v. the Environment Agency (UK); Kennedy v. Southern California Edison Co. (USA); judgement concerning Ireland's application to prevent operation of BNFL's MOX facility at Sellafield: Ireland v. United Kingdom; at the European Court of Human Rights, Balmer-Schafroth and Others v. Switzerland; parliamentary decision rescinding the shutdown date for Barsebaeck-2 (Sweden); decision of the International Trade Commission regarding imposition of countervailing and anti-dumping duties on imports of low-enriched uranium from the European Union; Yucca Mountain site recommendation (USA). (N.C.)
Directory of Open Access Journals (Sweden)
Santana Isabel
2011-08-01
Full Text Available Abstract. Background: Dementia and cognitive impairment associated with aging are a major medical and social concern. Neuropsychological testing is a key element in the diagnostic procedures of Mild Cognitive Impairment (MCI), but has presently a limited value in the prediction of progression to dementia. We advance the hypothesis that newer statistical classification methods derived from data mining and machine learning, like Neural Networks, Support Vector Machines and Random Forests, can improve the accuracy, sensitivity and specificity of predictions obtained from neuropsychological testing. Seven non-parametric classifiers derived from data mining methods (Multilayer Perceptron Neural Networks, Radial Basis Function Neural Networks, Support Vector Machines, CART, CHAID and QUEST Classification Trees, and Random Forests) were compared to three traditional classifiers (Linear Discriminant Analysis, Quadratic Discriminant Analysis and Logistic Regression) in terms of overall classification accuracy, specificity, sensitivity, area under the ROC curve and Press' Q. Model predictors were 10 neuropsychological tests currently used in the diagnosis of dementia. Statistical distributions of classification parameters obtained from a 5-fold cross-validation were compared using Friedman's nonparametric test. Results: Press' Q test showed that all classifiers performed better than chance alone (p Conclusions: When taking into account sensitivity, specificity and overall classification accuracy, Random Forests and Linear Discriminant Analysis rank first among all the classifiers tested in prediction of dementia using several neuropsychological tests. These methods may be used to improve the accuracy, sensitivity and specificity of dementia predictions from neuropsychological testing.
Power laws for gravity and topography of Solar System bodies
Ermakov, A.; Park, R. S.; Bills, B. G.
2017-12-01
When a spacecraft visits a planetary body, it is useful to be able to predict its gravitational and topographic properties. This knowledge is important for determining the level of perturbations in the spacecraft's motion, as well as for planning the observation campaign. It has been known for the Earth that the power spectrum of gravity follows a power law, also known as the Kaula rule (Kaula, 1963; Rapp, 1989). A similar rule was derived for topography (Vening-Meinesz, 1951). The goal of this paper is to generalize the power law that can characterize the gravity and topography power spectra for bodies across a wide range of sizes. We have analyzed shape power spectra of bodies that have either their global shape or gravity field measured. These bodies span five orders of magnitude in radius and surface gravity and include terrestrial planets, icy moons and minor bodies. We have found that, despite having different internal structures, compositions and mechanical properties, the topography power spectra of these bodies' shapes can be modeled with a similar power law rescaled by the surface gravity. Having empirically found a power law for topography, we can map it to a gravity power law. Special care should be taken for low-degree harmonic coefficients due to potential isostatic compensation. For minor bodies, uniform density can be assumed. The gravity coefficients are a linear function of the shape coefficients for close-to-spherical bodies. In this case, the power law for gravity will be steeper than the power law of topography due to the factor (2n+1) in the gravity expansion (e.g. Eq. 10 in Wieczorek & Phillips, 1998). Higher powers of topography must be retained for irregularly shaped bodies, which breaks the linearity. Therefore, we propose the following procedure to derive an a priori constraint for gravity. First, a surface gravity needs to be determined assuming a typical density for the relevant class of bodies. Second, the scaling coefficient of the
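A Kaula-style rule is simply a power law in spherical-harmonic degree. As a hedged sketch (the functional form S(n) = A/n² and the amplitude below are assumptions for illustration, not the paper's fitted coefficients), the exponent of such a law can be recovered from a sampled spectrum with a log-log least-squares fit:

```python
# Sketch: generate an assumed Kaula-like power spectrum S(n) = A / n**2
# over harmonic degrees n, then recover the exponent by fitting a line to
# (log n, log S) with ordinary least squares.
import math

A = 1e-5
degrees = list(range(2, 101))
spectrum = [A / n**2 for n in degrees]

logn = [math.log(n) for n in degrees]
logS = [math.log(s) for s in spectrum]
m = len(logn)
mx = sum(logn) / m
my = sum(logS) / m
slope = sum((x - mx) * (y - my) for x, y in zip(logn, logS)) / \
        sum((x - mx) ** 2 for x in logn)
print(round(slope, 6))   # recovers the assumed exponent, -2.0
```

In practice the fit would be applied to measured gravity or shape spectra, and the paper's rescaling by surface gravity enters through the amplitude A rather than the exponent.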
Kinematic Hardening: Characterization, Modeling and Impact on Springback Prediction
International Nuclear Information System (INIS)
Alves, J. L.; Bouvier, S.; Jomaa, M.; Billardon, R.; Oliveira, M. C.; Menezes, L. F.
2007-01-01
The constitutive modeling of a material's mechanical behavior, usually carried out using a phenomenological constitutive model, i.e., a yield criterion associated with isotropic and kinematic hardening laws, is of paramount importance in the FEM simulation of sheet metal forming processes, as well as in springback prediction. Among others, the kinematic behavior of the yield surface plays an essential role, since it is indispensable for describing the Bauschinger effect, i.e., the material's response to the multiple tension-compression cycles to which material points are submitted during the forming process. Several laws are commonly used to model and describe kinematic hardening, namely: a) Prager's law, which describes a linear evolution of the kinematic hardening with the plastic strain rate tensor; b) the Frederick-Armstrong non-linear kinematic hardening law, basically a non-linear law with saturation; and c) a more advanced physically-based law, similar to the previous one but sensitive to strain-path changes. In the present paper a mixed kinematic hardening law (linear + non-linear behavior) is proposed and its implementation into a static fully-implicit FE code is described. The identification of material parameters for sheet metals using different strategies, and the classical Bauschinger loading tests (i.e. in-plane forward and reverse monotonic loading), are addressed, and their impact on springback prediction is evaluated. Some numerical results concerning the springback prediction of the Numisheet'05 Benchmark no. 3 are briefly presented to emphasize the importance of correct modeling and identification of the kinematic hardening behavior.
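The difference between the linear and saturating laws can be seen in a one-dimensional backstress sketch (uniaxial, with illustrative constants; this is not the paper's FE implementation):

```python
# 1D sketch of the two classical backstress evolution laws driven by the
# plastic strain increment deps_p:
#   Prager (linear):         dX = C * deps_p
#   Armstrong-Frederick:     dX = C * deps_p - gamma * X * |deps_p|
# The recall term makes X saturate at C/gamma under monotonic plastic flow.

def integrate_backstress(strain_increments, C=10_000.0, gamma=100.0):
    x_linear, x_af = 0.0, 0.0
    for dep in strain_increments:
        x_linear += C * dep
        x_af += C * dep - gamma * x_af * abs(dep)
    return x_linear, x_af

# Monotonic plastic straining to 5% total, in small steps:
path = [1e-5] * 5000
x_lin, x_af = integrate_backstress(path)
print(round(x_lin, 6))   # 500.0: the linear law grows without bound
print(round(x_af, 2))    # near C/gamma = 100: the saturating law levels off
```

A mixed law of the kind the paper proposes would superpose the two contributions, keeping the linear slope at large strains while retaining the nonlinear transient that captures the Bauschinger effect on load reversal.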
Titius-Bode laws in the solar system. 2: Build your own law from disk models
Dubrulle, B.; Graner, F.
1994-02-01
Simply respecting both scale and rotational invariance, it is easy to construct an endless collection of theoretical models predicting a Titius-Bode law, irrespective of their physical content. Due to the numerous ways to get the law and its intrinsic arbitrariness, it is not a useful constraint on theories of solar system formation. To illustrate the simple elegance of scale-invariant methods, we explicitly cook up one of the simplest examples, an infinitely thin cold gaseous disk rotating around a central object. In that academic case, the Titius-Bode law holds during the linear stage of the gravitational instability. The time scale of the instability is of the order of a self-gravitating time scale, (G ρ_d)^(-1/2), where ρ_d is the disk density. This model links the separation between different density maxima with the ratio MD/MC of the masses of the disk and the central object; for instance, MD/MC of the order of 0.18 roughly leads to the observed separation between the planets. We discuss the boundary conditions and the limit of the Wentzel-Kramers-Brillouin (WKB) approximation.
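For reference, the classical form of the law that such models reproduce (this is the standard textbook formula, not this paper's derivation):

```python
# Classical Titius-Bode law: a_n = 0.4 + 0.3 * 2**n AU, with n = -inf for
# Mercury and n = 0, 1, 2, ... thereafter; n = 3 corresponds to Ceres.
predicted = [0.4] + [0.4 + 0.3 * 2**n for n in range(7)]
observed = [0.39, 0.72, 1.00, 1.52, 2.77, 5.20, 9.54, 19.2]  # semi-major axes, AU

for a_pred, a_obs in zip(predicted, observed):
    print(f"predicted {a_pred:6.2f} AU   observed {a_obs:5.2f} AU")
```

The point of the paper is that agreement of this kind is cheap: any scale-invariant disk model produces some geometric progression of this form, so the fit carries little physical information.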
International Nuclear Information System (INIS)
Wiesbauer, Bruno
1978-01-01
This book is the first attempt at a comprehensive compilation of national Austrian nuclear law (Nuclear Liability Act; Radiation Protection Act; Radiation Protection Ordinance; Security Control Act; Act on the Uses of Nuclear Energy - Zwentendorf Nuclear Power Plant) and the most important international agreements to which Austria is a party. Furthermore, the book contains the most important nuclear liability conventions to which Austria is not yet a party but which are applicable in neighbouring countries; the Paris Convention served as a model for the national Nuclear Liability Act and may be used for its interpretation. The author has translated a number of international instruments into German, such as the Expose des Motifs of the Paris Convention. (NEA) [fr
International Nuclear Information System (INIS)
2014-01-01
This section of the Bulletin brings together the texts of the following case laws: Canada: - Judgment of the Federal Court of Canada sending back to a joint review panel for reconsideration the environmental assessment of a proposed new nuclear power plant in Ontario. France : - Conseil d'etat, 24 March 2014 (Request No. 358882); - Conseil d'etat, 24 March 2014 (Request No. 362001). Slovak Republic: - Further developments in cases related to the challenge by Greenpeace Slovakia to the Mochovce nuclear power plant; - Developments in relation to the disclosure of information concerning the Mochovce nuclear power plant. United States: - Initial Decision of the Atomic Safety and Licensing Board Ruling in Favour of Nuclear Innovation North America, LLC (NINA) Regarding Foreign Ownership, Control or Domination
NLO QCD predictions for off-shell tt̄ and tt̄H production and decay at a linear collider
Energy Technology Data Exchange (ETDEWEB)
Nejad, Bijan Chokoufé [Theory Group, DESY,Notkestr. 85, D-22607 Hamburg (Germany); Kilian, Wolfgang [Emmy-Noether-Campus,Walter-Flex-Str. 3, 57068 Siegen (Germany); Lindert, Jonas M.; Pozzorini, Stefano [Physik-Institut, Universität Zürich,Winterthurerstrasse 190, CH-8057 Zürich (Switzerland); Reuter, Jürgen [Theory Group, DESY,Notkestr. 85, D-22607 Hamburg (Germany); Weiss, Christian [Theory Group, DESY,Notkestr. 85, D-22607 Hamburg (Germany); Emmy-Noether-Campus,Walter-Flex-Str. 3, 57068 Siegen (Germany)
2016-12-15
We present predictions for tt̄ and tt̄H production and decay at future lepton colliders including non-resonant and interference contributions up to next-to-leading order (NLO) in perturbative QCD. The obtained precision predictions are necessary for a future precise determination of the top-quark Yukawa coupling, and allow for top-quark phenomenology in the continuum at an unprecedented level of accuracy. Simulations are performed with the automated NLO Monte-Carlo framework Whizard interfaced to the OpenLoops matrix element generator.
International Nuclear Information System (INIS)
2017-01-01
This section treats of the following case laws (United States): 1 - Virginia Uranium, Inc. v. Warren, 848 F.3d 590 (4th Cir. 2017): In the United States District Court for the Western District of Virginia, the plaintiffs, a collection of uranium mining companies and owners of land containing uranium deposits, challenged a Commonwealth of Virginia moratorium on conventional uranium mining. The plaintiffs alleged that the state moratorium was preempted by federal law under the Supremacy Clause of the US Constitution; 2 - United States v. Energy Solutions, Inc.; Rockwell Holdco, Inc.; Andrews County Holdings, Inc.; and Waste Control Specialists, LLC (D. Del. June 21, 2017): In 2016, the United States, acting through the US Department of Justice, commenced an action in United States District Court in Delaware seeking to enjoin the acquisition of Waste Control Specialists, LLC (WCS) and its parent company by Energy Solutions, Inc., and its parent. WCS and Energy Solutions are competitors in the market for the disposal of low-level radioactive waste (LLRW) produced by commercial generators of such material. The United States alleged that the proposed acquisition was unlawful; 3 - Cooper v. Tokyo Electric Power Company, No. 15-56426 (9th Cir. 2017): The plaintiffs are US Navy service members who were deployed off the Japanese coast as part of the US effort to provide earthquake relief after the 9.0 earthquake and tsunami that struck Japan on 11 March 2011. Plaintiffs sued, alleging that TEPCO was negligent in operating the Fukushima Daiichi Nuclear Power Plant and in reporting the extent of the radiation leak
Pestaña-Melero, Francisco Luis; Haff, G Gregory; Rojas, Francisco Javier; Pérez-Castilla, Alejandro; García-Ramos, Amador
2017-12-18
This study aimed to compare the between-session reliability of the load-velocity relationship between (1) linear vs. polynomial regression models, (2) concentric-only vs. eccentric-concentric bench press variants, as well as (3) the within-participants vs. the between-participants variability of the velocity attained at each percentage of the one-repetition maximum (%1RM). The load-velocity relationship of 30 men (age: 21.2±3.8 y; height: 1.78±0.07 m; body mass: 72.3±7.3 kg; bench press 1RM: 78.8±13.2 kg) was evaluated by means of linear and polynomial regression models in the concentric-only and eccentric-concentric bench press variants in a Smith machine. Two sessions were performed with each bench press variant. The main findings were: (1) first-order polynomials (CV: 4.39%-4.70%) provided the load-velocity relationship with higher reliability than second-order polynomials (CV: 4.68%-5.04%); (2) the reliability of the load-velocity relationship did not differ between the concentric-only and eccentric-concentric bench press variants; (3) the within-participants variability of the velocity attained at each %1RM was markedly lower than the between-participants variability. Taken together, these results highlight that, regardless of the bench press variant considered, the individual determination of the load-velocity relationship by a linear regression model can be recommended for monitoring and prescribing the relative load in the Smith machine bench press exercise.
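The modelling choice being compared can be sketched by fitting first- and second-order polynomials to a load-velocity profile. The data below are synthetic and purely illustrative (not the study's measurements), and `numpy.polyfit` stands in for whatever statistical software the authors used:

```python
import numpy as np

# Synthetic load-velocity data: mean velocity typically decreases
# roughly linearly with relative load (%1RM).
rng = np.random.default_rng(0)
pct_1rm = np.linspace(20, 100, 9)                    # relative load (%1RM)
velocity = 1.8 - 0.017 * pct_1rm + rng.normal(0, 0.02, pct_1rm.size)

lin = np.polyfit(pct_1rm, velocity, 1)               # first-order polynomial
quad = np.polyfit(pct_1rm, velocity, 2)              # second-order polynomial

# Individual prescription: velocity expected at e.g. 80 %1RM under each model
v80_lin = np.polyval(lin, 80.0)
v80_quad = np.polyval(quad, 80.0)
```

Fitting each participant's own profile, as the study recommends, then lets a measured velocity be mapped back to a relative load.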
Catastrophic Failure and Critical Scaling Laws of Fiber Bundle Material
Directory of Open Access Journals (Sweden)
Shengwang Hao
2017-05-01
This paper presents a spring-fiber bundle model used to describe the failure process induced by energy release in heterogeneous materials. The conditions that induce catastrophic failure are determined by geometric conditions and energy equilibrium. It is revealed that the relative rates of deformation of, and damage to, the fiber bundle with respect to the boundary controlling displacement ε0 exhibit universal power-law behavior near the catastrophic point, with a critical exponent of −1/2. The ratio of the rate of response to its acceleration exhibits a linear relationship with increasing displacement in the vicinity of the catastrophic point. This allows catastrophic failure to be predicted immediately prior to failure by extrapolating the trajectory of this relationship as it asymptotes to zero. Monte Carlo simulations were performed, confirming these two critical scaling laws.
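The extrapolation idea can be illustrated numerically. This is an illustrative reconstruction, not the paper's spring-fiber code: if the response rate diverges as (x_f − x)^(−1/2) near the catastrophe point x_f, then the rate/acceleration ratio is linear in displacement and crosses zero at x_f, so a linear fit predicts the failure point:

```python
import numpy as np

x_f = 10.0                                  # "true" failure displacement
x = np.linspace(8.0, 9.5, 50)               # measurements before failure
rate = (x_f - x) ** -0.5                    # critical power law, exponent -1/2
accel = np.gradient(rate, x)                # numerical derivative of the rate
ratio = rate / accel                        # analytically equals 2*(x_f - x)

slope, intercept = np.polyfit(x, ratio, 1)
x_f_predicted = -intercept / slope          # zero crossing predicts failure
```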
Reduction of Linear Programming to Linear Approximation
Vaserstein, Leonid N.
2006-01-01
It is well known that every Chebyshev linear approximation problem can be reduced to a linear program. In this paper we show that conversely every linear program can be reduced to a Chebyshev linear approximation problem.
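The well-known direction of the equivalence — posing a Chebyshev (minimax) approximation problem as a linear program — can be sketched as follows. This is the standard construction (minimise the bound t on all residuals), with `scipy.optimize.linprog` assumed as the LP solver:

```python
import numpy as np
from scipy.optimize import linprog

def chebyshev_fit(A, b):
    """Solve min_x max_i |(A x - b)_i| via the classical LP reduction:
    minimise t subject to A x - b <= t and -(A x - b) <= t."""
    m, n = A.shape
    c = np.r_[np.zeros(n), 1.0]                 # objective: minimise t
    A_ub = np.vstack([np.c_[A, -np.ones(m)],    #  A x - t <= b
                      np.c_[-A, -np.ones(m)]])  # -A x - t <= -b
    b_ub = np.r_[b, -b]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (n + 1))
    return res.x[:n], res.x[n]

# Best constant approximation to {0, 1, 2}: x = 1, with max error t = 1.
x, t = chebyshev_fit(np.ones((3, 1)), np.array([0.0, 1.0, 2.0]))
```

The paper's contribution is the converse: encoding an arbitrary LP as a Chebyshev approximation problem.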
Nicholas A. Povak; Paul F. Hessburg; Todd C. McDonnell; Keith M. Reynolds; Timothy J. Sullivan; R. Brion Salter; Bernard J. Crosby
2014-01-01
Accurate estimates of soil mineral weathering are required for regional critical load (CL) modeling to identify ecosystems at risk of the deleterious effects from acidification. Within a correlative modeling framework, we used modeled catchment-level base cation weathering (BCw) as the response variable to identify key environmental correlates and predict a continuous...
Explorative methods in linear models
DEFF Research Database (Denmark)
Høskuldsson, Agnar
2004-01-01
The author has developed the H-method of mathematical modeling, which builds up the model by parts, where each part is optimized with respect to prediction. Besides providing better predictions than traditional methods, these methods provide graphic procedures for analyzing different features in data. These graphic methods extend the well-known methods and results of Principal Component Analysis to any linear model. Here the graphic procedures are applied to linear regression and Ridge Regression.
Molecular Dynamics Simulations for Resolving Scaling Laws of Polyethylene Melts
Directory of Open Access Journals (Sweden)
Kazuaki Z. Takahashi
2017-01-01
Long-timescale molecular dynamics simulations were performed to estimate the actual physical nature of a united-atom model of polyethylene (PE). Several scaling laws for representative polymer properties are compared to theoretical predictions. Internal structure results indicate a clear departure from theoretical predictions that assume ideal chain statics. Chain motion deviates from predictions that assume ideal motion of short chains. With regard to linear viscoelasticity, the presence or absence of entanglements strongly affects the duration of the theoretical behavior. Overall, the results indicate that Gaussian statics and dynamics are not necessarily established for real atomistic models of PE. Moreover, the actual physical nature should be carefully considered when using atomistic models for applications that expect typical polymer behaviors.
Directory of Open Access Journals (Sweden)
Majid Mohammadhosseini
2014-05-01
A reliable quantitative structure-retention relationship (QSRR) study has been evaluated to predict the retention indices (RIs) of a broad spectrum of compounds, namely 118 non-linear, cyclic and heterocyclic terpenoids (both saturated and unsaturated), on an HP-5MS fused silica column. A principal component analysis showed that seven compounds lay outside of the main cluster. After elimination of the outliers, the data set was divided into training and test sets of 80 and 28 compounds, respectively. The method was tested by application of the particle swarm optimization (PSO) method to find the most effective molecular descriptors, followed by multiple linear regression (MLR). The PSO-MLR model was further confirmed through "leave one out cross-validation" (LOO-CV) and "leave group out cross-validation" (LGO-CV), as well as external validations. The promising statistical figures of merit associated with the proposed model (R²train = 0.936, Q²LOO = 0.928, Q²LGO = 0.921, F = 376.4) confirm its high ability to predict RIs with negligible relative errors of prediction (REPtrain = 4.8%, REPtest = 6.0%).
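The leave-one-out cross-validated Q² statistic reported above can be computed generically as follows. This is a sketch with synthetic descriptors and responses (not the paper's data, descriptors, or software):

```python
import numpy as np

def q2_loo(X, y):
    """Leave-one-out cross-validated Q^2 for multiple linear regression:
    refit without each sample, predict it, and accumulate PRESS."""
    n = len(y)
    press = 0.0
    for i in range(n):
        mask = np.arange(n) != i
        Xi = np.c_[np.ones(mask.sum()), X[mask]]        # add intercept column
        coef, *_ = np.linalg.lstsq(Xi, y[mask], rcond=None)
        pred = np.r_[1.0, X[i]] @ coef                  # predict held-out sample
        press += (y[i] - pred) ** 2
    return 1.0 - press / np.sum((y - y.mean()) ** 2)

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 3))                            # 3 hypothetical descriptors
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(0, 0.1, 40)
q2 = q2_loo(X, y)
```

A Q² close to the training R², as in the paper, indicates the model is not overfitted to the training set.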
Evaluation of Iowa's anti-bullying law.
Ramirez, Marizen; Ten Eyck, Patrick; Peek-Asa, Corinne; Onwuachi-Willig, Angela; Cavanaugh, Joseph E
2016-12-01
Bullying is the most common form of youth aggression. Although 49 of the 50 states in the U.S. have an anti-bullying law in place to prevent bullying, little is known about the effectiveness of these laws. Our objective was to measure the effectiveness of Iowa's anti-bullying law in preventing bullying and improving teacher response to bullying. 6th, 8th, and 11th grade children who completed the 2005, 2008 and 2010 Iowa Youth Survey were included in this study (n = 253,000). Students were coded according to exposure to the law: pre-law for 2005 survey data, one year post-law for 2008 data, and three years post-law for 2010 data. The outcome variables were: 1) being bullied (relational, verbal, physical, and cyber) in the last month and 2) the extent to which teachers/adults on campus intervened with bullying. Generalized linear mixed models were constructed with random effects. The odds of being bullied increased from the pre-law to the one-year post-law period, and then decreased from one year to three years post-law, but not below 2005 pre-law levels. This pattern was consistent across all bullying types except cyberbullying. The odds of teacher intervention decreased 11% (OR = 0.89, 95% CL = 0.88, 0.90) from 2005 (pre-law) to 2010 (post-law). Bullying increased immediately after Iowa's anti-bullying law was passed, possibly due to improved reporting. Reductions in bullying occurred as the law matured. Teacher response did not improve after the passage of the law.
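The reported effect sizes are odds ratios with confidence limits. Their standard (Wald) computation from a 2x2 table can be sketched as follows; the counts below are purely hypothetical, not the survey's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with Wald 95% confidence limits from a 2x2 table:
    a/b = events/non-events in group 1, c/d = events/non-events in group 2."""
    or_ = (a / b) / (c / d)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical counts: teacher intervention reported post-law vs pre-law.
or_, lo, hi = odds_ratio_ci(4450, 45550, 5000, 45000)
```

The study's mixed models additionally include random effects for clustering, which this simple sketch omits.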
Statistical Basis for Predicting Technological Progress
Nagy, Béla; Farmer, J. Doyne; Bui, Quan M.; Trancik, Jessika E.
2013-01-01
Forecasting technological progress is of great interest to engineers, policy makers, and private investors. Several models have been proposed for predicting technological improvement, but how well do these models perform? An early hypothesis made by Theodore Wright in 1936 is that cost decreases as a power law of cumulative production. An alternative hypothesis is Moore's law, which can be generalized to say that technologies improve exponentially with time. Other alternatives were proposed by Goddard, Sinclair et al., and Nordhaus. These hypotheses have not previously been rigorously tested. Using a new database on the cost and production of 62 different technologies, which is the most expansive of its kind, we test the ability of six different postulated laws to predict future costs. Our approach involves hindcasting and developing a statistical model to rank the performance of the postulated laws. Wright's law produces the best forecasts, but Moore's law is not far behind. We discover a previously unobserved regularity that production tends to increase exponentially. A combination of an exponential decrease in cost and an exponential increase in production would make Moore's law and Wright's law indistinguishable, as originally pointed out by Sahal. We show for the first time that these regularities are observed in data to such a degree that the performance of these two laws is nearly the same. Our results show that technological progress is forecastable, with the square root of the logarithmic error growing linearly with the forecasting horizon at a typical rate of 2.5% per year. These results have implications for theories of technological change, and assessments of candidate technologies and policies for climate change mitigation. PMID:23468837
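The two leading hypotheses amount to log-space regressions. The sketch below uses synthetic data with illustrative parameter values (not the paper's 62-technology database):

```python
import numpy as np

# Synthetic technology history: production grows exponentially and cost
# follows Wright's law, cost ∝ (cumulative production)^(-w).
t = np.arange(20)                            # years
production = 100 * 1.1 ** t
cumulative = np.cumsum(production)
cost = 50.0 * cumulative ** -0.3

# Wright's law: fit log(cost) against log(cumulative production)
w = -np.polyfit(np.log(cumulative), np.log(cost), 1)[0]
# Moore's law: fit log(cost) against time
m = -np.polyfit(t, np.log(cost), 1)[0]
# With exponentially growing production, both fits succeed, which is why
# the two laws become nearly indistinguishable (Sahal's observation).
```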
International Nuclear Information System (INIS)
Anon.
2009-01-01
Different case laws are presented in this part: By decision dated 17 July 2009, the Ontario Court of Appeal (Canada) ruled on the scope of solicitor-client privilege and the protections that may be afforded to privileged investigation reports. The decision reaffirms the Canadian court system's view of the importance of the protection of solicitor-client privilege to the administration of justice. For the United States, there is first a judgment of a U.S. Court of Appeals on the design basis threat security rule (2009). This case concerns a challenge to the U.S. Nuclear Regulatory Commission's (NRC) revised design basis threat rule, which was adopted in 2007 (Nuclear Law Bulletin No. 80). The petitioners Public Citizen, Inc., San Luis Obispo Mothers for Peace and the State of New York filed a lawsuit in the U.S. Court of Appeals for the Ninth Circuit alleging that the NRC acted arbitrarily and capriciously and in violation of law by refusing to include the threat of air attacks in its final revised design basis threat rule. On 24 July 2009, a panel of three Ninth Circuit judges ruled 2-1 that the NRC acted reasonably in not including an air threat in its design basis rule. Secondly, a judgment of a U.S. Court of Appeals on consideration of the environmental impact of terrorist attacks on nuclear facilities (2009): this case concerns the scope of the U.S. Nuclear Regulatory Commission's environmental analysis during its review of applications to re-licence commercial nuclear power plants. New Jersey urged the NRC to consider the environmental impact of an airborne terrorist attack on the power plant, arguing that such analysis was required by the National Environmental Policy Act (NEPA). On 31 March 2009, a panel of three circuit judges declined to follow the Ninth Circuit opinion and affirmed the NRC decision 3-0, ruling that the NRC was not required to consider terrorism in its NEPA analysis because NRC re-licensing would not be a reasonably close cause of terrorism
International Nuclear Information System (INIS)
Anon.
2011-01-01
This chapter gathers three case laws, one concerning France and the two others concerning the United States. France - Decision of the Administrative Court in Strasbourg on the permanent shutdown of the Fessenheim nuclear power plant: On 9 March 2011, the administrative court in Strasbourg confirmed the government's refusal to immediately close the Fessenheim nuclear power plant, the first unit of which started operation on 1 January 1978. The court rejected the motion of the 'Association trinationale de protection nucleaire' (ATPN) filed against the decision of the Minister of Economy, Industry and Employment to refuse the final shutdown of the plant. The group, which brings together associations as well as French, German and Swiss municipalities, had taken legal action in December 2008. United States - Case law 1 - Judgment of a US Court of Appeals on public access to sensitive security information and consideration of the environmental impacts of terrorist attacks on nuclear facilities: This case concerns 1) the public's right to access classified and sensitive security information relied upon by the US Nuclear Regulatory Commission (NRC) in its environmental review; and 2) the sufficiency of the NRC's environmental review of the impacts of terrorist attacks for a proposed Independent Spent Fuel Storage Installation (ISFSI). In 2003, the NRC ruled that the National Environmental Policy Act (NEPA) did not require the NRC to consider the impacts of terrorist attacks in its environmental review for the proposed ISFSI at the Diablo Canyon Power Plant. NEPA mandates that all federal agencies must prepare a detailed statement on the environmental impacts before undertaking a major federal action that significantly affects the human environment. In 2004, the San Luis Obispo Mothers for Peace, a group of individuals who live near the Diablo Canyon Power Plant, filed a petition in the US Court of Appeals for the Ninth Circuit challenging the NRC's 2003 decision. The
Tokamak confinement scaling laws
International Nuclear Information System (INIS)
Connor, J.
1998-01-01
The scaling of energy confinement with engineering parameters, such as plasma current and major radius, is important for establishing the size of an ignited fusion device. Tokamaks exhibit a variety of modes of operation with different confinement properties. At present there is no adequate first principles theory to predict tokamak energy confinement and the empirical scaling method is the preferred approach to designing next step tokamaks. This paper reviews a number of robust theoretical concepts, such as dimensional analysis and stability boundaries, which provide a framework for characterising and understanding tokamak confinement and, therefore, generate more confidence in using empirical laws for extrapolation to future devices. (author)
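The empirical scaling method amounts to assuming a power law in the engineering parameters and fitting it as a multiple linear regression in log space. A minimal sketch with synthetic data follows; the two parameters and their exponents are illustrative only, and real confinement scalings such as IPB98(y,2) involve many more variables:

```python
import numpy as np

# Synthetic "database" of discharges with an assumed power-law scaling.
rng = np.random.default_rng(3)
current = rng.uniform(0.5, 15.0, 200)       # plasma current, MA
radius = rng.uniform(0.5, 6.0, 200)         # major radius, m
tau_e = 0.05 * current**0.9 * radius**1.8 * np.exp(rng.normal(0, 0.05, 200))

# Power law tau_E = C * I^a * R^b becomes linear after taking logarithms.
X = np.c_[np.ones(current.size), np.log(current), np.log(radius)]
coef, *_ = np.linalg.lstsq(X, np.log(tau_e), rcond=None)
a_exp, b_exp = coef[1], coef[2]             # recovered scaling exponents
```

Dimensional-analysis constraints, as discussed in the paper, restrict which exponent combinations are physically admissible before extrapolating to a next-step device.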
Directory of Open Access Journals (Sweden)
Jan Vittek
2004-01-01
Closed-loop position control of mechanisms directly driven by linear synchronous motors with permanent magnets is presented. The control strategy is based on forced dynamic control, which is a form of feedback linearisation, yielding a non-linear multivariable control law to obtain prescribed linear speed dynamics together with the vector control condition of mutual orthogonality between the stator current and magnetic flux vectors (assuming perfect estimates of the plant parameters). The outer position control loop is closed via simple feedback with proportional gain. Simulations of the designed control system, including the drive with power electronic switching, predict the intended drive performance.
Rifai, Eko Aditya; van Dijk, Marc; Vermeulen, Nico P. E.; Geerke, Daan P.
2018-01-01
Computational protein binding affinity prediction can play an important role in drug research but performing efficient and accurate binding free energy calculations is still challenging. In the context of phase 2 of the Drug Design Data Resource (D3R) Grand Challenge 2 we used our automated eTOX ALLIES approach to apply the (iterative) linear interaction energy (LIE) method and we evaluated its performance in predicting binding affinities for farnesoid X receptor (FXR) agonists. Efficiency was obtained by our pre-calibrated LIE models and molecular dynamics (MD) simulations at the nanosecond scale, while predictive accuracy was obtained for a small subset of compounds. Using our recently introduced reliability estimation metrics, we could classify predictions with higher confidence by featuring an applicability domain (AD) analysis in combination with protein-ligand interaction profiling. The outcomes of and agreement between our AD and interaction-profile analyses to distinguish and rationalize the performance of our predictions highlighted the relevance of sufficiently exploring protein-ligand interactions during training and it demonstrated the possibility to quantitatively and efficiently evaluate if this is achieved by using simulation data only.
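The LIE estimate itself is a simple linear model of ensemble-averaged interaction energies. A minimal sketch follows; the coefficient values and energy differences are illustrative placeholders, not eTOX ALLIES output or the calibrated FXR model:

```python
def lie_binding_free_energy(d_vdw, d_elec, alpha=0.18, beta=0.5, gamma=0.0):
    """LIE estimate: dG_bind ≈ alpha*<dV_vdW> + beta*<dV_elec> + gamma, where
    <dV> are bound-minus-free ensemble-averaged ligand-surroundings energies
    taken from MD simulations."""
    return alpha * d_vdw + beta * d_elec + gamma

# Illustrative energy differences in kcal/mol:
dg = lie_binding_free_energy(d_vdw=-30.0, d_elec=-8.0)
```

In the iterative variant used by the authors, alpha, beta, and gamma are calibrated on training compounds, and the applicability-domain analysis flags predictions made outside the calibrated region.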
International Nuclear Information System (INIS)
2013-01-01
This section reports on 7 case laws from 4 countries: - France: Conseil d'Etat decision, 28 June 2013, refusing to suspend operation of the Fessenheim nuclear power plant; - Slovak Republic: New developments including the Supreme Court's judgment in a matter involving Greenpeace Slovakia's claims regarding the Mochovce nuclear power plant; New developments in the matter involving Greenpeace's demands for information under the Freedom of Information Act; - Switzerland: Judgment of the Federal Supreme Court in the matter of the Departement federal de l'environnement, des transports, de l'energie et de la communication (DETEC) against Ursula Balmer-Schafroth and others on consideration of admissibility of a request to withdraw the operating licence for the Muehleberg nuclear power plant; - United States: Judgment of the Court of Appeals for the District of Columbia Circuit granting petition for writ of mandamus ordering US Nuclear Regulatory Commission (NRC) to resume Yucca Mountain licensing; Judgment of the Court of Appeals for the Second Circuit invalidating two Vermont statutes as preempted by the Atomic Energy Act; Judgment of the NRC on transferring Shieldalloy site to New Jersey's jurisdiction
International Nuclear Information System (INIS)
2014-01-01
This section treats of the following case laws sorted by country: 1 - Germany: Federal Administrative Court confirms the judgments of the Higher Administrative Court of the Land Hesse: The shutdown of nuclear power plant Biblis blocks A and B based on a 'moratorium' imposed by the Government was unlawful; List of lawsuits in the nuclear field. 2 - Slovak Republic: Further developments in cases related to the challenge by Greenpeace Slovakia to the Mochovce nuclear power plant; Developments in relation to the disclosure of information concerning the Mochovce nuclear power plant. 3 - United States: Judgment of the Nuclear Regulatory Commission resuming the licensing process for the Department of Energy's construction authorisation application for the Yucca Mountain high-level radioactive waste repository; Judgment of the Licensing Board in favour of Shaw AREVA MOX Services regarding the material control and accounting system at the proposed MOX Facility; Dismissal by US District Court Judge of lawsuit brought by US military personnel against Tokyo Electric Power Company (TEPCO) in connection with the Fukushima Daiichi nuclear power plant accident
International Nuclear Information System (INIS)
Anon.
2000-01-01
This article reviews the judgements and law decisions concerning nuclear activities throughout the world during the end of 1999 and the first half of 2000. In Belgium, a judgement has allowed the return of nuclear waste from France. In France, the Council of State confirmed the repeal of an authorization order of an installation dedicated to the storage of uranium sesquioxide, on the basis of an insufficient risk analysis. Also in France, the criminal chamber of the French Supreme Court ruled that production in excess of that authorized in the licence can be compared to carrying out operations without a licence. In Japan, the Fukui district court rejected a lawsuit filed by local residents calling for the permanent closure, on safety grounds, of the Monju reactor. In the Netherlands, the Council of State ruled that the Dutch government had no legal basis for limiting in time the operating licence of the Borssele plant. In the USA, a district court rejected a request to ban a MOX fuel shipment. (A.C.)
Recent publications on environmental law
International Nuclear Information System (INIS)
Lohse, S.
1988-01-01
The bibliography contains 1235 references to publications covering the following subject fields: general environmental law; environmental law in relation to constitutional law, administrative law, procedural law, revenue law, criminal law, private law, industrial law; law of regional development; nature conservation law; law on water protection; waste management law; law on protection against harmful effects on the environment; atomic energy law and radiation protection law; law of the power industry and the mining industry; laws and regulations on hazardous material and environmental hygiene. (HP) [de
Recent publications on environmental law
International Nuclear Information System (INIS)
Lohse, S.
1989-01-01
The bibliography contains 1160 references to publications covering the following subject fields: General environmental law; environmental law in relation to constitutional law, administrative law, procedural law, revenue law, criminal law, private law, industrial law; law of regional development; nature conservation law; law on water protection; waste management law; law on protection against harmful effects on the environment; atomic energy law and radiation protection law; law of the power industry and the mining industry; laws and regulations on hazardous material and environmental hygiene. (orig./HP) [de
Directory of Open Access Journals (Sweden)
Tanwiwat Jaikuna
2017-02-01
Purpose: To develop an in-house software program that is able to calculate and generate the biological dose distribution and biological dose volume histogram by physical dose conversion using the linear-quadratic-linear (LQL) model. Material and methods: The Isobio software was developed using MATLAB version 2014b to calculate and generate the biological dose distribution and biological dose volume histograms. The physical dose from each voxel in the treatment plan was extracted through the Computational Environment for Radiotherapy Research (CERR), and the accuracy was verified by the differentiation between the dose volume histogram from CERR and the treatment planning system. An equivalent dose in 2 Gy fractions (EQD2) was calculated using the biological effective dose (BED) based on the LQL model. The software calculation and the manual calculation were compared for EQD2 verification with paired t-test statistical analysis using IBM SPSS Statistics version 22 (64-bit). Results: Two- and three-dimensional biological dose distributions and biological dose volume histograms were displayed correctly by the Isobio software. Different physical doses were found between CERR and the treatment planning system (TPS) in Oncentra, with 3.33% in the high-risk clinical target volume (HR-CTV) determined by D90%, 0.56% in the bladder and 1.74% in the rectum determined by D2cc, and less than 1% in Pinnacle. The difference in the EQD2 between the software calculation and the manual calculation was not significant (p = 0.820, 0.095, and 0.593 for external beam radiation therapy (EBRT) and p = 0.240, 0.320, and 0.849 for brachytherapy (BT) in the HR-CTV, bladder, and rectum, respectively). Conclusions: The Isobio software is a feasible tool to generate the biological dose distribution and biological dose volume histogram for treatment plan evaluation in both EBRT and BT.
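The EQD2-from-BED conversion used in the verification can be sketched with the standard LQ expressions. Note this covers only the plain LQ part; the Isobio software uses an LQL extension, which modifies the response at large doses per fraction:

```python
def bed(total_dose, dose_per_fraction, alpha_beta):
    """Biologically effective dose under the plain LQ model:
    BED = D * (1 + d / (alpha/beta))."""
    return total_dose * (1.0 + dose_per_fraction / alpha_beta)

def eqd2(total_dose, dose_per_fraction, alpha_beta):
    """Equivalent dose in 2 Gy fractions: EQD2 = BED / (1 + 2/(alpha/beta))."""
    return bed(total_dose, dose_per_fraction, alpha_beta) / (1.0 + 2.0 / alpha_beta)

# 50 Gy in 2 Gy fractions is, by definition, its own EQD2:
assert abs(eqd2(50.0, 2.0, alpha_beta=10.0) - 50.0) < 1e-9
# Hypofractionated example: 40 Gy in 8 Gy fractions, alpha/beta = 3 Gy
e = eqd2(40.0, 8.0, alpha_beta=3.0)
```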
International Nuclear Information System (INIS)
2015-01-01
This section treats of the following case laws: 1 - Canada: Decision of the Canadian Federal Court of Appeal overturning a decision to send back for reconsideration an environmental assessment of a proposed new nuclear power plant in Ontario; 2 - France: Council of State decision, 28 November 2014, Federation 'Reseau sortir du nucleaire' (Nuclear Phase-Out network) and others vs. Electricite de France (EDF), Request No. 367013 for the annulment of: - The resolution of the French Nuclear Safety Authority (ASN) dated 4 July 2011 specifying additional regulations for Electricite de France (EDF) designed to strengthen the reactor basemat of reactor No. 1 in the Fessenheim nuclear power plant, and - The resolution of ASN dated 19 December 2012 approving the start of work on reinforcing the reactor basemat in accordance with the dossier submitted by EDF; 3 - Germany: Judgment of the European Court of Justice on the nuclear fuel tax; 4 - India: Judgment of the High Court of Kerala in a public interest litigation challenging the constitutional validity of the Civil Liability for Nuclear Damage Act, 2010; 5 - Japan - District court decisions on lawsuits related to the restart of Sendai NPP and Takahama NPP; 6 - Poland: Decision of the Masovian Voivod concerning the legality of the resolution on holding a local referendum in the Commune of Rozan regarding a new radioactive waste repository; Certain provisions of the Regulation of the Minister of Health of 18 February 2011 on the conditions for safe use of ionising radiation for all types of medical exposure have been declared unconstitutional by a judgment pronounced by the Constitutional Tribunal; 7 - Slovak Republic: Developments in relation to the disclosure of information concerning the Mochovce nuclear power plant
DEFF Research Database (Denmark)
Martinez Romera, Beatriz; Coelho, Nelson F.
2018-01-01
Treaty law is only one of many sources of the law that governs international relations, the others being customary international law and principles of law. The main conclusion of this chapter is that states may have to wake up to the limitations of the UNCLOS and that this will require understanding...... the relative role of this treaty among other sources of international law....
International Nuclear Information System (INIS)
Ito, Hiroshi
2013-01-01
Nuclear law used to stand outside environmental law. The act on the transparency and the security of the nuclear matter was enacted in 2006 and incorporated into the code of the environment in 2012. This means that nuclear law is now part of environmental law, and that it has advanced. This paper reports on French nuclear law. (author)
Holko, David A.
1982-01-01
Presents a complete computer program demonstrating the relationship between volume/pressure for Boyle's Law, volume/temperature for Charles' Law, and volume/moles of gas for Avogadro's Law. The programming reinforces students' application of gas laws and equates a simulated moving piston to theoretical values derived using the ideal gas law.
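A minimal modern sketch of such a program (in Python rather than the original's era-appropriate language; the constant and special cases are standard):

```python
R = 0.082057  # ideal gas constant, L·atm/(mol·K)

def volume(n_mol, temp_k, pressure_atm):
    """Ideal gas law V = nRT/P; Boyle's, Charles' and Avogadro's laws are
    its special cases with two of the three variables held fixed."""
    return n_mol * R * temp_k / pressure_atm

v_stp = volume(1.0, 273.15, 1.0)          # about 22.41 L for one mole at STP
v_boyle = volume(1.0, 273.15, 0.5)        # Boyle: halve P at fixed n, T
v_charles = volume(1.0, 2 * 273.15, 1.0)  # Charles: double T at fixed n, P
v_avogadro = volume(2.0, 273.15, 1.0)     # Avogadro: double n at fixed T, P
```

Each special case doubles the volume relative to the STP baseline, which is the comparison the simulated piston makes against theory.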
Energy Technology Data Exchange (ETDEWEB)
Szadkowski, Zbigniew, E-mail: zszadkow@kfd2.phys.uni.lodz.pl [University of Lodz, Department of Physics and Applied Informatics (Poland); Fraenkel, E.D. [Kernfysisch Versneller Instituut of the University of Groningen, Groningen (Netherlands); Glas, Dariusz; Legumina, Remigiusz [University of Lodz, Department of Physics and Applied Informatics (Poland)
2013-12-21
The electromagnetic part of an extensive air shower developing in the atmosphere provides significant information complementary to that obtained by water Cherenkov detectors, which are predominantly sensitive to the muonic content of an air shower at ground. The emissions can be observed in the frequency band between 10 and 100 MHz. However, this frequency range is significantly contaminated by narrow-band RFI and other human-made distortions. The Auger Engineering Radio Array currently suppresses the RFI by multiple time-to-frequency domain conversions using an FFT procedure as well as by a set of manually chosen IIR notch filters in the time domain. An alternative approach developed in this paper is an adaptive FIR filter based on linear prediction (LP). The coefficients for the linear predictor are dynamically refreshed and calculated in the virtual NIOS processor. The radio detector is an autonomous system installed on the Argentinean pampas and supplied from a solar panel, so power consumption versus the effectiveness of the calculation inside the FPGA is the figure of merit to be minimized. Results show that the RFI contamination can be significantly suppressed by the LP FIR filter for 64 or fewer stages. -- Highlights: • We propose an adaptive method using linear prediction for periodic RFI suppression. • Requirements are the detection of short transient signals powered by solar panels. • The RFI is significantly suppressed by ∼70%, even in a very contaminated environment. • This method consumes less energy than the current method based on FFT used in AERA. • Distortion of the short transient signals is negligible.
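The linear-prediction idea behind the FIR filter can be sketched offline. This is a plain normal-equations solution in Python on synthetic narrow-band contamination, not the AERA firmware, which refreshes the coefficients dynamically in the FPGA/NIOS processor:

```python
import numpy as np

def lp_coefficients(x, order):
    """Forward linear-prediction coefficients from the autocorrelation
    normal equations R a = r: predict x[k] from the 'order' past samples."""
    r = np.correlate(x, x, mode="full")[x.size - 1:]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])

rng = np.random.default_rng(2)
n = np.arange(4096)
rfi = np.sin(2 * np.pi * 0.1 * n)             # narrow-band contamination
signal = rfi + 0.1 * rng.normal(size=n.size)  # broadband part is unpredictable

p = 32
a = lp_coefficients(signal, p)
pred = np.zeros_like(signal)
for k in range(p, signal.size):
    pred[k] = a @ signal[k - p:k][::-1]       # prediction from p past samples
residual = signal - pred                      # periodic RFI largely removed
```

Because only the periodic (predictable) part is subtracted, short transient pulses pass through the residual nearly undistorted, which is the property the authors require.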
Kansas Data Access and Support Center — Law Enforcement Locations in Kansas Any location where sworn officers of a law enforcement agency are regularly based or stationed. Law enforcement agencies "are...
DEFF Research Database (Denmark)
Edlund, Hans Henrik
2003-01-01
Report on Danish Tenancy Law. Contribution to a research project co-financed by the Grotius Programme for Judicial Co-Operation in Civil Matters. http://www.iue.it/LAW/ResearchTeaching/EuropeanPrivateLaw/Projects.shtml
International Nuclear Information System (INIS)
Deore, S.M.; Fontenla, D.P.; Beitler, J.J.; Vikram, B.
1997-01-01
PURPOSE/OBJECTIVE: The time factor (γ/α) in the LQ model has been considered irrelevant for late normal tissue injury (1). The failure of the LQ model to predict spinal cord injury in the CHART protocol calls the validity of this hypothesis into question. In this investigation, the incidence of radiation-induced laryngeal edema was evaluated retrospectively in patients treated with different dose-fractionation regimens for glottic carcinoma (2). The BED values of the LQ model calculated for different values of the time factor (γ/α) were correlated with the incidence of radiation-induced laryngeal edema. MATERIALS AND METHODS: A retrospective analysis was carried out for 208 patients with T1 and T2 squamous cell cancer of the vocal cord treated with radical radiotherapy during 1975-80. There were 156 patients with T1 lesions and the remaining 52 patients had T2 lesions. All these patients were treated with three different fractionation regimens of 60.75 Gy/27 F/39 D, 60 Gy/24 F/34 D and 50 Gy/15 F/22 D, using fraction sizes of 2.25 Gy, 2.5 Gy and 3.33 Gy, respectively. For a minimum follow-up of 4 years, the incidence of laryngeal edema was related to fraction size (see table). To investigate the importance of the time factor (γ/α) of the LQ model, BED values were calculated for different values of γ/α and α/β = 2.0 Gy. RESULTS: As shown in the table below, radiation-induced laryngeal edema was found in 17.2% of patients treated with 2.25 Gy/F compared to 44.4% with 3.33 Gy/F. The TDF model failed to correlate with the incidence of laryngeal edema. The BED values of the LQ model also failed to show a statistically significant correlation with the incidence of late complications. However, the BED values accounting for the time factor (particularly γ/α = 1.2 Gy/day) show significantly improved correlation with the incidence of laryngeal edema. CONCLUSION: For comparable TDF values the incidence of laryngeal edema varied from 17% to 44.4%. The analysis with
Moebes, T. A.
1994-01-01
To locate the accessory pathway(s) in preexcitation syndromes, epicardial and endocardial ventricular mapping is performed during anterograde ventricular activation via the accessory pathway(s), from data originally received in signal form. As the number of channels increases, more automated detection of coherent/incoherent signals is needed, as well as the prediction and prognosis of ventricular tachycardia (VT). Today's computers and computer program algorithms are not good at simple perceptual tasks such as recognizing a pattern or identifying a sound. This discrepancy, among other things, has been a major motivating factor in developing brain-based, massively parallel computing architectures. Neural net paradigms have proven to be effective at pattern recognition tasks. In signal processing, the picking of coherent/incoherent signals represents a pattern recognition task for computer systems, as does the picking of signals representing the onset of VT. We attacked this problem by defining four signal attributes for each potential first maximal arrival peak and one signal attribute over the entire signal as input to a back-propagation neural network. One attribute was the predicted amplitude value after the maximum amplitude over a data window. Then, using a set of known (user-selected) coherent/incoherent signals and signals representing the onset of VT, we trained the back-propagation network to recognize coherent/incoherent signals and signals indicating the onset of VT. Since our output scheme involves a true or false decision, and since the output unit computes values between 0 and 1, we used a fuzzy arithmetic approach to classify data as coherent/incoherent signals. Furthermore, a mean-square error analysis was used to determine system stability. The neural-net-based system achieved high accuracy in picking coherent/incoherent signals on different patients. The system
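The classification scheme described above, five signal attributes fed to a back-propagation network whose (0,1) output is read as a fuzzy membership degree, can be sketched as follows. The data, labeling rule, network size and learning rate are illustrative assumptions; only the overall structure (back-propagation on squared error, fuzzy thresholding, mean-square-error monitoring) mirrors the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic stand-ins for the five attributes (four per-peak attributes
# plus one whole-signal attribute); the labeling rule is hypothetical.
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 4] > 0).astype(float)

# One hidden layer trained by plain back-propagation on squared error.
W1 = rng.normal(scale=0.5, size=(5, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.5
for _ in range(2000):
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2).ravel()
    d_out = (out - y) * out * (1.0 - out)          # dE/dz at the output unit
    d_h = (d_out[:, None] @ W2.T) * h * (1.0 - h)  # back-propagated to hidden
    W2 -= lr * h.T @ d_out[:, None] / len(X); b2 -= lr * d_out.mean()
    W1 -= lr * X.T @ d_h / len(X);            b1 -= lr * d_h.mean(axis=0)

# Fuzzy-style decision: read the (0,1) output as a membership degree.
membership = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).ravel()
coherent = membership > 0.5
mse = np.mean((membership - y) ** 2)   # mean-square error, stability check
accuracy = np.mean(coherent == y.astype(bool))
```

Reading the output as a membership degree rather than a hard label lets borderline signals be flagged for review, which matches the fuzzy-arithmetic motivation in the abstract.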
Model predictive control of a high speed switched reluctance generator system
Marinkov, Sava; De Jager, Bram; Steinbuch, Maarten
2013-01-01
This paper presents a novel voltage control strategy for the high-speed operation of a Switched Reluctance Generator. It uses a linear Model Predictive Control law based on the average system model. The controller computes the DC-link current needed to achieve the tracking of a desired voltage
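As a rough illustration of the control idea above, a linear MPC law on an averaged model that computes the current needed to track a desired DC-link voltage, here is a minimal unconstrained sketch on a scalar averaged model. The model, parameters and cost weights are assumptions for illustration, not the paper's switched-reluctance-generator model; unconstrained linear MPC reduces to a least-squares problem solved at every step.

```python
import numpy as np

# Averaged scalar model of the DC-link voltage (hypothetical parameters):
#   v[k+1] = v[k] + (Ts / C) * (i_gen[k] - i_load)
Ts, C, i_load = 1e-3, 0.01, 5.0
b = Ts / C
N = 10            # prediction horizon
v_ref = 48.0      # desired DC-link voltage

def mpc_current(v, last_u):
    """Unconstrained linear MPC: pick the generator-current sequence that
    minimizes the squared voltage tracking error over the horizon plus a
    small move penalty, then apply only the first move (receding horizon)."""
    L = np.tril(np.ones((N, N)))          # cumulative-sum prediction matrix
    rho = 1e-4                            # input-move penalty weight
    # Least-squares form of: min ||v + b*L@(u - i_load) - v_ref||^2
    #                            + rho * ||u - last_u||^2
    A = np.vstack([b * L, np.sqrt(rho) * np.eye(N)])
    y = np.concatenate([
        np.full(N, v_ref - v) + b * L @ np.full(N, i_load),
        np.full(N, np.sqrt(rho) * last_u),
    ])
    u = np.linalg.lstsq(A, y, rcond=None)[0]
    return u[0]

# Closed-loop simulation starting from a sagged DC link
v, u = 40.0, i_load
hist = []
for _ in range(50):
    u = mpc_current(v, u)
    v = v + b * (u - i_load)
    hist.append(v)
```

The receding-horizon structure, recompute the whole sequence but apply only its first element, is what distinguishes MPC from a one-shot open-loop plan.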
Dieguez-Santana, Karel; Pham-The, Hai; Villegas-Aguilar, Pedro J; Le-Thi-Thu, Huong; Castillo-Garit, Juan A; Casañola-Martin, Gerardo M
2016-12-01
In this article, the modeling of growth inhibitory activity against Tetrahymena pyriformis is described. The 0-2D Dragon descriptors, based on structural aspects, are used to gain some knowledge of the factors influencing aquatic toxicity. The work employs an enlarged dataset of phenol derivatives, described here for the first time and composed of 358 chemicals, which surpasses the previous datasets of about one hundred compounds. Moreover, the model evaluation parameters in the training, prediction and validation sets give adequate results, comparable with those of previous works. The most influential descriptors included in the model are X3A, MWC02, MWC10 and piPC03, with positive contributions to the dependent variable, and MWC09, piPC02 and TPC, with negative contributions. In a next step, a medium-size database of nearly 8000 phenolic compounds extracted from ChEMBL was evaluated with the quantitative structure-toxicity relationship (QSTR) model developed, providing some clues (SARs) for the identification of ecotoxicological compounds. The outcome of this report is useful for screening chemical databases to find the compounds responsible for aquatic contamination in the biomarker used in the current work.
International Nuclear Information System (INIS)
Anon.
2008-01-01
The first point concerns the judgement of the Federal Administrative Court on the standing of third parties regarding attacks on interim storage facilities (2008). In its judgement handed down on 10 April 2008, the German Federal Administrative Court overruled a decision of a Higher Regional Administrative Court and declared that residents in the vicinity of an interim storage facility may challenge the licence for that facility on the grounds that the necessary protection has not been provided against disruptive action or other interference by third parties. The second point concerns the judgement of the European Court of Justice on the failure of a Member State to fulfil obligations under Directive 96/29/EURATOM (2007): United Kingdom legislation requires the authorities to intervene only if a situation of radioactive contamination results from a present or past activity for the exercise of which a licence was granted; it does not oblige them to take measures in circumstances in which radioactive contamination results from a past practice which was not the subject of such a licence. The United Kingdom Government admitted the validity of the Commission's claims, adding that further legislation to transpose that article (Article 53) into national law is in the process of being drawn up. The third point relates to the judgement of the US Court of Appeals on the licensing of the L.E.S. uranium enrichment facility (2007): on appeal to the Federal Court of Appeals for the District of Columbia, the joint petitioners objected to the Nuclear Regulatory Commission (NRC) issuing a licence to the Louisiana Energy Services, L.P. (L.E.S.) uranium enrichment facility in New Mexico on several grounds: the NRC violated the Atomic Energy Act by supplementing the environmental impact statement after the hearing closed; the NRC violated the National Environmental Policy Act by insufficiently analysing the environmental impact of depleted uranium waste from the L.E.S. facility; the NRC violated the Atomic
Directory of Open Access Journals (Sweden)
Larijani Kambiz
2011-01-01
The chemical composition of the volatile fraction obtained by head-space solid-phase microextraction (HS-SPME) and single-drop microextraction (SDME), and of the essential oil obtained by cold-press, from the peels of C. sinensis cv. Valencia was analyzed employing gas chromatography-flame ionization detection (GC-FID) and gas chromatography-mass spectrometry (GC-MS). The main components were limonene (61.34%, 68.27%, 90.50%), myrcene (17.55%, 12.35%, 2.50%), sabinene (6.50%, 7.62%, 0.5%) and α-pinene (0%, 6.65%, 1.4%), respectively, as obtained by HS-SPME, SDME and cold-press. A quantitative structure-retention relationship (QSRR) study for the prediction of the retention indices (RI) of the compounds was then developed by application of structural descriptors and the multiple linear regression (MLR) method. Principal component analysis was used to select the training set. A simple model with low standard errors and high correlation coefficients was obtained. The results illustrate that linear techniques such as MLR, combined with a successful variable selection procedure, are capable of generating an efficient QSRR model for the prediction of the retention indices of different compounds. This model, with high statistical significance (R²train = 0.983, R²test = 0.970, Q²LOO = 0.962, Q²LGO = 0.936, REP(%) = 3.00), can be used for the prediction and description of the retention indices of the volatile compounds.
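The MLR step of a QSRR study of this kind can be sketched generically: fit retention indices to descriptor values by ordinary least squares, then report R² on training and test sets. The descriptor matrix, the hidden linear rule and the random train/test split below are synthetic stand-ins (the paper selects its training set by principal component analysis).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic descriptor matrix for 40 "compounds" (4 descriptors each);
# retention indices follow an assumed hidden linear rule plus noise.
X = rng.normal(size=(40, 4))
true_w = np.array([120.0, -45.0, 60.0, 15.0])
ri = 1000.0 + X @ true_w + rng.normal(scale=10.0, size=40)

# Random train/test split (the paper picks its training set by PCA).
idx = rng.permutation(40)
train, test = idx[:30], idx[30:]

def mlr_fit(X, y):
    """Ordinary least squares with an intercept column."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def mlr_predict(X, coef):
    return np.column_stack([np.ones(len(X)), X]) @ coef

def r2(y, yhat):
    """Coefficient of determination (fraction of variance explained)."""
    return 1.0 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

coef = mlr_fit(X[train], ri[train])
r2_train = r2(ri[train], mlr_predict(X[train], coef))
r2_test = r2(ri[test], mlr_predict(X[test], coef))
```

Reporting R² on a held-out test set, as the abstract does (R²train and R²test), guards against the overfitting that a training-set R² alone would hide.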
Hamiltonian structures of some non-linear evolution equations
International Nuclear Information System (INIS)
Tu, G.Z.
1983-06-01
The Hamiltonian structures of the O(2,1) non-linear sigma model and the generalized AKNS equations are discussed. By reducing the O(2,1) non-linear sigma model to its Hamiltonian form, some new conservation laws are derived. A new hierarchy of non-linear evolution equations is proposed and shown to consist of generalized Hamiltonian equations with an infinite number of conservation laws. (author)
Kuznetsov, N.; Maz'ya, V.; Vainberg, B.
2002-08-01
This book gives a self-contained and up-to-date account of mathematical results in the linear theory of water waves. The study of waves has many applications, including the prediction of the behavior of floating bodies (ships, submarines, tension-leg platforms etc.), the calculation of wave-making resistance in naval architecture, and the description of wave patterns over bottom topography in geophysical hydrodynamics. The first section deals with time-harmonic waves. Three linear boundary value problems serve as the approximate mathematical models for these types of water waves. The next section uses a plethora of mathematical techniques in the investigation of these three problems. The techniques used in the book include integral equations based on Green's functions, various inequalities between the kinetic and potential energy, and integral identities which are indispensable for proving the uniqueness theorems. The so-called inverse procedure is applied to constructing examples of non-uniqueness, usually referred to as 'trapped modes.'
The Radiometric Bode's law and Extrasolar Planets
National Research Council Canada - National Science Library
Lazio, T. J; Farrell, W. M; Dietrick, Jill; Greenlees, Elizabeth; Hogan, Emily; Jones, Christopher; Hennig, L. A
2004-01-01
We predict the radio flux densities of the extrasolar planets in the current census, making use of an empirical relation, the radiometric Bode's law, determined from the five "magnetic" planets in the solar system...
Srinivas, Nuggehally R; Syed, Muzeeb
2016-03-01
Linezolid, an oxazolidinone, was the first in its class to be approved for the treatment of bacterial infections arising from both susceptible and resistant strains of Gram-positive bacteria. Since overt exposure to linezolid may precipitate serious toxicity, therapeutic drug monitoring (TDM) may be required in certain situations, especially in patients who are prescribed other co-medications. Using appropriate oral pharmacokinetic data (single dose and steady state) for linezolid, both the maximum plasma drug concentration (Cmax) versus area under the plasma concentration-time curve (AUC) and the minimum plasma drug concentration (Cmin) versus AUC relationships were established by linear regression models. Predictions of AUC values were performed using published mean/median Cmax or Cmin data and the appropriate regression lines. The quotient of observed and predicted values yielded the fold difference. The mean absolute error (MAE), root mean square error (RMSE), correlation coefficient (r), and the goodness of the AUC fold prediction were used to evaluate the two models. The Cmax versus AUC and trough plasma concentration (Ctrough) versus AUC models displayed excellent correlation, with r values of >0.9760. However, linezolid AUC values were predicted within the narrower boundary of 0.76- to 1.5-fold for a higher percentage of cases by the Ctrough model (78.3%) than by the Cmax model (48.2%). The Ctrough model showed superior correlation of predicted versus observed values and RMSE (r = 0.9031; 28.54%, respectively) compared with the Cmax model (r = 0.5824; 61.34%, respectively). A single-time-point strategy using the Ctrough level is thus feasible as a prospective tool to estimate the AUC of linezolid in this patient population.
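The single-point strategy evaluated above, regress AUC on a single concentration, predict, then judge the predictions by fold difference, RMSE and r, can be sketched generically. The (Ctrough, AUC) pairs, the linear relation and the noise level below are simulated stand-ins for illustration, not the published linezolid data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated (Ctrough, AUC) pairs standing in for published linezolid data;
# the assumed linear relation and noise level are illustrative only.
n = 25
ctrough = rng.uniform(2.0, 12.0, n)                          # mg/L
auc = 18.0 * ctrough + 15.0 + rng.normal(scale=8.0, size=n)  # mg*h/L

# Single-point model: regress AUC on Ctrough, then predict back.
slope, intercept = np.polyfit(ctrough, auc, 1)
pred = slope * ctrough + intercept

# Evaluation metrics used in the abstract
fold = auc / pred                                  # observed / predicted
within = np.mean((fold >= 0.76) & (fold <= 1.5))   # fraction in 0.76-1.5x
mae = np.mean(np.abs(auc - pred))
rmse = np.sqrt(np.mean((auc - pred) ** 2))
r = np.corrcoef(auc, pred)[0, 1]
```

The fold-difference window (0.76- to 1.5-fold) is the abstract's acceptance band; in a real TDM setting the regression would be built on the published pharmacokinetic data rather than a simulated set.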
Roberts, James R; Dollard, Denis
2010-12-01
The human body and the central nervous system can develop tremendous tolerance to ethanol. Mental and physical dysfunctions from ethanol, in an alcohol-tolerant individual, do not consistently correlate with ethanol levels traditionally used to define intoxication, or even lethality, in a nontolerant subject. Attempting to relate observed signs of alcohol intoxication or impairment, or to evaluate sobriety, by quantifying blood alcohol levels can be misleading, if not impossible. We report a case demonstrating the disconnect between alcohol levels and generally assigned parameters of intoxication and impairment. In this case, an alcohol-tolerant man, with a serum ethanol level of 515 mg/dl, appeared neurologically intact and cognitively normal. This individual was without objective signs of impairment or intoxication by repeated evaluations by experienced emergency physicians. In alcohol-tolerant individuals, blood alcohol levels cannot always be predicted by and do not necessarily correlate with outward appearance, overt signs of intoxication, or physical examination. This phenomenon must be acknowledged when analyzing medical decision making in the emergency department or when evaluating the ability of bartenders and party hosts to identify intoxication in dram shop cases.
Energy Technology Data Exchange (ETDEWEB)
Lissenden, Cliff [Pennsylvania State Univ., State College, PA (United States); Hassan, Tasnin [North Carolina State Univ., Raleigh, NC (United States); Rangari, Vijaya [Tuskegee Univ., Tuskegee, AL (United States)
2014-10-30
application of the harmonic generation method to tubular mechanical test specimens and pipes for nondestructive evaluation. Tubular specimens and pipes act as waveguides, thus we applied the acoustic harmonic generation method to guided waves in both plates and shells. Magnetostrictive transducers were used to generate and receive guided wave modes in the shell sample and the received signals were processed to show the sensitivity of higher harmonic generation to microstructure evolution. Modeling was initiated to correlate higher harmonic generation with the microstructure that will lead to development of a life prediction model that is informed by the nonlinear acoustics measurements.
Generalized non-linear Schroedinger hierarchy
International Nuclear Information System (INIS)
Aratyn, H.; Gomes, J.F.; Zimerman, A.H.
1994-01-01
The importance of studying completely integrable models has become evident in recent years due to the fact that those models possess an extremely rich algebraic structure, providing the natural setting for the description of solitons. Those models can be described through non-linear differential equations, pseudo-differential operators (Lax formulation), or a matrix formulation. Integrability implies the existence of a conservation law associated with each degree of freedom. Each conserved charge Q_i can be associated to a Hamiltonian, defining a time evolution related to a time t_i through the Hamilton equation ∂A/∂t_i = [A, Q_i]. In particular, for a two-dimensional field theory, infinitely many degrees of freedom exist, and consequently infinitely many conservation laws describing the time evolution in a space of infinitely many times. The Hamilton equation defines a hierarchy of models, each of which presents an infinite set of conservation laws. This paper studies the generalized non-linear Schroedinger hierarchy
Evaluation of Linear Regression Simultaneous Myoelectric Control Using Intramuscular EMG.
Smith, Lauren H; Kuiken, Todd A; Hargrove, Levi J
2016-04-01
The objective of this study was to evaluate the ability of linear regression models to decode patterns of muscle coactivation from intramuscular electromyogram (EMG) and provide simultaneous myoelectric control of a virtual 3-DOF wrist/hand system. Performance was compared to the simultaneous control of conventional myoelectric prosthesis methods using intramuscular EMG (parallel dual-site control)-an approach that requires users to independently modulate individual muscles in the residual limb, which can be challenging for amputees. Linear regression control was evaluated in eight able-bodied subjects during a virtual Fitts' law task and was compared to performance of eight subjects using parallel dual-site control. An offline analysis also evaluated how different types of training data affected prediction accuracy of linear regression control. The two control systems demonstrated similar overall performance; however, the linear regression method demonstrated improved performance for targets requiring use of all three DOFs, whereas parallel dual-site control demonstrated improved performance for targets that required use of only one DOF. Subjects using linear regression control could more easily activate multiple DOFs simultaneously, but often experienced unintended movements when trying to isolate individual DOFs. Offline analyses also suggested that the method used to train linear regression systems may influence controllability. Linear regression myoelectric control using intramuscular EMG provided an alternative to parallel dual-site control for 3-DOF simultaneous control at the wrist and hand. The two methods demonstrated different strengths in controllability, highlighting the tradeoff between providing simultaneous control and the ability to isolate individual DOFs when desired.
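A linear-regression decoder of the kind evaluated above maps one EMG feature frame to all DOF commands simultaneously through a single weight matrix fit by least squares. The sketch below uses synthetic mean-absolute-value features and an assumed linear mixing; it illustrates the structure of such a decoder, not the study's actual training pipeline.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-ins: mean-absolute-value features from 6 intramuscular
# EMG channels, and 3-DOF velocity targets from an assumed linear mixing.
n, ch, dof = 500, 6, 3
emg = np.abs(rng.normal(size=(n, ch)))
W_true = rng.normal(size=(ch, dof))
vel = emg @ W_true + 0.05 * rng.normal(size=(n, dof))

# Linear-regression decoder: one least-squares fit maps a feature frame
# to all three DOF commands simultaneously.
A = np.column_stack([np.ones(n), emg])
W, *_ = np.linalg.lstsq(A, vel, rcond=None)

def decode(features):
    """Predict all 3 DOF commands at once from a single feature frame."""
    return np.concatenate([[1.0], features]) @ W

# Variance explained per DOF on the training data
pred = A @ W
ss_res = np.sum((vel - pred) ** 2, axis=0)
ss_tot = np.sum((vel - vel.mean(axis=0)) ** 2, axis=0)
r2 = 1.0 - ss_res / ss_tot
```

Because every output DOF draws on every channel, such a decoder naturally produces simultaneous multi-DOF activation, which matches the study's finding that isolating a single DOF is the harder case for this scheme.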
Hesselink, M.W.
2015-01-01
This article discusses the normative relationship between contract law and democracy. In particular, it argues that in order to be legitimate, contract law needs to have a democratic basis. Private law is no different in this respect from public law. Thus, the first claim made in this article will