WorldWideScience

Sample records for linear increments model

  1. Power calculation of linear and angular incremental encoders

    Science.gov (United States)

    Prokofev, Aleksandr V.; Timofeev, Aleksandr N.; Mednikov, Sergey V.; Sycheva, Elena A.

    2016-04-01

Automation technology is constantly expanding its role in improving the efficiency of manufacturing and testing processes in all branches of industry. More than ever before, the mechanical movements of linear slides, rotary tables, robot arms, actuators, etc. are numerically controlled. Linear and angular incremental photoelectric encoders measure mechanical motion and transmit the measured values back to the control unit. The capabilities of these systems are undergoing continual development in terms of their resolution, accuracy and reliability, their measuring ranges, and maximum speeds. This article discusses a method for the power calculation of linear and angular incremental photoelectric encoders, used to find the optimum parameters for their components, such as light emitters, photodetectors, linear and angular scales, and optical components. It analyzes methods and devices that permit high resolutions on the order of 0.001 mm or 0.001°, as well as large measuring lengths of over 100 mm. In linear and angular incremental photoelectric encoders, the optical beam is usually formed by a condenser lens; as the beam passes through the measuring unit, its intensity changes depending on the movement of the scanning head or measuring raster. The transmitted light beam is then converted into an electrical signal by the photodetector block for processing in the electronics block. The starting point of the power calculation is therefore the required value of the optical signal at the input of the photodetector block that can be reliably recorded and processed in the electronic unit of linear and angular incremental optoelectronic encoders.

  2. Stable Myoelectric Control of a Hand Prosthesis using Non-Linear Incremental Learning

    Directory of Open Access Journals (Sweden)

    Arjan eGijsberts

    2014-02-01

Full Text Available Stable myoelectric control of hand prostheses remains an open problem. The only successful human-machine interface is surface electromyography, typically allowing control of a few degrees of freedom. Machine learning techniques may have the potential to remove these limitations, but their performance is thus far inadequate: myoelectric signals change over time under the influence of various factors, deteriorating control performance. It is therefore necessary, in the standard approach, to regularly retrain a new model from scratch. We hereby propose a non-linear incremental learning method in which occasional updates with a modest amount of novel training data allow continual adaptation to the changes in the signals. In particular, Incremental Ridge Regression and an approximation of the Gaussian Kernel known as Random Fourier Features are combined to predict finger forces from myoelectric signals, both finger-by-finger and grouped in grasping patterns. We show that the approach is effective and practically applicable to this problem by first analyzing its performance while predicting single-finger forces. Surface electromyography and finger forces were collected from 10 intact subjects during four sessions spread over two different days; the results of the analysis show that small incremental updates are indeed effective to maintain a stable level of performance. Subsequently, we employed the same method on-line to teleoperate a humanoid robotic arm equipped with a state-of-the-art commercial prosthetic hand. The subject could reliably grasp, carry and release everyday-life objects, enforcing stable grasping irrespective of the signal changes, hand/arm movements and wrist pronation and supination.
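The learning machinery described above, ridge regression over Random Fourier Features with occasional incremental updates, can be sketched compactly. This is a minimal illustration under stated assumptions, not the authors' implementation; the feature dimension, kernel width and regularization below are arbitrary choices:

```python
import numpy as np

class IncrementalRFFRidge:
    """Ridge regression over Random Fourier Features (RFF), a randomized
    approximation of the Gaussian kernel, with cheap incremental updates."""

    def __init__(self, dim, n_features=200, gamma=0.05, lam=1e-2, seed=0):
        rng = np.random.default_rng(seed)
        # RFF for the kernel k(x, x') = exp(-gamma * ||x - x'||^2)
        self.W = rng.normal(scale=np.sqrt(2 * gamma), size=(dim, n_features))
        self.b = rng.uniform(0, 2 * np.pi, size=n_features)
        self.D = n_features
        self.A = lam * np.eye(n_features)   # running Z'Z + lam*I
        self.c = np.zeros(n_features)       # running Z'y

    def _features(self, X):
        return np.sqrt(2.0 / self.D) * np.cos(X @ self.W + self.b)

    def update(self, X, y):
        """Occasional update with a modest batch of novel training data."""
        Z = self._features(X)
        self.A += Z.T @ Z
        self.c += Z.T @ y

    def predict(self, X):
        w = np.linalg.solve(self.A, self.c)
        return self._features(X) @ w
```

Because only the D-by-D sufficient statistics are kept, an update costs the same regardless of how much data has already been absorbed, which is what makes regular recalibration affordable compared with retraining from scratch.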

  3. Validation of the periodicity of growth increment deposition in ...

    African Journals Online (AJOL)

    Validation of the periodicity of growth increment deposition in otoliths from the larval and early juvenile stages of two cyprinids from the Orange–Vaal river ... Linear regression models were fitted to the known age post-fertilisation and the age estimated using increment counts to test the correspondence between the two for ...

  4. Cyclic and seasonal features in the behaviour of linear growth increment of Rayleigh-Taylor instability in equatorial F-region

    International Nuclear Information System (INIS)

    Farkullin, M.N.; Nikitin, M.A.; Kashchenko, N.M.

    1989-01-01

Calculations of the linear increment of the Rayleigh-Taylor instability for various geophysical conditions are presented. It is shown that the space-time characteristics of the increment depend strongly on solar activity conditions and season. The calculation results are in good agreement with statistical regularities of F-scattering observations in the equatorial F-region, which points to the Rayleigh-Taylor nature of the phenomena

  5. Successive 1-Month Weight Increments in Infancy Can Be Used to Screen for Faltering Linear Growth.

    Science.gov (United States)

    Onyango, Adelheid W; Borghi, Elaine; de Onis, Mercedes; Frongillo, Edward A; Victora, Cesar G; Dewey, Kathryn G; Lartey, Anna; Bhandari, Nita; Baerug, Anne; Garza, Cutberto

    2015-12-01

Linear growth faltering in the first 2 y contributes greatly to a high stunting burden, and prevention is hampered by the limited capacity in primary health care for timely screening and intervention. This study aimed to determine an approach to predicting long-term stunting from consecutive 1-mo weight increments in the first year of life. By using the reference sample of the WHO velocity standards, the analysis explored patterns of consecutive monthly weight increments among healthy infants. Four candidate screening thresholds of successive increments that could predict stunting were considered, and one was selected for further testing. The selected threshold was applied in a cohort of Bangladeshi infants to assess its predictive value for stunting at ages 12 and 24 mo. Between birth and age 12 mo, 72.6% of infants in the WHO sample tracked within 1 SD of their weight and length. The selected screening criterion ("event") was 2 consecutive monthly increments below the 15th percentile. Bangladeshi infants were born relatively small and, on average, tracked downward from approximately age 6 mo […]. If the strategy is effective, the estimated preventable proportion in the group who experienced the event would be 34% at 12 mo and 24% at 24 mo. This analysis offers an approach for frontline workers to identify children at risk of stunting, allowing for timely initiation of preventive measures. It opens avenues for further investigation into evidence-informed application of the WHO growth velocity standards. © 2015 American Society for Nutrition.
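The screening rule above, flag an infant after 2 consecutive monthly weight increments below the 15th percentile, is simple enough to sketch directly. The percentile cutoffs in the example are invented placeholders; the actual month-specific values come from the WHO growth velocity standards:

```python
def flag_faltering(monthly_gains_g, p15_cutoffs_g, run_length=2):
    """Return the month (1-based) at which the screening "event" occurs:
    `run_length` consecutive monthly weight increments below the
    month-specific 15th percentile. Returns None if it never occurs."""
    run = 0
    for month, (gain, p15) in enumerate(zip(monthly_gains_g, p15_cutoffs_g), 1):
        run = run + 1 if gain < p15 else 0
        if run >= run_length:
            return month
    return None

# Hypothetical infant: weight gains in grams vs. made-up 15th-percentile cutoffs
print(flag_faltering([500, 300, 280, 600], [400, 350, 330, 320]))  # -> 3
```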

  6. LINEAR MIXED MODEL TO DESCRIBE THE BASAL AREA INCREMENT FOR INDIVIDUAL CEDRO (Cedrela odorata L.) TREES IN OCCIDENTAL AMAZON, BRAZIL

    Directory of Open Access Journals (Sweden)

    Thiago Augusto da Cunha

    2013-01-01

Full Text Available Reliable tree growth data are important to establish rational forest management. Tree characteristics such as size, crown architecture and competition indices have been used to describe increment efficiently when associated with it. However, the precise role of these effects in growth modeling for tropical trees needs further study. Here the basal area increment (BAI) of individual Cedrela odorata trees, sampled in the Amazon forest, is reconstructed to develop a growth model using potential predictors: (1) classical tree size; (2) morphometric data; (3) competition; and (4) social position, including liana load. Despite the large variation in tree size and growth, these predictor variables described the BAI well at the individual-tree level. The fitted mixed model achieved a high efficiency (R² = 92.7%) and predicted 3-year BAI over bark for Cedrela odorata trees ranging from 10 to 110 cm in diameter at breast height. Tree height, stem slenderness and crown form had a strong influence in the BAI growth model, explaining most of the growth variance (partial R² = 87.2%). Competition variables had a negative influence on BAI and explained about 7% of the total variation. The introduction of a random parameter into the regression model (mixed-model procedure) gave a better fit to the observed data and more realistic predictions than the fixed-effects model.

  7. Incremental Adaptive Fuzzy Control for Sensorless Stroke Control of A Halbach-type Linear Oscillatory Motor

    Science.gov (United States)

    Lei, Meizhen; Wang, Liqiang

    2018-01-01

The Halbach-type linear oscillatory motor (HT-LOM) is multi-variable, highly coupled, nonlinear and uncertain, and it is difficult to obtain satisfactory results with conventional PID control. An incremental adaptive fuzzy controller (IAFC) for stroke tracking is presented, which combines the merits of PID control, the fuzzy inference mechanism and an adaptive algorithm. An integral operation is added to the conventional fuzzy control algorithm. The fuzzy scale factor can be tuned online according to the load force and the stroke command. The simulation results indicate that the proposed control scheme achieves satisfactory stroke tracking performance and is robust with respect to parameter variations and external disturbance.

  8. Two models of minimalist, incremental syntactic analysis.

    Science.gov (United States)

    Stabler, Edward P

    2013-07-01

    Minimalist grammars (MGs) and multiple context-free grammars (MCFGs) are weakly equivalent in the sense that they define the same languages, a large mildly context-sensitive class that properly includes context-free languages. But in addition, for each MG, there is an MCFG which is strongly equivalent in the sense that it defines the same language with isomorphic derivations. However, the structure-building rules of MGs but not MCFGs are defined in a way that generalizes across categories. Consequently, MGs can be exponentially more succinct than their MCFG equivalents, and this difference shows in parsing models too. An incremental, top-down beam parser for MGs is defined here, sound and complete for all MGs, and hence also capable of parsing all MCFG languages. But since the parser represents its grammar transparently, the relative succinctness of MGs is again evident. Although the determinants of MG structure are narrowly and discretely defined, probabilistic influences from a much broader domain can influence even the earliest analytic steps, allowing frequency and context effects to come early and from almost anywhere, as expected in incremental models. Copyright © 2013 Cognitive Science Society, Inc.

  9. Incremental principal component pursuit for video background modeling

    Science.gov (United States)

    Rodriquez-Valderrama, Paul A.; Wohlberg, Brendt

    2017-03-14

An incremental Principal Component Pursuit (PCP) algorithm for video background modeling that processes one frame at a time while adapting to changes in the background, with a computational complexity that allows real-time processing, a low memory footprint, and robustness to translational and rotational jitter.

  10. Quasi-static incremental behavior of granular materials: Elastic-plastic coupling and micro-scale dissipation

    Science.gov (United States)

    Kuhn, Matthew R.; Daouadji, Ali

    2018-05-01

The paper addresses a common assumption of elastoplastic modeling: that the recoverable, elastic strain increment is unaffected by alterations of the elastic moduli that accompany loading. This assumption is found to be false for a granular material, and discrete element (DEM) simulations demonstrate that granular materials are coupled materials at both micro- and macro-scales. Elasto-plastic coupling at the macro-scale is placed in the context of the thermomechanical framework of Tomasz Hueckel and Hans Ziegler, in which the elastic moduli are altered by irreversible processes during loading. This complex behavior is explored for multi-directional loading probes that follow an initial monotonic loading. An advanced DEM model is used in the study, with non-convex non-spherical particles and two different contact models: a conventional linear-frictional model and an exact implementation of the Hertz-like Cattaneo-Mindlin model. Orthotropic true-triaxial probes were used in the study (i.e., no direct shear strain), with tiny strain increments of 2 × 10⁻⁶. At the micro-scale, contact movements were monitored during small increments of loading and load-reversal, and the results show that these movements are not reversed by a reversal of strain direction; some contacts that were sliding during a loading increment continue to slide during reversal. The probes show that the coupled part of a strain increment, the difference between the recoverable (elastic) increment and its reversible part, must be considered when partitioning strain increments into elastic and plastic parts. Small increments of irreversible (and plastic) strain, contact slipping and frictional dissipation occur for all directions of loading, and an elastic domain, if it exists at all, is smaller than the strain increment used in the simulations.

  11. A comparative study of velocity increment generation between the rigid body and flexible models of MMET

    Energy Technology Data Exchange (ETDEWEB)

    Ismail, Norilmi Amilia, E-mail: aenorilmi@usm.my [School of Aerospace Engineering, Engineering Campus, Universiti Sains Malaysia, 14300 Nibong Tebal, Pulau Pinang (Malaysia)

    2016-02-01

    The motorized momentum exchange tether (MMET) is capable of generating useful velocity increments through spin–orbit coupling. This study presents a comparative study of the velocity increments between the rigid body and flexible models of MMET. The equations of motions of both models in the time domain are transformed into a function of true anomaly. The equations of motion are integrated, and the responses in terms of the velocity increment of the rigid body and flexible models are compared and analysed. Results show that the initial conditions, eccentricity, and flexibility of the tether have significant effects on the velocity increments of the tether.

  12. Foundations of linear and generalized linear models

    CERN Document Server

    Agresti, Alan

    2015-01-01

A valuable overview of the most important ideas and results in statistical analysis Written by a highly experienced author, Foundations of Linear and Generalized Linear Models is a clear and comprehensive guide to the key concepts and results of linear statistical models. The book presents a broad, in-depth overview of the most commonly used statistical models by discussing the theory underlying the models, R software applications, and examples with crafted models to elucidate key ideas and promote practical model building. The book begins by illustrating the fundamentals of linear models, …

  13. The balanced scorecard: an incremental approach model to health care management.

    Science.gov (United States)

    Pineno, Charles J

    2002-01-01

    The balanced scorecard represents a technique used in strategic management to translate an organization's mission and strategy into a comprehensive set of performance measures that provide the framework for implementation of strategic management. This article develops an incremental approach for decision making by formulating a specific balanced scorecard model with an index of nonfinancial as well as financial measures. The incremental approach to costs, including profit contribution analysis and probabilities, allows decisionmakers to assess, for example, how their desire to meet different health care needs will cause changes in service design. This incremental approach to the balanced scorecard may prove to be useful in evaluating the existence of causality relationships between different objective and subjective measures to be included within the balanced scorecard.

  14. Incremental passivity and output regulation for switched nonlinear systems

    Science.gov (United States)

    Pang, Hongbo; Zhao, Jun

    2017-10-01

    This paper studies incremental passivity and global output regulation for switched nonlinear systems, whose subsystems are not required to be incrementally passive. A concept of incremental passivity for switched systems is put forward. First, a switched system is rendered incrementally passive by the design of a state-dependent switching law. Second, the feedback incremental passification is achieved by the design of a state-dependent switching law and a set of state feedback controllers. Finally, we show that once the incremental passivity for switched nonlinear systems is assured, the output regulation problem is solved by the design of global nonlinear regulator controllers comprising two components: the steady-state control and the linear output feedback stabilising controllers, even though the problem for none of subsystems is solvable. Two examples are presented to illustrate the effectiveness of the proposed approach.

  15. Conservation of wildlife populations: factoring in incremental disturbance.

    Science.gov (United States)

    Stewart, Abbie; Komers, Petr E

    2017-06-01

Progressive anthropogenic disturbance can alter ecosystem organization, potentially causing shifts from one stable state to another. This potential for ecosystem shifts must be considered when establishing targets and objectives for conservation. We ask whether a predator-prey system's response to incremental anthropogenic disturbance might shift along a disturbance gradient and, if it does, whether any disturbance thresholds are evident for this system. Development of linear corridors in forested areas increases wolf predation effectiveness, while high density of development provides a safe haven for their prey. If wolves limit moose population growth, then wolves and moose should respond inversely to land cover disturbance. Using general linear model analysis, we test how the rate of change in moose (Alces alces) density and wolf (Canis lupus) harvest density are influenced by the rate of change in land cover and the proportion of land cover disturbed within a 300,000 km² area in the boreal forest of Alberta, Canada. Using logistic regression, we test how the direction of change in moose density is influenced by measures of land cover change. In response to incremental land cover disturbance, moose declines occurred where 43% of land cover was disturbed and wolf density declined. Wolves and moose appeared to respond inversely to incremental disturbance, with the balance between moose decline and wolf increase shifting at about 43% of land cover disturbed. Conservation decisions require quantification of disturbance rates and their relationships to predator-prey systems because ecosystem responses to anthropogenic disturbance shift across disturbance gradients.

  16. Observers for Systems with Nonlinearities Satisfying an Incremental Quadratic Inequality

    Science.gov (United States)

    Acikmese, Ahmet Behcet; Corless, Martin

    2004-01-01

We consider the problem of state estimation for nonlinear time-varying systems whose nonlinearities satisfy an incremental quadratic inequality. These observer results unify earlier results in the literature and extend them to some additional classes of nonlinearities. Observers are presented which guarantee that the state estimation error converges exponentially to zero. Observer design involves solving linear matrix inequalities for the observer gain matrices. The results are illustrated by application to a simple model of an underwater vehicle.

  17. Using Analysis Increments (AI) to Estimate and Correct Systematic Errors in the Global Forecast System (GFS) Online

    Science.gov (United States)

    Bhargava, K.; Kalnay, E.; Carton, J.; Yang, F.

    2017-12-01

Systematic forecast errors, arising from model deficiencies, form a significant portion of the total forecast error in weather prediction models like the Global Forecast System (GFS). While much effort has been expended to improve models, substantial model error remains. The aim here is to (i) estimate the model deficiencies in the GFS that lead to systematic forecast errors, and (ii) implement an online correction scheme (i.e., within the model) following the methodology of Danforth et al. [2007] and Danforth and Kalnay [2008, GRL]. Analysis increments represent the corrections that new observations make on, in this case, the 6-hr forecast in the analysis cycle. Model bias corrections are estimated from the time average of the analysis increments divided by 6 hr, assuming that initial model errors grow linearly and, at first, ignoring the impact of observation bias. During 2012-2016, seasonal means of the 6-hr model bias are generally robust despite changes in model resolution and data assimilation systems, and their broad continental scales explain their insensitivity to model resolution. The daily bias dominates the sub-monthly analysis increments and consists primarily of diurnal and semidiurnal components, also requiring a low-dimensional correction. Analysis increments in 2015 and 2016 are reduced over oceans, which is attributed to improvements in the specification of the SSTs. These results encourage the application of online correction, as suggested by Danforth and Kalnay, for mean, seasonal, diurnal and semidiurnal model biases in the GFS to reduce both systematic and random errors. As the short-term error growth is still linear, the estimated model bias corrections can be added as a forcing term in the model tendency equation to correct online. Preliminary experiments with the GFS, correcting temperature and specific humidity online, show a reduction in model bias in the 6-hr forecast. This approach can then be used to guide and optimize the design of sub…
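The core of the online correction, the time-mean analysis increment divided by the 6-hr window added as a forcing term to the model tendency, can be written in a few lines. This is a schematic sketch under the stated linear-error-growth assumption, not the GFS implementation:

```python
import numpy as np

def estimate_bias_correction(analysis_increments, window_hours=6.0):
    """Time average of the analysis increments divided by the assimilation
    window: an estimate of the systematic model tendency error."""
    return np.mean(analysis_increments, axis=0) / window_hours

def step_with_correction(state, tendency, correction, dt):
    """One forward-Euler step with the bias correction added as a forcing
    term in the model tendency equation (online correction)."""
    return state + dt * (tendency(state) + correction)
```

In a real system the plain time mean would be replaced by separate mean, seasonal, diurnal and semidiurnal components, as the abstract describes.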

  18. An Incremental Weighted Least Squares Approach to Surface Light Fields

    Science.gov (United States)

    Coombe, Greg; Lastra, Anselmo

    An Image-Based Rendering (IBR) approach to appearance modelling enables the capture of a wide variety of real physical surfaces with complex reflectance behaviour. The challenges with this approach are handling the large amount of data, rendering the data efficiently, and previewing the model as it is being constructed. In this paper, we introduce the Incremental Weighted Least Squares approach to the representation and rendering of spatially and directionally varying illumination. Each surface patch consists of a set of Weighted Least Squares (WLS) node centers, which are low-degree polynomial representations of the anisotropic exitant radiance. During rendering, the representations are combined in a non-linear fashion to generate a full reconstruction of the exitant radiance. The rendering algorithm is fast, efficient, and implemented entirely on the GPU. The construction algorithm is incremental, which means that images are processed as they arrive instead of in the traditional batch fashion. This human-in-the-loop process enables the user to preview the model as it is being constructed and to adapt to over-sampling and under-sampling of the surface appearance.

  19. Space-time quantitative source apportionment of soil heavy metal concentration increments.

    Science.gov (United States)

    Yang, Yong; Christakos, George; Guo, Mingwu; Xiao, Lu; Huang, Wei

    2017-04-01

Assessing the space-time trends and detecting the sources of heavy metal accumulation in soils have important consequences in the prevention and treatment of soil heavy metal pollution. In this study, we collected soil samples in the eastern part of the Qingshan district, Wuhan city, Hubei Province, China, during the period 2010-2014. The Cd, Cu, Pb and Zn concentrations in soils exhibited a significant accumulation during 2010-2014. The spatiotemporal Kriging technique, based on a quantitative characterization of soil heavy metal concentration variations in terms of non-separable variogram models, was employed to estimate the spatiotemporal soil heavy metal distribution in the study region. Our findings showed that the Cd, Cu, and Zn concentrations have an obvious incremental tendency from the southwestern to the central part of the study region, whereas the Pb concentrations exhibited an obvious tendency from the northern part to the central part of the region. Then, spatial overlay analysis was used to obtain absolute and relative concentration increments of adjacent 1- or 5-year periods during 2010-2014. The spatial distribution of soil heavy metal concentration increments showed that the larger increments occurred in the center of the study region. Lastly, principal component analysis combined with multiple linear regression (PCA-MLR) was employed to quantify the source apportionment of the soil heavy metal concentration increments in the region. Our results led to the conclusion that the sources of soil heavy metal concentration increments should be ascribed to industry, agriculture and traffic. In particular, 82.5% of the soil heavy metal concentration increment during 2010-2014 was ascribed to industrial/agricultural activity sources. Spatiotemporal Kriging (STK) and spatial overlay analysis (SOA) were used to obtain the spatial distribution of heavy metal concentration increments in soils, and PCA-MLR was used to quantify their source apportionment. Copyright © 2017
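The last step, PCA followed by multiple linear regression to split the total increment among sources, can be sketched generically. This is a simplified APCS-style sketch rather than the authors' exact procedure; the number of retained components and the contribution metric are assumptions:

```python
import numpy as np

def pca_mlr_apportionment(X, y, n_components=2):
    """PCA on standardized increment data, then regress the total increment
    on the leading component scores; normalized absolute effects give a
    rough relative contribution per retained component (source)."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    U, S, Vt = np.linalg.svd(Xs, full_matrices=False)   # PCA via SVD
    scores = (U * S)[:, :n_components]                  # component scores
    Z = np.column_stack([np.ones(len(y)), scores])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)        # MLR on the scores
    effect = np.abs(beta[1:]) * scores.std(axis=0)      # scaled effect sizes
    return effect / effect.sum()                        # relative shares
```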

  20. Greedy Sampling and Incremental Surrogate Model-Based Tailoring of Aeroservoelastic Model Database for Flexible Aircraft

    Science.gov (United States)

    Wang, Yi; Pant, Kapil; Brenner, Martin J.; Ouellette, Jeffrey A.

    2018-01-01

This paper presents a data analysis and modeling framework to tailor and develop a linear parameter-varying (LPV) aeroservoelastic (ASE) model database for flexible aircraft in a broad 2D flight parameter space. The Kriging surrogate model is constructed using ASE models at a fraction of the grid points within the original model database, and then the ASE model at any flight condition can be obtained simply through surrogate model interpolation. A greedy sampling algorithm is developed to select, as the next sample point, the one that carries the worst relative error between the surrogate model prediction and the benchmark model in the frequency domain among all input-output channels. The process is iterated to incrementally improve surrogate model accuracy until a pre-determined tolerance or iteration budget is met. The methodology is applied to the ASE model database of a flexible aircraft currently being tested at NASA/AFRC for flutter suppression and gust load alleviation. Our studies indicate that the proposed method can reduce the number of models in the original database by 67%. Even so, the ASE models obtained through Kriging interpolation match the models in the original database constructed directly from the physics-based tool, with the worst relative error far below 1%. The interpolated ASE model exhibits continuously varying gains along a set of prescribed flight conditions. More importantly, the selected grid points are distributed non-uniformly in the parameter space, (a) capturing the distinctly different dynamic behavior and its dependence on flight parameters, and (b) reiterating the need and utility of adaptive space sampling techniques for ASE model database compaction. The present framework is directly extendible to high-dimensional flight parameter spaces and can be used to guide ASE model development, model order reduction, robust control synthesis and novel vehicle design for flexible aircraft.
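The greedy sampling loop itself is compact: keep adding the candidate flight condition where the surrogate is worst, until a tolerance or budget is met. A schematic sketch with a caller-supplied error function (the Kriging fit and the frequency-domain error metric are abstracted away):

```python
def greedy_sample(candidates, relative_error, budget, tol):
    """Iteratively select the candidate point with the worst relative error
    between the surrogate prediction and the benchmark model; stop when the
    worst error is within `tol` or the iteration budget is exhausted."""
    selected = []
    for _ in range(budget):
        errors = [relative_error(p, selected) for p in candidates]
        worst = max(range(len(errors)), key=errors.__getitem__)
        if errors[worst] <= tol:
            break            # surrogate accurate enough everywhere
        selected.append(candidates[worst])
    return selected
```

In practice `relative_error` would refit the surrogate on `selected` and compare it against the benchmark model across all input-output channels.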

  1. Modelling female fertility traits in beef cattle using linear and non-linear models.

    Science.gov (United States)

    Naya, H; Peñagaricano, F; Urioste, J I

    2017-06-01

Female fertility traits are key components of the profitability of beef cattle production. However, these traits are difficult and expensive to measure, particularly under extensive pastoral conditions, and consequently fertility records are in general scarce and somewhat incomplete. Moreover, fertility traits are usually dominated by the effects of the herd-year environment, and it is generally assumed that relatively small margins are left for genetic improvement. New ways of modelling genetic variation in these traits are needed. Inspired by the methodological developments made by Prof. Daniel Gianola and co-workers, we assayed linear (Gaussian), Poisson, probit (threshold), censored Poisson and censored Gaussian models on three different kinds of endpoints, namely calving success (CS), number of days from first calving (CD) and number of failed oestrus (FE). For models involving FE and CS, non-linear models outperformed their linear counterparts. For models derived from CD, the linear versions displayed better adjustment than the non-linear counterparts. Non-linear models showed consistently higher estimates of heritability and repeatability in all cases (h² … for linear models; h² > 0.23 and r > 0.24 for non-linear models). While additive and permanent environment effects showed highly favourable correlations between all models (>0.789), consistency in selecting the 10% best sires showed important differences, mainly amongst the considered endpoints (FE, CS and CD). In consequence, the endpoints should be considered as modelling different underlying genetic effects, with linear models more appropriate to describe CD and non-linear models better for FE and CS. © 2017 Blackwell Verlag GmbH.

  2. Characterization of the heart rate curve during a maximum incremental test on a treadmill. DOI: 10.5007/1980-0037.2011v13n4p285

    Directory of Open Access Journals (Sweden)

    Eduardo Marcel Fernandes Nascimento

    2011-08-01

Full Text Available The objective of this study was to analyze the heart rate (HR) profile plotted against incremental workloads (IWL) during a treadmill test using three mathematical models [linear, linear with 2 segments (Lin2), and sigmoidal], and to determine the best model for the identification of the HR threshold that could be used as a predictor of the ventilatory thresholds (VT1 and VT2). Twenty-two men underwent a treadmill incremental test (retest group: n=12) at an initial speed of 5.5 km.h-1, with increments of 0.5 km.h-1 at 1-min intervals until exhaustion. HR and gas exchange were continuously measured and subsequently converted to 5-s and 20-s averages, respectively. The best model was chosen based on the residual sum of squares and the mean square error. The HR/IWL ratio was better fitted with the Lin2 model in both the test and retest groups (p < 0.05). During a treadmill incremental test, the HR/IWL ratio seems to be better fitted with a Lin2 model, which permits determination of the HR threshold that coincides with VT1.
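A continuous two-segment (Lin2) fit of HR against workload can be obtained with a hinge basis and a grid search over the breakpoint, the breakpoint playing the role of the HR threshold. The grid-search formulation is an assumption; the paper's exact fitting procedure may differ:

```python
import numpy as np

def fit_lin2(x, y):
    """Continuous two-segment linear fit. Grid-search the breakpoint over
    interior x values; fit intercept, slope and hinge term by least squares.
    Returns (breakpoint, sum of squared residuals)."""
    best_bp, best_sse = None, np.inf
    for bp in x[2:-2]:                       # keep a few points per segment
        A = np.column_stack([np.ones_like(x), x, np.maximum(0.0, x - bp)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        sse = float(np.sum((A @ coef - y) ** 2))
        if sse < best_sse:
            best_bp, best_sse = bp, sse
    return best_bp, best_sse
```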

  3. Development of the Nonstationary Incremental Analysis Update Algorithm for Sequential Data Assimilation System

    Directory of Open Access Journals (Sweden)

    Yoo-Geun Ham

    2016-01-01

Full Text Available This study introduces a modified version of the incremental analysis updates (IAU), called the nonstationary IAU (NIAU) method, to improve the assimilation accuracy of the IAU while keeping the continuity of the analysis. Similar to the IAU, the NIAU is designed to add analysis increments at every model time step to improve continuity in intermittent data assimilation. However, unlike the IAU, the NIAU procedure uses time-evolved forcing, propagated with the forward operator, as corrections to the model. The solution of the NIAU is superior, in terms of the accuracy of the analysis field, to that of the forward IAU, in which the analysis is performed at the beginning of the time window for adding the IAU forcing. This is because, in linear systems, the NIAU solution equals that of an intermittent data assimilation method at the end of the assimilation interval. To retain the filtering property in the NIAU, the forward operator used to propagate the increment is reconstructed with only the dominant singular vectors. An illustration of these advantages of the NIAU is given using the simple 40-variable Lorenz model.
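The basic IAU mechanism that the NIAU extends, spreading an analysis increment evenly across the model time steps of the window instead of inserting it all at once, is a one-liner in a toy integration. A schematic sketch (the forward-operator propagation that distinguishes the NIAU is omitted):

```python
def iau_integrate(state, increment, model_step, n_steps):
    """Incremental analysis update: advance the model n_steps through the
    assimilation window, adding increment/n_steps as forcing at every step
    rather than inserting the full increment intermittently."""
    for _ in range(n_steps):
        state = model_step(state) + increment / n_steps
    return state
```

With a trivial identity model the full increment is recovered at the end of the window, but the trajectory stays continuous instead of jumping at the analysis time.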

  4. Planning Through Incrementalism

    Science.gov (United States)

    Lasserre, Ph.

    1974-01-01

    An incremental model of decisionmaking is discussed and compared with the Comprehensive Rational Approach. A model of reconciliation between the two approaches is proposed, and examples are given in the field of economic development and educational planning. (Author/DN)

  5. A System to Derive Optimal Tree Diameter Increment Models from the Eastwide Forest Inventory Data Base (EFIDB)

    Science.gov (United States)

    Don C. Bragg

    2002-01-01

    This article is an introduction to the computer software used by the Potential Relative Increment (PRI) approach to optimal tree diameter growth modeling. These DOS programs extract qualified tree and plot data from the Eastwide Forest Inventory Data Base (EFIDB), calculate relative tree increment, sort for the highest relative increments by diameter class, and...

  6. Awareness and its use in Incremental Data Driven Modelling for Plug and Play Process Control

    DEFF Research Database (Denmark)

    Knudsen, Torben; Bendtsen, Jan Dimon; Trangbæk, Klaus

    2012-01-01

    In this paper, we focus on the problem of incremental system identification for the purpose of automatic reconfiguration of control systems. We consider the particular case where a linear time-invariant system is augmented with either an extra sensor or an extra actuator and derive prediction error methods for recursively estimating the additional parameters while retaining the existing system model. Next, we propose a novel measure of the "usefulness" of new signals that appear in an existing control loop due to the addition of a new device, e.g., a sensor. This measure, which we refer to as awareness, indicates if there is a relation between the signal provided by the new device and the existing process, as well as what the new device is good for in terms of control performance. Finally, a simulation example illustrates the potentials of the proposed method.
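
Recursively estimating one added parameter while keeping the existing model fixed can be sketched with a scalar recursive least-squares update; the signals, the known-part output, and the forgetting factor below are illustrative assumptions, not the authors' prediction-error method.

```python
import random

def rls_added_parameter(us, ys, known_part, lam=1.0):
    """Estimate theta in y = known_part + theta * u, one sample at a time.
    The already-identified part of the model is kept fixed."""
    theta, p = 0.0, 1000.0                 # initial guess and large covariance
    for u, y, k in zip(us, ys, known_part):
        e = y - k - theta * u              # prediction error of the new branch
        gain = p * u / (lam + u * p * u)
        theta += gain * e
        p = (p - gain * u * p) / lam
    return theta

random.seed(0)
us = [random.uniform(-1, 1) for _ in range(200)]   # new-device input
known = [0.5 for _ in us]                          # output of existing model
true_theta = 2.0
ys = [k + true_theta * u + random.gauss(0, 0.01) for u, k in zip(us, known)]
theta_hat = rls_added_parameter(us, ys, known)
```

With a forgetting factor of 1 this is recursive ordinary least squares, so the estimate converges to the batch solution as samples arrive.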

  7. Linear models with R

    CERN Document Server

    Faraway, Julian J

    2014-01-01

    A Hands-On Way to Learning Data Analysis. Part of the core of statistics, linear models are used to make predictions and explain the relationship between the response and the predictors. Understanding linear models is crucial to a broader competence in the practice of statistics. Linear Models with R, Second Edition explains how to use linear models in physical science, engineering, social science, and business applications. The book incorporates several improvements that reflect how the world of R has greatly expanded since the publication of the first edition. New to the Second Edition: Reorganiz…

  8. Theory of Single Point Incremental Forming

    DEFF Research Database (Denmark)

    Martins, P.A.F.; Bay, Niels; Skjødt, Martin

    2008-01-01

    This paper presents a closed-form theoretical analysis modelling the fundamentals of single point incremental forming and explaining the experimental and numerical results available in the literature for the past couple of years. The model is based on membrane analysis with bi-directional in-plane contact friction and is focused on the extreme modes of deformation that are likely to be found in single point incremental forming processes. The overall investigation is supported by experimental work performed by the authors and data retrieved from the literature.

  9. Linear regression metamodeling as a tool to summarize and present simulation model results.

    Science.gov (United States)

    Jalal, Hawre; Dowd, Bryan; Sainfort, François; Kuntz, Karen M

    2013-10-01

    Modelers lack a tool to systematically and clearly present complex model results, including those from sensitivity analyses. The objective was to propose linear regression metamodeling as a tool to increase transparency of decision analytic models and better communicate their results. We used a simplified cancer cure model to demonstrate our approach. The model computed the lifetime cost and benefit of 3 treatment options for cancer patients. We simulated 10,000 cohorts in a probabilistic sensitivity analysis (PSA) and regressed the model outcomes on the standardized input parameter values in a set of regression analyses. We used the regression coefficients to describe measures of sensitivity analyses, including threshold and parameter sensitivity analyses. We also compared the results of the PSA to deterministic full-factorial and one-factor-at-a-time designs. The regression intercept represented the estimated base-case outcome, and the other coefficients described the relative parameter uncertainty in the model. We defined simple relationships that compute the average and incremental net benefit of each intervention. Metamodeling produced outputs similar to traditional deterministic 1-way or 2-way sensitivity analyses but was more reliable since it used all parameter values. Linear regression metamodeling is a simple, yet powerful, tool that can assist modelers in communicating model characteristics and sensitivity analyses.
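
The metamodeling step itself is just a regression of simulated outcomes on standardized inputs. A self-contained sketch with a made-up two-parameter "simulation model" (assuming, as in a typical PSA, that the parameters are sampled independently, so each slope can be read off by a univariate projection):

```python
import random

def metamodel(zs, ys):
    """Regress outcomes on standardized inputs sampled independently.
    Returns the intercept (~ base-case outcome) and one slope per input
    (~ sensitivity of the outcome to that parameter)."""
    n = len(ys)
    ybar = sum(ys) / n
    slopes = []
    for j in range(len(zs[0])):
        col = [z[j] for z in zs]
        m = sum(col) / n
        num = sum((c - m) * (y - ybar) for c, y in zip(col, ys))
        den = sum((c - m) ** 2 for c in col)
        slopes.append(num / den)
    return ybar, slopes

random.seed(1)
zs = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(5000)]
# Toy "simulation model": lifetime net benefit = 10 + 3*z1 - 2*z2 + noise
ys = [10 + 3 * z1 - 2 * z2 + random.gauss(0, 0.1) for z1, z2 in zs]
intercept, (b1, b2) = metamodel(zs, ys)
```

As in the abstract, the intercept recovers the base-case outcome and the coefficients rank the parameters by how strongly they drive the outcome.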

  10. Analysis of single-degree-of-freedom piezoelectric energy harvester with stopper by incremental harmonic balance method

    Science.gov (United States)

    Zhao, Dan; Wang, Xiaoman; Cheng, Yuan; Liu, Shaogang; Wu, Yanhong; Chai, Liqin; Liu, Yang; Cheng, Qianju

    2018-05-01

    A piecewise-linear structure can effectively broaden the working frequency band of a piezoelectric energy harvester, and progress in its analysis can help such energy collection devices meet the requirements for powering microelectronic components. In this paper, the incremental harmonic balance (IHB) method is introduced to handle the otherwise complicated and difficult analysis of the piezoelectric energy harvester. After the nonlinear dynamic equation of the single-degree-of-freedom piecewise-linear energy harvester is obtained by mathematical modeling and solved with the IHB method, the theoretical amplitude-frequency curve of the open-circuit voltage is obtained. Under 0.2 g harmonic excitation, a piecewise-linear energy harvester was experimentally tested by unidirectional frequency-increasing scanning. The results demonstrate that the theoretical and experimental amplitudes follow the same trend: the widths of the working band with high voltage output are 4.9 Hz and 4.7 Hz, respectively, a relative error of 4.08%, and the open-circuit peak voltages are 21.53 V and 18.25 V, respectively, a relative error of 15.23%. Since the theoretical values are consistent with the experimental results, the theoretical model and the incremental harmonic balance method used in this paper are suitable for solving the single-degree-of-freedom piecewise-linear piezoelectric energy harvester and can be applied to further parameter-optimized design.
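
The piecewise-linear restoring force of a harvester with a stopper can be sketched directly; for brevity the response below is obtained by plain time stepping rather than the IHB method, and all stiffness, gap, damping, and forcing values are illustrative assumptions.

```python
import math

def restoring_force(x, k1=1.0, k2=10.0, gap=0.5):
    """Piecewise-linear spring: extra stiffness k2 engages beyond the gap
    (the stopper); the force is continuous at |x| == gap."""
    if x > gap:
        return k1 * x + k2 * (x - gap)
    if x < -gap:
        return k1 * x + k2 * (x + gap)
    return k1 * x

def simulate(f0=0.3, w=1.0, dt=0.01, n=20000):
    """Semi-implicit Euler stepping of x'' + c x' + F(x) = f0 cos(w t);
    returns the maximum displacement magnitude reached."""
    x, v, c = 0.0, 0.0, 0.05
    xmax = 0.0
    for i in range(n):
        t = i * dt
        a = f0 * math.cos(w * t) - c * v - restoring_force(x)
        v += a * dt
        x += v * dt
        xmax = max(xmax, abs(x))
    return xmax

xmax = simulate()
```

Driving at the linear-region resonance, the response grows until the stopper engages, after which the added stiffness detunes the system and limits the amplitude — the mechanism that broadens the usable frequency band.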

  11. Linear and Generalized Linear Mixed Models and Their Applications

    CERN Document Server

    Jiang, Jiming

    2007-01-01

    This book covers two major classes of mixed effects models, linear mixed models and generalized linear mixed models, and it presents an up-to-date account of theory and methods in analysis of these models as well as their applications in various fields. The book offers a systematic approach to inference about non-Gaussian linear mixed models. Furthermore, it has included recently developed methods, such as mixed model diagnostics, mixed model selection, and jackknife method in the context of mixed models. The book is aimed at students, researchers and other practitioners who are interested

  12. Variational formulation for dissipative continua and an incremental J-integral

    Science.gov (United States)

    Rahaman, Md. Masiur; Dhas, Bensingh; Roy, D.; Reddy, J. N.

    2018-01-01

    Our aim is to rationally formulate a proper variational principle for dissipative (viscoplastic) solids in the presence of inertia forces. As a first step, a consistent linearization of the governing nonlinear partial differential equations (PDEs) is carried out. An additional set of complementary (adjoint) equations is then formed to recover an underlying variational structure for the augmented system of linearized balance laws. This makes it possible to introduce an incremental Lagrangian such that the linearized PDEs, including the complementary equations, become the Euler-Lagrange equations. Continuous groups of symmetries of the linearized PDEs are computed and an analysis is undertaken to identify the variational groups of symmetries of the linearized dissipative system. Application of Noether's theorem leads to the conservation laws (conserved currents) of motion corresponding to the variational symmetries. As a specific outcome, we exploit translational symmetries of the functional in the material space and recover, via Noether's theorem, an incremental J-integral for viscoplastic solids in the presence of inertia forces. Numerical demonstrations are provided through a two-dimensional plane strain numerical simulation of a compact tension specimen of annealed mild steel under dynamic loading.

  13. The Dangers of Estimating V˙O2max Using Linear, Nonexercise Prediction Models.

    Science.gov (United States)

    Nevill, Alan M; Cooke, Carlton B

    2017-05-01

    This study aimed to compare the accuracy and goodness of fit of two competing models (linear vs allometric) when estimating V˙O2max (mL·kg⁻¹·min⁻¹) using nonexercise prediction models. The two competing models were fitted to the V˙O2max (mL·kg⁻¹·min⁻¹) data taken from two previously published studies. Study 1 (the Allied Dunbar National Fitness Survey) recruited 1732 randomly selected healthy participants, 16 yr and older, from 30 English parliamentary constituencies. Estimates of V˙O2max were obtained using a progressive incremental test on a motorized treadmill. In study 2, maximal oxygen uptake was measured directly during a fatigue-limited treadmill test in older men (n = 152) and women (n = 146) 55 to 86 yr old. In both studies, the quality of fit associated with estimating V˙O2max (mL·kg⁻¹·min⁻¹) was superior using allometric rather than linear (additive) models based on all criteria (R², maximum log-likelihood, and Akaike information criterion). Results suggest that linear models will systematically overestimate V˙O2max for participants in their 20s and underestimate V˙O2max for participants in their 60s and older. The residuals saved from the linear models were neither normally distributed nor independent of the predicted values or age. This probably explains the absence of a key quadratic age term in the linear models, crucially identified using allometric models. Not only does the curvilinear age decline within an exponential function follow a more realistic age decline (the right-hand side of a bell-shaped curve), but the allometric models identified either a stature-to-body mass ratio (study 1) or a fat-free mass-to-body mass ratio (study 2), both associated with leanness, when estimating V˙O2max. Adopting allometric models will provide more accurate predictions of V˙O2max (mL·kg⁻¹·min⁻¹) using plausible, biologically sound, and interpretable models.
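
The allometric alternative to an additive linear model amounts to fitting a power law, which becomes linear after taking logs; a minimal sketch on synthetic data (the exponent and scaling are made up, not the study's fitted values):

```python
import math

def fit_power_law(xs, ys):
    """Fit y = a * x**b by simple linear regression on (ln x, ln y)."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = sum((u - mx) * (v - my) for u, v in zip(lx, ly)) / \
        sum((u - mx) ** 2 for u in lx)
    a = math.exp(my - b * mx)
    return a, b

# Synthetic "V˙O2max vs body mass" data lying exactly on a power law
masses = [50, 60, 70, 80, 90, 100]
vo2 = [5.0 * m ** 0.75 for m in masses]
a, b = fit_power_law(masses, vo2)
```

Unlike an additive linear fit, the recovered multiplicative form cannot predict negative values and naturally accommodates curvilinear decline.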

  14. 2D discontinuous piecewise linear map: Emergence of fashion cycles.

    Science.gov (United States)

    Gardini, L; Sushko, I; Matsuyama, K

    2018-05-01

    We consider a discrete-time version of the continuous-time fashion cycle model introduced in Matsuyama, 1992. Its dynamics are defined by a 2D discontinuous piecewise linear map depending on three parameters. In the parameter space of the map, periodicity regions associated with attracting cycles of different periods are organized in period adding and period incrementing bifurcation structures. The boundaries of all the periodicity regions, related to border collision bifurcations, are obtained analytically in explicit form. We show the existence of several partially overlapping period incrementing structures, which is a novelty for the considered class of maps. Moreover, we show that if the time delay in the discrete-time formulation of the model shrinks to zero, the number of period incrementing structures tends to infinity and the dynamics of the discrete-time fashion cycle model converge to those of the continuous-time fashion cycle model.
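
Attracting cycles of such maps can be located numerically by iterating past a transient and then finding the smallest return period; the discontinuous piecewise-linear map below is a toy stand-in with a known attracting 2-cycle, not the fashion-cycle model itself.

```python
def f(state):
    """Toy discontinuous piecewise-linear 2D map (illustrative only).
    The x-update switches branch at the discontinuity x = 0."""
    x, y = state
    xn = 0.5 * x - 1 if x >= 0 else 0.5 * x + 1
    return (xn, x)

def cycle_period(f, state, transient=1000, max_period=50, tol=1e-9):
    """Iterate past the transient, then return the smallest p with
    F^p(state) ~= state, or None if no cycle up to max_period is found."""
    for _ in range(transient):
        state = f(state)
    probe = state
    for p in range(1, max_period + 1):
        probe = f(probe)
        if abs(probe[0] - state[0]) + abs(probe[1] - state[1]) < tol:
            return p
    return None

period = cycle_period(f, (0.123, 0.0))
```

Sweeping the map's parameters and recording the detected period is the numerical counterpart of the period adding and period incrementing diagrams described in the abstract.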

  15. Introduction to generalized linear models

    CERN Document Server

    Dobson, Annette J

    2008-01-01

    Introduction Background Scope Notation Distributions Related to the Normal Distribution Quadratic Forms Estimation Model Fitting Introduction Examples Some Principles of Statistical Modeling Notation and Coding for Explanatory Variables Exponential Family and Generalized Linear Models Introduction Exponential Family of Distributions Properties of Distributions in the Exponential Family Generalized Linear Models Examples Estimation Introduction Example: Failure Times for Pressure Vessels Maximum Likelihood Estimation Poisson Regression Example Inference Introduction Sampling Distribution for Score Statistics Taylor Series Approximations Sampling Distribution for MLEs Log-Likelihood Ratio Statistic Sampling Distribution for the Deviance Hypothesis Testing Normal Linear Models Introduction Basic Results Multiple Linear Regression Analysis of Variance Analysis of Covariance General Linear Models Binary Variables and Logistic Regression Probability Distributions ...

  16. Dimension of linear models

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar

    1996-01-01

    Determination of the proper dimension of a given linear model is one of the most important tasks in applied modeling work. We consider here eight criteria that can be used to determine the dimension of the model, or equivalently, the number of components to use in the model. Four of these criteria … the basic problems in determining the dimension of linear models. Then each of the eight measures is treated. The results are illustrated by examples.

  17. Extending the linear model with R generalized linear, mixed effects and nonparametric regression models

    CERN Document Server

    Faraway, Julian J

    2005-01-01

    Linear models are central to the practice of statistics and form the foundation of a vast range of statistical methodologies. Julian J. Faraway's critically acclaimed Linear Models with R examined regression and analysis of variance, demonstrated the different methods available, and showed in which situations each one applies. Following in those footsteps, Extending the Linear Model with R surveys the techniques that grow from the regression model, presenting three extensions to that framework: generalized linear models (GLMs), mixed effect models, and nonparametric regression models. The author's treatment is thoroughly modern and covers topics that include GLM diagnostics, generalized linear mixed models, trees, and even the use of neural networks in statistics. To demonstrate the interplay of theory and practice, throughout the book the author weaves the use of the R software environment to analyze the data of real examples, providing all of the R commands necessary to reproduce the analyses. All of the …

  18. A primer on linear models

    CERN Document Server

    Monahan, John F

    2008-01-01

    Preface Examples of the General Linear Model Introduction One-Sample Problem Simple Linear Regression Multiple Regression One-Way ANOVA First Discussion The Two-Way Nested Model Two-Way Crossed Model Analysis of Covariance Autoregression Discussion The Linear Least Squares Problem The Normal Equations The Geometry of Least Squares Reparameterization Gram-Schmidt Orthonormalization Estimability and Least Squares Estimators Assumptions for the Linear Mean Model Confounding, Identifiability, and Estimability Estimability and Least Squares Estimators F

  19. Application of Hierarchical Linear Models/Linear Mixed-Effects Models in School Effectiveness Research

    Science.gov (United States)

    Ker, H. W.

    2014-01-01

    Multilevel data are very common in educational research. Hierarchical linear models/linear mixed-effects models (HLMs/LMEs) are often utilized to analyze multilevel data nowadays. This paper discusses the problems of utilizing ordinary regressions for modeling multilevel educational data and compares the data analytic results from three regression…

  20. Application of a Double-Sided Chance-Constrained Integer Linear Program for Optimization of the Incremental Value of Ecosystem Services in Jilin Province, China

    Directory of Open Access Journals (Sweden)

    Baofeng Cai

    2017-08-01

    The Interconnected River System Network Project (IRSNP) is a significant water supply engineering project, capable of effectively utilizing flood resources to generate ecological value by connecting 198 lakes and ponds in western Jilin, northeast China. In this article, an optimization research approach is proposed to maximize the incremental value of IRSNP ecosystem services. A double-sided chance-constrained integer linear program (DCCILP) method is proposed to support the optimization, which can deal with uncertainties presented as integers or random parameters that appear on both sides of the decision variable at the same time. The optimal scheme indicates that, after rational optimization, the total incremental value of ecosystem services from the interconnected river system network project increased by 22.25%, providing an increase in benefits of 3.26 × 10⁹ ¥ compared to the original scheme. Most of the functional area is swamp wetland, which provides the greatest ecological benefits. Adjustment services increased markedly, implying that the optimization scheme prioritizes ecological benefits rather than supply and production services.
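
The chance-constrained part of such a program can be illustrated by replacing a random budget constraint with its sample quantile and brute-forcing a tiny 0/1 selection; this is a toy stand-in for the DCCILP, with made-up values, costs, and budget, and brute force instead of an integer-programming solver.

```python
import itertools
import random

def quantile(samples, q):
    """Empirical q-quantile of a list of samples."""
    s = sorted(samples)
    return s[min(len(s) - 1, int(q * len(s)))]

def chance_constrained_select(values, cost_samples, budget, alpha=0.9):
    """Choose a 0/1 selection maximizing total value such that
    Pr(total cost <= budget) >= alpha, approximated by requiring the
    alpha-quantile of the sampled total cost to fit within the budget."""
    n = len(values)
    best_val, best_sel = -1, None
    for sel in itertools.product((0, 1), repeat=n):
        totals = [sum(s * c for s, c in zip(sel, draw)) for draw in cost_samples]
        if quantile(totals, alpha) <= budget:
            val = sum(s * v for s, v in zip(sel, values))
            if val > best_val:
                best_val, best_sel = val, sel
    return best_sel, best_val

random.seed(2)
values = [6, 5, 4]                       # ecosystem-service value of each measure
cost_samples = [[random.uniform(2, 4), random.uniform(1, 3), random.uniform(3, 5)]
                for _ in range(500)]     # random cost draws per measure
sel, val = chance_constrained_select(values, cost_samples, budget=7.0)
```

Raising `alpha` makes the constraint stricter (the budget must hold in a larger fraction of cost scenarios), which is the one-sided version of the double-sided chance constraints the paper handles.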

  1. BMI and BMI SDS in childhood: annual increments and conditional change.

    Science.gov (United States)

    Brannsether, Bente; Eide, Geir Egil; Roelants, Mathieu; Bjerknes, Robert; Júlíusson, Pétur Benedikt

    2017-02-01

    Background: Early detection of abnormal weight gain in childhood may be important for preventive purposes. It is still debated which annual changes in BMI should warrant attention. Aim: To analyse 1-year increments of body mass index (BMI) and standardised BMI (BMI SDS) in childhood and to explore conditional change in BMI SDS as an alternative method to evaluate 1-year changes in BMI. Subjects and methods: The distributions of 1-year increments of BMI (kg/m²) and BMI SDS are summarised by percentiles. Differences according to sex, age, height, weight, initial BMI and weight status in the BMI and BMI SDS increments were assessed with multiple linear regression. Conditional change in BMI SDS was based on the correlation between annual BMI measurements converted to SDS. Results: BMI increments depended significantly on sex, height, weight and initial BMI. Changes in BMI SDS depended significantly only on the initial BMI SDS. The distribution of conditional change in BMI SDS using a two-correlation model was close to normal (mean = 0.11, SD = 1.02, n = 1167), with 3.2% (2.3-4.4%) of the observations below -2 SD and 2.8% (2.0-4.0%) above +2 SD. Conclusion: Conditional change in BMI SDS can be used to detect unexpectedly large changes in BMI SDS. Although this method requires the use of a computer, it may be clinically useful for detecting aberrant weight development.
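
A conditional change score of the kind described rescales the follow-up z-score by what the baseline predicts; the sketch below uses the common form (z₂ − r·z₁)/√(1 − r²), with an assumed year-to-year correlation, not the study's fitted two-correlation model.

```python
import math

def conditional_change(z1, z2, r):
    """Conditional change in SDS: observed follow-up z-score minus the value
    expected from baseline, expressed in SD units of the conditional
    distribution. r is the correlation between the two annual measurements."""
    return (z2 - r * z1) / math.sqrt(1 - r ** 2)

# A child at +1.5 BMI SDS whose BMI SDS is unchanged a year later,
# with an assumed year-to-year correlation of 0.9:
zc = conditional_change(1.5, 1.5, 0.9)
```

The positive result reflects regression to the mean: simply staying at +1.5 SDS is already more gain than expected, which is why conditional change flags children an unconditional difference (z₂ − z₁ = 0) would miss.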

  2. From spiking neuron models to linear-nonlinear models.

    Science.gov (United States)

    Ostojic, Srdjan; Brunel, Nicolas

    2011-01-20

    Neurons transform time-varying inputs into action potentials emitted stochastically at a time-dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by successively applying to the input a linear temporal filter and a static non-linear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to what extent the input-output mapping of biophysically more realistic spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in the presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static non-linearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static non-linearity determined using standard reverse-correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of the input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of the parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing-rate model, the timescale of which we determine analytically. Finally, we introduce an adaptive-timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates.
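
The LN cascade structure itself is compact enough to sketch: a linear temporal filter followed by a static nonlinearity. The exponential filter, gain, and rectifying nonlinearity below are generic illustrative choices, not the parameter-free functions derived in the paper.

```python
import math

def ln_cascade(signal, tau=5.0, gain=2.0, threshold=0.0, dt=1.0):
    """Linear-nonlinear cascade: an exponential (leaky) temporal filter
    followed by a static rectifying nonlinearity, returning a firing rate
    for each input sample."""
    rates, filtered = [], 0.0
    decay = math.exp(-dt / tau)
    for s in signal:
        filtered = decay * filtered + (1 - decay) * s   # linear stage
        rates.append(gain * max(0.0, filtered - threshold))  # static stage
    return rates

# Step input: the filtered signal relaxes toward 1, the rate toward gain * 1
rates = ln_cascade([1.0] * 100)
```

Fitting the filter and the nonlinearity to a spiking model's responses (e.g., by the reverse-correlation analysis mentioned above) is what turns this skeleton into a predictive rate model.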

  3. Modeling stem increment in individual Pinus occidentalis Sw. trees in La Sierra, Dominican Republic

    Energy Technology Data Exchange (ETDEWEB)

    Bueno, S.; Bevilacqua, E.

    2010-07-01

    One of the most common and important tree characteristics used in forest management decision-making is tree diameter at breast height (DBH). This paper presents results of an evaluation of two growth functions developed to model stem diameter increases in individual Pinus occidentalis Sw. trees in La Sierra, Dominican Republic. The first model was developed to predict future DBH (FDM) at different intervals of time, and the other to predict growth, that is, periodic annual diameter increment (PADIM). Each model employed two statistical techniques for fitting model parameters: stepwise ordinary least squares (OLS) regression and mixed models. The two statistical approaches varied in how they accounted for the repeated measurements on individual trees over time, affecting standard error estimates and statistical inference of model parameters. Each approach was evaluated based on six goodness-of-fit statistics, using both calibration and validation data sets. The objectives were 1) to determine the best model for predicting future tree DBH; 2) to determine the best model for predicting periodic annual diameter increment, both models using tree size, age, site index and different indices of competitive status; and 3) to compare which of the two modeling approaches better predicts future DBH. OLS provided a better fit for both of the growth functions, especially with regard to bias. Both models showed advantages and disadvantages when used to predict growth and future diameter. For the prediction of future diameter with the FDM, predictions were accurate to within one centimeter for a five-year projection interval. The PADIM presented negligible bias in estimating future diameter, although there was a small increase in bias as the time of prediction increased. As expected, each model was the best at estimating the response variable it was developed for. However, a closer examination of the distribution of errors showed a slight advantage of the FDM.

  4. Assessing ozone and nitrogen impact on net primary productivity with a Generalised non-Linear Model

    International Nuclear Information System (INIS)

    De Marco, Alessandra; Screpanti, Augusto; Attorre, Fabio; Proietti, Chiara; Vitale, Marcello

    2013-01-01

    Some studies suggest that in Europe the majority of the forest growth increment can be accounted for by N deposition and very little by elevated CO₂. High ozone (O₃) concentrations cause reductions in carbon fixation in native plants by offsetting the effects of elevated CO₂ or N deposition. The cause-effect relationships between the net primary productivity (NPP) of the Quercus cerris, Q. ilex and Fagus sylvatica plant species and climate and pollutants (O₃ and N deposition) in Italy have been investigated by application of a Generalised Linear/non-Linear regression model (GLZ model). The GLZ model highlighted that: i) the cumulative O₃ concentration-based indicator (AOT40F) did not significantly affect NPP; ii) a differential action of oxidised and reduced nitrogen depositions on NPP was linked to geographical location; iii) the species-specific variation of NPP caused by the combination of pollutants and climatic variables could be a potentially important driving factor for plant species' shifts in response to future climate change. - Highlights: ► GLZ models emphasized the role of combinations of variables affecting NPP. ► A differential action of ox-N and red-N deposition on NPP was observed for plants. ► Different responses to climate and pollutants could affect plant species' shifts. - Ozone and nitrogen depositions have non-linear effects on the primary productivity of tree species differently distributed in Italy.

  5. Contributions to micromechanical model of the non linear behavior of the Callovo-Oxfordian argillite

    International Nuclear Information System (INIS)

    Abou-Chakra Guery, A.

    2007-12-01

    This work is performed in the general context of the project of underground disposal of radioactive waste undertaken by the French National Radioactive Waste Management Agency (ANDRA). Due to its high density and weak permeability, the Callovo-Oxfordian argillite formation has been chosen as one of the possible geological barriers to radionuclides. The objective of the study is to develop and validate a nonlinear homogenization approach to the mechanical behavior of Callovo-Oxfordian argillites. The material is modelled as a composite constituted of an elasto(visco)plastic clay matrix and of linear elastic or elastic-damage inclusions. The macroscopic constitutive law is obtained by adapting the incremental method proposed by Hill. The derived model is first compared to finite element calculations on a unit cell. It is then validated and applied to the prediction of the macroscopic stress-strain responses of the argillite at different geological depths. Finally, the micromechanical model is implemented in a commercial finite element code (Abaqus) for the simulation of a vertical shaft of the underground laboratory. This allows predicting the distribution of the damage state and plastic strains and characterizing the excavation damage zone (EDZ). (author)

  6. Do otolith increments allow correct inferences about age and growth of coral reef fishes?

    Science.gov (United States)

    Booth, D. J.

    2014-03-01

    Otolith increment structure is widely used to estimate the age and growth of marine fishes. Here, I test the accuracy of long-term otolith increment analysis of the lemon damselfish Pomacentrus moluccensis in describing age and growth characteristics. I compare the number of putative annual otolith increments (as a proxy for actual age) and the widths of these increments (as proxies for somatic growth) with actual tagged fish-length data, based on a 6-year dataset, the longest time course for a coral reef fish. Estimated age from otoliths corresponded closely with actual age in all cases, confirming annual increment formation. However, otolith increment widths were poor proxies for actual growth in length (linear regression r² = 0.44-0.90, n = 6 fish) and were clearly of limited value in estimating annual growth. Up to 60% of the annual growth variation was missed using otolith increments, suggesting that long-term back-calculations of otolith growth characteristics of reef fish populations should be interpreted with caution.

  7. Linear Models

    CERN Document Server

    Searle, Shayle R

    2012-01-01

    This 1971 classic on linear models is once again available--as a Wiley Classics Library Edition. It features material that can be understood by any statistician who understands matrix algebra and basic statistical methods.

  8. Validation Techniques of network harmonic models based on switching of a series linear component and measuring resultant harmonic increments

    DEFF Research Database (Denmark)

    Wiechowski, Wojciech Tomasz; Lykkegaard, Jan; Bak, Claus Leth

    2007-01-01

    In this paper, two methods for the validation of transmission network harmonic models are introduced. The methods were developed as a result of the work presented in [1]. The first method allows calculating the transfer harmonic impedance between two nodes of a network: switching a linear, series network element, as for example a transmission line, and measuring the resultant harmonic increments are used for the calculation of the transfer harmonic impedance between the nodes. The determined transfer harmonic impedance can be used to validate a computer model of the network. The second method is an extension of the first one. It allows switching a series element that contains a shunt branch. Both methods require that the harmonic measurements performed at the two ends of the disconnected element are precisely synchronized.
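
The increment-based estimate can be sketched as a ratio of complex phasor changes: the harmonic voltage increment caused by the switching divided by the corresponding harmonic current increment. The phasor values below are made up for illustration, and the simple ΔU/ΔI ratio is an assumed simplification of the papers' procedure.

```python
def transfer_impedance(u_before, u_after, i_before, i_after):
    """Transfer harmonic impedance from synchronized phasors measured
    before and after switching a series element, one harmonic order
    at a time: Z_t = dU_h / dI_h (a complex quantity)."""
    du = u_after - u_before
    di = i_after - i_before
    return du / di

# Illustrative 5th-harmonic phasors (made-up values, volts and amps)
z = transfer_impedance(complex(230, 5), complex(228, 9),
                       complex(10, 1), complex(11, 3))
```

Because both increments are complex phasors, the two measurement sets must share a common time reference — which is why the abstract stresses precise synchronization of the measurements at the two ends of the switched element.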

  9. Using Incremental Dynamic Analysis to Visualize the Effects of Viscous Fluid Dampers on Steel Moment Frame Drift

    OpenAIRE

    Kruep, Stephanie Jean

    2007-01-01

    This thesis presents the details of a study regarding both the use of linear viscous fluid dampers in controlling the interstory drift in steel moment frames, and the use of incremental dynamic analysis as a method of visualizing the behavior of these moment frames when subjected to seismic load effects. Models of three-story and nine-story steel moment frames were designed to meet typical strength requirements for office buildings in Seattle, Washington. These models were intentionally des...

  10. Dynamic Linear Models with R

    CERN Document Server

    Campagnoli, Patrizia; Petris, Giovanni

    2009-01-01

    State space models have gained tremendous popularity in as disparate fields as engineering, economics, genetics and ecology. Introducing general state space models, this book focuses on dynamic linear models, emphasizing their Bayesian analysis. It illustrates the fundamental steps needed to use dynamic linear models in practice, using R package.

  11. Modelling and Predicting Backstroke Start Performance Using Non-Linear and Linear Models.

    Science.gov (United States)

    de Jesus, Karla; Ayala, Helon V H; de Jesus, Kelly; Coelho, Leandro Dos S; Medeiros, Alexandre I A; Abraldes, José A; Vaz, Mário A P; Fernandes, Ricardo J; Vilas-Boas, João Paulo

    2018-03-01

    Our aim was to compare non-linear and linear mathematical model responses for backstroke start performance prediction. Ten swimmers randomly completed eight 15 m backstroke starts with feet over the wedge: four with hands on the highest horizontal handgrip and four on the vertical handgrip. Swimmers were videotaped using a dual-media camera set-up, with the starts being performed over an instrumented block with four force plates. Artificial neural networks were applied to predict 5 m start time using kinematic and kinetic variables, and accuracy was assessed by the mean absolute percentage error. Artificial neural networks predicted start time more robustly than the linear model with respect to changing from the training to the validation dataset for the vertical handgrip (3.95 ± 1.67 vs. 5.92 ± 3.27%). Artificial neural networks obtained a smaller mean absolute percentage error than the linear model for the horizontal (0.43 ± 0.19 vs. 0.98 ± 0.19%) and vertical handgrip (0.45 ± 0.19 vs. 1.38 ± 0.30%) using all input data. The best artificial neural network validation revealed a smaller mean absolute error than the linear model for the horizontal (0.007 vs. 0.04 s) and vertical handgrip (0.01 vs. 0.03 s). Artificial neural networks should be used for backstroke 5 m start time prediction due to the quite small differences among elite-level performances.

  12. Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.

    Science.gov (United States)

    Kwak, Nojun

    2016-05-20

    Recently, the nonlinear projection trick (NPT) was introduced, enabling direct computation of the coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally, because an ever-growing kernel matrix must be handled as additional training samples are introduced. In this paper, an incremental version of the NPT (INPT) is proposed, based on the observation that the centerization step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can be used directly in any incremental method to implement a kernel version of that method. The effectiveness of the INPT is shown by applying it to implement incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are applied to problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.
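    The core of the nonlinear projection trick, and the reason an incremental variant is possible, can be conveyed with a few lines of NumPy: training samples are embedded via an eigendecomposition of the kernel matrix, and a new sample's coordinates follow from its kernel values alone, leaving the old coordinates untouched. This is a generic sketch of the idea, not the authors' implementation; the toy RBF kernel and data are invented.

```python
import numpy as np

def npt_coordinates(K):
    """Embed n training samples from a PSD kernel matrix K (n x n)
    so that Y.T @ Y reproduces K (the nonlinear projection trick)."""
    vals, vecs = np.linalg.eigh(K)
    keep = vals > 1e-10                              # drop numerically zero directions
    return np.sqrt(vals[keep])[:, None] * vecs[:, keep].T   # r x n coordinate matrix

def embed_new(Y, k_new):
    """Coordinates of a new sample from its kernel values against the
    training set; the old coordinates in Y are left unchanged."""
    return np.linalg.pinv(Y.T) @ k_new               # least-squares solve of Y.T @ y = k_new

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 1))                          # toy 1-D data
kern = lambda a, b: np.exp(-np.sum((a - b) ** 2))    # RBF kernel
K = np.array([[kern(a, b) for b in X] for a in X])

Y = npt_coordinates(K)
y_new = embed_new(Y, K[:, 0])                        # "new" point identical to sample 0
```

    Because `Y.T @ Y` reproduces `K` and embedding a new point only solves a small least-squares problem, a linear method such as PCA or SVD can then be run incrementally on the coordinate matrix itself.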

  13. Piecewise Linear-Linear Latent Growth Mixture Models with Unknown Knots

    Science.gov (United States)

    Kohli, Nidhi; Harring, Jeffrey R.; Hancock, Gregory R.

    2013-01-01

    Latent growth curve models with piecewise functions are flexible and useful analytic models for investigating individual behaviors that exhibit distinct phases of development in observed variables. As an extension of this framework, this study considers a piecewise linear-linear latent growth mixture model (LGMM) for describing segmented change of…

  14. An Improved Incremental Learning Approach for KPI Prognosis of Dynamic Fuel Cell System.

    Science.gov (United States)

    Yin, Shen; Xie, Xiaochen; Lam, James; Cheung, Kie Chung; Gao, Huijun

    2016-12-01

    The key performance indicator (KPI) has important practical value with respect to product quality and economic benefits in modern industry. To cope with the KPI prognosis issue under nonlinear conditions, this paper presents an improved incremental learning approach based on available process measurements. The proposed approach takes advantage of the algorithmic overlap between locally weighted projection regression (LWPR) and partial least squares (PLS), implementing PLS-based prognosis in each locally linear model produced by the incremental learning process of LWPR. The global prognosis results, including KPI prediction and process monitoring, are obtained from the corresponding normalized weighted means of all the local models. The statistical indicators for prognosis are enhanced as well by the design of novel KPI-related and KPI-unrelated statistics with suitable control limits for non-Gaussian data. For application-oriented purposes, process measurements from real datasets of a proton exchange membrane fuel cell system are employed to demonstrate the effectiveness of KPI prognosis. The proposed approach is finally extended to long-term voltage prediction for potential reference in further fuel cell applications.
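    The "normalized weighted means of all the local models" idea used in LWPR-style schemes can be illustrated with a minimal sketch; the receptive-field centers, widths, and local coefficients below are invented for illustration, not taken from the paper.

```python
import numpy as np

def local_weights(x, centers, D=5.0):
    """Gaussian receptive-field activations of the local models."""
    return np.exp(-0.5 * D * (x - centers) ** 2)

def global_prediction(x, centers, models):
    """Normalized weighted mean of the local linear predictions."""
    w = local_weights(x, centers)
    local_pred = np.array([a + b * x for a, b in models])
    return (w * local_pred).sum() / w.sum()

# three hypothetical local linear models tiling the input range
centers = np.array([0.0, 1.0, 2.0])
models = [(0.0, 1.0), (0.2, 0.8), (0.8, 0.5)]    # (intercept, slope) pairs
y = global_prediction(1.0, centers, models)      # dominated by the model at center 1.0
```

    Each local model only has to be valid near its own receptive field; the normalized weighting blends them into a smooth global predictor.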

  15. A diameter increment model for Red Fir in California and Southern Oregon

    Science.gov (United States)

    K. Leroy Dolph

    1992-01-01

    Periodic (10-year) diameter increment of individual red fir trees in California and southern Oregon can be predicted from the initial diameter and crown ratio of each tree, site index, percent slope, and aspect of the site. The model actually predicts the natural logarithm of the change in squared diameter inside bark between the start and the end of a 10-year growth period....
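    The back-transformation the abstract describes (the regression predicts the log of the change in squared inside-bark diameter, from which the end-of-period diameter is recovered) can be sketched as follows; the coefficients, the crown-ratio predictor, and their values are hypothetical placeholders, not the published model.

```python
import numpy as np

def predict_dbh_after_10yr(d_ib, crown_ratio=0.5, b=(-0.5, 0.8, 1.2)):
    """Sketch of a model of the form ln(DDS) = b0 + b1*ln(d) + b2*CR,
    where DDS = d_end^2 - d_start^2 (inside bark). All coefficients
    here are hypothetical."""
    b0, b1, b2 = b
    ln_dds = b0 + b1 * np.log(d_ib) + b2 * crown_ratio
    return np.sqrt(d_ib ** 2 + np.exp(ln_dds))   # invert DDS back to a diameter

d10 = predict_dbh_after_10yr(30.0)               # a 30 cm tree grows modestly
```

    Modeling the log of the squared-diameter change keeps predictions positive and stabilizes the variance across tree sizes, which is why this form is common in increment modeling.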

  16. An incremental anomaly detection model for virtual machines.

    Directory of Open Access Journals (Sweden)

    Hancui Zhang

    Full Text Available The Self-Organizing Map (SOM) algorithm, as an unsupervised learning method, has been applied in anomaly detection due to its capabilities of self-organizing and automatic anomaly prediction. However, because the algorithm is initialized randomly, it takes a long time to train a detection model. Besides, cloud platforms with large-scale virtual machines are prone to performance anomalies due to their high dynamics and resource-sharing characteristics, which makes the algorithm present low accuracy and low scalability. To address these problems, an Improved Incremental Self-Organizing Map (IISOM) model is proposed for anomaly detection of virtual machines. In this model, a heuristic-based initialization algorithm and a Weighted Euclidean Distance (WED) algorithm are introduced into SOM to speed up the training process and improve model quality. Meanwhile, a neighborhood-based searching algorithm is presented to accelerate detection by taking into account the large scale and highly dynamic features of virtual machines on a cloud platform. To demonstrate the effectiveness, experiments on the common benchmark KDD Cup dataset and a real dataset have been performed. Results suggest that IISOM has advantages in accuracy and convergence velocity of anomaly detection for virtual machines on a cloud platform.

  17. An incremental anomaly detection model for virtual machines

    Science.gov (United States)

    Zhang, Hancui; Chen, Shuyu; Liu, Jun; Zhou, Zhen; Wu, Tianshu

    2017-01-01

    The Self-Organizing Map (SOM) algorithm, as an unsupervised learning method, has been applied in anomaly detection due to its capabilities of self-organizing and automatic anomaly prediction. However, because the algorithm is initialized randomly, it takes a long time to train a detection model. Besides, cloud platforms with large-scale virtual machines are prone to performance anomalies due to their high dynamics and resource-sharing characteristics, which makes the algorithm present low accuracy and low scalability. To address these problems, an Improved Incremental Self-Organizing Map (IISOM) model is proposed for anomaly detection of virtual machines. In this model, a heuristic-based initialization algorithm and a Weighted Euclidean Distance (WED) algorithm are introduced into SOM to speed up the training process and improve model quality. Meanwhile, a neighborhood-based searching algorithm is presented to accelerate detection by taking into account the large scale and highly dynamic features of virtual machines on a cloud platform. To demonstrate the effectiveness, experiments on the common benchmark KDD Cup dataset and a real dataset have been performed. Results suggest that IISOM has advantages in accuracy and convergence velocity of anomaly detection for virtual machines on a cloud platform. PMID:29117245
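    The Weighted Euclidean Distance step at the heart of such SOM variants amounts to a per-feature weighting of the best-matching-unit search; a minimal sketch with an invented 4x4 map follows (the weights would normally encode per-feature importance).

```python
import numpy as np

def wed_bmu(grid, x, w):
    """Best-matching unit under a weighted Euclidean distance:
    features with a larger weight w[j] dominate the match."""
    d2 = (((grid - x) ** 2) * w).sum(axis=-1)        # grid: (rows, cols, dim)
    return np.unravel_index(np.argmin(d2), d2.shape)

grid = np.arange(48, dtype=float).reshape(4, 4, 3)   # toy 4x4 map of 3-D weight vectors
x = grid[2, 1] + 0.1                                 # sample close to neuron (2, 1)
w = np.ones(3)                                       # equal feature weights
bmu = wed_bmu(grid, x, w)                            # finds neuron (2, 1)
```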

  18. Is incremental hemodialysis ready to return on the scene? From empiricism to kinetic modelling.

    Science.gov (United States)

    Basile, Carlo; Casino, Francesco Gaetano; Kalantar-Zadeh, Kamyar

    2017-08-01

    Most people who make the transition to maintenance dialysis therapy are treated with a fixed dose thrice-weekly hemodialysis regimen without considering their residual kidney function (RKF). The RKF provides effective and naturally continuous clearance of both small and middle molecules, plays a major role in metabolic homeostasis, nutritional status, and cardiovascular health, and aids in fluid management. The RKF is associated with better patient survival and greater health-related quality of life, although these effects may be confounded by patient comorbidities. Preservation of the RKF requires a careful approach, including regular monitoring, avoidance of nephrotoxins, gentle control of blood pressure to avoid intradialytic hypotension, and an individualized dialysis prescription including the consideration of incremental hemodialysis. There is currently no standardized method for applying incremental hemodialysis in practice. Infrequent (once- to twice-weekly) hemodialysis regimens are often used arbitrarily, without knowing which patients would benefit the most from them or how to escalate the dialysis dose as RKF declines over time. The recently heightened interest in incremental hemodialysis has been hindered by the current limitations of the urea kinetic models (UKM) which tend to overestimate the dialysis dose required in the presence of substantial RKF. This is due to an erroneous extrapolation of the equivalence between renal urea clearance (Kru) and dialyser urea clearance (Kd), correctly assumed by the UKM, to the clinical domain. In this context, each ml/min of Kd clears the urea from the blood just as 1 ml/min of Kru does. By no means should such kinetic equivalence imply that 1 ml/min of Kd is clinically equivalent to 1 ml/min of urea clearance provided by the native kidneys. A recent paper by Casino and Basile suggested a variable target model (VTM) as opposed to the fixed model, because the VTM gives more clinical weight to the RKF and allows

  19. Modeling of surface stress effects on bending behavior of nanowires: Incremental deformation theory

    International Nuclear Information System (INIS)

    Song, F.; Huang, G.L.

    2009-01-01

    The surface stress effects on the bending behavior of nanowires have recently attracted a lot of attention. In this letter, the incremental deformation theory is applied for the first time to study the surface stress effects on the bending behavior of nanowires. Different from other linear continuum approaches, the local geometrical nonlinearity of the Lagrangian strain is considered; therefore, the contribution of the surface stresses is naturally derived by applying Hamilton's principle, and the influence of the surface stresses along all surfaces of the nanowires is captured. It is first shown that the surface stresses along all surfaces contribute not only to the effective Young's modulus of the nanowires but also to the loading term in the governing equation. The predictions of the effective Young's modulus and the resonance shift of the nanowires from the current method are compared with those from experimental measurements and other existing approaches, and the differences from other models are discussed. Finally, based on the current theory, the resonant shift predictions using both the modified Euler-Bernoulli beam and the modified Timoshenko beam theories are investigated and compared. It is noticed that the higher vibration modes are less sensitive to the surface stresses than the lower vibration modes.

  20. A Batch-Incremental Video Background Estimation Model using Weighted Low-Rank Approximation of Matrices

    KAUST Repository

    Dutta, Aritra

    2017-07-02

    Principal component pursuit (PCP) is a state-of-the-art approach for background estimation problems. Due to their higher computational cost, PCP algorithms, such as robust principal component analysis (RPCA) and its variants, are not feasible in processing high definition videos. To avoid the curse of dimensionality in those algorithms, several methods have been proposed to solve the background estimation problem in an incremental manner. We propose a batch-incremental background estimation model using a special weighted low-rank approximation of matrices. Through experiments with real and synthetic video sequences, we demonstrate that our method is superior to the state-of-the-art background estimation algorithms such as GRASTA, ReProCS, incPCP, and GFL.

  1. A Batch-Incremental Video Background Estimation Model using Weighted Low-Rank Approximation of Matrices

    KAUST Repository

    Dutta, Aritra; Li, Xin; Richtarik, Peter

    2017-01-01

    Principal component pursuit (PCP) is a state-of-the-art approach for background estimation problems. Due to their high computational cost, PCP algorithms, such as robust principal component analysis (RPCA) and its variants, are not feasible for processing high-definition videos. To avoid the curse of dimensionality in those algorithms, several methods have been proposed to solve the background estimation problem in an incremental manner. We propose a batch-incremental background estimation model using a special weighted low-rank approximation of matrices. Through experiments with real and synthetic video sequences, we demonstrate that our method is superior to the state-of-the-art background estimation algorithms such as GRASTA, ReProCS, incPCP, and GFL.
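    A generic alternating-least-squares sketch of weighted low-rank approximation, minimizing ||W ∘ (A − UV)||_F² over U and V, conveys the flavor of the optimization behind such models; this is not the paper's special weighting or algorithm, and the test matrix is synthetic.

```python
import numpy as np

def weighted_lowrank(A, W, r, iters=200, seed=0):
    """Alternating least squares for min ||W * (A - U @ V)||_F^2 with
    entrywise weights W >= 0 and target rank r (a generic sketch)."""
    m, n = A.shape
    rng = np.random.default_rng(seed)
    U, V = rng.normal(size=(m, r)), rng.normal(size=(r, n))
    ridge = 1e-9 * np.eye(r)                 # guards against singular updates
    for _ in range(iters):
        for i in range(m):                   # weighted LS for each row of U
            G = V * W[i]
            U[i] = np.linalg.solve(G @ V.T + ridge, G @ A[i])
        for j in range(n):                   # weighted LS for each column of V
            G = U.T * W[:, j]
            V[:, j] = np.linalg.solve(G @ U + ridge, G @ A[:, j])
    return U, V

rng = np.random.default_rng(1)
A = rng.normal(size=(20, 2)) @ rng.normal(size=(2, 15))   # exactly rank 2
W = np.where(rng.random(A.shape) < 0.3, 0.2, 1.0)         # uneven entrywise weights
U, V = weighted_lowrank(A, W, r=2)
```

    Unlike the SVD, which solves only the unweighted case, each row and column update here is a small weighted least-squares problem, so arbitrary entrywise weights can be honored.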

  2. Simulation and comparison of perturb and observe and incremental ...

    Indian Academy of Sciences (India)

    Perturb and Observe (P&O) algorithm and Incremental Conductance algorithm. Keywords: solar array; insolation; MPPT; modelling; P&O; incremental conductance. Int. J. Advances in Eng. Technol. 133–148.
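    For reference, the incremental-conductance rule compared in such studies exploits the maximum-power-point condition dP/dV = 0, i.e. dI/dV = −I/V. A minimal sketch with an invented I–V curve (the textbook update rule, not the paper's specific implementation):

```python
def inc_cond_step(v, i, v_prev, i_prev, v_ref, dv=0.5):
    """One textbook incremental-conductance MPPT update:
    at the maximum power point dP/dV = 0, i.e. dI/dV == -I/V."""
    dV, dI = v - v_prev, i - i_prev
    if dV == 0:
        if dI > 0:   v_ref += dv          # irradiance rose: MPP moved up
        elif dI < 0: v_ref -= dv
    elif dI / dV > -i / v:                # left of the MPP (dP/dV > 0)
        v_ref += dv
    elif dI / dV < -i / v:                # right of the MPP (dP/dV < 0)
        v_ref -= dv
    return v_ref

I_pv = lambda v: 10 - 0.1 * v ** 2        # hypothetical array I-V curve
v_ref, v_prev = 2.0, 0.0
i_prev = I_pv(v_prev)
for _ in range(30):
    v, i = v_ref, I_pv(v_ref)
    v_ref = inc_cond_step(v, i, v_prev, i_prev, v_ref)
    v_prev, i_prev = v, i
# the operating point settles within one dv step of the true MPP,
# V* = sqrt(10 / 0.3) ~ 5.77 V for this toy curve
```

    Unlike P&O, the sign test on dI/dV + I/V tells the controller which side of the MPP it is on, so it does not dither under a steady irradiance.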

  3. Distance-independent individual tree diameter-increment model for Thuya [Tetraclinis articulata (VAHL.) MAST.] stands in Tunisia

    OpenAIRE

    T. Sghaier; M. Tome; J. Tome; M. Sanchez-Gonzalez; I. Cañellas; R. Calama

    2013-01-01

    Aim of study: The aim of the work was to develop an individual tree diameter-increment model for Thuya (Tetraclinis articulata) in Tunisia. Area of study: The natural Tetraclinis articulata stands at Jbel Lattrech in north-eastern Tunisia. Material and methods: Data came from 200 trees located in 50 sample plots. The diameter at age t and the diameter increment for the last five years, obtained from cores taken at breast height, were measured for each tree. Four difference equations derived f...

  4. Dimension of linear models

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar

    1996-01-01

    Determination of the proper dimension of a given linear model is one of the most important tasks in applied modeling work. We consider here eight criteria that can be used to determine the dimension of the model, or equivalently, the number of components to use in the model. Four of these criteria are widely used ones, while the remaining four are derived from the H-principle of mathematical modeling. Many examples from practice show that the criteria derived from the H-principle function better than the known and popular criteria for the number of components. We briefly review the basic problems in determining the dimension of linear models; then each of the eight measures is treated. The results are illustrated by examples.

  5. Forecasting the summer rainfall in North China using the year-to-year increment approach

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    A new approach to forecasting the year-to-year increment of rainfall in North China in July-August (JA) is proposed. DY is defined as the difference of a variable between the current year and the preceding year (the year-to-year increment). NR denotes the seasonal mean precipitation rate over North China in JA. After analyzing the atmospheric circulation anomalies associated with the DY of NR, five key predictors for the DY of NR have been identified. The prediction model for the DY of NR is established using the multiple linear regression method, and NR is then obtained by adding the forecasted DY of NR to the preceding year's observed NR. The prediction model shows a high correlation coefficient (0.8) between the simulated and the observed DY of NR throughout the period 1965-1999, with an average relative root mean square error of 19% for the percentage of precipitation rate anomaly over North China. In a hindcast for 2000-2007, the model gives an average relative root mean square error of 21% for the percentage of precipitation rate anomaly over North China, and it reproduces the downward trend of the percentage of precipitation rate anomaly over North China during 1965-2006. Current operational prediction models of summer precipitation have average forecast scores of only 60%-70%, and the summer rainfall over North China has been even more difficult to forecast. Thus this new approach of predicting the year-to-year increment of the summer precipitation (and hence the summer precipitation itself) has the potential to significantly improve operational forecasting skill for summer precipitation.
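    The essence of the year-to-year increment approach (regress the increment DY on predictors, then add the forecasted increment to the preceding year's observation) fits in a few lines. The predictors and coefficients below are synthetic stand-ins, not the five key predictors identified in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 40
X = rng.normal(size=(T, 3))                  # synthetic predictor increments
beta = np.array([1.5, -0.8, 0.5])
dy = X @ beta + 0.1 * rng.normal(size=T)     # year-to-year increment of rainfall
y = 100 + np.cumsum(dy)                      # the rainfall series itself

# fit the increment, not the level, by multiple linear regression
coef, *_ = np.linalg.lstsq(X[:-1], dy[:-1], rcond=None)

# forecast: predicted increment added to the preceding year's observation
y_hat = y[-2] + X[-1] @ coef
```

    Differencing removes the slowly varying component of the series, so the regression only has to explain the year-to-year change, which is the part the circulation predictors actually drive.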

  6. Ordinal Log-Linear Models for Contingency Tables

    Directory of Open Access Journals (Sweden)

    Brzezińska Justyna

    2016-12-01

    Full Text Available A log-linear analysis is a method providing a comprehensive scheme for describing the association between categorical variables in a contingency table. The log-linear model specifies how the expected counts depend on the levels of the categorical variables for these cells and provides detailed information on the associations. The aim of this paper is to present theoretical, as well as empirical, aspects of ordinal log-linear models used for contingency tables with ordinal variables. We introduce log-linear models for ordinal variables: the linear-by-linear association model, the row effect model, the column effect model and Goodman's RC model. The algorithm, its advantages and its disadvantages are discussed in the paper. An empirical analysis is conducted with the use of R.
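    The linear-by-linear association model adds a single β·u_i·v_j term (with fixed ordinal scores u and v) to the independence log-linear model. A self-contained Newton-Raphson fit on a small synthetic table illustrates this; the scores and parameter values are invented, and this is a bare-bones stand-in for what R's `glm` with a Poisson family would do.

```python
import numpy as np

I_, J_ = 4, 3
u, v = np.arange(1, I_ + 1), np.arange(1, J_ + 1)      # integer ordinal scores

# design: intercept, row effects, column effects, linear-by-linear term u_i * v_j
rows = []
for i in range(I_):
    for j in range(J_):
        x = [1.0]
        x += [1.0 if i == k else 0.0 for k in range(1, I_)]
        x += [1.0 if j == k else 0.0 for k in range(1, J_)]
        x.append(u[i] * v[j])
        rows.append(x)
X = np.array(rows)

true = np.array([0.5, 0.3, -0.2, 0.1, 0.4, -0.3, 0.1])  # last entry is beta
y = np.exp(X @ true)                     # fit to expected counts for a clean check

b = np.zeros(X.shape[1])
b[0] = np.log(y.mean())                  # sensible starting intercept
for _ in range(100):                     # Newton-Raphson for the Poisson model
    m = np.exp(X @ b)
    b += np.linalg.solve(X.T @ (m[:, None] * X), X.T @ (y - m))
```

    The single parameter β captures the entire ordinal association: β > 0 means high rows go with high columns, and the local odds ratio between adjacent cells is exp(β) under unit-spaced scores.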

  7. Parameterized Linear Longitudinal Airship Model

    Science.gov (United States)

    Kulczycki, Eric; Elfes, Alberto; Bayard, David; Quadrelli, Marco; Johnson, Joseph

    2010-01-01

    A parameterized linear mathematical model of the longitudinal dynamics of an airship is undergoing development. This model is intended to be used in designing control systems for future airships that would operate in the atmospheres of Earth and remote planets. Heretofore, the development of linearized models of the longitudinal dynamics of airships has been costly in that it has been necessary to perform extensive flight testing and to use system-identification techniques to construct models that fit the flight-test data. The present model is a generic one that can be relatively easily specialized to approximate the dynamics of specific airships at specific operating points, without need for further system identification, and with significantly less flight testing. The approach taken in the present development is to merge the linearized dynamical equations of an airship with techniques for estimation of aircraft stability derivatives, and to thereby make it possible to construct a linearized dynamical model of the longitudinal dynamics of a specific airship from geometric and aerodynamic data pertaining to that airship. (It is also planned to develop a model of the lateral dynamics by use of the same methods.) All of the aerodynamic data needed to construct the model of a specific airship can be obtained from wind-tunnel testing and computational fluid dynamics.

  8. Linear and non-linear autoregressive models for short-term wind speed forecasting

    International Nuclear Information System (INIS)

    Lydia, M.; Suresh Kumar, S.; Immanuel Selvakumar, A.; Edwin Prem Kumar, G.

    2016-01-01

    Highlights: • Models for wind speed prediction at 10-min intervals up to 1 h built on time-series wind speed data. • Four different multivariate models for wind speed built based on exogenous variables. • Non-linear models built using three data mining algorithms outperform the linear models. • Autoregressive models based on wind direction perform better than other models. - Abstract: Wind speed forecasting aids in estimating the energy produced from wind farms. The soaring energy demands of the world and minimal availability of conventional energy sources have significantly increased the role of non-conventional sources of energy like solar, wind, etc. Development of models for wind speed forecasting with higher reliability and greater accuracy is the need of the hour. In this paper, models for predicting wind speed at 10-min intervals up to 1 h have been built based on linear and non-linear autoregressive moving average models with and without external variables. The autoregressive moving average models based on wind direction and annual trends have been built using data obtained from Sotavento Galicia Plc. and autoregressive moving average models based on wind direction, wind shear and temperature have been built on data obtained from Centre for Wind Energy Technology, Chennai, India. While the parameters of the linear models are obtained using the Gauss–Newton algorithm, the non-linear autoregressive models are developed using three different data mining algorithms. The accuracy of the models has been measured using three performance metrics namely, the Mean Absolute Error, Root Mean Squared Error and Mean Absolute Percentage Error.
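    A minimal version of the linear route described above (an autoregressive model fitted by least squares and scored with the three metrics the abstract names) can be sketched on synthetic 10-min wind-speed data; the AR(2) coefficients and noise level are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
n, phi1, phi2 = 500, 0.7, 0.2
s = np.full(n, 8.0)                              # synthetic 10-min wind speeds (m/s)
for t in range(2, n):
    s[t] = 8 + phi1 * (s[t-1] - 8) + phi2 * (s[t-2] - 8) + 0.3 * rng.normal()

# fit an AR(2) model by least squares on lagged values
X = np.column_stack([np.ones(n - 2), s[1:-1], s[:-2]])
coef, *_ = np.linalg.lstsq(X, s[2:], rcond=None)
pred = X @ coef                                  # one-step-ahead in-sample forecasts

err = s[2:] - pred
mae = np.mean(np.abs(err))                       # Mean Absolute Error
rmse = np.sqrt(np.mean(err ** 2))                # Root Mean Squared Error
mape = 100 * np.mean(np.abs(err / s[2:]))        # Mean Absolute Percentage Error
```

    Exogenous variables such as wind direction or temperature would simply become extra columns of the design matrix; the non-linear variants in the paper replace the least-squares fit with data-mining algorithms while keeping the same lag structure and metrics.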

  9. Correlations and Non-Linear Probability Models

    DEFF Research Database (Denmark)

    Breen, Richard; Holm, Anders; Karlson, Kristian Bernt

    2014-01-01

    Although the parameters of logit and probit and other non-linear probability models are often explained and interpreted in relation to the regression coefficients of an underlying linear latent variable model, we argue that they may also be usefully interpreted in terms of the correlations between the dependent variable of the latent variable model and its predictor variables. We show how this correlation can be derived from the parameters of non-linear probability models, develop tests for the statistical significance of the derived correlation, and illustrate its usefulness in two applications. Under certain circumstances, which we explain, the derived correlation provides a way of overcoming the problems inherent in cross-sample comparisons of the parameters of non-linear probability models.

  10. The impact of weather conditions on dynamics of Hylocomium splendens annual increment and net production in forest communities of forest-steppe zone in Khakassia

    Directory of Open Access Journals (Sweden)

    I. A. Goncharova

    2015-12-01

    Full Text Available The dynamics of annual increments of the green moss Hylocomium splendens (Hedw.) Schimp. in B.S.G. in the Khakassia forest-steppe zone has been studied. The values of the moss linear and phytomass increments were investigated in different habitats for 6 years, and the aboveground annual production of H. splendens in the phytocenosis was estimated. Linear increments of H. splendens growing under the tree canopy and in openings between trees were not significantly different, whereas phytomass increments under the tree canopy were significantly higher than in the openings between trees. The density of moss mats and the proportion between leaves and stems were calculated. It was revealed that climatic factors influence the moss increments to a different degree and for different durations in different habitats. Linear increments of H. splendens in different habitats respond synchronously to changes in weather factors: the air temperature was most important at the beginning and the end of the vegetation period, while the amount of precipitation was more important in the middle of the growth period. Phytomass increments of H. splendens in different habitats respond differently to weather conditions: increments under the tree canopy are not sensitive to air temperature, and are more sensitive to precipitation in the middle of the growth period than those in openings between trees. The specificity of the climatic factors' influence on the biomass growth depends on habitat conditions.

  11. Generalized, Linear, and Mixed Models

    CERN Document Server

    McCulloch, Charles E; Neuhaus, John M

    2011-01-01

    An accessible and self-contained introduction to statistical models-now in a modernized new editionGeneralized, Linear, and Mixed Models, Second Edition provides an up-to-date treatment of the essential techniques for developing and applying a wide variety of statistical models. The book presents thorough and unified coverage of the theory behind generalized, linear, and mixed models and highlights their similarities and differences in various construction, application, and computational aspects.A clear introduction to the basic ideas of fixed effects models, random effects models, and mixed m

  12. Multivariate generalized linear mixed models using R

    CERN Document Server

    Berridge, Damon Mark

    2011-01-01

    Multivariate Generalized Linear Mixed Models Using R presents robust and methodologically sound models for analyzing large and complex data sets, enabling readers to answer increasingly complex research questions. The book applies the principles of modeling to longitudinal data from panel and related studies via the Sabre software package in R. A Unified Framework for a Broad Class of Models The authors first discuss members of the family of generalized linear models, gradually adding complexity to the modeling framework by incorporating random effects. After reviewing the generalized linear model notation, they illustrate a range of random effects models, including three-level, multivariate, endpoint, event history, and state dependence models. They estimate the multivariate generalized linear mixed models (MGLMMs) using either standard or adaptive Gaussian quadrature. The authors also compare two-level fixed and random effects linear models. The appendices contain additional information on quadrature, model...

  13. Modeling patterns in data using linear and related models

    International Nuclear Information System (INIS)

    Engelhardt, M.E.

    1996-06-01

    This report considers the use of linear models for analyzing data related to reliability and safety issues of the type usually associated with nuclear power plants. The report discusses some of the general results of linear regression analysis, such as the model assumptions and properties of the estimators of the parameters. The results are motivated with examples of operational data. Results about the important case of a linear regression model with one covariate are covered in detail. This case includes analysis of time trends. The analysis is applied with two different sets of time trend data. Diagnostic procedures and tests for the adequacy of the model are discussed. Some related methods such as weighted regression and nonlinear models are also considered. A discussion of the general linear model is also included. Appendix A gives some basic SAS programs and outputs for some of the analyses discussed in the body of the report. Appendix B is a review of some of the matrix theoretic results which are useful in the development of linear models
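    The single-covariate time-trend case discussed in the report reduces to fitting y = a + b·t and testing the slope. A compact sketch with invented yearly event counts (the data and the 5% cutoff of |t| > 2 are illustrative, not from the report):

```python
import numpy as np

# hypothetical yearly event counts with a downward trend (invented data)
t = np.arange(1990, 2005, dtype=float)
y = np.array([12, 11, 13, 10, 9, 10, 8, 9, 7, 8, 6, 7, 5, 6, 5], dtype=float)

# simple linear regression y = a + b*t fitted by least squares
X = np.column_stack([np.ones_like(t), t])
(a, b), *_ = np.linalg.lstsq(X, y, rcond=None)

# slope standard error and t-statistic for the time-trend test
resid = y - X @ np.array([a, b])
s2 = resid @ resid / (len(t) - 2)                 # residual variance, n - 2 d.o.f.
se_b = np.sqrt(s2 / np.sum((t - t.mean()) ** 2))
t_stat = b / se_b                                 # large |t_stat| => significant trend
```

    Diagnostics such as residual plots, mentioned in the report, would follow directly from `resid`; weighted regression replaces the plain least-squares solve when observation variances differ.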

  14. OXYGEN UPTAKE KINETICS DURING INCREMENTAL- AND DECREMENTAL-RAMP CYCLE ERGOMETRY

    Directory of Open Access Journals (Sweden)

    Fadıl Özyener

    2011-09-01

    Full Text Available The pulmonary oxygen uptake (VO2) response to incremental-ramp cycle ergometry typically demonstrates lagged-linear first-order kinetics with a slope of ~10-11 ml·min-1·W-1, both above and below the lactate threshold (θL), i.e. there is no discernible VO2 slow component (or "excess" VO2) above θL. We were interested in determining whether a reverse ramp profile would yield the same response dynamics. Ten healthy males performed a maximum incremental ramp (15-30 W·min-1, depending on fitness). On another day, the work rate (WR) was increased abruptly to the incremental maximum and then decremented at the same rate of 15-30 W·min-1 (step-decremental ramp). Five subjects also performed a sub-maximal ramp-decremental test from 90% of θL. VO2 was determined breath-by-breath from continuous monitoring of respired volumes (turbine) and gas concentrations (mass spectrometer). The incremental-ramp VO2-WR slope was 10.3 ± 0.7 ml·min-1·W-1, whereas that of the descending limb of the decremental ramp was 14.2 ± 1.1 ml·min-1·W-1 (p < 0.005). The sub-maximal decremental-ramp slope, however, was only 9.8 ± 0.9 ml·min-1·W-1: not significantly different from that of the incremental ramp. This suggests that the VO2 response in the supra-θL domain of incremental-ramp exercise manifests not actual, but pseudo, first-order kinetics.

  15. Incremental prognostic value of coronary computed tomographic angiography high-risk plaque characteristics in newly symptomatic patients.

    Science.gov (United States)

    Fujimoto, Shinichiro; Kondo, Takeshi; Takamura, Kazuhisa; Baber, Usman; Shinozaki, Tomohiro; Nishizaki, Yuji; Kawaguchi, Yuko; Matsumori, Rie; Hiki, Makoto; Miyauchi, Katsumi; Daida, Hiroyuki; Hecht, Harvey; Stone, Gregg W; Narula, Jagat

    2016-06-01

    The incremental prognostic value of plaque features in coronary computed tomographic angiography (CTA) has not been well assessed. This study was designed to determine whether CTA high-risk plaques have prognostic value incremental to the Framingham risk score (FRS) and the severity of luminal obstruction. A total of 628 newly symptomatic patients without known coronary artery disease underwent CTA. They were followed for a median of 677 days, during which there were 26 cardiac events, including cardiac death, acute myocardial infarction, and hospitalization for unstable angina. The incremental prognostic value of adding plaque characteristics to the number of diseased vessels and the FRS was evaluated using 3 Cox models and net reclassification indexes. The discrimination index was significantly increased by adding the number of diseased vessels to the FRS (change in c-statistic from 65.8% to 78.6%, p=0.028) but not significantly by further adding plaque characteristics (change in c-statistic from 78.6% to 80.0%, p=0.812). However, the improved model fit obtained by adding plaque characteristics to the linear combination of the risk score and the number of diseased vessels (p=0.007 by likelihood ratio test), together with the lowest Akaike information criterion value for that model, indicated that plaque characteristics improved both predictive accuracy and discrimination. More subjects reclassified by plaque characteristics moved in a direction consistent with their subsequent cardiac event status than in an inconsistent direction. Evaluation of CTA plaque characteristics may provide incremental prognostic value to the number of diseased vessels and the FRS. Copyright © 2015 Japanese College of Cardiology. Published by Elsevier Ltd. All rights reserved.

  16. Comparison of linear and non-linear models for predicting energy expenditure from raw accelerometer data.

    Science.gov (United States)

    Montoye, Alexander H K; Begum, Munni; Henning, Zachary; Pfeiffer, Karin A

    2017-02-01

    This study had three purposes, all related to evaluating energy expenditure (EE) prediction accuracy from body-worn accelerometers: (1) compare linear regression to linear mixed models, (2) compare linear models to artificial neural network (ANN) models, and (3) compare the accuracy of accelerometers placed on the hip, thigh, and wrists. Forty individuals performed 13 activities in a 90 min semi-structured, laboratory-based protocol. Participants wore accelerometers on the right hip, right thigh, and both wrists, and a portable metabolic analyzer (EE criterion). Four EE prediction models were developed for each accelerometer: linear regression, linear mixed, and two ANN models. EE prediction accuracy was assessed using correlations, root mean square error (RMSE), and bias and was compared across models and accelerometers using repeated-measures analysis of variance. For all accelerometer placements, there were no significant differences in correlations or RMSE between linear regression and linear mixed models (correlations: r = 0.71-0.88, RMSE: 1.11-1.61 METs; p > 0.05). For the thigh-worn accelerometer, there were no differences in correlations or RMSE between linear and ANN models (ANN-correlations: r = 0.89, RMSE: 1.07-1.08 METs. Linear models-correlations: r = 0.88, RMSE: 1.10-1.11 METs; p > 0.05). Conversely, one ANN had higher correlations and lower RMSE than both linear models for the hip (ANN-correlation: r = 0.88, RMSE: 1.12 METs. Linear models-correlations: r = 0.86, RMSE: 1.18-1.19 METs; p < 0.05), and the ANN models also had higher correlations and lower RMSE than the linear models for the wrist-worn accelerometers (ANN-correlations: r = 0.82-0.84, RMSE: 1.26-1.32 METs. Linear models-correlations: r = 0.71-0.73, RMSE: 1.55-1.61 METs; p < 0.05). For wrist-worn accelerometers, ANN models offer a significant improvement in EE prediction accuracy over linear models. Conversely, linear models showed similar EE prediction accuracy to machine learning models for hip- and thigh

  17. Combining Compact Representation and Incremental Generation in Large Games with Sequential Strategies

    DEFF Research Database (Denmark)

    Bosansky, Branislav; Xin Jiang, Albert; Tambe, Milind

    2015-01-01

representation of sequential strategies and linear programming, or by incremental strategy generation of iterative double-oracle methods. In this paper, we present a novel hybrid of these two approaches: the compact-strategy double-oracle (CS-DO) algorithm, which combines the advantages of the compact representation...

  18. Prediction model of ammonium uranyl carbonate calcination by microwave heating using incremental improved Back-Propagation neural network

    Energy Technology Data Exchange (ETDEWEB)

    Li Yingwei [Faculty of Metallurgical and Energy Engineering, Kunming University of Science and Technology, Kunming, Yunnan Province 650093 (China); Key Laboratory of Unconventional Metallurgy, Ministry of Education, Kunming University of Science and Technology, Kunming, Yunnan Province 650093 (China); Peng Jinhui, E-mail: jhpeng@kmust.edu.c [Faculty of Metallurgical and Energy Engineering, Kunming University of Science and Technology, Kunming, Yunnan Province 650093 (China); Key Laboratory of Unconventional Metallurgy, Ministry of Education, Kunming University of Science and Technology, Kunming, Yunnan Province 650093 (China); Liu Bingguo [Faculty of Metallurgical and Energy Engineering, Kunming University of Science and Technology, Kunming, Yunnan Province 650093 (China); Key Laboratory of Unconventional Metallurgy, Ministry of Education, Kunming University of Science and Technology, Kunming, Yunnan Province 650093 (China); Li Wei [Key Laboratory of Unconventional Metallurgy, Ministry of Education, Kunming University of Science and Technology, Kunming, Yunnan Province 650093 (China); Huang Daifu [No. 272 Nuclear Industry Factory, China National Nuclear Corporation, Hengyang, Hunan Province 421002 (China); Zhang Libo [Faculty of Metallurgical and Energy Engineering, Kunming University of Science and Technology, Kunming, Yunnan Province 650093 (China); Key Laboratory of Unconventional Metallurgy, Ministry of Education, Kunming University of Science and Technology, Kunming, Yunnan Province 650093 (China)

    2011-05-15

Research highlights: The incremental improved Back-Propagation neural network prediction model, using the Levenberg-Marquardt algorithm based on optimization theory, is put forward. The prediction model of the nonlinear system is built, which can effectively predict the experiment of microwave calcination of ammonium uranyl carbonate (AUC). AUC absorbs microwave energy, and microwave heating can quickly decompose AUC. In the experiment of microwave calcination of AUC, the contents of U and U{sup 4+} increased with increasing microwave power and irradiation time, and decreased with increasing material average depth. - Abstract: An incremental improved Back-Propagation (BP) neural network prediction model is put forward. It helps overcome the problems that exist in the microwave calcination of ammonium uranyl carbonate (AUC): a long testing cycle, a large testing quantity, difficult optimization of process parameters, training data that arrive in incremental batches, and system memory limits that can make training on the full data set infeasible. The prediction model of the nonlinear system was built and could effectively predict the microwave calcination experiments. The predicted results indicated that the contents of U and U{sup 4+} increased with increasing microwave power and irradiation time, and decreased with increasing material average depth.
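The incremental-batch regime described above can be sketched in a few lines. The sketch below is a generic illustration, not the authors' model: it trains a small one-hidden-layer network with plain gradient descent (the paper uses Levenberg-Marquardt) on synthetic data standing in for (power, irradiation time, depth) measurements, consuming the training set in batches of 100 as if the data arrived incrementally.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic stand-in for (microwave power, irradiation time, material depth) -> content
X = rng.uniform(-1, 1, (600, 3))
y = 2 * X[:, 0] + X[:, 1] - X[:, 2]

# one-hidden-layer network; plain SGD replaces Levenberg-Marquardt to keep the sketch short
W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, 8);      b2 = 0.0
lr = 0.1

for epoch in range(500):
    for start in range(0, 600, 100):          # data consumed in increments of 100
        xb, yb = X[start:start + 100], y[start:start + 100]
        h = np.tanh(xb @ W1 + b1)             # forward pass
        pred = h @ W2 + b2
        err = pred - yb                       # backward pass (mean squared error)
        gW2 = h.T @ err / len(xb); gb2 = err.mean()
        dh = np.outer(err, W2) * (1 - h ** 2)
        gW1 = xb.T @ dh / len(xb); gb1 = dh.mean(axis=0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1

pred = np.tanh(X @ W1 + b1) @ W2 + b2
mse = np.mean((pred - y) ** 2)                # should be small relative to var(y) ~ 2
```

Because each pass only touches one batch, the same loop works when batches arrive over time and the full data set never fits in memory at once.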

  19. Prediction model of ammonium uranyl carbonate calcination by microwave heating using incremental improved Back-Propagation neural network

    International Nuclear Information System (INIS)

    Li Yingwei; Peng Jinhui; Liu Bingguo; Li Wei; Huang Daifu; Zhang Libo

    2011-01-01

Research highlights: → The incremental improved Back-Propagation neural network prediction model, using the Levenberg-Marquardt algorithm based on optimization theory, is put forward. → The prediction model of the nonlinear system is built, which can effectively predict the experiment of microwave calcination of ammonium uranyl carbonate (AUC). → AUC absorbs microwave energy, and microwave heating can quickly decompose AUC. → In the experiment of microwave calcination of AUC, the contents of U and U 4+ increased with increasing microwave power and irradiation time, and decreased with increasing material average depth. - Abstract: An incremental improved Back-Propagation (BP) neural network prediction model is put forward. It helps overcome the problems that exist in the microwave calcination of ammonium uranyl carbonate (AUC): a long testing cycle, a large testing quantity, difficult optimization of process parameters, training data that arrive in incremental batches, and system memory limits that can make training on the full data set infeasible. The prediction model of the nonlinear system was built and could effectively predict the microwave calcination experiments. The predicted results indicated that the contents of U and U 4+ increased with increasing microwave power and irradiation time, and decreased with increasing material average depth.

  20. An incremental procedure model for e-learning projects at universities

    Directory of Open Access Journals (Sweden)

    Pahlke, Friedrich

    2006-11-01

E-learning projects at universities are produced under different conditions than in industry. The main characteristic of many university projects is that they are realized essentially as solo efforts. In contrast, in private industry the different, interdisciplinary skills necessary for the development of e-learning are typically supplied by a multimedia agency. A procedure specifically tailored for use at universities is therefore required to master the amount and complexity of the tasks. In this paper an incremental procedure model is presented which describes the proceeding in every phase of the project. It allows a high degree of flexibility and emphasizes the didactic concept rather than the technical implementation. In the second part, we illustrate the practical use of the theoretical procedure model based on the project “Online training in Genetic Epidemiology”.

  1. Incremental Closed-loop Identification of Linear Parameter Varying Systems

    DEFF Research Database (Denmark)

    Bendtsen, Jan Dimon; Trangbæk, Klaus

    2011-01-01

In general, closed-loop system identification is more difficult than open-loop identification. In this paper we prove that the so-called Hansen Scheme, a technique known from linear time-invariant systems theory for transforming closed-loop system identification problems into open-loop-like problems, can be extended...

  2. Core seismic behaviour: linear and non-linear models

    International Nuclear Information System (INIS)

    Bernard, M.; Van Dorsselaere, M.; Gauvain, M.; Jenapierre-Gantenbein, M.

    1981-08-01

The usual methodology for core seismic behaviour analysis leads to a double, complementary approach: to define a core model to be included in the reactor-block seismic response analysis, simple enough but representative of the basic movements (diagrid or slab); and to define a finer core model, with basic data issued from the first model. This paper presents the history of the different models of both kinds. The inert mass model (IMM) yielded a first rough diagrid movement. The direct linear model (DLM), without shocks and with sodium as an added mass, led to two different versions: DLM 1, with independent movements of the fuel and radial blanket subassemblies, and DLM 2, with a combined core movement. The non-linear model (NLM) ''CORALIE'' uses the same basic modelization (finite-element beams) but accounts for shocks. It studies the response of a diameter on flats and takes into account the fluid coupling and the wrapper tube flexibility at the pad level. Damping consists of a modal part of 2% and a part due to shocks. Finally, ''CORALIE'' yields the time history of the displacements and efforts on the supports, but the damping (probably greater than 2%) and the fluid-structure interaction remain to be specified. The validation experiments were performed on a RAPSODIE core mock-up at scale 1, in 1/3 similitude to SPX 1. The equivalent linear model (ELM) was developed for the SPX 1 reactor-block response analysis at a specified seismic level (SB or SM). It is composed of several oscillators fixed to the diagrid and yields the same maximum displacements and efforts as the NLM. The SPX 1 core seismic analysis, with a diagrid input spectrum corresponding to a 0.1 g group acceleration, has been carried out with these models; some aspects of these calculations are presented here.

  3. Atmospheric response to Saharan dust deduced from ECMWF reanalysis increments

    Science.gov (United States)

    Kishcha, P.; Alpert, P.; Barkan, J.; Kirchner, I.; Machenhauer, B.

    2003-04-01

This study focuses on the atmospheric temperature response to dust deduced from a new source of data: the European Reanalysis (ERA) increments. These increments are the systematic errors of global climate models, generated in the reanalysis procedure. The model errors result not only from the lack of desert dust but also from a complex combination of many kinds of model errors. Over the Sahara desert the lack of the dust radiative effect is believed to be a predominant model defect which should significantly affect the increments. This dust effect was examined by considering the correlation between the increments and remotely sensed dust. Comparisons were made between April temporal variations of the ERA analysis increments and the variations of the Total Ozone Mapping Spectrometer aerosol index (AI) between 1979 and 1993. A distinctive structure was identified in the distribution of correlation, composed of three nested areas with high positive correlation (> 0.5), low correlation, and high negative correlation (< -0.5). Analysis of circulation data from the European Centre for Medium-Range Weather Forecasts (ECMWF) suggests that the PCA (NCA), the area of positive (negative) correlation, corresponds mainly to anticyclonic (cyclonic) flow, negative (positive) vorticity, and downward (upward) airflow. These facts indicate an interaction between dust-forced heating/cooling and atmospheric circulation. The April correlation results are supported by the analysis of the vertical distribution of dust concentration, derived from the 24-hour dust prediction system at Tel Aviv University (website: http://earth.nasa.proj.ac.il/dust/current/). For other months the analysis is more complicated because of the substantial increase in humidity along with the northward progress of the ITCZ and its significant impact on the increments.

  4. Linear Logistic Test Modeling with R

    Science.gov (United States)

    Baghaei, Purya; Kubinger, Klaus D.

    2015-01-01

    The present paper gives a general introduction to the linear logistic test model (Fischer, 1973), an extension of the Rasch model with linear constraints on item parameters, along with eRm (an R package to estimate different types of Rasch models; Mair, Hatzinger, & Mair, 2014) functions to estimate the model and interpret its parameters. The…

  5. Explorative methods in linear models

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar

    2004-01-01

The author has developed the H-method of mathematical modeling, which builds up the model by parts, where each part is optimized with respect to prediction. Besides providing better predictions than traditional methods, these methods provide graphic procedures for analyzing different features in data. These graphic methods extend the well-known methods and results of Principal Component Analysis to any linear model. Here the graphic procedures are applied to linear regression and Ridge Regression.

  6. Sparse Linear Identifiable Multivariate Modeling

    DEFF Research Database (Denmark)

    Henao, Ricardo; Winther, Ole

    2011-01-01

In this paper we consider sparse and identifiable linear latent variable (factor) and linear Bayesian network models for parsimonious analysis of multivariate data. We propose a computationally efficient method for joint parameter and model inference, and model comparison. It consists of a fully... and is benchmarked on artificial and real biological data sets. SLIM is closest in spirit to LiNGAM (Shimizu et al., 2006), but differs substantially in inference, Bayesian network structure learning and model comparison. Experimentally, SLIM performs equally well or better than LiNGAM with comparable...

  7. Incremental validity of positive orientation: predictive efficiency beyond the five-factor model

    Directory of Open Access Journals (Sweden)

    Łukasz Roland Miciuk

    2016-05-01

Background: The relation of positive orientation (a basic predisposition to think positively of oneself, one's life and one's future) to personality traits is still disputable. The purpose of the described research was to verify the hypothesis that positive orientation has predictive efficiency beyond the five-factor model. Participants and procedure: One hundred and thirty participants (mean age M = 24.84) completed the following questionnaires: the Self-Esteem Scale (SES), the Satisfaction with Life Scale (SWLS), the Life Orientation Test-Revised (LOT-R), the Positivity Scale (P-SCALE), the NEO Five Factor Inventory (NEO-FFI), the Self-Concept Clarity Scale (SCC), the Generalized Self-Efficacy Scale (GSES) and the Life Engagement Test (LET). Results: The introduction of positive orientation as an additional predictor in the second step of the regression analyses led to better prediction of the following variables: purpose in life, self-concept clarity and generalized self-efficacy. This effect was strongest for predicting purpose in life (a 14% increment in explained variance). Conclusions: The results confirmed our hypothesis that positive orientation shows incremental validity: its inclusion in the regression model, in addition to the five main factors of personality, increases the amount of explained variance. These findings provide further evidence for the legitimacy of measuring positive orientation and personality traits separately.
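The hierarchical-regression check behind "incremental validity" (fit the Big Five alone, then add positive orientation and examine the R² increment) can be sketched as follows. The data here are synthetic and the variable names are assumptions for illustration, not the study's data set.

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

rng = np.random.default_rng(1)
n = 130                                      # same sample size as the study
big_five = rng.normal(size=(n, 5))           # step-1 predictors (five traits)
pos_orient = rng.normal(size=n)              # step-2 predictor (positive orientation)
purpose = big_five @ np.array([0.3, 0.2, 0, 0, 0.1]) + 0.6 * pos_orient + rng.normal(size=n)

r2_step1 = r_squared(big_five, purpose)
r2_step2 = r_squared(np.column_stack([big_five, pos_orient]), purpose)
delta_r2 = r2_step2 - r2_step1               # the R^2 increment tested in step 2
```

A nonzero `delta_r2` (tested for significance in the study with an F-test) is what the abstract reports as incremental validity.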

  8. Single-point incremental forming and formability-failure diagrams

    DEFF Research Database (Denmark)

    Silva, M.B.; Skjødt, Martin; Atkins, A.G.

    2008-01-01

    In a recent work [1], the authors constructed a closed-form analytical model that is capable of dealing with the fundamentals of single point incremental forming and explaining the experimental and numerical results published in the literature over the past couple of years. The model is based...... of deformation that are commonly found in general single point incremental forming processes; and (ii) to investigate the formability limits of SPIF in terms of ductile damage mechanics and the question of whether necking does, or does not, precede fracture. Experimentation by the authors together with data...

  9. Latent log-linear models for handwritten digit classification.

    Science.gov (United States)

    Deselaers, Thomas; Gass, Tobias; Heigold, Georg; Ney, Hermann

    2012-06-01

    We present latent log-linear models, an extension of log-linear models incorporating latent variables, and we propose two applications thereof: log-linear mixture models and image deformation-aware log-linear models. The resulting models are fully discriminative, can be trained efficiently, and the model complexity can be controlled. Log-linear mixture models offer additional flexibility within the log-linear modeling framework. Unlike previous approaches, the image deformation-aware model directly considers image deformations and allows for a discriminative training of the deformation parameters. Both are trained using alternating optimization. For certain variants, convergence to a stationary point is guaranteed and, in practice, even variants without this guarantee converge and find models that perform well. We tune the methods on the USPS data set and evaluate on the MNIST data set, demonstrating the generalization capabilities of our proposed models. Our models, although using significantly fewer parameters, are able to obtain competitive results with models proposed in the literature.

  10. A parallel ILP algorithm that incorporates incremental batch learning

    OpenAIRE

    Nuno Fonseca; Rui Camacho; Fernado Silva

    2003-01-01

In this paper we tackle the problems of efficiency and scalability faced by Inductive Logic Programming (ILP) systems. We propose the use of parallelism to improve efficiency and the use of incremental batch learning to address the scalability problem. We describe a novel parallel algorithm that incorporates into ILP the method of incremental batch learning. The theoretical complexity of the algorithm indicates that a linear speedup can be achieved.

  11. Equivalent linear damping characterization in linear and nonlinear force-stiffness muscle models.

    Science.gov (United States)

    Ovesy, Marzieh; Nazari, Mohammad Ali; Mahdavian, Mohammad

    2016-02-01

In the current research, the muscle equivalent linear damping coefficient, which is introduced through the force-velocity relation in a muscle model, and the corresponding time constant are investigated. To reach this goal, a 1D skeletal muscle model was used, with two characterizations: a linear force-stiffness relationship (Hill-type model) and a nonlinear one. The OpenSim platform was used for verification of the model. Isometric activation was used for the simulation. The equivalent linear damping and the time constant of each model were extracted from the simulation results. The results provide a better insight into the characteristics of each model. It is found that the nonlinear models had a response rate closer to reality than the Hill-type models.

  12. Support vector machine incremental learning triggered by wrongly predicted samples

    Science.gov (United States)

    Tang, Ting-long; Guan, Qiu; Wu, Yi-rong

    2018-05-01

According to the classic Karush-Kuhn-Tucker (KKT) theorem, at every step of incremental support vector machine (SVM) learning, a newly added sample that violates the KKT conditions becomes a new support vector (SV) and migrates old samples between the SV set and the non-support-vector (NSV) set; at the same time, the learning model should be updated based on the SVs. However, it is not clear in advance which of the old samples will move between the SV and NSV sets. Additionally, the learning model may be updated unnecessarily, which does not greatly increase its accuracy but does decrease the training speed. Therefore, how the new SVs are chosen from the old sets during the incremental stages, and when the incremental steps are processed, greatly influences the accuracy and efficiency of incremental SVM learning. In this work, a new algorithm is proposed that selects candidate SVs and uses wrongly predicted samples to trigger the incremental processing. Experimental results show that the proposed algorithm achieves good performance with high efficiency, high speed and good accuracy.
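The error-triggered idea above (retrain only when a new sample is wrongly predicted, carrying forward the current support vectors) can be sketched with scikit-learn. This is a minimal stand-in for the paper's algorithm, not its implementation: the class name, the linear kernel, and the "keep SVs plus the triggering sample" rule are all assumptions that roughly mimic the KKT-based selection.

```python
import numpy as np
from sklearn.svm import SVC

class ErrorTriggeredSVM:
    """Retrain an SVM only when a newly arriving sample is misclassified."""

    def __init__(self, **svc_kwargs):
        self.clf = SVC(kernel="linear", **svc_kwargs)
        self.X = None
        self.y = None

    def fit(self, X, y):
        self.X, self.y = np.asarray(X, float), np.asarray(y)
        self.clf.fit(self.X, self.y)
        return self

    def update(self, x, label):
        x = np.asarray(x, float).reshape(1, -1)
        if self.clf.predict(x)[0] == label:
            return False                       # correctly predicted: skip the costly update
        sv = self.clf.support_                 # indices of the current support vectors
        self.X = np.vstack([self.X[sv], x])    # keep only SVs + the triggering sample
        self.y = np.concatenate([self.y[sv], [label]])
        self.clf.fit(self.X, self.y)
        return True                            # model retrained

# usage sketch on toy 1-D data
m = ErrorTriggeredSVM(C=1.0).fit([[0.0], [1.0], [10.0], [11.0]], [0, 0, 1, 1])
skipped = m.update([0.5], 0)      # correctly classified: no retrain
retrained = m.update([5.4], 1)    # misclassified: triggers retraining
```

Correctly predicted samples never touch the model, which is the source of the speedup the abstract claims.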

  13. Dental caries increments and related factors in children with type 1 diabetes mellitus.

    Science.gov (United States)

    Siudikiene, J; Machiulskiene, V; Nyvad, B; Tenovuo, J; Nedzelskiene, I

    2008-01-01

    The aim of this study was to analyse possible associations between caries increments and selected caries determinants in children with type 1 diabetes mellitus and their age- and sex-matched non-diabetic controls, over 2 years. A total of 63 (10-15 years old) diabetic and non-diabetic pairs were examined for dental caries, oral hygiene and salivary factors. Salivary flow rates, buffer effect, concentrations of mutans streptococci, lactobacilli, yeasts, total IgA and IgG, protein, albumin, amylase and glucose were analysed. Means of 2-year decayed/missing/filled surface (DMFS) increments were similar in diabetics and their controls. Over the study period, both unstimulated and stimulated salivary flow rates remained significantly lower in diabetic children compared to controls. No differences were observed in the counts of lactobacilli, mutans streptococci or yeast growth during follow-up, whereas salivary IgA, protein and glucose concentrations were higher in diabetics than in controls throughout the 2-year period. Multivariable linear regression analysis showed that children with higher 2-year DMFS increments were older at baseline and had higher salivary glucose concentrations than children with lower 2-year DMFS increments. Likewise, higher 2-year DMFS increments in diabetics versus controls were associated with greater increments in salivary glucose concentrations in diabetics. Higher increments in active caries lesions in diabetics versus controls were associated with greater increments of dental plaque and greater increments of salivary albumin. Our results suggest that, in addition to dental plaque as a common caries risk factor, diabetes-induced changes in salivary glucose and albumin concentrations are indicative of caries development among diabetics. Copyright 2008 S. Karger AG, Basel.

  14. From linear to generalized linear mixed models: A case study in repeated measures

    Science.gov (United States)

    Compared to traditional linear mixed models, generalized linear mixed models (GLMMs) can offer better correspondence between response variables and explanatory models, yielding more efficient estimates and tests in the analysis of data from designed experiments. Using proportion data from a designed...

  15. Extended Linear Models with Gaussian Priors

    DEFF Research Database (Denmark)

    Quinonero, Joaquin

    2002-01-01

In extended linear models the input space is projected onto a feature space by means of an arbitrary non-linear transformation. A linear model is then applied to the feature space to construct the model output. The dimension of the feature space can be very large, or even infinite, giving the model very great flexibility. Support Vector Machines (SVMs) and Gaussian processes are two examples of such models. In this technical report I present a model in which the dimension of the feature space remains finite, and where a Bayesian approach is used to train the model with Gaussian priors on the parameters. The Relevance Vector Machine, introduced by Tipping, is a particular case of such a model. I give the detailed derivations of the expectation-maximisation (EM) algorithm used in the training. These derivations are not found in the literature, and might be helpful for newcomers.
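The core construction described in this record (a non-linear feature projection followed by a linear model with Gaussian priors on the weights) can be sketched with standard Bayesian linear regression. This is a generic illustration under assumed hyperparameters, not the report's model: the RBF feature map, `alpha` (prior precision) and `beta` (noise precision) are choices made here.

```python
import numpy as np

def rbf_features(x, centers, width=1.0):
    """Project scalar inputs onto Gaussian basis functions (the non-linear 'extension')."""
    return np.exp(-0.5 * ((x[:, None] - centers[None, :]) / width) ** 2)

def fit_bayes_linear(Phi, y, alpha=1.0, beta=25.0):
    """Posterior over weights for y = Phi w + noise, with prior w ~ N(0, alpha^{-1} I)
    and noise precision beta; returns the posterior mean and covariance."""
    A = alpha * np.eye(Phi.shape[1]) + beta * Phi.T @ Phi   # posterior precision
    mean = beta * np.linalg.solve(A, Phi.T @ y)
    return mean, np.linalg.inv(A)

# usage sketch: fit a sine curve through 9 RBF features
x = np.linspace(0, 1, 50)
y = np.sin(2 * np.pi * x)
Phi = rbf_features(x, centers=np.linspace(0, 1, 9), width=0.15)
w_mean, w_cov = fit_bayes_linear(Phi, y)
pred = Phi @ w_mean
```

The Gaussian prior keeps the weights finite and well-determined even though the feature space is much richer than the 1-D input, which is exactly the trade-off the abstract describes.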

  16. Linear mixed models for longitudinal data

    CERN Document Server

    Molenberghs, Geert

    2000-01-01

This paperback edition is a reprint of the 2000 edition. This book provides a comprehensive treatment of linear mixed models for continuous longitudinal data. Next to model formulation, this edition puts major emphasis on exploratory data analysis for all aspects of the model, such as the marginal model, subject-specific profiles, and residual covariance structure. Further, model diagnostics and missing data receive extensive treatment. Sensitivity analysis for incomplete data is given a prominent place. Several variations to the conventional linear mixed model are discussed (a heterogeneity model, conditional linear mixed models). This book will be of interest to applied statisticians and biomedical researchers in industry, public health organizations, contract research organizations, and academia. The book is explanatory rather than mathematically rigorous. Most analyses were done with the MIXED procedure of the SAS software package, and many of its features are clearly elucidated. However, some other commerc...

  17. Incremental Structured Dictionary Learning for Video Sensor-Based Object Tracking

    Science.gov (United States)

    Xue, Ming; Yang, Hua; Zheng, Shibao; Zhou, Yi; Yu, Zhenghua

    2014-01-01

To tackle robust object tracking for video sensor-based applications, an online discriminative algorithm based on incremental discriminative structured dictionary learning (IDSDL-VT) is presented. In our framework, a discriminative dictionary combining positive, negative and trivial patches is designed to sparsely represent the overlapped target patches. Then, a local update (LU) strategy is proposed for sparse coefficient learning. To formulate the training and classification process, a multiple linear classifier group based on a K-combined voting (KCV) function is proposed. As the dictionary evolves, the models are retrained to adapt to target appearance variation in a timely manner. Qualitative and quantitative evaluations on challenging image sequences, compared with state-of-the-art algorithms, demonstrate that the proposed tracking algorithm achieves a more favorable performance. We also illustrate its relay application in visual sensor networks. PMID:24549252

  18. Non-linear finite element modeling

    DEFF Research Database (Denmark)

    Mikkelsen, Lars Pilgaard

The note is written for courses in "Non-linear finite element method". It has been used by the author when teaching non-linear finite element modeling in Civil Engineering at Aalborg University, Computational Mechanics at Aalborg University Esbjerg, and Structural Engineering at the University...

  19. Atmospheric response to Saharan dust deduced from ECMWF reanalysis (ERA) temperature increments

    Science.gov (United States)

    Kishcha, P.; Alpert, P.; Barkan, J.; Kirchner, I.; Machenhauer, B.

    2003-09-01

This study focuses on the atmospheric temperature response to dust deduced from a new source of data: the European Reanalysis (ERA) increments. These increments are the systematic errors of global climate models, generated in the reanalysis procedure. The model errors result not only from the lack of desert dust but also from a complex combination of many kinds of model errors. Over the Sahara desert the lack of the dust radiative effect is believed to be a predominant model defect which should significantly affect the increments. This dust effect was examined by considering the correlation between the increments and remotely sensed dust. Comparisons were made between April temporal variations of the ERA analysis increments and the variations of the Total Ozone Mapping Spectrometer aerosol index (AI) between 1979 and 1993. A distinctive structure was identified in the distribution of correlation, composed of three nested areas with high positive correlation (>0.5), low correlation and high negative correlation (<-0.5). Circulation data from the European Centre for Medium-Range Weather Forecasts (ECMWF) suggest that the PCA (NCA), the area of positive (negative) correlation, corresponds mainly to anticyclonic (cyclonic) flow, negative (positive) vorticity and downward (upward) airflow. These findings are associated with the interaction between dust-forced heating/cooling and atmospheric circulation. This paper contributes to a better understanding of dust radiative processes missed in the model.

  20. An incremental DPMM-based method for trajectory clustering, modeling, and retrieval.

    Science.gov (United States)

    Hu, Weiming; Li, Xi; Tian, Guodong; Maybank, Stephen; Zhang, Zhongfei

    2013-05-01

    Trajectory analysis is the basis for many applications, such as indexing of motion events in videos, activity recognition, and surveillance. In this paper, the Dirichlet process mixture model (DPMM) is applied to trajectory clustering, modeling, and retrieval. We propose an incremental version of a DPMM-based clustering algorithm and apply it to cluster trajectories. An appropriate number of trajectory clusters is determined automatically. When trajectories belonging to new clusters arrive, the new clusters can be identified online and added to the model without any retraining using the previous data. A time-sensitive Dirichlet process mixture model (tDPMM) is applied to each trajectory cluster for learning the trajectory pattern which represents the time-series characteristics of the trajectories in the cluster. Then, a parameterized index is constructed for each cluster. A novel likelihood estimation algorithm for the tDPMM is proposed, and a trajectory-based video retrieval model is developed. The tDPMM-based probabilistic matching method and the DPMM-based model growing method are combined to make the retrieval model scalable and adaptable. Experimental comparisons with state-of-the-art algorithms demonstrate the effectiveness of our algorithm.
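The central property of the DPMM used above (the number of clusters is inferred from the data rather than fixed in advance) can be sketched with scikit-learn's truncated Dirichlet-process mixture. This is an illustration of the DPMM idea on synthetic 2-D "trajectory feature" points, not the authors' incremental or time-sensitive (tDPMM) algorithm.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# two well-separated synthetic clusters standing in for trajectory features
X = np.vstack([rng.normal(0, 0.3, (100, 2)),
               rng.normal(5, 0.3, (100, 2))])

dpmm = BayesianGaussianMixture(
    n_components=10,                                 # truncation level, not the final count
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(X)

labels = dpmm.predict(X)
n_used = np.unique(labels).size                      # components actually populated
```

Although ten components are available, the Dirichlet-process prior drives the weights of unneeded components toward zero, so the populated-component count tracks the true cluster structure, which is the "appropriate number of trajectory clusters is determined automatically" behavior the abstract describes.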

  1. Robot Visual Tracking via Incremental Self-Updating of Appearance Model

    Directory of Open Access Journals (Sweden)

    Danpei Zhao

    2013-09-01

This paper proposes a target tracking method called Incremental Self-Updating Visual Tracking for robot platforms. Our tracker treats the tracking problem as a binary classification between the target and the background. Greyscale, HOG and LBP features are used in this work to represent the target and are integrated into a particle filter framework. To track the target over long time sequences, the tracker has to update its model to follow the most recent target appearance. In order to deal with the problems of wasted calculation and the lack of a model-updating strategy in traditional methods, an intelligent and effective online self-updating strategy is devised to choose the optimal update opportunity. The decision to update the appearance model is based on the change in discriminative capability between the current frame and the previously updated frame. By adjusting the update step adaptively, severe waste of calculation time on needless updates can be avoided while keeping the model stable. Moreover, the appearance model is kept away from serious drift problems when the target undergoes temporary occlusion. The experimental results show that the proposed tracker achieves robust and efficient performance in several benchmark-challenging video sequences with various complex environmental changes in posture, scale, illumination and occlusion.

  2. Product Quality Modelling Based on Incremental Support Vector Machine

    International Nuclear Information System (INIS)

    Wang, J; Zhang, W; Qin, B; Shi, W

    2012-01-01

Incremental support vector machine (ISVM) learning is a method developed in recent years on the foundations of statistical learning theory. It is suitable for problems with sequentially arriving field data and has been widely used for product quality prediction and production process optimization. However, traditional ISVM learning does not consider the quality of the incremental data, which may contain noise and redundant samples; this affects the learning speed and accuracy to a great extent. In order to improve SVM training speed and accuracy, a modified incremental support vector machine (MISVM) is proposed in this paper. Firstly, the margin vectors are extracted according to the Karush-Kuhn-Tucker (KKT) condition; then the distance from each margin vector to the final decision hyperplane is calculated to evaluate its importance, and margin vectors are removed when their distance exceeds the specified value; finally, the original SVs and the remaining margin vectors are used to update the SVM. The proposed MISVM can not only eliminate unimportant samples such as noise samples, but also preserve the important ones. The MISVM has been tested on two public data sets and on field data of zinc coating weight in strip hot-dip galvanizing, and the results show that the proposed method can improve the prediction accuracy and the training speed effectively. Furthermore, it can provide the necessary decision support and analysis tools for automatic control of product quality, and can also be extended to other process industries, such as chemical and manufacturing processes.

  3. Modeling of Photovoltaic System with Modified Incremental Conductance Algorithm for Fast Changes of Irradiance

    Directory of Open Access Journals (Sweden)

    Saad Motahhir

    2018-01-01

    Full Text Available The first objective of this work is to determine some of the performance parameters characterizing the behavior of a particular photovoltaic (PV) panel that are not normally provided in the manufacturer's specifications. These provide the basis for developing a simple model of the electrical behavior of the PV panel. Next, using this model, the effects of varying solar irradiation, temperature, series and shunt resistances, and partial shading on the output of the PV panel are presented. In addition, the PV panel model is used to configure a large photovoltaic array. Next, a boost converter for the PV panel is designed. This converter is placed between the panel and the load in order to control it by means of a maximum power point tracking (MPPT) controller. The MPPT used is based on incremental conductance (INC), and it is demonstrated here that this technique does not respond accurately when solar irradiation is increased. To address this, a modified incremental conductance technique is presented in this paper. It is shown that this system does respond accurately and reduces the steady-state oscillations when solar irradiation is increased. Finally, simulations of the conventional and modified algorithms are compared, and the results show that the modified algorithm provides an accurate response to a sudden increase in solar irradiation.
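The conventional INC decision rule that the paper modifies can be sketched as follows; the function name, duty-cycle step size and sign convention (a boost converter where raising the duty cycle lowers the panel voltage) are illustrative assumptions, not taken from the paper:

```python
def inc_mppt_step(v, i, v_prev, i_prev, d, step=0.005):
    """One incremental-conductance update of the converter duty cycle d.

    At the maximum power point (MPP), dP/dV = 0, i.e. dI/dV = -I/V.
    Left of the MPP dI/dV > -I/V (so V should increase); right of it
    dI/dV < -I/V (so V should decrease).  For the assumed boost converter,
    increasing V means decreasing d, and vice versa.
    """
    dv, di = v - v_prev, i - i_prev
    if dv == 0:
        if di == 0:
            return d                       # operating point unchanged: hold
        return d - step if di > 0 else d + step
    g = di / dv                            # incremental conductance dI/dV
    if g == -i / v:
        return d                           # dP/dV = 0: at the MPP
    return d - step if g > -i / v else d + step

# Hypothetical operating points (volts, amps) left of the MPP
d_new = inc_mppt_step(v=17.0, i=5.0, v_prev=16.0, i_prev=5.2, d=0.5)
```

The modified technique in the paper adds logic on top of this rule to distinguish an irradiation change from a genuine perturbation step.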

  4. linear-quadratic-linear model

    Directory of Open Access Journals (Sweden)

    Tanwiwat Jaikuna

    2017-02-01

    Full Text Available Purpose: To develop an in-house software program that is able to calculate and generate the biological dose distribution and biological dose volume histogram by physical dose conversion using the linear-quadratic-linear (LQL) model. Material and methods: The Isobio software was developed using MATLAB version 2014b to calculate and generate biological dose distributions and biological dose volume histograms. The physical dose from each voxel in the treatment plan was extracted through the Computational Environment for Radiotherapy Research (CERR), and the accuracy was verified from the difference between the dose volume histogram from CERR and that from the treatment planning system. The equivalent dose in 2 Gy fractions (EQD2) was calculated using the biologically effective dose (BED) based on the LQL model. The software calculation and a manual calculation were compared for EQD2 verification with a paired t-test statistical analysis using IBM SPSS Statistics version 22 (64-bit). Results: Two- and three-dimensional biological dose distributions and biological dose volume histograms were displayed correctly by the Isobio software. Differences in physical dose were found between CERR and the treatment planning system (TPS): in Oncentra, 3.33% in the high-risk clinical target volume (HR-CTV) determined by D90%, 0.56% in the bladder and 1.74% in the rectum determined by D2cc, and less than 1% in Pinnacle. The difference in EQD2 between the software calculation and the manual calculation was 0.00% and not statistically significant, with p-values of 0.820, 0.095, and 0.593 for external beam radiation therapy (EBRT) and 0.240, 0.320, and 0.849 for brachytherapy (BT) in the HR-CTV, bladder, and rectum, respectively. Conclusions: The Isobio software is a feasible tool to generate the biological dose distribution and biological dose volume histogram for treatment plan evaluation in both EBRT and BT.
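The EQD2 conversion described above rests on the standard linear-quadratic biologically effective dose; the relations can be sketched as follows (the full LQL model additionally switches to a linear dose-response tail above a transition dose per fraction, which is omitted here):

```latex
\mathrm{BED} = n\,d \left(1 + \frac{d}{\alpha/\beta}\right), \qquad
\mathrm{EQD2} = \frac{\mathrm{BED}}{1 + \frac{2}{\alpha/\beta}},
```

where $n$ is the number of fractions, $d$ the dose per fraction in Gy, and $\alpha/\beta$ the tissue-specific ratio in Gy (the 2 in the EQD2 denominator is the 2 Gy reference fraction size).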

  5. Developing risk prediction models for kidney injury and assessing incremental value for novel biomarkers.

    Science.gov (United States)

    Kerr, Kathleen F; Meisner, Allison; Thiessen-Philbrook, Heather; Coca, Steven G; Parikh, Chirag R

    2014-08-07

    The field of nephrology is actively involved in developing biomarkers and improving models for predicting patients' risks of AKI and CKD and their outcomes. However, some important aspects of evaluating biomarkers and risk models are not widely appreciated, and statistical methods are still evolving. This review describes some of the most important statistical concepts for this area of research and identifies common pitfalls. Particular attention is paid to metrics proposed within the last 5 years for quantifying the incremental predictive value of a new biomarker. Copyright © 2014 by the American Society of Nephrology.

  6. Statistical Tests for Mixed Linear Models

    CERN Document Server

    Khuri, André I; Sinha, Bimal K

    2011-01-01

    An advanced discussion of linear models with mixed or random effects. In recent years a breakthrough has occurred in our ability to draw inferences from exact and optimum tests of variance component models, generating much research activity that relies on linear models with mixed and random effects. This volume covers the most important research of the past decade as well as the latest developments in hypothesis testing. It compiles all currently available results in the area of exact and optimum tests for variance component models and offers the only comprehensive treatment for these models a

  7. Matrix Tricks for Linear Statistical Models

    CERN Document Server

    Puntanen, Simo; Styan, George PH

    2011-01-01

    In teaching linear statistical models to first-year graduate students or to final-year undergraduate students there is no way to proceed smoothly without matrices and related concepts of linear algebra; their use is really essential. Our experience is that making some particular matrix tricks very familiar to students can substantially increase their insight into linear statistical models (and also multivariate statistical analysis). In matrix algebra, there are handy, sometimes even very simple "tricks" which simplify and clarify the treatment of a problem - both for the student and

  8. An online re-linearization scheme suited for Model Predictive and Linear Quadratic Control

    DEFF Research Database (Denmark)

    Henriksen, Lars Christian; Poulsen, Niels Kjølstad

    This technical note documents the equations for a primal-dual interior-point quadratic programming solver used for MPC. The algorithm exploits the special structure of the MPC problem and is able to reduce the computational burden such that it scales linearly with the prediction horizon length rather than cubically, which would be the case if the structure were not exploited. It is also shown how models used for the design of model-based controllers, e.g. linear quadratic and model predictive, can be linearized both at equilibrium and non-equilibrium points, making...

  9. Parallel Algorithm for Incremental Betweenness Centrality on Large Graphs

    KAUST Repository

    Jamour, Fuad Tarek

    2017-10-17

    Betweenness centrality quantifies the importance of nodes in a graph in many applications, including network analysis, community detection and identification of influential users. Typically, graphs in such applications evolve over time. Thus, the computation of betweenness centrality should be performed incrementally. This is challenging because updating even a single edge may trigger the computation of all-pairs shortest paths in the entire graph. Existing approaches cannot scale to large graphs: they either require excessive memory (i.e., quadratic in the size of the input graph) or perform unnecessary computations, rendering them prohibitively slow. We propose iCentral, a novel incremental algorithm for computing betweenness centrality in evolving graphs. We decompose the graph into biconnected components and prove that processing can be localized within the affected components. iCentral is the first algorithm to support incremental betweenness centrality computation within a graph component. This is done efficiently, in linear space; consequently, iCentral scales to large graphs. We demonstrate with real datasets that the serial implementation of iCentral is up to 3.7 times faster than existing serial methods. Our parallel implementation, which scales to large graphs, is an order of magnitude faster than the state-of-the-art parallel algorithm, while using an order of magnitude less computational resources.
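iCentral's per-component updates build on Brandes' dependency accumulation; as background, a minimal non-incremental Brandes computation for unweighted, undirected graphs can be sketched as follows (a plain-Python baseline, not the iCentral algorithm itself):

```python
from collections import deque

def brandes_betweenness(graph):
    """Exact betweenness centrality of an unweighted, undirected graph.
    graph: dict mapping node -> list of neighbours."""
    bc = {v: 0.0 for v in graph}
    for s in graph:
        stack = []
        pred = {v: [] for v in graph}    # predecessors on shortest paths
        sigma = {v: 0 for v in graph}    # number of shortest paths from s
        dist = {v: -1 for v in graph}
        sigma[s], dist[s] = 1, 0
        queue = deque([s])
        while queue:                     # BFS from the source s
            v = queue.popleft()
            stack.append(v)
            for w in graph[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        delta = {v: 0.0 for v in graph}
        while stack:                     # accumulate dependencies in reverse
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1.0 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return {v: c / 2.0 for v, c in bc.items()}   # undirected: halve

# Tiny example: the middle node of a 3-node path lies on one shortest path
bc = brandes_betweenness({'a': ['b'], 'b': ['a', 'c'], 'c': ['b']})
```

An edge update forces this whole computation to be redone from every source, which is exactly the cost iCentral avoids by localizing work to the affected biconnected component.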

  10. Modeling of Volatility with Non-linear Time Series Model

    OpenAIRE

    Kim Song Yon; Kim Mun Chol

    2013-01-01

    In this paper, non-linear time series models are used to describe volatility in financial time series data. To describe volatility, two non-linear time series models are combined to form a TAR (threshold autoregressive) model with an AARCH (asymmetric autoregressive conditional heteroskedasticity) error term, and its parameter estimation is studied.

  11. Incremental testing of the Community Multiscale Air Quality (CMAQ) modeling system version 4.7

    Directory of Open Access Journals (Sweden)

    K. M. Foley

    2010-03-01

    Full Text Available This paper describes the scientific and structural updates to the latest release of the Community Multiscale Air Quality (CMAQ) modeling system version 4.7 (v4.7) and points the reader to additional resources for further details. The model updates were evaluated relative to observations and results from previous model versions in a series of simulations conducted to incrementally assess the effect of each change. The focus of this paper is on five major scientific upgrades: (a) updates to the heterogeneous N2O5 parameterization, (b) improvement in the treatment of secondary organic aerosol (SOA), (c) inclusion of dynamic mass transfer for coarse-mode aerosol, (d) revisions to the cloud model, and (e) new options for the calculation of photolysis rates. Incremental test simulations over the eastern United States during January and August 2006 are evaluated to assess the model response to each scientific improvement, providing explanations of differences in results between v4.7 and previously released CMAQ model versions. Particulate sulfate predictions are improved across all monitoring networks during both seasons due to cloud module updates. Numerous updates to the SOA module improve the simulation of seasonal variability and decrease the bias in organic carbon predictions at urban sites in the winter. Bias in the total mass of fine particulate matter (PM2.5) is dominated by overpredictions of unspeciated PM2.5 (PMother) in the winter and by underpredictions of carbon in the summer. The CMAQv4.7 model results show slightly worse performance for ozone predictions. However, changes to the meteorological inputs are found to have a much greater impact on ozone predictions compared to changes to the CMAQ modules described here. Model updates had little effect on existing biases in wet deposition predictions.

  12. Applicability of linear and non-linear potential flow models on a Wavestar float

    DEFF Research Database (Denmark)

    Bozonnet, Pauline; Dupin, Victor; Tona, Paolino

    2017-01-01

    Numerical models based on potential flow theory, including different types of nonlinearities, are compared and validated against experimental data for the Wavestar wave energy converter technology. Exact resolution of the rotational motion, non-linear hydrostatic and Froude-Krylov forces, as well as a model based on non-linear potential flow theory and the weak-scatterer hypothesis, are successively considered. Simple tests, such as dip tests, decay tests and captive tests, make it possible to highlight the improvements obtained with the introduction of nonlinearities. Float motion under wave action and without control action, limited to small amplitude motion with a single float, is well predicted by the numerical models, including the linear one. Still, float velocity is better predicted by accounting for non-linear hydrostatic and Froude-Krylov forces.

  13. Non-linearities in tensile creep of concrete at early age

    DEFF Research Database (Denmark)

    Hauggaard-Nielsen, Anders Boe; Damkilde, Lars

    1997-01-01

    A material model for creep is proposed which takes into consideration some of the couplings in early-age concrete. The model is in incremental form and reflects the hydration process, in which new layers of cement gel are formed in a stress-free state. In the present context attention is on non-linear creep at high stress levels. The parameters in the model develop in time as a result of hydration. The creep model has been used to analyse tensile experiments at different stress levels carried out in the HETEK project. The tests were made on dog-bone-shaped specimens, and the test procedure...

  14. Forecasting Volatility of Dhaka Stock Exchange: Linear Vs Non-linear models

    Directory of Open Access Journals (Sweden)

    Masudul Islam

    2012-10-01

    Full Text Available Prior information about a financial market is essential for investors who purchase shares on the stock market, which can strengthen the economy. The study examines the relative ability of various models to forecast the future volatility of daily stock indexes. The forecasting models employed range from simple to relatively complex ARCH-class models. It is found that among linear models of stock index volatility, the moving average model ranks first using the root mean square error, mean absolute percent error, Theil-U and Linex loss function criteria. We also examine five nonlinear models: ARCH, GARCH, EGARCH, TGARCH and restricted GARCH. We find that the nonlinear models fail to dominate the linear models under the different error measurement criteria, and the moving average model appears to be the best. We then forecast the next two months of stock index price volatility with the best (moving average) model.
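The moving-average forecast that ranked first, and the RMSE criterion used to rank it, can be sketched as follows; the window length and the toy volatility series are illustrative assumptions, not Dhaka Stock Exchange data:

```python
def moving_average_forecast(series, window):
    """Forecast each value as the mean of the previous `window` observations."""
    return [sum(series[t - window:t]) / window
            for t in range(window, len(series))]

def rmse(actual, predicted):
    """Root mean square error between realised and forecast values."""
    return (sum((a - p) ** 2 for a, p in zip(actual, predicted))
            / len(actual)) ** 0.5

# Hypothetical daily volatility proxy (e.g. absolute returns, in %)
vol = [0.9, 1.1, 1.0, 1.2, 0.8, 1.0, 1.1, 0.9, 1.0, 1.2]
pred = moving_average_forecast(vol, window=3)
err = rmse(vol[3:], pred)         # compare forecasts with realised values
```

The same RMSE (alongside MAPE, Theil-U and Linex loss) would then be computed for each ARCH-class competitor to reproduce the ranking exercise.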

  15. Incremental Structured Dictionary Learning for Video Sensor-Based Object Tracking

    Directory of Open Access Journals (Sweden)

    Ming Xue

    2014-02-01

    Full Text Available To tackle robust object tracking for video sensor-based applications, an online discriminative algorithm based on incremental discriminative structured dictionary learning (IDSDL-VT) is presented. In our framework, a discriminative dictionary combining positive, negative and trivial patches is designed to sparsely represent the overlapped target patches. Then, a local update (LU) strategy is proposed for sparse coefficient learning. To formulate the training and classification process, a multiple linear classifier group based on a K-combined voting (KCV) function is proposed. As the dictionary evolves, the models are also retrained to adapt to target appearance variation in a timely manner. Qualitative and quantitative evaluations on challenging image sequences, compared with state-of-the-art algorithms, demonstrate that the proposed tracking algorithm achieves more favorable performance. We also illustrate its relay application in visual sensor networks.

  16. Comparing linear probability model coefficients across groups

    DEFF Research Database (Denmark)

    Holm, Anders; Ejrnæs, Mette; Karlson, Kristian Bernt

    2015-01-01

    This article offers a formal identification analysis of the problem in comparing coefficients from linear probability models between groups. We show that differences in coefficients from these models can result not only from genuine differences in effects, but also from differences in one or more of the following three components: outcome truncation, scale parameters and distributional shape of the predictor variable. These results point to limitations in using linear probability model coefficients for group comparisons. We also provide Monte Carlo simulations and real examples to illustrate these limitations, and we suggest a restricted approach to using linear probability model coefficients in group comparisons.

  17. Distance-independent individual tree diameter-increment model for Thuya [Tetraclinis articulata (Vahl) Mast.] stands in Tunisia

    Directory of Open Access Journals (Sweden)

    T. Sghaier

    2013-12-01

    Full Text Available Aim of study: The aim of the work was to develop an individual tree diameter-increment model for Thuya (Tetraclinis articulata) in Tunisia. Area of study: The natural Tetraclinis articulata stands at Jbel Lattrech in north-eastern Tunisia. Material and methods: Data came from 200 trees located in 50 sample plots. The diameter at age t and the diameter increment for the last five years, obtained from cores taken at breast height, were measured for each tree. Four difference equations derived from the base functions of Richards, Lundqvist, Hossfeld IV and Weibull were tested using the age-independent formulations of the growth functions. Both numerical and graphical analyses were used to evaluate the performance of the candidate models. Main results: Based on the analysis, the age-independent difference equation derived from the Richards base function was selected. Two of the three parameters (growth rate and shape parameter) of the retained model were related to site quality, represented by a growth index, stand density, and the basal area in larger trees divided by the diameter of the subject tree, which expresses inter-tree competition. Research highlights: The proposed model can be useful for predicting the diameter growth of Tetraclinis articulata in Tunisia when age is not available or for trees growing in uneven-aged stands. Keywords: age-independent growth model; difference equations; Tetraclinis articulata; Tunisia.

  18. Testing Parametric versus Semiparametric Modelling in Generalized Linear Models

    NARCIS (Netherlands)

    Härdle, W.K.; Mammen, E.; Müller, M.D.

    1996-01-01

    We consider a generalized partially linear model E(Y|X,T) = G{X'b + m(T)}, where G is a known function, b is an unknown parameter vector, and m is an unknown function. The paper introduces a test statistic which allows one to decide between a parametric and a semiparametric model: (i) m is linear, i.e.

  19. Model Predictive Control for Linear Complementarity and Extended Linear Complementarity Systems

    Directory of Open Access Journals (Sweden)

    Bambang Riyanto

    2005-11-01

    Full Text Available In this paper, we propose a model predictive control method for linear complementarity and extended linear complementarity systems by formulating the optimization along the prediction horizon as a mixed integer quadratic program. Such systems contain interaction between continuous dynamics and discrete event systems, and can therefore be categorized as hybrid systems. As linear complementarity and extended linear complementarity systems find applications in different research areas, such as impact mechanical systems, traffic control and process control, this work will contribute to the development of control design methods for those areas as well, as shown by three given examples.

  20. Linear approximation model network and its formation via ...

    Indian Academy of Sciences (India)

    To overcome the deficiency of `local model network' (LMN) techniques, an alternative `linear approximation model' (LAM) network approach is proposed. Such a network models a nonlinear or practical system with multiple linear models fitted along operating trajectories, where individual models are simply networked ...

  1. Composite Linear Models | Division of Cancer Prevention

    Science.gov (United States)

    By Stuart G. Baker The composite linear models software is a matrix approach to compute maximum likelihood estimates and asymptotic standard errors for models for incomplete multinomial data. It implements the method described in Baker SG. Composite linear models for incomplete multinomial data. Statistics in Medicine 1994;13:609-622. The software includes a library of thirty

  2. The effect of the model posture on the forming quality in the CNC incremental forming

    International Nuclear Information System (INIS)

    Zhu, H; Zhang, W; Bai, J L; Yu, C; Xing, Y F

    2015-01-01

    Sheet rupture caused by non-uniformity of the sheet metal thickness persists in CNC (Computer Numerical Control) incremental forming. Because the forming half-cone angle is determined by the orientation of the model to be formed, so is the uniformity of the sheet metal thickness. Finite element analysis models for the two postures of the model were established, and digital simulation was conducted using the ANSYS/LS-DYNA software. The effect of the model's posture on the sheet thickness distribution and the sheet thickness thinning rate was studied by comparing the simulation results of the two finite element analyses. (paper)

  3. Deep Incremental Boosting

    OpenAIRE

    Mosca, Alan; Magoulas, George D

    2017-01-01

    This paper introduces Deep Incremental Boosting, a new technique derived from AdaBoost, specifically adapted to work with Deep Learning methods, that reduces the required training time and improves generalisation. We draw inspiration from Transfer of Learning approaches to reduce the start-up time to training each incremental Ensemble member. We show a set of experiments that outlines some preliminary results on some common Deep Learning datasets and discuss the potential improvements Deep In...

  4. On excursion increments in heartbeat dynamics

    International Nuclear Information System (INIS)

    Guzmán-Vargas, L.; Reyes-Ramírez, I.; Hernández-Pérez, R.

    2013-01-01

    We study correlation properties of excursion increments of heartbeat time series from healthy subjects and heart failure patients. We construct the excursion times from the original heartbeat time series, representing the time employed by the walker to return to the local mean value. Next, detrended fluctuation analysis and the fractal dimension method are applied to the magnitude and sign of the increments in excursion times between successive excursions for the two groups. Our results show that for the magnitude series of excursion increments both groups display long-range correlations with similar correlation exponents, indicating that large (small) increments (decrements) are more likely to be followed by large (small) increments (decrements). For the sign sequences of both groups, we find that increments are short-range anti-correlated, which is noticeable under heart failure conditions.
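The magnitude-and-sign decomposition applied before the DFA step can be sketched as follows; the excursion-time values are hypothetical, not patient data:

```python
def increments(series):
    """First differences of a series."""
    return [b - a for a, b in zip(series, series[1:])]

def magnitude_and_sign(series):
    """Split a series of excursion-time increments into the magnitude and
    sign series, the two sequences the study analyses with DFA and the
    fractal dimension method."""
    inc = increments(series)
    magnitude = [abs(x) for x in inc]
    sign = [(x > 0) - (x < 0) for x in inc]   # +1, 0 or -1
    return magnitude, sign

# Hypothetical excursion times (in beats) between returns to the local mean
exc = [5, 8, 6, 6, 9]
mag, sgn = magnitude_and_sign(exc)
```

DFA would then be run separately on `mag` (long-range correlated in both groups) and `sgn` (short-range anti-correlated).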

  5. Noise masking of S-cone increments and decrements.

    Science.gov (United States)

    Wang, Quanhong; Richters, David P; Eskew, Rhea T

    2014-11-12

    S-cone increment and decrement detection thresholds were measured in the presence of bipolar, dynamic noise masks. Noise chromaticities were the L-, M-, and S-cone directions, as well as L-M, L+M, and achromatic (L+M+S) directions. Noise contrast power was varied to measure threshold Energy versus Noise (EvN) functions. S+ and S- thresholds were similarly, and weakly, raised by achromatic noise. However, S+ thresholds were much more elevated by S, L+M, L-M, L- and M-cone noises than were S- thresholds, even though the noises consisted of two symmetric chromatic polarities of equal contrast power. A linear cone combination model accounts for the overall pattern of masking of a single test polarity well. L and M cones have opposite signs in their effects upon raising S+ and S- thresholds. The results strongly indicate that the psychophysical mechanisms responsible for S+ and S- detection, presumably based on S-ON and S-OFF pathways, are distinct, unipolar mechanisms, and that they have different spatiotemporal sampling characteristics, or contrast gains, or both. © 2014 ARVO.

  6. New Incremental Actuators based on Electro-active Polymer: Conceptual, Control, and Driver Design Considerations. 

    OpenAIRE

    Thummala, Prasanth; Schneider, Henrik; Zhang, Zhe; Andersen, Michael A. E.; Sarban, Rahimullah

    2016-01-01

    This paper presents an overview of the widely used conventional linear actuator technologies and existing electro-active polymer based linear and rotary actuators. It also provides the conceptual, control and driver design considerations for a new dielectric electro-active polymer (DEAP) based incremental actuator. The DEAP incremental actuator consists of three independent DEAP actuators with a unique cylindrical design that potentially simplifies mass production and scalability compared to existing ...

  7. Actuarial statistics with generalized linear mixed models

    NARCIS (Netherlands)

    Antonio, K.; Beirlant, J.

    2007-01-01

    Over the last decade the use of generalized linear models (GLMs) in actuarial statistics has received a lot of attention, starting from the actuarial illustrations in the standard text by McCullagh and Nelder [McCullagh, P., Nelder, J.A., 1989. Generalized linear models. In: Monographs on Statistics

  8. Heterotic sigma models and non-linear strings

    International Nuclear Information System (INIS)

    Hull, C.M.

    1986-01-01

    The two-dimensional supersymmetric non-linear sigma models are examined with respect to the heterotic string. The paper was presented at the workshop on 'Supersymmetry and its applications', Cambridge, United Kingdom, 1985. The non-linear sigma model with a Wess-Zumino-type term, the coupling of the fermionic superfields to the sigma model, superconformal invariance, and the supersymmetric string are all discussed. (U.K.)

  9. Weighted tunable clustering in local-world networks with increment behavior

    International Nuclear Information System (INIS)

    Ma, Ying-Hong; Li, Huijia; Zhang, Xiao-Dong

    2010-01-01

    Since some realistic networks are influenced not only by increment behavior but also by a tunable clustering mechanism as new nodes are added, it is interesting to characterize a model for such networks. In this paper, a weighted local-world model, which incorporates increment behavior and a tunable clustering mechanism, is proposed, and its properties, such as the degree distribution and clustering coefficient, are investigated. Numerical simulations fit the model well and display good right-skewed scale-free properties. Furthermore, the correlation of vertices in our model is studied, which shows the assortative property. The epidemic spreading process with a weighted transmission rate on the model shows that the tunable clustering behavior has a great impact on the epidemic dynamics.

  10. Comparison of linear and non-linear models for the adsorption of fluoride onto geo-material: limonite.

    Science.gov (United States)

    Sahin, Rubina; Tapadia, Kavita

    2015-01-01

    The three widely used isotherms, Langmuir, Freundlich and Temkin, were examined in an experiment on fluoride (F⁻) ion adsorption on a geo-material (limonite) at four different temperatures, using linear and non-linear models. A comparison of linear and non-linear regression models was used in selecting the optimum isotherm for the experimental results. The coefficient of determination, r², was used to select the best theoretical isotherm. The four Langmuir linear equations (1, 2, 3, and 4) are discussed. Langmuir isotherm parameters obtained from the four Langmuir linear equations using the linear model differed, but they were the same when using the nonlinear model. Langmuir-2 is one of the linear forms, and it had the highest coefficient of determination (r² = 0.99) compared to the other Langmuir linear equations (1, 3 and 4), whereas, for the non-linear model, Langmuir-4 fitted best among all the isotherms because it had the highest coefficient of determination (r² = 0.99). The results showed that the non-linear model may be a better way to obtain the parameters. In the present work, the thermodynamic parameters show that the adsorption of fluoride onto limonite is spontaneous (ΔG < 0). Scanning electron microscope and X-ray diffraction images also confirm the adsorption of F⁻ ions onto limonite. The isotherm and kinetic study reveals that limonite can be used as an adsorbent for fluoride removal. In the future, large-scale fluoride-removal technology could be developed using limonite, which is cost-effective, eco-friendly and easily available in the study area.
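The linearized Langmuir fitting the abstract compares can be sketched as follows, using the Langmuir-1 form Ce/qe = Ce/qm + 1/(qm·KL) and ordinary least squares; the synthetic data and parameter values (qm = 2.0, KL = 0.5) are assumptions for illustration, not the limonite measurements:

```python
def langmuir_q(ce, qm, kl):
    """Langmuir isotherm: qe = qm * kl * Ce / (1 + kl * Ce)."""
    return qm * kl * ce / (1 + kl * ce)

def fit_langmuir_linear(ce_list, qe_list):
    """Fit the Langmuir-1 linearisation Ce/qe = Ce/qm + 1/(qm*kl)
    by ordinary least squares; returns the recovered (qm, kl)."""
    x = ce_list
    y = [c / q for c, q in zip(ce_list, qe_list)]   # Ce/qe
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
             / sum((xi - xbar) ** 2 for xi in x))
    intercept = ybar - slope * xbar
    qm = 1.0 / slope                # slope = 1/qm
    kl = slope / intercept          # intercept = 1/(qm*kl)
    return qm, kl

# Noise-free synthetic equilibrium data generated from qm = 2.0, kl = 0.5
ce = [0.5, 1.0, 2.0, 4.0, 8.0]
qe = [langmuir_q(c, 2.0, 0.5) for c in ce]
qm_fit, kl_fit = fit_langmuir_linear(ce, qe)
```

With noisy data the different linearizations (Langmuir-1 to -4) weight errors differently, which is why they yield different parameters while a direct non-linear fit of `langmuir_q` does not.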

  11. Linear and nonlinear symmetrically loaded shells of revolution approximated with the finite element method

    International Nuclear Information System (INIS)

    Cook, W.A.

    1978-10-01

    Nuclear material shipping containers have shells of revolution as a basic structural component. Analytically modeling the response of these containers to severe accident impact conditions requires a nonlinear shell-of-revolution model that accounts for both geometric and material nonlinearities. Present models are limited to large displacements, small rotations, and nonlinear materials. This report discusses a first approach to developing a finite element nonlinear shell-of-revolution model that accounts for these nonlinear geometric effects. The approach uses incremental loads and a linear shell model with equilibrium iterations. Sixteen linear models are developed: eight using the potential energy variational principle and eight using a mixed variational principle. Four of these are suitable for extension to nonlinear shell theory. A nonlinear shell theory is derived, and a computational technique used in its solution is presented.
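The incremental-load strategy with equilibrium (Newton) iterations can be illustrated on a one-degree-of-freedom nonlinear spring; the spring law f(u) = k·u + c·u³ and all numbers are hypothetical stand-ins for the shell equations, not the report's model:

```python
def solve_incremental(total_load, n_increments, stiffness=10.0, cubic=2.0,
                      tol=1e-10):
    """Apply the load in increments; within each increment, iterate a
    linearised (tangent-stiffness) model to equilibrium, mirroring the
    incremental-load / linear-model-with-equilibrium-iterations scheme."""
    u = 0.0
    for step in range(1, n_increments + 1):
        p = total_load * step / n_increments        # current load level
        for _ in range(50):                         # equilibrium iterations
            residual = p - (stiffness * u + cubic * u ** 3)
            if abs(residual) < tol:
                break
            # Newton update with tangent stiffness k + 3*c*u**2
            u += residual / (stiffness + 3.0 * cubic * u ** 2)
    return u

u_final = solve_incremental(total_load=24.0, n_increments=4)
```

Stepping the load keeps each linearised solve close to the true equilibrium path, which is what makes a linear shell model usable inside a geometrically nonlinear analysis.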

  12. Extrinsic contribution to the non-linearity in a PZT disc

    Energy Technology Data Exchange (ETDEWEB)

    Perez, Rafel [Departament de Fisica Aplicada, Universitat Politecnica de Catalunya, Jordi Girona 1-3, Campus Nord, 08034 Barcelona (Spain); Albareda, Alfons [Departament de Fisica Aplicada, Universitat Politecnica de Catalunya, Jordi Girona 1-3, Campus Nord, 08034 Barcelona (Spain); Garcia, Jose E [Departament de Fisica Aplicada, Universitat Politecnica de Catalunya, Jordi Girona 1-3, Campus Nord, 08034 Barcelona (Spain); Tiana, Jordi [Departament de Fisica Aplicada, Universitat Politecnica de Catalunya, Jordi Girona 1-3, Campus Nord, 08034 Barcelona (Spain); Ringgaard, Erling [Ferroperm Piezoceramics A/S, Hejreskovvej 18, DK-3490 Kvistgaard (Denmark); Wolny, Wanda W [Ferroperm Piezoceramics A/S, Hejreskovvej 18, DK-3490 Kvistgaard (Denmark)

    2004-10-07

    Non-linear increases in elastic, piezoelectric (direct and reverse) and dielectric coefficients have been measured under a high electrical field or under high mechanical stress. The permittivity and reverse piezoelectric coefficient can be measured by applying a high voltage at a low frequency, while the elastic compliance and direct piezoelectric coefficient can be measured at the first radial resonance frequency in order to apply a high stress. The non-linear behaviour has been analysed at the radial resonance of a disc. In all the materials tested, the results show that there is a close relation between the non-linear increments of the different coefficients. An empirical model has been proposed in order to describe and understand these relations. It is assumed that either the strain or the electrical displacement is produced by intrinsic and extrinsic processes, but only the latter, which consist mainly in the motion of domain walls, contribute to the non-linearity. The model enables us to find the domain wall contribution to elastic, piezoelectric and dielectric non-linearities, and allows us to compare the amplitudes of the fields and stresses that produce the same displacement of domain walls.

  13. Legislative Bargaining and Incremental Budgeting

    OpenAIRE

    Dhammika Dharmapala

    2002-01-01

    The notion of 'incrementalism', formulated by Aaron Wildavsky in the 1960's, has been extremely influential in the public budgeting literature. In essence, it entails the claim that legislators engaged in budgetary policymaking accept past allocations, and decide only on the allocation of increments to revenue. Wildavsky explained incrementalism with reference to the cognitive limitations of lawmakers and their desire to reduce conflict. This paper uses a legislative bargaining framework to u...

  14. Non linear viscoelastic models

    DEFF Research Database (Denmark)

    Agerkvist, Finn T.

    2011-01-01

    Viscoelastic effects are often present in loudspeaker suspensions; this can be seen in the displacement transfer function, which often shows a frequency-dependent value below the resonance frequency. In this paper nonlinear versions of the standard linear solid model (SLS) are investigated. The simulations show that the nonlinear version of the Maxwell SLS model can result in a time-dependent small-signal stiffness while the Kelvin-Voigt version does not.
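    As a reference point for the SLS variants discussed above, here is a minimal sketch of the *linear* standard linear solid's complex stiffness (Maxwell form); parameter names and values are illustrative, not taken from the paper:

```python
def sls_stiffness(omega, k0, k1, tau):
    """Complex stiffness of a standard linear solid (Maxwell form):
    a spring k0 in parallel with a spring k1 in series with a dashpot
    of relaxation time tau."""
    jwt = complex(0.0, omega * tau)
    return k0 + k1 * jwt / (1.0 + jwt)

# The apparent stiffness creeps between two plateaus with frequency,
# which is the frequency-dependent behaviour seen below resonance:
low  = sls_stiffness(1e-4, k0=1000.0, k1=200.0, tau=0.01)
high = sls_stiffness(1e+6, k0=1000.0, k1=200.0, tau=0.01)
print(abs(low), abs(high))   # ~1000 at low frequency, ~1200 at high
```

    The paper's nonlinear versions would replace one of these constant elements with a displacement-dependent one.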

  15. Genetic parameters for racing records in trotters using linear and generalized linear models.

    Science.gov (United States)

    Suontama, M; van der Werf, J H J; Juga, J; Ojala, M

    2012-09-01

    Heritability and repeatability and genetic and phenotypic correlations were estimated for trotting race records with linear and generalized linear models using 510,519 records on 17,792 Finnhorses and 513,161 records on 25,536 Standardbred trotters. Heritability and repeatability were estimated for single racing time and earnings traits with linear models, and logarithmic scale was used for racing time and fourth-root scale for earnings to correct for nonnormality. Generalized linear models with a gamma distribution were applied for single racing time and with a multinomial distribution for single earnings traits. In addition, genetic parameters for annual earnings were estimated with linear models on the observed and fourth-root scales. Racing success traits of single placings, winnings, breaking stride, and disqualifications were analyzed using generalized linear models with a binomial distribution. Estimates of heritability were greatest for racing time, which ranged from 0.32 to 0.34. Estimates of heritability were low for single earnings with all distributions, ranging from 0.01 to 0.09. Annual earnings were closer to normal distribution than single earnings. Heritability estimates were moderate for annual earnings on the fourth-root scale, 0.19 for Finnhorses and 0.27 for Standardbred trotters. Heritability estimates for binomial racing success variables ranged from 0.04 to 0.12, being greatest for winnings and least for breaking stride. Genetic correlations among racing traits were high, whereas phenotypic correlations were mainly low to moderate, except correlations between racing time and earnings were high. On the basis of a moderate heritability and moderate to high repeatability for racing time and annual earnings, selection of horses for these traits is effective when based on a few repeated records. 
Because of high genetic correlations, direct selection for racing time and annual earnings would also result in good genetic response in racing success.

  16. Generalised linear models for correlated pseudo-observations, with applications to multi-state models

    DEFF Research Database (Denmark)

    Andersen, Per Kragh; Klein, John P.; Rosthøj, Susanne

    2003-01-01

    Keywords: Generalised estimating equation; Generalised linear model; Jackknife pseudo-value; Logistic regression; Markov model; Multi-state model.

  17. Linear causal modeling with structural equations

    CERN Document Server

    Mulaik, Stanley A

    2009-01-01

    Emphasizing causation as a functional relationship between variables that describe objects, Linear Causal Modeling with Structural Equations integrates a general philosophical theory of causation with structural equation modeling (SEM) that concerns the special case of linear causal relations. In addition to describing how the functional relation concept may be generalized to treat probabilistic causation, the book reviews historical treatments of causation and explores recent developments in experimental psychology on studies of the perception of causation. It looks at how to perceive causal

  18. TENSOR DECOMPOSITIONS AND SPARSE LOG-LINEAR MODELS

    Science.gov (United States)

    Johndrow, James E.; Bhattacharya, Anirban; Dunson, David B.

    2017-01-01

    Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. We derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions. PMID:29332971
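    The rank notion involved here can be illustrated with a minimal numpy sketch (toy probabilities of my own choosing): under independence, the probability tensor of two categorical variables has nonnegative rank 1, i.e. it is a single outer product, the simplest one-component PARAFAC.

```python
import numpy as np

# Hypothetical 2x3 contingency pmf for two independent categorical variables:
p = np.array([0.4, 0.6])          # P(X = x)
q = np.array([0.2, 0.3, 0.5])     # P(Y = y)
pmf = np.outer(p, q)              # joint pmf P(X = x, Y = y)

# Independence <=> the probability tensor factorizes as one outer product.
rank = np.linalg.matrix_rank(pmf)
print(rank)       # 1
print(pmf.sum())  # 1.0
```

    Latent structure models generalize this by mixing several such rank-1 components.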

  19. A maximal incremental effort alters tear osmolarity depending on the fitness level in military helicopter pilots.

    Science.gov (United States)

    Vera, Jesús; Jiménez, Raimundo; Madinabeitia, Iker; Masiulis, Nerijus; Cárdenas, David

    2017-10-01

    Fitness level modulates the physiological responses to exercise for a variety of indices. While intense bouts of exercise have been demonstrated to increase tear osmolarity (Tosm), it is not known if fitness level can affect the Tosm response to acute exercise. This study aims to compare the effect of a maximal incremental test on Tosm between trained and untrained military helicopter pilots. Nineteen military helicopter pilots (ten trained and nine untrained) performed a maximal incremental test on a treadmill. A tear sample was collected before and after physical effort to determine the exercise-induced changes in Tosm. The Bayesian statistical analysis demonstrated that Tosm significantly increased from 303.72 ± 6.76 to 310.56 ± 8.80 mmol/L after performance of a maximal incremental test. However, while the untrained group showed an acute Tosm rise (increment of 12.33 mmol/L), the trained group maintained a stable Tosm after physical effort (increment of 1.45 mmol/L). There was a significant positive linear association between fat indices and Tosm changes (correlation coefficients [r] range: 0.77-0.89), whereas the Tosm changes displayed a negative relationship with cardiorespiratory capacity (VO2 max; r = -0.75) and performance parameters (r = -0.75 for velocity, and r = -0.67 for time to exhaustion). The findings from this study provide evidence that fitness level is a major determinant of the Tosm response to maximal incremental physical effort, showing a fairly linear association with several indices related to fitness level. A high fitness level seems to be beneficial in avoiding Tosm changes as a consequence of intense exercise. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. On D-branes from gauged linear sigma models

    International Nuclear Information System (INIS)

    Govindarajan, S.; Jayaraman, T.; Sarkar, T.

    2001-01-01

    We study both A-type and B-type D-branes in the gauged linear sigma model by considering worldsheets with boundary. The boundary conditions on the matter and vector multiplet fields are first considered in the large-volume phase/non-linear sigma model limit of the corresponding Calabi-Yau manifold, where we find that we need to add a contact term on the boundary. These considerations enable us to derive the boundary conditions in the full gauged linear sigma model, including the addition of the appropriate boundary contact terms, such that these boundary conditions have the correct non-linear sigma model limit. Most of the analysis is for the case of Calabi-Yau manifolds with one Kaehler modulus (including those corresponding to hypersurfaces in weighted projective space), though we comment on possible generalisations.

  1. Organization Strategy and Structural Differences for Radical Versus Incremental Innovation

    OpenAIRE

    John E. Ettlie; William P. Bridges; Robert D. O'Keefe

    1984-01-01

    The purpose of this study was to test a model of the organizational innovation process that suggests that the strategy-structure causal sequence is differentiated by radical versus incremental innovation. That is, unique strategy and structure will be required for radical innovation, especially process adoption, while more traditional strategy and structure arrangements tend to support new product introduction and incremental process adoption. This differentiated theory is strongly supported ...

  2. Hardiness scales in Iranian managers: evidence of incremental validity in relationships with the five factor model and with organizational and psychological adjustment.

    Science.gov (United States)

    Ghorbani, Nima; Watson, P J

    2005-06-01

    This study examined the incremental validity of Hardiness scales in a sample of Iranian managers. Along with measures of the Five Factor Model and of Organizational and Psychological Adjustment, Hardiness scales were administered to 159 male managers (M age = 39.9, SD = 7.5) who had worked in their organizations for 7.9 yr. (SD=5.4). Hardiness predicted greater Job Satisfaction, higher Organization-based Self-esteem, and perceptions of the work environment as being less stressful and constraining. Hardiness also correlated positively with Assertiveness, Emotional Stability, Extraversion, Openness to Experience, Agreeableness, and Conscientiousness and negatively with Depression, Anxiety, Perceived Stress, Chance External Control, and a Powerful Others External Control. Evidence of incremental validity was obtained when the Hardiness scales supplemented the Five Factor Model in predicting organizational and psychological adjustment. These data documented the incremental validity of the Hardiness scales in a non-Western sample and thus confirmed once again that Hardiness has a relevance that extends beyond the culture in which it was developed.

  3. Applying an Individual-Based Model to Simultaneously Evaluate Net Ecosystem Production and Tree Diameter Increment

    Science.gov (United States)

    Fang, F. J.

    2017-12-01

    Reconciling observations at fundamentally different scales is central in understanding the global carbon cycle. This study investigates a model-based melding of forest inventory data, remote-sensing data and micrometeorological-station data ("flux towers" estimating forest heat, CO2 and H2O fluxes). The individual tree-based model FORCCHN was used to evaluate the tree DBH increment and forest carbon fluxes. These are the first simultaneous simulations of forest carbon budgets from flux towers and of individual-tree growth from continuous forest inventory data, under circumstances in which both predictions can be tested. Along with the global implications of such findings, this also improves the capacity for sustainable forest management and the comprehensive understanding of forest ecosystems. In forest ecology, diameter at breast height (DBH) of a tree significantly determines an individual tree's cross-sectional sapwood area, its biomass and carbon storage. Evaluating the annual DBH increment (ΔDBH) of an individual tree is central to understanding tree growth and forest ecology. Ecosystem carbon flux is a consequence of key processes in the forest-ecosystem carbon cycle: Gross and Net Primary Production (GPP and NPP, respectively) and Net Ecosystem Production (NEP). All of these relate closely to tree DBH changes and tree death. Despite advances in evaluating forest carbon fluxes with flux towers and forest inventories for individual tree ΔDBH, few current ecological models can simultaneously quantify and predict both the tree ΔDBH and forest carbon flux.

  4. The intake of long chain omega 3 fatty acids through fish versus capsules results in greater increments of their plasma levels

    Directory of Open Access Journals (Sweden)

    Visioli Francesco

    2004-03-01

    Omega 3 fatty acids from fish appear to be more cardioprotective than equivalent amounts provided as capsules. We gave volunteers, for six weeks, either 100 g/day of salmon, providing 383 mg of EPA and 544 mg of DHA, or one or three capsules of fish oil per day, providing 150 mg of EPA and 106 mg of DHA or 450 mg of EPA and 318 mg of DHA. We also re-evaluated data from a previous study carried out with the same design. Marked increments in plasma EPA and DHA concentrations (μg/mg total lipid) and percentages of total fatty acids were recorded at the end of either treatment. Such increments were linearly and significantly correlated with the dose after capsule administration. Notably, increments in plasma EPA and DHA concentration after salmon intake were significantly higher than after administration of capsules. In fact, the same increments would be obtained with at least two- and nine-fold higher doses of EPA and DHA, respectively, if administered with capsules rather than salmon. In turn, we provide experimental evidence that omega 3 fatty acids from fish are more effectively incorporated into plasma lipids than when administered as capsules and that increments in plasma concentrations of EPA and DHA given as capsules are linearly correlated with their intakes.

  5. Decomposable log-linear models

    DEFF Research Database (Denmark)

    Eriksen, Poul Svante

    The present paper considers discrete probability models with exact computational properties. In relation to contingency tables this means closed form expressions of the maximum likelihood estimate and its distribution. The model class includes what is known as decomposable graphical models, which can be characterized by a structured set of conditional independencies between some variables given some other variables. We term the new model class decomposable log-linear models, which is illustrated to be a much richer class than decomposable graphical models. It covers a wide range of non-hierarchical models, models with structural zeroes, models described by quasi independence and models for level merging. Also, they have a very natural interpretation as they may be formulated by a structured set of conditional independencies between two events given some other event.

  6. Modeling digital switching circuits with linear algebra

    CERN Document Server

    Thornton, Mitchell A

    2014-01-01

    Modeling Digital Switching Circuits with Linear Algebra describes an approach for modeling digital information and circuitry that is an alternative to Boolean algebra. While the Boolean algebraic model has been wildly successful and is responsible for many advances in modern information technology, the approach described in this book offers new insight and different ways of solving problems. Modeling the bit as a vector instead of a scalar value in the set {0, 1} allows digital circuits to be characterized with transfer functions in the form of a linear transformation matrix. The use of transf
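    A minimal sketch of the vector-bit idea, assuming the usual one-hot encoding of bits (the encoding and matrix layouts here are illustrative, not copied from the book): a bit becomes a basis vector, a gate becomes a transfer matrix, and multi-input gates act on the Kronecker product of their inputs.

```python
import numpy as np

# Bits as basis vectors rather than scalars: 0 -> [1,0], 1 -> [0,1].
ZERO, ONE = np.array([1, 0]), np.array([0, 1])

NOT = np.array([[0, 1],
                [1, 0]])

# A two-input gate becomes a 2x4 transfer matrix acting on kron(a, b).
AND = np.array([[1, 1, 1, 0],    # inputs 00, 01, 10 map to output 0
                [0, 0, 0, 1]])   # input 11 maps to output 1

print(AND @ np.kron(ONE, ONE))   # [0 1]  -> the vector encoding bit 1
print(NOT @ ZERO)                # [0 1]
```

    Composing circuits then reduces to matrix products and Kronecker products, which is the linear-algebraic alternative to Boolean algebra the book describes.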

  7. Non-linear Growth Models in Mplus and SAS

    Science.gov (United States)

    Grimm, Kevin J.; Ram, Nilam

    2013-01-01

    Non-linear growth curves or growth curves that follow a specified non-linear function in time enable researchers to model complex developmental patterns with parameters that are easily interpretable. In this paper we describe how a variety of sigmoid curves can be fit using the Mplus structural modeling program and the non-linear mixed-effects modeling procedure NLMIXED in SAS. Using longitudinal achievement data collected as part of a study examining the effects of preschool instruction on academic gain we illustrate the procedures for fitting growth models of logistic, Gompertz, and Richards functions. Brief notes regarding the practical benefits, limitations, and choices faced in the fitting and estimation of such models are included. PMID:23882134
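    A hedged sketch of fitting one of the named curves (Gompertz) to synthetic longitudinal data, using Python's scipy rather than the Mplus/NLMIXED procedures the paper describes; all data and parameter values are invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, a, b, c):
    # a: asymptote, b: displacement, c: growth rate (illustrative names)
    return a * np.exp(-b * np.exp(-c * t))

# Synthetic data standing in for repeated achievement measurements.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 50)
y = gompertz(t, 100.0, 3.0, 0.7) + rng.normal(0, 1.0, t.size)

params, _ = curve_fit(gompertz, t, y, p0=[90.0, 2.0, 0.5])
print(np.round(params, 1))   # close to the generating values [100, 3, 0.7]
```

    The mixed-effects versions in the paper additionally let a, b, and c vary across individuals.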

  8. Variance Function Partially Linear Single-Index Models.

    Science.gov (United States)

    Lian, Heng; Liang, Hua; Carroll, Raymond J

    2015-01-01

    We consider heteroscedastic regression models where the mean function is a partially linear single index model and the variance function depends upon a generalized partially linear single index model. We do not insist that the variance function depend only upon the mean function, as happens in the classical generalized partially linear single index model. We develop efficient and practical estimation methods for the variance function and for the mean function. Asymptotic theory for the parametric and nonparametric parts of the model is developed. Simulations illustrate the results. An empirical example involving ozone levels is used to further illustrate the results, and is shown to be a case where the variance function does not depend upon the mean function.

  9. Unmanned Maritime Systems Incremental Acquisition Approach

    Science.gov (United States)

    2016-12-01

    MBA professional report: Unmanned Maritime Systems Incremental Acquisition Approach; Thomas Driscoll, Lieutenant... Approved for public release; distribution is unlimited. The purpose of this MBA report is to explore and understand the issues

  10. Linear latent variable models: the lava-package

    DEFF Research Database (Denmark)

    Holst, Klaus Kähler; Budtz-Jørgensen, Esben

    2013-01-01

    An R package for specifying and estimating linear latent variable models is presented. The philosophy of the implementation is to separate the model specification from the actual data, which leads to a dynamic and easy way of modeling complex hierarchical structures. Several advanced features are implemented, including robust standard errors for clustered correlated data, multigroup analyses, non-linear parameter constraints, inference with incomplete data, maximum likelihood estimation with censored and binary observations, and instrumental variable estimators. In addition an extensive simulation...

  11. On the validity of the incremental approach to estimate the impact of cities on air quality

    Science.gov (United States)

    Thunis, Philippe

    2018-01-01

    The question of how much cities are the sources of their own air pollution is not only theoretical: it is critical to the design of effective strategies for urban air quality planning. In this work, we assess the validity of the commonly used incremental approach to estimate the likely impact of cities on their air pollution. With the incremental approach, the city impact (i.e. the concentration change generated by the city emissions) is estimated as the concentration difference between a rural background and an urban background location, also known as the urban increment. We show that the city impact is in reality made up of the urban increment and two additional components, and consequently two assumptions need to be fulfilled for the urban increment to be representative of the urban impact. The first assumption is that the rural background location is not influenced by emissions from within the city, whereas the second requires that background concentration levels, obtained with zero city emissions, are equal at both locations. Because the urban impact is not measurable, the SHERPA modelling approach, based on a full air quality modelling system, is used in this work to assess the validity of these assumptions for some European cities. Results indicate that for PM2.5, these two assumptions are far from being fulfilled for many large or medium city sizes. For cities of this type, urban increments largely underestimate city impacts. Although results are in better agreement for NO2, similar issues arise. In many situations the incremental approach is therefore not an adequate estimate of the urban impact on air pollution. This poses issues in terms of interpretation when these increments are used to define strategic options in terms of air quality planning. We finally illustrate the interest of comparing modelled and measured increments to improve our confidence in the model results.
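    The decomposition of the city impact can be sketched with toy numbers (all concentration values below are hypothetical, chosen only to make the identity visible):

```python
# The true city impact at the urban site is the urban concentration minus
# the concentration that would remain there with zero city emissions.
c_urban           = 18.0   # measured urban background (e.g. PM2.5, ug/m3)
c_rural           = 11.0   # measured rural background
c_urban_zero_city = 9.0    # urban site, city emissions switched off (model)
c_rural_zero_city = 10.0   # rural site, city emissions switched off (model)

urban_increment = c_urban - c_rural                              # 7.0
city_impact     = c_urban - c_urban_zero_city                    # 9.0

# The two extra components that the incremental approach ignores:
city_influence_at_rural = c_rural - c_rural_zero_city            # 1.0
background_difference   = c_rural_zero_city - c_urban_zero_city  # 1.0

# city_impact = urban_increment + both extra components
assert city_impact == urban_increment + city_influence_at_rural + background_difference
print(urban_increment, city_impact)  # 7.0 9.0
```

    The two assumptions in the text amount to both extra components being zero; with the toy numbers above, the urban increment underestimates the city impact.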

  12. Incremental learning for automated knowledge capture

    Energy Technology Data Exchange (ETDEWEB)

    Benz, Zachary O. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Basilico, Justin Derrick [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Davis, Warren Leon [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dixon, Kevin R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jones, Brian S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Martin, Nathaniel [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wendt, Jeremy Daniel [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2013-12-01

    People responding to high-consequence national-security situations need tools to help them make the right decision quickly. The dynamic, time-critical, and ever-changing nature of these situations, especially those involving an adversary, require models of decision support that can dynamically react as a situation unfolds and changes. Automated knowledge capture is a key part of creating individualized models of decision making in many situations because it has been demonstrated as a very robust way to populate computational models of cognition. However, existing automated knowledge capture techniques only populate a knowledge model with data prior to its use, after which the knowledge model is static and unchanging. In contrast, humans, including our national-security adversaries, continually learn, adapt, and create new knowledge as they make decisions and witness their effect. This artificial dichotomy between creation and use exists because the majority of automated knowledge capture techniques are based on traditional batch machine-learning and statistical algorithms. These algorithms are primarily designed to optimize the accuracy of their predictions and only secondarily, if at all, concerned with issues such as speed, memory use, or ability to be incrementally updated. Thus, when new data arrives, batch algorithms used for automated knowledge capture currently require significant recomputation, frequently from scratch, which makes them ill-suited for use in dynamic, time-critical, high-consequence decision-making environments. In this work we seek to explore and expand upon the capabilities of dynamic, incremental models that can adapt to an ever-changing feature space.
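    As a minimal instance of an incrementally updatable model (not the authors' system, just an illustration of the batch-vs-incremental contrast), Welford's online algorithm absorbs each new observation in O(1) with no recomputation from scratch:

```python
class OnlineStats:
    """Welford's algorithm: incrementally maintained mean and variance of a
    data stream, updated per observation instead of retrained in batch."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

stats = OnlineStats()
for x in [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]:
    stats.update(x)          # each point absorbed in O(1), no retraining
print(stats.mean, round(stats.variance, 3))  # 5.0 4.571
```

    A batch algorithm would instead re-scan the full history on every arrival, which is exactly the cost the passage argues against.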

  13. FEM Simulation of Incremental Shear

    International Nuclear Information System (INIS)

    Rosochowski, Andrzej; Olejnik, Lech

    2007-01-01

    A popular way of producing ultrafine grained metals on a laboratory scale is severe plastic deformation. This paper introduces a new severe plastic deformation process of incremental shear. A finite element method simulation is carried out for various tool geometries and process kinematics. It has been established that for the successful realisation of the process the inner radius of the channel as well as the feeding increment should be approximately 30% of the billet thickness. The angle at which the reciprocating die works the material can be 30 deg. When compared to equal channel angular pressing, incremental shear shows basic similarities in the mode of material flow and a few technological advantages which make it an attractive alternative to the known severe plastic deformation processes. The most promising characteristic of incremental shear is the possibility of processing very long billets in a continuous way, which makes the process more industrially relevant.

  14. Evaluation of incremental reactivity and its uncertainty in Southern California.

    Science.gov (United States)

    Martien, Philip T; Harley, Robert A; Milford, Jana B; Russell, Armistead G

    2003-04-15

    The incremental reactivity (IR) and relative incremental reactivity (RIR) of carbon monoxide and 30 individual volatile organic compounds (VOC) were estimated for the South Coast Air Basin using two photochemical air quality models: a 3-D, grid-based model and a vertically resolved trajectory model. Both models include an extended version of the SAPRC99 chemical mechanism. For the 3-D modeling, the decoupled direct method (DDM-3D) was used to assess reactivities. The trajectory model was applied to estimate uncertainties in reactivities due to uncertainties in chemical rate parameters, deposition parameters, and emission rates using Monte Carlo analysis with Latin hypercube sampling. For most VOC, RIRs were found to be consistent in rankings with those produced by Carter using a box model. However, 3-D simulations show that coastal regions, upwind of most of the emissions, have comparatively low IR but higher RIR than predicted by box models for C4-C5 alkenes and carbonyls that initiate the production of HOx radicals. Biogenic VOC emissions were found to have a lower RIR than predicted by box model estimates, because emissions of these VOC were mostly downwind of the areas of primary ozone production. Uncertainties in RIR of individual VOC were found to be dominated by uncertainties in the rate parameters of their primary oxidation reactions. The coefficient of variation (COV) of most RIR values ranged from 20% to 30%, whereas the COV of absolute incremental reactivity ranged from about 30% to 40%. In general, uncertainty and variability both decreased when relative rather than absolute reactivity metrics were used.
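    The Latin hypercube step can be sketched as follows (the two uncertain factors and the toy response are illustrative stand-ins, not the SAPRC99 setup):

```python
import numpy as np
from scipy.stats import qmc

# Latin hypercube sample over two hypothetical uncertain inputs
# (say, a rate parameter and an emission rate), each with a +/-30% range.
sampler = qmc.LatinHypercube(d=2, seed=0)
unit = sampler.random(n=200)                      # stratified points in [0, 1)^2
lo, hi = np.array([0.7, 0.7]), np.array([1.3, 1.3])
factors = qmc.scale(unit, lo, hi)                 # scale to the uncertainty ranges

# Toy "reactivity" response; the real study propagates such factors through
# a full photochemical model, this is only a stand-in.
reactivity = 2.0 * factors[:, 0] * factors[:, 1]
cov = reactivity.std(ddof=1) / reactivity.mean()  # coefficient of variation
print(round(cov, 2))
```

    Stratifying each input into n equal-probability bins is what lets Latin hypercube sampling cover the parameter space with far fewer runs than plain Monte Carlo.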

  15. A Model Based Approach to Increase the Part Accuracy in Robot Based Incremental Sheet Metal Forming

    International Nuclear Information System (INIS)

    Meier, Horst; Laurischkat, Roman; Zhu Junhong

    2011-01-01

    One main influence on the dimensional accuracy in robot based incremental sheet metal forming results from the compliance of the involved robot structures. Compared to conventional machine tools, the low stiffness of the robot's kinematics results in a significant deviation from the planned tool path and therefore in a shape of insufficient quality. To predict and compensate for these deviations offline, a model based approach has been developed, consisting of a finite element approach to simulate the sheet forming and a multi body system modeling the compliant robot structure. This paper describes the implementation and experimental verification of the multi body system model and its included compensation method.

  16. Linear factor copula models and their properties

    KAUST Repository

    Krupskii, Pavel; Genton, Marc G.

    2018-01-01

    We consider a special case of factor copula models with additive common factors and independent components. These models are flexible and parsimonious with O(d) parameters where d is the dimension. The linear structure allows one to obtain closed form expressions for some copulas and their extreme‐value limits. These copulas can be used to model data with strong tail dependencies, such as extreme data. We study the dependence properties of these linear factor copula models and derive the corresponding limiting extreme‐value copulas with a factor structure. We show how parameter estimates can be obtained for these copulas and apply one of these copulas to analyse a financial data set.
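    A small simulation sketch of an additive one-factor construction (my own toy choice of Gaussian factor and components; the paper's copulas cover more general factor and component distributions):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
W  = rng.standard_normal(n)          # additive common factor
X1 = W + rng.standard_normal(n)      # component 1 = factor + independent part
X2 = W + rng.standard_normal(n)      # component 2

# The copula of (X1, X2) is the object of interest; converting to ranks
# strips the margins and leaves only the dependence induced by W.
U1 = np.argsort(np.argsort(X1)) / n
U2 = np.argsort(np.argsort(X2)) / n
rho = np.corrcoef(U1, U2)[0, 1]      # rank correlation from the shared factor
print(round(rho, 2))                 # positive, roughly 0.48 here
```

    Replacing the Gaussian factor with a heavy-tailed one is what produces the strong tail dependence the abstract mentions.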


  18. Decoupled Simulation Method For Incremental Sheet Metal Forming

    International Nuclear Information System (INIS)

    Sebastiani, G.; Brosius, A.; Tekkaya, A. E.; Homberg, W.; Kleiner, M.

    2007-01-01

    Within the scope of this article a decoupling algorithm to reduce computing time in Finite Element Analyses of incremental forming processes will be investigated. Based on the given position of the small forming zone, the presented algorithm aims at separating a Finite Element Model into an elastic and an elasto-plastic deformation zone. Including the elastic response of the structure by means of model simplifications, the costly iteration in the elasto-plastic zone can be restricted to the small forming zone and to few supporting elements in order to reduce computation time. Since the forming zone moves along the specimen, an update of both the forming zone with its elastic boundary and the supporting structure is needed after several increments. The presented paper discusses the algorithmic implementation of the approach and introduces several strategies to implement the denoted elastic boundary condition at the boundary of the plastic forming zone.

  19. Modelling Loudspeaker Non-Linearities

    DEFF Research Database (Denmark)

    Agerkvist, Finn T.

    2007-01-01

    This paper investigates different techniques for modelling the non-linear parameters of the electrodynamic loudspeaker. The methods are tested not only for their accuracy within the range of original data, but also for their ability to work reasonably outside that range. It is demonstrated that polynomial expansions are rather poor at this, whereas an inverse polynomial expansion or localized fitting functions such as the Gaussian are better suited for modelling the Bl-factor and compliance. For the inductance the sigmoid function is shown to give very good results. Finally the time varying...

  20. Broad Learning System: An Effective and Efficient Incremental Learning System Without the Need for Deep Architecture.

    Science.gov (United States)

    Chen, C L Philip; Liu, Zhulin

    2018-01-01

    Broad Learning System (BLS), which aims to offer an alternative way of learning in deep structure, is proposed in this paper. Deep structure and learning suffer from a time-consuming training process because of a large number of connecting parameters in filters and layers. Moreover, they require a complete retraining process if the structure is not sufficient to model the system. The BLS is established in the form of a flat network, where the original inputs are transferred and placed as "mapped features" in feature nodes and the structure is expanded in the wide sense through "enhancement nodes." Incremental learning algorithms are developed for fast remodeling in broad expansion without a retraining process when the network needs to be expanded. Two incremental learning algorithms are given, one for the increment of the feature nodes (or filters in deep structure) and one for the increment of the enhancement nodes. The designed model and algorithms are very versatile for selecting a model rapidly. In addition, another incremental learning algorithm is developed for the case in which a system that has already been modeled encounters a new incoming input; the system can then be remodeled incrementally without retraining from the beginning. A satisfactory model-reduction result using singular value decomposition is obtained to simplify the final structure. Compared with existing deep neural networks, experimental results on the Modified National Institute of Standards and Technology database and the NYU NORB object recognition benchmark demonstrate the effectiveness of the proposed BLS.
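    A rough sketch of the broad-expansion idea (random nonlinear "enhancement nodes" appended to mapped features, with only the output weights solved). Unlike BLS proper, this sketch recomputes the least-squares solution instead of updating the pseudoinverse incrementally; all data and sizes are toy choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (a hypothetical stand-in for a benchmark set).
X = rng.normal(size=(200, 5))
y = np.sin(X).sum(axis=1)

def enhancement(Z, n_nodes, seed):
    """Random 'enhancement nodes': a fixed nonlinear expansion of the features."""
    W = np.random.default_rng(seed).normal(size=(Z.shape[1], n_nodes))
    return np.tanh(Z @ W)

Z = X                                      # raw inputs play the "mapped features"
A = np.hstack([Z, enhancement(Z, 10, seed=1)])
w, *_ = np.linalg.lstsq(A, y, rcond=None)  # only output weights are trained
err_small = np.mean((A @ w - y) ** 2)

# Broad expansion: append more enhancement nodes and re-solve the output
# weights; no hidden layers are retrained.
A2 = np.hstack([A, enhancement(Z, 40, seed=2)])
w2, *_ = np.linalg.lstsq(A2, y, rcond=None)
err_big = np.mean((A2 @ w2 - y) ** 2)
print(err_big <= err_small)   # wider network fits at least as well: True
```

    Because the new columns enlarge the column space, the training error can only stay the same or shrink, which is why widening without retraining is attractive.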

  1. Strategies and limits in multi-stage single-point incremental forming

    DEFF Research Database (Denmark)

    Skjødt, Martin; Silva, M.B.; Martins, P. A. F.

    2010-01-01

    Multi-stage single-point incremental forming (SPIF) is a state-of-the-art manufacturing process that allows small-quantity production of complex sheet metal parts with vertical walls. This paper is focused on the application of multi-stage SPIF with the objective of producing cylindrical cups......-limit curves and fracture forming-limit curves (FFLCs), numerical simulation, and experimentation, namely the evaluation of strain paths and fracture strains in actual multi-stage parts. Assessment of numerical simulation with experimentation shows good agreement between computed and measured strain and strain...... paths. The results also reveal that the sequence of multi-stage forming has a large effect on the location of strain points in the principal strain space. Strain paths are linear in the first stage and highly non-linear in the subsequent forming stages. The overall results show that the experimentally......

  2. Log-normal frailty models fitted as Poisson generalized linear mixed models.

    Science.gov (United States)

    Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver

    2016-12-01

    The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known for decades. As shown in recent studies, this equivalence carries over to clustered survival data: A frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
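
    A minimal sketch of the piecewise-exponential data "explosion" described above (the toy survival times and cut points are made up; the random effect is omitted, so the closed-form Poisson maximum-likelihood estimate events/exposure stands in for a full GLMM fit):

```python
import numpy as np

# Toy survival data: (time, event) pairs; event=1 means the event was observed.
times  = np.array([0.5, 1.2, 2.7, 3.1, 0.9, 2.2])
events = np.array([1,   0,   1,   1,   0,   1])
cuts = np.array([0.0, 1.0, 2.0, 4.0])   # 3 pieces for the baseline hazard

# "Explode" each subject into one record per piece it survives into.
rows = []
for t, d in zip(times, events):
    for j in range(len(cuts) - 1):
        lo, hi = cuts[j], cuts[j + 1]
        if t <= lo:
            break                         # subject never reaches this piece
        exposure = min(t, hi) - lo        # time at risk inside the piece
        died_here = int(d and lo < t <= hi)
        rows.append((j, exposure, died_here))

# Poisson ML with a log-exposure offset reduces, piece by piece, to
# hazard_j = events_j / exposure_j (the piecewise constant hazard).
hazard = np.zeros(len(cuts) - 1)
for j in range(len(cuts) - 1):
    e = sum(r[1] for r in rows if r[0] == j)
    d = sum(r[2] for r in rows if r[0] == j)
    hazard[j] = d / e
```

    In the approach of the paper, the exploded records are instead passed to GLMM software with `log(exposure)` as offset, which then also accommodates the log-normal frailty as a random effect.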

  3. Linearized models for a new magnetic control in MAST

    International Nuclear Information System (INIS)

    Artaserse, G.; Maviglia, F.; Albanese, R.; McArdle, G.J.; Pangione, L.

    2013-01-01

    Highlights: ► We applied linearized models for a new magnetic control on the MAST tokamak. ► A suite of procedures, conceived to be machine independent, has been used. ► We carried out model-based simulations, taking into account eddy current effects. ► Comparisons with the EFIT flux maps and the experimental magnetic signals are shown. ► A current-driven model for the dynamic simulations of the experimental data has been developed. -- Abstract: The aim of this work is to provide reliable linearized models for the design and assessment of a new magnetic control system for MAST (Mega Ampère Spherical Tokamak) using rtEFIT, which can easily be exported to MAST Upgrade. Linearized models for magnetic control have been obtained using the 2D axisymmetric finite element code CREATE L. MAST linearized models include equivalent 2D axisymmetric schematization of poloidal field (PF) coils, vacuum vessel, and other conducting structures. A plasmaless and a double null configuration have been chosen as benchmark cases for the comparison with experimental data and EFIT reconstructions. Good agreement has been found with the EFIT flux map and the experimental signals coming from magnetic probes, with only a few mismatches probably due to broken sensors. A suite of procedures (equipped with a user-friendly interface that can be run even remotely) to provide linearized models for magnetic control is now available on the MAST linux machines. A new current-driven model has been used to obtain a state space model having the PF coil currents as inputs. Dynamic simulations of experimental data have been carried out using linearized models, including modelling of the effects of the passive structures, showing a fair agreement. The modelling activity has also been useful to reproduce accurately the interaction between plasma current and radial position control loops

  4. Linearized models for a new magnetic control in MAST

    Energy Technology Data Exchange (ETDEWEB)

    Artaserse, G., E-mail: giovanni.artaserse@enea.it [Associazione Euratom-ENEA sulla Fusione, Via Enrico Fermi 45, I-00044 Frascati (RM) (Italy); Maviglia, F.; Albanese, R. [Associazione Euratom-ENEA-CREATE sulla Fusione, Via Claudio 21, I-80125 Napoli (Italy); McArdle, G.J.; Pangione, L. [EURATOM/CCFE Fusion Association, Culham Science Centre, Abingdon, Oxon, OX14 3DB (United Kingdom)

    2013-10-15

    Highlights: ► We applied linearized models for a new magnetic control on the MAST tokamak. ► A suite of procedures, conceived to be machine independent, has been used. ► We carried out model-based simulations, taking into account eddy current effects. ► Comparisons with the EFIT flux maps and the experimental magnetic signals are shown. ► A current-driven model for the dynamic simulations of the experimental data has been developed. -- Abstract: The aim of this work is to provide reliable linearized models for the design and assessment of a new magnetic control system for MAST (Mega Ampère Spherical Tokamak) using rtEFIT, which can easily be exported to MAST Upgrade. Linearized models for magnetic control have been obtained using the 2D axisymmetric finite element code CREATE L. MAST linearized models include equivalent 2D axisymmetric schematization of poloidal field (PF) coils, vacuum vessel, and other conducting structures. A plasmaless and a double null configuration have been chosen as benchmark cases for the comparison with experimental data and EFIT reconstructions. Good agreement has been found with the EFIT flux map and the experimental signals coming from magnetic probes, with only a few mismatches probably due to broken sensors. A suite of procedures (equipped with a user-friendly interface that can be run even remotely) to provide linearized models for magnetic control is now available on the MAST linux machines. A new current-driven model has been used to obtain a state space model having the PF coil currents as inputs. Dynamic simulations of experimental data have been carried out using linearized models, including modelling of the effects of the passive structures, showing a fair agreement. The modelling activity has also been useful to reproduce accurately the interaction between plasma current and radial position control loops.

  5. Modeling Non-Linear Material Properties in Composite Materials

    Science.gov (United States)

    2016-06-28

    Technical Report ARWSB-TR-16013: Modeling Non-Linear Material Properties in Composite Materials, by Michael F. Macri and Andrew G... ...systems are increasingly incorporating composite materials into their design. Many of these systems subject the composites to environmental conditions

  6. On conditional scalar increment and joint velocity-scalar increment statistics

    International Nuclear Information System (INIS)

    Zhang Hengbin; Wang Danhong; Tong Chenning

    2004-01-01

    Conditional velocity and scalar increment statistics are usually studied in the context of Kolmogorov's refined similarity hypotheses and are considered universal (quasi-Gaussian) for inertial-range separations. In such analyses the locally averaged energy and scalar dissipation rates are used as conditioning variables. Recent studies have shown that certain local turbulence structures can be captured when the local scalar variance (φ²)_r and the local kinetic energy k_r are used as the conditioning variables. We study the conditional increments using these conditioning variables, which also provide the local turbulence scales. Experimental data obtained in the fully developed region of an axisymmetric turbulent jet are used to compute the statistics. The conditional scalar increment probability density function (PDF) conditioned on (φ²)_r is found to be close to Gaussian for (φ²)_r small compared with its mean, and is sub-Gaussian and bimodal for large (φ²)_r, and therefore is not universal. We find that the different shapes of the conditional PDFs are related to the instantaneous degree of non-equilibrium (production larger than dissipation) of the local scalar. There is further evidence of this from the PDF conditioned on both (φ²)_r and χ_r, which is largely a function of (φ²)_r/χ_r, a measure of the degree of non-equilibrium. The velocity-scalar increment joint PDF is close to joint Gaussian and quad-modal for equilibrium and non-equilibrium local velocity and scalar, respectively. The latter shape is associated with a combination of the ramp-cliff and plane strain structures. Kolmogorov's refined similarity hypotheses also predict a dependence of the conditional PDF on the degree of non-equilibrium. Therefore, the quasi-Gaussian (joint) PDF, previously observed in the context of Kolmogorov's refined similarity hypotheses, is only one of the conditional PDF shapes of inertial range turbulence. The present study suggests that

  7. Non-linear Loudspeaker Unit Modelling

    DEFF Research Database (Denmark)

    Pedersen, Bo Rohde; Agerkvist, Finn T.

    2008-01-01

    Simulations of a 6½-inch loudspeaker unit are performed and compared with a displacement measurement. The non-linear loudspeaker model is based on the major non-linear functions and expanded with time-varying suspension behaviour and flux modulation. The results are presented with FFT plots of thr...... frequencies and different displacement levels. The model errors are discussed and analysed, including a test with a loudspeaker unit where the diaphragm is removed....

  8. Comparison between linear quadratic and early time dose models

    International Nuclear Information System (INIS)

    Chougule, A.A.; Supe, S.J.

    1993-01-01

    During the 70s, much interest was focused on fractionation in radiotherapy with the aim of improving tumor control rate without producing unacceptable normal tissue damage. To compare the radiobiological effectiveness of various fractionation schedules, empirical formulae such as Nominal Standard Dose, Time Dose Factor, Cumulative Radiation Effect and Tumour Significant Dose, were introduced and were used despite many shortcomings. It has been claimed that a recent linear quadratic model is able to predict the radiobiological responses of tumours as well as normal tissues more accurately. We compared Time Dose Factor and Tumour Significant Dose models with the linear quadratic model for tumour regression in patients with carcinomas of the cervix. It was observed that the prediction of tumour regression estimated by the Tumour Significant Dose and Time Dose factor concepts varied by 1.6% from that of the linear quadratic model prediction. In view of the lack of knowledge of the precise values of the parameters of the linear quadratic model, it should be applied with caution. One can continue to use the Time Dose Factor concept which has been in use for more than a decade as its results are within ±2% as compared to that predicted by the linear quadratic model. (author). 11 refs., 3 figs., 4 tabs
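
    The linear quadratic comparison above rests on the standard biologically effective dose formula BED = nd(1 + d/(α/β)) for n fractions of dose d; a small sketch (the schedules and α/β value are illustrative, not those of the study):

```python
def bed(n, d, alpha_beta):
    """Biologically effective dose (linear quadratic model) for n fractions of d Gy."""
    return n * d * (1.0 + d / alpha_beta)

# Illustrative comparison of two fractionation schedules with the same total
# effect question: conventional 30 x 2 Gy vs hypofractionated 15 x 3 Gy,
# using alpha/beta = 10 Gy (a typical tumour value).
conv = bed(30, 2.0, 10.0)   # 30 fractions of 2 Gy
hypo = bed(15, 3.0, 10.0)   # 15 fractions of 3 Gy
```

    For these numbers conv exceeds hypo, illustrating how the quadratic term penalises large fraction sizes less when the total dose is reduced; the sensitivity of such comparisons to the assumed α/β is exactly the caution raised in the abstract.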

  9. A Note on the Identifiability of Generalized Linear Mixed Models

    DEFF Research Database (Denmark)

    Labouriau, Rodrigo

    2014-01-01

    I present here a simple proof that, under general regularity conditions, the standard parametrization of the generalized linear mixed model is identifiable. The proof is based on the assumptions of generalized linear mixed models on the first and second order moments and some general mild regularity...... conditions, and, therefore, is extensible to quasi-likelihood based generalized linear models. In particular, binomial and Poisson mixed models with dispersion parameter are identifiable when equipped with the standard parametrization...

  10. A non-linear state space approach to model groundwater fluctuations

    NARCIS (Netherlands)

    Berendrecht, W.L.; Heemink, A.W.; Geer, F.C. van; Gehrels, J.C.

    2006-01-01

    A non-linear state space model is developed for describing groundwater fluctuations. Non-linearity is introduced by modeling the (unobserved) degree of water saturation of the root zone. The non-linear relations are based on physical concepts describing the dependence of both the actual

  11. Effective connectivity between superior temporal gyrus and Heschl's gyrus during white noise listening: linear versus non-linear models.

    Science.gov (United States)

    Hamid, Ka; Yusoff, An; Rahman, Mza; Mohamad, M; Hamid, Aia

    2012-04-01

    This fMRI study is about modelling the effective connectivity between Heschl's gyrus (HG) and the superior temporal gyrus (STG) in human primary auditory cortices. MATERIALS & METHODS: Ten healthy male participants were required to listen to white noise stimuli during functional magnetic resonance imaging (fMRI) scans. Statistical parametric mapping (SPM) was used to generate individual and group brain activation maps. For input region determination, two intrinsic connectivity models comprising bilateral HG and STG were constructed using dynamic causal modelling (DCM). The models were estimated and inferred using DCM while Bayesian Model Selection (BMS) for group studies was used for model comparison and selection. Based on the winning model, six linear and six non-linear causal models were derived and were again estimated, inferred, and compared to obtain a model that best represents the effective connectivity between HG and the STG, balancing accuracy and complexity. Group results indicated significant asymmetrical activation (p(uncorr) Model comparison results showed strong evidence of STG as the input centre. The winning model is preferred by 6 out of 10 participants. The results were supported by BMS results for group studies with the expected posterior probability, r = 0.7830 and exceedance probability, ϕ = 0.9823. One-sample t-tests performed on connection values obtained from the winning model indicated that the valid connections for the winning model are the unidirectional parallel connections from STG to bilateral HG (p model comparison between linear and non-linear models using BMS prefers non-linear connection (r = 0.9160, ϕ = 1.000) from which the connectivity between STG and the ipsi- and contralateral HG is gated by the activity in STG itself.
We are able to demonstrate that the effective connectivity between HG and STG while listening to white noise for the respective participants can be explained by a non-linear dynamic causal model with

  12. Linear Equating for the NEAT Design: Parameter Substitution Models and Chained Linear Relationship Models

    Science.gov (United States)

    Kane, Michael T.; Mroch, Andrew A.; Suh, Youngsuk; Ripkey, Douglas R.

    2009-01-01

    This paper analyzes five linear equating models for the "nonequivalent groups with anchor test" (NEAT) design with internal anchors (i.e., the anchor test is part of the full test). The analysis employs a two-dimensional framework. The first dimension contrasts two general approaches to developing the equating relationship. Under a "parameter…

  13. Recent Updates to the GEOS-5 Linear Model

    Science.gov (United States)

    Holdaway, Dan; Kim, Jong G.; Errico, Ron; Gelaro, Ronald; Mahajan, Rahul

    2014-01-01

    The Global Modeling and Assimilation Office (GMAO) is close to having a working 4DVAR system and has developed a linearized version of GEOS-5. This talk outlines a series of improvements made to the linearized dynamics, physics and trajectory. Of particular interest is the development of linearized cloud microphysics, which provides the framework for 'all-sky' data assimilation.

  14. Non-linear calibration models for near infrared spectroscopy

    DEFF Research Database (Denmark)

    Ni, Wangdong; Nørgaard, Lars; Mørup, Morten

    2014-01-01

    ......-SVM), relevance vector machines (RVM), Gaussian process regression (GPR), artificial neural network (ANN), and Bayesian ANN (BANN). In this comparison, partial least squares (PLS) regression is used as a linear benchmark, while the relationship of the methods is considered in terms of traditional calibration...... by ridge regression (RR). The performance of the different methods is demonstrated by their practical applications using three real-life near infrared (NIR) data sets. Different aspects of the various approaches including computational time, model interpretability, potential over-fitting using the non-linear models on linear problems, robustness to small or medium sample sets, and robustness to pre-processing, are discussed. The results suggest that GPR and BANN are powerful and promising methods for handling linear as well as nonlinear systems, even when the data sets are moderately small. The LS......

  15. [From clinical judgment to linear regression model].

    Science.gov (United States)

    Palacios-Cruz, Lino; Pérez, Marcela; Rivas-Ruiz, Rodolfo; Talavera, Juan O

    2013-01-01

    When we think about mathematical models, such as the linear regression model, we think that these terms are only used by those engaged in research, a notion that is far from the truth. Legendre described the first mathematical model in 1805, and Galton introduced the formal term in 1886. Linear regression is one of the most commonly used regression models in clinical practice. It is useful to predict or show the relationship between two or more variables as long as the dependent variable is quantitative and has normal distribution. Stated in another way, the regression is used to predict a measure based on the knowledge of at least one other variable. Linear regression has as its first objective to determine the slope or inclination of the regression line: Y = a + bx, where "a" is the intercept or regression constant and is equivalent to the "Y" value when "X" equals 0, and "b" (also called slope) indicates the increase or decrease that occurs when the variable "x" increases or decreases by one unit. In the regression line, "b" is called the regression coefficient. The coefficient of determination (R²) indicates the importance of the independent variables in the outcome.
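
    The regression line Y = a + bx and the coefficient of determination R² described above can be computed directly; a minimal sketch with made-up clinical-style data:

```python
import numpy as np

# Illustrative (invented) data: an outcome measure y against age x.
x = np.array([30, 40, 50, 60, 70], dtype=float)
y = np.array([120, 126, 135, 141, 150], dtype=float)

# Least-squares fit of Y = a + b*x (np.polyfit returns slope first).
b, a = np.polyfit(x, y, 1)

# Coefficient of determination R^2 = 1 - SS_res / SS_tot.
y_hat = a + b * x
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
```

    Here "b" plays the role of the regression coefficient in the abstract: each one-unit increase in x changes the predicted y by b.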

  16. Two-Point Incremental Forming with Partial Die: Theory and Experimentation

    Science.gov (United States)

    Silva, M. B.; Martins, P. A. F.

    2013-04-01

    This paper proposes a new level of understanding of two-point incremental forming (TPIF) with partial die by means of a combined theoretical and experimental investigation. The theoretical developments include an innovative extension of the analytical model for rotational symmetric single point incremental forming (SPIF), originally developed by the authors, to address the influence of the major operating parameters of TPIF and to successfully explain the differences in formability between SPIF and TPIF. The experimental work comprised the mechanical characterization of the material and the determination of its formability limits at necking and fracture by means of circle grid analysis and benchmark incremental sheet forming tests. Results show the adequacy of the proposed analytical model to handle the deformation mechanics of SPIF and TPIF with partial die and demonstrate that neck formation is suppressed in TPIF, so that traditional forming limit curves are inapplicable to describe failure and must be replaced by fracture forming limits derived from ductile damage mechanics. The overall geometric accuracy of sheet metal parts produced by TPIF with partial die is found to be better than that of parts fabricated by SPIF due to smaller elastic recovery upon unloading.

  17. Defining a Family of Cognitive Diagnosis Models Using Log-Linear Models with Latent Variables

    Science.gov (United States)

    Henson, Robert A.; Templin, Jonathan L.; Willse, John T.

    2009-01-01

    This paper uses log-linear models with latent variables (Hagenaars, in "Loglinear Models with Latent Variables," 1993) to define a family of cognitive diagnosis models. In doing so, the relationship between many common models is explicitly defined and discussed. In addition, because the log-linear model with latent variables is a general model for…

  18. Robot-based additive manufacturing for flexible die-modelling in incremental sheet forming

    Science.gov (United States)

    Rieger, Michael; Störkle, Denis Daniel; Thyssen, Lars; Kuhlenkötter, Bernd

    2017-10-01

    The paper describes the application concept of additive manufactured dies to support the robot-based incremental sheet metal forming process ('Roboforming') for the production of sheet metal components in small batch sizes. Compared to the dieless kinematic-based generation of a shape by means of two cooperating industrial robots, the supporting robot models a die on the back of the metal sheet by using the robot-based fused layer manufacturing process (FLM). This tool chain is software-defined and preserves the high geometrical form flexibility of Roboforming while flexibly generating support structures adapted to the final part's geometry. Test series serve to confirm the feasibility of the concept by investigating the process challenges of the adhesion to the sheet surface and the general stability as well as the influence on the geometric accuracy compared to the well-known forming strategies.

  19. Phylogenetic mixtures and linear invariants for equal input models.

    Science.gov (United States)

    Casanellas, Marta; Steel, Mike

    2017-04-01

    The reconstruction of phylogenetic trees from molecular sequence data relies on modelling site substitutions by a Markov process, or a mixture of such processes. In general, allowing mixed processes can result in different tree topologies becoming indistinguishable from the data, even for infinitely long sequences. However, when the underlying Markov process supports linear phylogenetic invariants, then provided these are sufficiently informative, the identifiability of the tree topology can be restored. In this paper, we investigate a class of processes that support linear invariants once the stationary distribution is fixed, the 'equal input model'. This model generalizes the 'Felsenstein 1981' model (and thereby the Jukes-Cantor model) from four states to an arbitrary number of states (finite or infinite), and it can also be described by a 'random cluster' process. We describe the structure and dimension of the vector spaces of phylogenetic mixtures and of linear invariants for any fixed phylogenetic tree (and for all trees-the so called 'model invariants'), on any number n of leaves. We also provide a precise description of the space of mixtures and linear invariants for the special case of [Formula: see text] leaves. By combining techniques from discrete random processes and (multi-) linear algebra, our results build on a classic result that was first established by James Lake (Mol Biol Evol 4:167-191, 1987).

  20. Forecasting the EMU inflation rate: Linear econometric vs. non-linear computational models using genetic neural fuzzy systems

    DEFF Research Database (Denmark)

    Kooths, Stefan; Mitze, Timo Friedel; Ringhut, Eric

    2004-01-01

    This paper compares the predictive power of linear econometric and non-linear computational models for forecasting the inflation rate in the European Monetary Union (EMU). Various models of both types are developed using different monetary and real activity indicators. They are compared according...

  1. A BEHAVIORAL-APPROACH TO LINEAR EXACT MODELING

    NARCIS (Netherlands)

    ANTOULAS, AC; WILLEMS, JC

    1993-01-01

    The behavioral approach to system theory provides a parameter-free framework for the study of the general problem of linear exact modeling and recursive modeling. The main contribution of this paper is the solution of the (continuous-time) polynomial-exponential time series modeling problem. Both

  2. Non-linear scaling of a musculoskeletal model of the lower limb using statistical shape models.

    Science.gov (United States)

    Nolte, Daniel; Tsang, Chui Kit; Zhang, Kai Yu; Ding, Ziyun; Kedgley, Angela E; Bull, Anthony M J

    2016-10-03

    Accurate muscle geometry for musculoskeletal models is important to enable accurate subject-specific simulations. Commonly, linear scaling is used to obtain individualised muscle geometry. More advanced methods include non-linear scaling using segmented bone surfaces and manual or semi-automatic digitisation of muscle paths from medical images. In this study, a new scaling method combining non-linear scaling with reconstructions of bone surfaces using statistical shape modelling is presented. Statistical Shape Models (SSMs) of the femur and tibia/fibula were used to reconstruct bone surfaces of nine subjects. Reference models were created by morphing manually digitised muscle paths to mean shapes of the SSMs using non-linear transformations, and inter-subject variability was calculated. Subject-specific models of muscle attachment and via points were created from three reference models. The accuracy was evaluated by calculating the differences between the scaled and manually digitised models. The points defining the muscle paths showed large inter-subject variability at the thigh and shank - up to 26 mm; this was found to limit the accuracy of all studied scaling methods. Errors for the subject-specific muscle point reconstructions of the thigh could be decreased by 9% to 20% by using the non-linear scaling compared to a typical linear scaling method. We conclude that the proposed non-linear scaling method is more accurate than linear scaling methods. Thus, when combined with the ability to reconstruct bone surfaces from incomplete or scattered geometry data using statistical shape models, our proposed method is an alternative to linear scaling methods. Copyright © 2016 The Author. Published by Elsevier Ltd. All rights reserved.

  3. Numerical simulation of pseudoelastic shape memory alloys using the large time increment method

    Science.gov (United States)

    Gu, Xiaojun; Zhang, Weihong; Zaki, Wael; Moumni, Ziad

    2017-04-01

    The paper presents a numerical implementation of the large time increment (LATIN) method for the simulation of shape memory alloys (SMAs) in the pseudoelastic range. The method was initially proposed as an alternative to the conventional incremental approach for the integration of nonlinear constitutive models. It is adapted here for the simulation of pseudoelastic SMA behavior using the Zaki-Moumni model and is shown to be especially useful in situations where the phase transformation process presents little or no hardening. In these situations, a slight stress variation in a load increment can result in large variations of strain and local state variables, which may lead to difficulties in numerical convergence. In contrast to the conventional incremental method, the LATIN method solves the global equilibrium and local consistency conditions sequentially for the entire loading path. The achieved solution must satisfy the conditions of static and kinematic admissibility and consistency simultaneously after several iterations. 3D numerical implementation is accomplished using an implicit algorithm and is then used for finite element simulation using the software Abaqus. Computational tests demonstrate the ability of this approach to simulate SMAs presenting flat phase transformation plateaus and subjected to complex loading cases, such as the quasi-static behavior of a stent structure. Some numerical results are contrasted to those obtained using step-by-step incremental integration.

  4. Preisach hysteresis model for non-linear 2D heat diffusion

    International Nuclear Information System (INIS)

    Jancskar, Ildiko; Ivanyi, Amalia

    2006-01-01

    This paper analyzes a non-linear heat diffusion process in which the thermal diffusivity behaviour is a hysteretic function of the temperature. To model this temperature dependence, the discrete Preisach algorithm, as a general hysteresis model, has been integrated into a non-linear multigrid solver. The hysteretic diffusion shows a heating-cooling asymmetry in character. The presented type of hysteresis speeds up the thermal processes in the modelled systems in a very interesting non-linear way
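
    A minimal discrete Preisach operator of the kind referred to above (the uniform triangular grid of equally weighted relay hysterons is an illustrative assumption; a real model would identify the weights from measured hysteresis loops):

```python
import numpy as np

# Triangular grid of relay hysterons: switch-up threshold a >= switch-down
# threshold b, all with equal weight. The state vector carries the history.
levels = np.linspace(0.0, 1.0, 11)
relays = [(a, b) for a in levels for b in levels if a >= b]

def preisach(inputs, relays):
    state = np.zeros(len(relays))          # 0 = relay down, 1 = relay up
    outputs = []
    for u in inputs:                       # u plays the role of temperature
        for i, (a, b) in enumerate(relays):
            if u >= a:
                state[i] = 1.0
            elif u <= b:
                state[i] = 0.0
            # otherwise the relay keeps its previous state (memory)
        outputs.append(state.sum() / len(relays))
    return outputs

# Heating-cooling asymmetry: the output at u = 0.5 depends on the branch.
heating = preisach([0.0, 0.5, 1.0], relays)
cooling = preisach([0.0, 1.0, 0.5], relays)
```

    In the paper's setting the operator output would feed the thermal diffusivity at each grid point of the multigrid solver, so the effective diffusivity at a given temperature differs between heating and cooling.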

  5. Tip-tilt disturbance model identification based on non-linear least squares fitting for Linear Quadratic Gaussian control

    Science.gov (United States)

    Yang, Kangjian; Yang, Ping; Wang, Shuai; Dong, Lizhi; Xu, Bing

    2018-05-01

    We propose a method to identify a tip-tilt disturbance model for Linear Quadratic Gaussian control. This identification method, based on the Levenberg-Marquardt algorithm, requires little prior information and no auxiliary system, and it is convenient for identifying the tip-tilt disturbance model on-line for real-time control. It thus allows Linear Quadratic Gaussian control to run efficiently in different adaptive optics systems for vibration mitigation. The validity of the Linear Quadratic Gaussian control associated with this tip-tilt disturbance model identification method is verified using experimental data, with the verification conducted in replay mode by simulation.
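
    A toy version of the idea: fit a parametric vibration model to an open-loop tip-tilt record by Levenberg-Marquardt non-linear least squares (the single-sinusoid-plus-offset model, the signal values, and the starting guess are illustrative assumptions, not the disturbance model of the paper):

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic tip-tilt record sampled at 1 kHz: one vibration peak plus offset.
t = np.arange(0.0, 0.1, 1e-3)
true = np.array([0.8, 25.0, 0.6, 0.1])      # amplitude, frequency (Hz), phase, offset

def tilt(p, t):
    amp, freq, phase, off = p
    return amp * np.sin(2 * np.pi * freq * t + phase) + off

y = tilt(true, t)

def residual(p):
    return tilt(p, t) - y

# Levenberg-Marquardt needs only a rough starting guess, i.e. little prior
# information, which is the point made in the abstract.
fit = least_squares(residual, x0=[1.0, 24.0, 0.0, 0.0], method='lm')
```

    The identified parameters would then be turned into a state-space disturbance model for the Kalman filter inside the Linear Quadratic Gaussian controller.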

  6. On-line validation of linear process models using generalized likelihood ratios

    International Nuclear Information System (INIS)

    Tylee, J.L.

    1981-12-01

    A real-time method for testing the validity of linear models of nonlinear processes is described and evaluated. Using generalized likelihood ratios, the model dynamics are continually monitored to see if the process has moved far enough away from the nominal linear model operating point to justify generation of a new linear model. The method is demonstrated using a seventh-order model of a natural circulation steam generator
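
    The flavour of the test can be sketched in its simplest form, a generalized likelihood ratio for a mean shift in the residuals of the nominal linear model (the windowed data and noise level are made up; the report describes a full dynamic-model formulation):

```python
import numpy as np

def glr_mean_shift(residuals, sigma):
    """Log-likelihood ratio for 'mean = mu_hat' vs 'mean = 0', Gaussian noise."""
    r = np.asarray(residuals, dtype=float)
    n = r.size
    mu_hat = r.mean()                     # maximum-likelihood estimate of the shift
    return n * mu_hat**2 / (2.0 * sigma**2)

rng = np.random.default_rng(1)
sigma = 0.1

# While the linear model is valid, the innovations are zero-mean noise;
# once the process drifts from the linearization point they acquire a bias.
ok_window    = rng.normal(0.0, sigma, size=200)
drift_window = rng.normal(0.5, sigma, size=200)

g_ok = glr_mean_shift(ok_window, sigma)
g_drift = glr_mean_shift(drift_window, sigma)
# In practice g would be compared against a threshold to trigger relinearization.
```

    A large ratio on the monitored window is the signal, in the spirit of the abstract, that the process has moved far enough from the nominal operating point to justify generating a new linear model.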

  7. 48 CFR 3432.771 - Provision for incremental funding.

    Science.gov (United States)

    2010-10-01

    Section 3432.771, Federal Acquisition Regulations System (48 CFR), Department of Education - Provision for incremental funding: ..., Incremental Funding, in a solicitation if a cost-reimbursement contract using incremental funding is...

  8. Second-order kinetic model for the sorption of cadmium onto tree fern: a comparison of linear and non-linear methods.

    Science.gov (United States)

    Ho, Yuh-Shan

    2006-01-01

    A comparison was made of the linear least-squares method and a trial-and-error non-linear method of the widely used pseudo-second-order kinetic model for the sorption of cadmium onto ground-up tree fern. Four pseudo-second-order kinetic linear equations are discussed. Kinetic parameters obtained from the four kinetic linear equations using the linear method differed but they were the same when using the non-linear method. A type 1 pseudo-second-order linear kinetic model has the highest coefficient of determination. Results show that the non-linear method may be a better way to obtain the desired parameters.
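
    The type 1 linearization discussed above transforms q_t = qe²·k·t / (1 + qe·k·t) into the straight line t/q_t = 1/(k·qe²) + t/qe; a small sketch with noise-free synthetic data (the parameter values are illustrative), for which the linear method recovers the parameters exactly; the differences reported in the paper arise once measurement noise is transformed along with the data:

```python
import numpy as np

# Pseudo-second-order sorption model: q_t = (qe**2 * k * t) / (1 + qe * k * t).
qe_true, k_true = 40.0, 0.01          # illustrative units: mg/g and g/(mg min)
t = np.array([5.0, 10, 20, 40, 60, 90, 120])
q = (qe_true**2 * k_true * t) / (1 + qe_true * k_true * t)

# Type 1 linearization: regress t/q_t on t.
#   slope = 1/qe,  intercept = 1/(k * qe**2)
slope, intercept = np.polyfit(t, t / q, 1)
qe_lin = 1.0 / slope
k_lin = 1.0 / (intercept * qe_lin**2)
```

    With real, noisy data the four linearizations weight the errors differently and so give different parameters, which is why the trial-and-error non-linear fit on the untransformed model is preferred in the abstract.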

  9. Identification of an Equivalent Linear Model for a Non-Linear Time-Variant RC-Structure

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Andersen, P.; Brincker, Rune

    are investigated and compared with ARMAX models used on a running window. The techniques are evaluated using simulated data generated by the non-linear finite element program SARCOF modeling a 10-storey 3-bay concrete structure subjected to amplitude modulated Gaussian white noise filtered through a Kanai......This paper considers estimation of the maximum softening for a RC-structure subjected to earthquake excitation. The so-called Maximum Softening damage indicator relates the global damage state of the RC-structure to the relative decrease of the fundamental eigenfrequency in an equivalent linear...

  10. Incremental Support Vector Machine Framework for Visual Sensor Networks

    Directory of Open Access Journals (Sweden)

    Yuichi Motai

    2007-01-01

    Full Text Available Motivated by the emerging requirements of surveillance networks, we present in this paper an incremental multiclassification support vector machine (SVM technique as a new framework for action classification based on real-time multivideo collected by homogeneous sites. The technique is based on an adaptation of least square SVM (LS-SVM formulation but extends beyond the static image-based learning of current SVM methodologies. In applying the technique, an initial supervised offline learning phase is followed by a visual behavior data acquisition and an online learning phase during which the cluster head performs an ensemble of model aggregations based on the sensor nodes inputs. The cluster head then selectively switches on designated sensor nodes for future incremental learning. Combining sensor data offers an improvement over single camera sensing especially when the latter has an occluded view of the target object. The optimization involved alleviates the burdens of power consumption and communication bandwidth requirements. The resulting misclassification error rate, the iterative error reduction rate of the proposed incremental learning, and the decision fusion technique prove its validity when applied to visual sensor networks. Furthermore, the enabled online learning allows an adaptive domain knowledge insertion and offers the advantage of reducing both the model training time and the information storage requirements of the overall system which makes it even more attractive for distributed sensor networks communication.

  11. On-line control models for the Stanford Linear Collider

    International Nuclear Information System (INIS)

    Sheppard, J.C.; Helm, R.H.; Lee, M.J.; Woodley, M.D.

    1983-03-01

    Models for computer control of the SLAC three-kilometer linear accelerator and damping rings have been developed as part of the control system for the Stanford Linear Collider. Some of these models have been tested experimentally and implemented in the control program for routine linac operations. This paper will describe the development and implementation of these models, as well as some of the operational results

  12. Vortices, semi-local vortices in gauged linear sigma model

    International Nuclear Information System (INIS)

    Kim, Namkwon

    1998-11-01

    We consider the static (2+1)D gauged linear sigma model. By analyzing the governing system of partial differential equations, we investigate various aspects of the model. We show the existence of finite-energy vortices under a partially broken symmetry on R² with the necessary condition suggested by Y. Yang. We also introduce generalized semi-local vortices and show the existence of finite-energy semi-local vortices under a certain condition. The vacuum manifold for the semi-local vortices turns out to be graded. Besides, with a special choice of representation, we show that the O(3) sigma model, whose target space is nonlinear, is a singular limit of the gauged linear sigma model, whose target space is linear. (author)

  13. An Integrated Modelling and Toolpathing Approach for a Frameless Stressed Skin Structure, Fabricated Using Robotic Incremental Sheet Forming

    DEFF Research Database (Denmark)

    Nicholas, Paul; Stasiuk, David; Nørgaard, Esben Clausen

    2016-01-01

    with performance implications at material, element and structural scales. This paper briefly presents ISF as a method of fabrication, and introduces the context of structures where the skin plays an integral role. It describes the development of an integrated approach for the modelling and fabrication of Stressed...... Skins, an incrementally formed sheet metal structure. The paper then focus upon the use of prototypes and empirical testing as means to inform digital models about fabrication and material parameters including: material forming limits and thinning; the parameterisation of macro and meso simulations...

  14. The scope of application of incremental rapid prototyping methods in foundry engineering

    Directory of Open Access Journals (Sweden)

    M. Stankiewicz

    2010-01-01

    Full Text Available The article presents the scope of application of selected incremental Rapid Prototyping methods in the process of manufacturing casting models, casting moulds and casts. The Rapid Prototyping methods (SL, SLA, FDM, 3DP, JS are predominantly used for the production of models and model sets for casting moulds. The Rapid Tooling methods, such as: ZCast-3DP, ProMetalRCT and VoxelJet, enable the fabrication of casting moulds in the incremental process. The application of the RP methods in cast production makes it possible to speed up the prototype preparation process. This is particularly vital to elements of complex shapes. The time required for the manufacture of the model, the mould and the cast proper may vary from a few to several dozen hours.

  15. Increment memory module for spectrometric data recording

    International Nuclear Information System (INIS)

    Zhuchkov, A.A.; Myagkikh, A.I.

    1988-01-01

    An incremental memory unit designed to record differential energy spectra of nuclear radiation is described. Using a ROM as the incrementing device has made it possible to reduce the component count and to simplify information readout from the unit. The memory is organized as 2048 channels of 12 bits each. The device connects directly to the bus of microprocessor systems similar to the KR580. The maximum increment time is 3 μs. The unit can also be used in multichannel counting mode
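
    Functionally, such an increment memory behaves like a multichannel histogram with a fixed word length. A hypothetical sketch of that behavior (whether the real unit saturates or wraps at the 12-bit limit is not stated in the record; saturation is assumed here):

    ```python
    CHANNELS = 2048              # memory organization: 2048 channels
    MAX_COUNT = (1 << 12) - 1    # 12-bit word per channel -> counts cap at 4095

    counts = [0] * CHANNELS

    def increment(channel):
        """Increment one spectrum channel, saturating at the 12-bit limit."""
        if counts[channel] < MAX_COUNT:
            counts[channel] += 1

    for _ in range(5000):        # repeated events in one channel saturate at 4095
        increment(7)
    increment(3)                 # a single event elsewhere
    ```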

  16. Small Diameter Bomb Increment II (SDB II)

    Science.gov (United States)

    2015-12-01

    Selected Acquisition Report (SAR) RCS: DD-A&T(Q&A)823-439 Small Diameter Bomb Increment II (SDB II) As of FY 2017 President’s Budget Defense... Bomb Increment II (SDB II) DoD Component Air Force Joint Participants Department of the Navy Responsible Office References SAR Baseline (Production...Mission and Description Small Diameter Bomb Increment II (SDB II) is a joint interest United States Air Force (USAF) and Department of the Navy

  17. Utilizing encoding in scalable linear optics quantum computing

    International Nuclear Information System (INIS)

    Hayes, A J F; Gilchrist, A; Myers, C R; Ralph, T C

    2004-01-01

    We present a scheme which offers a significant reduction in the resources required to implement linear optics quantum computing. The scheme is a variation of the proposal of Knill, Laflamme and Milburn, and makes use of an incremental approach to the error encoding to boost probability of success

  18. Model Selection with the Linear Mixed Model for Longitudinal Data

    Science.gov (United States)

    Ryoo, Ji Hoon

    2011-01-01

    Model building or model selection with linear mixed models (LMMs) is complicated by the presence of both fixed effects and random effects. The fixed effects structure and random effects structure are codependent, so selection of one influences the other. Most presentations of LMM in psychology and education are based on a multilevel or…

  19. Application of linearized model to the stability analysis of the pressurized water reactor

    International Nuclear Information System (INIS)

    Li Haipeng; Huang Xiaojin; Zhang Liangju

    2008-01-01

    A Linear Time-Invariant model of the Pressurized Water Reactor is formulated by linearizing the nonlinear model. Simulation results show that the linearized model agrees well with the nonlinear model under small perturbations. Based on Lyapunov's first method, the linearized model is applied to the stability analysis of the Pressurized Water Reactor. The calculation results show that the linearization approach to stability analysis is both convenient and feasible. (authors)
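
    Lyapunov's first method as applied here can be sketched in a few lines: linearize the nonlinear dynamics numerically at the operating point and inspect the eigenvalues of the resulting system matrix. A damped pendulum stands in for the reactor model in this illustration; the numerical Jacobian and its step size are assumptions of the sketch.

    ```python
    import numpy as np

    def f(x):
        """Damped pendulum: theta' = omega, omega' = -sin(theta) - 0.5*omega."""
        theta, omega = x
        return np.array([omega, -np.sin(theta) - 0.5 * omega])

    def jacobian(f, x0, eps=1e-6):
        """Numerical Jacobian of f at x0 (forward differences)."""
        J = np.zeros((x0.size, x0.size))
        fx = f(x0)
        for j in range(x0.size):
            xp = x0.copy()
            xp[j] += eps
            J[:, j] = (f(xp) - fx) / eps
        return J

    x_eq = np.array([0.0, 0.0])            # operating point (equilibrium)
    A = jacobian(f, x_eq)                  # linearized model: x' = A x
    eigs = np.linalg.eigvals(A)
    stable = bool(np.all(eigs.real < 0))   # Lyapunov's first method
    ```

    All eigenvalues having negative real parts implies local asymptotic stability of the nonlinear system at the operating point.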

  20. Incremental Costs and Cost Effectiveness of Intensive Treatment in Individuals with Type 2 Diabetes Detected by Screening in the ADDITION-UK Trial: An Update with Empirical Trial-Based Cost Data.

    Science.gov (United States)

    Laxy, Michael; Wilson, Edward C F; Boothby, Clare E; Griffin, Simon J

    2017-12-01

    There is uncertainty about the cost effectiveness of early intensive treatment versus routine care in individuals with type 2 diabetes detected by screening. To derive a trial-informed estimate of the incremental costs of intensive treatment as delivered in the Anglo-Danish-Dutch Study of Intensive Treatment in People with Screen-Detected Diabetes in Primary Care-Europe (ADDITION) trial and to revisit the long-term cost-effectiveness analysis from the perspective of the UK National Health Service. We analyzed the electronic primary care records of a subsample of the ADDITION-Cambridge trial cohort (n = 173). Unit costs of used primary care services were taken from the published literature. Incremental annual costs of intensive treatment versus routine care in years 1 to 5 after diagnosis were calculated using multilevel generalized linear models. We revisited the long-term cost-utility analyses for the ADDITION-UK trial cohort and reported results for ADDITION-Cambridge using the UK Prospective Diabetes Study Outcomes Model and the trial-informed cost estimates according to a previously developed evaluation framework. Incremental annual costs of intensive treatment over years 1 to 5 averaged £29.10 (standard error = £33.00) for consultations with general practitioners and nurses and £54.60 (standard error = £28.50) for metabolic and cardioprotective medication. For ADDITION-UK, over the 10-, 20-, and 30-year time horizon, adjusted incremental quality-adjusted life-years (QALYs) were 0.014, 0.043, and 0.048, and adjusted incremental costs were £1,021, £1,217, and £1,311, resulting in incremental cost-effectiveness ratios of £71,232/QALY, £28,444/QALY, and £27,549/QALY, respectively. Respective incremental cost-effectiveness ratios for ADDITION-Cambridge were slightly higher. The incremental costs of intensive treatment as delivered in the ADDITION-Cambridge trial were lower than expected. Given UK willingness-to-pay thresholds in patients with screen
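
    The cost-effectiveness arithmetic in the abstract reduces to a single ratio. A minimal sketch using the reported 20-year increments; note the published £28,444/QALY figure was presumably computed from unrounded inputs, so the rounded inputs here give a slightly different value.

    ```python
    def icer(delta_cost, delta_qaly):
        """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
        return delta_cost / delta_qaly

    # Reported 20-year increments for ADDITION-UK: +£1,217 and +0.043 QALYs
    ratio_20y = icer(1217.0, 0.043)   # ~£28,302/QALY with these rounded inputs
    ```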

  1. A Non-linear Stochastic Model for an Office Building with Air Infiltration

    DEFF Research Database (Denmark)

    Thavlov, Anders; Madsen, Henrik

    2015-01-01

    This paper presents a non-linear heat dynamic model for a multi-room office building with air infiltration. Several linear and non-linear models, with and without air infiltration, are investigated and compared. The models are formulated using stochastic differential equations and the model...

  2. Stochastic linear programming models, theory, and computation

    CERN Document Server

    Kall, Peter

    2011-01-01

    This new edition of Stochastic Linear Programming: Models, Theory and Computation has been brought completely up to date, either dealing with or at least referring to new material on models and methods, including DEA with stochastic outputs modeled via constraints on special risk functions (generalizing chance constraints, ICC’s and CVaR constraints), material on Sharpe-ratio, and Asset Liability Management models involving CVaR in a multi-stage setup. To facilitate use as a text, exercises are included throughout the book, and web access is provided to a student version of the authors’ SLP-IOR software. Additionally, the authors have updated the Guide to Available Software, and they have included newer algorithms and modeling systems for SLP. The book is thus suitable as a text for advanced courses in stochastic optimization, and as a reference to the field. From Reviews of the First Edition: "The book presents a comprehensive study of stochastic linear optimization problems and their applications. … T...

  3. Multivariate covariance generalized linear models

    DEFF Research Database (Denmark)

    Bonat, W. H.; Jørgensen, Bent

    2016-01-01

    are fitted by using an efficient Newton scoring algorithm based on quasi-likelihood and Pearson estimating functions, using only second-moment assumptions. This provides a unified approach to a wide variety of types of response variables and covariance structures, including multivariate extensions......We propose a general framework for non-normal multivariate data analysis called multivariate covariance generalized linear models, designed to handle multivariate response variables, along with a wide range of temporal and spatial correlation structures defined in terms of a covariance link...... function combined with a matrix linear predictor involving known matrices. The method is motivated by three data examples that are not easily handled by existing methods. The first example concerns multivariate count data, the second involves response variables of mixed types, combined with repeated...

  4. An Incremental Physically-Based Model of P91 Steel Flow Behaviour for the Numerical Analysis of Hot-Working Processes

    Directory of Open Access Journals (Sweden)

    Alberto Murillo-Marrodán

    2018-04-01

    Full Text Available This paper is aimed at modelling the flow behaviour of P91 steel at high temperature and a wide range of strain rates for constant and also variable strain-rate deformation conditions, such as those in real hot-working processes. For this purpose, an incremental physically-based model is proposed for the P91 steel flow behavior. This formulation considers the effects of dynamic recovery (DRV and dynamic recrystallization (DRX on the mechanical properties of the material, using only the flow stress, strain rate and temperature as state variables and not the accumulated strain. Therefore, it reproduces accurately the flow stress, work hardening and work softening not only under constant, but also under transient deformation conditions. To accomplish this study, the material is characterised experimentally by means of uniaxial compression tests, conducted at a temperature range of 900–1270 °C and at strain rates in the range of 0.005–10 s−1. Finally, the proposed model is implemented in commercial finite element (FE software to provide evidence of the performance of the proposed formulation. The experimental compression tests are simulated using the novel model and the well-known Hansel–Spittel formulation. In conclusion, the incremental physically-based model shows accurate results when work softening is present, especially under variable strain-rate deformation conditions. Hence, the present formulation is appropriate for the simulation of the hot-working processes typically conducted at industrial scale.

  5. Modelling a linear PM motor including magnetic saturation

    NARCIS (Netherlands)

    Polinder, H.; Slootweg, J.G.; Compter, J.C.; Hoeijmakers, M.J.

    2002-01-01

    The use of linear permanent-magnet (PM) actuators increases in a wide variety of applications because of the high force density, robustness and accuracy. The paper describes the modelling of a linear PM motor applied in, for example, wafer steppers, including magnetic saturation. This is important

  6. Genomic prediction based on data from three layer lines using non-linear regression models.

    Science.gov (United States)

    Huang, Heyun; Windig, Jack J; Vereijken, Addie; Calus, Mario P L

    2014-11-06

    Most studies on genomic prediction with reference populations that include multiple lines or breeds have used linear models. Data heterogeneity due to using multiple populations may conflict with model assumptions used in linear regression methods. In an attempt to alleviate potential discrepancies between assumptions of linear models and multi-population data, two types of alternative models were used: (1) a multi-trait genomic best linear unbiased prediction (GBLUP) model that modelled trait by line combinations as separate but correlated traits and (2) non-linear models based on kernel learning. These models were compared to conventional linear models for genomic prediction for two lines of brown layer hens (B1 and B2) and one line of white hens (W1). The three lines each had 1004 to 1023 training and 238 to 240 validation animals. Prediction accuracy was evaluated by estimating the correlation between observed phenotypes and predicted breeding values. When the training dataset included only data from the evaluated line, non-linear models yielded at best a similar accuracy as linear models. In some cases, when adding a distantly related line, the linear models showed a slight decrease in performance, while non-linear models generally showed no change in accuracy. When only information from a closely related line was used for training, linear models and non-linear radial basis function (RBF) kernel models performed similarly. The multi-trait GBLUP model took advantage of the estimated genetic correlations between the lines. Combining linear and non-linear models improved the accuracy of multi-line genomic prediction. Linear models and non-linear RBF models performed very similarly for genomic prediction, despite the expectation that non-linear models could deal better with the heterogeneous multi-population data. This heterogeneity of the data can be overcome by modelling trait by line combinations as separate but correlated traits, which avoids the occasional
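
    The contrast the study draws between linear models and RBF-kernel methods can be sketched on synthetic data. This is an illustrative kernel ridge regression, not the GBLUP model or the layer-line data of the paper; all parameters are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.uniform(-3, 3, size=(120, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=120)   # non-linear "phenotype"
    Xtr, ytr, Xte, yte = X[:80], y[:80], X[80:], y[80:]

    def rbf_kernel(A, B, length=1.0):
        """Gaussian (RBF) kernel matrix between row sets A and B."""
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * length**2))

    lam = 1e-2  # ridge penalty

    # Linear ridge regression (intercept via an augmented ones column)
    Atr = np.hstack([Xtr, np.ones((len(Xtr), 1))])
    Ate = np.hstack([Xte, np.ones((len(Xte), 1))])
    w = np.linalg.solve(Atr.T @ Atr + lam * np.eye(2), Atr.T @ ytr)
    mse_linear = np.mean((Ate @ w - yte) ** 2)

    # RBF kernel ridge regression
    K = rbf_kernel(Xtr, Xtr)
    alpha = np.linalg.solve(K + lam * np.eye(len(Xtr)), ytr)
    mse_rbf = np.mean((rbf_kernel(Xte, Xtr) @ alpha - yte) ** 2)
    ```

    On this deliberately non-linear target the kernel model fits far better; the paper's finding is that on real multi-line genomic data the two performed very similarly.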

  7. Determining Predictor Importance in Hierarchical Linear Models Using Dominance Analysis

    Science.gov (United States)

    Luo, Wen; Azen, Razia

    2013-01-01

    Dominance analysis (DA) is a method used to evaluate the relative importance of predictors that was originally proposed for linear regression models. This article proposes an extension of DA that allows researchers to determine the relative importance of predictors in hierarchical linear models (HLM). Commonly used measures of model adequacy in…

  8. FDTD Stability: Critical Time Increment

    OpenAIRE

    Z. Skvor; L. Pauk

    2003-01-01

    A new approach for determining the maximal stable time increment for the Finite-Difference Time-Domain (FDTD) algorithm in common curvilinear coordinates, for general mesh shapes and certain types of boundaries, is presented. The maximal time increment corresponds to a characteristic value of a Helmholtz equation that is solved by a finite-difference (FD) method. If this method uses exactly the same discretization as the given FDTD method (same mesh, boundary conditions, order of ...
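
    For orientation, the maximal stable time increment has a simple closed form on the standard Cartesian Yee grid; the abstract's contribution is generalizing this limit to curvilinear meshes and general boundaries via a Helmholtz characteristic-value problem. A sketch of the familiar Cartesian special case:

    ```python
    import math

    def fdtd_max_dt(dx, dy, dz, c=299792458.0):
        """Courant stability limit for the standard 3-D Cartesian Yee FDTD grid:
        dt_max = 1 / (c * sqrt(1/dx^2 + 1/dy^2 + 1/dz^2))."""
        return 1.0 / (c * math.sqrt(dx**-2 + dy**-2 + dz**-2))

    dt = fdtd_max_dt(1e-3, 1e-3, 1e-3)   # 1 mm cubic cells -> dt ~ 1.93 ps
    ```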

  9. General mirror pairs for gauged linear sigma models

    Energy Technology Data Exchange (ETDEWEB)

    Aspinwall, Paul S.; Plesser, M. Ronen [Departments of Mathematics and Physics, Duke University,Box 90320, Durham, NC 27708-0320 (United States)

    2015-11-05

    We carefully analyze the conditions for an abelian gauged linear σ-model to exhibit nontrivial IR behavior described by a nonsingular superconformal field theory determining a superstring vacuum. This is done without reference to a geometric phase, by associating singular behavior to a noncompact space of (semi-)classical vacua. We find that models determined by reflexive combinatorial data are nonsingular for generic values of their parameters. This condition has the pleasant feature that the mirror of a nonsingular gauged linear σ-model is another such model, but it is clearly too strong and we provide an example of a non-reflexive mirror pair. We discuss a weaker condition inspired by considering extremal transitions, which is also mirror symmetric and which we conjecture to be sufficient. We apply these ideas to extremal transitions and to understanding the way in which both Berglund-Hübsch mirror symmetry and the Vafa-Witten mirror orbifold with discrete torsion can be seen as special cases of the general combinatorial duality of gauged linear σ-models. In the former case we encounter an example showing that our weaker condition is still not necessary.

  10. General mirror pairs for gauged linear sigma models

    International Nuclear Information System (INIS)

    Aspinwall, Paul S.; Plesser, M. Ronen

    2015-01-01

    We carefully analyze the conditions for an abelian gauged linear σ-model to exhibit nontrivial IR behavior described by a nonsingular superconformal field theory determining a superstring vacuum. This is done without reference to a geometric phase, by associating singular behavior to a noncompact space of (semi-)classical vacua. We find that models determined by reflexive combinatorial data are nonsingular for generic values of their parameters. This condition has the pleasant feature that the mirror of a nonsingular gauged linear σ-model is another such model, but it is clearly too strong and we provide an example of a non-reflexive mirror pair. We discuss a weaker condition inspired by considering extremal transitions, which is also mirror symmetric and which we conjecture to be sufficient. We apply these ideas to extremal transitions and to understanding the way in which both Berglund-Hübsch mirror symmetry and the Vafa-Witten mirror orbifold with discrete torsion can be seen as special cases of the general combinatorial duality of gauged linear σ-models. In the former case we encounter an example showing that our weaker condition is still not necessary.

  11. Incremental Visualizer for Visible Objects

    DEFF Research Database (Denmark)

    Bukauskas, Linas; Bøhlen, Michael Hanspeter

    This paper discusses the integration of a database back-end and a visualizer front-end into one tightly coupled system. The main aim, which we achieve, is to shorten the data pipeline from database to visualization by using incremental extraction of the visible objects in fly-through scenarios. We...... also argue that passing only relevant data from the database substantially reduces the overall load of the visualization system. We propose the system Incremental Visualizer for Visible Objects (IVVO), which considers visible objects and enables incremental visualization along the observer's movement...... path. IVVO is a novel solution that allows data to be visualized and loaded on the fly from the database while taking object visibility into account. We run a set of experiments to show that IVVO is feasible in terms of I/O operations and CPU load. We consider the example of data which uses...

  12. Half-trek criterion for generic identifiability of linear structural equation models

    NARCIS (Netherlands)

    Foygel, R.; Draisma, J.; Drton, M.

    2012-01-01

    A linear structural equation model relates random variables of interest and corresponding Gaussian noise terms via a linear equation system. Each such model can be represented by a mixed graph in which directed edges encode the linear equations, and bidirected edges indicate possible correlations

  13. Half-trek criterion for generic identifiability of linear structural equation models

    NARCIS (Netherlands)

    Foygel, R.; Draisma, J.; Drton, M.

    2011-01-01

    A linear structural equation model relates random variables of interest and corresponding Gaussian noise terms via a linear equation system. Each such model can be represented by a mixed graph in which directed edges encode the linear equations, and bidirected edges indicate possible correlations

  14. Thermomechanical simulations and experimental validation for high speed incremental forming

    Science.gov (United States)

    Ambrogio, Giuseppina; Gagliardi, Francesco; Filice, Luigino; Romero, Natalia

    2016-10-01

    Incremental sheet forming (ISF) deforms only a small region of the workpiece at a time, through a punch driven by an NC machine. The drawback of this process is its slowness. In this study, a high-speed variant has been investigated from both numerical and experimental points of view. The aim has been the design of an FEM model able to reproduce the material behavior during the high-speed process by defining a thermomechanical model. An experimental campaign has been performed on a CNC lathe at high speed to test process feasibility. The first results show that the material performs the same as in conventional-speed ISF and, in some cases, exhibits better behavior due to the temperature increment. An accurate numerical simulation has been performed to investigate the material behavior during the high-speed process, substantially confirming the experimental evidence.

  15. Generalized Linear Models with Applications in Engineering and the Sciences

    CERN Document Server

    Myers, Raymond H; Vining, G Geoffrey; Robinson, Timothy J

    2012-01-01

    Praise for the First Edition "The obvious enthusiasm of Myers, Montgomery, and Vining and their reliance on their many examples as a major focus of their pedagogy make Generalized Linear Models a joy to read. Every statistician working in any area of applied science should buy it and experience the excitement of these new approaches to familiar activities."-Technometrics Generalized Linear Models: With Applications in Engineering and the Sciences, Second Edition continues to provide a clear introduction to the theoretical foundations and key applications of generalized linear models (GLMs). Ma

  16. Effect of high altitude exposure on the hemodynamics of the bidirectional Glenn physiology: modeling incremented pulmonary vascular resistance and heart rate.

    Science.gov (United States)

    Vallecilla, Carolina; Khiabani, Reza H; Sandoval, Néstor; Fogel, Mark; Briceño, Juan Carlos; Yoganathan, Ajit P

    2014-06-03

    The considerable blood mixing in the bidirectional Glenn (BDG) physiology further limits the capacity of the single working ventricle to pump enough oxygenated blood to the circulatory system. This condition is exacerbated under severe conditions such as physical activity or high altitude. In this study, the effect of high altitude exposure on hemodynamics and ventricular function of the BDG physiology is investigated. For this purpose, a mathematical approach based on a lumped parameter model was developed to model the BDG circulation. Catheterization data from 39 BDG patients at stabilized oxygen conditions was used to determine baseline flows and pressures for the model. The effect of high altitude exposure was modeled by increasing the pulmonary vascular resistance (PVR) and heart rate (HR) in increments up to 80% and 40%, respectively. The resulting differences in vascular flows, pressures and ventricular function parameters were analyzed. By simultaneously increasing PVR and HR, significant changes (p fails to overcome the increased preload and implied low oxygenation in BDG patients at higher altitudes, especially for those with high baseline PVRs. The presented mathematical model provides a framework to estimate the hemodynamic performance of BDG patients at different PVR increments. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Maximum photovoltaic power tracking for the PV array using the fractional-order incremental conductance method

    International Nuclear Information System (INIS)

    Lin, Chia-Hung; Huang, Cong-Hui; Du, Yi-Chun; Chen, Jian-Liung

    2011-01-01

    Highlights: → The FOICM can shorten the tracking time compared with traditional methods. → The proposed method works under low solar radiation, including thin and heavy clouds. → The FOICM algorithm achieves MPPT under radiation and temperature changes. → It is easy to implement in a single-chip microcontroller or embedded system. -- Abstract: This paper proposes maximum photovoltaic power tracking (MPPT) for the photovoltaic (PV) array using the fractional-order incremental conductance method (FOICM). The PV array has low conversion efficiency, and its output power depends on the operating environment, such as solar radiation, ambient temperature, and weather conditions. The charging power delivered to a battery can be increased using an MPPT algorithm. The energy of the absorbed solar light and the cell temperature is transferred directly to the semiconductor, but electrical conduction exhibits anomalous diffusion phenomena in inhomogeneous material. FOICM provides a dynamic mathematical model that describes these non-linear characteristics. The fractional-order incremental change is used as a dynamic variable to adjust the PV array voltage toward the maximum power point. For a small-scale PV conversion system, the proposed method is validated by simulation under different operating environments. Compared with traditional methods, experimental results demonstrate the short tracking time and the practicality of MPPT for a PV array.
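
    For orientation, the classic integer-order incremental conductance loop that FOICM generalizes can be sketched as follows. The single-diode PV model and all parameter values are hypothetical; the rule exploits the fact that dI/dV = -I/V exactly at the maximum power point.

    ```python
    import math

    def pv_current(v, isc=5.0, i0=1e-9, vt=1.0):
        """Hypothetical single-diode PV model: I = Isc - I0*(exp(V/Vt) - 1)."""
        return isc - i0 * (math.exp(v / vt) - 1.0)

    def track_mpp(v=10.0, step=0.02, iters=3000):
        """Classic (integer-order) incremental conductance tracking loop."""
        v_prev, i_prev = v, pv_current(v)
        v += step
        for _ in range(iters):
            i = pv_current(v)
            dv, di = v - v_prev, i - i_prev
            v_prev, i_prev = v, i
            if dv == 0.0:
                if di > 0.0: v += step
                elif di < 0.0: v -= step
            else:
                g, g_mpp = di / dv, -i / v   # at the MPP, dI/dV == -I/V
                if g > g_mpp: v += step      # left of the MPP -> raise voltage
                elif g < g_mpp: v -= step    # right of the MPP -> lower voltage
        return v

    v_mpp = track_mpp()   # settles near the maximum power point, dithering +/- step
    ```

    The fractional-order variant of the paper replaces the fixed integer-order increment with a fractional-order one to speed up tracking; that extension is not shown here.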

  18. Stem analysis program (GOAP for evaluating of increment and growth data at individual tree

    Directory of Open Access Journals (Sweden)

    Gafura Aylak Özdemir

    2016-07-01

    Full Text Available Stem analysis is a method for evaluating in detail the increment and growth data of an individual tree over past periods, and it is widely used in various forestry disciplines. The raw data of stem analysis consist of annual ring counts and measurements performed on cross-sections taken from an individual tree by the section method. Evaluating these raw data takes considerable time. Thus, in this study, computer software was developed to perform stem analysis quickly and efficiently. The software, which evaluates the raw stem analysis data both numerically and graphically, was programmed as a macro using the Visual Basic for Applications feature of MS Excel 2013, currently the most widely used spreadsheet program. In the software, the height growth model is formed from two different approaches, and individual tree volume based on the section method, cross-sectional area, increments of diameter, height and volume, volume increment percent, and stem form factor at breast height are calculated for the desired period lengths. These calculated values are given as tables. The development of diameter, height, volume, the increments of these variables, volume increment percent, and stem form factor at breast height according to periodic age are given as charts. A stem model showing the development of the diameter, height, and shape of the individual tree over past periods can also be exported from the software as a chart.
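
    The basic increment quantities such software tabulates can be illustrated with a minimal sketch: the periodic annual increment (PAI) between successive sections and the mean annual increment (MAI) at each age. The diameters below are illustrative numbers, not data from the paper.

    ```python
    # Diameter (cm) read from ring counts on sections at successive ages
    ages = [10, 20, 30, 40]
    diam = [6.0, 14.0, 21.0, 26.0]

    # Periodic annual increment between consecutive sections: (d2 - d1) / (t2 - t1)
    pai = [(d2 - d1) / (t2 - t1)
           for (t1, d1), (t2, d2) in zip(zip(ages, diam), zip(ages[1:], diam[1:]))]

    # Mean annual increment at each age: d / t
    mai = [d / t for t, d in zip(ages, diam)]
    ```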

  19. Health level seven interoperability strategy: big data, incrementally structured.

    Science.gov (United States)

    Dolin, R H; Rogers, B; Jaffe, C

    2015-01-01

    Describe how the HL7 Clinical Document Architecture (CDA), a foundational standard in US Meaningful Use, contributes to a "big data, incrementally structured" interoperability strategy, whereby data structured incrementally gets large amounts of data flowing faster. We present cases showing how this approach is leveraged for big data analysis. To support the assertion that semi-structured narrative in CDA format can be a useful adjunct in an overall big data analytic approach, we present two case studies. The first assesses an organization's ability to generate clinical quality reports using coded data alone vs. coded data supplemented by CDA narrative. The second leverages CDA to construct a network model for referral management, from which additional observations can be gleaned. The first case shows that coded data supplemented by CDA narrative resulted in significant variances in calculated performance scores. In the second case, we found that the constructed network model enables the identification of differences in patient characteristics among different referral work flows. The CDA approach goes after data indirectly, by focusing first on the flow of narrative, which is then incrementally structured. A quantitative assessment of whether this approach will lead to a greater flow of data and ultimately a greater flow of structured data vs. other approaches is planned as a future exercise. Along with growing adoption of CDA, we are now seeing the big data community explore the standard, particularly given its potential to supply analytic engines with volumes of data previously not possible.

  20. A penalized framework for distributed lag non-linear models.

    Science.gov (United States)

    Gasparrini, Antonio; Scheipl, Fabian; Armstrong, Ben; Kenward, Michael G

    2017-09-01

    Distributed lag non-linear models (DLNMs) are a modelling tool for describing potentially non-linear and delayed dependencies. Here, we illustrate an extension of the DLNM framework through the use of penalized splines within generalized additive models (GAM). This extension offers built-in model selection procedures and the possibility of accommodating assumptions on the shape of the lag structure through specific penalties. In addition, this framework includes, as special cases, simpler models previously proposed for linear relationships (DLMs). Alternative versions of penalized DLNMs are compared with each other and with the standard unpenalized version in a simulation study. Results show that this penalized extension to the DLNM class provides greater flexibility and improved inferential properties. The framework exploits recent theoretical developments of GAMs and is implemented using efficient routines within freely available software. Real-data applications are illustrated through two reproducible examples in time series and survival analysis. © 2017 The Authors Biometrics published by Wiley Periodicals, Inc. on behalf of International Biometric Society.
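
The core of a distributed lag model, with a P-spline-style smoothness penalty on the lag coefficients, can be sketched in a few lines. This is a simplified linear DLM with a fixed smoothing parameter, not the penalized GAM machinery the paper develops; all series and parameter values are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an exposure series and an outcome driven by a smooth lag curve.
n, L = 500, 15                             # observations, maximum lag
x = rng.normal(size=n + L)
true_lag = np.exp(-np.arange(L + 1) / 4.0)  # effect decays over the lags

# Lag (design) matrix: column j holds the exposure lagged by j steps.
X = np.column_stack([x[L - j : L - j + n] for j in range(L + 1)])
y = X @ true_lag + rng.normal(scale=0.5, size=n)

# Second-difference penalty matrix (P-spline-style smoothness penalty).
D = np.diff(np.eye(L + 1), n=2, axis=0)
lam = 10.0   # smoothing parameter, fixed here; GAM software selects it

# Penalized least squares: (X'X + lam * D'D)^{-1} X'y
beta = np.linalg.solve(X.T @ X + lam * D.T @ D, X.T @ y)
print(beta.shape)  # (16,): one coefficient per lag 0..15
```

The penalty shrinks the curvature of the estimated lag curve toward a smooth shape, which is the idea behind the penalized DLNM extension described above.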

  1. Prediction of Mind-Wandering with Electroencephalogram and Non-linear Regression Modeling.

    Science.gov (United States)

    Kawashima, Issaku; Kumano, Hiroaki

    2017-01-01

    Mind-wandering (MW), task-unrelated thought, has been examined by researchers in an increasing number of articles using models that predict whether subjects are in MW from numerous physiological variables. However, these models are not applicable in general situations. Moreover, they output only a binary classification. The current study suggests that the combination of electroencephalogram (EEG) variables and non-linear regression modeling can be a good indicator of MW intensity. We recorded EEGs of 50 subjects during the performance of a Sustained Attention to Response Task that included a thought-sampling probe inquiring about the focus of attention. We calculated power and coherence values, prepared 35 patterns of variable combinations, and applied Support Vector machine Regression (SVR) to them. Finally, we chose four SVR models: two non-linear models and two linear models; two of the four models are composed of a limited number of electrodes to satisfy model usefulness. Examination using the held-out data indicated that all models had robust predictive precision and provided significantly better estimations than a linear regression model using single-electrode EEG variables. Furthermore, in the limited-electrode condition, the non-linear SVR model showed significantly better precision than the linear SVR model. The method proposed in this study helps investigations into MW in various little-examined situations. Further, by measuring MW with high-temporal-resolution EEG, unclear aspects of MW, such as time-series variation, are expected to be revealed. Furthermore, our finding that a few electrodes can also predict MW contributes to the development of neuro-feedback studies.

  2. Prediction of Mind-Wandering with Electroencephalogram and Non-linear Regression Modeling

    Directory of Open Access Journals (Sweden)

    Issaku Kawashima

    2017-07-01

    Full Text Available Mind-wandering (MW), task-unrelated thought, has been examined by researchers in an increasing number of articles using models that predict whether subjects are in MW from numerous physiological variables. However, these models are not applicable in general situations. Moreover, they output only a binary classification. The current study suggests that the combination of electroencephalogram (EEG) variables and non-linear regression modeling can be a good indicator of MW intensity. We recorded EEGs of 50 subjects during the performance of a Sustained Attention to Response Task that included a thought-sampling probe inquiring about the focus of attention. We calculated power and coherence values, prepared 35 patterns of variable combinations, and applied Support Vector machine Regression (SVR) to them. Finally, we chose four SVR models: two non-linear models and two linear models; two of the four models are composed of a limited number of electrodes to satisfy model usefulness. Examination using the held-out data indicated that all models had robust predictive precision and provided significantly better estimations than a linear regression model using single-electrode EEG variables. Furthermore, in the limited-electrode condition, the non-linear SVR model showed significantly better precision than the linear SVR model. The method proposed in this study helps investigations into MW in various little-examined situations. Further, by measuring MW with high-temporal-resolution EEG, unclear aspects of MW, such as time-series variation, are expected to be revealed. Furthermore, our finding that a few electrodes can also predict MW contributes to the development of neuro-feedback studies.
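
As a dependency-free illustration of non-linear regression on multichannel features, here is kernel ridge regression with an RBF kernel, a squared-loss cousin of the SVR the authors used. The features and target below are synthetic, not EEG data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for per-channel features (the study used real EEG
# power and coherence values from 50 subjects).
n_train, n_test, n_feat = 200, 50, 3
X = rng.normal(size=(n_train, n_feat))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=n_train)

def rbf_kernel(A, B, gamma=0.2):
    # Gaussian (RBF) kernel matrix between the rows of A and the rows of B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Kernel ridge regression: alpha = (K + lam I)^{-1} y.  (The paper used
# Support Vector Regression; kernel ridge is its squared-loss analogue.)
lam = 1e-2
K = rbf_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(n_train), y)

X_test = rng.normal(size=(n_test, n_feat))
y_test = np.sin(X_test[:, 0]) + 0.5 * X_test[:, 1] ** 2
pred = rbf_kernel(X_test, X) @ alpha
print(np.corrcoef(pred, y_test)[0, 1])  # the non-linear fit tracks the target
```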

  3. Power variation for Gaussian processes with stationary increments

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Corcuera, J.M.; Podolskij, Mark

    2009-01-01

    We develop the asymptotic theory for the realised power variation of the processes X=•G, where G is a Gaussian process with stationary increments. More specifically, under some mild assumptions on the variance function of the increments of G and certain regularity conditions on the path of the process ... a chaos representation.
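
The realised power variation studied here is the sum of p-th absolute powers of increments, suitably normalised. A sketch for the simplest Gaussian process with stationary increments, Brownian motion, where the p = 2 case recovers quadratic variation:

```python
import numpy as np

rng = np.random.default_rng(1)

n = 200_000
dt = 1.0 / n
# Brownian motion: the simplest Gaussian process with stationary increments.
increments = rng.normal(scale=np.sqrt(dt), size=n)

p = 2.0
# Realised power variation: normalised sum of |increments|^p.  As n grows
# this converges to E|N(0,1)|^p * t on [0, t]; here t = 1.
rpv = dt ** (1 - p / 2) * np.sum(np.abs(increments) ** p)
print(rpv)  # close to 1 for p = 2 (quadratic variation of BM on [0, 1])
```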

  4. Game Theory and its Relationship with Linear Programming Models ...

    African Journals Online (AJOL)

    Game Theory and its Relationship with Linear Programming Models. ... This paper shows that game theory and linear programming problems are closely related subjects, since any computing method devised for ...

  5. Modelling subject-specific childhood growth using linear mixed-effect models with cubic regression splines.

    Science.gov (United States)

    Grajeda, Laura M; Ivanescu, Andrada; Saito, Mayuko; Crainiceanu, Ciprian; Jaganath, Devan; Gilman, Robert H; Crabtree, Jean E; Kelleher, Dermott; Cabrera, Lilia; Cama, Vitaliano; Checkley, William

    2016-01-01

    Childhood growth is a cornerstone of pediatric research. Statistical models need to consider individual trajectories to adequately describe growth outcomes. Specifically, well-defined longitudinal models are essential to characterize both population and subject-specific growth. Linear mixed-effect models with cubic regression splines can account for the nonlinearity of growth curves and provide reasonable estimators of population and subject-specific growth, velocity and acceleration. We provide a stepwise approach that builds from simple to complex models, and accounts for the intrinsic complexity of the data. We start with standard cubic splines regression models and build up to a model that includes subject-specific random intercepts and slopes and residual autocorrelation. We then compare cubic regression splines vis-à-vis linear piecewise splines, and with varying numbers and positions of knots. Statistical code is provided to ensure reproducibility and improve dissemination of methods. Models are applied to longitudinal height measurements in a cohort of 215 Peruvian children followed from birth until their fourth year of life. Unexplained variability, as measured by the variance of the regression model, was reduced from 7.34 when using ordinary least squares to 0.81 when using linear mixed-effect models with random slopes and a first order continuous autoregressive error term. There was substantial heterogeneity in both the intercepts and slopes, and residual autocorrelation was modeled with a first order continuous autoregressive error term, as evidenced by the variogram of the residuals and by a lack of association among residuals. The final model provides a parametric linear regression equation for both estimation and prediction of population- and individual-level growth in height. We show that cubic regression splines are superior to linear regression splines for the case of a small number of knots in both estimation and prediction with the full linear mixed effect model (AIC 19,352 vs. 19
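
A cubic regression spline of the kind used here can be built from a truncated-power basis. This sketch fits one by ordinary least squares to a synthetic growth curve; the paper's full model additionally includes subject-specific random effects and autocorrelated errors, which need mixed-model software:

```python
import numpy as np

rng = np.random.default_rng(3)

# Ages (years) and a growth-like height curve: fast early growth, then slowing.
age = np.sort(rng.uniform(0, 4, size=300))
height = 50 + 30 * np.log1p(age) + rng.normal(scale=1.0, size=300)

def cubic_spline_basis(t, knots):
    # Truncated-power basis for a cubic regression spline:
    # 1, t, t^2, t^3, and (t - k)_+^3 for each interior knot k.
    cols = [np.ones_like(t), t, t**2, t**3]
    cols += [np.clip(t - k, 0, None) ** 3 for k in knots]
    return np.column_stack(cols)

knots = [1.0, 2.0, 3.0]
B = cubic_spline_basis(age, knots)
coef, *_ = np.linalg.lstsq(B, height, rcond=None)
fitted = B @ coef

resid_var = np.var(height - fitted)
print(round(resid_var, 2))  # close to the simulated noise variance (1.0)
```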

  6. Double generalized linear compound poisson models to insurance claims data

    DEFF Research Database (Denmark)

    Andersen, Daniel Arnfeldt; Bonat, Wagner Hugo

    2017-01-01

    This paper describes the specification, estimation and comparison of double generalized linear compound Poisson models based on the likelihood paradigm. The models are motivated by insurance applications, where the distribution of the response variable is composed by a degenerate distribution...... implementation and illustrate the application of double generalized linear compound Poisson models using a data set about car insurances....
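
The "degenerate distribution at zero plus a continuous positive part" structure of compound Poisson responses can be simulated directly. A sketch with made-up Poisson rate and gamma severity parameters (not taken from the paper's insurance data):

```python
import numpy as np

rng = np.random.default_rng(4)

# Compound Poisson response, as in Tweedie-type insurance models:
# N_i ~ Poisson(0.3) claims, total Y_i = sum of N_i gamma-distributed severities.
n_policies = 10_000
n_claims = rng.poisson(lam=0.3, size=n_policies)
severity_shape, severity_scale = 2.0, 500.0

totals = np.array([
    rng.gamma(severity_shape, severity_scale, size=k).sum()
    for k in n_claims
])

# Degenerate mass at zero (no claims) plus a continuous positive part:
print(round(np.mean(totals == 0), 3))  # ≈ exp(-0.3) ≈ 0.741
print(round(totals.mean(), 1))         # ≈ 0.3 * 2 * 500 = 300
```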

  7. Alcohol consumption as an incremental factor in health care costs for traffic accident victims: evidence in a medium sized Colombian city.

    Science.gov (United States)

    Gómez-Restrepo, Carlos; Gómez-García, María Juliana; Naranjo, Salomé; Rondón, Martín Alonso; Acosta-Hernández, Andrés Leonardo

    2014-12-01

    Identify the possibility that alcohol consumption represents an incremental factor in the healthcare costs of patients involved in traffic accidents. Data on people admitted to three major health institutions in an intermediate-sized Colombian city were collected. Socio-demographic characteristics, healthcare costs, and alcohol consumption levels measured by the breath alcohol concentration (BrAC) methodology were identified. Generalized linear models were applied to investigate whether alcohol consumption acts as an incremental factor for healthcare costs. The average cost of healthcare was 878 USD. In general, there are differences between the healthcare costs of patients with a positive blood alcohol level and those with negative levels. Univariate analysis shows that the average cost of care can be 2.26 times higher (95% CI: 1.20-4.23), and after controlling for patient characteristics, alcohol consumption represents an incremental factor of almost 1.66 times (95% CI: 1.05-2.62). Alcohol is identified as a possible factor associated with the increased use of direct healthcare resources. The estimates show the need to implement and enhance prevention programs against alcohol consumption among citizens, in order to mitigate the impact that traffic accidents have on their health status. Enforcement of laws against driving under the influence of alcoholic beverages could help to diminish the economic and social impacts of this problem. Copyright © 2014 Elsevier Ltd. All rights reserved.
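
The incremental cost factor reported above is a multiplicative effect under a log link. A minimal sketch using synthetic data and log-OLS as a stand-in for the generalized linear models the authors fitted; the 1.66 ratio and 878 USD baseline are reused from the abstract purely to parameterize the simulation:

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated healthcare costs with a multiplicative alcohol effect of 1.66
# (the incremental factor reported above; the data here are synthetic).
n = 2000
alcohol = rng.integers(0, 2, size=n)
base_cost = 878.0
cost = base_cost * (1.66 ** alcohol) * rng.lognormal(mean=0.0, sigma=0.4, size=n)

# Log-link idea: regress log(cost) on the indicator; exp(slope) estimates the
# cost ratio.  (In practice a GLM with gamma family and log link is used.)
X = np.column_stack([np.ones(n), alcohol])
beta, *_ = np.linalg.lstsq(X, np.log(cost), rcond=None)
cost_ratio = np.exp(beta[1])
print(round(cost_ratio, 2))  # ≈ 1.66
```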

  8. Reliability modelling and simulation of switched linear system ...

    African Journals Online (AJOL)

    Reliability modelling and simulation of switched linear system control using temporal databases. ... design of fault-tolerant real-time switching systems control and modelling embedded micro-schedulers for complex systems maintenance.

  9. Mixed models, linear dependency, and identification in age-period-cohort models.

    Science.gov (United States)

    O'Brien, Robert M

    2017-07-20

    This paper examines the identification problem in age-period-cohort models that use either linear or categorically coded ages, periods, and cohorts, or combinations of these parameterizations. These models are not identified using the traditional fixed effect regression model approach because of a linear dependency between the ages, periods, and cohorts. However, these models can be identified if the researcher introduces a single just-identifying constraint on the model coefficients. The problem with such constraints is that the results can differ substantially depending on the constraint chosen. Somewhat surprisingly, age-period-cohort models that specify one or more of ages and/or periods and/or cohorts as random effects are identified. This is the case without introducing an additional constraint. I label this identification as statistical model identification and show how statistical model identification comes about in mixed models and why which effects are treated as fixed and which are treated as random can substantially change the estimates of the age, period, and cohort effects. Copyright © 2017 John Wiley & Sons, Ltd.
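
The linear dependency at the heart of this identification problem (cohort = period − age) can be verified numerically; a minimal sketch:

```python
import numpy as np

# Age-period-cohort linear dependency: cohort = period - age, so a fixed
# effects design containing all three linear terms is rank deficient.
age = np.repeat(np.arange(20, 60), 5)        # 40 ages x 5 periods
period = np.tile(np.arange(1980, 1985), 40)
cohort = period - age

X = np.column_stack([np.ones_like(age), age, period, cohort])
print(np.linalg.matrix_rank(X))  # 3, not 4: the model is not identified
```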

  10. A fuzzy Bi-linear management model in reverse logistic chains

    Directory of Open Access Journals (Sweden)

    Tadić Danijela

    2016-01-01

    Full Text Available The management of the electrical and electronic waste (WEEE) problem in an uncertain environment has a critical effect on the economy and environmental protection of each region. The considered problem can be stated as a fuzzy non-convex optimization problem with a linear objective function and a set of linear and non-linear constraints. The original problem is reformulated, by using linear relaxation, into a fuzzy linear programming problem. The fuzzy ratings of collecting-point capacities and the fixed costs of recycling centers are modeled by triangular fuzzy numbers. The optimal solution of the reformulated model is found by using an optimality concept. The proposed model is verified through an illustrative example with real-life data. The obtained results represent an input for future research, which should include a good benchmark base for the tested reverse logistic chains and their continuous improvement. [Projekat Ministarstva nauke Republike Srbije, br. 035033: Sustainable development technology and equipment for the recycling of motor vehicles]

  11. DIAMOND: A model of incremental decision making for resource acquisition by electric utilities

    Energy Technology Data Exchange (ETDEWEB)

    Gettings, M.; Hirst, E.; Yourstone, E.

    1991-02-01

    Uncertainty is a major issue facing electric utilities in planning and decision making. Substantial uncertainties exist concerning future load growth; the lifetimes and performances of existing power plants; the construction times, costs, and performances of new resources being brought online; and the regulatory and economic environment in which utilities operate. This report describes a utility planning model that focuses on frequent and incremental decisions. The key features of this model are its explicit treatment of uncertainty, frequent user interaction with the model, and the ability to change prior decisions. The primary strength of this model is its representation of the planning and decision-making environment that utility planners and executives face. Users interact with the model after every year or two of simulation, which provides an opportunity to modify past decisions as well as to make new decisions. For example, construction of a power plant can be started one year, and if circumstances change, the plant can be accelerated, mothballed, canceled, or continued as originally planned. Similarly, the marketing and financial incentives for demand-side management programs can be changed from year to year, reflecting the short lead time and small unit size of these resources. This frequent user interaction with the model, an operational game, should build greater understanding and insights among utility planners about the risks associated with different types of resources. The model is called DIAMOND, Decision Impact Assessment Model. It consists of four submodels: FUTURES, FORECAST, SIMULATION, and DECISION. It runs on any IBM-compatible PC and requires no special software or hardware. 19 refs., 13 figs., 15 tabs.

  12. Identification of Influential Points in a Linear Regression Model

    Directory of Open Access Journals (Sweden)

    Jan Grosz

    2011-03-01

    Full Text Available The article deals with the detection and identification of influential points in the linear regression model. Three methods for the detection of outliers and leverage points are described. These procedures can also be used for one-sample (independent) datasets. The paper also briefly describes theoretical aspects of several robust methods. Robust statistics is a powerful tool for increasing the reliability and accuracy of statistical modelling and data analysis. A simulation model of simple linear regression is presented.
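
One standard diagnostic for influential points of the kind surveyed here is Cook's distance, which combines the residual and the leverage of each observation. A minimal numpy sketch on simulated data (the article's specific three methods may differ):

```python
import numpy as np

rng = np.random.default_rng(6)

# Simple linear regression with one planted influential point.
n = 50
x = rng.normal(size=n)
y = 2.0 + 3.0 * x + rng.normal(scale=0.5, size=n)
x[0], y[0] = 6.0, -10.0          # high leverage AND large residual

X = np.column_stack([np.ones(n), x])
H = X @ np.linalg.inv(X.T @ X) @ X.T     # hat matrix; diagonal = leverages
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
h = np.diag(H)
p = X.shape[1]
s2 = resid @ resid / (n - p)

# Cook's distance: influence of each observation on the fitted coefficients.
cooks_d = resid**2 / (p * s2) * h / (1 - h) ** 2
print(int(np.argmax(cooks_d)))  # 0 — the planted point dominates
```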

  13. Linear regression crash prediction models : issues and proposed solutions.

    Science.gov (United States)

    2010-05-01

    The paper develops a linear regression model approach that can be applied to crash data to predict vehicle crashes. The proposed approach involves novel data aggregation to satisfy linear regression assumptions, namely error structure normality ...

  14. An application of the J-integral to an incremental analysis of blunting crack behavior

    International Nuclear Information System (INIS)

    Merkle, J.G.

    1989-01-01

    This paper describes an analytical approach to estimating the elastic-plastic stresses and strains near the tip of a blunting crack with a finite root radius. Rice's original derivation of the path-independent J-integral considered the possibility of a finite crack tip root radius. For this problem Creager's elastic analysis gives the relation between the stress intensity factor K_I and the near-tip stresses. It can be shown that the relation K_I^2 = E'J holds when the root radius is finite. Recognizing that elastic-plastic behavior is incrementally linear then allows a derivation to be performed for a bielastic specimen having a crack tip region of reduced modulus, and the result differentiated to estimate elastic-plastic behavior. The result is the incremental form of Neuber's equation. This result does not require the assumption of any particular stress-strain relation. However, by assuming a pure power-law stress-strain relation and using Ilyushin's principle, the ordinary deformation theory form of Neuber's equation, K_σ K_ε = K_t^2, is obtained. Applications of the incremental form of Neuber's equation have already been made to fatigue and fracture analysis. This paper helps to provide a theoretical basis for these methods, previously considered semiempirical. 26 refs., 4 figs
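
The relations named in the abstract can be written out explicitly. This is a sketch in standard textbook notation (K_σ = σ/S and K_ε = ε/e are the local stress and strain concentration factors, K_t the elastic concentration factor, S and e the nominal stress and strain); the incremental form shown is simply the differential of the deformation-theory relation, not necessarily the paper's exact expression:

```latex
% Elastic relation for a finite root radius (Creager / Rice):
K_I^2 = E' J
% Deformation-theory form of Neuber's equation:
K_\sigma K_\varepsilon = K_t^2,
\qquad K_\sigma = \frac{\sigma}{S}, \quad K_\varepsilon = \frac{\varepsilon}{e}
% Differentiating the product sigma*epsilon = K_t^2 * S*e gives an
% incremental (rate) statement:
\sigma\,\mathrm{d}\varepsilon + \varepsilon\,\mathrm{d}\sigma
  = K_t^2\left(S\,\mathrm{d}e + e\,\mathrm{d}S\right)
```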

  15. Matrix model and time-like linear dilaton matter

    International Nuclear Information System (INIS)

    Takayanagi, Tadashi

    2004-01-01

    We consider a matrix model description of the 2d string theory whose matter part is given by a time-like linear dilaton CFT. This is equivalent to the c=1 matrix model with a deformed, but very simple, Fermi surface. Indeed, after a Lorentz transformation, the corresponding 2d spacetime is a conventional linear dilaton background with a time-dependent tachyon field. We show that the tree level scattering amplitudes in the matrix model perfectly agree with those computed in the world-sheet theory. The classical trajectories of fermions correspond to the decaying D-branes in the time-like linear dilaton CFT. We also discuss the ground ring structure. Furthermore, we study the properties of the time-like Liouville theory by applying this matrix model description. We find that its ground ring structure is very similar to that of the minimal string. (author)

  16. Linear Power-Flow Models in Multiphase Distribution Networks: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Bernstein, Andrey; Dall'Anese, Emiliano

    2017-05-26

    This paper considers multiphase unbalanced distribution systems and develops approximate power-flow models where bus-voltages, line-currents, and powers at the point of common coupling are linearly related to the nodal net power injections. The linearization approach is grounded on a fixed-point interpretation of the AC power-flow equations, and it is applicable to distribution systems featuring (i) wye connections; (ii) ungrounded delta connections; (iii) a combination of wye-connected and delta-connected sources/loads; and, (iv) a combination of line-to-line and line-to-grounded-neutral devices at the secondary of distribution transformers. The proposed linear models can facilitate the development of computationally-affordable optimization and control applications -- from advanced distribution management systems settings to online and distributed optimization routines. Performance of the proposed models is evaluated on different test feeders.
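
The fixed-point interpretation mentioned above can be illustrated on a toy single-phase two-bus network. This is not the paper's multiphase formulation; the slack voltage, line admittance, and load values below are made up:

```python
import numpy as np

# Fixed-point view of the AC power-flow equations (single-phase, two-bus
# toy case).  Slack bus held at 1+0j; one load bus with constant complex
# power injection s and line admittance y.
y = 10.0 - 30.0j          # line admittance (p.u.)
s = -(0.5 + 0.2j)         # net injection: a 0.5 + j0.2 p.u. load

v = 1.0 + 0.0j            # start from the no-load (flat) voltage
for _ in range(50):
    # V = V_slack + (1/y) * conj(s / V): the fixed-point map whose
    # linearization around the no-load profile yields the linear model.
    v = 1.0 + np.conj(s / v) / y

# At the fixed point the power-flow equation holds exactly:
mismatch = v * np.conj(y * (v - 1.0)) - s
print(abs(mismatch))  # ~0 after convergence
```

Linearizing this map around the flat voltage profile is what relates bus voltages linearly to net power injections in the models above.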

  17. Linear mixed models a practical guide using statistical software

    CERN Document Server

    West, Brady T; Galecki, Andrzej T

    2006-01-01

    Simplifying the often confusing array of software programs for fitting linear mixed models (LMMs), Linear Mixed Models: A Practical Guide Using Statistical Software provides a basic introduction to primary concepts, notation, software implementation, model interpretation, and visualization of clustered and longitudinal data. This easy-to-navigate reference details the use of procedures for fitting LMMs in five popular statistical software packages: SAS, SPSS, Stata, R/S-plus, and HLM. The authors introduce basic theoretical concepts, present a heuristic approach to fitting LMMs based on bo

  18. Optimal designs for linear mixture models

    NARCIS (Netherlands)

    Mendieta, E.J.; Linssen, H.N.; Doornbos, R.

    1975-01-01

    In a recent paper Snee and Marquardt (1974) considered designs for linear mixture models, where the components are subject to individual lower and/or upper bounds. When the number of components is large their algorithm XVERT yields designs far too extensive for practical purposes. The purpose of

  19. Functional linear models for association analysis of quantitative traits.

    Science.gov (United States)

    Fan, Ruzong; Wang, Yifan; Mills, James L; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao

    2013-11-01

    Functional linear models are developed in this paper for testing associations between quantitative traits and genetic variants, which can be rare variants or common variants or the combination of the two. By treating multiple genetic variants of an individual in a human population as a realization of a stochastic process, the genome of an individual in a chromosome region is a continuum of sequence data rather than discrete observations. The genome of an individual is viewed as a stochastic function that contains both linkage and linkage disequilibrium (LD) information of the genetic markers. By using techniques of functional data analysis, both fixed and mixed effect functional linear models are built to test the association between quantitative traits and genetic variants adjusting for covariates. After extensive simulation analysis, it is shown that the F-distributed tests of the proposed fixed effect functional linear models have higher power than that of sequence kernel association test (SKAT) and its optimal unified test (SKAT-O) for three scenarios in most cases: (1) the causal variants are all rare, (2) the causal variants are both rare and common, and (3) the causal variants are common. The superior performance of the fixed effect functional linear models is most likely due to its optimal utilization of both genetic linkage and LD information of multiple genetic variants in a genome and similarity among different individuals, while SKAT and SKAT-O only model the similarities and pairwise LD but do not model linkage and higher order LD information sufficiently. In addition, the proposed fixed effect models generate accurate type I error rates in simulation studies. We also show that the functional kernel score tests of the proposed mixed effect functional linear models are preferable in candidate gene analysis and small sample problems. The methods are applied to analyze three biochemical traits in data from the Trinity Students Study. © 2013 WILEY

  20. Modeling of optimization strategies in the incremental CNC sheet metal forming process

    International Nuclear Information System (INIS)

    Bambach, M.; Hirt, G.; Ames, J.

    2004-01-01

    Incremental CNC sheet forming (ISF) is a relatively new sheet metal forming process for small batch production and prototyping. In ISF, a blank is shaped by the CNC movements of a simple tool in combination with a simplified die. The standard forming strategies in ISF entail two major drawbacks: (i) the inherent forming kinematics set limits on the maximum wall angle that can be formed with ISF; (ii) since elastic parts of the imposed deformation can currently not be accounted for in CNC code generation, the standard strategies can lead to undesired deviations between the target and the sample geometry. Several enhancements have recently been put forward to overcome the above limitations, among them a multistage forming strategy to manufacture steep flanges, and a correction algorithm to improve the geometric accuracy. Both strategies have been successful in improving the forming of simple parts. However, the high experimental effort to empirically optimize the tool paths motivates the use of process modeling techniques. This paper deals with finite element modeling of the ISF process. In particular, the outcome of different multistage strategies is modeled and compared to collated experimental results regarding aspects such as sheet thickness and the onset of wrinkling. Moreover, the feasibility of modeling the geometry of a part is investigated, as this is of major importance with respect to optimizing the geometric accuracy. Experimental validation is achieved by optical deformation measurement that gives the local displacements and strains of the sheet during forming as benchmark quantities for the simulation

  1. Renormalization a la BRS of the non-linear σ-model

    International Nuclear Information System (INIS)

    Blasi, A.; Collina, R.

    1987-01-01

    We characterize the non-linear O(N+1) σ-model in an arbitrary parametrization with a nilpotent BRS operator obtained from the symmetry transformation by the use of anticommuting parameters. The identity can be made compatible with the presence of a mass term in the model, so we can analyze its stability and prove that the model is anomaly free. This procedure avoids many problems encountered in the conventional analysis; in particular, the introduction of an infinite number of sources coupled to the successive variations of the field is not necessary, and the linear O(N) symmetry is respected as a consequence of the identity. The approach may prove useful in discussing the renormalizability of a wider class of models with non-linear symmetries. (orig.)

  2. Robust Comparison of the Linear Model Structures in Self-tuning Adaptive Control

    DEFF Research Database (Denmark)

    Zhou, Jianjun; Conrad, Finn

    1989-01-01

    The Generalized Predictive Controller (GPC) is extended to systems with a generalized linear model structure which contains a number of choices of linear model structures. The Recursive Prediction Error Method (RPEM) is used to estimate the unknown parameters of the linear model structures to constitute a GPC self-tuner. Different linear model structures commonly used are compared and evaluated by applying them to the extended GPC self-tuner as well as to the special cases of the GPC, the GMV and MV self-tuners. The simulation results show how the choice of model structure affects the input-output behaviour of self-tuning controllers.

  3. Partial and incremental PCMH practice transformation: implications for quality and costs.

    Science.gov (United States)

    Paustian, Michael L; Alexander, Jeffrey A; El Reda, Darline K; Wise, Chris G; Green, Lee A; Fetters, Michael D

    2014-02-01

    To examine the associations between partial and incremental implementation of the Patient Centered Medical Home (PCMH) model and measures of cost and quality of care. We combined validated, self-reported PCMH capabilities data with administrative claims data for a diverse statewide population of 2,432 primary care practices in Michigan. These data were supplemented with contextual data from the Area Resource File. We measured medical home capabilities in place as of June 2009 and change in medical home capabilities implemented between July 2009 and June 2010. Generalized estimating equations were used to estimate the mean effect of these PCMH measures on total medical costs and quality of care delivered in physician practices between July 2009 and June 2010, while controlling for potential practice, patient cohort, physician organization, and practice environment confounders. Based on the observed relationships for partial implementation, full implementation of the PCMH model is associated with a 3.5 percent higher quality composite score, a 5.1 percent higher preventive composite score, and $26.37 lower per member per month medical costs for adults. Full PCMH implementation is also associated with a 12.2 percent higher preventive composite score, but no reductions in costs for pediatric populations. Incremental improvements in PCMH model implementation yielded similar positive effects on quality of care for both adult and pediatric populations but were not associated with cost savings for either population. Estimated effects of the PCMH model on quality and cost of care appear to improve with the degree of PCMH implementation achieved and with incremental improvements in implementation. © Health Research and Educational Trust.

  4. Multicollinearity in hierarchical linear models.

    Science.gov (United States)

    Yu, Han; Jiang, Shanhe; Land, Kenneth C

    2015-09-01

    This study investigates an ill-posed problem (multicollinearity) in Hierarchical Linear Models from both the data and the model perspectives. We propose an intuitive, effective approach to diagnosing the presence of multicollinearity and its remedies in this class of models. A simulation study demonstrates the impacts of multicollinearity on coefficient estimates, associated standard errors, and variance components at various levels of multicollinearity for finite sample sizes typical in social science studies. We further investigate the role multicollinearity plays at each level for estimation of coefficient parameters in terms of shrinkage. Based on these analyses, we recommend a top-down method for assessing multicollinearity in HLMs that first examines the contextual predictors (Level-2 in a two-level model) and then the individual predictors (Level-1) and uses the results for data collection, research problem redefinition, model re-specification, variable selection and estimation of a final model. Copyright © 2015 Elsevier Inc. All rights reserved.
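
A common diagnostic for the multicollinearity discussed here is the variance inflation factor (VIF), computed per predictor from an auxiliary regression. A sketch for the single-level (Level-1) case, with one predictor deliberately constructed as a near-linear combination of the others:

```python
import numpy as np

rng = np.random.default_rng(7)

# Diagnose multicollinearity with variance inflation factors (VIFs).
# x3 is nearly a linear combination of x1 and x2 -> all VIFs blow up.
n = 500
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
x3 = 0.7 * x1 + 0.7 * x2 + rng.normal(scale=0.05, size=n)
X = np.column_stack([x1, x2, x3])

def vif(X, j):
    # Regress column j on the remaining columns; VIF_j = 1 / (1 - R^2_j).
    others = np.delete(X, j, axis=1)
    A = np.column_stack([np.ones(len(X)), others])
    fitted = A @ np.linalg.lstsq(A, X[:, j], rcond=None)[0]
    r2 = 1 - np.var(X[:, j] - fitted) / np.var(X[:, j])
    return 1.0 / (1.0 - r2)

print([round(vif(X, j), 1) for j in range(3)])  # all far above the usual cutoff of 10
```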

  5. Efficiency of Oral Incremental Rehearsal versus Written Incremental Rehearsal on Students' Rate, Retention, and Generalization of Spelling Words

    Science.gov (United States)

    Garcia, Dru; Joseph, Laurice M.; Alber-Morgan, Sheila; Konrad, Moira

    2014-01-01

    The purpose of this study was to examine the efficiency of an incremental rehearsal oral versus an incremental rehearsal written procedure on a sample of primary grade children's weekly spelling performance. Participants included five second and one first grader who were in need of help with their spelling according to their teachers. An…

  6. Optimal designs for linear mixture models

    NARCIS (Netherlands)

    Mendieta, E.J.; Linssen, H.N.; Doornbos, R.

    1975-01-01

    In a recent paper Snee and Marquardt [8] considered designs for linear mixture models, where the components are subject to individual lower and/or upper bounds. When the number of components is large their algorithm XVERT yields designs far too extensive for practical purposes. The purpose of this

  7. Incremental validity of positive and negative valence in predicting personality disorder.

    Science.gov (United States)

    Simms, Leonard J; Yufik, Tom; Gros, Daniel F

    2010-04-01

    The Big Seven model of personality includes five dimensions similar to the Big Five model as well as two evaluative dimensions—Positive Valence (PV) and Negative Valence (NV)—which reflect extremely positive and negative person descriptors, respectively. Recent theory and research have suggested that PV and NV predict significant variance in personality disorder (PD) above that predicted by the Big Five, but firm conclusions have not been possible because previous studies have been limited to only single measures of PV, NV, and the Big Five traits. In the present study, we replicated and extended previous findings using three markers of all key constructs—including PV, NV, and the Big Five—in a diverse sample of 338 undergraduates. Results of hierarchical multiple regression analyses revealed that PV incrementally predicted Narcissistic and Histrionic PDs above the Big Five and that NV nonspecifically incremented the prediction of most PDs. Implications for dimensional models of personality pathology are discussed. PsycINFO Database Record (c) 2010 APA, all rights reserved.

  8. Low-energy limit of the extended Linear Sigma Model

    Energy Technology Data Exchange (ETDEWEB)

    Divotgey, Florian [Johann Wolfgang Goethe-Universitaet, Institut fuer Theoretische Physik, Frankfurt am Main (Germany); Kovacs, Peter [Wigner Research Center for Physics, Hungarian Academy of Sciences, Institute for Particle and Nuclear Physics, Budapest (Hungary); GSI Helmholtzzentrum fuer Schwerionenforschung, ExtreMe Matter Institute, Darmstadt (Germany); Giacosa, Francesco [Johann Wolfgang Goethe-Universitaet, Institut fuer Theoretische Physik, Frankfurt am Main (Germany); Jan-Kochanowski University, Institute of Physics, Kielce (Poland); Rischke, Dirk H. [Johann Wolfgang Goethe-Universitaet, Institut fuer Theoretische Physik, Frankfurt am Main (Germany); University of Science and Technology of China, Interdisciplinary Center for Theoretical Study and Department of Modern Physics, Hefei, Anhui (China)

    2018-01-15

The extended Linear Sigma Model is an effective hadronic model based on the linear realization of chiral symmetry SU(N{sub f}){sub L} x SU(N{sub f}){sub R}, with (pseudo)scalar and (axial-)vector mesons as degrees of freedom. In this paper, we study the low-energy limit of the extended Linear Sigma Model (eLSM) for N{sub f} = 2 flavors by integrating out all fields except for the pions, the (pseudo-)Nambu-Goldstone bosons of chiral symmetry breaking. The resulting low-energy effective action is identical to Chiral Perturbation Theory (ChPT) after choosing a representative for the coset space generated by chiral symmetry breaking and expanding it in powers of (derivatives of) the pion fields. The tree-level values of the coupling constants of the effective low-energy action agree remarkably well with those of ChPT. (orig.)

  9. A national-scale model of linear features improves predictions of farmland biodiversity.

    Science.gov (United States)

    Sullivan, Martin J P; Pearce-Higgins, James W; Newson, Stuart E; Scholefield, Paul; Brereton, Tom; Oliver, Tom H

    2017-12-01

Modelling species distribution and abundance is important for many conservation applications, but it is typically performed using relatively coarse-scale environmental variables such as the area of broad land-cover types. Fine-scale environmental data capturing the most biologically relevant variables have the potential to improve these models. For example, field studies have demonstrated the importance of linear features, such as hedgerows, for multiple taxa, but the absence of large-scale datasets of their extent prevents their inclusion in large-scale modelling studies. We assessed whether a novel spatial dataset mapping linear and woody-linear features across the UK improves the performance of abundance models of 18 bird and 24 butterfly species across 3723 and 1547 UK monitoring sites, respectively. Although improvements in explanatory power were small, the inclusion of linear features data significantly improved model predictive performance for many species. For some species, the importance of linear features depended on landscape context, with greater importance in agricultural areas. Synthesis and applications. This study demonstrates that a national-scale model of the extent and distribution of linear features improves predictions of farmland biodiversity. The ability to model spatial variability in the role of linear features such as hedgerows will be important in targeting agri-environment schemes to maximally deliver biodiversity benefits. Although this study focuses on farmland, data on the extent of different linear features are likely to improve species distribution and abundance models in a wide range of systems and also can potentially be used to assess habitat connectivity.

  10. Modeling containment of large wildfires using generalized linear mixed-model analysis

    Science.gov (United States)

    Mark Finney; Isaac C. Grenfell; Charles W. McHugh

    2009-01-01

    Billions of dollars are spent annually in the United States to contain large wildland fires, but the factors contributing to suppression success remain poorly understood. We used a regression model (generalized linear mixed-model) to model containment probability of individual fires, assuming that containment was a repeated-measures problem (fixed effect) and...

  11. Incremental Validity of the Trait Emotional Intelligence Questionnaire-Short Form (TEIQue-SF).

    Science.gov (United States)

    Siegling, A B; Vesely, Ashley K; Petrides, K V; Saklofske, Donald H

    2015-01-01

    This study examined the incremental validity of the adult short form of the Trait Emotional Intelligence Questionnaire (TEIQue-SF) in predicting 7 construct-relevant criteria beyond the variance explained by the Five-factor model and coping strategies. Additionally, the relative contributions of the questionnaire's 4 subscales were assessed. Two samples of Canadian university students completed the TEIQue-SF, along with measures of the Big Five, coping strategies (Sample 1 only), and emotion-laden criteria. The TEIQue-SF showed consistent incremental effects beyond the Big Five or the Big Five and coping strategies, predicting all 7 criteria examined across the 2 samples. Furthermore, 2 of the 4 TEIQue-SF subscales accounted for the measure's incremental validity. Although the findings provide good support for the validity and utility of the TEIQue-SF, directions for further research are emphasized.

  12. A test for the parameters of multiple linear regression models ...

    African Journals Online (AJOL)

    A test for the parameters of multiple linear regression models is developed for conducting tests simultaneously on all the parameters of multiple linear regression models. The test is robust relative to the assumptions of homogeneity of variances and absence of serial correlation of the classical F-test. Under certain null and ...

  13. Nonlinear Modeling by Assembling Piecewise Linear Models

    Science.gov (United States)

    Yao, Weigang; Liou, Meng-Sing

    2013-01-01

To preserve nonlinearity of a full order system over a parameter range of interest, we propose a simple modeling approach by assembling a set of piecewise local solutions, including the first-order Taylor series terms expanded about some sampling states. The work by Rewienski and White inspired our use of piecewise linear local solutions. The assembly of these local approximations is accomplished by assigning nonlinear weights, through radial basis functions in this study. The efficacy of the proposed procedure is validated for a two-dimensional airfoil moving at different Mach numbers and pitching motions, under which the flow exhibits prominent nonlinear behaviors. All results confirm that our nonlinear model is accurate and stable for predicting not only aerodynamic forces but also detailed flowfields. Moreover, the model is robust, remaining accurate for inputs considerably different from the base trajectory in form and magnitude. This modeling preserves nonlinearity of the problems considered in a rather simple and accurate manner.
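The assembly idea can be sketched generically: local linear (first-order Taylor) models are blended with normalized Gaussian radial basis function weights. The centers, width, and local models below are invented for illustration; they are not the paper's aerodynamic data.

```python
import math

# Two hypothetical local linear models (first-order expansions about sample states)
def f1(x): return 1.0 + 2.0 * (x - 0.0)   # expansion about center c1 = 0.0
def f2(x): return 3.0 - 1.0 * (x - 2.0)   # expansion about center c2 = 2.0

centers = [0.0, 2.0]
local_models = [f1, f2]
sigma = 0.5                                # RBF width (assumed)

def blended(x):
    """Assemble the local solutions with normalized Gaussian RBF weights."""
    w = [math.exp(-((x - c) / sigma) ** 2) for c in centers]
    s = sum(w)
    return sum(wi / s * f(x) for wi, f in zip(w, local_models))

print(blended(0.0))  # dominated by f1 near its center, so close to f1(0.0) = 1.0
```

Near each sampling state the blend reduces to the corresponding local model, while between states the RBF weights interpolate smoothly, which is what preserves the nonlinearity of the underlying system.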

  14. Performance Evaluation of Incremental K-means Clustering Algorithm

    OpenAIRE

    Chakraborty, Sanjay; Nagwani, N. K.

    2014-01-01

The incremental K-means clustering algorithm has already been proposed and analysed in [Chakraborty and Nagwani, 2011]. It is an innovative approach applicable in periodically incremental environments that deal with bulk updates. In this paper the performance evaluation is done for this incremental K-means clustering algorithm using an air pollution database. This paper also describes the comparison of performance evaluations between the existing K-means clustering algorithm and the incremental K-means clustering algorithm.
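The core incremental idea can be sketched in a few lines: fold new points into an existing clustering without re-clustering the old data. This is a 1-D illustrative sketch only, not the cited paper's exact algorithm; all data are hypothetical.

```python
def incremental_kmeans_update(centroids, counts, new_points):
    """Fold a batch of new points into an existing clustering: each point is
    assigned to its nearest centroid, which is then shifted as a running mean.
    (Illustrative 1-D sketch of the incremental idea only.)"""
    for p in new_points:
        j = min(range(len(centroids)), key=lambda i: abs(p - centroids[i]))
        counts[j] += 1
        centroids[j] += (p - centroids[j]) / counts[j]   # running-mean update
    return centroids, counts

# State left over from a previous clustering pass (hypothetical)
centroids, counts = [0.0, 10.0], [4, 4]
centroids, counts = incremental_kmeans_update(centroids, counts, [1.0, 9.0, 11.0])
print(centroids, counts)
```

Because only the per-cluster counts and centroids are carried between updates, the cost of each batch is linear in the batch size, which is the appeal over re-running full K-means.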

  15. Evaluation of an Incremental Ventilation Energy Model for Estimating Impacts of Air Sealing and Mechanical Ventilation

    Energy Technology Data Exchange (ETDEWEB)

    Logue, Jennifer M. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Turner, Willliam JN [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Walker, Iain S. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Singer, Brett C. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2012-07-01

Changing the rate of airflow through a home affects the annual thermal conditioning energy. Large-scale changes to airflow rates of the housing stock can significantly alter the energy consumption of the residential energy sector. However, the complexity of existing residential energy models hampers the ability to estimate the impact of policy changes on a state or nationwide level. The Incremental Ventilation Energy (IVE) model developed in this study was designed to combine the output of simple airflow models and a limited set of home characteristics to estimate the associated change in energy demand of homes. The IVE model was designed specifically to enable modelers to use existing databases of home characteristics to determine the impact of policy on ventilation at a population scale. In this report, we describe the IVE model and demonstrate that its estimates of energy change are comparable to the estimates of a well-validated, complex residential energy model when applied to homes with limited parameterization. Homes with extensive parameterization would be more accurately characterized by complex residential energy models. The demonstration included a range of home types, climates, and ventilation systems that cover a large fraction of the residential housing sector.

  16. Solution strategies for linear and nonlinear instability phenomena for arbitrarily thin shell structures

    International Nuclear Information System (INIS)

    Eckstein, U.; Harte, R.; Kraetzig, W.B.; Wittek, U.

    1983-01-01

    In order to describe nonlinear response and instability behaviour the paper starts with the total potential energy considering the basic kinematic equations of a consistent nonlinear shell theory for large displacements and moderate rotations. The material behaviour is assumed to be hyperelastic and isotropic. The incrementation and discretization of the total potential energy leads to the tangent stiffness relation, which is the central equation of computational algorithms based on combined incremental and iterative techniques. Here a symmetrized form of the RIKS/WEMPNER-algorithm for positive and negative load incrementation represents the basis of the nonlinear solution technique. To detect secondary equilibrium branches at points of neutral equilibrium within nonlinear primary paths a quadratic eigenvalue-problem has to be solved. In order to follow those complicated nonlinear response phenomena the RIKS/WEMPNER incrementation/iteration process is combined with a simultaneous solution of the linearized quadratic eigenvalue-problem. Additionally the essentials of a recently derived family of arbitrarily curved shell elements for linear (LACS) and geometrically nonlinear (NACS) shell problems are presented. The main advantage of these elements is the exact description of all geometric properties as well as the energy-equivalent representation of the applied loads in combination with an efficient algorithm to form the stiffness submatrices. Especially the NACS-elements are designed to improve the accuracy of the solution in the deep postbuckling range including moderate rotations. The derived finite elements and solution strategies are applied to a certain number of typical shell problems to prove the precision of the shell elements and to demonstrate the possibilities of tracing linear and nonlinear bifurcation problems as well as snap-through phenomena with and without secondary bifurcation branches. (orig.)
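The combined incremental and iterative technique described above can be illustrated on a one-degree-of-freedom softening spring. This is a toy sketch with invented numbers; it omits the arc-length constraint that distinguishes the Riks/Wempner algorithm and lets it pass limit points.

```python
def solve_load_steps(k, c, load_max, n_steps, tol=1e-10):
    """Combined incremental/iterative solution for a 1-DOF softening spring
    with internal force R(u) = k*u - c*u**3: the load is applied in
    increments, and each increment is equilibrated by Newton iterations on
    the tangent stiffness. (Toy sketch; not the Riks/Wempner scheme itself.)"""
    u = 0.0
    path = []
    for step in range(1, n_steps + 1):
        load = load_max * step / n_steps          # load incrementation
        for _ in range(50):                       # equilibrium iterations
            residual = load - (k * u - c * u ** 3)
            if abs(residual) < tol:
                break
            tangent = k - 3.0 * c * u ** 2        # tangent stiffness
            u += residual / tangent
        path.append((load, u))
    return path

path = solve_load_steps(k=1.0, c=0.1, load_max=0.5, n_steps=5)
print(path[-1])  # converged displacement at the full load
```

Each converged step satisfies equilibrium to the tolerance; near a limit point the tangent stiffness approaches zero, which is where a pure load-controlled scheme like this one fails and path-following constraints become necessary.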

  17. Portfolio optimization by using linear programing models based on genetic algorithm

    Science.gov (United States)

    Sukono; Hidayat, Y.; Lesmana, E.; Putra, A. S.; Napitupulu, H.; Supian, S.

    2018-01-01

In this paper, we discussed the investment portfolio optimization using a linear programming model based on genetic algorithms. It is assumed that the portfolio risk is measured by absolute standard deviation, and each investor has a risk tolerance on the investment portfolio. To solve the investment portfolio optimization problem, it is formulated as a linear programming model. Furthermore, determination of the optimum solution for the linear program is done by using a genetic algorithm. As a numerical illustration, we analyze some of the stocks traded on the capital market in Indonesia. Based on the analysis, it is shown that the portfolio optimization performed by the genetic algorithm approach produces a more efficient portfolio than the portfolio optimization performed by a linear programming algorithm approach. Therefore, genetic algorithms can be considered as an alternative for determining the investment portfolio optimization, particularly using linear programming models.

  18. One possible method of mathematical modeling of turbulent transport processes in plasma

    International Nuclear Information System (INIS)

    Skvortsova, Nina N.; Batanov, German M.; Petrov, Alexander E.; Pshenichnikov, Anton A.; Sarksyan, Karen A.; Kharchev, Nikolay K.; Bening, Vladimir E.; Korolev, Victor Yu.

    2003-01-01

It is proposed to use the mathematical modeling of the increments of fluctuating plasma variables for analyzing the probability characteristics of turbulent transport processes in plasma. It is shown that, in plasma of the L-2M stellarator and the TAU-1 linear device, the increments of the process of local fluctuating particle flux are stochastic in nature and their distribution is a scale mixture of Gaussians. (author)
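A scale mixture of Gaussians is a Gaussian whose variance is itself random, and this is what produces heavier-than-Gaussian tails in the increments. The following sketch draws such a mixture and checks its excess kurtosis; the exponential mixing distribution is an arbitrary illustrative choice, not the one fitted in the study.

```python
import random
import statistics

random.seed(42)

def scale_mixture_sample(n):
    """Draw from a scale mixture of Gaussians: the variance itself is random
    (exponential here, an assumed illustrative choice), which yields
    heavier-than-Gaussian tails."""
    out = []
    for _ in range(n):
        var = random.expovariate(1.0)            # random variance
        out.append(random.gauss(0.0, var ** 0.5))
    return out

def excess_kurtosis(xs):
    """Excess kurtosis: 0 for a Gaussian, positive for heavy tails."""
    m = statistics.fmean(xs)
    s2 = statistics.fmean((x - m) ** 2 for x in xs)
    m4 = statistics.fmean((x - m) ** 4 for x in xs)
    return m4 / s2 ** 2 - 3.0

print(excess_kurtosis(scale_mixture_sample(100_000)))  # clearly positive
```

With an exponential mixing distribution the marginal is Laplace-like, so the sample excess kurtosis lands near 3, well away from the Gaussian value of 0.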

  19. Plane answers to complex questions the theory of linear models

    CERN Document Server

    Christensen, Ronald

    1987-01-01

This book was written to rigorously illustrate the practical application of the projective approach to linear models. To some, this may seem contradictory. I contend that it is possible to be both rigorous and illustrative and that it is possible to use the projective approach in practical applications. Therefore, unlike many other books on linear models, the use of projections and subspaces does not stop after the general theory. They are used wherever I could figure out how to do it. Solving normal equations and using calculus (outside of maximum likelihood theory) are anathema to me. This is because I do not believe that they contribute to the understanding of linear models. I have similar feelings about the use of side conditions. Such topics are mentioned when appropriate and thenceforward avoided like the plague. On the other side of the coin, I just as strenuously reject teaching linear models with a coordinate free approach. Although Joe Eaton assures me that the issues in complicated problems freq...

  20. Quantum independent increment processes

    CERN Document Server

    Franz, Uwe

    2005-01-01

This volume is the first of two volumes containing the revised and completed notes of lectures given at the school "Quantum Independent Increment Processes: Structure and Applications to Physics". This school was held at the Alfried-Krupp-Wissenschaftskolleg in Greifswald during the period March 9 – 22, 2003, and supported by the Volkswagen Foundation. The school gave an introduction to current research on quantum independent increment processes aimed at graduate students and non-specialists working in classical and quantum probability, operator algebras, and mathematical physics. The present first volume contains the following lectures: "Lévy Processes in Euclidean Spaces and Groups" by David Applebaum, "Locally Compact Quantum Groups" by Johan Kustermans, "Quantum Stochastic Analysis" by J. Martin Lindsay, and "Dilations, Cocycles and Product Systems" by B.V. Rajarama Bhat.

  1. Approximating chiral quark models with linear σ-models

    International Nuclear Information System (INIS)

    Broniowski, Wojciech; Golli, Bojan

    2003-01-01

We study the approximation of chiral quark models with simpler models, obtained via gradient expansion. The resulting Lagrangian of the type of the linear σ-model contains, at the lowest level of the gradient-expanded meson action, an additional term of the form (1/2)A(σ∂_μσ + π∂_μπ)². We investigate the dynamical consequences of this term and its relevance to the phenomenology of the soliton models of the nucleon. It is found that the inclusion of the new term allows for a more efficient approximation of the underlying quark theory, especially in those cases where dynamics allows for a large deviation of the chiral fields from the chiral circle, such as in quark models with non-local regulators. This is of practical importance, since the σ-models with valence quarks only are technically much easier to treat and simpler to solve than the quark models with the full-fledged Dirac sea

  2. Multivariate statistical modelling based on generalized linear models

    CERN Document Server

    Fahrmeir, Ludwig

    1994-01-01

    This book is concerned with the use of generalized linear models for univariate and multivariate regression analysis. Its emphasis is to provide a detailed introductory survey of the subject based on the analysis of real data drawn from a variety of subjects including the biological sciences, economics, and the social sciences. Where possible, technical details and proofs are deferred to an appendix in order to provide an accessible account for non-experts. Topics covered include: models for multi-categorical responses, model checking, time series and longitudinal data, random effects models, and state-space models. Throughout, the authors have taken great pains to discuss the underlying theoretical ideas in ways that relate well to the data at hand. As a result, numerous researchers whose work relies on the use of these models will find this an invaluable account to have on their desks. "The basic aim of the authors is to bring together and review a large part of recent advances in statistical modelling of m...

  3. Analysis of Dynamical Characteristic of Piecewise-Nonlinear Asymmetric Hysteretic System Based on Incremental Harmonic Balance Method

    Directory of Open Access Journals (Sweden)

    H. R. Liu

    2015-01-01

Considering a type of elastic mass with an asymmetric hysteresis characteristic that is widespread in engineering, a piecewise-nonlinear dynamical equation containing an asymmetric hysteretic loop is established. By using the Incremental Harmonic Balance (IHB) method, the analytic linearized algebraic equation of the system is obtained. On the basis of this algebraic equation, the coefficients of the algebraic expression are determined by the incremental procedure and the iterative process of the regulated variable. Through simulation, the amplitude-frequency response curve and the relation between the value of the harmonic component and the external excitation are investigated; the bistable regions of the bifurcation diagram of the system under variation of the excitation amplitude are studied. The above results can be used to guide research on asymmetric hysteretic systems with polynomial expressions.

  4. Finiteness of Ricci flat supersymmetric non-linear sigma-models

    International Nuclear Information System (INIS)

    Alvarez-Gaume, L.; Ginsparg, P.

    1985-01-01

Combining the constraints of Kaehler differential geometry with the universality of the normal coordinate expansion in the background field method, we study the ultraviolet behavior of 2-dimensional supersymmetric non-linear sigma-models with target space an arbitrary Riemannian manifold M. We show that the constraint of N=2 supersymmetry requires that all counterterms to the metric beyond one-loop order are cohomologically trivial. It follows that such supersymmetric non-linear sigma-models defined on locally symmetric spaces are super-renormalizable and that N=4 models are on-shell ultraviolet finite to all orders of perturbation theory. (orig.)

  5. Comparison between linear and non-parametric regression models for genome-enabled prediction in wheat.

    Science.gov (United States)

    Pérez-Rodríguez, Paulino; Gianola, Daniel; González-Camacho, Juan Manuel; Crossa, José; Manès, Yann; Dreisigacker, Susanne

    2012-12-01

    In genome-enabled prediction, parametric, semi-parametric, and non-parametric regression models have been used. This study assessed the predictive ability of linear and non-linear models using dense molecular markers. The linear models were linear on marker effects and included the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B. The non-linear models (this refers to non-linearity on markers) were reproducing kernel Hilbert space (RKHS) regression, Bayesian regularized neural networks (BRNN), and radial basis function neural networks (RBFNN). These statistical models were compared using 306 elite wheat lines from CIMMYT genotyped with 1717 diversity array technology (DArT) markers and two traits, days to heading (DTH) and grain yield (GY), measured in each of 12 environments. It was found that the three non-linear models had better overall prediction accuracy than the linear regression specification. Results showed a consistent superiority of RKHS and RBFNN over the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B models.

  6. Analytic description of the frictionally engaged in-plane bending process incremental swivel bending (ISB)

    Science.gov (United States)

    Frohn, Peter; Engel, Bernd; Groth, Sebastian

    2018-05-01

Kinematic forming processes shape geometries through the process parameters, allowing more universal process utilization across geometric configurations. The kinematic forming process Incremental Swivel Bending (ISB) bends sheet metal strips or profiles in plane. The sequence for bending an arc increment is composed of the steps clamping, bending, force release and feed. The bending moment is frictionally engaged by two clamping units in a laterally adjustable bending pivot. A minimum clamping force that hinders the material from slipping through the clamping units is a crucial criterion for achieving a well-defined incremental arc. Therefore, an analytic description of a single bent increment is developed in this paper. The bending moment is calculated from the uniaxial stress distribution over the profile's width depending on the bending pivot's position. Using a Coulomb-based friction model, the necessary clamping force is described in dependence on friction, offset, dimensions of the clamping tools and strip thickness, as well as material parameters. Boundaries for the uniaxial stress calculation are given in dependence on friction, the tools' dimensions and strip thickness. The results indicate that moving the bending pivot to an eccentric position significantly affects the process' bending moment and, hence, the clamping force, which is given in dependence on yield stress and hardening exponent. FE simulations validate the model with satisfactory agreement.
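The slipping criterion can be made concrete with a Coulomb friction balance: the clamping units must press hard enough that friction can transmit the bending moment. Everything below, including the mean lever arm of contact_length/2 and all numerical values, is an illustrative assumption, not the paper's model or data.

```python
def min_clamping_force(bending_moment, mu, contact_length, n_surfaces=2):
    """Smallest normal force with which Coulomb friction can still transmit
    the bending moment without the strip slipping through the clamping unit.
    (Illustrative sketch; lever arm taken as contact_length / 2.)"""
    lever = contact_length / 2.0
    return bending_moment / (mu * n_surfaces * lever)

# Fully plastic bending moment of a rectangular strip: M = sigma_y * w * t**2 / 4
sigma_y, w, t = 300e6, 0.03, 0.002        # yield stress [Pa], width [m], thickness [m]
M = sigma_y * w * t ** 2 / 4.0            # about 9 N*m
print(min_clamping_force(M, mu=0.15, contact_length=0.04))  # about 1500 N
```

The inverse dependence on the friction coefficient shows why friction, tool dimensions, and strip thickness all enter the clamping-force criterion.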

  7. Robust Linear Models for Cis-eQTL Analysis.

    Science.gov (United States)

    Rantalainen, Mattias; Lindgren, Cecilia M; Holmes, Christopher C

    2015-01-01

Expression Quantitative Trait Loci (eQTL) analysis enables characterisation of functional genetic variation influencing expression levels of individual genes. In outbred populations, including humans, eQTLs are commonly analysed using the conventional linear model, adjusting for relevant covariates, assuming an allelic dosage model and a Gaussian error term. However, gene expression data generally have noise that induces heavy-tailed errors relative to the Gaussian distribution and often include atypical observations, or outliers. Such departures from modelling assumptions can lead to an increased rate of type II errors (false negatives), and to some extent also type I errors (false positives). Careful model checking can reduce the risk of type I errors but often not type II errors, since it is generally too time-consuming to carefully check all models with a non-significant effect in large-scale and genome-wide studies. Here we propose the application of a robust linear model for eQTL analysis to reduce adverse effects of deviations from the assumption of Gaussian residuals. We present results from a simulation study as well as results from the analysis of real eQTL data sets. Our findings suggest that in many situations robust models have the potential to provide more reliable eQTL results compared to conventional linear models, particularly in respect to reducing type II errors due to non-Gaussian noise. Post-genomic data, such as that generated in genome-wide eQTL studies, are often noisy and frequently contain atypical observations. Robust statistical models have the potential to provide more reliable results and increased statistical power under non-Gaussian conditions. The results presented here suggest that robust models should be considered routinely alongside other commonly used methodologies for eQTL analysis.
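A common way to fit a robust linear model is iteratively reweighted least squares with Huber weights, which down-weights atypical observations instead of letting them dominate the fit. The sketch below is a minimal one-covariate stand-in, not the authors' pipeline; real eQTL analyses would use a full robust-regression package with covariates.

```python
def robust_line_fit(xs, ys, delta=1.345, iters=50):
    """Fit y = a + b*x by iteratively reweighted least squares with Huber
    weights, so outliers are down-weighted rather than dominating the fit.
    (Minimal illustrative sketch of a robust linear model.)"""
    n = len(xs)
    w = [1.0] * n
    a = b = 0.0
    for _ in range(iters):
        # weighted least squares for intercept a and slope b
        sw = sum(w)
        mx = sum(wi * x for wi, x in zip(w, xs)) / sw
        my = sum(wi * y for wi, y in zip(w, ys)) / sw
        sxx = sum(wi * (x - mx) ** 2 for wi, x in zip(w, xs))
        sxy = sum(wi * (x - mx) * (y - my) for wi, x, y in zip(w, xs, ys))
        b = sxy / sxx
        a = my - b * mx
        # recompute Huber weights from the current residuals
        res = [y - (a + b * x) for x, y in zip(xs, ys)]
        scale = sorted(abs(r) for r in res)[n // 2] / 0.6745  # robust scale
        if scale == 0.0:
            scale = 1.0
        w = [1.0 if abs(r) <= delta * scale else delta * scale / abs(r)
             for r in res]
    return a, b

xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
ys = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 30.0]   # last observation is an outlier
a, b = robust_line_fit(xs, ys)
print(a, b)  # slope close to the true value 1, unlike ordinary least squares
```

With the single outlier, ordinary least squares (the first IRLS pass) estimates a slope near 2.9, while the converged robust fit recovers a slope close to 1.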

  8. Practical likelihood analysis for spatial generalized linear mixed models

    DEFF Research Database (Denmark)

    Bonat, W. H.; Ribeiro, Paulo Justiniano

    2016-01-01

We investigate an algorithm for maximum likelihood estimation of spatial generalized linear mixed models based on the Laplace approximation. We compare our algorithm with a set of alternative approaches for two datasets from the literature. The Rhizoctonia root rot and the Rongelap datasets are, respectively, examples of binomial and count data modeled by spatial generalized linear mixed models. Our results show that the Laplace approximation provides similar estimates to Markov Chain Monte Carlo likelihood, Monte Carlo expectation maximization, and modified Laplace approximation. Some advantages of the Laplace approximation include the computation of the maximized log-likelihood value, which can be used for model selection and tests, and the possibility to obtain realistic confidence intervals for model parameters based on profile likelihoods. The Laplace approximation also avoids the tuning

  9. Incremental short daily home hemodialysis: a case series.

    Science.gov (United States)

    Toth-Manikowski, Stephanie M; Mullangi, Surekha; Hwang, Seungyoung; Shafi, Tariq

    2017-07-05

    Patients starting dialysis often have substantial residual kidney function. Incremental hemodialysis provides a hemodialysis prescription that supplements patients' residual kidney function while maintaining total (residual + dialysis) urea clearance (standard Kt/Vurea) targets. We describe our experience with incremental hemodialysis in patients using NxStage System One for home hemodialysis. From 2011 to 2015, we initiated 5 incident hemodialysis patients on an incremental home hemodialysis regimen. The biochemical parameters of all patients remained stable on the incremental hemodialysis regimen and they consistently achieved standard Kt/Vurea targets. Of the two patients with follow-up >6 months, residual kidney function was preserved for ≥2 years. Importantly, the patients were able to transition to home hemodialysis without automatically requiring 5 sessions per week at the outset and gradually increased the number of treatments and/or dialysate volume as the residual kidney function declined. An incremental home hemodialysis regimen can be safely prescribed and may improve acceptability of home hemodialysis. Reducing hemodialysis frequency by even one treatment per week can reduce the number of fistula or graft cannulations or catheter connections by >100 per year, an important consideration for patient well-being, access longevity, and access-related infections. The incremental hemodialysis approach, supported by national guidelines, can be considered for all home hemodialysis patients with residual kidney function.

  10. Top-down attention based on object representation and incremental memory for knowledge building and inference.

    Science.gov (United States)

    Kim, Bumhwi; Ban, Sang-Woo; Lee, Minho

    2013-10-01

    Humans can efficiently perceive arbitrary visual objects based on an incremental learning mechanism with selective attention. This paper proposes a new task specific top-down attention model to locate a target object based on its form and color representation along with a bottom-up saliency based on relativity of primitive visual features and some memory modules. In the proposed model top-down bias signals corresponding to the target form and color features are generated, which draw the preferential attention to the desired object by the proposed selective attention model in concomitance with the bottom-up saliency process. The object form and color representation and memory modules have an incremental learning mechanism together with a proper object feature representation scheme. The proposed model includes a Growing Fuzzy Topology Adaptive Resonance Theory (GFTART) network which plays two important roles in object color and form biased attention; one is to incrementally learn and memorize color and form features of various objects, and the other is to generate a top-down bias signal to localize a target object by focusing on the candidate local areas. Moreover, the GFTART network can be utilized for knowledge inference which enables the perception of new unknown objects on the basis of the object form and color features stored in the memory during training. Experimental results show that the proposed model is successful in focusing on the specified target objects, in addition to the incremental representation and memorization of various objects in natural scenes. In addition, the proposed model properly infers new unknown objects based on the form and color features of previously trained objects. Copyright © 2013 Elsevier Ltd. All rights reserved.

  11. A linear model of ductile plastic damage

    International Nuclear Information System (INIS)

    Lemaitre, J.

    1983-01-01

A three-dimensional model of isotropic ductile plastic damage based on a continuum damage variable on the effective stress concept and on thermodynamics is derived. As shown by experiments on several metals and alloys, the model, integrated in the case of proportional loading, is linear with respect to the accumulated plastic strain and shows a large influence of stress triaxiality. [fr]
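For proportional loading, a damage law that is linear in the accumulated plastic strain, combined with the effective-stress concept, can be sketched as follows. The threshold and rupture strains below are assumed illustrative values, and the stress-triaxiality dependence mentioned in the abstract is omitted.

```python
def damage(p, p_d, p_r, d_c=1.0):
    """Damage growing linearly with accumulated plastic strain p: zero below
    the threshold p_d and reaching the critical value d_c at the rupture
    strain p_r. (Illustrative linear law; thresholds are assumed.)"""
    if p <= p_d:
        return 0.0
    return min(d_c, d_c * (p - p_d) / (p_r - p_d))

def effective_stress(sigma, d):
    """Effective-stress concept: the remaining material carries sigma / (1 - D)."""
    return sigma / (1.0 - d)

d = damage(p=0.35, p_d=0.1, p_r=0.6)   # halfway between threshold and rupture
print(d, effective_stress(300.0, d))   # roughly 0.5 and 600.0 (MPa)
```

At half the way from threshold to rupture the damage variable is 0.5, so the effective stress seen by the undamaged material is twice the nominal stress.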

  12. The Boundary Between Planning and Incremental Budgeting: Empirical Examination in a Publicly-Owned Corporation

    OpenAIRE

    S. K. Lioukas; D. J. Chambers

    1981-01-01

    This paper is a study within the field of public budgeting. It focuses on the capital budget, and it attempts to model and analyze the capital budgeting process using a framework previously developed in the literature of incremental budgeting. Within this framework the paper seeks to determine empirically whether the movement of capital expenditure budgets can be represented as the routine application of incremental adjustments over an existing base of allocations and whether further, forward...

  13. Sphaleron in a non-linear sigma model

    International Nuclear Information System (INIS)

    Sogo, Kiyoshi; Fujimoto, Yasushi.

    1989-08-01

    We present an exact classical saddle point solution in a non-linear sigma model. It has a topological charge 1/2 and mediates the vacuum transition. The quantum fluctuations and the transition rate are also examined. (author)

  14. Quasi-brittle damage modeling based on incremental energy relaxation combined with a viscous-type regularization

    Science.gov (United States)

    Langenfeld, K.; Junker, P.; Mosler, J.

    2018-05-01

    This paper deals with a constitutive model suitable for the analysis of quasi-brittle damage in structures. The model is based on incremental energy relaxation combined with a viscous-type regularization. A similar approach—which also represents the inspiration for the improved model presented in this paper—was recently proposed in Junker et al. (Contin Mech Thermodyn 29(1):291-310, 2017). Within this work, the model introduced in Junker et al. (2017) is critically analyzed first. This analysis leads to an improved model which shows the same features as that in Junker et al. (2017), but which (i) eliminates unnecessary model parameters, (ii) can be better interpreted from a physics point of view, (iii) can capture a fully softened state (zero stresses), and (iv) is characterized by a very simple evolution equation. In contrast to the cited work, this evolution equation is (v) integrated fully implicitly and (vi) the resulting time-discrete evolution equation can be solved analytically providing a numerically efficient closed-form solution. It is shown that the final model is indeed well-posed (i.e., its tangent is positive definite). Explicit conditions guaranteeing this well-posedness are derived. Furthermore, by additively decomposing the stress rate into deformation- and purely time-dependent terms, the functionality of the model is explained. Illustrative numerical examples confirm the theoretical findings.

  15. Fast and local non-linear evolution of steep wave-groups on deep water: A comparison of approximate models to fully non-linear simulations

    International Nuclear Information System (INIS)

    Adcock, T. A. A.; Taylor, P. H.

    2016-01-01

    The non-linear Schrödinger equation and its higher-order extensions are routinely used for analysis of extreme ocean waves. This paper compares the evolution of individual wave-packets modelled using non-linear Schrödinger-type equations with packets modelled using fully non-linear potential flow models. The modified non-linear Schrödinger equation accurately models the relatively large-scale non-linear changes to the shape of wave-groups, with a dramatic contraction of the group along the mean propagation direction and a corresponding extension of the width of the wave-crests. In addition, as the extreme wave forms, there is a local non-linear contraction of the wave-group around the crest which leads to a localised broadening of the wave spectrum that the bandwidth-limited non-linear Schrödinger equations struggle to capture. This limitation occurs for waves of moderate steepness and a narrow underlying spectrum.
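
For orientation, the basic (cubic, focusing) non-linear Schrödinger equation can be integrated with a standard split-step Fourier scheme. This sketch uses normalized units and an illustrative Gaussian wave-group; the modified NLSE and the fully non-linear potential-flow solvers compared in the paper involve additional terms not shown here.

```python
import numpy as np

def nlse_step(A, k, dt):
    # one split-step Fourier step of  i A_t + 0.5 A_xx + |A|^2 A = 0
    lin = np.exp(-0.5j * k**2 * (dt / 2))            # half linear step (Fourier space)
    A = np.fft.ifft(lin * np.fft.fft(A))
    A = A * np.exp(1j * np.abs(A)**2 * dt)           # full nonlinear step (|A| is constant here)
    A = np.fft.ifft(lin * np.fft.fft(A))             # second half linear step
    return A

# illustrative Gaussian wave-group envelope on a periodic domain
N, L = 256, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
A = 0.5 * np.exp(-x**2 / 8.0).astype(complex)

norm0 = np.sum(np.abs(A)**2)
for _ in range(200):                                  # evolve to t = 2 in these units
    A = nlse_step(A, k, dt=0.01)
```

Each substep is unitary, so the discrete L2 norm of the envelope is conserved to machine precision, which makes a convenient correctness check for the integrator.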

  16. Ground Motion Models for Future Linear Colliders

    International Nuclear Information System (INIS)

    Seryi, Andrei

    2000-01-01

    Optimization of the parameters of a future linear collider requires comprehensive models of ground motion. Both general models of ground motion and specific models of the particular site and local conditions are essential. Existing models are not completely adequate, either because they are too general, or because they omit important peculiarities of ground motion. The model considered in this paper is based on recent ground motion measurements performed at SLAC and at other accelerator laboratories, as well as on historical data. The issues to be studied for the models to become more predictive are also discussed

  17. Predicting musically induced emotions from physiological inputs: linear and neural network models.

    Science.gov (United States)

    Russo, Frank A; Vempala, Naresh N; Sandstrom, Gillian M

    2013-01-01

    Listening to music often leads to physiological responses. Do these physiological responses contain sufficient information to infer emotion induced in the listener? The current study explores this question by attempting to predict judgments of "felt" emotion from physiological responses alone using linear and neural network models. We measured five channels of peripheral physiology from 20 participants: heart rate (HR), respiration, galvanic skin response, and activity in corrugator supercilii and zygomaticus major facial muscles. Using valence and arousal (VA) dimensions, participants rated their felt emotion after listening to each of 12 classical music excerpts. After extracting features from the five channels, we examined their correlation with VA ratings, and then performed multiple linear regression to see if a linear relationship between the physiological responses could account for the ratings. Although linear models predicted a significant amount of variance in arousal ratings, they were unable to do so with valence ratings. We then used a neural network to provide a non-linear account of the ratings. The network was trained on the mean ratings of eight of the 12 excerpts and tested on the remainder. Performance of the neural network confirms that physiological responses alone can be used to predict musically induced emotion. The non-linear model derived from the neural network was more accurate than linear models derived from multiple linear regression, particularly along the valence dimension. A secondary analysis allowed us to quantify the relative contributions of inputs to the non-linear model. The study represents a novel approach to understanding the complex relationship between physiological responses and musically induced emotion.
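
A toy version of the linear-versus-non-linear comparison can be sketched with synthetic data: an "arousal-like" target that is a linear function of the features and a "valence-like" target that is not. The random-feature regression below is a crude stand-in for a trained neural network (an extreme-learning-machine-style fit), not the model used in the study, and the data are simulated, not physiological.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic "physiological features": 2 channels, 200 rated excerpts
X = rng.uniform(-1, 1, size=(200, 2))
arousal = X @ np.array([0.8, 0.5])           # linearly related to the features
valence = np.tanh(3 * X[:, 0] * X[:, 1])     # non-linearly related (interaction)

def linear_r2(X, y):
    # ordinary least squares with intercept; in-sample R^2
    Xb = np.column_stack([X, np.ones(len(X))])
    beta, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    resid = y - Xb @ beta
    return 1.0 - resid.var() / y.var()

def random_feature_r2(X, y, width=100):
    # random tanh features + least squares: a rough surrogate for a small net
    W = rng.normal(size=(X.shape[1], width))
    b = rng.normal(size=width)
    return linear_r2(np.tanh(X @ W + b), y)

r2_lin_val = linear_r2(X, valence)
r2_nn_val = random_feature_r2(X, valence)
```

As in the study, the linear fit captures the linear (arousal-like) target well but fails on the non-linear (valence-like) one, while the non-linear model recovers it.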

  18. Modelling point patterns with linear structures

    DEFF Research Database (Denmark)

    Møller, Jesper; Rasmussen, Jakob Gulddahl

    2009-01-01

    processes whose realizations contain such linear structures. Such a point process is constructed sequentially by placing one point at a time. The points are placed in such a way that new points are often placed close to previously placed points, and the points form roughly line-shaped structures. We consider simulations of this model and compare with real data.

  20. Quantum independent increment processes

    CERN Document Server

    Franz, Uwe

    2006-01-01

    This is the second of two volumes containing the revised and completed notes of lectures given at the school "Quantum Independent Increment Processes: Structure and Applications to Physics". This school was held at the Alfried-Krupp-Wissenschaftskolleg in Greifswald in March, 2003, and supported by the Volkswagen Foundation. The school gave an introduction to current research on quantum independent increment processes aimed at graduate students and non-specialists working in classical and quantum probability, operator algebras, and mathematical physics. The present second volume contains the following lectures: "Random Walks on Finite Quantum Groups" by Uwe Franz and Rolf Gohm, "Quantum Markov Processes and Applications in Physics" by Burkhard Kümmerer, "Classical and Free Infinite Divisibility and Lévy Processes" by Ole E. Barndorff-Nielsen and Steen Thorbjornsen, and "Lévy Processes on Quantum Groups and Dual Groups" by Uwe Franz.

  1. Modeling exposure–lag–response associations with distributed lag non-linear models

    Science.gov (United States)

    Gasparrini, Antonio

    2014-01-01

    In biomedical research, a health effect is frequently associated with protracted exposures of varying intensity sustained in the past. The main complexity of modeling and interpreting such phenomena lies in the additional temporal dimension needed to express the association, as the risk depends on both intensity and timing of past exposures. This type of dependency is defined here as exposure–lag–response association. In this contribution, I illustrate a general statistical framework for such associations, established through the extension of distributed lag non-linear models, originally developed in time series analysis. This modeling class is based on the definition of a cross-basis, obtained by the combination of two functions to flexibly model linear or nonlinear exposure-responses and the lag structure of the relationship, respectively. The methodology is illustrated with an example application to cohort data and validated through a simulation study. This modeling framework generalizes to various study designs and regression models, and can be applied to study the health effects of protracted exposures to environmental factors, drugs or carcinogenic agents, among others. © 2013 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd. PMID:24027094
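
The cross-basis construction can be sketched directly: pick a basis for the exposure-response dimension and one for the lag dimension, form their tensor product, and sum over the lagged exposure history. The polynomial bases and the zero-padding of early history below are simplifying assumptions of this sketch (the framework described in the paper supports splines and other basis choices).

```python
import numpy as np

def cross_basis(exposure, exp_degree=2, lag_max=5, lag_degree=2):
    """Cross-basis of a distributed lag non-linear model: tensor product of a
    polynomial exposure-response basis and a polynomial lag basis, summed over
    the lag dimension (history before t = 0 is zero-padded)."""
    exposure = np.asarray(exposure, float)
    n = len(exposure)
    lags = np.arange(lag_max + 1)
    # lag basis: columns 1, l, l^2, ... evaluated at each lag
    lag_B = np.column_stack([lags ** d for d in range(lag_degree + 1)])
    cols = []
    for d in range(1, exp_degree + 1):
        # lagged matrix of the d-th exposure transformation: Qd[t, l] = x[t-l]^d
        Qd = np.zeros((n, lag_max + 1))
        for l in lags:
            Qd[l:, l] = exposure[: n - l] ** d
        for j in range(lag_B.shape[1]):
            cols.append(Qd @ lag_B[:, j])
    return np.column_stack(cols)
```

The resulting matrix has one column per (exposure-basis, lag-basis) pair and plugs into an ordinary regression model, which is what makes the approach applicable across study designs.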

  2. Biochemical methane potential prediction of plant biomasses: Comparing chemical composition versus near infrared methods and linear versus non-linear models.

    Science.gov (United States)

    Godin, Bruno; Mayer, Frédéric; Agneessens, Richard; Gerin, Patrick; Dardenne, Pierre; Delfosse, Philippe; Delcarte, Jérôme

    2015-01-01

    The reliability of different models to predict the biochemical methane potential (BMP) of various plant biomasses using a multispecies dataset was compared. The most reliable prediction models of the BMP were those based on the near infrared (NIR) spectrum compared to those based on the chemical composition. The NIR predictions of local (specific regression and non-linear) models were able to estimate quantitatively, rapidly, cheaply and easily the BMP. Such a model could be further used for biomethanation plant management and optimization. The predictions of non-linear models were more reliable compared to those of linear models. The presentation form (green-dried, silage-dried and silage-wet form) of biomasses to the NIR spectrometer did not influence the performances of the NIR prediction models. The accuracy of the BMP method should be improved to enhance further the BMP prediction models. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. An R2 statistic for fixed effects in the linear mixed model.

    Science.gov (United States)

    Edwards, Lloyd J; Muller, Keith E; Wolfinger, Russell D; Qaqish, Bahjat F; Schabenberger, Oliver

    2008-12-20

    Statisticians most often use the linear mixed model to analyze Gaussian longitudinal data. The value and familiarity of the R2 statistic in the linear univariate model naturally creates great interest in extending it to the linear mixed model. We define and describe how to compute a model R2 statistic for the linear mixed model by using only a single model. The proposed R2 statistic measures multivariate association between the repeated outcomes and the fixed effects in the linear mixed model. The R2 statistic arises as a 1-1 function of an appropriate F statistic for testing all fixed effects (except typically the intercept) in a full model. The statistic compares the full model with a null model with all fixed effects deleted (except typically the intercept) while retaining exactly the same covariance structure. Furthermore, the R2 statistic leads immediately to a natural definition of a partial R2 statistic. A mixed model in which ethnicity gives a very small p-value as a longitudinal predictor of blood pressure (BP) compellingly illustrates the value of the statistic. In sharp contrast to the extreme p-value, a very small R2, a measure of statistical and scientific importance, indicates that ethnicity has an almost negligible association with the repeated BP outcomes for the study.
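
Because the proposed R2 is a one-to-one function of the F statistic for the fixed effects, converting between the two is a one-liner. The mapping below (R2 = qF / (qF + nu), with q numerator and nu denominator degrees of freedom) is the standard F-to-R2 relationship; the paper's mixed-model-specific choice of denominator degrees of freedom is not reproduced here.

```python
def r2_from_f(F, q, nu):
    """Model R2 implied by an F statistic with q numerator and
    nu denominator degrees of freedom: R2 = qF / (qF + nu)."""
    return q * F / (q * F + nu)

def f_from_r2(r2, q, nu):
    """Inverse mapping: the F statistic implied by a given R2."""
    return (r2 / q) / ((1.0 - r2) / nu)
```

The abstract's point is visible in this form: F (hence the p-value) grows without bound as nu grows, while R2 stays pinned near zero for a weak association, so the two can tell very different stories about importance.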

  4. Comparison of linear, skewed-linear, and proportional hazard models for the analysis of lambing interval in Ripollesa ewes.

    Science.gov (United States)

    Casellas, J; Bach, R

    2012-06-01

    Lambing interval is a relevant reproductive indicator for sheep populations under continuous mating systems, although there is a shortage of selection programs accounting for this trait in the sheep industry. Both the historical assumption of small genetic background and its unorthodox distribution pattern have limited its implementation as a breeding objective. In this manuscript, statistical performances of 3 alternative parametrizations [i.e., symmetric Gaussian mixed linear (GML) model, skew-Gaussian mixed linear (SGML) model, and piecewise Weibull proportional hazard (PWPH) model] have been compared to elucidate the preferred methodology to handle lambing interval data. More specifically, flock-by-flock analyses were performed on 31,986 lambing interval records (257.3 ± 0.2 d) from 6 purebred Ripollesa flocks. Model performances were compared in terms of deviance information criterion (DIC) and Bayes factor (BF). For all flocks, PWPH models were clearly preferred; they generated a reduction of 1,900 or more DIC units and provided BF estimates larger than 100 (i.e., PWPH models against linear models). These differences were reduced when comparing PWPH models with different number of change points for the baseline hazard function. In 4 flocks, only 2 change points were required to minimize the DIC, whereas 4 and 6 change points were needed for the 2 remaining flocks. These differences demonstrated a remarkable degree of heterogeneity across sheep flocks that must be properly accounted for in genetic evaluation models to avoid statistical biases and suboptimal genetic trends. Within this context, all 6 Ripollesa flocks revealed substantial genetic background for lambing interval with heritabilities ranging between 0.13 and 0.19. This study provides the first evidence of the suitability of PWPH models for lambing interval analysis, clearly discarding previous parametrizations focused on mixed linear models.
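
A piecewise Weibull baseline hazard of the kind compared above can be sketched as a Weibull hazard whose scale and shape parameters switch at the change points. The parametrization below is a generic one and may differ in detail from the PWPH model used in the study.

```python
import numpy as np

def pwph_baseline(t, cuts, lam, rho):
    """Baseline hazard of a piecewise Weibull proportional hazard model:
    h0(t) = lam_k * rho_k * t**(rho_k - 1) on the k-th interval, where the
    intervals are delimited by the change points `cuts`
    (len(lam) == len(rho) == len(cuts) + 1)."""
    t = np.asarray(t, float)
    k = np.searchsorted(cuts, t, side="right")   # interval index for each time
    lam, rho = np.asarray(lam), np.asarray(rho)
    return lam[k] * rho[k] * t ** (rho[k] - 1)
```

With a single piece and rho = 1 this reduces to a constant (exponential) hazard; adding change points is what lets the model track the flock-specific shapes that drove the DIC differences reported above.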

  5. Effects of frequency and duration on psychometric functions for detection of increments and decrements in sinusoids in noise.

    Science.gov (United States)

    Moore, B C; Peters, R W; Glasberg, B R

    1999-12-01

    Psychometric functions for detecting increments or decrements in level of sinusoidal pedestals were measured for increment and decrement durations of 5, 10, 20, 50, 100, and 200 ms and for frequencies of 250, 1000, and 4000 Hz. The sinusoids were presented in background noise intended to mask spectral splatter. A three-interval, three-alternative procedure was used. The results indicated that, for increments, the detectability index d' was approximately proportional to delta I/I. For decrements, d' was approximately proportional to delta L. The slopes of the psychometric functions increased (indicating better performance) with increasing frequency for both increments and decrements. For increments, the slopes increased with increasing increment duration up to 200 ms at 250 and 1000 Hz, but at 4000 Hz they increased only up to 50 ms. For decrements, the slopes increased for durations up to 50 ms, and then remained roughly constant, for all frequencies. For a center frequency of 250 Hz, the slopes of the psychometric functions for increment detection increased with duration more rapidly than predicted by a "multiple-looks" hypothesis, i.e., more rapidly than the square root of duration, for durations up to 50 ms. For center frequencies of 1000 and 4000 Hz, the slopes increased less rapidly than predicted by a multiple-looks hypothesis, for durations greater than about 20 ms. The slopes of the psychometric functions for decrement detection increased with decrement duration at a rate slightly greater than the square root of duration, for durations up to 50 ms, at all three frequencies. For greater durations, the increase in slope was less than proportional to the square root of duration. The results were analyzed using a model incorporating a simulated auditory filter, a compressive nonlinearity, a sliding temporal integrator, and a decision device based on a template mechanism. The model took into account the effects of both the external noise and an assumed internal

  6. Spatial variability in growth-increment chronologies of long-lived freshwater mussels: Implications for climate impacts and reconstructions

    Science.gov (United States)

    Black, Bryan A.; Dunham, Jason B.; Blundon, Brett W.; Raggon, Mark F.; Zima, Daniela

    2010-01-01

    Estimates of historical variability in river ecosystems are often lacking, but long-lived freshwater mussels could provide unique opportunities to understand past conditions in these environments. We applied dendrochronology techniques to quantify historical variability in growth-increment widths in valves (shells) of western pearlshell freshwater mussels (Margaritifera falcata). A total of 3 growth-increment chronologies, spanning 19 to 26 y in length, were developed. Growth was highly synchronous among individuals within each site, and to a lesser extent, chronologies were synchronous among sites. All 3 chronologies negatively related to instrumental records of stream discharge, while correlations with measures of water temperature were consistently positive but weaker. A reconstruction of stream discharge was performed using linear regressions based on a mussel growth chronology and the regional Palmer Drought Severity Index (PDSI). Models based on mussel growth and PDSI yielded similar coefficients of prediction (R2pred) of 0.73 and 0.77, respectively, for predicting out-of-sample observations. From an ecological perspective, we found that mussel chronologies provided a rich source of information for understanding climate impacts. Responses of mussels to changes in climate and stream ecosystems can be very site- and process-specific, underscoring the complex nature of biotic responses to climate change and the need to understand both regional and local processes in projecting climate impacts on freshwater species.
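
The coefficient of prediction for out-of-sample observations can be sketched as a leave-one-out (PRESS-style) R2pred for a simple linear regression; whether the study used exactly this cross-validation scheme is an assumption of this sketch.

```python
import numpy as np

def r2_pred(x, y):
    """Leave-one-out coefficient of prediction (PRESS-based R2_pred) for a
    simple linear regression: each observation is predicted from a model fit
    to all the other observations."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    press = 0.0
    for i in range(n):
        mask = np.arange(n) != i
        slope, intercept = np.polyfit(x[mask], y[mask], 1)   # refit without point i
        press += (y[i] - (slope * x[i] + intercept)) ** 2
    return 1.0 - press / np.sum((y - y.mean()) ** 2)
```

Unlike in-sample R2, this statistic can go negative when the predictor carries no real signal, which makes it a more honest score for reconstruction skill.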

  7. Contributions to a micromechanical model of the non-linear behavior of the Callovo-Oxfordian argillite; Contributions à la modélisation micromécanique du comportement non linéaire de l'argilite du Callovo-Oxfordien

    Energy Technology Data Exchange (ETDEWEB)

    Abou-Chakra Guery, A

    2007-12-15

    This work is performed in the general context of the project of underground disposal of radioactive waste undertaken by the French National Radioactive Waste Management Agency (ANDRA). Due to its high density and low permeability, the Callovo-Oxfordian argillite formation was chosen as one of the possible geological barriers to radionuclides. The objective of the study is to develop and validate a non-linear homogenization approach to the mechanical behavior of Callovo-Oxfordian argillites. The material is modelled as a composite constituted of an elasto(visco)plastic clay matrix and of linear elastic or elastic-damage inclusions. The macroscopic constitutive law is obtained by adapting the incremental method proposed by Hill. The derived model is first compared to finite element calculations on a unit cell. It is then validated and applied for the prediction of the macroscopic stress-strain responses of the argillite at different geological depths. Finally, the micromechanical model is implemented in a commercial finite element code (Abaqus) for the simulation of a vertical shaft of the underground laboratory. This allows predicting the distribution of the damage state and plastic strains and characterizing the excavation damage zone (EDZ). (author)

  8. Estimation of group means when adjusting for covariates in generalized linear models.

    Science.gov (United States)

    Qu, Yongming; Luo, Junxiang

    2015-01-01

    Generalized linear models are commonly used to analyze categorical data such as binary, count, and ordinal outcomes. Adjusting for important prognostic factors or baseline covariates in generalized linear models may improve the estimation efficiency. The model-based mean for a treatment group produced by most software packages estimates the response at the mean covariate, not the mean response for this treatment group for the studied population. Although this is not an issue for linear models, the model-based group mean estimates in generalized linear models could be seriously biased for the true group means. We propose a new method to estimate the group mean consistently with the corresponding variance estimation. Simulation showed the proposed method produces an unbiased estimator for the group means and provided the correct coverage probability. The proposed method was applied to analyze hypoglycemia data from clinical trials in diabetes. Copyright © 2014 John Wiley & Sons, Ltd.

  9. Identifiability Results for Several Classes of Linear Compartment Models.

    Science.gov (United States)

    Meshkat, Nicolette; Sullivant, Seth; Eisenberg, Marisa

    2015-08-01

    Identifiability concerns finding which unknown parameters of a model can be estimated, uniquely or otherwise, from given input-output data. If some subset of the parameters of a model cannot be determined given input-output data, then we say the model is unidentifiable. In this work, we study linear compartment models, which are a class of biological models commonly used in pharmacokinetics, physiology, and ecology. In past work, we used commutative algebra and graph theory to identify a class of linear compartment models that we call identifiable cycle models, which are unidentifiable but have the simplest possible identifiable functions (so-called monomial cycles). Here we show how to modify identifiable cycle models by adding inputs, adding outputs, or removing leaks, in such a way that we obtain an identifiable model. We also prove a constructive result on how to combine identifiable models, each corresponding to strongly connected graphs, into a larger identifiable model. We apply these theoretical results to several real-world biological models from physiology, cell biology, and ecology.

  10. A non-linear model of economic production processes

    Science.gov (United States)

    Ponzi, A.; Yasutomi, A.; Kaneko, K.

    2003-06-01

    We present a new two-phase model of economic production processes which is a non-linear dynamical version of von Neumann's neoclassical model of production, including a market price-setting phase as well as a production phase. The rate of an economic production process is observed, for the first time, to depend on the minimum of its input supplies. This creates highly non-linear supply and demand dynamics. By numerical simulation, production networks are shown to become unstable when the ratio of different products to total processes increases. This provides some insight into the observed stability of competitive capitalist economies in comparison to monopolistic economies. Capitalist economies are also shown to have low unemployment.
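
The key mechanism, a production rate limited by the scarcest input, is the Leontief minimum rule and can be stated in a few lines. The capacity cap below is an illustrative addition, not part of the record's description.

```python
import numpy as np

def production_rate(supplies, unit_requirements, capacity=np.inf):
    """Leontief-style process rate: production runs at the pace permitted by
    the scarcest input, i.e. the minimum over supplies / unit requirements,
    optionally capped by a process capacity."""
    s = np.asarray(supplies, float)
    r = np.asarray(unit_requirements, float)
    return float(min(capacity, np.min(s / r)))
```

Because the rate is a min over ratios, a shortfall in any single input throttles the whole process, which is the source of the strongly non-linear supply and demand dynamics the abstract describes.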

  11. Growth increments in teeth of Diictodon (Therapsida

    Directory of Open Access Journals (Sweden)

    J. Francis Thackeray

    1991-09-01

    Growth increments circa 0.02 mm in width have been observed in sectioned tusks of Diictodon from the Late Permian lower Beaufort succession of the South African Karoo, dated between about 260 and 245 million years ago. Mean growth increments show a decline from relatively high values in the Tropidostoma/Endothiodon Assemblage Zone, to lower values in the Aulacephalodon/Cistecephalus zone, declining still further in the Dicynodon lacerticeps/Whaitsia zone at the end of the Permian. These changes coincide with gradual changes in carbon isotope ratios measured from Diictodon tooth apatite. It is suggested that the decline in growth increments is related to environmental changes associated with a decline in primary production, which contributed to the decline in abundance and ultimate extinction of Diictodon.

  12. Effect Displays in R for Generalised Linear Models

    Directory of Open Access Journals (Sweden)

    John Fox

    2003-07-01

    This paper describes the implementation in R of a method for tabular or graphical display of terms in a complex generalised linear model. By complex, I mean a model that contains terms related by marginality or hierarchy, such as polynomial terms, or main effects and interactions. I call these tables or graphs effect displays. Effect displays are constructed by identifying high-order terms in a generalised linear model. Fitted values under the model are computed for each such term. The lower-order "relatives" of a high-order term (e.g., main effects marginal to an interaction) are absorbed into the term, allowing the predictors appearing in the high-order term to range over their values. The values of other predictors are fixed at typical values: for example, a covariate could be fixed at its mean or median, a factor at its proportional distribution in the data, or at equal proportions across its several levels. Variations of effect displays are also described, including representation of terms higher-order to any appearing in the model.
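
The core computation behind an effect display can be sketched as: sweep the focal predictor over a grid, hold the remaining predictors at typical values (here their means), and push the linear predictor through the inverse link to get the display on the response scale. The column layout and the identity default link are assumptions of this numpy sketch, not the R implementation described in the paper.

```python
import numpy as np

def effect_display(X, beta, focal, grid, inv_link=lambda eta: eta):
    """Fitted values for one term of a generalised linear model: vary the
    focal column over `grid`, fix every other predictor at its mean, and map
    the linear predictor through the inverse link."""
    typical = X.mean(axis=0)                 # typical values of all predictors
    rows = np.tile(typical, (len(grid), 1))  # one synthetic row per grid point
    rows[:, focal] = grid                    # sweep only the focal predictor
    return inv_link(rows @ beta)
```

For a logistic model one would pass, e.g., `inv_link=lambda eta: 1 / (1 + np.exp(-eta))` so the display reads in probabilities rather than log-odds.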

  13. H∞ /H2 model reduction through dilated linear matrix inequalities

    DEFF Research Database (Denmark)

    Adegas, Fabiano Daher; Stoustrup, Jakob

    2012-01-01

    This paper presents sufficient dilated linear matrix inequality (LMI) conditions for the H∞ and H2 model reduction problem. A special structure of the auxiliary (slack) variables allows the original model of order n to be reduced to an order r = n/s, where n, r, s ∈ ℕ…

  14. Optimization Research of Generation Investment Based on Linear Programming Model

    Science.gov (United States)

    Wu, Juan; Ge, Xueqian

    Linear programming is an important branch of operational research and a mathematical method to assist people in carrying out scientific management. GAMS is an advanced simulation and optimization modeling language that combines complex mathematical programming formulations, such as linear programming (LP), non-linear programming (NLP), and mixed-integer programming (MIP), with system simulation. In this paper, based on a linear programming model, the optimized investment decision-making of generation is simulated and analyzed. Finally, the optimal installed capacity of power plants and the final total cost are obtained, which provides a rational basis for optimized investment decisions.
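
A toy version of such a generation-investment LP can be written down and solved by brute-force vertex enumeration, which is adequate for two decision variables (a real model would use a solver such as those GAMS interfaces with). All costs, limits, and the demand figure below are illustrative, not data from the paper.

```python
import numpy as np
from itertools import combinations

def solve_lp(c, A_ub, b_ub):
    """Tiny dense LP solver (minimize c @ x subject to A_ub @ x <= b_ub) by
    enumerating basic feasible vertices -- fine for toy sizing problems."""
    n = len(c)
    best_x, best_val = None, np.inf
    for idx in combinations(range(len(b_ub)), n):
        A = A_ub[list(idx)]
        if abs(np.linalg.det(A)) < 1e-12:      # rows do not define a vertex
            continue
        x = np.linalg.solve(A, b_ub[list(idx)])
        if np.all(A_ub @ x <= b_ub + 1e-9) and c @ x < best_val:
            best_x, best_val = x, c @ x
    return best_x, best_val

# illustrative generation-investment sizing: coal vs. gas capacity (MW);
# minimize 50*x_coal + 80*x_gas  s.t.  x_coal + x_gas >= 100 (peak demand),
# x_coal <= 60 (site limit), x >= 0
c = np.array([50.0, 80.0])
A_ub = np.array([[-1.0, -1.0],   # -(x_coal + x_gas) <= -100
                 [ 1.0,  0.0],   #  x_coal <= 60
                 [-1.0,  0.0],   #  x_coal >= 0
                 [ 0.0, -1.0]])  #  x_gas  >= 0
b_ub = np.array([-100.0, 60.0, 0.0, 0.0])
x_opt, cost = solve_lp(c, A_ub, b_ub)
```

The optimum fills the cheap coal capacity to its site limit and covers the rest of demand with gas, the typical corner-solution behaviour of an LP sizing model.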

  15. Incremental health expenditure and lost days of normal activity for individuals with mental disorders: results from the São Paulo Megacity Study.

    Science.gov (United States)

    Chiavegatto Filho, Alexandre Dias Porto; Wang, Yuan-Pang; Campino, Antonio Carlos Coelho; Malik, Ana Maria; Viana, Maria Carmen; Andrade, Laura Helena

    2015-08-05

    With the recent increase in the prevalence of mental disorders in developing countries, there is a growing interest in the study of its consequences. We examined the association of depression, anxiety and any mental disorders with incremental health expenditure, i.e. the linear increase in health expenditure associated with mental disorders, and lost days of normal activity. We analyzed the results from a representative sample survey of residents of the Metropolitan Region of São Paulo (n = 2,920; São Paulo Megacity Mental Health Survey), part of the World Mental Health (WMH) Survey Initiative, coordinated by the World Health Organization and performed in 28 countries. The instrument used for obtaining the individual results, including the assessment of mental disorders, was the WMH version of the Composite International Diagnostic Interview 3.0 (WMH-CIDI 3.0) that generates psychiatric diagnoses according to the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV) criteria. Statistical analyses were performed by multilevel generalized least squares (GLS) regression models. Sociodemographic determinants such as income, age, education and marital status were included as controls. Depression, anxiety and any mental disorders were consistently associated with both incremental health expenditure and missing days of normal activity. Depression was associated with an incremental annual expenditure of R$308.28 (95% CI: R$194.05-R$422.50), or US$252.48 in terms of purchasing power parity (PPP). Anxiety and any mental disorders were associated with a lower, but also statistically significant, incremental annual expenditure (R$177.82, 95% CI: 79.68-275.97; and R$180.52, 95% CI: 91.13-269.92, or US$145.64 and US$147.85 in terms of PPP, respectively). Most of the incremental health costs associated with mental disorders came from medications. Depression was independently associated with higher incremental health expenditure than the two most prevalent chronic

  16. Incremental Integrity Checking: Limitations and Possibilities

    DEFF Research Database (Denmark)

    Christiansen, Henning; Martinenghi, Davide

    2005-01-01

    Integrity checking is an essential means for the preservation of the intended semantics of a deductive database. Incrementality is the only feasible approach to checking and can be obtained with respect to given update patterns by exploiting query optimization techniques. By reducing the problem to query containment, we show that no procedure exists that always returns the best incremental test (aka simplification of integrity constraints), and this according to any reasonable criterion measuring the checking effort. In spite of this theoretical limitation, we develop an effective procedure...

  17. Modeling of non-linear CHP efficiency curves in distributed energy systems

    DEFF Research Database (Denmark)

    Milan, Christian; Stadler, Michael; Cardoso, Gonçalo

    2015-01-01

    Distributed energy resources gain an increased importance in commercial and industrial building design. Combined heat and power (CHP) units are considered one of the key technologies for cost and emission reduction in buildings. In order to make optimal decisions on investment and operation for these technologies, detailed system models are needed. These models are often formulated as linear programming problems to keep computational costs and complexity in a reasonable range. However, CHP systems involve variations of the efficiency over large nameplate capacity ranges and in case of part-load operation, which can even be of a non-linear nature. Since considering these characteristics would turn the models into non-linear problems, in most cases only constant efficiencies are assumed. This paper proposes possible solutions to address this issue. For a mixed integer linear programming problem two...
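
The standard way to keep such a model linear is to replace the non-linear part-load efficiency curve with a piecewise-linear interpolation between breakpoints, which a mixed-integer linear program can then represent with SOS2 or binary variables. The breakpoints and efficiencies below are illustrative numbers, not data from the paper.

```python
import numpy as np

def pw_linear_efficiency(load_fraction, breakpoints, efficiencies):
    """Piecewise-linear approximation of a part-load efficiency curve, the
    usual device for keeping a CHP dispatch model linear (MILP-ready)."""
    return np.interp(load_fraction, breakpoints, efficiencies)

# assumed part-load electrical efficiency of a small gas CHP unit (illustrative)
bp  = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
eff = np.array([0.0, 0.20, 0.28, 0.32, 0.34])

def fuel_input(power_out, capacity):
    # fuel power needed for a given electrical output at the implied load fraction
    lf = power_out / capacity
    return power_out / pw_linear_efficiency(lf, bp, eff)
```

Between breakpoints the curve is exactly linear, so each segment can become one linear constraint in the optimization model while the drop in part-load efficiency is still captured.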

  18. Dynamic physiological responses to the incremental shuttle walk test in adults

    Directory of Open Access Journals (Sweden)

    Evandro Fornias Sperandio

    Full Text Available Abstract Introduction: Understanding the normal dynamic physiological responses to the incremental shuttle walk test might enhance the interpretation of walking performance in clinical settings. Objective: To assess dynamic physiological responses to the incremental shuttle walk test and its predictors in healthy adults. Methods: We assessed the simultaneous rates of change of Δoxygen uptake/Δwalking velocity (ΔVO2/ΔWV), Δheart rate/Δoxygen uptake (ΔHR/ΔVO2), Δventilation/Δcarbon dioxide production (ΔVE/ΔVCO2), and Δtidal volume/Δlinearized ventilation (ΔVT/ΔlnVE) during the incremental shuttle walk test in 100 men and women older than 40 years. Fat and lean body masses (bioimpedance) were also evaluated. Results: We found that the dynamic relationships were not sex-dependent. Participants aged ≥ 70 presented declines in the ΔVO2/ΔWV slope compared to those aged 40-49 (215 ± 69 vs. 288 ± 84 mL.min-1.km.h-1). Obese participants presented shallower slopes for ΔVO2/ΔWV (2.94 ± 0.90 vs. 3.84 ± 1.21 mL.min-1.kg-1.km.h-1) and ΔVT/ΔlnVE (0.57 ± 0.20 vs. 0.67 ± 0.26). We found a negative influence of fat body mass on ΔVT/ΔlnVE (R2 = 0.20) and a positive influence of lean body mass on ΔVO2/ΔWV (R2 = 0.31), ΔHR/ΔVO2 (R2 = 0.25), and ΔVT/ΔlnVE (R2 = 0.44). Conclusion: Dynamic relationships during walking were slightly influenced by age, but not sex-dependent. Body composition played an important role in these indices. Our results may provide better interpretation of walking performance in patients with chronic diseases.

  19. Non-linear characterisation of the physical model of an ancient masonry bridge

    International Nuclear Information System (INIS)

    Fragonara, L Zanotti; Ceravolo, R; Matta, E; Quattrone, A; De Stefano, A; Pecorelli, M

    2012-01-01

    This paper presents the non-linear investigations carried out on a scaled model of a two-span masonry arch bridge. The model has been built in order to study the effect of the central pile settlement due to riverbank erosion. Progressive damage was induced in several steps by applying increasing settlements at the central pier. For each settlement step, harmonic shaker tests were conducted under different excitation levels, this allowing for the non-linear identification of the progressively damaged system. The shaker tests have been performed at resonance with the modal frequency of the structure, which were determined from a previous linear identification. Estimated non-linearity parameters, which result from the systematic application of restoring force based identification algorithms, can corroborate models to be used in the reassessment of existing structures. The method used for non-linear identification allows monitoring the evolution of non-linear parameters or indicators which can be used in damage and safety assessment.

  20. Study of the critical behavior of the O(N) linear and nonlinear sigma models

    International Nuclear Information System (INIS)

    Graziani, F.R.

    1983-01-01

    A study of the large N behavior of both the O(N) linear and nonlinear sigma models is presented. The purpose is to investigate the relationship between the disordered (ordered) phase of the linear and nonlinear sigma models. Utilizing operator product expansions and stability analyses, it is shown that for 2 < d < 4 it is the lambda_R(M) → lambda* limit (lambda_R(M) is the dimensionless renormalized quartic coupling and lambda* is the IR fixed point) of the linear sigma model which yields the nonlinear sigma model. It is also shown that stable large N linear sigma models (lambda > 0) and nonlinear models are trivial. This result (i.e., triviality) is well known but only for one and two component models. Interestingly enough, the lambda < 0, d = 4 linear sigma model remains nontrivial and tachyon free

  1. Design of methodology for incremental compiler construction

    Directory of Open Access Journals (Sweden)

    Pavel Haluza

    2011-01-01

    Full Text Available The paper deals with possibilities of incremental compiler construction. It presents compiler construction possibilities both for languages with a fixed set of lexical units and for languages with a variable set of lexical units. The methodology design for incremental compiler construction is based on the known algorithms for standard compiler construction and is derived for both groups of languages. The group of languages with a fixed set of lexical units comprises languages where each lexical unit has a constant meaning, e.g., common programming languages. For this group of languages the paper tries to solve the problem of incremental semantic analysis, which is based on incremental parsing. In the group of languages with a variable set of lexical units (e.g., the professional typographic system TeX), it is possible to change arbitrarily the meaning of each character of the input file at any time during processing. The change takes effect immediately and its validity may be limited or may extend to the end of the input. For this group of languages the paper tries to solve the case when macros temporarily change the category of arbitrary characters.
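The "variable set of lexical units" problem can be made concrete with a toy scanner whose character-category table (similar in spirit to TeX's catcodes) is mutated by the input itself, taking effect immediately. The control-sequence syntax below is invented for illustration.

```python
# Hedged sketch: a scanner where the input can reclassify a character's
# lexical category mid-stream, changing how the rest of the input tokenizes.
ESCAPE, LETTER, OTHER = 0, 1, 2

def scan(text):
    cat = {c: LETTER for c in "abcdefghijklmnopqrstuvwxyz"}
    cat["\\"] = ESCAPE
    tokens, i = [], 0
    while i < len(text):
        c = text[i]
        if cat.get(c, OTHER) == ESCAPE:          # control sequence: \name
            j = i + 1
            while j < len(text) and cat.get(text[j], OTHER) == LETTER:
                j += 1
            name = text[i + 1:j]
            if name == "catletter" and j < len(text):
                cat[text[j]] = LETTER            # reclassify the next char
                i = j + 1
                continue
            tokens.append(("cs", name))
            i = j
        elif cat.get(c, OTHER) == LETTER:        # letters clump into words
            j = i
            while j < len(text) and cat.get(text[j], OTHER) == LETTER:
                j += 1
            tokens.append(("word", text[i:j]))
            i = j
        else:
            tokens.append(("other", c))
            i += 1
    return tokens

# '1' is OTHER before \catletter1, a LETTER afterwards, so "ab1ab" retokenizes
print(scan("ab1ab\\catletter1ab1ab"))
```

An incremental compiler for such a language cannot re-lex an edited region in isolation: the validity of each cached token depends on the category table in force at that point of the input.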

  2. Linear versus non-linear supersymmetry, in general

    Energy Technology Data Exchange (ETDEWEB)

    Ferrara, Sergio [Theoretical Physics Department, CERN, CH-1211 Geneva 23 (Switzerland); INFN - Laboratori Nazionali di Frascati, Via Enrico Fermi 40, I-00044 Frascati (Italy); Department of Physics and Astronomy, University of California, Los Angeles, CA 90095-1547 (United States)]; Kallosh, Renata [SITP and Department of Physics, Stanford University, Stanford, California 94305 (United States)]; Proeyen, Antoine Van [Institute for Theoretical Physics, Katholieke Universiteit Leuven, Celestijnenlaan 200D, B-3001 Leuven (Belgium)]; Wrase, Timm [Institute for Theoretical Physics, Technische Universität Wien, Wiedner Hauptstr. 8-10, A-1040 Vienna (Austria)]

    2016-04-12

    We study superconformal and supergravity models with constrained superfields. The underlying version of such models with all unconstrained superfields and linearly realized supersymmetry is presented here; in addition to the physical multiplets there are Lagrange multiplier (LM) superfields. Once the equations of motion for the LM superfields are solved, some of the physical superfields become constrained. The linear supersymmetry of the original models becomes non-linearly realized, and its exact form can be deduced from the original linear supersymmetry. Known examples of constrained superfields are shown to require the following LM’s: chiral superfields, linear superfields, and general complex superfields, some of them multiplets with a spin.

  3. Linear versus non-linear supersymmetry, in general

    International Nuclear Information System (INIS)

    Ferrara, Sergio; Kallosh, Renata; Proeyen, Antoine Van; Wrase, Timm

    2016-01-01

    We study superconformal and supergravity models with constrained superfields. The underlying version of such models with all unconstrained superfields and linearly realized supersymmetry is presented here; in addition to the physical multiplets there are Lagrange multiplier (LM) superfields. Once the equations of motion for the LM superfields are solved, some of the physical superfields become constrained. The linear supersymmetry of the original models becomes non-linearly realized, and its exact form can be deduced from the original linear supersymmetry. Known examples of constrained superfields are shown to require the following LM’s: chiral superfields, linear superfields, and general complex superfields, some of them multiplets with a spin.

  4. Inverse Modelling Problems in Linear Algebra Undergraduate Courses

    Science.gov (United States)

    Martinez-Luaces, Victor E.

    2013-01-01

    This paper will offer an analysis from a theoretical point of view of mathematical modelling, applications and inverse problems of both causation and specification types. Inverse modelling problems give the opportunity to establish connections between theory and practice and to show this fact, a simple linear algebra example in two different…

  5. A phenomenological biological dose model for proton therapy based on linear energy transfer spectra.

    Science.gov (United States)

    Rørvik, Eivind; Thörnqvist, Sara; Stokkevåg, Camilla H; Dahle, Tordis J; Fjaera, Lars Fredrik; Ytre-Hauge, Kristian S

    2017-06-01

    The relative biological effectiveness (RBE) of protons varies with the radiation quality, quantified by the linear energy transfer (LET). Most phenomenological models employ a linear dependency of the dose-averaged LET (LETd) to calculate the biological dose. However, several experiments have indicated a possible non-linear trend. Our aim was to investigate if biological dose models including non-linear LET dependencies should be considered, by introducing a LET-spectrum-based dose model. The RBE-LET relationship was investigated by fitting polynomials of 1st to 5th degree to a database of 85 data points from aerobic in vitro experiments. We included both unweighted and weighted regression, the latter taking into account experimental uncertainties. Statistical testing was performed to decide whether higher degree polynomials provided better fits to the data as compared to lower degrees. The newly developed models were compared to three published LETd-based models for a simulated spread out Bragg peak (SOBP) scenario. The statistical analysis of the weighted regression analysis favored a non-linear RBE-LET relationship, with the quartic polynomial found to best represent the experimental data (P = 0.010). The results of the unweighted regression analysis were on the borderline of statistical significance for non-linear functions (P = 0.053), and with the current database a linear dependency could not be rejected. For the SOBP scenario, the weighted non-linear model estimated a similar mean RBE value (1.14) compared to the three established models (1.13-1.17). The unweighted model calculated a considerably higher RBE value (1.22). The analysis indicated that non-linear models could give a better representation of the RBE-LET relationship. However, this is not decisive, as inclusion of the experimental uncertainties in the regression analysis had a significant impact on the determination and ranking of the models. As differences between the models were
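The weighted polynomial fitting described above can be sketched with NumPy. The data below are synthetic, not the 85-point database; note that `np.polyfit`'s `w` argument multiplies the residuals, so `1/sigma` yields inverse-variance-style weighting.

```python
# Hedged sketch: fit RBE-LET data with polynomials of degree 1-5, weighted by
# per-point experimental uncertainty. Data and noise model are invented.
import numpy as np

rng = np.random.default_rng(1)
let = np.linspace(1, 15, 40)                      # LET values (keV/um), synthetic
sigma = 0.05 + 0.02 * rng.random(let.size)        # per-point uncertainty
rbe = 1.0 + 0.03 * let + 0.002 * let**2 + rng.normal(0, sigma)

fits = {}
for deg in range(1, 6):
    coeffs = np.polyfit(let, rbe, deg, w=1.0 / sigma)
    resid = rbe - np.polyval(coeffs, let)
    fits[deg] = float(np.sum((resid / sigma) ** 2))  # weighted chi-square

# Raw chi-square always falls as the degree grows (the models are nested),
# which is why the study relies on significance tests, not raw fit quality.
print({d: round(c, 1) for d, c in fits.items()})
```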

  6. Tracking and recognition face in videos with incremental local sparse representation model

    Science.gov (United States)

    Wang, Chao; Wang, Yunhong; Zhang, Zhaoxiang

    2013-10-01

    This paper addresses the problem of tracking and recognizing faces via incremental local sparse representation. First, a robust face tracking algorithm is proposed, employing local sparse appearance and a covariance pooling method. In the following face recognition stage, with the employment of a novel template update strategy, which combines incremental subspace learning, our recognition algorithm adapts the template to appearance changes and reduces the influence of occlusion and illumination variation. This leads to robust video-based face tracking and recognition with desirable performance. In the experiments, we test the quality of face recognition in real-world noisy videos on the YouTube database, which includes 47 celebrities. Our proposed method produces a high face recognition rate of 95% across all videos. The proposed face tracking and recognition algorithms are also tested on a set of noisy videos under heavy occlusion and illumination variation. The tracking results on challenging benchmark videos demonstrate that the proposed tracking algorithm performs favorably against several state-of-the-art methods. In the case of the challenging dataset in which faces undergo occlusion and illumination variation, and in tracking and recognition experiments under significant pose variation on the University of California, San Diego (Honda/UCSD) database, our proposed method also consistently demonstrates a high recognition rate.

  7. Available pressure amplitude of linear compressor based on phasor triangle model

    Science.gov (United States)

    Duan, C. X.; Jiang, X.; Zhi, X. Q.; You, X. K.; Qiu, L. M.

    2017-12-01

    The linear compressor for cryocoolers possesses the advantages of long-life operation, high efficiency, low vibration and compact structure. It is significant to study the matching mechanism between the compressor and the cold finger, which determines the working efficiency of the cryocooler. However, the output characteristics of the linear compressor are complicated, since they are affected by many interacting parameters. The existing matching methods are simplified and mainly focus on the compressor efficiency and output acoustic power, while neglecting the important output parameter of pressure amplitude. In this study, a phasor triangle model based on analyzing the forces on the piston is proposed. It can be used to predict not only the output acoustic power and the efficiency, but also the pressure amplitude of the linear compressor. Calculated results agree well with measurements from the experiment. With this phasor triangle model, the theoretical maximum output pressure amplitude of the linear compressor can be calculated simply from a known charging pressure and operating frequency. Compared with the mechanical and electrical model of the linear compressor, the new model provides a more intuitive understanding of the matching mechanism with a faster computational process. The model can also explain the experimental observation that the output pressure amplitude is proportional to the piston displacement. By further model analysis, this proportionality is shown to be an expression of an unmatched design of the compressor. The phasor triangle model may provide an alternative method for compressor design and matching with the cold finger.

  8. 21 CFR 874.1070 - Short increment sensitivity index (SISI) adapter.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Short increment sensitivity index (SISI) adapter. 874.1070 Section 874.1070 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN... increment sensitivity index (SISI) adapter. (a) Identification. A short increment sensitivity index (SISI...

  9. A Syntactic-Semantic Approach to Incremental Verification

    OpenAIRE

    Bianculli, Domenico; Filieri, Antonio; Ghezzi, Carlo; Mandrioli, Dino

    2013-01-01

    Software verification of evolving systems is challenging mainstream methodologies and tools. Formal verification techniques often conflict with the time constraints imposed by change management practices for evolving systems. Since changes in these systems are often local to restricted parts, an incremental verification approach could be beneficial. This paper introduces SiDECAR, a general framework for the definition of verification procedures, which are made incremental by the framework...

  10. The Overgeneralization of Linear Models among University Students' Mathematical Productions: A Long-Term Study

    Science.gov (United States)

    Esteley, Cristina B.; Villarreal, Monica E.; Alagia, Humberto R.

    2010-01-01

    Over the past several years, we have been exploring and researching a phenomenon that occurs among undergraduate students that we called extension of linear models to non-linear contexts or overgeneralization of linear models. This phenomenon appears when some students use linear representations in situations that are non-linear. In a first phase,…

  11. Free-piston engine linear generator for hybrid vehicles modeling study

    Science.gov (United States)

    Callahan, T. J.; Ingram, S. K.

    1995-05-01

    A free-piston engine linear generator was investigated for use as an auxiliary power unit for a hybrid electric vehicle. The main focus of the program was to develop an efficient linear generator concept to convert the piston motion directly into electrical power. Computer modeling techniques were used to evaluate five different designs for linear generators. These designs included permanent magnet generators, reluctance generators, linear DC generators, and two- and three-coil induction generators. The efficiency of the linear generator was highly dependent on the design concept. The two-coil induction generator was determined to be the best design, with an efficiency of approximately 90 percent.

  12. Influence of regression model and initial intensity of an incremental test on the relationship between the lactate threshold estimated by the maximal-deviation method and running performance.

    Science.gov (United States)

    Santos-Concejero, Jordan; Tucker, Ross; Granados, Cristina; Irazusta, Jon; Bidaurrazaga-Letona, Iraia; Zabala-Lili, Jon; Gil, Susana María

    2014-01-01

    This study investigated the influence of the regression model and initial intensity during an incremental test on the relationship between the lactate threshold estimated by the maximal-deviation method and performance in elite-standard runners. Twenty-three well-trained runners completed a discontinuous incremental running test on a treadmill. Speed started at 9 km · h(-1) and increased by 1.5 km · h(-1) every 4 min until exhaustion, with a minute of recovery for blood collection. Lactate-speed data were fitted by exponential and polynomial models. The lactate threshold was determined for both models, using all the co-ordinates, excluding the first and excluding the first and second points. The exponential lactate threshold was greater than the polynomial equivalent in any co-ordinate condition (P performance and is independent of the initial intensity of the test.
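The maximal-deviation (Dmax) method referenced above can be sketched in a few lines: fit a curve to the lactate-speed data and take the speed at which the fitted curve deviates most from the straight line joining the first and last points. The data below are invented, and a cubic polynomial stands in for the exponential/polynomial models compared in the study.

```python
# Hedged sketch of the maximal-deviation lactate-threshold estimate.
import numpy as np

speed = np.array([9.0, 10.5, 12.0, 13.5, 15.0, 16.5, 18.0])   # km/h, invented
lactate = np.array([1.1, 1.2, 1.5, 2.1, 3.4, 5.8, 9.5])       # mmol/L, invented

coeffs = np.polyfit(speed, lactate, 3)                 # polynomial lactate model
grid = np.linspace(speed[0], speed[-1], 1000)
curve = np.polyval(coeffs, grid)

# straight line through the first and last co-ordinates
slope = (lactate[-1] - lactate[0]) / (speed[-1] - speed[0])
line = lactate[0] + slope * (grid - speed[0])

threshold = grid[np.argmax(line - curve)]              # speed of maximal deviation
print(round(float(threshold), 1))
```

Excluding the first one or two co-ordinates, as the study does, simply means refitting with `speed[1:]`/`lactate[1:]` and recomputing the chord.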

  13. Neutron stars in non-linear coupling models

    International Nuclear Information System (INIS)

    Taurines, Andre R.; Vasconcellos, Cesar A.Z.; Malheiro, Manuel; Chiapparini, Marcelo

    2001-01-01

    We present a class of relativistic models for nuclear matter and neutron stars which exhibits a parameterization, through mathematical constants, of the non-linear meson-baryon couplings. For appropriate choices of the parameters, it recovers current QHD models found in the literature: Walecka, ZM and ZM3 models. We have found that the ZM3 model predicts a very small maximum neutron star mass, ∼ 0.72 M_sun. A strong similarity between the results of ZM-like models and those with exponential couplings is noted. Finally, we discuss the very intense scalar condensates found in the interior of neutron stars which may lead to negative effective masses. (author)

  14. Neutron stars in non-linear coupling models

    Energy Technology Data Exchange (ETDEWEB)

    Taurines, Andre R.; Vasconcellos, Cesar A.Z. [Rio Grande do Sul Univ., Porto Alegre, RS (Brazil); Malheiro, Manuel [Universidade Federal Fluminense, Niteroi, RJ (Brazil); Chiapparini, Marcelo [Universidade do Estado, Rio de Janeiro, RJ (Brazil)

    2001-07-01

    We present a class of relativistic models for nuclear matter and neutron stars which exhibits a parameterization, through mathematical constants, of the non-linear meson-baryon couplings. For appropriate choices of the parameters, it recovers current QHD models found in the literature: Walecka, ZM and ZM3 models. We have found that the ZM3 model predicts a very small maximum neutron star mass, {approx} 0.72M{sub sun}. A strong similarity between the results of ZM-like models and those with exponential couplings is noted. Finally, we discuss the very intense scalar condensates found in the interior of neutron stars which may lead to negative effective masses. (author)

  15. Contributions to micromechanical model of the non linear behavior of the Callovo-Oxfordian argillite; Contributions a la modelisation micromecanique du comportement non lineaire de l'argilite du callovo-oxfordien

    Energy Technology Data Exchange (ETDEWEB)

    Abou-Chakra Guery, A

    2007-12-15

    This work is performed in the general context of the project of underground disposal of radioactive waste, undertaken by the French National Radioactive Waste Management Agency (ANDRA). Due to its high density and low permeability, the Callovo-Oxfordian argillite formation has been chosen as one of the possible geological barriers to radionuclides. The objective of the study is to develop and validate a non-linear homogenization approach to the mechanical behavior of Callovo-Oxfordian argillites. The material is modelled as a composite constituted of an elasto(visco)plastic clay matrix and of linear elastic or elastic damage inclusions. The macroscopic constitutive law is obtained by adapting the incremental method proposed by Hill. The derived model is first compared to finite element calculations on a unit cell. It is then validated and applied to the prediction of the macroscopic stress-strain responses of the argillite at different geological depths. Finally, the micromechanical model is implemented in a commercial finite element code (Abaqus) for the simulation of a vertical shaft of the underground laboratory. This allows predicting the distribution of damage and plastic strains and characterizing the excavation damage zone (EDZ). (author)

  16. A comparison of linear tyre models for analysing shimmy

    NARCIS (Netherlands)

    Besselink, I.J.M.; Maas, J.W.L.H.; Nijmeijer, H.

    2011-01-01

    A comparison is made between three linear, dynamic tyre models using low speed step responses and yaw oscillation tests. The match with the measurements improves with increasing complexity of the tyre model. Application of the different tyre models to a two degree of freedom trailing arm suspension

  17. S-AMP for non-linear observation models

    DEFF Research Database (Denmark)

    Cakmak, Burak; Winther, Ole; Fleury, Bernard H.

    2015-01-01

    Recently we presented the S-AMP approach, an extension of approximate message passing (AMP), to be able to handle general invariant matrix ensembles. In this contribution we extend S-AMP to non-linear observation models. We obtain generalized AMP (GAMP) as the special case when the measurement...

  18. Diagnostics for Linear Models With Functional Responses

    OpenAIRE

    Xu, Hongquan; Shen, Qing

    2005-01-01

    Linear models where the response is a function and the predictors are vectors are useful in analyzing data from designed experiments and other situations with functional observations. Residual analysis and diagnostics are considered for such models. Studentized residuals are defined and their properties are studied. Chi-square quantile-quantile plots are proposed to check the assumption of Gaussian error process and outliers. Jackknife residuals and an associated test are proposed to det...

  19. New classical r-matrices from integrable non-linear sigma-models

    International Nuclear Information System (INIS)

    Laartz, J.; Bordemann, M.; Forger, M.; Schaper, U.

    1993-01-01

    Non-linear sigma models on Riemannian symmetric spaces constitute the most general class of classical non-linear sigma models which are known to be integrable. Using the current algebra structure of these models, their canonical structure is analyzed and it is shown that their non-ultralocal fundamental Poisson bracket relation is governed by a field-dependent, non-antisymmetric r-matrix obeying a dynamical Yang-Baxter equation. The fundamental Poisson bracket relations and the r-matrix are derived explicitly and a new kind of algebra is found that is supposed to replace the classical Yang-Baxter algebra governing the canonical structure of ultralocal models. (Author) 9 refs

  20. Non-linear mixed-effects pharmacokinetic/pharmacodynamic modelling in NLME using differential equations

    DEFF Research Database (Denmark)

    Tornøe, Christoffer Wenzel; Agersø, Henrik; Madsen, Henrik

    2004-01-01

    The standard software for non-linear mixed-effects analysis of pharmacokinetic/pharmacodynamic (PK/PD) data is NONMEM, while the non-linear mixed-effects package NLME is an alternative as long as the models are fairly simple. We present the nlmeODE package which combines the ordinary differential...... equation (ODE) solver package odesolve and the non-linear mixed-effects package NLME, thereby enabling the analysis of complicated systems of ODEs by non-linear mixed-effects modelling. The pharmacokinetics of the anti-asthmatic drug theophylline is used to illustrate the applicability of the nlme
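The ODE part of such an analysis can be sketched as follows. The original work fits a *mixed-effects* model across subjects in R's nlmeODE; the Python sketch below (a deliberate language swap, not the authors' code) only illustrates fitting a one-compartment oral-absorption model, of the kind commonly used for theophylline, to a single subject's invented data by least squares.

```python
# Hedged sketch: one-compartment oral PK model as ODEs, fitted to one subject.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import curve_fit

dose = 320.0  # mg, hypothetical oral dose

def conc(t, ka, ke, v):
    """Plasma conc from dA_gut/dt = -ka*A_gut, dA_c/dt = ka*A_gut - ke*A_c."""
    sol = solve_ivp(lambda _, y: [-ka * y[0], ka * y[0] - ke * y[1]],
                    (0.0, float(max(t))), [dose, 0.0], t_eval=t)
    return sol.y[1] / v

t_obs = np.array([0.5, 1.0, 2.0, 4.0, 6.0, 8.0, 12.0, 24.0])   # hours
rng = np.random.default_rng(2)
c_obs = conc(t_obs, 1.5, 0.08, 32.0) * (1 + 0.05 * rng.normal(size=t_obs.size))

params, _ = curve_fit(conc, t_obs, c_obs, p0=[1.0, 0.1, 30.0],
                      bounds=(1e-6, np.inf))
ka, ke, v = params
print(round(ka, 2), round(ke, 3), round(v, 1))
```

A mixed-effects version would additionally place random effects on (ka, ke, v) across subjects, which is exactly what the NLME/nlmeODE combination automates.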

  1. Matrix algebra for linear models

    CERN Document Server

    Gruber, Marvin H J

    2013-01-01

    Matrix methods have evolved from a tool for expressing statistical problems to an indispensable part of the development, understanding, and use of various types of complex statistical analyses. This evolution has made matrix methods a vital part of statistical education. Traditionally, matrix methods are taught in courses on everything from regression analysis to stochastic processes, thus creating a fractured view of the topic. Matrix Algebra for Linear Models offers readers a unique, unified view of matrix analysis theory (where and when necessary), methods, and their applications. Written f

  2. A gauge model describing N relativistic particles bound by linear forces

    International Nuclear Information System (INIS)

    Filippov, A.T.

    1988-01-01

    A relativistic model of N particles bound by linear forces is obtained by applying the gauging procedure to the linear canonical symmetries of a simple (rudimentary) nonrelativistic N-particle Lagrangian extended to relativistic phase space. The new (gauged) Lagrangian is formally Poincare invariant, the Hamiltonian is a linear combination of first-class constraints which are closed with respect to Poisson brackets and generate the localized canonical symmetries. The gauge potentials appear as the Lagrange multipliers of the constraints. Gauge fixing and quantization of the model are also briefly discussed. 11 refs

  3. Mathematical modelling in engineering: A proposal to introduce linear algebra concepts

    Directory of Open Access Journals (Sweden)

    Andrea Dorila Cárcamo

    2016-03-01

    Full Text Available The modern dynamic world requires that basic science courses for engineering, including linear algebra, emphasize the development of mathematical abilities primarily associated with modelling and interpreting, which aren't limited only to calculus abilities. Considering this, an instructional design was elaborated based on mathematical modelling and emerging heuristic models for the construction of specific linear algebra concepts: span and spanning set. This was applied to first-year engineering students. Results suggest that this type of instructional design contributes to the construction of these mathematical concepts and can also favour first-year engineering students' understanding of key linear algebra concepts and potentiate the development of higher-order skills.
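The two concepts the design targets have a direct computational reading, which can itself serve as a modelling exercise: a vector lies in the span of a set exactly when appending it does not raise the matrix rank, and a set spans R^n exactly when its rank is n. A minimal sketch:

```python
# Hedged sketch: rank-based tests for "in the span" and "spanning set".
import numpy as np

def in_span(vectors, v):
    """True if v is a linear combination of the given vectors."""
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(np.column_stack([A, v])) == np.linalg.matrix_rank(A)

def spans_Rn(vectors, n):
    """True if the vectors form a spanning set of R^n."""
    return np.linalg.matrix_rank(np.column_stack(vectors)) == n

u1, u2 = np.array([1.0, 0, 1]), np.array([0.0, 1, 1])
print(in_span([u1, u2], np.array([2.0, 3, 5])))   # True: 2*u1 + 3*u2
print(spans_Rn([u1, u2], 3))                      # False: two vectors span at most a plane
```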

  4. Non-linear models for the detection of impaired cerebral blood flow autoregulation.

    Science.gov (United States)

    Chacón, Max; Jara, José Luis; Miranda, Rodrigo; Katsogridakis, Emmanuel; Panerai, Ronney B

    2018-01-01

    The ability to discriminate between normal and impaired dynamic cerebral autoregulation (CA), based on measurements of spontaneous fluctuations in arterial blood pressure (BP) and cerebral blood flow (CBF), has considerable clinical relevance. We studied 45 normal subjects at rest and under hypercapnia induced by breathing a mixture of carbon dioxide and air. Non-linear models with BP as input and CBF velocity (CBFV) as output were implemented with support vector machines (SVM), using separate recordings for learning and validation. Dynamic SVM implementations used either moving-average or autoregressive structures. The efficiency of dynamic CA was estimated from the model's derived CBFV response to a step change in BP, as an autoregulation index, for both linear and non-linear models. Non-linear models with recurrences (autoregressive) showed the best results, with CA indexes of 5.9 ± 1.5 in normocapnia and 2.5 ± 1.2 in hypercapnia, with an area under the receiver-operator curve of 0.955. The high performance achieved by non-linear SVM models in detecting deterioration of dynamic CA should encourage further assessment of its applicability to clinical conditions where CA might be impaired.
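A moving-average variant of this approach can be sketched with scikit-learn: train an SVM regressor on lagged BP samples, then probe the fitted model with a BP step and compare the transient and steady-state CBFV responses. The signals below are synthetic (a toy plant standing in for real recordings), and the probe is a simplified stand-in for the study's autoregulation index.

```python
# Hedged sketch: SVM model of CBFV from lagged BP, probed with a BP step.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(3)
n, lags = 500, 5
bp = rng.normal(100, 10, n)                        # spontaneous BP (mmHg)
# toy "autoregulating" plant: CBFV reacts to a BP change, then partly recovers
cbfv = 50 + 0.4 * bp - 0.3 * np.roll(bp, 1) + rng.normal(0, 0.5, n)

# moving-average structure: CBFV(t) predicted from BP(t), ..., BP(t-4)
X = np.column_stack([np.roll(bp, k) for k in range(lags)])[lags:]
y = cbfv[lags:]
model = SVR(kernel="rbf", C=10.0).fit(X, y)

x_base = np.full(lags, 100.0)                      # BP steady at 100
x_step = x_base.copy()
x_step[0] = 110.0                                  # BP has just jumped to 110
x_steady = np.full(lags, 110.0)                    # BP at 110 for a while
before, transient, after = model.predict(np.vstack([x_base, x_step, x_steady]))

# effective autoregulation: the steady-state rise is smaller than the transient
print(round(transient - before, 2), round(after - before, 2))
```

An autoregressive variant would add lagged CBFV samples to the feature matrix, which is the "recurrences" structure the study found to perform best.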

  5. Analysis of baseline, average, and longitudinally measured blood pressure data using linear mixed models.

    Science.gov (United States)

    Hossain, Ahmed; Beyene, Joseph

    2014-01-01

    This article compares baseline, average, and longitudinal data analysis methods for identifying genetic variants in a genome-wide association study using the Genetic Analysis Workshop 18 data. We apply methods that include (a) linear mixed models with baseline measures, (b) random intercept linear mixed models with mean measures outcome, and (c) random intercept linear mixed models with longitudinal measurements. In the linear mixed models, covariates are included as fixed effects, whereas relatedness among individuals is incorporated as the variance-covariance structure of the random effect for the individuals. The overall strategy of applying linear mixed models to decorrelate the data is based on Aulchenko et al.'s GRAMMAR. By analyzing systolic and diastolic blood pressure, which are used separately as outcomes, we compare the 3 methods in identifying a known genetic variant that is associated with blood pressure from chromosome 3 and simulated phenotype data. We also analyze the real phenotype data to illustrate the methods. We conclude that the linear mixed model with longitudinal measurements of diastolic blood pressure is the most accurate at identifying the known single-nucleotide polymorphism among the methods, but linear mixed models with baseline measures perform best with systolic blood pressure as the outcome.
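The baseline-versus-longitudinal contrast can be sketched on simulated data: a random-intercept mixed model on all visits typically yields a smaller standard error for the SNP effect than an ordinary regression on the first visit alone. All numbers below are invented, and the kinship-based decorrelation of GRAMMAR is omitted for brevity.

```python
# Hedged sketch: (a) baseline-only OLS vs. (c) longitudinal mixed model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n_subj, n_visits = 200, 3
subj = np.repeat(np.arange(n_subj), n_visits)
snp = rng.integers(0, 3, n_subj)                 # minor-allele count per subject
u = rng.normal(0, 3, n_subj)                     # subject-level random intercept
sbp = 120 + 2.0 * snp[subj] + u[subj] + rng.normal(0, 6, subj.size)
df = pd.DataFrame({"subj": subj, "snp": snp[subj], "sbp": sbp})

# (c) longitudinal: all visits, random intercept per subject
long_fit = smf.mixedlm("sbp ~ snp", df, groups=df["subj"]).fit()
# (a) baseline-only: first visit per subject, ordinary least squares
base = df.groupby("subj").first().reset_index()
base_fit = smf.ols("sbp ~ snp", base).fit()

print(round(long_fit.params["snp"], 2),
      round(long_fit.bse["snp"], 2), round(base_fit.bse["snp"], 2))
```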

  6. Shredder: GPU-Accelerated Incremental Storage and Computation

    OpenAIRE

    Bhatotia, Pramod; Rodrigues, Rodrigo; Verma, Akshat

    2012-01-01

    Redundancy elimination using data deduplication and incremental data processing has emerged as an important technique to minimize storage and computation requirements in data center computing. In this paper, we present the design, implementation and evaluation of Shredder, a high performance content-based chunking framework for supporting incremental storage and computation systems. Shredder exploits the massively parallel processing power of GPUs to overcome the CPU bottlenecks of content-ba...
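The content-based chunking that Shredder accelerates can be sketched with a simple polynomial rolling hash: a chunk boundary is declared whenever the low bits of the hash of the last `window` bytes hit a fixed pattern, so boundaries depend on content rather than offsets and realign after insertions. Parameters below are invented; real systems typically use Rabin fingerprints and GPU-parallel scans.

```python
# Hedged sketch of content-defined chunking with a rolling hash.
import random

def chunk(data, window=16, mask=(1 << 12) - 1, min_size=64):
    """Return chunk end-offsets for `data` (bytes)."""
    M = (1 << 61) - 1                          # hash arithmetic is mod 2**61
    base = 257
    power = pow(base, window, 1 << 61)         # base**window mod 2**61
    boundaries, start, h = [], 0, 0
    for i, byte in enumerate(data):
        h = h * base + byte                    # push the new byte
        if i >= window:
            h -= data[i - window] * power      # pop the byte leaving the window
        h &= M
        # boundary whenever the low bits of the rolling hash are all ones
        if i + 1 - start >= min_size and (h & mask) == mask:
            boundaries.append(i + 1)
            start = i + 1
    if start < len(data):
        boundaries.append(len(data))           # final partial chunk
    return boundaries

data = bytes(random.Random(5).randrange(256) for _ in range(50000))
b1 = chunk(data)
b2 = chunk(b"\x00" * 7 + data)                 # same content, shifted by 7 bytes
# boundaries realign after the insertion: most of b1 reappears shifted by 7
print(len(b1), sum(1 for x in b1[:-1] if x + 7 in set(b2)))
```

The per-byte hash update is the CPU bottleneck this kind of framework moves onto the GPU.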

  7. An Incremental Time-delay Neural Network for Dynamical Recurrent Associative Memory

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    An incremental time-delay neural network based on synapse growth, which is suitable for dynamic control and learning of autonomous robots, is proposed to improve the learning and retrieving performance of a dynamical recurrent associative memory architecture. The model allows steady and continuous establishment of associative memory for spatio-temporal regularities and time series in a discrete sequence of inputs. The inserted hidden units can be taken as long-term memories that expand the capacity of the network and sometimes may fade away under certain conditions. Preliminary experiments have shown that this incremental network may be a promising approach to endow autonomous robots with the ability to adapt to new data without destroying the learned patterns. The system also benefits from its potentially chaotic character for emergence.

  8. How to Understand Incrementalism?: Politics of Charles Lindblom’s Theory

    Directory of Open Access Journals (Sweden)

    Krešimir Petković

    2007-01-01

    The paper is dedicated to the political process theory of the American political scientist and economist Charles E. Lindblom. After providing a contextual insight into Lindblom’s complete theoretical opus, which is a necessary prerequisite for the interpretative manoeuvre in the central part of the text, the paper is primarily focused on Lindblom’s theory of incremental decision-making, developed in The Science of Muddling Through (1959) and in A Strategy of Decision (1963), which is related to his concept of “partisan mutual adjustment” developed in The Intelligence of Democracy (1965). The paper offers an interpretation of Lindblom’s argument which moves away from its past understanding in Croatian political science literature. There, Lindblom’s decision-making model has basically been interpreted descriptively, as a description of actual decision-making practices, and opposed to the prescriptive rational decision-making model, which is a characteristic feature even of some foreign interpretations. This paper, however, suggests that Lindblom’s theory contains a strong prescriptive element. Lindblom’s theory of incrementalism, taken together with the pluralist model of partisan mutual adjustment, offers a complete and consistent model of politics with marked normative implications, which justifies the use of the syntagm the politics of theory, substantiated in greater detail in the final section of the paper.

  9. Linear models for joint association and linkage QTL mapping

    Directory of Open Access Journals (Sweden)

    Fernando Rohan L

    2009-09-01

    Background Populational linkage disequilibrium and within-family linkage are commonly used for QTL mapping and marker assisted selection. The combination of both results in more robust and accurate locations of the QTL, but models proposed so far have been either single marker, complex in practice, or well fit only to a particular family structure. Results We herein present linear model theory to derive the additive effects of the QTL alleles in any member of a general pedigree, conditional on the observed markers and pedigree, accounting for possible linkage disequilibrium among QTLs and markers. The model is based on association analysis in the founders; further, the additive effect of the QTLs transmitted to the descendants is a weighted (by the probabilities of transmission) average of the substitution effects of the founders' haplotypes. The model allows for non-complete linkage disequilibrium between QTLs and markers in the founders. Two submodels are presented: a simple and easy-to-implement Haley-Knott type regression for half-sib families, and a general mixed (variance component) model for general pedigrees. The model can use information from all markers. The performance of the regression method is compared by simulation with a more complex IBD method by Meuwissen and Goddard. Numerical examples are provided. Conclusion The linear model theory provides a useful framework for QTL mapping with dense marker maps. Results show similar accuracies but a bias of the IBD method towards the center of the region. Computations for the linear regression model are extremely simple, in contrast with IBD methods. Extensions of the model to genomic selection and multi-QTL mapping are straightforward.

  10. A non-linear dissipative model of magnetism

    Czech Academy of Sciences Publication Activity Database

    Durand, P.; Paidarová, Ivana

    2010-01-01

    Roč. 89, č. 6 (2010), s. 67004 ISSN 1286-4854 R&D Projects: GA AV ČR IAA100400501 Institutional research plan: CEZ:AV0Z40400503 Keywords : non-linear dissipative model of magnetism * thermodynamics * physical chemistry Subject RIV: CF - Physical ; Theoretical Chemistry http://epljournal.edpsciences.org/

  11. Modelling and measurement of a moving magnet linear compressor performance

    International Nuclear Information System (INIS)

    Liang, Kun; Stone, Richard; Davies, Gareth; Dadd, Mike; Bailey, Paul

    2014-01-01

    A novel moving magnet linear compressor with clearance seals and flexure bearings has been designed and constructed. It is suitable for a refrigeration system with a compact heat exchanger, such as would be needed for CPU cooling. The performance of the compressor has been experimentally evaluated with nitrogen and a mathematical model has been developed to evaluate the performance of the linear compressor. The results from the compressor model and the measurements have been compared in terms of cylinder pressure, the ‘P–V’ loop, stroke, mass flow rate and shaft power. The cylinder pressure was not measured directly but was derived from the compressor dynamics and the motor magnetic force characteristics. The comparisons indicate that the compressor model is well validated and can be used to study the performance of this type of compressor, to help with design optimization and the identification of key parameters affecting the system transients. The electrical and thermodynamic losses were also investigated, particularly for the design point (stroke of 13 mm and pressure ratio of 3.0), since a full understanding of these can lead to an increase in compressor efficiency. - Highlights: • Model predictions of the performance of a novel moving magnet linear compressor. • Prototype linear compressor performance measurements using nitrogen. • Reconstruction of P–V loops using a model of the dynamics and electromagnetics. • Close agreement between the model and measurements for the P–V loops. • The design point motor efficiency was 74%, with potential improvements identified

  12. Prediction of minimum temperatures in an alpine region by linear and non-linear post-processing of meteorological models

    Directory of Open Access Journals (Sweden)

    R. Barbiero

    2007-05-01

    Model Output Statistics (MOS) refers to a method of post-processing the direct outputs of numerical weather prediction (NWP) models in order to reduce the biases introduced by a coarse horizontal resolution. This technique is especially useful in orographically complex regions, where large differences can be found between the NWP elevation model and the true orography. This study carries out a comparison of linear and non-linear MOS methods, aimed at the prediction of minimum temperatures in a fruit-growing region of the Italian Alps, based on the output of two different NWPs (ECMWF T511–L60 and LAMI-3). Temperature, of course, is a particularly important NWP output; among other roles it drives the local frost forecast, which is of great interest to agriculture. The mechanisms of cold air drainage, a distinctive aspect of mountain environments, are often unsatisfactorily captured by global circulation models. The simplest post-processing technique applied in this work was a correction for the mean bias, assessed at individual model grid points. We also implemented a multivariate linear regression on the output at the grid points surrounding the target area, and two non-linear models based on machine learning techniques: Neural Networks and Random Forest (RF). We compare the performance of all these techniques on four different NWP data sets. Downscaling the temperatures clearly improved the temperature forecasts with respect to the raw NWP output, and also with respect to the basic mean bias correction. Multivariate methods generally yielded better results, but the advantage of using non-linear algorithms was small, if not negligible. RF, the best performing method, was implemented on ECMWF prognostic output at 06:00 UTC over the 9 grid points surrounding the target area. Mean absolute errors in the prediction of 2 m temperature at 06:00 UTC were approximately 1.2°C, close to the natural variability inside the area itself.
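The simplest post-processing technique described above, a mean-bias correction at a grid point, can be sketched directly. Everything below is synthetic toy data, not the study's forecasts; the bias magnitude and noise level are made-up assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training set: raw NWP minimum-temperature forecasts (deg C)
# at one grid point, and station observations that run ~3.5 C colder.
t_nwp = rng.normal(2.0, 3.0, 365)
t_obs = t_nwp - 3.5 + rng.normal(0.0, 1.2, 365)

# Simplest MOS: subtract the mean bias estimated over the training period.
bias = np.mean(t_nwp - t_obs)
t_mos = t_nwp - bias

mae_raw = np.mean(np.abs(t_nwp - t_obs))
mae_mos = np.mean(np.abs(t_mos - t_obs))
```

The multivariate variants in the study replace the single bias constant with a regression on several surrounding grid points, but the train-then-correct workflow is the same.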

  13. Fault-tolerant incremental diagnosis with limited historical data

    OpenAIRE

    Gillblad, Daniel; Holst, Anders; Steinert, Rebecca

    2006-01-01

    In many diagnosis situations it is desirable to perform a classification in an iterative and interactive manner. All relevant information may not be available initially and must be acquired manually or at a cost. The matter is often complicated by very limited amounts of knowledge and examples when a new system to be diagnosed is initially brought into use. Here, we will describe how to create an incremental classification system based on a statistical model that is trained from empirical dat...

  14. A Linear Viscoelastic Model Calibration of Sylgard 184.

    Energy Technology Data Exchange (ETDEWEB)

    Long, Kevin Nicholas; Brown, Judith Alice

    2017-04-01

    We calibrate a linear thermoviscoelastic model for solid Sylgard 184 (90-10 formulation), a lightly cross-linked, highly flexible isotropic elastomer for use both in Sierra / Solid Mechanics via the Universal Polymer Model as well as in Sierra / Structural Dynamics (Salinas) for use as an isotropic viscoelastic material. Material inputs for the calibration in both codes are provided. The frequency domain master curve of oscillatory shear was obtained from a report from Los Alamos National Laboratory (LANL). However, because the form of that data is different from the constitutive models in Sierra, we also present the mapping of the LANL data onto Sandia’s constitutive models. Finally, blind predictions of cyclic tension and compression out to moderate strains of 40 and 20% respectively are compared with Sandia’s legacy cure schedule material. Although the strain rate of the data is unknown, the linear thermoviscoelastic model accurately predicts the experiments out to moderate strains for the slower strain rates, which is consistent with the expectation that quasistatic test procedures were likely followed. This good agreement comes despite the different cure schedules between the Sandia and LANL data.

  15. Phase-of-flight method for setting the accelerating fields in the ion linear accelerator

    International Nuclear Information System (INIS)

    Dvortsov, S.V.; Lomize, L.G.

    1983-01-01

    For setting the amplitudes and phases of the accelerating fields in multiresonator ion accelerators, the Δt-procedure is presently used. The two unknown RF-field parameters (amplitude and phase) in the n-th resonator are determined and set according to two experimentally measured increments of the particle time-of-flight: the change Δt1 of the time-of-flight in the n-th resonator when the field in that resonator is switched on, and the change Δt2 of the time-of-flight in the (n+1)-th resonator, carrying no RF-field of its own, when the accelerating field in the n-th resonator is switched on. As the particles approach the accelerator exit, their energy increases, the relative energy increment decreases, and the setting accuracy degrades. To enhance the accuracy of setting the accelerating fields in a linear ion accelerator, a phase-of-flight method is developed, in which only the time-of-flight increment Δt measured in one resonator (the one in which the amplitude and phase are being adjusted) is used. Results of simulating point-bunch motion in the IYaI AN USSR linear accelerator are presented

  16. Linearization of the interaction principle: Analytic Jacobians in the 'Radiant' model

    International Nuclear Information System (INIS)

    Spurr, R.J.D.; Christi, M.J.

    2007-01-01

    In this paper we present a new linearization of the Radiant radiative transfer model. Radiant uses discrete ordinates for solving the radiative transfer equation in a multiply-scattering anisotropic medium with solar and thermal sources, but employs the adding method (interaction principle) for the stacking of reflection and transmission matrices in a multilayer atmosphere. For the linearization, we show that the entire radiation field is analytically differentiable with respect to any surface or atmospheric parameter for which we require Jacobians (derivatives of the radiance field). Derivatives of the discrete ordinate solutions are based on existing methods developed for the LIDORT radiative transfer models. Linearization of the interaction principle is completely new and constitutes the major theme of the paper. We discuss the application of the Radiant model and its linearization in the Level 2 algorithm for the retrieval of columns of carbon dioxide as the main target of the Orbiting Carbon Observatory (OCO) mission

  17. A Technique of Fuzzy C-Mean in Multiple Linear Regression Model toward Paddy Yield

    Science.gov (United States)

    Syazwan Wahab, Nur; Saifullah Rusiman, Mohd; Mohamad, Mahathir; Amira Azmi, Nur; Che Him, Norziha; Ghazali Kamardan, M.; Ali, Maselan

    2018-04-01

    In this paper, we propose a hybrid model which combines a multiple linear regression model with the fuzzy c-means method. This research involved the relationship between 20 topsoil variates, analyzed prior to planting, and paddy yields at standard fertilizer rates. Data used were from the multi-location trials for rice carried out by MARDI at major paddy granaries in Peninsular Malaysia during the period from 2009 to 2012. Missing observations were estimated using mean estimation techniques. The data were analyzed using a multiple linear regression model alone and in combination with the fuzzy c-means method. Analysis of normality and multicollinearity indicates that the data are normally scattered without multicollinearity among the independent variables. Fuzzy c-means analysis clusters the paddy yields into two clusters before the multiple linear regression model is applied. The comparison between the two methods indicates that the hybrid of the multiple linear regression model and the fuzzy c-means method outperforms the multiple linear regression model alone, with a lower mean square error.
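As a hedged illustration of the hybrid idea (cluster the yields with fuzzy c-means, then fit a separate regression per cluster), here is a self-contained numpy sketch with made-up data; the real study used 20 soil variates, while this toy uses one:

```python
import numpy as np

rng = np.random.default_rng(2)

def fuzzy_cmeans(x, c=2, m=2.0, iters=100):
    """Plain fuzzy c-means on 1-D data: returns (centers, memberships)."""
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(iters):
        w = u ** m
        centers = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)
    return centers, u

def ols_mse(x, y):
    """Mean squared error of a simple linear regression fit."""
    X = np.column_stack([np.ones(len(x)), x])
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    return float(np.mean((y - X @ b) ** 2))

# Made-up yields with two regimes (low- and high-yield plots).
n = 300
soil = rng.uniform(0.0, 10.0, n)
high = rng.random(n) < 0.5
yld = np.where(high, 10.0 + 0.5 * soil, 2.0 + 0.5 * soil)
yld += rng.normal(0.0, 0.3, n)

# Cluster the yields first, then fit one regression per cluster.
centers, u = fuzzy_cmeans(yld)
label = u.argmax(axis=1)
sizes = np.array([np.sum(label == k) for k in (0, 1)])
mses = np.array([ols_mse(soil[label == k], yld[label == k]) for k in (0, 1)])

mse_hybrid = float(np.sum(sizes * mses) / n)
mse_global = ols_mse(soil, yld)
```

When the yields really do fall into distinct regimes, the per-cluster regressions fit each regime separately, which is where the lower mean square error of the hybrid comes from.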

  18. Linear accelerator modeling: development and application

    International Nuclear Information System (INIS)

    Jameson, R.A.; Jule, W.D.

    1977-01-01

    Most of the parameters of a modern linear accelerator can be selected by simulating the desired machine characteristics in a computer code and observing how the parameters affect the beam dynamics. The code PARMILA is used at LAMPF for the low-energy portion of linacs. Collections of particles can be traced with a free choice of input distributions in six-dimensional phase space. Random errors are often included in order to study the tolerances which should be imposed during manufacture or in operation. An outline is given of the modifications made to the model, the results of experiments which indicate the validity of the model, and the use of the model to optimize the longitudinal tuning of the Alvarez linac

  19. Pornographic image recognition and filtering using incremental learning in compressed domain

    Science.gov (United States)

    Zhang, Jing; Wang, Chao; Zhuo, Li; Geng, Wenhao

    2015-11-01

    With the rapid development and popularity of networks, their openness, anonymity, and interactivity have led to the spread and proliferation of pornographic images on the Internet, doing great harm to adolescents' physical and mental health. With the establishment of image compression standards, pornographic images are mainly stored in compressed formats. Therefore, how to efficiently filter pornographic images is one of the challenging issues for information security. A pornographic image recognition and filtering method in the compressed domain is proposed using incremental learning, which includes the following steps: (1) low-resolution (LR) images are first reconstructed from the compressed stream of pornographic images; (2) visual words are created from the LR image to represent the pornographic image; and (3) after the covering algorithm is used to train and recognize the visual words and build the initial classification model of pornographic images, incremental learning is adopted to continuously adjust the classification rules to recognize new pornographic image samples. The experimental results show that the proposed pornographic image recognition method using incremental learning achieves a higher recognition rate while requiring less recognition time in the compressed domain.

  20. Incremental change or initial differences? Testing two models of marital deterioration.

    Science.gov (United States)

    Lavner, Justin A; Bradbury, Thomas N; Karney, Benjamin R

    2012-08-01

    Most couples begin marriage intent on maintaining a fulfilling relationship, but some newlyweds soon struggle, and others continue to experience high levels of satisfaction. Do these diverse outcomes result from an incremental process that unfolds over time, as prevailing models suggest, or are they a manifestation of initial differences that are largely evident at the start of the marriage? Using 8 waves of data collected over the first 4 years of marriage (N = 502 spouses, or 251 newlywed marriages), we tested these competing perspectives first by identifying 3 qualitatively distinct relationship satisfaction trajectory groups and then by determining the extent to which spouses in these groups were differentiated on the basis of (a) initial scores and (b) 4-year changes in a set of established predictor variables, including relationship problems, aggression, attributions, stress, and self-esteem. The majority of spouses exhibited high, stable satisfaction over the first 4 years of marriage, whereas declining satisfaction was isolated among couples with relatively low initial satisfaction. Across all predictor variables, initial values afforded stronger discrimination of outcome groups than did rates of change in these variables. Thus, readily measured initial differences are potent antecedents of relationship deterioration, and studies are now needed to clarify the specific ways in which initial indices of risk come to influence changes in spouses' judgments of relationship satisfaction. PsycINFO Database Record (c) 2012 APA, all rights reserved.

  1. Optimal difference-based estimation for partially linear models

    KAUST Repository

    Zhou, Yuejin; Cheng, Yebin; Dai, Wenlin; Tong, Tiejun

    2017-01-01

    Difference-based methods have attracted increasing attention for analyzing partially linear models in the recent literature. In this paper, we first propose to solve the optimal sequence selection problem in difference-based estimation for the linear component. To achieve the goal, a family of new sequences and a cross-validation method for selecting the adaptive sequence are proposed. We demonstrate that the existing sequences are only extreme cases in the proposed family. Secondly, we propose a new estimator for the residual variance by fitting a linear regression method to some difference-based estimators. Our proposed estimator achieves the asymptotic optimal rate of mean squared error. Simulation studies also demonstrate that our proposed estimator performs better than the existing estimator, especially when the sample size is small and the nonparametric function is rough.
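The classic first-order difference sequence (the Rice estimator) is the simplest member of the family of difference-based variance estimators discussed above: differencing adjacent ordered observations cancels the smooth nonparametric trend, leaving roughly twice the noise variance. A minimal numpy sketch with synthetic data, not the paper's optimal-sequence method:

```python
import numpy as np

rng = np.random.default_rng(3)

# Nonparametric toy model: smooth trend plus i.i.d. noise.
n = 1000
x = np.sort(rng.random(n))
sigma = 0.5
y = np.sin(2.0 * np.pi * x) + rng.normal(0.0, sigma, n)

# First-order (Rice) difference-based estimator: E[(y_{i+1} - y_i)^2] is
# about 2*sigma^2 because differencing kills the smooth trend.
d1 = np.diff(y)
sigma2_hat = float(np.sum(d1 ** 2) / (2.0 * (n - 1)))
```

The optimal sequences studied in the paper generalize this by replacing the weights (1, -1) with longer difference sequences chosen to minimize the mean squared error of the variance estimate.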

  3. The minimal linear σ model for the Goldstone Higgs

    International Nuclear Information System (INIS)

    Feruglio, F.; Gavela, M.B.; Kanshin, K.; Machado, P.A.N.; Rigolin, S.; Saa, S.

    2016-01-01

    In the context of the minimal SO(5) linear σ-model, a complete renormalizable Lagrangian - including gauge bosons and fermions - is considered, with the symmetry softly broken to SO(4). The scalar sector describes both the electroweak Higgs doublet and the singlet σ. Varying the σ mass allows one to sweep from the regime of perturbative ultraviolet completion to the non-linear one assumed in models in which the Higgs particle is a low-energy remnant of some strong dynamics. We analyze the phenomenological implications and constraints from precision observables and LHC data. Furthermore, we derive the d≤6 effective Lagrangian in the limit of heavy exotic fermions.

  4. Non-linear finite element modelling and analysis of the effect of gasket creep-relaxation on circular bolted flange connections

    International Nuclear Information System (INIS)

    Luyt, P.C.B.; Theron, N.J.; Pietra, F.

    2017-01-01

    It is well known that gasket creep-relaxation results in a reduction of contact pressure between the surface of a gasket and the face of a flange over an extended period of time. This reduction may result in the subsequent failure of the circular bolted flange connection due to leakage. In this paper a pair of flat and raised face integral flanges, PN 10 DN 50 (in accordance with the European EN 1092-1 standard), with non-asbestos compressed fibre ring gaskets with aramid and a nitrile rubber binder were considered. Finite element modelling and analyses were done, for both the circular bolted flange configurations, during the seating condition. The results of the finite element analyses were experimentally validated. It was found that the number of bolt tightening increments as well as the time between the bolt tightening increments had a significant impact on the effect which gasket creep-relaxation had after the seating condition. An increase in either the number of bolting increments or the time between the bolting increments will reduce the effect which gasket creep-relaxation has once the bolts had been fastened. Based on these results it is possible to develop an optimisation scheme to minimize the effect which gasket creep-relaxation has on the contact pressure between the face of the flange and the gasket, after seating, by either increasing or decreasing the number of bolt tightening increments or the time between the bolt tightening increments. - Highlights: • Number of bolt tightening increments and time between bolt tightening increments had significant impact on effect of gasket creep-relaxation after the seating condition. • Impact of gasket creep-relaxation during seating and operating phases investigated by means of finite element analysis and experimentally verified. • Possible to develop optimisation scheme to minimize effect of gasket creep-relaxation on contact pressure between flange face and gasket. • Knowing the contact pressure is

  5. Efficient incremental relaying

    KAUST Repository

    Fareed, Muhammad Mehboob

    2013-07-01

    We propose a novel relaying scheme which improves the spectral efficiency of cooperative diversity systems by utilizing limited feedback from the destination. Our scheme capitalizes on the fact that relaying is only required when the direct transmission suffers deep fading. We calculate the packet error rate for the proposed efficient incremental relaying scheme with both amplify-and-forward and decode-and-forward relaying. Numerical results are also presented to verify their analytical counterparts. © 2013 IEEE.
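The core idea, relay only after the direct link fails, can be illustrated with a small Monte Carlo sketch under assumed Rayleigh fading and a threshold (outage) reception model. The SNR values and the no-combining simplification are illustrative assumptions, not the paper's analysis:

```python
import numpy as np

rng = np.random.default_rng(5)
trials = 200_000
snr_bar, thresh = 10.0, 4.0          # mean SNR and outage threshold (made up)

# Rayleigh fading -> exponentially distributed instantaneous SNR on the
# source-destination, source-relay, and relay-destination links.
g_sd = rng.exponential(snr_bar, trials)
g_sr = rng.exponential(snr_bar, trials)
g_rd = rng.exponential(snr_bar, trials)

direct_fail = g_sd < thresh
# Incremental DF relaying: the relay retransmits only after a NACK;
# the retransmission fails if either hop is in outage (no combining).
relay_fail = (g_sr < thresh) | (g_rd < thresh)

per_direct = float(direct_fail.mean())
per_incremental = float((direct_fail & relay_fail).mean())
# Spectral-efficiency gain: a second channel use happens only on NACKs.
avg_channel_uses = 1.0 + per_direct
```

Because the relay slot is spent only when the direct link is in a deep fade, the average channel uses per packet stay well below the two slots of always-on relaying, which is the spectral-efficiency gain the abstract refers to.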

  6. Abnormal Gait Behavior Detection for Elderly Based on Enhanced Wigner-Ville Analysis and Cloud Incremental SVM Learning

    Directory of Open Access Journals (Sweden)

    Jian Luo

    2016-01-01

    A cloud-based health care system is proposed in this paper for the elderly, providing abnormal gait behavior detection, classification, online diagnosis, and remote aid service. Intelligent mobile terminals with an embedded triaxial acceleration sensor are used to capture the movement and ambulation information of the elderly. The collected signals are first enhanced by a Kalman filter. The magnitude of the signal vector features is then extracted and decomposed into a linear combination of enhanced Gabor atoms. The Wigner-Ville analysis method is introduced and the problem is studied by joint time-frequency analysis. To address the lack of large-scale abnormal behavior data in the training process, a cloud-based incremental SVM (CI-SVM) learning method is proposed. The original abnormal behavior data are first used to obtain the initial SVM classifier. The additional abnormal behavior data of the elderly collected by mobile devices are then gathered on the cloud platform to conduct incremental training and obtain a new SVM classifier. With the CI-SVM learning method, the knowledge of the SVM classifier can be accumulated through dynamic incremental learning. Experimental results demonstrate that the proposed method is feasible and can be applied to aged care, emergency aid, and related fields.

  7. Scalable, incremental learning with MapReduce parallelization for cell detection in high-resolution 3D microscopy data

    KAUST Repository

    Sung, Chul

    2013-08-01

    Accurate estimation of neuronal count and distribution is central to the understanding of the organization and layout of cortical maps in the brain, and changes in the cell population induced by brain disorders. High-throughput 3D microscopy techniques such as Knife-Edge Scanning Microscopy (KESM) are enabling whole-brain survey of neuronal distributions. Data from such techniques pose serious challenges to quantitative analysis due to the massive, growing, and sparsely labeled nature of the data. In this paper, we present a scalable, incremental learning algorithm for cell body detection that can address these issues. Our algorithm is computationally efficient (linear mapping, non-iterative) and does not require retraining (unlike gradient-based approaches) or retention of old raw data (unlike instance-based learning). We tested our algorithm on our rat brain Nissl data set, showing superior performance compared to an artificial neural network-based benchmark, and also demonstrated robust performance in a scenario where the data set is rapidly growing in size. Our algorithm is also highly parallelizable due to its incremental nature, and we demonstrated this empirically using a MapReduce-based implementation of the algorithm. We expect our scalable, incremental learning approach to be widely applicable to medical imaging domains where there is a constant flux of new data. © 2013 IEEE.
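The "linear mapping, non-iterative" property claimed above is characteristic of least-squares learners that accumulate fixed-size sufficient statistics: each new batch updates X^T X and X^T y, so neither retraining over old batches nor retention of old raw data is needed. A generic numpy sketch of this pattern (not the paper's actual cell detector, which operates on image features):

```python
import numpy as np

rng = np.random.default_rng(4)

class IncrementalLeastSquares:
    """Linear model learned from streaming batches.

    Only fixed-size sufficient statistics (A = X^T X, b = X^T y) are
    kept, so old raw data can be discarded and no retraining is needed.
    """
    def __init__(self, dim):
        self.A = np.zeros((dim, dim))
        self.b = np.zeros(dim)

    def partial_fit(self, X, y):
        self.A += X.T @ X
        self.b += X.T @ y
        return self

    def weights(self):
        # Tiny ridge term keeps the solve well-posed early on.
        return np.linalg.solve(self.A + 1e-8 * np.eye(len(self.b)), self.b)

# Stream three batches; the result matches one big batch fit.
w_true = np.array([1.0, -2.0, 0.5])
model = IncrementalLeastSquares(3)
Xs, ys = [], []
for _ in range(3):
    X = rng.normal(size=(100, 3))
    y = X @ w_true + rng.normal(0.0, 0.1, 100)
    model.partial_fit(X, y)
    Xs.append(X)
    ys.append(y)

w_inc = model.weights()
w_batch = np.linalg.lstsq(np.vstack(Xs), np.concatenate(ys), rcond=None)[0]
```

Because the per-batch updates commute, batches can be processed in parallel and merged by summing the statistics, which is exactly what makes a MapReduce-style parallelization natural.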

  8. Robust estimation for partially linear models with large-dimensional covariates.

    Science.gov (United States)

    Zhu, LiPing; Li, RunZe; Cui, HengJian

    2013-10-01

    We are concerned with robust estimation procedures to estimate the parameters in partially linear models with large-dimensional covariates. To enhance interpretability, we suggest implementing a nonconcave regularization method in the robust estimation procedure to select important covariates from the linear component. We establish the consistency for both the linear and the nonlinear components when the covariate dimension diverges at the rate of [Formula: see text], where n is the sample size. We show that the robust estimate of the linear component performs asymptotically as well as its oracle counterpart, which assumes the baseline function and the unimportant covariates were known a priori. With a consistent estimator of the linear component, we estimate the nonparametric component by a robust local linear regression. It is proved that the robust estimate of the nonlinear component performs asymptotically as well as if the linear component were known in advance. Comprehensive simulation studies are carried out and an application is presented to examine the finite-sample performance of the proposed procedures.

  9. Optimization for decision making linear and quadratic models

    CERN Document Server

    Murty, Katta G

    2010-01-01

    While maintaining the rigorous linear programming instruction required, Murty's new book is unique in its focus on developing modeling skills to support valid decision-making for complex real world problems, and includes solutions to brand new algorithms.

  10. A note on probabilistic models over strings: the linear algebra approach.

    Science.gov (United States)

    Bouchard-Côté, Alexandre

    2013-12-01

    Probabilistic models over strings have played a key role in developing methods that take into consideration indels as phylogenetically informative events. There is an extensive literature on using automata and transducers on phylogenies to do inference on these probabilistic models, in which an important theoretical question is the complexity of computing the normalization of a class of string-valued graphical models. This question has been investigated using tools from combinatorics, dynamic programming, and graph theory, and has practical applications in Bayesian phylogenetics. In this work, we revisit this theoretical question from a different point of view, based on linear algebra. The main contribution is a set of results based on this linear algebra view that facilitate the analysis and design of inference algorithms on string-valued graphical models. As an illustration, we use this method to give a new elementary proof of a known result on the complexity of inference on the "TKF91" model, a well-known probabilistic model over strings. Compared to previous work, our proving method is easier to extend to other models, since it relies on a novel weak condition, triangular transducers, which is easy to establish in practice. The linear algebra view provides a concise way of describing transducer algorithms and their compositions, opens the possibility of transferring fast linear algebra libraries (for example, based on GPUs), as well as low rank matrix approximation methods, to string-valued inference problems.
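The linear algebra view can be made concrete with a toy weighted automaton over strings: the sum over all strings of a product of per-symbol matrices collapses into a geometric series, so the normalization reduces to a single linear solve. The two-state model below is entirely made up; its weights are chosen so each state's outgoing weights sum to one, making it a proper probability model whose normalization is exactly 1.

```python
import numpy as np

# A toy 2-state probabilistic model over strings on {a, b}: Ma[s, t] is
# the weight of emitting 'a' while moving from state s to state t, and
# stop[s] is the termination weight in state s.
Ma = np.array([[0.2, 0.1],
               [0.0, 0.3]])
Mb = np.array([[0.1, 0.2],
               [0.3, 0.1]])
M = Ma + Mb                        # one-step transfer matrix (any symbol)
start = np.array([1.0, 0.0])
stop = np.array([0.4, 0.3])

# Normalization Z = sum over all strings = start^T (sum_n M^n) stop.
# The spectral radius of M is < 1, so the series sums to (I - M)^{-1}.
Z = start @ np.linalg.solve(np.eye(2) - M, stop)

# Numerical check: truncate the geometric series at string length 80.
Z_trunc = sum(start @ np.linalg.matrix_power(M, k) @ stop for k in range(80))
```

Composing transducers corresponds to Kronecker-type products of such matrices, which is why fast linear algebra libraries and low-rank approximations transfer directly to string-valued inference, as the abstract suggests.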

  11. Entity versus incremental theories predict older adults' memory performance.

    Science.gov (United States)

    Plaks, Jason E; Chasteen, Alison L

    2013-12-01

    The authors examined whether older adults' implicit theories regarding the modifiability of memory in particular (Studies 1 and 3) and abilities in general (Study 2) would predict memory performance. In Study 1, individual differences in older adults' endorsement of the "entity theory" (a belief that one's ability is fixed) or "incremental theory" (a belief that one's ability is malleable) of memory were measured using a version of the Implicit Theories Measure (Dweck, 1999). Memory performance was assessed with a free-recall task. Results indicated that the higher the endorsement of the incremental theory, the better the free recall. In Study 2, older and younger adults' theories were measured using a more general version of the Implicit Theories Measure that focused on the modifiability of abilities in general. Again, for older adults, the higher the incremental endorsement, the better the free recall. Moreover, as predicted, implicit theories did not predict younger adults' memory performance. In Study 3, participants read mock news articles reporting evidence in favor of either the entity or incremental theory. Those in the incremental condition outperformed those in the entity condition on reading span and free-recall tasks. These effects were mediated by pretask worry such that, for those in the entity condition, higher worry was associated with lower performance. Taken together, these studies suggest that variation in entity versus incremental endorsement represents a key predictor of older adults' memory performance. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  12. Predicting recycling behaviour: Comparison of a linear regression model and a fuzzy logic model.

    Science.gov (United States)

    Vesely, Stepan; Klöckner, Christian A; Dohnal, Mirko

    2016-03-01

    In this paper we demonstrate that fuzzy logic can provide a better tool for predicting recycling behaviour than the customarily used linear regression. To show this, we take a set of empirical data on recycling behaviour (N=664), which we randomly divide into two halves. The first half is used to estimate a linear regression model of recycling behaviour, and to develop a fuzzy logic model of recycling behaviour. As the first comparison, the fit of both models to the data included in estimation of the models (N=332) is evaluated. As the second comparison, predictive accuracy of both models for "new" cases (hold-out data not included in building the models, N=332) is assessed. In both cases, the fuzzy logic model significantly outperforms the regression model in terms of fit. To conclude, when accurate predictions of recycling and possibly other environmental behaviours are needed, fuzzy logic modelling seems to be a promising technique. Copyright © 2015 Elsevier Ltd. All rights reserved.
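The split-half validation protocol described above can be sketched in a few lines. The data here are synthetic stand-ins (the study's actual survey variables are not reproduced), and ordinary least squares stands in for the regression model; the fuzzy logic model is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the survey data (the study's N=664): a single
# predictor ("attitude") and a saturating, hence non-linear, response
# ("recycling behaviour"). All numbers are invented for illustration.
N = 664
x = rng.uniform(0, 10, N)
y = np.tanh(0.5 * x) + rng.normal(0, 0.1, N)

# Random split into an estimation half and a hold-out half, as in the study.
idx = rng.permutation(N)
train, test = idx[:332], idx[332:]

# Ordinary linear regression fitted on the estimation half only.
X = np.column_stack([np.ones(N), x])
beta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)

def mse(rows):
    return float(np.mean((y[rows] - X[rows] @ beta) ** 2))

# In-sample fit vs. predictive accuracy on the "new" (hold-out) cases.
mse_train, mse_test = mse(train), mse(test)
```

Comparing `mse_train` with `mse_test` is exactly the two-step comparison the abstract describes: fit to the estimation half, then predictive accuracy on cases not used in model building.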

  13. Steady-state global optimization of metabolic non-linear dynamic models through recasting into power-law canonical models.

    Science.gov (United States)

    Pozo, Carlos; Marín-Sanguino, Alberto; Alves, Rui; Guillén-Gosálbez, Gonzalo; Jiménez, Laureano; Sorribas, Albert

    2011-08-25

    Design of newly engineered microbial strains for biotechnological purposes would greatly benefit from the development of realistic mathematical models for the processes to be optimized. Such models can then be analyzed and, with the development and application of appropriate optimization techniques, one could identify the modifications that need to be made to the organism in order to achieve the desired biotechnological goal. As appropriate models to perform such an analysis are necessarily non-linear and typically non-convex, finding their global optimum is a challenging task. Canonical modeling techniques, such as Generalized Mass Action (GMA) models based on the power-law formalism, offer a possible solution to this problem because they have a mathematical structure that enables the development of specific algorithms for global optimization. Based on the GMA canonical representation, we have developed in previous works a highly efficient optimization algorithm and a set of related strategies for understanding the evolution of adaptive responses in cellular metabolism. Here, we explore the possibility of recasting kinetic non-linear models into an equivalent GMA model, so that global optimization on the recast GMA model can be performed. With this technique, optimization is greatly facilitated and the results are transposable to the original non-linear problem. This procedure is straightforward for a particular class of non-linear models known as Saturable and Cooperative (SC) models that extend the power-law formalism to deal with saturation and cooperativity. Our results show that recasting non-linear kinetic models into GMA models is indeed an appropriate strategy that helps overcome some of the numerical difficulties that arise during the global optimization task.
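The recasting step can be illustrated on a single saturable rate. Introducing an auxiliary variable turns a Michaelis-Menten term into an exact product of power laws, i.e. a GMA term; the kinetic constants below are hypothetical, not from the paper:

```python
import numpy as np

# Hypothetical kinetic constants (illustration only).
Vmax, Km = 2.0, 0.5

def v_mm(S):
    # Original saturable (non power-law) Michaelis-Menten rate.
    return Vmax * S / (Km + S)

def v_gma(S):
    # Recast form: the auxiliary variable z = Km + S makes the rate a
    # pure product of power laws, Vmax * S**1 * z**-1 — a Generalized
    # Mass Action (GMA) term. The recasting is exact, not approximate.
    z = Km + S
    return Vmax * S**1 * z**(-1)

S = np.linspace(0.01, 10, 200)
max_err = float(np.max(np.abs(v_mm(S) - v_gma(S))))
```

Because the recast system is exactly equivalent on the trajectory of interest, an optimum found for the GMA form transposes back to the original non-linear model, which is the point of the procedure.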

  14. Steady-state global optimization of metabolic non-linear dynamic models through recasting into power-law canonical models

    Directory of Open Access Journals (Sweden)

    Sorribas Albert

    2011-08-01

    Full Text Available Abstract Background Design of newly engineered microbial strains for biotechnological purposes would greatly benefit from the development of realistic mathematical models for the processes to be optimized. Such models can then be analyzed and, with the development and application of appropriate optimization techniques, one could identify the modifications that need to be made to the organism in order to achieve the desired biotechnological goal. As appropriate models to perform such an analysis are necessarily non-linear and typically non-convex, finding their global optimum is a challenging task. Canonical modeling techniques, such as Generalized Mass Action (GMA) models based on the power-law formalism, offer a possible solution to this problem because they have a mathematical structure that enables the development of specific algorithms for global optimization. Results Based on the GMA canonical representation, we have developed in previous works a highly efficient optimization algorithm and a set of related strategies for understanding the evolution of adaptive responses in cellular metabolism. Here, we explore the possibility of recasting kinetic non-linear models into an equivalent GMA model, so that global optimization on the recast GMA model can be performed. With this technique, optimization is greatly facilitated and the results are transposable to the original non-linear problem. This procedure is straightforward for a particular class of non-linear models known as Saturable and Cooperative (SC) models that extend the power-law formalism to deal with saturation and cooperativity. Conclusions Our results show that recasting non-linear kinetic models into GMA models is indeed an appropriate strategy that helps overcome some of the numerical difficulties that arise during the global optimization task.

  15. Parkinson Symptoms and Health Related Quality of Life as Predictors of Costs: A Longitudinal Observational Study with Linear Mixed Model Analysis.

    Directory of Open Access Journals (Sweden)

    Pablo Martinez-Martín

    Full Text Available To estimate the magnitude in which Parkinson's disease (PD) symptoms and health-related quality of life (HRQoL) determined PD costs over a 4-year period. Data collected during 3 months each year, for 4 years, from the ELEP study included sociodemographic, clinical and use-of-resources information. Costs were calculated yearly, as mean 3-month costs/patient, and updated to Spanish €, 2012. Linear mixed models were performed to analyze total, direct and indirect costs based on symptoms and HRQoL. One hundred and seventy-four patients were included. Mean (SD) age: 63 (11) years; mean (SD) disease duration: 8 (6) years. Ninety-three percent were HY I, II or III (mild or moderate disease). Forty-nine percent remained in the same stage during the study period. Clinical evaluation and HRQoL scales showed relatively slight changes over time, demonstrating a stable group overall. Mean (SD) PD total costs increased 92.5%, from €2,082.17 (€2,889.86) in year 1 to €4,008.6 (€7,757.35) in year 4. Total, direct and indirect costs increased 45.96%, 35.63% and 69.69%, respectively, for mild disease, whereas they increased 166.52% (total), 55.68% (direct) and 347.85% (indirect) in patients with moderate PD. For severe patients, costs remained almost the same throughout the study. For each additional point on the SCOPA-Motor scale, total costs increased €75.72 (p = 0.0174); for each additional point on the SCOPA-Motor and the SCOPA-COG, direct costs increased €49.21 (p = 0.0094) and €44.81 (p = 0.0404), respectively; and for each extra point on the pain scale, indirect costs increased €16.31 (p = 0.0228). PD is an expensive disease in Spain. Disease progression and severity as well as motor and cognitive dysfunction are major drivers of cost increments. Therapeutic measures aimed at controlling progression and symptoms could help contain disease expenses.

  16. Modeling the frequency of opposing left-turn conflicts at signalized intersections using generalized linear regression models.

    Science.gov (United States)

    Zhang, Xin; Liu, Pan; Chen, Yuguang; Bai, Lu; Wang, Wei

    2014-01-01

    The primary objective of this study was to identify whether the frequency of traffic conflicts at signalized intersections can be modeled. The opposing left-turn conflicts were selected for the development of conflict predictive models. Using data collected at 30 approaches at 20 signalized intersections, the underlying distributions of the conflicts under different traffic conditions were examined. Different conflict-predictive models were developed to relate the frequency of opposing left-turn conflicts to various explanatory variables. The models considered include a linear regression model, a negative binomial model, and separate models developed for four traffic scenarios. The prediction performance of different models was compared. The frequency of traffic conflicts follows a negative binomial distribution. The linear regression model is not appropriate for the conflict frequency data. In addition, drivers behaved differently under different traffic conditions. Accordingly, the effects of conflicting traffic volumes on conflict frequency vary across different traffic conditions. The occurrences of traffic conflicts at signalized intersections can be modeled using generalized linear regression models. The use of conflict predictive models has potential to expand the uses of surrogate safety measures in safety estimation and evaluation.
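The distributional point can be reproduced with a short simulation: a Poisson-gamma mixture (the standard construction of the negative binomial) yields counts whose variance exceeds the mean, which is why an ordinary linear least-squares model is inappropriate for conflict frequencies. All parameters below are illustrative, not from the study:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate conflict counts as a Poisson-gamma mixture, which is exactly a
# negative binomial: site-to-site heterogeneity enters through a
# gamma-distributed conflict rate.
n_sites = 10_000
mean_rate = 4.0    # mean conflicts per observation period (invented)
shape = 2.0        # gamma shape; smaller -> more overdispersion
rates = rng.gamma(shape, mean_rate / shape, n_sites)
counts = rng.poisson(rates)

m, v = counts.mean(), counts.var()
# Negative binomial: Var = mu + mu^2/shape > mu, unlike the equal-variance
# (or constant-variance) assumption behind Poisson / linear regression.
```

With these parameters the theoretical variance is mu + mu²/shape = 4 + 8 = 12 against a mean of 4, so the overdispersion that motivates the negative binomial GLM is plainly visible in the simulated counts.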

  17. Non-linear sigma model on the fuzzy supersphere

    International Nuclear Information System (INIS)

    Kurkcuoglu, Seckin

    2004-01-01

    In this note we develop fuzzy versions of the supersymmetric non-linear sigma model on the supersphere S^(2,2). In hep-th/0212133 Bott projectors have been used to obtain the fuzzy CP^1 model. Our approach utilizes supersymmetric extensions of these projectors. Here we obtain these (super-)projectors and quantize them in a fashion similar to the one given in hep-th/0212133. We discuss the interpretation of the resulting model as a finite-dimensional matrix model. (author)

  18. Linear and Nonlinear Career Models: Metaphors, Paradigms, and Ideologies.

    Science.gov (United States)

    Buzzanell, Patrice M.; Goldzwig, Steven R.

    1991-01-01

    Examines the linear or bureaucratic career models (dominant in career research, metaphors, paradigms, and ideologies) which maintain career myths of flexibility and individualized routes to success in organizations incapable of offering such versatility. Describes nonlinear career models which offer suggestive metaphors for re-visioning careers…

  19. MAGDM linear-programming models with distinct uncertain preference structures.

    Science.gov (United States)

    Xu, Zeshui S; Chen, Jian

    2008-10-01

    Group decision making with preference information on alternatives is an interesting and important research topic which has been receiving more and more attention in recent years. The purpose of this paper is to investigate multiple-attribute group decision-making (MAGDM) problems with distinct uncertain preference structures. We develop some linear-programming models for dealing with the MAGDM problems, where the information about attribute weights is incomplete, and the decision makers have their preferences on alternatives. The provided preference information can be represented in the following three distinct uncertain preference structures: 1) interval utility values; 2) interval fuzzy preference relations; and 3) interval multiplicative preference relations. We first establish some linear-programming models based on decision matrix and each of the distinct uncertain preference structures and, then, develop some linear-programming models to integrate all three structures of subjective uncertain preference information provided by the decision makers and the objective information depicted in the decision matrix. Furthermore, we propose a simple and straightforward approach in ranking and selecting the given alternatives. It is worth pointing out that the developed models can also be used to deal with the situations where the three distinct uncertain preference structures are reduced to the traditional ones, i.e., utility values, fuzzy preference relations, and multiplicative preference relations. Finally, we use a practical example to illustrate in detail the calculation process of the developed approach.
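One building block of the approach above, a linear program that bounds an alternative's overall value under incomplete weight information, can be sketched as follows. The ratings and weight intervals are invented, and `scipy.optimize.linprog` is used as a generic LP solver:

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative MAGDM fragment (all data invented): one alternative's
# attribute ratings and interval constraints on the unknown weights.
ratings = np.array([0.7, 0.5, 0.9])            # row of the decision matrix
bounds = [(0.2, 0.5), (0.1, 0.4), (0.3, 0.6)]  # incomplete weight information

# Best-case overall value of the alternative: maximize ratings @ w subject
# to sum(w) = 1 and the interval bounds (linprog minimizes, hence the sign
# flip on the objective).
res = linprog(-ratings, A_eq=[[1, 1, 1]], b_eq=[1], bounds=bounds)
w = res.x
best_value = float(ratings @ w)
```

Solving the same LP with the sign flip removed gives the worst-case value; ranking alternatives by such interval scores is the flavour of procedure the abstract describes.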

  20. Linear approximation model network and its formation via ...

    Indian Academy of Sciences (India)

    niques, an alternative `linear approximation model' (LAM) network approach is .... network is LPV, existing LTI theory is difficult to apply (Kailath 1980). ..... Beck J V, Arnold K J 1977 Parameter estimation in engineering and science (New York: ...

  1. Nonlinearity measure and internal model control based linearization in anti-windup design

    Energy Technology Data Exchange (ETDEWEB)

    Perev, Kamen [Systems and Control Department, Technical University of Sofia, 8 Cl. Ohridski Blvd., 1756 Sofia (Bulgaria)

    2013-12-18

    This paper considers the problem of internal model control based linearization in anti-windup design. The nonlinearity measure concept is used for quantifying the control system degree of nonlinearity. The linearizing effect of a modified internal model control structure is presented by comparing the nonlinearity measures of the open-loop and closed-loop systems. It is shown that the linearization properties are improved by increasing the control system local feedback gain. However, it is emphasized that at the same time the stability of the system deteriorates. The conflicting goals of stability and linearization are resolved by solving the design problem in different frequency ranges.

  2. The more you learn, the less you store : Memory-controlled incremental SVM for visual place recognition

    OpenAIRE

    Pronobis, Andrzej; Jie, Luo; Caputo, Barbara

    2010-01-01

    The capability to learn from experience is a key property for autonomous cognitive systems working in realistic settings. To this end, this paper presents an SVM-based algorithm, capable of learning model representations incrementally while keeping under control memory requirements. We combine an incremental extension of SVMs [43] with a method reducing the number of support vectors needed to build the decision function without any loss in performance [15] introducing a parameter which permit...

  3. Linear Model for Optimal Distributed Generation Size Predication

    Directory of Open Access Journals (Sweden)

    Ahmed Al Ameri

    2017-01-01

    Full Text Available This article presents a linear model predicting the optimal size of Distributed Generation (DG) that addresses the minimum power loss. This method is based fundamentally on the strong coupling between active power and voltage angle, as well as between reactive power and voltage magnitude. This paper proposes a simplified method to calculate the total power losses in an electrical grid for different distributed generation sizes and locations. The method has been implemented and tested on several IEEE bus test systems. The results show that the proposed method is capable of predicting the approximate optimal size of DG when compared with precision calculations. The method, which linearizes a complex model, showed good results and can reduce the processing time required. The acceptable accuracy with less time and memory required can help the grid operator to assess a power system integrating large-scale distributed generation.
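The idea of searching candidate DG sizes for minimum loss can be sketched with a toy loss curve. The quadratic coefficients below are hypothetical and not derived from any IEEE test system; real loss calculations would come from a power-flow solution:

```python
import numpy as np

# Toy illustration: on a single feeder, total I^2*R losses are roughly
# quadratic in the DG injection P — injecting some local generation
# reduces line flows, but too much reverses them and losses rise again.
a, b, c = 0.004, 0.32, 12.0          # invented coefficients of the loss curve
sizes = np.linspace(0, 80, 801)      # candidate DG sizes, MW (0.1 MW steps)
losses = a * sizes**2 - b * sizes + c

# Coarse scan over candidate sizes; the analytic optimum is b/(2a) = 40 MW.
best_size = float(sizes[np.argmin(losses)])
```

The paper's contribution is precisely to make each point on such a curve cheap to evaluate via a linearized loss expression, so the scan over sizes and locations stays tractable.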

  4. Dynamic generalized linear models for monitoring endemic diseases

    DEFF Research Database (Denmark)

    Lopes Antunes, Ana Carolina; Jensen, Dan; Hisham Beshara Halasa, Tariq

    2016-01-01

    The objective was to use a Dynamic Generalized Linear Model (DGLM) based on a binomial distribution with a linear trend, for monitoring the PRRS (Porcine Reproductive and Respiratory Syndrome) sero-prevalence in Danish swine herds. The DGLM was described and its performance for monitoring control and eradication programmes based on changes in PRRS sero-prevalence was explored. Results showed a declining trend in PRRS sero-prevalence between 2007 and 2014, suggesting that Danish herds are slowly eradicating PRRS. The simulation study demonstrated the flexibility of DGLMs in adapting to changes in trends in sero-prevalence. Based on this, it was possible to detect variations in the growth model component. This study is a proof-of-concept, demonstrating the use of DGLMs for monitoring endemic diseases. In addition, the principles stated might be useful in general research on monitoring and surveillance...
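A much-simplified stand-in for the binomial DGLM can illustrate how a declining sero-prevalence trend is detected. The sketch below assumes synthetic data and uses Holt's level-plus-trend exponential smoothing in place of the full Bayesian dynamic model; none of the numbers are the Danish PRRS figures:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic monthly sero-prevalence with a slow eradication trend,
# observed through binomial sampling of tested herds (all values invented).
months = 96
true_prev = 0.5 - 0.003 * np.arange(months)
herds = 2000
observed = rng.binomial(herds, true_prev, months) / herds

# Holt's linear smoothing: recursively update a level and a trend
# component — a crude analogue of the DGLM's growth model component.
alpha, beta = 0.3, 0.1                     # smoothing constants (assumed)
level, trend = float(observed[0]), 0.0
for y in observed[1:]:
    prev_level = level
    level = alpha * y + (1 - alpha) * (level + trend)
    trend = beta * (level - prev_level) + (1 - beta) * trend

# A sustained negative trend estimate signals that prevalence is declining.
```

In the real DGLM the trend carries a full posterior distribution, so "detecting variations in the growth model component" becomes a probabilistic statement rather than a point estimate as here.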

  5. Ajuste de modelos estocásticos lineares e não-lineares para a descrição do perfil longitudinal de árvores Fitting linear and nonlinear stochastic models to describe longitudinal tree profile

    Directory of Open Access Journals (Sweden)

    Leonardo Machado Pires

    2007-10-01

    Full Text Available Polynomial models are the most widely used in Brazilian forestry for describing tree profiles, owing to their straightforward fitting and precision. The same does not hold for non-linear models, which are harder to fit. Among the classic non-linear profile models are the Gompertz, Logistic and Weibull models. This study therefore compared linear and non-linear models for describing the longitudinal tree profile. The comparison measures were the coefficient of determination (R²), the residual standard error (syx), the adjusted coefficient of determination (R² adjusted), residual plots, and ease of fitting. The results showed that, among the non-linear models, the best overall performance was given by the Logistic model, although the Gompertz model was better in terms of residual standard error. Among the linear models, the polynomial proposed by Pires & Calegario was superior to the others. When comparing the non-linear models with the linear ones, the Logistic model was better, mainly because the data behave non-linearly, because of the low correlation between its parameters, and because of their easy interpretation, all of which facilitate convergence and fitting.

  6. Lifetime costs of lung transplantation : Estimation of incremental costs

    NARCIS (Netherlands)

    VanEnckevort, PJ; Koopmanschap, MA; Tenvergert, EM; VanderBij, W; Rutten, FFH

    1997-01-01

    Despite an expanding number of centres which provide lung transplantation, information about the incremental costs of lung transplantation is scarce. From 1991 until 1995, a technology assessment was performed in The Netherlands which provided information about the incremental costs of lung transplantation.

  7. A reliable incremental method of computing the limit load in deformation plasticity based on compliance: Continuous and discrete setting

    Czech Academy of Sciences Publication Activity Database

    Haslinger, Jaroslav; Repin, S.; Sysala, Stanislav

    2016-01-01

    Roč. 303, September 2016 (2016), s. 156-170 ISSN 0377-0427 R&D Projects: GA MŠk LQ1602; GA ČR GA13-18652S Institutional support: RVO:68145535 Keywords : variational problems with linear growth energy * incremental limit analysis * elastic-perfectly plastic problems * finite element approximation Subject RIV: BA - General Mathematics Impact factor: 1.357, year: 2016 http://www.sciencedirect.com/science/article/pii/S0377042716300917

  8. A non-linear model of information seeking behaviour

    Directory of Open Access Journals (Sweden)

    Allen E. Foster

    2005-01-01

    Full Text Available The results of a qualitative, naturalistic study of information seeking behaviour are reported in this paper. The study applied the methods recommended by Lincoln and Guba for maximising credibility, transferability, dependability, and confirmability in data collection and analysis. Sampling combined purposive and snowball methods, and led to a final sample of 45 inter-disciplinary researchers from the University of Sheffield. In-depth semi-structured interviews were used to elicit detailed examples of information seeking. Coding of interview transcripts took place in multiple iterations over time and used Atlas-ti software to support the process. The results of the study are represented in a non-linear Model of Information Seeking Behaviour. The model describes three core processes (Opening, Orientation, and Consolidation) and three levels of contextual interaction (Internal Context, External Context, and Cognitive Approach), each composed of several individual activities and attributes. The interactivity and shifts described by the model show information seeking to be non-linear, dynamic, holistic, and flowing. The paper concludes by describing the whole model of behaviours as analogous to an artist's palette, in which activities remain available throughout information seeking. A summary of key implications of the model and directions for further research are included.

  9. How to set the stage for a full-fledged clinical trial testing 'incremental haemodialysis'.

    Science.gov (United States)

    Casino, Francesco Gaetano; Basile, Carlo

    2017-07-21

    Most people who make the transition to maintenance haemodialysis (HD) therapy are treated with a fixed dose of thrice-weekly HD (3HD/week) regimen without consideration of their residual kidney function (RKF). The RKF provides an effective and naturally continuous clearance of both small and middle molecules, plays a major role in metabolic homeostasis, nutritional status and cardiovascular health, and aids in fluid management. The RKF is associated with better patient survival and greater health-related quality of life. Its preservation is instrumental to the prescription of incremental (1HD/week to 2HD/week) HD. The recently heightened interest in incremental HD has been hindered by the current limitations of the urea kinetic model (UKM), which tend to overestimate the needed dialysis dose in the presence of a substantial RKF. A recent paper by Casino and Basile suggested a variable target model (VTM), which gives more clinical weight to the RKF and allows less frequent HD treatments at lower RKF as opposed to the fixed target model, based on the wrong concept of the clinical equivalence between renal and dialysis clearance. A randomized controlled trial (RCT) enrolling incident patients and comparing incremental HD (prescribed according to the VTM) with the standard 3HD/week schedule and focused on hard outcomes, such as survival and health-related quality of life of patients, is urgently needed. The first step in designing such a study is to compute the 'adequacy lines' and the associated fitting equations necessary for the most appropriate allocation of the patients in the two arms and their correct and safe follow-up. In conclusion, the potentially important clinical and financial implications of the incremental HD render it highly promising and warrant RCTs. The UKM is the keystone for conducting such studies. © The Author 2017. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
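The arithmetic behind incremental prescription can be illustrated purely as a sketch, not as a clinical tool: under an assumed fixed weekly clearance target, residual kidney function offsets the dose that dialysis must deliver, so fewer weekly sessions suffice at higher RKF. Both constants below are hypothetical, and a real variable-target model would make the target itself depend on RKF:

```python
import math

# Purely illustrative arithmetic, NOT a prescription tool. Hypothetical
# constants: an assumed total weekly clearance target and an assumed
# clearance contribution per HD session (both in arbitrary mL/min units).
WEEKLY_TARGET = 13.0
PER_SESSION = 4.0

def sessions_per_week(rkf_ml_min):
    # RKF offsets part of the target; dialysis supplies the deficit,
    # capped at the conventional thrice-weekly schedule.
    deficit = max(0.0, WEEKLY_TARGET - rkf_ml_min)
    return min(3, math.ceil(deficit / PER_SESSION))

schedule = {rkf: sessions_per_week(rkf) for rkf in (0.0, 3.0, 6.0, 10.0)}
```

Even this crude fixed-target version shows the qualitative pattern the abstract argues for: anuric patients need the full 3HD/week, while substantial RKF supports 2HD/week or 1HD/week regimens.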

  10. A quantitative analysis of instabilities in the linear chiral sigma model

    International Nuclear Information System (INIS)

    Nemes, M.C.; Nielsen, M.; Oliveira, M.M. de; Providencia, J. da

    1990-08-01

    We present a method to construct a complete set of stationary states corresponding to small-amplitude motion which naturally includes the continuum solution. The energy-weighted sum rule (EWSR) is shown to provide a quantitative criterion for the importance of instabilities, which are known to occur in non-asymptotically free theories. Our results for the linear σ model should be valid for a large class of models. A unified description of baryon and meson properties in terms of the linear σ model is also given. (author)

  11. Valuing a gas-fired power plant: A comparison of ordinary linear models, regime-switching approaches, and models with stochastic volatility

    International Nuclear Information System (INIS)

    Heydari, Somayeh; Siddiqui, Afzal

    2010-01-01

    Energy prices are often highly volatile with unexpected spikes. Capturing these sudden spikes may lead to more informed decision-making in energy investments, such as valuing gas-fired power plants, than ignoring them. In this paper, non-linear regime-switching models and models with mean-reverting stochastic volatility are compared with ordinary linear models. The study is performed using UK electricity and natural gas daily spot prices and suggests that with the aim of valuing a gas-fired power plant with and without operational flexibility, non-linear models with stochastic volatility, specifically for logarithms of electricity prices, provide better out-of-sample forecasts than both linear models and regime-switching models.
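The motivation for moving beyond ordinary linear models can be reproduced in a few lines: a two-regime mixture of shocks (all parameters invented) produces the heavy tails and spikes that a single Gaussian linear model cannot capture:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy two-regime shock process for a log-price series: a calm base regime
# plus a rare, high-volatility spike regime. Parameters are illustrative.
n = 20_000
spike = rng.random(n) < 0.02                  # 2% chance of the spike regime
base = rng.normal(0.0, 1.0, n)                # calm-regime shocks
shocks = np.where(spike, rng.normal(0.0, 6.0, n), base)

# An ordinary linear model with Gaussian errors implies excess kurtosis
# near 0; the regime mixture's excess kurtosis is large and positive.
excess_kurtosis = float(np.mean(shocks**4) / np.mean(shocks**2) ** 2 - 3.0)
```

Regime-switching and stochastic-volatility models are, in effect, ways of fitting this mixture structure explicitly instead of averaging it into one Gaussian error term, which is why they forecast spiky electricity prices better out of sample.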

  12. Linear mixed models a practical guide using statistical software

    CERN Document Server

    West, Brady T; Galecki, Andrzej T

    2014-01-01

    Highly recommended by JASA, Technometrics, and other journals, the first edition of this bestseller showed how to easily perform complex linear mixed model (LMM) analyses via a variety of software programs. Linear Mixed Models: A Practical Guide Using Statistical Software, Second Edition continues to lead readers step by step through the process of fitting LMMs. This second edition covers additional topics on the application of LMMs that are valuable for data analysts in all fields. It also updates the case studies using the latest versions of the software procedures and provides up-to-date information on the options and features of the software procedures available for fitting LMMs in SAS, SPSS, Stata, R/S-plus, and HLM.New to the Second Edition A new chapter on models with crossed random effects that uses a case study to illustrate software procedures capable of fitting these models Power analysis methods for longitudinal and clustered study designs, including software options for power analyses and suggest...

  13. Application of the simplex method of linear programming model to ...

    African Journals Online (AJOL)

    This work discussed how the simplex method of linear programming could be used to maximize the profit of any business firm, using Saclux Paint Company as a case study. It equally elucidated the effect that variation in the optimal result obtained from the linear programming model will have on any given firm. It was demonstrated ...
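A minimal profit-maximization LP of the kind discussed can be set up as follows. The products, profits, and resource limits are invented rather than the company's actual figures, and `scipy.optimize.linprog` stands in for a hand-worked simplex tableau:

```python
from scipy.optimize import linprog

# Hypothetical paint-company data (not from the article): profit per drum
# of two paint lines, limited by raw material and labour availability.
profit = [400, 300]            # profit per drum of products A and B
A_ub = [[4, 2],                # raw material used per drum (kg)
        [2, 3]]                # labour used per drum (hours)
b_ub = [240, 140]              # available raw material and labour

# linprog minimizes, so negate the objective to maximize total profit.
res = linprog([-p for p in profit], A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)])
max_profit = -res.fun
```

Re-solving after perturbing `profit` or `b_ub` shows the sensitivity effect the abstract alludes to: small changes in coefficients can shift the optimal production mix to a different vertex of the feasible region.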

  14. Electron Model of Linear-Field FFAG

    CERN Document Server

    Koscielniak, Shane R

    2005-01-01

    A fixed-field alternating-gradient accelerator (FFAG) that employs only linear-field elements ushers in a new regime in accelerator design and dynamics. The linear-field machine has the ability to compact an unprecedented range in momenta within a small component aperture. With a tune variation which results from the natural chromaticity, the beam crosses many strong, uncorrectable, betatron resonances during acceleration. Further, relativistic particles in this machine exhibit a quasi-parabolic time-of-flight that cannot be addressed with a fixed-frequency rf system. This leads to a new concept of bucketless acceleration within a rotation manifold. With a large energy jump per cell, there is possibly strong synchro-betatron coupling. A few-MeV electron model has been proposed to demonstrate the feasibility of these untested acceleration features and to investigate them at length under a wide range of operating conditions. This paper presents a lattice optimized for a 1.3 GHz rf, initial technology choices f...

  15. Predicting musically induced emotions from physiological inputs: Linear and neural network models

    Directory of Open Access Journals (Sweden)

    Frank A. Russo

    2013-08-01

    Full Text Available Listening to music often leads to physiological responses. Do these physiological responses contain sufficient information to infer emotion induced in the listener? The current study explores this question by attempting to predict judgments of 'felt' emotion from physiological responses alone using linear and neural network models. We measured five channels of peripheral physiology from 20 participants – heart rate, respiration, galvanic skin response, and activity in corrugator supercilii and zygomaticus major facial muscles. Using valence and arousal (VA) dimensions, participants rated their felt emotion after listening to each of 12 classical music excerpts. After extracting features from the five channels, we examined their correlation with VA ratings, and then performed multiple linear regression to see if a linear relationship between the physiological responses could account for the ratings. Although linear models predicted a significant amount of variance in arousal ratings, they were unable to do so with valence ratings. We then used a neural network to provide a nonlinear account of the ratings. The network was trained on the mean ratings of eight of the 12 excerpts and tested on the remainder. Performance of the neural network confirms that physiological responses alone can be used to predict musically induced emotion. The nonlinear model derived from the neural network was more accurate than linear models derived from multiple linear regression, particularly along the valence dimension. A secondary analysis allowed us to quantify the relative contributions of inputs to the nonlinear model. The study represents a novel approach to understanding the complex relationship between physiological responses and musically induced emotion.
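The multiple-linear-regression step can be sketched with synthetic data: five stand-in physiological features, invented coefficients, and least squares with an intercept. Nothing below uses the study's actual measurements:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-in for the study's design: five physiological features
# (think heart rate, respiration, GSR, two EMG channels) and an arousal
# rating that depends on them linearly plus noise. All values invented.
n = 240                                   # e.g. 20 listeners x 12 excerpts
X = rng.normal(size=(n, 5))
true_w = np.array([0.8, 0.3, 0.5, 0.1, -0.2])
arousal = X @ true_w + rng.normal(0.0, 0.5, n)

# Multiple linear regression (ordinary least squares) with an intercept.
Xd = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(Xd, arousal, rcond=None)

# Variance accounted for by the linear model.
pred = Xd @ coef
r2 = float(1 - np.var(arousal - pred) / np.var(arousal))
```

In the study's terms, a linear model does well exactly when the rating really is a noisy linear function of the inputs, as simulated here for arousal; for valence, where the true mapping is non-linear, the same R² calculation comes out low and a neural network is needed.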

  16. Defense Agencies Initiative Increment 2 (DAI Inc 2)

    Science.gov (United States)

    2016-03-01

    2016 Major Automated Information System Annual Report. Defense Agencies Initiative Increment 2 (DAI Inc 2). Defense Acquisition Management... In an ADM dated September 23, 2013, the MDA established Increment 2 as a MAIS program to include budget formulation; grants financial... module. Acronyms: PB - President's Budget; RDT&E - Research, Development, Test, and Evaluation; SAE - Service Acquisition Executive; TBD - To Be Determined; TY - Then Year

  17. Biometrics Enabling Capability Increment 1 (BEC Inc 1)

    Science.gov (United States)

    2016-03-01

    2016 Major Automated Information System Annual Report. Biometrics Enabling Capability Increment 1 (BEC Inc 1). Defense Acquisition Management... DSN Phone: 227-3119; DSN Fax: ; Date Assigned: July 15, 2015. Program Information: Program Name - Biometrics Enabling Capability Increment 1 (BEC Inc 1), DoD... modal biometrics submissions to include iris, face, palm and finger prints from biometrics collection devices, which will support the Warfighter in...

  18. Effect of linear and non-linear blade modelling techniques on simulated fatigue and extreme loads using Bladed

    Science.gov (United States)

    Beardsell, Alec; Collier, William; Han, Tao

    2016-09-01

    There is a trend in the wind industry towards ever larger and more flexible turbine blades. Blade tip deflections in modern blades now commonly exceed 10% of blade length. Historically, the dynamic response of wind turbine blades has been analysed using linear models of blade deflection which include the assumption of small deflections. For modern flexible blades, this assumption is becoming less valid. In order to continue to simulate dynamic turbine performance accurately, routine use of non-linear models of blade deflection may be required. This can be achieved by representing the blade as a connected series of individual flexible linear bodies - referred to in this paper as the multi-part approach. In this paper, Bladed is used to compare load predictions using single-part and multi-part blade models for several turbines. The study examines the impact on fatigue and extreme loads and blade deflection through reduced sets of load calculations based on IEC 61400-1 ed. 3. Damage equivalent load changes of up to 16% and extreme load changes of up to 29% are observed at some turbine load locations. It is found that there is no general pattern in the loading differences observed between single-part and multi-part blade models. Rather, changes in fatigue and extreme loads with a multi-part blade model depend on the characteristics of the individual turbine and blade. Key underlying causes of damage equivalent load change are identified as differences in edgewise-torsional coupling between the multi-part and single-part models, and increased edgewise rotor mode damping in the multi-part model. Similarly, a causal link is identified between torsional blade dynamics and changes in ultimate load results.

  19. 76 FR 53763 - Immigration Benefits Business Transformation, Increment I

    Science.gov (United States)

    2011-08-29

    ..., 100, et al. Immigration Benefits Business Transformation, Increment I; Final Rule... Federal... Benefits Business Transformation, Increment I AGENCY: U.S. Citizenship and Immigration Services, DHS... USCIS is engaged in an enterprise-wide transformation effort to implement new business processes and to...

  20. The low-energy constants of the extended linear sigma model

    Energy Technology Data Exchange (ETDEWEB)

    Divotgey, Florian; Giacosa, Francesco; Kovacs, Peter; Rischke, Dirk H. [Institut fuer Theoretische Physik, Goethe-Universitaet Frankfurt am Main (Germany)

    2016-07-01

    The low-energy dynamics of Quantum Chromodynamics (QCD) is fully determined by the interactions of the (pseudo-)Nambu-Goldstone bosons of spontaneous chiral symmetry breaking, i.e., for two quark flavors, the pions. Pion dynamics is described by the low-energy effective theory of QCD, chiral perturbation theory (ChPT), which is based on the nonlinear realization of chiral symmetry. An alternative description is provided by the Linear Sigma Model, where chiral symmetry is linearly realized. An extended version of this model, the so-called extended Linear Sigma Model (eLSM), was recently developed which incorporates all J^P = 0^±, 1^± anti-qq mesons up to 2 GeV in mass. A fit of the coupling constants of this model to experimentally measured masses and decay widths is of surprisingly good quality. In this talk, it is demonstrated that the low-energy limit of the eLSM, obtained by integrating out all fields which are heavier than the pions, assumes the same form as ChPT. Moreover, the low-energy constants (LECs) of the eLSM agree with those of ChPT.

  1. Ultimate parameters of the photon collider at the international linear ...

    Indian Academy of Sciences (India)

    be achieved by adding more wigglers to the DRs; the incremental cost is easily ... the above emittances, the limit on the effective horizontal β-function is about 5 mm [12 .... coupling in γγ collisions just above the γγ → hh threshold [19]. .... [21] V I Telnov, talk at the ECFA Workshop on Linear Colliders, Montpellier, France, 12–.

  2. Non Linear Modelling and Control of Hydraulic Actuators

    Directory of Open Access Journals (Sweden)

    B. Šulc

    2002-01-01

    Full Text Available This paper deals with non-linear modelling and control of a differential hydraulic actuator. The nonlinear state space equations are derived from basic physical laws. They are more powerful than the transfer function in the case of linear models, and they allow the application of an object-oriented approach in simulation programs. The effects of all friction forces (static, Coulomb and viscous) have been modelled, and many phenomena that are usually neglected are taken into account, e.g., the static term of friction and the leakage between the two chambers and external space. Proportional Differential (PD) and Fuzzy Logic Controllers (FLC) have been applied in order to make a comparison by means of simulation. Simulation is performed using Matlab/Simulink, and some of the results are compared graphically. The FLC is tuned in such a way that it produces a constant control signal close to its maximum (or minimum), where possible. In the case of PD control the occurrence of peaks cannot be avoided. These peaks produce a very high velocity that oversteps the allowed values.

  3. Artificial Neural Network versus Linear Models Forecasting Doha Stock Market

    Science.gov (United States)

    Yousif, Adil; Elfaki, Faiz

    2017-12-01

    The purpose of this study is to determine the instability of the Doha stock market and to develop forecasting models. Linear time series models are used and compared with a nonlinear Artificial Neural Network (ANN), namely the Multilayer Perceptron (MLP) technique. The aim is to establish the most useful model based on daily and monthly data collected from the Qatar exchange for the period from January 2007 to January 2015. Models are proposed for the general index of the Qatar stock exchange as well as for several other sectors. With the help of these models, the Doha stock market index and various other sectors were predicted. The study was conducted using various time series techniques to study and analyze the data trend in order to produce appropriate results. After applying several models, such as the quadratic trend model, the double exponential smoothing model, and ARIMA, it was concluded that ARIMA (2,2) was the most suitable linear model for the daily general index. However, the ANN model was found to be more accurate than the time series models.
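Among the linear candidates compared above, double exponential smoothing is simple enough to sketch directly. Below is a minimal illustration of Holt's linear-trend form; the smoothing constants `alpha` and `beta` are illustrative assumptions, not values from the study:

```python
def holt(series, alpha=0.5, beta=0.3):
    """Double exponential (Holt) smoothing: track a level and a trend."""
    level = series[0]
    trend = series[1] - series[0]
    for x in series[1:]:
        last_level = level
        level = alpha * x + (1 - alpha) * (level + trend)         # smooth the level
        trend = beta * (level - last_level) + (1 - beta) * trend  # smooth the trend
    return level, trend

def forecast(level, trend, horizon):
    """h-step-ahead forecasts extrapolate the final level along the trend."""
    return [level + (h + 1) * trend for h in range(horizon)]
```

On a perfectly linear series the method recovers the trend exactly; real index data would of course require tuning the constants against held-out observations.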

  4. Shakedown analysis by finite element incremental procedures

    International Nuclear Information System (INIS)

    Borkowski, A.; Kleiber, M.

    1979-01-01

    It is a common occurrence in many practical problems that external loads are variable and the exact time-dependent history of loading is unknown. Instead, the load is characterized by a given loading domain: a convex polyhedron in the n-dimensional space of load parameters. The problem is then to check whether or not a structure shakes down, i.e. responds elastically after a few elasto-plastic cycles, under a variable loading as defined above. Such a check can be performed by an incremental procedure. One should reproduce incrementally a simple cyclic process which consists of proportional load paths that connect the origin of the load space with the corners of the loading domain. It was proved that if a structure shakes down under such a loading history, then it is able to adapt itself to an arbitrary load path contained in the loading domain. The main advantage of this approach is the possibility of using existing incremental finite-element computer codes. (orig.)

  5. Unification of three linear models for the transient visual system

    NARCIS (Netherlands)

    Brinker, den A.C.

    1989-01-01

    Three different linear filters are considered as models describing the experimentally determined triphasic impulse responses of discs. These impulse responses are associated with the transient visual system. Each model reveals a different feature of the system. Unification of the models is

  6. An improved robust model predictive control for linear parameter-varying input-output models

    NARCIS (Netherlands)

    Abbas, H.S.; Hanema, J.; Tóth, R.; Mohammadpour, J.; Meskin, N.

    2018-01-01

    This paper describes a new robust model predictive control (MPC) scheme to control the discrete-time linear parameter-varying input-output models subject to input and output constraints. Closed-loop asymptotic stability is guaranteed by including a quadratic terminal cost and an ellipsoidal terminal

  7. A Detailed Analytical Study of Non-Linear Semiconductor Device Modelling

    Directory of Open Access Journals (Sweden)

    Umesh Kumar

    1995-01-01

    junction diode have been developed. The results of computer-simulated examples have been presented in each case. The non-linear lumped model for the Gunn diode is a unified model, as it describes the diffusion effects as the domain travels from cathode to anode. An additional feature of this model is that it describes the domain extinction and nucleation phenomena in Gunn diodes with the help of a simple timing circuit. The non-linear lumped model for the SCR is general and is valid under any mode of operation in any circuit environment. The memristive circuit model for p-n junction diodes is capable of realistically simulating the diode's dynamic behavior under reverse, forward and sinusoidal operating modes. The model uses the memristor, a charge-controlled resistor, to mimic various second-order effects due to conductivity modulation. It is found that both the storage time and the fall time of the diode can be accurately predicted.

  8. Separation-induced boundary layer transition: Modeling with a non-linear eddy-viscosity model coupled with the laminar kinetic energy equation

    International Nuclear Information System (INIS)

    Vlahostergios, Z.; Yakinthos, K.; Goulas, A.

    2009-01-01

    We present an effort to model the separation-induced transition on a flat plate with a semi-circular leading edge, using a cubic non-linear eddy-viscosity model combined with the laminar kinetic energy. A non-linear model, compared to a linear one, has the advantage of resolving the anisotropic behavior of the Reynolds stresses in the near-wall region, and it provides a more accurate expression for the generation of turbulence in the transport equation of the turbulence kinetic energy. Although in its original formulation the model is not able to accurately predict the separation-induced transition, the inclusion of the laminar kinetic energy increases its accuracy. The adoption of the laminar kinetic energy by the non-linear model is presented in detail, together with some additional modifications required for the adaptation of the laminar kinetic energy to the basic concepts of the non-linear eddy-viscosity model. The computational results using the proposed combined model are shown together with those obtained using an isotropic linear eddy-viscosity model which also adopts the laminar kinetic energy concept, and in comparison with the existing experimental data.

  9. Modelling non-linear effects of dark energy

    Science.gov (United States)

    Bose, Benjamin; Baldi, Marco; Pourtsidou, Alkistis

    2018-04-01

    We investigate the capabilities of perturbation theory in capturing non-linear effects of dark energy. We test constant and evolving w models, as well as models involving momentum exchange between dark energy and dark matter. Specifically, we compare perturbative predictions at 1-loop level against N-body results for four non-standard equations of state as well as varying degrees of momentum exchange between dark energy and dark matter. The interaction is modelled phenomenologically using a time dependent drag term in the Euler equation. We make comparisons at the level of the matter power spectrum and the redshift space monopole and quadrupole. The multipoles are modelled using the Taruya, Nishimichi and Saito (TNS) redshift space spectrum. We find perturbation theory does very well in capturing non-linear effects coming from dark sector interaction. We isolate and quantify the 1-loop contribution coming from the interaction and from the non-standard equation of state. We find the interaction parameter ξ amplifies scale dependent signatures in the range of scales considered. Non-standard equations of state also give scale dependent signatures within this same regime. In redshift space the match with N-body is improved at smaller scales by the addition of the TNS free parameter σv. To quantify the importance of modelling the interaction, we create mock data sets for varying values of ξ using perturbation theory. This data is given errors typical of Stage IV surveys. We then perform a likelihood analysis using the first two multipoles on these sets and a ξ=0 modelling, ignoring the interaction. We find the fiducial growth parameter f is generally recovered even for very large values of ξ both at z=0.5 and z=1. The ξ=0 modelling is most biased in its estimation of f for the phantom w=‑1.1 case.

  10. Generalized linear mixed models modern concepts, methods and applications

    CERN Document Server

    Stroup, Walter W

    2012-01-01

    PART I The Big PictureModeling BasicsWhat Is a Model?Two Model Forms: Model Equation and Probability DistributionTypes of Model EffectsWriting Models in Matrix FormSummary: Essential Elements for a Complete Statement of the ModelDesign MattersIntroductory Ideas for Translating Design and Objectives into ModelsDescribing "Data Architecture" to Facilitate Model SpecificationFrom Plot Plan to Linear PredictorDistribution MattersMore Complex Example: Multiple Factors with Different Units of ReplicationSetting the StageGoals for Inference with Models: OverviewBasic Tools of InferenceIssue I: Data

  11. Application of the Incremental Dynamic Analysis (IDA) Method for Studying the Dynamic Behavior of Structures During Earthquakes

    Directory of Open Access Journals (Sweden)

    M. Javanpour

    2017-02-01

    Full Text Available Prediction of existing buildings' vulnerability to future earthquakes is one of the most essential topics in structural engineering. Modeling steel structures is a giant step in determining the damage caused by an earthquake, as such structures are increasingly being used in construction. Hence, two same-order steel structures with two types of structural systems were selected (coaxial moment frames and moment frame). In most cases, a specific structure needs to satisfy several functional levels. For this purpose, a method is required to determine the input demand on the structures under possible earthquakes. Therefore, Incremental Dynamic Analysis (IDA) was preferred to the push-over non-linear static method for the analysis and design of the considered steel structures, due to its accuracy and its accounting for the effect of higher modes at the same time intervals. OpenSees software was used to perform accurate nonlinear analysis of the steel structures. Two parameters (spectral acceleration and maximum ground acceleration) were introduced to the modeled frames to compare the numerical correlations of seismic vulnerability obtained by two statistical methods based on the "log-normal distribution" and "logistic distribution", and finally, the displacement and drift parameters were assessed after analysis.

  12. Wavefront Sensing for WFIRST with a Linear Optical Model

    Science.gov (United States)

    Jurling, Alden S.; Content, David A.

    2012-01-01

    In this paper we develop methods to use a linear optical model to capture the field dependence of wavefront aberrations in a nonlinear optimization-based phase retrieval algorithm for image-based wavefront sensing. The linear optical model is generated from a ray trace model of the system and allows the system state to be described in terms of mechanical alignment parameters rather than wavefront coefficients. This approach allows joint optimization over images taken at different field points and does not require separate convergence of phase retrieval at individual field points. Because the algorithm exploits field diversity, multiple defocused images per field point are not required for robustness. Furthermore, because it is possible to simultaneously fit images of many stars over the field, it is not necessary to use a fixed defocus to achieve adequate signal-to-noise ratio despite having images with high dynamic range. This allows high performance wavefront sensing using in-focus science data. We applied this technique in a simulation model based on the Wide Field Infrared Survey Telescope (WFIRST) Intermediate Design Reference Mission (IDRM) imager using a linear optical model with 25 field points. We demonstrate sub-thousandth-wave wavefront sensing accuracy in the presence of noise and moderate undersampling for both monochromatic and polychromatic images using 25 high-SNR target stars. Using these high-quality wavefront sensing results, we are able to generate upsampled point-spread functions (PSFs) and use them to determine PSF ellipticity to high accuracy in order to reduce the systematic impact of aberrations on the accuracy of galactic ellipticity determination for weak-lensing science.

  13. Automatic incrementalization of Prolog based static analyses

    DEFF Research Database (Denmark)

    Eichberg, Michael; Kahl, Matthias; Saha, Diptikalyan

    2007-01-01

    Modern development environments integrate various static analyses into the build process. Analyses that analyze the whole project whenever the project changes are impractical in this context. We present an approach to automatic incrementalization of analyses that are specified as tabled logic...... programs and evaluated using incremental tabled evaluation, a technique for efficiently updating memo tables in response to changes in facts and rules. The approach has been implemented and integrated into the Eclipse IDE. Our measurements show that this technique is effective for automatically...

  14. Heterotic non-linear sigma models with anti-de Sitter target spaces

    International Nuclear Information System (INIS)

    Michalogiorgakis, Georgios; Gubser, Steven S.

    2006-01-01

    We calculate the beta function of non-linear sigma models with S^(D+1) and AdS_(D+1) target spaces in a 1/D expansion, up to order 1/D^2 and to all orders in α'. This beta function encodes partial information about the spacetime effective action for the heterotic string to all orders in α'. We argue that a zero of the beta function, corresponding to a worldsheet CFT with AdS_(D+1) target space, arises from competition between the one-loop and higher-loop terms, similarly to the bosonic and supersymmetric cases studied previously in [J.J. Friess, S.S. Gubser, Non-linear sigma models with anti-de Sitter target spaces, Nucl. Phys. B 750 (2006) 111-141]. Various critical exponents of the non-linear sigma model are calculated, and checks of the calculation are presented.

  15. Non Linear sigma models probing the string structure

    International Nuclear Information System (INIS)

    Abdalla, E.

    1987-01-01

    The introduction of a term depending on the extrinsic curvature into the string action, and related non-linear sigma models defined on the symmetric space SO(D)/SO(2) x SO(d-2), is discussed. Couplings to fermions are also treated. (author) [pt

  16. Weighted functional linear regression models for gene-based association analysis.

    Science.gov (United States)

    Belonogova, Nadezhda M; Svishcheva, Gulnara R; Wilson, James F; Campbell, Harry; Axenovich, Tatiana I

    2018-01-01

    Functional linear regression models are effectively used in gene-based association analysis of complex traits. These models combine information about individual genetic variants, taking into account their positions and reducing the influence of noise and/or observation errors. To increase the power of methods, where several differently informative components are combined, weights are introduced to give the advantage to more informative components. Allele-specific weights have been introduced to collapsing and kernel-based approaches to gene-based association analysis. Here we have for the first time introduced weights to functional linear regression models adapted for both independent and family samples. Using data simulated on the basis of GAW17 genotypes and weights defined by allele frequencies via the beta distribution, we demonstrated that type I errors correspond to declared values and that increasing the weights of causal variants allows the power of functional linear models to be increased. We applied the new method to real data on blood pressure from the ORCADES sample. Five of the six known genes with P models. Moreover, we found an association between diastolic blood pressure and the VMP1 gene (P = 8.18×10-6), when we used a weighted functional model. For this gene, the unweighted functional and weighted kernel-based models had P = 0.004 and 0.006, respectively. The new method has been implemented in the program package FREGAT, which is freely available at https://cran.r-project.org/web/packages/FREGAT/index.html.
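The abstract's weights are "defined by allele frequencies via the beta distribution". A sketch of such a weighting function follows; the shape parameters Beta(1, 25), which strongly up-weight rare variants, are a convention borrowed from kernel-based association tests and are an assumption here, not necessarily the paper's exact choice:

```python
import math

def beta_weight(maf, a=1.0, b=25.0):
    """Beta-density weight for a variant with minor allele frequency maf.

    With the assumed shapes a=1, b=25, rare variants receive much larger
    weights than common ones; a=b=1 recovers uniform (unweighted) analysis."""
    norm = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return norm * maf ** (a - 1) * (1.0 - maf) ** (b - 1)
```

In a weighted functional regression, each variant's contribution to the fitted genotype function would then be scaled by its weight before the model is estimated.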

  17. Mission Planning System Increment 5 (MPS Inc 5)

    Science.gov (United States)

    2016-03-01

    2016 Major Automated Information System Annual Report Mission Planning System Increment 5 (MPS Inc 5) Defense Acquisition Management Information...President’s Budget RDT&E - Research, Development, Test, and Evaluation SAE - Service Acquisition Executive TBD - To Be Determined TY - Then Year...Phone: 845-9625 DSN Fax: Date Assigned: May 19, 2014 Program Information Program Name Mission Planning System Increment 5 (MPS Inc 5) DoD

  18. MRI: Modular reasoning about interference in incremental programming

    OpenAIRE

    Oliveira, Bruno C. D. S; Schrijvers, Tom; Cook, William R

    2012-01-01

    Incremental Programming (IP) is a programming style in which new program components are defined as increments of other components. Examples of IP mechanisms include: Object-oriented programming (OOP) inheritance, aspect-oriented programming (AOP) advice and feature-oriented programming (FOP). A characteristic of IP mechanisms is that, while individual components can be independently defined, the composition of components makes those components become tightly coupled, sh...

  19. Incremental short daily home hemodialysis: a case series

    OpenAIRE

    Toth-Manikowski, Stephanie M.; Mullangi, Surekha; Hwang, Seungyoung; Shafi, Tariq

    2017-01-01

    Background Patients starting dialysis often have substantial residual kidney function. Incremental hemodialysis provides a hemodialysis prescription that supplements patients' residual kidney function while maintaining total (residual + dialysis) urea clearance (standard Kt/Vurea) targets. We describe our experience with incremental hemodialysis in patients using NxStage System One for home hemodialysis. Case presentation From 2011 to 2015, we initiated 5 incident hemodialysis patients on an ...

  20. Optical linear algebra processors - Noise and error-source modeling

    Science.gov (United States)

    Casasent, D.; Ghosh, A.

    1985-01-01

    The modeling of system and component noise and error sources in optical linear algebra processors (OLAPs) are considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.

  1. Optical linear algebra processors: noise and error-source modeling.

    Science.gov (United States)

    Casasent, D; Ghosh, A

    1985-06-01

    The modeling of system and component noise and error sources in optical linear algebra processors (OLAP's) are considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.

  2. OPLS statistical model versus linear regression to assess sonographic predictors of stroke prognosis.

    Science.gov (United States)

    Vajargah, Kianoush Fathi; Sadeghi-Bazargani, Homayoun; Mehdizadeh-Esfanjani, Robab; Savadi-Oskouei, Daryoush; Farhoudi, Mehdi

    2012-01-01

    The objective of the present study was to assess the comparable applicability of the orthogonal projections to latent structures (OPLS) statistical model vs traditional linear regression in order to investigate the role of transcranial Doppler (TCD) sonography in predicting ischemic stroke prognosis. The study was conducted on 116 ischemic stroke patients admitted to a specialty neurology ward. The Unified Neurological Stroke Scale was used once for clinical evaluation in the first week of admission and again six months later. All data were primarily analyzed using simple linear regression and later considered for multivariate analysis using PLS/OPLS models through the SIMCA P+12 statistical software package. The linear regression analysis results used for the identification of TCD predictors of stroke prognosis were confirmed through the OPLS modeling technique. Moreover, in comparison to linear regression, the OPLS model appeared to have higher sensitivity in detecting the predictors of ischemic stroke prognosis and detected several more predictors. Applying the OPLS model made it possible to use both single TCD measures/indicators and arbitrarily dichotomized measures of TCD single-vessel involvement as well as the overall TCD result. In conclusion, the authors recommend PLS/OPLS methods as complementary rather than alternative to the available classical regression models such as linear regression.

  3. Risk evaluations of aging phenomena: The linear aging reliability model and its extensions

    International Nuclear Information System (INIS)

    Vesely, W.E.; Wolford, A.J.

    1988-01-01

    A model for component failure rates due to aging mechanisms is developed from basic phenomenological considerations. In the treatment, the occurrences of deterioration are modeled as following a Poisson process. The severity of damage is allowed to have any distribution, however the damage is assumed to accumulate independently. Finally, the failure rate is modeled as being proportional to the accumulated damage. Using this treatment, the linear aging failure rate model is obtained. The applicability of the linear aging model to various mechanisms is discussed. Extensions of the model to cover nonlinear and dependent aging phenomena are also described. The implementability of the linear aging model is demonstrated by applying it to the aging data collected in the U.S. NRC Nuclear Plant Aging Research (NPAR) Program. (orig./HP)
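The linear aging form lends itself to a compact illustration. The sketch below assumes the standard linear failure rate λ(t) = λ0 + a·t and the reliability obtained by integrating it, R(t) = exp(−(λ0·t + a·t²/2)); the parameters λ0 and a are illustrative, not values from the NPAR data:

```python
import math

def failure_rate(t, lam0, a):
    """Linear aging model: base rate lam0 plus linearly accumulated aging a*t."""
    return lam0 + a * t

def reliability(t, lam0, a):
    """R(t) = exp(-integral of lambda(s) from 0 to t) = exp(-(lam0*t + a*t**2/2))."""
    return math.exp(-(lam0 * t + 0.5 * a * t * t))
```

Setting a = 0 recovers the constant-rate (exponential) model, which is one way to sanity-check the implementation.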

  4. Volatilities, Traded Volumes, and Price Increments in Derivative Securities

    Science.gov (United States)

    Kim, Kyungsik; Lim, Gyuchang; Kim, Soo Yong; Scalas, Enrico

    2007-03-01

    We apply detrended fluctuation analysis (DFA) to the statistics of Korean treasury bond (KTB) futures, from which the logarithmic increments, volatilities, and traded volumes are estimated over a specific time lag. In our case, the logarithmic increment of futures prices has no long-memory property, while the volatility and the traded volume exhibit long-memory properties. To determine whether the volatility clustering is due to an inherent higher-order correlation not detected by applying the DFA directly to the logarithmic increments of the KTB futures, it is important to shuffle the original tick data of futures prices and to generate a geometric Brownian random walk with the same mean and standard deviation. Comparison of the three tick data sets shows that the higher-order correlation inherent in the logarithmic increments produces the volatility clustering. In particular, the DFA results on volatilities and traded volumes may support the hypothesis of price changes.
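For readers unfamiliar with the method, a minimal DFA implementation looks roughly as follows (first-order detrending and the window sizes are assumptions; the paper's exact settings are not given in the abstract):

```python
import numpy as np

def dfa(x, window_sizes):
    """Return the DFA fluctuation function F(n) for each window size n."""
    profile = np.cumsum(x - np.mean(x))        # integrated (profile) series
    fluctuations = []
    for n in window_sizes:
        n_windows = len(profile) // n
        rms = []
        for i in range(n_windows):
            seg = profile[i * n:(i + 1) * n]
            t = np.arange(n)
            coeffs = np.polyfit(t, seg, 1)     # local linear trend
            rms.append(np.sqrt(np.mean((seg - np.polyval(coeffs, t)) ** 2)))
        fluctuations.append(np.mean(rms))
    return np.array(fluctuations)

def dfa_exponent(x, window_sizes):
    """Slope of log F(n) vs log n; about 0.5 for uncorrelated increments."""
    F = dfa(x, window_sizes)
    return np.polyfit(np.log(window_sizes), np.log(F), 1)[0]
```

An exponent near 0.5 indicates no long memory (as reported here for the log-increments), while values approaching 1 indicate persistent long-range correlation (as for volatility and volume).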

  5. Using Quartile-Quartile Lines as Linear Models

    Science.gov (United States)

    Gordon, Sheldon P.

    2015-01-01

    This article introduces the notion of the quartile-quartile line as an alternative to the regression line and the median-median line to produce a linear model based on a set of data. It is based on using the first and third quartiles of a set of (x, y) data. Dynamic spreadsheets are used as exploratory tools to compare the different approaches and…
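A sketch of the idea, assuming the quartile-quartile line is taken through the points (Q1 of x, Q1 of y) and (Q3 of x, Q3 of y):

```python
import statistics

def q1_q3(data):
    """First and third quartiles via the standard library."""
    q = statistics.quantiles(data, n=4)        # returns [Q1, Q2, Q3]
    return q[0], q[2]

def quartile_quartile_line(xs, ys):
    """Slope and intercept of the line through (Q1x, Q1y) and (Q3x, Q3y)."""
    x1, x3 = q1_q3(xs)
    y1, y3 = q1_q3(ys)
    slope = (y3 - y1) / (x3 - x1)
    return slope, y1 - slope * x1
```

Because quartiles are resistant to outliers, the resulting line is less sensitive to extreme points than the least-squares regression line, which is the pedagogical point of the comparison.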

  6. Otolith development in larval and juvenile Schizothorax davidi: ontogeny and growth increment characteristics

    Science.gov (United States)

    Yan, Taiming; Hu, Jiaxiang; Cai, Yueping; Xiong, Sen; Yang, Shiyong; Wang, Xiongyan; He, Zhi

    2017-09-01

    Laboratory-reared Schizothorax davidi larvae and juveniles were examined to assess the formation and characteristics of David's schizothoracin otoliths. Otolith development was observed and the formation period was verified by monitoring larvae and juveniles of known age. The results revealed that lapilli and sagittae developed before hatching, and the first otolith increment was identified at 2 days post hatching in both. The shape of the lapilli was relatively stable during development compared with that of the sagittae; growth of the four areas of the sagittae and lapilli was consistent, but the posterior area grew faster than the anterior area and the ventral surface grew faster than the dorsal surface. Similarly, the summed radius lengths of the anterior and posterior areas on the sagittae and lapilli were linearly and binomially related to total fish length, respectively. Moreover, daily deposition rates were validated by monitoring known-age larvae and juveniles. The increase in lapillus width was 1.88±0.0800 μm at the ninth increment, which reached a maximum and then decreased gradually toward the otolith edge, whereas that of the sagittae increased more slowly. These results illustrate the developmental biology of S. davidi, which will aid in population conservation and fish stock management.

  7. Is There a Critical Distance for Fickian Transport? - a Statistical Approach to Sub-Fickian Transport Modelling in Porous Media

    Science.gov (United States)

    Most, S.; Nowak, W.; Bijeljic, B.

    2014-12-01

    Transport processes in porous media are frequently simulated as particle movement. This process can be formulated as a stochastic process of particle position increments. At the pore scale, the geometry and micro-heterogeneities prohibit the commonly made assumption of independent and normally distributed increments to represent dispersion. Many recent particle methods seek to loosen this assumption. Recent experimental data suggest that we have not yet reached the end of the need to generalize, because particle increments show statistical dependency beyond linear correlation and over many time steps. The goal of this work is to better understand the validity regions of commonly made assumptions. We investigate after what transport distances we can observe: (1) a statistical dependence between increments, modelled as an order-k Markov process, that boils down to order 1; this would be the Markovian distance for the process, where the validity of yet-unexplored non-Gaussian-but-Markovian random walks would start; (2) a bivariate statistical dependence that simplifies to a multi-Gaussian dependence based on simple linear correlation (validity of correlated PTRW); (3) complete absence of statistical dependence (validity of classical PTRW/CTRW). The approach is to derive a statistical model for pore-scale transport from a powerful experimental data set via copula analysis. The model is formulated as a non-Gaussian, mutually dependent Markov process of higher order, which allows us to investigate the validity ranges of simpler models.

  8. Are Fearless Dominance Traits Superfluous in Operationalizing Psychopathy? Incremental Validity and Sex Differences

    Science.gov (United States)

    Murphy, Brett; Lilienfeld, Scott; Skeem, Jennifer; Edens, John

    2016-01-01

    Researchers are vigorously debating whether psychopathic personality includes seemingly adaptive traits, especially social and physical boldness. In a large sample (N=1565) of adult offenders, we examined the incremental validity of two operationalizations of boldness (Fearless Dominance traits in the Psychopathic Personality Inventory, Lilienfeld & Andrews, 1996; Boldness traits in the Triarchic Model of Psychopathy, Patrick et al., 2009), above and beyond other characteristics of psychopathy, in statistically predicting scores on four psychopathy-related measures, including the Psychopathy Checklist-Revised (PCL-R). The incremental validity added by boldness traits in predicting the PCL-R's representation of psychopathy was especially pronounced for interpersonal traits (e.g., superficial charm, deceitfulness). Our analyses, however, revealed unexpected sex differences in the relevance of these traits to psychopathy, with boldness traits exhibiting reduced importance for psychopathy in women. We discuss the implications of these findings for measurement models of psychopathy. PMID:26866795

  9. Technical note: A linear model for predicting δ13Cprotein.

    Science.gov (United States)

    Pestle, William J; Hubbe, Mark; Smith, Erin K; Stevenson, Joseph M

    2015-08-01

    Development of a model for the prediction of δ13Cprotein from δ13Ccollagen and Δ13Cap-co. Model-generated values could, in turn, serve as "consumer" inputs for multisource mixture modeling of paleodiet. Linear regression analysis of previously published controlled diet data facilitated the development of a mathematical model for predicting δ13Cprotein (and an experimentally generated error term) from isotopic data routinely generated during the analysis of osseous remains (δ13Cco and Δ13Cap-co). Regression analysis resulted in a two-term linear model (δ13Cprotein (‰) = (0.78 × δ13Cco) − (0.58 × Δ13Cap-co) − 4.7), possessing a high R-value of 0.93 (r2 = 0.86, P analysis of human osseous remains. These predicted values are ideal for use in multisource mixture modeling of dietary protein source contribution. © 2015 Wiley Periodicals, Inc.
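The two-term model quoted above can be encoded directly; the coefficients below are taken verbatim from the abstract, while the function and argument names are illustrative:

```python
def predict_d13c_protein(d13c_collagen, d13c_apatite_collagen_spacing):
    """delta13C of dietary protein predicted from collagen delta13C and the
    apatite-collagen spacing, per the regression reported in the abstract."""
    return 0.78 * d13c_collagen - 0.58 * d13c_apatite_collagen_spacing - 4.7
```

For example, a collagen value of −20.0‰ with a spacing of 4.0‰ predicts a protein value of about −22.6‰.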

  10. Average-case analysis of incremental topological ordering

    DEFF Research Database (Denmark)

    Ajwani, Deepak; Friedrich, Tobias

    2010-01-01

    Many applications like pointer analysis and incremental compilation require maintaining a topological ordering of the nodes of a directed acyclic graph (DAG) under dynamic updates. All known algorithms for this problem are either only analyzed for worst-case insertion sequences or only evaluated...... experimentally on random DAGs. We present the first average-case analysis of incremental topological ordering algorithms. We prove an expected runtime of under insertion of the edges of a complete DAG in a random order for the algorithms of Alpern et al. (1990) [4], Katriel and Bodlaender (2006) [18], and Pearce...
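    As a toy illustration of the maintenance problem (not the algorithms analyzed in the paper, which are far more efficient), one can keep a topological order and recompute it only when an inserted edge violates the current order; all names here are hypothetical:

```python
def topo_order(n, adj):
    """Recompute a topological order of a DAG via Kahn's algorithm."""
    indeg = [0] * n
    for u in range(n):
        for v in adj[u]:
            indeg[v] += 1
    stack = [u for u in range(n) if indeg[u] == 0]
    order = []
    while stack:
        u = stack.pop()
        order.append(u)
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                stack.append(v)
    return order

class IncrementalTopo:
    """Naive incremental maintenance: full recompute on violation only."""
    def __init__(self, n):
        self.n = n
        self.adj = [[] for _ in range(n)]
        self.pos = {u: u for u in range(n)}  # initial order: identity
    def insert(self, u, v):
        self.adj[u].append(v)
        if self.pos[u] > self.pos[v]:  # order violated -> recompute
            order = topo_order(self.n, self.adj)
            self.pos = {node: i for i, node in enumerate(order)}
```

    The algorithms of Alpern et al., Katriel-Bodlaender, and Pearce-Kelly avoid the full recompute by reordering only the affected region between u and v.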

  11. Motion-Induced Blindness Using Increments and Decrements of Luminance

    Directory of Open Access Journals (Sweden)

    Stine Wm Wren

    2017-10-01

    Full Text Available Motion-induced blindness describes the disappearance of stationary elements of a scene when other, perhaps non-overlapping, elements of the scene are in motion. We measured the effects of increment (200.0 cd/m2) and decrement (15.0 cd/m2) targets and masks presented on a grey background (108.0 cd/m2), tapping into putative ON- and OFF-channels, on the rate of target disappearance psychophysically. We presented two-frame motion, which has coherent motion energy, and dynamic Glass patterns and dynamic anti-Glass patterns, which do not have coherent motion energy. Using the method of constant stimuli, participants viewed stimuli of varying durations (3.1 s, 4.6 s, 7.0 s, 11 s, or 16 s) in a given trial and then indicated whether or not the targets vanished during that trial. Psychometric function midpoints were used to define the absolute threshold mask duration for the disappearance of the target. 95% confidence intervals for threshold disappearance times were estimated using a bootstrap technique for each of the participants across two experiments. Decrement masks were more effective than increment masks with increment targets. Increment targets were easier to mask than decrement targets. Distinct mask pattern types had no effect, suggesting that perceived coherence contributes to the effectiveness of the mask. The ON/OFF dichotomy clearly carries its influence to the level of perceived motion coherence. Further, the asymmetry in the effects of increment and decrement masks on increment and decrement targets might lead one to speculate that it reflects the ‘importance’ of detecting decrements in the environment.

  12. Generalized linear longitudinal mixed models with linear covariance structure and multiplicative random effects

    DEFF Research Database (Denmark)

    Holst, René; Jørgensen, Bent

    2015-01-01

    The paper proposes a versatile class of multiplicative generalized linear longitudinal mixed models (GLLMM) with additive dispersion components, based on explicit modelling of the covariance structure. The class incorporates a longitudinal structure into the random effects models and retains...... a marginal as well as a conditional interpretation. The estimation procedure is based on a computationally efficient quasi-score method for the regression parameters combined with a REML-like bias-corrected Pearson estimating function for the dispersion and correlation parameters. This avoids...... the multidimensional integral of the conventional GLMM likelihood and allows an extension of the robust empirical sandwich estimator for use with both association and regression parameters. The method is applied to a set of otolith data, used for age determination of fish....

  13. Logistics Modernization Program Increment 2 (LMP Inc 2)

    Science.gov (United States)

    2016-03-01

    Sections 3 and 4 of the LMP Increment 2 Business Case, ADM), key functional requirements, Critical Design Review (CDR) Reports, and Economic ... from the 2013 version of the LMP Increment 2 Economic Analysis and replace it with references to the Economic Analysis that will be completed... of (inbound/outbound) IDOCs into the system. LMP must be able to successfully process 95% of (inbound/outbound) IDOCs into the system. Will meet

  14. Incremental Tensor Principal Component Analysis for Handwritten Digit Recognition

    Directory of Open Access Journals (Sweden)

    Chang Liu

    2014-01-01

    Full Text Available To overcome the shortcomings of traditional dimensionality reduction algorithms, incremental tensor principal component analysis (ITPCA) based on an updated-SVD technique is proposed in this paper. This paper proves the relationship between PCA, 2DPCA, MPCA, and the graph embedding framework theoretically and derives the incremental learning procedure for adding a single sample and multiple samples in detail. The experiments on handwritten digit recognition have demonstrated that ITPCA achieves better recognition performance than vector-based principal component analysis (PCA), incremental principal component analysis (IPCA), and multilinear principal component analysis (MPCA) algorithms. At the same time, ITPCA also has lower time and space complexity.
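    The incremental flavor of such methods rests on updating sufficient statistics one sample at a time rather than refitting from scratch; a minimal sketch of a Welford-style running mean and covariance (a generic building block of incremental PCA variants, not the ITPCA algorithm itself):

```python
class IncrementalCov:
    """Running mean and covariance via Welford-style one-sample updates.
    The covariance (or its SVD) can be refreshed after each sample,
    which is the core idea behind incremental PCA schemes."""
    def __init__(self, dim):
        self.n = 0
        self.mean = [0.0] * dim
        # Sum of outer products of deviations (old mean x new mean)
        self.cov_sum = [[0.0] * dim for _ in range(dim)]
    def add(self, x):
        self.n += 1
        delta = [xi - mi for xi, mi in zip(x, self.mean)]        # uses old mean
        self.mean = [mi + di / self.n for mi, di in zip(self.mean, delta)]
        delta2 = [xi - mi for xi, mi in zip(x, self.mean)]       # uses new mean
        for i in range(len(x)):
            for j in range(len(x)):
                self.cov_sum[i][j] += delta[i] * delta2[j]
    def covariance(self):
        return [[c / (self.n - 1) for c in row] for row in self.cov_sum]
```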

  15. Exploiting Outage and Error Probability of Cooperative Incremental Relaying in Underwater Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Hina Nasir

    2016-07-01

    Full Text Available This paper embeds a bi-fold contribution for Underwater Wireless Sensor Networks (UWSNs): performance analysis of incremental relaying in terms of outage and error probability and, based on the analysis, proposition of two new cooperative routing protocols. For the first contribution, a three-step procedure is carried out; a system model is presented, the number of available relays is determined, and, based on the cooperative incremental retransmission methodology, closed-form expressions for outage and error probability are derived. For the second contribution, Adaptive Cooperation in Energy (ACE) efficient depth based routing and Enhanced-ACE (E-ACE) are presented. In the proposed model, a feedback mechanism indicates success or failure of data transmission. If direct transmission is successful, there is no need for relaying by cooperative relay nodes. In case of failure, all the available relays retransmit the data one by one till the desired signal quality is achieved at the destination. Simulation results show that ACE and E-ACE significantly improve network performance, i.e., throughput, when compared with other incremental relaying protocols like Cooperative Automatic Repeat reQuest (CARQ). E-ACE and ACE achieve 69% and 63% more throughput, respectively, compared with CARQ in a harsh underwater environment.
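    The feedback rule described above, direct transmission first and then each available relay retransmitting in turn until the destination decodes the packet, can be sketched as follows (an illustrative toy, not the paper's simulator):

```python
def incremental_relay(direct_ok, relay_ok_flags):
    """Count transmissions under incremental relaying: try the direct
    link first; on failure, each relay retransmits one by one until
    success. Returns (success, transmissions_used)."""
    if direct_ok:
        return True, 1
    tx = 1  # the failed direct attempt
    for relay_ok in relay_ok_flags:
        tx += 1
        if relay_ok:
            return True, tx
    return False, tx
```

    The saving over always-relay schemes is visible immediately: when the direct link succeeds, no relay energy is spent at all.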

  16. Lead 210 and moss-increment dating of two Finnish Sphagnum hummocks

    International Nuclear Information System (INIS)

    El-Daoushy, F.

    1982-01-01

    A comparison is presented of 210Pb dating data with moss-increment dates of selected peat material from Finland. The measurements of 210Pb were carried out by determining the granddaughter product 210Po by means of isotope dilution. The ages in 210Pb yr were calculated using the constant initial concentration and the constant rate of supply models. (U.K.)
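    Under the constant initial concentration (CIC) model, an age in 210Pb years follows directly from radioactive decay, t = ln(C0/C)/λ with λ = ln 2 / half-life; a minimal sketch assuming the accepted 22.3-year half-life of 210Pb (function name and sample activities are illustrative):

```python
import math

PB210_HALF_LIFE_YR = 22.3  # accepted half-life of 210Pb

def cic_age(c0, c):
    """Age (years) under the constant-initial-concentration model:
    t = ln(C0 / C) / lambda, with lambda = ln 2 / half-life."""
    lam = math.log(2.0) / PB210_HALF_LIFE_YR
    return math.log(c0 / c) / lam

# One half-life of decay should date the layer to ~22.3 yr
age = cic_age(100.0, 50.0)
```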

  17. Thurstonian models for sensory discrimination tests as generalized linear models

    DEFF Research Database (Denmark)

    Brockhoff, Per B.; Christensen, Rune Haubo Bojesen

    2010-01-01

    as a so-called generalized linear model. The underlying sensory difference δ becomes directly a parameter of the statistical model and the estimate d' and its standard error become the "usual" output of the statistical analysis. The d' for the monadic A-NOT A method is shown to appear as a standard......Sensory discrimination tests such as the triangle, duo-trio, 2-AFC and 3-AFC tests produce binary data and the Thurstonian decision rule links the underlying sensory difference δ to the observed number of correct responses. In this paper it is shown how each of these four situations can be viewed

  18. Linear models for multivariate, time series, and spatial data

    CERN Document Server

    Christensen, Ronald

    1991-01-01

    This is a companion volume to Plane Answers to Complex Questions: The Theory of Linear Models. It consists of six additional chapters written in the same spirit as the last six chapters of the earlier book. Brief introductions are given to topics related to linear model theory. No attempt is made to give a comprehensive treatment of the topics. Such an effort would be futile. Each chapter is on a topic so broad that an in-depth discussion would require a book-length treatment. People need to impose structure on the world in order to understand it. There is a limit to the number of unrelated facts that anyone can remember. If ideas can be put within a broad, sophisticatedly simple structure, not only are they easier to remember but often new insights become available. In fact, sophisticatedly simple models of the world may be the only ones that work. I have often heard Arnold Zellner say that, to the best of his knowledge, this is true in econometrics. The process of modeling is fundamental to understand...

  19. A simple method for identifying parameter correlations in partially observed linear dynamic models.

    Science.gov (United States)

    Li, Pu; Vu, Quoc Dong

    2015-12-14

    Parameter estimation represents one of the most significant challenges in systems biology. This is because biological models commonly contain a large number of parameters among which there may be functional interrelationships, thus leading to the problem of non-identifiability. Although identifiability analysis has been extensively studied by analytical as well as numerical approaches, systematic methods for remedying practically non-identifiable models have rarely been investigated. We propose a simple method for identifying pairwise correlations and higher-order interrelationships of parameters in partially observed linear dynamic models. This is done by deriving the output sensitivity matrix and analyzing the linear dependencies of its columns. Consequently, analytical relations between the identifiability of the model parameters and the initial conditions as well as the input functions can be achieved. In the case of structural non-identifiability, identifiable combinations can be obtained by solving the resulting homogeneous linear equations. In the case of practical non-identifiability, experiment conditions (i.e. initial conditions and constant control signals) can be provided which are necessary for remedying the non-identifiability and unique parameter estimation. It is noted that the approach does not consider noisy data. In this way, the practical non-identifiability issue, which is common for linear biological models, can be remedied. Several linear compartment models, including an insulin receptor dynamics model, are taken to illustrate the application of the proposed approach. Both structural and practical identifiability of partially observed linear dynamic models can be clarified by the proposed method. The result of this method provides important information for experimental design to remedy the practical non-identifiability if applicable. The derivation of the method is straightforward and thus the algorithm can be easily implemented into a

  20. Wavelet-linear genetic programming: A new approach for modeling monthly streamflow

    Science.gov (United States)

    Ravansalar, Masoud; Rajaee, Taher; Kisi, Ozgur

    2017-06-01

    Streamflow is an important and effective factor in stream ecosystems, and its accurate prediction is an essential issue in water resources and environmental engineering systems. A hybrid wavelet-linear genetic programming (WLGP) model, which combines a discrete wavelet transform (DWT) and linear genetic programming (LGP), was used in this study to predict the monthly streamflow (Q) at two gauging stations, Pataveh and Shahmokhtar, on the Beshar River near Yasuj, Iran. In the proposed WLGP model, the wavelet analysis was linked to the LGP model: the original streamflow time series were decomposed into sub-time series comprising wavelet coefficients. The results were compared with single LGP, artificial neural network (ANN), hybrid wavelet-ANN (WANN) and Multi Linear Regression (MLR) models. The comparisons were made using several commonly utilized statistics. The Nash coefficients (E) were found to be 0.877 and 0.817 for the WLGP model at the Pataveh and Shahmokhtar stations, respectively. The comparison of the results showed that the WLGP model could significantly increase streamflow prediction accuracy at both stations. Since the results demonstrate a closer approximation of the peak streamflow values by the WLGP model, it could be utilized for streamflow prediction one month ahead.
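    The decomposition step of such hybrid models splits the series into approximation and detail coefficients before the downstream model is trained on them; a one-level Haar DWT is the simplest example (the paper does not state that the Haar wavelet was used; this is a generic sketch):

```python
import math

def haar_dwt(x):
    """One level of the Haar discrete wavelet transform: pairwise
    sums give approximation coefficients, pairwise differences give
    detail coefficients. len(x) must be even."""
    s = math.sqrt(2.0)
    approx = [(x[i] + x[i + 1]) / s for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / s for i in range(0, len(x), 2)]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse one-level Haar transform (perfect reconstruction)."""
    s = math.sqrt(2.0)
    x = []
    for a, d in zip(approx, detail):
        x.extend([(a + d) / s, (a - d) / s])
    return x
```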

  1. A Model Stitching Architecture for Continuous Full Flight-Envelope Simulation of Fixed-Wing Aircraft and Rotorcraft from Discrete Point Linear Models

    Science.gov (United States)

    2016-04-01

    AND ROTORCRAFT FROM DISCRETE-POINT LINEAR MODELS Eric L. Tobias and Mark B. Tischler Aviation Development Directorate Aviation and Missile...Stitching Architecture for Continuous Full Flight-Envelope Simulation of Fixed-Wing Aircraft and Rotorcraft from Discrete-Point Linear Models 5...of discrete-point linear models and trim data. The model stitching simulation architecture is applicable to any aircraft configuration readily

  2. Creating Helical Tool Paths for Single Point Incremental Forming

    DEFF Research Database (Denmark)

    Skjødt, Martin; Hancock, Michael H.; Bay, Niels

    2007-01-01

    Single point incremental forming (SPIF) is a relatively new sheet forming process. A sheet is clamped in a rig and formed incrementally using a rotating single point tool in the form of a rod with a spherical end. The process is often performed on a CNC milling machine and the tool movement...

  3. Single point incremental forming: Formability of PC sheets

    Science.gov (United States)

    Formisano, A.; Boccarusso, L.; Carrino, L.; Lambiase, F.; Minutolo, F. Memola Capece

    2018-05-01

    Recent research on Single Point Incremental Forming of polymers has only briefly addressed the possibility of expanding the material capability window of this flexible forming process beyond metals, by demonstrating the workability of thermoplastic polymers at room temperature. Given the different behaviour of polymers compared to metals, several aspects need to be examined in more depth to better understand how these materials behave when incrementally formed. Thus, the aim of this work is to investigate the formability of incrementally formed polycarbonate thin sheets. To this end, an experimental investigation at room temperature was conducted involving formability tests; cone and pyramid frusta with varying wall angles were manufactured by processing polycarbonate sheets with different thicknesses and using tools with different diameters, in order to draw conclusions on the formability of polymer sheets through the evaluation of the forming angles and the observation of the failure mechanisms.

  4. Martingales, nonstationary increments, and the efficient market hypothesis

    Science.gov (United States)

    McCauley, Joseph L.; Bassler, Kevin E.; Gunaratne, Gemunu H.

    2008-06-01

    We discuss the deep connection between nonstationary increments, martingales, and the efficient market hypothesis for stochastic processes x(t) with arbitrary diffusion coefficients D(x,t). We explain why a test for a martingale is generally a test for uncorrelated increments. We explain why martingales look Markovian at the level of both simple averages and 2-point correlations. But while a Markovian market has no memory to exploit and cannot be beaten systematically, a martingale admits memory that might be exploitable in higher order correlations. We also use the analysis of this paper to correct a misstatement of the ‘fair game’ condition in terms of serial correlations in Fama’s paper on the EMH. We emphasize that the use of the log increment as a variable in data analysis generates spurious fat tails and spurious Hurst exponents.
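    The point that a martingale test is essentially a test for uncorrelated increments can be illustrated numerically: for i.i.d. (hence martingale-difference) increments the sample lag-1 autocorrelation should be near zero, even though higher-order dependence could still exist (synthetic data, illustrative only):

```python
import random

def lag1_autocorr(xs):
    """Sample lag-1 autocorrelation of a sequence."""
    n = len(xs)
    mean = sum(xs) / n
    num = sum((xs[i] - mean) * (xs[i + 1] - mean) for i in range(n - 1))
    den = sum((x - mean) ** 2 for x in xs)
    return num / den

# Increments of a Gaussian random walk: a martingale-difference sequence
random.seed(42)
increments = [random.gauss(0.0, 1.0) for _ in range(5000)]
r = lag1_autocorr(increments)  # should be close to zero
```

    Absence of serial correlation here does not imply independence, which is exactly the gap between a martingale and a Markovian market that the abstract emphasizes.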

  5. A Hierarchical Linear Model for Estimating Gender-Based Earnings Differentials.

    Science.gov (United States)

    Haberfield, Yitchak; Semyonov, Moshe; Addi, Audrey

    1998-01-01

    Estimates of gender earnings inequality in data from 116,431 Jewish workers were compared using a hierarchical linear model (HLM) and ordinary least squares model. The HLM allows estimation of the extent to which earnings inequality depends on occupational characteristics. (SK)

  6. Modeling Pan Evaporation for Kuwait by Multiple Linear Regression

    Science.gov (United States)

    Almedeij, Jaber

    2012-01-01

    Evaporation is an important parameter for many projects related to hydrology and water resources systems. This paper constitutes the first study conducted in Kuwait to obtain empirical relations for the estimation of daily and monthly pan evaporation as functions of available meteorological data of temperature, relative humidity, and wind speed. The data used here for the modeling are daily measurements of substantial continuity coverage, within a period of 17 years between January 1993 and December 2009, which can be considered representative of the desert climate of the urban zone of the country. Multiple linear regression technique is used with a procedure of variable selection for fitting the best model forms. The correlations of evaporation with temperature and relative humidity are also transformed in order to linearize the existing curvilinear patterns of the data by using power and exponential functions, respectively. The evaporation models suggested with the best variable combinations were shown to produce results that are in a reasonable agreement with observation values. PMID:23226984
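    The linearization trick mentioned above, fitting a power relation by ordinary least squares on log-transformed data, can be sketched as follows (the numbers are illustrative, not the Kuwait measurements):

```python
import math

def linreg(xs, ys):
    """Ordinary least squares fit y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# A power-law relation E = c * T**p linearizes as ln E = ln c + p * ln T
temps = [20.0, 25.0, 30.0, 35.0, 40.0]
evap = [2.0 * t ** 1.5 for t in temps]          # exact synthetic data
a, b = linreg([math.log(t) for t in temps],
              [math.log(e) for e in evap])       # recovers p=1.5, c=2
```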

  7. Evaluation of a multiple linear regression model and SARIMA model in forecasting heat demand for district heating system

    International Nuclear Information System (INIS)

    Fang, Tingting; Lahdelma, Risto

    2016-01-01

    Highlights: • A social factor is considered in the linear regression models besides weather variables. • All the coefficients of the linear regression models are optimized simultaneously. • SARIMA combined with linear regression is used to forecast the heat demand. • The accuracy of both linear regression and time series models is evaluated. - Abstract: Forecasting heat demand is necessary for production and operation planning of district heating (DH) systems. In this study we first propose a simple regression model where the hourly outdoor temperature and wind speed forecast the heat demand. A weekly rhythm of heat consumption as a social component is added to the model to significantly improve the accuracy. The other type of model is the seasonal autoregressive integrated moving average (SARIMA) model with exogenous variables, combining weather factors with the historical heat consumption data as dependent variables. One outstanding advantage of the model is that it pursues high accuracy for both long-term and short-term forecasts by considering both exogenous factors and the time series. The forecasting performance of both the linear regression models and the time series model is evaluated based on real-life heat demand data for the city of Espoo in Finland by out-of-sample tests for the last 20 full weeks of the year. The results indicate that the proposed linear regression model (T168h), using a 168-h demand pattern with midweek holidays classified as Saturdays or Sundays, gives the highest accuracy and strong robustness among all the tested models for the tested forecasting horizon and corresponding data. Considering the parsimony of the input, the ease of use and the high accuracy, the proposed T168h model is the best in practice. The heat demand forecasting model can also be developed for individual buildings if automated meter reading customer measurements are available. This would allow forecasting the heat demand based on more accurate heat consumption
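    The social component of such a model amounts to an hour-of-week demand pattern; a minimal sketch of extracting a 168-hour profile from an hourly series (assuming the series starts at pattern hour 0; holiday reclassification is omitted):

```python
def weekly_profile(demand):
    """Average demand for each of the 168 hours of the week, the
    weekly-rhythm component used alongside temperature and wind."""
    sums, counts = [0.0] * 168, [0] * 168
    for hour, value in enumerate(demand):
        slot = hour % 168
        sums[slot] += value
        counts[slot] += 1
    return [s / c for s, c in zip(sums, counts)]
```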

  8. Current algebra of classical non-linear sigma models

    International Nuclear Information System (INIS)

    Forger, M.; Laartz, J.; Schaeper, U.

    1992-01-01

    The current algebra of classical non-linear sigma models on arbitrary Riemannian manifolds is analyzed. It is found that introducing, in addition to the Noether current j μ associated with the global symmetry of the theory, a composite scalar field j, the algebra closes under Poisson brackets. (orig.)

  9. Incremental Learning for Place Recognition in Dynamic Environments

    OpenAIRE

    Luo, Jie; Pronobis, Andrzej; Caputo, Barbara; Jensfelt, Patric

    2007-01-01

    Vision-based place recognition is a desirable feature for an autonomous mobile system. In order to work in realistic scenarios, visual recognition algorithms should be adaptive, i.e. should be able to learn from experience and adapt continuously to changes in the environment. This paper presents a discriminative incremental learning approach to place recognition. We use a recently introduced version of the incremental SVM, which ...

  10. On the instability increments of a stationary pinch

    International Nuclear Information System (INIS)

    Bud'ko, A.B.

    1989-01-01

    The stability of a stationary pinch against helical modes is studied numerically. It is shown that when the plasma pressure decreases sufficiently rapidly towards the pinch boundary, for example for an isothermal diffusion pinch with a Gaussian density distribution, the m=0 modes are the fastest growing instabilities. Instability increments are calculated. A simple analytical expression for the maximum growth increment of the sausage instability for self-similar Gaussian profiles is obtained

  11. Incremental Trust in Grid Computing

    DEFF Research Database (Denmark)

    Brinkløv, Michael Hvalsøe; Sharp, Robin

    2007-01-01

    This paper describes a comparative simulation study of some incremental trust and reputation algorithms for handling behavioural trust in large distributed systems. Two types of reputation algorithm (based on discrete and Bayesian evaluation of ratings) and two ways of combining direct trust and ...... of Grid computing systems....
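    A Bayesian reputation algorithm of the kind compared here is typically built on a beta posterior over binary ratings; a minimal sketch of the incremental update (the paper's exact algorithms may differ):

```python
def beta_reputation(successes, failures):
    """Expected trustworthiness under a Bayesian (beta) reputation
    model: a Beta(s+1, f+1) posterior over the probability of good
    behaviour has mean (s + 1) / (s + f + 2)."""
    return (successes + 1) / (successes + failures + 2)

# Trust starts at 0.5 with no evidence and moves incrementally
# as ratings accumulate, one interaction at a time.
prior = beta_reputation(0, 0)
after_ratings = beta_reputation(8, 2)
```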

  12. Convergent systems vs. incremental stability

    NARCIS (Netherlands)

    Rüffer, B.S.; Wouw, van de N.; Mueller, M.

    2013-01-01

    Two similar stability notions are considered; one is the long established notion of convergent systems, the other is the younger notion of incremental stability. Both notions require that any two solutions of a system converge to each other. Yet these stability concepts are different, in the sense

  13. Doubly robust estimation of generalized partial linear models for longitudinal data with dropouts.

    Science.gov (United States)

    Lin, Huiming; Fu, Bo; Qin, Guoyou; Zhu, Zhongyi

    2017-12-01

    We develop a doubly robust estimation of generalized partial linear models for longitudinal data with dropouts. Our method extends the highly efficient aggregate unbiased estimating function approach proposed in Qu et al. (2010) to a doubly robust one in the sense that under missing at random (MAR), our estimator is consistent when either the linear conditional mean condition is satisfied or a model for the dropout process is correctly specified. We begin with a generalized linear model for the marginal mean, and then move forward to a generalized partial linear model, allowing for nonparametric covariate effect by using the regression spline smoothing approximation. We establish the asymptotic theory for the proposed method and use simulation studies to compare its finite sample performance with that of Qu's method, the complete-case generalized estimating equation (GEE) and the inverse-probability weighted GEE. The proposed method is finally illustrated using data from a longitudinal cohort study. © 2017, The International Biometric Society.
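    The doubly robust property, consistency when either the outcome model or the dropout model is correctly specified, is easiest to see in the classic augmented inverse-probability-weighted (AIPW) estimator of a mean, sketched below (a generic illustration, not the paper's estimating functions):

```python
def dr_mean(y_obs, observed, p_hat, m_hat):
    """Doubly robust (AIPW) estimate of E[Y] with data missing at
    random: average of d*y/p - (d/p - 1)*m, where d indicates
    observation, p is the estimated observation probability, and m is
    the outcome-model prediction. Consistent if either p or m is right."""
    n = len(observed)
    total = 0.0
    for y, d_flag, p, m in zip(y_obs, observed, p_hat, m_hat):
        d = 1.0 if d_flag else 0.0
        total += d * y / p - (d / p - 1.0) * m
    return total / n
```

    With full observation and p = 1 the augmentation term vanishes and the estimator reduces to the sample mean, a quick sanity check.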

  14. Linear least squares compartmental-model-independent parameter identification in PET

    International Nuclear Information System (INIS)

    Thie, J.A.; Smith, G.T.; Hubner, K.F.

    1997-01-01

    A simplified approach involving linear-regression straight-line parameter fitting of dynamic scan data is developed for both specific and nonspecific models. Where compartmental-model topologies apply, the measured activity may be expressed in terms of its integrals, plasma activity and plasma integrals -- all in a linear expression with macroparameters as coefficients. Multiple linear regression, as in spreadsheet software, determines parameters for best data fits. Positron emission tomography (PET)-acquired gray-matter images in a dynamic scan are analyzed both by this method and by traditional iterative nonlinear least squares. Both patient and simulated data were used. Regression and traditional methods are in expected agreement. Monte-Carlo simulations evaluate parameter standard deviations, due to data noise, and much smaller noise-induced biases. Unique straight-line graphical displays permit visualizing data influences on various macroparameters as changes in slopes. Advantages of regression fitting are: simplicity, speed, ease of implementation in spreadsheet software, avoiding risks of convergence failures or false solutions in iterative least squares, and providing various visualizations of the uptake process by straight-line graphical displays. Multiparameter model-independent analyses of less well understood systems are also made possible

  15. Mutual-Information-Based Incremental Relaying Communications for Wireless Biomedical Implant Systems

    Directory of Open Access Journals (Sweden)

    Yangzhe Liao

    2018-02-01

    Full Text Available Network lifetime maximization of wireless biomedical implant systems is one of the major research challenges of wireless body area networks (WBANs). In this paper, a mutual information (MI)-based incremental relaying communication protocol is presented where several on-body relay nodes and one coordinator are attached to the clothes of a patient. Firstly, a comprehensive analysis of the system model is presented in terms of channel path loss, energy consumption, and outage probability from the network perspective. Secondly, data transmission is allowed only when the MI value becomes smaller than a predetermined threshold. The communication path can be either from the implanted sensor to an on-body relay and then on to the coordinator, or from the implanted sensor to the coordinator directly, depending on the communication distance. Moreover, mathematical models of quality of service (QoS) metrics are derived along with the related objective functions. The results show that the MI-based incremental relaying technique achieves better performance in comparison to our previously proposed protocol techniques on several selected performance metrics. The outcome of this paper can be applied to intra-body continuous physiological signal monitoring, artificial biofeedback-oriented WBANs, and telemedicine system design.

  16. A Non-Gaussian Spatial Generalized Linear Latent Variable Model

    KAUST Repository

    Irincheeva, Irina; Cantoni, Eva; Genton, Marc G.

    2012-01-01

    We consider a spatial generalized linear latent variable model with and without normality distributional assumption on the latent variables. When the latent variables are assumed to be multivariate normal, we apply a Laplace approximation. To relax the assumption of marginal normality in favor of a mixture of normals, we construct a multivariate density with Gaussian spatial dependence and given multivariate margins. We use the pairwise likelihood to estimate the corresponding spatial generalized linear latent variable model. The properties of the resulting estimators are explored by simulations. In the analysis of an air pollution data set the proposed methodology uncovers weather conditions to be a more important source of variability than air pollution in explaining all the causes of non-accidental mortality excluding accidents. © 2012 International Biometric Society.

  17. A Non-Gaussian Spatial Generalized Linear Latent Variable Model

    KAUST Repository

    Irincheeva, Irina

    2012-08-03

    We consider a spatial generalized linear latent variable model with and without normality distributional assumption on the latent variables. When the latent variables are assumed to be multivariate normal, we apply a Laplace approximation. To relax the assumption of marginal normality in favor of a mixture of normals, we construct a multivariate density with Gaussian spatial dependence and given multivariate margins. We use the pairwise likelihood to estimate the corresponding spatial generalized linear latent variable model. The properties of the resulting estimators are explored by simulations. In the analysis of an air pollution data set the proposed methodology uncovers weather conditions to be a more important source of variability than air pollution in explaining all the causes of non-accidental mortality excluding accidents. © 2012 International Biometric Society.

  18. Nonlinear aeroacoustic characterization of Helmholtz resonators with a local-linear neuro-fuzzy network model

    Science.gov (United States)

    Förner, K.; Polifke, W.

    2017-10-01

    The nonlinear acoustic behavior of Helmholtz resonators is characterized by a data-based reduced-order model, which is obtained by a combination of high-resolution CFD simulation and system identification. It is shown that even in the nonlinear regime, a linear model is capable of describing the reflection behavior at a particular amplitude with quantitative accuracy. This observation motivates the choice of a local-linear model structure for this study, which consists of a network of parallel linear submodels. A so-called fuzzy-neuron layer distributes the input signal over the linear submodels, depending on the root mean square of the particle velocity at the resonator surface. The resulting model structure is referred to as a local-linear neuro-fuzzy network. System identification techniques are used to estimate the free parameters of this model from training data. The training data are generated by CFD simulations of the resonator, with persistent acoustic excitation over a wide range of frequencies and sound pressure levels. The estimated nonlinear, reduced-order models show good agreement with CFD and experimental data over a wide range of amplitudes for several test cases.
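    The local-linear structure amounts to validity functions over the excitation amplitude blending the outputs of parallel linear submodels; a minimal sketch with Gaussian memberships and illustrative parameters (not identified from the CFD data):

```python
import math

def llnf_predict(u_rms, submodels, centers, width):
    """Local-linear neuro-fuzzy output: Gaussian validity functions
    over the excitation amplitude u_rms weight parallel linear
    submodels. Each submodel is a (gain, offset) pair acting on u_rms;
    all parameter values here are hypothetical."""
    weights = [math.exp(-((u_rms - c) / width) ** 2) for c in centers]
    wsum = sum(weights)
    out = 0.0
    for (gain, offset), w in zip(submodels, weights):
        out += (w / wsum) * (gain * u_rms + offset)
    return out
```

    With a single submodel the normalized weight is 1 and the network degenerates to that linear model, matching the observation that one amplitude is well described linearly.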

  19. Heteroscedasticity as a Basis of Direction Dependence in Reversible Linear Regression Models.

    Science.gov (United States)

    Wiedermann, Wolfgang; Artner, Richard; von Eye, Alexander

    2017-01-01

    Heteroscedasticity is a well-known issue in linear regression modeling. When heteroscedasticity is observed, researchers are advised to remedy possible model misspecification of the explanatory part of the model (e.g., considering alternative functional forms and/or omitted variables). The present contribution discusses another source of heteroscedasticity in observational data: Directional model misspecifications in the case of nonnormal variables. Directional misspecification refers to situations where alternative models are equally likely to explain the data-generating process (e.g., x → y versus y → x). It is shown that the homoscedasticity assumption is likely to be violated in models that erroneously treat true nonnormal predictors as response variables. Recently, Direction Dependence Analysis (DDA) has been proposed as a framework to empirically evaluate the direction of effects in linear models. The present study links the phenomenon of heteroscedasticity with DDA and describes visual diagnostics and nine homoscedasticity tests that can be used to make decisions concerning the direction of effects in linear models. Results of a Monte Carlo simulation that demonstrate the adequacy of the approach are presented. An empirical example is provided, and applicability of the methodology in cases of violated assumptions is discussed.
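    One simple diagnostic in this spirit regresses the squared OLS residuals on the predictor, in the manner of the Breusch-Pagan test; a minimal sketch (illustrative, not one of the paper's nine tests verbatim):

```python
def linfit(xs, ys):
    """Ordinary least squares fit y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def residual_variance_slope(xs, ys):
    """Slope of squared OLS residuals regressed on x; a value far from
    zero suggests heteroscedasticity, i.e. residual spread that changes
    systematically with the predictor."""
    a, b = linfit(xs, ys)
    res2 = [(y - (a + b * x)) ** 2 for x, y in zip(xs, ys)]
    _, slope = linfit(xs, res2)
    return slope
```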

  20. Bigger is Better, but at What Cost? Estimating the Economic Value of Incremental Data Assets.

    Science.gov (United States)

    Dalessandro, Brian; Perlich, Claudia; Raeder, Troy

    2014-06-01

    Many firms depend on third-party vendors to supply data for commercial predictive modeling applications. An issue that has received very little attention in the prior research literature is the estimation of a fair price for purchased data. In this work we present a methodology for estimating the economic value of adding incremental data to predictive modeling applications and present two case studies. The methodology starts with estimating the effect that incremental data has on model performance in terms of common classification evaluation metrics. This effect is then translated into economic units, which gives an expected economic value that the firm might realize with the acquisition of a particular data asset. With this estimate a firm can then set a data acquisition price that targets a particular return on investment. This article presents the methodology in full detail and illustrates it in the context of two marketing case studies.
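    The translation from model lift to a price cap can be sketched in a few lines. The hit rates, decision volume, and target ROI below are hypothetical; the paper's actual metric-to-dollars mapping is application specific.

```python
def incremental_value(base_hit_rate, new_hit_rate, n_decisions, value_per_hit):
    """Expected extra revenue from adding the data asset: the lift in
    hit rate times decision volume times the value of a correct decision."""
    return (new_hit_rate - base_hit_rate) * n_decisions * value_per_hit

def max_price_for_roi(incremental_revenue, target_roi):
    """Highest acquisition price consistent with a target return on
    investment, where ROI = (revenue - price) / price."""
    return incremental_revenue / (1.0 + target_roi)

# Hypothetical campaign: +0.3 percentage points of hit rate over a
# million decisions worth $5 each, priced against a 50% target ROI.
gain = incremental_value(0.020, 0.023, 1_000_000, 5.0)
price_cap = max_price_for_roi(gain, 0.5)
```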

  1. Stochastic modeling of mode interactions via linear parabolized stability equations

    Science.gov (United States)

    Ran, Wei; Zare, Armin; Hack, M. J. Philipp; Jovanovic, Mihailo

    2017-11-01

    Low-complexity approximations of the Navier-Stokes equations have been widely used in the analysis of wall-bounded shear flows. In particular, the parabolized stability equations (PSE) and Floquet theory have been employed to capture the evolution of primary and secondary instabilities in spatially-evolving flows. We augment linear PSE with Floquet analysis to formally treat modal interactions and the evolution of secondary instabilities in the transitional boundary layer via a linear progression. To this end, we leverage Floquet theory by incorporating the primary instability into the base flow and accounting for different harmonics in the flow state. A stochastic forcing is introduced into the resulting linear dynamics to model the effect of nonlinear interactions on the evolution of modes. We examine the H-type transition scenario to demonstrate how our approach can be used to model nonlinear effects and capture the growth of the fundamental and subharmonic modes observed in direct numerical simulations and experiments.

  2. A variational formulation for linear models in coupled dynamic thermoelasticity

    International Nuclear Information System (INIS)

    Feijoo, R.A.; Moura, C.A. de.

    1981-07-01

    A variational formulation for linear models in coupled dynamic thermoelasticity is studied which quite naturally motivates the design of a numerical scheme for the problem. When linked to regularization or penalization techniques, this algorithm may be applied to more general models, namely, the ones that consider non-linear constraints associated with variational inequalities. The basic postulates of Mechanics and Thermodynamics as well as some well-known mathematical techniques are described. A thorough description of the algorithm implementation with the finite-element method is also provided. Proofs for existence and uniqueness of solutions and for convergence of the approximations are presented, and some numerical results are exhibited. (Author) [pt

  3. Making context explicit for explanation and incremental knowledge acquisition

    Energy Technology Data Exchange (ETDEWEB)

    Brezillon, P. [Univ. Paris (France)

    1996-12-31

    Intelligent systems may be improved by making context explicit in problem solving. This is a lesson drawn from a study of the reasons why a number of knowledge-based systems (KBSs) failed. We discuss the value of making context explicit in explanation generation and incremental knowledge acquisition, two important aspects of intelligent systems that aim to cooperate with users. We show how context can be used to better explain and incrementally acquire knowledge. The advantages of using context in explanation and incremental knowledge acquisition are discussed through SEPIT, an expert system for supporting diagnosis and explanation through simulation of power plants. We point out how the limitations of such systems may be overcome by making context explicit.

  4. Modeling the Non-Linear Response of Fiber-Reinforced Laminates Using a Combined Damage/Plasticity Model

    Science.gov (United States)

    Schuecker, Clara; Davila, Carlos G.; Pettermann, Heinz E.

    2008-01-01

    The present work is concerned with modeling the non-linear response of fiber reinforced polymer laminates. Recent experimental data suggests that the non-linearity is not only caused by matrix cracking but also by matrix plasticity due to shear stresses. To capture the effects of those two mechanisms, a model combining a plasticity formulation with continuum damage has been developed to simulate the non-linear response of laminates under plane stress states. The model is used to compare the predicted behavior of various laminate lay-ups to experimental data from the literature by looking at the degradation of axial modulus and Poisson's ratio of the laminates. The influence of residual curing stresses and in-situ effect on the predicted response is also investigated. It is shown that predictions of the combined damage/plasticity model, in general, correlate well with the experimental data. The test data shows that there are two different mechanisms that can have opposite effects on the degradation of the laminate Poisson's ratio which is captured correctly by the damage/plasticity model. Residual curing stresses are found to have a minor influence on the predicted response for the cases considered here. Some open questions remain regarding the prediction of damage onset.

  5. Linear summation of outputs in a balanced network model of motor cortex.

    Science.gov (United States)

    Capaday, Charles; van Vreeswijk, Carl

    2015-01-01

    Given the non-linearities of the neural circuitry's elements, we would expect cortical circuits to respond non-linearly when activated. Surprisingly, when two points in the motor cortex are activated simultaneously, the EMG responses are the linear sum of the responses evoked by each of the points activated separately. Additionally, the corticospinal transfer function is close to linear, implying that the synaptic interactions in motor cortex must be effectively linear. To account for this, here we develop a model of motor cortex composed of multiple interconnected points, each comprised of reciprocally connected excitatory and inhibitory neurons. We show how non-linearities in neuronal transfer functions are eschewed by strong synaptic interactions within each point. Consequently, the simultaneous activation of multiple points results in a linear summation of their respective outputs. We also consider the effects of reduction of inhibition at a cortical point when one or more surrounding points are active. The network response in this condition is linear over an approximately two- to three-fold decrease of inhibitory feedback strength. This result supports the idea that focal disinhibition allows linear coupling of motor cortical points to generate movement related muscle activation patterns; albeit with a limitation on gain control. The model also explains why neural activity does not spread as far out as the axonal connectivity allows, whilst also explaining why distant cortical points can be, nonetheless, functionally coupled by focal disinhibition. Finally, we discuss the advantages that linear interactions at the cortical level afford to motor command synthesis.

  6. Frequency Response of Synthetic Vocal Fold Models with Linear and Nonlinear Material Properties

    Science.gov (United States)

    Shaw, Stephanie M.; Thomson, Scott L.; Dromey, Christopher; Smith, Simeon

    2012-01-01

    Purpose: The purpose of this study was to create synthetic vocal fold models with nonlinear stress-strain properties and to investigate the effect of linear versus nonlinear material properties on fundamental frequency (F0) during anterior-posterior stretching. Method: Three materially linear and 3 materially nonlinear models were…

  7. Generalized Linear Models in Vehicle Insurance

    Directory of Open Access Journals (Sweden)

    Silvie Kafková

    2014-01-01

    Actuaries in insurance companies try to find the best model for an estimation of insurance premium. It depends on many risk factors, e.g. the car characteristics and the profile of the driver. In this paper, an analysis of the portfolio of vehicle insurance data using a generalized linear model (GLM) is performed. The main advantage of the approach presented in this article is that the GLMs are not limited by inflexible preconditions. Our aim is to predict the dependence of annual claim frequency on given risk factors. Based on a large real-world sample of data from 57 410 vehicles, the present study proposed a classification analysis approach that addresses the selection of predictor variables. The models with different predictor variables are compared by analysis of deviance and Akaike information criterion (AIC). Based on this comparison, the model for the best estimate of annual claim frequency is chosen. All statistical calculations are computed in the R environment, which contains the stats package with the function for the estimation of parameters of GLM and the function for analysis of deviance.
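    A one-predictor Poisson GLM with log link, the canonical model for claim frequency, can be fitted by Newton-Raphson in a few lines. This sketch mirrors the scheme behind R's glm(family = poisson); the data are synthetic, not the 57 410-vehicle portfolio, and a real claim-frequency model would add an exposure offset and more rating factors.

```python
import math

def fit_poisson_glm(x, y, iters=25):
    """One-predictor Poisson GLM with log link, mu = exp(b0 + b1*x),
    fitted by Newton-Raphson on the Poisson log-likelihood."""
    b0 = b1 = 0.0
    for _ in range(iters):
        mu = [math.exp(b0 + b1 * xi) for xi in x]
        # score vector of the Poisson log-likelihood
        g0 = sum(yi - mi for yi, mi in zip(y, mu))
        g1 = sum((yi - mi) * xi for yi, mi, xi in zip(y, mu, x))
        # Fisher information (2x2), inverted in closed form
        h00 = sum(mu)
        h01 = sum(mi * xi for mi, xi in zip(mu, x))
        h11 = sum(mi * xi * xi for mi, xi in zip(mu, x))
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1
```

    Fitting synthetic data generated with rate exp(0.5 + 0.3x) recovers those coefficients, since the score equations are solved exactly at the generating values.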

  8. Radio-over-fiber linearization with optimized genetic algorithm CPWL model.

    Science.gov (United States)

    Mateo, Carlos; Carro, Pedro L; García-Dúcar, Paloma; De Mingo, Jesús; Salinas, Íñigo

    2017-02-20

    This article proposes an optimized version of a canonical piece-wise-linear (CPWL) digital predistorter to enhance the linearity of a radio-over-fiber (RoF) LTE mobile fronthaul. In this work, we propose a threshold allocation optimization process carried out by a genetic algorithm (GA) to optimize the CPWL model (GA-CPWL). Firstly, experiments show how the CPWL model outperforms the classical memory polynomial DPD in an intensity modulation/direct detection (IM/DD) RoF link. Then, the GA-CPWL predistorter is compared with the CPWL model in several scenarios to verify that the proposed DPD offers better performance in different optical transmission conditions. Experimental results reveal that with a proper threshold allocation, the GA-CPWL predistorter offers very promising outcomes.

  9. An electromyographic-based test for estimating neuromuscular fatigue during incremental treadmill running

    International Nuclear Information System (INIS)

    Camic, Clayton L; Kovacs, Attila J; Hill, Ethan C; Calantoni, Austin M; Yemm, Allison J; Enquist, Evan A; VanDusseldorp, Trisha A

    2014-01-01

    The purposes of the present study were twofold: (1) to determine if the model used for estimating the physical working capacity at the fatigue threshold (PWC_FT) from electromyographic (EMG) amplitude data during incremental cycle ergometry could be applied to treadmill running to derive a new neuromuscular fatigue threshold for running, and (2) to compare the running velocities associated with the PWC_FT, ventilatory threshold (VT), and respiratory compensation point (RCP). Fifteen college-aged subjects (21.5 ± 1.3 y, 68.7 ± 10.5 kg, 175.9 ± 6.7 cm) performed an incremental treadmill test to exhaustion with bipolar surface EMG signals recorded from the vastus lateralis. There were significant (p < 0.05) mean differences in running velocities between the VT (11.3 ± 1.3 km·h⁻¹) and PWC_FT (14.0 ± 2.3 km·h⁻¹), and between the VT and RCP (14.0 ± 1.8 km·h⁻¹), but not between the PWC_FT and RCP. The findings of the present study indicated that the PWC_FT model could be applied to a single continuous, incremental treadmill test to estimate the maximal running velocity that can be maintained prior to the onset of neuromuscular fatigue. In addition, these findings suggested that the PWC_FT, like the RCP, may be used to differentiate the heavy from severe domains of exercise intensity. (paper)

  10. Model-Checking of Linear-Time Properties in Multi-Valued Systems

    OpenAIRE

    Li, Yongming; Droste, Manfred; Lei, Lihui

    2012-01-01

    In this paper, we study model-checking of linear-time properties in multi-valued systems. Safety property, invariant property, liveness property, persistence and dual-persistence properties in multi-valued logic systems are introduced. Some algorithms related to the above multi-valued linear-time properties are discussed. The verification of multi-valued regular safety properties and multi-valued ω-regular properties using lattice-valued automata are thoroughly studied. Since the law o...

  11. Running vacuum cosmological models: linear scalar perturbations

    Energy Technology Data Exchange (ETDEWEB)

    Perico, E.L.D. [Instituto de Física, Universidade de São Paulo, Rua do Matão 1371, CEP 05508-090, São Paulo, SP (Brazil); Tamayo, D.A., E-mail: elduartep@usp.br, E-mail: tamayo@if.usp.br [Departamento de Astronomia, Universidade de São Paulo, Rua do Matão 1226, CEP 05508-900, São Paulo, SP (Brazil)

    2017-08-01

    In cosmology, phenomenologically motivated expressions for running vacuum are commonly parameterized as linear functions typically denoted by Λ(H²) or Λ(R). Such models assume an equation of state for the vacuum given by P̄_Λ = −ρ̄_Λ, relating its background pressure P̄_Λ with its mean energy density ρ̄_Λ ≡ Λ/8πG. This equation of state suggests that the vacuum dynamics is due to an interaction with the matter content of the universe. Most of the approaches studying the observational impact of these models only consider the interaction between the vacuum and the transient dominant matter component of the universe. We extend such models by assuming that the running vacuum is the sum of independent contributions, namely ρ̄_Λ = Σᵢ ρ̄_Λᵢ. Each Λᵢ vacuum component is associated and interacting with one of the i matter components in both the background and perturbation levels. We derive the evolution equations for the linear scalar vacuum and matter perturbations in those two scenarios, and identify the running vacuum imprints on the cosmic microwave background anisotropies as well as on the matter power spectrum. In the Λ(H²) scenario the vacuum is coupled with every matter component, whereas the Λ(R) description only leads to a coupling between vacuum and non-relativistic matter, producing different effects on the matter power spectrum.

  12. Role of Statistical Random-Effects Linear Models in Personalized Medicine.

    Science.gov (United States)

    Diaz, Francisco J; Yeh, Hung-Wen; de Leon, Jose

    2012-03-01

    Some empirical studies and recent developments in pharmacokinetic theory suggest that statistical random-effects linear models are valuable tools that allow describing simultaneously patient populations as a whole and patients as individuals. This remarkable characteristic indicates that these models may be useful in the development of personalized medicine, which aims at finding treatment regimes that are appropriate for particular patients, not just appropriate for the average patient. In fact, published developments show that random-effects linear models may provide a solid theoretical framework for drug dosage individualization in chronic diseases. In particular, individualized dosages computed with these models by means of an empirical Bayesian approach may produce better results than dosages computed with some methods routinely used in therapeutic drug monitoring. This is further supported by published empirical and theoretical findings that show that random effects linear models may provide accurate representations of phase III and IV steady-state pharmacokinetic data, and may be useful for dosage computations. These models have applications in the design of clinical algorithms for drug dosage individualization in chronic diseases; in the computation of dose correction factors; computation of the minimum number of blood samples from a patient that are necessary for calculating an optimal individualized drug dosage in therapeutic drug monitoring; measure of the clinical importance of clinical, demographic, environmental or genetic covariates; study of drug-drug interactions in clinical settings; the implementation of computational tools for web-site-based evidence farming; design of pharmacogenomic studies; and in the development of a pharmacological theory of dosage individualization.

  13. Cost of Incremental Expansion of an Existing Family Medicine Residency Program.

    Science.gov (United States)

    Ashkin, Evan A; Newton, Warren P; Toomey, Brian; Lingley, Ronald; Page, Cristen P

    2017-07-01

    Expanding residency training programs to address shortages in the primary care workforce is challenged by the present graduate medical education (GME) environment. The Medicare funding cap on new GME positions and reductions in the Health Resources and Services Administration (HRSA) Teaching Health Center (THC) GME program require innovative solutions to support primary care residency expansion. Sparse literature exists to assist in predicting the actual cost of incremental expansion of a family medicine residency program without federal or state GME support. In 2011 a collaboration to develop a community health center (CHC) academic medical partnership (CHAMP) was formed and created a THC as a training site for expansion of an existing family medicine residency program. The cost of expansion was a critical factor as no federal GME funding or HRSA THC GME program support was available. Initial start-up costs were supported by a federal grant and local foundations. Careful financial analysis of the expansion has provided actual costs per resident of the incremental expansion of the residency. Results: The CHAMP created a new THC and expanded the residency from eight to ten residents per year. The cost of expansion was approximately $72,000 per resident per year. The cost of incremental expansion of our residency program in the CHAMP model was more than 50% less than the recently reported cost of training in the HRSA THC GME program.

  14. Three routes forward for biofuels: Incremental, leapfrog, and transitional

    International Nuclear Information System (INIS)

    Morrison, Geoff M.; Witcover, Julie; Parker, Nathan C.; Fulton, Lew

    2016-01-01

    This paper examines three technology routes for lowering the carbon intensity of biofuels: (1) a leapfrog route that focuses on major technological breakthroughs in lignocellulosic pathways at new, stand-alone biorefineries; (2) an incremental route in which improvements are made to existing U.S. corn ethanol and soybean biodiesel biorefineries; and (3) a transitional route in which biotechnology firms gain experience growing, handling, or chemically converting lignocellulosic biomass in a lower-risk fashion than leapfrog biorefineries by leveraging existing capital stock. We find the incremental route is likely to involve the largest production volumes and greenhouse gas benefits until at least the mid-2020s, but transitional and leapfrog biofuels together have far greater long-term potential. We estimate that the Renewable Fuel Standard, California's Low Carbon Fuel Standard, and federal tax credits provided an incentive of roughly $1.5–2.5 per gallon of leapfrog biofuel between 2012 and 2015, but that regulatory elements in these policies mostly incentivize lower-risk incremental investments. Adjustments in policy may be necessary to bring a greater focus on transitional technologies that provide targeted learning and cost reduction opportunities for leapfrog biofuels. - Highlights: • Three technological pathways are compared that lower carbon intensity of biofuels. • Incremental changes lead to faster greenhouse gas reductions. • Leapfrog changes lead to greatest long-term potential. • Two main biofuel policies (RFS and LCFS) are largely incremental in nature. • Transitional biofuels offer medium-risk, medium reward pathway.

  15. A Multiphase Non-Linear Mixed Effects Model: An Application to Spirometry after Lung Transplantation

    Science.gov (United States)

    Rajeswaran, Jeevanantham; Blackstone, Eugene H.

    2014-01-01

    In medical sciences, we often encounter longitudinal temporal relationships that are non-linear in nature. The influence of risk factors may also change across longitudinal follow-up. A multiphase non-linear mixed effects model is presented to model temporal patterns of longitudinal continuous measurements, with temporal decomposition to identify the phases and risk factors within each phase. Application of this model is illustrated using spirometry data after lung transplantation using readily available statistical software. This application illustrates the usefulness of our flexible model when dealing with complex non-linear patterns and time-varying coefficients. PMID:24919830

  16. Modeling of non-ideal hard permanent magnets with an affine-linear model, illustrated for a bar and a horseshoe magnet

    Science.gov (United States)

    Glane, Sebastian; Reich, Felix A.; Müller, Wolfgang H.

    2017-11-01

    This study is dedicated to continuum-scale material modeling of isotropic permanent magnets. An affine-linear extension to the commonly used ideal hard model for permanent magnets is proposed, motivated, and detailed. In order to demonstrate the differences between these models, bar and horseshoe magnets are considered. The structure of the boundary value problem for the magnetic field and related solution techniques are discussed. For the ideal model, closed-form analytical solutions were obtained for both geometries. Magnetic fields of the boundary value problems for both models and differently shaped magnets were computed numerically by using the boundary element method. The results show that the character of the magnetic field is strongly influenced by the model that is used. Furthermore, it can be observed that the shape of an affine-linear magnet influences the near-field significantly. Qualitative comparisons with experiments suggest that both the ideal and the affine-linear models are relevant in practice, depending on the magnetic material employed. Mathematically speaking, the ideal magnetic model is a special case of the affine-linear one. Therefore, in applications where knowledge of the near-field is important, the affine-linear model can yield more accurate results—depending on the magnetic material.

  17. Flutter analysis of an airfoil with nonlinear damping using equivalent linearization

    Directory of Open Access Journals (Sweden)

    Chen Feixin

    2014-02-01

    The equivalent linearization method (ELM) is modified to investigate the nonlinear flutter system of an airfoil with a cubic damping. After obtaining the linearization quantity of the cubic nonlinearity by the ELM, an equivalent system can be deduced and then investigated by linear flutter analysis methods. Different from the routine procedures of the ELM, the frequency rather than the amplitude of limit cycle oscillation (LCO) is chosen as an active increment to produce bifurcation charts. Numerical examples show that this modification makes the ELM much more efficient. Meanwhile, the LCOs obtained by the ELM are in good agreement with numerical solutions. The nonlinear damping can delay the occurrence of secondary bifurcation. On the other hand, it has marginal influence on bifurcation characteristics or LCOs.
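    The linearization quantity for a cubic damping force can be reproduced numerically: averaging f(v)·v against v² over one cycle of harmonic motion recovers the standard harmonic-balance result c_eq = (3/4)·c₃·(Aω)². The coefficients below are illustrative, not taken from the paper.

```python
import math

def equivalent_damping(c3, amp, omega, samples=100_000):
    """Equivalent linear damping for a cubic damping force c3*v**3 under
    harmonic motion x(t) = A*sin(w*t): the cycle average of f(v)*v
    divided by the cycle average of v**2 (harmonic balance)."""
    num = den = 0.0
    for k in range(samples):
        t = 2.0 * math.pi * k / (samples * omega)
        v = amp * omega * math.cos(omega * t)
        num += c3 * v ** 4  # f(v)*v = c3*v**3 * v
        den += v * v
    return num / den

# Agrees with the closed form c_eq = (3/4) * c3 * (A*w)**2.
c_eq = equivalent_damping(c3=0.2, amp=1.5, omega=3.0)
```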

  18. Analogical reasoning: An incremental or insightful process? What cognitive and cortical evidence suggests.

    Science.gov (United States)

    Antonietti, Alessandro; Balconi, Michela

    2010-06-01

    The step-by-step, incremental nature of analogical reasoning can be questioned, since analogy making appears to be an insight-like process. This alternative view of analogical thinking can be integrated in Speed's model, even though the alleged role played by dopaminergic subcortical circuits needs further supporting evidence.

  19. Estimation and variable selection for generalized additive partial linear models

    KAUST Repository

    Wang, Li

    2011-08-01

    We study generalized additive partial linear models, proposing the use of polynomial spline smoothing for estimation of nonparametric functions, and deriving quasi-likelihood based estimators for the linear parameters. We establish asymptotic normality for the estimators of the parametric components. The procedure avoids solving large systems of equations as in kernel-based procedures and thus results in gains in computational simplicity. We further develop a class of variable selection procedures for the linear parameters by employing a nonconcave penalized quasi-likelihood, which is shown to have an asymptotic oracle property. Monte Carlo simulations and an empirical example are presented for illustration. © Institute of Mathematical Statistics, 2011.

  20. Study of linear induction motor characteristics : the Mosebach model

    Science.gov (United States)

    1976-05-31

    This report covers the Mosebach theory of the double-sided linear induction motor, starting with the idealized model and accompanying assumptions, and ending with relations for thrust, airgap power, and motor efficiency. Solutions of the magnetic in...

  1. Study of linear induction motor characteristics : the Oberretl model

    Science.gov (United States)

    1975-05-30

    The Oberretl theory of the double-sided linear induction motor (LIM) is examined, starting with the idealized model and accompanying assumptions, and ending with relations for predicted thrust, airgap power, and motor efficiency. The effect of varyin...

  2. Performances Of Estimators Of Linear Models With Autocorrelated ...

    African Journals Online (AJOL)

    The performances of five estimators of linear models with autocorrelated error terms are compared when the independent variable is autoregressive. The results reveal that the properties of the estimators when the sample size is finite are quite similar to the properties of the estimators when the sample size is infinite although ...

  3. Study of Piezoelectric Vibration Energy Harvester with non-linear conditioning circuit using an integrated model

    Science.gov (United States)

    Manzoor, Ali; Rafique, Sajid; Usman Iftikhar, Muhammad; Mahmood Ul Hassan, Khalid; Nasir, Ali

    2017-08-01

    A piezoelectric vibration energy harvester (PVEH) consists of a cantilever bimorph with piezoelectric layers pasted on its top and bottom, which can harvest power from vibrations and feed it to low-power wireless sensor nodes through a power conditioning circuit. In this paper, a non-linear conditioning circuit, consisting of a full-bridge rectifier followed by a buck-boost converter, is employed to investigate the issues of the electrical side of the energy harvesting system. An integrated mathematical model of the complete electromechanical system has been developed. Previously, researchers have studied PVEHs with sophisticated piezo-beam models but employed simplistic linear circuits, such as a resistor, as the electrical load. In contrast, other researchers have worked on more complex non-linear circuits but with over-simplified piezo-beam models. Such models neglect different aspects of the system which result from complex interactions of its electrical and mechanical subsystems. In this work, the authors have integrated the distributed-parameter model of the piezo-beam presented in the literature with a real-world non-linear electrical load. The developed integrated model is then employed to analyse the stability of the complete energy harvesting system. In contrast to the simplistic linear circuit elements employed by many researchers, this work provides a more realistic and useful electromechanical model with a genuinely non-linear electrical load.

  4. Modeling and analysis of mover gaps in tubular moving-magnet linear oscillating motors

    Directory of Open Access Journals (Sweden)

    Xuesong LUO

    2018-05-01

    A tubular moving-magnet linear oscillating motor (TMMLOM) has merits of high efficiency and excellent dynamic capability. To enhance the thrust performance, quasi-Halbach permanent magnet (PM) arrays are arranged on its mover in the application of a linear electro-hydrostatic actuator in more electric aircraft. The arrays are assembled from several individual segments, which inevitably leaves gaps between them. To investigate the effects of the gaps on the radial magnetic flux density and the machine thrust, an analytical model considering both axial and radial gaps is built in this paper. The model is validated by finite element simulations and experimental results. Distributions of the magnetic flux are described for different sizes of radial and axial gaps. Besides, the output force is also discussed for normal and end windings. Finally, the model has demonstrated that both kinds of gaps have a negative effect on the thrust, and that the linear motor is more sensitive to radial ones. Keywords: Air-gap flux density, Linear motor, Mover gaps, Quasi-Halbach array, Thrust output, Tubular moving-magnet linear oscillating motor (TMMLOM)

  5. Risk evaluations of aging phenomena: the linear aging reliability model and its extensions

    International Nuclear Information System (INIS)

    Vesely, W.E.

    1987-01-01

    A model for component failure rates due to aging mechanisms has been developed from basic phenomenological considerations. In the treatment, the occurrences of deterioration are modeled as following a Poisson process. The severity of damage is allowed to have any distribution; however, the damage is assumed to accumulate independently. Finally, the failure rate is modeled as being proportional to the accumulated damage. Using this treatment, the linear aging failure rate model is obtained. The applicability of the linear aging model to various mechanisms is discussed. The model can be extended to cover nonlinear and dependent aging phenomena. The implementability of the linear aging model is demonstrated by applying it to the aging data collected in NRC's Nuclear Plant Aging Research (NPAR) Program. The applications show that aging as observed in collected data has significant effects on the component failure probability and component reliability when aging is not effectively detected and controlled by testing and maintenance.
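    The linear aging failure rate model and its closed-form reliability follow directly from the description above; the numerical rates and horizon below are illustrative only.

```python
import math

def failure_rate(t, lam0, a):
    """Linear aging model: a constant random-failure rate lam0 plus an
    aging contribution that grows linearly with component age t."""
    return lam0 + a * t

def reliability(t, lam0, a):
    """R(t) = exp(-integral_0^t of the failure rate), which for the
    linear model has the closed form exp(-(lam0*t + a*t**2/2))."""
    return math.exp(-(lam0 * t + 0.5 * a * t * t))

# Illustrative 10-year horizon: aging (a > 0) lowers reliability
# relative to the constant-rate case (a = 0).
r_no_aging = reliability(10.0, 0.01, 0.0)   # ~0.905
r_aging = reliability(10.0, 0.01, 0.002)    # ~0.819
```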

  6. Risk evaluations of aging phenomena: The linear aging reliability model and its extensions

    International Nuclear Information System (INIS)

    Vesely, W.E.

    1986-01-01

    A model for component failure rates due to aging mechanisms has been developed from basic phenomenological considerations. In the treatment, the occurrences of deterioration are modeled as following a Poisson process. The severity of damage is allowed to have any distribution; however, the damage is assumed to accumulate independently. Finally, the failure rate is modeled as being proportional to the accumulated damage. Using this treatment, the linear aging failure rate model is obtained. The applicability of the linear aging model to various mechanisms is discussed. The model can be extended to cover nonlinear and dependent aging phenomena. The implementability of the linear aging model is demonstrated by applying it to the aging data collected in NRC's Nuclear Plant Aging Research (NPAR) Program. The applications show that aging as observed in collected data has significant effects on the component failure probability and component reliability when aging is not effectively detected and controlled by testing and maintenance.

  7. Incrementality in naming and reading complex numerals: Evidence from eyetracking

    NARCIS (Netherlands)

    Korvorst, M.H.W.; Roelofs, A.P.A.; Levelt, W.J.M.

    2006-01-01

    Individuals speak incrementally when they interleave planning and articulation. Eyetracking, along with the measurement of speech onset latencies, can be used to gain more insight into the degree of incrementality adopted by speakers. In the current article, two eyetracking experiments are reported

  8. Application of linear and non-linear low-Re k-ε models in two-dimensional predictions of convective heat transfer in passages with sudden contractions

    International Nuclear Information System (INIS)

    Raisee, M.; Hejazi, S.H.

    2007-01-01

    This paper presents comparisons between heat transfer predictions and measurements for developing turbulent flow through straight rectangular channels with sudden contractions at the mid-channel section. The present numerical results were obtained using a two-dimensional finite-volume code which solves the governing equations in a vertical plane located at the lateral mid-point of the channel. The pressure field is obtained with the well-known SIMPLE algorithm. The hybrid scheme was employed for the discretization of convection in all transport equations. For modeling of the turbulence, a zonal low-Reynolds number k-ε model and the linear and non-linear low-Reynolds number k-ε models with the 'Yap' and 'NYP' length-scale correction terms have been employed. The main objective of the present study is to examine the ability of the above turbulence models in the prediction of convective heat transfer in channels with a sudden contraction at a mid-channel section. The results of this study show that a sudden contraction creates a relatively small recirculation bubble immediately downstream of the channel contraction. This separation bubble influences the distribution of the local heat transfer coefficient and increases the heat transfer levels by a factor of three. Computational results indicate that all the turbulence models employed produce similar flow fields. The zonal k-ε model produces the wrong Nusselt number distribution by underpredicting heat transfer levels in the recirculation bubble and overpredicting them in the developing region. The linear low-Re k-ε model, on the other hand, returns the correct Nusselt number distribution in the recirculation region, although it somewhat overpredicts heat transfer levels in the developing region downstream of the separation bubble. The replacement of the 'Yap' term with the 'NYP' term in the linear low-Re k-ε model results in a more accurate local Nusselt number distribution. Moreover, the application of the non-linear k

  9. Evaluation of a Linear Mixing Model to Retrieve Soil and Vegetation Temperatures of Land Targets

    International Nuclear Information System (INIS)

    Yang, Jinxin; Jia, Li; Cui, Yaokui; Zhou, Jie; Menenti, Massimo

    2014-01-01

    A simple linear mixing model of a heterogeneous soil-vegetation system, and retrieval of component temperatures from directional remote sensing measurements by inverting this model, is evaluated in this paper using observations by a thermal camera. The thermal camera was used to obtain multi-angular TIR (Thermal Infra-Red) images over vegetable and orchard canopies. A whole thermal camera image was treated as a pixel of a satellite image to evaluate the model with the two-component system, i.e. soil and vegetation. The evaluation included two parts: evaluation of the linear mixing model and evaluation of the inversion of the model to retrieve component temperatures. For evaluation of the linear mixing model, the RMSE between the observed and modelled brightness temperatures is 0.2 K, which indicates that the linear mixing model works well under most conditions. For evaluation of the model inversion, the RMSE between the model-retrieved and the observed vegetation temperatures is 1.6 K; correspondingly, the RMSE between the observed and retrieved soil temperatures is 2.0 K. According to the evaluation of the sensitivity of the retrieved component temperatures to fractional cover, the linear mixing model gives more accurate retrievals for both soil and vegetation temperatures under intermediate fractional cover conditions
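The two-component inversion can be sketched as a least-squares fit of the linear mixing model. This is a simplified illustration, not the paper's processing chain: it assumes mixing is applied directly to brightness temperatures, and the view-angle fractional covers and component temperatures below are invented numbers.

```python
import numpy as np

# hypothetical fractional vegetation cover seen at four view angles
f_veg = np.array([0.3, 0.5, 0.7, 0.9])
T_veg_true, T_soil_true = 295.0, 310.0   # kelvin, invented values

# forward linear mixing model: Tb = f*T_veg + (1 - f)*T_soil
Tb = f_veg * T_veg_true + (1 - f_veg) * T_soil_true

# inversion: solve the overdetermined linear system for the two
# component temperatures from the multi-angular observations
A = np.column_stack([f_veg, 1 - f_veg])
(T_veg_est, T_soil_est), *_ = np.linalg.lstsq(A, Tb, rcond=None)
```

With noise-free synthetic data the inversion recovers the component temperatures exactly; adding measurement noise to `Tb` reproduces the kind of retrieval error (RMSE of 1-2 K) the evaluation reports.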

  10. A quasi-linear gyrokinetic transport model for tokamak plasmas

    International Nuclear Information System (INIS)

    Casati, A.

    2009-10-01

    After a presentation of some basics around nuclear fusion, this research thesis introduces the framework of the tokamak strategy to deal with confinement, hence the main plasma instabilities which are responsible for turbulent transport of energy and matter in such a system. The author also briefly introduces the two principal plasma representations, the fluid and the kinetic ones, and explains why the gyro-kinetic approach has been preferred. A tokamak relevant case is presented in order to highlight the relevance of a correct accounting of the kinetic wave-particle resonance. He discusses the issue of the quasi-linear response. Firstly, the derivation of the model, called QuaLiKiz, and its underlying hypotheses to get the energy and the particle turbulent flux are presented. Secondly, the validity of the quasi-linear response is verified against nonlinear gyro-kinetic simulations. The saturation model assumed in QuaLiKiz is presented and discussed. Then, the author qualifies the global outcomes of QuaLiKiz. Both the quasi-linear energy and the particle flux are compared to the expectations from the nonlinear simulations, across a wide scan of tokamak relevant parameters. Then, the coupling of QuaLiKiz within the integrated transport solver CRONOS is presented: this procedure allows the time-dependent transport problem to be solved, hence the direct application of the model to the experiment. The first preliminary results regarding the experimental analysis are finally discussed

  11. Linear and nonlinear methods in modeling the aqueous solubility of organic compounds.

    Science.gov (United States)

    Catana, Cornel; Gao, Hua; Orrenius, Christian; Stouten, Pieter F W

    2005-01-01

    Solubility data for 930 diverse compounds have been analyzed using linear Partial Least Squares (PLS) and nonlinear PLS methods, Continuum Regression (CR), and Neural Networks (NN). 1D and 2D descriptors from the MOE package, in combination with E-state or ISIS keys, have been used. The best model was obtained using linear PLS on a combination of 22 MOE descriptors and 65 ISIS keys. It has a correlation coefficient (r2) of 0.935 and a root-mean-square error (RMSE) of 0.468 log molar solubility (log S(w)). The model, validated on a test set of 177 compounds not included in the training set, has an r2 of 0.911 and an RMSE of 0.475 log S(w). The descriptors were ranked according to their importance, and the 22 MOE descriptors were found at the top of the list. The CR model produced results as good as PLS, and because of the way in which cross-validation was done it is expected to be a valuable prediction tool alongside the PLS model. The statistics obtained using nonlinear methods did not surpass those obtained with linear ones. The good statistics obtained for linear PLS and CR recommend these models for prediction when it is difficult or impossible to make experimental measurements, for virtual screening, combinatorial library design, and efficient lead optimization.
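A minimal PLS1 fit with the same r2/RMSE metrics can be sketched in a few lines. This is not the paper's model: the descriptors and solubilities below are synthetic stand-ins for the MOE/ISIS data, and the fit uses a bare NIPALS-style deflation scheme.

```python
import numpy as np

def pls1_coefficients(X, y, n_components):
    """PLS1 regression coefficients via the NIPALS deflation scheme
    (data are centered internally; B applies to centered X)."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)          # weight vector
        t = Xc @ w                      # score vector
        tt = t @ t
        p = Xc.T @ t / tt               # X loading
        qk = yc @ t / tt                # y loading
        Xc = Xc - np.outer(t, p)        # deflate X and y
        yc = yc - qk * t
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    return W @ np.linalg.solve(P.T @ W, q)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                   # stand-in "descriptors"
beta = np.zeros(10); beta[:3] = [1.0, -0.5, 0.25]
y = X @ beta + rng.normal(scale=0.1, size=200)   # stand-in "log S(w)"

B = pls1_coefficients(X, y, n_components=3)
y_hat = (X - X.mean(axis=0)) @ B + y.mean()
rmse = float(np.sqrt(np.mean((y - y_hat) ** 2)))
r2 = float(1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2))
```

On this synthetic set three latent components suffice; on real descriptor matrices the number of components is chosen by cross-validation, as in the study.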

  12. Possible factors determining the non-linearity in the VO2-power output relationship in humans: theoretical studies.

    Science.gov (United States)

    Korzeniewski, Bernard; Zoladz, Jerzy A

    2003-08-01

    At low power output exercise (below lactate threshold), the oxygen uptake increases linearly with power output, but at high power output exercise (above lactate threshold) some additional oxygen consumption causes a non-linearity in the overall VO(2) (oxygen uptake rate)-power output relationship. The functional significance of this phenomenon for human exercise tolerance is very important, but the mechanisms underlying it remain unknown. In the present work, a computer model of oxidative phosphorylation in intact skeletal muscle developed previously is used to examine the background of this relationship in different modes of exercise. Our simulations demonstrate that the non-linearity in the VO(2)-power output relationship and the difference in the magnitude of this non-linearity between incremental exercise mode and square-wave exercise mode (constant power output exercise) can be generated by introducing into the model some hypothetical factor F (group of associated factors) that accumulate(s) in time during exercise. The performed computer simulations, based on this assumption, give proper time courses of changes in VO(2) and [PCr] after the onset of work of different intensities, including the slow component in VO(2), well matching the experimental results. Moreover, if it is assumed that the exercise terminates because of fatigue when the amount/intensity of F exceeds some threshold value, the model allows the generation of a proper shape of the well-known power-duration curve. This fact suggests that the phenomenon of the non-linearity of the VO(2)-power output relationship and the magnitude of this non-linearity in different modes of exercise is determined by some factor(s) responsible for muscle fatigue.

  13. Linear and nonlinear ARMA model parameter estimation using an artificial neural network

    Science.gov (United States)

    Chon, K. H.; Cohen, R. J.

    1997-01-01

    This paper addresses parametric system identification of linear and nonlinear dynamic systems by analysis of the input and output signals. Specifically, we investigate the relationship between estimation of the system using a feedforward neural network model and estimation of the system by use of linear and nonlinear autoregressive moving-average (ARMA) models. By utilizing a neural network model incorporating a polynomial activation function, we show the equivalence of the artificial neural network to the linear and nonlinear ARMA models. We compare the parameterization of the estimated system using the neural network and ARMA approaches by utilizing data generated by means of computer simulations. Specifically, we show that the parameters of a simulated ARMA system can be obtained from the neural network analysis of the simulated data or by conventional least squares ARMA analysis. The feasibility of applying neural networks with polynomial activation functions to the analysis of experimental data is explored by application to measurements of heart rate (HR) and instantaneous lung volume (ILV) fluctuations.
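For the purely linear special case, the equivalence the authors exploit reduces to ordinary least squares: a network with linear activations and one delay line is exactly a linear ARX model. The sketch below (a simplification of the paper's ARMA setting, with made-up coefficients and simulated data rather than HR/ILV measurements) shows the parameters being recovered directly from input-output data.

```python
import numpy as np

rng = np.random.default_rng(1)
a1, b1 = 0.8, 0.5                      # "true" system parameters (invented)
N = 5000
x = rng.normal(size=N)                 # input signal
y = np.zeros(N)
for n in range(1, N):
    # first-order ARX system: y[n] = a1*y[n-1] + b1*x[n-1] + noise
    y[n] = a1 * y[n - 1] + b1 * x[n - 1] + 0.01 * rng.normal()

# parametric identification: least-squares fit of the same structure
Phi = np.column_stack([y[:-1], x[:-1]])          # regressor matrix
(a1_est, b1_est), *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
```

The nonlinear case in the paper replaces this single linear regression with a polynomial-activation network, whose expanded terms map back onto nonlinear ARMA coefficients in the same fashion.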

  14. Multi-Repeated Projection Lithography for High-Precision Linear Scale Based on Average Homogenization Effect

    Directory of Open Access Journals (Sweden)

    Dongxu Ren

    2016-04-01

    Full Text Available A multi-repeated photolithography method for manufacturing an incremental linear scale using projection lithography is presented. The method is based on the average homogenization effect that periodically superposes the light intensity of different locations of pitches in the mask to make a consistent energy distribution at a specific wavelength, from which the accuracy of a linear scale can be improved precisely using the average pitch with different step distances. The method’s theoretical error is within 0.01 µm for a periodic mask with a 2-µm sine-wave error. The intensity error models in the focal plane include the rectangular grating error on the mask, static positioning error, and lithography lens focal plane alignment error, which affect pitch uniformity less than in the common linear scale projection lithography splicing process. It was analyzed and confirmed that increasing the repeat exposure number of a single stripe could improve accuracy, as could adjusting the exposure spacing to achieve a set proportion of black and white stripes. According to the experimental results, the multi-repeated photolithography method readily achieves a pitch accuracy of 43 nm at any of 10 locations along 1 m, and the whole-length accuracy of the linear scale is less than 1 µm/m.

  15. Modelling and Inverse-Modelling: Experiences with O.D.E. Linear Systems in Engineering Courses

    Science.gov (United States)

    Martinez-Luaces, Victor

    2009-01-01

    In engineering careers courses, differential equations are widely used to solve problems concerned with modelling. In particular, ordinary differential equations (O.D.E.) linear systems appear regularly in Chemical Engineering, Food Technology Engineering and Environmental Engineering courses, due to the usefulness in modelling chemical kinetics,…

  16. Non-linear DSGE Models and The Central Difference Kalman Filter

    DEFF Research Database (Denmark)

    Andreasen, Martin Møller

    This paper introduces a Quasi Maximum Likelihood (QML) approach based on the Central Difference Kalman Filter (CDKF) to estimate non-linear DSGE models with potentially non-Gaussian shocks. We argue that this estimator can be expected to be consistent and asymptotically normal for DSGE models...

  17. CONTRIBUTIONS TO THE FINITE ELEMENT MODELING OF LINEAR ULTRASONIC MOTORS

    Directory of Open Access Journals (Sweden)

    Oana CHIVU

    2013-05-01

    Full Text Available The present paper is concerned with the main modeling elements of linear ultrasonic motors as produced by means of the finite element method. Hence, the model is first designed, and then a modal and harmonic analysis are carried out in view of outlining the main outcomes

  18. Linear theory for filtering nonlinear multiscale systems with model error.

    Science.gov (United States)

    Berry, Tyrus; Harlim, John

    2014-07-08

    In this paper, we study filtering of multiscale dynamical systems with model error arising from limitations in resolving the smaller scale processes. In particular, the analysis assumes the availability of continuous-time noisy observations of all components of the slow variables. Mathematically, this paper presents new results on higher order asymptotic expansion of the first two moments of a conditional measure. In particular, we are interested in the application of filtering multiscale problems in which the conditional distribution is defined over the slow variables, given noisy observation of the slow variables alone. From the mathematical analysis, we learn that for a continuous time linear model with Gaussian noise, there exists a unique choice of parameters in a linear reduced model for the slow variables which gives the optimal filtering when only the slow variables are observed. Moreover, these parameters simultaneously give the optimal equilibrium statistical estimates of the underlying system, and as a consequence they can be estimated offline from the equilibrium statistics of the true signal. By examining a nonlinear test model, we show that the linear theory extends in this non-Gaussian, nonlinear configuration as long as we know the optimal stochastic parametrization and the correct observation model. However, when the stochastic parametrization model is inappropriate, parameters chosen for good filter performance may give poor equilibrium statistical estimates and vice versa; this finding is based on analytical and numerical results on our nonlinear test model and the two-layer Lorenz-96 model. Finally, even when the correct stochastic ansatz is given, it is imperative to estimate the parameters simultaneously and to account for the nonlinear feedback of the stochastic parameters into the reduced filter estimates. In numerical experiments on the two-layer Lorenz-96 model, we find that the parameters estimated online, as part of a filtering

  19. Valid statistical approaches for analyzing sholl data: Mixed effects versus simple linear models.

    Science.gov (United States)

    Wilson, Machelle D; Sethi, Sunjay; Lein, Pamela J; Keil, Kimberly P

    2017-03-01

    The Sholl technique is widely used to quantify dendritic morphology. Data from such studies, which typically sample multiple neurons per animal, are often analyzed using simple linear models. However, simple linear models fail to account for intra-class correlation that occurs with clustered data, which can lead to faulty inferences. Mixed effects models account for intra-class correlation that occurs with clustered data; thus, these models more accurately estimate the standard deviation of the parameter estimate, which produces more accurate p-values. While mixed models are not new, their use in neuroscience has lagged behind their use in other disciplines. A review of the published literature illustrates common mistakes in analyses of Sholl data. Analysis of Sholl data collected from Golgi-stained pyramidal neurons in the hippocampus of male and female mice using both simple linear and mixed effects models demonstrates that the p-values and standard deviations obtained using the simple linear models are biased downwards and lead to erroneous rejection of the null hypothesis in some analyses. The mixed effects approach more accurately models the true variability in the data set, which leads to correct inference. Mixed effects models avoid faulty inference in Sholl analysis of data sampled from multiple neurons per animal by accounting for intra-class correlation. Given the widespread practice in neuroscience of obtaining multiple measurements per subject, there is a critical need to apply mixed effects models more widely. Copyright © 2017 Elsevier B.V. All rights reserved.
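The core issue, intra-class correlation deflating naive standard errors, can be demonstrated without any modeling library. The sketch below uses simulated data in place of Sholl measurements; the animal counts, neuron counts, and variance components are arbitrary assumptions chosen to make the clustering effect visible.

```python
import numpy as np

rng = np.random.default_rng(2)
n_animals, n_neurons = 8, 20

# simulated clustered data: a shared per-animal effect (between-animal
# variation) plus independent per-neuron noise (within-animal variation)
animal_effect = rng.normal(scale=1.0, size=n_animals)
data = animal_effect[:, None] + rng.normal(scale=0.5,
                                           size=(n_animals, n_neurons))

# naive SE: treat all 160 neurons as independent observations
# (the "simple linear model" mistake described in the abstract)
naive_se = data.std(ddof=1) / np.sqrt(data.size)

# cluster-aware SE: average within animal first, so the unit of
# replication is the animal, as a mixed effects model effectively does
animal_means = data.mean(axis=1)
cluster_se = animal_means.std(ddof=1) / np.sqrt(n_animals)
```

The naive standard error is biased downwards relative to the cluster-aware one, which is exactly why the simple linear models in the abstract reject the null hypothesis too readily.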

  20. A Solvable Dynamic Principal-Agent Model with Linear Marginal Productivity

    Directory of Open Access Journals (Sweden)

    Bing Liu

    2018-01-01

    Full Text Available We study how to design an optimal contract which provides incentives for the agent to put forth the desired effort in a continuous time dynamic moral hazard model with linear marginal productivity. Using exponential utility and linear production, three different information structures, full information, hidden actions and hidden savings, are considered in the principal-agent model. Applying the stochastic maximum principle, we solve the model explicitly, where the agent’s optimization problem becomes the principal’s problem of choosing an optimal contract. The explicit solutions to our model allow us to analyze the distortion of allocations. The main effect of hidden actions is a reduction of effort, with a smaller effect on the consumption allocation. In the hidden savings case, the consumption distortion almost vanishes but the effort distortion is expanded. In our setting, the agent’s optimal effort is also reduced with the decline of marginal productivity.

  1. Wireless Positioning Based on a Segment-Wise Linear Approach for Modeling the Target Trajectory

    DEFF Research Database (Denmark)

    Figueiras, Joao; Pedersen, Troels; Schwefel, Hans-Peter

    2008-01-01

    Positioning solutions in infrastructure-based wireless networks generally operate by exploiting the channel information of the links between the Wireless Devices and fixed networking Access Points. The major challenge of such solutions is the modeling of both the noise properties of the channel measurements and the user mobility patterns. One class of typical human movement patterns is the segment-wise linear approach, which is studied in this paper. Current tracking solutions, such as the Constant Velocity model, hardly handle such segment-wise linear patterns. In this paper we propose a segment-wise linear model, called the Drifting Points model. The model results in increased performance when compared with traditional solutions.

  2. Non-linear time variant model intended for polypyrrole-based actuators

    Science.gov (United States)

    Farajollahi, Meisam; Madden, John D. W.; Sassani, Farrokh

    2014-03-01

    Polypyrrole-based actuators are of interest due to their biocompatibility, low operation voltage and relatively high strain and force. Modeling and simulation are very important to predict the behaviour of each actuator. To develop an accurate model, we need to know the electro-chemo-mechanical specifications of the Polypyrrole. In this paper, the non-linear time-variant model of Polypyrrole film is derived and proposed using a combination of an RC transmission line model and a state space representation. The model incorporates the potential dependent ionic conductivity. A function of ionic conductivity of Polypyrrole vs. local charge is proposed and implemented in the non-linear model. Matching of the measured and simulated electrical response suggests that ionic conductivity of Polypyrrole decreases significantly at negative potential vs. silver/silver chloride and leads to reduced current in the cyclic voltammetry (CV) tests. The next stage is to relate the distributed charging of the polymer to actuation via the strain to charge ratio. Further work is also needed to identify ionic and electronic conductivities as well as capacitance as a function of oxidation state so that a fully predictive model can be created.

  3. A Linearized Large Signal Model of an LCL-Type Resonant Converter

    Directory of Open Access Journals (Sweden)

    Hong-Yu Li

    2015-03-01

    Full Text Available In this work, an LCL-type resonant dc/dc converter with a capacitive output filter is modeled in two stages. In the first high-frequency ac stage, all ac signals are decomposed into two orthogonal vectors in a synchronous rotating d–q frame using multi-frequency modeling. In the dc stage, all dc quantities are represented by their average values with average state-space modeling. A nonlinear two-stage model is then created by means of a non-linear link. By aligning the transformer voltage on the d-axis, the nonlinear link can be eliminated, and the whole converter can be modeled by a single set of linear state-space equations. Furthermore, a feedback control scheme can be formed according to the steady-state solutions. Simulation and experimental results have proven that the resulting model is good for fast simulation and state variable estimation.

  4. Finance for incremental housing: current status and prospects for expansion

    NARCIS (Netherlands)

    Ferguson, B.; Smets, P.G.S.M.

    2010-01-01

    Appropriate finance can greatly increase the speed and lower the cost of incremental housing - the process used by much of the low/moderate-income majority of most developing countries to acquire shelter. Informal finance continues to dominate the funding of incremental housing. However, new sources

  5. One Step at a Time: SBM as an Incremental Process.

    Science.gov (United States)

    Conrad, Mark

    1995-01-01

    Discusses incremental SBM budgeting and answers questions regarding resource equity, bookkeeping requirements, accountability, decision-making processes, and purchasing. Approaching site-based management as an incremental process recognizes that every school system engages in some level of site-based decisions. Implementation can be gradual and…

  6. Decomposed Implicit Models of Piecewise - Linear Networks

    Directory of Open Access Journals (Sweden)

    J. Brzobohaty

    1992-05-01

    Full Text Available The general matrix form of the implicit description of a piecewise-linear (PWL network and the symbolic block diagram of the corresponding circuit model are proposed. Their decomposed forms enable us to determine quite separately the existence of the individual breakpoints of the resultant PWL characteristic and their coordinates using independent network parameters. For the two-diode and three-diode cases all the attainable types of the PWL characteristic are introduced.

  7. Thresholding projection estimators in functional linear models

    OpenAIRE

    Cardot, Hervé; Johannes, Jan

    2010-01-01

    We consider the problem of estimating the regression function in functional linear regression models by proposing a new type of projection estimators which combine dimension reduction and thresholding. The introduction of a threshold rule allows one to obtain consistency under broad assumptions as well as minimax rates of convergence under additional regularity hypotheses. We also consider the particular case of Sobolev spaces generated by the trigonometric basis, which permits one to easily obtain mean squ...

  8. Analyzing longitudinal data with the linear mixed models procedure in SPSS.

    Science.gov (United States)

    West, Brady T

    2009-09-01

    Many applied researchers analyzing longitudinal data share a common misconception: that specialized statistical software is necessary to fit hierarchical linear models (also known as linear mixed models [LMMs], or multilevel models) to longitudinal data sets. Although several specialized statistical software programs of high quality are available that allow researchers to fit these models to longitudinal data sets (e.g., HLM), rapid advances in general purpose statistical software packages have recently enabled analysts to fit these same models when using preferred packages that also enable other more common analyses. One of these general purpose statistical packages is SPSS, which includes a very flexible and powerful procedure for fitting LMMs to longitudinal data sets with continuous outcomes. This article aims to present readers with a practical discussion of how to analyze longitudinal data using the LMMs procedure in the SPSS statistical software package.

  9. History Matters: Incremental Ontology Reasoning Using Modules

    Science.gov (United States)

    Cuenca Grau, Bernardo; Halaschek-Wiener, Christian; Kazakov, Yevgeny

    The development of ontologies involves continuous but relatively small modifications. Existing ontology reasoners, however, do not take advantage of the similarities between different versions of an ontology. In this paper, we propose a technique for incremental reasoning—that is, reasoning that reuses information obtained from previous versions of an ontology—based on the notion of a module. Our technique does not depend on a particular reasoning calculus and thus can be used in combination with any reasoner. We have applied our results to incremental classification of OWL DL ontologies and found significant improvement over regular classification time on a set of real-world ontologies.

  10. Non-destructive linear model for leaf area estimation in Vernonia ferruginea Less

    Directory of Open Access Journals (Sweden)

    MC. Souza

    Full Text Available Leaf area estimation is an important biometrical trait for evaluating leaf development and plant growth in field and pot experiments. We developed a non-destructive model to estimate the leaf area (LA of Vernonia ferruginea using the length (L and width (W leaf dimensions. Different combinations of linear equations were obtained from L, L2, W, W2, LW and L2W2. The linear regressions using the product of LW dimensions were more efficient to estimate the LA of V. ferruginea than models based on a single dimension (L, W, L2 or W2. Therefore, the linear regression “LA=0.463+0.676WL” provided the most accurate estimate of V. ferruginea leaf area. Validation of the selected model showed that the correlation between real measured leaf area and estimated leaf area was very high.
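The selected model is a one-line computation, sketched below with the coefficients reported in the abstract (the leaf dimensions in the example are hypothetical; units are taken to be the ones used in the paper's measurements):

```python
def leaf_area(length, width):
    """Non-destructive leaf area estimate for Vernonia ferruginea,
    using the fitted linear model LA = 0.463 + 0.676 * W * L."""
    return 0.463 + 0.676 * width * length

# hypothetical leaf measuring 10 x 4 in the paper's length units
area = leaf_area(10.0, 4.0)
```

Because the model uses the product W*L, both dimensions must be measured; the abstract notes that single-dimension models (L, W, L2 or W2) were less accurate.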

  11. Linear identification and model adjustment of a PEM fuel cell stack

    Energy Technology Data Exchange (ETDEWEB)

    Kunusch, C; Puleston, P F; More, J J [LEICI, Departamento de Electrotecnia, Universidad Nacional de La Plata, calle 1 esq. 47 s/n, 1900 La Plata (Argentina); Consejo de Investigaciones Cientificas y Tecnicas (CONICET) (Argentina); Husar, A [Institut de Robotica i Informatica Industrial (CSIC-UPC), c/ Llorens i Artigas 4-6, 08028 Barcelona (Spain); Mayosky, M A [LEICI, Departamento de Electrotecnia, Universidad Nacional de La Plata, calle 1 esq. 47 s/n, 1900 La Plata (Argentina); Comision de Investigaciones Cientificas (CIC), Provincia de Buenos Aires (Argentina)

    2008-07-15

    In the context of fuel cell stack control a major challenge is modeling the interdependence of various complex subsystem dynamics. In many cases, the states interaction is usually modeled through several look-up tables, decision blocks and piecewise continuous functions. Many internal variables are inaccessible for measurement and cannot be used in control algorithms. To make significant contributions in this area, it is necessary to develop reliable models for control and design purposes. In this paper, a linear model based on experimental identification of a 7-cell stack was developed. The procedure followed to obtain a linear model of the system consisted of performing spectroscopy tests of four different single-input single-output subsystems. The considered inputs for the tests were the stack current and the cathode oxygen flow rate, while the measured outputs were the stack voltage and the cathode total pressure. The resulting model can be used either for model-based control design or for on-line analysis and errors detection. (author)

  12. An evaluation of bias in propensity score-adjusted non-linear regression models.

    Science.gov (United States)

    Wan, Fei; Mitra, Nandita

    2018-03-01

    Propensity score methods are commonly used to adjust for observed confounding when estimating the conditional treatment effect in observational studies. One popular method, covariate adjustment of the propensity score in a regression model, has been empirically shown to be biased in non-linear models. However, no compelling underlying theoretical reason has been presented. We propose a new framework to investigate bias and consistency of propensity score-adjusted treatment effects in non-linear models that uses a simple geometric approach to forge a link between the consistency of the propensity score estimator and the collapsibility of non-linear models. Under this framework, we demonstrate that adjustment of the propensity score in an outcome model results in the decomposition of observed covariates into the propensity score and a remainder term. Omission of this remainder term from a non-collapsible regression model leads to biased estimates of the conditional odds ratio and conditional hazard ratio, but not for the conditional rate ratio. We further show, via simulation studies, that the bias in these propensity score-adjusted estimators increases with larger treatment effect size, larger covariate effects, and increasing dissimilarity between the coefficients of the covariates in the treatment model versus the outcome model.
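Non-collapsibility of the odds ratio, the mechanism the authors link to this bias, can be shown with a small closed-form example (illustrative coefficients; the setup has no confounding at all, so the gap between the two odds ratios is pure non-collapsibility rather than estimation bias):

```python
import numpy as np

def expit(z):
    return 1.0 / (1.0 + np.exp(-z))

beta_treat, beta_x = 1.0, 2.0              # invented logistic coefficients
x = np.array([-1.0, 1.0])                  # covariate, two values, p = 1/2

# conditional odds ratio is exp(beta_treat) at every level of x
p1 = expit(beta_treat + beta_x * x)        # P(Y=1 | T=1, x)
p0 = expit(beta_x * x)                     # P(Y=1 | T=0, x)

# marginal probabilities: average over x (x is independent of T here)
m1, m0 = p1.mean(), p0.mean()
marginal_or = (m1 / (1 - m1)) / (m0 / (1 - m0))
conditional_or = np.exp(beta_treat)
```

Marginalizing over the covariate attenuates the odds ratio toward the null even without confounding; omitting the "remainder term" from a non-collapsible outcome model, as the abstract describes, distorts the conditional odds ratio for the same reason.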

  13. Indexing Density Models for Incremental Learning and Anytime Classification on Data Streams

    DEFF Research Database (Denmark)

    Seidl, Thomas; Assent, Ira; Kranen, Philipp

    2009-01-01

    Classification of streaming data faces three basic challenges: it has to deal with huge amounts of data, the varying time between two stream data items must be used as well as possible (anytime classification) and additional training data must be incrementally learned (anytime learning) for applying … to the individual object to be classified) a hierarchy of mixture densities that represent kernel density estimators at successively coarser levels. Our probability density queries together with novel classification improvement strategies provide the necessary information for very effective classification at any point of interruption. Moreover, we propose a novel evaluation method for anytime classification using Poisson streams and demonstrate the anytime learning performance of the Bayes tree.

  14. Linear and nonlinear modeling of light propagation in hollow-core photonic crystal fiber

    DEFF Research Database (Denmark)

    Roberts, John; Lægsgaard, Jesper

    2009-01-01

    Hollow core photonic crystal fibers (HC-PCFs) find applications which include quantum and non-linear optics, gas detection and short high-intensity laser pulse delivery. Central to most applications is an understanding of the linear and nonlinear optical properties. These require careful modeling....... The intricacies of modeling various forms of HC-PCF are reviewed. An example of linear dispersion engineering, aimed at reducing and flattening the group velocity dispersion, is then presented. Finally, a study of short high intensity pulse delivery using HC-PCF in both dispersive and nonlinear (solitonic...

  15. Anti-symmetrically fused model and non-linear integral equations in the three-state Uimin-Sutherland model

    International Nuclear Information System (INIS)

    Fujii, Akira; Kluemper, Andreas

    1999-01-01

    We derive the non-linear integral equations determining the free energy of the three-state pure bosonic Uimin-Sutherland model. In order to find a complete set of auxiliary functions, the anti-symmetric fusion procedure is utilized. We solve the non-linear integral equations numerically and see that the low-temperature behavior coincides with that predicted by conformal field theory. The magnetization and magnetic susceptibility are also calculated by means of the non-linear integral equations.

  16. The microcomputer scientific software series 2: general linear model--regression.

    Science.gov (United States)

    Harold M. Rauscher

    1983-01-01

    The general linear model regression (GLMR) program provides the microcomputer user with a sophisticated regression analysis capability. The output provides a regression ANOVA table, estimators of the regression model coefficients, their confidence intervals, confidence intervals around the predicted Y-values, residuals for plotting, a check for multicollinearity, a...
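The 1983 program is long obsolete, but the quantities it reports (coefficient estimates, confidence intervals, residuals for plotting, a multicollinearity check) all follow directly from ordinary least squares. A minimal numpy sketch on synthetic data, using the normal-approximation 1.96 factor for roughly 95% intervals:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
# design matrix: intercept column plus two random regressors
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(0, 0.3, n)

# least-squares coefficient estimates
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y
resid = y - X @ beta                       # residuals for plotting
s2 = resid @ resid / (n - X.shape[1])      # residual variance estimate
se = np.sqrt(s2 * np.diag(XtX_inv))        # standard errors
ci = np.column_stack([beta - 1.96 * se, beta + 1.96 * se])  # ~95% CIs

# a simple multicollinearity check: condition number of the design matrix
cond = np.linalg.cond(X)
```

Large values of `cond` (hundreds or more) signal near-linear dependence among regressors, the condition the program's multicollinearity check was designed to flag.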

  17. New Inference Procedures for Semiparametric Varying-Coefficient Partially Linear Cox Models

    Directory of Open Access Journals (Sweden)

    Yunbei Ma

    2014-01-01

    In biomedical research, one major objective is to identify risk factors and study their risk impacts, as this identification can help clinicians both make proper decisions and increase the efficiency of treatments and resource allocation. A two-step penalization-based procedure is proposed to select linear regression coefficients for the linear components and to identify significant nonparametric varying-coefficient functions for semiparametric varying-coefficient partially linear Cox models. It is shown that the resulting penalized estimators of the linear regression coefficients are asymptotically normal and have oracle properties, and that the resulting estimators of the varying-coefficient functions achieve optimal convergence rates. A simulation study and an empirical example are presented for illustration.
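The paper's two-step procedure for varying-coefficient functions is not reproduced here; as a sketch of the penalized-estimation idea behind the linear component, a ridge-penalized Cox partial likelihood can be maximized directly on synthetic, uncensored data (the penalty weight and data-generating coefficients are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, p = 300, 3
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, 0.0, -0.5])
# exponential event times with hazard exp(X beta); no censoring for simplicity
times = rng.exponential(1.0 / np.exp(X @ beta_true))

order = np.argsort(times)      # sort subjects by event time
X = X[order]

def neg_pen_loglik(beta, lam=0.1):
    eta = X @ beta
    # risk set of subject i = all subjects with event time >= t_i,
    # i.e. everyone at or after position i in the sorted order
    log_risk = np.log(np.cumsum(np.exp(eta)[::-1])[::-1])
    return -(eta - log_risk).sum() + lam * np.sum(beta ** 2)

beta_hat = minimize(neg_pen_loglik, np.zeros(p)).x   # ridge-penalized estimate
```

With a modest penalty the estimate stays close to the true coefficients; an L1-type penalty instead of the ridge term would perform the variable selection emphasized in the paper.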

  18. 76 FR 73475 - Immigration Benefits Business Transformation, Increment I; Correction

    Science.gov (United States)

    2011-11-29

    ... Benefits Business Transformation, Increment I, 76 FR 53764 (Aug. 29, 2011). The final rule removed form... [CIS No. 2481-09; Docket No. USCIS-2009-0022] RIN 1615-AB83 Immigration Benefits Business Transformation, Increment I; Correction AGENCY: U.S. Citizenship and Immigration Services, DHS. ACTION: Final...

  19. Mechanical Behavior of Red Sandstone under Incremental Uniaxial Cyclical Compressive and Tensile Loading

    Directory of Open Access Journals (Sweden)

    Baoyun Zhao

    2017-01-01

    Uniaxial experiments were carried out on red sandstone specimens to investigate their short-term and creep mechanical behavior under incremental cyclic compressive and tensile loading. First, based on the results of short-term uniaxial incremental cyclic compressive and tensile loading experiments, deformation characteristics and energy dissipation were analyzed. The results show that the stress-strain curve of red sandstone has an obvious memory effect in the compressive and tensile loading stages. The strains at peak stresses and the residual strains increase with the cycle number. Energy dissipation, defined as the area of the hysteresis loop in the stress-strain curve, increases approximately as a power function of the cycle number. Creep tests of the red sandstone were also conducted. The results show that the creep curve under each compressive or tensile stress level can be divided into decay and steady stages, which cannot be described by the conventional Burgers model. Therefore, an improved Burgers creep model is constructed through viscoplastic mechanics; it agrees very well with the experimental results and describes the creep behavior of red sandstone better than the Burgers creep model.
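The improved model is not specified in this abstract; for reference, the classical Burgers model it extends gives creep strain under constant stress as the sum of an instantaneous elastic term, a steady viscous term, and a delayed (Kelvin) term. The parameter values below are purely illustrative, not fitted to the paper's data:

```python
import numpy as np

def burgers_strain(t, sigma, E_m, eta_m, E_k, eta_k):
    """Creep strain of the classical Burgers (Maxwell + Kelvin) model
    under constant stress sigma:
      sigma/E_m            -- instantaneous elastic strain (Maxwell spring)
      sigma*t/eta_m        -- steady viscous flow (Maxwell dashpot)
      (sigma/E_k)*(1-exp)  -- delayed Kelvin strain (decay stage)
    """
    return (sigma / E_m
            + sigma * t / eta_m
            + (sigma / E_k) * (1.0 - np.exp(-E_k * t / eta_k)))

t = np.linspace(0.0, 100.0, 6)   # time, illustrative units
eps = burgers_strain(t, sigma=10.0, E_m=5e3, eta_m=1e6, E_k=2e3, eta_k=5e4)
```

The Kelvin term reproduces the decaying-rate stage and the Maxwell dashpot the steady stage; the paper's viscoplastic extension adds behavior the plain sum above cannot capture.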

  20. Short-term load forecasting with increment regression tree

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Jingfei; Stenzel, Juergen [Darmstadt University of Technology, Darmstadt 64283 (Germany)

    2006-06-15

    This paper presents a new regression tree method for short-term load forecasting. Both increment and non-increment trees are built from the historical data to provide the data-space partition and input-variable selection. A support vector machine is then applied to the samples in each regression tree node for further fine regression. Results from different tree nodes are integrated through a weighted-average method to obtain the final forecast. The effectiveness of the proposed method is demonstrated through its application to an actual system. (author)
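A simplified single-tree sketch of the hybrid scheme (the paper additionally builds an increment tree on load differences and combines the trees by weighted averaging, which is omitted here): a regression tree partitions the input space, then a support vector regressor refines each node. All data and parameters are synthetic and illustrative.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.uniform(0, 24, size=(500, 1))                 # hour of day, toy load driver
y = 50 + 20 * np.sin(X[:, 0] * np.pi / 12) + rng.normal(0, 2, 500)  # toy load curve

# Step 1: regression tree partitions the input space (coarse model)
tree = DecisionTreeRegressor(max_leaf_nodes=8, random_state=0).fit(X, y)
leaf = tree.apply(X)

# Step 2: fit an SVR on the samples of each tree node for fine regression
svr_by_leaf = {lf: SVR(C=10.0).fit(X[leaf == lf], y[leaf == lf])
               for lf in np.unique(leaf)}

def predict(X_new):
    # route each point to its leaf, then use that leaf's SVR
    leaves = tree.apply(X_new)
    return np.array([svr_by_leaf[lf].predict(x[None, :])[0]
                     for lf, x in zip(leaves, X_new)])

X_test = rng.uniform(0, 24, size=(100, 1))
y_test = 50 + 20 * np.sin(X_test[:, 0] * np.pi / 12)  # noiseless ground truth
rmse = float(np.sqrt(np.mean((predict(X_test) - y_test) ** 2)))
```

Per-node SVRs only have to model a small, locally smooth piece of the load curve, which is the rationale the abstract gives for combining the partitioning tree with fine regression.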