WorldWideScience

Sample records for linear scheduling model

  1. Updating Linear Schedules with Lowest Cost: a Linear Programming Model

    Biruk, Sławomir; Jaśkowski, Piotr; Czarnigowska, Agata

    2017-10-01

Many civil engineering projects involve sets of tasks repeated in a predefined sequence in a number of work areas along a particular route. A useful graphical representation of schedules of such projects is the time-distance diagram, which clearly shows what process is conducted at a particular point in time and in a particular location. With repetitive tasks, the quality of project performance is conditioned by the ability of the planner to optimize workflow by synchronizing the works and resources, which usually means that resources are planned to be continuously utilized. However, construction processes are prone to risks, and a fully synchronized schedule may expire if a disturbance (bad weather, machine failure etc.) affects even one task. In such cases, works need to be rescheduled, and another optimal schedule should be built for the changed circumstances. This typically means that, to meet the fixed completion date, durations of operations have to be reduced. A number of measures are possible to achieve such reduction: working overtime, employing more resources or relocating resources from less to more critical tasks, but they all come at a considerable cost and affect the whole project. The paper investigates the problem of selecting the measures that reduce durations of tasks of a linear project so that the cost of these measures is kept to the minimum, and proposes an algorithm that could be applied to find optimal solutions as the need to reschedule arises. Considering that civil engineering projects, such as road building, usually involve fewer process types than construction projects, the complexity of scheduling problems is lower, and exact optimization algorithms can be applied. Therefore, the authors put forward a linear programming model of the problem and illustrate its principle of operation with an example.
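
    A minimal sketch of the kind of least-cost rescheduling LP described above, written with the open-source PuLP library; the task names, normal durations, crash limits, unit costs and deadline are hypothetical stand-ins, not data from the paper, and the single sequential chain is a simplification of a real time-distance schedule.

```python
# Hypothetical least-cost duration-reduction (crashing) LP, in the spirit of
# the model described above; all data are made up for illustration.
import pulp

tasks = ["earthworks", "base_course", "paving"]
normal_dur = {"earthworks": 12, "base_course": 10, "paving": 8}      # days
max_cut    = {"earthworks": 4,  "base_course": 3,  "paving": 2}      # max reduction per task
unit_cost  = {"earthworks": 500, "base_course": 800, "paving": 1200} # cost per day of reduction
deadline = 24  # fixed completion date (days) for this work area

prob = pulp.LpProblem("least_cost_rescheduling", pulp.LpMinimize)
cut = {t: pulp.LpVariable(f"cut_{t}", lowBound=0, upBound=max_cut[t]) for t in tasks}

# Objective: minimise the total cost of the acceleration measures.
prob += pulp.lpSum(unit_cost[t] * cut[t] for t in tasks)

# Tasks run in sequence in this area, so their shortened durations
# must fit within the fixed completion date.
prob += pulp.lpSum(normal_dur[t] - cut[t] for t in tasks) <= deadline

prob.solve(pulp.PULP_CBC_CMD(msg=0))
for t in tasks:
    print(t, "reduce by", cut[t].varValue, "days")
print("total crash cost:", pulp.value(prob.objective))
```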

  2. Microgrid Reliability Modeling and Battery Scheduling Using Stochastic Linear Programming

    Cardoso, Goncalo; Stadler, Michael; Siddiqui, Afzal; Marnay, Chris; DeForest, Nicholas; Barbosa-Povoa, Ana; Ferrao, Paulo

    2013-05-23

This paper describes the introduction of stochastic linear programming into Operations DER-CAM, a tool used to obtain optimal operating schedules for a given microgrid under local economic and environmental conditions. This application follows previous work on optimal scheduling of a lithium-iron-phosphate battery given the output uncertainty of a 1 MW molten carbonate fuel cell. Both are in the Santa Rita Jail microgrid, located in Dublin, California. This fuel cell has proven unreliable, partially justifying the consideration of storage options. Several stochastic DER-CAM runs are executed to compare different scenarios to values obtained by a deterministic approach. Results indicate that using a stochastic approach provides a conservative yet more lucrative battery schedule. Given fuel cell outages, lower expected energy bills result, with potential savings exceeding 6 percent.

  3. Spillways Scheduling for Flood Control of Three Gorges Reservoir Using Mixed Integer Linear Programming Model

    Maoyuan Feng

    2014-01-01

This study proposes a mixed integer linear programming (MILP) model to optimize spillway scheduling for reservoir flood control. Unlike conventional reservoir operation models, the proposed MILP model specifies the spillway status (including the number of spillways to be open and the degree to which each spillway is opened) instead of the reservoir release, since the release is actually controlled by operating the spillways. A piecewise linear approximation is used to formulate the relationship between reservoir storage and water release for a spillway, which can be open or closed, with its status depicted by a binary variable. Control-order and symmetry rules for the spillways are described and incorporated into the constraints to meet practical demands. Thus, a MILP model is set up to minimize the maximum reservoir storage. The General Algebraic Modeling System (GAMS) and IBM ILOG CPLEX Optimization Studio (CPLEX) software are used to find the optimal solution for the proposed MILP model. China's Three Gorges Reservoir, which has 80 spillways of five types, is selected as the case study. It is shown that the proposed model decreases flood risk compared with the conventional operation and makes the operation more practical by specifying the spillway status directly.
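
    The spillway-status idea can be illustrated with a toy mixed integer program in PuLP: binary open/close variables per spillway and time step, a linearised water balance, a control-order rule, and a minimised peak-storage variable. All numbers and the constant release capacity are hypothetical simplifications, not the Three Gorges data or the paper's piecewise linear storage-release curves.

```python
# Toy MILP in the spirit of the spillway-status formulation; data are hypothetical.
import pulp

T = range(4)                         # time steps
S = range(3)                         # identical spillways of one type
inflow = [500, 900, 700, 400]        # hypothetical inflow hydrograph
cap_per_spillway = 300               # release when fully open (simplified constant)
storage0 = 1000                      # initial storage

prob = pulp.LpProblem("spillway_scheduling", pulp.LpMinimize)
open_ = pulp.LpVariable.dicts("open", [(s, t) for s in S for t in T], cat="Binary")
storage = pulp.LpVariable.dicts("storage", T, lowBound=0)
peak = pulp.LpVariable("peak_storage", lowBound=0)

prob += peak  # minimise the maximum reservoir storage

for t in T:
    release = pulp.lpSum(cap_per_spillway * open_[(s, t)] for s in S)
    prev = storage0 if t == 0 else storage[t - 1]
    prob += storage[t] == prev + inflow[t] - release   # linearised water balance
    prob += peak >= storage[t]
    # Control-order / symmetry rule: spillway s+1 may only open if spillway s is open.
    for s in S:
        if s + 1 in S:
            prob += open_[(s + 1, t)] <= open_[(s, t)]

prob.solve(pulp.PULP_CBC_CMD(msg=0))
print("peak storage:", peak.varValue)
print({(s, t): int(open_[(s, t)].varValue) for s in S for t in T})
```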

  4. Gain scheduling for non-linear time-delay systems using approximated model

Pham, H.T.; Lim, J.T.

    2012-01-01

    The authors investigate a regulation problem of non-linear systems driven by an exogenous signal and time-delay in the input. In order to compensate for the input delay, they propose a reduction transformation containing the past information of the control input. Then, by utilising the Euler

  5. Solving a mixed-integer linear programming model for a multi-skilled project scheduling problem by simulated annealing

    H Kazemipoor

    2012-04-01

A multi-skilled project scheduling problem (MSPSP) has generally been presented as scheduling a project with staff members as resources. Each activity in the project network requires different skills, and staff members likewise possess different skills. This makes the MSPSP a special type of multi-mode resource-constrained project scheduling problem (MM-RCPSP) with a huge number of modes. Given the importance of this issue, in this paper a mixed integer linear programming model for the MSPSP is presented. Due to the complexity of the problem, a meta-heuristic algorithm is proposed in order to find near-optimal solutions. To validate the performance of the algorithm, results are compared against exact solutions obtained with the LINGO solver. The results are promising and show that optimal or near-optimal solutions are derived for small instances, and good solutions for larger instances, in reasonable time.

  6. A comparison of mixed-integer linear programming models for workforce scheduling with position-dependent processing times

    Moreno-Camacho, Carlos A.; Montoya-Torres, Jairo R.; Vélez-Gallego, Mario C.

    2018-06-01

    Only a few studies in the available scientific literature address the problem of having a group of workers that do not share identical levels of productivity during the planning horizon. This study considers a workforce scheduling problem in which the actual processing time is a function of the scheduling sequence to represent the decline in workers' performance, evaluating two classical performance measures separately: makespan and maximum tardiness. Several mathematical models are compared with each other to highlight the advantages of each approach. The mathematical models are tested with randomly generated instances available from a public e-library.

  7. Linear Models

    Searle, Shayle R

    2012-01-01

    This 1971 classic on linear models is once again available--as a Wiley Classics Library Edition. It features material that can be understood by any statistician who understands matrix algebra and basic statistical methods.

  8. Medium-dose-rate brachytherapy of cancer of the cervix: preliminary results of a prospectively designed schedule based on the linear-quadratic model

    Leborgne, Felix; Fowler, Jack F.; Leborgne, Jose H.; Zubizarreta, Eduardo; Curochquin, Rene

    1999-01-01

Purpose: To compare results and complications of our previous low-dose-rate (LDR) brachytherapy schedule for early-stage cancer of the cervix, with a prospectively designed medium-dose-rate (MDR) schedule, based on the linear-quadratic model (LQ). Methods and Materials: A combination of brachytherapy, external beam pelvic and parametrial irradiation was used in 102 consecutive Stage Ib-IIb LDR treated patients (1986-1990) and 42 equally staged MDR treated patients (1994-1996). The planned MDR schedule consisted of three insertions on three treatment days with six 8-Gy brachytherapy fractions to Point A, two on each treatment day with an interfraction interval of 6 hours, plus 18 Gy external whole pelvic dose, followed by additional parametrial irradiation. The calculated biologically effective dose (BED) for tumor was 90 Gy₁₀ and for rectum below 125 Gy₃. Results: In practice the MDR brachytherapy schedule achieved a tumor BED of 86 Gy₁₀ and a rectal BED of 101 Gy₃. The latter was better than originally planned due to a reduction from 85% to 77% in the percentage of the mean dose to the rectum in relation to Point A. The mean overall treatment time was 10 days shorter for MDR in comparison with LDR. The 3-year actuarial central control for LDR and MDR was 97% and 98% (p = NS), respectively. The Grade 2 and 3 late complications (scale 0 to 3) were 1% and 2.4%, respectively, for LDR (3-year) and MDR (2-year). Conclusions: LQ is a reliable tool for designing new schedules with altered fractionation and dose rates. The MDR schedule has proven to be an equivalent treatment schedule compared with LDR, with the additional advantage of a shorter overall treatment time. The mean rectal BED (Gy₃) was lower than expected.
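
    For readers unfamiliar with the LQ bookkeeping behind such schedule design, a minimal sketch of the BED and EQD2 arithmetic follows; it reproduces the textbook formulas only, omitting the dose-rate and incomplete-repair corrections that an actual MDR brachytherapy calculation would include.

```python
# Minimal LQ-model arithmetic: BED and EQD2 for a fractionated schedule.
# Dose-rate and incomplete-repair corrections relevant to MDR brachytherapy are ignored here.
def bed(n_fractions: int, dose_per_fraction: float, alpha_beta: float) -> float:
    """Biologically effective dose: BED = n * d * (1 + d / (alpha/beta))."""
    return n_fractions * dose_per_fraction * (1 + dose_per_fraction / alpha_beta)

def eqd2(bed_value: float, alpha_beta: float) -> float:
    """Equivalent dose in 2-Gy fractions: EQD2 = BED / (1 + 2 / (alpha/beta))."""
    return bed_value / (1 + 2.0 / alpha_beta)

# Six 8-Gy brachytherapy fractions to Point A, tumour alpha/beta = 10 Gy:
print(bed(6, 8.0, 10.0))   # 86.4 Gy10, close to the ~86 Gy10 reported above
# Late-responding tissue (alpha/beta = 3 Gy) for the same physical dose at Point A;
# the rectal point receives a lower physical dose, hence a lower rectal BED.
print(bed(6, 8.0, 3.0))    # 176.0 Gy3 at Point A
```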

  9. Gain scheduled linear quadratic control for quadcopter

    Okasha, M.; Shah, J.; Fauzi, W.; Hanouf, Z.

    2017-12-01

This study exploits the dynamics and control of quadcopters using the Linear Quadratic Regulator (LQR) control approach. The quadcopter's mathematical model is derived using the Newton-Euler method. It is a highly manoeuvrable, nonlinear, coupled six degrees of freedom (DOF) model, which includes aerodynamics and detailed gyroscopic moments that are often ignored in the literature. The linearized model is obtained and characterized by the heading angle (i.e. yaw angle) of the quadcopter. The adopted control approach utilizes the LQR method to track several reference trajectories, including circle and helix curves with significant variation in the yaw angle. The controller is modified to overcome difficulties related to the continuous changes in the operating points and to eliminate the chattering and discontinuity observed in the control input signal. Numerical non-linear simulations are performed using MATLAB and Simulink to illustrate the accuracy and effectiveness of the proposed controller.
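
    As an aside, the LQR design step mentioned above amounts to solving an algebraic Riccati equation for the linearised model. A sketch with SciPy on a hypothetical double-integrator axis (not the full 6-DOF quadcopter model of the paper):

```python
# LQR gain for a toy linearised model (one translational axis as a double
# integrator); the 6-DOF quadcopter model in the paper is far richer.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])       # state: [position, velocity]
B = np.array([[0.0],
              [1.0]])            # input: normalised thrust component
Q = np.diag([10.0, 1.0])         # state weighting
R = np.array([[0.1]])            # control-effort weighting

P = solve_continuous_are(A, B, Q, R)   # Riccati solution
K = np.linalg.solve(R, B.T @ P)        # LQR gain: u = -K x
print("LQR gain K =", K)
```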

  10. An improved scheduling algorithm for linear networks

    Bader, Ahmed; Alouini, Mohamed-Slim; Ayadi, Yassin

    2017-01-01

In accordance with the present disclosure, embodiments of an exemplary scheduling controller module or device implement an improved scheduling process such that the targeted reduction in schedule length can be achieved while incurring minimal energy penalty by allowing for a large rate (or duration) selection alphabet.

  11. An improved scheduling algorithm for linear networks

    Bader, Ahmed

    2017-02-09

In accordance with the present disclosure, embodiments of an exemplary scheduling controller module or device implement an improved scheduling process such that the targeted reduction in schedule length can be achieved while incurring minimal energy penalty by allowing for a large rate (or duration) selection alphabet.

  12. Modelling altered fractionation schedules

    Fowler, J.F.

    1993-01-01

The author discusses the conflicting requirements of hyperfractionation and accelerated fractionation used in radiotherapy, and the development of computer modelling to predict how to obtain an optimum of tumour cell kill without exceeding normal-tissue tolerance. The present trend is to shorten hyperfractionated schedules from 6 or 7 weeks to give overall times of 4 or 5 weeks as in new schedules by Herskovic et al (1992) and Harari (1992). Very high doses are given, much higher than can be given when ultrashort schedules such as CHART (12 days) are used. Computer modelling has suggested that optimum overall times, to yield maximum cell kill in tumours (α/β = 10 Gy) for a constant level of late complications (α/β = 3 Gy), would be X or X-1 weeks, where X is the doubling time of the tumour cells in days (Fowler 1990). For median doubling times of about 5 days, overall times of 4 or 5 weeks should be ideal. (U.K.)

  13. Linearly Ordered Attribute Grammar Scheduling Using SAT-Solving

    Bransen, Jeroen; van Binsbergen, L.Thomas; Claessen, Koen; Dijkstra, Atze

    2015-01-01

    Many computations over trees can be specified using attribute grammars. Compilers for attribute grammars need to find an evaluation order (or schedule) in order to generate efficient code. For the class of linearly ordered attribute grammars such a schedule can be found statically, but this problem

  14. Modeling the Cray memory scheduler

    Wickham, K.L.; Litteer, G.L.

    1992-04-01

    This report documents the results of a project to evaluate low cost modeling and simulation tools when applied to modeling the Cray memory scheduler. The specific tool used is described and the basics of the memory scheduler are covered. Results of simulations using the model are discussed and a favorable recommendation is made to make more use of this inexpensive technology.

  15. Multiagent scheduling models and algorithms

    Agnetis, Alessandro; Gawiejnowicz, Stanisław; Pacciarelli, Dario; Soukhal, Ameur

    2014-01-01

    This book presents multi-agent scheduling models in which subsets of jobs sharing the same resources are evaluated by different criteria. It discusses complexity results, approximation schemes, heuristics and exact algorithms.

  16. linear-quadratic-linear model

    Tanwiwat Jaikuna

    2017-02-01

Purpose: To develop an in-house software program that is able to calculate and generate the biological dose distribution and biological dose volume histogram by physical dose conversion using the linear-quadratic-linear (LQL) model. Material and methods: The Isobio software was developed using MATLAB version 2014b to calculate and generate the biological dose distribution and biological dose volume histograms. The physical dose from each voxel in the treatment plan was extracted through the Computational Environment for Radiotherapy Research (CERR), and the accuracy was verified by the difference between the dose volume histogram from CERR and that from the treatment planning system. An equivalent dose in 2 Gy fractions (EQD2) was calculated using the biologically effective dose (BED) based on the LQL model. The software calculation and the manual calculation were compared for EQD2 verification with paired t-test statistical analysis using IBM SPSS Statistics version 22 (64-bit). Results: Two- and three-dimensional biological dose distributions and biological dose volume histograms were displayed correctly by the Isobio software. Differences in physical dose were found between CERR and the treatment planning system (TPS) in Oncentra, with 3.33% in the high-risk clinical target volume (HR-CTV) determined by D90%, 0.56% in the bladder and 1.74% in the rectum when determined by D2cc, and less than 1% in Pinnacle. The difference in EQD2 between the software calculation and the manual calculation was not statistically significant (0.00%), with p-values of 0.820, 0.095, and 0.593 for external beam radiation therapy (EBRT) and 0.240, 0.320, and 0.849 for brachytherapy (BT) in the HR-CTV, bladder, and rectum, respectively. Conclusions: The Isobio software is a feasible tool to generate the biological dose distribution and biological dose volume histogram for treatment plan evaluation in both EBRT and BT.

  17. Single machine scheduling with time-dependent linear deterioration and rate-modifying maintenance

    Rustogi, Kabir; Strusevich, Vitaly A.

    2015-01-01

    We study single machine scheduling problems with linear time-dependent deterioration effects and maintenance activities. Maintenance periods (MPs) are included into the schedule, so that the machine, that gets worse during the processing, can be restored to a better state. We deal with a job-independent version of the deterioration effects, that is, all jobs share a common deterioration rate. However, we introduce a novel extension to such models and allow the deterioration rates to change af...

  18. Linear models with R

    Faraway, Julian J

    2014-01-01

A Hands-On Way to Learning Data Analysis. Part of the core of statistics, linear models are used to make predictions and explain the relationship between the response and the predictors. Understanding linear models is crucial to a broader competence in the practice of statistics. Linear Models with R, Second Edition explains how to use linear models in physical science, engineering, social science, and business applications. The book incorporates several improvements that reflect how the world of R has greatly expanded since the publication of the first edition. New to the Second Edition: Reorganiz

  19. Foundations of linear and generalized linear models

    Agresti, Alan

    2015-01-01

    A valuable overview of the most important ideas and results in statistical analysis Written by a highly-experienced author, Foundations of Linear and Generalized Linear Models is a clear and comprehensive guide to the key concepts and results of linear statistical models. The book presents a broad, in-depth overview of the most commonly used statistical models by discussing the theory underlying the models, R software applications, and examples with crafted models to elucidate key ideas and promote practical model building. The book begins by illustrating the fundamentals of linear models,

  20. Dimension of linear models

    Høskuldsson, Agnar

    1996-01-01

Determination of the proper dimension of a given linear model is one of the most important tasks in applied modeling work. We consider here eight criteria that can be used to determine the dimension of the model, or equivalently, the number of components to use in the model. Four of these criteria are widely used ones, while the remaining four are derived from the H-principle of mathematical modeling. Many examples from practice show that the criteria derived from the H-principle function better than the known and popular criteria for the number of components. We briefly review the basic problems in determining the dimension of linear models. Then each of the eight measures is treated. The results are illustrated by examples.

  1. Gain-scheduled Linear Quadratic Control of Wind Turbines Operating at High Wind Speed

    Østergaard, Kasper Zinck; Stoustrup, Jakob; Brath, Per

    2007-01-01

This paper addresses state estimation and linear quadratic (LQ) control of variable speed variable pitch wind turbines. On the basis of a nonlinear model of a wind turbine, a set of operating conditions is identified and an LQ controller is designed for each operating point. The controller gains are then interpolated linearly to get a control law for the entire operating envelope. A nonlinear state estimator is designed as a combination of two unscented Kalman filters and a linear disturbance estimator. The gain-scheduling variable (wind speed) is then calculated from the output of these state estimators.
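
    The gain-interpolation step described above can be sketched as plain linear interpolation of precomputed LQ gains over the scheduling variable; the operating points and gain values below are hypothetical placeholders, not the paper's wind-turbine design.

```python
# Hypothetical sketch of linear gain interpolation over the scheduling variable
# (estimated wind speed); the gains at each operating point would come from the
# per-operating-point LQ designs described above.
import numpy as np

wind_ops = np.array([14.0, 18.0, 22.0, 25.0])     # operating points [m/s] (hypothetical)
lq_gains = np.array([[0.8, 0.12],                 # gain row K at 14 m/s (hypothetical)
                     [1.1, 0.18],
                     [1.5, 0.25],
                     [1.9, 0.30]])                # gain row K at 25 m/s

def scheduled_gain(est_wind_speed: float) -> np.ndarray:
    """Interpolate each gain entry linearly between the nearest operating points."""
    return np.array([np.interp(est_wind_speed, wind_ops, lq_gains[:, j])
                     for j in range(lq_gains.shape[1])])

print(scheduled_gain(20.0))   # gain used between the 18 and 22 m/s designs
```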

  2. Scheduling models in farm management : a new approach

    Wijngaard, P.J.M.

    1988-01-01

    Three operational planning models to calculate schedules for an arable farm are examined. These models are a linear programming model, a dynamic programming model and a simulation model. They are examined at different levels of aggregation and relaxation in a retrospective way. Also a

  3. Dimension of linear models

    Høskuldsson, Agnar

    1996-01-01

Determination of the proper dimension of a given linear model is one of the most important tasks in applied modeling work. We consider here eight criteria that can be used to determine the dimension of the model, or equivalently, the number of components to use in the model. Four of these criteria are widely used ones, while the remaining four are derived from the H-principle of mathematical modeling. Many examples from practice show that the criteria derived from the H-principle function better than the known and popular criteria for the number of components. We briefly review the basic problems in determining the dimension of linear models. Then each of the eight measures is treated. The results are illustrated by examples.

  4. A LINEAR PROGRAMMING ALGORITHM FOR LEAST-COST SCHEDULING

Ayman H. Al-Momani

    1999-12-01

In this research, some concepts of linear programming and the critical path method are reviewed to describe recent modeling structures that have been of great value in analyzing extended-planning-horizon project time-cost trade-off problems. A simplified representation of a small project is given, and a linear programming model is formulated to represent this system. Procedures to solve the various problem formulations are cited, and the final solution is obtained using the LINDO program. The model developed represents many restrictions and management considerations of the project. It could be used by construction managers at the planning stage to explore the numerous opportunities available to the contractor and to predict the effect of a decision on the construction, facilitating a preferred operating policy given different management objectives. An implementation of this method is shown to outperform several other techniques on a large class of test problems. The results show that the algorithm is very promising in practice on a wide variety of time-cost trade-off problems. The method is simple, applicable to large networks, and generates solutions in short computational time at low cost, along with an increase in robustness.

  5. Non linear viscoelastic models

    Agerkvist, Finn T.

    2011-01-01

Viscoelastic effects are often present in loudspeaker suspensions; this can be seen in the displacement transfer function, which often shows a frequency-dependent value below the resonance frequency. In this paper nonlinear versions of the standard linear solid model (SLS) are investigated. The simulations show that the nonlinear version of the Maxwell SLS model can result in a time-dependent small-signal stiffness, while the Kelvin-Voigt version does not.

  6. Reliability modelling and simulation of switched linear system ...

    Reliability modelling and simulation of switched linear system control using temporal databases. ... design of fault-tolerant real-time switching systems control and modelling embedded micro-schedulers for complex systems maintenance.

  7. NASA Instrument Cost/Schedule Model

    Habib-Agahi, Hamid; Mrozinski, Joe; Fox, George

    2011-01-01

NASA's Office of Independent Program and Cost Evaluation (IPCE) has established a number of initiatives to improve its cost and schedule estimating capabilities. One of these initiatives has resulted in the JPL-developed NASA Instrument Cost Model (NICM). NICM is a cost and schedule estimator that contains: a system level cost estimation tool; a subsystem level cost estimation tool; a database of cost and technical parameters of over 140 previously flown remote sensing and in-situ instruments; a schedule estimator; a set of rules to estimate cost and schedule by life cycle phases (B/C/D); and a novel tool for developing joint probability distributions for cost and schedule risk (Joint Confidence Level (JCL)). This paper describes the development and use of NICM, including the data normalization processes, data mining methods (cluster analysis, principal components analysis, regression analysis and bootstrap cross validation), the estimating equations themselves and a demonstration of the NICM tool suite.

  8. A primer on linear models

    Monahan, John F

    2008-01-01

    Preface Examples of the General Linear Model Introduction One-Sample Problem Simple Linear Regression Multiple Regression One-Way ANOVA First Discussion The Two-Way Nested Model Two-Way Crossed Model Analysis of Covariance Autoregression Discussion The Linear Least Squares Problem The Normal Equations The Geometry of Least Squares Reparameterization Gram-Schmidt Orthonormalization Estimability and Least Squares Estimators Assumptions for the Linear Mean Model Confounding, Identifiability, and Estimability Estimability and Least Squares Estimators F

  9. Fuzzy linear programming based optimal fuel scheduling incorporating blending/transloading facilities

Djukanovic, M.; Babic, B.; Milosevic, B. [Electrical Engineering Inst. Nikola Tesla, Belgrade (Yugoslavia)]; Sobajic, D.J. [EPRI, Palo Alto, CA (United States). Power System Control]; Pao, Y.H. [Case Western Reserve Univ., Cleveland, OH (United States); AI WARE, Inc., Cleveland, OH (United States)]

    1996-05-01

In this paper the blending/transloading facilities are modeled using interactive fuzzy linear programming (FLP), in order to allow the decision-maker to handle the uncertainty of input information within the fuel scheduling optimization. An interactive decision-making process is formulated in which the decision-maker can learn to recognize good solutions by considering all possibilities of fuzziness. The application of the fuzzy formulation is accompanied by a careful examination of the definition of fuzziness, the appropriateness of the membership function and the interpretation of results. The proposed concept provides a decision support system with integration-oriented features, whereby the decision-maker can learn to recognize the relative importance of factors in the specific domain of the optimal fuel scheduling (OFS) problem. The formulation of a fuzzy linear programming problem to obtain a reasonable nonfuzzy solution under consideration of the ambiguity of parameters, represented by fuzzy numbers, is introduced. An additional advantage of the FLP formulation is its ability to deal with multi-objective problems.

  10. Constrained non-linear multi-objective optimisation of preventive maintenance scheduling for offshore wind farms

    Zhong, Shuya; Pantelous, Athanasios A.; Beer, Michael; Zhou, Jian

    2018-05-01

Offshore wind farms are an emerging source of renewable energy, which has been shown to have tremendous potential in recent years. In this growing area, a key challenge is that the preventive maintenance of offshore turbines should be scheduled reasonably to satisfy the power supply without failure. In this direction, two significant goals should be considered simultaneously as a trade-off: one is to maximise the system reliability and the other is to minimise the maintenance-related cost. Thus, a non-linear multi-objective programming model is proposed, including two newly defined objectives and thirteen families of constraints suitable for the preventive maintenance of offshore wind farms. In order to solve the model effectively, the non-dominated sorting genetic algorithm II (NSGA-II), designed especially for multi-objective optimisation, is utilised, and Pareto-optimal schedules can be obtained to offer adequate support to decision-makers. Finally, an example is given to illustrate the performance of the devised model and algorithm, and to explore the relationship between the two targets with the help of a contrast model.
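
    The Pareto-optimal schedules mentioned above are simply the non-dominated candidates under the two objectives. A small NumPy helper that filters a candidate set down to its Pareto front follows; the scores are random stand-ins, independent of the paper's NSGA-II implementation.

```python
# Pareto-front filter for candidate schedules scored on two minimisation
# objectives (e.g. maintenance cost and system unreliability); the scores
# here are random placeholders, not output of the paper's NSGA-II.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.random((50, 2))   # rows: candidate schedules, cols: [cost, unreliability]

def pareto_front(points: np.ndarray) -> np.ndarray:
    """Return a boolean mask of non-dominated rows (all objectives minimised)."""
    n = points.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        if not mask[i]:
            continue
        # Rows that point i dominates: no better in any objective, worse in at least one.
        dominated = np.all(points >= points[i], axis=1) & np.any(points > points[i], axis=1)
        mask[dominated] = False
    return mask

front = scores[pareto_front(scores)]
print(f"{front.shape[0]} non-dominated schedules out of {scores.shape[0]}")
```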

  11. DEVELOPMENT OF A MAINTENANCE SCHEDULING MODEL FOR ...

... of minor maintenance for each machine within this time span, in order to minimize the total cost of repairs and production. A numerical application of this developed model in a case study is presented. Key words: Maintenance, modeling, scheduling, optimization. [Global Jnl Engineering Res. Vol.1(2) 2002: 107-118]

  12. Dynamic Linear Models with R

    Campagnoli, Patrizia; Petris, Giovanni

    2009-01-01

State space models have gained tremendous popularity in fields as disparate as engineering, economics, genetics and ecology. Introducing general state space models, this book focuses on dynamic linear models, emphasizing their Bayesian analysis. It illustrates the fundamental steps needed to use dynamic linear models in practice, using an R package.

  13. Introduction to generalized linear models

    Dobson, Annette J

    2008-01-01

    Introduction Background Scope Notation Distributions Related to the Normal Distribution Quadratic Forms Estimation Model Fitting Introduction Examples Some Principles of Statistical Modeling Notation and Coding for Explanatory Variables Exponential Family and Generalized Linear Models Introduction Exponential Family of Distributions Properties of Distributions in the Exponential Family Generalized Linear Models Examples Estimation Introduction Example: Failure Times for Pressure Vessels Maximum Likelihood Estimation Poisson Regression Example Inference Introduction Sampling Distribution for Score Statistics Taylor Series Approximations Sampling Distribution for MLEs Log-Likelihood Ratio Statistic Sampling Distribution for the Deviance Hypothesis Testing Normal Linear Models Introduction Basic Results Multiple Linear Regression Analysis of Variance Analysis of Covariance General Linear Models Binary Variables and Logistic Regression Probability Distributions ...

  14. (Non) linear regression modelling

    Cizek, P.; Gentle, J.E.; Hardle, W.K.; Mori, Y.

    2012-01-01

    We will study causal relationships of a known form between random variables. Given a model, we distinguish one or more dependent (endogenous) variables Y = (Y1,…,Yl), l ∈ N, which are explained by a model, and independent (exogenous, explanatory) variables X = (X1,…,Xp),p ∈ N, which explain or

  15. Dose fractionated gamma knife radiosurgery for large arteriovenous malformations on daily or alternate day schedule outside the linear quadratic model: Proof of concept and early results. A substitute to volume fractionation.

    Mukherjee, Kanchan Kumar; Kumar, Narendra; Tripathi, Manjul; Oinam, Arun S; Ahuja, Chirag K; Dhandapani, Sivashanmugam; Kapoor, Rakesh; Ghoshal, Sushmita; Kaur, Rupinder; Bhatt, Sandeep

    2017-01-01

To evaluate the feasibility, safety and efficacy of dose fractionated gamma knife radiosurgery (DFGKRS) on a daily schedule beyond the linear quadratic (LQ) model, for large volume arteriovenous malformations (AVMs). Between 2012 and 2016, 14 patients with large AVMs (median volume 26.5 cc) unsuitable for surgery or embolization were treated in 2-3 DFGKRS sessions. The Leksell G frame was kept in situ during the whole procedure. 86% (n = 12) of patients had radiologic evidence of bleed, and 43% (n = 6) had presented with a history of seizures. 57% (n = 8) of patients received a daily treatment for 3 days and 43% (n = 6) were on an alternate day (2 fractions) regimen. The marginal dose was split into 2 or 3 fractions of the ideal single-fraction prescription dose of 23-25 Gy. The median follow-up period was 35.6 months (8-57 months). In the three-fraction scheme, the marginal dose ranged from 8.9-11.5 Gy, while in the two-fraction scheme, the marginal dose ranged from 11.3-15 Gy at 50% per fraction. Headache (43%, n = 6) was the most common early postoperative complication, which was controlled with a short course of steroids. Follow-up of at least three years was achieved in seven patients; complete nidus obliteration was seen in 43% of these patients, while obliteration was in the range of 50-99% in the rest. Overall, there was a 67.8% reduction in AVM volume at 3 years. Nidus obliteration at 3 years showed a significant rank-order correlation with the cumulative prescription dose (ρ = 0.95, p = 0.01), with attainment of near-total (more than 95%) obliteration rates beyond 29 Gy of cumulative prescription dose. No patient receiving a cumulative prescription dose of less than 31 Gy had any severe adverse reaction. In covariate-adjusted ordinal regression, only the cumulative prescription dose had a significant correlation with common terminology criteria for adverse events (CTCAE) severity (p = 0.04), independent of age, AVM volume

  16. A linear program for assessing the assignment and scheduling of radioactive wastes for disposal to sea

    Hutchinson, W.

    1983-04-01

The report takes the form of a user guide to a computer program using linear programming techniques to aid the assignment and scheduling of radioactive wastes for disposal to sea. The program is aimed at identifying 'optimum' amounts of each waste stream for disposal to sea without violating specified constraint values and/or fairness parameters. (author)

  17. Explorative methods in linear models

    Høskuldsson, Agnar

    2004-01-01

The author has developed the H-method of mathematical modeling that builds up the model by parts, where each part is optimized with respect to prediction. Besides providing better predictions than traditional methods, these methods provide graphic procedures for analyzing different features in data. These graphic methods extend the well-known methods and results of Principal Component Analysis to any linear model. Here the graphic procedures are applied to linear regression and Ridge Regression.

  18. Generalized, Linear, and Mixed Models

    McCulloch, Charles E; Neuhaus, John M

    2011-01-01

An accessible and self-contained introduction to statistical models, now in a modernized new edition. Generalized, Linear, and Mixed Models, Second Edition provides an up-to-date treatment of the essential techniques for developing and applying a wide variety of statistical models. The book presents thorough and unified coverage of the theory behind generalized, linear, and mixed models and highlights their similarities and differences in various construction, application, and computational aspects. A clear introduction to the basic ideas of fixed effects models, random effects models, and mixed m

  19. Shift scheduling model considering workload and worker’s preference for security department

    Herawati, A.; Yuniartha, D. R.; Purnama, I. L. I.; Dewi, LT

    2018-04-01

A security department operates 24 hours a day and applies shift scheduling to organize its workers, as is common in the hotel industry. This research develops a shift scheduling model that considers the workers' physical workload, using the Borg rating of perceived exertion (RPE) scale, and the workers' preferences, in order to accommodate schedule flexibility. The mathematical model is formulated as an integer linear program and yields optimal solutions for simple problems. The resulting shift schedule distributes shift allocations equally among workers to balance the physical workload and gives workers flexibility in arranging their working hours.
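
    A minimal integer-programming sketch of this kind of model, written with PuLP: binary worker-to-shift assignment variables, coverage and workload-balance constraints, and a preference term in the objective. The workers, shifts and preference scores are hypothetical, and the workload aspect is reduced to a simple shift cap rather than the RPE-based measure used in the paper.

```python
# Toy shift-assignment ILP with coverage, workload balance and preferences;
# purely illustrative, not the paper's exact formulation.
import pulp

workers = ["W1", "W2", "W3", "W4"]
days = range(3)
shifts = ["morning", "evening", "night"]
# Hypothetical preference scores (higher = preferred).
pref = {(w, s): 1 for w in workers for s in shifts}
pref[("W1", "night")] = 0
pref[("W3", "morning")] = 2

prob = pulp.LpProblem("shift_scheduling", pulp.LpMaximize)
x = pulp.LpVariable.dicts(
    "x", [(w, d, s) for w in workers for d in days for s in shifts], cat="Binary")

# Objective: maximise total satisfied preference.
prob += pulp.lpSum(pref[(w, s)] * x[(w, d, s)]
                   for w in workers for d in days for s in shifts)

for d in days:
    for s in shifts:
        # Exactly one guard per shift (coverage).
        prob += pulp.lpSum(x[(w, d, s)] for w in workers) == 1
    for w in workers:
        # At most one shift per worker per day.
        prob += pulp.lpSum(x[(w, d, s)] for s in shifts) <= 1

for w in workers:
    # Workload balance: cap the number of shifts per worker over the horizon.
    prob += pulp.lpSum(x[(w, d, s)] for d in days for s in shifts) <= 3

prob.solve(pulp.PULP_CBC_CMD(msg=0))
print(pulp.LpStatus[prob.status], pulp.value(prob.objective))
```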

  20. A multi-criteria model for maintenance job scheduling

    Sunday A. Oke

    2007-12-01

This paper presents a multi-criteria maintenance job scheduling model, which is formulated using a weighted multi-criteria integer linear programming maintenance scheduling framework. Three criteria, which are directly related to the primary objectives of a typical production setting, were used: minimization of equipment idle time, manpower idle time and job lateness, with unit parity. The mathematical model, constrained by available equipment, manpower and job availability within the planning horizon, was tested on a 10-job, 8-hour time horizon problem with the declared equipment and manpower availability set against the requirements. The results, analysis and illustrations justify the multi-criteria consideration. Thus, maintenance managers are equipped with a tool for adequate decision making that guards against errors in the accumulated data which may lead to wrong decisions. The idea presented is new since it provides an approach that has not been documented previously in the literature.

  1. Parametric Cost and Schedule Modeling for Early Technology Development

    2018-04-02

Alexander, Chuck. National Security Report research note on parametric cost and schedule modeling for early technology development.

  2. Application of linear scheduling method (LSM) for nuclear power plant (NPP) construction

    Kim, Woojoong; Ryu, Dongsoo; Jung, Youngsoo

    2014-01-01

    Highlights: • Mixed use of linear scheduling method with traditional CPM is suggested for NPP. • A methodology for selecting promising areas for LSM application is proposed. • A case-study is conducted to validate the proposed LSM selection methodology. • A case-study of reducing NPP construction duration by using LSM is introduced. - Abstract: According to a forecast, global energy demand is expected to increase by 56% from 2010 to 2040 (EIA, 2013). The nuclear power plant construction market is also growing with sharper competition. In nuclear power plant construction, scheduling is one of the most important functions due to its large size and complexity. Therefore, it is crucial to incorporate the ‘distinct characteristics of construction commodities and the complex characteristics of scheduling techniques’ (Jung and Woo, 2004) when selecting appropriate schedule control methods for nuclear power plant construction. However, among various types of construction scheduling techniques, the traditional critical path method (CPM) has been used most frequently in real-world practice. In this context, the purpose of this paper is to examine the viability and effectiveness of linear scheduling method (LSM) applications for specific areas in nuclear power plant construction. In order to identify the criteria for selecting scheduling techniques, the characteristics of CPM and LSM were compared and analyzed first through a literature review. Distinct characteristics of nuclear power plant construction were then explored by using a case project in order to develop a methodology to select effective areas of LSM application to nuclear power plant construction. Finally, promising areas for actual LSM application are suggested based on the proposed evaluation criteria and the case project. Findings and practical implications are discussed for further implementation

  3. Application of linear scheduling method (LSM) for nuclear power plant (NPP) construction

    Kim, Woojoong, E-mail: minidung@nate.com [Central Research Institute, Korea Hydro and Nuclear Power Co., Ltd, Daejeon 305-343 (Korea, Republic of); Ryu, Dongsoo, E-mail: energyboy@khnp.co.kr [Central Research Institute, Korea Hydro and Nuclear Power Co., Ltd, Daejeon 305-343 (Korea, Republic of); Jung, Youngsoo, E-mail: yjung97@mju.ac.kr [College of Architecture, Myongji University, Yongin 449-728 (Korea, Republic of)

    2014-04-01

    Highlights: • Mixed use of linear scheduling method with traditional CPM is suggested for NPP. • A methodology for selecting promising areas for LSM application is proposed. • A case-study is conducted to validate the proposed LSM selection methodology. • A case-study of reducing NPP construction duration by using LSM is introduced. - Abstract: According to a forecast, global energy demand is expected to increase by 56% from 2010 to 2040 (EIA, 2013). The nuclear power plant construction market is also growing with sharper competition. In nuclear power plant construction, scheduling is one of the most important functions due to its large size and complexity. Therefore, it is crucial to incorporate the ‘distinct characteristics of construction commodities and the complex characteristics of scheduling techniques’ (Jung and Woo, 2004) when selecting appropriate schedule control methods for nuclear power plant construction. However, among various types of construction scheduling techniques, the traditional critical path method (CPM) has been used most frequently in real-world practice. In this context, the purpose of this paper is to examine the viability and effectiveness of linear scheduling method (LSM) applications for specific areas in nuclear power plant construction. In order to identify the criteria for selecting scheduling techniques, the characteristics of CPM and LSM were compared and analyzed first through a literature review. Distinct characteristics of nuclear power plant construction were then explored by using a case project in order to develop a methodology to select effective areas of LSM application to nuclear power plant construction. Finally, promising areas for actual LSM application are suggested based on the proposed evaluation criteria and the case project. Findings and practical implications are discussed for further implementation.

  4. Declarative Modeling for Production Order Portfolio Scheduling

    Banaszak Zbigniew

    2014-12-01

A declarative framework is discussed that enables one to determine conditions, as well as to develop decision-making software, supporting small- and medium-sized enterprises aimed at unique, multi-project-like and mass-customization-oriented production. A set of unique production orders grouped into portfolio orders is considered. Operations executed along different production orders share available resources following a mutual exclusion protocol. A unique product or production batch is completed while following a given activity's network order. The problem concerns scheduling a newly inserted project portfolio subject to constraints imposed by a multi-project environment. The answers sought are: Can a given project portfolio, specified by its cost and completion time, be completed within the assumed time period in the manufacturing system at hand? Which manufacturing system capability guarantees the completion of a given project portfolio ordered under the assumed cost and time constraints? The problems considered concern finding a computationally effective approach aimed at simultaneous routing and allocation as well as batching and scheduling of a newly ordered project portfolio subject to constraints imposed by a multi-project environment. The main objective is to provide a declarative model enabling one to state a constraint satisfaction problem for multi-project-like and mass-customization-oriented production scheduling. Multiple illustrative examples are discussed.

  5. Sparse Linear Identifiable Multivariate Modeling

    Henao, Ricardo; Winther, Ole

    2011-01-01

In this paper we consider sparse and identifiable linear latent variable (factor) and linear Bayesian network models for parsimonious analysis of multivariate data. We propose a computationally efficient method for joint parameter and model inference, and model comparison. It consists of a fully ... and is benchmarked on artificial and real biological data sets. SLIM is closest in spirit to LiNGAM (Shimizu et al., 2006), but differs substantially in inference, Bayesian network structure learning and model comparison. Experimentally, SLIM performs equally well or better than LiNGAM with comparable ...

  6. Group Elevator Peak Scheduling Based on Robust Optimization Model

    ZHANG, J.

    2013-08-01

Scheduling of an Elevator Group Control System (EGCS) is a typical combinatorial optimization problem. Uncertain group scheduling under peak traffic flows has recently become a research focus and a difficult problem. Robust Optimization (RO) is a novel and effective way to deal with uncertain scheduling problems. In this paper, a peak scheduling method based on an RO model for a multi-elevator system is proposed. The method is immune to the uncertainty of peak traffic flows, and optimal scheduling is realized without exact numbers of waiting passengers on each calling floor. Specifically, an energy-saving-oriented multi-objective scheduling price is proposed, and an uncertain RO peak scheduling model is built to minimize this price. Because the uncertain RO model cannot be solved directly, it is transformed into a certain RO model by means of robust counterparts of the elevator scheduling constraints. Because the solution space of elevator scheduling is enormous, an ant colony algorithm for elevator scheduling is proposed to solve the certain RO model in a short time. Based on this algorithm, optimal scheduling solutions are found quickly, and the group elevators are scheduled according to these solutions. Simulation results show that the method effectively improves scheduling performance in the peak pattern, and efficient operation of the group elevators is realized by the RO scheduling method.

  7. Using Optimization Models for Scheduling in Enterprise Resource Planning Systems

    Frank Herrmann

    2016-03-01

Companies often use specially designed production systems and change them from time to time. They produce small batches in order to satisfy specific demands with the least tardiness. This imposes high demands on high-performance scheduling algorithms which can be rapidly adapted to changes in the production system. As a solution, this paper proposes a generic approach: solutions were obtained using a widely used, commercially available tool for solving linear optimization models, which is available in an Enterprise Resource Planning System (in the SAP system, for example) or can be connected to it. In a real-world application of a flow shop with special restrictions this approach is successfully used on a standard personal computer. Thus, the main implication is that optimal scheduling with a commercially available tool, incorporated in an Enterprise Resource Planning System, may be the best approach.

  8. Humanoid Walking Robot: Modeling, Inverse Dynamics, and Gain Scheduling Control

    Elvedin Kljuno

    2010-01-01

This article presents reference-model-based control design for a 10 degree-of-freedom bipedal walking robot, using nonlinear gain scheduling. The main goal is to show that concentrated-mass models can be used to predict the required joint torques of a bipedal walking robot. The relatively complicated architecture, high DOF, and balancing requirements make the control task of these robots difficult. Although linear control techniques can be used to control bipedal robots, nonlinear control is necessary for better performance. The emphasis of this work is to show that the reference model can be a bipedal walking model with concentrated mass at the center of gravity, which removes the problems related to the design of a pseudo-inverse system. Another advantage of this approach is the reduced calculation requirement, due to the simplified procedure for calculating nominal joint torques. Kinematic and dynamic analysis is discussed, including results for the joint torques and ground force necessary to implement a prescribed walking motion. This analysis is accompanied by a comparison with experimental data. An inverse-plant and tracking-error linearization-based controller design approach is described. We propose a novel combination of nonlinear gain scheduling with a concentrated-mass model for the MIMO bipedal robot system.

  9. Parameterized Linear Longitudinal Airship Model

    Kulczycki, Eric; Elfes, Alberto; Bayard, David; Quadrelli, Marco; Johnson, Joseph

    2010-01-01

    A parameterized linear mathematical model of the longitudinal dynamics of an airship is undergoing development. This model is intended to be used in designing control systems for future airships that would operate in the atmospheres of Earth and remote planets. Heretofore, the development of linearized models of the longitudinal dynamics of airships has been costly in that it has been necessary to perform extensive flight testing and to use system-identification techniques to construct models that fit the flight-test data. The present model is a generic one that can be relatively easily specialized to approximate the dynamics of specific airships at specific operating points, without need for further system identification, and with significantly less flight testing. The approach taken in the present development is to merge the linearized dynamical equations of an airship with techniques for estimation of aircraft stability derivatives, and to thereby make it possible to construct a linearized dynamical model of the longitudinal dynamics of a specific airship from geometric and aerodynamic data pertaining to that airship. (It is also planned to develop a model of the lateral dynamics by use of the same methods.) All of the aerodynamic data needed to construct the model of a specific airship can be obtained from wind-tunnel testing and computational fluid dynamics

  10. Workforce scheduling: A new model incorporating human factors

    Mohammed Othman

    2012-12-01

Purpose: The majority of a company's improvement comes when the right workers with the right skills, behaviors and capacities are deployed appropriately throughout the company. This paper considers a workforce scheduling model including human aspects such as skills, training, workers' personalities, workers' breaks and workers' fatigue and recovery levels. The model helps to minimize the hiring, firing, training and overtime costs, minimize the number of high-performing workers who are fired, minimize the break time and minimize the average worker's fatigue level. Design/methodology/approach: To achieve this objective, a multi-objective mixed integer programming model is developed to determine the amount of hiring, firing, training and overtime for each worker type. Findings: The results indicate that worker differences should be considered in workforce scheduling to generate realistic plans with minimum costs. This paper also investigates the effects of human fatigue and recovery on the performance of production systems. Research limitations/implications: In this research, there are some assumptions that might affect the accuracy of the model, such as the assumption that demand in each period is known with certainty, and the assumed linearity of the fatigue accumulation and recovery curves. These assumptions can be relaxed in future work. Originality/value: In this research, a new model for integrating workers' differences with workforce scheduling is proposed. To the authors' knowledge, this is the first study of the effects of important human factors such as personality, skills, fatigue and recovery in the workforce scheduling process. This research shows that considering both technical and human factors together can reduce costs in manufacturing systems and ensure the safety of the workers.

  11. Decomposable log-linear models

    Eriksen, Poul Svante

The present paper considers discrete probability models with exact computational properties. In relation to contingency tables this means closed-form expressions of the maximum likelihood estimate and its distribution. The model class includes what is known as decomposable graphical models, which can be characterized by a structured set of conditional independencies between some variables given some other variables. We term the new model class decomposable log-linear models, which is illustrated to be a much richer class than decomposable graphical models. It covers a wide range of non-hierarchical models, models with structural zeroes, models described by quasi independence and models for level merging. Also, they have a very natural interpretation as they may be formulated by a structured set of conditional independencies between two events given some other event.

  12. Linear and Generalized Linear Mixed Models and Their Applications

    Jiang, Jiming

    2007-01-01

    This book covers two major classes of mixed effects models, linear mixed models and generalized linear mixed models, and it presents an up-to-date account of theory and methods in analysis of these models as well as their applications in various fields. The book offers a systematic approach to inference about non-Gaussian linear mixed models. Furthermore, it has included recently developed methods, such as mixed model diagnostics, mixed model selection, and jackknife method in the context of mixed models. The book is aimed at students, researchers and other practitioners who are interested

  13. Multicollinearity in hierarchical linear models.

    Yu, Han; Jiang, Shanhe; Land, Kenneth C

    2015-09-01

    This study investigates an ill-posed problem (multicollinearity) in Hierarchical Linear Models from both the data and the model perspectives. We propose an intuitive, effective approach to diagnosing the presence of multicollinearity and its remedies in this class of models. A simulation study demonstrates the impacts of multicollinearity on coefficient estimates, associated standard errors, and variance components at various levels of multicollinearity for finite sample sizes typical in social science studies. We further investigate the role multicollinearity plays at each level for estimation of coefficient parameters in terms of shrinkage. Based on these analyses, we recommend a top-down method for assessing multicollinearity in HLMs that first examines the contextual predictors (Level-2 in a two-level model) and then the individual predictors (Level-1) and uses the results for data collection, research problem redefinition, model re-specification, variable selection and estimation of a final model. Copyright © 2015 Elsevier Inc. All rights reserved.
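
    A standard single-level companion to these diagnostics is the variance inflation factor, VIF_j = 1 / (1 - R²_j), computed by regressing each predictor on the remaining ones; a NumPy sketch follows (the per-level HLM diagnostics proposed in the paper go beyond this).

```python
# Variance inflation factors for a design matrix: VIF_j = 1 / (1 - R^2_j),
# where R^2_j comes from regressing predictor j on the remaining predictors.
# This is an ordinary single-level diagnostic, not the paper's HLM-specific procedure.
import numpy as np

rng = np.random.default_rng(1)
n = 200
x1 = rng.normal(size=n)
x2 = 0.9 * x1 + 0.1 * rng.normal(size=n)   # deliberately collinear with x1
x3 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3])

def vif(X: np.ndarray) -> np.ndarray:
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        Z = np.column_stack([np.ones(len(y)), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)   # regress x_j on the others
        resid = y - Z @ beta
        r2 = 1.0 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

print(vif(X))   # large values for x1 and x2 flag the collinearity
```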

  14. Modelling Loudspeaker Non-Linearities

    Agerkvist, Finn T.

    2007-01-01

This paper investigates different techniques for modelling the non-linear parameters of the electrodynamic loudspeaker. The methods are tested not only for their accuracy within the range of the original data, but also for their ability to work reasonably outside that range, and it is demonstrated that polynomial expansions are rather poor at this, whereas an inverse polynomial expansion or localized fitting functions such as the Gaussian are better suited for modelling the Bl-factor and the compliance. For the inductance the sigmoid function is shown to give very good results. Finally the time varying

  15. Multivariate covariance generalized linear models

    Bonat, W. H.; Jørgensen, Bent

    2016-01-01

We propose a general framework for non-normal multivariate data analysis called multivariate covariance generalized linear models, designed to handle multivariate response variables, along with a wide range of temporal and spatial correlation structures defined in terms of a covariance link function combined with a matrix linear predictor involving known matrices. The models are fitted by using an efficient Newton scoring algorithm based on quasi-likelihood and Pearson estimating functions, using only second-moment assumptions. This provides a unified approach to a wide variety of types of response variables and covariance structures, including multivariate extensions ... The method is motivated by three data examples that are not easily handled by existing methods. The first example concerns multivariate count data, the second involves response variables of mixed types, combined with repeated ...

  16. Model Predictive Control of a Nonlinear System with Known Scheduling Variable

    Mirzaei, Mahmood; Poulsen, Niels Kjølstad; Niemann, Hans Henrik

    2012-01-01

Model predictive control (MPC) of a class of nonlinear systems is considered in this paper. We use a Linear Parameter Varying (LPV) model of the nonlinear system. By taking advantage of having future values of the scheduling variable, we simplify state prediction. Consequently, the control problem of the nonlinear system is simplified into a quadratic program. A wind turbine is chosen as the case study, and wind speed is chosen as the scheduling variable. Wind speed is measurable ahead of the turbine, therefore the scheduling variable is known for the entire prediction horizon.
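
    The simplification described above can be sketched for a scalar LPV system: because the scheduling variable, and hence the system matrix, is known over the whole horizon, the state prediction is linear in the input sequence and the unconstrained MPC step collapses to a small quadratic problem. All numbers and the assumed dependence of the dynamics on the scheduling variable below are hypothetical.

```python
# Sketch: unconstrained MPC step for a scalar LPV system x_{k+1} = a(rho_k) x_k + b u_k,
# with the scheduling variable rho known over the whole horizon (as with the measured
# wind speed in the paper). Purely illustrative numbers.
import numpy as np

N = 5                                          # prediction horizon
rho = np.array([8.0, 9.0, 10.0, 11.0, 12.0])   # known future scheduling variable
a = 1.0 - 0.02 * rho                           # hypothetical dependence of the dynamics on rho
b = 0.5
x0 = 2.0
q, r = 1.0, 0.1                                # stage weights on state and input

# Build x = F x0 + G u (prediction over the horizon), exploiting the known rho.
F = np.cumprod(a)                              # F[k] = a_0 * ... * a_k
G = np.zeros((N, N))
for k in range(N):
    for j in range(k + 1):
        G[k, j] = b * np.prod(a[j + 1:k + 1])  # effect of u_j on x_{k+1}

# Minimise sum q*x_k^2 + r*u_k^2  ->  (G'QG + R) u = -G'Q F x0
Q = q * np.eye(N)
R = r * np.eye(N)
u = np.linalg.solve(G.T @ Q @ G + R, -G.T @ Q @ (F * x0))
print("optimal input sequence:", u)
```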

  17. Matrix algebra for linear models

    Gruber, Marvin H J

    2013-01-01

    Matrix methods have evolved from a tool for expressing statistical problems to an indispensable part of the development, understanding, and use of various types of complex statistical analyses. This evolution has made matrix methods a vital part of statistical education. Traditionally, matrix methods are taught in courses on everything from regression analysis to stochastic processes, thus creating a fractured view of the topic. Matrix Algebra for Linear Models offers readers a unique, unified view of matrix analysis theory (where and when necessary), methods, and their applications. Written f

  18. Mathematical models for a batch scheduling problem to minimize earliness and tardiness

    Basar Ogun

    2018-05-01

    Full Text Available Purpose: Today’s manufacturing facilities are challenged by highly customized products and just-in-time manufacturing and delivery of these products. In this study, a batch scheduling problem is addressed to provide on-time completion of customer orders in a lean manufacturing environment. The problem is to optimize the partitioning of product components into batches and the scheduling of the resulting batches, where each customer order is received as a set of products made of various components. Design/methodology/approach: Three different mathematical models for minimizing the total earliness and tardiness of customer orders are developed to provide on-time completion of customer orders and to avoid inventory of final products. The first model is a non-linear integer programming model, while the second is a linearized version of the first. Finally, to solve larger instances of the problem, an alternative linear integer model is presented. Findings: A computational study using a suite of test instances showed that the alternative linear integer model is able to solve all test instances of varying sizes within much shorter computation times than the other two models. It was also shown that the alternative model can solve moderate-sized real-world problems. Originality/value: The problem under study differs from existing batch scheduling problems in the literature since it includes new circumstances which may arise in real-world applications. This research also contributes to the literature on batch scheduling by presenting new optimization models.

  19. A Gas Scheduling Optimization Model for Steel Enterprises

    Niu Honghai

    2017-01-01

    Full Text Available Regarding the scheduling problems of steel enterprises, this research designs a gas scheduling optimization model based on rules and priorities. Considering the different features and process changes of the gas units during actual production, a calculation model of process state and soft measurement of gas consumption, together with scheduling optimization rules, is proposed to give dispatchers the real-time gas usage status of each process, helping them schedule in a timely way and reduce gas volume fluctuations. In addition, operation forewarning and alarm functions are provided to avoid abnormal situations in scheduling. The approach has produced very good results in actual scheduling and ensures the safety of the gas pipe network system and the stability of production.

  20. Schedulability of Herschel revisited using statistical model checking

    David, Alexandre; Larsen, Kim Guldstrand; Legay, Axel

    2015-01-01

    -approximation technique. We can safely conclude that the system is schedulable for varying values of BCET. For the cases where deadlines are violated, we use polyhedra to try to confirm the witnesses. Our alternative method to confirm non-schedulability uses statistical model-checking (SMC) to generate counter...... and blocking times of tasks. Consequently, the method may falsely declare deadline violations that will never occur during execution. This paper is a continuation of previous work of the authors in applying extended timed automata model checking (using the tool UPPAAL) to obtain more exact schedulability...... analysis, here in the presence of non-deterministic computation times of tasks given by intervals [BCET,WCET]. Computation intervals with preemptive schedulers make the schedulability analysis of the resulting task model undecidable. Our contribution is to propose a combination of model checking techniques...

  1. Nonabelian Gauged Linear Sigma Model

    Yongbin RUAN

    2017-01-01

    The gauged linear sigma model (GLSM for short) is a 2d quantum field theory introduced by Witten twenty years ago. Since then, it has been investigated extensively in physics by Hori and others. Recently, an algebro-geometric theory (for both abelian and nonabelian GLSMs) was developed by the author and his collaborators so that he can start to rigorously compute its invariants and check them against physical predictions. The abelian GLSM is relatively better understood and is the focus of current mathematical investigation. In this article, the author would like to look over the horizon and consider the nonabelian GLSM. The nonabelian case possesses some new features unavailable in the abelian GLSM. To aid future mathematical development, the author surveys some of the key problems inspired by physics in the nonabelian GLSM.

  2. Aeon: Synthesizing Scheduling Algorithms from High-Level Models

    Monette, Jean-Noël; Deville, Yves; van Hentenryck, Pascal

    This paper describes the Aeon system, whose aim is to synthesize scheduling algorithms from high-level models. Aeon, which is entirely written in Comet, receives as input a high-level model for a scheduling application, which is then analyzed to generate a dedicated scheduling algorithm exploiting the structure of the model. Aeon provides a variety of synthesizers for generating complete or heuristic algorithms. Moreover, synthesizers are compositional, making it possible to generate complex hybrid algorithms naturally. Preliminary experimental results indicate that this approach may be competitive with state-of-the-art search algorithms.

  3. Model Justified Search Algorithms for Scheduling Under Uncertainty

    Howe, Adele; Whitley, L. D

    2008-01-01

    .... We also identified plateaus as a significant barrier to superb performance of local search on scheduling and have studied several canonical discrete optimization problems to discover and model the nature of plateaus...

  4. The local–global conjecture for scheduling with non-linear cost

    Bansal, N.; Dürr, C.; Thang, N.K.K.; Vásquez, Ó.C.

    2017-01-01

    We consider the classical scheduling problem on a single machine, on which we need to schedule sequentially n given jobs. Every job j has a processing time pj and a priority weight wj, and for a given schedule a completion time Cj. In this paper, we consider the problem of minimizing the objective

  5. Multivariate generalized linear mixed models using R

    Berridge, Damon Mark

    2011-01-01

    Multivariate Generalized Linear Mixed Models Using R presents robust and methodologically sound models for analyzing large and complex data sets, enabling readers to answer increasingly complex research questions. The book applies the principles of modeling to longitudinal data from panel and related studies via the Sabre software package in R. A Unified Framework for a Broad Class of Models The authors first discuss members of the family of generalized linear models, gradually adding complexity to the modeling framework by incorporating random effects. After reviewing the generalized linear model notation, they illustrate a range of random effects models, including three-level, multivariate, endpoint, event history, and state dependence models. They estimate the multivariate generalized linear mixed models (MGLMMs) using either standard or adaptive Gaussian quadrature. The authors also compare two-level fixed and random effects linear models. The appendices contain additional information on quadrature, model...

  6. Modeling of Agile Intelligent Manufacturing-oriented Production Scheduling System

    Zhong-Qi Sheng; Chang-Ping Tang; Ci-Xing Lv

    2010-01-01

    Agile intelligent manufacturing is one of the new manufacturing paradigms that adapt to fierce globalized market competition and meet the survival needs of enterprises, in which the management and control of the production system surpass the scope of the individual enterprise and embody new features including complexity, dynamicity, distributivity, and compatibility. The agile intelligent manufacturing paradigm calls for a production scheduling system that can support cooperation among various production sectors and the distribution of various resources to achieve rational organization, scheduling and management of production activities. This paper uses multi-agent technology to build an agile intelligent manufacturing-oriented production scheduling system. Using a hybrid modeling method, the resources and functions of the production system are encapsulated, and an agent-based production system model is established. A production scheduling-oriented multi-agent architecture is constructed and a multi-agent reference model is given in this paper.

  7. Nonlinear Modeling by Assembling Piecewise Linear Models

    Yao, Weigang; Liou, Meng-Sing

    2013-01-01

    To preserve nonlinearity of a full order system over a parameter range of interest, we propose a simple modeling approach by assembling a set of piecewise local solutions, including the first-order Taylor series terms expanded about some sampling states. The work by Rewienski and White inspired our use of piecewise linear local solutions. The assembly of these local approximations is accomplished by assigning nonlinear weights, through radial basis functions in this study. The efficacy of the proposed procedure is validated for a two-dimensional airfoil moving at different Mach numbers and pitching motions, under which the flow exhibits prominent nonlinear behaviors. All results confirm that our nonlinear model is accurate and stable for predicting not only aerodynamic forces but also detailed flowfields. Moreover, the model is robustly accurate for inputs considerably different from the base trajectory in form and magnitude. This modeling preserves nonlinearity of the problems considered in a rather simple and accurate manner.
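
    The assembly idea can be sketched in a few lines: each sampled state contributes a local first-order (Taylor) model, and the local predictions are blended with normalized Gaussian radial basis function weights. The 1-D example below is purely illustrative; the sampling points, widths and test function are assumptions, not the authors' aerodynamic models.

      import numpy as np

      x_samples = np.array([0.0, 1.0, 2.0])             # sampling states
      f_samples = np.sin(x_samples)                     # local solution values
      df_samples = np.cos(x_samples)                    # local gradients (1-D Jacobians)
      sigma = 0.5                                       # assumed RBF width

      def blended(x):
          w = np.exp(-(x - x_samples) ** 2 / (2.0 * sigma ** 2))
          w /= w.sum()                                  # normalized nonlinear weights
          local = f_samples + df_samples * (x - x_samples)   # piecewise linear local models
          return np.dot(w, local)

      print(blended(0.7), np.sin(0.7))                  # blended estimate vs. true value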

  8. Linear Logistic Test Modeling with R

    Baghaei, Purya; Kubinger, Klaus D.

    2015-01-01

    The present paper gives a general introduction to the linear logistic test model (Fischer, 1973), an extension of the Rasch model with linear constraints on item parameters, along with eRm (an R package to estimate different types of Rasch models; Mair, Hatzinger, & Mair, 2014) functions to estimate the model and interpret its parameters. The…

  9. Robust Model Predictive Control of a Nonlinear System with Known Scheduling Variable and Uncertain Gain

    Mirzaei, Mahmood; Poulsen, Niels Kjølstad; Niemann, Hans Henrik

    2012-01-01

    Robust model predictive control (RMPC) of a class of nonlinear systems is considered in this paper. We will use a Linear Parameter Varying (LPV) model of the nonlinear system. By taking advantage of having future values of the scheduling variable, we will simplify state prediction. Because...... of the special structure of the problem, uncertainty is only in the B matrix (gain) of the state space model. Therefore by taking advantage of this structure, we formulate a tractable minimax optimization problem to solve the robust model predictive control problem. A wind turbine is chosen as the case study and we...... choose wind speed as the scheduling variable. Wind speed is measurable ahead of the turbine; therefore the scheduling variable is known for the entire prediction horizon....

  10. Core seismic behaviour: linear and non-linear models

    Bernard, M.; Van Dorsselaere, M.; Gauvain, M.; Jenapierre-Gantenbein, M.

    1981-08-01

    The usual methodology for the core seismic behaviour analysis leads to a double complementary approach: to define a core model to be included in the reactor-block seismic response analysis, simple enough but representative of basic movements (diagrid or slab), and to define a finer core model, with basic data issued from the first model. This paper presents the history of the different models of both kinds. The inert mass model (IMM) yielded a first rough diagrid movement. The direct linear model (DLM), without shocks and with sodium as an added mass, led to two different ones: DLM 1 with independent movements of the fuel and radial blanket subassemblies, and DLM 2 with a combined core movement. The non-linear model (NLM) "CORALIE" uses the same basic modelization (Finite Element Beams) but accounts for shocks. It studies the response of a diameter on flats and takes into account the fluid coupling and the wrapper tube flexibility at the pad level. Damping consists of one modal part of 2% and one part due to shocks. Finally, "CORALIE" yields the time-history of the displacements and efforts on the supports, but damping (probably greater than 2%) and fluid-structure interaction still need to be specified more precisely. The validation experiments were performed on a RAPSODIE core mock-up on scale 1, in similitude of 1/3 with respect to SPX 1. The equivalent linear model (ELM) was developed for the SPX 1 reactor-block response analysis and a specified seismic level (SB or SM). It is composed of several oscillators fixed to the diagrid and yields the same maximum displacements and efforts as the NLM. The SPX 1 core seismic analysis, with a diagrid input spectrum which corresponds to a 0.1 g group acceleration, has been carried out with these models; some aspects of these calculations are presented here.

  11. Composite Linear Models | Division of Cancer Prevention

    By Stuart G. Baker The composite linear models software is a matrix approach to compute maximum likelihood estimates and asymptotic standard errors for models for incomplete multinomial data. It implements the method described in Baker SG. Composite linear models for incomplete multinomial data. Statistics in Medicine 1994;13:609-622. The software includes a library of thirty

  12. Comparison between linear quadratic and early time dose models

    Chougule, A.A.; Supe, S.J.

    1993-01-01

    During the 70s, much interest was focused on fractionation in radiotherapy with the aim of improving tumor control rate without producing unacceptable normal tissue damage. To compare the radiobiological effectiveness of various fractionation schedules, empirical formulae such as Nominal Standard Dose, Time Dose Factor, Cumulative Radiation Effect and Tumour Significant Dose were introduced and were used despite many shortcomings. It has been claimed that a recent linear quadratic model is able to predict the radiobiological responses of tumours as well as normal tissues more accurately. We compared the Time Dose Factor and Tumour Significant Dose models with the linear quadratic model for tumour regression in patients with carcinomas of the cervix. It was observed that the prediction of tumour regression estimated by the Tumour Significant Dose and Time Dose Factor concepts varied by 1.6% from that of the linear quadratic model. In view of the lack of knowledge of the precise values of the parameters of the linear quadratic model, it should be applied with caution. One can continue to use the Time Dose Factor concept, which has been in use for more than a decade, as its results are within ±2% of those predicted by the linear quadratic model. (author). 11 refs., 3 figs., 4 tabs
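
    For reference, the linear quadratic model referred to here is usually written in the following standard form (not quoted from the record itself), in which two fractionation schedules with n fractions of dose d per fraction are compared through the biologically effective dose:

      E = n\,(\alpha d + \beta d^{2}), \qquad
      \mathrm{BED} = n\,d \left( 1 + \frac{d}{\alpha/\beta} \right)

    so that schedules (n_1, d_1) and (n_2, d_2) are taken as isoeffective when n_1 d_1 (\alpha/\beta + d_1) = n_2 d_2 (\alpha/\beta + d_2).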

  13. Actuarial statistics with generalized linear mixed models

    Antonio, K.; Beirlant, J.

    2007-01-01

    Over the last decade the use of generalized linear models (GLMs) in actuarial statistics has received a lot of attention, starting from the actuarial illustrations in the standard text by McCullagh and Nelder [McCullagh, P., Nelder, J.A., 1989. Generalized linear models. In: Monographs on Statistics

  14. A Modeling Framework for Schedulability Analysis of Distributed Avionics Systems

    Han, Pujie; Zhai, Zhengjun; Nielsen, Brian

    2018-01-01

    This paper presents a modeling framework for schedulability analysis of distributed integrated modular avionics (DIMA) systems that consist of spatially distributed ARINC-653 modules connected by a unified AFDX network. We model a DIMA system as a set of stopwatch automata (SWA) in UPPAAL...

  15. Discrete Optimization Model for Vehicle Routing Problem with Scheduling Side Cosntraints

    Juliandri, Dedy; Mawengkang, Herman; Bu'ulolo, F.

    2018-01-01

    The Vehicle Routing Problem (VRP) is an important element of many logistic systems which involve routing and scheduling of vehicles from a depot to a set of customer nodes. This is a hard combinatorial optimization problem whose objective is to find an optimal set of routes used by a fleet of vehicles to serve the demands of a set of customers. It is required that these vehicles return to the depot after serving the customers' demand. The problem incorporates time windows, fleet and driver scheduling, and pick-up and delivery in the planning horizon. The goal is to determine the scheduling of the fleet and drivers and the routing policies of the vehicles. The objective is to minimize the overall cost of all routes over the planning horizon. We model the problem as a linear mixed integer program and develop a combination of heuristics and an exact method for solving the model.

  16. Reliable gain-scheduled control of discrete-time systems and its application to CSTR model

    Sakthivel, R.; Selvi, S.; Mathiyalagan, K.; Shi, Y.

    2016-10-01

    This paper is focused on reliable gain-scheduled controller design for a class of discrete-time systems with randomly occurring nonlinearities and actuator faults. Further, the nonlinearity in the system model is assumed to occur randomly according to a Bernoulli distribution with measurable time-varying probability in real time. The main purpose of this paper is to design a gain-scheduled controller by implementing a probability-dependent Lyapunov function and linear matrix inequality (LMI) approach such that the closed-loop discrete-time system is stochastically stable for all admissible randomly occurring nonlinearities. The existence conditions for the reliable controller are formulated in terms of LMI constraints. Finally, the proposed reliable gain-scheduled control scheme is applied to a continuously stirred tank reactor model to demonstrate the effectiveness and applicability of the proposed design technique.

  17. Simulation models generator. Applications in scheduling

    Omar Danilo Castrillón

    2013-08-01

    Rev. Mate. Teor. Aplic. (ISSN 1409-2433), Vol. 20(2): 231–241, July 2013. ... in order to have an approach to reality for evaluating decisions and taking more assertive ones. To test the prototype, a production system with 9 machines and 5 jobs in a job-shop configuration was used as the modelling example, with stochastic processing times and machine stops, measuring machine utilization rates and the average time of jobs in the system as system performance measures. This test shows the usefulness of the prototype in saving the user the work of building the simulation model.

  18. Comparing linear probability model coefficients across groups

    Holm, Anders; Ejrnæs, Mette; Karlson, Kristian Bernt

    2015-01-01

    of the following three components: outcome truncation, scale parameters and distributional shape of the predictor variable. These results point to limitations in using linear probability model coefficients for group comparisons. We also provide Monte Carlo simulations and real examples to illustrate......This article offers a formal identification analysis of the problem in comparing coefficients from linear probability models between groups. We show that differences in coefficients from these models can result not only from genuine differences in effects, but also from differences in one or more...... these limitations, and we suggest a restricted approach to using linear probability model coefficients in group comparisons....

  19. Spaghetti Bridges: Modeling Linear Relationships

    Kroon, Cindy D.

    2016-01-01

    Mathematics and science are natural partners. One of many examples of this partnership occurs when scientific observations are made, thus providing data that can be used for mathematical modeling. Developing mathematical relationships elucidates such scientific principles. This activity describes a data-collection activity in which students employ…

  20. Bilevel Fuzzy Chance Constrained Hospital Outpatient Appointment Scheduling Model

    Xiaoyang Zhou

    2016-01-01

    Full Text Available Hospital outpatient departments operate by selling fixed-period appointments for different treatments. The challenge being faced is to improve profit by determining the mix of full-time and part-time doctors and allocating appointments (which involves scheduling a combination of doctors, patients, and treatments to a time period) in a department optimally. In this paper, a bilevel fuzzy chance constrained model is developed to solve the hospital outpatient appointment scheduling problem based on revenue management. In the model, the hospital, the leader in the hierarchy, decides the mix of hired full-time and part-time doctors to maximize the total profit; each department, the follower in the hierarchy, makes the appointment scheduling decision to maximize its own profit while simultaneously minimizing surplus capacity. Doctor wage and demand are considered as fuzzy variables to better describe the real-life situation. Then we use a chance operator to handle the model with fuzzy parameters and equivalently transform the appointment scheduling model into a crisp model. Moreover, an interactive algorithm based on satisfaction is employed to convert the bilevel programming into a single-level program, in order to make it solvable. Finally, numerical experiments were executed to demonstrate the efficiency and effectiveness of the proposed approaches.

  1. Non-linear finite element modeling

    Mikkelsen, Lars Pilgaard

    The note is written for courses in "Non-linear finite element method". The note has been used by the author teaching non-linear finite element modeling at Civil Engineering at Aalborg University, Computational Mechanics at Aalborg University Esbjerg, Structural Engineering at the University...

  2. A Linear Viscoelastic Model Calibration of Sylgard 184.

    Long, Kevin Nicholas; Brown, Judith Alice

    2017-04-01

    We calibrate a linear thermoviscoelastic model for solid Sylgard 184 (90-10 formulation), a lightly cross-linked, highly flexible isotropic elastomer for use both in Sierra / Solid Mechanics via the Universal Polymer Model as well as in Sierra / Structural Dynamics (Salinas) for use as an isotropic viscoelastic material. Material inputs for the calibration in both codes are provided. The frequency domain master curve of oscillatory shear was obtained from a report from Los Alamos National Laboratory (LANL). However, because the form of that data is different from the constitutive models in Sierra, we also present the mapping of the LANL data onto Sandia’s constitutive models. Finally, blind predictions of cyclic tension and compression out to moderate strains of 40 and 20% respectively are compared with Sandia’s legacy cure schedule material. Although the strain rate of the data is unknown, the linear thermoviscoelastic model accurately predicts the experiments out to moderate strains for the slower strain rates, which is consistent with the expectation that quasistatic test procedures were likely followed. This good agreement comes despite the different cure schedules between the Sandia and LANL data.

  3. Correlations and Non-Linear Probability Models

    Breen, Richard; Holm, Anders; Karlson, Kristian Bernt

    2014-01-01

    the dependent variable of the latent variable model and its predictor variables. We show how this correlation can be derived from the parameters of non-linear probability models, develop tests for the statistical significance of the derived correlation, and illustrate its usefulness in two applications. Under......Although the parameters of logit and probit and other non-linear probability models are often explained and interpreted in relation to the regression coefficients of an underlying linear latent variable model, we argue that they may also be usefully interpreted in terms of the correlations between...... certain circumstances, which we explain, the derived correlation provides a way of overcoming the problems inherent in cross-sample comparisons of the parameters of non-linear probability models....

  4. A stochastic model for forecast consumption in master scheduling

    Weeda, P.J.; Weeda, P.J.

    1994-01-01

    This paper describes a stochastic model for the reduction of the initial forecast in the Master Schedule (MS) of an MRP system during progress of time by the acceptance of customer orders. Results are given for the expectation and variance of the number of yet unknown deliveries as a function of

  5. Extended Linear Models with Gaussian Priors

    Quinonero, Joaquin

    2002-01-01

    In extended linear models the input space is projected onto a feature space by means of an arbitrary non-linear transformation. A linear model is then applied to the feature space to construct the model output. The dimension of the feature space can be very large, or even infinite, giving the model...... a very big flexibility. Support Vector Machines (SVM's) and Gaussian processes are two examples of such models. In this technical report I present a model in which the dimension of the feature space remains finite, and where a Bayesian approach is used to train the model with Gaussian priors...... on the parameters. The Relevance Vector Machine, introduced by Tipping, is a particular case of such a model. I give the detailed derivations of the expectation-maximisation (EM) algorithm used in the training. These derivations are not found in the literature, and might be helpful for newcomers....
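
    The Gaussian-prior construction described above can be written down compactly; the sketch below computes the weight posterior of a Bayesian linear model in an RBF feature space. The feature map, prior precision and noise level are assumptions made for illustration, and the report's EM hyperparameter updates are not reproduced here.

      import numpy as np

      rng = np.random.default_rng(0)
      x = np.linspace(-3, 3, 40)
      y = np.sin(x) + 0.1 * rng.standard_normal(x.size)

      centres = np.linspace(-3, 3, 9)                               # assumed feature centres
      Phi = np.exp(-0.5 * (x[:, None] - centres[None, :]) ** 2)     # non-linear feature map

      alpha, noise_var = 1.0, 0.01                                  # prior precision, noise variance
      S = np.linalg.inv(alpha * np.eye(centres.size) + Phi.T @ Phi / noise_var)
      m = S @ Phi.T @ y / noise_var                                 # posterior mean of the weights

      y_pred = Phi @ m                                              # predictive mean at the inputs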

  6. Linear mixed models for longitudinal data

    Molenberghs, Geert

    2000-01-01

    This paperback edition is a reprint of the 2000 edition. This book provides a comprehensive treatment of linear mixed models for continuous longitudinal data. Next to model formulation, this edition puts major emphasis on exploratory data analysis for all aspects of the model, such as the marginal model, subject-specific profiles, and residual covariance structure. Further, model diagnostics and missing data receive extensive treatment. Sensitivity analysis for incomplete data is given a prominent place. Several variations to the conventional linear mixed model are discussed (a heterogeneity model, conditional linear mixed models). This book will be of interest to applied statisticians and biomedical researchers in industry, public health organizations, contract research organizations, and academia. The book is explanatory rather than mathematically rigorous. Most analyses were done with the MIXED procedure of the SAS software package, and many of its features are clearly elucidated. However, some other commerc...

  7. Linear mixed models in sensometrics

    Kuznetsova, Alexandra

    quality of decision making in Danish as well as international food companies and other companies using the same methods. The two open-source R packages lmerTest and SensMixed implement and support the methodological developments in the research papers as well as the ANOVA modelling part of the Consumer...... an open-source software tool ConsumerCheck was developed in this project and now is available for everyone. will represent a major step forward when concerns this important problem in modern consumer driven product development. Standard statistical software packages can be used for some of the purposes......Today’s companies and researchers gather large amounts of data of different kind. In consumer studies the objective is the collection of the data to better understand consumer acceptance of products. In such studies a number of persons (generally not trained) are selected in order to score products...

  8. Linear causal modeling with structural equations

    Mulaik, Stanley A

    2009-01-01

    Emphasizing causation as a functional relationship between variables that describe objects, Linear Causal Modeling with Structural Equations integrates a general philosophical theory of causation with structural equation modeling (SEM) that concerns the special case of linear causal relations. In addition to describing how the functional relation concept may be generalized to treat probabilistic causation, the book reviews historical treatments of causation and explores recent developments in experimental psychology on studies of the perception of causation. It looks at how to perceive causal

  9. Statistical Tests for Mixed Linear Models

    Khuri, André I; Sinha, Bimal K

    2011-01-01

    An advanced discussion of linear models with mixed or random effects. In recent years a breakthrough has occurred in our ability to draw inferences from exact and optimum tests of variance component models, generating much research activity that relies on linear models with mixed and random effects. This volume covers the most important research of the past decade as well as the latest developments in hypothesis testing. It compiles all currently available results in the area of exact and optimum tests for variance component models and offers the only comprehensive treatment for these models a

  10. Matrix Tricks for Linear Statistical Models

    Puntanen, Simo; Styan, George PH

    2011-01-01

    In teaching linear statistical models to first-year graduate students or to final-year undergraduate students there is no way to proceed smoothly without matrices and related concepts of linear algebra; their use is really essential. Our experience is that making some particular matrix tricks very familiar to students can substantially increase their insight into linear statistical models (and also multivariate statistical analysis). In matrix algebra, there are handy, sometimes even very simple "tricks" which simplify and clarify the treatment of a problem - both for the student and

  11. Routing and Scheduling Optimization Model of Sea Transportation

    barus, Mika debora br; asyrafy, Habib; nababan, Esther; mawengkang, Herman

    2018-01-01

    This paper examines a routing and scheduling optimization model for sea transportation. One of the issues discussed is the transportation of crude oil by ships (tankers) to many islands. The main consideration is the cost of transportation, which consists of travel costs and the cost of layovers at ports. The crude oil to be distributed consists of several types. This paper develops a routing and scheduling model taking into consideration several objective functions and constraints. The mathematical model is formulated to minimize costs based on the total distance travelled by the tankers and to minimize port costs. To make the model more realistic and the calculated cost more appropriate, a parameter is added that represents the factor by which the cost increases as the crude oil load grows.

  12. Applying dynamic priority scheduling scheme to static systems of pinwheel task model in power-aware scheduling.

    Seol, Ye-In; Kim, Young-Kuk

    2014-01-01

    Power-aware scheduling reduces CPU energy consumption in hard real-time systems through dynamic voltage scaling (DVS). In this paper, we deal with the pinwheel task model, which is known as a static and predictable task model and can be applied to various embedded or ubiquitous systems. In the pinwheel task model, each task's priority is static and its execution sequence can be predetermined. There have been many static approaches to power-aware scheduling in the pinwheel task model, but in this paper we show that power-aware scheduling results based on dynamic priority scheduling can also be applied to the pinwheel task model. This method saves more energy than the previous static priority scheduling methods and, since the system remains static, it is tractable and applicable to small-sized embedded or ubiquitous computing. We also introduce a novel power-aware scheduling algorithm which exploits all slack under preemptive earliest-deadline-first scheduling, which is optimal on a uniprocessor. The dynamic priority method presented in this paper can be applied directly to static systems of the pinwheel task model. Simulation results show that the proposed algorithm, with algorithmic complexity O(n), reduces energy consumption by 10-80% compared to existing algorithms.
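
    The record does not give the algorithm itself; as a much simpler point of reference, the classic static DVS rule under preemptive EDF scales the CPU frequency by the task-set utilization, as sketched below with made-up task parameters (the paper's slack-exploiting scheme goes further than this).

      # (period ms, worst-case execution time ms at full speed), assumed values
      tasks = [(50, 10), (80, 16), (100, 10)]
      U = sum(c / p for p, c in tasks)          # total utilization of the task set
      f_scale = min(1.0, U)                     # EDF stays feasible if scaled WCETs keep utilization <= 1
      print(f"utilization {U:.2f} -> run at {100 * f_scale:.0f}% of maximum frequency")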

  13. Modeling digital switching circuits with linear algebra

    Thornton, Mitchell A

    2014-01-01

    Modeling Digital Switching Circuits with Linear Algebra describes an approach for modeling digital information and circuitry that is an alternative to Boolean algebra. While the Boolean algebraic model has been wildly successful and is responsible for many advances in modern information technology, the approach described in this book offers new insight and different ways of solving problems. Modeling the bit as a vector instead of a scalar value in the set {0, 1} allows digital circuits to be characterized with transfer functions in the form of a linear transformation matrix. The use of transf
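
    A toy version of the vector/matrix view of switching logic (not taken from the book) looks like this: bits become 2-vectors, a one-input gate becomes a 2x2 matrix, and a two-input gate becomes a 2x4 matrix acting on the Kronecker product of its input vectors.

      import numpy as np

      ZERO, ONE = np.array([1, 0]), np.array([0, 1])   # bit values as basis vectors

      NOT = np.array([[0, 1],
                      [1, 0]])
      AND = np.array([[1, 1, 1, 0],                    # maps kron(a, b) to a AND b
                      [0, 0, 0, 1]])

      assert np.array_equal(NOT @ ZERO, ONE)
      assert np.array_equal(AND @ np.kron(ONE, ONE), ONE)
      assert np.array_equal(AND @ np.kron(ONE, ZERO), ZERO)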

  14. Optimized Waterspace Management and Scheduling Using Mixed-Integer Linear Programming

    2016-01-01

    ...task. This paper is outlined as follows: in Section 2, we discuss the general setup of the MCM scheduling problem, including the definition of the...

  15. A linear model of ductile plastic damage

    Lemaitre, J.

    1983-01-01

    A three-dimensional model of isotropic ductile plastic damage based on a continuum damage variable on the effective stress concept and on thermodynamics is derived. As shown by experiments on several metals and alloys, the model, integrated in the case of proportional loading, is linear with respect to the accumulated plastic strain and shows a large influence of stress triaxiality [fr

  16. Application of Hierarchical Linear Models/Linear Mixed-Effects Models in School Effectiveness Research

    Ker, H. W.

    2014-01-01

    Multilevel data are very common in educational research. Hierarchical linear models/linear mixed-effects models (HLMs/LMEs) are often utilized to analyze multilevel data nowadays. This paper discusses the problems of utilizing ordinary regressions for modeling multilevel educational data, compare the data analytic results from three regression…

  17. Extending the linear model with R generalized linear, mixed effects and nonparametric regression models

    Faraway, Julian J

    2005-01-01

    Linear models are central to the practice of statistics and form the foundation of a vast range of statistical methodologies. Julian J. Faraway's critically acclaimed Linear Models with R examined regression and analysis of variance, demonstrated the different methods available, and showed in which situations each one applies. Following in those footsteps, Extending the Linear Model with R surveys the techniques that grow from the regression model, presenting three extensions to that framework: generalized linear models (GLMs), mixed effect models, and nonparametric regression models. The author's treatment is thoroughly modern and covers topics that include GLM diagnostics, generalized linear mixed models, trees, and even the use of neural networks in statistics. To demonstrate the interplay of theory and practice, throughout the book the author weaves the use of the R software environment to analyze the data of real examples, providing all of the R commands necessary to reproduce the analyses. All of the ...

  18. Ground Motion Models for Future Linear Colliders

    Seryi, Andrei

    2000-01-01

    Optimization of the parameters of a future linear collider requires comprehensive models of ground motion. Both general models of ground motion and specific models of the particular site and local conditions are essential. Existing models are not completely adequate, either because they are too general, or because they omit important peculiarities of ground motion. The model considered in this paper is based on recent ground motion measurements performed at SLAC and at other accelerator laboratories, as well as on historical data. The issues to be studied for the models to become more predictive are also discussed

  19. A model for scheduling projects under the condition of inflation and under penalty and reward arrangements

    J.K. Jolayemi

    2014-01-01

    Full Text Available A zero-one mixed integer linear programming model is developed for the scheduling of projects under the condition of inflation and under penalty and reward arrangements. The effects of inflation on time-cost trade-off curves are illustrated and a modified approach to time-cost trade-off analysis presented. Numerical examples are given to illustrate the model and its properties. The examples show that misleading schedules and inaccurate project-cost estimates will be produced if the inflation factor is neglected in an environment of high inflation. They also show that award of penalty or bonus is a catalyst for early completion of a project, just as it can be expected.

  20. Multiresolution Network Temporal and Spatial Scheduling Model of Scenic Spot

    Peng Ge

    2013-01-01

    Full Text Available Tourism is one of the pillar industries of the world economy. Low-carbon tourism will be the mainstream direction of scenic spots' development, and the path of low-carbon tourism development is to develop the economy and protect the environment simultaneously. However, as the number of tourists increases, the loads on scenic spots get out of control, and instantaneous overload in some spots gives the impression that the whole scenic spot is at full capacity. Therefore, real-time scheduling becomes the primary purpose of scenic spot management. This paper divides the tourism distribution system into several logically related subsystems and constructs a temporal and spatial multiresolution network scheduling model according to the regularity, in time and space, of the overload phenomenon at scenic spots. It also defines a dynamic distribution probability and an equivalent dynamic demand to realize real-time prediction. We define a gravitational function between fields and take it as the utility of the schedule; after solving the transportation model at each resolution, a hierarchical balance between the demand and capacity of the system is achieved. The last part of the paper analyzes the time complexity of constructing a multiresolution distribution system.

  1. A model for generating master surgical schedules to allow cyclic scheduling in operating room departments

    van Oostrum, J.M.; van Houdenhoven, M.; Hurink, Johann L.; Hans, Elias W.; Wullink, Gerhard; Kazemier, G.

    2005-01-01

    This paper addresses the problem of operating room scheduling at the tactical level of hospital planning and control. Hospitals repetitively construct operating room schedules, which is a time-consuming, tedious and complex task. The stochasticity of the durations of surgical procedures complicates

  2. Modelling female fertility traits in beef cattle using linear and non-linear models.

    Naya, H; Peñagaricano, F; Urioste, J I

    2017-06-01

    Female fertility traits are key components of the profitability of beef cattle production. However, these traits are difficult and expensive to measure, particularly under extensive pastoral conditions, and consequently, fertility records are in general scarce and somehow incomplete. Moreover, fertility traits are usually dominated by the effects of herd-year environment, and it is generally assumed that relatively small margins are kept for genetic improvement. New ways of modelling genetic variation in these traits are needed. Inspired by the methodological developments made by Prof. Daniel Gianola and co-workers, we assayed linear (Gaussian), Poisson, probit (threshold), censored Poisson and censored Gaussian models on three different kinds of endpoints, namely calving success (CS), number of days from first calving (CD) and number of failed oestrus (FE). For models involving FE and CS, non-linear models outperformed their linear counterparts. For models derived from CD, linear versions displayed better adjustment than the non-linear counterparts. Non-linear models showed consistently higher estimates of heritability and repeatability in all cases (h² and r were lower for linear models; h² > 0.23 and r > 0.24 for non-linear models). While additive and permanent environment effects showed highly favourable correlations between all models (>0.789), consistency in selecting the 10% best sires showed important differences, mainly amongst the considered endpoints (FE, CS and CD). In consequence, endpoints should be considered as modelling different underlying genetic effects, with linear models more appropriate to describe CD and non-linear models better for FE and CS. © 2017 Blackwell Verlag GmbH.

  3. LMI-based gain scheduled controller synthesis for a class of linear parameter varying systems

    Bendtsen, Jan Dimon; Anderson, Brian; Lanzon, Alexander

    2006-01-01

    This paper presents a novel method for constructing controllers for a class of single-input multiple-output (SIMO) linear parameter varying (LPV) systems. This class of systems encompasses many physical systems, in particular systems where individual components vary with time, and is therefore...... of significant practical relevance to control designers. The control design presented in this paper has the properties that the system matrix of the closed loop is multi-affine in the various scalar parameters, and that the resulting controller ensures a certain degree of stability for the closed loop even when...... as a standard linear time-invariant (LTI) design combined with a set of linear matrix inequalities, which can be solved efficiently with software tools. The design procedure is illustrated by a numerical example....

  4. Modelling point patterns with linear structures

    Møller, Jesper; Rasmussen, Jakob Gulddahl

    2009-01-01

    processes whose realizations contain such linear structures. Such a point process is constructed sequentially by placing one point at a time. The points are placed in such a way that new points are often placed close to previously placed points, and the points form roughly line shaped structures. We...... consider simulations of this model and compare with real data....

  5. Modelling point patterns with linear structures

    Møller, Jesper; Rasmussen, Jakob Gulddahl

    processes whose realizations contain such linear structures. Such a point process is constructed sequentially by placing one point at a time. The points are placed in such a way that new points are often placed close to previously placed points, and the points form roughly line shaped structures. We...... consider simulations of this model and compare with real data....

  6. Optimal designs for linear mixture models

    Mendieta, E.J.; Linssen, H.N.; Doornbos, R.

    1975-01-01

    In a recent paper Snee and Marquardt [8] considered designs for linear mixture models, where the components are subject to individual lower and/or upper bounds. When the number of components is large their algorithm XVERT yields designs far too extensive for practical purposes. The purpose of this

  7. Optimal designs for linear mixture models

    Mendieta, E.J.; Linssen, H.N.; Doornbos, R.

    1975-01-01

    In a recent paper Snee and Marquardt (1974) considered designs for linear mixture models, where the components are subject to individual lower and/or upper bounds. When the number of components is large their algorithm XVERT yields designs far too extensive for practical purposes. The purpose of

  8. Linear factor copula models and their properties

    Krupskii, Pavel; Genton, Marc G.

    2018-01-01

    We consider a special case of factor copula models with additive common factors and independent components. These models are flexible and parsimonious with O(d) parameters where d is the dimension. The linear structure allows one to obtain closed form expressions for some copulas and their extreme‐value limits. These copulas can be used to model data with strong tail dependencies, such as extreme data. We study the dependence properties of these linear factor copula models and derive the corresponding limiting extreme‐value copulas with a factor structure. We show how parameter estimates can be obtained for these copulas and apply one of these copulas to analyse a financial data set.

  9. Linear factor copula models and their properties

    Krupskii, Pavel

    2018-04-25

    We consider a special case of factor copula models with additive common factors and independent components. These models are flexible and parsimonious with O(d) parameters where d is the dimension. The linear structure allows one to obtain closed form expressions for some copulas and their extreme‐value limits. These copulas can be used to model data with strong tail dependencies, such as extreme data. We study the dependence properties of these linear factor copula models and derive the corresponding limiting extreme‐value copulas with a factor structure. We show how parameter estimates can be obtained for these copulas and apply one of these copulas to analyse a financial data set.

  10. Diagnostics for Linear Models With Functional Responses

    Xu, Hongquan; Shen, Qing

    2005-01-01

    Linear models where the response is a function and the predictors are vectors are useful in analyzing data from designed experiments and other situations with functional observations. Residual analysis and diagnostics are considered for such models. Studentized residuals are defined and their properties are studied. Chi-square quantile-quantile plots are proposed to check the assumption of Gaussian error process and outliers. Jackknife residuals and an associated test are proposed to det...

  11. Non-linear Loudspeaker Unit Modelling

    Pedersen, Bo Rohde; Agerkvist, Finn T.

    2008-01-01

    Simulations of a 6½-inch loudspeaker unit are performed and compared with a displacement measurement. The non-linear loudspeaker model is based on the major nonlinear functions and expanded with time-varying suspension behaviour and flux modulation. The results are presented with FFT plots of thr...... frequencies and different displacement levels. The model errors are discussed and analysed including a test with loudspeaker unit where the diaphragm is removed....

  12. Communication scheduling in robust self-triggered MPC for linear discrete-time systems

    Brunner, F.D.; Gommans, T.M.P.; Heemels, W.P.M.H.; Allgöwer, F.

    2015-01-01

    We consider a networked control system consisting of a physical plant, an actuator, a sensor, and a controller that is connected to the actuator and sensor via a communication network. The plant is described by a linear discrete-time system subject to additive disturbances. In order to reduce the

  13. Integrated model for pricing, delivery time setting, and scheduling in make-to-order environments

    Garmdare, Hamid Sattari; Lotfi, M. M.; Honarvar, Mahboobeh

    2018-03-01

    Usually, in make-to-order environments, which work only in response to customers' orders, manufacturers should offer the best price and delivery time for an order to maximize profit, considering the existing capacity and the customer's sensitivity to both factors. In this paper, an integrated approach for pricing, delivery time setting and scheduling of newly arriving orders is proposed based on the existing capacity and the accepted orders in the system. In the problem, the acquired market demand depends on the price and delivery time of both the manufacturer and its competitors. A mixed-integer non-linear programming model is presented for the problem. After conversion to a pure non-linear model, it is validated through a case study. The efficiency of the proposed model is confirmed by comparing it to both the literature and current practice. Finally, a sensitivity analysis of the key parameters is carried out.

  14. Evaluating the effectiveness of mixed-integer linear programming for day-ahead hydro-thermal self-scheduling considering price uncertainty and forced outage rate

    Esmaeily, Ali; Ahmadi, Abdollah; Raeisi, Fatima; Ahmadi, Mohammad Reza; Esmaeel Nezhad, Ali; Janghorbani, Mohammadreza

    2017-01-01

    A new optimization framework based on an MILP model is introduced in this paper for the problem of stochastic self-scheduling of hydrothermal units, known as the HTSS problem, implemented in a joint energy and reserve electricity market with a day-ahead mechanism. The proposed MILP framework includes some practical constraints, such as the cost due to the valve-loading effect, the limit due to DRR and also multi-POZs, which have been less investigated in electricity market models. For greater accuracy, multiple performance curves are also used in the model of the hydro generating units. The problem is formulated using a model based on a stochastic optimization technique, with the objective of maximizing the expected profit using the MILP technique. The suggested stochastic self-scheduling model employs the price forecast error in order to take into account the uncertainty due to price. Besides, LMCS is combined with a roulette wheel mechanism to generate the scenarios corresponding to the non-spinning reserve price, the spinning reserve price and the energy price at each hour of the scheduling horizon. Finally, the IEEE 118-bus power system is used to demonstrate the performance and efficiency of the suggested technique. - Highlights: • Characterizing the uncertainties of price and FOR of units. • Replacing the fixed ramping rate constraints with dynamic ones. • Proposing a linearized model for the valve-point effects of thermal units. • Taking into consideration the multi-POZs of the thermal units. • Taking into consideration the multi-performance curves of hydroelectric units.

  15. Decision Model for Planning and Scheduling of Seafood Product Considering Traceability

    Agustin; Mawengkang, Herman; Mathelinea, Devy

    2018-01-01

    Due to global challenges, it is necessary for an industrial company to integrate production scheduling and distribution planning in order to be more efficient and to gain more economic advantages. This paper presents production planning and scheduling for a seafood manufacturing company, located in Aceh Province, Indonesia, which simultaneously produces multiple kinds of seafood products. The perishable nature of fish highly restricts its storage duration and delivery conditions. Traceability is a tracking requirement to check whether the quality of the product is satisfied. The production and distribution planning problem aims to meet customer demand subject to traceability of the seafood products and other restrictions. The problem is modeled as a mixed integer linear program and then solved using a neighborhood search approach.

  16. [From clinical judgment to linear regression model].

    Palacios-Cruz, Lino; Pérez, Marcela; Rivas-Ruiz, Rodolfo; Talavera, Juan O

    2013-01-01

    When we think about mathematical models, such as the linear regression model, we think that these terms are only used by those engaged in research, a notion that is far from the truth. Legendre described the first mathematical model in 1805, and Galton introduced the formal term in 1886. Linear regression is one of the most commonly used regression models in clinical practice. It is useful to predict or show the relationship between two or more variables as long as the dependent variable is quantitative and normally distributed. Stated another way, the regression is used to predict a measure based on knowledge of at least one other variable. The first objective of linear regression is to determine the slope or inclination of the regression line: Y = a + bx, where "a" is the intercept or regression constant, equivalent to the value of "Y" when "X" equals 0, and "b" (also called the slope) indicates the increase or decrease that occurs when the variable "x" increases or decreases by one unit. In the regression line, "b" is called the regression coefficient. The coefficient of determination (R²) indicates the importance of the independent variables in the outcome.
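
    A minimal numerical illustration of the regression line described above, with hypothetical data, could look as follows; the slope b, intercept a and R² are estimated by ordinary least squares.

      import numpy as np

      x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])          # hypothetical predictor values
      y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])          # hypothetical quantitative outcome

      b, a = np.polyfit(x, y, 1)                        # slope (regression coefficient) and intercept
      y_hat = a + b * x
      r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
      print(f"Y = {a:.2f} + {b:.2f}X, R^2 = {r2:.3f}")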

  17. Testing Parametric versus Semiparametric Modelling in Generalized Linear Models

    Härdle, W.K.; Mammen, E.; Müller, M.D.

    1996-01-01

    We consider a generalized partially linear model E(Y|X,T) = G{X'b + m(T)} where G is a known function, b is an unknown parameter vector, and m is an unknown function.The paper introduces a test statistic which allows to decide between a parametric and a semiparametric model: (i) m is linear, i.e.

  18. Modeling of Volatility with Non-linear Time Series Model

    Kim Song Yon; Kim Mun Chol

    2013-01-01

    In this paper, non-linear time series models are used to describe volatility in financial time series data. To describe volatility, two of the non-linear time series models are combined to form a TAR (Threshold Auto-Regressive) model with an AARCH (Asymmetric Auto-Regressive Conditional Heteroskedasticity) error term, and its parameter estimation is studied.

  19. Thresholding projection estimators in functional linear models

    Cardot, Hervé; Johannes, Jan

    2010-01-01

    We consider the problem of estimating the regression function in functional linear regression models by proposing a new type of projection estimators which combine dimension reduction and thresholding. The introduction of a threshold rule allows to get consistency under broad assumptions as well as minimax rates of convergence under additional regularity hypotheses. We also consider the particular case of Sobolev spaces generated by the trigonometric basis which permits to get easily mean squ...

  20. Decomposed Implicit Models of Piecewise - Linear Networks

    J. Brzobohaty

    1992-05-01

    Full Text Available The general matrix form of the implicit description of a piecewise-linear (PWL) network and the symbolic block diagram of the corresponding circuit model are proposed. Their decomposed forms enable us to determine quite separately the existence of the individual breakpoints of the resultant PWL characteristic and their coordinates using independent network parameters. For the two-diode and three-diode cases all the attainable types of the PWL characteristic are introduced.

  1. From spiking neuron models to linear-nonlinear models.

    Ostojic, Srdjan; Brunel, Nicolas

    2011-01-20

    Neurons transform time-varying inputs into action potentials emitted stochastically at a time dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static non-linear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to which extent the input-output mapping of biophysically more realistic, spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static non-linearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static non-linearity determined using standard reverse correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally we introduce an adaptive timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates.
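
    The LN cascade itself is easy to state in code: convolve the input with a linear temporal filter, then pass the result through a static non-linearity to obtain a firing rate. The exponential filter and softplus non-linearity below are generic placeholders chosen for illustration, not the parameter-free forms derived in the paper.

      import numpy as np

      dt = 0.001                                        # time step [s]
      t = np.arange(0.0, 1.0, dt)
      stimulus = np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.randn(t.size)

      tau = 0.02                                        # assumed filter time constant [s]
      t_f = np.arange(0.0, 0.2, dt)
      lin_filter = np.exp(-t_f / tau) / tau             # linear temporal filter

      drive = np.convolve(stimulus, lin_filter, mode="full")[: t.size] * dt
      rate = 20.0 * np.log1p(np.exp(drive))             # static non-linearity -> firing rate [Hz]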

  2. Stochastic linear programming models, theory, and computation

    Kall, Peter

    2011-01-01

    This new edition of Stochastic Linear Programming: Models, Theory and Computation has been brought completely up to date, either dealing with or at least referring to new material on models and methods, including DEA with stochastic outputs modeled via constraints on special risk functions (generalizing chance constraints, ICC’s and CVaR constraints), material on Sharpe-ratio, and Asset Liability Management models involving CVaR in a multi-stage setup. To facilitate use as a text, exercises are included throughout the book, and web access is provided to a student version of the authors’ SLP-IOR software. Additionally, the authors have updated the Guide to Available Software, and they have included newer algorithms and modeling systems for SLP. The book is thus suitable as a text for advanced courses in stochastic optimization, and as a reference to the field. From Reviews of the First Edition: "The book presents a comprehensive study of stochastic linear optimization problems and their applications. … T...

  3. Limited Area Forecasting and Statistical Modelling for Wind Energy Scheduling

    Rosgaard, Martin Haubjerg

    forecast accuracy for operational wind power scheduling. Numerical weather prediction history and scales of atmospheric motion are summarised, followed by a literature review of limited area wind speed forecasting. Hereafter, the original contribution to research on the topic is outlined. The quality...... control of wind farm data used as forecast reference is described in detail, and a preliminary limited area forecasting study illustrates the aggravation of issues related to numerical orography representation and accurate reference coordinates at fine weather model resolutions. For the offshore and coastal...... sites studied limited area forecasting is found to deteriorate wind speed prediction accuracy, while inland results exhibit a steady forecast performance increase with weather model resolution. Temporal smoothing of wind speed forecasts is shown to improve wind power forecast performance by up to almost...

  4. WE-D-BRE-04: Modeling Optimal Concurrent Chemotherapy Schedules

    Jeong, J; Deasy, J O

    2014-01-01

    Purpose: Concurrent chemo-radiation therapy (CCRT) has become a more common cancer treatment option with a better tumor control rate for several tumor sites, including head and neck and lung cancer. In this work, possible optimal chemotherapy schedules were investigated by implementing chemotherapy cell-kill into a tumor response model of RT. Methods: The chemotherapy effect has been added into a published model (Jeong et al., PMB (2013) 58:4897), in which the tumor response to RT can be simulated with the effects of hypoxia and proliferation. Based on the two-compartment pharmacokinetic model, the temporal concentration of chemotherapy agent was estimated. Log cell-kill was assumed and the cell-kill constant was estimated from the observed increase in local control due to concurrent chemotherapy. For a simplified two cycle CCRT regime, several different starting times and intervals were simulated with conventional RT regime (2Gy/fx, 5fx/wk). The effectiveness of CCRT was evaluated in terms of reduction in radiation dose required for 50% of control to find the optimal chemotherapy schedule. Results: Assuming the typical slope of dose response curve (γ50=2), the observed 10% increase in local control rate was evaluated to be equivalent to an extra RT dose of about 4 Gy, from which the cell-kill rate of chemotherapy was derived to be about 0.35. Best response was obtained when chemotherapy was started at about 3 weeks after RT began. As the interval between two cycles decreases, the efficacy of chemotherapy increases with broader range of optimal starting times. Conclusion: The effect of chemotherapy has been implemented into the resource-conservation tumor response model to investigate CCRT. The results suggest that the concurrent chemotherapy might be more effective when delayed for about 3 weeks, due to lower tumor burden and a larger fraction of proliferating cells after reoxygenation

  5. Linear accelerator modeling: development and application

    Jameson, R.A.; Jule, W.D.

    1977-01-01

    Most of the parameters of a modern linear accelerator can be selected by simulating the desired machine characteristics in a computer code and observing how the parameters affect the beam dynamics. The code PARMILA is used at LAMPF for the low-energy portion of linacs. Collections of particles can be traced with a free choice of input distributions in six-dimensional phase space. Random errors are often included in order to study the tolerances which should be imposed during manufacture or in operation. An outline is given of the modifications made to the model, the results of experiments which indicate the validity of the model, and the use of the model to optimize the longitudinal tuning of the Alvarez linac

  6. Running vacuum cosmological models: linear scalar perturbations

    Perico, E.L.D. [Instituto de Física, Universidade de São Paulo, Rua do Matão 1371, CEP 05508-090, São Paulo, SP (Brazil); Tamayo, D.A., E-mail: elduartep@usp.br, E-mail: tamayo@if.usp.br [Departamento de Astronomia, Universidade de São Paulo, Rua do Matão 1226, CEP 05508-900, São Paulo, SP (Brazil)

    2017-08-01

    In cosmology, phenomenologically motivated expressions for running vacuum are commonly parameterized as linear functions, typically denoted by Λ(H²) or Λ(R). Such models assume an equation of state for the vacuum given by P̄_Λ = −ρ̄_Λ, relating its background pressure P̄_Λ with its mean energy density ρ̄_Λ ≡ Λ/(8πG). This equation of state suggests that the vacuum dynamics is due to an interaction with the matter content of the universe. Most of the approaches studying the observational impact of these models only consider the interaction between the vacuum and the transient dominant matter component of the universe. We extend such models by assuming that the running vacuum is the sum of independent contributions, namely ρ̄_Λ = Σ_i ρ̄_Λi. Each Λ_i vacuum component is associated with and interacts with one of the matter components at both the background and perturbation levels. We derive the evolution equations for the linear scalar vacuum and matter perturbations in those two scenarios, and identify the running vacuum imprints on the cosmic microwave background anisotropies as well as on the matter power spectrum. In the Λ(H²) scenario the vacuum is coupled with every matter component, whereas the Λ(R) description only leads to a coupling between vacuum and non-relativistic matter, producing different effects on the matter power spectrum.

  7. Linear Parametric Model Checking of Timed Automata

    Hune, Tohmas Seidelin; Romijn, Judi; Stoelinga, Mariëlle

    2001-01-01

    We present an extension of the model checker Uppaal capable of synthesizing linear parameter constraints for the correctness of parametric timed automata. The symbolic representation of the (parametric) state-space is shown to be correct. A second contribution of this paper is the identification...... of a subclass of parametric timed automata (L/U automata), for which the emptiness problem is decidable, contrary to the full class where it is known to be undecidable. Also we present a number of lemmas enabling the verification effort to be reduced for L/U automata in some cases. We illustrate our approach......

  8. Design Change Model for Effective Scheduling Change Propagation Paths

    Zhang, Hai-Zhu; Ding, Guo-Fu; Li, Rong; Qin, Sheng-Feng; Yan, Kai-Yin

    2017-09-01

    Changes in requirements may result in increased product development project cost and lead time; therefore, it is important to understand how requirement changes propagate in the design of complex product systems and to be able to select the best options to guide design. Currently, most approaches to design change fail to take the multi-disciplinary coupling relationships and the number of parameters into account in an integrated way. A new design change model is presented to systematically analyze and search for change propagation paths. Firstly, a PDS-Behavior-Structure-based design change model is established to describe requirement changes causing design change propagation in the behavior and structure domains. Secondly, a multi-disciplinary oriented behavior matrix is utilized to support change propagation analysis of complex product systems, and the interaction relationships of the matrix elements are used to obtain an initial set of change paths. Finally, a rough set-based propagation space reducing tool is developed to assist in narrowing change propagation paths by computing the importance of the design change parameters. The proposed new design change model and its associated tools have been demonstrated by scheduling the change propagation paths of a high-speed train's bogie to show their feasibility and effectiveness. This model not only supports quick responses to diversified market requirements, but is also helpful for satisfying customer requirements and reducing product development lead time. The proposed new design change model can be applied in a wide range of engineering systems design with improved efficiency.

  9. Aspects of general linear modelling of migration.

    Congdon, P

    1992-01-01

    "This paper investigates the application of general linear modelling principles to analysing migration flows between areas. Particular attention is paid to specifying the form of the regression and error components, and the nature of departures from Poisson randomness. Extensions to take account of spatial and temporal correlation are discussed as well as constrained estimation. The issue of specification bears on the testing of migration theories, and assessing the role migration plays in job and housing markets: the direction and significance of the effects of economic variates on migration depends on the specification of the statistical model. The application is in the context of migration in London and South East England in the 1970s and 1980s." excerpt

  10. Model Selection with the Linear Mixed Model for Longitudinal Data

    Ryoo, Ji Hoon

    2011-01-01

    Model building or model selection with linear mixed models (LMMs) is complicated by the presence of both fixed effects and random effects. The fixed effects structure and random effects structure are codependent, so selection of one influences the other. Most presentations of LMM in psychology and education are based on a multilevel or…

  11. Modelling and Predicting Backstroke Start Performance Using Non-Linear and Linear Models.

    de Jesus, Karla; Ayala, Helon V H; de Jesus, Kelly; Coelho, Leandro Dos S; Medeiros, Alexandre I A; Abraldes, José A; Vaz, Mário A P; Fernandes, Ricardo J; Vilas-Boas, João Paulo

    2018-03-01

    Our aim was to compare non-linear and linear mathematical model responses for backstroke start performance prediction. Ten swimmers randomly completed eight 15 m backstroke starts with feet over the wedge, four with hands on the highest horizontal and four on the vertical handgrip. Swimmers were videotaped using a dual media camera set-up, with the starts being performed over an instrumented block with four force plates. Artificial neural networks were applied to predict 5 m start time using kinematic and kinetic variables and to determine the accuracy of the mean absolute percentage error. Artificial neural networks predicted start time more robustly than the linear model with respect to changing training to the validation dataset for the vertical handgrip (3.95 ± 1.67 vs. 5.92 ± 3.27%). Artificial neural networks obtained a smaller mean absolute percentage error than the linear model in the horizontal (0.43 ± 0.19 vs. 0.98 ± 0.19%) and vertical handgrip (0.45 ± 0.19 vs. 1.38 ± 0.30%) using all input data. The best artificial neural network validation revealed a smaller mean absolute error than the linear model for the horizontal (0.007 vs. 0.04 s) and vertical handgrip (0.01 vs. 0.03 s). Artificial neural networks should be used for backstroke 5 m start time prediction due to the quite small differences among the elite level performances.
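
    A rough sketch of this comparison methodology, on synthetic stand-in data rather than the study's kinematic and kinetic measurements, could look as follows; the network size and the use of scikit-learn's mean absolute percentage error are assumptions for illustration.

```python
# Sketch: comparing a small neural network with linear regression by
# mean absolute percentage error (MAPE) on synthetic stand-in data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))                 # stand-in kinematic/kinetic predictors
y = 1.6 + 0.1 * X[:, 0] - 0.05 * X[:, 1] ** 2 + 0.02 * rng.normal(size=200)  # 5 m start time [s]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

linear = LinearRegression().fit(X_tr, y_tr)
ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X_tr, y_tr)

for name, model in [("linear", linear), ("ANN", ann)]:
    mape = mean_absolute_percentage_error(y_te, model.predict(X_te))
    print(f"{name}: MAPE = {100 * mape:.2f}%")
```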

  12. New Mathematical Model and Algorithm for Economic Lot Scheduling Problem in Flexible Flow Shop

    H. Zohali

    2018-03-01

    This paper addresses the lot sizing and scheduling problem for a number of products in a flexible flow shop with identical parallel machines. The production stages are in series, separated by finite intermediate buffers. The objective is to minimize the sum of setup and inventory holding costs per unit of time. The available mathematical model of this problem in the literature suffers from huge complexity in terms of size and computation. In this paper, a new mixed integer linear program is developed to deal with the huge dimensions of the problem. Also, a new metaheuristic algorithm is developed for the problem. The results of the numerical experiments show a significant advantage of the proposed model and algorithm compared with the available models and algorithms in the literature.

  13. Optimization Model for Capacity Management and Bed Scheduling for Hospital

    Sitepu, Suryati; Mawengkang, Herman; Husein, Ismail

    2018-01-01

    The hospital is a very important institution for providing health care to people. It is not surprising that nowadays people's demand for hospitals is increasing. However, due to the rising cost of healthcare services, hospitals need to consider efficiencies in order to overcome these two problems. This paper deals with an integrated strategy of staff capacity management and bed allocation planning to tackle these problems. Mathematically, the strategy can be modeled as an integer linear programming problem. We solve the model using a direct neighborhood search approach, based on the notion of superbasic variables.

  14. Modeling patterns in data using linear and related models

    Engelhardt, M.E.

    1996-06-01

    This report considers the use of linear models for analyzing data related to reliability and safety issues of the type usually associated with nuclear power plants. The report discusses some of the general results of linear regression analysis, such as the model assumptions and properties of the estimators of the parameters. The results are motivated with examples of operational data. Results about the important case of a linear regression model with one covariate are covered in detail. This case includes analysis of time trends. The analysis is applied with two different sets of time trend data. Diagnostic procedures and tests for the adequacy of the model are discussed. Some related methods such as weighted regression and nonlinear models are also considered. A discussion of the general linear model is also included. Appendix A gives some basic SAS programs and outputs for some of the analyses discussed in the body of the report. Appendix B is a review of some of the matrix theoretic results which are useful in the development of linear models
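
    For the one-covariate case highlighted above (a linear time trend), a minimal sketch might look like the following; the yearly counts are hypothetical and the report's SAS programs are not reproduced here.

```python
# Sketch of the one-covariate case: fitting a linear time trend to
# yearly event counts with ordinary least squares (illustrative data).
import numpy as np
import statsmodels.api as sm

years = np.arange(1990, 2000)
events = np.array([12, 11, 13, 10, 9, 9, 8, 7, 8, 6])   # hypothetical counts

X = sm.add_constant(years - years.min())      # intercept + time covariate
fit = sm.OLS(events, X).fit()
print(fit.params)                             # intercept and slope (trend per year)
print(fit.pvalues[1])                         # test of "no time trend"
```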

  15. Electron Model of Linear-Field FFAG

    Koscielniak, Shane R

    2005-01-01

    A fixed-field alternating-gradient accelerator (FFAG) that employs only linear-field elements ushers in a new regime in accelerator design and dynamics. The linear-field machine has the ability to compact an unprecedented range in momenta within a small component aperture. With a tune variation which results from the natural chromaticity, the beam crosses many strong, uncorrectable, betatron resonances during acceleration. Further, relativistic particles in this machine exhibit a quasi-parabolic time-of-flight that cannot be addressed with a fixed-frequency rf system. This leads to a new concept of bucketless acceleration within a rotation manifold. With a large energy jump per cell, there is possibly strong synchro-betatron coupling. A few-MeV electron model has been proposed to demonstrate the feasibility of these untested acceleration features and to investigate them at length under a wide range of operating conditions. This paper presents a lattice optimized for a 1.3 GHz rf, initial technology choices f...

  16. Linear models in the mathematics of uncertainty

    Mordeson, John N; Clark, Terry D; Pham, Alex; Redmond, Michael A

    2013-01-01

    The purpose of this book is to present new mathematical techniques for modeling global issues. These mathematical techniques are used to determine linear equations between a dependent variable and one or more independent variables in cases where standard techniques such as linear regression are not suitable. In this book, we examine cases where the number of data points is small (effects of nuclear warfare), where the experiment is not repeatable (the breakup of the former Soviet Union), and where the data is derived from expert opinion (how conservative is a political party). In all these cases the data  is difficult to measure and an assumption of randomness and/or statistical validity is questionable.  We apply our methods to real world issues in international relations such as  nuclear deterrence, smart power, and cooperative threat reduction. We next apply our methods to issues in comparative politics such as successful democratization, quality of life, economic freedom, political stability, and fail...

  17. Generalized Linear Models in Vehicle Insurance

    Silvie Kafková

    2014-01-01

    Full Text Available Actuaries in insurance companies try to find the best model for an estimation of insurance premium. It depends on many risk factors, e.g. the car characteristics and the profile of the driver. In this paper, an analysis of the portfolio of vehicle insurance data using a generalized linear model (GLM is performed. The main advantage of the approach presented in this article is that the GLMs are not limited by inflexible preconditions. Our aim is to predict the relation of annual claim frequency on given risk factors. Based on a large real-world sample of data from 57 410 vehicles, the present study proposed a classification analysis approach that addresses the selection of predictor variables. The models with different predictor variables are compared by analysis of deviance and Akaike information criterion (AIC. Based on this comparison, the model for the best estimate of annual claim frequency is chosen. All statistical calculations are computed in R environment, which contains stats package with the function for the estimation of parameters of GLM and the function for analysis of deviation.
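
    As an illustration of the modelling approach, a Poisson GLM for claim frequency with an exposure offset can be fitted and compared by AIC roughly as follows; the portfolio data and predictor names are invented for the sketch, and the record's own analysis was carried out in R rather than Python.

```python
# Sketch of a claim-frequency GLM: Poisson family with log link, an exposure
# offset, and AIC-based comparison of predictor sets (made-up data).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

policies = pd.DataFrame({
    "claims":   [0, 1, 0, 2, 0, 1, 0, 0, 3, 1],
    "exposure": [1.0, 0.5, 1.0, 1.0, 0.8, 1.0, 0.3, 1.0, 1.0, 0.7],
    "age_band": ["18-25", "26-40", "41-60", "26-40", "18-25",
                 "41-60", "26-40", "41-60", "18-25", "26-40"],
    "power":    [80, 60, 55, 90, 110, 50, 70, 65, 120, 75],
})

offset = np.log(policies["exposure"])
m1 = smf.glm("claims ~ C(age_band)", data=policies, offset=offset,
             family=sm.families.Poisson()).fit()
m2 = smf.glm("claims ~ C(age_band) + power", data=policies, offset=offset,
             family=sm.families.Poisson()).fit()
print(m1.aic, m2.aic)      # smaller AIC -> preferred predictor set
```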

  18. Nonlinear price impact from linear models

    Patzelt, Felix; Bouchaud, Jean-Philippe

    2017-12-01

    The impact of trades on asset prices is a crucial aspect of market dynamics for academics, regulators, and practitioners alike. Recently, universal and highly nonlinear master curves were observed for price impacts aggregated on all intra-day scales (Patzelt and Bouchaud 2017 arXiv:1706.04163). Here we investigate how well these curves, their scaling, and the underlying return dynamics are captured by linear ‘propagator’ models. We find that the classification of trades as price-changing versus non-price-changing can explain the price impact nonlinearities and short-term return dynamics to a very high degree. The explanatory power provided by the change indicator in addition to the order sign history increases with increasing tick size. To obtain these results, several long-standing technical issues for model calibration and testing are addressed. We present new spectral estimators for two- and three-point cross-correlations, removing the need for previously used approximations. We also show when calibration is unbiased and how to accurately reveal previously overlooked biases. Therefore, our results contribute significantly to understanding both recent empirical results and the properties of a popular class of impact models.

  19. Linear Equating for the NEAT Design: Parameter Substitution Models and Chained Linear Relationship Models

    Kane, Michael T.; Mroch, Andrew A.; Suh, Youngsuk; Ripkey, Douglas R.

    2009-01-01

    This paper analyzes five linear equating models for the "nonequivalent groups with anchor test" (NEAT) design with internal anchors (i.e., the anchor test is part of the full test). The analysis employs a two-dimensional framework. The first dimension contrasts two general approaches to developing the equating relationship. Under a "parameter…

  20. Two-MILP models for scheduling elective surgeries within a private healthcare facility.

    Khlif Hachicha, Hejer; Zeghal Mansour, Farah

    2016-11-05

    This paper deals with an Integrated Elective Surgery-Scheduling Problem (IESSP) that arises in a privately operated healthcare facility. It aims to optimize the resource utilization of the entire surgery process including pre-operative, per-operative and post-operative activities. Moreover, it addresses a specific feature of private facilities where surgeons are independent service providers and may conduct their surgeries in different private healthcare facilities. Thus, the problem requires the assignment of surgery patients to hospital beds, operating rooms and recovery beds as well as their sequencing over a 1-day period while taking into account surgeons' availability constraints. We present two Mixed Integer Linear Programs (MILP) that model the IESSP as a three-stage hybrid flow-shop scheduling problem with recirculation, resource synchronization, dedicated machines, and blocking constraints. To assess the empirical performance of the proposed models, we conducted experiments on real-world data of a Tunisian private clinic: Clinique Ennasr and on randomly generated instances. Two criteria were minimised: the patients' average length of stay and the number of patients' overnight stays. The computational results show that the proposed models can solve instances with up to 44 surgical cases in a reasonable CPU time using a general-purpose MILP solver.

  1. Modelling a Nurse Shift Schedule with Multiple Preference Ranks for Shifts and Days-Off

    Chun-Cheng Lin

    2014-01-01

    When it comes to nurse shift schedules, the nursing staff are found to have diverse preferences about shift rotations and days-off. Previous studies focused only on the most preferred work shift and the number of satisfactory days-off in the current schedule period, with little discussion of previous schedule periods and of other preference levels for shifts and days-off, which may affect the fairness of shift schedules. As a result, this paper proposes a nurse scheduling model based upon integer programming that takes into account the constraints of the schedule, different preference ranks towards each shift, and the historical data of previous schedule periods, in order to maximize the satisfaction of all the nursing staff's preferences about the shift schedule. The main contribution of the proposed model is that the nursing staff's satisfaction level is considered to be affected by multiple preference ranks and by the priority ordering in which staff are scheduled, so that the quality of the generated shift schedule is more reasonable. Numerical results show that the planned shifts and days-off are fair and successfully meet the preferences of all the nursing staff.
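
    A heavily simplified integer-programming sketch in this spirit, using the PuLP library, is shown below; the staffing data, coverage requirements and the scoring of preference ranks are all hypothetical and far smaller than a realistic roster.

```python
# Minimal nurse-shift assignment sketch with preference ranks, using PuLP.
# Data, coverage requirements and the rank-to-score mapping are hypothetical.
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary, value

nurses = ["A", "B", "C"]
days = range(3)
shifts = ["day", "night", "off"]
required = {"day": 1, "night": 1}                 # staff needed per working shift

# rank[nurse][shift]: lower rank = more preferred; scored here as 3 - rank.
rank = {"A": {"day": 1, "night": 2, "off": 3},
        "B": {"day": 2, "night": 1, "off": 3},
        "C": {"day": 1, "night": 3, "off": 2}}

x = {(n, d, s): LpVariable(f"x_{n}_{d}_{s}", cat=LpBinary)
     for n in nurses for d in days for s in shifts}

model = LpProblem("nurse_schedule", LpMaximize)
model += lpSum((3 - rank[n][s]) * x[n, d, s] for n in nurses for d in days for s in shifts)

for n in nurses:
    for d in days:
        model += lpSum(x[n, d, s] for s in shifts) == 1        # one assignment per day
for d in days:
    for s, need in required.items():
        model += lpSum(x[n, d, s] for n in nurses) >= need     # coverage per shift

model.solve()
for d in days:
    print(d, {n: next(s for s in shifts if value(x[n, d, s]) > 0.5) for n in nurses})
```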

  2. Green vessel scheduling in liner shipping: Modeling carbon dioxide emission costs in sea and at ports of call

    Maxim A. Dulebenets

    2018-03-01

    Full Text Available Considering a substantial increase in volumes of the international seaborne trade and drastic climate changes due to carbon dioxide emissions, liner shipping companies have to improve planning of their vessel schedules and improve energy efficiency. This paper presents a novel mixed integer non-linear mathematical model for the green vessel scheduling problem, which directly accounts for the carbon dioxide emission costs in sea and at ports of call. The original non-linear model is linearized and then solved using CPLEX. A set of numerical experiments are conducted for a real-life liner shipping route to reveal managerial insights that can be of importance to liner shipping companies. Results indicate that the proposed mathematical model can serve as an efficient planning tool for liner shipping companies and may assist with evaluation of various carbon dioxide taxation schemes. Increasing carbon dioxide tax may substantially change the design of vessel schedules, incur additional route service costs, and improve the environmental sustainability. However, the effects from increasing carbon dioxide tax on the marine container terminal operations are found to be very limited.

  3. Piecewise Linear-Linear Latent Growth Mixture Models with Unknown Knots

    Kohli, Nidhi; Harring, Jeffrey R.; Hancock, Gregory R.

    2013-01-01

    Latent growth curve models with piecewise functions are flexible and useful analytic models for investigating individual behaviors that exhibit distinct phases of development in observed variables. As an extension of this framework, this study considers a piecewise linear-linear latent growth mixture model (LGMM) for describing segmented change of…

  4. From linear to generalized linear mixed models: A case study in repeated measures

    Compared to traditional linear mixed models, generalized linear mixed models (GLMMs) can offer better correspondence between response variables and explanatory models, yielding more efficient estimates and tests in the analysis of data from designed experiments. Using proportion data from a designed...

  5. Applying mathematical models to predict resident physician performance and alertness on traditional and novel work schedules.

    Klerman, Elizabeth B; Beckett, Scott A; Landrigan, Christopher P

    2016-09-13

    In 2011 the U.S. Accreditation Council for Graduate Medical Education began limiting first year resident physicians (interns) to shifts of ≤16 consecutive hours. Controversy persists regarding the effectiveness of this policy for reducing errors and accidents while promoting education and patient care. Using a mathematical model of the effects of circadian rhythms and length of time awake on objective performance and subjective alertness, we quantitatively compared predictions for traditional intern schedules to those that limit work to ≤ 16 consecutive hours. We simulated two traditional schedules and three novel schedules using the mathematical model. The traditional schedules had extended duration work shifts (≥24 h) with overnight work shifts every second shift (including every third night, Q3) or every third shift (including every fourth night, Q4) night; the novel schedules had two different cross-cover (XC) night team schedules (XC-V1 and XC-V2) and a Rapid Cycle Rotation (RCR) schedule. Predicted objective performance and subjective alertness for each work shift were computed for each individual's schedule within a team and then combined for the team as a whole. Our primary outcome was the amount of time within a work shift during which a team's model-predicted objective performance and subjective alertness were lower than that expected after 16 or 24 h of continuous wake in an otherwise rested individual. The model predicted fewer hours with poor performance and alertness, especially during night-time work hours, for all three novel schedules than for either the traditional Q3 or Q4 schedules. Three proposed schedules that eliminate extended shifts may improve performance and alertness compared with traditional Q3 or Q4 schedules. Predicted times of worse performance and alertness were at night, which is also a time when supervision of trainees is lower. Mathematical modeling provides a quantitative comparison approach with potential to aid

  6. Comparison of linear and non-linear models for predicting energy expenditure from raw accelerometer data.

    Montoye, Alexander H K; Begum, Munni; Henning, Zachary; Pfeiffer, Karin A

    2017-02-01

    This study had three purposes, all related to evaluating energy expenditure (EE) prediction accuracy from body-worn accelerometers: (1) compare linear regression to linear mixed models, (2) compare linear models to artificial neural network models, and (3) compare accuracy of accelerometers placed on the hip, thigh, and wrists. Forty individuals performed 13 activities in a 90 min semi-structured, laboratory-based protocol. Participants wore accelerometers on the right hip, right thigh, and both wrists and a portable metabolic analyzer (EE criterion). Four EE prediction models were developed for each accelerometer: linear regression, linear mixed, and two ANN models. EE prediction accuracy was assessed using correlations, root mean square error (RMSE), and bias and was compared across models and accelerometers using repeated-measures analysis of variance. For all accelerometer placements, there were no significant differences for correlations or RMSE between linear regression and linear mixed models (correlations: r = 0.71-0.88, RMSE: 1.11-1.61 METs; p > 0.05). For the thigh-worn accelerometer, there were no differences in correlations or RMSE between linear and ANN models (ANN-correlations: r = 0.89, RMSE: 1.07-1.08 METs. Linear models-correlations: r = 0.88, RMSE: 1.10-1.11 METs; p > 0.05). Conversely, one ANN had higher correlations and lower RMSE than both linear models for the hip (ANN-correlation: r = 0.88, RMSE: 1.12 METs. Linear models-correlations: r = 0.86, RMSE: 1.18-1.19 METs; p < 0.05), and both ANNs had higher correlations and lower RMSE than the linear models for the wrist-worn accelerometers (ANN-correlations: r = 0.82-0.84, RMSE: 1.26-1.32 METs. Linear models-correlations: r = 0.71-0.73, RMSE: 1.55-1.61 METs; p < 0.05). For the wrist-worn accelerometers, ANN models offer a significant improvement in EE prediction accuracy over linear models. Conversely, linear models showed similar EE prediction accuracy to machine learning models for hip- and thigh

  7. Data Model Approach And Markov Chain Based Analysis Of Multi-Level Queue Scheduling

    Diwakar Shukla

    2010-01-01

    There are many CPU scheduling algorithms in the literature, such as FIFO, Round Robin, Shortest-Job-First and so on. Multilevel queue scheduling is superior to these due to its better management of a variety of processes. In this paper, a Markov chain model is used for a general setup of multilevel queue scheduling, and the scheduler is assumed to perform random movement over the queues in each quantum of time. Performance of scheduling is examined through a row-dependent data model. It is found that with increasing values of α and d, the chance of the system entering the waiting state reduces. At some interesting combinations of α and d, it diminishes to zero, thereby providing some clue regarding a better choice of queues over others for high-priority jobs. It is found that if queue priorities are added to the scheduling intelligently, then better performance can be obtained. The data model helps in choosing appropriate preferences.
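
    The scheduler-as-Markov-chain idea can be sketched with a small simulation: the chain moves between queue states (and a waiting state) once per quantum according to a transition matrix. The matrix below is purely illustrative and does not reproduce the paper's row-dependent data model or its α and d parameters.

```python
# Sketch of a scheduler modelled as a Markov chain over queue states.
# The transition matrix is illustrative only; the paper's row-dependent
# data model (parameters alpha and d) is not reproduced here.
import numpy as np

states = ["Q1", "Q2", "Q3", "waiting"]
P = np.array([[0.6, 0.2, 0.1, 0.1],
              [0.3, 0.5, 0.1, 0.1],
              [0.2, 0.2, 0.5, 0.1],
              [0.4, 0.3, 0.2, 0.1]])      # each row sums to 1

rng = np.random.default_rng(1)
state, visits = 0, np.zeros(len(states))
for _ in range(10_000):                    # one quantum per step
    visits[state] += 1
    state = rng.choice(len(states), p=P[state])

print(dict(zip(states, visits / visits.sum())))   # long-run share of quanta
```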

  8. Evaluating the double Poisson generalized linear model.

    Zou, Yaotian; Geedipally, Srinivas Reddy; Lord, Dominique

    2013-10-01

    The objectives of this study are to: (1) examine the applicability of the double Poisson (DP) generalized linear model (GLM) for analyzing motor vehicle crash data characterized by over- and under-dispersion and (2) compare the performance of the DP GLM with the Conway-Maxwell-Poisson (COM-Poisson) GLM in terms of goodness-of-fit and theoretical soundness. The DP distribution has seldom been investigated and applied since its first introduction two decades ago. The hurdle for applying the DP is related to its normalizing constant (or multiplicative constant) which is not available in closed form. This study proposed a new method to approximate the normalizing constant of the DP with high accuracy and reliability. The DP GLM and COM-Poisson GLM were developed using two observed over-dispersed datasets and one observed under-dispersed dataset. The modeling results indicate that the DP GLM with its normalizing constant approximated by the new method can handle crash data characterized by over- and under-dispersion. Its performance is comparable to the COM-Poisson GLM in terms of goodness-of-fit (GOF), although COM-Poisson GLM provides a slightly better fit. For the over-dispersed data, the DP GLM performs similar to the NB GLM. Considering the fact that the DP GLM can be easily estimated with inexpensive computation and that it is simpler to interpret coefficients, it offers a flexible and efficient alternative for researchers to model count data. Copyright © 2013 Elsevier Ltd. All rights reserved.

  9. Petri Nets as Models of Linear Logic

    Engberg, Uffe Henrik; Winskel, Glynn

    1990-01-01

    The chief purpose of this paper is to appraise the feasibility of Girard's linear logic as a specification language for parallel processes. To this end we propose an interpretation of linear logic in Petri nets, with respect to which we investigate the expressive power of the logic...

  10. Model-based schedulability analysis of safety critical hard real-time Java programs

    Bøgholm, Thomas; Kragh-Hansen, Henrik; Olsen, Petur

    2008-01-01

    verifiable by the Uppaal model checker [23]. Schedulability analysis is reduced to a simple reachability question, checking for deadlock freedom. Model-based schedulability analysis has been developed by Amnell et al. [2], but has so far only been applied to high level specifications, not actual...

  11. Development of an irrigation scheduling software based on model predicted crop water stress

    Modern irrigation scheduling methods are generally based on sensor-monitored soil moisture regimes rather than crop water stress which is difficult to measure in real-time, but can be computed using agricultural system models. In this study, an irrigation scheduling software based on RZWQM2 model pr...

  12. A QUADTREE ORGANIZATION CONSTRUCTION AND SCHEDULING METHOD FOR URBAN 3D MODEL BASED ON WEIGHT

    C. Yao; G. Peng; Y. Song; M. Duan

    2017-01-01

    The increase in urban 3D model precision and data quantity places higher requirements on real-time rendering of digital city models. Improving the organization, management and scheduling of 3D model data in a 3D digital city can improve rendering effect and efficiency. Taking the complexity of urban models into account, this paper proposes a quadtree construction and scheduling rendering method for urban 3D models based on weight. The urban 3D model is divided into different rendering weigh...

  13. Research on information models for the construction schedule management based on the IFC standard

    Weirui Xue

    2015-05-01

    Purpose: The purpose of this article is to study the description and extension of the Industry Foundation Classes (IFC) standard in construction schedule management, which achieves information exchange and sharing among the different information systems and stakeholders, and facilitates collaborative construction in construction projects. Design/methodology/approach: Schedule information processing and coordination are difficult in a complex construction project. Building Information Modeling (BIM) provides the platform for exchanging and sharing information among information systems and stakeholders based on the IFC standard. Through analyzing the schedule plan, implementation, check and control, the information flow in schedule management is represented using IDEF. According to IFC4, the information model for schedule management is established, which includes not only each aspect of schedule management, but also cost management, resource management, quality management and risk management. Findings: The information requirements for construction schedule management can be summarized into three aspects: schedule plan information, implementation information, and check and control information. The three aspects can be described through the existing and extended entities of IFC4, and the information models are established. Originality/value: The main contribution of the article is to establish the construction schedule management information model, which achieves information exchange and sharing in the construction project, and facilitates the development of application software to meet the requirements of the construction project.

  14. Linear approximation model network and its formation via ...

    To overcome the deficiency of `local model network' (LMN) techniques, an alternative `linear approximation model' (LAM) network approach is proposed. Such a network models a nonlinear or practical system with multiple linear models fitted along operating trajectories, where individual models are simply networked ...

  15. A new mathematical programming model for long-term production scheduling considering geological uncertainty

    Gholamnejad, J.; Moosavi, E.

    2012-01-01

    Determination of the optimum production schedules over the life of a mine is a critical mechanism in open pit mine planning procedures. Long-term production scheduling is used to maximize the net present value of the project under technical, financial, and environmental constraints. Mathematical programming models are well suited for optimizing long-term production schedules of open pit mines. There are two approaches to solving long-term production problems: deterministic- and uncertainty- b...

  16. Tramp Ship Routing and Scheduling - Models, Methods and Opportunities

    Vilhelmsen, Charlotte; Larsen, Jesper; Lusby, Richard Martin

    of their demand in advance. However, the detailed requirements of these contract cargoes can be subject to ongoing changes, e.g. the destination port can be altered. For tramp operators, a main concern is therefore the efficient and continuous planning of routes and schedules for the individual ships. Due...... and scheduling problem, focus should now be on extending this basic problem to include additional real-world complexities and develop suitable solution methods for those extensions. Such extensions will enable more tramp operators to benefit from the solution methods while simultaneously creating new...

  17. Linear regression crash prediction models : issues and proposed solutions.

    2010-05-01

    The paper develops a linear regression model approach that can be applied to crash data to predict vehicle crashes. The proposed approach involves novel data aggregation to satisfy linear regression assumptions; namely error structure normality ...

  18. Game Theory and its Relationship with Linear Programming Models ...

    Game Theory and its Relationship with Linear Programming Models. ... This paper shows that game theory and the linear programming problem are closely related subjects, since any computing method devised for ...
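
    The classical link alluded to here is that a finite two-player zero-sum game can be solved as a linear program. A small sketch using SciPy's linprog, with an arbitrary payoff matrix, is given below.

```python
# Sketch: solving a two-player zero-sum matrix game as a linear program,
# illustrating the game theory / LP connection (illustrative payoffs).
import numpy as np
from scipy.optimize import linprog

A = np.array([[1, -1,  2],
              [3,  0, -2]])          # row player's payoff matrix

m, n = A.shape
# Variables: mixed strategy p (m entries) and the game value v.
# Maximise v  s.t.  A^T p >= v (componentwise),  sum(p) = 1,  p >= 0.
c = np.zeros(m + 1)
c[-1] = -1.0                                      # linprog minimises, so minimise -v
A_ub = np.hstack([-A.T, np.ones((n, 1))])         # v - (A^T p)_j <= 0 for each column j
b_ub = np.zeros(n)
A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
b_eq = np.array([1.0])
bounds = [(0, None)] * m + [(None, None)]         # p >= 0, v free

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
p, v = res.x[:m], res.x[-1]
print("optimal row strategy:", np.round(p, 3), " game value:", round(v, 3))
```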

  19. Refinery scheduling

    Magalhaes, Marcus V.; Fraga, Eder T. [PETROBRAS, Rio de Janeiro, RJ (Brazil); Shah, Nilay [Imperial College, London (United Kingdom)

    2004-07-01

    This work addresses the refinery scheduling problem using mathematical programming techniques. The solution adopted was to decompose the entire refinery model into a crude oil scheduling and a product scheduling problem. The envelope for the crude oil scheduling problem is composed of a terminal, a pipeline and the crude area of a refinery, including the crude distillation units. The solution method adopted includes a decomposition technique based on the topology of the system. The envelope for the product scheduling comprises all tanks, process units and products found in a refinery. Once crude scheduling decisions are available, the product scheduling is solved using a rolling horizon algorithm. All models were tested with real data from PETROBRAS' REFAP refinery, located in Canoas, Southern Brazil. (author)

  20. A Framework for Uplink Intercell Interference Modeling with Channel-Based Scheduling

    Tabassum, Hina; Yilmaz, Ferkan; Dawy, Zaher; Alouini, Mohamed-Slim

    2012-01-01

    This paper presents a novel framework for modeling the uplink intercell interference(ICI) in a multiuser cellular network. The proposed framework assists in quantifying the impact of various fading channel models and state-of-the-art scheduling

  1. A Note on the Identifiability of Generalized Linear Mixed Models

    Labouriau, Rodrigo

    2014-01-01

    I present here a simple proof that, under general regularity conditions, the standard parametrization of generalized linear mixed model is identifiable. The proof is based on the assumptions of generalized linear mixed models on the first and second order moments and some general mild regularity...... conditions, and, therefore, is extensible to quasi-likelihood based generalized linear models. In particular, binomial and Poisson mixed models with dispersion parameter are identifiable when equipped with the standard parametrization...

  2. The development of stochastic process modeling through risk analysis derived from scheduling of NPP project

    Lee, Kwang Ho; Roh, Myung Sub

    2013-01-01

    There are so many different factors to consider when constructing a nuclear power plant successfully, from planning to decommissioning. According to the PMBOK, all projects have nine domains from a holistic project management perspective. They are equally important to all projects; however, this study focuses mostly on the processes required to manage timely completion of the project and to conduct risk management. The overall objective of this study is to explain what risk analysis derived from the scheduling of an NPP project is, and to show how to implement stochastic process modeling through risk management. Building a nuclear power plant requires a great deal of time and fundamental knowledge across all engineering disciplines. That means that integrated project scheduling management with so many activities is necessary and very important. Simulation techniques for scheduling of an NPP project using the Open Plan, Crystal Ball, and Minitab programs can be useful tools for designing optimal schedule planning. Thus far, the Open Plan and Monte Carlo programs have been used to calculate the critical path for scheduling network analysis. In addition, the Minitab program has been applied to monitor the scheduling risk. This approach to stochastic modeling through risk analysis of project activities is very useful for optimizing the schedules of activities using the Critical Path Method and managing the scheduling control of the NPP project. This study has shown a new approach to optimal scheduling of an NPP project; however, it does not consider the characteristics of activities according to the NPP site conditions. Hence, further research considering those factors is needed

  3. The development of stochastic process modeling through risk analysis derived from scheduling of NPP project

    Lee, Kwang Ho; Roh, Myung Sub [KEPCO International Nuclear Graduate School, Ulsan (Korea, Republic of)

    2013-10-15

    There are so many different factors to consider when constructing a nuclear power plant successfully, from planning to decommissioning. According to the PMBOK, all projects have nine domains from a holistic project management perspective. They are equally important to all projects; however, this study focuses mostly on the processes required to manage timely completion of the project and to conduct risk management. The overall objective of this study is to explain what risk analysis derived from the scheduling of an NPP project is, and to show how to implement stochastic process modeling through risk management. Building a nuclear power plant requires a great deal of time and fundamental knowledge across all engineering disciplines. That means that integrated project scheduling management with so many activities is necessary and very important. Simulation techniques for scheduling of an NPP project using the Open Plan, Crystal Ball, and Minitab programs can be useful tools for designing optimal schedule planning. Thus far, the Open Plan and Monte Carlo programs have been used to calculate the critical path for scheduling network analysis. In addition, the Minitab program has been applied to monitor the scheduling risk. This approach to stochastic modeling through risk analysis of project activities is very useful for optimizing the schedules of activities using the Critical Path Method and managing the scheduling control of the NPP project. This study has shown a new approach to optimal scheduling of an NPP project; however, it does not consider the characteristics of activities according to the NPP site conditions. Hence, further research considering those factors is needed.

  4. Taking the lag out of jet lag through model-based schedule design.

    Dean, Dennis A; Forger, Daniel B; Klerman, Elizabeth B

    2009-06-01

    Travel across multiple time zones results in desynchronization of environmental time cues and the sleep-wake schedule from their normal phase relationships with the endogenous circadian system. Circadian misalignment can result in poor neurobehavioral performance, decreased sleep efficiency, and inappropriately timed physiological signals including gastrointestinal activity and hormone release. Frequent and repeated transmeridian travel is associated with long-term cognitive deficits, and rodents experimentally exposed to repeated schedule shifts have increased death rates. One approach to reduce the short-term circadian, sleep-wake, and performance problems is to use mathematical models of the circadian pacemaker to design countermeasures that rapidly shift the circadian pacemaker to align with the new schedule. In this paper, the use of mathematical models to design sleep-wake and countermeasure schedules for improved performance is demonstrated. We present an approach to designing interventions that combines an algorithm for optimal placement of countermeasures with a novel mode of schedule representation. With these methods, rapid circadian resynchrony and the resulting improvement in neurobehavioral performance can be quickly achieved even after moderate to large shifts in the sleep-wake schedule. The key schedule design inputs are endogenous circadian period length, desired sleep-wake schedule, length of intervention, background light level, and countermeasure strength. The new schedule representation facilitates schedule design, simulation studies, and experiment design and significantly decreases the amount of time to design an appropriate intervention. The method presented in this paper has direct implications for designing jet lag, shift-work, and non-24-hour schedules, including scheduling for extreme environments, such as in space, undersea, or in polar regions.

  5. Taking the lag out of jet lag through model-based schedule design.

    Dennis A Dean

    2009-06-01

    Full Text Available Travel across multiple time zones results in desynchronization of environmental time cues and the sleep-wake schedule from their normal phase relationships with the endogenous circadian system. Circadian misalignment can result in poor neurobehavioral performance, decreased sleep efficiency, and inappropriately timed physiological signals including gastrointestinal activity and hormone release. Frequent and repeated transmeridian travel is associated with long-term cognitive deficits, and rodents experimentally exposed to repeated schedule shifts have increased death rates. One approach to reduce the short-term circadian, sleep-wake, and performance problems is to use mathematical models of the circadian pacemaker to design countermeasures that rapidly shift the circadian pacemaker to align with the new schedule. In this paper, the use of mathematical models to design sleep-wake and countermeasure schedules for improved performance is demonstrated. We present an approach to designing interventions that combines an algorithm for optimal placement of countermeasures with a novel mode of schedule representation. With these methods, rapid circadian resynchrony and the resulting improvement in neurobehavioral performance can be quickly achieved even after moderate to large shifts in the sleep-wake schedule. The key schedule design inputs are endogenous circadian period length, desired sleep-wake schedule, length of intervention, background light level, and countermeasure strength. The new schedule representation facilitates schedule design, simulation studies, and experiment design and significantly decreases the amount of time to design an appropriate intervention. The method presented in this paper has direct implications for designing jet lag, shift-work, and non-24-hour schedules, including scheduling for extreme environments, such as in space, undersea, or in polar regions.

  6. Application of 3D model in the schedule management of nuclear power plant construction

    Nian Fayang

    2009-01-01

    While 3D technology has been widely used in engineering design, the 3D model produced in engineering design also includes information that can be used for construction. Through a visual interface, the 3D model can be used in different aspects of construction. By linking the 3D model with the construction schedule, a 4D model can be created, through which visual management of the construction schedule can be achieved. (authors)

  7. Modeling a content-aware LTE MAC downlink scheduler with heterogeneous traffic

    Artuso, Matteo; Christiansen, Henrik Lehrmann

    2013-01-01

    The scheduling policy adopted in the LTE (Long Term Evolution) MAC layer is the most valuable degree of freedom left from the 3GPP (3rd Generation Partnership Project) consortium to the industry and the research community . This paper presents an OPNET model of the downlink scheduling in a one...

  8. Linear control theory for gene network modeling.

    Shin, Yong-Jun; Bleris, Leonidas

    2010-09-16

    Systems biology is an interdisciplinary field that aims at understanding complex interactions in cells. Here we demonstrate that linear control theory can provide valuable insight and practical tools for the characterization of complex biological networks. We provide the foundation for such analyses through the study of several case studies including cascade and parallel forms, feedback and feedforward loops. We reproduce experimental results and provide rational analysis of the observed behavior. We demonstrate that methods such as the transfer function (frequency domain) and linear state-space (time domain) can be used to predict reliably the properties and transient behavior of complex network topologies and point to specific design strategies for synthetic networks.
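
    A toy example of the frequency-domain and time-domain tools mentioned above: two first-order stages in series (a rough stand-in for a transcription-translation cascade) analysed with scipy.signal. The time constants are arbitrary and the model is not taken from the paper's case studies.

```python
# Sketch: treating a two-step gene expression cascade as two first-order
# transfer functions in series and inspecting the step response.
import numpy as np
from scipy import signal

tau1, tau2 = 2.0, 5.0                       # assumed mRNA and protein lifetimes (arbitrary units)
# Series connection: G(s) = 1 / ((tau1*s + 1)(tau2*s + 1))
num = [1.0]
den = np.polymul([tau1, 1.0], [tau2, 1.0])
cascade = signal.TransferFunction(num, den)

t, y = signal.step(cascade, T=np.linspace(0, 40, 400))
print("output reaches %.0f%% of steady state by t = 40" % (100 * y[-1]))

# Equivalent state-space (time-domain) form of the same system:
ss = cascade.to_ss()
print(ss.A)
```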

  9. LINEAR MODEL FOR NON ISOSCELES ABSORBERS.

    BERG,J.S.

    2003-05-12

    Previous analyses have assumed that wedge absorbers are triangularly shaped with equal angles for the two faces. In this case, to linear order, the energy loss depends only on the position in the direction of the face tilt, and is independent of the incoming angle. One can instead construct an absorber with entrance and exit faces facing rather general directions. In this case, the energy loss can depend on both the position and the angle of the particle in question. This paper demonstrates that and computes the effect to linear order.

  10. Equivalent linear damping characterization in linear and nonlinear force-stiffness muscle models.

    Ovesy, Marzieh; Nazari, Mohammad Ali; Mahdavian, Mohammad

    2016-02-01

    In the current research, the muscle equivalent linear damping coefficient which is introduced as the force-velocity relation in a muscle model and the corresponding time constant are investigated. In order to reach this goal, a 1D skeletal muscle model was used. Two characterizations of this model using a linear force-stiffness relationship (Hill-type model) and a nonlinear one have been implemented. The OpenSim platform was used for verification of the model. The isometric activation has been used for the simulation. The equivalent linear damping and the time constant of each model were extracted by using the results obtained from the simulation. The results provide a better insight into the characteristics of each model. It is found that the nonlinear models had a response rate closer to the reality compared to the Hill-type models.

  11. An online re-linearization scheme suited for Model Predictive and Linear Quadratic Control

    Henriksen, Lars Christian; Poulsen, Niels Kjølstad

    This technical note documents the equations for a primal-dual interior-point quadratic programming solver used for MPC. The algorithm exploits the special structure of the MPC problem and reduces the computational burden such that it scales with prediction...... horizon length in a linear way rather than a cubic one, which would be the case if the structure were not exploited. It is also shown how models used for the design of model-based controllers, e.g. linear quadratic and model predictive, can be linearized both at equilibrium and non-equilibrium points, making...

  12. Tried and True: Springing into Linear Models

    Darling, Gerald

    2012-01-01

    In eighth grade, students usually learn about forces in science class and linear relationships in math class, crucial topics that form the foundation for further study in science and engineering. An activity that links these two fundamental concepts involves measuring the distance a spring stretches as a function of how much weight is suspended…
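
    The underlying analysis of such an activity reduces to fitting a straight line to stretch-versus-weight measurements; a minimal sketch with made-up data follows.

```python
# Sketch of the spring activity's analysis: fit stretch distance as a
# linear function of suspended weight (made-up measurements).
import numpy as np

weight_g = np.array([0, 50, 100, 150, 200, 250])        # suspended mass [g]
stretch_cm = np.array([0.0, 1.1, 2.0, 3.2, 4.1, 5.0])   # measured stretch [cm]

slope, intercept = np.polyfit(weight_g, stretch_cm, deg=1)
print(f"stretch ≈ {slope:.4f} cm/g * weight + {intercept:.2f} cm")
```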

  13. Model Predictive Control for Linear Complementarity and Extended Linear Complementarity Systems

    Bambang Riyanto

    2005-11-01

    In this paper, we propose a model predictive control method for linear complementarity and extended linear complementarity systems by formulating the optimization along the prediction horizon as a mixed integer quadratic program. Such systems contain interaction between continuous dynamics and discrete event systems and can therefore be categorized as hybrid systems. As linear complementarity and extended linear complementarity systems find applications in different research areas, such as impact mechanical systems, traffic control and process control, this work also contributes to the development of control design methods for those areas, as shown by three given examples.

  14. Ordinal Log-Linear Models for Contingency Tables

    Brzezińska Justyna

    2016-12-01

    A log-linear analysis is a method providing a comprehensive scheme to describe the association between categorical variables in a contingency table. The log-linear model specifies how the expected cell counts depend on the levels of the categorical variables and provides detailed information on the associations. The aim of this paper is to present theoretical, as well as empirical, aspects of ordinal log-linear models used for contingency tables with ordinal variables. We introduce log-linear models for ordinal variables: the linear-by-linear association, row effect, column effect and Goodman's RC models. The algorithms, advantages and disadvantages are discussed in the paper. An empirical analysis is conducted with the use of R.
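
    Although the paper works in R, the linear-by-linear association model it introduces can be sketched equivalently as a Poisson log-linear model with a score-product term; the contingency table below and the equally spaced ordinal scores are illustrative assumptions.

```python
# Sketch of a linear-by-linear association model for an ordinal two-way
# table, fitted as a Poisson log-linear model (illustrative counts).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rows, cols = 3, 4
counts = np.array([[20, 15, 10,  5],
                   [12, 18, 14,  9],
                   [ 6, 10, 16, 22]])

df = pd.DataFrame([(i, j, counts[i, j]) for i in range(rows) for j in range(cols)],
                  columns=["r", "c", "count"])
df["rs"], df["cs"] = df["r"], df["c"]          # equally spaced ordinal scores

# Independence model plus one association parameter for the score product.
fit = smf.glm("count ~ C(r) + C(c) + rs:cs", data=df,
              family=sm.families.Poisson()).fit()
print(fit.params["rs:cs"], fit.aic)            # linear-by-linear association term
```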

  15. Recent Updates to the GEOS-5 Linear Model

    Holdaway, Dan; Kim, Jong G.; Errico, Ron; Gelaro, Ronald; Mahajan, Rahul

    2014-01-01

    The Global Modeling and Assimilation Office (GMAO) is close to having a working 4DVAR system and has developed a linearized version of GEOS-5. This talk outlines a series of improvements made to the linearized dynamics, physics and trajectory. Of particular interest is the development of linearized cloud microphysics, which provides the framework for 'all-sky' data assimilation.

  16. Simulation of less master production schedule nervousness model

    Herrera , Carlos; Thomas , André

    2009-01-01

    In production decision-making systems, the Master Production Schedule (MPS) states the requirements for individual end items by date and quantity. The solution's sensitivity to demand forecast changes and to unforeseen supplier and production problem occurrences is known as nervousness. This feature causes undesirable effects at tactical and operational levels. Some of these effects are production and inventory cost increases and, also, negative impacts on overall and labor produc...

  17. Military Free Fall Scheduling And Manifest Optimization Model

    2016-12-01

    zone. As interest in qualifying more personnel increased, the course expanded. By the mid-1990s, Reyes explains, a new location was required to better... Since 2005, the Chilean Professional Soccer Association has used operations research techniques to schedule professional leagues in Chile. These... a new parachute the students are using, the RA-1. The RA-1 parachute has a longer glide ratio, which means the rate of descent is slower than with

  18. On using the linear-quadratic model in daily clinical practice

    Yaes, R.J.; Patel, P.; Maruyama, Y.

    1991-01-01

    To facilitate its use in the clinic, Barendsen's formulation of the Linear-Quadratic (LQ) model is modified by expressing isoeffect doses in terms of the Standard Effective Dose, Ds, the isoeffective dose for the standard fractionation schedule of 2 Gy fractions given once per day, 5 days per week. For any arbitrary fractionation schedule, where total dose D is given in N fractions of size d in a total time T, the corresponding Standard Effective Dose, Ds, will be proportional to the total dose D and the proportionality constant will be called the Standard Relative Effectiveness, SRE, to distinguish it from Barendsen's Relative Effectiveness, RE. Thus, Ds = SRE × D. The constant SRE depends on the parameters of the fractionation schedule, and on the tumor or normal tissue being irradiated. For the simple LQ model with no time dependence, which is applicable to late-reacting tissue, SRE = [(d + delta)/(2 + delta)], where d is the fraction size and delta = alpha/beta is the alpha/beta ratio for the tissue of interest, with both d and delta expressed in units of Gy. Application of this method to the Linear Quadratic model with a time dependence, the LQ + time model, and to low dose rate brachytherapy will be discussed. To clarify the method of calculation, and to demonstrate its simplicity, examples from the clinical literature will be used
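
    As a quick worked illustration of the SRE formula quoted above (a minimal sketch with illustrative parameter values, not clinical guidance):

        def sre(d, alpha_beta):
            # Standard Relative Effectiveness for fraction size d (Gy), simple LQ
            # model with no time factor; alpha_beta is the alpha/beta ratio in Gy.
            return (d + alpha_beta) / (2.0 + alpha_beta)

        def standard_effective_dose(n_fractions, d, alpha_beta):
            # Ds = SRE * D, with total dose D = n_fractions * d
            return sre(d, alpha_beta) * n_fractions * d

        # Example: 10 fractions of 3 Gy, late-reacting tissue with alpha/beta = 3 Gy:
        # SRE = (3 + 3) / (2 + 3) = 1.2, so Ds = 1.2 * 30 Gy = 36 Gy
        print(standard_effective_dose(10, 3.0, 3.0))  # -> 36.0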

  19. Double generalized linear compound poisson models to insurance claims data

    Andersen, Daniel Arnfeldt; Bonat, Wagner Hugo

    2017-01-01

    This paper describes the specification, estimation and comparison of double generalized linear compound Poisson models based on the likelihood paradigm. The models are motivated by insurance applications, where the distribution of the response variable is composed of a degenerate distribution...... implementation and illustrate the application of double generalized linear compound Poisson models using a data set about car insurance...

  20. Determining Predictor Importance in Hierarchical Linear Models Using Dominance Analysis

    Luo, Wen; Azen, Razia

    2013-01-01

    Dominance analysis (DA) is a method used to evaluate the relative importance of predictors that was originally proposed for linear regression models. This article proposes an extension of DA that allows researchers to determine the relative importance of predictors in hierarchical linear models (HLM). Commonly used measures of model adequacy in…

  1. Thurstonian models for sensory discrimination tests as generalized linear models

    Brockhoff, Per B.; Christensen, Rune Haubo Bojesen

    2010-01-01

    as a so-called generalized linear model. The underlying sensory difference δ becomes directly a parameter of the statistical model, and the estimate d' and its standard error become the "usual" output of the statistical analysis. The d' for the monadic A-NOT A method is shown to appear as a standard......Sensory discrimination tests such as the triangle, duo-trio, 2-AFC and 3-AFC tests produce binary data, and the Thurstonian decision rule links the underlying sensory difference δ to the observed number of correct responses. In this paper it is shown how each of these four situations can be viewed...

  2. Linear control theory for gene network modeling.

    Yong-Jun Shin

    Full Text Available Systems biology is an interdisciplinary field that aims at understanding complex interactions in cells. Here we demonstrate that linear control theory can provide valuable insight and practical tools for the characterization of complex biological networks. We provide the foundation for such analyses through the study of several case studies including cascade and parallel forms, feedback and feedforward loops. We reproduce experimental results and provide rational analysis of the observed behavior. We demonstrate that methods such as the transfer function (frequency domain and linear state-space (time domain can be used to predict reliably the properties and transient behavior of complex network topologies and point to specific design strategies for synthetic networks.

  3. Forecasting Volatility of Dhaka Stock Exchange: Linear Vs Non-linear models

    Masudul Islam

    2012-10-01

    Full Text Available Prior information about a financial market is essential for investors who purchase shares from the stock market, which can strengthen the economy. The study examines the relative ability of various models to forecast the future volatility of daily stock indexes. The forecasting models employed range from simple to relatively complex ARCH-class models. It is found that among linear models of stock index volatility, the moving average model ranks first using the root mean square error, mean absolute percent error, Theil-U and Linex loss function criteria. We also examine five nonlinear models: ARCH, GARCH, EGARCH, TGARCH and restricted GARCH. We find that the nonlinear models fail to dominate the linear models under the different error measurement criteria, and the moving average model appears to be the best. We then forecast the next two months of stock index price volatility with the best (moving average) model.

  4. Generalised linear models for correlated pseudo-observations, with applications to multi-state models

    Andersen, Per Kragh; Klein, John P.; Rosthøj, Susanne

    2003-01-01

    Generalised estimating equation; Generalised linear model; Jackknife pseudo-value; Logistic regression; Markov model; Multi-state model

  5. Linear and non-linear autoregressive models for short-term wind speed forecasting

    Lydia, M.; Suresh Kumar, S.; Immanuel Selvakumar, A.; Edwin Prem Kumar, G.

    2016-01-01

    Highlights: • Models for wind speed prediction at 10-min intervals up to 1 h built on time-series wind speed data. • Four different multivariate models for wind speed built based on exogenous variables. • Non-linear models built using three data mining algorithms outperform the linear models. • Autoregressive models based on wind direction perform better than other models. - Abstract: Wind speed forecasting aids in estimating the energy produced from wind farms. The soaring energy demands of the world and minimal availability of conventional energy sources have significantly increased the role of non-conventional sources of energy like solar, wind, etc. Development of models for wind speed forecasting with higher reliability and greater accuracy is the need of the hour. In this paper, models for predicting wind speed at 10-min intervals up to 1 h have been built based on linear and non-linear autoregressive moving average models with and without external variables. The autoregressive moving average models based on wind direction and annual trends have been built using data obtained from Sotavento Galicia Plc. and autoregressive moving average models based on wind direction, wind shear and temperature have been built on data obtained from Centre for Wind Energy Technology, Chennai, India. While the parameters of the linear models are obtained using the Gauss–Newton algorithm, the non-linear autoregressive models are developed using three different data mining algorithms. The accuracy of the models has been measured using three performance metrics namely, the Mean Absolute Error, Root Mean Squared Error and Mean Absolute Percentage Error.
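
    The three performance metrics named at the end of the abstract are straightforward to compute; a minimal sketch with made-up 10-min wind speed values (not data from either site):

        import numpy as np

        def forecast_errors(actual, predicted):
            # Return (MAE, RMSE, MAPE in %) for two equal-length series.
            actual = np.asarray(actual, dtype=float)
            predicted = np.asarray(predicted, dtype=float)
            err = actual - predicted
            mae = np.mean(np.abs(err))
            rmse = np.sqrt(np.mean(err ** 2))
            mape = 100.0 * np.mean(np.abs(err / actual))  # assumes no zero actuals
            return mae, rmse, mape

        # Illustrative wind speeds in m/s
        print(forecast_errors([5.1, 6.0, 4.8, 7.2], [5.4, 5.7, 5.0, 6.8]))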

  6. SURVEY - Multicriteria Models for Just-in-Time Scheduling

    T'Kindt , Vincent

    2010-01-01

    Just-in-Time manufacturing consists of organizing the production of elements in order to meet a certain number of objectives or requirements according to the so-called "Just-in-Time philosophy". Just-in-Time has been extensively studied in the literature for many years due to the high number of real-life situations where it can be applied. This paper aims at revisiting Just-in-Time principles and detailing how they can be applied to the scheduling stage of a manufacturi...

  7. Applicability of linear and non-linear potential flow models on a Wavestar float

    Bozonnet, Pauline; Dupin, Victor; Tona, Paolino

    2017-01-01

    as a model based on non-linear potential flow theory and the weak-scatterer hypothesis are successively considered. Simple tests, such as dip tests, decay tests and captive tests, make it possible to highlight the improvements obtained with the introduction of nonlinearities. Float motion under wave actions and without...... control action, limited to small amplitude motion with a single float, is well predicted by the numerical models, including the linear one. Still, float velocity is better predicted by accounting for non-linear hydrostatic and Froude-Krylov forces.......Numerical models based on potential flow theory, including different types of nonlinearities, are compared and validated against experimental data for the Wavestar wave energy converter technology. Exact resolution of the rotational motion, non-linear hydrostatic and Froude-Krylov forces as well...

  8. A linear model of population dynamics

    Lushnikov, A. A.; Kagan, A. I.

    2016-08-01

    The Malthus process of population growth is reformulated in terms of the probability w(n,t) to find exactly n individuals at time t assuming that both the birth and the death rates are linear functions of the population size. The master equation for w(n,t) is solved exactly. It is shown that w(n,t) strongly deviates from the Poisson distribution and is expressed in terms either of Laguerre’s polynomials or a modified Bessel function. The latter expression allows for considerable simplifications of the asymptotic analysis of w(n,t).

  9. A test for the parameters of multiple linear regression models ...

    A test for the parameters of multiple linear regression models is developed for conducting tests simultaneously on all the parameters of multiple linear regression models. The test is robust relative to the assumptions of homogeneity of variances and absence of serial correlation of the classical F-test. Under certain null and ...

  10. Modeling Non-Linear Material Properties in Composite Materials

    2016-06-28

    Technical Report ARWSB-TR-16013, Modeling Non-Linear Material Properties in Composite Materials, Michael F. Macri, Andrew G... Systems are increasingly incorporating composite materials into their design. Many of these systems subject the composites to environmental conditions

  11. Multivariate statistical modelling based on generalized linear models

    Fahrmeir, Ludwig

    1994-01-01

    This book is concerned with the use of generalized linear models for univariate and multivariate regression analysis. Its emphasis is to provide a detailed introductory survey of the subject based on the analysis of real data drawn from a variety of subjects including the biological sciences, economics, and the social sciences. Where possible, technical details and proofs are deferred to an appendix in order to provide an accessible account for non-experts. Topics covered include: models for multi-categorical responses, model checking, time series and longitudinal data, random effects models, and state-space models. Throughout, the authors have taken great pains to discuss the underlying theoretical ideas in ways that relate well to the data at hand. As a result, numerous researchers whose work relies on the use of these models will find this an invaluable account to have on their desks. "The basic aim of the authors is to bring together and review a large part of recent advances in statistical modelling of m...

  12. Approximating chiral quark models with linear σ-models

    Broniowski, Wojciech; Golli, Bojan

    2003-01-01

    We study the approximation of chiral quark models with simpler models, obtained via gradient expansion. The resulting Lagrangian of the type of the linear σ-model contains, at the lowest level of the gradient-expanded meson action, an additional term of the form (1/2)A(σ∂_μσ + π∂_μπ)². We investigate the dynamical consequences of this term and its relevance to the phenomenology of the soliton models of the nucleon. It is found that the inclusion of the new term allows for a more efficient approximation of the underlying quark theory, especially in those cases where dynamics allows for a large deviation of the chiral fields from the chiral circle, such as in quark models with non-local regulators. This is of practical importance, since the σ-models with valence quarks only are technically much easier to treat and simpler to solve than the quark models with the full-fledged Dirac sea

  13. a Quadtree Organization Construction and Scheduling Method for Urban 3d Model Based on Weight

    Yao, C.; Peng, G.; Song, Y.; Duan, M.

    2017-09-01

    The increase in urban 3D model precision and data quantity places higher requirements on the real-time rendering of digital city models. Improving the organization, management and scheduling of 3D model data in a 3D digital city can improve rendering quality and efficiency. Taking the complexity of urban models into account, this paper proposes a weight-based quadtree construction and rendering-scheduling method for urban 3D models. Urban 3D models are divided into different rendering weights according to certain rules, and quadtree construction and rendering scheduling are performed according to these weights. An algorithm that extracts bounding boxes from model drawing primitives is also proposed to generate LOD models automatically. A 3D urban planning and management software package was developed using the proposed algorithm; in practice, the algorithm has proved efficient and feasible, with the render frame rate of both large and small scenes stable at around 25 frames per second.

  14. A QUADTREE ORGANIZATION CONSTRUCTION AND SCHEDULING METHOD FOR URBAN 3D MODEL BASED ON WEIGHT

    C. Yao

    2017-09-01

    Full Text Available The increase in urban 3D model precision and data quantity places higher requirements on the real-time rendering of digital city models. Improving the organization, management and scheduling of 3D model data in a 3D digital city can improve rendering quality and efficiency. Taking the complexity of urban models into account, this paper proposes a weight-based quadtree construction and rendering-scheduling method for urban 3D models. Urban 3D models are divided into different rendering weights according to certain rules, and quadtree construction and rendering scheduling are performed according to these weights. An algorithm that extracts bounding boxes from model drawing primitives is also proposed to generate LOD models automatically. A 3D urban planning and management software package was developed using the proposed algorithm; in practice, the algorithm has proved efficient and feasible, with the render frame rate of both large and small scenes stable at around 25 frames per second.
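
    A minimal sketch of the weight-driven part of such a scheme, assuming (hypothetically) that each model carries an axis-aligned bounding box and an integer rendering weight; the paper's LOD generation and exact weighting rules are not reproduced:

        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class Node:
            bounds: tuple                      # (xmin, ymin, xmax, ymax)
            depth: int = 0
            models: List[dict] = field(default_factory=list)
            children: List["Node"] = field(default_factory=list)

        def quadrants(bounds):
            xmin, ymin, xmax, ymax = bounds
            xm, ym = (xmin + xmax) / 2.0, (ymin + ymax) / 2.0
            return [(xmin, ymin, xm, ym), (xm, ymin, xmax, ym),
                    (xmin, ym, xm, ymax), (xm, ym, xmax, ymax)]

        def contains(bounds, bbox):
            return (bounds[0] <= bbox[0] and bounds[1] <= bbox[1] and
                    bbox[2] <= bounds[2] and bbox[3] <= bounds[3])

        def insert(node, model, max_depth=6):
            # Push a model down to the deepest quadrant that fully contains its bbox.
            if node.depth < max_depth:
                for i, q in enumerate(quadrants(node.bounds)):
                    if contains(q, model["bbox"]):
                        if not node.children:
                            node.children = [Node(b, node.depth + 1)
                                             for b in quadrants(node.bounds)]
                        insert(node.children[i], model, max_depth)
                        return
            node.models.append(model)          # straddles a boundary or max depth

        def render_order(node, is_visible):
            # Collect models of visible nodes; render the heaviest weights first.
            if not is_visible(node.bounds):
                return []
            found = list(node.models)
            for child in node.children:
                found += render_order(child, is_visible)
            return sorted(found, key=lambda m: m["weight"], reverse=True)

    A per-frame scheduler would then walk the visible nodes, draw models in decreasing weight order, and drop the tail of the list once the frame budget is exhausted.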

  15. Latent log-linear models for handwritten digit classification.

    Deselaers, Thomas; Gass, Tobias; Heigold, Georg; Ney, Hermann

    2012-06-01

    We present latent log-linear models, an extension of log-linear models incorporating latent variables, and we propose two applications thereof: log-linear mixture models and image deformation-aware log-linear models. The resulting models are fully discriminative, can be trained efficiently, and the model complexity can be controlled. Log-linear mixture models offer additional flexibility within the log-linear modeling framework. Unlike previous approaches, the image deformation-aware model directly considers image deformations and allows for a discriminative training of the deformation parameters. Both are trained using alternating optimization. For certain variants, convergence to a stationary point is guaranteed and, in practice, even variants without this guarantee converge and find models that perform well. We tune the methods on the USPS data set and evaluate on the MNIST data set, demonstrating the generalization capabilities of our proposed models. Our models, although using significantly fewer parameters, are able to obtain competitive results with models proposed in the literature.

  16. SMI Compatible Simulation Scheduler Design for Reuse of Model Complying with SMP Standard

    Cheol-Hea Koo

    2010-12-01

    Full Text Available Software reusability is one of the key factors that impact cost and schedule on a software development project. It is also crucial in satellite simulator development, since there are many commercial simulator models related to satellites and dynamics. If these models can be used on another simulator platform, a great deal of confidence and cost/schedule reduction can be achieved. Simulation model portability (SMP) is maintained by the European Space Agency, and many models compatible with the SMP/simulation model interface (SMI) are available. The Korea Aerospace Research Institute (KARI) is developing a hardware abstraction layer (HAL) supported satellite simulator to verify on-board satellite software. For these reasons, KARI wants to port SMI compatible models to the HAL supported satellite simulator, and a simulation scheduler is preliminarily designed according to the SMI standard for this purpose.

  17. Constraint optimization model of a scheduling problem for a robotic arm in automatic systems

    Kristiansen, Ewa; Smith, Stephen F.; Kristiansen, Morten

    2014-01-01

    are characteristics of the painting process application itself. Unlike spot-welding, painting tasks require movement of the entire robot arm. In addition to minimizing intertask duration, the scheduler must strive to maximize painting quality and the problem is formulated as a multi-objective optimization problem....... The scheduling model is implemented as a stand-alone module using constraint programming, and integrated with a larger automatic system. The results of a number of simulation experiments with simple parts are reported, both to characterize the functionality of the scheduler and to illustrate the operation...... of the entire software system for automatic generation of robot programs for painting....

  18. Linear Regression Models for Estimating True Subsurface ...


    The objective is to minimize the processing time and computer memory required to carry out inversion ... to the mainland by two long bridges ... In this approach, the model converges when the squared sum of the differences ...

  19. Numerical modelling in non linear fracture mechanics

    Viggo Tvergaard

    2007-07-01

    Full Text Available Some numerical studies of crack propagation are based on using constitutive models that account for damage evolution in the material. When a critical damage value has been reached in a material point, it is natural to assume that this point has no more carrying capacity, as is done numerically in the element vanish technique. In the present review this procedure is illustrated for micromechanically based material models, such as a ductile failure model that accounts for the nucleation and growth of voids to coalescence, and a model for intergranular creep failure with diffusive growth of grain boundary cavities leading to micro-crack formation. The procedure is also illustrated for low cycle fatigue, based on continuum damage mechanics. In addition, the possibility of crack growth predictions for elastic-plastic solids using cohesive zone models to represent the fracture process is discussed.

  20. Optimal Day-ahead Charging Scheduling of Electric Vehicles through an Aggregative Game Model

    Liu, Zhaoxi; Wu, Qiuwei; Huang, Shaojun

    2017-01-01

    The electric vehicle (EV) market has been growing rapidly around the world. With large scale deployment of EVs in power systems, both the grid and EV owners will benefit if the flexible demand of EV charging is properly managed through the electricity market. When EV charging demand is considerable...... in a grid, it will impact spot prices in the electricity market and consequently influence the charging scheduling itself. The interaction between the spot prices and the EV demand needs to be considered in the EV charging scheduling, otherwise it will lead to a higher charging cost. A day-ahead EV charging...... scheduling based on an aggregative game model is proposed in this paper. The impacts of the EV demand on the electricity prices are formulated with the game model in the scheduling considering possible actions of other EVs. The existence and uniqueness of the pure strategy Nash equilibrium are proved...

  1. A collaborative scheduling model for the supply-hub with multiple suppliers and multiple manufacturers.

    Li, Guo; Lv, Fei; Guan, Xu

    2014-01-01

    This paper investigates a collaborative scheduling model in an assembly system, wherein multiple suppliers have to deliver their components to multiple manufacturers under the operation of a Supply-Hub. We first develop two different scenarios to examine the impact of the Supply-Hub: in one, suppliers and manufacturers make their decisions separately, and in the other the Supply-Hub makes joint decisions with collaborative scheduling. The results show that the scheduling problem with the Supply-Hub is NP-complete; therefore, we propose an auto-adapted differential evolution algorithm to solve it. Moreover, we illustrate that the performance of collaborative scheduling by the Supply-Hub is superior to separate decisions made by each manufacturer and supplier. Furthermore, we also show that the proposed algorithm has good convergence and reliability and can be applied to more complicated supply chain environments.

  2. A Collaborative Scheduling Model for the Supply-Hub with Multiple Suppliers and Multiple Manufacturers

    Guo Li

    2014-01-01

    Full Text Available This paper investigates a collaborative scheduling model in an assembly system, wherein multiple suppliers have to deliver their components to multiple manufacturers under the operation of a Supply-Hub. We first develop two different scenarios to examine the impact of the Supply-Hub: in one, suppliers and manufacturers make their decisions separately, and in the other the Supply-Hub makes joint decisions with collaborative scheduling. The results show that the scheduling problem with the Supply-Hub is NP-complete; therefore, we propose an auto-adapted differential evolution algorithm to solve it. Moreover, we illustrate that the performance of collaborative scheduling by the Supply-Hub is superior to separate decisions made by each manufacturer and supplier. Furthermore, we also show that the proposed algorithm has good convergence and reliability and can be applied to more complicated supply chain environments.

  3. A Recovery Model for Production Scheduling: Combination of Disruption Management and Internet of Things

    Yang Jiang

    2016-01-01

    Full Text Available In production scheduling, it is difficult to generate a new schedule that effectively minimizes the negative impact when an unanticipated disruption occurs after a subset of tasks has been finished. In such cases, continuing with the original schedule may not be optimal or feasible. Based on disruption management and the Internet of Things (IoT), this study designs a real-time status analyzer to identify the disruption and proposes a recovery model to deal with it. The computational results show that our algorithm is competitive with existing heuristics. Furthermore, because of the trade-off between all participants (mainly customers, managers of the production enterprise, and workers involved in production scheduling), our model is more effective than total rescheduling and right-shift rescheduling.

  4. A Collaborative Scheduling Model for the Supply-Hub with Multiple Suppliers and Multiple Manufacturers

    Lv, Fei; Guan, Xu

    2014-01-01

    This paper investigates a collaborative scheduling model in an assembly system, wherein multiple suppliers have to deliver their components to multiple manufacturers under the operation of a Supply-Hub. We first develop two different scenarios to examine the impact of the Supply-Hub: in one, suppliers and manufacturers make their decisions separately, and in the other the Supply-Hub makes joint decisions with collaborative scheduling. The results show that the scheduling problem with the Supply-Hub is NP-complete; therefore, we propose an auto-adapted differential evolution algorithm to solve it. Moreover, we illustrate that the performance of collaborative scheduling by the Supply-Hub is superior to separate decisions made by each manufacturer and supplier. Furthermore, we also show that the proposed algorithm has good convergence and reliability and can be applied to more complicated supply chain environments. PMID:24892104

  5. TaskMaster: a prototype graphical user interface to a schedule optimization model

    Banham, Stephen R.

    1990-01-01

    Approved for public release, distribution is unlimited This thesis investigates the use of current graphical interface techniques to build more effective computer-user interfaces to Operations Research (OR) schedule optimization models. The design is directed at the scheduling decision maker who possesses limited OR experience. The feasibility and validity of building an interface for this kind of user is demonstrated in the development of a prototype graphical user interface called TaskMa...

  6. Random effect selection in generalised linear models

    Denwood, Matt; Houe, Hans; Forkman, Björn

    We analysed abattoir recordings of meat inspection codes with possible relevance to on-farm animal welfare in cattle. Random effects logistic regression models were used to describe individual-level data obtained from 461,406 cattle slaughtered in Denmark. Our results demonstrate that the largest...

  7. Model Order Reduction for Non Linear Mechanics

    Pinillo, Rubén

    2017-01-01

    Context: Automotive industry is moving towards a new generation of cars. Main idea: Cars are furnished with radars, cameras, sensors, etc… providing useful information about the environment surrounding the car. Goals: Provide an efficient model for the radar input/output. Reducing computational costs by means of big data techniques.

  8. Identification of Influential Points in a Linear Regression Model

    Jan Grosz

    2011-03-01

    Full Text Available The article deals with the detection and identification of influential points in the linear regression model. Three methods for detecting outliers and leverage points are described. These procedures can also be used for one-sample (independent) datasets. This paper briefly describes theoretical aspects of several robust methods as well. Robust statistics is a powerful tool to increase the reliability and accuracy of statistical modelling and data analysis. A simulation model of simple linear regression is presented.
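
    A minimal sketch of two of the classical diagnostics involved, hat-value leverage and internally studentized residuals, using illustrative data rather than the paper's simulation:

        import numpy as np

        def influence_measures(x, y):
            # Leverage (hat values) and internally studentized residuals for OLS.
            X = np.column_stack([np.ones(len(y)), np.asarray(x, dtype=float)])
            y = np.asarray(y, dtype=float)
            H = X @ np.linalg.inv(X.T @ X) @ X.T        # hat matrix
            resid = y - H @ y
            n, p = X.shape
            s2 = resid @ resid / (n - p)
            lev = np.diag(H)
            student = resid / np.sqrt(s2 * (1.0 - lev))
            return lev, student

        # Illustrative data; flag points with leverage > 2p/n or |t| > 2
        x = [1, 2, 3, 4, 20]                            # last x is a leverage candidate
        y = [1.1, 1.9, 3.2, 3.9, 8.0]
        lev, t = influence_measures(x, y)
        print(lev.round(2), t.round(2))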

  9. Heterotic sigma models and non-linear strings

    Hull, C.M.

    1986-01-01

    The two-dimensional supersymmetric non-linear sigma models are examined with respect to the heterotic string. The paper was presented at the workshop on 'Supersymmetry and its applications', Cambridge, United Kingdom, 1985. The non-linear sigma model with Wess-Zumino-type term, the coupling of the fermionic superfields to the sigma model, super-conformal invariance, and the supersymmetric string, are all discussed. (U.K.)

  10. Linear latent variable models: the lava-package

    Holst, Klaus Kähler; Budtz-Jørgensen, Esben

    2013-01-01

    are implemented including robust standard errors for clustered correlated data, multigroup analyses, non-linear parameter constraints, inference with incomplete data, maximum likelihood estimation with censored and binary observations, and instrumental variable estimators. In addition an extensive simulation......An R package for specifying and estimating linear latent variable models is presented. The philosophy of the implementation is to separate the model specification from the actual data, which leads to a dynamic and easy way of modeling complex hierarchical structures. Several advanced features...

  11. On-line control models for the Stanford Linear Collider

    Sheppard, J.C.; Helm, R.H.; Lee, M.J.; Woodley, M.D.

    1983-03-01

    Models for computer control of the SLAC three-kilometer linear accelerator and damping rings have been developed as part of the control system for the Stanford Linear Collider. Some of these models have been tested experimentally and implemented in the control program for routine linac operations. This paper will describe the development and implementation of these models, as well as some of the operational results

  12. Short-term generation scheduling model of Fujian hydro system

    Wang Jinwen [School of Hydropower and Information Engineering, Huazhong University of Science and Technology, Wuhan 430074 (China)], E-mail: dr.jinwen.wang@gmail.com

    2009-04-15

    The Fujian hydropower system (FHS) is one of the provincial hydropower systems with the most complicated hydraulic topology in China. This paper describes an optimization program that is required by Fujian Electric Power Company Ltd. (FEPCL) to aid the shift engineers in making short-term hydropower scheduling decisions such that the generation benefit is maximized. The problem involves 27 reservoirs and is formulated as a nonlinear and discrete programming problem. It is a very challenging task to solve such a large-scale problem. In this paper, the Lagrangian multipliers are introduced to decompose the primal problem into a hydro subproblem and many individual plant-based subproblems, which are solved by an improved simplex-like method (SLM) and dynamic programming (DP), respectively. A numerical example is given and the derived solution is very close to the optimal one, with the gap in benefit less than 0.004%. All the data needed for the numerical example are presented in detail for further tests and studies from more experts and researchers.

  13. Short-term generation scheduling model of Fujian hydro system

    Wang, Jinwen [School of Hydropower and Information Engineering, Huazhong University of Science and Technology, Wuhan 430074 (China)

    2009-04-15

    The Fujian hydropower system (FHS) is one of the provincial hydropower systems with the most complicated hydraulic topology in China. This paper describes an optimization program that is required by Fujian Electric Power Company Ltd. (FEPCL) to aid the shift engineers in making short-term hydropower scheduling decisions such that the generation benefit is maximized. The problem involves 27 reservoirs and is formulated as a nonlinear and discrete programming problem. It is a very challenging task to solve such a large-scale problem. In this paper, the Lagrangian multipliers are introduced to decompose the primal problem into a hydro subproblem and many individual plant-based subproblems, which are solved by an improved simplex-like method (SLM) and dynamic programming (DP), respectively. A numerical example is given and the derived solution is very close to the optimal one, with the gap in benefit less than 0.004%. All the data needed for the numerical example are presented in detail for further tests and studies from more experts and researchers. (author)

  14. Bayesian Subset Modeling for High-Dimensional Generalized Linear Models

    Liang, Faming

    2013-06-01

    This article presents a new prior setting for high-dimensional generalized linear models, which leads to a Bayesian subset regression (BSR) with the maximum a posteriori model approximately equivalent to the minimum extended Bayesian information criterion model. The consistency of the resulting posterior is established under mild conditions. Further, a variable screening procedure is proposed based on the marginal inclusion probability, which shares the same properties of sure screening and consistency with the existing sure independence screening (SIS) and iterative sure independence screening (ISIS) procedures. However, since the proposed procedure makes use of joint information from all predictors, it generally outperforms SIS and ISIS in real applications. This article also makes extensive comparisons of BSR with the popular penalized likelihood methods, including Lasso, elastic net, SIS, and ISIS. The numerical results indicate that BSR can generally outperform the penalized likelihood methods. The models selected by BSR tend to be sparser and, more importantly, of higher prediction ability. In addition, the performance of the penalized likelihood methods tends to deteriorate as the number of predictors increases, while this is not significant for BSR. Supplementary materials for this article are available online. © 2013 American Statistical Association.

  15. Generalized Linear Models with Applications in Engineering and the Sciences

    Myers, Raymond H; Vining, G Geoffrey; Robinson, Timothy J

    2012-01-01

    Praise for the First Edition "The obvious enthusiasm of Myers, Montgomery, and Vining and their reliance on their many examples as a major focus of their pedagogy make Generalized Linear Models a joy to read. Every statistician working in any area of applied science should buy it and experience the excitement of these new approaches to familiar activities."-Technometrics Generalized Linear Models: With Applications in Engineering and the Sciences, Second Edition continues to provide a clear introduction to the theoretical foundations and key applications of generalized linear models (GLMs). Ma

  16. Modelling a linear PM motor including magnetic saturation

    Polinder, H.; Slootweg, J.G.; Compter, J.C.; Hoeijmakers, M.J.

    2002-01-01

    The use of linear permanent-magnet (PM) actuators increases in a wide variety of applications because of the high force density, robustness and accuracy. The paper describes the modelling of a linear PM motor applied in, for example, wafer steppers, including magnetic saturation. This is important

  17. Application of the simplex method of linear programming model to ...

    This work discussed how the simplex method of linear programming could be used to maximize the profit of any business firm, using Saclux Paint Company as a case study. It also elucidated the effect that variation in the optimal result obtained from the linear programming model will have on any given firm. It was demonstrated ...
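
    For illustration, a small profit-maximization LP of the same flavour can be solved with an off-the-shelf simplex/interior-point solver; the products, coefficients and constraints below are hypothetical, not Saclux Paint Company's data:

        from scipy.optimize import linprog

        # Maximize 5*x1 + 4*x2 subject to 6*x1 + 4*x2 <= 24, x1 + 2*x2 <= 6, x >= 0.
        # linprog minimizes, so the objective is negated.
        res = linprog(c=[-5, -4],
                      A_ub=[[6, 4], [1, 2]],
                      b_ub=[24, 6],
                      bounds=[(0, None), (0, None)],
                      method="highs")
        print(res.x, -res.fun)   # optimal product mix and maximum profit (x1=3, x2=1.5, profit 21)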

  18. Estimating exponential scheduling preferences

    Hjorth, Katrine; Börjesson, Maria; Engelson, Leonid

    2015-01-01

    of car drivers' route and mode choice under uncertain travel times. Our analysis exposes some important methodological issues related to complex non-linear scheduling models: One issue is identifying the point in time where the marginal utility of being at the destination becomes larger than the marginal......Different assumptions about travelers' scheduling preferences yield different measures of the cost of travel time variability. Only few forms of scheduling preferences provide non-trivial measures which are additive over links in transport networks where link travel times are arbitrarily...... utility of being at the origin. Another issue is that models with the exponential marginal utility formulation suffer from empirical identification problems. Though our results are not decisive, they partly support the constant-affine specification, in which the value of travel time variability...

  19. Genetic parameters for racing records in trotters using linear and generalized linear models.

    Suontama, M; van der Werf, J H J; Juga, J; Ojala, M

    2012-09-01

    Heritability and repeatability and genetic and phenotypic correlations were estimated for trotting race records with linear and generalized linear models using 510,519 records on 17,792 Finnhorses and 513,161 records on 25,536 Standardbred trotters. Heritability and repeatability were estimated for single racing time and earnings traits with linear models, and logarithmic scale was used for racing time and fourth-root scale for earnings to correct for nonnormality. Generalized linear models with a gamma distribution were applied for single racing time and with a multinomial distribution for single earnings traits. In addition, genetic parameters for annual earnings were estimated with linear models on the observed and fourth-root scales. Racing success traits of single placings, winnings, breaking stride, and disqualifications were analyzed using generalized linear models with a binomial distribution. Estimates of heritability were greatest for racing time, which ranged from 0.32 to 0.34. Estimates of heritability were low for single earnings with all distributions, ranging from 0.01 to 0.09. Annual earnings were closer to normal distribution than single earnings. Heritability estimates were moderate for annual earnings on the fourth-root scale, 0.19 for Finnhorses and 0.27 for Standardbred trotters. Heritability estimates for binomial racing success variables ranged from 0.04 to 0.12, being greatest for winnings and least for breaking stride. Genetic correlations among racing traits were high, whereas phenotypic correlations were mainly low to moderate, except correlations between racing time and earnings were high. On the basis of a moderate heritability and moderate to high repeatability for racing time and annual earnings, selection of horses for these traits is effective when based on a few repeated records. Because of high genetic correlations, direct selection for racing time and annual earnings would also result in good genetic response in racing success.

  20. CQPSO scheduling algorithm for heterogeneous multi-core DAG task model

    Zhai, Wenzheng; Hu, Yue-Li; Ran, Feng

    2017-07-01

    Efficient task scheduling is critical to achieving high performance in a heterogeneous multi-core computing environment. The paper focuses on the heterogeneous multi-core directed acyclic graph (DAG) task model and proposes a novel task scheduling method based on an improved chaotic quantum-behaved particle swarm optimization (CQPSO) algorithm. A task priority scheduling list is built, and the processor with the minimum cumulative earliest finish time (EFT) is selected as the target of the first task assignment. Task precedence relationships are satisfied and the total execution time of all tasks is minimized. The experimental results show that the proposed algorithm has strong optimization ability, is simple and feasible, converges quickly, and can be applied to task scheduling optimization in other heterogeneous and distributed environments.
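
    A minimal sketch of the EFT-based list-scheduling step only (communication costs and the chaotic QPSO search are not reproduced); the example DAG and cost table are illustrative assumptions:

        def eft_schedule(order, pred, cost):
            # order: task priority list (topologically consistent)
            # pred:  predecessors of each task in the DAG
            # cost:  cost[task][core] = execution time of the task on that core
            n_cores = len(next(iter(cost.values())))
            core_free = [0.0] * n_cores
            finish, placed = {}, {}
            for task in order:
                ready = max((finish[p] for p in pred.get(task, [])), default=0.0)
                # choose the core giving the earliest finish time for this task
                best = min(range(n_cores),
                           key=lambda c: max(ready, core_free[c]) + cost[task][c])
                start = max(ready, core_free[best])
                finish[task] = start + cost[task][best]
                core_free[best] = finish[task]
                placed[task] = best
            return placed, max(finish.values())

        # Tiny DAG: A -> {B, C}, {B, C} -> D, on two heterogeneous cores
        pred = {"B": ["A"], "C": ["A"], "D": ["B", "C"]}
        cost = {"A": [2, 3], "B": [4, 2], "C": [3, 3], "D": [2, 1]}
        print(eft_schedule(["A", "B", "C", "D"], pred, cost))  # makespan 6.0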

  1. Model predictive control-based scheduler for repetitive discrete event systems with capacity constraints

    Hiroyuki Goto

    2013-07-01

    Full Text Available A model predictive control-based scheduler for a class of discrete event systems is designed and developed. We focus on repetitive, multiple-input, multiple-output, and directed acyclic graph structured systems on which capacity constraints can be imposed. The target system's behaviour is described by linear equations in max-plus algebra, referred to as the state-space representation. Assuming that the system's performance can be improved by paying additional cost, we adjust the system parameters and determine control inputs for which the reference output signals can be observed. The main contribution of this research is twofold: (1) for systems with capacity constraints, we derive an output prediction equation as a function of the adjustable variables in recursive form; (2) regarding the construct for the system's representation, we improve the structure to accomplish general operations which are essential for adjusting the system parameters. The results of the numerical simulation in a later section demonstrate the effectiveness of the developed controller.
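
    A minimal sketch of one step of such a max-plus state recursion, x(k) = A ⊗ x(k-1) ⊕ B ⊗ u(k), with illustrative processing times rather than the paper's system:

        import numpy as np

        EPS = -np.inf  # the max-plus "zero" element

        def mp_mul(A, x):
            # Max-plus matrix-vector product: (A ⊗ x)_i = max_j (A_ij + x_j)
            return np.array([np.max(A[i] + x) for i in range(A.shape[0])])

        # Illustrative system matrices (processing/transport times in time units)
        A = np.array([[5.0, EPS], [3.0, 6.0]])
        B = np.array([[0.0], [2.0]])
        x = np.array([0.0, 0.0])   # event times of the previous cycle
        u = np.array([1.0])        # release time of the next input

        x_next = np.maximum(mp_mul(A, x), mp_mul(B, u))  # ⊕ is the componentwise max
        print(x_next)              # event times of the next cycle -> [5. 6.]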

  2. A heuristic model for risk and cost impacts of plant outage maintenance schedule

    Mohammad Hadi Hadavi, S.

    2009-01-01

    Cost and risk are two major competing criteria in maintenance optimization problems. If a plant is forced to shut down because of an accident, or fear of an accident happening, then besides the loss of revenue it causes damage to the credibility and reputation of the business operation. In this paper a heuristic model for incorporating three compelling optimization criteria (i.e., risk, cost, and loss) into a single evaluation function is proposed. Such a model could be used in the evaluation engine of any outage maintenance schedule optimizer. The aim is to make the model realistic and to address the ongoing challenges facing a schedule planner in a simple and commonly understandable fashion. Two simple competing schedules for the NPP feedwater system are examined against the model. The results show that the model successfully addresses the current challenges of outage maintenance optimization and properly demonstrates the dynamics of a schedule with regard to risk, cost, and the losses incurred by the maintenance schedule, particularly when a prolonged outage and lack of maintenance for equipment in need of urgent care are of concern.

  3. Linear approximation model network and its formation via ...

    niques, an alternative `linear approximation model' (LAM) network approach is .... network is LPV, existing LTI theory is difficult to apply (Kailath 1980). ..... Beck J V, Arnold K J 1977 Parameter estimation in engineering and science (New York: ...

  4. Sphaleron in a non-linear sigma model

    Sogo, Kiyoshi; Fujimoto, Yasushi.

    1989-08-01

    We present an exact classical saddle point solution in a non-linear sigma model. It has a topological charge 1/2 and mediates the vacuum transition. The quantum fluctuations and the transition rate are also examined. (author)

  5. On D-branes from gauged linear sigma models

    Govindarajan, S.; Jayaraman, T.; Sarkar, T.

    2001-01-01

    We study both A-type and B-type D-branes in the gauged linear sigma model by considering worldsheets with boundary. The boundary conditions on the matter and vector multiplet fields are first considered in the large-volume phase/non-linear sigma model limit of the corresponding Calabi-Yau manifold, where we find that we need to add a contact term on the boundary. These considerations enable us to derive the boundary conditions in the full gauged linear sigma model, including the addition of the appropriate boundary contact terms, such that these boundary conditions have the correct non-linear sigma model limit. Most of the analysis is for the case of Calabi-Yau manifolds with one Kaehler modulus (including those corresponding to hypersurfaces in weighted projective space), though we comment on possible generalisations

  6. Optimization for decision making linear and quadratic models

    Murty, Katta G

    2010-01-01

    While maintaining the rigorous linear programming instruction required, Murty's new book is unique in its focus on developing modeling skills to support valid decision-making for complex real world problems, and includes solutions to brand new algorithms.

  7. Study of linear induction motor characteristics : the Mosebach model

    1976-05-31

    This report covers the Mosebach theory of the double-sided linear induction motor, starting with the idealized model and accompanying assumptions, and ending with relations for thrust, airgap power, and motor efficiency. Solutions of the magnetic in...

  8. Study of linear induction motor characteristics : the Oberretl model

    1975-05-30

    The Oberretl theory of the double-sided linear induction motor (LIM) is examined, starting with the idealized model and accompanying assumptions, and ending with relations for predicted thrust, airgap power, and motor efficiency. The effect of varyin...

  9. A genetic algorithm-based job scheduling model for big data analytics.

    Lu, Qinghua; Li, Shanshan; Zhang, Weishan; Zhang, Lei

    Big data analytics (BDA) applications are a new category of software applications that process large amounts of data using scalable parallel processing infrastructure to obtain hidden value. Hadoop is the most mature open-source big data analytics framework, which implements the MapReduce programming model to process big data with MapReduce jobs. Big data analytics jobs are often continuous and not mutually separated. The existing work mainly focuses on executing jobs in sequence, which is often inefficient and consumes considerable energy. In this paper, we propose a genetic algorithm-based job scheduling model for big data analytics applications to improve the efficiency of big data analytics. To implement the job scheduling model, we leverage an estimation module to predict the performance of clusters when executing analytics jobs. We have evaluated the proposed job scheduling model in terms of feasibility and accuracy.
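
    A minimal sketch of the genetic-algorithm idea, assuming (hypothetically) that an estimation module supplies a predicted runtime for every job on every cluster; Hadoop specifics, energy terms and the paper's actual operators are not reproduced:

        import random

        random.seed(1)
        est_runtime = [[8, 10], [4, 3], [6, 9], [5, 4], [7, 7]]   # job x cluster (assumed estimates)
        N_JOBS, N_CLUSTERS = len(est_runtime), 2

        def makespan(assign):
            # assign[j] = cluster index chosen for job j
            load = [0.0] * N_CLUSTERS
            for j, c in enumerate(assign):
                load[c] += est_runtime[j][c]
            return max(load)

        def evolve(pop_size=30, generations=60, p_mut=0.2):
            pop = [[random.randrange(N_CLUSTERS) for _ in range(N_JOBS)]
                   for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=makespan)
                parents = pop[:pop_size // 2]                 # truncation selection
                children = []
                while len(children) < pop_size - len(parents):
                    a, b = random.sample(parents, 2)
                    cut = random.randrange(1, N_JOBS)         # one-point crossover
                    child = a[:cut] + b[cut:]
                    if random.random() < p_mut:               # point mutation
                        child[random.randrange(N_JOBS)] = random.randrange(N_CLUSTERS)
                    children.append(child)
                pop = parents + children
            best = min(pop, key=makespan)
            return best, makespan(best)

        print(evolve())   # best job-to-cluster assignment and its estimated makespan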

  10. Optimization Research of Generation Investment Based on Linear Programming Model

    Wu, Juan; Ge, Xueqian

    Linear programming is an important branch of operational research and it is a mathematical method to assist the people to carry out scientific management. GAMS is an advanced simulation and optimization modeling language and it will combine a large number of complex mathematical programming, such as linear programming LP, nonlinear programming NLP, MIP and other mixed-integer programming with the system simulation. In this paper, based on the linear programming model, the optimized investment decision-making of generation is simulated and analyzed. At last, the optimal installed capacity of power plants and the final total cost are got, which provides the rational decision-making basis for optimized investments.

  11. Modeling of capacitated transportation systems for integral scheduling

    Ebben, Mark; van der Heijden, Matthijs C.; Hurink, Johann L.; Schutten, Johannes M.J.

    2003-01-01

    Motivated by a planned automated cargo transportation network, we consider transportation problems in which the finite capacity of resources has to be taken into account. We present a flexible modeling methodology which allows to construct, evaluate, and improve feasible solutions. The modeling is

  12. Modeling of capacitated transportation systems for integral scheduling

    Ebben, Mark; van der Heijden, Matthijs C.; Hurink, Johann L.; Schutten, Johannes M.J.

    2003-01-01

    Motivated by a planned automated cargo transportation network, we consider transportation problems in which the finite capacity of resources has to be taken into account. We present a flexible modeling methodology which allows to construct, evaluate, and improve feasible solutions. The modeling is

  13. Generalized linear mixed models modern concepts, methods and applications

    Stroup, Walter W

    2012-01-01

    PART I The Big Picture. Modeling Basics: What Is a Model?; Two Model Forms: Model Equation and Probability Distribution; Types of Model Effects; Writing Models in Matrix Form; Summary: Essential Elements for a Complete Statement of the Model. Design Matters: Introductory Ideas for Translating Design and Objectives into Models; Describing "Data Architecture" to Facilitate Model Specification; From Plot Plan to Linear Predictor; Distribution Matters; More Complex Example: Multiple Factors with Different Units of Replication. Setting the Stage: Goals for Inference with Models: Overview; Basic Tools of Inference; Issue I: Data

  14. A comparison of linear tyre models for analysing shimmy

    Besselink, I.J.M.; Maas, J.W.L.H.; Nijmeijer, H.

    2011-01-01

    A comparison is made between three linear, dynamic tyre models using low speed step responses and yaw oscillation tests. The match with the measurements improves with increasing complexity of the tyre model. Application of the different tyre models to a two degree of freedom trailing arm suspension

  15. Unification of three linear models for the transient visual system

    Brinker, den A.C.

    1989-01-01

    Three different linear filters are considered as a model describing the experimentally determined triphasic impulse responses of discs. These impulse responses are associated with the transient visual system. Each model reveals a different feature of the system. Unification of the models is

  16. A BEHAVIORAL-APPROACH TO LINEAR EXACT MODELING

    ANTOULAS, AC; WILLEMS, JC

    1993-01-01

    The behavioral approach to system theory provides a parameter-free framework for the study of the general problem of linear exact modeling and recursive modeling. The main contribution of this paper is the solution of the (continuous-time) polynomial-exponential time series modeling problem. Both

  17. A Framework for Uplink Intercell Interference Modeling with Channel-Based Scheduling

    Tabassum, Hina

    2012-12-29

    This paper presents a novel framework for modeling the uplink intercell interference (ICI) in a multiuser cellular network. The proposed framework assists in quantifying the impact of various fading channel models and state-of-the-art scheduling schemes on the uplink ICI. Firstly, we derive a semianalytical expression for the distribution of the location of the scheduled user in a given cell considering a wide range of scheduling schemes. Based on this, we derive the distribution and moment generating function (MGF) of the uplink ICI considering a single interfering cell. Consequently, we determine the MGF of the cumulative ICI observed from all interfering cells and derive explicit MGF expressions for three typical fading models. Finally, we utilize the obtained expressions to evaluate important network performance metrics such as the outage probability, ergodic capacity, and average fairness numerically. Monte-Carlo simulation results are provided to demonstrate the efficacy of the derived analytical expressions.
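
    A Monte-Carlo sketch of the effect the paper quantifies analytically, namely how the scheduling rule in an interfering cell shifts the uplink ICI seen by a neighbouring base station; the Rayleigh fading, path-loss exponent and geometry below are illustrative assumptions, not the paper's setup:

        import numpy as np

        rng = np.random.default_rng(0)

        def mean_ici(scheduler, n_users=10, n_drops=20000, alpha=3.5, d_cells=1.0):
            ici = np.empty(n_drops)
            for k in range(n_drops):
                # users dropped uniformly in a disc of radius 0.5 around the
                # interfering base station (placed at the origin)
                r = 0.5 * np.sqrt(rng.uniform(size=n_users))
                theta = rng.uniform(0.0, 2.0 * np.pi, n_users)
                own_gain = rng.exponential(size=n_users) * r ** (-alpha)
                u = scheduler(own_gain)            # index of the scheduled user
                # distance from that user to the victim base station at (d_cells, 0)
                d_victim = np.hypot(d_cells - r[u] * np.cos(theta[u]),
                                    r[u] * np.sin(theta[u]))
                ici[k] = rng.exponential() * d_victim ** (-alpha)
            return ici.mean()

        greedy = lambda g: int(np.argmax(g))           # channel-based (greedy) scheduling
        uniform = lambda g: int(rng.integers(len(g)))  # round-robin-like baseline
        print(mean_ici(greedy), mean_ici(uniform))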

  18. Linearized models for a new magnetic control in MAST

    Artaserse, G., E-mail: giovanni.artaserse@enea.it [Associazione Euratom-ENEA sulla Fusione, Via Enrico Fermi 45, I-00044 Frascati (RM) (Italy); Maviglia, F.; Albanese, R. [Associazione Euratom-ENEA-CREATE sulla Fusione, Via Claudio 21, I-80125 Napoli (Italy); McArdle, G.J.; Pangione, L. [EURATOM/CCFE Fusion Association, Culham Science Centre, Abingdon, Oxon, OX14 3DB (United Kingdom)

    2013-10-15

    Highlights: ► We applied linearized models for a new magnetic control on MAST tokamak. ► A suite of procedures, conceived to be machine independent, have been used. ► We carried out model-based simulations, taking into account eddy currents effects. ► Comparison with the EFIT flux maps and the experimental magnetic signals are shown. ► A current driven model for the dynamic simulations of the experimental data have been performed. -- Abstract: The aim of this work is to provide reliable linearized models for the design and assessment of a new magnetic control system for MAST (Mega Ampère Spherical Tokamak) using rtEFIT, which can easily be exported to MAST Upgrade. Linearized models for magnetic control have been obtained using the 2D axisymmetric finite element code CREATE L. MAST linearized models include equivalent 2D axisymmetric schematization of poloidal field (PF) coils, vacuum vessel, and other conducting structures. A plasmaless and a double null configuration have been chosen as benchmark cases for the comparison with experimental data and EFIT reconstructions. Good agreement has been found with the EFIT flux map and the experimental signals coming from magnetic probes with only few mismatches probably due to broken sensors. A suite of procedures (equipped with a user friendly interface to be run even remotely) to provide linearized models for magnetic control is now available on the MAST linux machines. A new current driven model has been used to obtain a state space model having the PF coil currents as inputs. Dynamic simulations of experimental data have been carried out using linearized models, including modelling of the effects of the passive structures, showing a fair agreement. The modelling activity has been useful also to reproduce accurately the interaction between plasma current and radial position control loops.

  19. Linearized models for a new magnetic control in MAST

    Artaserse, G.; Maviglia, F.; Albanese, R.; McArdle, G.J.; Pangione, L.

    2013-01-01

    Highlights: ► We applied linearized models for a new magnetic control on MAST tokamak. ► A suite of procedures, conceived to be machine independent, have been used. ► We carried out model-based simulations, taking into account eddy currents effects. ► Comparison with the EFIT flux maps and the experimental magnetic signals are shown. ► A current driven model for the dynamic simulations of the experimental data have been performed. -- Abstract: The aim of this work is to provide reliable linearized models for the design and assessment of a new magnetic control system for MAST (Mega Ampère Spherical Tokamak) using rtEFIT, which can easily be exported to MAST Upgrade. Linearized models for magnetic control have been obtained using the 2D axisymmetric finite element code CREATE L. MAST linearized models include equivalent 2D axisymmetric schematization of poloidal field (PF) coils, vacuum vessel, and other conducting structures. A plasmaless and a double null configuration have been chosen as benchmark cases for the comparison with experimental data and EFIT reconstructions. Good agreement has been found with the EFIT flux map and the experimental signals coming from magnetic probes with only few mismatches probably due to broken sensors. A suite of procedures (equipped with a user friendly interface to be run even remotely) to provide linearized models for magnetic control is now available on the MAST linux machines. A new current driven model has been used to obtain a state space model having the PF coil currents as inputs. Dynamic simulations of experimental data have been carried out using linearized models, including modelling of the effects of the passive structures, showing a fair agreement. The modelling activity has been useful also to reproduce accurately the interaction between plasma current and radial position control loops

  20. Solving scheduling problems by untimed model checking. The clinical chemical analyser case study

    Margaria, T.; Wijs, Anton J.; Massink, M.; van de Pol, Jan Cornelis; Bortnik, Elena M.

    2009-01-01

    In this article, we show how scheduling problems can be modelled in untimed process algebra by using special tick actions. A minimal-cost trace leading to a particular action is one that minimises the number of tick steps. As a result, we can use any (timed or untimed) model checking tool to find

  1. Optimizing Intermodal Train Schedules with a Design Balanced Network Design Model

    Pedersen, Michael Berliner; Crainic, Teodor Gabriel

    We present a modeling approach for optimizing intermodal train schedules based on an infrastructure divided into time-dependent train paths. The formulation can be generalized to a capacitated multi-commodity network design model with additional design balance constraints. We present a Tabu Search...

  2. Modeling the violation of reward maximization and invariance in reinforcement schedules.

    Giancarlo La Camera

    2008-08-01

    Full Text Available It is often assumed that animals and people adjust their behavior to maximize reward acquisition. In visually cued reinforcement schedules, monkeys make errors in trials that are not immediately rewarded, despite having to repeat error trials. Here we show that error rates are typically smaller in trials equally distant from reward but belonging to longer schedules (referred to as "schedule length effect". This violates the principles of reward maximization and invariance and cannot be predicted by the standard methods of Reinforcement Learning, such as the method of temporal differences. We develop a heuristic model that accounts for all of the properties of the behavior in the reinforcement schedule task but whose predictions are not different from those of the standard temporal difference model in choice tasks. In the modification of temporal difference learning introduced here, the effect of schedule length emerges spontaneously from the sensitivity to the immediately preceding trial. We also introduce a policy for general Markov Decision Processes, where the decision made at each node is conditioned on the motivation to perform an instrumental action, and show that the application of our model to the reinforcement schedule task and the choice task are special cases of this general theoretical framework. Within this framework, Reinforcement Learning can approach contextual learning with the mixture of empirical findings and principled assumptions that seem to coexist in the best descriptions of animal behavior. As examples, we discuss two phenomena observed in humans that often derive from the violation of the principle of invariance: "framing," wherein equivalent options are treated differently depending on the context in which they are presented, and the "sunk cost" effect, the greater tendency to continue an endeavor once an investment in money, effort, or time has been made. The schedule length effect might be a manifestation of these
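
    For contrast, a minimal sketch of the standard temporal-difference (TD(0)) value update that the paper's heuristic modifies; the two-state "schedule" and the parameters are illustrative, not the monkeys' actual task:

        def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
            # One-step TD update of the state-value table V after observing (s, r, s_next)
            delta = r + gamma * V.get(s_next, 0.0) - V.get(s, 0.0)
            V[s] = V.get(s, 0.0) + alpha * delta
            return V

        # Two-trial schedule: states count trials remaining; reward only at the end
        V = {}
        for episode in range(200):
            V = td0_update(V, "2_to_go", 0.0, "1_to_go")
            V = td0_update(V, "1_to_go", 1.0, "end")
        print(V)   # learned values discount with distance to reward

    Because the plain TD(0) values depend only on the distance to reward, they cannot by themselves reproduce the schedule length effect described above, which is the gap the paper's modification addresses.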

  3. Presenting a multi-objective generation scheduling model for pricing demand response rate in micro-grid energy management

    Aghajani, G.R.; Shayanfar, H.A.; Shayeghi, H.

    2015-01-01

    Highlights: • Using DRPs to cover the uncertainties resulting from power generation by WT and PV. • Proposing the use of price-offer packages and amount of DR to implement DRPs. • Considering a multi-objective scheduling model and use of the MOPSO algorithm. - Abstract: In this paper, a multi-objective energy management system is proposed in order to optimize micro-grid (MG) performance in the short term in the presence of Renewable Energy Sources (RESs) for wind and solar energy generation with a randomized natural behavior. Different types of customers, including residential, commercial, and industrial consumers, can participate in demand response programs by declaring their interruptible/curtailable demand rate or selecting from among different proposed prices, so as to assist the central micro-grid control in optimizing micro-grid operation and covering energy generation uncertainty from the renewable sources. In this paper, to implement Demand Response (DR) schedules, incentive-based payment in the form of offered packages of price and DR quantity collected by Demand Response Providers (DRPs) is used. In the typical micro-grid, different technologies including Wind Turbine (WT), PhotoVoltaic (PV) cell, Micro-Turbine (MT), Fuel Cell (FC), battery hybrid power source and responsive loads are used. The simulation results are considered in six different cases in order to optimize operation cost and emission with/without DR. Considering the complexity and non-linearity of the proposed problem, Multi-Objective Particle Swarm Optimization (MOPSO) is utilized. Also, a fuzzy-based mechanism and non-linear sorting system are applied to determine the best compromise from the set of solutions in the Pareto-front space. The numerical results show the effect of the proposed Demand Side Management (DSM) scheduling model on reducing the effect of uncertainty in the power generated by WT and PV in the MG.

  4. H∞ /H2 model reduction through dilated linear matrix inequalities

    Adegas, Fabiano Daher; Stoustrup, Jakob

    2012-01-01

    This paper presents sufficient dilated linear matrix inequalities (LMI) conditions to the $H_{\infty}$ and $H_{2}$ model reduction problem. A special structure of the auxiliary (slack) variables allows the original model of order $n$ to be reduced to an order $r = n/s$ where $n, r, s \in \mathbb{N}$. ...

  5. Non-linear Growth Models in Mplus and SAS

    Grimm, Kevin J.; Ram, Nilam

    2013-01-01

    Non-linear growth curves or growth curves that follow a specified non-linear function in time enable researchers to model complex developmental patterns with parameters that are easily interpretable. In this paper we describe how a variety of sigmoid curves can be fit using the Mplus structural modeling program and the non-linear mixed-effects modeling procedure NLMIXED in SAS. Using longitudinal achievement data collected as part of a study examining the effects of preschool instruction on academic gain we illustrate the procedures for fitting growth models of logistic, Gompertz, and Richards functions. Brief notes regarding the practical benefits, limitations, and choices faced in the fitting and estimation of such models are included. PMID:23882134
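
    As a quick reference for the three functions mentioned, here are common parameterizations of the logistic, Gompertz, and Richards curves. Parameterizations vary across software, so the forms below are one reasonable choice; in Python they could be fitted to data with, e.g., scipy.optimize.curve_fit rather than Mplus or SAS.

```python
import numpy as np

def logistic(t, asymptote, rate, inflection):
    """Symmetric S-shaped growth."""
    return asymptote / (1.0 + np.exp(-rate * (t - inflection)))

def gompertz(t, asymptote, rate, inflection):
    """Asymmetric S-shaped growth; approaches the asymptote more slowly."""
    return asymptote * np.exp(-np.exp(-rate * (t - inflection)))

def richards(t, asymptote, rate, inflection, shape):
    """Generalised logistic; `shape` controls asymmetry (shape = 1 recovers the logistic)."""
    return asymptote / (1.0 + shape * np.exp(-rate * (t - inflection))) ** (1.0 / shape)

t = np.linspace(0, 10, 6)
print(logistic(t, 100, 1.0, 5))
print(gompertz(t, 100, 1.0, 5))
print(richards(t, 100, 1.0, 5, 0.5))
```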

  6. Variance Function Partially Linear Single-Index Models1.

    Lian, Heng; Liang, Hua; Carroll, Raymond J

    2015-01-01

    We consider heteroscedastic regression models where the mean function is a partially linear single index model and the variance function depends upon a generalized partially linear single index model. We do not insist that the variance function depend only upon the mean function, as happens in the classical generalized partially linear single index model. We develop efficient and practical estimation methods for the variance function and for the mean function. Asymptotic theory for the parametric and nonparametric parts of the model is developed. Simulations illustrate the results. An empirical example involving ozone levels is used to further illustrate the results, and is shown to be a case where the variance function does not depend upon the mean function.

  7. A note on a model for quay crane scheduling with non-crossing constraints

    Santini, Alberto; Friberg, Henrik Alsing; Røpke, Stefan

    2015-01-01

    This article studies the quay crane scheduling problem with non-crossing constraints, which is an operational problem that arises in container terminals. An enhancement to a mixed integer programming model for the problem is proposed and a new class of valid inequalities is introduced. Computational...

  8. Phylogenetic mixtures and linear invariants for equal input models.

    Casanellas, Marta; Steel, Mike

    2017-04-01

    The reconstruction of phylogenetic trees from molecular sequence data relies on modelling site substitutions by a Markov process, or a mixture of such processes. In general, allowing mixed processes can result in different tree topologies becoming indistinguishable from the data, even for infinitely long sequences. However, when the underlying Markov process supports linear phylogenetic invariants, then provided these are sufficiently informative, the identifiability of the tree topology can be restored. In this paper, we investigate a class of processes that support linear invariants once the stationary distribution is fixed, the 'equal input model'. This model generalizes the 'Felsenstein 1981' model (and thereby the Jukes-Cantor model) from four states to an arbitrary number of states (finite or infinite), and it can also be described by a 'random cluster' process. We describe the structure and dimension of the vector spaces of phylogenetic mixtures and of linear invariants for any fixed phylogenetic tree (and for all trees-the so called 'model invariants'), on any number n of leaves. We also provide a precise description of the space of mixtures and linear invariants for the special case of [Formula: see text] leaves. By combining techniques from discrete random processes and (multi-) linear algebra, our results build on a classic result that was first established by James Lake (Mol Biol Evol 4:167-191, 1987).
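
    A small numerical illustration of the equal input model: once the stationary distribution pi is fixed, the transition matrix has the closed form P_ij(t) = exp(-mu t) delta_ij + (1 - exp(-mu t)) pi_j, which reduces to the Jukes-Cantor model for four equally frequent states. The helper below is just a sketch of that formula.

```python
import numpy as np

def equal_input_transition(pi, mu, t):
    """Transition matrix P(t) of the equal input model (Felsenstein 1981
    generalised to any number of states): substitutions land on state j
    with probability pi[j], independently of the current state."""
    pi = np.asarray(pi, dtype=float)
    decay = np.exp(-mu * t)
    return decay * np.eye(len(pi)) + (1.0 - decay) * pi[np.newaxis, :]

# Four states with the uniform stationary distribution recovers Jukes-Cantor.
P = equal_input_transition([0.25, 0.25, 0.25, 0.25], mu=1.0, t=0.5)
print(P)
print(P.sum(axis=1))   # each row sums to 1
```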

  9. Non-linear calibration models for near infrared spectroscopy

    Ni, Wangdong; Nørgaard, Lars; Mørup, Morten

    2014-01-01

    The non-linear methods compared include LS-SVM, relevance vector machines (RVM), Gaussian process regression (GPR), artificial neural networks (ANN), and Bayesian ANN (BANN). In this comparison, partial least squares (PLS) regression is used as a linear benchmark, while the relationship of the methods is considered in terms of traditional calibration by ridge regression (RR). The performance of the different methods is demonstrated by their practical applications using three real-life near infrared (NIR) data sets. Different aspects of the various approaches, including computational time, model interpretability, potential over-fitting using the non-linear models on linear problems, robustness to small or medium sample sets, and robustness to pre-processing, are discussed. The results suggest that GPR and BANN are powerful and promising methods for handling linear as well as nonlinear systems, even when the data sets are moderately small.
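
    A hedged sketch of the kind of comparison described, using scikit-learn on synthetic spectra (the real study uses three NIR data sets and more methods): PLS as the linear benchmark against Gaussian process regression with an RBF-plus-noise kernel. The data generation and kernel choice are assumptions made only for this illustration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Synthetic stand-in for a NIR calibration set: 200 "spectra" with 50 channels
# and a reference value depending non-linearly on a latent component.
rng = np.random.default_rng(0)
latent = rng.uniform(0, 1, 200)
X = np.outer(latent, np.linspace(1, 2, 50)) + 0.05 * rng.standard_normal((200, 50))
y = np.sin(3 * latent) + 0.5 * latent ** 2 + 0.02 * rng.standard_normal(200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

pls = PLSRegression(n_components=3).fit(X_tr, y_tr)               # linear benchmark
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel()).fit(X_tr, y_tr)

print("PLS RMSE:", mean_squared_error(y_te, pls.predict(X_te).ravel()) ** 0.5)
print("GPR RMSE:", mean_squared_error(y_te, gpr.predict(X_te)) ** 0.5)
```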

  10. Estimation and variable selection for generalized additive partial linear models

    Wang, Li

    2011-08-01

    We study generalized additive partial linear models, proposing the use of polynomial spline smoothing for estimation of nonparametric functions, and deriving quasi-likelihood based estimators for the linear parameters. We establish asymptotic normality for the estimators of the parametric components. The procedure avoids solving large systems of equations as in kernel-based procedures and thus results in gains in computational simplicity. We further develop a class of variable selection procedures for the linear parameters by employing a nonconcave penalized quasi-likelihood, which is shown to have an asymptotic oracle property. Monte Carlo simulations and an empirical example are presented for illustration. © Institute of Mathematical Statistics, 2011.

  11. Aspects of stochastic models for short-term hydropower scheduling and bidding

    Belsnes, Michael Martin [Sintef Energy, Trondheim (Norway); Follestad, Turid [Sintef Energy, Trondheim (Norway); Wolfgang, Ove [Sintef Energy, Trondheim (Norway); Fosso, Olav B. [Dep. of electric power engineering NTNU, Trondheim (Norway)

    2012-07-01

    This report discusses challenges met when turning from deterministic to stochastic decision support models for short-term hydropower scheduling and bidding. The report describes characteristics of the short-term scheduling and bidding problem, different market and bidding strategies, and how a stochastic optimization model can be formulated. A review of approaches for stochastic short-term modelling and stochastic modelling for the input variables inflow and market prices is given. The report discusses methods for approximating the predictive distribution of uncertain variables by scenario trees. Benefits of using a stochastic over a deterministic model are illustrated by a case study, where increased profit is obtained to a varying degree depending on the reservoir filling and price structure. Finally, an approach for assessing the effect of using a size restricted scenario tree to approximate the predictive distribution for stochastic input variables is described. The report is a summary of the findings of Work package 1 of the research project "Optimal short-term scheduling of wind and hydro resources". The project aims at developing a prototype for an operational stochastic short-term scheduling model. Based on the investigations summarized in the report, it is concluded that using a deterministic equivalent formulation of the stochastic optimization problem is convenient and sufficient for obtaining a working prototype. (author)
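
    To make the "deterministic equivalent" formulation concrete, here is a toy two-stage stochastic LP for hydropower scheduling, solved with SciPy: the first-stage release is decided before the price scenario is known, while the recourse releases are scenario-dependent. All numbers are invented for illustration and have no relation to the project's prototype.

```python
import numpy as np
from scipy.optimize import linprog

# First-stage decision: release x in period 1 at a known price.
# Second-stage recourse: release y_s in period 2 under each price scenario s.
storage, capacity = 12.0, 10.0
price_now = 30.0
scenario_prices = np.array([20.0, 35.0, 50.0])
scenario_probs = np.array([0.3, 0.4, 0.3])

# Variables: [x, y_1, y_2, y_3]; maximise expected revenue -> minimise its negative.
c = -np.concatenate(([price_now], scenario_probs * scenario_prices))

# Water balance per scenario: x + y_s <= storage.
A_ub = np.hstack([np.ones((3, 1)), np.eye(3)])
b_ub = np.full(3, storage)
bounds = [(0, capacity)] * 4

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("first-stage release:", round(res.x[0], 2))
print("scenario recourse releases:", np.round(res.x[1:], 2))
print("expected revenue:", round(-res.fun, 2))
```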

  12. Mathematical model for scheduling irrigation for swamp rice in Port ...

    While the mother model indicated that the planted crops will be under severe water stress because the values of their d2 were below the allowable range of water depletion, except in weeks 1, 7, 10, 16 and 17, with their d2 values being 178.50 mm, 181.47 mm, 162.11 mm, 198.80 mm and 187.60 mm respectively. Water ...

  13. Matrix model and time-like linear dilaton matter

    Takayanagi, Tadashi

    2004-01-01

    We consider a matrix model description of the 2d string theory whose matter part is given by a time-like linear dilaton CFT. This is equivalent to the c=1 matrix model with a deformed, but very simple Fermi surface. Indeed, after a Lorentz transformation, the corresponding 2d spacetime is a conventional linear dilaton background with a time-dependent tachyon field. We show that the tree level scattering amplitudes in the matrix model perfectly agree with those computed in the world-sheet theory. The classical trajectories of fermions correspond to the decaying D-branes in the time-like linear dilaton CFT. We also discuss the ground ring structure. Furthermore, we study the properties of the time-like Liouville theory by applying this matrix model description. We find that its ground ring structure is very similar to that of the minimal string. (author)

  14. Vortices, semi-local vortices in gauged linear sigma model

    Kim, Namkwon

    1998-11-01

    We consider the static (2+1)D gauged linear sigma model. By analyzing the governing system of partial differential equations, we investigate various aspects of the model. We show the existence of finite-energy vortices under a partially broken symmetry on R^2 with the necessary condition suggested by Y. Yang. We also introduce generalized semi-local vortices and show the existence of finite-energy semi-local vortices under a certain condition. The vacuum manifold for the semi-local vortices turns out to be graded. Besides, with a special choice of a representation, we show that the O(3) sigma model, whose target space is nonlinear, is a singular limit of the gauged linear sigma model, whose target space is linear. (author)

  15. TENSOR DECOMPOSITIONS AND SPARSE LOG-LINEAR MODELS

    Johndrow, James E.; Bhattacharya, Anirban; Dunson, David B.

    2017-01-01

    Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. We derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions. PMID:29332971

  16. Linear mixed models a practical guide using statistical software

    West, Brady T; Galecki, Andrzej T

    2006-01-01

    Simplifying the often confusing array of software programs for fitting linear mixed models (LMMs), Linear Mixed Models: A Practical Guide Using Statistical Software provides a basic introduction to primary concepts, notation, software implementation, model interpretation, and visualization of clustered and longitudinal data. This easy-to-navigate reference details the use of procedures for fitting LMMs in five popular statistical software packages: SAS, SPSS, Stata, R/S-plus, and HLM. The authors introduce basic theoretical concepts, present a heuristic approach to fitting LMMs based on bo
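
    Alongside the five packages covered by the book, Python's statsmodels can fit the same kind of random-intercept LMM. A minimal sketch on simulated clustered data follows; the variable names and simulation settings are arbitrary choices for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated clustered data: 30 groups, random intercept per group.
rng = np.random.default_rng(1)
groups = np.repeat(np.arange(30), 10)
u = rng.normal(0, 2.0, 30)[groups]          # group-level random effects
x = rng.normal(size=groups.size)
y = 1.0 + 0.5 * x + u + rng.normal(size=groups.size)
data = pd.DataFrame({"y": y, "x": x, "g": groups})

# Random-intercept LMM, comparable to lmer(y ~ x + (1 | g)) in R.
model = smf.mixedlm("y ~ x", data, groups=data["g"])
result = model.fit()
print(result.summary())
```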

  17. Inverse Modelling Problems in Linear Algebra Undergraduate Courses

    Martinez-Luaces, Victor E.

    2013-01-01

    This paper will offer an analysis from a theoretical point of view of mathematical modelling, applications and inverse problems of both causation and specification types. Inverse modelling problems give the opportunity to establish connections between theory and practice and to show this fact, a simple linear algebra example in two different…

  18. Optical linear algebra processors - Noise and error-source modeling

    Casasent, D.; Ghosh, A.

    1985-01-01

    The modeling of system and component noise and error sources in optical linear algebra processors (OLAPs) are considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.

  20. CONTRIBUTIONS TO THE FINITE ELEMENT MODELING OF LINEAR ULTRASONIC MOTORS

    Oana CHIVU

    2013-05-01

    Full Text Available The present paper is concerned with the main modeling elements, as produced by means of the finite element method, of linear ultrasonic motors. Hence, first the model is designed and then modal and harmonic analyses are carried out in view of outlining the main outcomes.

  1. Linear and Nonlinear Career Models: Metaphors, Paradigms, and Ideologies.

    Buzzanell, Patrice M.; Goldzwig, Steven R.

    1991-01-01

    Examines the linear or bureaucratic career models (dominant in career research, metaphors, paradigms, and ideologies) which maintain career myths of flexibility and individualized routes to success in organizations incapable of offering such versatility. Describes nonlinear career models which offer suggestive metaphors for re-visioning careers…

  2. Mathematical modeling identifies optimum lapatinib dosing schedules for the treatment of glioblastoma patients.

    Shayna Stein

    2018-01-01

    Full Text Available Human primary glioblastomas (GBM) often harbor mutations within the epidermal growth factor receptor (EGFR). Treatment of EGFR-mutant GBM cell lines with the EGFR/HER2 tyrosine kinase inhibitor lapatinib can effectively induce cell death in these models. However, EGFR inhibitors have shown little efficacy in the clinic, partly because of inappropriate dosing. Here, we developed a computational approach to model the in vitro cellular dynamics of the EGFR-mutant cell line SF268 in response to different lapatinib concentrations and dosing schedules. We then used this approach to identify an effective treatment strategy within the clinical toxicity limits of lapatinib, and developed a partial differential equation modeling approach to study the in vivo GBM treatment response by taking into account the heterogeneous and diffusive nature of the disease. Despite the inability of lapatinib to induce tumor regressions with a continuous daily schedule, our modeling approach consistently predicts that continuous dosing remains the best clinically feasible strategy for slowing down tumor growth and lowering overall tumor burden, compared to pulsatile schedules currently known to be tolerated, even when considering drug resistance, reduced lapatinib tumor concentrations due to the blood-brain barrier, and the phenotypic switch from proliferative to migratory cell phenotypes that occurs in hypoxic microenvironments. Our mathematical modeling and statistical analysis platform provides a rational method for comparing treatment schedules in search for optimal dosing strategies for glioblastoma and other cancer types.
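
    For illustration only, here is a toy dosing-schedule comparison in the spirit of the record: logistic tumour growth with a concentration-proportional kill term, integrated with SciPy, comparing a continuous low dose against a weekly pulsed dose of equal total exposure. The model form, parameter values and dose profiles are assumptions made for this sketch, not the calibrated SF268 model of the study.

```python
import numpy as np
from scipy.integrate import solve_ivp

growth, capacity, kill = 0.3, 1.0, 0.5      # per day, normalised burden, per (day*dose)

def dose_continuous(t):
    return 1.0                               # constant unit concentration

def dose_pulsed(t):
    return 7.0 if (t % 7.0) < 1.0 else 0.0   # one high-dose day per week

def rhs(t, n, dose):
    # Logistic growth minus a drug-dependent kill term.
    return growth * n * (1 - n / capacity) - kill * dose(t) * n

t_span, t_eval, n0 = (0, 56), np.linspace(0, 56, 400), [0.1]
cont = solve_ivp(rhs, t_span, n0, t_eval=t_eval, args=(dose_continuous,), max_step=0.25)
puls = solve_ivp(rhs, t_span, n0, t_eval=t_eval, args=(dose_pulsed,), max_step=0.25)
print("final burden, continuous dosing:", round(cont.y[0, -1], 4))
print("final burden, pulsatile dosing:", round(puls.y[0, -1], 4))
```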

  3. Identification of Affine Linear Parameter Varying Models for Adaptive Interventions in Fibromyalgia Treatment.

    Dos Santos, P Lopes; Deshpande, Sunil; Rivera, Daniel E; Azevedo-Perdicoúlis, T-P; Ramos, J A; Younger, Jarred

    2013-12-31

    There is good evidence that naltrexone, an opioid antagonist, has a strong neuroprotective role and may be a potential drug for the treatment of fibromyalgia. In previous work, some of the authors used experimental clinical data to identify input-output linear time invariant models that were used to extract useful information about the effect of this drug on fibromyalgia symptoms. Additional factors such as anxiety, stress, mood, and headache, were considered as additive disturbances. However, it seems reasonable to think that these factors do not affect the drug actuation, but only the way in which a participant perceives how the drug actuates on herself. Under this hypothesis the linear time invariant models can be replaced by State-Space Affine Linear Parameter Varying models where the disturbances are seen as a scheduling signal acting only on the parameters of the output equation. In this paper a new algorithm for identifying such a model is proposed. This algorithm minimizes a quadratic criterion of the output error. Since the output error is a linear function of some parameters, the Affine Linear Parameter Varying system identification is formulated as a separable nonlinear least squares problem. As in other identification algorithms using gradient optimization methods, several parameter derivatives are dynamical systems that must be simulated. In order to increase time efficiency a canonical parametrization that minimizes the number of systems to be simulated is chosen. The effectiveness of the algorithm is assessed in a case study where an Affine Parameter Varying Model is identified from the experimental data used in the previous study and compared with the time-invariant model.

  4. A computer tool for daily application of the linear quadratic model

    Macias Jaen, J.; Galan Montenegro, P.; Bodineau Gil, C.; Wals Zurita, A.; Serradilla Gil, A.M.

    2001-01-01

    The aim of this paper is to indicate the relevance of the A.S.A.R.A. (As Short As Reasonably Achievable) criterion in the optimization of a fractionated radiotherapy schedule and to present a Windows computer program as an easy tool in order to: evaluate the Biological Equivalent Dose (BED) of a fractionated schedule; make comparisons between different treatments; and compensate a treatment when a delay has occurred, using a version of the Linear Quadratic model that takes into account the factor of accelerated repopulation. Conclusions: Delays in the normal radiotherapy schedule have to be controlled as much as possible because they can be a very important parameter for the good delivery of a treatment, principally when the tumour is fast growing, and it is necessary to evaluate them. The ASARA criterion is useful to indicate the relevance of this aspect, and computer tools like this one can help to achieve it. (author)
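
    A minimal sketch of the BED bookkeeping such a tool performs, under the basic linear-quadratic model with an optional accelerated-repopulation correction. The function names, schedules and the kick-off/doubling-time figures below are illustrative assumptions, not values taken from the program described in the record.

```python
import math

def bed(dose_per_fraction, n_fractions, alpha_beta):
    """Biologically effective dose of a fractionated schedule under the basic
    linear-quadratic model: BED = n * d * (1 + d / (alpha/beta))."""
    return n_fractions * dose_per_fraction * (1 + dose_per_fraction / alpha_beta)

def bed_with_repopulation(dose_per_fraction, n_fractions, alpha_beta,
                          overall_time, kickoff_time, alpha, doubling_time):
    """LQ model with a simple time factor for accelerated repopulation: a daily
    BED penalty ln(2) / (alpha * T_pot) is subtracted for every day beyond the
    proliferation kick-off time (all parameter values are tumour-specific)."""
    base = bed(dose_per_fraction, n_fractions, alpha_beta)
    if overall_time <= kickoff_time:
        return base
    loss_per_day = math.log(2) / (alpha * doubling_time)
    return base - loss_per_day * (overall_time - kickoff_time)

# 30 x 2 Gy versus a hypofractionated 15 x 3 Gy schedule, alpha/beta = 10 Gy.
print(bed(2.0, 30, 10.0))    # 72.0
print(bed(3.0, 15, 10.0))    # 58.5
# The conventional schedule finished late: overall time 46 d, kick-off 28 d,
# alpha = 0.3 / Gy, potential doubling time 3 d (illustrative values).
print(round(bed_with_repopulation(2.0, 30, 10.0, 46, 28, 0.3, 3.0), 1))  # 58.1
```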

  5. Low-energy limit of the extended Linear Sigma Model

    Divotgey, Florian [Johann Wolfgang Goethe-Universitaet, Institut fuer Theoretische Physik, Frankfurt am Main (Germany); Kovacs, Peter [Wigner Research Center for Physics, Hungarian Academy of Sciences, Institute for Particle and Nuclear Physics, Budapest (Hungary); GSI Helmholtzzentrum fuer Schwerionenforschung, ExtreMe Matter Institute, Darmstadt (Germany); Giacosa, Francesco [Johann Wolfgang Goethe-Universitaet, Institut fuer Theoretische Physik, Frankfurt am Main (Germany); Jan-Kochanowski University, Institute of Physics, Kielce (Poland); Rischke, Dirk H. [Johann Wolfgang Goethe-Universitaet, Institut fuer Theoretische Physik, Frankfurt am Main (Germany); University of Science and Technology of China, Interdisciplinary Center for Theoretical Study and Department of Modern Physics, Hefei, Anhui (China)

    2018-01-15

    The extended Linear Sigma Model is an effective hadronic model based on the linear realization of chiral symmetry SU(N_f)_L x SU(N_f)_R, with (pseudo)scalar and (axial-)vector mesons as degrees of freedom. In this paper, we study the low-energy limit of the extended Linear Sigma Model (eLSM) for N_f = flavors by integrating out all fields except for the pions, the (pseudo-)Nambu-Goldstone bosons of chiral symmetry breaking. The resulting low-energy effective action is identical to Chiral Perturbation Theory (ChPT) after choosing a representative for the coset space generated by chiral symmetry breaking and expanding it in powers of (derivatives of) the pion fields. The tree-level values of the coupling constants of the effective low-energy action agree remarkably well with those of ChPT. (orig.)

  6. Linear Power-Flow Models in Multiphase Distribution Networks: Preprint

    Bernstein, Andrey; Dall' Anese, Emiliano

    2017-05-26

    This paper considers multiphase unbalanced distribution systems and develops approximate power-flow models where bus-voltages, line-currents, and powers at the point of common coupling are linearly related to the nodal net power injections. The linearization approach is grounded on a fixed-point interpretation of the AC power-flow equations, and it is applicable to distribution systems featuring (i) wye connections; (ii) ungrounded delta connections; (iii) a combination of wye-connected and delta-connected sources/loads; and, (iv) a combination of line-to-line and line-to-grounded-neutral devices at the secondary of distribution transformers. The proposed linear models can facilitate the development of computationally-affordable optimization and control applications -- from advanced distribution management systems settings to online and distributed optimization routines. Performance of the proposed models is evaluated on different test feeders.

  7. Model-based development of a course of action scheduling tool

    Kristensen, Lars Michael; Mechlenborg, Peter; Zhang, Lin

    2008-01-01

    The scheduling capabilities of COAST are based on state space exploration of the embedded CPN model. Planners interact with COAST using a domain-specific graphical user interface (GUI) that hides the embedded CPN model and analysis algorithms. This means that COAST is based on a rigorous semantical model, but the use of formal methods is transparent to the users. Trials of operational planning using COAST have been conducted within the Australian Defence Force.

  8. Modelling and measurement of a moving magnet linear compressor performance

    Liang, Kun; Stone, Richard; Davies, Gareth; Dadd, Mike; Bailey, Paul

    2014-01-01

    A novel moving magnet linear compressor with clearance seals and flexure bearings has been designed and constructed. It is suitable for a refrigeration system with a compact heat exchanger, such as would be needed for CPU cooling. The performance of the compressor has been experimentally evaluated with nitrogen and a mathematical model has been developed to evaluate the performance of the linear compressor. The results from the compressor model and the measurements have been compared in terms of cylinder pressure, the ‘P–V’ loop, stroke, mass flow rate and shaft power. The cylinder pressure was not measured directly but was derived from the compressor dynamics and the motor magnetic force characteristics. The comparisons indicate that the compressor model is well validated and can be used to study the performance of this type of compressor, to help with design optimization and the identification of key parameters affecting the system transients. The electrical and thermodynamic losses were also investigated, particularly for the design point (stroke of 13 mm and pressure ratio of 3.0), since a full understanding of these can lead to an increase in compressor efficiency. - Highlights: • Model predictions of the performance of a novel moving magnet linear compressor. • Prototype linear compressor performance measurements using nitrogen. • Reconstruction of P–V loops using a model of the dynamics and electromagnetics. • Close agreement between the model and measurements for the P–V loops. • The design point motor efficiency was 74%, with potential improvements identified

  9. An Extended Flexible Job Shop Scheduling Model for Flight Deck Scheduling with Priority, Parallel Operations, and Sequence Flexibility

    Lianfei Yu

    2017-01-01

    Full Text Available Efficient scheduling for the supporting operations of aircraft on the flight deck is critical to the aircraft carrier, and even a few seconds' improvement may lead to a completely different outcome of a battle. In the paper, we ameliorate the supporting operations of carrier-based aircraft and investigate three simultaneous operation relationships during the supporting process, including precedence constraints, parallel operations, and sequence flexibility. Furthermore, multifunctional aircraft have to take off synergistically and participate in a combat cooperatively. However, their takeoff order must be prioritized during the scheduling period according to certain operational regulations. To efficiently prioritize the takeoff order while minimizing the total time budget of the whole takeoff duration, we propose a novel mixed integer linear programming (MILP) formulation for the flight deck scheduling problem. Motivated by the hardness of the MILP, we design an improved differential evolution algorithm combined with typical local search strategies to improve computational efficiency. We numerically compare the performance of our algorithm with the classical genetic algorithm and the normal differential evolution algorithm, and the results show that our algorithm obtains better scheduling schemes that can meet both the operational relations and the takeoff priority requirements.

  10. The minimal linear σ model for the Goldstone Higgs

    Feruglio, F.; Gavela, M.B.; Kanshin, K.; Machado, P.A.N.; Rigolin, S.; Saa, S.

    2016-01-01

    In the context of the minimal SO(5) linear σ-model, a complete renormalizable Lagrangian -including gauge bosons and fermions- is considered, with the symmetry softly broken to SO(4). The scalar sector describes both the electroweak Higgs doublet and the singlet σ. Varying the σ mass would allow one to sweep from the regime of perturbative ultraviolet completion to the non-linear one assumed in models in which the Higgs particle is a low-energy remnant of some strong dynamics. We analyze the phenomenological implications and constraints from precision observables and LHC data. Furthermore, we derive the d≤6 effective Lagrangian in the limit of heavy exotic fermions.

  11. A variational formulation for linear models in coupled dynamic thermoelasticity

    Feijoo, R.A.; Moura, C.A. de.

    1981-07-01

    A variational formulation for linear models in coupled dynamic thermoelasticity, which quite naturally motivates the design of a numerical scheme for the problem, is studied. When linked to regularization or penalization techniques, this algorithm may be applied to more general models, namely, the ones that consider non-linear constraints associated with variational inequalities. The basic postulates of Mechanics and Thermodynamics as well as some well-known mathematical techniques are described. A thorough description of the algorithm implementation with the finite-element method is also provided. Proofs for existence and uniqueness of solutions and for convergence of the approximations are presented, and some numerical results are exhibited. (Author) [pt

  12. Comparison of linear, mixed integer and non-linear programming methods in energy system dispatch modelling

    Ommen, Torben Schmidt; Markussen, Wiebke Brix; Elmegaard, Brian

    2014-01-01

    In the paper, three frequently used operation optimisation methods are examined with respect to their impact on the operation management of the combined utility technologies for electric power and DH (district heating) of eastern Denmark. The investigation focusses on individual plant operation differences and differences between the solutions found by each optimisation method. One of the investigated approaches utilises LP (linear programming) for optimisation, one uses LP with binary operation constraints, while the third approach uses NLP (non-linear programming). The LP model is used as a benchmark, as this type is frequently used, and has the lowest number of constraints of the three. A comparison of the optimised operation of a number of units shows significant differences between the three methods. Compared to the reference, the use of binary integer variables increases operation...

  13. Modeling the transient security constraints of natural gas network in day-ahead power system scheduling

    Yang, Jingwei; Zhang, Ning; Kang, Chongqing

    2017-01-01

    The rapid deployment of gas-fired generating units makes the power system more vulnerable to failures in the natural gas system. To reduce the risk of gas system failure and to guarantee the security of power system operation, it is necessary to take the security constraints of natural gas pipelines into account in the day-ahead power generation scheduling model. However, the minute- and hour-level dynamic characteristics of gas systems prevent accurate decision-making based simply on the steady-state gas flow model. Although the partial differential equations depict the dynamics of gas flow accurately, they are hard to embed into the power system scheduling model, which consists of algebraic equations and inequalities. This paper addresses this dilemma by proposing an algebraic transient model of the natural gas network which is similar to the branch-node model of the power network. Based...

  14. Defining a Family of Cognitive Diagnosis Models Using Log-Linear Models with Latent Variables

    Henson, Robert A.; Templin, Jonathan L.; Willse, John T.

    2009-01-01

    This paper uses log-linear models with latent variables (Hagenaars, in "Loglinear Models with Latent Variables," 1993) to define a family of cognitive diagnosis models. In doing so, the relationship between many common models is explicitly defined and discussed. In addition, because the log-linear model with latent variables is a general model for…

  15. Estimating the price elasticity of expenditure for prescription drugs in the presence of non-linear price schedules: an illustration from Quebec, Canada.

    Contoyannis, Paul; Hurley, Jeremiah; Grootendorst, Paul; Jeon, Sung-Hee; Tamblyn, Robyn

    2005-09-01

    The price elasticity of demand for prescription drugs is a crucial parameter of interest in designing pharmaceutical benefit plans. Estimating the elasticity using micro-data, however, is challenging because insurance coverage that includes deductibles, co-insurance provisions and maximum expenditure limits create a non-linear price schedule, making price endogenous (a function of drug consumption). In this paper we exploit an exogenous change in cost-sharing within the Quebec (Canada) public Pharmacare program to estimate the price elasticity of expenditure for drugs using IV methods. This approach corrects for the endogeneity of price and incorporates the concept of a 'rational' consumer who factors into consumption decisions the price they expect to face at the margin given their expected needs. The IV method is adapted from an approach developed in the public finance literature used to estimate income responses to changes in tax schedules. The instrument is based on the price an individual would face under the new cost-sharing policy if their consumption remained at the pre-policy level. Our preferred specification leads to expenditure elasticities that are in the low range of previous estimates (between -0.12 and -0.16). Naïve OLS estimates are between 1 and 4 times these magnitudes. (c) 2005 John Wiley & Sons, Ltd.

  16. Functional linear models for association analysis of quantitative traits.

    Fan, Ruzong; Wang, Yifan; Mills, James L; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao

    2013-11-01

    Functional linear models are developed in this paper for testing associations between quantitative traits and genetic variants, which can be rare variants or common variants or the combination of the two. By treating multiple genetic variants of an individual in a human population as a realization of a stochastic process, the genome of an individual in a chromosome region is a continuum of sequence data rather than discrete observations. The genome of an individual is viewed as a stochastic function that contains both linkage and linkage disequilibrium (LD) information of the genetic markers. By using techniques of functional data analysis, both fixed and mixed effect functional linear models are built to test the association between quantitative traits and genetic variants adjusting for covariates. After extensive simulation analysis, it is shown that the F-distributed tests of the proposed fixed effect functional linear models have higher power than that of sequence kernel association test (SKAT) and its optimal unified test (SKAT-O) for three scenarios in most cases: (1) the causal variants are all rare, (2) the causal variants are both rare and common, and (3) the causal variants are common. The superior performance of the fixed effect functional linear models is most likely due to its optimal utilization of both genetic linkage and LD information of multiple genetic variants in a genome and similarity among different individuals, while SKAT and SKAT-O only model the similarities and pairwise LD but do not model linkage and higher order LD information sufficiently. In addition, the proposed fixed effect models generate accurate type I error rates in simulation studies. We also show that the functional kernel score tests of the proposed mixed effect functional linear models are preferable in candidate gene analysis and small sample problems. The methods are applied to analyze three biochemical traits in data from the Trinity Students Study. © 2013 WILEY

  17. Practical likelihood analysis for spatial generalized linear mixed models

    Bonat, W. H.; Ribeiro, Paulo Justiniano

    2016-01-01

    We investigate an algorithm for maximum likelihood estimation of spatial generalized linear mixed models based on the Laplace approximation. We compare our algorithm with a set of alternative approaches for two datasets from the literature. The Rhizoctonia root rot and the Rongelap are, respectively, examples of binomial and count datasets modeled by spatial generalized linear mixed models. Our results show that the Laplace approximation provides similar estimates to Markov Chain Monte Carlo likelihood, Monte Carlo expectation maximization, and modified Laplace approximation. Some advantages of the Laplace approximation include the computation of the maximized log-likelihood value, which can be used for model selection and tests, and the possibility to obtain realistic confidence intervals for model parameters based on profile likelihoods. The Laplace approximation also avoids the tuning...

  18. Stochastic modeling of mode interactions via linear parabolized stability equations

    Ran, Wei; Zare, Armin; Hack, M. J. Philipp; Jovanovic, Mihailo

    2017-11-01

    Low-complexity approximations of the Navier-Stokes equations have been widely used in the analysis of wall-bounded shear flows. In particular, the parabolized stability equations (PSE) and Floquet theory have been employed to capture the evolution of primary and secondary instabilities in spatially-evolving flows. We augment linear PSE with Floquet analysis to formally treat modal interactions and the evolution of secondary instabilities in the transitional boundary layer via a linear progression. To this end, we leverage Floquet theory by incorporating the primary instability into the base flow and accounting for different harmonics in the flow state. A stochastic forcing is introduced into the resulting linear dynamics to model the effect of nonlinear interactions on the evolution of modes. We examine the H-type transition scenario to demonstrate how our approach can be used to model nonlinear effects and capture the growth of the fundamental and subharmonic modes observed in direct numerical simulations and experiments.

  19. A Scheduling Model for the Re-entrant Manufacturing System and Its Optimization by NSGA-II

    Masoud Rabbani

    2016-11-01

    Full Text Available In this study, a two-objective mixed-integer linear programming (MILP) model for the multi-product re-entrant flow shop scheduling problem has been designed. Two objectives are considered: one is maximization of the production rate and the other is minimization of the processing time. The system has m stations and can process several products at a time. The re-entrant flow shop scheduling problem is well known as an NP-hard problem and its complexity has been discussed by several researchers. Given that the NSGA-II algorithm is one of the strongest and most applicable algorithms for solving multi-objective optimization problems, it is used to solve this problem. To increase algorithm performance, the Taguchi technique is used to design experiments for the algorithm's parameters. Numerical experiments are presented to show the efficiency and effectiveness of the model. Finally, the results of NSGA-II are compared with the SPEA2 algorithm (Strength Pareto Evolutionary Algorithm 2). The experimental results show that the proposed algorithm performs significantly better than SPEA2.
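
    For readers unfamiliar with NSGA-II, its ranking step can be sketched in a few lines: Pareto dominance between objective vectors and a naive, quadratic-time split into successive non-dominated fronts. The objective values below are made-up placeholders for candidate schedules; NSGA-II additionally uses crowding-distance selection and genetic operators that are not shown here.

```python
def dominates(a, b):
    """Pareto dominance for minimisation: a is no worse in every objective
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Split objective vectors into successive Pareto fronts (the ranking step
    of NSGA-II, in its simplest form)."""
    remaining = list(range(len(points)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i]) for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# Objectives per candidate schedule: (total processing time, -production rate),
# both written so that smaller is better.
schedules = [(42, -8.0), (40, -7.5), (45, -9.0), (44, -7.0), (40, -8.0)]
print(non_dominated_sort(schedules))   # [[2, 4], [0, 1], [3]]
```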

  1. A Statistical Model for Uplink Intercell Interference with Power Adaptation and Greedy Scheduling

    Tabassum, Hina; Yilmaz, Ferkan; Dawy, Zaher; Alouini, Mohamed-Slim

    2012-01-01

    This paper deals with the statistical modeling of uplink inter-cell interference (ICI) considering greedy scheduling with power adaptation based on channel conditions. The derived model is implicitly generalized for any kind of shadowing and fading environments. More precisely, we develop a generic model for the distribution of ICI based on the locations of the allocated users and their transmit powers. The derived model is utilized to evaluate important network performance metrics such as ergodic capacity, average fairness and average power preservation numerically. Monte-Carlo simulation details are included to support the analysis and show the accuracy of the derived expressions. In parallel to the literature, we show that greedy scheduling with power adaptation reduces the ICI, average power consumption of users, and enhances the average fairness among users, compared to the case without power adaptation. © 2012 IEEE.
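
    A Monte-Carlo sketch of the scenario analysed in the record: in each drop a greedy scheduler picks the user with the best own-cell channel, that user inverts its channel (up to a power cap), and the resulting interference at a neighbouring cell is recorded. The fading model, gains and parameters are simplifying assumptions for illustration, not the paper's closed-form derivation.

```python
import numpy as np

rng = np.random.default_rng(42)
n_drops, n_users = 20000, 10
target_rx_power, p_max = 1.0, 10.0

# Per-drop Rayleigh-fading power gains towards the serving base station ...
own_gain = rng.exponential(1.0, size=(n_drops, n_users))
# ... and towards the victim (neighbouring) base station.
cross_gain = rng.exponential(0.1, size=(n_drops, n_users))

# Greedy scheduler: the user with the best own-cell channel transmits.
scheduled = own_gain.argmax(axis=1)
rows = np.arange(n_drops)

# Power adaptation: channel inversion towards a target received power, capped at p_max.
tx_power = np.minimum(target_rx_power / own_gain[rows, scheduled], p_max)

# Inter-cell interference seen at the neighbouring cell.
ici = tx_power * cross_gain[rows, scheduled]
print("mean ICI:", ici.mean())
print("95th percentile ICI:", np.quantile(ici, 0.95))
print("mean transmit power:", tx_power.mean())
```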

  2. Analysis and Enhancement of IEEE 802.15.4e DSME Beacon Scheduling Model

    Kwang-il Hwang

    2014-01-01

    Full Text Available In order to construct a successful Internet of Things (IoT), reliable network construction and maintenance in a sensor domain should be supported. However, IEEE 802.15.4, which is the most representative wireless standard for the IoT, still has problems in constructing a large-scale sensor network, such as beacon collision. To overcome some problems in IEEE 802.15.4, the 15.4e task group proposed various modes of operation. In particular, the IEEE 802.15.4e deterministic and synchronous multichannel extension (DSME) mode presents a novel scheduling model to solve beacon collision problems. However, the DSME model specified in the 15.4e draft does not present a concrete design model but a conceptual abstract model. Therefore, in this paper we introduce a DSME beacon scheduling model and present a concrete design model. Furthermore, the validity and performance of DSME are evaluated through experiments. Based on the experiment results, we analyze the problems and limitations of DSME, present solutions step by step, and finally propose an enhanced DSME beacon scheduling model. Through additional experiments, we prove the performance superiority of the enhanced DSME beacon scheduling model.

  3. Linear modeling of possible mechanisms for parkinson tremor generation

    Lohnberg, P.

    1978-01-01

    The power of Parkinson tremor is expressed in terms of possibly changed frequency response functions between relevant variables in the neuromuscular system. The derivation starts out from a linear loopless equivalent model of mechanisms for general tremor generation. Hypothetical changes in this

  4. Current algebra of classical non-linear sigma models

    Forger, M.; Laartz, J.; Schaeper, U.

    1992-01-01

    The current algebra of classical non-linear sigma models on arbitrary Riemannian manifolds is analyzed. It is found that introducing, in addition to the Noether current j μ associated with the global symmetry of the theory, a composite scalar field j, the algebra closes under Poisson brackets. (orig.)

  6. Non Linear sigma models probing the string structure

    Abdalla, E.

    1987-01-01

    The introduction of a term depending on the extrinsic curvature to the string action, and related non linear sigma models defined on a symmetric space SO(D)/SO(2) x SO(d-2), is discussed. Couplings to fermions are also treated. (author) [pt

  7. Confidence Intervals for Assessing Heterogeneity in Generalized Linear Mixed Models

    Wagler, Amy E.

    2014-01-01

    Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…

  8. Penalized Estimation in Large-Scale Generalized Linear Array Models

    Lund, Adam; Vincent, Martin; Hansen, Niels Richard

    2017-01-01

    Large-scale generalized linear array models (GLAMs) can be challenging to fit. Computation and storage of its tensor product design matrix can be impossible due to time and memory constraints, and previously considered design matrix free algorithms do not scale well with the dimension...

  9. Expressions for linearized perturbations in ideal-fluid cosmological models

    Ratra, B.

    1988-01-01

    We present closed-form solutions of the relativistic linear perturbation equations (in synchronous gauge) that govern the evolution of inhomogeneities in homogeneous, spatially flat, ideal-fluid, cosmological models. These expressions, which are valid for irregularities on any scale, allow one to analytically interpolate between the known approximate solutions which are valid at early times and at late times

  10. S-AMP for non-linear observation models

    Cakmak, Burak; Winther, Ole; Fleury, Bernard H.

    2015-01-01

    Recently we presented the S-AMP approach, an extension of approximate message passing (AMP), to be able to handle general invariant matrix ensembles. In this contribution we extend S-AMP to non-linear observation models. We obtain generalized AMP (GAMP) as the special case when the measurement...

  11. Plane answers to complex questions the theory of linear models

    Christensen, Ronald

    1987-01-01

    This book was written to rigorously illustrate the practical application of the projective approach to linear models. To some, this may seem contradictory. I contend that it is possible to be both rigorous and illustrative and that it is possible to use the projective approach in practical applications. Therefore, unlike many other books on linear models, the use of projections and subspaces does not stop after the general theory. They are used wherever I could figure out how to do it. Solving normal equations and using calculus (outside of maximum likelihood theory) are anathema to me. This is because I do not believe that they contribute to the understanding of linear models. I have similar feelings about the use of side conditions. Such topics are mentioned when appropriate and thenceforward avoided like the plague. On the other side of the coin, I just as strenuously reject teaching linear models with a coordinate free approach. Although Joe Eaton assures me that the issues in complicated problems freq...

  12. A simulation model of a coordinated decentralized linear supply chain

    Ashayeri, Jalal; Cannella, S.; Lopez Campos, M.; Miranda, P.A.

    2015-01-01

    This paper presents a simulation-based study of a coordinated, decentralized linear supply chain (SC) system. In the proposed model, any supply tier considers its successors as part of its inventory system and generates replenishment orders on the basis of its partners’ operational information. We

  13. Mathematical modelling and linear stability analysis of laser fusion cutting

    Hermanns, Torsten; Schulz, Wolfgang [RWTH Aachen University, Chair for Nonlinear Dynamics, Steinbachstr. 15, 52047 Aachen (Germany); Vossen, Georg [Niederrhein University of Applied Sciences, Chair for Applied Mathematics and Numerical Simulations, Reinarzstr.. 49, 47805 Krefeld (Germany); Thombansen, Ulrich [RWTH Aachen University, Chair for Laser Technology, Steinbachstr. 15, 52047 Aachen (Germany)

    2016-06-08

    A model for laser fusion cutting is presented and investigated by linear stability analysis in order to study the tendency for dynamic behavior and subsequent ripple formation. The result is a so called stability function that describes the correlation of the setting values of the process and the process’ amount of dynamic behavior.

  14. Performances Of Estimators Of Linear Models With Autocorrelated ...

    The performances of five estimators of linear models with Autocorrelated error terms are compared when the independent variable is autoregressive. The results reveal that the properties of the estimators when the sample size is finite is quite similar to the properties of the estimators when the sample size is infinite although ...

  15. Performances of estimators of linear auto-correlated error model ...

    The performances of five estimators of linear models with autocorrelated disturbance terms are compared when the independent variable is exponential. The results reveal that for both small and large samples, the Ordinary Least Squares (OLS) compares favourably with the Generalized least Squares (GLS) estimators in ...

  16. A non-linear dissipative model of magnetism

    Durand, P.; Paidarová, Ivana

    2010-01-01

    Vol. 89, No. 6 (2010), p. 67004, ISSN 1286-4854. R&D Projects: GA AV ČR IAA100400501. Institutional research plan: CEZ:AV0Z40400503. Keywords: non-linear dissipative model of magnetism * thermodynamics * physical chemistry. Subject RIV: CF - Physical; Theoretical Chemistry. http://epljournal.edpsciences.org/

  17. Modeling and verifying non-linearities in heterodyne displacement interferometry

    Cosijns, S.J.A.G.; Haitjema, H.; Schellekens, P.H.J.

    2002-01-01

    The non-linearities in a heterodyne laser interferometer system, occurring from the phase measurement system of the interferometer and from non-ideal polarization effects of the optics, are modeled into one analytical expression which includes the initial polarization state of the laser source, the

  18. Generalized linear longitudinal mixed models with linear covariance structure and multiplicative random effects

    Holst, René; Jørgensen, Bent

    2015-01-01

    The paper proposes a versatile class of multiplicative generalized linear longitudinal mixed models (GLLMM) with additive dispersion components, based on explicit modelling of the covariance structure. The class incorporates a longitudinal structure into the random effects models and retains a marginal as well as a conditional interpretation. The estimation procedure is based on a computationally efficient quasi-score method for the regression parameters combined with a REML-like bias-corrected Pearson estimating function for the dispersion and correlation parameters. This avoids the multidimensional integral of the conventional GLMM likelihood and allows an extension of the robust empirical sandwich estimator for use with both association and regression parameters. The method is applied to a set of otolith data, used for age determination of fish.

  19. Identifiability Results for Several Classes of Linear Compartment Models.

    Meshkat, Nicolette; Sullivant, Seth; Eisenberg, Marisa

    2015-08-01

    Identifiability concerns finding which unknown parameters of a model can be estimated, uniquely or otherwise, from given input-output data. If some subset of the parameters of a model cannot be determined given input-output data, then we say the model is unidentifiable. In this work, we study linear compartment models, which are a class of biological models commonly used in pharmacokinetics, physiology, and ecology. In past work, we used commutative algebra and graph theory to identify a class of linear compartment models that we call identifiable cycle models, which are unidentifiable but have the simplest possible identifiable functions (so-called monomial cycles). Here we show how to modify identifiable cycle models by adding inputs, adding outputs, or removing leaks, in such a way that we obtain an identifiable model. We also prove a constructive result on how to combine identifiable models, each corresponding to strongly connected graphs, into a larger identifiable model. We apply these theoretical results to several real-world biological models from physiology, cell biology, and ecology.

  20. Finite element modeling of nanotube structures linear and non-linear models

    Awang, Mokhtar; Muhammad, Ibrahim Dauda

    2016-01-01

    This book presents a new approach to modeling carbon structures such as graphene and carbon nanotubes using finite element methods, and addresses the latest advances in numerical studies for these materials. Based on the available findings, the book develops an effective finite element approach for modeling the structure and the deformation of graphene-based materials. Further, the modeling procedure for single-walled and multi-walled carbon nanotubes is demonstrated in detail.

  1. Real-Time Robust Adaptive Modeling and Scheduling for an Electronic Commerce Server

    Du, Bing; Ruan, Chun

    With the increasing importance and pervasiveness of Internet services, it is becoming a challenge for the proliferation of electronic commerce services to provide performance guarantees under extreme overload. This paper describes a real-time optimization modeling and scheduling approach for performance guarantee of electronic commerce servers. We show that an electronic commerce server may be simulated as a multi-tank system. A robust adaptive server model is subject to unknown additive load disturbances and uncertain model matching. Overload control techniques are based on adaptive admission control to achieve timing guarantees. We evaluate the performance of the model using a complex simulation that is subjected to varying model parameters and massive overload.

  2. Linear Dynamics Model for Steam Cooled Fast Power Reactors

    Vollmer, H

    1968-04-15

    A linear analytical dynamic model is developed for steam cooled fast power reactors. All main components of such a plant are investigated on a general though relatively simple basis. The model is distributed in those parts concerning the core but lumped as to the external plant components. Coolant is considered as compressible and treated by the actual steam law. Combined use of analogue and digital computer seems most attractive.

  3. Deterministic operations research models and methods in linear optimization

    Rader, David J

    2013-01-01

    Uniquely blends mathematical theory and algorithm design for understanding and modeling real-world problems Optimization modeling and algorithms are key components to problem-solving across various fields of research, from operations research and mathematics to computer science and engineering. Addressing the importance of the algorithm design process. Deterministic Operations Research focuses on the design of solution methods for both continuous and discrete linear optimization problems. The result is a clear-cut resource for understanding three cornerstones of deterministic operations resear

  4. One-loop dimensional reduction of the linear σ model

    Malbouisson, A.P.C.; Silva-Neto, M.B.; Svaiter, N.F.

    1997-05-01

    We perform the dimensional reduction of the linear σ model at the one-loop level. The effective theory, obtained from the integration over the nonzero Matsubara frequencies, is exhibited. Thermal mass and coupling constant renormalization constants are given, as well as the thermal renormalization group which controls the dependence of the counterterms on the temperature. We also recover, for the reduced theory, the vacuum instability of the model for large N. (author)

  5. Artificial Neural Network versus Linear Models Forecasting Doha Stock Market

    Yousif, Adil; Elfaki, Faiz

    2017-12-01

    The purpose of this study is to determine the instability of the Doha stock market and to develop forecasting models. Linear time series models are used and compared with a nonlinear Artificial Neural Network (ANN), namely the Multilayer Perceptron (MLP) technique. The aim is to establish the most useful model based on daily and monthly data collected from the Qatar exchange for the period from January 2007 to January 2015. Models are proposed for the general index of the Qatar stock exchange as well as for several other sectors. With the help of these models, the Doha stock market index and the various sectors were predicted. The study was conducted using various time series techniques to study and analyze the data trend and produce appropriate results. After applying several models, such as the quadratic trend model, double exponential smoothing, and ARIMA, it was concluded that ARIMA (2,2) was the most suitable linear model for the daily general index. However, the ANN model was found to be more accurate than the time series models.
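
    A minimal sketch of the linear side of such a comparison is given below: it fits an ARIMA model with statsmodels to a synthetic stand-in series and produces a short forecast. The order (2, 2, 0) is only an assumption prompted by the "ARIMA (2,2)" wording in the record; the actual Doha index data and specification are not reproduced here.

```python
# Fit an ARIMA model to a synthetic, twice-integrated series and forecast five steps ahead.
# The series and the (2, 2, 0) order are stand-ins, not the study's data or model.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
index = np.cumsum(np.cumsum(rng.normal(0.0, 1.0, 500)))  # integrated twice, like d = 2

fit = ARIMA(index, order=(2, 2, 0)).fit()  # p = 2 AR terms, d = 2 differences, no MA terms
print(fit.params)
print(fit.forecast(steps=5))               # five-step-ahead point forecast
```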

  6. A linearized dispersion relation for orthorhombic pseudo-acoustic modeling

    Song, Xiaolei; Alkhalifah, Tariq Ali

    2012-01-01

    Wavefield extrapolation in acoustic orthorhombic anisotropic media suffers from wave-mode coupling and stability limitations in the parameter range. We introduce a linearized form of the dispersion relation for acoustic orthorhombic media to model acoustic wavefields. We apply the lowrank approximation approach to handle the corresponding space-wavenumber mixed-domain operator. Numerical experiments show that the proposed wavefield extrapolator is accurate and practically free of dispersions. Further, there is no coupling of qSv and qP waves, because we use the analytical dispersion relation. No constraints on Thomsen's parameters are required for stability. The linearized expression may provide useful application for parameter estimation in orthorhombic media.

  7. Personnel scheduling using an integer programming model- an application at Avanti Blue-Nile Hotels.

    Kassa, Biniyam Asmare; Tizazu, Anteneh Eshetu

    2013-01-01

    In this paper, we report perhaps a first-of-its-kind application of management science in the Ethiopian hotel industry. Avanti Blue Nile Hotels, a newly established five-star hotel in Bahir Dar, is the company for which we developed an integer programming model that determines an optimal weekly shift schedule for the hotel's engineering department personnel while satisfying several constraints, including weekly rest requirements per employee, rest requirements between working shifts per employee, the required number of personnel per shift, and other constraints. The model is implemented using an Excel Solver routine. The model enables the company's personnel department management to develop a fair personnel schedule as needed and to effectively utilize personnel resources while satisfying several technical, legal and economic requirements. These encouraging achievements make us optimistic about the gains other Ethiopian organizations can realize by introducing management science approaches in their management planning and decision making systems.
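
    The toy sketch below illustrates the style of 0-1 shift-assignment model the record describes, solved here with the open-source PuLP modeller rather than the Excel Solver routine the record mentions; the staff list, staffing requirement and constraints are illustrative, not the hotel's actual formulation.

```python
# Toy 0-1 shift-assignment model: cover every day/night shift for a week, at most one
# shift per engineer per day, while minimizing the busiest engineer's workload.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

staff = ["E1", "E2", "E3", "E4"]
days = range(7)
shifts = ["day", "night"]
required = 1                                  # illustrative: one engineer per shift

prob = LpProblem("weekly_shift_schedule", LpMinimize)
x = {(e, d, s): LpVariable(f"x_{e}_{d}_{s}", cat=LpBinary)
     for e in staff for d in days for s in shifts}
zmax = LpVariable("max_shifts", lowBound=0)

for d in days:
    for s in shifts:
        prob += lpSum(x[e, d, s] for e in staff) == required     # shift coverage
for e in staff:
    for d in days:
        prob += lpSum(x[e, d, s] for s in shifts) <= 1            # at most one shift per day
    prob += lpSum(x[e, d, s] for d in days for s in shifts) <= zmax   # workload of e

prob += 1 * zmax                              # objective: minimize the heaviest load
prob.solve()
print("busiest engineer works", int(zmax.value()), "shifts")
```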

  8. Non-linear sigma model on the fuzzy supersphere

    Kurkcuoglu, Seckin

    2004-01-01

    In this note we develop fuzzy versions of the supersymmetric non-linear sigma model on the supersphere S^(2,2). In hep-th/0212133 Bott projectors have been used to obtain the fuzzy CP^1 model. Our approach utilizes supersymmetric extensions of these projectors. Here we obtain these (super-)projectors and quantize them in a fashion similar to the one given in hep-th/0212133. We discuss the interpretation of the resulting model as a finite dimensional matrix model. (author)

  9. Optimal difference-based estimation for partially linear models

    Zhou, Yuejin; Cheng, Yebin; Dai, Wenlin; Tong, Tiejun

    2017-01-01

    Difference-based methods have attracted increasing attention for analyzing partially linear models in the recent literature. In this paper, we first propose to solve the optimal sequence selection problem in difference-based estimation for the linear component. To achieve the goal, a family of new sequences and a cross-validation method for selecting the adaptive sequence are proposed. We demonstrate that the existing sequences are only extreme cases in the proposed family. Secondly, we propose a new estimator for the residual variance by fitting a linear regression method to some difference-based estimators. Our proposed estimator achieves the asymptotic optimal rate of mean squared error. Simulation studies also demonstrate that our proposed estimator performs better than the existing estimator, especially when the sample size is small and the nonparametric function is rough.
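
    As background for this record, the numpy sketch below shows the classical first-order difference-based idea for a partially linear model y_i = x_i β + f(t_i) + ε_i: differencing adjacent observations (ordered in t) nearly removes the smooth component, ordinary least squares on the differences estimates β, and a Rice-type estimator recovers the residual variance. This is only the textbook baseline, not the optimal sequences or the new variance estimator proposed in the paper.

```python
# First-order difference-based estimation in a partially linear model (classical baseline).
import numpy as np

rng = np.random.default_rng(1)
n = 500
t = np.sort(rng.uniform(0, 1, n))
x = rng.normal(size=n)
beta_true, sigma_true = 2.0, 0.5
y = x * beta_true + np.sin(2 * np.pi * t) + rng.normal(0, sigma_true, n)

# Differencing adjacent observations (ordered in t) nearly cancels the smooth f(t).
dy, dx = np.diff(y), np.diff(x)
beta_hat = np.sum(dx * dy) / np.sum(dx * dx)        # OLS on the differenced data

resid = dy - dx * beta_hat
sigma2_hat = np.sum(resid ** 2) / (2 * (n - 1))     # Rice-type variance estimator
print(beta_hat, np.sqrt(sigma2_hat))
```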

  10. Modeling and analysis of linear hyperbolic systems of balance laws

    Bartecki, Krzysztof

    2016-01-01

    This monograph focuses on the mathematical modeling of distributed parameter systems in which mass/energy transport or wave propagation phenomena occur and which are described by partial differential equations of hyperbolic type. The case of linear (or linearized) 2 x 2 hyperbolic systems of balance laws is considered, i.e., systems described by two coupled linear partial differential equations with two variables representing physical quantities, depending on both time and one-dimensional spatial variable. Based on practical examples of a double-pipe heat exchanger and a transportation pipeline, two typical configurations of boundary input signals are analyzed: collocated, wherein both signals affect the system at the same spatial point, and anti-collocated, in which the input signals are applied to the two different end points of the system. The results of this book emerge from the practical experience of the author gained during his studies conducted in the experimental installation of a heat exchange cente...

  11. Optimal difference-based estimation for partially linear models

    Zhou, Yuejin

    2017-12-16

    Difference-based methods have attracted increasing attention for analyzing partially linear models in the recent literature. In this paper, we first propose to solve the optimal sequence selection problem in difference-based estimation for the linear component. To achieve the goal, a family of new sequences and a cross-validation method for selecting the adaptive sequence are proposed. We demonstrate that the existing sequences are only extreme cases in the proposed family. Secondly, we propose a new estimator for the residual variance by fitting a linear regression method to some difference-based estimators. Our proposed estimator achieves the asymptotic optimal rate of mean squared error. Simulation studies also demonstrate that our proposed estimator performs better than the existing estimator, especially when the sample size is small and the nonparametric function is rough.

  12. Modeling the World Health Organization Disability Assessment Schedule II using non-parametric item response models.

    Galindo-Garre, Francisca; Hidalgo, María Dolores; Guilera, Georgina; Pino, Oscar; Rojo, J Emilio; Gómez-Benito, Juana

    2015-03-01

    The World Health Organization Disability Assessment Schedule II (WHO-DAS II) is a multidimensional instrument developed for measuring disability. It comprises six domains (understanding and communicating, getting around, self-care, getting along with others, life activities, and participation in society). The main purpose of this paper is the evaluation of the psychometric properties of each domain of the WHO-DAS II with parametric and non-parametric Item Response Theory (IRT) models. A secondary objective is to assess whether the WHO-DAS II items within each domain form a hierarchy of invariantly ordered severity indicators of disability. A sample of 352 patients with a schizophrenia spectrum disorder is used in this study. The 36-item WHO-DAS II was administered during the consultation. Partial Credit and Mokken scale models are used to study the psychometric properties of the questionnaire. The psychometric properties of the WHO-DAS II scale are satisfactory for all the domains. However, we identify a few items that do not discriminate satisfactorily between different levels of disability and cannot be invariantly ordered in the scale. In conclusion, the WHO-DAS II can be used to assess overall disability in patients with schizophrenia, but some domains are too general to assess functionality in these patients because they contain items that are not applicable to this pathology. Copyright © 2014 John Wiley & Sons, Ltd.

  13. A penalized framework for distributed lag non-linear models.

    Gasparrini, Antonio; Scheipl, Fabian; Armstrong, Ben; Kenward, Michael G

    2017-09-01

    Distributed lag non-linear models (DLNMs) are a modelling tool for describing potentially non-linear and delayed dependencies. Here, we illustrate an extension of the DLNM framework through the use of penalized splines within generalized additive models (GAM). This extension offers built-in model selection procedures and the possibility of accommodating assumptions on the shape of the lag structure through specific penalties. In addition, this framework includes, as special cases, simpler models previously proposed for linear relationships (DLMs). Alternative versions of penalized DLNMs are compared with each other and with the standard unpenalized version in a simulation study. Results show that this penalized extension to the DLNM class provides greater flexibility and improved inferential properties. The framework exploits recent theoretical developments of GAMs and is implemented using efficient routines within freely available software. Real-data applications are illustrated through two reproducible examples in time series and survival analysis. © 2017 The Authors Biometrics published by Wiley Periodicals, Inc. on behalf of International Biometric Society.

  14. General mirror pairs for gauged linear sigma models

    Aspinwall, Paul S.; Plesser, M. Ronen [Departments of Mathematics and Physics, Duke University,Box 90320, Durham, NC 27708-0320 (United States)

    2015-11-05

    We carefully analyze the conditions for an abelian gauged linear σ-model to exhibit nontrivial IR behavior described by a nonsingular superconformal field theory determining a superstring vacuum. This is done without reference to a geometric phase, by associating singular behavior to a noncompact space of (semi-)classical vacua. We find that models determined by reflexive combinatorial data are nonsingular for generic values of their parameters. This condition has the pleasant feature that the mirror of a nonsingular gauged linear σ-model is another such model, but it is clearly too strong and we provide an example of a non-reflexive mirror pair. We discuss a weaker condition inspired by considering extremal transitions, which is also mirror symmetric and which we conjecture to be sufficient. We apply these ideas to extremal transitions and to understanding the way in which both Berglund-Hübsch mirror symmetry and the Vafa-Witten mirror orbifold with discrete torsion can be seen as special cases of the general combinatorial duality of gauged linear σ-models. In the former case we encounter an example showing that our weaker condition is still not necessary.

  15. General mirror pairs for gauged linear sigma models

    Aspinwall, Paul S.; Plesser, M. Ronen

    2015-01-01

    We carefully analyze the conditions for an abelian gauged linear σ-model to exhibit nontrivial IR behavior described by a nonsingular superconformal field theory determining a superstring vacuum. This is done without reference to a geometric phase, by associating singular behavior to a noncompact space of (semi-)classical vacua. We find that models determined by reflexive combinatorial data are nonsingular for generic values of their parameters. This condition has the pleasant feature that the mirror of a nonsingular gauged linear σ-model is another such model, but it is clearly too strong and we provide an example of a non-reflexive mirror pair. We discuss a weaker condition inspired by considering extremal transitions, which is also mirror symmetric and which we conjecture to be sufficient. We apply these ideas to extremal transitions and to understanding the way in which both Berglund-Hübsch mirror symmetry and the Vafa-Witten mirror orbifold with discrete torsion can be seen as special cases of the general combinatorial duality of gauged linear σ-models. In the former case we encounter an example showing that our weaker condition is still not necessary.

  16. Robust Linear Models for Cis-eQTL Analysis.

    Rantalainen, Mattias; Lindgren, Cecilia M; Holmes, Christopher C

    2015-01-01

    Expression Quantitative Trait Loci (eQTL) analysis enables characterisation of functional genetic variation influencing expression levels of individual genes. In outbred populations, including humans, eQTLs are commonly analysed using the conventional linear model, adjusting for relevant covariates, assuming an allelic dosage model and a Gaussian error term. However, gene expression data generally have noise that induces heavy-tailed errors relative to the Gaussian distribution and often include atypical observations, or outliers. Such departures from modelling assumptions can lead to an increased rate of type II errors (false negatives), and to some extent also type I errors (false positives). Careful model checking can reduce the risk of type I errors but often not type II errors, since it is generally too time-consuming to carefully check all models with a non-significant effect in large-scale and genome-wide studies. Here we propose the application of a robust linear model for eQTL analysis to reduce adverse effects of deviations from the assumption of Gaussian residuals. We present results from a simulation study as well as results from the analysis of real eQTL data sets. Our findings suggest that in many situations robust models have the potential to provide more reliable eQTL results compared to conventional linear models, particularly with respect to reducing type II errors due to non-Gaussian noise. Post-genomic data, such as that generated in genome-wide eQTL studies, are often noisy and frequently contain atypical observations. Robust statistical models have the potential to provide more reliable results and increased statistical power under non-Gaussian conditions. The results presented here suggest that robust models should be considered routinely alongside other commonly used methodologies for eQTL analysis.
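
    The sketch below illustrates the contrast the record describes on synthetic data: an ordinary least-squares fit versus a robust Huber M-estimator fit of expression on allelic dosage, with a few outlying expression values. This is generic statsmodels usage, not the authors' eQTL pipeline.

```python
# OLS vs. robust (Huber) regression of expression on allelic dosage with outliers.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
dosage = rng.integers(0, 3, 200).astype(float)          # 0/1/2 copies of the minor allele
expr = 0.4 * dosage + rng.standard_t(df=3, size=200)    # heavy-tailed noise
expr[:5] += 8.0                                         # a handful of outliers

X = sm.add_constant(dosage)
ols_fit = sm.OLS(expr, X).fit()
rlm_fit = sm.RLM(expr, X, M=sm.robust.norms.HuberT()).fit()
print("OLS slope:", ols_fit.params[1], " robust slope:", rlm_fit.params[1])
```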

  17. Job shop scheduling model for non-identic machine with fixed delivery time to minimize tardiness

    Kusuma, K. K.; Maruf, A.

    2016-02-01

    Scheduling problems with non-identical machines, low-utilization characteristics and fixed delivery times are frequent in the manufacturing industry. This paper proposes a mathematical model to minimize total tardiness for non-identical machines in a job shop environment. The model is categorized as an integer linear programming model and uses a branch and bound algorithm as the solution method. Fixed delivery times are used as the main constraint, and processing times differ between machines. The results of the proposed model show that the utilization of production machines can be increased with minimal tardiness when fixed delivery times are used as the constraint.
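
    As a much-reduced illustration of this style of formulation, the sketch below is a single-machine tardiness-minimizing MILP with disjunctive big-M sequencing and fixed delivery (due) times, written with PuLP. The paper's model additionally handles non-identical machines in a job shop and is not reproduced here; the job data are invented.

```python
# Single-machine total-tardiness MILP with big-M disjunctive sequencing (illustrative).
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

proc = {"J1": 4, "J2": 3, "J3": 6}         # illustrative processing times
due = {"J1": 6, "J2": 5, "J3": 14}         # fixed delivery times
jobs = list(proc)
M = sum(proc.values())                      # big-M: length of the horizon

prob = LpProblem("min_total_tardiness", LpMinimize)
start = {j: LpVariable(f"s_{j}", lowBound=0) for j in jobs}
tard = {j: LpVariable(f"T_{j}", lowBound=0) for j in jobs}
before = {(i, j): LpVariable(f"y_{i}_{j}", cat=LpBinary)
          for i in jobs for j in jobs if i < j}

for i, j in before:
    # either i precedes j or j precedes i on the single machine
    prob += start[i] + proc[i] <= start[j] + M * (1 - before[i, j])
    prob += start[j] + proc[j] <= start[i] + M * before[i, j]
for j in jobs:
    prob += tard[j] >= start[j] + proc[j] - due[j]      # tardiness vs. delivery time

prob += lpSum(tard.values())                            # minimize total tardiness
prob.solve()
print({j: (start[j].value(), tard[j].value()) for j in jobs})
```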

  18. Comparing three scheduling methods using BIM models in the Last Planner System

    Brioso Xavier

    2017-12-01

    Full Text Available This article presents strategies for teaching scheduling methods such as takt-time, flowlines, and point-to-point precedence relations (PTPPRs) using building information modeling (BIM) models in the Last Planner System. This article is the extended version of the article entitled “Teaching Takt-Time, Flowline and Point-to-point Precedence Relations: A Peruvian Case Study,” which has been published in Procedia Engineering (Vol. 196, 2017, pages 666-673). A case study is conducted with final-year students of civil engineering at the Pontifical Catholic University of Peru. The mock-up project is an educational building with highly repetitive processes in the structural works phase. First, traditional tools such as Excel spreadsheets and 2D drawings were used to teach production system design with takt-time, flowlines, and PTPPR. Second, 3D and 4D models with Revit 2016 and Navisworks 2016 were used to integrate the previous schedules with a BIM model and to identify its strengths and weaknesses. Finally, Vico Office was used for the automation of schedules and comparison of the methods in 4D and 5D. This article describes the lectures, workshops, and simulations employed, as well as the feedback from students and researchers. The success of the teaching strategy is reflected in the survey responses from students and the final perceptions of the construction management tools presented.

  19. Inclusion of molecular biotherapies with radical radiotherapy: modeling of combined modality treatment schedules

    Jones, Bleddyn; Dale, Roger G.

    1999-01-01

    Purpose: The use of molecular biology-based therapies concurrently with radical radiotherapy is likely to offer potential benefits, but there is relatively little use of classical radiobiology in the rationale for such applications. The biological mechanisms that govern the outcomes of radiotherapy need to be completely understood before rational application and optimization of such adjuvant biotherapies with radiotherapy. Methods and Materials: Existing biomathematical models of radiotherapy can be used to explore the possible impact of biotherapies that modify tumor proliferation rates and/or radiosensitivity parameters during radiotherapy. Equations that show how to incorporate biotherapies with the linear-quadratic model of radiation cell kill are presented. Also considered are changes in tumor physiology, such as improved blood flow with enhanced delivery of biotherapy to the tumor cells and accelerated clonogen repopulation during radiotherapy. Monte Carlo random sampling methods are used to simulate these effects in heterogeneous tumor populations with variation in radiosensitivities, clonogen numbers, and doubling times, as well as variations in repopulation onset rates and in vascular perfusion rates with time. Results: The time onset and duration of exposure of each type of biotherapy during radical radiotherapy can influence the predicted tumor cure probabilities in subtle ways. In general, the efficacy of biotherapies that radiosensitize will depend upon the number of radiotherapy fractions that are sensitized and the change in blood flow with time during radiotherapy. Biotherapies that control repopulation will depend not only on the duration of exposure but also, where accelerated repopulation occurs, on the time at which biotherapy is initiated during radiotherapy. From the ranges of radiobiological parameters and biotherapy efficacies assumed for exploratory examples, large changes of tumor control probability (TCP) are encountered in individual
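
    The following generic sketch (not the authors' equations) shows how linear-quadratic cell kill with exponential repopulation feeds a Poisson tumour-control probability, and how a hypothetical biotherapy might be represented as scaling the radiosensitivity and the repopulation rate; all parameter values are illustrative only.

```python
# Poisson TCP with LQ cell kill and repopulation; a crude "biotherapy" scales alpha and repopulation.
import math

def tcp(n_fractions, d, alpha, beta, T, Tpot, N0, sens_factor=1.0, repop_factor=1.0):
    """Poisson TCP with LQ cell kill and exponential clonogen repopulation (illustrative)."""
    cell_kill = n_fractions * (sens_factor * alpha * d + beta * d ** 2)
    repop = repop_factor * math.log(2) / Tpot * T        # ln2/Tpot per day over T days
    surviving = N0 * math.exp(-cell_kill + repop)         # expected surviving clonogens
    return math.exp(-surviving)

base = tcp(30, 2.0, alpha=0.3, beta=0.03, T=40, Tpot=5.0, N0=1e7)
with_bio = tcp(30, 2.0, alpha=0.3, beta=0.03, T=40, Tpot=5.0, N0=1e7,
               sens_factor=1.1, repop_factor=0.5)         # hypothetical biotherapy effect
print(f"TCP without biotherapy: {base:.3f}, with biotherapy: {with_bio:.3f}")
```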

  20. Linear models for joint association and linkage QTL mapping

    Fernando Rohan L

    2009-09-01

    Full Text Available Abstract Background: Populational linkage disequilibrium and within-family linkage are commonly used for QTL mapping and marker-assisted selection. The combination of both results in more robust and accurate locations of the QTL, but models proposed so far have been either single-marker, complex in practice, or well fit only to a particular family structure. Results: We herein present linear model theory to derive additive effects of the QTL alleles in any member of a general pedigree, conditional on observed markers and pedigree, accounting for possible linkage disequilibrium among QTLs and markers. The model is based on association analysis in the founders; further, the additive effect of the QTLs transmitted to the descendants is a weighted (by the probabilities of transmission) average of the substitution effects of founders' haplotypes. The model allows for non-complete linkage disequilibrium between QTLs and markers in the founders. Two submodels are presented: a simple and easy-to-implement Haley-Knott type regression for half-sib families, and a general mixed (variance component) model for general pedigrees. The model can use information from all markers. The performance of the regression method is compared by simulation with a more complex IBD method by Meuwissen and Goddard. Numerical examples are provided. Conclusion: The linear model theory provides a useful framework for QTL mapping with dense marker maps. Results show similar accuracies but a bias of the IBD method towards the center of the region. Computations for the linear regression model are extremely simple, in contrast with IBD methods. Extensions of the model to genomic selection and multi-QTL mapping are straightforward.

  1. Hypothetical operation model for the multi-bed system of the Tritium plant based on the scheduling approach

    Lee, Jae-Uk, E-mail: eslee@dongguk.edu [Department of Chemical Engineering, Pohang University of Science and Technology, San 31, Hyoja-Dong, Pohang 790-784 (Korea, Republic of); Chang, Min Ho; Yun, Sei-Hun [National Fusion Research Institute, 169-148-gil Kwahak-ro, Yusong-gu, Daejon 34133 (Korea, Republic of); Lee, Euy Soo [Department of Chemical & Biochemical Engineering, Dongguk University, Seoul 100-715 (Korea, Republic of); Lee, In-Beum [Department of Chemical Engineering and Graduate School of Engineering Mastership, Pohang University of Science and Technology, San 31, Hyoja-Dong, Pohang 790-784 (Korea, Republic of); Lee, Kun-Hong [Department of Chemical Engineering, Pohang University of Science and Technology, San 31, Hyoja-Dong, Pohang 790-784 (Korea, Republic of)

    2016-11-01

    Highlights: • We introduce a mathematical model for the multi-bed storage system in the tritium plant. • We obtain details of operation by solving the model. • The model assesses diverse operation scenarios with respect to risk. - Abstract: In this paper, we describe our hypothetical operation model (HOM) for the multi-bed system of the storage and delivery system (SDS) of the ITER tritium plant. The multi-bed system consists of multiple getter beds (i.e., for batch operation) and buffer vessels (i.e., for continuous operation). Our newly developed HOM is formulated as a mixed-integer linear programming (MILP) model, a formulation that has been extensively investigated for optimizing chemical and petrochemical production planning and scheduling. Our model determines the timing, duration, and size of the tasks corresponding to each set of equipment. Further, inventory levels for each set of equipment are calculated. Our proposed model considers the operation of one cycle of one set of getter beds and is implemented and assessed as a case study problem.

  2. A Graphical User Interface to Generalized Linear Models in MATLAB

    Peter Dunn

    1999-07-01

    Full Text Available Generalized linear models unite a wide variety of statistical models in a common theoretical framework. This paper discusses GLMLAB, software that enables such models to be fitted in the popular mathematical package MATLAB. It provides a graphical user interface to the powerful MATLAB computational engine to produce a program that is easy to use but with many features, including offsets, prior weights, and user-defined distributions and link functions. MATLAB's graphical capabilities are also utilized to provide a number of simple residual diagnostic plots.

  3. MAGDM linear-programming models with distinct uncertain preference structures.

    Xu, Zeshui S; Chen, Jian

    2008-10-01

    Group decision making with preference information on alternatives is an interesting and important research topic which has been receiving more and more attention in recent years. The purpose of this paper is to investigate multiple-attribute group decision-making (MAGDM) problems with distinct uncertain preference structures. We develop some linear-programming models for dealing with the MAGDM problems, where the information about attribute weights is incomplete, and the decision makers have their preferences on alternatives. The provided preference information can be represented in the following three distinct uncertain preference structures: 1) interval utility values; 2) interval fuzzy preference relations; and 3) interval multiplicative preference relations. We first establish some linear-programming models based on decision matrix and each of the distinct uncertain preference structures and, then, develop some linear-programming models to integrate all three structures of subjective uncertain preference information provided by the decision makers and the objective information depicted in the decision matrix. Furthermore, we propose a simple and straightforward approach in ranking and selecting the given alternatives. It is worth pointing out that the developed models can also be used to deal with the situations where the three distinct uncertain preference structures are reduced to the traditional ones, i.e., utility values, fuzzy preference relations, and multiplicative preference relations. Finally, we use a practical example to illustrate in detail the calculation process of the developed approach.

  4. Efficient Work Team Scheduling: Using Psychological Models of Knowledge Retention to Improve Code Writing Efficiency

    Michael J. Pelosi

    2014-12-01

    Full Text Available Development teams and programmers must retain critical information about their work during work intervals and gaps in order to improve future performance when work resumes. Despite time lapses, project managers want to maximize coding efficiency and effectiveness. By developing a mathematically justified, practically useful, and computationally tractable quantitative and cognitive model of learning and memory retention, this study establishes calculations designed to maximize scheduling payoff and optimize developer efficiency and effectiveness.

  5. Improving the Prediction of Total Surgical Procedure Time Using Linear Regression Modeling.

    Edelman, Eric R; van Kuijk, Sander M J; Hamaekers, Ankie E W; de Korte, Marcel J M; van Merode, Godefridus G; Buhre, Wolfgang F F A

    2017-01-01

    For efficient utilization of operating rooms (ORs), accurate schedules of assigned block time and sequences of patient cases need to be made. The quality of these planning tools is dependent on the accurate prediction of total procedure time (TPT) per case. In this paper, we attempt to improve the accuracy of TPT predictions by using linear regression models based on estimated surgeon-controlled time (eSCT) and other variables relevant to TPT. We extracted data from a Dutch benchmarking database of all surgeries performed in six academic hospitals in The Netherlands from 2012 till 2016. The final dataset consisted of 79,983 records, describing 199,772 h of total OR time. Potential predictors of TPT that were included in the subsequent analysis were eSCT, patient age, type of operation, American Society of Anesthesiologists (ASA) physical status classification, and type of anesthesia used. First, we computed the predicted TPT based on a previously described fixed ratio model for each record, multiplying eSCT by 1.33. This number is based on the research performed by van Veen-Berkx et al., which showed that 33% of SCT is generally a good approximation of anesthesia-controlled time (ACT). We then systematically tested all possible linear regression models to predict TPT using eSCT in combination with the other available independent variables. In addition, all regression models were again tested without eSCT as a predictor to predict ACT separately (which leads to TPT by adding SCT). TPT was most accurately predicted using a linear regression model based on the independent variables eSCT, type of operation, ASA classification, and type of anesthesia. This model performed significantly better than the fixed ratio model and the method of predicting ACT separately. Making use of these more accurate predictions in planning and sequencing algorithms may enable an increase in utilization of ORs, leading to significant financial and productivity related benefits.
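
    The sketch below illustrates the comparison described in this record on synthetic stand-in data: the fixed-ratio prediction TPT = 1.33 × eSCT versus a linear regression that also uses other covariates. The Dutch benchmarking data and the full set of predictors are not reproduced here.

```python
# Fixed-ratio vs. regression-based prediction of total procedure time on synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n = 2000
esct = rng.gamma(shape=4.0, scale=30.0, size=n)            # estimated surgeon-controlled time (min)
asa = rng.integers(1, 5, n)                                 # ASA class 1-4 (stand-in covariate)
tpt = 1.25 * esct + 12.0 * asa + rng.normal(0, 15.0, n)     # synthetic "true" total procedure time

fixed_ratio_pred = 1.33 * esct                              # the fixed-ratio baseline
X = np.column_stack([esct, asa])
reg_pred = LinearRegression().fit(X, tpt).predict(X)        # regression with extra covariate

mae = lambda p: np.mean(np.abs(p - tpt))
print(f"MAE fixed ratio: {mae(fixed_ratio_pred):.1f} min, MAE regression: {mae(reg_pred):.1f} min")
```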

  6. Improving the Prediction of Total Surgical Procedure Time Using Linear Regression Modeling

    Eric R. Edelman

    2017-06-01

    Full Text Available For efficient utilization of operating rooms (ORs), accurate schedules of assigned block time and sequences of patient cases need to be made. The quality of these planning tools is dependent on the accurate prediction of total procedure time (TPT) per case. In this paper, we attempt to improve the accuracy of TPT predictions by using linear regression models based on estimated surgeon-controlled time (eSCT) and other variables relevant to TPT. We extracted data from a Dutch benchmarking database of all surgeries performed in six academic hospitals in The Netherlands from 2012 till 2016. The final dataset consisted of 79,983 records, describing 199,772 h of total OR time. Potential predictors of TPT that were included in the subsequent analysis were eSCT, patient age, type of operation, American Society of Anesthesiologists (ASA) physical status classification, and type of anesthesia used. First, we computed the predicted TPT based on a previously described fixed ratio model for each record, multiplying eSCT by 1.33. This number is based on the research performed by van Veen-Berkx et al., which showed that 33% of SCT is generally a good approximation of anesthesia-controlled time (ACT). We then systematically tested all possible linear regression models to predict TPT using eSCT in combination with the other available independent variables. In addition, all regression models were again tested without eSCT as a predictor to predict ACT separately (which leads to TPT by adding SCT). TPT was most accurately predicted using a linear regression model based on the independent variables eSCT, type of operation, ASA classification, and type of anesthesia. This model performed significantly better than the fixed ratio model and the method of predicting ACT separately. Making use of these more accurate predictions in planning and sequencing algorithms may enable an increase in utilization of ORs, leading to significant financial and productivity related

  7. Forecasting the EMU inflation rate: Linear econometric vs. non-linear computational models using genetic neural fuzzy systems

    Kooths, Stefan; Mitze, Timo Friedel; Ringhut, Eric

    2004-01-01

    This paper compares the predictive power of linear econometric and non-linear computational models for forecasting the inflation rate in the European Monetary Union (EMU). Various models of both types are developed using different monetary and real activity indicators. They are compared according...

  8. Optimal Cluster Mill Pass Scheduling With an Accurate and Rapid New Strip Crown Model

    Malik, Arif S.; Grandhi, Ramana V.; Zipf, Mark E.

    2007-01-01

    Besides the requirement to roll coiled sheet at high levels of productivity, the optimal pass scheduling of cluster-type reversing cold mills presents the added challenge of assigning mill parameters that facilitate the best possible strip flatness. The pressures of intense global competition, and the requirements for increasingly thinner, higher quality specialty sheet products that are more difficult to roll, continue to force metal producers to commission innovative flatness-control technologies. This means that during the on-line computerized set-up of rolling mills, the mathematical model should not only determine the minimum total number of passes and maximum rolling speed, it should simultaneously optimize the pass-schedule so that desired flatness is assured, either by manual or automated means. In many cases today, however, on-line prediction of strip crown and corresponding flatness for the complex cluster-type rolling mills is typically addressed either by trial and error, by approximate deflection models for equivalent vertical roll-stacks, or by non-physical pattern recognition style models. The abundance of the aforementioned methods is largely due to the complexity of cluster-type mill configurations and the lack of deflection models with sufficient accuracy and speed for on-line use. Without adequate assignment of the pass-schedule set-up parameters, it may be difficult or impossible to achieve the required strip flatness. In this paper, we demonstrate optimization of cluster mill pass-schedules using a new accurate and rapid strip crown model. This pass-schedule optimization includes computations of the predicted strip thickness profile to validate mathematical constraints. In contrast to many of the existing methods for on-line prediction of strip crown and flatness on cluster mills, the demonstrated method requires minimal prior tuning and no extensive training with collected mill data. To rapidly and accurately solve the multi-contact problem

  9. Modelling of Asphalt Concrete Stiffness in the Linear Viscoelastic Region

    Mazurek, Grzegorz; Iwański, Marek

    2017-10-01

    Stiffness modulus is a fundamental parameter used in modelling the viscoelastic behaviour of bituminous mixtures. On the basis of the master curve in the linear viscoelasticity range, the mechanical properties of asphalt concrete at different loading times and temperatures can be predicted. This paper discusses the construction of master curves using rheological mathematical models, i.e. the sigmoidal function model (MEPDG), the fractional model, and Bahia and co-workers' model, in comparison to the results from mechanistic rheological models, i.e. the generalized Huet-Sayegh model, the generalized Maxwell model and the Burgers model. For the purposes of this analysis, the reference asphalt concrete mix (denoted as AC16W) intended for the binder course layer and for traffic category KR3 (5×10^5 controlled strain mode. The fixed strain level was set at 25 με to guarantee that the stiffness modulus of the asphalt concrete would be tested in the linear viscoelasticity range. The master curve was formed using the time-temperature superposition principle (TTSP). The stiffness modulus of asphalt concrete was determined at temperatures of 10°C, 20°C and 40°C and at loading frequencies of 0.1, 0.3, 1, 3, 10 and 20 Hz. The model parameters were fitted to the rheological models using original programs based on the nonlinear least-squares method. All the rheological models under analysis were found to be capable of predicting changes in the stiffness modulus of the reference asphalt concrete with satisfactory accuracy. In the case of the fractional model and the generalized Maxwell model, the accuracy depends on the number of elements in series. The best fit was registered for Bahia and co-workers' model, the generalized Maxwell model and the fractional model. As for predicting the phase angle parameter, the largest discrepancies between experimental and modelled results were obtained with the fractional model. Except for the Burgers model, the model matching quality was
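
    For illustration, the sketch below fits an MEPDG-style sigmoidal master curve, log|E*| = δ + α / (1 + exp(β + γ·log10 f_r)), by nonlinear least squares with scipy; the reduced-frequency data are synthetic, not the AC16W measurements, and the parameter values are invented.

```python
# Fit a sigmoidal (MEPDG-style) master curve to synthetic reduced-frequency stiffness data.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(log_fr, delta, alpha, beta, gamma):
    return delta + alpha / (1.0 + np.exp(beta + gamma * log_fr))

log_fr = np.linspace(-5, 5, 40)                      # log10 of reduced frequency
true = sigmoid(log_fr, 1.0, 3.5, -1.0, -0.6)         # illustrative "measured" log stiffness
data = true + np.random.default_rng(4).normal(0, 0.05, log_fr.size)

params, _ = curve_fit(sigmoid, log_fr, data, p0=[1.0, 3.0, 0.0, -0.5])
print("fitted delta, alpha, beta, gamma:", np.round(params, 3))
```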

  10. Improving financial performance by modeling and analysis of radiology procedure scheduling at a large community hospital.

    Lu, Lingbo; Li, Jingshan; Gisler, Paula

    2011-06-01

    Radiology tests, such as MRI, CT-scan, X-ray and ultrasound, are cost intensive and insurance pre-approvals are necessary to get reimbursement. In some cases, tests may be denied for payments by insurance companies due to lack of pre-approvals, inaccurate or missing necessary information. This can lead to substantial revenue losses for the hospital. In this paper, we present a simulation study of a centralized scheduling process for outpatient radiology tests at a large community hospital (Central Baptist Hospital in Lexington, Kentucky). Based on analysis of the central scheduling process, a simulation model of information flow in the process has been developed. Using such a model, the root causes of financial losses associated with errors and omissions in this process were identified and analyzed, and their impacts were quantified. In addition, "what-if" analysis was conducted to identify potential process improvement strategies in the form of recommendations to the hospital leadership. Such a model provides a quantitative tool for continuous improvement and process control in radiology outpatient test scheduling process to reduce financial losses associated with process error. This method of analysis is also applicable to other departments in the hospital.

  11. An order insertion scheduling model of logistics service supply chain considering capacity and time factors.

    Liu, Weihua; Yang, Yi; Wang, Shuqing; Liu, Yang

    2014-01-01

    Order insertion often occurs in the scheduling process of a logistics service supply chain (LSSC), disturbing the normal time scheduling, especially in the environment of mass customization logistics service. This study analyses the order similarity coefficient and the order insertion operation process and then establishes an order insertion scheduling model of the LSSC that considers service capacity and time factors. This model aims to minimize the average unit volume operation cost of the logistics service integrator and maximize the average satisfaction degree of the functional logistics service providers. In order to verify the viability and effectiveness of our model, a specific example is numerically analyzed. Some interesting conclusions are obtained. First, as the completion time delay coefficient permitted by customers increases, the possible inserted order volume first increases and then stabilizes. Second, supply chain performance is best when the volume of the inserted order equals the surplus volume of the normal operation capacity in the mass service process. Third, the larger the normal operation capacity in the mass service process, the larger the possible inserted order volume. Moreover, compared with increasing the completion time delay coefficient, improving the normal operation capacity of the mass service process is more useful.

  12. Performance deterioration modeling and optimal preventive maintenance strategy under scheduled servicing subject to mission time

    Li Dawei

    2014-08-01

    Full Text Available Servicing is applied periodically in practice with the aim of restoring the system state and prolonging the lifetime. It is generally seen as an imperfect maintenance action which has a major influence on the maintenance strategy. In order to model the maintenance effect of servicing, this study analyzes the deterioration characteristics of a system under scheduled servicing. The deterioration model is then established from the failure mechanism using a compound Poisson process. On the basis of the system damage value and failure mechanism, a failure rate refresh factor is proposed to describe the maintenance effect of servicing. A maintenance strategy is developed which combines the benefits of scheduled servicing and preventive maintenance. An optimization model is then given to determine the optimal servicing period and preventive maintenance time, with the objective of minimizing the system's expected life-cycle cost per unit time and a constraint on system survival probability for the duration of the mission time. Subject to the mission time, the strategy can control the ability to accomplish the mission at any time so as to ensure high dependability. An example of a water pump rotor subject to scheduled servicing is introduced to illustrate the failure rate refresh factor and the proposed maintenance strategy. Compared with traditional methods, the numerical results show that the failure rate refresh factor can describe the maintenance effect of servicing more intuitively and objectively. They also demonstrate that this maintenance strategy can prolong the lifetime, reduce the total lifetime maintenance cost and guarantee the dependability of the system.

  13. A Non-Gaussian Spatial Generalized Linear Latent Variable Model

    Irincheeva, Irina

    2012-08-03

    We consider a spatial generalized linear latent variable model with and without normality distributional assumption on the latent variables. When the latent variables are assumed to be multivariate normal, we apply a Laplace approximation. To relax the assumption of marginal normality in favor of a mixture of normals, we construct a multivariate density with Gaussian spatial dependence and given multivariate margins. We use the pairwise likelihood to estimate the corresponding spatial generalized linear latent variable model. The properties of the resulting estimators are explored by simulations. In the analysis of an air pollution data set the proposed methodology uncovers weather conditions to be a more important source of variability than air pollution in explaining all the causes of non-accidental mortality excluding accidents. © 2012 International Biometric Society.

  14. Linear Model for Optimal Distributed Generation Size Predication

    Ahmed Al Ameri

    2017-01-01

    Full Text Available This article presents a linear model for predicting the optimal size of Distributed Generation (DG) that achieves minimum power loss. The method is based fundamentally on the strong coupling between active power and voltage angle, as well as between reactive power and voltage magnitude. This paper proposes a simplified method to calculate the total power losses in an electrical grid for different distributed generation sizes and locations. The method has been implemented and tested on several IEEE bus test systems. The results show that the proposed method is capable of predicting the approximate optimal size of DG when compared with precise calculations. The method, which linearizes a complex model, showed good results and can substantially reduce the required processing time. The acceptable accuracy with reduced time and memory requirements can help the grid operator assess power systems integrating large-scale distributed generation.

  15. A non-linear model of economic production processes

    Ponzi, A.; Yasutomi, A.; Kaneko, K.

    2003-06-01

    We present a new two-phase model of economic production processes which is a non-linear dynamical version of von Neumann's neoclassical model of production, including a market price-setting phase as well as a production phase. The rate of an economic production process is observed, for the first time, to depend on the minimum of its input supplies. This creates highly non-linear supply and demand dynamics. By numerical simulation, production networks are shown to become unstable when the ratio of different products to total processes increases. This provides some insight into the observed stability of competitive capitalist economies in comparison with monopolistic economies. Capitalist economies are also shown to have low unemployment.

  16. A Non-Gaussian Spatial Generalized Linear Latent Variable Model

    Irincheeva, Irina; Cantoni, Eva; Genton, Marc G.

    2012-01-01

    We consider a spatial generalized linear latent variable model with and without normality distributional assumption on the latent variables. When the latent variables are assumed to be multivariate normal, we apply a Laplace approximation. To relax the assumption of marginal normality in favor of a mixture of normals, we construct a multivariate density with Gaussian spatial dependence and given multivariate margins. We use the pairwise likelihood to estimate the corresponding spatial generalized linear latent variable model. The properties of the resulting estimators are explored by simulations. In the analysis of an air pollution data set the proposed methodology uncovers weather conditions to be a more important source of variability than air pollution in explaining all the causes of non-accidental mortality excluding accidents. © 2012 International Biometric Society.

  17. NON-LINEAR FINITE ELEMENT MODELING OF DEEP DRAWING PROCESS

    Hasan YILDIZ

    2004-03-01

    Full Text Available The deep drawing process is one of the main procedures used in different branches of industry. Finding numerical solutions for determining the mechanical behaviour of this process saves time and money. For die surfaces with complex geometries, it is hard to determine the effects of the sheet metal forming parameters. Among these effects are wrinkling, tearing, the flow of the thin sheet metal in the die, and thickness change. However, the most difficult task is the determination of material properties during plastic deformation. In this study, the effects of all these parameters are analyzed before producing the dies. The explicit non-linear finite element method is chosen for the analysis. The numerical results obtained for non-linear material and contact models are also compared with experiments. A good agreement between the numerical and the experimental results is obtained. The results obtained for the models are given in detail.

  18. Dynamic generalized linear models for monitoring endemic diseases

    Lopes Antunes, Ana Carolina; Jensen, Dan; Hisham Beshara Halasa, Tariq

    2016-01-01

    The objective was to use a Dynamic Generalized Linear Model (DGLM), based on a binomial distribution with a linear trend, for monitoring the PRRS (Porcine Reproductive and Respiratory Syndrome) sero-prevalence in Danish swine herds. The DGLM was described and its performance for monitoring control and eradication programmes based on changes in PRRS sero-prevalence was explored. Results showed a declining trend in PRRS sero-prevalence between 2007 and 2014, suggesting that Danish herds are slowly eradicating PRRS. The simulation study demonstrated the flexibility of DGLMs in adapting to changes in trends in sero-prevalence. Based on this, it was possible to detect variations in the growth model component. This study is a proof-of-concept, demonstrating the use of DGLMs for monitoring endemic diseases. In addition, the principles stated might be useful in general research on monitoring and surveillance.

  19. Modeling Nurse Scheduling Problem Using 0-1 Goal Programming A Case Study Of Tafo Government Hospital Kumasi-Ghana

    Wallace Agyei

    2015-03-01

    Full Text Available Abstract The problem of scheduling nurses at the Out-Patient Department (OPD) of Tafo Government Hospital, Kumasi, Ghana is presented. Currently the schedules are prepared by the head nurse, who performs this difficult and time-consuming task by hand. Due to the existence of many constraints, the resulting schedule usually does not guarantee a fair distribution of work. The problem was formulated as a 0-1 goal programming model with the objective of evenly balancing the workload among nurses and satisfying their preferences as much as possible while complying with the legal and working regulations. The developed model was then solved using LINGO 14.0 software. The resulting schedules based on the 0-1 goal programming model balanced the workload in terms of the distribution of shift duties, ensured fairness in terms of the number of consecutive night duties, and satisfied the preferences of the nurses. This is an improvement over the schedules prepared manually.
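
    The toy sketch below shows the goal-programming idea in miniature, written with PuLP rather than LINGO: hard coverage constraints plus a soft workload goal per nurse whose positive and negative deviations are penalized in the objective. The nurse list, coverage requirement and goal are illustrative, not the Tafo hospital model.

```python
# Toy 0-1 goal programming: hard daily coverage, soft fair-share workload goal per nurse.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

nurses, days = ["N1", "N2", "N3"], range(7)
cover = 2                                   # nurses required per day (illustrative)
goal = 7 * cover / len(nurses)              # target shifts per nurse (fair share)

prob = LpProblem("nurse_goal_programming", LpMinimize)
x = {(n, d): LpVariable(f"x_{n}_{d}", cat=LpBinary) for n in nurses for d in days}
over = {n: LpVariable(f"over_{n}", lowBound=0) for n in nurses}
under = {n: LpVariable(f"under_{n}", lowBound=0) for n in nurses}

for d in days:
    prob += lpSum(x[n, d] for n in nurses) == cover            # hard coverage constraint
for n in nurses:
    # soft goal: shifts worked - over + under == fair share
    prob += lpSum(x[n, d] for d in days) - over[n] + under[n] == goal

prob += lpSum(over[n] + under[n] for n in nurses)               # minimize total deviation
prob.solve()
print({n: sum(int(x[n, d].value()) for d in days) for n in nurses})
```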

  20. Estimation and Inference for Very Large Linear Mixed Effects Models

    Gao, K.; Owen, A. B.

    2016-01-01

    Linear mixed models with large imbalanced crossed random effects structures pose severe computational problems for maximum likelihood estimation and for Bayesian analysis. The costs can grow as fast as $N^{3/2}$ when there are $N$ observations. Such problems arise in any setting where the underlying factors satisfy a many-to-many relationship (instead of a nested one), and in electronic commerce applications $N$ can be quite large. Methods that do not account for the correlation structure can...

  1. Using Quartile-Quartile Lines as Linear Models

    Gordon, Sheldon P.

    2015-01-01

    This article introduces the notion of the quartile-quartile line as an alternative to the regression line and the median-median line to produce a linear model based on a set of data. It is based on using the first and third quartiles of a set of (x, y) data. Dynamic spreadsheets are used as exploratory tools to compare the different approaches and…
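
    One straightforward reading of the construction, sketched below under that assumption (the article itself should be consulted for the exact definition), passes a line through the points formed by the first and third quartiles of the x-values and of the y-values.

```python
# Quartile-quartile line: slope and intercept from the (Q1, Q3) points of x and y.
import numpy as np

x = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1, 6.8, 8.3, 8.9, 10.2])

qx1, qx3 = np.percentile(x, [25, 75])
qy1, qy3 = np.percentile(y, [25, 75])

slope = (qy3 - qy1) / (qx3 - qx1)
intercept = qy1 - slope * qx1
print(f"quartile-quartile line: y = {slope:.3f} x + {intercept:.3f}")
```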

  2. NON-LINEAR MODELING OF THE RHIC INTERACTION REGIONS

    TOMAS, R.; FISCHER, W.; JAIN, A.; LUO, Y.; PILAT, F.

    2004-01-01

    For RHIC's collision lattices the dominant sources of transverse non-linearities are located in the interaction regions. The field quality is available for most of the magnets in the interaction regions from the magnetic measurements, or from extrapolations of these measurements. We discuss the implementation of these measurements in the MADX models of the Blue and the Yellow rings and their impact on beam stability

  3. Electromagnetic axial anomaly in a generalized linear sigma model

    Fariborz, Amir H.; Jora, Renata

    2017-06-01

    We construct the electromagnetic anomaly effective term for a generalized linear sigma model with two chiral nonets, one with a quark-antiquark structure, the other one with a four-quark content. We compute in the leading order of this framework the decays into two photons of six pseudoscalars: π0(137), π0(1300), η(547), η(958), η(1295) and η(1760). Our results agree well with the available experimental data.

  4. Parallel-Batch Scheduling with Two Models of Deterioration to Minimize the Makespan

    Cuixia Miao

    2014-01-01

    Full Text Available We consider bounded parallel-batch scheduling with two models of deterioration, in which the processing time of a job is p_j = a_j + αt in the first model and p_j = a + α_j t in the second. The objective is to minimize the makespan. We present O(n log n) time algorithms for the single-machine problems under each model. We also propose fully polynomial time approximation schemes to solve the identical-parallel-machine problem and the uniform-parallel-machine problem, respectively.

  5. Short-term bulk energy storage system scheduling for load leveling in unit commitment: modeling, optimization, and sensitivity analysis.

    Hemmati, Reza; Saboori, Hedayat

    2016-05-01

    Energy storage systems (ESSs) have experienced very rapid growth in recent years and are expected to be a promising tool for improving power system reliability and economic efficiency. ESSs offer many potential benefits in various areas of electric power systems. One of the main benefits of an ESS, especially a bulk unit, is smoothing the load pattern by decreasing on-peak and increasing off-peak loads, known as load leveling. These devices require new methods and tools to model and optimize their effects in power system studies. In this respect, this paper models bulk ESSs based on several technical characteristics, introduces the proposed model into the thermal unit commitment (UC) problem, and analyzes it with respect to various sensitive parameters. The technical limitations of the thermal units and transmission network constraints are also considered in the model. The proposed model is a Mixed Integer Linear Programming (MILP) model which can be easily solved by strong commercial solvers (for instance, CPLEX) and is appropriate for use in practical large-scale networks. The results of implementing the proposed model on a test system reveal that proper load leveling through optimum storage scheduling leads to considerable operation cost reduction with respect to the storage system characteristics.
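
    The toy sketch below isolates the load-leveling idea alone: schedule battery charge and discharge over 24 hours to minimize the peak net load. The load profile, battery limits and the use of PuLP are all illustrative assumptions; the paper's full unit-commitment model with thermal units and network constraints is not reproduced here.

```python
# Battery scheduling for load leveling: minimize the daily peak net load (illustrative data).
from pulp import LpProblem, LpMinimize, LpVariable, lpSum

load = [50, 48, 47, 46, 47, 52, 60, 72, 80, 82, 83, 84,
        85, 84, 83, 82, 84, 90, 95, 92, 85, 75, 65, 55]    # illustrative MW profile
p_max, energy_cap, eff = 15, 60, 0.9                        # battery limits (MW, MWh), efficiency

prob = LpProblem("load_leveling", LpMinimize)
ch = [LpVariable(f"ch_{t}", 0, p_max) for t in range(24)]
dis = [LpVariable(f"dis_{t}", 0, p_max) for t in range(24)]
peak = LpVariable("peak", lowBound=0)

for t in range(24):
    prob += load[t] + ch[t] - dis[t] <= peak                # net load stays under the peak
    soc = lpSum(eff * ch[k] - dis[k] for k in range(t + 1))
    prob += soc >= 0                                        # cannot discharge below empty
    prob += soc <= energy_cap                               # cannot overfill the battery

prob += 1 * peak                                            # objective: minimize the peak
prob.solve()
print("levelled peak:", peak.value(), "MW (original peak:", max(load), "MW)")
```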

  6. Short-term bulk energy storage system scheduling for load leveling in unit commitment: modeling, optimization, and sensitivity analysis

    Hemmati, Reza; Saboori, Hedayat

    2016-01-01

    Energy storage systems (ESSs) have experienced very rapid growth in recent years and are expected to be a promising tool for improving power system reliability and economic efficiency. ESSs offer many potential benefits in various areas of electric power systems. One of the main benefits of an ESS, especially a bulk unit, is smoothing the load pattern by decreasing on-peak and increasing off-peak loads, known as load leveling. These devices require new methods and tools to model and optimize their effects in power system studies. In this respect, this paper models bulk ESSs based on several technical characteristics, introduces the proposed model into the thermal unit commitment (UC) problem, and analyzes it with respect to various sensitive parameters. The technical limitations of the thermal units and transmission network constraints are also considered in the model. The proposed model is a Mixed Integer Linear Programming (MILP) model which can be easily solved by strong commercial solvers (for instance, CPLEX) and is appropriate for use in practical large-scale networks. The results of implementing the proposed model on a test system reveal that proper load leveling through optimum storage scheduling leads to considerable operation cost reduction with respect to the storage system characteristics. PMID:27222741

  7. Comparison of Linear Prediction Models for Audio Signals

    2009-03-01

    Full Text Available While linear prediction (LP) has become immensely popular in speech modeling, it does not seem to provide a good approach for modeling audio signals. This is somewhat surprising, since a tonal signal consisting of a number of sinusoids can be perfectly predicted based on an (all-pole) LP model with a model order that is twice the number of sinusoids. We provide an explanation why this result cannot simply be extrapolated to LP of audio signals. If noise is taken into account in the tonal signal model, a low-order all-pole model appears to be only appropriate when the tonal components are uniformly distributed in the Nyquist interval. Based on this observation, different alternatives to the conventional LP model can be suggested. Either the model should be changed to a pole-zero, a high-order all-pole, or a pitch prediction model, or the conventional LP model should be preceded by an appropriate frequency transform, such as a frequency warping or downsampling. By comparing these alternative LP models to the conventional LP model in terms of frequency estimation accuracy, residual spectral flatness, and perceptual frequency resolution, we obtain several new and promising approaches to LP-based audio modeling.

  8. A quasi-linear gyrokinetic transport model for tokamak plasmas

    Casati, A.

    2009-10-01

    After a presentation of some basics of nuclear fusion, this research thesis introduces the framework of the tokamak strategy to deal with confinement, and hence the main plasma instabilities which are responsible for turbulent transport of energy and matter in such a system. The author also briefly introduces the two principal plasma representations, the fluid and the kinetic ones, and explains why the gyro-kinetic approach has been preferred. A tokamak-relevant case is presented in order to highlight the relevance of a correct accounting of the kinetic wave-particle resonance, and the issue of the quasi-linear response is discussed. Firstly, the derivation of the model, called QuaLiKiz, and its underlying hypotheses for obtaining the energy and particle turbulent fluxes are presented. Secondly, the validity of the quasi-linear response is verified against nonlinear gyro-kinetic simulations. The saturation model that is assumed in QuaLiKiz is presented and discussed. The author then assesses the global outcomes of QuaLiKiz: both the quasi-linear energy and particle fluxes are compared to the expectations from the nonlinear simulations across a wide scan of tokamak-relevant parameters. Finally, the coupling of QuaLiKiz with the integrated transport solver CRONOS is presented: this procedure allows the time-dependent transport problem to be solved, and hence the direct application of the model to the experiment. The first preliminary results regarding the experimental analysis are finally discussed

  9. Linear theory for filtering nonlinear multiscale systems with model error.

    Berry, Tyrus; Harlim, John

    2014-07-08

    In this paper, we study filtering of multiscale dynamical systems with model error arising from limitations in resolving the smaller scale processes. In particular, the analysis assumes the availability of continuous-time noisy observations of all components of the slow variables. Mathematically, this paper presents new results on higher-order asymptotic expansion of the first two moments of a conditional measure. In particular, we are interested in the application of filtering multiscale problems in which the conditional distribution is defined over the slow variables, given noisy observation of the slow variables alone. From the mathematical analysis, we learn that for a continuous time linear model with Gaussian noise, there exists a unique choice of parameters in a linear reduced model for the slow variables which gives the optimal filtering when only the slow variables are observed. Moreover, these parameters simultaneously give the optimal equilibrium statistical estimates of the underlying system, and as a consequence they can be estimated offline from the equilibrium statistics of the true signal. By examining a nonlinear test model, we show that the linear theory extends in this non-Gaussian, nonlinear configuration as long as we know the optimal stochastic parametrization and the correct observation model. However, when the stochastic parametrization model is inappropriate, parameters chosen for good filter performance may give poor equilibrium statistical estimates and vice versa; this finding is based on analytical and numerical results on our nonlinear test model and the two-layer Lorenz-96 model. Finally, even when the correct stochastic ansatz is given, it is imperative to estimate the parameters simultaneously and to account for the nonlinear feedback of the stochastic parameters into the reduced filter estimates. In numerical experiments on the two-layer Lorenz-96 model, we find that the parameters estimated online, as part of a filtering
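
    To make the idea of filtering a slow variable with a linear reduced model concrete, the sketch below runs a scalar Kalman filter on noisy observations of an Ornstein-Uhlenbeck process standing in for the slow dynamics; the drift, forcing and noise values are illustrative assumptions, not the paper's optimal parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
T, dt = 2000, 0.01
a, sigma, r = -1.0, 0.5, 0.1          # reduced-model drift, forcing, observation noise (illustrative)

# discretised linear reduced model: x_{k+1} = F x_k + w_k,  y_k = x_k + v_k
F = np.exp(a * dt)
Q = sigma**2 * (1 - F**2) / (-2 * a)  # exact discretisation of the OU process
x_true = np.zeros(T); y = np.zeros(T)
for k in range(1, T):
    x_true[k] = F * x_true[k - 1] + rng.normal(0, np.sqrt(Q))
    y[k] = x_true[k] + rng.normal(0, r)

# scalar Kalman filter on the noisy observations of the slow variable
m, P = 0.0, 1.0
est = np.zeros(T)
for k in range(T):
    m, P = F * m, F * P * F + Q                   # forecast step
    K = P / (P + r**2)                            # Kalman gain
    m, P = m + K * (y[k] - m), (1 - K) * P        # analysis step
    est[k] = m
print("filter RMSE:", np.sqrt(np.mean((est - x_true)**2)))
```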

  10. Technical note: A linear model for predicting δ13 Cprotein.

    Pestle, William J; Hubbe, Mark; Smith, Erin K; Stevenson, Joseph M

    2015-08-01

    Development of a model for the prediction of δ13Cprotein from δ13Ccollagen and Δ13Cap-co. Model-generated values could, in turn, serve as "consumer" inputs for multisource mixture modeling of paleodiet. Linear regression analysis of previously published controlled diet data facilitated the development of a mathematical model for predicting δ13Cprotein (and an experimentally generated error term) from isotopic data routinely generated during the analysis of osseous remains (δ13Cco and Δ13Cap-co). Regression analysis resulted in a two-term linear model, δ13Cprotein (%) = (0.78 × δ13Cco) - (0.58 × Δ13Cap-co) - 4.7, possessing a high R-value of 0.93 (r² = 0.86, P analysis of human osseous remains. These predicted values are ideal for use in multisource mixture modeling of dietary protein source contribution. © 2015 Wiley Periodicals, Inc.
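
    The published two-term regression can be applied directly; the sketch below simply evaluates δ13Cprotein = 0.78 × δ13Cco - 0.58 × Δ13Cap-co - 4.7 for made-up input values (the inputs are illustrative, not data from the paper).

```python
def d13c_protein(d13c_co, delta13c_ap_co):
    """Predicted delta-13C of dietary protein from the two-term linear model."""
    return 0.78 * d13c_co - 0.58 * delta13c_ap_co - 4.7

print(d13c_protein(-19.0, 4.5))   # -> -22.13 (illustrative input values)
```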

  11. Development of the Optimum Operation Scheduling Model of Domestic Electric Appliances for the Supply-Demand Adjustment in a Power System

    Ikegami, Takashi; Iwafune, Yumiko; Ogimoto, Kazuhiko

    The high penetration of variable renewable generation, such as photovoltaic (PV) systems, will cause supply-demand imbalance issues in the power system as a whole. The activation of residential power usage, storage and generation through sophisticated scheduling and control using the Home Energy Management System (HEMS) will be needed to balance power supply and demand in the near future. In order to evaluate the applicability of the HEMS as a distributed controller for local and system-wide supply-demand balancing, we developed an optimum operation scheduling model of domestic electric appliances using mixed integer linear programming. Applying this model to several houses with dynamic electricity prices reflecting the power balance of the total power system, it was found that adequate changes in electricity prices shift residential power usage so as to control the amount of reverse power flow due to excess PV generation.
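
    A minimal sketch of the kind of decision such a scheduling model makes is shown below: a single shiftable appliance is scheduled against a dynamic price signal with a mixed integer linear program. The price profile, appliance data and the use of the PuLP library are assumptions for illustration; the paper's model covers many more appliances as well as storage and PV.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum

price = [0.10, 0.08, 0.07, 0.07, 0.09, 0.12, 0.20, 0.25, 0.22, 0.18, 0.15, 0.14]  # $/kWh, hypothetical
T, duration, power = len(price), 3, 1.2        # a 3-slot, 1.2 kW shiftable appliance (hypothetical)

prob = LpProblem("appliance_shift", LpMinimize)
start = [LpVariable(f"start_{t}", cat="Binary") for t in range(T - duration + 1)]
# objective: cost of the run if it starts in slot t
prob += lpSum(power * sum(price[t:t + duration]) * start[t] for t in range(T - duration + 1))
prob += lpSum(start) == 1                      # the appliance runs exactly once
prob.solve()
best = next(t for t in range(T - duration + 1) if start[t].value() > 0.5)
print("cheapest start slot:", best)
```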

  12. Neutron stars in non-linear coupling models

    Taurines, Andre R.; Vasconcellos, Cesar A.Z.; Malheiro, Manuel; Chiapparini, Marcelo

    2001-01-01

    We present a class of relativistic models for nuclear matter and neutron stars which exhibits a parameterization, through mathematical constants, of the non-linear meson-baryon couplings. For appropriate choices of the parameters, it recovers current QHD models found in the literature: Walecka, ZM and ZM3 models. We have found that the ZM3 model predicts a very small maximum neutron star mass, ∼ 0.72 M_sun. A strong similarity between the results of ZM-like models and those with exponential couplings is noted. Finally, we discuss the very intense scalar condensates found in the interior of neutron stars which may lead to negative effective masses. (author)

  14. Modelling of Rotational Capacity in Reinforced Linear Elements

    Hestbech, Lars; Hagsten, Lars German; Fisker, Jakob

    2011-01-01

    The Capacity Design Method forms the basis of several seismic design codes. This design philosophy allows plastic deformations in order to decrease seismic demands in structures. However, these plastic deformations must be localized in certain zones where ductility requirements can be documented on the basis of the rotational capacity of the plastic hinges. The documentation of ductility can be a difficult task, as modelling of rotational capacity in plastic hinges of frames is not fully developed. On the basis of the Theory of Plasticity, a model is developed to determine rotational capacity in plastic hinges in linear reinforced concrete elements. The model takes several important parameters into account. Empirical values are avoided, which is considered an advantage compared to previous models. Furthermore, the model includes force variations in the reinforcement due to moment distributions and shear as well.

  15. Comparison of linear and non-linear models for the adsorption of fluoride onto geo-material: limonite.

    Sahin, Rubina; Tapadia, Kavita

    2015-01-01

    The three widely used isotherms, Langmuir, Freundlich and Temkin, were examined in an experiment using fluoride (F⁻) ion adsorption on a geo-material (limonite) at four different temperatures by linear and non-linear models. Comparisons of linear and non-linear regression models were made to select the optimum isotherm for the experimental results. The coefficient of determination, r², was used to select the best theoretical isotherm. The four Langmuir linear equations (1, 2, 3, and 4) are discussed. Langmuir isotherm parameters obtained from the four Langmuir linear equations using the linear model differed, but they were the same when using the non-linear model. Langmuir-2 is one of the linear forms, and it had the highest coefficient of determination (r² = 0.99) compared to the other Langmuir linear equations (1, 3 and 4) in linear form, whereas, for the non-linear model, Langmuir-4 fitted best among all the isotherms because it had the highest coefficient of determination (r² = 0.99). The results showed that the non-linear model may be a better way to obtain the parameters. In the present work, the thermodynamic parameters show that the adsorption of fluoride onto limonite is spontaneous (ΔG < 0). Scanning electron microscopy and X-ray diffraction images also confirm the adsorption of F⁻ ions onto limonite. The isotherm and kinetic study reveals that limonite can be used as an adsorbent for fluoride removal. In the future, large-scale fluoride-removal technology could be developed using limonite, which is cost-effective, eco-friendly and easily available in the study area.
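
    The difference between the linear and non-linear fitting routes can be reproduced with a few lines of code. The sketch below fits the Langmuir isotherm to illustrative (made-up) equilibrium data both by non-linear least squares and by the Langmuir-2 linearisation (1/qe versus 1/Ce); it is not the authors' data or code.

```python
import numpy as np
from scipy.optimize import curve_fit

# illustrative equilibrium data (Ce: mg/L, qe: mg/g), not the paper's measurements
Ce = np.array([0.5, 1.0, 2.0, 4.0, 6.0, 8.0, 10.0])
qe = np.array([0.9, 1.5, 2.3, 3.1, 3.5, 3.7, 3.8])

def langmuir(Ce, qm, KL):
    return qm * KL * Ce / (1 + KL * Ce)

# non-linear fit
(qm_nl, KL_nl), _ = curve_fit(langmuir, Ce, qe, p0=(4.0, 1.0))

# Langmuir-2 linearisation: 1/qe = 1/qm + (1/(qm*KL)) * (1/Ce)
slope, intercept = np.polyfit(1 / Ce, 1 / qe, 1)
qm_lin, KL_lin = 1 / intercept, intercept / slope

print("non-linear fit:       qm = %.2f, KL = %.2f" % (qm_nl, KL_nl))
print("linear (Langmuir-2):  qm = %.2f, KL = %.2f" % (qm_lin, KL_lin))
```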

  16. MIP Models and Hybrid Algorithms for Simultaneous Job Splitting and Scheduling on Unrelated Parallel Machines

    Ozmutlu, H. Cenk

    2014-01-01

    We developed mixed integer programming (MIP) models and hybrid genetic-local search algorithms for the scheduling problem of unrelated parallel machines with job sequence and machine-dependent setup times and with job splitting property. The first contribution of this paper is to introduce novel algorithms which make splitting and scheduling simultaneously with variable number of subjobs. We proposed simple chromosome structure which is constituted by random key numbers in hybrid genetic-local search algorithm (GAspLA). Random key numbers are used frequently in genetic algorithms, but it creates additional difficulty when hybrid factors in local search are implemented. We developed algorithms that satisfy the adaptation of results of local search into the genetic algorithms with minimum relocation operation of genes' random key numbers. This is the second contribution of the paper. The third contribution of this paper is three developed new MIP models which are making splitting and scheduling simultaneously. The fourth contribution of this paper is implementation of the GAspLAMIP. This implementation let us verify the optimality of GAspLA for the studied combinations. The proposed methods are tested on a set of problems taken from the literature and the results validate the effectiveness of the proposed algorithms. PMID:24977204

  18. A bi-objective integer programming model for partly-restricted flight departure scheduling.

    Zhong, Han; Guan, Wei; Zhang, Wenyi; Jiang, Shixiong; Fan, Lingling

    2018-01-01

    Most studies on the air traffic departure scheduling problem (DSP) deal with an independent airport whose departure traffic is not affected by surrounding airports, which, however, is not always the case. In reality, there exist cases where several commercial airports are closely located and one of them possesses a higher priority. During peak hours, the departure activities of the lower-priority airports are usually required to give way to those of the higher-priority airport. These giving-way requirements impose a set of changes on the modeling of the departure scheduling problem with respect to the lower-priority airports. To the best of our knowledge, studies on the DSP under this condition are scarce. Accordingly, this paper develops a bi-objective integer programming model to address the flight departure scheduling of the partly-restricted (e.g., lower-priority) one among several adjacent airports. An adapted tabu search algorithm is designed to solve the problem. The case study of Tianjin Binhai International Airport in China demonstrates that the proposed method can clearly improve operational efficiency while maintaining superior equity and regularity among the restricted flows.

  19. The Vessel Schedule Recovery Problem (VSRP) – A MIP model for handling disruptions in liner shipping

    Brouer, Berit Dangaard; Dirksen, Jakob; Pisinger, David

    2013-01-01

    Liner shipping schedules are frequently disrupted due to adverse weather conditions, port contingencies, and many other issues. A common scenario for recovering a schedule is to either increase the speed at the cost of a significant increase in the fuel consumption or to delay cargo. Advanced recovery options might exist by swapping two port calls or even omitting one. We present the Vessel Schedule Recovery Problem (VSRP) to evaluate a given disruption scenario and to select a recovery action balancing the trade-off between increased bunker consumption and the impact on cargo in the remaining network and the customer service level. It is proven that the VSRP is NP-hard. The model is applied to four real-life cases from Maersk Line and results are achieved in less than 5 seconds, with solutions comparable or superior to those chosen by operations managers in real life. Cost savings of up to 58% may be achieved by the suggested solutions compared...

  20. Network Traffic Monitoring Using Poisson Dynamic Linear Models

    Merl, D. M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2011-05-09

    In this article, we discuss an approach for network forensics using a class of nonstationary Poisson processes with embedded dynamic linear models. As a modeling strategy, the Poisson DLM (PoDLM) provides a very flexible framework for specifying structured effects that may influence the evolution of the underlying Poisson rate parameter, including diurnal and weekly usage patterns. We develop a novel particle learning algorithm for online smoothing and prediction for the PoDLM, and demonstrate the suitability of the approach to real-time deployment settings via a new application to computer network traffic monitoring.
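
    The paper's particle learning algorithm is more elaborate, but the core idea of filtering a latent Poisson rate can be sketched with a plain bootstrap particle filter in which the log-rate follows a random walk; all numbers below are synthetic and the filter design is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, step = 200, 2000, 0.05
phi_true = np.log(20) + np.cumsum(rng.normal(0, step, T))   # latent log-rate (random walk)
y = rng.poisson(np.exp(phi_true))                            # observed counts

particles = rng.normal(np.log(y[0] + 1.0), 1.0, N)
rate_est = np.empty(T)
for t in range(T):
    particles = particles + rng.normal(0, step, N)           # propagate the random walk
    logw = y[t] * particles - np.exp(particles)              # Poisson log-likelihood (up to a constant)
    w = np.exp(logw - logw.max()); w /= w.sum()
    rate_est[t] = np.sum(w * np.exp(particles))              # filtered mean rate
    particles = rng.choice(particles, size=N, p=w)           # resample
print("mean absolute rate error:", np.mean(np.abs(rate_est - np.exp(phi_true))))
```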

  1. On the chiral phase transition in the linear sigma model

    Tran Huu Phat; Nguyen Tuan Anh; Le Viet Hoa

    2003-01-01

    The Cornwall-Jackiw-Tomboulis (CJT) effective action for composite operators at finite temperature is used to investigate the chiral phase transition within the framework of the linear sigma model as the low-energy effective model of quantum chromodynamics (QCD). A new renormalization prescription for the CJT effective action in the Hartree-Fock (HF) approximation is proposed. A numerical study, which incorporates both thermal and quantum effects, shows that in this approximation the phase transition is of first order. However, when the contribution of higher-loop diagrams is taken into account, the order of the phase transition is unchanged. (author)

  2. Bayesian Subset Modeling for High-Dimensional Generalized Linear Models

    Liang, Faming; Song, Qifan; Yu, Kai

    2013-01-01

    criterion model. The consistency of the resulting posterior is established under mild conditions. Further, a variable screening procedure is proposed based on the marginal inclusion probability, which shares the same properties of sure screening

  3. Application of linearized model to the stability analysis of the pressurized water reactor

    Li Haipeng; Huang Xiaojin; Zhang Liangju

    2008-01-01

    A Linear Time-Invariant model of the Pressurized Water Reactor is formulated through the linearization of the nonlinear model. The model simulation results show that the linearized model agrees well with the nonlinear model under small perturbations. Based upon Lyapunov's first method, the linearized model is applied to the stability analysis of the Pressurized Water Reactor. The calculation results show that applying linearization to stability analysis is convenient and feasible. (authors)
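
    The workflow described (linearize the nonlinear model, then apply Lyapunov's first method) can be illustrated generically. The sketch below linearizes a toy nonlinear system numerically and checks the eigenvalues of the Jacobian at the equilibrium; the damped pendulum merely stands in for the reactor model, whose equations are not given in the abstract.

```python
import numpy as np

def f(x):
    # example nonlinear system (damped pendulum), standing in for the plant model
    theta, omega = x
    return np.array([omega, -9.81 * np.sin(theta) - 0.5 * omega])

def jacobian(f, x0, eps=1e-6):
    n = len(x0)
    J = np.zeros((n, n))
    for j in range(n):
        dx = np.zeros(n); dx[j] = eps
        J[:, j] = (f(x0 + dx) - f(x0 - dx)) / (2 * eps)   # central finite differences
    return J

x_eq = np.array([0.0, 0.0])          # equilibrium (steady state)
A = jacobian(f, x_eq)                # linearised model  x' = A x
eigs = np.linalg.eigvals(A)
print("eigenvalues:", eigs)
print("locally asymptotically stable:", bool(np.all(eigs.real < 0)))   # Lyapunov's first method
```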

  4. The Overgeneralization of Linear Models among University Students' Mathematical Productions: A Long-Term Study

    Esteley, Cristina B.; Villarreal, Monica E.; Alagia, Humberto R.

    2010-01-01

    Over the past several years, we have been exploring and researching a phenomenon that occurs among undergraduate students that we called extension of linear models to non-linear contexts or overgeneralization of linear models. This phenomenon appears when some students use linear representations in situations that are non-linear. In a first phase,…

  5. Predicting Madura cattle growth curve using non-linear model

    Widyas, N.; Prastowo, S.; Widi, T. S. M.; Baliarti, E.

    2018-03-01

    Madura cattle are an Indonesian native breed. It is a composite breed that has undergone hundreds of years of selection and domestication to reach its present remarkable uniformity. Crossbreeding has reached the isle of Madura, and the Madrasin, a cross between Madura cows and Limousin semen, has emerged. This paper aimed to compare the growth curves of the Madrasin and one type of pure Madura cow, the common Madura cattle (Madura), using non-linear models. Madura cattle are kept traditionally, thus reliable records are hardly available. Data were collected from smallholder farmers in Madura. Cows from different age classes (5 years) were observed, and body measurements (chest girth, body length and wither height) were taken. In total, 63 Madura and 120 Madrasin records were obtained. A linear model was built with cattle sub-populations and age as explanatory variables. Body weights were estimated based on the chest girth. Growth curves were built using logistic regression. Results showed that, within the same age, Madrasin have significantly larger bodies than Madura (p < 0.05); logistic models fit better for the Madura and Madrasin cattle data, with estimated MSE for these models of 39.09 and 759.28 and prediction accuracy of 99 and 92% for Madura and Madrasin, respectively. Prediction of the growth curve using the logistic regression model performed well for both types of Madura cattle. However, attempts to administer accurate data on Madura cattle are necessary to better characterize and study these cattle.
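
    A logistic growth-curve fit of the kind used above can be reproduced in a few lines; the age and body-weight values below are illustrative placeholders, not the study's records.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(age, A, k, t0):
    """Logistic growth curve: asymptotic weight A, growth rate k, inflection age t0."""
    return A / (1 + np.exp(-k * (age - t0)))

# illustrative age (months) / body-weight (kg) records, not the study's data
age = np.array([3, 6, 9, 12, 18, 24, 36, 48, 60])
bw  = np.array([45, 70, 100, 130, 180, 220, 270, 295, 305])

(A, k, t0), _ = curve_fit(logistic, age, bw, p0=(320, 0.1, 18))
print("asymptotic weight %.0f kg, rate %.3f, inflection at %.1f months" % (A, k, t0))
```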

  6. A non-linear model of information seeking behaviour

    Allen E. Foster

    2005-01-01

    Full Text Available The results of a qualitative, naturalistic study of information seeking behaviour are reported in this paper. The study applied the methods recommended by Lincoln and Guba for maximising credibility, transferability, dependability, and confirmability in data collection and analysis. Sampling combined purposive and snowball methods, and led to a final sample of 45 inter-disciplinary researchers from the University of Sheffield. In-depth semi-structured interviews were used to elicit detailed examples of information seeking. Coding of interview transcripts took place in multiple iterations over time and used Atlas-ti software to support the process. The results of the study are represented in a non-linear Model of Information Seeking Behaviour. The model describes three core processes (Opening, Orientation, and Consolidation) and three levels of contextual interaction (Internal Context, External Context, and Cognitive Approach), each composed of several individual activities and attributes. The interactivity and shifts described by the model show information seeking to be non-linear, dynamic, holistic, and flowing. The paper concludes by describing the whole model of behaviours as analogous to an artist's palette, in which activities remain available throughout information seeking. A summary of key implications of the model and directions for further research are included.

  7. Effect Displays in R for Generalised Linear Models

    John Fox

    2003-07-01

    Full Text Available This paper describes the implementation in R of a method for tabular or graphical display of terms in a complex generalised linear model. By complex, I mean a model that contains terms related by marginality or hierarchy, such as polynomial terms, or main effects and interactions. I call these tables or graphs effect displays. Effect displays are constructed by identifying high-order terms in a generalised linear model. Fitted values under the model are computed for each such term. The lower-order "relatives" of a high-order term (e.g., main effects marginal to an interaction) are absorbed into the term, allowing the predictors appearing in the high-order term to range over their values. The values of other predictors are fixed at typical values: for example, a covariate could be fixed at its mean or median, a factor at its proportional distribution in the data, or at equal proportions over its several levels. Variations of effect displays are also described, including representation of terms higher-order to any appearing in the model.

  8. Global numerical modeling of magnetized plasma in a linear device

    Magnussen, Michael Løiten

    Understanding the turbulent transport in the plasma edge of fusion devices is of utmost importance in order to make precise predictions for future fusion devices. The plasma turbulence observed in linear devices shares many important features with the turbulence observed in the edge of fusion devices, and is easier to diagnose due to lower temperatures and better access to the plasma. In order to gain greater insight into this complex turbulent behavior, numerical simulations of plasma in a linear device are performed in this thesis. Here, a three-dimensional drift-fluid model is derived, and simulations are performed at different ionization levels, using a simple model for plasma interaction with neutrals. It is found that the steady state and the saturated state of the system bifurcate when the neutral interaction dominates the electron-ion collisions.

  9. Predicting birth weight with conditionally linear transformation models.

    Möst, Lisa; Schmid, Matthias; Faschingbauer, Florian; Hothorn, Torsten

    2016-12-01

    Low and high birth weight (BW) are important risk factors for neonatal morbidity and mortality. Gynecologists must therefore accurately predict BW before delivery. Most prediction formulas for BW are based on prenatal ultrasound measurements carried out within one week prior to birth. Although successfully used in clinical practice, these formulas focus on point predictions of BW but do not systematically quantify uncertainty of the predictions, i.e. they result in estimates of the conditional mean of BW but do not deliver prediction intervals. To overcome this problem, we introduce conditionally linear transformation models (CLTMs) to predict BW. Instead of focusing only on the conditional mean, CLTMs model the whole conditional distribution function of BW given prenatal ultrasound parameters. Consequently, the CLTM approach delivers both point predictions of BW and fetus-specific prediction intervals. Prediction intervals constitute an easy-to-interpret measure of prediction accuracy and allow identification of fetuses subject to high prediction uncertainty. Using a data set of 8712 deliveries at the Perinatal Centre at the University Clinic Erlangen (Germany), we analyzed variants of CLTMs and compared them to standard linear regression estimation techniques used in the past and to quantile regression approaches. The best-performing CLTM variant was competitive with quantile regression and linear regression approaches in terms of conditional coverage and average length of the prediction intervals. We propose that CLTMs be used because they are able to account for possible heteroscedasticity, kurtosis, and skewness of the distribution of BWs. © The Author(s) 2014.

  10. Wavefront Sensing for WFIRST with a Linear Optical Model

    Jurling, Alden S.; Content, David A.

    2012-01-01

    In this paper we develop methods to use a linear optical model to capture the field dependence of wavefront aberrations in a nonlinear optimization-based phase retrieval algorithm for image-based wavefront sensing. The linear optical model is generated from a ray trace model of the system and allows the system state to be described in terms of mechanical alignment parameters rather than wavefront coefficients. This approach allows joint optimization over images taken at different field points and does not require separate convergence of phase retrieval at individual field points. Because the algorithm exploits field diversity, multiple defocused images per field point are not required for robustness. Furthermore, because it is possible to simultaneously fit images of many stars over the field, it is not necessary to use a fixed defocus to achieve adequate signal-to-noise ratio despite having images with high dynamic range. This allows high performance wavefront sensing using in-focus science data. We applied this technique in a simulation model based on the Wide Field Infrared Survey Telescope (WFIRST) Intermediate Design Reference Mission (IDRM) imager using a linear optical model with 25 field points. We demonstrate sub-thousandth-wave wavefront sensing accuracy in the presence of noise and moderate undersampling for both monochromatic and polychromatic images using 25 high-SNR target stars. Using these high-quality wavefront sensing results, we are able to generate upsampled point-spread functions (PSFs) and use them to determine PSF ellipticity to high accuracy in order to reduce the systematic impact of aberrations on the accuracy of galactic ellipticity determination for weak-lensing science.

  11. A linearized dispersion relation for orthorhombic pseudo-acoustic modeling

    Song, Xiaolei

    2012-11-04

    Wavefield extrapolation in acoustic orthorhombic anisotropic media suffers from wave-mode coupling and stability limitations in the parameter range. We introduce a linearized form of the dispersion relation for acoustic orthorhombic media to model acoustic wavefields. We apply the low-rank approximation approach to handle the corresponding space-wavenumber mixed-domain operator. Numerical experiments show that the proposed wavefield extrapolator is accurate and practically free of dispersion. Further, there is no coupling of qSv and qP waves, because we use the analytical dispersion relation. No constraints on Thomsen's parameters are required for stability. The linearized expression may provide a useful application for parameter estimation in orthorhombic media.

  12. Linearized vector radiative transfer model MCC++ for a spherical atmosphere

    Postylyakov, O.V.

    2004-01-01

    Application of radiative transfer models has shown that optical remote sensing requires extra characteristics of the radiance field in addition to the radiance intensity itself. Simulation of spectral measurements, analysis of retrieval errors and development of retrieval algorithms require derivatives of radiance with respect to the atmospheric constituents under investigation. The presented vector spherical radiative transfer model MCC++ was linearized, which allows the calculation of derivatives of all elements of the Stokes vector with respect to the volume absorption coefficient simultaneously with the radiance calculation. The model MCC++ employs a Monte Carlo algorithm for radiative transfer simulation and takes into account aerosol and molecular scattering, gas and aerosol absorption, and Lambertian surface albedo. The model treats a spherically symmetrical atmosphere. The relation of the estimated derivatives to other forms of radiance derivatives (the weighting functions used in gas retrieval and the air mass factors used in DOAS retrieval algorithms) is obtained. Validation of the model against other radiative models is overviewed. The computing time of the intensity for the MCC++ model is about the same as that for radiative models treating the sphericity of the atmosphere approximately, and is significantly shorter than that for the full spherical models used in the comparisons. The simultaneous calculation of all derivatives (i.e. with respect to absorption in all model atmosphere layers) and the intensity is only 1.2-2 times longer than the calculation of the intensity alone.

  13. An Optimization of Manufacturing Systems using a Feedback Control Scheduling Model

    Ikome, John M.; Kanakana, Grace M.

    2018-03-01

    In complex production systems that involve multiple processes, unplanned disruptions often make the entire production system vulnerable to a number of problems, which leads to customer dissatisfaction. This ongoing problem requires research and methods to streamline the entire process, or a model that addresses it. In response, we have developed a feedback scheduling model that can minimize some of these problems; a number of experiments show that some of them can be eliminated if the correct remedial actions are implemented on time.

  14. Methodologic model to scheduling on service systems: a software engineering approach

    Eduyn Ramiro Lopez-Santana

    2016-06-01

    Full Text Available This paper presents a software engineering approach to a research proposal for building an expert system for scheduling in service systems, using software development methodologies and processes. We use adaptive software development as the methodology for the software architecture, based on describing the research process as a software metaprocess. We produce UML (Unified Modeling Language) diagrams to provide a visual model that describes the research methodology and to identify the actors, elements and interactions in the research process.

  15. Effective connectivity between superior temporal gyrus and Heschl's gyrus during white noise listening: linear versus non-linear models.

    Hamid, Ka; Yusoff, An; Rahman, Mza; Mohamad, M; Hamid, Aia

    2012-04-01

    This fMRI study is about modelling the effective connectivity between Heschl's gyrus (HG) and the superior temporal gyrus (STG) in human primary auditory cortices. MATERIALS & METHODS: Ten healthy male participants were required to listen to white noise stimuli during functional magnetic resonance imaging (fMRI) scans. Statistical parametric mapping (SPM) was used to generate individual and group brain activation maps. For input region determination, two intrinsic connectivity models comprising bilateral HG and STG were constructed using dynamic causal modelling (DCM). The models were estimated and inferred using DCM, while Bayesian Model Selection (BMS) for group studies was used for model comparison and selection. Based on the winning model, six linear and six non-linear causal models were derived and were again estimated, inferred, and compared to obtain a model that best represents the effective connectivity between HG and the STG, balancing accuracy and complexity. Group results indicated significant asymmetrical activation (puncorr < ...). Model comparison results showed strong evidence of STG as the input centre. The winning model is preferred by 6 out of 10 participants. The results were supported by BMS results for group studies with the expected posterior probability, r = 0.7830, and exceedance probability, ϕ = 0.9823. One-sample t-tests performed on connection values obtained from the winning model indicated that the valid connections for the winning model are the unidirectional parallel connections from STG to bilateral HG (p < ...). Model comparison between linear and non-linear models using BMS prefers the non-linear connection (r = 0.9160, ϕ = 1.000), in which the connectivity between STG and the ipsi- and contralateral HG is gated by the activity in STG itself. We are able to demonstrate that the effective connectivity between HG and STG while listening to white noise for the respective participants can be explained by a non-linear dynamic causal model with

  16. Exactly soluble two-state quantum models with linear couplings

    Torosov, B T; Vitanov, N V

    2008-01-01

    A class of exact analytic solutions of the time-dependent Schroedinger equation is presented for a two-state quantum system coherently driven by a nonresonant external field. The coupling is a linear function of time with a finite duration and the detuning is constant. Four special models are considered in detail, namely the shark, double-shark, tent and zigzag models. The exact solution is derived by rotation of the Landau-Zener propagator at an angle of π/4 and is expressed in terms of Weber's parabolic cylinder function. Approximations for the transition probabilities are derived for all four models by using the asymptotics of the Weber function; these approximations demonstrate various effects of physical interest for each model
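
    Although the paper derives closed-form solutions via Weber's parabolic cylinder function, the transition probability for a finite-duration linear coupling with constant detuning is easy to obtain numerically as a cross-check. The sketch below integrates the two-state Schrödinger equation (with ħ = 1) for a generic linear ramp; the ramp parameters are arbitrary and do not correspond to a specific model from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

delta = 1.0            # constant detuning (hbar = 1)
slope, T = 5.0, 4.0    # coupling Omega(t) = slope * (t - T/2) on [0, T], zero outside

def rhs(t, psi):
    omega = slope * (t - T / 2) if 0.0 <= t <= T else 0.0
    H = np.array([[delta / 2, omega],
                  [omega, -delta / 2]], dtype=complex)
    return -1j * H @ psi           # i d(psi)/dt = H psi

sol = solve_ivp(rhs, (0.0, T), np.array([1.0 + 0j, 0.0 + 0j]), rtol=1e-9, atol=1e-12)
p_transition = abs(sol.y[1, -1])**2
print("transition probability:", p_transition)
```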

  17. Schedule-induced polydipsia: a rat model of obsessive-compulsive disorder.

    Platt, Brian; Beyer, Chad E; Schechter, Lee E; Rosenzweig-Lipson, Sharon

    2008-04-01

    Obsessive-compulsive disorder (OCD) is difficult to model in animals due to the involvement of both mental (obsessions) and physical (compulsions) symptoms. Due to limitations of using animals to evaluate obsessions, OCD models are limited to evaluation of the compulsive and repetitive behaviors of animals. Of these, models of adjunctive behaviors offer the most value in regard to predicting efficacy of anti-OCD drugs in the clinic. Adjunctive behaviors are those that are maintained indirectly by the variables that control another behavior, rather than directly by their own typical controlling variables. Schedule-induced polydipsia (SIP) is an adjunctive model in which rats exhibit exaggerated drinking behavior (polydipsia) when presented with food pellets under a fixed-time schedule. The polydipsic response is an excessive manifestation of a normal behavior (drinking), providing face validity to the model. Furthermore, clinically effective drugs for the treatment of OCD decrease SIP. This protocol describes a rat SIP model of OCD and provides preclinical data for drugs that decrease polydipsia and are clinically effective in the treatment of OCD.

  18. Parametric Linear Hybrid Automata for Complex Environmental Systems Modeling

    Samar Hayat Khan Tareen

    2015-07-01

    Full Text Available Environmental systems, whether they be weather patterns or predator-prey relationships, are dependent on a number of different variables, each directly or indirectly affecting the system at large. Since not all of these factors are known, these systems take on non-linear dynamics, making it difficult to accurately predict meaningful behavioral trends far into the future. However, such dynamics do not warrant complete ignorance of different efforts to understand and model close approximations of these systems. Towards this end, we have applied a logical modeling approach to model and analyze the behavioral trends and systematic trajectories that these systems exhibit without delving into their quantification. This approach, formalized by René Thomas for discrete logical modeling of Biological Regulatory Networks (BRNs) and further extended in our previous studies as parametric biological linear hybrid automata (Bio-LHA), has been previously employed for the analyses of different molecular regulatory interactions occurring across various cells and microbial species. As relationships between different interacting components of a system can be simplified as positive or negative influences, we can employ the Bio-LHA framework to represent different components of the environmental system as positive or negative feedbacks. In the present study, we highlight the benefits of hybrid (discrete/continuous) modeling, which leads to refinements among the forecasted behaviors in order to find out which ones are actually possible. We have taken two case studies: an interaction of three microbial species in a freshwater pond, and a more complex atmospheric system, to show the applications of the Bio-LHA methodology for the timed hybrid modeling of environmental systems. Results show that the approach using the Bio-LHA is a viable method for behavioral modeling of complex environmental systems by finding timing constraints while keeping the complexity of the model

  19. Linear models for multivariate, time series, and spatial data

    Christensen, Ronald

    1991-01-01

    This is a companion volume to Plane Answers to Complex Questions: The Theory of Linear Models. It consists of six additional chapters written in the same spirit as the last six chapters of the earlier book. Brief introductions are given to topics related to linear model theory. No attempt is made to give a comprehensive treatment of the topics. Such an effort would be futile. Each chapter is on a topic so broad that an in depth discussion would require a book-length treatment. People need to impose structure on the world in order to understand it. There is a limit to the number of unrelated facts that anyone can remember. If ideas can be put within a broad, sophisticatedly simple structure, not only are they easier to remember but often new insights become available. In fact, sophisticatedly simple models of the world may be the only ones that work. I have often heard Arnold Zellner say that, to the best of his knowledge, this is true in econometrics. The process of modeling is fundamental to understand...

  20. Linear mixed models a practical guide using statistical software

    West, Brady T; Galecki, Andrzej T

    2014-01-01

    Highly recommended by JASA, Technometrics, and other journals, the first edition of this bestseller showed how to easily perform complex linear mixed model (LMM) analyses via a variety of software programs. Linear Mixed Models: A Practical Guide Using Statistical Software, Second Edition continues to lead readers step by step through the process of fitting LMMs. This second edition covers additional topics on the application of LMMs that are valuable for data analysts in all fields. It also updates the case studies using the latest versions of the software procedures and provides up-to-date information on the options and features of the software procedures available for fitting LMMs in SAS, SPSS, Stata, R/S-plus, and HLM.New to the Second Edition A new chapter on models with crossed random effects that uses a case study to illustrate software procedures capable of fitting these models Power analysis methods for longitudinal and clustered study designs, including software options for power analyses and suggest...

  1. Tip-tilt disturbance model identification based on non-linear least squares fitting for Linear Quadratic Gaussian control

    Yang, Kangjian; Yang, Ping; Wang, Shuai; Dong, Lizhi; Xu, Bing

    2018-05-01

    We propose a method to identify a tip-tilt disturbance model for Linear Quadratic Gaussian control. This identification method, based on the Levenberg-Marquardt method, requires only a little prior information and no auxiliary system, and it is convenient for identifying the tip-tilt disturbance model on-line for real-time control. This identification method makes it easy for Linear Quadratic Gaussian control to run efficiently in different adaptive optics systems for vibration mitigation. The validity of the Linear Quadratic Gaussian control associated with this tip-tilt disturbance model identification method is verified by experimental data; the verification is conducted in replay mode by simulation.

  2. Linear parameter-varying modeling and control of the steam temperature in a Canadian SCWR

    Sun, Peiwei, E-mail: sunpeiwei@mail.xjtu.edu.cn; Zhang, Jianmin; Su, Guanghui

    2017-03-15

    Highlights: • Nonlinearity of Canadian SCWR is analyzed based on step responses and Nyquist plots. • LPV model is derived through Jacobian linearization and curve fitting. • An output feedback H∞ controller is synthesized for the steam temperature. • The control performance is evaluated by step disturbances and wide range operation. • The controller can stabilize the system and reject the reactor power disturbance. - Abstract: The Canadian direct-cycle Supercritical Water-cooled Reactor (SCWR) is a pressure-tube type SCWR under development in Canada. The dynamics of the steam temperature have a high degree of nonlinearity and are highly sensitive to reactor power disturbances. Traditional gain scheduling control cannot theoretically guarantee stability for all operating regions. The control performance can also be deteriorated when the controllers are switched. In this paper, a linear parameter-varying (LPV) strategy is proposed to solve such problems. Jacobian linearization and curve fitting are applied to derive the LPV model, which is verified using a nonlinear dynamic model and determined to be sufficiently accurate for control studies. An output feedback H∞ controller is synthesized to stabilize the steam temperature system and reject reactor power disturbances. The LPV steam temperature controller is implemented using a nonlinear dynamic model, and step changes in the setpoints and typical load patterns are carried out in the testing process. It is demonstrated through numerical simulation that the LPV controller not only stabilizes the steam temperature under different disturbances but also efficiently rejects reactor power disturbances and suppresses the steam temperature variation at different power levels. The LPV approach is effective in solving control problems of the steam temperature in the Canadian SCWR.

  3. A simple rule based model for scheduling farm management operations in SWAT

    Schürz, Christoph; Mehdi, Bano; Schulz, Karsten

    2016-04-01

    For many interdisciplinary questions at the watershed scale, the Soil and Water Assessment Tool (SWAT; Arnold et al., 1998) has become an accepted and widely used tool. Despite its flexibility, the model is highly demanding when it comes to input data. At SWAT's core, the water balance and the modeled nutrient cycles are plant-growth driven (implemented with the EPIC crop growth model). Therefore, land use and crop data with high spatial and thematic resolution, as well as detailed information on cultivation and farm management practices, are required. For many applications of the model, however, these data are unavailable. In order to meet these requirements, SWAT offers the option to trigger scheduled farm management operations by applying the Potential Heat Unit (PHU) concept. The PHU concept solely takes into account the accumulation of daily mean temperature for management scheduling. Hence, it contradicts several farming strategies that take place in reality, for example: i) planting and harvesting dates are set much too early or too late, as the PHU concept is strongly sensitive to inter-annual temperature fluctuations; ii) the timing of fertilizer application in SWAT often occurs simultaneously on the same date in each field; iii) and it can also coincide with precipitation events. In particular, the latter two can lead to strong peaks in modeled nutrient loads. To cope with these shortcomings we propose a simple rule based model (RBM) to schedule management operations according to realistic farmer management practices in SWAT. The RBM involves simple strategies requiring only data that are input into the SWAT model initially, such as temperature and precipitation data. The user provides boundaries of time periods for operation schedules to take place for all crops in the model. These data are readily available from the literature or from crop variety trials. The RBM applies the dates by complying with the following rules: i) Operations scheduled in the

  4. Bayesian uncertainty quantification in linear models for diffusion MRI.

    Sjölund, Jens; Eklund, Anders; Özarslan, Evren; Herberthson, Magnus; Bånkestad, Maria; Knutsson, Hans

    2018-03-29

    Diffusion MRI (dMRI) is a valuable tool in the assessment of tissue microstructure. By fitting a model to the dMRI signal it is possible to derive various quantitative features. Several of the most popular dMRI signal models are expansions in an appropriately chosen basis, where the coefficients are determined using some variation of least-squares. However, such approaches lack any notion of uncertainty, which could be valuable in e.g. group analyses. In this work, we use a probabilistic interpretation of linear least-squares methods to recast popular dMRI models as Bayesian ones. This makes it possible to quantify the uncertainty of any derived quantity. In particular, for quantities that are affine functions of the coefficients, the posterior distribution can be expressed in closed-form. We simulated measurements from single- and double-tensor models where the correct values of several quantities are known, to validate that the theoretically derived quantiles agree with those observed empirically. We included results from residual bootstrap for comparison and found good agreement. The validation employed several different models: Diffusion Tensor Imaging (DTI), Mean Apparent Propagator MRI (MAP-MRI) and Constrained Spherical Deconvolution (CSD). We also used in vivo data to visualize maps of quantitative features and corresponding uncertainties, and to show how our approach can be used in a group analysis to downweight subjects with high uncertainty. In summary, we convert successful linear models for dMRI signal estimation to probabilistic models, capable of accurate uncertainty quantification. Copyright © 2018 Elsevier Inc. All rights reserved.
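
    The key step, recasting linear least squares as a Bayesian model with a closed-form posterior, can be sketched with a conjugate Gaussian prior; the design matrix and noise level below are synthetic placeholders rather than an actual dMRI signal basis.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, sigma = 50, 4, 0.1
X = rng.normal(size=(n, p))              # design matrix (stand-in for a signal-model basis)
w_true = rng.normal(size=p)
y = X @ w_true + rng.normal(0, sigma, n)

# conjugate Gaussian prior w ~ N(0, tau^2 I) with Gaussian noise gives a closed-form posterior
tau = 1.0
S_inv = X.T @ X / sigma**2 + np.eye(p) / tau**2
S = np.linalg.inv(S_inv)                 # posterior covariance
m = S @ X.T @ y / sigma**2               # posterior mean

# any affine-derived quantity a^T w is Gaussian with mean a^T m and variance a^T S a
a = np.ones(p)
print("estimate:", a @ m, "+/-", np.sqrt(a @ S @ a))
```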

  5. Modelling non-linear effects of dark energy

    Bose, Benjamin; Baldi, Marco; Pourtsidou, Alkistis

    2018-04-01

    We investigate the capabilities of perturbation theory in capturing non-linear effects of dark energy. We test constant and evolving w models, as well as models involving momentum exchange between dark energy and dark matter. Specifically, we compare perturbative predictions at 1-loop level against N-body results for four non-standard equations of state as well as varying degrees of momentum exchange between dark energy and dark matter. The interaction is modelled phenomenologically using a time dependent drag term in the Euler equation. We make comparisons at the level of the matter power spectrum and the redshift space monopole and quadrupole. The multipoles are modelled using the Taruya, Nishimichi and Saito (TNS) redshift space spectrum. We find perturbation theory does very well in capturing non-linear effects coming from dark sector interaction. We isolate and quantify the 1-loop contribution coming from the interaction and from the non-standard equation of state. We find the interaction parameter ξ amplifies scale dependent signatures in the range of scales considered. Non-standard equations of state also give scale dependent signatures within this same regime. In redshift space the match with N-body is improved at smaller scales by the addition of the TNS free parameter σv. To quantify the importance of modelling the interaction, we create mock data sets for varying values of ξ using perturbation theory. This data is given errors typical of Stage IV surveys. We then perform a likelihood analysis using the first two multipoles on these sets and a ξ=0 modelling, ignoring the interaction. We find the fiducial growth parameter f is generally recovered even for very large values of ξ both at z=0.5 and z=1. The ξ=0 modelling is most biased in its estimation of f for the phantom w=‑1.1 case.

  6. A model of alcohol drinking under an intermittent access schedule using group-housed mice.

    Magdalena Smutek

    Full Text Available Here, we describe a new model of voluntary alcohol drinking by group-housed mice. The model employs sensor-equipped cages that track the behaviors of the individual animals via implanted radio chips. After the animals were allowed intermittent access to alcohol (three 24 h intervals every week for 4 weeks), the proportions of licks directed toward bottles containing alcohol were 50.9% and 39.6% for the male and female mice, respectively. We used three approaches (i.e., quinine adulteration, a progressive ratio schedule and a schedule involving a risk of punishment) to test for symptoms of compulsive alcohol drinking. The addition of 0.01% quinine to the alcohol solution did not significantly affect intake, but 0.03% quinine induced a greater than 5-fold reduction in the number of licks on the alcohol bottles. When the animals were required to perform increasing numbers of instrumental responses to obtain access to the bottle with alcohol (i.e., a progressive ratio schedule), they frequently reached a maximum of 21 responses irrespective of the available reward. Although the mice rarely achieved higher response criteria, the number of attempts was ∼ 10 times greater in the case of alcohol than water. We have developed an approach for mapping social interactions among animals that is based on analysis of the sequences of entries into the cage corners. This approach allowed us to identify the mice that followed other animals in non-random fashions. Approximately half of the mice displayed at least one interaction of this type. We have not yet found a clear correlation between imitative behavior and relative alcohol preference. In conclusion, the model we describe avoids the limitations associated with testing isolated animals and reliably leads to stable alcohol drinking. Therefore, this model may be well suited to screening for the effects of genetic mutations or pharmacological treatments on alcohol-induced behaviors.

  7. Spatial generalised linear mixed models based on distances.

    Melo, Oscar O; Mateu, Jorge; Melo, Carlos E

    2016-10-01

    Risk models derived from environmental data have been widely shown to be effective in delineating geographical areas of risk because they are intuitively easy to understand. We present a new method based on distances, which allows the modelling of continuous and non-continuous random variables through distance-based spatial generalised linear mixed models. The parameters are estimated using Markov chain Monte Carlo maximum likelihood, which is a feasible and a useful technique. The proposed method depends on a detrending step built from continuous or categorical explanatory variables, or a mixture among them, by using an appropriate Euclidean distance. The method is illustrated through the analysis of the variation in the prevalence of Loa loa among a sample of village residents in Cameroon, where the explanatory variables included elevation, together with maximum normalised-difference vegetation index and the standard deviation of normalised-difference vegetation index calculated from repeated satellite scans over time. © The Author(s) 2013.

  8. Linear system identification via backward-time observer models

    Juang, Jer-Nan; Phan, Minh

    1993-01-01

    This paper presents an algorithm to identify a state-space model of a linear system using a backward-time approach. The procedure consists of three basic steps. First, the Markov parameters of a backward-time observer are computed from experimental input-output data. Second, the backward-time observer Markov parameters are decomposed to obtain the backward-time system Markov parameters (backward-time pulse response samples) from which a backward-time state-space model is realized using the Eigensystem Realization Algorithm. Third, the obtained backward-time state space model is converted to the usual forward-time representation. Stochastic properties of this approach will be discussed. Experimental results are given to illustrate when and to what extent this concept works.

  9. Linear mixing model applied to AVHRR LAC data

    Holben, Brent N.; Shimabukuro, Yosio E.

    1993-01-01

    A linear mixing model was applied to coarse spatial resolution data from the NOAA Advanced Very High Resolution Radiometer. The reflective component of the 3.55 - 3.93 microns channel was extracted and used with the two reflective channels 0.58 - 0.68 microns and 0.725 - 1.1 microns to run a Constrained Least Squares model to generate vegetation, soil, and shade fraction images for an area in the Western region of Brazil. The Landsat Thematic Mapper data covering the Emas National Park region were used for estimating the spectral response of the mixture components and for evaluating the mixing model results. The fraction images were compared with an unsupervised classification derived from Landsat TM data acquired on the same day. The relationship between the fraction images and normalized difference vegetation index images shows the potential of the unmixing techniques when using coarse resolution data for global studies.
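
    A toy version of the constrained least squares unmixing step is shown below: given assumed endmember spectra for vegetation, soil and shade, the fractions of a pixel are recovered under bound constraints and then renormalised to sum to one. The spectra are made up for illustration, and the paper's solver enforces the sum-to-one constraint directly rather than by renormalisation.

```python
import numpy as np
from scipy.optimize import lsq_linear

# hypothetical endmember spectra: rows are bands, columns are vegetation, soil, shade
E = np.array([[0.05, 0.20, 0.02],
              [0.45, 0.30, 0.03],
              [0.30, 0.35, 0.04]])
pixel = np.array([0.107, 0.348, 0.294])        # synthetic mixed pixel (0.5 veg, 0.4 soil, 0.1 shade)

res = lsq_linear(E, pixel, bounds=(0, 1))      # bounded least squares for the fractions
fractions = res.x / res.x.sum()                # approximate sum-to-one constraint
print("vegetation, soil, shade:", fractions.round(2))
```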

  10. Accelerating transient simulation of linear reduced order models.

    Thornquist, Heidi K.; Mei, Ting; Keiter, Eric Richard; Bond, Brad

    2011-10-01

    Model order reduction (MOR) techniques have been used to facilitate the analysis of dynamical systems for many years. Although existing model reduction techniques are capable of providing huge speedups in the frequency domain analysis (i.e. AC response) of linear systems, such speedups are often not obtained when performing transient analysis on the systems, particularly when coupled with other circuit components. Reduced system size, which is the ostensible goal of MOR methods, is often insufficient to improve transient simulation speed on realistic circuit problems. It can be shown that making the correct reduced order model (ROM) implementation choices is crucial to the practical application of MOR methods. In this report we investigate methods for accelerating the simulation of circuits containing ROM blocks using the circuit simulator Xyce.

  11. Behavioral modeling of the dominant dynamics in input-output transfer of linear(ized) circuits

    Beelen, T.G.J.; Maten, ter E.J.W.; Sihaloho, H.J.; Eijndhoven, van S.J.L.

    2010-01-01

    We present a powerful procedure for determining both the dominant dynamics of the inputoutput transfer and the corresponding most influential circuit parameters of a linear(ized) circuit. The procedure consists of several steps in which a specific (sub)problem is solved and its solution is used in

  12. Non Linear Modelling and Control of Hydraulic Actuators

    B. Šulc

    2002-01-01

    Full Text Available This paper deals with non-linear modelling and control of a differential hydraulic actuator. The nonlinear state space equations are derived from basic physical laws. They are more powerful than the transfer function in the case of linear models, and they allow the application of an object-oriented approach in simulation programs. The effects of all friction forces (static, Coulomb and viscous) have been modelled, and many phenomena that are usually neglected are taken into account, e.g., the static term of friction, the leakage between the two chambers and external space. Proportional Differential (PD) and Fuzzy Logic Controllers (FLC) have been applied in order to make a comparison by means of simulation. Simulation is performed using Matlab/Simulink, and some of the results are compared graphically. FLC is tuned in such a way that it produces a constant control signal close to its maximum (or minimum), where possible. In the case of PD control the occurrence of peaks cannot be avoided. These peaks produce a very high velocity that oversteps the allowed values.

  13. Modeling Pan Evaporation for Kuwait by Multiple Linear Regression

    Almedeij, Jaber

    2012-01-01

    Evaporation is an important parameter for many projects related to hydrology and water resources systems. This paper constitutes the first study conducted in Kuwait to obtain empirical relations for the estimation of daily and monthly pan evaporation as functions of available meteorological data of temperature, relative humidity, and wind speed. The data used here for the modeling are daily measurements of substantial continuity coverage, within a period of 17 years between January 1993 and December 2009, which can be considered representative of the desert climate of the urban zone of the country. The multiple linear regression technique is used with a procedure of variable selection for fitting the best model forms. The correlations of evaporation with temperature and relative humidity are also transformed in order to linearize the existing curvilinear patterns of the data by using power and exponential functions, respectively. The evaporation models suggested with the best variable combinations were shown to produce results that are in reasonable agreement with the observed values. PMID:23226984
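
    The sketch below illustrates the general idea of fitting pan evaporation with multiple linear regression after transforming the predictors to linearize curvilinear relations. The data are synthetic and the particular power and exponential transforms are assumptions for the example, not the forms selected in the paper.

```python
# Sketch of multiple linear regression on transformed predictors (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
n = 200
temperature = rng.uniform(15, 45, n)     # deg C
humidity = rng.uniform(10, 60, n)        # %
wind = rng.uniform(1, 8, n)              # m/s
# Synthetic daily pan evaporation (mm/day), additive in the transformed features.
evaporation = 0.02 * temperature**1.5 + 2.0 * np.exp(-0.01 * humidity) \
              + 0.3 * wind + rng.normal(0, 0.3, n)

# Linearize the curvilinear relations: power transform for temperature,
# exponential transform for humidity, wind kept as-is.
X = np.column_stack([
    np.ones(n),
    temperature**1.5,
    np.exp(-0.01 * humidity),
    wind,
])
coef, *_ = np.linalg.lstsq(X, evaporation, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((evaporation - pred)**2) / np.sum((evaporation - evaporation.mean())**2)
print("coefficients:", coef.round(3), "R^2:", round(r2, 3))
```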

  14. Modeling and optimization of tissue 10B concentration and dosimetry for arbitrary BPA-F infusion schedules in humans

    Kiger, W.S. III; Newton, T.H.; Palmer, M.R.

    2000-01-01

    Separate compartmental models have been derived for the concentration of 10B resulting from BPA-F infusion in the central vascular space (i.e., blood or, more appropriately, plasma) and in glioblastoma multiforme and normal brain. By coupling the model for the temporal variation of 10B concentration in the central vascular space with that for tissue, the dynamic behavior of the 10B concentration and the resulting dosimetry in the relevant tissues and blood may be predicted for arbitrary infusion schedules. This coupled model may be used as a tool for identifying the optimal time for BNCT irradiation and optimal BPA-F infusion schedule (i.e., temporal targeting) in humans without the need for expensive and time-consuming pharmacokinetic studies for every infusion schedule considered. This model was used to analyze the concentration profiles resulting from a wide range of infusion schedules and their implications for dosimetry. (author)
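
    To make the coupled-compartment idea concrete, the sketch below integrates a toy two-compartment (plasma/tissue) model driven by a constant-rate infusion. The rate constants, infusion schedule, and units are hypothetical assumptions for illustration only and are not the fitted parameters of the cited model.

```python
# Rough sketch of a coupled plasma/tissue compartment model under an infusion input.
import numpy as np
from scipy.integrate import solve_ivp

k_el, k_pt, k_tp = 0.6, 0.25, 0.15   # 1/h: elimination, plasma->tissue, tissue->plasma (assumed)

def infusion_rate(t):
    # 90-minute constant-rate infusion (arbitrary 10B units per hour, assumed)
    return 30.0 if t < 1.5 else 0.0

def rhs(t, y):
    plasma, tissue = y
    d_plasma = infusion_rate(t) - (k_el + k_pt) * plasma + k_tp * tissue
    d_tissue = k_pt * plasma - k_tp * tissue
    return [d_plasma, d_tissue]

sol = solve_ivp(rhs, (0.0, 8.0), [0.0, 0.0], dense_output=True)
t = np.linspace(0, 8, 9)
plasma, tissue = sol.sol(t)
for ti, p, b in zip(t, plasma, tissue):
    print(f"t={ti:3.1f} h  plasma={p:6.2f}  tissue={b:6.2f}")
```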

  15. A linear model for flow over complex terrain

    Frank, H P [Risoe National Lab., Wind Energy and Atmospheric Physics Dept., Roskilde (Denmark)

    1999-03-01

    A linear flow model similar to WAsP or LINCOM has been developed. Major differences are an isentropic temperature equation which allows internal gravity waves, and vertical advection of the shear of the mean flow. The importance of these effects is illustrated by examples. Resource maps are calculated from a distribution of geostrophic winds and stratification for Pyhaetunturi Fell in northern Finland and Acqua Spruzza in Italy. Stratification becomes important if the inverse Froude number formulated with the width of the hill becomes of order one or greater. (au) EU-JOULE-3. 16 refs.

  16. Linear-quadratic model predictions for tumor control probability

    Yaes, R.J.

    1987-01-01

    Sigmoid dose-response curves for tumor control are calculated from the linear-quadratic model parameters α and β, obtained from human epidermoid carcinoma cell lines, and are much steeper than the clinical dose-response curves for head and neck cancers. One possible explanation is the presence of small radiation-resistant clones arising from mutations in an initially homogeneous tumor. Using the mutation theory of Delbruck and Luria and of Goldie and Coldman, the authors discuss the implications of such radiation-resistant clones for clinical radiation therapy.
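
    As a worked illustration of the sigmoid dose-response implied by the linear-quadratic model, the sketch below computes a Poissonian tumour control probability for a fractionated schedule. The α, β, clonogen number, and fraction number are illustrative assumptions, not the values fitted to the cell lines in the paper.

```python
# Worked example: LQ cell survival and Poisson tumour control probability (TCP).
import numpy as np

alpha, beta = 0.3, 0.03       # Gy^-1, Gy^-2 (assumed)
clonogens = 1e8               # initial clonogen number (assumed)
n_fractions = 30

dose_per_fraction = np.linspace(1.0, 3.0, 9)   # Gy
surviving_fraction = np.exp(-(alpha * dose_per_fraction + beta * dose_per_fraction**2))
tcp = np.exp(-clonogens * surviving_fraction**n_fractions)

for d, p in zip(dose_per_fraction, tcp):
    print(f"{n_fractions} x {d:.2f} Gy  ->  TCP = {p:.3f}")
```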

  17. Inventory model using bayesian dynamic linear model for demand forecasting

    Marisol Valencia-Cárdenas

    2014-12-01

    Full Text Available An important factor in the manufacturing process is the inventory management of finished products. Industry is constantly looking for better alternatives to establish an adequate plan of production and stored quantities, at optimal cost, obtaining the quantities over a time horizon, which makes it possible to define in advance the resources and logistics needed to distribute products on time. The total absence of historical data, which many statistical forecasting models require, demands the search for other kinds of accurate techniques. This work presents an alternative that not only permits forecasting in an adjusted way, but also provides optimal quantities to produce and store at an optimal cost, using Bayesian statistics. The proposal is illustrated with real data. Keywords: Bayesian statistics, optimization, inventory model, Bayesian dynamic linear model.

  18. Linear-quadratic model underestimates sparing effect of small doses per fraction in rat spinal cord

    Shun Wong, C.; Toronto University; Minkin, S.; Hill, R.P.; Toronto University

    1993-01-01

    The application of the linear-quadratic (LQ) model to describe iso-effective fractionation schedules for dose fraction sizes less than 2 Gy has been controversial. Experiments are described in which the effect of daily fractionated irradiation given with a wide range of fraction sizes was assessed in the rat cervical spinal cord. The first group of rats was given doses in 1, 2, 4, 8 and 40 fractions/day. The second group received 3 initial 'top-up' doses of 9 Gy given once daily, representing 3/4 tolerance, followed by doses in 1, 2, 10, 20, 30 and 40 fractions/day. The fractionated portion of the irradiation schedule therefore constituted only the final quarter of the tolerance dose. The endpoint of the experiments was paralysis of the forelimbs secondary to white matter necrosis. Direct analysis of data from experiments with full course fractionation up to 40 fractions/day (25.0-1.98 Gy/fraction) indicated consistency with the LQ model, yielding an α/β value of 2.41 Gy. Analysis of data from experiments in which the 3 'top-up' doses were followed by up to 10 fractions (10.0-1.64 Gy/fraction) gave an α/β value of 3.41 Gy. However, data from 'top-up' experiments with 20, 30 and 40 fractions (1.60-0.55 Gy/fraction) were inconsistent with the LQ model and gave a very small α/β of 0.48 Gy. It is concluded that the LQ model based on data from large doses/fraction underestimates the sparing effect of small doses/fraction, provided sufficient time is allowed between each fraction for repair of sublethal damage. (author). 28 refs., 5 figs., 1 tab

  19. A Novel Energy Efficient Topology Control Scheme Based on a Coverage-Preserving and Sleep Scheduling Model for Sensor Networks

    Shi, Binbin; Wei, Wei; Wang, Yihuai; Shu, Wanneng

    2016-01-01

    In high-density sensor networks, scheduling some sensor nodes to be in the sleep mode while other sensor nodes remain active for monitoring or forwarding packets is an effective control scheme to conserve energy. In this paper, a Coverage-Preserving Control Scheduling Scheme (CPCSS) based on a cloud model and redundancy degree in sensor networks is proposed. Firstly, the normal cloud model is adopted for calculating the similarity degree between the sensor nodes in terms of their historical d...

  20. Phenomenology of non-minimal supersymmetric models at linear colliders

    Porto, Stefano

    2015-06-01

    The focus of this thesis is on the phenomenology of several non-minimal supersymmetric models in the context of future linear colliders (LCs). Extensions of the minimal supersymmetric Standard Model (MSSM) may accommodate the observed Higgs boson mass at about 125 GeV in a more natural way than the MSSM, with a richer phenomenology. We consider both F-term extensions of the MSSM, as for instance the next-to-minimal supersymmetric Standard Model (NMSSM), as well as D-term extensions arising at low energies from gauge-extended supersymmetric models. The NMSSM offers a solution to the μ-problem with an additional gauge singlet supermultiplet. The enlarged neutralino sector of the NMSSM can be accurately studied at a LC and used to distinguish the model from the MSSM. We show that the polarised beams of a LC can be exploited to reconstruct the neutralino and chargino sectors and eventually distinguish the NMSSM, even considering challenging scenarios that resemble the MSSM. Non-decoupling D-term extensions of the MSSM can raise the tree-level Higgs mass with respect to the MSSM. This is done through additional contributions to the Higgs quartic potential, effectively generated by an extended gauge group. We study how this can happen and we show how these additional non-decoupling D-terms affect the SM-like Higgs boson couplings to fermions and gauge bosons. We estimate how the deviations from the SM couplings can be spotted at the Large Hadron Collider (LHC) and at the International Linear Collider (ILC), showing how the ILC would be suitable for the model identification. Since our results prove that a linear collider is a fundamental machine for studying supersymmetry phenomenology at a high level of precision, we argue that a thorough comprehension of the physics at the interaction point (IP) of a LC is also needed. Therefore, we finally consider the possibility of observing intense electromagnetic field effects and nonlinear quantum electrodynamics.

  1. Non-Linear Slosh Damping Model Development and Validation

    Yang, H. Q.; West, Jeff

    2015-01-01

    Propellant tank slosh dynamics are typically represented by a spring-mass-damper mechanical model. This mechanical model is then included in the equation of motion of the entire vehicle for Guidance, Navigation and Control (GN&C) analysis. For a partially-filled smooth wall propellant tank, the critical damping based on classical empirical correlation is as low as 0.05%. Due to this low value of damping, propellant slosh is a potential source of disturbance critical to the stability of launch and space vehicles. It is postulated that the commonly quoted slosh damping is valid only in the linear regime, where the slosh amplitude is small. With the increase of slosh amplitude, the critical damping value should also increase. If this nonlinearity can be verified and validated, the slosh stability margin can be significantly improved, and the level of conservatism maintained in the GN&C analysis can be lessened. The purpose of this study is to explore and to quantify the dependence of slosh damping on slosh amplitude. Accurately predicting the extremely low damping value of a smooth wall tank is very challenging for any Computational Fluid Dynamics (CFD) tool. One must resolve thin boundary layers near the wall and limit numerical damping to a minimum. This computational study demonstrates that with proper grid resolution, CFD can indeed accurately predict the low damping physics from smooth walls in the linear regime. Comparisons of extracted damping values with experimental data for different tank sizes show very good agreement. Numerical simulations confirm that slosh damping is indeed a function of slosh amplitude. When slosh amplitude is low, the damping ratio is essentially constant, which is consistent with the empirical correlation. Once the amplitude reaches a critical value, the damping ratio becomes a linearly increasing function of the slosh amplitude. A follow-on experiment validated the developed nonlinear damping relationship. This discovery can

  2. A novel modeling approach for job shop scheduling problem under uncertainty

    Behnam Beheshti Pur

    2013-11-01

    Full Text Available When aiming at improving efficiency and reducing cost in manufacturing environments, production scheduling can play an important role. Although a common workshop is full of uncertainties, researchers using mathematical programming have mainly focused on deterministic problems. After briefly reviewing and discussing popular modeling approaches in the field of stochastic programming, this paper proposes a new approach based on utility theory for a certain range of problems and under some practical assumptions. Expected utility programming, as the proposed approach, will be compared with the other well-known methods, and its meaningfulness and usefulness will be illustrated via numerical examples and a real case.

  3. Scheduling of power generation a large-scale mixed-variable model

    Prékopa, András; Strazicky, Beáta; Deák, István; Hoffer, János; Németh, Ágoston; Potecz, Béla

    2014-01-01

    The book contains a description of a real-life application of modern mathematical optimization tools to an important problem for power networks. The objective is the modelling and calculation of optimal daily scheduling of power generation, by thermal power plants, to satisfy all demands at minimum cost, in such a way that the generation and transmission capacities as well as the demands at the nodes of the system appear in an integrated form. The physical parameters of the network are also taken into account. The obtained large-scale mixed-variable problem is relaxed in a smart, practical way to allow for fast numerical solution of the problem.
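
    For orientation only, the sketch below solves a heavily simplified version of the daily generation-scheduling idea: an hourly economic dispatch LP with made-up unit costs, capacities, and demands. It ignores the integer commitment decisions, network constraints, and scale of the mixed-variable model described in the book.

```python
# Toy hourly economic dispatch as a linear program (illustrative data only).
import numpy as np
from scipy.optimize import linprog

cost = np.array([20.0, 35.0, 50.0])        # $/MWh per thermal unit (assumed)
p_max = np.array([300.0, 200.0, 150.0])    # MW capacity per unit (assumed)
demand = [280.0, 420.0, 560.0, 350.0]      # MW for four representative hours (assumed)

for hour, load in enumerate(demand):
    # Decision variables: output of each unit in this hour; sum must meet the load.
    res = linprog(
        c=cost,
        A_eq=np.ones((1, 3)), b_eq=[load],
        bounds=[(0.0, pm) for pm in p_max],
        method="highs",
    )
    print(f"hour {hour}: dispatch = {np.round(res.x, 1)} MW, cost = {res.fun:.0f} $")
```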

  4. Investigating transportation system in container terminals and developing a yard crane scheduling model

    Hassan Javanshir

    2012-01-01

    Full Text Available World trade has seen tremendous growth in marine transportation. This paper studies the yard crane scheduling problem between different blocks in a container terminal. Its purpose is to minimize the total travel time of cranes between blocks and the total delayed workload in blocks at different periods. To this end, the problem is formulated as a mixed integer programming (MIP) model. The block pairs between which yard cranes will be transferred during the various periods are determined by this model. Afterwards the model is coded in LINGO software, which uses a branch-and-bound algorithm to solve it. Computational results determine the yard crane movement sequence among blocks that achieves the minimum total travel time for cranes and the minimum total delayed workload in blocks at different planning periods. The results also show the capability and adequacy of the developed model.

  5. Non linear permanent magnets modelling with the finite element method

    Chavanne, J.; Meunier, G.; Sabonnadiere, J.C.

    1989-01-01

    In order to perform the calculation of permanent magnets with the finite element method, it is necessary to take into account the anisotropic behaviour of hard magnetic materials (Ferrites, NdFeB, SmCo5). In linear cases, the permeability of permanent magnets is a tensor, which is fully described by the permeabilities parallel and perpendicular to the easy axis of the magnet. In non-linear cases, the model uses a texture function which represents the distribution of the local easy axes of the crystallites of the magnet. This function allows a good representation of the angular dependence of the coercive field of the magnet. As a result, it is possible to express the magnetic induction B and the tensor as functions of the field and the texture parameter. This model has been implemented in the software FLUX3D, where the tensor is used for the Newton-Raphson procedure. The 3D demagnetization of a ferrite magnet by a NdFeB magnet is a suitable representative example. The results obtained for an ideally oriented ferrite magnet and for a real one, using a measured texture parameter, are analyzed.

  6. Linear collider signal of anomaly mediated supersymmetry breaking model

    Ghosh Dilip Kumar; Kundu, Anirban; Roy, Probir; Roy, Sourov

    2001-01-01

    Though the minimal model of anomaly mediated supersymmetry breaking has been significantly constrained by recent experimental and theoretical work, there are still allowed regions of the parameter space for moderate to large values of tan β. We show that these regions will be comprehensively probed at a √s = 1 TeV e⁺e⁻ linear collider. Diagnostic signals to this end are studied by zeroing in on a unique and distinct feature of a large class of models in this genre: a neutral wino-like Lightest Supersymmetric Particle closely degenerate in mass with a wino-like chargino. The pair production processes e⁺e⁻ → ẽL±ẽL±, ẽR±ẽR±, ẽL±ẽR±, ν̃ anti-ν̃, χ̃1⁰χ̃2⁰, χ̃2⁰χ̃2⁰ are all considered at √s = 1 TeV, corresponding to the proposed TESLA linear collider, in two natural categories of mass ordering in the sparticle spectra. The signals analysed comprise multiple combinations of fast charged leptons (any of which can act as the trigger) plus displaced vertices X_D (any of which can be identified by a heavy ionizing track terminating in the detector) and/or associated soft pions with characteristic momentum distributions. (author)

  7. Linear versus quadratic portfolio optimization model with transaction cost

    Razak, Norhidayah Bt Ab; Kamil, Karmila Hanim; Elias, Siti Masitah

    2014-06-01

    An optimization model is introduced to become one of the decision making tools in investment. Hence, it is always a big challenge for investors to select the best model that could fulfill their goal in investment with respect to risk and return. In this paper we aim to discuss and compare the portfolio allocation and performance generated by quadratic and linear portfolio optimization models, namely the Markowitz and Maximin models, respectively. The application of these models has been proven to be significant and popular among others. However, transaction cost has been debated as one of the important aspects that should be considered for portfolio reallocation, as portfolio return could be significantly reduced when transaction cost is taken into consideration. Therefore, recognizing the importance of considering the transaction cost value when calculating portfolio return, we formulate this paper by using data from Shariah compliant securities listed in Bursa Malaysia. It is expected that results from this paper will effectively justify the advantage of one model over another and shed some light in the quest to find the best decision making tools in investment for individual investors.
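
    The sketch below shows one plausible way to write a linear Maximin portfolio model with a proportional transaction cost as an LP: maximize the worst-case scenario return net of rebalancing costs. The scenario returns, cost rate, current holdings, and the exact treatment of transaction costs are assumptions for illustration and are not taken from the paper's data.

```python
# Sketch of a Maximin portfolio LP with a proportional transaction cost (toy data).
import numpy as np
from scipy.optimize import linprog

scenario_returns = np.array([            # rows: scenarios, columns: assets (assumed)
    [0.08, 0.02, 0.05],
    [0.01, 0.04, 0.03],
    [-0.03, 0.05, 0.02],
])
current_weights = np.array([0.4, 0.4, 0.2])
tc = 0.01                                # 1% proportional transaction cost (assumed)
n_s, n_a = scenario_returns.shape

# Variables: [w_1..w_n, buy_1..buy_n, sell_1..sell_n, t], t = worst-case net return.
n_var = 3 * n_a + 1
c = np.zeros(n_var)
c[-1] = -1.0                             # maximize t  <=>  minimize -t

# Scenario constraints: t - r_s.w + tc * sum(buy + sell) <= 0 for each scenario s.
A_ub = np.zeros((n_s, n_var))
A_ub[:, :n_a] = -scenario_returns
A_ub[:, n_a:3 * n_a] = tc
A_ub[:, -1] = 1.0
b_ub = np.zeros(n_s)

# Rebalancing bookkeeping: w - buy + sell = current_weights, and sum(w) = 1.
A_eq = np.zeros((n_a + 1, n_var))
A_eq[:n_a, :n_a] = np.eye(n_a)
A_eq[:n_a, n_a:2 * n_a] = -np.eye(n_a)
A_eq[:n_a, 2 * n_a:3 * n_a] = np.eye(n_a)
A_eq[n_a, :n_a] = 1.0
b_eq = np.append(current_weights, 1.0)

bounds = [(0, 1)] * (3 * n_a) + [(None, None)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
w = res.x[:n_a]
print("weights:", w.round(3), "worst-case net return:", round(-res.fun, 4))
```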

  8. Formation of model-free motor memories during motor adaptation depends on perturbation schedule.

    Orban de Xivry, Jean-Jacques; Lefèvre, Philippe

    2015-04-01

    Motor adaptation to an external perturbation relies on several mechanisms such as model-based, model-free, strategic, or repetition-dependent learning. Depending on the experimental conditions, each of these mechanisms has more or less weight in the final adaptation state. Here we focused on the conditions that lead to the formation of a model-free motor memory (Huang VS, Haith AM, Mazzoni P, Krakauer JW. Neuron 70: 787-801, 2011), i.e., a memory that does not depend on an internal model or on the size or direction of the errors experienced during the learning. The formation of such model-free motor memory was hypothesized to depend on the schedule of the perturbation (Orban de Xivry JJ, Ahmadi-Pajouh MA, Harran MD, Salimpour Y, Shadmehr R. J Neurophysiol 109: 124-136, 2013). Here we built on this observation by directly testing the nature of the motor memory after abrupt or gradual introduction of a visuomotor rotation, in an experimental paradigm where the presence of model-free motor memory can be identified (Huang VS, Haith AM, Mazzoni P, Krakauer JW. Neuron 70: 787-801, 2011). We found that relearning was faster after abrupt than gradual perturbation, which suggests that model-free learning is reduced during gradual adaptation to a visuomotor rotation. In addition, the presence of savings after abrupt introduction of the perturbation but gradual extinction of the motor memory suggests that unexpected errors are necessary to induce a model-free motor memory. Overall, these data support the hypothesis that different perturbation schedules do not lead to a more or less stabilized motor memory but to distinct motor memories with different attributes and neural representations. Copyright © 2015 the American Physiological Society.

  9. On the Modelling of the Mobile WiMAX (IEEE 802.16e) Uplink Scheduler

    Darmawaty Mohd Ali

    2010-01-01

    Full Text Available Packet scheduling has drawn a great deal of attention in the field of wireless networks, as it plays an important role in distributing shared resources in a network. The process involves allocating the bandwidth among users and determining their transmission order. In this paper an uplink (UL) scheduling algorithm for the Mobile Worldwide Interoperability for Microwave Access (WiMAX) network based on the cyclic polling model is proposed. The model in this study consists of five queues (UGS, ertPS, rtPS, nrtPS, and BE) visited by a single server. A threshold policy is imposed on the nrtPS queue to ensure that the delay constraint of real time traffic (UGS, ertPS, and rtPS) is not violated, making this approach original in comparison to the existing contributions. A mathematical model is formulated for the weighted sum of the mean waiting times of the individual queues based on the pseudo-conservation law. The results of the analysis are useful in obtaining or testing approximations for individual mean waiting times, especially when queues are asymmetric (where each queue may have different stochastic characteristics such as arrival rate and service time distribution) and when their number is large (more than 2 queues).

  10. A Model-Driven Co-Design Framework for Fusing Control and Scheduling Viewpoints.

    Sundharam, Sakthivel Manikandan; Navet, Nicolas; Altmeyer, Sebastian; Havet, Lionel

    2018-02-20

    Model-Driven Engineering (MDE) is widely applied in the industry to develop new software functions and integrate them into the existing run-time environment of a Cyber-Physical System (CPS). The design of a software component involves designers from various viewpoints such as control theory, software engineering, safety, etc. In practice, while a designer from one discipline focuses on the core aspects of his field (for instance, a control engineer concentrates on designing a stable controller), he neglects, or considers as less important, the other engineering aspects (for instance, real-time software engineering or energy efficiency). This may cause some of the functional and non-functional requirements not to be met satisfactorily. In this work, we present a co-design framework based on a timing tolerance contract to address such design gaps between control and real-time software engineering. The framework consists of three steps: controller design, verified by jitter margin analysis along with co-simulation; software design, verified by a novel schedulability analysis; and run-time verification by monitoring the execution of the models on target. This framework builds on CPAL (Cyber-Physical Action Language), an MDE design environment based on model-interpretation, which enforces a timing-realistic behavior in simulation through timing and scheduling annotations. The application of our framework is exemplified in the design of an automotive cruise control system.

  11. Simulation-Based Dynamic Passenger Flow Assignment Modelling for a Schedule-Based Transit Network

    Xiangming Yao

    2017-01-01

    Full Text Available The online operation management and the offline policy evaluation in complex transit networks require an effective dynamic traffic assignment (DTA) method that can capture the temporal-spatial nature of traffic flows. The objective of this work is to propose a simulation-based dynamic passenger assignment framework and models for such applications in the context of schedule-based rail transit systems. In the simulation framework, travellers are regarded as individual agents who are able to obtain complete information on the current traffic conditions. A combined route selection model integrating pre-trip route selection and en-trip route switching is established for achieving the dynamic network flow equilibrium status. The train agent is operated strictly according to the timetable, and its capacity limitation is considered. A continuous time-driven simulator based on the proposed framework and models is developed, and its performance is illustrated through a large-scale network of the Beijing subway. The results indicate that more than 0.8 million individual passengers and thousands of trains can be simulated simultaneously at a speed ten times faster than real time. This study provides an efficient approach to analyze the dynamic demand-supply relationship for large schedule-based transit networks.

  12. A Model-Driven Co-Design Framework for Fusing Control and Scheduling Viewpoints

    Navet, Nicolas; Havet, Lionel

    2018-01-01

    Model-Driven Engineering (MDE) is widely applied in the industry to develop new software functions and integrate them into the existing run-time environment of a Cyber-Physical System (CPS). The design of a software component involves designers from various viewpoints such as control theory, software engineering, safety, etc. In practice, while a designer from one discipline focuses on the core aspects of his field (for instance, a control engineer concentrates on designing a stable controller), he neglects, or considers as less important, the other engineering aspects (for instance, real-time software engineering or energy efficiency). This may cause some of the functional and non-functional requirements not to be met satisfactorily. In this work, we present a co-design framework based on a timing tolerance contract to address such design gaps between control and real-time software engineering. The framework consists of three steps: controller design, verified by jitter margin analysis along with co-simulation; software design, verified by a novel schedulability analysis; and run-time verification by monitoring the execution of the models on target. This framework builds on CPAL (Cyber-Physical Action Language), an MDE design environment based on model-interpretation, which enforces a timing-realistic behavior in simulation through timing and scheduling annotations. The application of our framework is exemplified in the design of an automotive cruise control system. PMID:29461489

  13. A Model-Driven Co-Design Framework for Fusing Control and Scheduling Viewpoints

    Sakthivel Manikandan Sundharam

    2018-02-01

    Full Text Available Model-Driven Engineering (MDE) is widely applied in the industry to develop new software functions and integrate them into the existing run-time environment of a Cyber-Physical System (CPS). The design of a software component involves designers from various viewpoints such as control theory, software engineering, safety, etc. In practice, while a designer from one discipline focuses on the core aspects of his field (for instance, a control engineer concentrates on designing a stable controller), he neglects, or considers as less important, the other engineering aspects (for instance, real-time software engineering or energy efficiency). This may cause some of the functional and non-functional requirements not to be met satisfactorily. In this work, we present a co-design framework based on a timing tolerance contract to address such design gaps between control and real-time software engineering. The framework consists of three steps: controller design, verified by jitter margin analysis along with co-simulation; software design, verified by a novel schedulability analysis; and run-time verification by monitoring the execution of the models on target. This framework builds on CPAL (Cyber-Physical Action Language), an MDE design environment based on model-interpretation, which enforces a timing-realistic behavior in simulation through timing and scheduling annotations. The application of our framework is exemplified in the design of an automotive cruise control system.

  14. Probabilistic model of ligaments and tendons: Quasistatic linear stretching

    Bontempi, M.

    2009-03-01

    Ligaments and tendons have a significant role in the musculoskeletal system and are frequently subjected to injury. This study presents a model of collagen fibers, based on the study of a statistical distribution of fibers when they are subjected to quasistatic linear stretching. With respect to other methodologies, this model is able to describe the behavior of the bundle using fewer ad hoc hypotheses and is able to describe all the quasistatic stretch-load responses of the bundle, including the yield and failure regions described in the literature. It has two other important results: the first is that it is able to correlate the mechanical behavior of the bundle with its internal structure, and it suggests a methodology to deduce the fiber population distribution directly from the tensile-test data. The second is that it can follow the evolution of the fiber structure during stretching, making it possible to study the internal adaptation of fibers in physiological and pathological conditions.

  15. Linear mixing model applied to coarse resolution satellite data

    Holben, Brent N.; Shimabukuro, Yosio E.

    1992-01-01

    A linear mixing model typically applied to high resolution data such as Airborne Visible/Infrared Imaging Spectrometer, Thematic Mapper, and Multispectral Scanner System is applied to the NOAA Advanced Very High Resolution Radiometer coarse resolution satellite data. The reflective portion extracted from the middle IR channel 3 (3.55 - 3.93 microns) is used with channels 1 (0.58 - 0.68 microns) and 2 (0.725 - 1.1 microns) to run the Constrained Least Squares model to generate fraction images for an area in the west central region of Brazil. The derived fraction images are compared with an unsupervised classification and the fraction images derived from Landsat TM data acquired on the same day. In addition, the relationship between these fraction images and the well known NDVI images is presented. The results show the great potential of applying the unmixing techniques to coarse resolution data for global studies.

  16. Relating Cohesive Zone Model to Linear Elastic Fracture Mechanics

    Wang, John T.

    2010-01-01

    The conditions required for a cohesive zone model (CZM) to predict a failure load of a cracked structure similar to that obtained by a linear elastic fracture mechanics (LEFM) analysis are investigated in this paper. This study clarifies why many different phenomenological cohesive laws can produce similar fracture predictions. Analytical results for five cohesive zone models are obtained, using five different cohesive laws that have the same cohesive work rate (CWR, the area under the traction-separation curve) but different maximum tractions. The effect of the maximum traction on the predicted cohesive zone length and the remote applied load at fracture is presented. Similar to the small scale yielding condition required for an LEFM analysis to be valid, the cohesive zone length also needs to be much smaller than the crack length. This is a necessary condition for a CZM to obtain a fracture prediction equivalent to an LEFM result.

  17. Locally supersymmetric D=3 non-linear sigma models

    Wit, B. de; Tollsten, A.K.; Nicolai, H.

    1993-01-01

    We study non-linear sigma models with N local supersymmetries in three space-time dimensions. For N=1 and 2 the target space of these models is Riemannian or Kaehler, respectively. All N>2 theories are associated with Einstein spaces. For N=3 the target space is quaternionic, while for N=4 it generally decomposes into two separate quaternionic spaces, associated with inequivalent supermultiplets. For N=5, 6, 8 there is a unique (symmetric) space for any given number of supermultiplets. Beyond that there are only theories based on a single supermultiplet for N=9, 10, 12 and 16, associated with coset spaces with the exceptional isometry groups F4(-20), E6(-14), E7(-5) and E8(+8), respectively. For N=3 and N ≥ 5 the D=2 theories obtained by dimensional reduction are two-loop finite. (orig.)

  18. Explicit estimating equations for semiparametric generalized linear latent variable models

    Ma, Yanyuan

    2010-07-05

    We study generalized linear latent variable models without requiring a distributional assumption of the latent variables. Using a geometric approach, we derive consistent semiparametric estimators. We demonstrate that these models have a property which is similar to that of a sufficient complete statistic, which enables us to simplify the estimating procedure and explicitly to formulate the semiparametric estimating equations. We further show that the explicit estimators have the usual root n consistency and asymptotic normality. We explain the computational implementation of our method and illustrate the numerical performance of the estimators in finite sample situations via extensive simulation studies. The advantage of our estimators over the existing likelihood approach is also shown via numerical comparison. We employ the method to analyse a real data example from economics. © 2010 Royal Statistical Society.

  19. Synthetic Domain Theory and Models of Linear Abadi & Plotkin Logic

    Møgelberg, Rasmus Ejlers; Birkedal, Lars; Rosolini, Guiseppe

    2008-01-01

    Plotkin suggested using a polymorphic dual intuitionistic/linear type theory (PILLY) as a metalanguage for parametric polymorphism and recursion. In recent work the first two authors and R.L. Petersen have defined a notion of parametric LAPL-structure, which are models of PILLY, in which one can reason using parametricity and, for example, solve a large class of domain equations, as suggested by Plotkin. In this paper, we show how an interpretation of a strict version of Bierman, Pitts and Russo's language Lily into synthetic domain theory presented by Simpson and Rosolini gives rise to a parametric LAPL-structure. This adds to the evidence that the notion of LAPL-structure is a general notion, suitable for treating many different parametric models, and it provides formal proofs of consequences of parametricity expected to hold for the interpretation. Finally, we show how these results...

  20. Metroplex Optimization Model Expansion and Analysis: The Airline Fleet, Route, and Schedule Optimization Model (AFRS-OM)

    Sherry, Lance; Ferguson, John; Hoffman, Karla; Donohue, George; Beradino, Frank

    2012-01-01

    This report describes the Airline Fleet, Route, and Schedule Optimization Model (AFRS-OM) that is designed to provide insights into airline decision-making with regard to markets served, schedule of flights on these markets, the type of aircraft assigned to each scheduled flight, load factors, airfares, and airline profits. The main inputs to the model are hedged fuel prices, airport capacity limits, and candidate markets. Embedded in the model are aircraft performance and associated cost factors, and willingness-to-pay (i.e. demand vs. airfare curves). Case studies demonstrate the application of the model for analysis of the effects of increased capacity and changes in operating costs (e.g. fuel prices). Although there are differences between airports (due to differences in the magnitude of travel demand and sensitivity to airfare), the system is more sensitive to changes in fuel prices than to changes in capacity. Further, the benefits of modernization in the form of increased capacity could be undermined by increases in hedged fuel prices.

  1. A continuous time model for a short-term multiproduct batch process scheduling

    Jenny Díaz Ramírez

    2018-01-01

    Full Text Available In the chemical industry, it is common to find production systems characterized by having a single stage or a previously identified bottleneck stage, with multiple non-identical parallel stations and with setup costs that depend on the production sequence. This paper proposes a mixed integer production-scheduling model that identifies the lot size and product sequence that maximize profit. It considers multiple typical industry conditions, such as penalties for noncompliance or out-of-service periods of the productive units (or stations) for preventive maintenance activities. The model was validated with real data from an oil chemical company. Aiming to analyze its performance, we applied the model to 155 instances of production, which were obtained using the Monte Carlo technique on the historical production data of the same company. We obtained an average 12% reduction in the total cost of production and a 19% increase in the estimated profit.

  2. Three hybridization models based on local search scheme for job shop scheduling problem

    Balbi Fraga, Tatiana

    2015-05-01

    This work presents three different hybridization models based on the general schema of Local Search Heuristics, named Hybrid Successive Application, Hybrid Neighborhood, and Hybrid Improved Neighborhood. Although similar approaches might have already been presented in the literature in other contexts, in this work these models are applied to analyze the solution of the job shop scheduling problem with the heuristics Taboo Search and Particle Swarm Optimization. Besides, we investigate some aspects that must be considered in order to achieve better solutions than those obtained by the original heuristics. The results demonstrate that the algorithms derived from these three hybrid models are more robust than the original algorithms and able to get better results than those found by the single Taboo Search.

  3. Solving large mixed linear models using preconditioned conjugate gradient iteration.

    Strandén, I; Lidauer, M

    1999-12-01

    Continuous evaluation of dairy cattle with a random regression test-day model requires a fast solving method and algorithm. A new computing technique feasible in Jacobi and conjugate gradient based iterative methods using iteration on data is presented. In the new computing technique, the calculations in the multiplication of a vector by a matrix were organized into three steps instead of the commonly used two steps. The three-step method was implemented in a general mixed linear model program that used preconditioned conjugate gradient iteration. Performance of this program in comparison to other general solving programs was assessed via estimation of breeding values using univariate, multivariate, and random regression test-day models. Central processing unit time per iteration with the new three-step technique was, at best, one-third of that needed with the old technique. Performance was best with the test-day model, which was the largest and most complex model used. The new program did well in comparison to other general software. Programs keeping the mixed model equations in random access memory required at least 20 and 435% more time to solve the univariate and multivariate animal models, respectively. Computations of the second-best iteration-on-data program took approximately three and five times longer for the animal and test-day models, respectively, than did the new program. Good performance was due to fast computing time per iteration and quick convergence to the final solutions. Use of preconditioned conjugate gradient based methods in solving large breeding value problems is supported by our findings.
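
    The sketch below shows the core preconditioned conjugate gradient iteration with a Jacobi (diagonal) preconditioner, the kind of solver referred to above. The coefficient matrix here is a small random symmetric positive-definite stand-in, not real mixed-model equations, and the three-step iteration-on-data data layout from the paper is not reproduced.

```python
# Minimal preconditioned conjugate gradient (Jacobi preconditioner) on a toy SPD system.
import numpy as np

rng = np.random.default_rng(1)
n = 200
M = rng.normal(size=(n, n))
A = M @ M.T + n * np.eye(n)        # symmetric positive definite coefficient matrix
b = rng.normal(size=n)             # right-hand side

def pcg(A, b, tol=1e-10, max_iter=500):
    x = np.zeros_like(b)
    r = b - A @ x
    d_inv = 1.0 / np.diag(A)       # Jacobi preconditioner
    z = d_inv * r
    p = z.copy()
    rz = r @ z
    for it in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, it + 1
        z = d_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

x, iters = pcg(A, b)
print("iterations:", iters, "residual norm:", np.linalg.norm(b - A @ x))
```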

  4. Linear mixed-effects modeling approach to FMRI group analysis.

    Chen, Gang; Saad, Ziad S; Britton, Jennifer C; Pine, Daniel S; Cox, Robert W

    2013-06-01

    Conventional group analysis is usually performed with a Student-type t-test, regression, or standard AN(C)OVA in which the variance-covariance matrix is presumed to have a simple structure. Some correction approaches are adopted when assumptions about the covariance structure are violated. However, as experiments are designed with different degrees of sophistication, these traditional methods can become cumbersome, or even be unable to handle the situation at hand. For example, most current FMRI software packages have difficulty analyzing the following scenarios at group level: (1) taking within-subject variability into account when there are effect estimates from multiple runs or sessions; (2) continuous explanatory variables (covariates) modeling in the presence of a within-subject (repeated measures) factor, multiple subject-grouping (between-subjects) factors, or a mixture of both; (3) subject-specific adjustments in covariate modeling; (4) group analysis with estimation of the hemodynamic response (HDR) function by multiple basis functions; (5) various cases of missing data in longitudinal studies; and (6) group studies involving family members or twins. Here we present a linear mixed-effects modeling (LME) methodology that extends the conventional group analysis approach to analyze many complicated cases, including the six prototypes delineated above, whose analyses would otherwise be either difficult or unfeasible under traditional frameworks such as AN(C)OVA and the general linear model (GLM). In addition, the strength of the LME framework lies in its flexibility to model and estimate the variance-covariance structures for both random effects and residuals. The intraclass correlation (ICC) values can be easily obtained with an LME model with crossed random effects, even in the presence of confounding fixed effects. The simulations of one prototypical scenario indicate that LME modeling keeps a balance between the control for false positives and the sensitivity
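
    As a generic illustration of the linear mixed-effects idea (not of the specific FMRI pipeline), the sketch below fits a random-intercept model with one continuous covariate using statsmodels; the data frame, covariate name, and effect sizes are synthetic assumptions.

```python
# Toy linear mixed-effects fit: random intercept per subject, fixed effect for a covariate.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
subjects = np.repeat(np.arange(20), 8)                  # 20 subjects, 8 measurements each
age = rng.uniform(20, 40, subjects.size)                # a continuous covariate (assumed)
subject_effect = rng.normal(0, 1.5, 20)[subjects]       # true random intercepts
y = 2.0 + 0.1 * age + subject_effect + rng.normal(0, 1.0, subjects.size)

df = pd.DataFrame({"y": y, "age": age, "subject": subjects})

# Random intercept for each subject; fixed effect for the covariate.
result = smf.mixedlm("y ~ age", data=df, groups=df["subject"]).fit()
print(result.summary())
```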

  5. Maximizing the nurses' preferences in nurse scheduling problem: mathematical modeling and a meta-heuristic algorithm

    Jafari, Hamed; Salmasi, Nasser

    2015-09-01

    The nurse scheduling problem (NSP) has received a great amount of attention in recent years. In the NSP, the goal is to assign shifts to the nurses in order to satisfy the hospital's demand during the planning horizon by considering different objective functions. In this research, we focus on maximizing the nurses' preferences for working shifts and weekends off by considering several important factors such as hospital policies, labor laws, governmental regulations, and the status of nurses at the end of the previous planning horizon in one of the largest hospitals in Iran, i.e., Milad Hospital. Due to the shortage of available nurses, at first the minimum total number of required nurses is determined. Then, a mathematical programming model is proposed to solve the problem optimally. Since the proposed research problem is NP-hard, a meta-heuristic algorithm based on simulated annealing (SA) is applied to heuristically solve the problem in a reasonable time. An initial feasible solution generator and several novel neighborhood structures are applied to enhance the performance of the SA algorithm. Inspired by our observations in Milad Hospital, random test problems are generated to evaluate the performance of the SA algorithm. The results of computational experiments indicate that the applied SA algorithm provides solutions with an average percentage gap of 5.49% compared to the upper bounds obtained from the mathematical model. Moreover, the applied SA algorithm provides significantly better solutions in a reasonable time than the schedules provided by the head nurses.
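
    The skeleton below illustrates the simulated-annealing mechanics on a deliberately tiny nurse-rostering toy: a single reassignment move, coverage penalties, and a preference reward. The problem sizes, penalty weights, neighbourhood move, and cooling schedule are all assumptions and do not reproduce the paper's constraints or neighbourhood structures.

```python
# Bare-bones simulated annealing for a toy nurse-rostering problem (illustrative only).
import math
import random

random.seed(0)
n_nurses, n_days, shifts = 8, 7, ["morning", "evening", "night", "off"]
required = {"morning": 2, "evening": 2, "night": 1}           # nurses needed per shift/day
preference = {(n, d): random.choice(shifts) for n in range(n_nurses) for d in range(n_days)}

def cost(schedule):
    penalty = 0
    for d in range(n_days):
        for s, need in required.items():
            staffed = sum(1 for n in range(n_nurses) if schedule[n, d] == s)
            penalty += 100 * max(0, need - staffed)           # coverage shortfall
    penalty += sum(1 for k, s in schedule.items() if s != preference[k])  # unmet preferences
    return penalty

schedule = {(n, d): random.choice(shifts) for n in range(n_nurses) for d in range(n_days)}
current_cost = cost(schedule)
best, best_cost = dict(schedule), current_cost
temperature = 50.0
for _ in range(20000):
    n, d = random.randrange(n_nurses), random.randrange(n_days)
    old = schedule[n, d]
    schedule[n, d] = random.choice(shifts)                    # single-cell reassignment move
    new_cost = cost(schedule)
    delta = new_cost - current_cost
    if delta <= 0 or random.random() < math.exp(-delta / temperature):
        current_cost = new_cost
        if new_cost < best_cost:
            best, best_cost = dict(schedule), new_cost
    else:
        schedule[n, d] = old                                  # reject the move
    temperature *= 0.9997                                     # geometric cooling
print("best penalty found:", best_cost)
```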

  6. Energy Storage Scheduling with an Advanced Battery Model: A Game–Theoretic Approach

    Matthias Pilz

    2017-11-01

    Full Text Available Energy storage systems will play a key role for individual users in the future smart grid. They serve two purposes: (i) handling the intermittent nature of renewable energy resources for a more reliable and efficient system; and (ii) preventing the impact of blackouts on users and allowing for more independence from the grid, while saving money through load-shifting. In this paper we investigate the latter scenario by looking at a neighbourhood of 25 households whose demand is satisfied by one utility company. Assuming the users possess lithium-ion batteries, we answer the question of how each household can make the best use of its individual storage system given a real-time pricing policy. To this end, each user is modelled as a player of a non-cooperative scheduling game. The novelty of the game lies in the advanced battery model, which incorporates the charging and discharging characteristics of lithium-ion batteries. The action set for each player comprises day-ahead schedules of their respective battery usage. We analyse different user behaviour and are able to obtain a realistic and applicable understanding of the potential of these systems. As a result, we show the correlation between the efficiency of the battery and the outcome of the game.
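
    To give a feel for the load-shifting benefit a single household's battery can extract under a price signal, the sketch below builds a crude day-ahead schedule: charge in the cheapest hours, discharge in the most expensive ones, with a constant round-trip efficiency. The price profile, battery parameters, and greedy rule are assumptions; the paper's game-theoretic interaction between households and its detailed lithium-ion charge/discharge model are not reproduced.

```python
# Simplified greedy day-ahead battery schedule under a real-time price profile.
import numpy as np

price = np.array([0.10, 0.09, 0.08, 0.08, 0.09, 0.12, 0.18, 0.22,
                  0.20, 0.17, 0.15, 0.14, 0.14, 0.15, 0.17, 0.20,
                  0.24, 0.28, 0.30, 0.26, 0.20, 0.16, 0.13, 0.11])   # $/kWh (assumed)
capacity_kwh, power_kw, efficiency = 10.0, 3.0, 0.9                  # assumed battery data

n_hours = int(capacity_kwh // power_kw)          # hours needed to (nearly) fill the battery
charge_hours = np.argsort(price)[:n_hours]       # cheapest hours
discharge_hours = np.argsort(price)[-n_hours:]   # most expensive hours

# With this price profile the cheap night hours precede the evening peak, so the
# greedy plan respects the state-of-charge ordering; a real model would enforce it.
schedule = np.zeros(24)                          # + charging power, - discharging power (kW)
schedule[charge_hours] = power_kw
schedule[discharge_hours] = -power_kw * efficiency   # round-trip losses taken on discharge

net_cost_change = float(np.sum(price * schedule))
print("net cost change from load shifting: %.2f $/day" % net_cost_change)
```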

  7. A Bilevel Scheduling Approach for Modeling Energy Transaction of Virtual Power Plants in Distribution Networks

    F. Nazari

    2017-03-01

    Full Text Available By increasing the use of distributed generation (DG) in distribution network operation, an entity called the virtual power plant (VPP) has been introduced to control, dispatch and aggregate the generation of DGs, enabling them to participate either in the electricity market or in distribution network operation. The participation of VPPs in the electricity market has raised challenges in fairly allocating payments and benefits between VPPs and the distribution network operator (DNO). This paper presents a bilevel scheduling approach to model the energy transaction between VPPs and the DNO. The upper level corresponds to the decision making of VPPs, which bid their long-term contract prices so that their own profits are maximized, and the lower level represents the DNO decision making to supply the electricity demand of the network by minimizing its overall cost. The proposed bilevel scheduling approach is transformed into a single-level optimization problem using its Karush-Kuhn-Tucker (KKT) optimality conditions. Several scenarios are applied to scrutinize the effectiveness and usefulness of the proposed model.

  8. Nonlinear model-based control of the Czochralski process III: Proper choice of manipulated variables and controller parameter scheduling

    Neubert, M.; Winkler, J.

    2012-12-01

    This contribution continues an article series [1,2] about the nonlinear model-based control of the Czochralski crystal growth process. The key idea of the presented approach is to use a sophisticated combination of nonlinear model-based and conventional (linear) PI controllers for tracking both crystal radius and growth rate. Using heater power and pulling speed as manipulated variables, several controller structures are possible. The present part tries to systematize the properties of the materials to be grown in order to obtain unambiguous decision criteria for the most profitable choice of the controller structure. For this purpose a material-specific constant M, called interface mobility, and a more process-specific constant S, called system response number, are introduced. While the first one summarizes important material properties like thermal conductivity and latent heat, the latter one characterizes the process by evaluating the average axial thermal gradients at the phase boundary and the actual growth rate at which the crystal is grown. Furthermore, these characteristic numbers are useful for establishing a scheduling strategy for the PI controller parameters in order to improve the controller performance. Finally, both numbers give a better understanding of the general thermal system dynamics of the Czochralski technique.

  9. Direction of Effects in Multiple Linear Regression Models.

    Wiedermann, Wolfgang; von Eye, Alexander

    2015-01-01

    Previous studies analyzed asymmetric properties of the Pearson correlation coefficient using higher than second order moments. These asymmetric properties can be used to determine the direction of dependence in a linear regression setting (i.e., establish which of two variables is more likely to be on the outcome side) within the framework of cross-sectional observational data. Extant approaches are restricted to the bivariate regression case. The present contribution extends the direction of dependence methodology to a multiple linear regression setting by analyzing distributional properties of residuals of competing multiple regression models. It is shown that, under certain conditions, the third central moments of estimated regression residuals can be used to decide upon direction of effects. In addition, three different approaches for statistical inference are discussed: a combined D'Agostino normality test, a skewness difference test, and a bootstrap difference test. Type I error and power of the procedures are assessed using Monte Carlo simulations, and an empirical example is provided for illustrative purposes. In the discussion, issues concerning the quality of psychological data, possible extensions of the proposed methods to the fourth central moment of regression residuals, and potential applications are addressed.
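
    The sketch below illustrates the residual-based direction-of-dependence idea on synthetic data: fit both candidate regressions and compare the skewness (third standardized moment) of their residuals. The data-generating process, sample size, and use of skewness as the single diagnostic are assumptions chosen so that the true direction (x causes y) is identifiable; the paper's inference procedures (normality, skewness-difference, and bootstrap tests) are not implemented here.

```python
# Direction-of-dependence sketch: compare residual skewness of the two competing regressions.
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(3)
n = 2000
x = rng.exponential(scale=1.0, size=n)     # skewed (non-normal) "cause"
y = 0.8 * x + rng.normal(0, 1.0, n)        # linear effect plus normal noise

def residuals(outcome, predictor):
    X = np.column_stack([np.ones_like(predictor), predictor])
    beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    return outcome - X @ beta

skew_y_on_x = skew(residuals(y, x))        # residuals of the correctly directed model
skew_x_on_y = skew(residuals(x, y))        # residuals of the reversed model
print("residual skewness, y ~ x:", round(skew_y_on_x, 3))
print("residual skewness, x ~ y:", round(skew_x_on_y, 3))
# Under the assumptions of the approach, the model fitted in the true causal direction
# leaves residuals closer to symmetric (skewness nearer zero) than the reversed model.
```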

  10. Linear model applied to the evaluation of pharmaceutical stability data

    Renato Cesar Souza

    2013-09-01

    Full Text Available The expiry date on the packaging of a product gives the consumer confidence that the product will retain its identity, content, quality and purity throughout the period of validity of the drug. The definition of this term in the pharmaceutical industry is based on stability data obtained during product registration. In view of the above, this work aims to apply linear regression according to the guideline ICH Q1E, 2003, to evaluate some aspects of a product undergoing registration in Brazil. For this purpose, the evaluation was carried out with the development center of a multinational company in Brazil, with samples of three different batches composed of two active ingredients in two different packages. Based on the preliminary results obtained, it was possible to observe the difference in the degradation tendency of the product in the two packages and the relationship between the variables studied, adding knowledge so that new linear equation models can be applied and developed for other products.
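
    The sketch below shows one common way an ICH Q1E-style linear evaluation is carried out: regress the assay against time and take the point where the one-sided 95% confidence bound on the mean crosses the specification limit as the supported shelf life. The stability data, specification limit, and single-batch treatment are invented assumptions and are not the product data evaluated in the paper.

```python
# Shelf-life sketch: linear regression of assay vs. time with a one-sided 95% confidence bound.
import numpy as np
from scipy import stats

months = np.array([0, 3, 6, 9, 12, 18, 24], dtype=float)
assay = np.array([100.2, 99.6, 99.1, 98.7, 98.0, 97.1, 96.3])   # % of label claim (invented)
spec_limit = 95.0                                                # lower specification (assumed)

X = np.column_stack([np.ones_like(months), months])
beta, res_ss, *_ = np.linalg.lstsq(X, assay, rcond=None)
n, p = len(months), 2
mse = res_ss[0] / (n - p)
cov = mse * np.linalg.inv(X.T @ X)
t_crit = stats.t.ppf(0.95, df=n - p)                             # one-sided 95%

t_grid = np.linspace(0, 48, 481)
Xg = np.column_stack([np.ones_like(t_grid), t_grid])
mean_pred = Xg @ beta
se_mean = np.sqrt(np.sum((Xg @ cov) * Xg, axis=1))
lower_95 = mean_pred - t_crit * se_mean

crossing = t_grid[lower_95 < spec_limit]
shelf_life = crossing[0] if crossing.size else t_grid[-1]
print("supported shelf life: about %.0f months" % shelf_life)
```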

  11. Fourth standard model family neutrino at future linear colliders

    Ciftci, A.K.; Ciftci, R.; Sultansoy, S.

    2005-01-01

    It is known that flavor democracy favors the existence of a fourth standard model (SM) family. In order to give nonzero masses to the first three family fermions, flavor democracy has to be slightly broken. A parametrization for democracy breaking, which gives the correct values for the fundamental fermion masses and, at the same time, predicts quark and lepton Cabibbo-Kobayashi-Maskawa (CKM) matrices in good agreement with the experimental data, is proposed. The pair production of the fourth SM family Dirac (ν4) and Majorana (N1) neutrinos at future linear colliders with √s = 500 GeV, 1 TeV, and 3 TeV is considered. The cross section for the process e⁺e⁻ → ν4ν4 (N1N1) and the branching ratios for possible decay modes of both neutrinos are determined. The decays of the fourth family neutrinos into muon channels (ν4(N1) → μ±W±) provide the cleanest signature at e⁺e⁻ colliders. Meanwhile, in our parametrization this channel is dominant. W bosons produced in decays of the fourth family neutrinos will be seen in the detector as either di-jets or isolated leptons. As an example, we consider the production of 200 GeV mass fourth family neutrinos at √s = 500 GeV linear colliders by taking into account di-muon plus four jet events as signatures.

  12. Influence of the void fraction in the linear reactivity model

    Castillo, J.A.; Ramirez, J.R.; Alonso, G.

    2003-01-01

    The linear reactivity model allows multicycle analysis in pressurized water reactors to be performed in a simple and quick way. In boiling water reactors, however, the void fraction varies axially from 0% of voids in the lower part of the fuel assemblies to approximately 70% of voids at their exit. Because of this, it is very important to determine the average void fraction at different stages of reactor operation in order to appropriately predict the burnup of the fuel assemblies and the slope of the linear reactivity model. In this work, the power profile is followed for different burnup steps of a typical operating cycle of a boiling water reactor. Starting from these profiles, an algorithm is built that determines the void profile and thus obtains its average value. The results are compared against those reported by the CM-PRESTO code, which uses another method to carry out this calculation. Finally, the range in which the average void fraction lies during a typical cycle is determined, and an estimate is made of the impact that the use of this value would have on the prediction of the reactivity produced by the fuel assemblies. (Author)

  13. Characteristics and Properties of a Simple Linear Regression Model

    Kowal Robert

    2016-12-01

    Full Text Available A simple linear regression model is one of the pillars of classic econometrics. Despite the passage of time, it continues to raise interest both from the theoretical side and from the application side. One of the many fundamental questions in the model concerns determining derivative characteristics and studying the properties existing in their scope; this paper refers to the first of these aspects. The literature of the subject provides several classic solutions in that regard. In this paper, a completely new design is proposed, based on the direct application of variance and its properties resulting from the non-correlation of certain estimators with the mean, within the scope of which some fundamental dependencies of the model characteristics are obtained in a much more compact manner. The apparatus allows for a simple and uniform demonstration of multiple dependencies and fundamental properties in the model, and it does so in an intuitive manner. The results were obtained in a classic, traditional area, where everything, as it might seem, has already been thoroughly studied and discovered.

  14. A simple non-linear model of immune response

    Gutnikov, Sergei; Melnikov, Yuri

    2003-01-01

    It is still unknown why the adaptive immune response in the natural immune system, based on clonal proliferation of lymphocytes, requires the interaction of at least two different cell types with the same antigen. We present a simple mathematical model illustrating that a system with separate types of cells for antigen recognition and pathogen destruction provides more robust adaptive immunity than a system where just one cell type is responsible for both recognition and destruction. The model is over-simplified, as we did not intend to describe the natural immune system. However, our model provides a tool for testing the proposed approach through qualitative analysis of the immune system dynamics in order to construct more sophisticated models of the immune systems that exist in living nature. It also opens a possibility to explore specific features of highly non-linear dynamics in nature-inspired computational paradigms like artificial immune systems and immunocomputing. We expect this paper to be of interest not only for mathematicians but also for biologists; therefore we made an effort to explain the mathematics in sufficient detail for readers without a professional mathematical background.

  15. Non-linear scaling of a musculoskeletal model of the lower limb using statistical shape models.

    Nolte, Daniel; Tsang, Chui Kit; Zhang, Kai Yu; Ding, Ziyun; Kedgley, Angela E; Bull, Anthony M J

    2016-10-03

    Accurate muscle geometry for musculoskeletal models is important to enable accurate subject-specific simulations. Commonly, linear scaling is used to obtain individualised muscle geometry. More advanced methods include non-linear scaling using segmented bone surfaces and manual or semi-automatic digitisation of muscle paths from medical images. In this study, a new scaling method combining non-linear scaling with reconstructions of bone surfaces using statistical shape modelling is presented. Statistical Shape Models (SSMs) of the femur and tibia/fibula were used to reconstruct bone surfaces of nine subjects. Reference models were created by morphing manually digitised muscle paths to mean shapes of the SSMs using non-linear transformations, and inter-subject variability was calculated. Subject-specific models of muscle attachment and via points were created from three reference models. The accuracy was evaluated by calculating the differences between the scaled and manually digitised models. The points defining the muscle paths showed large inter-subject variability at the thigh and shank (up to 26 mm); this was found to limit the accuracy of all studied scaling methods. Errors for the subject-specific muscle point reconstructions of the thigh could be decreased by 9% to 20% by using the non-linear scaling compared to a typical linear scaling method. We conclude that the proposed non-linear scaling method is more accurate than linear scaling methods. Thus, when combined with the ability to reconstruct bone surfaces from incomplete or scattered geometry data using statistical shape models, our proposed method is an alternative to linear scaling methods. Copyright © 2016 The Author. Published by Elsevier Ltd. All rights reserved.

  16. A Non-linear Stochastic Model for an Office Building with Air Infiltration

    Thavlov, Anders; Madsen, Henrik

    2015-01-01

    This paper presents a non-linear heat dynamic model for a multi-room office building with air infiltration. Several linear and non-linear models, with and without air infiltration, are investigated and compared. The models are formulated using stochastic differential equations and the model...
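    As an illustration of the modelling style the abstract describes (stochastic differential equations for heat dynamics with air infiltration), here is a minimal single-zone sketch simulated with the Euler-Maruyama scheme. The structure and all parameter values are assumptions for demonstration, not the paper's multi-room model.

```python
# Illustrative single-zone stochastic heat balance with an air-infiltration term:
# dT = [(Ta - T)/(R*C) + c_inf*(Ta - T) + Ph/C] dt + sigma dW
# All parameter values below are invented for demonstration only.
import numpy as np

rng = np.random.default_rng(1)
dt, n = 60.0, 24 * 60                  # 1-minute steps over one day
R, C = 0.05, 2.0e6                     # thermal resistance [K/W], heat capacity [J/K]
c_inf = 1.0e-5                         # air-infiltration coefficient [1/s]
Ph = 1000.0                            # heater power [W]
sigma = 0.002                          # diffusion intensity [K/sqrt(s)]

T = np.empty(n); T[0] = 20.0
Ta = 5.0 + 3.0 * np.sin(np.linspace(0, 2 * np.pi, n))   # outdoor temperature profile

for k in range(n - 1):
    drift = (Ta[k] - T[k]) / (R * C) + c_inf * (Ta[k] - T[k]) + Ph / C
    T[k + 1] = T[k] + drift * dt + sigma * np.sqrt(dt) * rng.normal()

print(T[-1])    # indoor temperature at the end of the day
```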

  17. Distributing Correlation Coefficients of Linear Structure-Activity/Property Models

    Sorana D. BOLBOACA

    2011-12-01

    Full Text Available Quantitative structure-activity/property relationships are mathematical relationships linking chemical structure and activity/property in a quantitative manner. These in silico approaches are frequently used to reduce animal testing and risk assessment, as well as to increase time- and cost-effectiveness in the characterization and identification of active compounds. The aim of our study was to investigate the pattern of the distribution of correlation coefficients associated with simple linear relationships linking the compounds' structure with their activities. A set of the most common ordnance compounds found at naval facilities, with a limited data set covering a range of toxicities to the aquatic ecosystem, and a set of seven properties were studied. Statistically significant models were selected and investigated. The probability density function of the correlation coefficients was investigated using a series of possible continuous distribution laws. Almost 48% of the correlation coefficients proved to fit the Beta distribution, 40% the Generalized Pareto distribution, and 12% the Pert distribution.
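    A hedged sketch of the distribution-fitting step described above: fit candidate distributions to a sample of correlation coefficients and compare the fits with a Kolmogorov-Smirnov test. The r values below are synthetic placeholders, not the paper's data, and only two of the three candidate laws (Beta and Generalized Pareto) are readily available in scipy.

```python
# Fit candidate probability laws to correlation coefficients and score the fits.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
r = rng.beta(8, 2, size=60)              # stand-in for |r| of significant models

for name, dist in [("beta", stats.beta), ("genpareto", stats.genpareto)]:
    params = dist.fit(r)                  # maximum-likelihood fit
    ks = stats.kstest(r, dist.cdf, args=params)
    print(f"{name:10s} KS statistic = {ks.statistic:.3f}, p = {ks.pvalue:.3f}")
```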

  18. Modeling and analysis of linearized wheel-rail contact dynamics

    Soomro, Z.

    2014-01-01

    The dynamics of railway vehicles are nonlinear and depend upon several factors, including vehicle speed, normal load and adhesion level. The presence of contaminants on the railway track makes the dynamics unpredictable as well. Therefore, in order to develop an effective control strategy it is important to thoroughly analyze the effect of each factor on the dynamic response. In this paper a linearized model of a railway wheel-set is developed and is later analyzed by varying the speed and adhesion level while keeping the normal load constant. A wheel-set is the wheel-axle assembly of a railroad car; the wheel-rail contact patch is studied using contact mechanics, the study of the deformation of solids that touch each other at one or more points. (author)

  19. Human visual modeling and image deconvolution by linear filtering

    Larminat, P. de; Barba, D.; Gerber, R.; Ronsin, J.

    1978-01-01

    The problem is the numerical restoration of images degraded by passing through a known, spatially invariant linear system and by the addition of stationary noise. We propose an improvement of the Wiener filter to allow the restoration of such images. This improvement reduces two important drawbacks of the classical Wiener filter: the voluminous data processing, and the lack of consideration of the characteristics of human vision that condition how the observer perceives the restored image. In the first section, we describe the structure of the visual detection system and a method of modelling it. In the second section we explain a restoration method by Wiener filtering that takes the visual properties into account and that can be adapted to the local properties of the image. Then the results obtained on TV images and scintigrams (images obtained by a gamma camera) are discussed. (in French)
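    A minimal frequency-domain Wiener deconvolution sketch of the classical filter the abstract builds on, without the human-vision weighting the authors propose. The image, point-spread function and noise level below are synthetic.

```python
# Classical Wiener deconvolution in the Fourier domain.
import numpy as np

def wiener_deconvolve(degraded, psf, nsr):
    """Restore an image blurred by `psf`, given a noise-to-signal power ratio `nsr`."""
    H = np.fft.fft2(psf, s=degraded.shape)      # transfer function of the blur
    G = np.fft.fft2(degraded)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)     # classical Wiener filter
    return np.real(np.fft.ifft2(W * G))

rng = np.random.default_rng(3)
img = rng.random((64, 64))
psf = np.ones((5, 5)) / 25.0                    # 5x5 box blur
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf, s=img.shape)))
noisy = blurred + 0.01 * rng.normal(size=img.shape)
restored = wiener_deconvolve(noisy, psf, nsr=1e-3)
print("MSE degraded:", np.mean((noisy - img) ** 2))
print("MSE restored:", np.mean((restored - img) ** 2))
```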

  20. Convergence diagnostics for Eigenvalue problems with linear regression model

    Shi, Bo; Petrovic, Bojan

    2011-01-01

    Although the Monte Carlo method has been extensively used for criticality/eigenvalue problems, a reliable, robust, and efficient convergence diagnostics method is still desired. Most methods are based on integral parameters (multiplication factor, entropy) and either condense the local distribution information into a single value (e.g., entropy) or disregard it entirely. We propose to employ the detailed cycle-by-cycle local flux evolution obtained by using the mesh tally mechanism to assess source and flux convergence. By applying a linear regression model to each individual mesh in a mesh tally for convergence diagnostics, a global convergence criterion can be obtained. We exemplify this method on two problems and obtain promising diagnostic results. (author)
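    A sketch of the diagnostic idea described above: fit a straight line to the cycle-by-cycle tally of each mesh cell and flag cells whose fitted slope is significantly different from zero. The tallies below are synthetic (a decaying transient plus noise), not output from an actual Monte Carlo run.

```python
# Per-mesh linear regression as a convergence diagnostic on synthetic tallies.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_cycles, n_mesh = 200, 50
cycles = np.arange(n_cycles)
flux = (1.0 + 0.5 * np.exp(-cycles / 30.0)[:, None]
        + 0.02 * rng.normal(size=(n_cycles, n_mesh)))

def trending_fraction(window, alpha=0.05):
    """Fraction of mesh cells whose tally shows a significant linear trend."""
    x = np.arange(window.shape[0])
    pvals = [stats.linregress(x, window[:, j]).pvalue for j in range(window.shape[1])]
    return np.mean(np.array(pvals) < alpha)

print(trending_fraction(flux[:50]))    # large early in the run (source still converging)
print(trending_fraction(flux[-50:]))   # near the false-positive rate once converged
```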

  1. A Dynamic Linear Modeling Approach to Public Policy Change

    Loftis, Matthew; Mortensen, Peter Bjerre

    2017-01-01

    Theories of public policy change, despite their differences, converge on one point of strong agreement. The relationship between policy and its causes can and does change over time. This consensus yields numerous empirical implications, but our standard analytical tools are inadequate for testing them. As a result, the dynamic and transformative relationships predicted by policy theories have been left largely unexplored in time-series analysis of public policy. This paper introduces dynamic linear modeling (DLM) as a useful statistical tool for exploring time-varying relationships in public policy. The paper offers a detailed exposition of the DLM approach and illustrates its usefulness with a time series analysis of U.S. defense policy from 1957-2010. The results point the way for a new attention to dynamics in the policy process and the paper concludes with a discussion of how...
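    To illustrate the DLM machinery the abstract refers to, here is a minimal sketch (not the authors' model or data): a single regression coefficient that is allowed to drift over time and is tracked with a Kalman filter. All series and variances are synthetic assumptions.

```python
# Time-varying-coefficient regression estimated with a scalar Kalman filter:
# y_t = beta_t * x_t + eps_t,   beta_t = beta_{t-1} + eta_t
import numpy as np

rng = np.random.default_rng(5)
T = 120
x = rng.normal(1.0, 0.5, T)
beta_true = np.concatenate([np.full(60, 0.5), np.full(60, 1.5)])   # a policy-style shift
y = beta_true * x + rng.normal(0, 0.3, T)

obs_var, state_var = 0.3 ** 2, 0.01
beta_hat, P = 0.0, 1.0
path = []
for t in range(T):
    P += state_var                               # predict step
    K = P * x[t] / (x[t] ** 2 * P + obs_var)     # Kalman gain
    beta_hat += K * (y[t] - beta_hat * x[t])     # update with the new observation
    P *= (1 - K * x[t])
    path.append(beta_hat)

print(path[40], path[110])   # the filtered coefficient tracks the shift from ~0.5 to ~1.5
```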

  2. Baryon and meson phenomenology in the extended Linear Sigma Model

    Giacosa, Francesco; Habersetzer, Anja; Teilab, Khaled; Eshraim, Walaa; Divotgey, Florian; Olbrich, Lisa; Gallas, Susanna; Wolkanowski, Thomas; Janowski, Stanislaus; Heinz, Achim; Deinet, Werner; Rischke, Dirk H. [Institute for Theoretical Physics, J. W. Goethe University, Max-von-Laue-Str. 1, 60438 Frankfurt am Main (Germany); Kovacs, Peter; Wolf, Gyuri [Institute for Particle and Nuclear Physics, Wigner Research Center for Physics, Hungarian Academy of Sciences, H-1525 Budapest (Hungary); Parganlija, Denis [Institute for Theoretical Physics, Vienna University of Technology, Wiedner Hauptstr. 8-10, A-1040 Vienna (Austria)

    2014-07-01

    The vacuum phenomenology obtained within the so-called extended Linear Sigma Model (eLSM) is presented. The eLSM Lagrangian is constructed by including from the very beginning vector and axial-vector d.o.f., and by requiring dilatation invariance and chiral symmetry. After a general introduction of the approach, particular attention is devoted to the latest results. In the mesonic sector the strong decays of the scalar and the pseudoscalar glueballs, the weak decays of the tau lepton into vector and axial-vector mesons, and the description of masses and decays of charmed mesons are shown. In the baryonic sector the omega production in proton-proton scattering and the inclusion of baryons with strangeness are described.

  3. Non Abelian T-duality in Gauged Linear Sigma Models

    Bizet, Nana Cabo; Martínez-Merino, Aldo; Zayas, Leopoldo A. Pando; Santos-Silva, Roberto

    2018-04-01

    Abelian T-duality in Gauged Linear Sigma Models (GLSM) forms the basis of the physical understanding of Mirror Symmetry as presented by Hori and Vafa. We consider an alternative formulation of Abelian T-duality on GLSM's as a gauging of a global U(1) symmetry with the addition of appropriate Lagrange multipliers. For GLSMs with Abelian gauge groups and without superpotential we reproduce the dual models introduced by Hori and Vafa. We extend the construction to formulate non-Abelian T-duality on GLSMs with global non-Abelian symmetries. The equations of motion that lead to the dual model are obtained for a general group, they depend in general on semi-chiral superfields; for cases such as SU(2) they depend on twisted chiral superfields. We solve the equations of motion for an SU(2) gauged group with a choice of a particular Lie algebra direction of the vector superfield. This direction covers a non-Abelian sector that can be described by a family of Abelian dualities. The dual model Lagrangian depends on twisted chiral superfields and a twisted superpotential is generated. We explore some non-perturbative aspects by making an Ansatz for the instanton corrections in the dual theories. We verify that the effective potential for the U(1) field strength in a fixed configuration on the original theory matches the one of the dual theory. Imposing restrictions on the vector superfield, more general non-Abelian dual models are obtained. We analyze the dual models via the geometry of their susy vacua.

  4. A comparison of linear interpolation models for iterative CT reconstruction.

    Hahn, Katharina; Schöndube, Harald; Stierstorfer, Karl; Hornegger, Joachim; Noo, Frédéric

    2016-12-01

    Recent reports indicate that model-based iterative reconstruction methods may improve image quality in computed tomography (CT). One difficulty with these methods is the number of options available to implement them, including the selection of the forward projection model and the penalty term. Currently, the literature is fairly scarce in terms of guidance regarding this selection step, whereas these options impact image quality. Here, the authors investigate the merits of three forward projection models that rely on linear interpolation: the distance-driven method, Joseph's method, and the bilinear method. The authors' selection is motivated by three factors: (1) in CT, linear interpolation is often seen as a suitable trade-off between discretization errors and computational cost, (2) the first two methods are popular with manufacturers, and (3) the third method enables assessing the importance of a key assumption in the other methods. One approach to evaluate forward projection models is to inspect their effect on discretized images, as well as the effect of their transpose on data sets, but significance of such studies is unclear since the matrix and its transpose are always jointly used in iterative reconstruction. Another approach is to investigate the models in the context they are used, i.e., together with statistical weights and a penalty term. Unfortunately, this approach requires the selection of a preferred objective function and does not provide clear information on features that are intrinsic to the model. The authors adopted the following two-stage methodology. First, the authors analyze images that progressively include components of the singular value decomposition of the model in a reconstructed image without statistical weights and penalty term. Next, the authors examine the impact of weights and penalty on observed differences. Image quality metrics were investigated for 16 different fan-beam imaging scenarios that enabled probing various aspects
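    A minimal sketch of the bilinear forward-projection idea, one of the three interpolation models compared above: the line integral through a pixelized image is approximated by sampling the image at equidistant points along the ray with bilinear interpolation. This is an illustration, not the authors' implementation, and the phantom is a toy example.

```python
# Line integral through an image via equidistant sampling with bilinear interpolation.
import numpy as np

def bilinear(img, x, y):
    """Bilinearly interpolate img at continuous coordinates (x, y)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    x1, y1 = min(x0 + 1, img.shape[1] - 1), min(y0 + 1, img.shape[0] - 1)
    return ((1 - dx) * (1 - dy) * img[y0, x0] + dx * (1 - dy) * img[y0, x1]
            + (1 - dx) * dy * img[y1, x0] + dx * dy * img[y1, x1])

def forward_project(img, p0, p1, n_samples=200):
    """Approximate the line integral of img between points p0 and p1."""
    ts = np.linspace(0.0, 1.0, n_samples)
    step = np.hypot(p1[0] - p0[0], p1[1] - p0[1]) / (n_samples - 1)
    pts = [(p0[0] + t * (p1[0] - p0[0]), p0[1] + t * (p1[1] - p0[1])) for t in ts]
    return step * sum(bilinear(img, x, y) for x, y in pts)

img = np.zeros((64, 64)); img[24:40, 24:40] = 1.0          # a 16-pixel square phantom
print(forward_project(img, (0.0, 32.0), (63.0, 32.0)))      # ~16, the chord length
```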

  5. Mathematical model and algorithm of operation scheduling for monitoring situation in local waters

    Sokolov Boris

    2017-01-01

    Full Text Available A multiple-model approach to the description and investigation of control processes in a regional maritime security system is presented. The processes considered in this paper are treated as control processes for the computing operations that provide monitoring of the situation arising in the local water area and that are connected to the relocation of different ship classes (hereafter, active mobile objects, AMO). The previously developed concept of the active moving object (AMO) is used. The models describe the operation of the AMO automated monitoring and control system (AMCS) elements as well as their interaction with objects-in-service that are sources or recipients of the information being processed. The unified description of various control processes allows the technical and functional structures of the AMO AMCS to be synthesized simultaneously. The algorithm for solving the scheduling problem is described in terms of the classical theory of optimal automatic control.

  6. Plan maestro de producción basado en programación lineal entera para una empresa de productos químicos || Master Production Scheduling Based on Integer Linear Programming for a Chemical Company

    Reyes Zotelo, Yunuem

    2017-12-01

    Full Text Available In this work, we propose an integer linear programming model for production scheduling of a group of finished products with independent demand. The model for the master production scheduling (MPS) is designed by considering production and inventory costs, as well as the productive process constraints regarding installations and production times. The aim of the proposed model is the minimization of the costs involved; specifically, undertime and overtime costs of resources, as well as the consideration of a minimum service level related to the deferred demand. The validation of the model considers data belonging to the demand of each product in a 12-week planning horizon and compares five scenarios in which some characteristics of the system and different service levels are modified. Finally, the results obtained for each one of the scenarios expose the improvement obtained by the proposed model with respect to the current procedure in the company under study.
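    A hedged sketch of a master-production-scheduling model in the spirit described above (inventory, backorder and overtime costs, capacity limits, a service-level constraint). It assumes the open-source PuLP package with its bundled CBC solver; the data, costs and constraint values are invented for illustration and are not the paper's model.

```python
# Small MPS-style MILP: choose production, overtime, inventory and backorders per period.
import pulp

periods, demand = range(4), [100, 150, 120, 180]
cap, h_cost, b_cost, ot_cost = 130, 1.0, 8.0, 5.0      # capacity, holding, backorder, overtime

m = pulp.LpProblem("MPS", pulp.LpMinimize)
x   = pulp.LpVariable.dicts("prod", periods, lowBound=0, cat="Integer")
ot  = pulp.LpVariable.dicts("overtime", periods, lowBound=0, upBound=40)
inv = pulp.LpVariable.dicts("inventory", periods, lowBound=0)
bo  = pulp.LpVariable.dicts("backorder", periods, lowBound=0)

# objective: holding + backorder + overtime costs
m += pulp.lpSum(h_cost * inv[t] + b_cost * bo[t] + ot_cost * ot[t] for t in periods)

for t in periods:
    prev_inv = inv[t - 1] if t > 0 else 0
    prev_bo = bo[t - 1] if t > 0 else 0
    # inventory balance, with unmet demand carried forward as backorders
    m += prev_inv - prev_bo + x[t] + ot[t] - demand[t] == inv[t] - bo[t]
    m += x[t] <= cap                                   # regular capacity limit

m += bo[max(periods)] <= 0.05 * sum(demand)            # crude minimum-service-level proxy
m.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[m.status], pulp.value(m.objective))
```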

  7. A hybrid flow shop model for an ice cream production scheduling problem

    Imma Ribas Vila

    2009-07-01

    Full Text Available In this paper we address a scheduling problem that comes from an ice cream manufacturing company. This production system can be modelled as a three-stage no-wait hybrid flow shop with batch-dependent setup costs. To contribute to reducing the gap between theory and practice we have considered the real constraints and the criteria used by planners. The problem considered has been formulated as a mixed integer program. Further, two competitive heuristic procedures have been developed, and one of them will be proposed to schedule production in the ice cream factory.

  8. Manufacturing scheduling systems an integrated view on models, methods and tools

    Framinan, Jose M; Ruiz García, Rubén

    2014-01-01

    The book is devoted to the problem of manufacturing scheduling, which is the efficient allocation of jobs (orders) over machines (resources) in a manufacturing facility. It offers a comprehensive and integrated perspective on the different aspects required to design and implement systems to efficiently and effectively support manufacturing scheduling decisions. Obtaining economic and reliable schedules constitutes the core of excellence in customer service and efficiency in manufacturing operations. Therefore, scheduling forms an area of vital importance for competition in manufacturing companies. However, only a fraction of scheduling research has been translated into practice, due to several reasons. First, the inherent complexity of scheduling has led to an excessively fragmented field in which different subproblems and issues are treated in an independent manner as goals in themselves, therefore lacking a unifying view of the scheduling problem. Furthermore, mathematical brilliance and elegance has sometime...

  9. Optimizing Biorefinery Design and Operations via Linear Programming Models

    Talmadge, Michael; Batan, Liaw; Lamers, Patrick; Hartley, Damon; Biddy, Mary; Tao, Ling; Tan, Eric

    2017-03-28

    The ability to assess and optimize economics of biomass resource utilization for the production of fuels, chemicals and power is essential for the ultimate success of a bioenergy industry. The team of authors, consisting of members from the National Renewable Energy Laboratory (NREL) and the Idaho National Laboratory (INL), has developed simple biorefinery linear programming (LP) models to enable the optimization of theoretical or existing biorefineries. The goal of this analysis is to demonstrate how such models can benefit the developing biorefining industry. It focuses on a theoretical multi-pathway, thermochemical biorefinery configuration and demonstrates how the biorefinery can use LP models for operations planning and optimization in comparable ways to the petroleum refining industry. Using LP modeling tools developed under U.S. Department of Energy's Bioenergy Technologies Office (DOE-BETO) funded efforts, the authors investigate optimization challenges for the theoretical biorefineries such as (1) optimal feedstock slate based on available biomass and prices, (2) breakeven price analysis for available feedstocks, (3) impact analysis for changes in feedstock costs and product prices, (4) optimal biorefinery operations during unit shutdowns / turnarounds, and (5) incentives for increased processing capacity. These biorefinery examples are comparable to crude oil purchasing and operational optimization studies that petroleum refiners perform routinely using LPs and other optimization models. It is important to note that the analyses presented in this article are strictly theoretical and they are not based on current energy market prices. The pricing structure assigned for this demonstrative analysis is consistent with $4 per gallon gasoline, which clearly assumes an economic environment that would favor the construction and operation of biorefineries. The analysis approach and examples provide valuable insights into the usefulness of analysis tools for
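    A hedged sketch of the kind of feedstock-slate LP the article describes: choose how much of each biomass feedstock to purchase to meet a throughput target at minimum cost, subject to availability. The feedstocks, prices and limits are invented placeholders, not the NREL/INL model.

```python
# Feedstock blending LP solved with scipy's HiGHS-based linprog.
from scipy.optimize import linprog

# feedstocks: corn stover, switchgrass, forest residue  ($/dry ton, available dry tons)
cost      = [60.0, 75.0, 55.0]
available = [500.0, 800.0, 300.0]
throughput_target = 1000.0                     # dry tons that must be processed

# minimize purchase cost s.t. sum(x) >= target and 0 <= x_i <= available_i
res = linprog(c=cost,
              A_ub=[[-1.0, -1.0, -1.0]], b_ub=[-throughput_target],
              bounds=list(zip([0.0] * 3, available)),
              method="highs")
print(res.x, res.fun)    # optimal feedstock slate and its total cost
```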

  10. A New Model for Predicting Acute Mucosal Toxicity in Head-and-Neck Cancer Patients Undergoing Radiotherapy With Altered Schedules

    Strigari, Lidia; Pedicini, Piernicola; D’Andrea, Marco; Pinnarò, Paola; Marucci, Laura; Giordano, Carolina; Benassi, Marcello

    2012-01-01

    Purpose: One of the worst radiation-induced acute effects in treating head-and-neck (HN) cancer is grade 3 or higher acute (oral and pharyngeal) mucosal toxicity (AMT), caused by the killing/depletion of mucosa cells. Here we aim to test a predictive model of the AMT in HN cancer patients receiving different radiotherapy schedules. Methods and Materials: Various radiotherapeutic schedules have been reviewed and classified as tolerable or intolerable based on AMT severity. A modified normal tissue complication probability (NTCP) model has been investigated to describe AMT data in radiotherapy regimens, both conventional and altered in dose and overall treatment time (OTT). We tested the hypothesis that such a model could also be applied to identify intolerable treatments and to predict AMT. This AMT NTCP model has been compared with other published predictive models to identify schedules that are either tolerable or intolerable. The area under the curve (AUC) was calculated for all models, assuming treatment tolerance as the gold standard. The correlation between AMT and the predicted toxicity rate was assessed by a Pearson correlation test. Results: The AMT NTCP model was able to distinguish between acceptable and intolerable schedules among the data available for the study (AUC = 0.84, 95% confidence interval = 0.75-0.92). In the equivalent dose at 2 Gy/fraction (EQD2) vs OTT space, the proposed model shows a trend similar to that of models proposed by other authors, but was superior in detecting some intolerable schedules. Moreover, it was able to predict the incidence of ≥G3 AMT. Conclusion: The proposed model is able to predict ≥G3 AMT after HN cancer radiotherapy, and could be useful for designing altered/hypofractionated schedules to reduce the incidence of AMT.
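    A hedged sketch relating a schedule's dose and overall treatment time to a single mucosal dose burden via the standard linear-quadratic EQD2 with a repopulation correction. This is textbook LQ machinery, not the authors' fitted NTCP model, and all parameter values (α/β, kick-off time, proliferation dose) are illustrative assumptions.

```python
# EQD2 with an overall-treatment-time (OTT) correction for repopulation.
import math

def eqd2(n_fractions, dose_per_fraction, alpha_beta=10.0,
         ott_days=None, t_kickoff=7, d_prolif=0.8):
    """Equieffective dose in 2-Gy fractions, optionally corrected for repopulation."""
    D = n_fractions * dose_per_fraction
    e = D * (dose_per_fraction + alpha_beta) / (2.0 + alpha_beta)
    if ott_days is not None and ott_days > t_kickoff:
        e -= d_prolif * (ott_days - t_kickoff)     # Gy "lost" per extra day of OTT
    return e

# conventional 35 x 2 Gy in 46 days vs. accelerated 30 x 2 Gy in 39 days
print(eqd2(35, 2.0, ott_days=46), eqd2(30, 2.0, ott_days=39))
```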

  11. Work Scheduling by Use of Worker Model in Consideration of Learning by On-The-Job Training

    Tateno, Toshitake; Shimizu, Keiko

    This paper deals with a method of scheduling manual work in consideration of learning by on-the-job training (OJT). In skilled work such as maintenance of trains and airplanes, workers must learn many tasks by OJT. While the work processing time of novice workers is longer than that of experts, the time will be reduced with repeated OJT. Therefore, OJT is important for maintaining the skill level and the long-term work efficiency of an organization. In order to devise a schedule considering OJT, the scheduler must incorporate a management function of workers to trace dynamically changing work experience. In this paper, after the relationship between scheduling problems and worker management problems is defined, a simulation method, in which a worker model and an agent-based mechanism are utilized, is proposed to derive the optimal OJT strategy toward high long-term performance. Finally, we present some case studies showing the effectiveness of OJT planning based on the simulation.

  12. Linear models for sound from supersonic reacting mixing layers

    Chary, P. Shivakanth; Samanta, Arnab

    2016-12-01

    We perform a linearized reduced-order modeling of the aeroacoustic sound sources in supersonic reacting mixing layers to explore their sensitivities to some of the flow parameters in radiating sound. Specifically, we investigate the role of outer modes as the effective flow compressibility is raised, when some of these are expected to dominate over the traditional Kelvin-Helmholtz (K-H) -type central mode. Although the outer modes are known to be of lesser importance in the near-field mixing, how they radiate to the far field is uncertain, and this is the focus of our study. Keeping the flow compressibility fixed, the outer modes are realized by biasing the respective mean densities of the fast (oxidizer) or slow (fuel) side. Here the mean flows are laminar solutions of two-dimensional compressible boundary layers with an imposed composite (turbulent) spreading rate, which we show to significantly alter the growth of instability waves by saturating them earlier, similar to nonlinear calculations; this is achieved here by solving the linear parabolized stability equations. As the flow parameters are varied, instability of the slow modes is shown to be more sensitive to heat release, potentially exceeding equivalent central modes, as these modes yield relatively compact sound sources with less spreading of the mixing layer when compared to the corresponding fast modes. In contrast, the radiated sound seems to be relatively unaffected when the mixture equivalence ratio is varied, except for a lean mixture, which is shown to yield a pronounced effect on the slow-mode radiation by reducing its modal growth.

  13. Linear programming model can explain respiration of fermentation products

    Möller, Philip; Liu, Xiaochen; Schuster, Stefan

    2018-01-01

    Many differentiated cells rely primarily on mitochondrial oxidative phosphorylation for generating energy in the form of ATP needed for cellular metabolism. In contrast, most tumor cells instead rely on aerobic glycolysis leading to lactate to about the same extent as on respiration. Warburg found that cancer cells tend to ferment glucose or other energy sources into lactate even in the presence of sufficient oxygen to support oxidative phosphorylation, which is an inefficient way to generate ATP. This effect also occurs in striated muscle cells, activated lymphocytes and microglia, endothelial cells and several mammalian cell types, a phenomenon termed the “Warburg effect”. The effect is paradoxical at first glance because the ATP production rate of aerobic glycolysis is much slower than that of respiration, and the energy demands would be better met by pure oxidative phosphorylation. We tackle this question by building a minimal model including three combined reactions. The new aspect in extension to earlier models is that we take into account the possible uptake and oxidation of the fermentation products. We examine the case where the cell can allocate protein on several enzymes in a varying distribution and model this by a linear programming problem in which the objective is to maximize the ATP production rate under different combinations of constraints on enzymes. Depending on the cost of reactions and limitation of the substrates, this leads to pure respiration, pure fermentation, or a mixture of respiration and fermentation. The model predicts that fermentation products are only oxidized when glucose is scarce or its uptake is severely limited. PMID:29415045
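    A minimal linear-programming sketch of the trade-off described above: allocate a fixed protein budget between a respiration pathway and a fermentation pathway so as to maximize the ATP production rate under a glucose-uptake limit. The yields, costs and limits are illustrative assumptions, not the parameters used in the paper, and the fermentation-product oxidation step is omitted for brevity.

```python
# Two-pathway protein-allocation LP maximizing the ATP production rate.
from scipy.optimize import linprog

atp_yield = {"respiration": 30.0, "fermentation": 2.0}    # ATP per glucose
enz_cost  = {"respiration": 10.0, "fermentation": 1.0}    # protein per unit flux
glc_limit, protein_budget = 5.0, 20.0

# decision variables: glucose flux through each pathway (v_resp, v_ferm)
c = [-atp_yield["respiration"], -atp_yield["fermentation"]]   # maximize -> minimize negative
A_ub = [[1.0, 1.0],                                           # glucose uptake limit
        [enz_cost["respiration"], enz_cost["fermentation"]]]  # protein budget
b_ub = [glc_limit, protein_budget]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2, method="highs")
print(res.x, -res.fun)    # pathway fluxes and the maximal ATP production rate
```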

  14. Linear programming model can explain respiration of fermentation products.

    Möller, Philip; Liu, Xiaochen; Schuster, Stefan; Boley, Daniel

    2018-01-01

    Many differentiated cells rely primarily on mitochondrial oxidative phosphorylation for generating energy in the form of ATP needed for cellular metabolism. In contrast, most tumor cells instead rely on aerobic glycolysis leading to lactate to about the same extent as on respiration. Warburg found that cancer cells tend to ferment glucose or other energy sources into lactate even in the presence of sufficient oxygen to support oxidative phosphorylation, which is an inefficient way to generate ATP. This effect also occurs in striated muscle cells, activated lymphocytes and microglia, endothelial cells and several mammalian cell types, a phenomenon termed the "Warburg effect". The effect is paradoxical at first glance because the ATP production rate of aerobic glycolysis is much slower than that of respiration, and the energy demands would be better met by pure oxidative phosphorylation. We tackle this question by building a minimal model including three combined reactions. The new aspect in extension to earlier models is that we take into account the possible uptake and oxidation of the fermentation products. We examine the case where the cell can allocate protein on several enzymes in a varying distribution and model this by a linear programming problem in which the objective is to maximize the ATP production rate under different combinations of constraints on enzymes. Depending on the cost of reactions and limitation of the substrates, this leads to pure respiration, pure fermentation, or a mixture of respiration and fermentation. The model predicts that fermentation products are only oxidized when glucose is scarce or its uptake is severely limited.

  15. Transport coefficients from SU(3) Polyakov linear-sigma model

    Tawfik, A.; Diab, A.

    2015-01-01

    In the mean field approximation, the grand potential of the SU(3) Polyakov linear-sigma model (PLSM) is analyzed for the order parameters of the light and strange chiral phase transitions, σ_l and σ_s, respectively, and for the deconfinement order parameters φ and φ*. Furthermore, the subtracted condensate Δ_{l,s} and the chiral order parameters M_b are compared with lattice QCD calculations. By using the dynamical quasiparticle model (DQPM), which can be considered as a system of noninteracting massive quasiparticles, we have evaluated the decay width and the relaxation time of quarks and gluons. In the framework of the LSM and with Polyakov loop corrections included, the interaction measure Δ/T^4, the specific heat c_v and speed of sound squared c_s^2 have been determined, as well as the temperature dependence of the normalized quark number density n_q/T^3 and the quark number susceptibilities χ_q/T^2 at various values of the baryon chemical potential. The electric and heat conductivity, σ_e and κ, and the bulk and shear viscosities normalized to the thermal entropy, ζ/s and η/s, are compared with available results of lattice QCD calculations.

  16. Generalized Functional Linear Models With Semiparametric Single-Index Interactions

    Li, Yehua

    2010-06-01

    We introduce a new class of functional generalized linear models, where the response is a scalar and some of the covariates are functional. We assume that the response depends on multiple covariates, a finite number of latent features in the functional predictor, and interaction between the two. To achieve parsimony, the interaction between the multiple covariates and the functional predictor is modeled semiparametrically with a single-index structure. We propose a two step estimation procedure based on local estimating equations, and investigate two situations: (a) when the basis functions are pre-determined, e.g., Fourier or wavelet basis functions and the functional features of interest are known; and (b) when the basis functions are data driven, such as with functional principal components. Asymptotic properties are developed. Notably, we show that when the functional features are data driven, the parameter estimates have an increased asymptotic variance, due to the estimation error of the basis functions. Our methods are illustrated with a simulation study and applied to an empirical data set, where a previously unknown interaction is detected. Technical proofs of our theoretical results are provided in the online supplemental materials.

  17. Sparse linear models: Variational approximate inference and Bayesian experimental design

    Seeger, Matthias W

    2009-01-01

    A wide range of problems such as signal reconstruction, denoising, source separation, feature selection, and graphical model search are addressed today by posterior maximization for linear models with sparsity-favouring prior distributions. The Bayesian posterior contains useful information far beyond its mode, which can be used to drive methods for sampling optimization (active learning), feature relevance ranking, or hyperparameter estimation, if only this representation of uncertainty can be approximated in a tractable manner. In this paper, we review recent results for variational sparse inference, and show that they share underlying computational primitives. We discuss how sampling optimization can be implemented as sequential Bayesian experimental design. While there has been tremendous recent activity to develop sparse estimation, little attention has been given to sparse approximate inference. In this paper, we argue that many problems in practice, such as compressive sensing for real-world image reconstruction, are served much better by proper uncertainty approximations than by ever more aggressive sparse estimation algorithms. Moreover, since some variational inference methods have been given strong convex optimization characterizations recently, theoretical analysis may become possible, promising new insights into nonlinear experimental design.

  18. Sparse linear models: Variational approximate inference and Bayesian experimental design

    Seeger, Matthias W [Saarland University and Max Planck Institute for Informatics, Campus E1.4, 66123 Saarbruecken (Germany)

    2009-12-01

    A wide range of problems such as signal reconstruction, denoising, source separation, feature selection, and graphical model search are addressed today by posterior maximization for linear models with sparsity-favouring prior distributions. The Bayesian posterior contains useful information far beyond its mode, which can be used to drive methods for sampling optimization (active learning), feature relevance ranking, or hyperparameter estimation, if only this representation of uncertainty can be approximated in a tractable manner. In this paper, we review recent results for variational sparse inference, and show that they share underlying computational primitives. We discuss how sampling optimization can be implemented as sequential Bayesian experimental design. While there has been tremendous recent activity to develop sparse estimation, little attention has been given to sparse approximate inference. In this paper, we argue that many problems in practice, such as compressive sensing for real-world image reconstruction, are served much better by proper uncertainty approximations than by ever more aggressive sparse estimation algorithms. Moreover, since some variational inference methods have been given strong convex optimization characterizations recently, theoretical analysis may become possible, promising new insights into nonlinear experimental design.

  19. Generalized Functional Linear Models With Semiparametric Single-Index Interactions

    Li, Yehua; Wang, Naisyin; Carroll, Raymond J.

    2010-01-01

    We introduce a new class of functional generalized linear models, where the response is a scalar and some of the covariates are functional. We assume that the response depends on multiple covariates, a finite number of latent features in the functional predictor, and interaction between the two. To achieve parsimony, the interaction between the multiple covariates and the functional predictor is modeled semiparametrically with a single-index structure. We propose a two step estimation procedure based on local estimating equations, and investigate two situations: (a) when the basis functions are pre-determined, e.g., Fourier or wavelet basis functions and the functional features of interest are known; and (b) when the basis functions are data driven, such as with functional principal components. Asymptotic properties are developed. Notably, we show that when the functional features are data driven, the parameter estimates have an increased asymptotic variance, due to the estimation error of the basis functions. Our methods are illustrated with a simulation study and applied to an empirical data set, where a previously unknown interaction is detected. Technical proofs of our theoretical results are provided in the online supplemental materials.

  20. Modeling containment of large wildfires using generalized linear mixed-model analysis

    Mark Finney; Isaac C. Grenfell; Charles W. McHugh

    2009-01-01

    Billions of dollars are spent annually in the United States to contain large wildland fires, but the factors contributing to suppression success remain poorly understood. We used a regression model (generalized linear mixed-model) to model containment probability of individual fires, assuming that containment was a repeated-measures problem (fixed effect) and...

  1. Stochastic linear hybrid systems: Modeling, estimation, and application

    Seah, Chze Eng

    Hybrid systems are dynamical systems which have interacting continuous state and discrete state (or mode). Accurate modeling and state estimation of hybrid systems are important in many applications. We propose a hybrid system model, known as the Stochastic Linear Hybrid System (SLHS), to describe hybrid systems with stochastic linear system dynamics in each mode and stochastic continuous-state-dependent mode transitions. We then develop a hybrid estimation algorithm, called the State-Dependent-Transition Hybrid Estimation (SDTHE) algorithm, to estimate the continuous state and discrete state of the SLHS from noisy measurements. It is shown that the SDTHE algorithm is more accurate or more computationally efficient than existing hybrid estimation algorithms. Next, we develop a performance analysis algorithm to evaluate the performance of the SDTHE algorithm in a given operating scenario. We also investigate sufficient conditions for the stability of the SDTHE algorithm. The proposed SLHS model and SDTHE algorithm are illustrated to be useful in several applications. In Air Traffic Control (ATC), to facilitate implementations of new efficient operational concepts, accurate modeling and estimation of aircraft trajectories are needed. In ATC, an aircraft's trajectory can be divided into a number of flight modes. Furthermore, as the aircraft is required to follow a given flight plan or clearance, its flight mode transitions are dependent on its continuous state. However, the flight mode transitions are also stochastic due to navigation uncertainties or unknown pilot intents. Thus, we develop an aircraft dynamics model in ATC based on the SLHS. The SDTHE algorithm is then used in aircraft tracking applications to estimate the positions/velocities of aircraft and their flight modes accurately. Next, we develop an aircraft conformance monitoring algorithm to detect any deviations of aircraft trajectories in ATC that might compromise safety. In this application, the SLHS

  2. Methods to model and predict the ViewRay treatment deliveries to aid patient scheduling and treatment planning.

    Liu, Shi; Wu, Yu; Wooten, H Omar; Green, Olga; Archer, Brent; Li, Harold; Yang, Deshan

    2016-03-08

    A software tool is developed, given a new treatment plan, to predict the treatment delivery time for radiation therapy (RT) treatments of patients on the ViewRay magnetic resonance image-guided radiation therapy (MR-IGRT) delivery system. This tool is necessary for managing patient treatment scheduling in our clinic. The predicted treatment delivery time and the assessment of plan complexities could also be useful to aid treatment planning. A patient's total treatment delivery time, not including the time required for localization, is modeled as the sum of four components: 1) the treatment initialization time; 2) the total beam-on time; 3) the gantry rotation time; and 4) the multileaf collimator (MLC) motion time. Each of the four components is predicted separately. The total beam-on time can be calculated using both the planned beam-on time and the decay-corrected dose rate. To predict the remaining components, we retrospectively analyzed the patient treatment delivery record files. The initialization time is demonstrated to be random since it depends on the final gantry angle of the previous treatment. Based on modeling the relationships between the gantry rotation angles and the corresponding rotation time, linear regression is applied to predict the gantry rotation time. The MLC motion time is calculated using the leaf delay modeling method and the leaf motion speed. A quantitative analysis was performed to understand the correlation between the total treatment time and the plan complexity. The proposed algorithm is able to predict the ViewRay treatment delivery time with an average prediction error of 0.22 min (1.82%) and a maximal prediction error of 0.89 min (7.88%). The analysis has shown the correlation between the plan modulation (PM) factor and the total treatment delivery time, as well as the treatment delivery duty cycle. A possibility has been identified to significantly reduce the MLC motion time by optimizing the positions of closed MLC pairs. The accuracy of
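    A hedged sketch of the additive prediction described above: total delivery time as the sum of initialization, beam-on, gantry-rotation and MLC-motion components. The per-beam inputs, speeds and coefficients are invented placeholders, not the values fitted from the paper's treatment records.

```python
# Additive delivery-time prediction over a list of beams (illustrative coefficients).
def predict_delivery_time(beams, mean_init_s=30.0, dose_rate_factor=1.0,
                          gantry_speed_deg_s=6.0, leaf_speed_cm_s=2.0):
    total = mean_init_s                                    # initialization (random; use its mean)
    prev_angle = None
    for b in beams:                                        # each beam: dict of planned values
        total += b["beam_on_s"] / dose_rate_factor         # decay-corrected beam-on time
        if prev_angle is not None:
            total += abs(b["gantry_deg"] - prev_angle) / gantry_speed_deg_s
        prev_angle = b["gantry_deg"]
        total += b["max_leaf_travel_cm"] / leaf_speed_cm_s # MLC motion component
    return total

plan = [{"beam_on_s": 40.0, "gantry_deg": 0.0,   "max_leaf_travel_cm": 8.0},
        {"beam_on_s": 35.0, "gantry_deg": 120.0, "max_leaf_travel_cm": 6.0},
        {"beam_on_s": 50.0, "gantry_deg": 240.0, "max_leaf_travel_cm": 10.0}]
print(predict_delivery_time(plan) / 60.0, "min")
```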

  3. Identification of an Equivalent Linear Model for a Non-Linear Time-Variant RC-Structure

    Kirkegaard, Poul Henning; Andersen, P.; Brincker, Rune

    This paper considers estimation of the maximum softening for a RC-structure subjected to earthquake excitation. The so-called Maximum Softening damage indicator relates the global damage state of the RC-structure to the relative decrease of the fundamental eigenfrequency in an equivalent linear system. Identification techniques for the equivalent linear model are investigated and compared with ARMAX models used on a running window. The techniques are evaluated using simulated data generated by the non-linear finite element program SARCOF, modeling a 10-storey 3-bay concrete structure subjected to amplitude-modulated Gaussian white noise filtered through a Kanai-Tajimi filter.

  4. Modelling and Metaheuristic for Gantry Crane Scheduling and Storage Space Allocation Problem in Railway Container Terminals

    Ming Zeng

    2017-01-01

    Full Text Available The gantry crane scheduling and storage space allocation problem in the main container yard of a railway container terminal is studied. A mixed integer programming model is formulated which comprehensively considers the handling procedures, non-crossing constraints, the safety margin and traveling time of gantry cranes, and the storage modes in the main area. A metaheuristic named the backtracking search algorithm (BSA) is then improved to solve this intractable problem. A series of computational experiments are carried out to evaluate the performance of the proposed algorithm on randomly generated cases based on practical operating conditions. The results show that the proposed algorithm can obtain near-optimal solutions within a reasonable computation time.

  5. On the modeling of uplink inter-cell interference based on proportional fair scheduling

    Tabassum, Hina

    2012-10-03

    We derive a semi-analytical expression for the uplink inter-cell interference (ICI) assuming proportional fair scheduling (with a maximum normalized signal-to-noise ratio (SNR) criterion) deployed in the cellular network. The derived expression can be customized for different models of channel statistics that can capture path loss, shadowing, and fading. Firstly, we derive an expression for the distribution of the locations of the allocated user in a given cell. Then, we derive the distribution and moment generating function of the uplink ICI from one interfering cell. Finally, we determine the moment generating function of the cumulative uplink ICI from all interfering cells. The derived expression is utilized to evaluate important network performance metrics such as outage probability and fairness among users. The accuracy of the derived expressions is verified by comparing the obtained results to Monte Carlo simulations. © 2012 IEEE.
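    A Monte Carlo sketch of the setup described above: in each drop, the scheduled user in the serving cell is the one with the maximum normalized SNR, and the interference it creates at a neighbouring base station is accumulated. The geometry, path-loss exponent and fading model are simplifying assumptions for illustration, not the paper's semi-analytical derivation.

```python
# Empirical uplink ICI statistics under max-normalized-SNR (proportional fair) scheduling.
import numpy as np

rng = np.random.default_rng(7)
n_drops, n_users, R, alpha, d_bs = 10000, 20, 500.0, 3.5, 1000.0
ici = np.empty(n_drops)

for i in range(n_drops):
    # drop users uniformly in a disc of radius R around the serving base station
    r = R * np.sqrt(rng.random(n_users))
    theta = 2 * np.pi * rng.random(n_users)
    fading = rng.exponential(1.0, n_users)            # Rayleigh power fading
    snr = fading * r ** (-alpha)                      # path loss x fading (unit Tx power)
    norm_snr = snr / r ** (-alpha)                    # SNR normalized by its own mean
    k = np.argmax(norm_snr)                           # max normalized SNR scheduling
    # distance from the scheduled user to the interfered neighbouring base station
    d_int = np.hypot(d_bs - r[k] * np.cos(theta[k]), r[k] * np.sin(theta[k]))
    ici[i] = rng.exponential(1.0) * d_int ** (-alpha) # independent fading on that link

print(ici.mean(), np.quantile(ici, 0.95))
```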

  6. Mathematical Model and Algorithm for the Reefer Mechanic Scheduling Problem at Seaports

    Jiantong Zhang

    2017-01-01

    Full Text Available With the development of seaborne logistics, the international trade of goods transported in refrigerated containers is growing fast. Refrigerated containers, also known as reefers, are used in the transportation of temperature-sensitive cargo, such as perishable fruits. This trend brings new challenges to terminal managers, that is, how to efficiently arrange mechanics to plug and unplug power for the reefers (i.e., the tasks at the yards). This work investigates the reefer mechanic scheduling problem at container ports. To minimize the sum of the total tardiness of all tasks and the total working distance of all mechanics, we formulate a mathematical model. For the resolution of this problem, we propose a DE algorithm which is combined with efficient heuristics, local search strategies, and a parameter adaptation scheme. The proposed algorithm is tested and validated through numerical experiments. Computational results demonstrate the effectiveness and efficiency of the proposed algorithm.

  7. Scheduling Model for Renewable Energy Sources Integration in an Insular Power System

    Gerardo J. Osório

    2018-01-01

    Full Text Available Insular power systems represent an asset and an excellent starting point for the development and analysis of innovative tools and technologies. The integration of renewable energy resources that has taken place in several islands in the south of Europe, particularly in Portugal, has brought more uncertainty to production management. In this work, an innovative scheduling model is proposed, which considers the integration of wind and solar resources in an insular power system in Portugal, with a strong conventional generation basis. This study aims to show the benefits of increasing the integration of renewable energy resources in this insular power system, and the objectives are related to minimizing the time for which conventional generation is in operation, maximizing profits, reducing production costs, and consequently, reducing greenhouse gas emissions.

  8. On the modeling of uplink inter-cell interference based on proportional fair scheduling

    Tabassum, Hina; Yilmaz, Ferkan; Dawy, Zaher; Alouini, Mohamed-Slim

    2012-01-01

    We derive a semi-analytical expression for the uplink inter-cell interference (ICI) assuming proportional fair scheduling (with a maximum normalized signal-to-noise ratio (SNR) criterion) deployed in the cellular network. The derived expression can be customized for different models of channel statistics that can capture path loss, shadowing, and fading. Firstly, we derive an expression for the distribution of the locations of the allocated user in a given cell. Then, we derive the distribution and moment generating function of the uplink ICI from one interfering cell. Finally, we determine the moment generating function of the cumulative uplink ICI from all interfering cells. The derived expression is utilized to evaluate important network performance metrics such as outage probability and fairness among users. The accuracy of the derived expressions is verified by comparing the obtained results to Monte Carlo simulations. © 2012 IEEE.

  9. Second-order kinetic model for the sorption of cadmium onto tree fern: a comparison of linear and non-linear methods.

    Ho, Yuh-Shan

    2006-01-01

    A comparison was made of the linear least-squares method and a trial-and-error non-linear method of the widely used pseudo-second-order kinetic model for the sorption of cadmium onto ground-up tree fern. Four pseudo-second-order kinetic linear equations are discussed. Kinetic parameters obtained from the four kinetic linear equations using the linear method differed but they were the same when using the non-linear method. A type 1 pseudo-second-order linear kinetic model has the highest coefficient of determination. Results show that the non-linear method may be a better way to obtain the desired parameters.
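    A sketch comparing the type-1 linearization (t/q versus t) with a direct non-linear least-squares fit of the pseudo-second-order model q(t) = k·qe²·t/(1 + k·qe·t). The data points below are synthetic, generated from assumed qe and k values plus noise, not the tree-fern sorption data.

```python
# Linear (type-1) vs. non-linear fitting of the pseudo-second-order kinetic model.
import numpy as np
from scipy.optimize import curve_fit

def pso(t, qe, k):
    return k * qe ** 2 * t / (1.0 + k * qe * t)

rng = np.random.default_rng(8)
t = np.array([5, 10, 20, 30, 60, 90, 120, 180], dtype=float)          # contact time [min]
q = pso(t, qe=25.0, k=0.002) * (1 + 0.02 * rng.normal(size=t.size))   # sorbed amount [mg/g]

# type-1 linear form: t/q = 1/(k*qe^2) + t/qe  ->  slope = 1/qe, intercept = 1/(k*qe^2)
slope, intercept = np.polyfit(t, t / q, 1)
qe_lin, k_lin = 1.0 / slope, slope ** 2 / intercept   # back-transform the parameters

# direct non-linear fit of the same data
(qe_nl, k_nl), _ = curve_fit(pso, t, q, p0=[20.0, 0.001])

print("linear   :", qe_lin, k_lin)
print("nonlinear:", qe_nl, k_nl)
```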

  10. Online Semiparametric Identification of Lithium-Ion Batteries Using the Wavelet-Based Partially Linear Battery Model

    Caiping Zhang

    2013-05-01

    Full Text Available Battery model identification is very important for reliable battery management as well as for the battery system design process. The common problem in identifying battery models is how to determine the most appropriate mathematical model structure and parameterized coefficients based on the measured terminal voltage and current. This paper proposes a novel semiparametric approach using the wavelet-based partially linear battery model (PLBM) and a recursive penalized wavelet estimator for online battery model identification. Three main contributions are presented. First, the semiparametric PLBM is proposed to simulate the battery dynamics. Compared with conventional electrical models of a battery, the proposed PLBM is equipped with a semiparametric partially linear structure, which includes a parametric part (involving the linear equivalent-circuit parameters) and a nonparametric part (involving the open-circuit voltage (OCV)). Thus, even with little prior knowledge about the OCV, the PLBM can be identified using a semiparametric identification framework. Second, we model the nonparametric part of the PLBM using the truncated wavelet multiresolution analysis (MRA) expansion, which leads to a parsimonious model structure that is highly desirable for model identification; using this model, the PLBM could be represented in a linear-in-parameter manner. Finally, to exploit the sparsity of the wavelet MRA representation and allow for online implementation, a penalized wavelet estimator that uses a modified online cyclic coordinate descent algorithm is proposed to identify the PLBM in a recursive fashion. The simulation and experimental results demonstrate that the proposed PLBM with the corresponding identification algorithm can accurately simulate the dynamic behavior of a lithium-ion battery in the Federal Urban Driving Schedule tests.

  11. Investigation of various growth mechanisms of solid tumour growth within the linear-quadratic model for radiotherapy

    McAneney, H; O'Rourke, S F C

    2007-01-01

    The standard linear-quadratic survival model for radiotherapy is used to investigate different schedules of radiation treatment planning to study how these may be affected by different tumour repopulation kinetics between treatments. The laws for tumour cell repopulation include the logistic and Gompertz models, and this extends the work of Wheldon et al (1977 Br. J. Radiol. 50 681), which was concerned with the case of exponential re-growth between treatments. Here we also consider the restricted exponential model. This has been successfully used by Panetta and Adam (1995 Math. Comput. Modelling 22 67) in the case of chemotherapy treatment planning. Treatment schedules investigated include standard fractionation of daily treatments, weekday treatments, accelerated fractionation, optimized uniform schedules and variation of the dosage and α/β ratio, where α and β are radiobiological parameters for the tumour tissue concerned. Parameters for these treatment strategies are extracted from the literature on advanced head and neck cancer and prostate cancer, as well as radiosensitivity parameters. Standardized treatment protocols are also considered. Calculations based on the present analysis indicate that even with growth laws scaled to mimic initial growth, such that growth mechanisms are comparable, variation in survival fraction to orders of magnitude emerged. Calculations show that the logistic and exponential models yield similar results in tumour eradication. By comparison, the Gompertz model calculations indicate that tumours described by this law result in a significantly poorer prognosis for tumour eradication than either the exponential or logistic models. The present study also shows that the faster the tumour growth rate and the higher the repair capacity of the cell line, the greater the variation in outcome of the survival fraction. Gaps in treatment, planned or unplanned, also accentuate the differences of the survival fraction given alternative growth
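    A sketch of the calculation the study performs: track the tumour cell number through a fractionated schedule, applying linear-quadratic cell kill at each fraction and a chosen regrowth law (exponential, logistic or Gompertz) between fractions. The parameter values (α, β, growth rate, carrying capacity, schedule) are illustrative assumptions only.

```python
# Fractionated LQ cell kill with different repopulation laws between fractions.
import math

def simulate(n0, n_fractions, d, alpha, beta, dt_days, regrow):
    n = n0
    for _ in range(n_fractions):
        n *= math.exp(-alpha * d - beta * d ** 2)   # LQ survival for one fraction of dose d
        n = regrow(n, dt_days)                      # repopulation until the next fraction
    return n

K, lam = 1e10, 0.05                                  # carrying capacity, growth rate [1/day]
exponential = lambda n, t: n * math.exp(lam * t)
logistic    = lambda n, t: K / (1 + (K / n - 1) * math.exp(-lam * t))
gompertz    = lambda n, t: K * (n / K) ** math.exp(-lam * t)

for name, law in [("exponential", exponential), ("logistic", logistic), ("gompertz", gompertz)]:
    n_end = simulate(n0=1e9, n_fractions=30, d=2.0, alpha=0.3, beta=0.03,
                     dt_days=1.0, regrow=law)
    print(f"{name:12s} surviving cells ≈ {n_end:.3g}")
```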

  12. Comparison of linear, skewed-linear, and proportional hazard models for the analysis of lambing interval in Ripollesa ewes.

    Casellas, J; Bach, R

    2012-06-01

    Lambing interval is a relevant reproductive indicator for sheep populations under continuous mating systems, although there is a shortage of selection programs accounting for this trait in the sheep industry. Both the historical assumption of small genetic background and its unorthodox distribution pattern have limited its implementation as a breeding objective. In this manuscript, statistical performances of 3 alternative parametrizations [i.e., symmetric Gaussian mixed linear (GML) model, skew-Gaussian mixed linear (SGML) model, and piecewise Weibull proportional hazard (PWPH) model] have been compared to elucidate the preferred methodology to handle lambing interval data. More specifically, flock-by-flock analyses were performed on 31,986 lambing interval records (257.3 ± 0.2 d) from 6 purebred Ripollesa flocks. Model performances were compared in terms of deviance information criterion (DIC) and Bayes factor (BF). For all flocks, PWPH models were clearly preferred; they generated a reduction of 1,900 or more DIC units and provided BF estimates larger than 100 (i.e., PWPH models against linear models). These differences were reduced when comparing PWPH models with different number of change points for the baseline hazard function. In 4 flocks, only 2 change points were required to minimize the DIC, whereas 4 and 6 change points were needed for the 2 remaining flocks. These differences demonstrated a remarkable degree of heterogeneity across sheep flocks that must be properly accounted for in genetic evaluation models to avoid statistical biases and suboptimal genetic trends. Within this context, all 6 Ripollesa flocks revealed substantial genetic background for lambing interval with heritabilities ranging between 0.13 and 0.19. This study provides the first evidence of the suitability of PWPH models for lambing interval analysis, clearly discarding previous parametrizations focused on mixed linear models.

  13. Self-management model in the scheduling of successive appointments in rheumatology.

    Castro Corredor, David; Cuadra Díaz, José Luis; Mateos Rodríguez, Javier José; Anino Fernández, Joaquín; Mínguez Sánchez, María Dolores; de Lara Simón, Isabel María; Tébar, María Ángeles; Añó, Encarnación; Sanz, María Dolores; Ballester, María Nieves

    2018-01-08

    The rheumatology service of Ciudad Real Hospital, located in an autonomous community of that same name that is nearly in the center of Spain, implemented a self-management model of successive appointments more than 10 years ago. Since then, the physicians of the department schedule follow-up visits for their patients depending on the disease, its course and ancillary tests. The purpose of this study is to evaluate and compare the self-management model for successive appointments in the rheumatology service of Ciudad Real Hospital versus the model of external appointment management implemented in 8 of the hospital's 15 medical services. A comparative and multivariate analysis was performed to identify variables with statistically significant differences, in terms of activity and/or performance indicators and quality perceived by users. The comparison involved the self-management model for successive appointments employed in the rheumatology service of Ciudad Real Hospital and the model for external appointment management used in 8 hospital medical services between January 1 and May 31, 2016. In a database with more than 100,000 records of appointments involving the set of services included in the study, the mean waiting time and the numbers of non-appearances and rescheduling of follow-up visits in the rheumatology department were significantly lower than in the other services. The number of individuals treated in outpatient rheumatology services was 7,768, and a total of 280 patients were surveyed (response rate 63.21%). They showed great overall satisfaction, and the incidence rate of claims was low. Our results show that the self-management model of scheduling appointments has better results in terms of activity indicators and in quality perceived by users, despite the intense activity. Thus, this study could be fundamental for decision making in the management of health care organizations. Copyright © 2017 Elsevier España, S.L.U. and Sociedad Española de

  14. Behavioral and macro modeling using piecewise linear techniques

    Kruiskamp, M.W.; Leenaerts, D.M.W.; Antao, B.

    1998-01-01

    In this paper we will demonstrate that most digital, analog as well as behavioral components can be described using piecewise linear approximations of their real behavior. This leads to several advantages from the viewpoint of simulation. We will also give a method to store the resulting linear
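    A minimal illustration of the piecewise-linear modeling idea described above: approximate a device's nonlinear characteristic (here a diode's exponential I-V curve, chosen as an assumed example) by a few linear segments and evaluate the surrogate by interpolation. Breakpoints and device parameters are invented for demonstration.

```python
# Piecewise-linear surrogate of an exponential diode I-V characteristic.
import numpy as np

v_bp = np.array([0.0, 0.4, 0.6, 0.7, 0.8])                 # breakpoint voltages [V]
i_bp = 1e-12 * (np.exp(v_bp / 0.025) - 1.0)                # "true" diode currents [A]

def diode_pwl(v):
    """Evaluate the PWL model (linear extrapolation above the last breakpoint)."""
    if v >= v_bp[-1]:
        slope = (i_bp[-1] - i_bp[-2]) / (v_bp[-1] - v_bp[-2])
        return i_bp[-1] + slope * (v - v_bp[-1])
    return np.interp(v, v_bp, i_bp)

for v in (0.3, 0.65, 0.75, 0.85):
    exact = 1e-12 * (np.exp(v / 0.025) - 1.0)
    print(f"V = {v:.2f} V   PWL = {diode_pwl(v):.3e} A   exact = {exact:.3e} A")
```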

  15. Simultaneous Balancing and Model Reduction of Switched Linear Systems

    Monshizadeh, Nima; Trentelman, Hendrikus; Camlibel, M.K.

    2011-01-01

    In this paper, first, balanced truncation of linear systems is revisited. Then, simultaneous balancing of multiple linear systems is investigated. Necessary and sufficient conditions are introduced to identify the case where simultaneous balancing is possible. The validity of these conditions is not

  16. Modeling duration choice in space–time multi-state supernetworks for individual activity-travel scheduling

    Liao, F.

    2016-01-01

    Multi-state supernetworks have been advanced recently for modeling individual activity-travel scheduling decisions. The main advantage is that multi-dimensional choice facets are modeled simultaneously within an integral framework, supporting systematic assessments of a large spectrum of policies

  17. Genomic prediction based on data from three layer lines using non-linear regression models.

    Huang, Heyun; Windig, Jack J; Vereijken, Addie; Calus, Mario P L

    2014-11-06

    Most studies on genomic prediction with reference populations that include multiple lines or breeds have used linear models. Data heterogeneity due to using multiple populations may conflict with model assumptions used in linear regression methods. In an attempt to alleviate potential discrepancies between assumptions of linear models and multi-population data, two types of alternative models were used: (1) a multi-trait genomic best linear unbiased prediction (GBLUP) model that modelled trait by line combinations as separate but correlated traits and (2) non-linear models based on kernel learning. These models were compared to conventional linear models for genomic prediction for two lines of brown layer hens (B1 and B2) and one line of white hens (W1). The three lines each had 1004 to 1023 training and 238 to 240 validation animals. Prediction accuracy was evaluated by estimating the correlation between observed phenotypes and predicted breeding values. When the training dataset included only data from the evaluated line, non-linear models yielded at best a similar accuracy as linear models. In some cases, when adding a distantly related line, the linear models showed a slight decrease in performance, while non-linear models generally showed no change in accuracy. When only information from a closely related line was used for training, linear models and non-linear radial basis function (RBF) kernel models performed similarly. The multi-trait GBLUP model took advantage of the estimated genetic correlations between the lines. Combining linear and non-linear models improved the accuracy of multi-line genomic prediction. Linear models and non-linear RBF models performed very similarly for genomic prediction, despite the expectation that non-linear models could deal better with the heterogeneous multi-population data. This heterogeneity of the data can be overcome by modelling trait by line combinations as separate but correlated traits, which avoids the occasional

  18. Sampled-data models for linear and nonlinear systems

    Yuz, Juan I

    2014-01-01

    Sampled-data Models for Linear and Nonlinear Systems provides a fresh new look at a subject with which many researchers may think themselves familiar. Rather than emphasising the differences between sampled-data and continuous-time systems, the authors proceed from the premise that, with modern sampling rates being as high as they are, it is becoming more appropriate to emphasise connections and similarities. The text is driven by three motives: ·      the ubiquity of computers in modern control and signal-processing equipment means that sampling of systems that really evolve continuously is unavoidable; ·      although superficially straightforward, sampling can easily produce erroneous results when not treated properly; and ·      the need for a thorough understanding of many aspects of sampling among researchers and engineers dealing with applications to which they are central. The authors tackle many misconceptions which, although appearing reasonable at first sight, are in fact either p...

  19. Dynamics of edge currents in a linearly quenched Haldane model

    Mardanya, Sougata; Bhattacharya, Utso; Agarwal, Amit; Dutta, Amit

    2018-03-01

    In a finite-time quantum quench of the Haldane model, the Chern number determining the topology of the bulk remains invariant, as long as the dynamics is unitary. Nonetheless, the corresponding boundary attribute, the edge current, displays interesting dynamics. For the case of sudden and adiabatic quenches the postquench edge current is solely determined by the initial and the final Hamiltonians, respectively. However, for a finite-time (τ) linear quench in a Haldane nanoribbon, we show that the evolution of the edge current from the sudden to the adiabatic limit is not monotonic in τ and has a turning point at a characteristic time scale τ = τ0. For small τ, the excited states lead to a huge unidirectional surge in the edge current of both edges. On the other hand, in the limit of large τ, the edge current saturates to its expected equilibrium ground-state value. This competition between the two limits leads to the observed nonmonotonic behavior. Interestingly, τ0 seems to depend only on the Semenoff mass and the Haldane flux. A similar dynamics for the edge current is also expected in other systems with topological phases.

  20. Parameter estimation and hypothesis testing in linear models

    Koch, Karl-Rudolf

    1999-01-01

    The necessity to publish the second edition of this book arose when its third German edition had just been published. This second English edition is therefore a translation of the third German edition of Parameter Estimation and Hypothesis Testing in Linear Models, published in 1997. It differs from the first English edition by the addition of a new chapter on robust estimation of parameters and the deletion of the section on discriminant analysis, which has been more completely dealt with by the author in the book Bayesian Inference with Geodetic Applications, Springer-Verlag, Berlin Heidelberg New York, 1990. Smaller additions and deletions have been incorporated, to improve the text, to point out new developments or to eliminate errors which became apparent. A few examples have been also added. I thank Springer-Verlag for publishing this second edition and for the assistance in checking the translation, although the responsibility of errors remains with the author. I also want to express my thanks...

  1. Linear multivariate evaluation models for spatial perception of soundscape.

    Deng, Zhiyong; Kang, Jian; Wang, Daiwei; Liu, Aili; Kang, Joe Zhengyu

    2015-11-01

    Soundscape is a sound environment that emphasizes the awareness of auditory perception and social or cultural understandings. Spatial perception is significant to soundscape, but previous studies on the auditory spatial perception of the soundscape environment have been limited. Based on 21 native binaural-recorded soundscape samples and a set of auditory experiments for subjective spatial perception (SSP), an analysis of semantic parameters, the inter-aural cross-correlation coefficient (IACC), the A-weighted equivalent sound pressure level (Leq), dynamic (D), and SSP is introduced to verify the independent effect of each parameter and to re-determine some of their possible relationships. The results show that the more noisiness the audience perceived, the worse their spatial awareness, while closer and more directional sound source image variations, dynamics, and numbers of sound sources in the soundscape improve spatial awareness. Thus, the sensations of roughness, sound intensity and transient dynamic, and the values of Leq and IACC, have a suitable range for better spatial perception. Better spatial awareness also seems to promote the preference of the audience slightly. Finally, setting SSPs as functions of the semantic parameters and of Leq-D-IACC, two linear multivariate evaluation models of subjective spatial perception are proposed.
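
    As an illustration of what such a linear multivariate evaluation model looks like in practice, here is a minimal sketch with made-up sample values: SSP is regressed on Leq, D and IACC by ordinary least squares; the numbers and variable ranges are placeholders, not the study's data or fitted coefficients.

```python
# Minimal sketch (hypothetical data): fit a linear multivariate evaluation model
# of subjective spatial perception (SSP) from Leq, dynamic (D) and IACC.
import numpy as np

# One row per soundscape sample: [Leq (dBA), D, IACC]; SSP is a subjective score.
X_raw = np.array([
    [55.0, 6.0, 0.45],
    [62.0, 9.0, 0.30],
    [48.0, 4.0, 0.60],
    [70.0, 12.0, 0.20],
    [58.0, 7.0, 0.50],
    [65.0, 10.0, 0.35],
])
ssp = np.array([3.8, 3.1, 4.2, 2.4, 3.6, 2.9])

# Add an intercept column and solve the least-squares problem.
X = np.column_stack([np.ones(len(X_raw)), X_raw])
coef, *_ = np.linalg.lstsq(X, ssp, rcond=None)
print("intercept, b_Leq, b_D, b_IACC =", np.round(coef, 3))

# Predict SSP for a new sample (Leq = 60 dBA, D = 8, IACC = 0.4).
new = np.array([1.0, 60.0, 8.0, 0.4])
print("predicted SSP:", round(float(new @ coef), 2))
```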

  2. Form factors in the projected linear chiral sigma model

    Alberto, P.; Coimbra Univ.; Bochum Univ.; Ruiz Arriola, E.; Fiolhais, M.; Urbano, J.N.; Coimbra Univ.; Goeke, K.; Gruemmer, F.; Bochum Univ.

    1990-01-01

    Several nucleon form factors are computed within the framework of the linear chiral soliton model. To this end variational means and projection techniques applied to generalized hedgehog quark-boson Fock states are used. In this procedure the Goldberger-Treiman relation and a virial theorem for the pion-nucleon form factor are well fulfilled, demonstrating the consistency of the treatment. Both proton and neutron charge form factors are correctly reproduced, as well as the proton magnetic one. The shapes of the neutron magnetic and of the axial form factors are good, but their absolute values at the origin are too large. The slopes of all the form factors at zero momentum transfer are in good agreement with the experimental data. The pion-nucleon form factor exhibits to a great extent a monopole shape with a cut-off mass of Λ=690 MeV. Electromagnetic form factors for the vertex γNΔ and the nucleon spin distribution are also evaluated and discussed. (orig.)
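
    For reference, a standard textbook form of the Goldberger-Treiman relation mentioned above is given below; sign and normalization conventions may differ from those used in the model.

```latex
% Standard form of the Goldberger-Treiman relation (conventions may differ from the paper's):
%   g_{\pi NN} : pion-nucleon coupling,  f_\pi : pion decay constant,
%   g_A        : axial-vector coupling,  M_N   : nucleon mass.
\begin{equation}
  g_{\pi NN}\, f_{\pi} \;=\; g_{A}\, M_{N}
\end{equation}
```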

  3. Modeling Optimal Scheduling for Pumping System to Minimize Operation Cost and Enhance Operation Reliability

    Yin Luo

    2012-01-01

    Traditional pump scheduling models neglect operation reliability, which is directly related to the unscheduled maintenance cost and the wear cost incurred during operation. On the assumption that vibration is directly related to operation reliability and to the degree of wear, operation reliability can be expressed as a normalization of the vibration level. The behaviour of vibration with the operating point was studied, and it was concluded that the idealized flow-versus-vibration plot has a distinct bathtub shape: there is a narrow sweet spot (80 to 100 percent of BEP) in which vibration levels are low, and, away from resonance, vibration also scales approximately with the square of the rotation speed. Operation reliability can therefore be modelled as a function of the capacity and rotation speed of the pump, and this function is added to the traditional model to form the new one. Compared with the traditional method, the results show that the new model corrects the schedules produced by the traditional one and makes the pump operate at low vibration, so that operation reliability increases and maintenance cost decreases.
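
    A minimal sketch of the reliability term described above is given below; the bathtub-shaped vibration curve, the speed-squared scaling and all constants are illustrative assumptions, not the fitted relations from the study.

```python
# Illustrative sketch (assumed functional form, not the paper's fitted model):
# vibration as a bathtub-shaped function of flow relative to BEP, scaled with
# the square of rotation speed, then normalized into an operation-reliability term.
import numpy as np

def vibration_level(q_frac_bep, speed_frac, v_min=1.0, k=8.0):
    """Vibration (arbitrary units) vs. flow as a fraction of BEP flow.

    Assumed bathtub shape: minimum inside the 0.8-1.0 BEP 'sweet spot',
    growing quadratically outside it, and scaling with speed squared.
    """
    center = 0.9                      # middle of the 80-100% BEP sweet spot
    base = v_min + k * (q_frac_bep - center) ** 2
    return base * speed_frac ** 2     # similarity-law style speed dependence

def operation_reliability(q_frac_bep, speed_frac, v_worst=10.0):
    """Normalize vibration into a 0-1 reliability score (1 = low vibration)."""
    v = vibration_level(q_frac_bep, speed_frac)
    return float(np.clip(1.0 - v / v_worst, 0.0, 1.0))

for q in (0.5, 0.8, 0.9, 1.0, 1.2):
    print(f"Q/BEP={q:.1f}, n/n0=1.0 -> reliability={operation_reliability(q, 1.0):.2f}")
```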

  4. Prediction of minimum temperatures in an alpine region by linear and non-linear post-processing of meteorological models

    R. Barbiero

    2007-05-01

    Model Output Statistics (MOS) refers to a method of post-processing the direct outputs of numerical weather prediction (NWP) models in order to reduce the biases introduced by a coarse horizontal resolution. This technique is especially useful in orographically complex regions, where large differences can be found between the NWP elevation model and the true orography. This study carries out a comparison of linear and non-linear MOS methods, aimed at the prediction of minimum temperatures in a fruit-growing region of the Italian Alps, based on the output of two different NWPs (ECMWF T511–L60 and LAMI-3). Temperature, of course, is a particularly important NWP output; among other roles it drives the local frost forecast, which is of great interest to agriculture. The mechanisms of cold air drainage, a distinctive aspect of mountain environments, are often unsatisfactorily captured by global circulation models. The simplest post-processing technique applied in this work was a correction for the mean bias, assessed at individual model grid points. We also implemented a multivariate linear regression on the output at the grid points surrounding the target area, and two non-linear models based on machine learning techniques: Neural Networks and Random Forest. We compare the performance of all these techniques on four different NWP data sets. Downscaling the temperatures clearly improved the temperature forecasts with respect to the raw NWP output, and also with respect to the basic mean bias correction. Multivariate methods generally yielded better results, but the advantage of using non-linear algorithms was small if not negligible. RF, the best performing method, was implemented on ECMWF prognostic output at 06:00 UTC over the 9 grid points surrounding the target area. Mean absolute errors in the prediction of 2 m temperature at 06:00 UTC were approximately 1.2°C, close to the natural variability inside the area itself.
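
    The following is a small sketch of two of the MOS variants described above, a per-grid-point mean bias correction and a Random Forest trained on the 9 surrounding grid points, using synthetic data and illustrative hyperparameters rather than the ECMWF/LAMI output used in the study.

```python
# Minimal sketch (synthetic data, illustrative hyperparameters): two MOS variants --
# a per-grid-point mean bias correction and a Random Forest trained on the 9 NWP
# grid points surrounding the target station, compared by mean absolute error.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

n_days = 400
# NWP 06:00 UTC 2 m temperature at 9 surrounding grid points (degrees C).
nwp_grid = rng.normal(5.0, 6.0, size=(n_days, 9))
# Observed minimum temperature: related non-linearly to the grid, plus noise.
t_obs = 0.6 * nwp_grid[:, 4] + 0.2 * nwp_grid.min(axis=1) - 2.0 + rng.normal(0, 1.2, n_days)

train, test = slice(0, 300), slice(300, None)

# (1) Mean bias correction using the central grid point only.
bias = np.mean(nwp_grid[train, 4] - t_obs[train])
pred_bias = nwp_grid[test, 4] - bias

# (2) Random Forest on all 9 surrounding grid points.
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(nwp_grid[train], t_obs[train])
pred_rf = rf.predict(nwp_grid[test])

mae = lambda p: np.mean(np.abs(p - t_obs[test]))
print(f"MAE raw NWP:        {mae(nwp_grid[test, 4]):.2f} C")
print(f"MAE bias-corrected: {mae(pred_bias):.2f} C")
print(f"MAE Random Forest:  {mae(pred_rf):.2f} C")
```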

  5. Modelling and Inverse-Modelling: Experiences with O.D.E. Linear Systems in Engineering Courses

    Martinez-Luaces, Victor

    2009-01-01

    In engineering careers courses, differential equations are widely used to solve problems concerned with modelling. In particular, ordinary differential equations (O.D.E.) linear systems appear regularly in Chemical Engineering, Food Technology Engineering and Environmental Engineering courses, due to the usefulness in modelling chemical kinetics,…

  6. An improved robust model predictive control for linear parameter-varying input-output models

    Abbas, H.S.; Hanema, J.; Tóth, R.; Mohammadpour, J.; Meskin, N.

    2018-01-01

    This paper describes a new robust model predictive control (MPC) scheme to control the discrete-time linear parameter-varying input-output models subject to input and output constraints. Closed-loop asymptotic stability is guaranteed by including a quadratic terminal cost and an ellipsoidal terminal

  7. A non-linear state space approach to model groundwater fluctuations

    Berendrecht, W.L.; Heemink, A.W.; Geer, F.C. van; Gehrels, J.C.

    2006-01-01

    A non-linear state space model is developed for describing groundwater fluctuations. Non-linearity is introduced by modeling the (unobserved) degree of water saturation of the root zone. The non-linear relations are based on physical concepts describing the dependence of both the actual

  8. Half-trek criterion for generic identifiability of linear structural equation models

    Foygel, R.; Draisma, J.; Drton, M.

    2012-01-01

    A linear structural equation model relates random variables of interest and corresponding Gaussian noise terms via a linear equation system. Each such model can be represented by a mixed graph in which directed edges encode the linear equations, and bidirected edges indicate possible correlations

  9. Half-trek criterion for generic identifiability of linear structural equation models

    Foygel, R.; Draisma, J.; Drton, M.

    2011-01-01

    A linear structural equation model relates random variables of interest and corresponding Gaussian noise terms via a linear equation system. Each such model can be represented by a mixed graph in which directed edges encode the linear equations, and bidirected edges indicate possible correlations

  10. On-line validation of linear process models using generalized likelihood ratios

    Tylee, J.L.

    1981-12-01

    A real-time method for testing the validity of linear models of nonlinear processes is described and evaluated. Using generalized likelihood ratios, the model dynamics are continually monitored to see if the process has moved far enough away from the nominal linear model operating point to justify generation of a new linear model. The method is demonstrated using a seventh-order model of a natural circulation steam generator.
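
    A much-simplified sketch of the generalized likelihood ratio idea is shown below: residuals of the nominal linear model are compared with residuals of a model re-identified from a window of recent data, and a large log-likelihood ratio signals that a new linear model is warranted. The scalar first-order structure, window length and data are assumptions, not the seventh-order steam generator model of the paper.

```python
# Simplified sketch of a generalized likelihood ratio (GLR) check for linear model
# validity: compare the fit of the nominal linear model against a model
# re-identified from a sliding window of recent data.
import numpy as np

def fit_arx1(u, y):
    """Least-squares fit of y[k] = a*y[k-1] + b*u[k-1]."""
    X = np.column_stack([y[:-1], u[:-1]])
    theta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    return theta

def glr_statistic(u, y, theta_nominal):
    """Log-likelihood ratio (Gaussian residuals) between nominal and refit model."""
    X = np.column_stack([y[:-1], u[:-1]])
    res_nom = y[1:] - X @ theta_nominal
    res_new = y[1:] - X @ fit_arx1(u, y)
    n = len(res_nom)
    # 2*log(Lambda) = n * log(var_nominal / var_refit) for Gaussian residuals.
    return n * np.log(np.var(res_nom) / np.var(res_new))

rng = np.random.default_rng(2)
u = rng.normal(size=300)
# The true process drifts away from the nominal operating point halfway through.
y = np.zeros(300)
for k in range(1, 300):
    a = 0.8 if k < 150 else 0.95
    y[k] = a * y[k - 1] + 0.5 * u[k - 1] + 0.05 * rng.normal()

theta_nominal = fit_arx1(u[:150], y[:150])
for start in (50, 200):
    window = slice(start, start + 100)
    stat = glr_statistic(u[window], y[window], theta_nominal)
    print(f"window starting at {start}: GLR statistic = {stat:.1f}")
```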

  11. Modelling subject-specific childhood growth using linear mixed-effect models with cubic regression splines.

    Grajeda, Laura M; Ivanescu, Andrada; Saito, Mayuko; Crainiceanu, Ciprian; Jaganath, Devan; Gilman, Robert H; Crabtree, Jean E; Kelleher, Dermott; Cabrera, Lilia; Cama, Vitaliano; Checkley, William

    2016-01-01

    Childhood growth is a cornerstone of pediatric research. Statistical models need to consider individual trajectories to adequately describe growth outcomes. Specifically, well-defined longitudinal models are essential to characterize both population and subject-specific growth. Linear mixed-effect models with cubic regression splines can account for the nonlinearity of growth curves and provide reasonable estimators of population and subject-specific growth, velocity and acceleration. We provide a stepwise approach that builds from simple to complex models, and account for the intrinsic complexity of the data. We start with standard cubic splines regression models and build up to a model that includes subject-specific random intercepts and slopes and residual autocorrelation. We then compared cubic regression splines vis-à-vis linear piecewise splines, and with varying number of knots and positions. Statistical code is provided to ensure reproducibility and improve dissemination of methods. Models are applied to longitudinal height measurements in a cohort of 215 Peruvian children followed from birth until their fourth year of life. Unexplained variability, as measured by the variance of the regression model, was reduced from 7.34 when using ordinary least squares to 0.81 (p linear mixed-effect models with random slopes and a first order continuous autoregressive error term. There was substantial heterogeneity in both the intercept (p modeled with a first order continuous autoregressive error term as evidenced by the variogram of the residuals and by a lack of association among residuals. The final model provides a parametric linear regression equation for both estimation and prediction of population- and individual-level growth in height. We show that cubic regression splines are superior to linear regression splines for the case of a small number of knots in both estimation and prediction with the full linear mixed effect model (AIC 19,352 vs. 19

  12. Simultaneous Balancing and Model Reduction of Switched Linear Systems

    Monshizadeh, Nima; Trentelman, Hendrikus; Camlibel, M.K.

    2011-01-01

    In this paper, first, balanced truncation of linear systems is revisited. Then, simultaneous balancing of multiple linear systems is investigated. Necessary and sufficient conditions are introduced to identify the case where simultaneous balancing is possible. The validity of these conditions is not limited to a certain type of balancing, and they are applicable for different types of balancing corresponding to different equations, like Lyapunov or Riccati equations. The results obtained are ...

  13. Developing ontological model of computational linear algebra - preliminary considerations

    Wasielewska, K.; Ganzha, M.; Paprzycki, M.; Lirkov, I.

    2013-10-01

    The aim of this paper is to propose a method for application of ontologically represented domain knowledge to support Grid users. The work is presented in the context provided by the Agents in Grid system, which aims at development of an agent-semantic infrastructure for efficient resource management in the Grid. Decision support within the system should provide functionality beyond the existing Grid middleware, specifically, help the user to choose optimal algorithm and/or resource to solve a problem from a given domain. The system assists the user in at least two situations. First, for users without in-depth knowledge about the domain, it should help them to select the method and the resource that (together) would best fit the problem to be solved (and match the available resources). Second, if the user explicitly indicates the method and the resource configuration, it should "verify" if her choice is consistent with the expert recommendations (encapsulated in the knowledge base). Furthermore, one of the goals is to simplify the use of the selected resource to execute the job; i.e., provide a user-friendly method of submitting jobs, without required technical knowledge about the Grid middleware. To achieve the mentioned goals, an adaptable method of expert knowledge representation for the decision support system has to be implemented. The selected approach is to utilize ontologies and semantic data processing, supported by multicriterial decision making. As a starting point, an area of computational linear algebra was selected to be modeled, however, the paper presents a general approach that shall be easily extendable to other domains.

  14. Symmetry conservation in the linear chiral soliton model

    Goeke, K.

    1988-01-01

    The linear chiral soliton model with quark fields and elementary pion and sigma fields is solved in order to describe static properties of the nucleon and the delta resonance. To this end a Fock state of the system is constructed consisting of three valence quarks in a first orbit with a generalized hedgehog spin-flavour configuration. Coherent states are used to provide a quantum description for the mesonic parts of the total wave function. The corresponding classical pion field also exhibits a generalized hedgehog structure. In a pure mean field approximation the variation of the total energy results in the ordinary hedgehog form. In a quantized approach the generalized hedgehog baryon is projected onto states with good spin and isospin, and noticeable deviations from the simple hedgehog form appear if the relevant degrees of freedom of the wave function are varied after the projection. Various nucleon properties are calculated. These include proton and neutron charge radii, and the magnetic moment of the proton, for which good agreement with experiment is obtained. The absolute value of the neutron magnetic moment comes out too large, as do the axial vector coupling constant and the pion-nucleon-nucleon coupling constant. For the generalized hedgehog the Goldberger-Treiman relation and a corresponding virial theorem are fulfilled. Variation of the quark-meson coupling parameter g and the sigma mass m_σ shows that g_A is always at least 40% too large compared to experiment. Hence it is concluded that either the inclusion of the polarization of the Dirac sea and/or of further mesons, possibly of vector character, or the consideration of intrinsic deformation is necessary. The concepts and results of the projections are compared with the semiclassical collective quantization method. 6 tabs., 14 figs., 43 refs

  15. Dosage and dose schedule screening of drug combinations in agent-based models reveals hidden synergies

    Lisa Corina Barros de Andrade e Sousa

    2016-01-01

    The fungus Candida albicans is the most common causative agent of human fungal infections, and better drugs or drug combination strategies are urgently needed. Here, we present an agent-based model of the interplay of C. albicans with the host immune system and with the microflora of the host. We took into account the morphological change of C. albicans from the yeast to the hyphal form and its dynamics during infection. The model allowed us to follow the dynamics of fungal growth and morphology, of the immune cells and of the microflora in different perturbing situations. We specifically focused on the consequences of microflora reduction following antibiotic treatment. Using the agent-based model, different drug types were tested for their effectiveness, namely drugs that inhibit cell division and drugs that constrain the yeast-to-hyphae transition. Applied individually, the division drug successfully decreased hyphae, while the transition drug led to a burst in hyphae after the end of the treatment. To evaluate the effect of different drug combinations, doses, and schedules, we introduced a measure for the return to a healthy state, the infection score. Using this measure, we found that adding a transition drug to a division-drug treatment can improve treatment reliability while minimizing treatment duration and drug dosage. This is a theoretical study: although the model has not been calibrated to quantitative experimental data, the technique of computationally identifying synergistic treatment combinations in an agent-based model exemplifies the importance of computational techniques in translational research.
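
    As a toy illustration of the drug-combination logic and the infection score, the sketch below uses a population-level caricature (not the paper's agent-based model); all rates, drug effects, schedules and the score definition are assumptions chosen only to reproduce the qualitative behaviour described above.

```python
# Toy population-level sketch (not the paper's agent-based model): yeast/hyphae
# dynamics under a division-inhibiting drug and a yeast-to-hyphae transition-
# inhibiting drug. All rates, schedules and the 'infection score' are assumptions.
import numpy as np

def simulate(division_drug_days, transition_drug_days, days=60, dt=0.1):
    yeast, hyphae = 100.0, 10.0
    history = []
    for step in range(int(days / dt)):
        t = step * dt
        division_rate = 0.30 * (0.2 if t < division_drug_days else 1.0)
        transition_rate = 0.05 * (0.2 if t < transition_drug_days else 1.0)
        clearance = 0.25                      # immune/microflora clearance of yeast
        d_yeast = (division_rate - transition_rate - clearance) * yeast
        d_hyphae = transition_rate * yeast - 0.10 * hyphae
        yeast = max(yeast + dt * d_yeast, 0.0)
        hyphae = max(hyphae + dt * d_hyphae, 0.0)
        history.append((t, hyphae))
    # Infection score: mean hyphae burden after all treatment has ended (lower = better).
    t_end = max(division_drug_days, transition_drug_days)
    post = [h for t, h in history if t >= t_end]
    return np.mean(post)

print("division drug only  :", round(simulate(14, 0), 2))
print("transition drug only:", round(simulate(0, 14), 2))
print("combination         :", round(simulate(14, 14), 2))
```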

  16. Log-normal frailty models fitted as Poisson generalized linear mixed models.

    Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver

    2016-12-01

    The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known since decades. As shown in recent studies, this equivalence carries over to clustered survival data: A frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
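
    A minimal sketch of the data "explosion" and the Poisson-with-offset fit is given below, using Python and statsmodels for the fixed-effects part only; the log-normal frailty would enter as a cluster-level random intercept in a generalized linear mixed model (handled in the paper via the %PCFrailty SAS macro), which this sketch omits. The simulated data, number of pieces and cut points are assumptions.

```python
# Minimal sketch of the piecewise-exponential / Poisson trick from the abstract:
# 'explode' each subject's follow-up into one row per baseline-hazard piece and
# fit a Poisson GLM with log(exposure) as offset. The log-normal frailty (a
# cluster-level random intercept) is omitted in this fixed-effects-only sketch.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 500
x = rng.binomial(1, 0.5, n)                          # a binary covariate
time = rng.exponential(scale=np.exp(-0.5 * x), size=n)
event = (time < 2.0).astype(int)                     # administrative censoring at t = 2
time = np.minimum(time, 2.0)

cuts = np.array([0.0, 0.5, 1.0, 1.5, 2.0])           # 4 pieces for the baseline hazard

rows = []
for i in range(n):
    for j in range(len(cuts) - 1):
        start, stop = cuts[j], cuts[j + 1]
        if time[i] <= start:
            break
        exposure = min(time[i], stop) - start
        died = int(event[i] == 1 and start < time[i] <= stop)
        rows.append({"piece": j, "x": x[i], "exposure": exposure, "died": died})
long = pd.DataFrame(rows)

# Piece-specific intercepts plus the covariate effect; offset = log(exposure).
X = pd.get_dummies(long["piece"], prefix="piece", dtype=float)
X["x"] = long["x"].astype(float)
fit = sm.GLM(long["died"], X, family=sm.families.Poisson(),
             offset=np.log(long["exposure"])).fit()
print(fit.params)   # the x coefficient approximates the log hazard ratio (about 0.5 here)
```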

  17. A Polytime Algorithm Based on a Primal LP Model for the Scheduling Problem 1 | pmtn; p(j) = 2; r(j) | Σ w(j)C(j)

    Bouma, Harmen W.; Goldengorin, Boris; Lagakos, S; Perlovsky, L; Jha, M; Covaci, B; Zaharim, A; Mastorakis, N

    2009-01-01

    In this paper a Boolean Linear Programming (BLP) model is presented for the single machine scheduling problem 1 | pmtn; p(j) = 2; r(j) | Σ w(j)C(j). The problem is a special case of the open problem 1 | pmtn; p(j) = p; r(j) | Σ w(j)C(j). We show that
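
    For illustration, a generic time-indexed Boolean LP for this problem class can be written as below (using the PuLP package and made-up job data); this is not necessarily the primal model proposed in the paper.

```python
# Illustrative time-indexed Boolean LP for 1 | pmtn; p(j)=2; r(j) | sum w(j)C(j).
# Generic formulation of the problem class, not necessarily the paper's primal model.
# Requires the PuLP package (a CBC solver is bundled with it).
import pulp

jobs = {  # job: (release time r_j, weight w_j); all processing times p_j = 2
    "A": (0, 3),
    "B": (1, 5),
    "C": (2, 1),
}
p = 2
horizon = max(r for r, _ in jobs.values()) + p * len(jobs)  # enough unit time slots
slots = list(range(horizon))

prob = pulp.LpProblem("weighted_completion_times", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (list(jobs), slots), cat="Binary")  # job j occupies slot t
C = pulp.LpVariable.dicts("C", list(jobs), lowBound=0)             # completion times

prob += pulp.lpSum(w * C[j] for j, (_, w) in jobs.items())         # objective: sum w_j C_j

for j, (r, _) in jobs.items():
    prob += pulp.lpSum(x[j][t] for t in slots) == p                # p_j = 2 unit pieces (preemptable)
    for t in slots:
        if t < r:
            prob += x[j][t] == 0                                   # respect release time
        prob += C[j] >= (t + 1) * x[j][t]                          # C_j covers the last used slot
for t in slots:
    prob += pulp.lpSum(x[j][t] for j in jobs) <= 1                 # single machine

prob.solve(pulp.PULP_CBC_CMD(msg=False))

for j in jobs:
    used = sorted(t for t in slots if pulp.value(x[j][t]) > 0.5)
    print(j, "runs in slots", used, "-> C =", pulp.value(C[j]))
print("objective =", pulp.value(prob.objective))
```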

  18. OL-DEC-MDP Model for Multiagent Online Scheduling with a Time-Dependent Probability of Success

    Cheng Zhu

    2014-01-01

    Focusing on the on-line multiagent scheduling problem, this paper considers a time-dependent probability of success and processing duration and proposes an OL-DEC-MDP (opportunity loss-decentralized Markov decision process) model that includes opportunity loss in the scheduling decision to improve overall performance. The success probability of job processing, as well as the processing duration, depends on the time at which processing is started. The probability of an agent completing the assigned job is higher when processing starts earlier, but the opportunity loss can also be high due to the longer engagement duration. As a result, the OL-DEC-MDP model introduces a reward function considering the opportunity loss, which is estimated from a prediction of upcoming jobs obtained by sampling the job arrival process. Heuristic strategies are introduced for computing the best starting time for an incoming job by each agent, and an incoming job is always scheduled to the agent with the highest reward among all agents under their best starting policies. The simulation experiments show that the OL-DEC-MDP model improves the overall scheduling performance compared with models that do not consider opportunity loss in a heavy-load environment.
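
    The trade-off described above can be sketched as follows; the logistic success probability, the duration function, the opportunity-loss estimate and all parameters are assumptions used only to illustrate how a best starting time emerges from balancing success probability against opportunity loss.

```python
# Illustrative sketch of the opportunity-loss trade-off: starting earlier raises
# the success probability but engages the agent longer, so the expected reward
# deducts an opportunity loss estimated from the predicted value of upcoming jobs.
import numpy as np

def success_probability(start_time, soft_deadline=6.0, steepness=2.0):
    """Assumed logistic decay: close to 1 for early starts, dropping past the soft deadline."""
    return 1.0 / (1.0 + np.exp(steepness * (start_time - soft_deadline)))

def processing_duration(start_time):
    """Assumed: starting earlier keeps the agent engaged longer."""
    return 4.0 - 0.3 * start_time

def expected_reward(start_time, job_value=10.0, upcoming_rate=0.5, upcoming_value=3.0):
    """Expected gain from the assigned job minus the estimated opportunity loss."""
    duration = processing_duration(start_time)
    gain = success_probability(start_time) * job_value
    # Opportunity loss: value of jobs predicted (e.g., by sampling the arrival
    # process) to arrive while the agent is still engaged and must decline them.
    opportunity_loss = 0.5 * upcoming_rate * upcoming_value * duration
    return gain - opportunity_loss

candidates = np.arange(0.0, 10.0, 0.5)
best = max(candidates, key=expected_reward)
print(f"best starting time = {best:.1f}, expected reward = {expected_reward(best):.2f}")
```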

  19. The Linearity of Optical Tomography: Sensor Model and Experimental Verification

    Siti Zarina MOHD. MUJI

    2011-09-01

    The aim of this paper is to demonstrate the linearity of an optical sensor. Linearity of the sensor response is a must in optical tomography applications, as it affects the tomogram result. Two types of testing are used, namely testing with a voltage parameter and testing with a time-unit parameter. In the former, the voltage is measured when an obstacle is placed between transmitter and receiver; the obstacle diameters range from 0.5 to 3 mm. The latter is the same test but with a larger obstacle of 59.24 mm, and its purpose is to measure the time spent by the ball as it cuts the sensing area of the circuit. Both results show a linear relation, which proves that the optical sensors are suitable for process tomography applications.
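
    The kind of linearity check described above can be sketched as follows, with made-up voltage readings: a straight line is fitted to voltage versus obstacle diameter and the correlation coefficient indicates how linear the response is.

```python
# Minimal sketch (made-up readings): check linearity of an optical sensor's
# response by fitting a straight line to voltage vs. obstacle diameter.
import numpy as np

diameter_mm = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])        # obstacle diameters
voltage_v   = np.array([4.91, 4.80, 4.71, 4.62, 4.50, 4.41])  # hypothetical readings

slope, intercept = np.polyfit(diameter_mm, voltage_v, deg=1)
r = np.corrcoef(diameter_mm, voltage_v)[0, 1]

print(f"V = {slope:.3f} * d + {intercept:.3f}")
print(f"correlation coefficient r = {r:.4f}  (|r| close to 1 indicates a linear response)")
```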

  20. MODELLING TEMPORAL SCHEDULE OF URBAN TRAINS USING AGENT-BASED SIMULATION AND NSGA2-BASED MULTIOBJECTIVE OPTIMIZATION APPROACHES

    M. Sahelgozin

    2015-12-01

    Increasing distances between locations of residence and services lead to a large number of daily commutes in urban areas. Developing subway systems has been taken into consideration by transportation managers as a response to this huge travel demand. In the development of subway infrastructures, producing a temporal schedule for trains is an important task, because an appropriately designed timetable decreases total passenger travel time, total operation cost and the energy consumption of trains. Since these variables are not positively correlated, subway scheduling is considered a multi-criteria optimization problem, and proposing a proper solution for subway scheduling has always been a controversial issue. On the other hand, research on a phenomenon requires a summarized representation of the real world, known as a model. In this study, we attempt to model the temporal schedule of urban trains for use in Multi-Criteria Subway Schedule Optimization (MCSSO) problems. At first, a conceptual framework is presented for MCSSO. Then, an agent-based simulation environment is implemented to perform a Sensitivity Analysis (SA) that is used to extract the interrelations between the framework components. These interrelations are then taken into account in constructing the proposed model. In order to evaluate the performance of the model in MCSSO problems, Tehran subway line no. 1 is considered as the case study. The results show that the model was able to generate an acceptable distribution of Pareto-optimal solutions applicable in real situations when solving an MCSSO problem. The accuracy of the model in representing the operation of subway systems was also significant.
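
    The core of the NSGA2-based approach is Pareto dominance over the three objectives; the sketch below shows that dominance step for a few hypothetical candidate timetables (the objective values are placeholders, not results from the study).

```python
# Minimal sketch (placeholder numbers, not the paper's implementation) of the
# Pareto-dominance step at the heart of NSGA-II: keep the non-dominated
# timetables when minimizing travel time, operation cost and energy consumption.
def dominates(a, b):
    """True if timetable a is no worse than b in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    front = []
    for name, objs in solutions.items():
        if not any(dominates(other, objs) for other_name, other in solutions.items()
                   if other_name != name):
            front.append(name)
    return front

# Candidate headway schedules: (total travel time [h], operation cost, energy [MWh]).
candidates = {
    "headway_3min": (5200.0, 980.0, 41.0),
    "headway_4min": (5400.0, 860.0, 36.0),
    "headway_5min": (5900.0, 790.0, 33.0),
    "headway_6min": (6600.0, 800.0, 34.0),   # dominated by headway_5min
}
print("Pareto-optimal timetables:", pareto_front(candidates))
```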