Time-dependent Integrated Predictive Modeling of ITER Plasmas
Institute of Scientific and Technical Information of China (English)
R.V. Budny
2007-01-01
Introduction: Modeling burning plasmas is important for speeding progress toward practical tokamak energy production. Examples of issues that can be elucidated by modeling include requirements for heating, fueling, torque, and current drive systems; the design of diagnostics; and estimates of plasma performance (e.g., fusion power production) in various plasma scenarios. The modeling should be time-dependent, to demonstrate that burning plasmas can be created, maintained (controlled), and terminated successfully. It should also be integrated, to treat self-consistently the nonlinearities and strong coupling between the plasma, heating, current drive, confinement, and control systems.
Iterated non-linear model predictive control based on tubes and contractive constraints.
Murillo, M; Sánchez, G; Giovanini, L
2016-05-01
This paper presents a predictive control algorithm for non-linear systems based on successive linearizations of the non-linear dynamics around a given trajectory. A linear time-varying model is obtained, and the non-convex constrained optimization problem is transformed into a sequence of locally convex ones. The robustness of the proposed algorithm is addressed by adding a convex contractive constraint. To account for linearization errors and to obtain more accurate results, an inner iteration loop is added to the algorithm. A simple methodology to obtain an outer bounding tube for state trajectories is also presented. The convergence of the iterative process and the stability of the closed-loop system are analyzed. The simulation results show the effectiveness of the proposed algorithm in controlling a quadcopter-type unmanned aerial vehicle.
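The successive-linearization idea can be sketched compactly. The toy example below is not the authors' algorithm (no contractive constraint or bounding tube is included, and the scalar dynamics are invented for illustration): it relinearizes a nonlinear model around the latest trajectory and solves the resulting time-varying linear-quadratic problem with a Riccati recursion, iterating until the trajectory settles.

```python
import numpy as np

dt, N, r = 0.1, 20, 0.1   # step size, horizon length, control weight

def f(x, u):              # illustrative nonlinear dynamics
    return x + dt * (np.sin(x) + u)

def linearize(x, u):      # Jacobians of f at (x, u)
    return 1.0 + dt * np.cos(x), dt

def sl_mpc(x0, iters=10):
    """Successive linearization: relinearize around the latest
    trajectory and solve the resulting time-varying LQ problem."""
    u = np.zeros(N)                       # nominal input sequence
    for _ in range(iters):
        # roll out the nominal trajectory under the current inputs
        x = np.empty(N + 1); x[0] = x0
        for k in range(N):
            x[k + 1] = f(x[k], u[k])
        # backward Riccati pass on the linearized, time-varying model
        P, K = 1.0, np.empty(N)
        for k in reversed(range(N)):
            A, B = linearize(x[k], u[k])
            K[k] = (B * P * A) / (r + B * P * B)
            P = 1.0 + A * P * A - K[k] * B * P * A   # scalar Riccati update
        # forward pass: apply the time-varying feedback to update u
        z = x0
        for k in range(N):
            u[k] = -K[k] * z
            z = f(z, u[k])
    return u, x

u, x = sl_mpc(x0=1.0)
print(abs(x[-1]))   # the state is driven toward the origin
```

Each outer pass replaces the non-convex problem by a convex (here unconstrained LQ) one around the latest trajectory, which is the core of the approach the abstract describes.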
SVD Iteration Model and Its Use in Prediction of Summer Precipitation
Institute of Scientific and Technical Information of China (English)
ZHANG Yongling; DING Yuguo; WANG Jijun
2008-01-01
A new short-term climatic prediction model based on singular value decomposition (SVD) iteration was designed with solid mathematics and strict logical reasoning. Taking predictors into the prediction model, using iterative computation, and substituting the last results into the next computation, better results with improved precision can be acquired. Precipitation prediction experiments were done separately for 16 stations in North China and 30 stations in the mid-lower catchment of the Yangtze River during 1991-2000. Their average mean square errors are 0.352 and 0.312, and the results are very stable: the mean square errors of 9 years are less than 0.5, while only 1 year exceeds 0.5. The mean sign correlation coefficients between forecast and observed summer precipitation during 1991-2000 are 0.575 in North China and 0.623 in the mid-lower catchment of the Yangtze River. Their variations in North China during the 10 years are small: only in 1996 is the sign correlation coefficient below 0.5; the others are all over 0.5. But the sign correlation coefficients in the mid-lower catchment of the Yangtze River vary markedly: the lowest is only 0.3 in 1992, and the highest is 0.9 in 1998. Examining the distribution of the forecast precipitation anomaly field for the summer of 1998 shows that the model captured the positive and negative anomalies of precipitation and also forecast the anomaly distributions well, although the errors between the forecast and observed precipitation anomalies are obvious in magnitude. Climate characteristics of large-scale meteorological elements such as summer precipitation differ obviously in spatial distribution. Forecasts improve if a large region is divided into subregions according to the discrepancies in climatic characteristics and predictions are made for each subregion. The research shows that the SVD iteration model is a very effective forecast model with strong practical applicability.
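The iterate-and-substitute idea can be illustrated with a minimal maximum-covariance (SVD) prediction sketch on synthetic data. This is a generic illustration, not the authors' operational scheme: the mode count, the regression step, and the feedback of each prediction into the training set are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
t, p, q, k = 30, 8, 5, 3          # years, predictor/predictand dims, modes

# synthetic training data sharing a common linear signal
S = rng.normal(size=(t, k))
X = S @ rng.normal(size=(k, p)) + 0.1 * rng.normal(size=(t, p))
Y = S @ rng.normal(size=(k, q)) + 0.1 * rng.normal(size=(t, q))

def svd_predict(X, Y, x_new, k):
    """One SVD (maximum covariance) prediction step."""
    U, _, Vt = np.linalg.svd(X.T @ Y / len(X), full_matrices=False)
    A, B = X @ U[:, :k], Y @ Vt[:k].T              # expansion coefficients
    beta = np.linalg.lstsq(A, B, rcond=None)[0]    # regress predictand modes
    return (x_new @ U[:, :k]) @ beta @ Vt[:k]

def svd_iterate(X, Y, x_new, k, n_iter=3):
    """Substitute each prediction back into the training set and repeat."""
    y_hat = svd_predict(X, Y, x_new, k)
    for _ in range(n_iter):
        Xa, Ya = np.vstack([X, x_new]), np.vstack([Y, y_hat])
        y_hat = svd_predict(Xa, Ya, x_new, k)
    return y_hat

x_new = rng.normal(size=(1, p))
y_hat = svd_iterate(X, Y, x_new, k)
print(y_hat.shape)   # one predicted field of q stations
```

The loop is the "substituting the last results into the next computation" step the abstract describes, in its simplest possible form.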
Tharwat, Alaa; Moemen, Yasmine S; Hassanien, Aboul Ella
2016-12-09
Measuring toxicity is one of the main steps in drug development. Hence, there is a high demand for computational models to predict the toxicity effects of potential drugs. In this study, we used a dataset consisting of four toxicity effects: mutagenic, tumorigenic, irritant and reproductive effects. The proposed model consists of three phases. In the first phase, rough set-based methods are used to select the most discriminative features, reducing the classification time and improving the classification performance. Due to the imbalanced class distribution, in the second phase, different sampling methods such as Random Under-Sampling, Random Over-Sampling and the Synthetic Minority Oversampling Technique are used to address the problem of imbalanced datasets. The ITerative Sampling (ITS) method is proposed to avoid the limitations of those methods. The ITS method has two steps. The first step (the sampling step) iteratively modifies the prior distribution of the minority and majority classes. In the second step, a data cleaning method is used to remove the overlap produced by the first step. In the third phase, a Bagging classifier is used to classify an unknown drug as toxic or non-toxic. The experimental results showed that the proposed model performed well in classifying the unknown samples according to all toxic effects in the imbalanced datasets.
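The two-step structure described for ITS (iteratively adjust the class priors, then clean the overlap) can be sketched generically. The toy version below is an assumption-laden illustration — gradual random duplication of minority points plus a nearest-neighbour cleaning rule — and not the published ITS algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

def iterative_sampling(X, y, n_iter=5, step=0.2):
    """Step 1: gradually shift the class priors by duplicating minority
    points. Step 2: clean overlap by dropping majority points whose
    nearest neighbour belongs to the minority class. Assumes 0/1 labels."""
    X, y = X.copy(), y.copy()
    minority = int(y.sum() < len(y) / 2)       # label of the smaller class
    for _ in range(n_iter):
        idx_min = np.flatnonzero(y == minority)
        if len(idx_min) >= len(y) - len(idx_min):
            break                               # priors balanced, stop
        pick = rng.choice(idx_min, max(1, int(step * len(idx_min))))
        X = np.vstack([X, X[pick]])
        y = np.concatenate([y, y[pick]])
    keep = []
    for i in range(len(y)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                           # exclude the point itself
        if y[i] == minority or y[np.argmin(d)] == y[i]:
            keep.append(i)
    return X[keep], y[keep]

# imbalanced toy data: 40 majority points vs 8 minority points
X = np.vstack([rng.normal(0, 1, (40, 2)), rng.normal(3, 1, (8, 2))])
y = np.array([0] * 40 + [1] * 8)
Xr, yr = iterative_sampling(X, y)
print(y.mean(), yr.mean())   # the minority fraction increases
```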
Progress and challenges in predictive modeling of runaway electron generation in ITER
Brennan, Dylan; Hirvijoki, Eero; Liu, Chang; Bhattacharjee, Amitava; Boozer, Allen
2016-10-01
Among the most important questions raised by a thermal collapse event in ITER are how many seed electrons are available for runaway acceleration and the avalanche process, how collisional and radiative mechanisms will affect the electron acceleration, and what mitigation techniques will be effective. In this study, we use the kinetic equation for electrons and ions to investigate how different cooling scenarios lead to different seed distributions. Given any initial distribution, we study the subsequent avalanche and acceleration to runaway with adjoint and test-particle methods. This method gives an accurate calculation of the runaway threshold by including the collisional drag of background electrons (assuming they are Maxwellian), pitch-angle scattering, and synchrotron and bremsstrahlung radiation. This effort is part of a new large collaboration in the US which promises to contribute substantially to our understanding of these issues. This talk will briefly review how this work contributes to that collaboration and, in particular, discuss the technical challenges and open questions that stand in the way of quantitative, predictive modeling of runaway generation in ITER, and how we plan to address them.
Directory of Open Access Journals (Sweden)
Ojcius David M
2009-08-01
Full Text Available Abstract Background Promoter identification is a first step in the quest to explain gene regulation in bacteria. It has been demonstrated that the initiation of bacterial transcription depends upon the stability and topology of DNA in the promoter region as well as the binding affinity between the RNA polymerase σ-factor and promoter. However, promoter prediction algorithms to date have not explicitly used an ensemble of these factors as predictors. In addition, most promoter models have been trained on data from Escherichia coli. Although it has been shown that transcriptional mechanisms are similar among various bacteria, it is quite possible that the differences between Escherichia coli and Chlamydia trachomatis are large enough to recommend an organism-specific modeling effort. Results Here we present an iterative stochastic model building procedure that combines such biophysical metrics as DNA stability, curvature, twist and stress-induced DNA duplex destabilization along with duration hidden Markov model parameters to model Chlamydia trachomatis σ66 promoters from 29 experimentally verified sequences. Initially, iterative duration hidden Markov modeling of the training set sequences provides a scoring algorithm for Chlamydia trachomatis RNA polymerase σ66/DNA binding. Subsequently, an iterative application of Stepwise Binary Logistic Regression selects multiple promoter predictors and deletes/replaces training set sequences to determine an optimal training set. The resulting model predicts the final training set with a high degree of accuracy and provides insights into the structure of the promoter region. Model based genome-wide predictions are provided so that optimal promoter candidates can be experimentally evaluated, and refined models developed. Co-predictions with three other algorithms are also supplied to enhance reliability. Conclusion This strategy and resulting model support the conjecture that DNA biophysical properties
An Iterative Approach for Distributed Model Predictive Control of Irrigation Canals
Doan, D.; Keviczky, T.; Negenborn, R.R.; De Schutter, B.
2009-01-01
Optimization techniques have played a fundamental role in designing automatic control systems for the most part of the past half century. This dependence is ever more obvious in today’s wide-spread use of online optimization-based control methods, such as Model Predictive Control (MPC) [1]. The ability to capture process constraints and characterize comprehensive economic objective functions has made MPC the industry standard for controlling complex systems.
An iterative approach of protein function prediction
Directory of Open Access Journals (Sweden)
Chi Xiaoxiao
2011-11-01
Full Text Available Abstract Background Current approaches to predicting protein functions from a protein-protein interaction (PPI) dataset are based on the assumption that the available functions of the proteins (a.k.a. annotated proteins) will determine the functions of the proteins whose functions are as yet unknown (a.k.a. un-annotated proteins). Therefore, protein function prediction is a mono-directed and one-off procedure, i.e. from annotated proteins to un-annotated proteins. However, the interactions between proteins are mutual rather than static and mono-directed, although the functions of some proteins are currently unknown. This means that when we use a similarity-based approach to predict the functions of un-annotated proteins, the un-annotated proteins, once their functions are predicted, will affect the similarities between proteins, which in turn will affect the prediction results. In other words, function prediction is a dynamic and mutual procedure. This dynamic feature of protein interactions, however, was not considered in existing prediction algorithms. Results In this paper, we propose a new approach that predicts protein functions iteratively. This iterative approach incorporates the dynamic and mutual features of PPI interactions, as well as the local and global semantic influence of protein functions, into the prediction. To guarantee predicting functions iteratively, we propose a new protein similarity derived from protein functions. We adapt new evaluation metrics to evaluate the prediction quality of our algorithm and other similar algorithms. Experiments on real PPI datasets were conducted to evaluate the effectiveness of the proposed approach in predicting unknown protein functions. Conclusions The iterative approach is more likely to reflect the real biological nature between proteins when predicting functions. A proper definition of protein similarity from protein functions is the key to predicting
Lee, Seung Yup; Skolnick, Jeffrey
2007-07-01
To improve the accuracy of TASSER models, especially in the limit where threading-provided template alignments are of poor quality, we have developed the TASSER(iter) algorithm, which uses the templates and contact restraints from TASSER-generated models for iterative structure refinement. We apply TASSER(iter) to a large benchmark set of 2,773 nonhomologous single domain proteins. TASSER(iter) models have a smaller global average RMSD of 5.48 Å compared to the 5.81 Å RMSD of the original TASSER models. Classifying the targets by the level of prediction difficulty (where Easy targets have a good template with a corresponding good threading alignment, Medium targets have a good template but a poor alignment, and Hard targets have an incorrectly identified template), TASSER(iter) (TASSER) models have an average RMSD of 4.15 Å (4.35 Å) for the Easy set and 9.05 Å (9.52 Å) for the Hard set. The largest reduction of average RMSD is for the Medium set, where the TASSER(iter) models have an average global RMSD of 5.67 Å compared to 6.72 Å for the TASSER models. Seventy percent of the Medium set TASSER(iter) models have a smaller RMSD than the TASSER models, while 63% of the Easy and 60% of the Hard TASSER models are improved by TASSER(iter). For the foldable cases, where the targets have a low RMSD to the native structure, TASSER(iter) shows obvious improvement over TASSER models: for the Medium set, it improves the success rate from 57.0 to 67.2%, followed by the Hard targets, where the success rate improves from 32.0 to 34.8%, with the smallest improvement in the Easy targets, from 82.6 to 84.0%. These results suggest that TASSER(iter) can provide more reliable predictions for targets of Medium difficulty, a range that had resisted improvement in the quality of protein structure predictions.
Energy Technology Data Exchange (ETDEWEB)
Pangione, L. [Associazione Euratom/ENEA sulla Fusione, Centro Ricerche Frascati, CP 65, 00044 Frascati, Roma (Italy)], E-mail: pangione@frascati.enea.it; Lister, J.B. [CRPP-EPFL, Association EURATOM-Suisse, Station 13, 1015 Lausanne (Switzerland)
2008-04-15
The ITER CODAC (COntrol, Data Access and Communication) conceptual design resulted from 2 years of activity. One result was a proposed functional partitioning of CODAC into different CODAC Systems, each of them partitioned into other CODAC Systems. Considering the large size of this project, simple use of human language assisted by figures would certainly be ineffective in creating an unambiguous description of all interactions and all relations between these Systems. Moreover, the underlying design is resident in the mind of the designers, who must consider all possible situations that could happen to each system. There is therefore a need to model the whole of CODAC with a clear and preferably graphical method, which allows the designers to verify the correctness and the consistency of their project. The aim of this paper is to describe the work started on ITER CODAC modeling using Matlab/Simulink. The main feature of this tool is the possibility of having a simple, graphical, intuitive representation of a complex system and ultimately to run a numerical simulation of it. Using Matlab/Simulink, each CODAC System was represented in a graphical and intuitive form with its relations and interactions through the definition of a small number of simple rules. In a Simulink diagram, each system was represented as a 'black box', both containing, and connected to, a number of other systems. In this way it is possible to move vertically between systems on different levels, to show the relation of membership, or horizontally to analyse the information exchange between systems at the same level. This process can be iterated, starting from a global diagram, in which only CODAC appears with the Plant Systems and the external sites, and going deeper down to the mathematical model of each CODAC system. The Matlab/Simulink features for simulating the whole top diagram encourage us to develop the idea of completing the functionalities of all systems in order to finally
Predictive Simulations of ITER Including Neutral Beam Driven Toroidal Rotation
Energy Technology Data Exchange (ETDEWEB)
Halpern, Federico D.; Kritz, Arnold H.; Bateman, Glenn; Pankin, Alexei Y.; Budny, Robert V.; McCune, Douglas C.
2008-06-16
Predictive simulations of ITER [R. Aymar et al., Plasma Phys. Control. Fusion 44, 519 (2002)] discharges are carried out for the 15 MA high confinement mode (H-mode) scenario using PTRANSP, the predictive version of the TRANSP code. The thermal and toroidal momentum transport equations are evolved using turbulent and neoclassical transport models. A predictive model is used to compute the temperature and width of the H-mode pedestal. The ITER simulations are carried out for neutral beam injection (NBI) heated plasmas, for ion cyclotron resonant frequency (ICRF) heated plasmas, and for plasmas heated with a mix of NBI and ICRF. It is shown that neutral beam injection drives toroidal rotation that improves the confinement and fusion power production in ITER. The scaling of fusion power with respect to the input power and to the pedestal temperature is studied. It is observed that, in simulations carried out using the momentum transport diffusivity computed using the GLF23 model [R. Waltz et al., Phys. Plasmas 4, 2482 (1997)], the fusion power increases with increasing injected beam power and central rotation frequency. It is found that the ITER target fusion power of 500 MW is produced with 20 MW of NBI power when the pedestal temperature is 3.5 keV. © 2008 American Institute of Physics. [DOI: 10.1063/1.2931037]
PTRANSP Tests of TGLF and Predictions for ITER
Energy Technology Data Exchange (ETDEWEB)
Robert V. Budny, Xingqiu Yuan, S. Jardin, G. Hammett, G. Staebler, J. Kinsey, members of the ITPA Transport and Confinement Topical Group, and JET EFDA Contributors
2012-09-23
A new numerical solver for stiff transport predictions has been developed and implemented in the PTRANSP predictive transport code. The TGLF and GLF23 predictive codes have been incorporated in the solver, verified by comparisons with predictions from the XPTOR code, and validated by comparing predicted and measured profiles. Predictions for ITER baseline plasmas are presented.
Iotti, Robert
2015-04-01
ITER is an international experimental facility being built by seven Parties to demonstrate the long-term potential of fusion energy. The ITER Joint Implementation Agreement (JIA) defines the structure and governance model of such cooperation. There are a number of necessary conditions for such international projects to be successful: a complete design, strong systems engineering working with an agreed set of requirements, an experienced organization with systems and plans in place to manage the project, a cost estimate backed by industry, and someone in charge. Unfortunately for ITER, many of these conditions were not present. The paper discusses the priorities in the JIA which led to setting up the project with a Central Integrating Organization (IO) in Cadarache, France as the ITER HQ, and seven Domestic Agencies (DAs) located in the countries of the Parties, responsible for delivering 90%+ of the project hardware as Contributions-in-Kind and also financial contributions to the IO as "Contributions-in-Cash." Theoretically the Director General (DG) is responsible for everything. In practice the DG does not have the power to control the work of the DAs, and there is not an effective management structure enabling the IO and the DAs to arbitrate disputes, so the project is not really managed, but is a loose collaboration of competing interests. Any DA can effectively block a decision reached by the DG. Inefficiencies in completing the design while setting up a competent organization from scratch contributed to the delays and cost increases during the initial few years. So did the fact that the original estimate was not developed from industry input. Unforeseen inflation and market demand for certain commodities/materials further exacerbated the cost increases. Since then, improvements are debatable. Does this mean that the governance model of ITER is a wrong model for international scientific cooperation? I do not believe so. Had the necessary conditions for success
Wall conditioning for ITER: Current experimental and modeling activities
Energy Technology Data Exchange (ETDEWEB)
Douai, D., E-mail: david.douai@cea.fr [CEA, IRFM, Association Euratom-CEA, 13108 St. Paul lez Durance (France); Kogut, D. [CEA, IRFM, Association Euratom-CEA, 13108 St. Paul lez Durance (France); Wauters, T. [LPP-ERM/KMS, Association Belgian State, 1000 Brussels (Belgium); Brezinsek, S. [FZJ, Institut für Energie- und Klimaforschung Plasmaphysik, 52441 Jülich (Germany); Hagelaar, G.J.M. [Laboratoire Plasma et Conversion d’Energie, UMR5213, Toulouse (France); Hong, S.H. [National Fusion Research Institute, Daejeon 305-806 (Korea, Republic of); Lomas, P.J. [CCFE, Culham Science Centre, OX14 3DB Abingdon (United Kingdom); Lyssoivan, A. [LPP-ERM/KMS, Association Belgian State, 1000 Brussels (Belgium); Nunes, I. [Associação EURATOM-IST, Instituto de Plasmas e Fusão Nuclear, 1049-001 Lisboa (Portugal); Pitts, R.A. [ITER International Organization, F-13067 St. Paul lez Durance (France); Rohde, V. [Max-Planck-Institut für Plasmaphysik, 85748 Garching (Germany); Vries, P.C. de [ITER International Organization, F-13067 St. Paul lez Durance (France)
2015-08-15
Wall conditioning will be required in ITER to control fuel and impurity recycling, as well as tritium (T) inventory. An analysis of a conditioning cycle on JET, with its ITER-like wall, is presented, evidencing a reduced need for wall cleaning in ITER compared to JET-CFC. Using a novel 2D multi-fluid model, the current density during Glow Discharge Conditioning (GDC) on the in-vessel plasma-facing components (PFC) of ITER is predicted to approach the simple expectation of total anode current divided by wall surface area. Baking of the divertor to 350 °C should desorb the majority of the co-deposited T. ITER foresees the use of low-temperature, plasma-based techniques compatible with the permanent toroidal magnetic field, such as Ion (ICWC) or Electron Cyclotron Wall Conditioning (ECWC), for tritium removal between ITER plasma pulses. Extrapolation of JET ICWC results to ITER indicates removal comparable to the estimated T-retention in nominal ITER D:T shots, whereas GDC may be unattractive for that purpose.
PTRANSP Tests Of TGLF And Predictions For ITER
Energy Technology Data Exchange (ETDEWEB)
Robert V. Budny, Xingqiu Yuan, S. Jardin, G. Hammett, G. Staebler, members of the ITPA Transport and Confinement Topical Group, and JET EFDA Contributors
2012-02-28
One of the physics goals for ITER is to achieve high fusion power PDT at a high gain QDT. This goal is important for studying the physics of reactor-relevant burning plasmas. Simulations of plasma performance in ITER can help achieve this goal by aiding in the design of systems such as diagnostics and in planning ITER plasma regimes. Simulations can indicate areas where further research in theory and experiments is needed. To have credible simulations integrated modeling is necessary since plasma profiles and applied heating, torque, and current drive are strongly coupled.
CORSICA modelling of ITER hybrid operation scenarios
Kim, S. H.; Bulmer, R. H.; Campbell, D. J.; Casper, T. A.; LoDestro, L. L.; Meyer, W. H.; Pearlstein, L. D.; Snipes, J. A.
2016-12-01
The hybrid operating mode observed in several tokamaks is characterized by further enhancement over the high plasma confinement (H-mode) associated with reduced magneto-hydro-dynamic (MHD) instabilities linked to a stationary flat safety factor (q) profile in the core region. The proposed ITER hybrid operation is currently aiming at operating for a long burn duration (>1000 s) with a moderate fusion power multiplication factor, Q, of at least 5. This paper presents candidate ITER hybrid operation scenarios developed using a free-boundary transport modelling code, CORSICA, taking all relevant physics and engineering constraints into account. The ITER hybrid operation scenarios have been developed by tailoring the 15 MA baseline ITER inductive H-mode scenario. Accessible operation conditions for ITER hybrid operation and the achievable range of plasma parameters have been investigated considering uncertainties on the plasma confinement and transport. ITER operation capability for avoiding the poloidal field coil current, field and force limits has been examined by applying different current ramp rates, flat-top plasma currents and densities, and pre-magnetization of the poloidal field coils. Various combinations of heating and current drive (H&CD) schemes have been applied to study several physics issues, such as plasma current density profile tailoring, enhancement of the plasma energy confinement and fusion power generation. A parameterized edge pedestal model based on EPED1 added to the CORSICA code has been applied to hybrid operation scenarios. Finally, fully self-consistent free-boundary transport simulations have been performed to provide information on the poloidal field coil voltage demands and to study the controllability with the ITER controllers. Extended from Proc. 24th Int. Conf. on Fusion Energy (San Diego, 2012) IT/P1-13.
Predictions of H-mode performance in ITER
Energy Technology Data Exchange (ETDEWEB)
Budny, R. V.; Andre, R.; Bateman, G.; Halpern, F.; Kessel, C. E.; Kritz, A.; McCune, D.
2008-03-03
Time-dependent integrated predictive modeling is carried out using the PTRANSP code to predict fusion power and parameters such as alpha particle density and pressure in ITER H-mode plasmas. Auxiliary heating by negative-ion neutral beam injection and ion cyclotron heating of He3 minority ions are modeled, and the GLF23 transport model is used in the prediction of the evolution of plasma temperature profiles. Effects of beam steering, beam torque, plasma rotation, beam current drive, pedestal temperatures, sawtooth oscillations, magnetic diffusion, and accumulation of He ash are treated self-consistently. Variations in assumptions associated with physics uncertainties for standard baseline DT H-mode plasmas (with Ip = 15 MA, BTF = 5.3 T, and Greenwald fraction = 0.86) lead to a range of predictions for DT fusion power PDT and quasi-steady-state fusion gain QDT (≡ PDT/Paux). Typical predictions assuming Paux = 50-53 MW yield PDT = 250-720 MW and QDT = 5-14. In some cases where Paux is ramped down or shut off after initial flat-top conditions, quasi-steady QDT can be considerably higher, even infinite. Adverse physics assumptions, such as the existence of an inward pinch of the helium ash and an ash recycling coefficient approaching unity, lead to very low values of PDT. Alternative scenarios with different heating and reduced performance regimes are also considered, including plasmas with only H or D isotopes, DT plasmas with the toroidal field reduced 10 or 20%, and discharges with reduced beam voltage. In full-performance D-only discharges, tritium burn-up is predicted to generate central tritium densities up to 10^16/m^3 and DT neutron rates up to 5×10^16/s, compared with DD neutron rates of 6×10^17/s. Predictions with the toroidal field reduced 10 or 20% below the planned 5.3 T, keeping the same q95, Greenwald fraction, and βN, indicate that the fusion yield PDT and QDT will be lower by about a factor of two (scaling as B^3.5).
ITER plasma safety interface models and assessments
Energy Technology Data Exchange (ETDEWEB)
Uckan, N.A. [Oak Ridge National Lab., TN (United States); Bartels, H-W. [ITER San Diego Joint Work Site, La Jolla, CA (United States); Honda, T. [Hitachi Ltd., Ibaraki (Japan). Hitachi Research Lab.; Putvinski, S. [ITER San Diego Joint Work Site, La Jolla, CA (United States); Amano, T. [National Inst. for Fusion Science, Nagoya (Japan); Boucher, D.; Post, D.; Wesley, J. [ITER San Diego Joint Work Site, La Jolla, CA (United States)
1996-12-31
Physics models and requirements to be used as a basis for safety analysis studies are developed and physics results motivated by safety considerations are presented for the ITER design. Physics specifications are provided for enveloping plasma dynamic events for Category I (operational event), Category II (likely event), and Category III (unlikely event). A safety analysis code SAFALY has been developed to investigate plasma anomaly events. The plasma response to ex-vessel component failure and machine response to plasma transients are considered.
Model Based Iterative Reconstruction for Bright Field Electron Tomography (Postprint)
2013-02-01
… the reconstruction when typical algorithms such as Filtered Back Projection (FBP) and the Simultaneous Iterative Reconstruction Technique (SIRT) are applied to the data. Model-based iterative reconstruction (MBIR) provides a powerful framework for tomographic reconstruction …
Energy Technology Data Exchange (ETDEWEB)
Terwilliger, T.C.; Grosse-Kunstleve, Ralf Wilhelm; Afonine, P.V.; Moriarty, N.W.; Zwart, P.H.; Hung, L.-W.; Read, R.J.; Adams, P.D. (Los Alamos National Laboratory, Mailstop M888, Los Alamos, NM 87545, USA; Lawrence Berkeley National Laboratory, One Cyclotron Road, Building 64R0121, Berkeley, CA 94720, USA; Department of Haematology, University of Cambridge, Cambridge CB2 0XY, England)
2008-02-12
A procedure for carrying out iterative model-building, density modification and refinement is presented in which the density in an OMIT region is essentially unbiased by an atomic model. Density from a set of overlapping OMIT regions can be combined to create a composite 'Iterative-Build' OMIT map that is everywhere unbiased by an atomic model but also everywhere benefiting from the model-based information present elsewhere in the unit cell. The procedure may have applications in the validation of specific features in atomic models as well as in overall model validation. The procedure is demonstrated with a molecular replacement structure and with an experimentally-phased structure, and a variation on the method is demonstrated by removing model bias from a structure from the Protein Data Bank.
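The composite-map construction — every grid point taken only from a map in which that region was omitted, with overlapping regions blended and normalized — can be sketched on a 1-D grid. This shows only the stitching arithmetic, with invented stand-in "maps"; the actual procedure involves crystallographic density modification and refinement at each step.

```python
import numpy as np

n, n_regions, pad = 120, 6, 10
rng = np.random.default_rng(0)
# stand-ins for the n_regions OMIT maps (each unbiased inside its own region)
omit_maps = [rng.random(n) for _ in range(n_regions)]

composite = np.zeros(n)
weight = np.zeros(n)
edges = np.linspace(0, n, n_regions + 1).astype(int)
for k in range(n_regions):
    # region k, padded so neighbouring regions overlap at the seams
    lo, hi = max(edges[k] - pad, 0), min(edges[k + 1] + pad, n)
    w = np.ones(hi - lo)          # flat weights; a taper would smooth seams
    composite[lo:hi] += w * omit_maps[k][lo:hi]
    weight[lo:hi] += w
composite /= weight                # normalize where regions overlap
```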
Electronic noise modeling in statistical iterative reconstruction.
Xu, Jingyan; Tsui, Benjamin M W
2009-06-01
We consider electronic noise modeling in tomographic image reconstruction when the measured signal is the sum of a Gaussian distributed electronic noise component and another random variable whose log-likelihood function satisfies a certain linearity condition. Examples of such likelihood functions include the Poisson distribution and an exponential dispersion (ED) model that can approximate the signal statistics in integration mode X-ray detectors. We formulate the image reconstruction problem as a maximum-likelihood estimation problem. Using an expectation-maximization approach, we demonstrate that a reconstruction algorithm can be obtained following a simple substitution rule from the one previously derived without electronic noise considerations. To illustrate the applicability of the substitution rule, we present examples of a fully iterative reconstruction algorithm and a sinogram smoothing algorithm both in transmission CT reconstruction when the measured signal contains additive electronic noise. Our simulation studies show the potential usefulness of accurate electronic noise modeling in low-dose CT applications.
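For the Poisson case, the substitution rule can be illustrated with the common "shifted Poisson" approximation — an assumption here, not necessarily the paper's exact rule: replace the data y by y + σ² and the forward projection by Ax + σ² in an otherwise standard MLEM update.

```python
import numpy as np

rng = np.random.default_rng(2)
n_pix, n_det, sigma2 = 16, 64, 4.0

A = rng.random((n_det, n_pix))                  # toy system matrix
x_true = rng.random(n_pix) * 10
# Poisson counts plus additive Gaussian electronic noise
y = rng.poisson(A @ x_true) + rng.normal(0.0, np.sqrt(sigma2), n_det)

def mlem_shifted(y, sigma2, n_iter=50):
    """Standard MLEM with the shifted-Poisson substitution:
    data -> y + sigma^2, forward projection -> A x + sigma^2."""
    y_eff = np.maximum(y + sigma2, 0.0)         # keep effective counts >= 0
    x = np.ones(n_pix)
    sens = A.sum(axis=0)                        # sensitivity (backprojected ones)
    for _ in range(n_iter):
        ybar = A @ x + sigma2
        x *= (A.T @ (y_eff / ybar)) / sens      # multiplicative EM update
    return x

x_hat = mlem_shifted(y, sigma2)
```

The reconstruction algorithm itself is unchanged; only the two substitutions account for the electronic noise, which is the spirit of the substitution rule the abstract describes.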
Nominal model predictive control
Grüne, Lars
2013-01-01
To appear in Encyclopedia of Systems and Control, Tariq Samad, John Baillieul (eds.). Model Predictive Control is a controller design method which synthesizes a sampled-data feedback controller from the iterative solution of open-loop optimal control problems. We describe the basic functionality of MPC controllers, their properties regarding feasibility, stability and performance, and the assumptions needed in order to rigorously ensure these properties in a nominal setting.
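The definition above — solve an open-loop optimal control problem at each sampling instant, apply the first input, repeat — can be sketched for an unconstrained linear-quadratic case. The system matrices below are invented for illustration; real MPC adds state and input constraints, which is where the feasibility and stability questions arise.

```python
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # discrete double integrator
B = np.array([[0.0], [0.1]])
N, R = 10, 0.01                          # horizon length, control weight

def open_loop_ocp(x0):
    """Solve the unconstrained open-loop LQ problem over the horizon by
    stacking the dynamics into a single least-squares problem."""
    # x_{k+1} = A^{k+1} x0 + sum_{j<=k} A^{k-j} B u_j
    F = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
    G = np.zeros((2 * N, N))
    for k in range(N):
        for j in range(k + 1):
            G[2 * k:2 * k + 2, j:j + 1] = np.linalg.matrix_power(A, k - j) @ B
    # minimize ||F x0 + G U||^2 + R ||U||^2 (quadratic, solved exactly)
    H = G.T @ G + R * np.eye(N)
    return np.linalg.solve(H, -G.T @ (F @ x0))

def mpc_step(x):
    return open_loop_ocp(x)[0]           # receding horizon: apply first input only

x = np.array([1.0, 0.0])
for _ in range(50):
    x = A @ x + B.flatten() * mpc_step(x)
print(np.linalg.norm(x))   # the closed loop regulates the state toward zero
```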
Nominal Model Predictive Control
Grüne, Lars
2014-01-01
5 p., to appear in Encyclopedia of Systems and Control, Tariq Samad, John Baillieul (eds.); International audience; Model Predictive Control is a controller design method which synthesizes a sampled data feedback controller from the iterative solution of open loop optimal control problems. We describe the basic functionality of MPC controllers, their properties regarding feasibility, stability and performance and the assumptions needed in order to rigorously ensure these properties in a nomina...
Active Player Modeling in the Iterated Prisoner's Dilemma.
Park, Hyunsoo; Kim, Kyung-Joong
2016-01-01
The iterated prisoner's dilemma (IPD) is well known within the domain of game theory. Although it is relatively simple, it can also elucidate important problems related to cooperation and trust. Generally, players can predict their opponents' actions when they are able to build a precise model of their behavior based on their game playing experience. However, it is difficult to make such predictions based on a limited number of games. The creation of a precise model requires the use of not only an appropriate learning algorithm and framework but also a good dataset. Active learning approaches have recently been introduced to machine learning communities. The approach can usually produce informative datasets with relatively little effort. Therefore, we have proposed an active modeling technique to predict the behavior of IPD players. The proposed method can model the opponent player's behavior while taking advantage of interactive game environments. This experiment used twelve representative types of players as opponents, and an observer used an active modeling algorithm to model these opponents. This observer actively collected data and modeled the opponent's behavior online. Most of our data showed that the observer was able to build, through direct actions, a more accurate model of an opponent's behavior than when the data were collected through random actions.
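The opponent-modeling loop described in the abstract can be sketched minimally as follows. This is not the authors' algorithm: the tit-for-tat opponent, the conditional-response model (probability of cooperating after each joint outcome), and the probe heuristic that steers play toward the least-observed context are all illustrative assumptions.

```python
import random

CONTEXTS = ["CC", "CD", "DC", "DD"]  # (observer move, opponent move) last round

def tit_for_tat(history):
    """Example opponent: cooperate first, then copy the observer's last move."""
    return "C" if not history else history[-1][0]

def active_model(opponent, rounds=400, seed=0):
    """Estimate P(opponent cooperates | previous joint outcome) by active play."""
    rng = random.Random(seed)
    counts = {c: [0, 0] for c in CONTEXTS}   # [cooperations, observations]
    history = []                              # list of (observer, opponent) moves
    for _ in range(rounds):
        if history:
            # Active probe: try to reach the least-observed context next round,
            # crudely assuming the opponent repeats its last move.
            last_opp = history[-1][1]
            target = min(CONTEXTS, key=lambda c: counts[c][1])
            my_move = target[0] if target[1] == last_opp else rng.choice("CD")
        else:
            my_move = rng.choice("CD")
        opp_move = opponent(history)
        if history:
            ctx = history[-1][0] + history[-1][1]
            counts[ctx][1] += 1
            counts[ctx][0] += opp_move == "C"
        history.append((my_move, opp_move))
    return {c: (n and k / n) for c, (k, n) in counts.items()}
```

For a deterministic opponent such as tit-for-tat, every observed context yields an estimate of exactly 0 or 1, so the model recovers the opponent's rule wherever the probing produced data.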
Modeling results for the ITER cryogenic fore pump
Zhang, D. S.; Miller, F. K.; Pfotenhauer, J. M.
2014-01-01
The cryogenic fore pump (CFP) is designed for ITER to collect and compress hydrogen isotopes during the regeneration process of the torus cryopumps. Unlike common cryopumps, the ITER-CFP works in the viscous flow regime. As a result, both adsorption boundary conditions and transport phenomena contribute unique features to the pump performance. In this report, the physical mechanisms of cryopumping are studied, especially the diffusion-adsorption process, and these are coupled with the standard equations of species, momentum and energy balance, as well as the equation of state. Numerical models are developed, which include highly coupled non-linear conservation equations of species, momentum and energy and the equation of state. Thermal and kinetic properties are treated as functions of temperature, pressure, and composition. To solve such a set of equations, a novel numerical technique, identified as the Group-Member numerical technique, is proposed. A 1D numerical model is presented here. The results include a comparison with experimental data for pure hydrogen flow and a prediction for hydrogen flow with trace helium. An advanced 2D model and a detailed explanation of the Group-Member technique are to be presented in subsequent papers.
Directory of Open Access Journals (Sweden)
Filippo Caschera
Full Text Available BACKGROUND: We consider the problem of optimizing a liposomal drug formulation: a complex chemical system with many components (e.g., elements of a lipid library) that interact nonlinearly and synergistically in ways that cannot be predicted from first principles. METHODOLOGY/PRINCIPAL FINDINGS: The optimization criterion in our experiments was the percent encapsulation of a target drug, Amphotericin B, detected experimentally via spectrophotometric assay. Optimization of such a complex system requires strategies that efficiently discover solutions in extremely large volumes of potential experimental space. We have designed and implemented a new strategy of evolutionary design of experiments (Evo-DoE) that efficiently explores high-dimensional spaces by coupling the power of computer and statistical modeling with experimentally measured responses in an iterative loop. CONCLUSIONS: We demonstrate how iterative looping of modeling and experimentation can quickly produce new discoveries with significantly better experimental response, and how such looping can discover the chemical landscape underlying complex chemical systems.
Iteration model of starch hydrolysis by amylolytic enzymes.
Wojciechowski, P M; Koziol, A; Noworyta, A
2001-12-05
An elaborate computer program to simulate the process of starch hydrolysis by amylolytic enzymes has been developed. It is based on the Monte Carlo method and an iterative kinetic model, which predicts productive and non-productive amylase complexes with substrates. It describes both multienzymatic and multisubstrate reactions, simulating the "real" concentrations of all components versus the time of the depolymerization reaction; the number of substrates, intermediate products, and final products is limited only by computer memory. In this work, it is assumed that the "proper" substrate for amylases is the glucoside linkages in starch molecules. Dynamic changes of the substrate during the simulation adequately influence the increase or decrease of the reaction velocity, as well as the kinetics of depolymerization. The presented kinetic model can be adapted to describe most enzymatic degradations of a polymer. The computer program has been tested on experimental data obtained for alpha- and beta-amylases.
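A minimal Monte Carlo depolymerization step in the spirit of this abstract can be sketched as follows. It is only a sketch under stated assumptions: each enzyme attack cleaves one glucoside linkage chosen uniformly at random, so real amylase kinetics (productive vs. non-productive complexes, endo- vs. exo-attack patterns) are not modelled.

```python
import random

def hydrolyze(chains, n_attacks, seed=0):
    """Randomly cleave glucoside linkages.

    chains: list of polymer lengths in monomer units; each attack splits one
    chain at a random internal linkage, conserving the total monomer count.
    """
    rng = random.Random(seed)
    chains = list(chains)
    for _ in range(n_attacks):
        # A chain of length L carries L - 1 cleavable linkages.
        linkages = [max(L - 1, 0) for L in chains]
        total = sum(linkages)
        if total == 0:
            break  # fully hydrolyzed to glucose
        pick = rng.randrange(total)  # choose a linkage uniformly over all chains
        for i, w in enumerate(linkages):
            if pick < w:
                cut = rng.randrange(1, chains[i])  # split position inside chain i
                L = chains.pop(i)
                chains += [cut, L - cut]
                break
            pick -= w
    return chains
```

Tracking the evolving length distribution over many such steps gives the "concentration versus time" curves the abstract describes, with the substrate (the linkage count) shrinking as the reaction proceeds.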
Institute of Scientific and Technical Information of China (English)
Chen Chen; Zhihua Xiong; Yisheng Zhong
2014-01-01
Based on the two-dimensional (2D) system theory, an integrated predictive iterative learning control (2D-IPILC) strategy for batch processes is presented. First, the output response and the error transition model predictions along the batch index can be calculated analytically from the 2D Roesser model of the batch process. Then, an integrated framework combining iterative learning control (ILC) and model predictive control (MPC) is formed. The output of the feedforward ILC is estimated on the basis of the predefined 2D process model. By minimizing a quadratic objective function, the feedback MPC is introduced to obtain better control performance for the tracking problem of batch processes. Simulations on a typical batch reactor demonstrate that satisfactory tracking performance and faster convergence than traditional proportional-type (P-type) ILC can be achieved despite model error and disturbances.
Predictive capabilities, analysis and experiments for Fusion Nuclear Technology, and ITER R&D
Energy Technology Data Exchange (ETDEWEB)
1991-01-01
This report discusses the following topics on ITER research and development: tritium modeling; liquid metal blanket modeling; free surface liquid metal studies; and thermal conductance and thermal control experiments and modeling. (LIP)
Modelling of hybrid scenario: from present-day experiments towards ITER
Litaudon, X.; Voitsekhovitch, I.; Artaud, J. F.; Belo, P.; Bizarro, João P. S.; Casper, T.; Citrin, J.; Fable, E.; Ferreira, J.; Garcia, J.; Garzotti, L.; Giruzzi, G.; Hobirk, J.; Hogeweij, G. M. D.; Imbeaux, F.; Joffrin, E.; Koechl, F.; Liu, F.; Lönnroth, J.; Moreau, D.; Parail, V.; Schneider, M.; Snyder, P. B.; the ASDEX-Upgrade Team; Contributors, JET-EFDA; the EU-ITM ITER Scenario Modelling Group
2013-07-01
The ‘hybrid’ scenario is an attractive operating scenario for ITER since it combines long plasma duration with the reliability of the reference H-mode regime. We review the recent European modelling effort carried out within the Integrated Scenario Modelling group, which aims at (i) understanding the underlying physics of the hybrid regime in ASDEX-Upgrade and JET and (ii) extrapolating it towards ITER. JET and ASDEX-Upgrade hybrid scenarios performed under different experimental conditions have been simulated in an interpretative and predictive way in order to address the current profile dynamics and its link with core confinement, the relative importance of magnetic shear, s, and E × B flow shear on the core turbulence, pedestal stability and the H-L transition. The correlation of the improved confinement with an increased s/q at outer radii observed in JET and ASDEX-Upgrade discharges is consistent with the predictions based on the GLF23 model applied in the simulations of the ion and electron kinetic profiles. Projections to ITER hybrid scenarios have been carried out focusing on optimization of the heating/current drive schemes to reach and ultimately control the desired plasma equilibrium using ITER actuators. Firstly, the access conditions to hybrid-like q-profiles during the current ramp-up phase have been investigated. Secondly, from the interpreted role of the s/q ratio, ITER hybrid scenario flat-top performance has been optimized through tailoring the q-profile shape and pedestal conditions. EPED predictions of pedestal pressure and width have been used as constraints in the interpretative modelling, while the core heat transport is predicted by GLF23. Finally, a model-based approach for real-time control of advanced tokamak scenarios has been applied to the ITER hybrid regime for simultaneous magnetic and kinetic profile control.
Ab initio modeling of small proteins by iterative TASSER simulations
Directory of Open Access Journals (Sweden)
Zhang Yang
2007-05-01
Full Text Available Abstract Background Predicting 3-dimensional protein structures from amino-acid sequences is an important unsolved problem in computational structural biology. The problem becomes relatively easier if close homologous proteins have been solved, as high-resolution models can be built by aligning target sequences to the solved homologous structures. However, for sequences without similar folds in the Protein Data Bank (PDB) library, the models have to be predicted from scratch. Progress in ab initio structure modeling is slow. The aim of this study was to extend the TASSER (threading/assembly/refinement) method to ab initio modeling and to examine systematically its ability to fold small single-domain proteins. Results We developed I-TASSER by iteratively implementing the TASSER method and used it in folding tests on three benchmarks of small proteins. For the first benchmark of 16 small proteins, the I-TASSER models had an average Cα root mean square deviation (RMSD) of 3.8 Å, with 6 of them at still lower Cα-RMSD. For the second benchmark, the average Cα-RMSD of the I-TASSER models was 3.9 Å, whereas it was 5.9 Å using the TOUCHSTONE-II software. Finally, for the third benchmark of 20 non-homologous small proteins, an average Cα-RMSD of 3.9 Å was obtained, with seven cases at still lower Cα-RMSD. Conclusion Our simulation results show that I-TASSER can consistently predict the correct folds and sometimes high-resolution models for small single-domain proteins. Compared with other ab initio modeling methods such as ROSETTA and TOUCHSTONE II, the average performance of I-TASSER is either much better or similar, at a lower computational cost. These data, together with the significant performance of the automated I-TASSER server (the Zhang-Server) in the 'free modeling' section of the recent Critical Assessment of Structure Prediction (CASP7) experiment, demonstrate new progress in automated ab initio model generation. The I-TASSER server is freely available for academic users at http://zhang.bioinformatics.ku.edu/I-TASSER.
Transient thermal hydraulic modeling and analysis of ITER divertor plate system
Energy Technology Data Exchange (ETDEWEB)
El-Morshedy, Salah El-Din [Argonne National Laboratory, Argonne, IL (United States); Atomic Energy Authority, Cairo (Egypt)], E-mail: selmorshedy@etrr2-aea.org.eg; Hassanein, Ahmed [Purdue University, West Lafayette, IN (United States)], E-mail: hassanein@purdue.edu
2009-12-15
A mathematical model has been developed/updated to simulate the steady state and transient thermal-hydraulics of the International Thermonuclear Experimental Reactor (ITER) divertor module. The model predicts the thermal response of the armour coating, divertor plate structural materials and coolant channels. The selected heat transfer correlations cover all operating conditions of ITER under both normal and off-normal situations. The model also accounts for the melting, vaporization, and solidification of the armour material. The developed model provides a quick benchmark of the HEIGHTS multidimensional comprehensive simulation package. The present model divides the coolant channels into specified axial regions and the divertor plate into specified radial zones; a two-dimensional heat conduction calculation then predicts the temperature distribution for both steady and transient states. The model is benchmarked against experimental data obtained at Sandia National Laboratory for both bare and swirl-tape coolant channel mockups. The results show very good agreement with the data for steady and transient states. The model is then used to predict the thermal behavior of the ITER plasma-facing and structural materials during a plasma instability event in which 60 MJ/m{sup 2} of plasma energy is deposited over 500 ms. The results for the ITER divertor response are analyzed and compared with HEIGHTS results.
Surrogate model based iterative ensemble smoother for subsurface flow data assimilation
Chang, Haibin; Liao, Qinzhuo; Zhang, Dongxiao
2017-02-01
Subsurface geological formation properties often involve some degree of uncertainty. Thus, for most conditions, uncertainty quantification and data assimilation are necessary for predicting subsurface flow. The surrogate model based method is one common type of uncertainty quantification method, in which a surrogate model is constructed to approximate the relationship between model output and model input. Given sufficient prediction ability, the constructed surrogate model can then be utilized for data assimilation. In this work, we develop an algorithm for implementing an iterative ensemble smoother (ES) using a surrogate model. We first derive an iterative ES scheme using a regular routine. In order to utilize surrogate models, we then borrow the idea of Chen and Oliver (2013) to modify the Hessian, and further develop an independent-parameter-based iterative ES formula. Finally, we establish the algorithm for the implementation of the iterative ES using surrogate models. Two surrogate models, the PCE surrogate and the interpolation surrogate, are introduced for illustration. The performance of the proposed algorithm is tested on synthetic cases. The results show that satisfactory data assimilation results can be obtained by using surrogate models that have sufficient accuracy.
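A single ensemble-smoother update of the kind such iterative schemes build on can be sketched as follows. This is the standard stochastic Kalman-type formula, not the authors' surrogate-based iterative variant; the callable `g` merely stands in for a (surrogate) forward model, and the function name and arguments are illustrative.

```python
import numpy as np

def es_update(M, g, d_obs, R, seed=0):
    """One ensemble-smoother step.

    M: (n_param, n_ens) parameter ensemble; g: forward model mapping a
    parameter vector to a data vector; d_obs: observations; R: obs covariance.
    """
    rng = np.random.default_rng(seed)
    D = np.column_stack([g(M[:, j]) for j in range(M.shape[1])])
    dm = M - M.mean(axis=1, keepdims=True)       # parameter anomalies
    dd = D - D.mean(axis=1, keepdims=True)       # predicted-data anomalies
    n = M.shape[1] - 1
    Cmd = dm @ dd.T / n                          # param-data covariance
    Cdd = dd @ dd.T / n                          # data-data covariance
    K = Cmd @ np.linalg.inv(Cdd + R)             # Kalman-type gain
    # Perturb observations per member (stochastic EnKF/ES convention).
    d_pert = d_obs[:, None] + rng.multivariate_normal(
        np.zeros(len(d_obs)), R, M.shape[1]).T
    return M + K @ (d_pert - D)
```

In the iterative version described in the abstract, this update is repeated with a damped gain and a surrogate in place of `g`, so the expensive flow simulator is only sampled when the surrogate is (re)built.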
Iterative prediction of chaotic time series using a recurrent neural network
Energy Technology Data Exchange (ETDEWEB)
Essawy, M.A.; Bodruzzaman, M. [Tennessee State Univ., Nashville, TN (United States). Dept. of Electrical and Computer Engineering; Shamsi, A.; Noel, S. [USDOE Morgantown Energy Technology Center, WV (United States)
1996-12-31
Chaotic systems are known for their unpredictability due to their sensitive dependence on initial conditions. When only time series measurements from such systems are available, neural network based models are preferred due to their simplicity, availability, and robustness. However, the type of neural network used should be capable of modeling the highly non-linear behavior and the multi-attractor nature of such systems. In this paper the authors use a special type of recurrent neural network called the ``Dynamic System Imitator (DSI)``, that has been proven to be capable of modeling very complex dynamic behaviors. The DSI is a fully recurrent neural network that is specially designed to model a wide variety of dynamic systems. The prediction method presented in this paper is based upon predicting one step ahead in the time series, and using that predicted value to iteratively predict the following steps. This method was applied to chaotic time series generated from the logistic, Henon, and the cubic equations, in addition to experimental pressure drop time series measured from a Fluidized Bed Reactor (FBR), which is known to exhibit chaotic behavior. The time behavior and state space attractor of the actual and network synthetic chaotic time series were analyzed and compared. The correlation dimension and the Kolmogorov entropy for both the original and network synthetic data were computed. They were found to resemble each other, confirming the success of the DSI based chaotic system modeling.
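The iterative prediction scheme, learning a one-step-ahead map and then feeding each prediction back as the next input, can be illustrated on the logistic map from the abstract. A quadratic least-squares fit stands in here for the recurrent DSI network; this is an illustrative sketch, not the paper's method.

```python
import numpy as np

def logistic_series(n, x0=0.2, r=3.9):
    """Generate a chaotic logistic-map time series x[t+1] = r x[t] (1 - x[t])."""
    x = [x0]
    for _ in range(n - 1):
        x.append(r * x[-1] * (1 - x[-1]))
    return np.array(x)

def fit_one_step(series, degree=2):
    """Fit the one-step map x[t+1] = p(x[t]) by polynomial least squares."""
    return np.polyfit(series[:-1], series[1:], degree)

def iterate_forecast(coeffs, x_last, steps):
    """Multi-step forecast: feed each prediction back as the next input."""
    out = []
    x = x_last
    for _ in range(steps):
        x = np.polyval(coeffs, x)
        out.append(x)
    return out
```

Because the logistic map is exactly quadratic, the fitted model is nearly perfect one step ahead; over many iterated steps the forecast still diverges from the true trajectory, which is precisely the sensitive dependence the abstract discusses.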
Heffernan, Rhys; Paliwal, Kuldip; Lyons, James; Dehzangi, Abdollah; Sharma, Alok; Wang, Jihua; Sattar, Abdul; Yang, Yuedong; Zhou, Yaoqi
2015-01-01
Direct prediction of protein structure from sequence is a challenging problem. An effective approach is to break it up into independent sub-problems. These sub-problems such as prediction of protein secondary structure can then be solved independently. In a previous study, we found that an iterative use of predicted secondary structure and backbone torsion angles can further improve secondary structure and torsion angle prediction. In this study, we expand the iterative features to include solvent accessible surface area and backbone angles and dihedrals based on Cα atoms. By using a deep learning neural network in three iterations, we achieved 82% accuracy for secondary structure prediction, 0.76 for the correlation coefficient between predicted and actual solvent accessible surface area, 19° and 30° for mean absolute errors of backbone φ and ψ angles, respectively, and 8° and 32° for mean absolute errors of Cα-based θ and τ angles, respectively, for an independent test dataset of 1199 proteins. The accuracy of the method is slightly lower for 72 CASP 11 targets but much higher than those of model structures from current state-of-the-art techniques. This suggests the potentially beneficial use of these predicted properties for model assessment and ranking.
DISIS: prediction of drug response through an iterative sure independence screening.
Directory of Open Access Journals (Sweden)
Yun Fang
Full Text Available Prediction of drug response based on genomic alterations is an important task in the research of personalized medicine. The current elastic net model utilizes sure independence screening to select genomic features relevant to drug response, but it may neglect the combined effect of some marginally weak features. In this work, we applied an iterative sure independence screening scheme to select drug-response-relevant features from the Cancer Cell Line Encyclopedia (CCLE) dataset. For each drug in CCLE, we selected up to 40 features including gene expressions, mutations and copy number alterations of cancer-related genes; some of them are significantly strong features despite showing weak marginal correlation with the drug response vector. Lasso regression based on the selected features showed that our prediction accuracies are higher than those of elastic net regression for most drugs.
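The iterative screening idea, ranking features by marginal correlation with the current residual so that marginally weak but jointly useful features can enter in later rounds, can be sketched as follows. This is not the DISIS code; the `per_round` and `rounds` parameters and the plain least-squares refit are illustrative simplifications.

```python
import numpy as np

def iterative_sis(X, y, per_round=2, rounds=3):
    """Iterative sure independence screening (simplified sketch).

    Each round scores the unselected columns of X by |correlation| with the
    current residual, adds the top `per_round`, refits, and recomputes the
    residual so conditionally important features can surface later.
    """
    selected = []
    residual = y - y.mean()
    for _ in range(rounds):
        scores = np.abs(X.T @ residual)     # marginal association with residual
        scores[selected] = -np.inf          # never reselect a feature
        new = np.argsort(scores)[-per_round:]
        selected.extend(int(i) for i in new)
        beta, *_ = np.linalg.lstsq(X[:, selected], y, rcond=None)
        residual = y - X[:, selected] @ beta
    return sorted(selected)
```

In the paper's pipeline a penalized regression (Lasso) on the screened set replaces the plain least-squares refit used here.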
3D modeling and optimization of the ITER ICRH antenna
Louche, F.; Dumortier, P.; Durodié, F.; Messiaen, A.; Maggiora, R.; Milanesio, D.
2011-12-01
The prediction of the coupling properties of the ITER ICRH antenna necessitates the accurate evaluation of the resistance and reactance matrices. The latter are mostly dependent on the geometry of the array, and therefore a model as accurate as possible is needed to precisely compute these matrices. Furthermore, simulations have so far neglected the poloidal and toroidal profile of the plasma, and it is expected that the loading of individual straps will vary significantly due to the varying strap-plasma distance. To take this curvature into account, some modifications of the alignment of the straps with respect to the toroidal direction are proposed. It is shown with CST Microwave Studio® [1] that considering two segments in the toroidal direction, i.e. a "V-shaped" toroidal antenna, is sufficient. A new CATIA model including this segmentation has been drawn and imported into both the MWS and TOPICA [2] codes. Simulations show good agreement of the impedance matrices in vacuum. Various modifications of the geometry are proposed in order to further optimize the coupling. In particular, we study the effect of the strap box parameters and the recess of the vertical septa.
Compressive Imaging with Iterative Forward Models
Liu, Hsiou-Yuan; Liu, Dehong; Mansour, Hassan; Boufounos, Petros T
2016-01-01
We propose a new compressive imaging method for reconstructing 2D or 3D objects from their scattered wave-field measurements. Our method relies on a novel, nonlinear measurement model that can account for the multiple scattering phenomenon, which makes the method preferable in applications where linear measurement models are inaccurate. We construct the measurement model by expanding the scattered wave-field with an accelerated-gradient method, which is guaranteed to converge and is suitable for large-scale problems. We provide explicit formulas for computing the gradient of our measurement model with respect to the unknown image, which enables image formation with a sparsity-driven numerical optimization algorithm. We validate the method both analytically and with numerical simulations.
Iterative integral parameter identification of a respiratory mechanics model
Directory of Open Access Journals (Sweden)
Schranz Christoph
2012-07-01
Full Text Available Abstract Background Patient-specific respiratory mechanics models can support the evaluation of optimal lung protective ventilator settings during ventilation therapy. Clinical application requires that the individual’s model parameter values must be identified with information available at the bedside. Multiple linear regression or gradient-based parameter identification methods are highly sensitive to noise and initial parameter estimates. Thus, they are difficult to apply at the bedside to support therapeutic decisions. Methods An iterative integral parameter identification method is applied to a second order respiratory mechanics model. The method is compared to the commonly used regression methods and error-mapping approaches using simulated and clinical data. The clinical potential of the method was evaluated on data from 13 Acute Respiratory Distress Syndrome (ARDS) patients. Results The iterative integral method converged to error minima 350 times faster than the Simplex Search Method using simulation data sets and 50 times faster using clinical data sets. Established regression methods reported erroneous results due to sensitivity to noise. In contrast, the iterative integral method was effective independent of initial parameter estimations, and converged successfully in each case tested. Conclusion These investigations reveal that the iterative integral method is beneficial with respect to computing time, operator independence and robustness, and thus applicable at the bedside for this clinical application.
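The integral identification idea can be illustrated on the simpler single-compartment lung model P = R·Q + E·V + P0 (the paper itself uses a second-order model and an iterative refinement). Integrating the model equation over time damps measurement noise before a linear least-squares fit; the function below is a hedged sketch, with signal names chosen for illustration.

```python
import numpy as np

def identify_integral(t, P, Q, V):
    """Integral-based fit of P = R*Q + E*V + P0 for resistance R,
    elastance E, and offset pressure P0.

    Both sides are integrated (trapezoid rule), giving the exact linear
    relation  ∫P = R ∫Q + E ∫V + P0 * t,  which is then solved by
    least squares; the integration low-pass filters measurement noise.
    """
    def cumtrap(y):
        return np.concatenate([[0], np.cumsum(np.diff(t) * (y[1:] + y[:-1]) / 2)])
    iP, iQ, iV = cumtrap(P), cumtrap(Q), cumtrap(V)
    A = np.column_stack([iQ, iV, t])
    coeffs, *_ = np.linalg.lstsq(A, iP, rcond=None)
    R, E, P0 = coeffs
    return R, E, P0
```

On noise-free synthetic data the fit is exact; on noisy data the integrated regressors vary much more smoothly than the raw signals, which is the robustness advantage the abstract reports over direct regression.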
Iteration schemes for parallelizing models of superconductivity
Energy Technology Data Exchange (ETDEWEB)
Gray, P.A. [Michigan State Univ., East Lansing, MI (United States)
1996-12-31
The time dependent Lawrence-Doniach model, valid for high fields and high values of the Ginzburg-Landau parameter, is often used for studying vortex dynamics in layered high-T{sub c} superconductors. When solving these equations numerically, the added degrees of complexity due to the coupling and nonlinearity of the model often warrant the use of high-performance computers for their solution. However, the interdependence between the layers can be manipulated so as to allow parallelization of the computations at an individual layer level. The reduced parallel tasks may then be solved independently using a heterogeneous cluster of networked workstations connected together with Parallel Virtual Machine (PVM) software. Here, this parallelization of the model is discussed and several computational implementations of varying degrees of parallelism are presented. Computational results are also given which contrast properties of convergence speed, stability, and consistency of these implementations. Included in these results are models involving the motion of vortices due to an applied current and pinning effects due to various material properties.
Detailed Modeling of Grounding Solutions for the ITER ICRF Antenna
Kyrytsya, V.; Dumortier, P.; Messiaen, A.; Louche, F.; Durodié, F.
2011-12-01
The excitation of non-TEM modes around the ITER ICRF antenna plug can considerably increase the level of RF voltages and currents on the ITER plug. A first study of these modes and a solution to avoid them in the ITER ion cyclotron range of frequencies were reported in [1]. In this work a detailed analysis of the electrical properties of the ITER ICRF antenna with the plug was performed for different grounding solutions with CST Microwave Studio® [2]. Conclusions of the earlier work [1] were confirmed on the detailed model of the antenna with the plug. Different grounding contacts (capacitive, galvanic and mixed capacitive-galvanic) as well as their distribution inside the plug gap were analyzed. It was shown that capacitive and mixed capacitive-galvanic grounding are less effective because they demand high values of capacitance and are more sensitive to the frequency and antenna spectrum. In particular, a galvanic grounding realized by contacts placed around the perimeter of the plug gap 1 m behind the front face of the antenna is the most suitable solution from the electromagnetic point of view. An optimization of the layout and arrangement of the contacts, in order to assess and optimize the current distribution on them, is under way. Measurements on a scaled mock-up of the complete antenna and plug are under way to confirm the modelling results.
CFD predictions of LBO limits for aero-engine combustors using fuel iterative approximation
Institute of Scientific and Technical Information of China (English)
Hu Bin; Huang Yong; Wang Fang; Xie Fa
2013-01-01
Lean blow-out (LBO) is critical to the operational performance of combustion systems in propulsion and power generation. Current predictive tools for LBO limits are based on decades-old empirical correlations that have limited applicability for modern combustor designs. Based on Lefebvre's model for LBO and the classical perfectly stirred reactor (PSR) concept, a load parameter (LP) is proposed in this paper for LBO analysis of aero-engine combustors. The parameters contained in the load parameter are all estimated from the non-reacting flow field of a combustor obtained by numerical simulation. Additionally, based on the load parameter, a method of fuel iterative approximation (FIA) is proposed to predict the LBO limit of the combustor. Compared with experimental data for 19 combustors, it is found that the load parameter represents the actual combustion load of the combustor near LBO and correlates well with the LBO fuel/air ratio (FAR). The LBO FAR obtained by FIA shows good agreement with experimental data; the maximum prediction uncertainty of FIA is about ±17.5%. Because only the non-reacting flow is simulated, the time cost of the LBO limit prediction using FIA is relatively low (about 6 h for one combustor with a 2.66 GHz quad-core CPU and 4 GB of memory), showing that FIA is reliable and efficient for practical applications.
Modeling Results For the ITER Cryogenic Fore Pump. Final Report
Energy Technology Data Exchange (ETDEWEB)
Pfotenhauer, John M. [University of Wisconsin, Madison, WI (United States); Zhang, Dongsheng [University of Wisconsin, Madison, WI (United States)
2014-03-31
A numerical model characterizing the operation of a cryogenic fore-pump (CFP) for ITER has been developed at the University of Wisconsin – Madison during the period from March 15, 2011 through June 30, 2014. The purpose of the ITER-CFP is to separate hydrogen isotopes from helium gas, both making up the exhaust components from the ITER reactor. The model explicitly determines the amount of hydrogen that is captured by the supercritical-helium-cooled pump as a function of the inlet temperature of the supercritical helium, its flow rate, and the inlet conditions of the hydrogen gas flow. Furthermore the model computes the location and amount of hydrogen captured in the pump as a function of time. Throughout the model’s development, and as a calibration check for its results, it has been extensively compared with the measurements of a CFP prototype tested at Oak Ridge National Lab. The results of the model demonstrate that the quantity of captured hydrogen is very sensitive to the inlet temperature of the helium coolant on the outside of the cryopump. Furthermore, the model can be utilized to refine those tests, and suggests methods that could be incorporated in the testing to enhance the usefulness of the measured data.
Modeling Results for the ITER Cryogenic Fore Pump
Zhang, Dongsheng
The work presented here is the analysis and modeling of the ITER-Cryogenic Fore Pump (CFP), also called Cryogenic Viscous Compressor (CVC). Unlike common cryopumps that are usually used to create and maintain vacuum, the cryogenic fore pump is designed for ITER to collect and compress hydrogen isotopes during the regeneration process of the torus cryopumps. Different from common cryopumps, the ITER-CFP works in the viscous flow regime. As a result, both adsorption boundary conditions and transport phenomena contribute unique features to the pump performance. In this report, the physical mechanisms of cryopumping are studied, especially the diffusion-adsorption process and these are coupled with the standard equations of species, momentum and energy balance, as well as the equation of state. Numerical models are developed, which include highly coupled non-linear conservation equations of species, momentum, and energy and equation of state. Thermal and kinetic properties are treated as functions of temperature, pressure, and composition of the gas fluid mixture. To solve such a set of equations, a novel numerical technique, identified as the Group-Member numerical technique is proposed. This document presents three numerical models: a transient model, a steady state model, and a hemisphere (or molecular flow) model. The first two models are developed based on analysis of the raw experimental data while the third model is developed as a preliminary study. The modeling results are compared with available experiment data for verification. The models can be used for cryopump design, and can also benefit problems, such as loss of vacuum in a cryomodule or cryogenic desublimation. The scientific and engineering investigation being done here builds connections between Mechanical Engineering and other disciplines, such as Chemical Engineering, Physics, and Chemistry.
Weld distortion prediction of the ITER Vacuum Vessel using Finite Element simulations
Energy Technology Data Exchange (ETDEWEB)
Caixas, Joan, E-mail: joan.caixas@f4e.europa.eu [F4E, c/ Josep Pla, n.2, Torres Diagonal Litoral, Edificio B3, E-08019 Barcelona (Spain); Guirao, Julio [Numerical Analysis Technologies, S. L., Marqués de San Esteban 52, Entlo, 33209 Gijon (Spain); Bayon, Angel; Jones, Lawrence; Arbogast, Jean François [F4E, c/ Josep Pla, n.2, Torres Diagonal Litoral, Edificio B3, E-08019 Barcelona (Spain); Barbensi, Andrea [Ansaldo Nucleare, Corso F.M. Perrone, 25, I-16152 Genoa (Italy); Dans, Andres [F4E, c/ Josep Pla, n.2, Torres Diagonal Litoral, Edificio B3, E-08019 Barcelona (Spain); Facca, Aldo [Mangiarotti, Pannellia di Sedegliano, I-33039 Sedegliano (UD) (Italy); Fernandez, Elena; Fernández, José [F4E, c/ Josep Pla, n.2, Torres Diagonal Litoral, Edificio B3, E-08019 Barcelona (Spain); Iglesias, Silvia [Numerical Analysis Technologies, S. L., Marqués de San Esteban 52, Entlo, 33209 Gijon (Spain); Jimenez, Marc; Jucker, Philippe; Micó, Gonzalo [F4E, c/ Josep Pla, n.2, Torres Diagonal Litoral, Edificio B3, E-08019 Barcelona (Spain); Ordieres, Javier [Numerical Analysis Technologies, S. L., Marqués de San Esteban 52, Entlo, 33209 Gijon (Spain); Pacheco, Jose Miguel [F4E, c/ Josep Pla, n.2, Torres Diagonal Litoral, Edificio B3, E-08019 Barcelona (Spain); Paoletti, Roberto [Walter Tosto, Via Erasmo Piaggio, 72, I-66100 Chieti Scalo (Italy); Sanguinetti, Gian Paolo [Ansaldo Nucleare, Corso F.M. Perrone, 25, I-16152 Genoa (Italy); Stamos, Vassilis [F4E, c/ Josep Pla, n.2, Torres Diagonal Litoral, Edificio B3, E-08019 Barcelona (Spain); Tacconelli, Massimiliano [Walter Tosto, Via Erasmo Piaggio, 72, I-66100 Chieti Scalo (Italy)
2013-10-15
Highlights: ► Computational simulations of the weld processes can rapidly assess different sequences. ► Prediction of welding distortion to optimize the manufacturing sequence. ► Accurate shape prediction after each manufacturing phase makes it possible to generate modified procedures and pre-compensate distortions. ► The simulation methodology is improved using condensed computation techniques with ANSYS in order to reduce computation resources. ► For each welding process, the models are calibrated with the results of coupons and mock-ups. -- Abstract: The as-welded surfaces of the ITER Vacuum Vessel sectors need to be within a very tight tolerance, without a full-scale prototype. In order to predict welding distortion and optimize the manufacturing sequence, the industrial contract includes extensive computational simulations of the weld processes which can rapidly assess different sequences. The accurate shape prediction, after each manufacturing phase, enables actual distortions to be compared with the welding simulations to generate modified procedures and pre-compensate distortions. While previous mock-ups used heavy welded-on jigs to try to restrain the distortions, this method allows the use of lightweight jigs and yields important cost and rework savings. In order to enable the optimization of different alternative welding sequences, the simulation methodology is improved using condensed computation techniques with ANSYS in order to reduce computational resources. For each welding process, the models are calibrated with the results of coupons and mock-ups. The calibration is used to construct representative models of each segment and sector. This paper describes the application to the construction of the Vacuum Vessel sector of the enhanced simulation methodology with condensed Finite Element computation techniques and results of the calibration on several test pieces for different types of welds.
Modelling of ELM dynamics for DIII-D and ITER
Energy Technology Data Exchange (ETDEWEB)
Pankin, A Y [Lehigh University, 16 Memorial Drive East, Bethlehem, PA 18015 (United States); Bateman, G [Lehigh University, 16 Memorial Drive East, Bethlehem, PA 18015 (United States); Brennan, D P [University of Tulsa, Tulsa, Oklahoma (United States); Kritz, A H [Lehigh University, 16 Memorial Drive East, Bethlehem, PA 18015 (United States); Kruger, S [Tech-X, Boulder, CO 80303 (United States); Snyder, P B [General Atomics, San Diego, CA 92186 (United States); Sovinec, C [University of Wisconsin, Madison, WI 53706 (United States)
2007-07-15
A model for integrated modelling studies of edge localized modes (ELMs) in ITER is discussed in this paper. Stability analyses are carried out for ITER and DIII-D equilibria that are generated with the TEQ and TOQ equilibrium codes. The H-mode pedestal pressure and parallel current density are varied in a systematic way in order to span the relevant parameter space for specific ITER plasma parameters. The ideal MHD stability codes, DCON, ELITE and BALOO, are employed to determine whether or not each ITER equilibrium profile is unstable to peeling or ballooning modes in the pedestal region. Several equilibria that are close to the marginal stability boundary for peeling and ballooning modes are tested with the NIMROD non-ideal MHD code. When the effects of finite resistivity are studied in a series of linear NIMROD computations, it is found that the peeling-ballooning stability threshold is very sensitive to the resistivity and viscosity profiles, which vary dramatically over a wide range near the separatrix. When two-fluid gyro-viscous and Hall effects are included in NIMROD computations, it is found that harmonics with high toroidal mode numbers are stabilized while the growth rates of harmonics with low toroidal mode numbers are only moderately reduced. When flow shear across the H-mode pedestal is included, it is found that linear growth rates are increased, particularly for harmonics with high toroidal mode numbers. In nonlinear NIMROD simulations, ELM crashes produce filaments that extend out to the wall in the absence of flow shear. When flow shear is included, the filaments are dragged by the fluid and sheared off before they extend to the wall.
ITER transient consequences for material damage: modelling versus experiments
Energy Technology Data Exchange (ETDEWEB)
Bazylev, B [Forschungszentrum Karlsruhe, IHM, P O Box 3640, 76021 Karlsruhe (Germany); Janeschitz, G [Forschungszentrum Karlsruhe, Fusion, P O Box 3640, 76021 Karlsruhe (Germany); Landman, I [Forschungszentrum Karlsruhe, IHM, P O Box 3640, 76021 Karlsruhe (Germany); Pestchanyi, S [Forschungszentrum Karlsruhe, IHM, P O Box 3640, 76021 Karlsruhe (Germany); Loarte, A [EFDA Close Support Unit Garching, Boltmannstr 2, D-85748 Garching (Germany); Federici, G [ITER International Team, Garching Working Site, Boltmannstr 2, D-85748 Garching (Germany); Merola, M [ITER International Team, Garching Working Site, Boltmannstr 2, D-85748 Garching (Germany); Linke, J [Forschungszentrum Juelich, EURATOM-Association, D-52425 Juelich (Germany); Zhitlukhin, A [SRC RF TRINITI, Troitsk, 142190, Moscow Region (Russian Federation); Podkovyrov, V [SRC RF TRINITI, Troitsk, 142190, Moscow Region (Russian Federation); Klimov, N [SRC RF TRINITI, Troitsk, 142190, Moscow Region (Russian Federation); Safronov, V [SRC RF TRINITI, Troitsk, 142190, Moscow Region (Russian Federation)
2007-03-15
Carbon-fibre composite (CFC) and tungsten macrobrush armours are foreseen as plasma-facing components (PFCs) for the ITER divertor. In ITER the main mechanisms of metallic armour damage remain surface melting and melt motion erosion. In the case of CFC armour, due to the rather different heat conductivities of the CFC fibres, noticeable erosion of the PAN bundles may occur at rather small heat loads. Experiments carried out in the plasma gun facility QSPA-T for the ITER-like edge localized mode (ELM) heat load also demonstrated significant erosion of the frontal and lateral brush edges. Numerical simulations of the CFC and tungsten (W) macrobrush target damage accounting for the heat loads at the face and lateral brush edges were carried out for QSPA-T conditions using the three-dimensional (3D) code PHEMOBRID. The modelling results of CFC damage are in good qualitative and quantitative agreement with the experiments. Estimation of the droplet splashing caused by the Kelvin-Helmholtz (KH) instability was performed.
Iterative learning control algorithm for spiking behavior of neuron model
Li, Shunan; Li, Donghui; Wang, Jiang; Yu, Haitao
2016-11-01
Controlling neurons to generate a desired or normal spiking behavior is the fundamental building block of the treatment of many neurologic diseases. The objective of this work is to develop a novel control method, a closed-loop proportional integral (PI)-type iterative learning control (ILC) algorithm, to control the spiking behavior in model neurons. In order to verify the feasibility and effectiveness of the proposed method, two single-compartment standard models of different neuronal excitability are specifically considered: the Hodgkin-Huxley (HH) model for class 1 neural excitability and the Morris-Lecar (ML) model for class 2 neural excitability. ILC is particularly well suited to processes that are repetitive in nature. To further highlight the superiority of the proposed method, the performance of the iterative learning controller is compared to that of a classical PI controller. In both the classical PI control and the PI control combined with ILC, appropriate background noise is added to the neuron models to approach the problem under more realistic biophysical conditions. Simulation results show that the controller performance is more favorable when ILC is considered, regardless of the neuron's excitability class and regardless of the firing pattern of the desired trajectory. The error between the real and desired output is much smaller under the ILC control signal, which suggests that ILC of a neuron's spiking behavior is more accurate.
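The iteration-domain learning idea can be sketched in a few lines. The plant below is a toy first-order linear system standing in for a neuron model, and the P-type update, gain, and reference are illustrative assumptions; the paper applies a richer PI-type ILC to Hodgkin-Huxley and Morris-Lecar dynamics:

```python
import numpy as np

# Minimal P-type ILC sketch (assumed toy plant, not a neuron model).
def run_trial(u, a=0.3, b=1.0):
    """One repetition of the process: x[t] = a*x[t-1] + b*u[t], output y = x.
    Note the direct feedthrough: y[t] responds to u[t] immediately."""
    x, y = 0.0, np.empty(len(u))
    for t in range(len(u)):
        x = a * x + b * u[t]
        y[t] = x
    return y

T = 50
ref = np.sin(np.linspace(0.0, 2.0 * np.pi, T))   # desired output trajectory
u = np.zeros(T)
gain = 0.5                                        # ILC learning gain (assumed)
errors = []
for trial in range(30):
    y = run_trial(u)
    e = ref - y
    errors.append(np.max(np.abs(e)))
    u = u + gain * e          # P-type ILC: reuse last trial's error trajectory

print(errors[0], errors[-1])
```

Because the controller replays the same task and reuses the full error signal from the previous repetition, the tracking error contracts from trial to trial, which is exactly the property the abstract exploits for repetitive neural stimulation.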
SOLPS modelling of W arising from repetitive mitigated ELMs in ITER
Energy Technology Data Exchange (ETDEWEB)
Coster, D.P., E-mail: David.Coster@ipp.mpg.de [Max-Planck-Institut für Plasmaphysik, Garching (Germany); Chankin, A.V.; Klingshirn, H.-J.; Dux, R.; Fable, E. [Max-Planck-Institut für Plasmaphysik, Garching (Germany); Bonnin, X. [CNRS – LSPM, Université Paris 13, Sorbonne Paris Cité, 93430 Villetaneuse (France); ITER Organization, Route de Vinon-sur-Verdon, 13067 St Paul Lez Durance Cedex (France); Kukushkin, A.; Loarte, A. [ITER Organization, Route de Vinon-sur-Verdon, 13067 St Paul Lez Durance Cedex (France)
2015-08-15
SOLPS simulations of ELMs are performed for ITER to model the impact of sputtered tungsten on the plasma. Without prompt redeposition, the impact on the core is found to depend on the nature of the ELM event: if it is modelled by a diffusive process, contamination of the core is possible; if it is modelled by a convective process, contamination of the core can be avoided. With prompt redeposition, W contamination of the core seems to be unlikely as a result of the very high prompt redeposition fraction predicted by both simple and more complicated models.
Error Control of Iterative Linear Solvers for Integrated Groundwater Models
Dixon, Matthew; Brush, Charles; Chung, Francis; Dogrul, Emin; Kadir, Tariq
2010-01-01
An open problem that arises when using modern iterative linear solvers, such as the preconditioned conjugate gradient (PCG) method or the Generalized Minimum RESidual (GMRES) method, is how to choose the residual tolerance in the linear solver to be consistent with the tolerance on the solution error. This problem is especially acute for integrated groundwater models, which are implicitly coupled to other models, such as surface water models, and resolve both multiple scales of flow and temporal interaction terms, giving rise to linear systems with variable scaling. This article uses the theory of 'forward error bound estimation' to show how rescaling the linear system affects the correspondence between the residual error in the preconditioned linear system and the solution error. Using examples of linear systems from models developed using the USGS GSFLOW package and the California State Department of Water Resources' Integrated Water Flow Model (IWFM), we observe that this error bound guides the choice of a prac...
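The mismatch between residual tolerance and solution error that motivates the article can be seen in a two-variable toy system (an assumed example, not taken from IWFM or GSFLOW). The forward error bound is rel_error ≤ cond(A) · rel_residual, and row rescaling restores the correspondence between the two quantities:

```python
import numpy as np

# A badly scaled system: the relative residual is tiny while the relative
# solution error is large, because cond(A) is huge.
A = np.diag([1e6, 1e-6])
x_true = np.array([1.0, 1.0])
b = A @ x_true

x_hat = np.array([1.0, 0.5])                 # an inexact "iterative" solution
r = b - A @ x_hat
rel_residual = np.linalg.norm(r) / np.linalg.norm(b)
rel_error = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
bound = np.linalg.cond(A) * rel_residual     # forward error bound

# Row-rescale so that cond(D @ A) = 1: residual and error now agree.
D = np.diag([1e-6, 1e6])
r_scaled = D @ b - D @ A @ x_hat
rel_residual_scaled = np.linalg.norm(r_scaled) / np.linalg.norm(D @ b)

print(rel_residual, rel_error, bound, rel_residual_scaled)
```

On the unscaled system the residual is ~5e-13 while the error is ~0.35, so a residual tolerance chosen naively would stop the solver far too early; after rescaling the two quantities coincide.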
ITER CS Model Coil and CS Insert Test Results
Energy Technology Data Exchange (ETDEWEB)
Martovetsky, N; Michael, P; Minervina, J; Radovinsky, A; Takayasu, M; Thome, R; Ando, T; Isono, T; Kato, T; Nakajima, H; Nishijima, G; Nunoya, Y; Sugimoto, M; Takahashi, Y; Tsuji, H; Bessette, D; Okuno, K; Ricci, M
2000-09-07
The Inner and Outer modules of the Central Solenoid Model Coil (CSMC) were built by US and Japanese home teams in collaboration with European and Russian teams to demonstrate the feasibility of a superconducting Central Solenoid for ITER and other large tokamak reactors. The CSMC mass is about 120 t, OD is about 3.6 m and the stored energy is 640 MJ at 46 kA and peak field of 13 T. Testing of the CSMC and the CS Insert took place at Japan Atomic Energy Research Institute (JAERI) from mid March until mid August 2000. This paper presents the main results of the tests performed.
System models for PET statistical iterative reconstruction: A review.
Iriarte, A; Marabini, R; Matej, S; Sorzano, C O S; Lewitt, R M
2016-03-01
Positron emission tomography (PET) is a nuclear imaging modality that provides in vivo quantitative measurements of the spatial and temporal distribution of compounds labeled with a positron emitting radionuclide. In the last decades, a tremendous effort has been put into the field of mathematical tomographic image reconstruction algorithms that transform the data registered by a PET camera into an image that represents slices through the scanned object. Iterative image reconstruction methods often provide higher quality images than conventional direct analytical methods. Aside from taking into account the statistical nature of the data, the key advantage of iterative reconstruction techniques is their ability to incorporate detailed models of the data acquisition process. This is mainly realized through the use of the so-called system matrix, that defines the mapping from the object space to the measurement space. The quality of the reconstructed images relies to a great extent on the accuracy with which the system matrix is estimated. Unfortunately, an accurate system matrix is often associated with high reconstruction times and huge storage requirements. Many attempts have been made to achieve realistic models without incurring excessive computational costs. As a result, a wide range of alternatives to the calculation of the system matrix exists. In this article we present a review of the different approaches used to address the problem of how to model, calculate and store the system matrix.
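To make the role of the system matrix concrete, here is a minimal MLEM iteration, a standard statistical iterative reconstruction method; the matrix, dimensions, and noise-free data are illustrative assumptions, not taken from the review:

```python
import numpy as np

# Toy MLEM sketch: the system matrix A maps object voxels to detector
# measurements; reconstruction quality hinges on how faithfully A models
# the acquisition, which is the theme of the review.
rng = np.random.default_rng(1)
A = rng.random((40, 16))                  # toy system matrix: 40 bins, 16 voxels
x_true = 10.0 * rng.random(16)
y = A @ x_true                            # noise-free measurements for clarity

x = np.ones(16)                           # MLEM is multiplicative: x stays positive
sens = A.T @ np.ones(40)                  # sensitivity image A^T 1
err0 = np.linalg.norm(x - x_true)
for _ in range(500):
    x *= (A.T @ (y / (A @ x))) / sens     # x_{k+1} = x_k * A^T(y / A x_k) / A^T 1

print(err0, np.linalg.norm(x - x_true))
```

Every iteration performs one forward projection (A @ x) and one back projection (A.T @ ...), which is why the storage and computation cost of the system matrix dominates, motivating the approximation strategies the review surveys.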
Directory of Open Access Journals (Sweden)
Robert Lagerström
2017-07-01
The Multi-Attribute Prediction Language (MAPL), an analysis metamodel for non-functional qualities of system architectures, is introduced. MAPL features automated analysis in five non-functional areas: service cost, service availability, data accuracy, application coupling, and application size. In addition, MAPL explicitly includes utility modeling to make trade-offs between the qualities. The article introduces how each of the five non-functional qualities is modeled and quantitatively analyzed based on the ArchiMate standard for enterprise architecture modeling and the previously published Predictive, Probabilistic Architecture Modeling Framework, building on the well-known UML and OCL formalisms. The main contribution of MAPL lies in the probabilistic use of multi-attribute utility theory for the trade-off analysis of the non-functional properties. Additionally, MAPL proposes novel model-based analyses of several non-functional attributes. We also report how MAPL has been iteratively developed using multiple case studies.
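The trade-off idea behind multi-attribute utility theory can be sketched in its simplest, deterministic additive form; the quality names mirror MAPL's five areas, but the weights, scores, and additive aggregation are illustrative assumptions (MAPL itself uses probabilistic utility modeling):

```python
# Minimal additive multi-attribute utility sketch (assumed, not the MAPL metamodel):
# each architecture candidate gets per-quality utilities in [0, 1], and a
# weighted sum supports the trade-off between competing qualities.
qualities = ["cost", "availability", "accuracy", "coupling", "size"]
weights = {"cost": 0.3, "availability": 0.3, "accuracy": 0.2,
           "coupling": 0.1, "size": 0.1}          # stakeholder weights (assumed)

def utility(scores):
    """Additive multi-attribute utility: sum of weight * per-quality utility."""
    return sum(weights[q] * scores[q] for q in qualities)

cand_a = {"cost": 0.9, "availability": 0.6, "accuracy": 0.8,
          "coupling": 0.5, "size": 0.7}
cand_b = {"cost": 0.5, "availability": 0.9, "accuracy": 0.9,
          "coupling": 0.6, "size": 0.6}

print(utility(cand_a), utility(cand_b))
```

Candidate A wins narrowly under these weights (0.73 vs 0.72); shifting weight from cost to availability would reverse the ranking, which is precisely the kind of stakeholder trade-off the utility layer is meant to expose.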
An Iterative Algorithm to Build Chinese Language Models
Luo, X; Luo, Xiaoqiang; Roukos, Salim
1996-01-01
We present an iterative procedure to build a Chinese language model (LM). We segment Chinese text into words based on a word-based Chinese language model. However, the construction of a Chinese LM itself requires word boundaries. To get out of the chicken-and-egg problem, we propose an iterative procedure that alternates two operations: segmenting text into words and building an LM. Starting with an initial segmented corpus and an LM based upon it, we use a Viterbi-like algorithm to segment another set of data. Then, we build an LM based on the second set and use the resulting LM to re-segment the first corpus. The alternating procedure provides a self-organized way for the segmenter to automatically detect unseen words and correct segmentation errors. Our preliminary experiment shows that the alternating procedure not only improves the accuracy of our segmentation, but also discovers unseen words surprisingly well. The resulting word-based LM has a perplexity of 188 for a general Chinese corpus.
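The alternation can be sketched on a toy corpus. Everything below is an illustrative assumption: Latin letters stand in for Chinese characters, the LM is a plain unigram model rather than the paper's, and the unseen-word penalty is an arbitrary smoothing choice:

```python
import math
from collections import Counter

# Toy sketch of the alternating procedure: Viterbi-segment the text under a
# unigram word model, re-estimate the model from the segmentation, repeat.
def viterbi_segment(text, logp, max_len=4):
    """Most probable segmentation of `text` under word log-probs `logp`."""
    best = [0.0] + [-math.inf] * len(text)   # best score for text[:i]
    back = [0] * (len(text) + 1)             # backpointer: start of last word
    for i in range(1, len(text) + 1):
        for j in range(max(0, i - max_len), i):
            score = best[j] + logp.get(text[j:i], -20.0)  # unseen penalty (assumed)
            if score > best[i]:
                best[i], back[i] = score, j
    words, i = [], len(text)
    while i > 0:
        words.append(text[back[i]:i])
        i = back[i]
    return words[::-1]

corpus = "abcdab" * 10 + "cd" * 5
# Initial "LM": uniform over all substrings of length <= 2 (candidate words).
cands = {corpus[i:i + k] for k in (1, 2) for i in range(len(corpus) - k + 1)}
logp = {w: math.log(1.0 / len(cands)) for w in cands}
for _ in range(3):                           # alternate segmenting and LM building
    counts = Counter(viterbi_segment(corpus, logp))
    total = sum(counts.values())
    logp = {w: math.log(n / total) for w, n in counts.items()}

print(sorted(logp), viterbi_segment(corpus, logp)[:4])
```

Starting from a uniform model over substrings, the alternation quickly concentrates probability mass on the recurring units "ab" and "cd", the toy analogue of discovering unseen words from raw text.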
Iterative learning control of SOFC based on ARX identification model
Institute of Scientific and Technical Information of China (English)
[No author listed]
2007-01-01
This paper presents an application of the iterative learning control (ILC) technique to the voltage control of a solid oxide fuel cell (SOFC) stack. To meet the demands of the control system design, an autoregressive model with exogenous input (ARX) is established. Firstly, by regulating the variation of the hydrogen flow rate proportional to that of the current, the fuel utilization of the SOFC is kept within its admissible range. Then, based on the ARX model, three kinds of ILC controllers, i.e., P-, PI- and PD-type, are designed to keep the voltage at a desired level. Simulation results demonstrate the potential of the ARX model applied to the control of the SOFC, and show the effectiveness of the ILC controllers for the voltage control of the SOFC.
Comparisons of Predicted Plasma Performance in ITER H-mode Plasmas with Various Mixes of External He
Energy Technology Data Exchange (ETDEWEB)
R.V. Budny
2009-03-20
Performance in H-mode DT plasmas in ITER with various choices of heating systems is predicted and compared. Combinations of external heating by Negative Ion Neutral Beam Injection (NNBI), Ion Cyclotron Range of Frequencies (ICRF), and Electron Cyclotron Heating (ECH) are assumed. Scans with a range of physics assumptions about boundary temperatures in the edge pedestal, alpha ash transport, and toroidal momentum transport are used to indicate effects of uncertainties. Time-dependent integrated modeling with the PTRANSP code is used to predict profiles of heating, beam torque, and plasma profiles. The GLF23 model is used to predict temperature profiles. Either GLF23 or the assumption of a constant ratio for χφ/χi is used to predict toroidal rotation profiles driven by the beam torques. Large differences for the core temperatures are predicted with different mixes of the external heating during the density and current ramp-up phase, but the profiles are similar during the flattop phase. With χφ/χi = 0.5, the predicted toroidal rotation is relatively slow and the flow shear implied by the pressure, toroidal rotation, and neoclassical poloidal rotation is not sufficient to cause significant changes in the energy transport or steady state temperature profiles. The GLF23-predicted toroidal rotation is faster by a factor of six, and significant flow shear effects are predicted.
DIVA: an iterative method for building modular integrated models
Hinkel, J.
2005-08-01
Integrated modelling of global environmental change impacts faces the challenge that knowledge from the domains of Natural and Social Science must be integrated. This is complicated by often incompatible terminology and the fact that the interactions between subsystems are usually not fully understood at the start of the project. While a modular modelling approach is necessary to address these challenges, it is not sufficient. The remaining question is how the modelled system should be decomposed into modules. While no generic answer can be given to this question, communication tools can be provided to support the process of modularisation and integration. Along these lines, a method for building modular integrated models was developed within the EU project DINAS-COAST and applied to construct a first model, which assesses the vulnerability of the world's coasts to climate change and sea-level rise. The method focuses on the development of a common language and offers domain experts an intuitive interface to code their knowledge in the form of modules. However, instead of rigorously defining interfaces between the subsystems at the project's beginning, an iterative model development process is defined and tools to facilitate communication and collaboration are provided. This flexible approach has the advantage that increased understanding about subsystem interactions, gained during the project's lifetime, can immediately be reflected in the model.
Modelling Iteration Convergence Condition for Single SAR Image Geocoding
Dong, Yuting; Liao, Minghsheng; Zhang, Lu; Shi, Xuguo
2014-11-01
Single SAR image geocoding determines the ground coordinates of each pixel in the SAR image with the aid of an external DEM. Due to the uncertainty of the elevation of each pixel in the SAR image, an iterative procedure is needed, which suffers from divergence in some difficult areas such as shadowed and severe layover areas. This paper aims at theoretically analysing the convergence conditions, which have not been intensively studied until now. To make the discussion easier, the Range-Doppler (RD) model is simplified and the general surface is reduced to a planar surface. Mathematical deduction is carried out to derive the convergence conditions, and the impact factors for the convergence speed are analysed. The theoretical findings are validated by experiments on both simulated and real surfaces.
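The fixed-point character of the iteration can be seen in a deliberately simplified side-looking geometry. The numbers, the planar DEM, and the reduced (non-Doppler) range equation below are all illustrative assumptions, echoing the paper's planar-surface simplification:

```python
import math

# Toy geocoding loop: for a fixed slant range R and sensor altitude H, the
# ground coordinate x and the terrain height h(x) are coupled. Iterate
#   h_n = DEM(x_n),  x_{n+1} = ground_from_height(h_n)
# which converges when the combined slope factor has magnitude below 1.
R = 1000.0            # slant range (assumed units)
H = 800.0             # sensor altitude

def ground_from_height(h):
    """Ground distance consistent with slant range R at target height h."""
    return math.sqrt(R**2 - (H - h)**2)

def dem(x, slope=0.2):
    """Planar DEM: height grows linearly with ground distance (assumed)."""
    return slope * x

x = ground_from_height(0.0)   # start from zero elevation, as usual
for _ in range(50):
    x = ground_from_height(dem(x))

print(round(x, 3), round(dem(x), 3))
```

Here the contraction factor is roughly slope·(H−h)/x ≈ 0.17, so the loop converges geometrically; on a steep slope facing the sensor the factor can exceed 1 and the same loop diverges, which is the regime the paper's convergence conditions characterise.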
Modelling of radiation impact on ITER Beryllium wall
Landman, I. S.; Janeschitz, G.
2009-04-01
In the ITER H-Mode confinement regime, edge localized modes (ELMs) will perturb the discharge. Plasma lost after each ELM moves along magnetic field lines and impacts the divertor armour, causing plasma contamination by back-propagating eroded carbon or tungsten. These impurities produce an enhanced radiation flux distributed mainly over the beryllium main chamber wall. The simulation of the complicated processes involved is the subject of the integrated tokamak code TOKES, which is currently under development. This work describes the new TOKES model for radiation transport through the confined plasma. Equations for the level populations of the multi-fluid plasma species and the propagation of different kinds of radiation (resonance, recombination and bremsstrahlung photons) are implemented. First simulation results, without accounting for resonance lines, are presented.
Iterative solvers for Navier-Stokes equations: Experiments with turbulence model
Energy Technology Data Exchange (ETDEWEB)
Page, M. [IREQ - Institut de Recherche d`Hydro-Quebec, Varennes (Canada); Garon, A. [Ecole Polytechnique de Montreal (Canada)
1994-12-31
In the framework of developing software for the prediction of flows in hydraulic turbine components, the Reynolds-averaged Navier-Stokes equations coupled with the κ-ω two-equation turbulence model are discretized by the finite element method. Since the resulting matrices are large, sparse and nonsymmetric, strategies based on CG-type iterative methods must be devised. A segregated solution strategy decouples the momentum equation, the κ transport equation and the ω transport equation. These sets of equations must be solved while satisfying constraint equations. Experiments with an orthogonal projection method are presented for the imposition of essential boundary conditions in a weak sense.
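As a minimal example of a CG-type iterative solver, here is plain conjugate gradient for a symmetric positive definite system; this is a sketch under that assumption only, since the nonsymmetric systems in the abstract require CG-type variants such as GMRES or BiCGSTAB:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=500):
    """Plain CG for symmetric positive definite A (minimal sketch)."""
    x = np.zeros_like(b)
    r = b - A @ x                 # initial residual
    p = r.copy()                  # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)     # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p # new A-conjugate direction
        rs = rs_new
    return x

rng = np.random.default_rng(2)
M = rng.random((30, 30))
A = M @ M.T + 30.0 * np.eye(30)   # SPD test matrix (assumed)
b = rng.random(30)
x = conjugate_gradient(A, b)
print(np.linalg.norm(A @ x - b))
```

Each iteration costs one matrix-vector product, which is what makes CG-type methods attractive for the large sparse systems produced by finite element discretizations.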
Institute of Scientific and Technical Information of China (English)
MO Jia-qi; LIN Yi-hua; WANG Hui
2005-01-01
Atmospheric physics involves very complicated natural phenomena, so the basic models of the sea-air oscillator need to be simplified and then solved by approximate methods. The variational iteration method is a simple and valid method. In this paper the coupled system for a sea-air oscillator model of interdecadal climate fluctuations is considered. Firstly, by introducing a set of functions and computing the variations, the Lagrange multipliers are obtained. Then, the generalized expressions of the variational iteration are constructed. Finally, by selecting an appropriate initial iteration for the iteration expressions, successive approximations of the solution for the sea-air oscillator model are obtained.
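For reference, the variational iteration method builds a correction functional of the following standard form (a generic sketch for an equation $Lu + Nu = g(t)$, not the paper's specific oscillator system):

```latex
% Correction functional of the variational iteration method:
u_{n+1}(t) = u_n(t) + \int_0^{t} \lambda(s)\,
    \Big( L u_n(s) + N \tilde{u}_n(s) - g(s) \Big)\, ds ,
```

where $\lambda(s)$ is the Lagrange multiplier, determined by making the functional stationary with respect to variations of $u_n$, and $\tilde{u}_n$ denotes a restricted variation ($\delta \tilde{u}_n = 0$). Once $\lambda$ is identified, successive approximations $u_1, u_2, \dots$ follow from any admissible initial iterate $u_0$, which is the procedure the abstract applies to the coupled oscillator system.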
Approximating Attractors of Boolean Networks by Iterative CTL Model Checking.
Klarner, Hannes; Siebert, Heike
2015-01-01
This paper introduces the notion of approximating asynchronous attractors of Boolean networks by minimal trap spaces. We define three criteria for determining the quality of an approximation: "faithfulness" which requires that the oscillating variables of all attractors in a trap space correspond to their dimensions, "univocality" which requires that there is a unique attractor in each trap space, and "completeness" which requires that there are no attractors outside of a given set of trap spaces. Each is a reachability property for which we give equivalent model checking queries. Whereas faithfulness and univocality can be decided by model checking the corresponding subnetworks, the naive query for completeness must be evaluated on the full state space. Our main result is an alternative approach which is based on the iterative refinement of an initially poor approximation. The algorithm detects so-called autonomous sets in the interaction graph, variables that contain all their regulators, and considers their intersection and extension in order to perform model checking on the smallest possible state spaces. A benchmark, in which we apply the algorithm to 18 published Boolean networks, is given. In each case, the minimal trap spaces are faithful, univocal, and complete, which suggests that they are in general good approximations for the asymptotics of Boolean networks.
An iterative construction of solutions of the TAP equations for the Sherrington-Kirkpatrick model
Bolthausen, Erwin
2012-01-01
We propose an iterative construction of solutions of the Thouless-Anderson-Palmer (TAP) equations for the Sherrington-Kirkpatrick model. The iterative scheme is proved to converge exactly up to the de Almeida-Thouless line. No results on the SK model itself are derived.
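In standard notation (normalisations of the couplings may differ from the paper), the TAP equations and the iterative scheme take the following shape, where the Onsager correction term is evaluated at the previous iterate, which is what makes the scheme analysable:

```latex
% TAP equations for the magnetisations m_i (standard form, assumed):
m_i = \tanh\!\Big( h + \beta \sum_{j} J_{ij}\, m_j - \beta^2 (1 - q)\, m_i \Big),
\qquad q = \frac{1}{N} \sum_i m_i^2 .

% Iterative construction: the Onsager term uses the previous iterate,
m_i^{(k+1)} = \tanh\!\Big( h + \beta \sum_{j} J_{ij}\, m_j^{(k)}
    - \beta^2 \big( 1 - q^{(k)} \big)\, m_i^{(k-1)} \Big).
```

This two-step memory structure is the key difference from naive fixed-point iteration of the TAP equations, which generally fails to converge.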
Predictions of Alpha Heating in ITER L-mode and H-mode Plasmas
Energy Technology Data Exchange (ETDEWEB)
R.V. Budny
2011-01-06
Predictions of alpha heating in L-mode and H-mode DT plasmas in ITER are generated using the PTRANSP code. The baseline toroidal field of 5.3 T, a plasma current ramped to 15 MA and a flat electron density profile ramped to a Greenwald fraction of 0.85 are assumed. Various combinations of external heating by negative ion neutral beam injection, ion cyclotron resonance, and electron cyclotron resonance are assumed to start half-way up the density ramp. The time evolution of plasma temperatures and, for some cases, toroidal rotation is predicted using the GLF23 transport model and assumed boundary parameters. Significant toroidal rotation and flow-shearing rates are predicted by GLF23 even in the L-mode phase with low boundary temperatures, and the alpha heating power is predicted to be significant if the power threshold for the transition to H-mode is higher than the planned total heating power. The alpha heating is predicted to be 8-76 MW in L-mode at full density. External heating mixes with higher beam injection power have higher alpha heating power. Alternatively, if the toroidal rotation is predicted assuming that the ratio of the momentum to thermal ion energy conductivity is 0.5, the flow-shearing rate is predicted to have insignificant effects on the GLF23-predicted temperatures, and the alpha heating is predicted to be 8-20 MW. In H-mode plasmas the alpha heating is predicted to depend sensitively on the assumed pedestal temperatures. Cases with fusion gain greater than 10 are predicted to have alpha heating greater than 80 MW.
PARALLELISATION OF THE MODEL-BASED ITERATIVE RECONSTRUCTION ALGORITHM DIRA.
Örtenberg, A; Magnusson, M; Sandborg, M; Alm Carlsson, G; Malusek, A
2016-06-01
New paradigms for parallel programming have been devised to simplify software development on multi-core processors and many-core graphical processing units (GPU). Despite their obvious benefits, the parallelisation of existing computer programs is not an easy task. In this work, the use of the Open Multiprocessing (OpenMP) and Open Computing Language (OpenCL) frameworks is considered for the parallelisation of the model-based iterative reconstruction algorithm DIRA with the aim to significantly shorten the code's execution time. Selected routines were parallelised using OpenMP and OpenCL libraries; some routines were converted from MATLAB to C and optimised. Parallelisation of the code with the OpenMP was easy and resulted in an overall speedup of 15 on a 16-core computer. Parallelisation with OpenCL was more difficult owing to differences between the central processing unit and GPU architectures. The resulting speedup was substantially lower than the theoretical peak performance of the GPU; the cause was explained.
ITER Side Correction Coil Quench model and analysis
Nicollet, S.; Bessette, D.; Ciazynski, D.; Duchateau, J. L.; Gauthier, F.; Lacroix, B.
2016-12-01
Previous thermohydraulic studies performed for the ITER TF, CS and PF magnet systems have provided important information on the detection and consequences of a quench as a function of the initial conditions (deposited energy, heated length). Even if the temperature margin of the Correction Coils is high, their behavior during a quench should also be studied, since a quench is likely to be triggered by potential anomalies in joints, a ground fault on the instrumentation wires, etc. A model has been developed with the SuperMagnet Code (Bagnasco et al., 2010) for a Side Correction Coil (SCC2) with four pancakes cooled in parallel, each of them represented by a Thea module (with the proper Cable In Conduit Conductor characteristics). All the other coils of the PF cooling loop, which are hydraulically connected in parallel (top/bottom correction coils and six Poloidal Field Coils), are modeled by Flower modules with equivalent hydraulic properties. The model and the analysis results are presented for five quench initiation cases with/without fast discharge: two quenches initiated by a heat input to the innermost turn of one pancake (case 1 and case 2) and two other quenches initiated at the innermost turns of four pancakes (case 3 and case 4). In the 5th case, the quench is initiated at the middle turn of one pancake. The impact on the cooling circuit, e.g. the exceedance of the opening pressure of the quench relief valves, is detailed in case of an undetected quench (i.e. no discharge of the magnet). Particular attention is also paid to a possible secondary quench detection system based on measured thermohydraulic signals (pressure, temperature and/or helium mass flow rate). The maximum cable temperature achieved in case of a fast current discharge (primary detection by voltage) is compared to the design hot spot criterion of 150 K, which includes the contribution of helium and jacket.
Shuman, William P; Chan, Keith T; Busey, Janet M; Mitsumori, Lee M; Choi, Eunice; Koprowicz, Kent M; Kanal, Kalpana M
2014-12-01
To investigate whether reduced radiation dose liver computed tomography (CT) images reconstructed with model-based iterative reconstruction (MBIR) might compromise depiction of clinically relevant findings or might have decreased image quality when compared with clinical standard radiation dose CT images reconstructed with adaptive statistical iterative reconstruction (ASIR). With institutional review board approval, informed consent, and HIPAA compliance, 50 patients (39 men, 11 women) who underwent liver CT were prospectively included. After a portal venous pass with ASIR images, a 60% reduced radiation dose pass was added with MBIR images. One reviewer scored ASIR image quality and marked findings. Two additional independent reviewers noted whether marked findings were present on MBIR images and assigned scores for relative conspicuity, spatial resolution, image noise, and image quality. Liver and aorta Hounsfield units and image noise were measured. Volume CT dose index and size-specific dose estimate (SSDE) were recorded. Qualitative reviewer scores were summarized. Formal statistical inference for signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), volume CT dose index, and SSDE was made (paired t tests), with Bonferroni adjustment. Two independent reviewers identified all 136 ASIR image findings (n = 272) on MBIR images, scoring them as equal or better for conspicuity, spatial resolution, and image noise in 94.1% (256 of 272), 96.7% (263 of 272), and 99.3% (270 of 272), respectively. In 50 image sets, two reviewers
Research on the iterative method for model updating based on the frequency response function
Institute of Scientific and Technical Information of China (English)
Wei-Ming Li; Jia-Zhen Hong
2012-01-01
Model reduction techniques are usually employed in the model updating process. In this paper, a new model updating method named the cross-model cross-frequency response function (CMCF) method is proposed, and a new iterative method associating the model updating method with the model reduction technique is investigated. The new model updating method utilizes the frequency response function to avoid the modal analysis process and does not need to pair or scale the measured and the analytical frequency response functions, which could greatly increase the number of equations and updating parameters. Based on the traditional iterative method, a correction term, related to the errors resulting from replacing the reduction matrix of the experimental model with that of the finite element model, is added in the new iterative method. Comparisons between the traditional iterative method and the proposed iterative method are shown by model updating examples of solar panels; both iterative methods combine the CMCF method with the succession-level approximate reduction technique. Results show the effectiveness of the CMCF method and the proposed iterative method.
Directory of Open Access Journals (Sweden)
Toru Higaki
2017-08-01
Full Text Available This article describes a quantitative evaluation of the visualization of small vessels using several image reconstruction methods in computed tomography. Simulated vessels with diameters of 1–6 mm, made with a 3D printer, were scanned using 320-row detector computed tomography (CT). Hybrid iterative reconstruction (hybrid IR) and model-based iterative reconstruction (MBIR) were performed for the image reconstruction.
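The named algorithms (hybrid IR, MBIR, ASIR) are proprietary vendor implementations; as a generic stand-in for the basic idea of iterative reconstruction, the sketch below uses a Landweber-style update, in which the image estimate is repeatedly corrected by the back-projected residual between measured and predicted data. The system matrix and sizes are toy assumptions, not anything from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16
x_true = np.zeros(n)
x_true[5:10] = 1.0                                # simple 1-D "object"
A = rng.standard_normal((64, n)) / np.sqrt(64)    # toy system matrix (assumed)
b = A @ x_true                                    # noiseless "measurements"

x = np.zeros(n)
lam = 1.0 / np.linalg.norm(A, 2) ** 2             # step size from the spectral norm
for _ in range(1000):
    # correct the estimate with the back-projected data residual
    x = x + lam * A.T @ (b - A @ x)
```

With a well-conditioned full-column-rank system and noiseless data, the iterates converge to the least-squares (here, exact) solution; real CT reconstructions add regularization and noise models on top of this basic loop.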
An iterative stochastic ensemble method for parameter estimation of subsurface flow models
Elsheikh, Ahmed H.
2013-06-01
Parameter estimation for subsurface flow models is an essential step for maximizing the value of numerical simulations for future prediction and the development of effective control strategies. We propose the iterative stochastic ensemble method (ISEM) as a general method for parameter estimation based on stochastic estimation of gradients using an ensemble of directional derivatives. ISEM eliminates the need for adjoint coding and deals with the numerical simulator as a black box. The proposed method employs directional derivatives within a Gauss-Newton iteration. The update equation in ISEM resembles the update step of the ensemble Kalman filter; however, the inverse of the output covariance matrix in ISEM is regularized using standard truncated singular value decomposition or Tikhonov regularization. We also investigate the performance of a set of shrinkage-based covariance estimators within ISEM. The proposed method is successfully applied to several nonlinear parameter estimation problems for subsurface flow models. The efficiency of the proposed algorithm is demonstrated by the small size of the utilized ensembles and in terms of error convergence rates. © 2013 Elsevier Inc.
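The mechanism described above — an ensemble of directional derivatives, recovery of the sensitivity matrix, and a truncated-SVD-regularized Gauss-Newton step — can be sketched on a toy linear forward model. All names, sizes, and the toy simulator are illustrative assumptions, not the authors' code.

```python
import numpy as np

def isem_step(theta, forward, d_obs, n_ens=20, sigma=1e-2, tol=1e-6, rng=None):
    """One Gauss-Newton-style update from an ensemble of directional
    derivatives, regularized by truncated SVD (illustrative sketch)."""
    rng = np.random.default_rng(0) if rng is None else rng
    U = rng.standard_normal((n_ens, theta.size))   # ensemble of directions
    r0 = forward(theta) - d_obs                    # current data residual
    # directional derivatives of the black-box simulator output
    G = np.array([(forward(theta + sigma * u) - d_obs - r0) / sigma for u in U])
    # least-squares recovery of the sensitivity J from G ≈ U @ J.T
    J = np.linalg.lstsq(U, G, rcond=None)[0].T
    # truncated SVD regularizes the Gauss-Newton step
    Us, s, Vt = np.linalg.svd(J, full_matrices=False)
    keep = s > tol * s[0]
    return theta - (Vt[keep].T * (1.0 / s[keep])) @ (Us[:, keep].T @ r0)

# Toy linear "simulator" with a known parameter vector
true_theta = np.array([1.0, -2.0, 0.5])
A = np.array([[3., 1., 0.], [1., 2., 1.], [0., 1., 4.], [1., 0., 1.]])
forward = lambda th: A @ th
d_obs = forward(true_theta)

theta = np.zeros(3)
for _ in range(5):
    theta = isem_step(theta, forward, d_obs)
```

For a linear forward model the regularized step reduces to a pseudo-inverse solve, so the iteration recovers the true parameters essentially in one step; the point of the ensemble construction is that the same loop applies when `forward` is a nonlinear black-box simulator.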
Numerical modeling of 3D halo current path in ITER structures
Energy Technology Data Exchange (ETDEWEB)
Bettini, Paolo; Marconato, Nicolò; Furno Palumbo, Maurizio; Peruzzo, Simone [Consorzio RFX, EURATOM-ENEA Association, C.so Stati Uniti 4, 35127 Padova (Italy); Specogna, Ruben, E-mail: ruben.specogna@uniud.it [DIEGM, Università di Udine, Via delle Scienze, 208, 33100 Udine (Italy); Albanese, Raffaele; Rubinacci, Guglielmo; Ventre, Salvatore; Villone, Fabio [Consorzio CREATE, EURATOM-ENEA Association, Via Claudio 21, 80125 Napoli (Italy)
2013-10-15
Highlights: ► Two numerical codes for the evaluation of halo currents in 3D structures are presented. ► A simplified plasma model is adopted to provide the input (halo current injected into the FW). ► Two representative test cases of ITER symmetric and asymmetric VDEs have been analyzed. ► The proposed approaches provide results in excellent agreement for both cases. -- Abstract: Disruptions represent one of the main concerns for Tokamak operation, especially in view of fusion reactors or experimental test reactors, due to the electro-mechanical loads induced by halo and eddy currents. The development of a predictive tool that estimates the magnitude and spatial distribution of the halo current forces is of paramount importance to ensure robust vessel and in-vessel component design. With this aim, two numerical codes (CARIDDI, CAFE) have been developed, which calculate the halo current path (resistive distribution) in the passive structures surrounding the plasma. The former is based on an integral formulation for the eddy current problem particularized to the static case; the latter implements a pair of 3D FEM complementary formulations for the solution of the steady-state current conduction problem. A simplified plasma model is adopted to provide the inputs (halo current injected into the first wall). Two representative test cases (ITER symmetric and asymmetric VDEs) have been selected to cross-check the results of the proposed approaches.
Directory of Open Access Journals (Sweden)
Raftery Adrian E
2009-02-01
Full Text Available Abstract Background Microarray technology is increasingly used to identify potential biomarkers for cancer prognostics and diagnostics. Previously, we have developed the iterative Bayesian Model Averaging (BMA) algorithm for use in classification. Here, we extend the iterative BMA algorithm for application to survival analysis on high-dimensional microarray data. The main goal in applying survival analysis to microarray data is to determine a highly predictive model of patients' time to event (such as death, relapse, or metastasis) using a small number of selected genes. Our multivariate procedure combines the effectiveness of multiple contending models by calculating the weighted average of their posterior probability distributions. Our results demonstrate that our iterative BMA algorithm for survival analysis achieves high prediction accuracy while consistently selecting a small and cost-effective number of predictor genes. Results We applied the iterative BMA algorithm to two cancer datasets: breast cancer and diffuse large B-cell lymphoma (DLBCL) data. On the breast cancer data, the algorithm selected a total of 15 predictor genes across 84 contending models from the training data. The maximum likelihood estimates of the selected genes and the posterior probabilities of the selected models from the training data were used to divide patients in the test (or validation) dataset into high- and low-risk categories. Using the genes and models determined from the training data, we assigned patients from the test data into highly distinct risk groups (as indicated by a p-value of 7.26e-05 from the log-rank test). Moreover, we achieved comparable results using only the 5 top selected genes with 100% posterior probabilities. On the DLBCL data, our iterative BMA procedure selected a total of 25 genes across 3 contending models from the training data. Once again, we assigned the patients in the validation set to significantly distinct risk groups (p
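The averaging step — combining the predictions of several contending models, weighted by their posterior probabilities — can be sketched as below. The BIC-based approximation to the posterior model probabilities is a common convention and stands in for whatever the authors' implementation uses; all models and numbers here are toy assumptions.

```python
import numpy as np

def bma_predict(models, bics, X):
    """Weighted average of model predictions; weights are approximate
    posterior model probabilities derived from BIC differences."""
    bics = np.asarray(bics, dtype=float)
    w = np.exp(-0.5 * (bics - bics.min()))   # exp(-BIC/2), shifted for stability
    w /= w.sum()                              # normalize to probabilities
    preds = np.array([m(X) for m in models])
    return w @ preds, w

# Three toy "contending models" (constant risk predictors) with made-up BICs
models = [lambda X: np.full(len(X), 0.2),
          lambda X: np.full(len(X), 0.5),
          lambda X: np.full(len(X), 0.9)]
bics = [100.0, 102.0, 110.0]
X = np.zeros((4, 1))
avg, w = bma_predict(models, bics, X)
```

The model with the smallest BIC dominates the average, but the weaker models still contribute, which is exactly the hedging effect that model averaging provides over picking a single best model.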
Goodenberger, Martin H; Wagner-Bartak, Nicolaus A; Gupta, Shiva; Liu, Xinming; Yap, Ramon Q; Sun, Jia; Tamm, Eric P; Jensen, Corey T
2017-08-12
The purpose of this study was to compare abdominopelvic computed tomography images reconstructed with adaptive statistical iterative reconstruction-V (ASIR-V) with model-based iterative reconstruction (Veo 3.0), ASIR, and filtered back projection (FBP). Abdominopelvic computed tomography scans for 36 patients (26 males and 10 females) were reconstructed using FBP, ASIR (80%), Veo 3.0, and ASIR-V (30%, 60%, 90%). Mean ± SD patient age was 32 ± 10 years with mean ± SD body mass index of 26.9 ± 4.4 kg/m². Images were reviewed by 2 independent readers in a blinded, randomized fashion. Hounsfield unit, noise, and contrast-to-noise ratio (CNR) values were calculated for each reconstruction algorithm for further comparison. Phantom evaluation of low-contrast detectability (LCD) and high-contrast resolution was performed. Adaptive statistical iterative reconstruction-V 30%, ASIR-V 60%, and ASIR 80% were generally superior qualitatively compared with ASIR-V 90%, Veo 3.0, and FBP (P < 0.05). ASIR-V 90% showed superior LCD and had the highest CNR in the liver, aorta, and pancreas, measuring 7.32 ± 3.22, 11.60 ± 4.25, and 4.60 ± 2.31, respectively, compared with the next best series of ASIR-V 60% with respective CNR values of 5.54 ± 2.39, 8.78 ± 3.15, and 3.49 ± 1.77 (P < 0.05). ASIR-V 30% and ASIR-V 60% provided the best combination of qualitative and quantitative performance. Adaptive statistical iterative reconstruction 80% was equivalent qualitatively, but demonstrated inferior spatial resolution and LCD.
Thermal Dissipation Modelling and Design of ITER PF Converter Alternating Current Busbar
Guo, Bin; Song, Zhiquan; Fu, Peng; Jiang, Li; Li, Jinchao; Wang, Min; Dong, Lin
2016-10-01
Because the large metallic surrounds are heated by eddy currents generated by the AC current flowing through the AC busbar in the International Thermonuclear Experimental Reactor (ITER) poloidal field (PF) converter system, shielding of the AC busbar is required to keep the temperature rise of the surrounds within the design requirement. Three types of AC busbar, with natural cooling, air cooling, and water cooling structures, have been proposed and investigated in this paper. For each cooling scheme, a 3D finite model based on the proposed structure has been developed to perform electromagnetic and thermal analysis and predict its operating behavior. Comparing the analysis results of the three cooling patterns, water cooling has more advantages than the others and was selected as the thermal dissipation pattern for the AC busbar of the ITER PF converter unit. The approach used to qualify the suitable cooling scheme can serve as a reference for the thermal dissipation design of AC busbars in converter systems. supported by National Natural Science Foundation of China (No. 51407179)
Modeling of the ITER-like wide-angle infrared thermography view of JET.
Aumeunier, M-H; Firdaouss, M; Travère, J-M; Loarer, T; Gauthier, E; Martin, V; Chabaud, D; Humbert, E
2012-10-01
Infrared (IR) thermography systems are mandatory to ensure safe plasma operation in fusion devices. However, IR measurements are much more complicated in a metallic environment because of the spurious contributions of reflected fluxes. This paper presents a fully predictive photonic simulation able to assess accurately the surface temperature measurement obtained with classical IR thermography for a given plasma scenario, taking into account the optical properties of the plasma-facing component (PFC) materials. The simulation has been carried out for the ITER-like wide-angle infrared camera view of JET and compared with experimental data. The consequences and effects of the low emissivity and of the bidirectional reflectivity distribution function used in the model for the metallic PFCs on the contribution of the reflected flux are discussed.
Energy iteration model research of DCM Buck converter with multilevel pulse train technique
Qin, Ming; Li, Xiang
2017-08-01
Since the essence of a switching converter is energy transfer, the energy iteration model of the multilevel pulse train (MPT) technique is studied in this paper. The energy iteration model of a DCM Buck converter with the MPT technique can reflect the control law and the excellent transient performance of the MPT technique. The iteration relation of energy transfer in a switching converter is discussed. The structure and operation principle of the DCM Buck converter with the MPT technique are introduced, and the energy iteration model of this converter is set up. The energy tracks of the MPT-controlled Buck converter and the PT converter are studied and compared, showing that the ratio of steady-state control pulses satisfies the expectation for the MPT technique and that the MPT-controlled switching converter has much lower output voltage ripple than the PT converter.
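The energy-iteration viewpoint can be illustrated with a toy two-level pulse-train map in the spirit of the PT/MPT family (all numbers are assumed; this is not the paper's converter model): each switching cycle adds the energy of the selected pulse and subtracts the load's draw, and the control law picks the pulse type from the sign of the energy error.

```python
# Toy energy-iteration map for a pulse-train-controlled converter.
# Units are arbitrary; pulse and load energies are illustrative.

E_ref = 1.0                      # target stored energy
E_power, E_sense = 0.12, 0.04    # energy injected per pulse type (assumed)
E_load = 0.08                    # energy drawn by the load each cycle

E = 0.5                          # initial stored energy
power_pulses = 0
N = 200
for _ in range(N):
    # pulse-train control law: high-energy pulse when below the target
    pulse = E_power if E < E_ref else E_sense
    power_pulses += pulse == E_power
    E = E + pulse - E_load       # energy iteration map, one per cycle
```

In steady state the average injected energy must equal the load draw, so with these numbers roughly half the pulses are power pulses; this fixed pulse ratio is the "expectation" that the abstract refers to, and the bounded hop between the two pulse energies is what keeps the ripple small.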
Ehret, Phillip J; Monroe, Brian M; Read, Stephen J
2015-05-01
We present a neural network implementation of central components of the iterative reprocessing (IR) model. The IR model argues that the evaluation of social stimuli (attitudes, stereotypes) is the result of the IR of stimuli in a hierarchy of neural systems: The evaluation of social stimuli develops and changes over processing. The network has a multilevel, bidirectional feedback evaluation system that integrates initial perceptual processing and later developing semantic processing. The network processes stimuli (e.g., an individual's appearance) over repeated iterations, with increasingly higher levels of semantic processing over time. As a result, the network's evaluations of stimuli evolve. We discuss the implications of the network for a number of different issues involved in attitudes and social evaluation. The success of the network supports the IR model framework and provides new insights into attitude theory.
Institute of Scientific and Technical Information of China (English)
Zhu Dong; Liu Cheng; Xu Zhengyang; Liu Jia
2016-01-01
Electrochemical machining (ECM) is an effective and economical manufacturing method for machining hard-to-cut metal materials that are often used in the aerospace field. Cathode design is very complicated in ECM and is a core problem influencing machining accuracy, especially for complex profiles such as compressor blades in aero engines. A new cathode design method based on iterative correction of predicted profile errors in blade ECM is proposed in this paper. A mathematical model is first built according to the ECM shaping law, and a simulation is then carried out using ANSYS software. A dynamic forming process is obtained and machining gap distributions at different stages are analyzed. Additionally, the simulation deviation between the prediction profile and model is improved by the new method through correcting the initial cathode profile. Furthermore, validation experiments are conducted using cathodes designed before and after the simulation correction. Machining accuracy for the optimal cathode is improved markedly compared with that for the initial cathode. The experimental results illustrate the suitability of the new method and that it can also be applied to other complex engine components such as diffusers.
Toroidal modeling of plasma response to RMP fields in ITER
Li, L.; Liu, Y. Q.; Wang, N.; Kirk, A.; Koslowski, H. R.; Liang, Y.; Loarte, A.; Ryan, D.; Zhong, F. C.
2017-04-01
A systematic numerical study is carried out, computing the resistive plasma response to the resonant magnetic perturbation (RMP) fields for ITER plasmas, utilizing the toroidal code MARS-F (Liu et al 2000 Phys. Plasmas 7 3681). A number of factors are taken into account, including the variation of the plasma scenarios (from the 15 MA Q = 10 inductive scenario to the 9 MA Q = 5 steady state scenario), the variation of the toroidal spectrum of the applied fields (n = 1, 2, 3, 4, with n being the toroidal mode number), the amplitude and phase variation of the currents in the three rows of RMP coils as designed for ITER, and finally a special case of mixed toroidal spectrum between the n = 3 and n = 4 RMP fields. Two-dimensional parameter scans, for the edge safety factor and the coil phasing between the upper and lower rows of coils, yield 'optimal' curves that maximize a set of figures of merit defined in this work to measure the plasma response. Other two-dimensional scans of the relative coil current phasing among the three rows of coils, at fixed coil current amplitude, reveal a single optimum for each coil configuration with a given n number, for the 15 MA ITER inductive plasma. On the other hand, scanning the coil current amplitude, at fixed coil phasing, shows either a synergy or a cancellation effect for the field contributions between the off-middle rows and the middle row of the RMP coils. Finally, the mixed toroidal spectrum, combining the n = 3 and n = 4 RMP fields, results in a substantial local reduction of the amplitude of the plasma surface displacement.
Levy, R.; Mcginness, H.
1976-01-01
Investigations were performed to predict the power available from the wind at the Goldstone, California, antenna site complex. The background for power prediction was derived from a statistical evaluation of available wind speed data records at this location and at nearby locations similarly situated within the Mojave desert. In addition to a model for power prediction over relatively long periods of time, an interim simulation model that produces sample wind speeds is described. The interim model furnishes uncorrelated sample speeds at hourly intervals that reproduce the statistical wind distribution at Goldstone. A stochastic simulation model to provide speed samples representative of both the statistical speed distributions and correlations is also discussed.
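The stochastic simulation model described above (hourly samples reproducing both the speed distribution and its correlations) can be sketched with a first-order autoregressive process; the mean, standard deviation, and lag-1 correlation below are illustrative placeholders, not the Goldstone statistics.

```python
import numpy as np

def simulate_wind(n, mean=8.0, std=3.0, rho=0.7, seed=1):
    """Hourly wind-speed samples from an AR(1) process with a target
    mean, standard deviation, and hour-to-hour correlation rho
    (all parameter values here are assumptions for illustration)."""
    rng = np.random.default_rng(seed)
    z = np.empty(n)
    z[0] = rng.standard_normal()
    for t in range(1, n):
        # AR(1): correlation rho between successive hourly samples,
        # innovation scaled to keep the process at unit variance
        z[t] = rho * z[t - 1] + np.sqrt(1.0 - rho**2) * rng.standard_normal()
    v = mean + std * z
    return np.clip(v, 0.0, None)   # wind speed cannot be negative

v = simulate_wind(50_000)
```

Setting `rho = 0` recovers the interim model of the abstract (uncorrelated hourly samples with the right distribution); a positive `rho` adds the hour-to-hour persistence that the full stochastic model is meant to capture.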
Institute of Scientific and Technical Information of China (English)
孙孝前; 尤进红
2003-01-01
In this paper we consider the estimation problem for a semiparametric regression model when the data are longitudinal. An iterative weighted partial spline least squares estimator (IWPSLSE) for the parametric component is proposed, which is more efficient, in the sense of asymptotic variance, than the weighted partial spline least squares estimator (WPSLSE) with weights constructed using the within-group partial spline least squares residuals. The asymptotic normality of this IWPSLSE is established. An adaptive procedure is presented which ensures that the iterative process stops after a finite number of iterations and produces an estimator asymptotically equivalent to the best estimator obtainable by the iterative procedure. These results generalize those for the heteroscedastic linear model to the case of semiparametric regression.
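The general mechanism behind such iterative weighted estimators — estimate, form weights from within-group residual variances, re-estimate, and stop after finitely many steps — can be sketched for a plain linear model (the paper's partial-spline setting is more involved, and this toy data-generating process is an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_groups, per_group = 30, 20
n = n_groups * per_group
X = rng.standard_normal((n, 2))
beta_true = np.array([2.0, -1.0])
group = np.repeat(np.arange(n_groups), per_group)
sigma_g = rng.uniform(0.5, 3.0, n_groups)          # heteroscedastic group noise
y = X @ beta_true + sigma_g[group] * rng.standard_normal(n)

beta = np.linalg.lstsq(X, y, rcond=None)[0]        # unweighted first step
for _ in range(20):
    resid = y - X @ beta
    # weights from within-group residual variances
    var_g = np.array([resid[group == g].var() for g in range(n_groups)])
    w = 1.0 / var_g[group]
    Xw = X * w[:, None]
    beta_new = np.linalg.solve(X.T @ Xw, Xw.T @ y)  # weighted normal equations
    if np.linalg.norm(beta_new - beta) < 1e-10:     # stop after finitely many steps
        beta = beta_new
        break
    beta = beta_new
```

The re-weighting uses the previous iterate's residuals, and the early-stopping test mirrors the paper's point that a finite number of iterations already yields an estimator asymptotically as good as the fully iterated one.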
When your words count: a discriminative model to predict approval of referrals
Directory of Open Access Journals (Sweden)
Adol Esquivel
2009-12-01
Conclusions Three iterations of the model correctly predicted at least 75% of the approved referrals in the validation set. A correct prediction of whether or not a referral will be approved can be made in three out of four cases.
An Iterative Construction of Solutions of the TAP Equations for the Sherrington-Kirkpatrick Model
Bolthausen, Erwin
2014-01-01
We propose an iterative scheme for the solutions of the TAP equations in the Sherrington-Kirkpatrick model which is shown to converge up to and including the de Almeida-Thouless line. The main tool is a representation of the iterations which reveals an interesting structure. This representation does not depend on the temperature parameter, but for temperatures below the de Almeida-Thouless line it contains a part which does not converge to zero in the limit.
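For orientation, the iteration has the following shape when reconstructed from the standard form of the TAP equations; this is a sketch of the usual presentation, and the paper should be consulted for the precise scheme and conditions.

```latex
% g is the Gaussian interaction matrix, h an external field, Z a standard
% Gaussian, and tanh acts componentwise; note the Onsager correction uses
% the second-to-last iterate.
\[
  m^{[k+1]} \;=\; \tanh\!\Big( h\,\mathbf{1} \;+\; \beta\, g\, m^{[k]}
      \;-\; \beta^{2}\,(1-q)\, m^{[k-1]} \Big),
  \qquad
  q \;=\; \mathbb{E}\,\tanh^{2}\!\big( h + \beta\sqrt{q}\, Z \big).
\]
```

Plugging the second-to-last iterate into the Onsager term is the structural point the abstract alludes to, and convergence is expected in the high-temperature region bounded by the de Almeida-Thouless condition, usually written as \(\beta^{2}\,\mathbb{E}\,\cosh^{-4}\!\big(h + \beta\sqrt{q}\,Z\big) \le 1\).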
Abawajy, Jemal; Kelarev, Andrei; Chowdhury, Morshed U; Jelinek, Herbert F
2016-01-01
Blood biochemistry attributes form an important class of tests, routinely collected several times per year for many patients with diabetes. The objective of this study is to investigate the role of blood biochemistry in improving the predictive accuracy of the diagnosis of cardiac autonomic neuropathy (CAN) progression. Blood biochemistry contributes to CAN, and so it is a causative factor that can provide additional power for the diagnosis of CAN, especially in the absence of a complete set of Ewing tests. We introduce automated iterative multitier ensembles (AIME) and investigate their performance in comparison to base classifiers and standard ensemble classifiers for blood biochemistry attributes. AIME incorporate diverse ensembles into several tiers simultaneously and combine them into one automatically generated integrated system, so that one ensemble acts as an integral part of another ensemble. We carried out extensive experimental analysis using large datasets from the diabetes screening research initiative (DiScRi) project. The results of our experiments show that several blood biochemistry attributes can be used to supplement the Ewing battery for the detection of CAN in situations where one or more of the Ewing tests cannot be completed because of the individual difficulties faced by each patient in performing the tests. The results show that AIME provide higher accuracy as a multitier CAN classification paradigm. The best predictive accuracy of 99.57% was obtained by the AIME combining Decorate on the top tier with bagging on the middle tier, based on random forest. Practitioners can use these findings to increase the accuracy of CAN diagnosis.
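The multitier idea — one ensemble acting as a member of another ensemble — can be sketched structurally with a hypothetical miniature: bagged ensembles of a trivial threshold base learner on the bottom tier, combined by a vote on the top tier. The real AIME system uses Decorate, bagging, and random forest; everything below is an illustrative stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_stump(X, y):
    """Trivial base learner: best single-feature threshold classifier."""
    best, best_acc = (0, 0.0, 1), -1.0
    for j in range(X.shape[1]):
        for thr in np.quantile(X[:, j], [0.25, 0.5, 0.75]):
            for sign in (1, -1):
                acc = ((sign * (X[:, j] - thr) > 0).astype(int) == y).mean()
                if acc > best_acc:
                    best_acc, best = acc, (j, thr, sign)
    j, thr, sign = best
    return lambda X: (sign * (X[:, j] - thr) > 0).astype(int)

def bagged_ensemble(X, y, n_members=7):
    """Bottom tier: majority vote over bootstrap-trained base learners."""
    members = []
    for _ in range(n_members):
        idx = rng.integers(0, len(X), len(X))      # bootstrap sample
        members.append(train_stump(X[idx], y[idx]))
    return lambda X: (np.mean([m(X) for m in members], axis=0) > 0.5).astype(int)

def two_tier(X, y, n_ensembles=5):
    """Top tier: majority vote over whole bottom-tier ensembles."""
    tier1 = [bagged_ensemble(X, y) for _ in range(n_ensembles)]
    return lambda X: (np.mean([e(X) for e in tier1], axis=0) > 0.5).astype(int)

# Separable toy data standing in for blood-biochemistry attributes
X = rng.standard_normal((200, 3))
y = (X[:, 1] > 0.1).astype(int)
clf = two_tier(X, y)
acc = (clf(X) == y).mean()
```

The point of the sketch is only the nesting: the top tier never sees base learners directly, just the votes of the bottom-tier ensembles, which is the "one ensemble as an integral part of another" structure the abstract describes.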
An, Hyunuk; Ichikawa, Yutaka; Tachikawa, Yasuto; Shiiba, Michiharu
2012-11-01
Three different iteration methods for a three-dimensional coordinate-transformed saturated-unsaturated flow model are compared in this study. The Picard and Newton iteration methods are the common approaches for solving Richards' equation. The Picard method is simple to implement and cost-efficient (on an individual iteration basis); however, it converges more slowly than the Newton method. On the other hand, although the Newton method converges faster, it is more complex to implement and consumes more CPU resources per iteration than the Picard method. The comparison of the two methods in finite-element models (FEM) for saturated-unsaturated flow has been well evaluated in previous studies. However, the two iteration methods might exhibit different behavior in the coordinate-transformed finite-difference model (FDM). In addition, the Newton-Krylov method could be a suitable alternative for the coordinate-transformed FDM, because the Newton method there requires the evaluation of a 19-point stencil matrix, and the formation of a 19-point stencil is quite a complex and laborious procedure. Instead, the Newton-Krylov method works with the matrix-vector product, which can be easily approximated by calculating differences of the original nonlinear function. In this respect, the Newton-Krylov method might be the most appropriate iteration method for the coordinate-transformed FDM, although it involves the additional cost of taking such an approximation at each Krylov iteration. In this paper, we evaluated the efficiency and robustness of the three iteration methods (Picard, Newton, and Newton-Krylov) for simulating saturated-unsaturated flow through porous media using a three-dimensional coordinate-transformed FDM.
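The three iteration styles can be contrasted on a toy state-dependent linear system (not the paper's coordinate-transformed flow model; the system and all numbers are assumptions chosen so the comparison is visible in a few lines):

```python
import numpy as np

# Small nonlinear system F(u) = A(u) u - b = 0, with a state-dependent
# matrix mimicking the nonlinearity of Richards' equation.
A = lambda u: np.diag(1.0 + u**2) + 0.1 * np.ones((2, 2))
b = np.array([1.0, 2.0])
F = lambda u: A(u) @ u - b

# Picard: lag the nonlinearity and solve the linearized system repeatedly.
u = np.zeros(2)
for _ in range(300):
    u_new = np.linalg.solve(A(u), b)
    if np.linalg.norm(u_new - u) < 1e-10:
        u = u_new
        break
    u = u_new
u_picard = u

# Newton: form the full Jacobian (finite differences here; analytic or a
# 19-point stencil in a coordinate-transformed FDM) and solve J s = -F.
def jac(F, u, eps=1e-7):
    f0 = F(u)
    J = np.empty((u.size, u.size))
    for j in range(u.size):
        du = np.zeros(u.size)
        du[j] = eps
        J[:, j] = (F(u + du) - f0) / eps
    return J

u = np.zeros(2)
for _ in range(50):
    step = np.linalg.solve(jac(F, u), -F(u))
    u = u + step
    if np.linalg.norm(step) < 1e-12:
        break
u_newton = u

# Newton-Krylov idea: never form J at all; a Krylov solver only needs
# J @ v, which costs one extra residual evaluation per Krylov iteration.
Jv = lambda u, v, eps=1e-7: (F(u + eps * v) - F(u)) / eps
```

Both fixed-point loops reach the same root; the matrix-free `Jv` product is exactly the approximation the abstract describes, which is what lets a Newton-Krylov solver skip the laborious 19-point stencil assembly at the price of one residual evaluation per Krylov iteration.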
Heller, R.; Bauer, P.; Savoldi, L.; Zanino, R.; Zappatore, A.
2016-12-01
We present an analysis of the prototype high-temperature superconducting (HTS) current leads (CLs) for the ITER correction coils, which will operate at 10 kA. A copper heat exchanger (HX) of the meander-flow type is included in the CL design and covers the temperature range between room temperature and 65 K, whereas the HTS module, where Bi-2223 stacked tapes are positioned on the outer surface of a stainless steel hollow cylindrical support, covers the temperature range between 65 K and 4.5 K. The HX is cooled by gaseous helium entering at 50 K, whereas the HTS module is cooled by conduction from the cold end of the CL. We use the CURLEAD code, developed some years ago and now supplemented by a new set of correlations for the helium friction factor and heat transfer coefficient in the HX, recently derived using Computational Fluid Dynamics. Our analysis is aimed first of all at a "blind" design-like prediction of the CL performance, for both steady state and pulsed operation. In particular, the helium mass flow rate needed to guarantee the target temperature at the HX-HTS interface, the temperature profile, and the pressure drop across the HX will be computed. The predictive capabilities of the CURLEAD model are then assessed by comparison of the simulation results with experimental data obtained in the test of the prototype correction coil CLs at ASIPP, whose results were considered only after the simulations were performed.
Directory of Open Access Journals (Sweden)
Tatjana Braun
2015-12-01
Full Text Available Recent work has shown that the accuracy of ab initio structure prediction can be significantly improved by integrating evolutionary information in form of intra-protein residue-residue contacts. Following this seminal result, much effort is put into the improvement of contact predictions. However, there is also a substantial need to develop structure prediction protocols tailored to the type of restraints gained by contact predictions. Here, we present a structure prediction protocol that combines evolutionary information with the resolution-adapted structural recombination approach of Rosetta, called RASREC. Compared to the classic Rosetta ab initio protocol, RASREC achieves improved sampling, better convergence and higher robustness against incorrect distance restraints, making it the ideal sampling strategy for the stated problem. To demonstrate the accuracy of our protocol, we tested the approach on a diverse set of 28 globular proteins. Our method is able to converge for 26 out of the 28 targets and improves the average TM-score of the entire benchmark set from 0.55 to 0.72 when compared to the top ranked models obtained by the EVFold web server using identical contact predictions. Using a smaller benchmark, we furthermore show that the prediction accuracy of our method is only slightly reduced when the contact prediction accuracy is comparatively low. This observation is of special interest for protein sequences that only have a limited number of homologs.
Modelling and shielding analysis of the neutral beam injector ports in ITER
Energy Technology Data Exchange (ETDEWEB)
Pereslavtsev, P., E-mail: pavel.pereslavtsev@kit.edu [Karlsruhe Institute of Technology (KIT), Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany); Fischer, U. [Karlsruhe Institute of Technology (KIT), Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany); Loughlin, M. [ITER Organization, Route de Vinon-sur-Verdon, CS 90 046, 13067 St. Paul Lez Durance Cedex (France); Lu, Lei [Karlsruhe Institute of Technology (KIT), Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany); Polunovskiy, E. [ITER Organization, Route de Vinon-sur-Verdon, CS 90 046, 13067 St. Paul Lez Durance Cedex (France); Vielhaber, S. [Karlsruhe Institute of Technology (KIT), Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany)
2015-10-15
Highlights: • The engineering CAD models of the NBI ports were simplified on the CATIA platform. • CAD to MCNP model conversion was done making use of the McCad converting tool. • The new NBI port model was integrated into an 80° A-lite ITER torus sector model. • The nuclear responses important for the safety issues were assessed. - Abstract: A new MCNP geometry model of the ITER Neutral Beam Injection (NBI) ports was developed starting from the latest engineering CAD models provided by ITER. The model includes 3 heating (HNBI) ports and one diagnostic port (DNBI), and extends up to the bio-shield. The engineering CAD models were simplified on the CATIA platform according to the neutronic requirements and then converted into MCNP geometry making use of the McCad conversion tool. Finally, the new NBI port model was integrated into an available 80° A-lite ITER torus sector model. The nuclear analysis performed on the basis of this model provides the following nuclear responses: the neutron flux distribution in all NBI ports; the nuclear heating distribution in all NBI ducts; the nuclear heating and radiation loads to the TFC magnets; the radiation damage and gas production in the VV; and the distribution of the shutdown dose rate inside the cryostat.
Experiment and Modeling of ITER Demonstration Discharges in the DIII-D Tokamak
Energy Technology Data Exchange (ETDEWEB)
Park, Jin Myung [ORNL; Doyle, E. J. [University of California, Los Angeles; Ferron, J.R. [General Atomics, San Diego; Holcomb, C T [Lawrence Livermore National Laboratory (LLNL); Jackson, G. L. [General Atomics; Lao, L. L. [General Atomics; Luce, T.C. [General Atomics, San Diego; Owen, Larry W [ORNL; Murakami, Masanori [ORNL; Osborne, T. H. [General Atomics; Politzer, P. A. [General Atomics, San Diego; Prater, R. [General Atomics; Snyder, P. B. [General Atomics
2011-01-01
DIII-D is providing experimental evaluation of 4 leading ITER operational scenarios: the baseline scenario in ELMing H-mode, the advanced inductive scenario, the hybrid scenario, and the steady state scenario. The anticipated ITER shape, aspect ratio and value of I/aB were reproduced, with the size reduced by a factor of 3.7, while matching key performance targets for β_N and H_98. Since 2008, substantial experimental progress was made to improve the match to other expected ITER parameters for the baseline scenario. A lower density baseline discharge was developed with improved stationarity and density control to match the expected ITER edge pedestal collisionality (ν*_e ≈ 0.1). Target values for β_N and H_98 were maintained at lower collisionality (lower density) operation without loss in fusion performance but with significant change in ELM characteristics. The effects of lower plasma rotation were investigated by adding counter-neutral beam power, resulting in only a modest reduction in confinement. Robust preemptive stabilization of 2/1 NTMs was demonstrated for the first time using ECCD under ITER-like conditions. Data from these experiments were used extensively to test and develop theory and modeling for realistic ITER projection and for further development of its optimum scenarios in DIII-D. Theory-based modeling of core transport (TGLF) with an edge pedestal boundary condition provided by the EPED1 model reproduces T_e and T_i profiles reasonably well for the 4 ITER scenarios developed in DIII-D. Modeling of the baseline scenario for low and high rotation discharges indicates that a modest performance increase of ≈15% is needed to compensate for the expected lower rotation of ITER. Modeling of the steady-state scenario reproduces a strong dependence of confinement, stability, and noninductive fraction (f_NI) on q_95, as found in the experimental I_p scan, indicating that
Cestari, Andrea
2013-01-01
Predictive modeling is emerging as an important knowledge-based technology in healthcare. The interest in predictive modeling reflects advances on several fronts: the availability of health information from increasingly complex databases and electronic health records; a better understanding of causal or statistical predictors of health, disease processes and multifactorial models of ill-health; and developments in nonlinear computer models using artificial intelligence or neural networks. These new computer-based forms of modeling are increasingly able to establish technical credibility in clinical contexts. The current state of knowledge is still quite young in understanding the likely future direction of how this so-called 'machine intelligence' will evolve, and therefore how current, relatively sophisticated predictive models will evolve in response to improvements in technology, which is advancing along a wide front. Predictive models in urology are gaining popularity not only for academic and scientific purposes but also in clinical practice, with the introduction of several nomograms dealing with the main fields of onco-urology.
Nonconvex Model Predictive Control for Commercial Refrigeration
DEFF Research Database (Denmark)
Hovgaard, Tobias Gybel; Larsen, Lars F.S.; Jørgensen, John Bagterp
2013-01-01
is to minimize the total energy cost, using real-time electricity prices, while obeying temperature constraints on the zones. We propose a variation on model predictive control to achieve this goal. When the right variables are used, the dynamics of the system are linear, and the constraints are convex. The cost … the iterations, which is more than fast enough to run in real-time. We demonstrate our method on a realistic model, with a full year simulation and 15 minute time periods, using historical electricity prices and weather data, as well as random variations in thermal load. These simulations show substantial cost…
Comment on an application of the asymptotic iteration method to a perturbed Coulomb model
Energy Technology Data Exchange (ETDEWEB)
Amore, Paolo [Facultad de Ciencias, Universidad de Colima, Bernal Díaz del Castillo 340, Colima (Mexico); Fernandez, Francisco M [INIFTA (Conicet, UNLP), Blvd. 113 y 64 S/N, Sucursal 4, Casilla de Correo 16, 1900 La Plata (Argentina)
2006-08-18
We discuss a recent application of the asymptotic iteration method (AIM) to a perturbed Coulomb model. Contrary to what was argued before, we show that the AIM converges and yields accurate energies for that model. We also consider alternative perturbation approaches and show that one of them is equivalent to that recently proposed by another author.
An Iterative Needs Assessment/Evaluation Model for a Japanese University English-Language Program
Brown, Kathleen A.
2009-01-01
The focus of this study is the development and implementation of the Iterative Needs Assessment/Evaluation Model for use as part of an English curriculum reform project at a four-year university in Japan. Three questions were addressed in this study: (a) what model components were necessary for use in a Japanese university setting; (b) what survey…
Mitchell, William
1992-01-01
This paper, dating from May 1991, contains preliminary (and unpublishable) notes on investigations about iteration trees. They will be of interest only to the specialist. In the first two sections I define notions of support and embeddings for tree iterations, proving for example that every tree iteration is a direct limit of finite tree iterations. This is a generalization to models with extenders of basic ideas of iterated ultrapowers using only ultrapowers. In the final section (which is m...
BSIRT: a block-iterative SIRT parallel algorithm using curvilinear projection model.
Zhang, Fa; Zhang, Jingrong; Lawrence, Albert; Ren, Fei; Wang, Xuan; Liu, Zhiyong; Wan, Xiaohua
2015-03-01
Large-field high-resolution electron tomography enables visualizing detailed mechanisms under global structure. As the field enlarges, distortions in the reconstruction and the processing time become more critical. Using the curvilinear projection model can improve the quality of large-field ET reconstruction, but its computational complexity further exacerbates the processing time. Moreover, there has been no parallel strategy on GPU for an iterative reconstruction method with curvilinear projection. Here we propose a new block-iterative SIRT parallel algorithm with the curvilinear projection model (BSIRT) for large-field ET reconstruction, to improve the quality of reconstruction and accelerate the reconstruction process. We also develop several key techniques, including a block-iterative method with the curvilinear projection, a scope-based data decomposition method and a page-based data transfer scheme, to implement the parallelization of BSIRT on the GPU platform. Experimental results show that BSIRT can improve the reconstruction quality as well as the speed of the reconstruction process.
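The plain SIRT update at the core of such methods can be sketched in a few lines of pure Python. This is a straight-ray toy system; none of BSIRT's curvilinear projection model, block iteration, or GPU parallelism is represented, and the matrix, data, and iteration count are illustrative assumptions.

```python
def sirt(A, b, n_iter=200):
    """Iterate x <- x + C A^T R (b - A x), with R and C the inverse
    row-sum and column-sum diagonal matrices."""
    m, n = len(A), len(A[0])
    row_sum = [sum(A[i]) or 1.0 for i in range(m)]
    col_sum = [sum(A[i][j] for i in range(m)) or 1.0 for j in range(n)]
    x = [0.0] * n
    for _ in range(n_iter):
        r = [(b[i] - sum(A[i][j] * x[j] for j in range(n))) / row_sum[i]
             for i in range(m)]                       # weighted residual
        for j in range(n):                            # back-projection
            x[j] += sum(A[i][j] * r[i] for i in range(m)) / col_sum[j]
    return x

# Toy 2-pixel object seen by three rays, with consistent projections of (2, 3)
x = sirt([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]], [2.0, 3.0, 5.0])
print(x)    # approaches [2.0, 3.0]
```

For consistent data the iterates contract geometrically toward the solution; block-iterative variants such as BSIRT apply the same update to subsets of rays.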
Pipeline Processing with an Iterative, Context-Based Detection Model
2016-01-22
Subject terms: aftershock sequences, repeating explosions, detection framework, pattern detectors, correlation detectors, subspace detectors.
MODEL PREDICTIVE CONTROL FUNDAMENTALS
African Journals Online (AJOL)
2012-07-02
In this paper, we will present an introduction to the theory and application of MPC with Matlab codes written to … model predictive control, linear systems, discrete-time systems, … and then compute very rapidly for this open-loop con…
Variational iteration solving method for El Nino phenomenon atmospheric physics of nonlinear model
Institute of Scientific and Technical Information of China (English)
[Anonymous]
2005-01-01
A class of El Niño atmospheric physics oscillation models is considered. The El Niño atmospheric physics oscillation is an abnormal phenomenon arising in tropical Pacific ocean-atmosphere interactions. The conceptual oscillator model should consider the variations of both the eastern and western Pacific anomaly patterns. An El Niño atmospheric physics model is proposed using the variational iteration theory. Using the variational iteration method, approximate expansions of the solution of the corresponding problem are constructed: first, a set of functionals is introduced and their variations are computed, from which the Lagrange multipliers are determined; the variational iteration is then defined, and finally the approximate solution is obtained. From the approximate expansions of the solution, the zonal sea surface temperature anomaly in the equatorial eastern Pacific and the thermocline depth anomaly of the sea-air oscillation for the El Niño atmospheric physics model can be analyzed. El Niño is a very complicated natural phenomenon, so the basic models of the sea-air oscillator need to be reduced before they can be solved. The variational iteration method is a simple and valid approximation method.
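The mechanics of the variational iteration method can be sketched on a toy problem, u'(t) + u(t) = 0 with u(0) = 1 (not the El Niño sea-air oscillator; the equation and Lagrange multiplier λ(s) = −1 are illustrative assumptions). The correction functional is u_{n+1}(t) = u_n(t) − ∫₀ᵗ (u_n'(s) + u_n(s)) ds, applied here with polynomials stored as coefficient lists.

```python
# Polynomials are coefficient lists [a0, a1, ...] meaning a0 + a1*t + ...

def deriv(p):
    return [k * p[k] for k in range(1, len(p))] or [0.0]

def integ(p):
    # antiderivative vanishing at t = 0
    return [0.0] + [c / (k + 1) for k, c in enumerate(p)]

def vim_step(u):
    pad = [0.0] * len(u)
    residual = [a + b for a, b in zip(deriv(u) + pad, u + pad)]
    corr = integ(residual)                 # int_0^t (u' + u) ds
    n = max(len(u), len(corr))
    u = u + [0.0] * (n - len(u))
    corr = corr + [0.0] * (n - len(corr))
    return [ui - ci for ui, ci in zip(u, corr)]

u = [1.0]                  # initial guess u_0(t) = 1
for _ in range(5):
    u = vim_step(u)
print(u[:4])               # Taylor coefficients of exp(-t): 1, -1, 1/2, -1/6
```

Each iteration adds the next Taylor term of the exact solution e^{−t}, mirroring how the method builds approximate expansions term by term.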
Directory of Open Access Journals (Sweden)
Yi Xu
2013-01-01
We propose a fourth-order total bounded variation regularization model which can effectively reduce undesirable effects. Based on this model, we introduce an improved split Bregman iteration algorithm to obtain the optimal solution. The convergence property of our algorithm is provided. Numerical experiments show the superior visual quality of the proposed model compared with the second-order total bounded variation model proposed by Liu and Huang (2010).
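The structure of a split Bregman iteration (quadratic u-solve, shrinkage, Bregman update) can be sketched for the simpler second-order 1-D TV model, min_u ½‖u − f‖² + λ‖Du‖₁. This does not reproduce the paper's fourth-order regularizer; the parameters and test signal are assumptions.

```python
def thomas(sub, diag, sup, rhs):
    """Solve a tridiagonal system by the Thomas algorithm."""
    n = len(diag)
    b, d = diag[:], rhs[:]
    for i in range(1, n):
        w = sub[i - 1] / b[i - 1]
        b[i] -= w * sup[i - 1]
        d[i] -= w * d[i - 1]
    x = [0.0] * n
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (d[i] - sup[i] * x[i + 1]) / b[i]
    return x

def shrink(v, t):
    return max(abs(v) - t, 0.0) * (1.0 if v > 0 else -1.0)

def tv_denoise(f, lam=1.0, mu=2.0, n_iter=50):
    n = len(f)
    d = [0.0] * (n - 1)
    b = [0.0] * (n - 1)
    sub = [-mu] * (n - 1)
    sup = [-mu] * (n - 1)
    diag = [1 + mu] + [1 + 2 * mu] * (n - 2) + [1 + mu]   # I + mu*D^T D
    u = f[:]
    for _ in range(n_iter):
        v = [di - bi for di, bi in zip(d, b)]
        # rhs = f + mu * D^T (d - b), with (D^T v)_i = v_{i-1} - v_i
        rhs = [f[i] + mu * ((v[i - 1] if i > 0 else 0.0)
                            - (v[i] if i < n - 1 else 0.0)) for i in range(n)]
        u = thomas(sub, diag, sup, rhs)                   # quadratic u-solve
        Du = [u[i + 1] - u[i] for i in range(n - 1)]
        d = [shrink(Du[i] + b[i], lam / mu) for i in range(n - 1)]  # shrinkage
        b = [b[i] + Du[i] - d[i] for i in range(n - 1)]   # Bregman update
    return u

f = [0.0, 0.1, -0.1, 0.0, 1.1, 0.9, 1.0, 1.05]   # noisy step
print(tv_denoise(f))                              # smoother, lower-TV signal
```

A fourth-order model replaces D with a higher-order difference operator, which changes the banded system in the u-solve but not the overall splitting.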
Institute of Scientific and Technical Information of China (English)
Ge-mai Chen; Jin-hong You
2005-01-01
Consider a repeated measurement partially linear regression model with an unknown vector parameter β. Based on the semiparametric generalized least squares estimator (SGLSE) of β, we propose an iterative weighted semiparametric least squares estimator (IWSLSE) and show that it improves upon the SGLSE in terms of the asymptotic covariance matrix. An adaptive procedure is given to determine the number of iterations. We also show that when the number of replicates is less than or equal to two, the IWSLSE cannot improve upon the SGLSE. These results generalize those in [2] to the case of semiparametric regressions.
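The idea behind an iterated weighted least squares estimator can be sketched for a scalar parametric regression: re-estimate per-replicate error variances, reweight, and refit. The semiparametric component of the paper is omitted, and the noise levels and data below are simulated assumptions.

```python
import random

def iwls(groups, n_iter=5):
    """groups: list of replicate groups, each a list of (x, y) pairs."""
    w = [1.0] * len(groups)               # start unweighted (plain LSE)
    beta = 0.0
    for _ in range(n_iter):
        num = sum(wg * x * y for wg, g in zip(w, groups) for x, y in g)
        den = sum(wg * x * x for wg, g in zip(w, groups) for x, y in g)
        beta = num / den
        # new weight = 1 / estimated error variance of the group
        w = [len(g) / max(sum((y - beta * x) ** 2 for x, y in g), 1e-12)
             for g in groups]
    return beta

rng = random.Random(42)
groups = [[(x, 1.5 * x + rng.gauss(0.0, sigma))
           for x in [rng.uniform(-3.0, 3.0) for _ in range(40)]]
          for sigma in (0.1, 0.5, 2.0)]   # replicates with unequal noise
beta = iwls(groups)
print(beta)    # close to the true slope 1.5
```

Reweighting lets the low-noise replicates dominate, which is the source of the covariance improvement over the unweighted fit.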
An Overview of Recent Advances in the Iterative Analysis of Coupled Models for Wave Propagation
Directory of Open Access Journals (Sweden)
D. Soares
2014-01-01
Wave propagation problems can be solved using a variety of methods. However, in many cases, the joint use of different numerical procedures to model different parts of the problem may be advisable, and strategies to perform the coupling between them must be developed. Many works have been published on this subject, addressing the case of electromagnetic, acoustic, or elastic waves and making use of different strategies to perform this coupling. Both direct and iterative approaches can be used, and they may exhibit specific advantages and disadvantages. This work focuses on the use of iterative coupling schemes for the analysis of wave propagation problems, presenting an overview of the application of iterative procedures to perform the coupling between different methods. Both frequency- and time-domain analyses are addressed, and problems involving acoustic, mechanical, and electromagnetic wave propagation are illustrated.
Hybrid and Model-Based Iterative Reconstruction Techniques for Pediatric CT
den Harder, Annemarie M.; Willemink, Martin J.; Budde, Ricardo P. J.; Schilham, Arnold M. R.; Leiner, Tim; de Jong, Pim A.
2015-01-01
OBJECTIVE. Radiation exposure from CT examinations should be reduced to a minimum in children. Iterative reconstruction (IR) is a method to reduce image noise that can be used to improve CT image quality, thereby allowing radiation dose reduction. This article reviews the use of hybrid and model-based IR techniques in pediatric CT.
Solving large test-day models by iteration on data and preconditioned conjugate gradient.
Lidauer, M; Strandén, I; Mäntysaari, E A; Pösö, J; Kettunen, A
1999-12-01
A preconditioned conjugate gradient method was implemented into an iteration-on-data program for the estimation of breeding values, and its convergence characteristics were studied. An algorithm was used as a reference in which one fixed effect was solved by the Gauss-Seidel method, and other effects were solved by a second-order Jacobi method. Implementation of the preconditioned conjugate gradient required storing four vectors (size equal to the number of unknowns in the mixed model equations) in random access memory and reading the data at each round of iteration. The preconditioner comprised diagonal blocks of the coefficient matrix. Comparison of the algorithms was based on solutions of mixed model equations obtained by a single-trait animal model and a single-trait, random regression test-day model. Data sets for both models used milk yield records of primiparous Finnish dairy cows. The animal model data comprised 665,629 lactation milk yields, and the random regression test-day model data 6,732,765 test-day milk yields. Both models included pedigree information on 1,099,622 animals. The animal model [random regression test-day model] required 122 [305] rounds of iteration to converge with the reference algorithm, but only 88 [149] were required with the preconditioned conjugate gradient. Solving the random regression test-day model with the preconditioned conjugate gradient required 237 megabytes of random access memory and took 14% of the computation time needed by the reference algorithm.
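The preconditioned conjugate gradient loop itself is short; the sketch below uses a plain Jacobi (inverse-diagonal) preconditioner and a tiny dense SPD system rather than the block-diagonal preconditioner and huge data-resident mixed model equations of the paper.

```python
def pcg(A, b, m_inv, tol=1e-10, max_iter=100):
    """Solve A x = b; m_inv holds the diagonal of the inverse preconditioner."""
    n = len(b)
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    matvec = lambda v: [dot(row, v) for row in A]
    x = [0.0] * n
    r = b[:]                                  # residual for x = 0
    z = [m * ri for m, ri in zip(m_inv, r)]   # preconditioned residual
    p = z[:]
    rz = dot(r, z)
    for it in range(1, max_iter + 1):
        Ap = matvec(p)
        alpha = rz / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        if dot(r, r) ** 0.5 < tol:
            break
        z = [m * ri for m, ri in zip(m_inv, r)]
        rz, rz_old = dot(r, z), rz
        p = [zi + (rz / rz_old) * pi for zi, pi in zip(z, p)]
    return x, it

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x, iters = pcg(A, b, [0.25, 1.0 / 3.0, 0.5])   # Jacobi: 1 / diag(A)
print(x, iters)
```

In an iteration-on-data setting the `matvec` is the expensive step, formed by streaming the records each round; only the four work vectors stay in memory, matching the storage profile described in the abstract.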
A Modified Model Predictive Control Scheme
Institute of Scientific and Technical Information of China (English)
Xiao-Bing Hu; Wen-Hua Chen
2005-01-01
In implementations of MPC (Model Predictive Control) schemes, two issues need to be addressed. One is how to enlarge the stability region as much as possible. The other is how to guarantee stability when a computational time limitation exists. In this paper, a modified MPC scheme for constrained linear systems is described. An offline LMI-based iteration process is introduced to expand the stability region. At the same time, a database of feasible control sequences is generated offline so that stability can still be guaranteed in the case of computational time limitations. Simulation results illustrate the effectiveness of this new approach.
Energy Technology Data Exchange (ETDEWEB)
Chatzidakis, Stylianos [ORNL; Jarrell, Joshua J [ORNL; Scaglione, John M [ORNL
2017-01-01
The inspection of the dry storage canisters that house spent nuclear fuel is an important issue facing the nuclear industry; currently, there are limited options available to provide for even minimal inspections. An issue of concern is stress corrosion cracking (SCC) in austenitic stainless steel canisters. SCC is difficult to predict and exhibits small crack opening displacements on the order of 15-30 μm. Nondestructive examination (NDE) of such microscopic cracks is especially challenging, and it may be possible to miss SCC during inspections. The coarse grain microstructure at the heat-affected zone reduces the achievable sensitivity of conventional ultrasound techniques. At Oak Ridge National Laboratory, a tomographic approach is under development to improve SCC detection using ultrasound guided waves and model-based iterative reconstruction (MBIR). Ultrasound-guided waves propagate parallel to the physical boundaries of the surface and allow for rapid inspection of a large area from a single probe location. MBIR is a novel, effective probabilistic imaging tool that offers higher precision and better image quality than current reconstruction techniques. This paper analyzes the canister environment, stainless steel microstructure, and SCC characteristics. The end goal is to demonstrate the feasibility of an NDE system based on ultrasonic guided waves and MBIR for detecting canister degradation and to produce radar-like images of the canister surface with significantly improved image quality. The proposed methodology can potentially reduce human radiation exposure, result in lower operational costs, and provide a means to verify canister integrity in situ during extended storage.
An iterative statistical tolerance analysis procedure to deal with linearized behavior models
Institute of Scientific and Technical Information of China (English)
Antoine DUMAS; Jean-Yves DANTAN; Nicolas GAYTON; Thomas BLES; Robin LOEBL
2015-01-01
Tolerance analysis consists of analyzing the impact of variations on the mechanism behavior due to the manufacturing process. The goal is to predict the quality level of the mechanism at the design stage. The technique involves computing probabilities of failure of the mechanism in a mass production process. The analysis methods have to treat the component variations as random variables and consider the worst configuration of gaps for over-constrained systems. This treatment depends on the type of mechanism behavior and is realized by an optimization scheme combined with a Monte Carlo simulation. To simplify the optimization step, it is necessary to linearize the mechanism behavior into several parts. This study aims at analyzing the impact of the linearization strategy on the estimated probability of failure; a highly over-constrained mechanism with two pins and five cotters is used as an illustration. The purpose is to strike a balance among the model error caused by the linearization, the computing time, and the accuracy of the results. In addition, an iterative procedure is proposed for the assembly requirement to provide accurate results without using the entire Monte Carlo simulation.
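The Monte Carlo step on a linearized behavior model reduces to sampling component deviations and counting violations of the assembly requirement. The gap function g(x) = g0 + Σ aᵢxᵢ, its coefficients, and the standard deviations below are all hypothetical, and the worst-gap optimization for over-constrained mechanisms is omitted.

```python
import random

def failure_probability(g0, a, sigma, n_samples=200_000, seed=1):
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_samples):
        x = [rng.gauss(0.0, s) for s in sigma]           # component deviations
        g = g0 + sum(ai * xi for ai, xi in zip(a, x))    # linearized gap
        if g < 0.0:                                      # requirement violated
            failures += 1
    return failures / n_samples

p = failure_probability(g0=0.30, a=[1.0, -1.0, 0.5], sigma=[0.08, 0.08, 0.10])
print(p)   # about 0.008 for these assumed values
```

Because the model is linear in Gaussian inputs, this estimate can be cross-checked analytically, which is exactly the kind of comparison used to quantify linearization error against the full simulation.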
Development of acoustic model-based iterative reconstruction technique for thick-concrete imaging
Almansouri, Hani; Clayton, Dwight; Kisner, Roger; Polsky, Yarom; Bouman, Charles; Santos-Villalobos, Hector
2016-02-01
Ultrasound signals have been used extensively for non-destructive evaluation (NDE). However, typical reconstruction techniques, such as the synthetic aperture focusing technique (SAFT), are limited to quasi-homogenous thin media. New ultrasonic systems and reconstruction algorithms are needed for one-sided NDE of non-homogenous thick objects. One example application space is imaging of reinforced concrete structures for commercial nuclear power plants (NPPs). These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Another example is geothermal and oil/gas production wells. These multi-layered structures are composed of steel, cement, and several types of soil and rocks. Ultrasound systems with greater penetration range and image quality will allow for better monitoring of the well's health and prediction of high-pressure hydraulic fracturing of the rock. These application challenges need to be addressed with an integrated imaging approach, in which the application, hardware, and reconstruction software are highly integrated and optimized. Therefore, we are developing an ultrasonic system with Model-Based Iterative Reconstruction (MBIR) as the image reconstruction backbone. As the first implementation of MBIR for ultrasonic signals, this paper documents the algorithm and shows reconstruction results for synthetically generated data.
Development of Acoustic Model-Based Iterative Reconstruction Technique for Thick-Concrete Imaging
Energy Technology Data Exchange (ETDEWEB)
Almansouri, Hani [Purdue University; Clayton, Dwight A [ORNL; Kisner, Roger A [ORNL; Polsky, Yarom [ORNL; Bouman, Charlie [Purdue University; Santos-Villalobos, Hector J [ORNL
2015-01-01
Ultrasound signals have been used extensively for non-destructive evaluation (NDE). However, typical reconstruction techniques, such as the synthetic aperture focusing technique (SAFT), are limited to quasi-homogenous thin media. New ultrasonic systems and reconstruction algorithms are needed for one-sided NDE of non-homogenous thick objects. One example application space is imaging of reinforced concrete structures for commercial nuclear power plants (NPPs). These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Another example is geothermal and oil/gas production wells. These multi-layered structures are composed of steel, cement, and several types of soil and rocks. Ultrasound systems with greater penetration range and image quality will allow for better monitoring of the well's health and prediction of high-pressure hydraulic fracturing of the rock. These application challenges need to be addressed with an integrated imaging approach, in which the application, hardware, and reconstruction software are highly integrated and optimized. Therefore, we are developing an ultrasonic system with Model-Based Iterative Reconstruction (MBIR) as the image reconstruction backbone. As the first implementation of MBIR for ultrasonic signals, this paper documents the algorithm and shows reconstruction results for synthetically generated data.
Feed-back control of 2/1 locked mode phase: experiment on DIII-D and modeling for ITER
Choi, W.; Olofsson, K. E. J.; Sweeney, R.; Volpe, F. A.
2016-10-01
A model has been developed for ITER to predict the dynamics of saturated m/n = 2/1 tearing modes subject to various torques. The modes, with finite moment of inertia, are modeled as surface currents interacting with error fields, applied magnetic perturbations generated by internal and external non-axisymmetric coils, the vacuum vessel, and the first wall. Using this model, a feedback controller has been designed to control the phase of locked modes. As predicted by simulation, experimental results on DIII-D show a simple fixed-gain controller can impose a desired constant phase or entrain the mode at a desired constant frequency (e.g. 20 Hz). For a given current in the control coils, a maximum entrainment frequency exists and depends on the island width. The performance of such a controller in ITER is simulated here. The controller is expected to be useful in assisting island suppression with electron cyclotron current drive, as well as in preventing large amplitude locked modes and possible disruption. This work was supported in part by the US Department of Energy under DE-SC0008520.
Sánchez, Benjamín J; Pérez-Correa, José R; Agosin, Eduardo
2014-09-01
Dynamic flux balance analysis (dFBA) has been widely employed in metabolic engineering to predict the effect of genetic modifications and environmental conditions in the cell's metabolism during dynamic cultures. However, the importance of the model parameters used in these methodologies has not been properly addressed. Here, we present a novel and simple procedure to identify dFBA parameters that are relevant for model calibration. The procedure uses metaheuristic optimization and pre/post-regression diagnostics, fixing iteratively the model parameters that do not have a significant role. We evaluated this protocol in a Saccharomyces cerevisiae dFBA framework calibrated for aerobic fed-batch and anaerobic batch cultivations. The model structures achieved have only significant, sensitive and uncorrelated parameters and are able to calibrate different experimental data. We show that consumption, suboptimal growth and production rates are more useful for calibrating dynamic S. cerevisiae metabolic models than Boolean gene expression rules, biomass requirements and ATP maintenance.
Real-Time Optimization for Economic Model Predictive Control
DEFF Research Database (Denmark)
Sokoler, Leo Emil; Edlund, Kristian; Frison, Gianluca
2012-01-01
In this paper, we develop an efficient homogeneous and self-dual interior-point method for the linear programs arising in economic model predictive control. To exploit structure in the optimization problems, the algorithm employs a highly specialized Riccati iteration procedure. Simulations show…
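The structured computation such solvers exploit can be illustrated with the classical backward Riccati recursion for a scalar LQR problem. This is the quadratic-cost recursion, not the paper's LP variant or its interior-point method, and all numbers are toy assumptions.

```python
def riccati_gains(a, b, q, r, qf, N):
    """Return stage feedback gains K[0..N-1] for x' = a x + b u."""
    P = qf
    gains = []
    for _ in range(N):
        k = (b * P * a) / (r + b * P * b)   # K_t = (R + B'PB)^-1 B'PA
        P = q + a * P * (a - b * k)         # P_t = Q + A'P(A - BK)
        gains.append(k)
    gains.reverse()                          # gains[t] applies at stage t
    return gains

K = riccati_gains(a=1.2, b=1.0, q=1.0, r=0.1, qf=1.0, N=30)
x = 1.0                                      # open loop is unstable (a > 1)
for k in K:
    x = (1.2 - 1.0 * k) * x                  # closed loop x' = (a - bK) x
print(abs(x))                                # driven essentially to zero
```

The recursion costs O(N) in the horizon length, which is why Riccati-based factorizations are the workhorse inside structured MPC solvers.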
Candidate Prediction Models and Methods
DEFF Research Database (Denmark)
Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik
2005-01-01
This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines the possibilities w.r.t. different numerical weather predictions actually available to the project.
Testing and modeling of diffusion bonded prototype optical windows under ITER conditions
Energy Technology Data Exchange (ETDEWEB)
Jacobs, M. [Flemish Inst. for Technological Research, Mol (Belgium); Van Oost, G. [Dept. of Applied Physics, Ghent Univ., Ghent (Belgium); Degrieck, J.; De Baere, I. [Dept. of Materials Science and Engineering, Ghent Univ., Ghent (Belgium); Gusarov, A. [Belgian Nuclear Research Center, Mol (Belgium); Gubbels, F. [TNO, Eindhoven (Netherlands); Massaut, V. [Belgian Nuclear Research Center, Mol (Belgium)
2011-07-01
Glass-metal joints are a part of ITER optical diagnostic windows. These joints must be leak tight for safety (presence of tritium in ITER) and to preserve the vacuum. They must also withstand the ITER environment: temperatures up to 220 °C and fast neutron fluxes of ≈3×10⁹ n/cm²·s. At the moment, little information is available about glass-metal joints suitable for ITER. Therefore, we performed mechanical and thermal tests on prototypes of an aluminium diffusion-bonded optical window. Finite element modeling with the Abaqus code was used to understand the experimental results. The prototypes leaked helium, probably due to very tiny cracks in the interaction layer between the steel and the aluminium. However, they were all able to withstand a thermal cycling test up to 200 °C; no damage could be seen after the tests by visual inspection. The prototypes successfully passed a push-out test with a 500 N load. During the destructive push-out tests the prototypes broke at a 6-12 kN load between the aluminium layer and the steel or the glass, depending on the surface quality of the glass. A microanalysis of the joints has also been performed. The finite element modeling of the push-out tests is in reasonable agreement with the experiments. According to the model, the highest thermal stress is created in the aluminium layer. Thus, the aluminium joint seems to be the weakest part of the prototypes. If this layer is improved, it will probably make the prototype helium leak tight and, as such, a good ITER window candidate. (authors)
Predictive Surface Complexation Modeling
Energy Technology Data Exchange (ETDEWEB)
Sverjensky, Dimitri A. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Earth and Planetary Sciences
2016-11-29
Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soilwaters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO₂ and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.
Energy Technology Data Exchange (ETDEWEB)
Corcelli, S.A.; Kress, J.D.; Pratt, L.R.
1995-08-07
This paper develops and characterizes mixed direct-iterative methods for boundary integral formulations of continuum dielectric solvation models. We give an example, the Ca⁺⁺⋯Cl⁻ pair potential of mean force in aqueous solution, for which a direct solution at thermal accuracy is difficult and thus for which mixed direct-iterative methods seem necessary to obtain the required high resolution. For the simplest such formulations, Gauss-Seidel iteration diverges in rare cases. This difficulty is analyzed by obtaining the eigenvalues and the spectral radius of the non-symmetric iteration matrix. This establishes that the divergences are due to inaccuracies of the asymptotic approximations used in evaluating the matrix elements corresponding to accidental close encounters of boundary elements on different atomic spheres; the spectral radii are greater than one in those diverging cases. This problem is cured by checking for boundary element pairs closer than the typical spatial extent of the boundary elements and, for those cases, performing an "in-line" Monte Carlo integration to evaluate the required matrix elements. These difficulties are not expected and have not been observed for the thoroughly coarsened equations obtained when only a direct solution is sought. Finally, we give an example application of hybrid quantum-classical methods to the deprotonation of orthosilicic acid in water.
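The convergence diagnostic described here can be sketched directly: estimate the spectral radius of the Gauss-Seidel iteration matrix G = −(D+L)⁻¹U by power iteration; the iteration diverges precisely when ρ(G) > 1. The matrices below are tiny toys, not boundary-element matrices.

```python
def gs_apply(A, v):
    """Apply G = -(D+L)^{-1} U to v via forward substitution."""
    n = len(A)
    y = [0.0] * n
    for i in range(n):
        s = -sum(A[i][j] * v[j] for j in range(i + 1, n))   # -(U v)_i
        s -= sum(A[i][j] * y[j] for j in range(i))          # subtract (L y)_i
        y[i] = s / A[i][i]
    return y

def spectral_radius(A, n_iter=200):
    v = [1.0] * len(A)
    lam = 0.0
    for _ in range(n_iter):
        w = gs_apply(A, v)
        lam = max(abs(wi) for wi in w)       # infinity-norm growth factor
        if lam == 0.0:
            return 0.0
        v = [wi / lam for wi in w]
    return lam

good = [[4.0, 1.0], [1.0, 3.0]]   # diagonally dominant: rho = 1/12 < 1
bad = [[1.0, 3.0], [2.0, 1.0]]    # rho = 6 > 1: Gauss-Seidel diverges
print(spectral_radius(good), spectral_radius(bad))
```

Applying G only requires a forward substitution, so the dominant eigenvalue can be estimated without ever forming the non-symmetric iteration matrix explicitly.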
DEFF Research Database (Denmark)
Dieterle, Mischa; Horstmeyer, Thomas; Berthold, Jost;
2012-01-01
block inside a bigger structure. In this work, we present a general framework for skeleton iteration and discuss requirements and variations of iteration control and iteration body. Skeleton iteration is expressed by synchronising a parallel iteration body skeleton with a (likewise parallel) state...
Kazemi, Mahdi; Arefi, Mohammad Mehdi
2016-12-15
In this paper, an online identification algorithm is presented for nonlinear systems in the presence of output colored noise. The proposed method is based on the extended recursive least squares (ERLS) algorithm, where the identified system is in polynomial Wiener form. To this end, an unknown intermediate signal is estimated by using an inner iterative algorithm. The iterative recursive algorithm adaptively modifies the vector of parameters of the presented Wiener model when the system parameters vary. In addition, to increase the robustness of the proposed method against variations, a robust RLS algorithm is applied to the model. Simulation results are provided to show the effectiveness of the proposed approach. The results confirm that the proposed method has a fast convergence rate with robust characteristics, which increases the efficiency of the proposed model and identification approach. For instance, the FIT criterion reaches 92% in a CSTR process where about 400 data points are used.
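The recursive least squares core of such schemes can be sketched for a 2-parameter linear-in-parameters model y = θ₁u + θ₂u_prev + noise. The Wiener-specific parts of the paper (intermediate-signal estimation, colored-noise handling) are omitted, and the forgetting factor and data are illustrative assumptions.

```python
import random

def rls(data, lam=0.99):
    P = [[1e6, 0.0], [0.0, 1e6]]          # large P0: weak prior
    th = [0.0, 0.0]
    for phi, y in data:
        Pphi = [P[0][0] * phi[0] + P[0][1] * phi[1],
                P[1][0] * phi[0] + P[1][1] * phi[1]]
        denom = lam + phi[0] * Pphi[0] + phi[1] * Pphi[1]
        k = [Pphi[0] / denom, Pphi[1] / denom]      # gain
        e = y - (th[0] * phi[0] + th[1] * phi[1])   # prediction error
        th = [th[0] + k[0] * e, th[1] + k[1] * e]
        # P <- (P - k phi^T P) / lam   (P stays symmetric)
        P = [[(P[i][j] - k[i] * Pphi[j]) / lam for j in range(2)]
             for i in range(2)]
    return th

rng = random.Random(0)
u_prev, data = 0.0, []
for _ in range(500):
    u = rng.uniform(-1.0, 1.0)
    data.append(([u, u_prev], 2.0 * u - 0.5 * u_prev + rng.gauss(0.0, 0.01)))
    u_prev = u
th = rls(data)
print(th)    # close to the true parameters [2.0, -0.5]
```

The forgetting factor λ < 1 discounts old data, which is what lets the estimator track parameters when the system varies.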
Techniques in Iterative Proton CT Image Reconstruction
Penfold, Scott
2015-01-01
This is a review paper on some of the physics, modeling, and iterative algorithms in proton computed tomography (pCT) image reconstruction. The primary challenge in pCT image reconstruction lies in the degraded spatial resolution resulting from multiple Coulomb scattering within the imaged object. Analytical models such as the most likely path (MLP) have been proposed to predict the scattered trajectory from measurements of individual proton location and direction before and after the object. Iterative algorithms provide a flexible tool with which to incorporate these models into image reconstruction. The modeling leads to a large and sparse linear system of equations that can efficiently be solved by projection-method-based iterative algorithms. Such algorithms perform projections of the iterates onto the hyperplanes that are represented by the linear equations of the system. They perform these projections in possibly various algorithmic structures, such as block-iterative projections (BIP), string-averaging...
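The elementary building block of these schemes is the Kaczmarz step, which projects the current iterate onto the hyperplane aᵢ·x = bᵢ of one equation; block-iterative and string-averaging variants combine such projections in different orders. The 2×2 system below is a toy, not pCT data.

```python
def kaczmarz(A, b, sweeps=100):
    n = len(A[0])
    x = [0.0] * n
    for _ in range(sweeps):
        for a_i, b_i in zip(A, b):
            dot = sum(aj * xj for aj, xj in zip(a_i, x))
            norm2 = sum(aj * aj for aj in a_i)
            c = (b_i - dot) / norm2          # signed distance to hyperplane
            x = [xj + c * aj for xj, aj in zip(x, a_i)]
    return x

x = kaczmarz([[1.0, 1.0], [1.0, -1.0]], [3.0, 1.0])
print(x)    # the orthogonal rows make it converge to [2.0, 1.0] in one sweep
```

Because each step touches only one sparse row, the method scales to the large systems that arise when every proton history contributes an equation.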
Konrath, Fabian; Witt, Johannes; Sauter, Thomas; Kulms, Dagmar
2014-03-01
The transcription factor nuclear factor kappa-B (NFκB) is a key regulator of pro-inflammatory and pro-proliferative processes. Accordingly, uncontrolled NFκB activity may contribute to the development of severe diseases when the regulatory system is impaired. Since NFκB can be triggered by a huge variety of inflammatory, pro- and anti-apoptotic stimuli, its activation underlies a complex and tightly regulated signaling network that also includes multi-layered negative feedback mechanisms. Detailed understanding of this complex signaling network is mandatory to identify sensitive parameters that may serve as targets for therapeutic interventions. While many details about canonical and non-canonical NFκB activation have been investigated, less is known about cellular IκBα pools that may tune the cellular NFκB levels. IκBα has so far exclusively been described to exist in two different forms within the cell: stably bound to NFκB or, very transiently, as unbound protein. We created a detailed mathematical model to quantitatively capture and analyze the time-resolved network behavior. By iterative refinement with numerous biological experiments, we yielded a highly identifiable model with superior predictive power, which led to the hypothesis of an NFκB-lacking IκBα complex that contains stabilizing IKK subunits. We provide evidence that pathways other than the canonical one exist that may affect the cellular IκBα status. The additional IκBα:IKKγ complex revealed here may serve as storage for the inhibitor to antagonize undesired NFκB activation under physiological and pathophysiological conditions.
Clustered iterative stochastic ensemble method for multi-modal calibration of subsurface flow models
Elsheikh, Ahmed H.
2013-05-01
A novel multi-modal parameter estimation algorithm is introduced. Parameter estimation is an ill-posed inverse problem that might admit many different solutions. This is attributed to the limited amount of measured data used to constrain the inverse problem. The proposed multi-modal model calibration algorithm uses an iterative stochastic ensemble method (ISEM) for parameter estimation. ISEM employs an ensemble of directional derivatives within a Gauss-Newton iteration for nonlinear parameter estimation. ISEM is augmented with a clustering step based on the k-means algorithm to form sub-ensembles. These sub-ensembles are used to explore different parts of the search space. Clusters are updated at regular intervals of the algorithm to allow merging of close clusters approaching the same local minima. Numerical testing demonstrates the potential of the proposed algorithm in dealing with multi-modal nonlinear parameter estimation for subsurface flow models. © 2013 Elsevier B.V.
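The clustering step can be pictured with a bare-bones k-means pass that splits an ensemble of parameter vectors into sub-ensembles; the two synthetic modes below are stand-ins for distinct local minima, and the code sketches only the clustering, not ISEM itself:

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Minimal k-means used to split an ensemble into sub-ensembles."""
    # deterministic init: spread initial centres across the ensemble
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

# Toy ensemble of 2-D parameter vectors drawn around two "local minima"
rng = np.random.default_rng(1)
ensemble = np.vstack([rng.normal([0.0, 0.0], 0.1, (20, 2)),
                      rng.normal([5.0, 5.0], 0.1, (20, 2))])
labels, centers = kmeans(ensemble, 2)
# each sub-ensemble (labels == 0, labels == 1) would then be updated by
# its own Gauss-Newton step in the full calibration algorithm
```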
Mechanical and Electrical Modeling of Strands in Two ITER CS Cable Designs
Torre, A; Ciazynski, D
2014-01-01
Following the test of the first Central Solenoid (CS) conductor short samples for the International Thermonuclear Experimental Reactor (ITER) in the SULTAN facility, the ITER Organization (IO) decided to manufacture and test two alternate samples using four different cable designs. These samples, while using the same Nb3Sn strand, were meant to assess the influence of various cable design parameters on the conductor performance and behavior under mechanical cycling. In particular, the second of these samples, CSIO2, aimed at comparing designs with modified cabling twist pitch sequences. This sample has been tested, and the two legs exhibited very different behaviors. To help understand what could lead to such a difference, these two cables were mechanically modeled using the MULTIFIL code, and the resulting strain map was used as an input into the CEA electrical code CARMEN. This article presents the main data extracted from the mechanical simulation and its use in the electrical modeling of individual strand...
Space station short-term mission planning using ontology modelling and time iteration
Institute of Scientific and Technical Information of China (English)
Huijiao Bu; Jin Zhang; Yazhong Luo
2016-01-01
This paper studies the problem of space station short-term mission planning, which aims to allocate the executing time of missions effectively, schedule the corresponding resources reasonably and arrange the time of the astronauts properly. A domain model is developed by using ontology theory to describe the concepts, constraints and relations of the planning domain formally, abstractly and normatively. A method based on time iteration is adopted to solve the short-term planning problem. Meanwhile, resolving strategies are proposed to resolve different kinds of conflicts induced by the constraints of power, heat, resource, astronaut and relationship. The proposed approach is evaluated in a test case with fifteen missions, thirteen resources and three astronauts. The results show that the developed domain ontology model is reasonable, and the time iteration method using the proposed resolving strategies can successfully obtain a plan satisfying all considered constraints.
Candidate Prediction Models and Methods
DEFF Research Database (Denmark)
Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik
2005-01-01
This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines...
Energy confinement scaling and the extrapolation to ITER
Energy Technology Data Exchange (ETDEWEB)
NONE
1997-11-01
The fusion performance of ITER is predicted using three different techniques: statistical analysis of the global energy confinement data, a dimensionless physics parameter similarity method and full 1-D modeling of the plasma profiles. Although the three methods give overlapping predictions for the performance of ITER, the confidence interval of all of the techniques is still quite wide.
Iterative convergence of passage-time densities in semi-Markov performance models
Bradley, J.T.; Wilson, H. J.
2005-01-01
Passage-time densities are important for the detailed performance analysis of distributed computer and communicating systems. We provide a proof and demonstration of a practical iterative algorithm for extracting complete passage-time densities from expressive semi-Markov systems. We end by showing its application to a distributed web-server cluster model of 15.9 million states. © 2004 Elsevier B.V. All rights reserved.
Building generic anatomical models using virtual model cutting and iterative registration
Directory of Open Access Journals (Sweden)
Hallgrímsson Benedikt
2010-02-01
Full Text Available Abstract Background Using 3D generic models to statistically analyze trends in biological structure changes is an important tool in morphometrics research. Therefore, 3D generic models built for a range of populations are in high demand. However, due to the complexity of biological structures and the limited views of them that medical images can offer, it is still an exceptionally difficult task to quickly and accurately create 3D generic models (a model is a 3D graphical representation of a biological structure) based on medical image stacks (a stack is an ordered collection of 2D images). We show that the creation of a generic model that captures spatial information exploitable in statistical analyses is facilitated by coupling our generalized segmentation method to existing automatic image registration algorithms. Methods The method of creating generic 3D models consists of the following processing steps: (i) scanning subjects to obtain image stacks; (ii) creating individual 3D models from the stacks; (iii) interactively extracting the sub-volume by cutting each model to generate the sub-model of interest; (iv) creating image stacks that contain only the information pertaining to the sub-models; (v) iteratively registering the corresponding new 2D image stacks; (vi) averaging the newly created sub-models based on intensity to produce the generic model from all the individual sub-models. Results After several registration procedures are applied to the image stacks, we can create averaged image stacks with sharp boundaries. The averaged 3D model created from those image stacks is very close to the average representation of the population. The image registration time varies depending on the image size and the desired accuracy of the registration. Both volumetric data and a surface model for the generic 3D model are created at the final step. Conclusions Our method is very flexible and easy to use such that anyone can use image stacks to create models and
Hao, Yan; Kemper, Peter; Smith, Gregory D
2009-09-01
Mathematical models of calcium release sites derived from Markov chain models of intracellular calcium channels exhibit collective gating reminiscent of the experimentally observed phenomenon of calcium puffs and sparks. Such models often take the form of stochastic automata networks in which the transition probabilities of each channel depend on the local calcium concentration and thus the state of the other channels. In order to overcome the state-space explosion that occurs in such compositionally defined calcium release site models, we have implemented several automated procedures for model reduction using fast/slow analysis. After categorizing rate constants in the single channel model as either fast or slow, groups of states in the expanded release site model that are connected by fast transitions are lumped, and transition rates between reduced states are chosen consistent with the conditional probability distribution among states within each group. For small problems these conditional probability distributions can be numerically calculated from the full model without approximation. For large problems the conditional probability distributions can be approximated without the construction of the full model by assuming rapid mixing of states connected by fast transitions. Alternatively, iterative aggregation/disaggregation may be employed to obtain reduced calcium release site models in a memory-efficient fashion. Benchmarking of several different iterative aggregation/disaggregation-based fast/slow reduction schemes establishes the effectiveness of automated calcium release site reduction utilizing the Koury-McAllister-Stewart method.
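The fast/slow lumping idea can be illustrated on a minimal three-state chain: two states joined by fast transitions are merged into one lumped state, with the reduced exit rate weighted by the conditional (quasi-stationary) distribution inside the group. All rates below are arbitrary illustrative numbers, not a calcium channel model:

```python
import numpy as np

# Three-state CTMC: states 0 and 1 interconvert rapidly, transitions
# involving state 2 are slow (all rates are illustrative).
a, b = 1000.0, 2000.0    # fast rates 0->1 and 1->0
c, d = 1.0, 2.0          # slow rates 1->2 and 2->0

# Conditional (quasi-stationary) distribution within the fast group {0, 1}
pi0, pi1 = b / (a + b), a / (a + b)

# Reduced 2-state chain: lumped state L = {0, 1} and state 2.
# The exit rate L->2 is c weighted by the probability of being in state 1.
w = np.array([d, pi1 * c])
w = w / w.sum()                      # stationary law of the reduced chain

# Compare against the stationary law of the full chain
Q = np.array([[-a, a, 0.0],
              [b, -(b + c), c],
              [d, 0.0, -d]])
A = np.vstack([Q.T, np.ones(3)])     # solve pi Q = 0 with sum(pi) = 1
pi = np.linalg.lstsq(A, np.array([0.0, 0.0, 0.0, 1.0]), rcond=None)[0]
# pi[0] + pi[1] is very close to w[0] when the timescales are separated
```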
Melanoma risk prediction models
Directory of Open Access Journals (Sweden)
Nikolić Jelena
2014-01-01
Full Text Available Background/Aim. The lack of effective therapy for advanced stages of melanoma emphasizes the importance of preventive measures and screenings of population at risk. Identifying individuals at high risk should allow targeted screenings and follow-up involving those who would benefit most. The aim of this study was to identify the most significant factors for melanoma prediction in our population and to create prognostic models for identification and differentiation of individuals at risk. Methods. This case-control study included 697 participants (341 patients and 356 controls) that underwent extensive interview and skin examination in order to check risk factors for melanoma. Pairwise univariate statistical comparison was used for the coarse selection of the most significant risk factors. These factors were fed into logistic regression (LR) and alternating decision trees (ADT) prognostic models that were assessed for their usefulness in identification of patients at risk to develop melanoma. Validation of the LR model was done by the Hosmer and Lemeshow test, whereas the ADT was validated by 10-fold cross-validation. The achieved sensitivity, specificity, accuracy and AUC for both models were calculated. The melanoma risk score (MRS) based on the outcome of the LR model was presented. Results. The LR model showed that the following risk factors were associated with melanoma: sunbeds (OR = 4.018; 95% CI 1.724-9.366 for those that sometimes used sunbeds), solar damage of the skin (OR = 8.274; 95% CI 2.661-25.730 for those with severe solar damage), hair color (OR = 3.222; 95% CI 1.984-5.231 for light brown/blond hair), the number of common naevi (over 100 naevi had OR = 3.57; 95% CI 1.427-8.931), the number of dysplastic naevi (from 1 to 10 dysplastic naevi OR was 2.672; 95% CI 1.572-4.540; for more than 10 naevi OR was 6.487; 95% CI 1.993-21.119), Fitzpatrick's phototype and the presence of congenital naevi. Red hair, phototype I and large congenital naevi were
Modelling ELM heat flux deposition on the ITER main chamber wall
Energy Technology Data Exchange (ETDEWEB)
Kočan, M., E-mail: martin.kocan@iter.org [ITER Organization, Route de Vinon-sur-Verdon, CS 90 046, F-13067 St Paul lez Durance Cedex (France); Pitts, R.A.; Lisgo, S.W.; Loarte, A. [ITER Organization, Route de Vinon-sur-Verdon, CS 90 046, F-13067 St Paul lez Durance Cedex (France); Gunn, J.P. [Association Euratom-CEA, CEA/DSM/IRFM, Cadarache, 13108 Saint-Paul-lez-Durance (France); Fuchs, V. [Institute of Plasma Physics, Association EURATOM/IPP.CR, Praha 18200 (Czech Republic)
2015-08-15
The interaction of ELM filaments with the ITER beryllium first wall panels (FWPs) is studied using a simple ad-hoc fluid model of the filament parallel transport, taking into account the full, three-dimensional structure of the FWPs, including magnetic shadowing effects. The calculated ELM surface heat loads are used as input to the RACLETTE heat transfer code to estimate the FWP surface temperature rise. The results indicate that controlled ELMs in ITER during burning plasma operation (ΔW_ELM ≈ 0.6 MJ) will not lead to melting or significant evaporation of the beryllium surfaces, even in the case of high ELM broadening and the minimum allowable distance between the primary and secondary separatrices. The ELM-averaged steady-state heat load also stays below the maximum power handling capability of the FWPs.
Combined trust model based on evidence theory in iterated prisoner's dilemma game
Chen, Bo; Zhang, Bin; Zhu, Weidong
2011-01-01
In the iterated prisoner's dilemma game, agents often defect out of mutual distrust for the sake of their own benefit. However, most game strategies and mechanisms in the current literature are of limited use for strengthening cooperative behaviour, especially in noisy environments. In this article, we construct a combined trust model by combining the locally owned information and the recommending information from other agents and develop a combined trust strategy in the iterated prisoner's dilemma game. The proposed game strategy can provide not only a higher payoff for agents, but also a trust mechanism for the system. Furthermore, agents can form their own reputation evaluations upon their opponents and make more rational and precise decisions under our framework. Simulations of application are performed to show the performance of the proposed strategy in noise-free and noisy environments.
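The fusion of direct experience with recommendations from other agents can be sketched with Dempster's rule of combination from evidence theory, on which such combined trust models build; the mass values below are illustrative, not taken from the article:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose focal
    elements are frozensets over the frame {'trust', 'distrust'}."""
    combined, conflict = {}, 0.0
    for (A, a), (B, b) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            combined[inter] = combined.get(inter, 0.0) + a * b
        else:
            conflict += a * b          # mass assigned to the empty set
    return {A: v / (1.0 - conflict) for A, v in combined.items()}

T, D = frozenset(['trust']), frozenset(['distrust'])
local = {T: 0.6, D: 0.1, T | D: 0.3}         # direct experience
recommended = {T: 0.5, D: 0.2, T | D: 0.3}   # reports from other agents
fused = dempster_combine(local, recommended)
```

Concordant evidence reinforces the trust mass while conflicting products are discarded and renormalized away, which is what makes the rule attractive in noisy settings.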
Energy Technology Data Exchange (ETDEWEB)
Kaasalainen, Touko; Lampinen, Anniina [University of Helsinki and Helsinki University Hospital, HUS Medical Imaging Center, Radiology, POB 340, Helsinki (Finland); University of Helsinki, Department of Physics, Helsinki (Finland); Palmu, Kirsi [University of Helsinki and Helsinki University Hospital, HUS Medical Imaging Center, Radiology, POB 340, Helsinki (Finland); School of Science, Aalto University, Department of Biomedical Engineering and Computational Science, Helsinki (Finland); Reijonen, Vappu; Kortesniemi, Mika [University of Helsinki and Helsinki University Hospital, HUS Medical Imaging Center, Radiology, POB 340, Helsinki (Finland); Leikola, Junnu [University of Helsinki and Helsinki University Hospital, Department of Plastic Surgery, Helsinki (Finland); Kivisaari, Riku [University of Helsinki and Helsinki University Hospital, Department of Neurosurgery, Helsinki (Finland)
2015-09-15
Medical professionals need to exercise particular caution when developing CT scanning protocols for children who require multiple CT studies, such as those with craniosynostosis. To evaluate the utility of ultra-low-dose CT protocols with model-based iterative reconstruction techniques for craniosynostosis imaging. We scanned two pediatric anthropomorphic phantoms with a 64-slice CT scanner using different low-dose protocols for craniosynostosis. We measured organ doses in the head region with metal-oxide-semiconductor field-effect transistor (MOSFET) dosimeters. Numerical simulations served to estimate organ and effective doses. We objectively and subjectively evaluated the quality of images produced by adaptive statistical iterative reconstruction (ASiR) 30%, ASiR 50% and Veo (all by GE Healthcare, Waukesha, WI). Image noise and contrast were determined for different tissues. Mean organ dose with the newborn phantom was decreased up to 83% compared to the routine protocol when using ultra-low-dose scanning settings. Similarly, for the 5-year phantom the greatest radiation dose reduction was 88%. The numerical simulations supported the findings with MOSFET measurements. The image quality remained adequate with Veo reconstruction, even at the lowest dose level. Craniosynostosis CT with model-based iterative reconstruction could be performed with a 20-μSv effective dose, corresponding to the radiation exposure of plain skull radiography, without compromising required image quality. (orig.)
Yang, Lei; Tang, Xianglong
2014-01-01
Cliques (maximal complete subnets) in protein-protein interaction (PPI) network are an important resource used to analyze protein complexes and functional modules. Clique-based methods of predicting PPI complement the data defection from biological experiments. However, clique-based predicting methods only depend on the topology of network. The false-positive and false-negative interactions in a network usually interfere with prediction. Therefore, we propose a method combining clique-based method of prediction and gene ontology (GO) annotations to overcome the shortcoming and improve the accuracy of predictions. According to different GO correcting rules, we generate two predicted interaction sets which guarantee the quality and quantity of predicted protein interactions. The proposed method is applied to the PPI network from the Database of Interacting Proteins (DIP) and most of the predicted interactions are verified by another biological database, BioGRID. The predicted protein interactions are appended to the original protein network, which leads to clique extension and shows the significance of biological meaning.
Computed tomography depiction of small pediatric vessels with model-based iterative reconstruction
Energy Technology Data Exchange (ETDEWEB)
Koc, Gonca; Courtier, Jesse L.; Phelps, Andrew; Marcovici, Peter A.; MacKenzie, John D. [UCSF Benioff Children's Hospital, Department of Radiology and Biomedical Imaging, San Francisco, CA (United States)
2014-07-15
Computed tomography (CT) is extremely important in characterizing blood vessel anatomy and vascular lesions in children. Recent advances in CT reconstruction technology hold promise for improved image quality and also reductions in radiation dose. This report evaluates potential improvements in image quality for the depiction of small pediatric vessels with model-based iterative reconstruction (Veo™), a technique developed to improve image quality and reduce noise. To evaluate Veo™ as an improved method when compared to adaptive statistical iterative reconstruction (ASIR™) for the depiction of small vessels on pediatric CT. Seventeen patients (mean age: 3.4 years, range: 2 days to 10.0 years; 6 girls, 11 boys) underwent contrast-enhanced CT examinations of the chest and abdomen in this HIPAA compliant and institutional review board approved study. Raw data were reconstructed into separate image datasets using Veo™ and ASIR™ algorithms (GE Medical Systems, Milwaukee, WI). Four blinded radiologists subjectively evaluated image quality. The pulmonary, hepatic, splenic and renal arteries were evaluated for the length and number of branches depicted. Datasets were compared with parametric and non-parametric statistical tests. Readers stated a preference for Veo™ over ASIR™ images when subjectively evaluating image quality criteria for vessel definition, image noise and resolution of small anatomical structures. The mean image noise in the aorta and fat was significantly less for Veo™ vs. ASIR™ reconstructed images. Quantitative measurements of mean vessel lengths and number of branch vessels delineated were significantly different for Veo™ and ASIR™ images. Veo™ consistently showed more of the vessel anatomy: longer vessel length and more branching vessels. When compared to the more established adaptive statistical iterative reconstruction algorithm, model
Electrostatic ion thrusters - towards predictive modeling
Energy Technology Data Exchange (ETDEWEB)
Kalentev, O.; Matyash, K.; Duras, J.; Lueskow, K.F.; Schneider, R. [Ernst-Moritz-Arndt Universitaet Greifswald, D-17489 (Germany); Koch, N. [Technische Hochschule Nuernberg Georg Simon Ohm, Kesslerplatz 12, D-90489 Nuernberg (Germany); Schirra, M. [Thales Electronic Systems GmbH, Soeflinger Strasse 100, D-89077 Ulm (Germany)
2014-02-15
The development of electrostatic ion thrusters so far has mainly been based on empirical and qualitative know-how, and on evolutionary iteration steps. This resulted in considerable effort regarding prototype design, construction and testing and therefore in significant development and qualification costs and high time demands. For future developments it is anticipated to implement simulation tools which allow for quantitative prediction of ion thruster performance, long-term behavior and spacecraft interaction prior to hardware design and construction. Based on integrated numerical models combining self-consistent kinetic plasma models with plasma-wall interaction modules a new quality in the description of electrostatic thrusters can be reached. These open the perspective for predictive modeling in this field. This paper reviews the application of a set of predictive numerical modeling tools on an ion thruster model of the HEMP-T (High Efficiency Multi-stage Plasma Thruster) type patented by Thales Electron Devices GmbH. (copyright 2014 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)
Energy Technology Data Exchange (ETDEWEB)
Kuya, Keita; Shinohara, Yuki; Fujii, Shinya; Ogawa, Toshihide [Tottori University, Division of Radiology, Department of Pathophysiological Therapeutic Science, Faculty of Medicine, Yonago (Japan); Sakamoto, Makoto; Watanabe, Takashi [Tottori University, Division of Neurosurgery, Department of Brain and Neurosciences, Faculty of Medicine, Yonago (Japan); Iwata, Naoki; Kishimoto, Junichi [Tottori University, Division of Clinical Radiology Faculty of Medicine, Yonago (Japan); Kaminou, Toshio [Osaka Minami Medical Center, Department of Radiology, Osaka (Japan)
2014-11-15
Follow-up CT angiography (CTA) is routinely performed for post-procedure management after carotid artery stenting (CAS). However, the stent lumen tends to be underestimated because of stent artifacts on CTA reconstructed with the filtered back projection (FBP) technique. We assessed the utility of new iterative reconstruction techniques, such as adaptive statistical iterative reconstruction (ASIR) and model-based iterative reconstruction (MBIR), for CTA after CAS in comparison with FBP. In a phantom study, we evaluated the differences among the three reconstruction techniques with regard to the relationship between the stent luminal diameter and the degree of underestimation of stent luminal diameter. In a clinical study, 34 patients who underwent follow-up CTA after CAS were included. We compared the stent luminal diameters among FBP, ASIR, and MBIR, and performed visual assessment of low attenuation area (LAA) in the stent lumen using a three-point scale. In the phantom study, stent luminal diameter was increasingly underestimated as luminal diameter became smaller in all CTA images. Stent luminal diameter was larger with MBIR than with the other reconstruction techniques. Similarly, in the clinical study, stent luminal diameter was larger with MBIR than with the other reconstruction techniques. LAA detectability scores of MBIR were greater than or equal to those of FBP and ASIR in all cases. MBIR improved the accuracy of assessment of stent luminal diameter and LAA detectability in the stent lumen when compared with FBP and ASIR. We conclude that MBIR is a useful reconstruction technique for CTA after CAS. (orig.)
Dynamic modelling of flexibly supported gears using iterative convergence of tooth mesh stiffness
Xue, Song; Howard, Ian
2016-12-01
This paper presents a new gear dynamic model for flexibly supported gear sets aiming to improve the accuracy of gear fault diagnostic methods. In the model, the operating gear centre distance, which can affect the gear design parameters, like the gear mesh stiffness, has been selected as the iteration criterion because it will significantly deviate from its nominal value for a flexibly supported gearset when it is operating. An FEA method was developed for calculation of the gear mesh stiffnesses with varying gear centre distance, which can then be incorporated by iteration into the gear dynamic model. The dynamic simulation results from previous models that neglect the operating gear centre distance change and those from the new model that incorporate the operating gear centre distance change were obtained by numerical integration of the differential equations of motion using the Newmark method. Some common diagnostic tools were utilized to investigate the difference and comparison of the fault diagnostic results between the two models. The results of this paper indicate that the major difference between the two diagnostic results for the cracked tooth exists in the extended duration of the crack event and in changes to the phase modulation of the coherent time synchronous averaged signal even though other notable differences from other diagnostic results can also be observed.
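The iteration criterion can be pictured as a scalar fixed-point loop: the current centre distance sets the mesh stiffness, the resulting compliance resets the centre distance, and the loop stops when the distance no longer changes. The stiffness values and the linearized stiffness law below are invented for illustration, not taken from the paper's FEA data:

```python
F = 500.0          # transmitted gear force (N), illustrative
k_support = 1e6    # flexible support stiffness (N/m), illustrative
d0 = 0.100         # nominal centre distance (m)

def mesh_stiffness(d):
    # hypothetical linearized dependence of mesh stiffness on centre distance
    return 2e8 * (1.0 - 5.0 * (d - d0))

d = d0
for i in range(100):
    k_mesh = mesh_stiffness(d)
    # support and mesh compliances in series set the operating deflection
    d_new = d0 + F * (1.0 / k_support + 1.0 / k_mesh)
    if abs(d_new - d) < 1e-12:
        d = d_new
        break                         # centre distance has converged
    d = d_new
```

In the paper the stiffness update is a full FEA calculation rather than a closed-form law, but the convergence logic is the same.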
Coupled core-SOL modelling of W contamination in H-mode JET plasmas with ITER-like wall
Energy Technology Data Exchange (ETDEWEB)
Parail, V., E-mail: Vassili.parail@ccfe.ac.uk [CCFE Culham Science Centre, Abingdon, Oxon OX14 3DB (United Kingdom); Corrigan, G.; Da Silva Aresta Belo, P. [CCFE Culham Science Centre, Abingdon, Oxon OX14 3DB (United Kingdom); De La Luna, E. [Laboratorio Nacional de Fusion, Madrid (Spain); Harting, D. [CCFE Culham Science Centre, Abingdon, Oxon OX14 3DB (United Kingdom); Koechl, F. [Atominstitut, TU Wien, 1020 Vienna (Austria); Koskela, T. [Aalto University, Department of Applied Physics, P.O. Box 14100, FIN-00076 Aalto (Finland); Meigs, A.; Militello-Asp, E.; Romanelli, M. [CCFE Culham Science Centre, Abingdon, Oxon OX14 3DB (United Kingdom); Tsalas, M. [FOM Institute DIFFER, P.O. Box 1207, NL-3430 BE Nieuwegein (Netherlands)
2015-08-15
The influence of the ITER-like Wall (ILW) with a divertor target plate made of tungsten (W) on plasma performance in JET H-mode has been investigated since 2011 (see F. Romanelli and references therein). One of the key issues in discharges with a low level of D fuelling is the observed accumulation of W in the plasma core, which leads to a reduction in plasma performance. To study the interplay between W sputtering on the target plate, penetration of W through the SOL and edge transport barrier (ETB) and its further accumulation in the plasma core, predictive modelling was launched using a coupled 1.5D core and 2D SOL code JINTRAC (Romanelli, 2014; Cenacchi and Taroni, 1988; Taroni et al., 1992; Wiesen et al., 2006). Simulations reveal the important role of ELMs in W sputtering and plasma density control. Analyses also confirm the pivotal role played by the neo-classical pinch of heavy impurities within the ETB.
DEFF Research Database (Denmark)
Dieterle, Mischa; Horstmeyer, Thomas; Berthold, Jost;
2012-01-01
Skeleton-based programming is an area of increasing relevance with upcoming highly parallel hardware, since it substantially facilitates parallel programming and separates concerns. When parallel algorithms expressed by skeletons involve iterations – applying the same algorithm repeatedly...... block inside a bigger structure. In this work, we present a general framework for skeleton iteration and discuss requirements and variations of iteration control and iteration body. Skeleton iteration is expressed by synchronising a parallel iteration body skeleton with a (likewise parallel) state......-based iteration control, where both skeletons offer supportive type safety by dedicated types geared towards stream communication for the iteration. The skeleton iteration framework is implemented in the parallel Haskell dialect Eden. We use example applications to assess performance and overhead....
Energy Technology Data Exchange (ETDEWEB)
Banerjee, Santanu; Vasu, P [Institute for Plasma Research, Bhat, Gandhinagar 382 428, Gujarat (India); Von Hellermann, M [FOM Institute for Plasma Physics, Rijnhuizen (Netherlands); Jaspers, R J E, E-mail: sbanerje@ipr.res.i [Applied Physics Department, Eindhoven University of Technology, Eindhoven (Netherlands)
2010-12-15
Contamination of optical signals by reflections from the tokamak vessel wall is a matter of great concern. For machines such as ITER and future reactors, where the vessel wall will be predominantly metallic, this is potentially a risk factor for quantitative optical emission spectroscopy. This is, in particular, the case when bremsstrahlung continuum radiation from the bulk plasma is used as a common reference light source for the cross-calibration of visible spectroscopy. In this paper the reflected contribution to the continuum level in Textor and ITER has been estimated for the detection channels meant for charge exchange recombination spectroscopy (CXRS). A model assuming diffuse reflection has been developed for the bremsstrahlung which is a much extended source. Based on this model, it is shown that in the case of ITER upper port 3, a wall with a moderate reflectivity of 20% leads to the wall reflected fraction being as high as 55-60% of the weak signals in the edge channels. In contrast, a complete bidirectional reflectance distribution function (BRDF) based model has been developed in order to estimate the reflections from more localized sources like the charge exchange (CX) emission from a neutral beam in tokamaks. The largest signal contamination of ~15% is seen in the core CX channels, where the true CX signal level is much lower than that in the edge channels. Similar values are obtained for Textor also. These results indicate that the contributions from wall reflections may be large enough to significantly distort the overall spectral features of CX data, warranting an analysis at different wavelengths.
Reynolds, Andrew M; Lihoreau, Mathieu; Chittka, Lars
2013-01-01
Pollinating bees develop foraging circuits (traplines) to visit multiple flowers in a manner that minimizes overall travel distance, a task analogous to the travelling salesman problem. We report on an in-depth exploration of an iterative improvement heuristic model of bumblebee traplining previously found to accurately replicate the establishment of stable routes by bees between flowers distributed over several hectares. The critical test for a model is its predictive power for empirical data for which the model has not been specifically developed, and here the model is shown to be consistent with observations from different research groups made at several spatial scales and using multiple configurations of flowers. We refine the model to account for the spatial search strategy of bees exploring their environment, and test several previously unexplored predictions. We find that the model predicts accurately 1) the increasing propensity of bees to optimize their foraging routes with increasing spatial scale; 2) that bees cannot establish stable optimal traplines for all spatial configurations of rewarding flowers; 3) the observed trade-off between travel distance and prioritization of high-reward sites (with a slight modification of the model); 4) the temporal pattern with which bees acquire approximate solutions to travelling salesman-like problems over several dozen foraging bouts; 5) the instability of visitation schedules in some spatial configurations of flowers; 6) the observation that in some flower arrays, bees' visitation schedules are highly individually different; 7) the searching behaviour that leads to efficient location of flowers and routes between them. Our model constitutes a robust theoretical platform to generate novel hypotheses and refine our understanding about how small-brained insects develop a representation of space and use it to navigate in complex and dynamic environments.
First Test Results on ITER CS Model Coil and CS Insert
Energy Technology Data Exchange (ETDEWEB)
Martovetsky, N; Michael, P; Minervini, J; Radovinsky, A; Takayasu, M; Thome, R; Ando, T; Isono, T; Kato, T; Nakajima, H; Nishijima, G; Nunoya, Y; Sugimoto, M; Takahashi, Y; Tsuji, H; Bessette, D; Okuno, K; Ricci, M; Maix, R
2000-10-12
The Inner and Outer modules of the Central Solenoid Model Coil (CSMC) were built by US and Japanese home teams in collaboration with European and Russian teams to demonstrate the feasibility of a superconducting Central Solenoid for ITER and other large tokamak reactors. The CSMC mass is about 120 t, OD is about 3.6 m and the stored energy is 640 MJ at 46 kA and peak field of 13 T. Testing of the CSMC and the CS Insert took place at the Japan Atomic Energy Research Institute (JAERI) from mid March until mid August 2000. This paper presents the main results of the tests performed.
The Law of Iterated Logarithm of Rescaled Range Statistics for AR(1) Model
Institute of Scientific and Technical Information of China (English)
Zheng Yan LIN; Sung Chul LEE
2006-01-01
Let $\{X_n, n \ge 0\}$ be an AR(1) process. Let $Q(n)$ be the rescaled range statistic, or the R/S statistic, for $\{X_n\}$, given by $$Q(n) = \frac{\max_{1\le k\le n}\sum_{j=1}^{k}(X_j-\bar{X}_n) \;-\; \min_{1\le k\le n}\sum_{j=1}^{k}(X_j-\bar{X}_n)}{\left(n^{-1}\sum_{j=1}^{n}(X_j-\bar{X}_n)^2\right)^{1/2}}$$ where $\bar{X}_n = n^{-1}\sum_{j=1}^{n} X_j$. In this paper we show a law of the iterated logarithm for the rescaled range statistic $Q(n)$ for the AR(1) model.
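The R/S statistic defined in the abstract can be computed directly from a sample path; below is a minimal numpy sketch (the AR(1) coefficient, sample size, and random seed are illustrative choices, not taken from the paper):

```python
import numpy as np

def rescaled_range(x):
    """R/S statistic Q(n): range of the centred partial sums divided by
    the (biased) sample standard deviation, as defined in the abstract."""
    x = np.asarray(x, dtype=float)
    dev = x - x.mean()
    partial = np.cumsum(dev)                # sum_{j<=k} (X_j - mean)
    spread = partial.max() - partial.min()  # max_k(...) - min_k(...)
    s = np.sqrt((dev ** 2).mean())          # (n^{-1} sum (X_j - mean)^2)^{1/2}
    return spread / s

# AR(1) sample: X_t = 0.5 * X_{t-1} + eps_t (coefficient is illustrative)
rng = np.random.default_rng(0)
x = np.zeros(10_000)
for t in range(1, len(x)):
    x[t] = 0.5 * x[t - 1] + rng.normal()
q = rescaled_range(x)
```

For short-memory processes such as AR(1), $Q(n)$ grows like $\sqrt{n}$, which is the scaling underlying the law of the iterated logarithm studied in the paper.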
Modelling CH$_3$OH masers: Sobolev approximation and accelerated lambda iteration method
Nesterenok, Aleksandr
2015-01-01
A simple one-dimensional model of CH$_3$OH maser is considered. Two techniques are used for the calculation of molecule level populations: the accelerated lambda iteration (ALI) method and the large velocity gradient (LVG), or Sobolev, approximation. The LVG approximation gives accurate results provided that the characteristic dimensions of the medium are larger than 5-10 lengths of the resonance region. We presume that this condition can be satisfied only for the largest observed maser spot distributions. Factors controlling the pumping of class I and class II methanol masers are considered.
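At the heart of the LVG (Sobolev) approximation mentioned above is the photon escape probability, which replaces the full radiative transfer solved by ALI. A minimal sketch of the standard Sobolev expression (the optical depths are illustrative):

```python
import numpy as np

def sobolev_escape(tau):
    """Sobolev (LVG) photon escape probability
    beta(tau) = (1 - exp(-tau)) / tau,
    with the tau -> 0 limit (beta -> 1) handled explicitly."""
    tau = np.asarray(tau, dtype=float)
    beta = np.ones_like(tau)
    nz = np.abs(tau) > 1e-8
    beta[nz] = (1.0 - np.exp(-tau[nz])) / tau[nz]
    return beta

taus = np.array([0.0, 0.1, 1.0, 10.0])
betas = sobolev_escape(taus)
```

The escape probability decreases monotonically with optical depth, which is why the LVG approximation only holds when the resonance region is small compared to the medium, as the abstract notes.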
Numerical Solutions of the Multispecies Predator-Prey Model by Variational Iteration Method
Directory of Open Access Journals (Sweden)
Khaled Batiha
2007-01-01
The main objective of the current work was to solve the multispecies predator-prey model. The techniques used here are the variational iteration method (VIM) and the Adomian decomposition method (ADM). The advantage of this work is twofold. Firstly, the VIM reduces the computational work. Secondly, in comparison with existing techniques, the VIM is an improvement with regard to its accuracy and rapid convergence. The VIM has the advantage of being more concise for analytical and numerical purposes. Comparisons with the exact solution and the fourth-order Runge-Kutta method (RK4) show that the VIM is a powerful method for the solution of nonlinear equations.
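As an illustration of the variational iteration method, the sketch below applies one common form of the VIM correction functional (with Lagrange multiplier λ = -1) to a two-species Lotka-Volterra system; the system, coefficients, and initial conditions are illustrative stand-ins, not the paper's multispecies model:

```python
import sympy as sp

t, s = sp.symbols('t s')

# Two-species Lotka-Volterra system (an illustrative stand-in for the
# paper's multispecies model): x' = x*(a - b*y), y' = -y*(c - d*x)
a, b, c, d = 1, 1, 1, 1
x0, y0 = sp.Integer(2), sp.Integer(1)

def vim_step(xn, yn):
    """One VIM correction with Lagrange multiplier lambda = -1:
    x_{n+1}(t) = x_n(t) - Int_0^t [x_n'(s) - x_n(s)*(a - b*y_n(s))] ds,
    and analogously for y."""
    rx = sp.diff(xn, t) - xn * (a - b * yn)
    ry = sp.diff(yn, t) + yn * (c - d * xn)
    xn1 = sp.expand(xn - sp.integrate(rx.subs(t, s), (s, 0, t)))
    yn1 = sp.expand(yn - sp.integrate(ry.subs(t, s), (s, 0, t)))
    return xn1, yn1

x, y = x0, y0          # zeroth iterates: the initial conditions
for _ in range(3):
    x, y = vim_step(x, y)

# Each iterate is a polynomial in t matching the solution's Taylor
# expansion to increasing order; evaluate near t = 0.
x_small_t = float(x.subs(t, sp.Rational(1, 10)))
```

Each iteration is a symbolic quadrature, which is why the VIM avoids the time-stepping work of RK4 for short-time solutions.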
Filippo Caschera; Gianluca Gazzola; Bedau, Mark A.; Carolina Bosch Moreno; Andrew Buchanan; James Cawse; Norman Packard; Martin M Hanczyc
2010-01-01
BACKGROUND: We consider the problem of optimizing a liposomal drug formulation: a complex chemical system with many components (e.g., elements of a lipid library) that interact nonlinearly and synergistically in ways that cannot be predicted from first principles. METHODOLOGY/PRINCIPAL FINDINGS: The optimization criterion in our experiments was the percent encapsulation of a target drug, Amphotericin B, detected experimentally via spectrophotometric assay. Optimization of such a complex syste...
Trial-by-trial identification of categorization strategy using iterative decision-bound modeling.
Hélie, Sébastien; Turner, Benjamin O; Crossley, Matthew J; Ell, Shawn W; Ashby, F Gregory
2016-08-05
Identifying the strategy that participants use in laboratory experiments is crucial in interpreting the results of behavioral experiments. This article introduces a new modeling procedure called iterative decision-bound modeling (iDBM), which iteratively fits decision-bound models to the trial-by-trial responses generated from single participants in perceptual categorization experiments. The goals of iDBM are to identify: (1) all response strategies used by a participant, (2) changes in response strategy, and (3) the trial number at which each change occurs. The new method is validated by testing its ability to identify the response strategies used in noisy simulated data. The benchmark simulation results show that iDBM is able to detect and identify strategy switches during an experiment and accurately estimate the trial number at which the strategy change occurs in low to moderate noise conditions. The new method is then used to reanalyze data from Ell and Ashby (2006). Applying iDBM revealed that increasing category overlap in an information-integration category learning task increased the proportion of participants who abandoned explicit rules, and reduced the number of training trials needed to abandon rules in favor of a procedural strategy. Finally, we discuss new research questions made possible through iDBM.
TRIPOLI-4® Monte Carlo code ITER A-lite neutronic model validation
Energy Technology Data Exchange (ETDEWEB)
Jaboulay, Jean-Charles, E-mail: jean-charles.jaboulay@cea.fr [CEA, DEN, Saclay, DM2S, SERMA, F-91191 Gif-sur-Yvette (France); Cayla, Pierre-Yves; Fausser, Clement [MILLENNIUM, 16 Av du Québec Silic 628, F-91945 Villebon sur Yvette (France); Damian, Frederic; Lee, Yi-Kang; Puma, Antonella Li; Trama, Jean-Christophe [CEA, DEN, Saclay, DM2S, SERMA, F-91191 Gif-sur-Yvette (France)
2014-10-15
3D Monte Carlo transport codes are extensively used in neutronic analysis, especially in radiation protection and shielding analyses for fission and fusion reactors. TRIPOLI-4® is a Monte Carlo code developed by CEA. The aim of this paper is to show its capability to model a large-scale fusion reactor with complex neutron source and geometry. A benchmark between MCNP5 and TRIPOLI-4®, on the ITER A-lite model was carried out; neutron flux, nuclear heating in the blankets and tritium production rate in the European TBMs were evaluated and compared. The methodology to build the TRIPOLI-4® A-lite model is based on MCAM and the MCNP A-lite model. Simplified TBMs, from KIT, were integrated in the equatorial-port. A good agreement between MCNP and TRIPOLI-4® is shown; discrepancies are mainly within the statistical error.
Jørgensen, Jakob H; Pan, Xiaochuan
2011-01-01
Discrete-to-discrete imaging models for computed tomography (CT) are becoming increasingly ubiquitous as the interest in iterative image reconstruction algorithms has heightened. Despite this trend, all the intuition for algorithm and system design derives from analysis of continuous-to-continuous models such as the X-ray and Radon transform. While the similarity between these models justifies some crossover, questions such as what are sufficient sampling conditions can be quite different for the two models. This sampling issue is addressed extensively in the first half of the article using singular value decomposition analysis for determining sufficient number of views and detector bins. The question of full sampling for CT is particularly relevant to current attempts to adapt compressive sensing (CS) motivated methods to application in CT image reconstruction. The second half goes in depth on this subject and discusses the link between object sparsity and sufficient sampling for accurate reconstruction. Par...
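The singular value analysis of sampling described above can be illustrated on a toy discrete-to-discrete model. The sketch below builds a crude parallel-beam system matrix (step-wise ray sampling rather than exact intersection lengths; all sizes are illustrative) and tracks how the numerical rank grows with the number of views:

```python
import numpy as np

def system_matrix(n_pix=16, n_views=8, n_bins=16):
    """Crude discrete-to-discrete parallel-beam model: each row is one
    ray, built by stepping along the ray and depositing the step length
    into the nearest pixel (a rough stand-in for exact intersection
    lengths)."""
    A = np.zeros((n_views * n_bins, n_pix * n_pix))
    ts = np.linspace(-0.5, 0.5, 4 * n_pix)   # sample points along a ray
    dt = ts[1] - ts[0]
    for v, theta in enumerate(np.linspace(0.0, np.pi, n_views, endpoint=False)):
        d = np.array([np.cos(theta), np.sin(theta)])   # ray direction
        e = np.array([-np.sin(theta), np.cos(theta)])  # detector axis
        for b, off in enumerate(np.linspace(-0.5, 0.5, n_bins)):
            pts = off * e + ts[:, None] * d            # points on this ray
            ij = np.floor((pts + 0.5) * n_pix).astype(int)
            ok = (ij >= 0).all(axis=1) & (ij < n_pix).all(axis=1)
            for i, j in ij[ok]:
                A[v * n_bins + b, i * n_pix + j] += dt
    return A

# Numerical rank (singular values above a relative tolerance) grows as
# views are added, until the object is fully sampled.
ranks = []
for n_views in (4, 8, 16):
    sv = np.linalg.svd(system_matrix(n_views=n_views), compute_uv=False)
    ranks.append(int((sv > 1e-10 * sv[0]).sum()))
```

When the rank saturates near the number of pixels, additional views add little new information; below saturation the null space is what sparsity-exploiting (CS-motivated) reconstruction must compensate for.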
Iterative optimisation of Monte Carlo detector models using measurements and simulations
Energy Technology Data Exchange (ETDEWEB)
Marzocchi, O., E-mail: olaf@marzocchi.net [European Patent Office, Rijswijk (Netherlands); Leone, D., E-mail: debora.leone@kit.edu [Institute for Nuclear Waste Disposal, Karlsruhe Institute of Technology, Karlsruhe (Germany)
2015-04-11
This work proposes a new technique to optimise the Monte Carlo models of radiation detectors, offering the advantage of a significantly lower user effort and therefore an improved work efficiency compared to the prior techniques. The method consists of four steps, two of which are iterative and suitable for automation using scripting languages. The four steps consist in the acquisition in the laboratory of measurement data to be used as reference; the modification of a previously available detector model; the simulation of a tentative model of the detector to obtain the coefficients of a set of linear equations; the solution of the system of equations and the update of the detector model. Steps three and four can be repeated for more accurate results. This method avoids the “try and fail” approach typical of the prior techniques.
Paiement, Jean-François; Grandvalet, Yves; Bengio, Samy
2008-01-01
Modeling long-term dependencies in time series has proved very difficult to achieve with traditional machine learning methods. This problem occurs when considering music data. In this paper, we introduce generative models for melodies. We decompose melodic modeling into two subtasks. We first propose a rhythm model based on the distributions of distances between subsequences. Then, we define a generative model for melodies given chords and rhythms based on modeling sequences of Narmour featur...
Smolders, K.; Volckaert, M.; Swevers, J.
2008-11-01
This paper presents a nonlinear model-based iterative learning control procedure to achieve accurate tracking control for nonlinear lumped mechanical continuous-time systems. The model structure used in this iterative learning control procedure is new and combines a linear state space model and a nonlinear feature space transformation. An intuitive two-step iterative algorithm to identify the model parameters is presented. It alternates between the estimation of the linear and the nonlinear model part. It is assumed that besides the input and output signals also the full state vector of the system is available for identification. A measurement and signal processing procedure to estimate these signals for lumped mechanical systems is presented. The iterative learning control procedure relies on the calculation of the input that generates a given model output, so-called offline model inversion. A new offline nonlinear model inversion method for continuous-time, nonlinear time-invariant, state space models based on Newton's method is presented and applied to the new model structure. This model inversion method is not restricted to minimum phase models. It requires only calculation of the first order derivatives of the state space model and is applicable to multivariable models. For periodic reference signals the method yields a compact implementation in the frequency domain. Moreover it is shown that a bandwidth can be specified up to which learning is allowed when using this inversion method in the iterative learning control procedure. Experimental results for a nonlinear single-input-single-output system corresponding to a quarter car on a hydraulic test rig are presented. It is shown that the new nonlinear approach outperforms the linear iterative learning control approach which is currently used in the automotive industry on durability test rigs.
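A minimal sketch of the offline Newton-based model inversion idea for a discrete-time nonlinear SISO model; the model, its coefficients, and the finite-difference Jacobian are illustrative assumptions, not the paper's state-space structure or quarter-car system:

```python
import numpy as np

def simulate(u, x0=0.0):
    """Hypothetical scalar nonlinear model (illustrative only):
    x_{k+1} = 0.8*x_k + 0.1*x_k**2 + u_k,  y_k = x_{k+1}."""
    x, y = x0, np.empty_like(u)
    for k, uk in enumerate(u):
        x = 0.8 * x + 0.1 * x * x + uk
        y[k] = x
    return y

def invert(y_ref, iters=20, eps=1e-6):
    """Offline model inversion by Newton's method: solve y(u) = y_ref
    for the input sequence u, with a finite-difference Jacobian."""
    n = len(y_ref)
    u = np.zeros(n)
    for _ in range(iters):
        r = simulate(u) - y_ref
        if np.abs(r).max() < 1e-10:
            break
        base = simulate(u)
        J = np.empty((n, n))
        for j in range(n):            # column j: sensitivity of y to u_j
            du = np.zeros(n)
            du[j] = eps
            J[:, j] = (simulate(u + du) - base) / eps
        u = u - np.linalg.solve(J, r)
    return u

y_ref = np.sin(np.linspace(0.0, 2.0 * np.pi, 20))
u = invert(y_ref)
tracking_error = np.abs(simulate(u) - y_ref).max()
```

In an ILC setting the inverted input serves as the feedforward update for the next trial; note that Newton inversion of this kind does not require the model to be minimum phase.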
Wang, Ophelia; Zachmann, Luke J; Sesnie, Steven E; Olsson, Aaryn D; Dickson, Brett G
2014-01-01
Prioritizing areas for management of non-native invasive plants is critical, as invasive plants can negatively impact plant community structure. Extensive and multi-jurisdictional inventories are essential to prioritize actions aimed at mitigating the impact of invasions and changes in disturbance regimes. However, previous work devoted little effort to devising sampling methods sufficient to assess the scope of multi-jurisdictional invasion over extensive areas. Here we describe a large-scale sampling design that used species occurrence data, habitat suitability models, and iterative and targeted sampling efforts to sample five species and satisfy two key management objectives: 1) detecting non-native invasive plants across previously unsampled gradients, and 2) characterizing the distribution of non-native invasive plants at landscape to regional scales. Habitat suitability models of five species were based on occurrence records and predictor variables derived from topography, precipitation, and remotely sensed data. We stratified and established field sampling locations according to predicted habitat suitability and phenological, substrate, and logistical constraints. Across previously unvisited areas, we detected at least one of our focal species on 77% of plots. In turn, we used detections from 2011 to improve habitat suitability models and sampling efforts in 2012, as well as additional spatial constraints to increase detections. These modifications resulted in a 96% detection rate at plots. The range of habitat suitability values that identified highly and less suitable habitats and their environmental conditions corresponded to field detections with mixed levels of agreement. Our study demonstrated that an iterative and targeted sampling framework can address sampling bias, reduce time costs, and increase detections. Other studies can extend the sampling framework to develop methods in other ecosystems to provide detection data. The sampling methods
Application of Gauss's law space-charge limited emission model in iterative particle tracking method
Altsybeyev, V. V.; Ponomarev, V. A.
2016-11-01
The particle tracking method with a so-called gun iteration for modeling the space charge is discussed in this paper. We suggest applying an emission model based on Gauss's law for the calculation of the space-charge-limited current density distribution within this method. Based on the presented emission model, we have developed a numerical algorithm for these calculations. This approach allows us to perform accurate and computationally inexpensive numerical simulations for different vacuum sources with curved emitting surfaces, also in the presence of additional physical effects such as bipolar flows and backscattered electrons. The results of simulations of a cylindrical diode and a diode with an elliptical emitter, using axisymmetric coordinates, are presented. The high efficiency and accuracy of the suggested approach are confirmed by the obtained results and by comparisons with analytical solutions.
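As a sanity check for any space-charge-limited emission model, the planar limit should recover the analytic Child-Langmuir law. A short sketch (the voltage and gap spacing are illustrative, not taken from the paper):

```python
import math

def child_langmuir_j(V, d):
    """Space-charge-limited current density (A/m^2) of an ideal planar
    vacuum diode: J = (4*eps0/9) * sqrt(2*e/m_e) * V**1.5 / d**2."""
    eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
    e = 1.602176634e-19       # elementary charge, C
    m_e = 9.1093837015e-31    # electron mass, kg
    return (4.0 * eps0 / 9.0) * math.sqrt(2.0 * e / m_e) * V ** 1.5 / d ** 2

j = child_langmuir_j(V=1000.0, d=0.01)   # 1 kV across a 1 cm planar gap
```

The Gauss's-law emission model in the paper generalizes exactly this limit to curved emitters, where no closed-form expression exists.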
Ciotti, M.; Nijhuis, A.; Ribani, P. L.; Savoldi Richard, L.; Zanino, R.
2006-10-01
The new THELMA code, including a thermal-hydraulic (TH) and an electro-magnetic (EM) model of a cable-in-conduit conductor (CICC), has been developed. The TH model is at this stage relatively conventional, with two fluid components (He flowing in the annular cable region and He flowing in the central channel) being particular to the CICC of the International Thermonuclear Experimental Reactor (ITER), and two solid components (superconducting strands and jacket/conduit). In contrast, the EM model is novel and will be presented here in full detail. The results obtained from this first version of the code are compared with experimental results from pulsed tests of the ENEA stability experiment (ESE), showing good agreement between computed and measured deposited energy and subsequent temperature increase.
Bayramoglu, Beste; Faller, Roland
2011-03-01
We present systematic coarse-graining of several polystyrene models and test their performance under confinement and eventually in brush systems. The structural properties of a dilute polystyrene solution, a polystyrene melt and a confined concentrated polystyrene solution at 450K, 1 bar were investigated in detail by atomistic molecular dynamics simulations of these systems. Coarse-graining of the models was performed by Iterative Boltzmann Inversion Technique (IBI), in which the interaction potentials are optimized against the structure of the corresponding atomistically simulated systems. Radial distribution functions, bond, angle and dihedral angle probability distributions were calculated and compared to characterize the structure of the systems. Good agreement between the simulation results of the coarse-grained and atomistic models was observed.
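The core of the Iterative Boltzmann Inversion technique named above is a single tabulated correction, V_{k+1}(r) = V_k(r) + kT·ln[g_k(r)/g_target(r)]. A minimal sketch on synthetic radial distribution functions (the RDFs and reduced units are illustrative, not polystyrene data):

```python
import numpy as np

kT = 1.0   # thermal energy in reduced units

def ibi_update(V, g_current, g_target):
    """One Iterative Boltzmann Inversion step:
    V_{k+1}(r) = V_k(r) + kT * ln(g_k(r) / g_target(r)).
    Bins where either RDF vanishes are left uncorrected."""
    corr = np.zeros_like(V)
    ok = (g_current > 0.0) & (g_target > 0.0)
    corr[ok] = kT * np.log(g_current[ok] / g_target[ok])
    return V + corr

# Synthetic radial distribution functions on a radial grid
r = np.linspace(0.5, 3.0, 100)
g_target = 1.0 + 0.3 * np.exp(-(r - 1.0) ** 2 / 0.05)   # "atomistic" RDF
g_current = 1.0 + 0.2 * np.exp(-(r - 1.1) ** 2 / 0.05)  # coarse-grained RDF
V0 = -kT * np.log(g_target)        # initial guess: potential of mean force
V1 = ibi_update(V0, g_current, g_target)
```

In practice each update is followed by a new coarse-grained simulation to remeasure g(r), and the loop repeats until the coarse-grained RDF matches the atomistic target.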
An Iterative Method for the Construction of Equilibrium N-Body Models for Stellar Disks
Rodionov, S A
2006-01-01
One widely used technique for the construction of equilibrium models of stellar disks is based on the Jeans equations and the moments of velocity distribution functions derived using these equations. Stellar disks constructed using this technique are shown to be "not entirely" in equilibrium. Our attempt to abandon the epicyclic approximation and the approximation of infinite isothermal layers, which are commonly adopted in this technique, failed to improve the situation substantially. We conclude that the main drawback of techniques based on the Jeans equations is that the system of equations employed is not closed, and therefore requires adopting an essentially ad hoc additional closure condition. A new iterative approach to constructing equilibrium N-body models with a given density distribution is proposed. The main idea behind this approach is that a model is first constructed using some approximation method, and is then allowed to adjust to an equilibrium state with the specified density and the require...
Zephyr - the prediction models
DEFF Research Database (Denmark)
Nielsen, Torben Skov; Madsen, Henrik; Nielsen, Henrik Aalborg
2001-01-01
This paper briefly describes new models and methods for predicting the wind power output from wind farms. The system is being developed in a project which has the research organization Risø and the Department of Informatics and Mathematical Modelling (IMM) as the modelling team and all the Dani...
Using a web-based, iterative education model to enhance clinical clerkships.
Alexander, Erik K; Bloom, Nurit; Falchuk, Kenneth H; Parker, Michael
2006-10-01
Although most clinical clerkship curricula are designed to provide all students consistent exposure to defined course objectives, it is clear that individual students are diverse in their backgrounds and baseline knowledge. Ideally, the learning process should be individualized towards the strengths and weaknesses of each student, but, until recently, this has proved prohibitively time-consuming. The authors describe a program to develop and evaluate an iterative, Web-based educational model assessing medical students' knowledge deficits and allowing targeted teaching shortly after their identification. Beginning in 2002, a new educational model was created, validated, and applied in a prospective fashion to medical students during an internal medicine clerkship at Harvard Medical School. Using a Web-based platform, five validated questions were delivered weekly and a specific knowledge deficiency identified. Teaching targeted to the deficiency was provided to an intervention cohort of five to seven students in each clerkship, though not to controls (the remaining 7-10 students). Effectiveness of this model was assessed by performance on the following week's posttest question. Specific deficiencies were readily identified weekly using this model. Throughout the year, however, deficiencies varied unpredictably. Teaching targeted to deficiencies resulted in significantly better performance on follow-up questioning compared to the performance of those who did not receive this intervention. This model was easily applied in an additive fashion to the current curriculum, and student acceptance was high. The authors conclude that a Web-based, iterative assessment model can effectively target specific curricular needs unique to each group; focus teaching in a rapid, formative, and highly efficient manner; and may improve the efficiency of traditional clerkship teaching.
Fertitta, D. A.; Macdonald, A. M.; Rypina, I.
2015-12-01
In the aftermath of the 2011 Fukushima nuclear power plant accident, it became critical to determine how radionuclides, both from atmospheric deposition and direct ocean discharge, were spreading in the ocean. One successful method used drifter observations from the Global Drifter Program (GDP) to predict the timing of the spread of surface contamination. U.S. coasts are home to a number of nuclear power plants as well as other industries capable of leaking contamination into the surface ocean. Here, the spread of surface contamination from a hypothetical accident at the existing Pilgrim nuclear power plant on the coast of Massachusetts is used as an example to show how the historical drifter dataset can be used as a prediction tool. Our investigation uses a combined dataset of drifter tracks from the GDP and the NOAA Northeast Fisheries Science Center. Two scenarios are examined to estimate the spread of surface contamination: a local direct leakage scenario and a broader atmospheric deposition scenario that could result from an explosion. The local leakage scenario is used to study the spread of contamination within and beyond Cape Cod Bay, and the atmospheric deposition scenario is used to study the large-scale spread of contamination throughout the North Atlantic Basin. A multiple-iteration method of estimating probability makes best use of the available drifter data. This technique, which allows for direct observationally-based predictions, can be applied anywhere that drifter data are available to calculate estimates of the likelihood and general timing of the spread of surface contamination in the ocean.
Test of the ITER TF Insert and Central Solenoid Model Coil
Energy Technology Data Exchange (ETDEWEB)
Martovetsky, N; Takayasu, M; Minervini, J; Isono, T; Sugimoto, M; Kato, T; Kawano, K; Koisumi, N; Nakajima, H; Nunova, Y; Okuno, K; Tsuji, H; Oshikiri, M; Mitchell, N; Takahashi, Y; Egorov, S; Rodin, I; Zanino, R; Savoldi, L
2002-07-29
The Central Solenoid Model Coil (CSMC) was designed and built by ITER collaboration between the European Union, Japan, Russian Federation and the United States in 1993-2001. Three heavily instrumented insert coils have been also built for testing in the background field of the CSMC to cover a wide operational space. The TF Insert was designed and built by the Russian Federation to simulate the conductor performance under the ITER TF coil conditions. The TF Insert Coil was tested in the CSMC Test Facility at the Japan Atomic Energy Research Institute, Naka, Japan in September-October 2001. Some measurements were performed also on the CSMC to study effects of electromagnetic and cooldown cycles. The TF Insert coil was charged successfully, without training, in the background field of the CSMC to the design current of 46 kA at 13 T peak field. The TF Insert met or exceeded all design objectives, however some interesting results require thorough analyses. This paper presents an overview of the main results of the testing: magnet critical parameters, ac losses, joint performance, effect of cycles on performance, quench and thermo-hydraulic characteristics, and some results of the post-test analysis.
Energy Technology Data Exchange (ETDEWEB)
Lee, Eun Chae; Kim, Yeo Koon; Chun, Eun Ju; Choi, Sang IL [Dept. of of Radiology, Seoul National University Bundang Hospital, Seongnam (Korea, Republic of)
2016-05-15
To assess the performance of the model-based iterative reconstruction (MBIR) technique for evaluation of coronary artery stents on coronary CT angiography (CCTA). Twenty-two patients with coronary stent implantation who underwent CCTA were retrospectively enrolled for comparison of image quality between filtered back projection (FBP), adaptive statistical iterative reconstruction (ASIR) and MBIR. In each data set, image noise was measured as the standard deviation of the measured attenuation units within circular regions of interest in the ascending aorta (AA) and left main coronary artery (LM). To objectively assess the noise and blooming artifacts in coronary stents, we additionally measured the standard deviation of the measured attenuation and the intra-luminal stent diameters of 35 stents in total with dedicated software. Image noise measured in the AA (all p < 0.001), LM (p < 0.001, p = 0.001) and coronary stents (all p < 0.001) was significantly lower with MBIR than with FBP or ASIR. Intraluminal stent diameter was significantly higher with MBIR, as compared with ASIR or FBP (p < 0.001, p = 0.001). MBIR can reduce image noise and blooming artifact from the stent, leading to better in-stent assessment in patients with coronary artery stents.
Kuo, Yu; Lin, Yi-Yang; Lee, Rheun-Chuan; Lin, Chung-Jung; Chiou, Yi-You; Guo, Wan-Yuo
2016-08-01
The purpose of this study was to compare the image noise-reducing abilities of iterative model reconstruction (IMR) with those of traditional filtered back projection (FBP) and statistical iterative reconstruction (IR) in abdominal computed tomography (CT) images. This institutional review board-approved retrospective study enrolled 103 patients; informed consent was waived. Urinary bladder (n = 83) and renal cysts (n = 44) were used as targets for evaluating imaging quality. Raw data were retrospectively reconstructed using FBP, statistical IR, and IMR. Objective image noise and signal-to-noise ratio (SNR) were calculated and analyzed using one-way analysis of variance. Subjective image quality was evaluated and analyzed using Wilcoxon signed-rank test with Bonferroni correction. Objective analysis revealed a reduction in image noise for statistical IR compared with that for FBP, with no significant differences in SNR. In the urinary bladder group, IMR achieved up to 53.7% noise reduction, demonstrating a superior performance to that of statistical IR. IMR also yielded a significantly superior SNR to that of statistical IR. Similar results were obtained in the cyst group. Subjective analysis revealed reduced image noise for IMR, without inferior margin delineation or diagnostic confidence. IMR reduced noise and increased SNR to greater degrees than did FBP and statistical IR. Applying the IMR technique to abdominal CT imaging has potential for reducing the radiation dose without sacrificing imaging quality.
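The objective metrics used in the two CT reconstruction studies above reduce to simple region-of-interest statistics. A minimal sketch (the synthetic image, ROI placement, and attenuation values are illustrative, not patient data):

```python
import numpy as np

def roi_noise_and_snr(image, mask):
    """Objective metrics as used in the CT studies: noise is the standard
    deviation of attenuation values in the region of interest (ROI), and
    SNR is the mean ROI attenuation divided by that noise."""
    vals = image[mask]
    noise = vals.std(ddof=1)
    return noise, vals.mean() / noise

# Synthetic noisy uniform patch standing in for an ROI (e.g. in the
# ascending aorta); values are HU-like but arbitrary.
rng = np.random.default_rng(1)
img = rng.normal(loc=300.0, scale=20.0, size=(64, 64))
mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 20:40] = True                 # 20x20 pixel ROI
noise, snr = roi_noise_and_snr(img, mask)
```

Comparing these two numbers across reconstructions of the same raw data is exactly how the noise reductions and SNR gains quoted for MBIR and IMR were obtained.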
Plasma burn-through simulations using the DYON code and predictions for ITER
Kim, Hyun-Tae; de Vries, P C; Contributors, JET-EFDA
2014-01-01
This paper will discuss simulations of the full ionization process (i.e. plasma burn-through), fundamental to creating high temperature plasma. By means of an applied electric field, the gas is partially ionized by the electron avalanche process. In order for the electron temperature to increase, the remaining neutrals need to be fully ionized in the plasma burn-through phase, as radiation is the main contribution to the electron power loss. The radiated power loss can be significantly affected by impurities resulting from interaction with the plasma facing components. The DYON code is a plasma burn-through simulator developed at Joint European Torus (JET) [1] [2]. The dynamic evolution of the plasma temperature and plasma densities including impurity content is calculated in a self-consistent way, using plasma wall interaction models. The recent installation of a beryllium wall at JET enabled validation of the plasma burn-through model in the presence of new, metallic plasma facing components. The simulation...
Two grid iteration with a conjugate gradient fine grid smoother applied to a groundwater flow model
Energy Technology Data Exchange (ETDEWEB)
Hagger, M.J.; Spence, A.; Cliffe, K.A.
1994-12-31
This talk is concerned with the efficient solution of Ax = b, where A is a large, sparse, symmetric positive definite matrix arising from a standard finite element discretisation of the groundwater flow problem ∇·(k∇p) = 0. Here k is the coefficient of rock permeability in applications and is highly discontinuous. The discretisation is carried out using the Harwell NAMMU finite element package, using 9-node biquadratic rectangular elements for 2D and 27-node elements for 3D. The aim is to develop a robust technique for iterative solutions of 3D problems based on a regional groundwater flow model of a geological area with sharply varying hydrogeological properties. Numerical experiments with polynomial preconditioned conjugate gradient methods on a 2D groundwater flow model were found to yield very poor results, converging very slowly. In order to utilise the fact that A comes from the discretisation of a PDE, the authors try the two grid method, which is well analysed in studies of multigrid methods; see for example "Multi-Grid Methods and Applications" by W. Hackbusch. Specifically, they consider two discretisations resulting in stiffness matrices A_N and A_n, of size N and n respectively, where N > n, for both a model problem and the geological model. They perform a number of conjugate gradient steps on the fine grid, i.e. using A_N, followed by an exact coarse grid solve, using A_n, and then update the fine grid solution, the exact coarse grid solve being done using a frontal method factorisation of A_n. Note that in the context of the standard two grid method this is equivalent to using conjugate gradients as a fine grid smoothing step. Experimental results are presented to show the superiority of the two grid iteration method over the polynomial preconditioned conjugate gradient method.
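The two-grid cycle described, a few fine-grid CG steps, an exact coarse solve, then a correction, can be sketched on a 1D model Poisson problem. This is a toy stand-in for the groundwater equation: the Galerkin coarse operator, grid sizes, and a dense solve in place of the frontal factorisation are all illustrative choices.

```python
import numpy as np

def poisson_matrix(m):
    """1D Laplacian (Dirichlet) on m interior points."""
    return (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
            - np.diag(np.ones(m - 1), -1))

def prolongation(m_coarse):
    """Linear interpolation from m_coarse to 2*m_coarse + 1 interior points."""
    m_fine = 2 * m_coarse + 1
    P = np.zeros((m_fine, m_coarse))
    for j in range(m_coarse):
        i = 2 * j + 1          # fine index coinciding with coarse node j
        P[i, j] = 1.0
        P[i - 1, j] += 0.5
        P[i + 1, j] += 0.5
    return P

def cg_steps(A, b, x, k):
    """k plain conjugate-gradient iterations, used here as a smoother."""
    r = b - A @ x
    p = r.copy()
    for _ in range(k):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return x

m_fine, m_coarse = 63, 31          # m_fine = 2*m_coarse + 1
A = poisson_matrix(m_fine)
P = prolongation(m_coarse)
R = 0.5 * P.T                      # full-weighting restriction
Ac = R @ A @ P                     # Galerkin coarse-grid operator

rng = np.random.default_rng(0)
b = rng.normal(size=m_fine)
x = np.zeros(m_fine)
for cycle in range(10):
    x = cg_steps(A, b, x, 3)                      # fine-grid CG smoothing
    e = P @ np.linalg.solve(Ac, R @ (b - A @ x))  # exact coarse correction
    x = x + e
res = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
print(f"relative residual after 10 two-grid cycles: {res:.2e}")
```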
Energy Technology Data Exchange (ETDEWEB)
Ortuno, J E; Kontaxakis, G; Rubio, J L; Santos, A [Departamento de Ingenieria Electronica (DIE), Universidad Politecnica de Madrid, Ciudad Universitaria s/n, 28040 Madrid (Spain); Guerra, P [Networking Research Center on Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN), Madrid (Spain)], E-mail: juanen@die.upm.es
2010-04-07
A fully 3D iterative image reconstruction algorithm has been developed for high-resolution PET cameras composed of pixelated scintillator crystal arrays and rotating planar detectors, based on the ordered subsets approach. The associated system matrix is precalculated with Monte Carlo methods that incorporate physical effects not included in analytical models, such as positron range effects and interaction of the incident gammas with the scintillator material. Custom Monte Carlo methodologies have been developed and optimized for modelling of system matrices for fast iterative image reconstruction adapted to specific scanner geometries, without redundant calculations. According to the methodology proposed here, only one-eighth of the voxels within two central transaxial slices need to be modelled in detail. The rest of the system matrix elements can be obtained with the aid of axial symmetries and redundancies, as well as in-plane symmetries within transaxial slices. Sparse matrix techniques for the non-zero system matrix elements are employed, allowing for fast execution of the image reconstruction process. This 3D image reconstruction scheme has been compared in terms of image quality to a 2D fast implementation of the OSEM algorithm combined with Fourier rebinning approaches. This work confirms the superiority of fully 3D OSEM in terms of spatial resolution, contrast recovery and noise reduction as compared to conventional 2D approaches based on rebinning schemes. At the same time it demonstrates that fully 3D methodologies can be efficiently applied to the image reconstruction problem for high-resolution rotational PET cameras by applying accurate pre-calculated system models and taking advantage of the system's symmetries.
Analysis of the cooldown of the ITER central solenoid model coil and insert coil
Bonifetto, R.; Brighenti, A.; Isono, T.; Martovetsky, N.; Kawano, K.; Savoldi, L.; Zanino, R.
2017-01-01
A series of superconducting insert coils (ICs) made of different materials has been tested since 2000 at JAEA Naka in the bore of the central solenoid model coil, at fields up to 13 T and currents up to several tens of kA, fully representative of the ITER operating conditions. Here we focus on the 2015 test of the latest IC in the series, the central solenoid (CS) insert coil, which was aimed at confirming the performance and properties of the Nb3Sn conductor, manufactured in Japan and used to wind the ITER CS modules in the US. As is typical for these large scale applications, the cooldown (CD) from ambient to supercritical He temperature may take a long time, of the order of several weeks, so it would be useful, also in view of future IC tests, to optimize it. To that purpose, a comprehensive CD model implemented in the 4C code is developed and presented in this paper. The model is validated against the experimental data of an actual CD scenario, showing very good agreement between simulation and measurements from 300 to 4.5 K. The maximum temperature difference across the coil, which can only be roughly estimated from the measurements, is then extracted from the results of the simulation and shown to be much larger than the maximum value of 50 K prescribed on the basis of the allowable thermal stress on the materials. An optimized CD scenario is finally designed using the model for the initial phase of the CD between 300 and 80 K, which reduces the time needed by ∼20% while still satisfying the major constraints. Recommendations are also given for a better location/choice of the thermometers to be used for monitoring the maximum temperature difference across the coil.
Aumeunier, M-H; Travere, J-M
2010-10-01
In nuclear fusion experiments, the plasma facing components are exposed to high heat fluxes, and infrared (IR) imaging diagnostics are routinely used to survey their surface temperature and prevent damage. However, the future use of metallic components in the ITER tokamak complicates temperature estimation. Indeed, the low and variable emissivity of the observed surfaces and the multiple reflections of light coming from hot regions will have to be understood and then taken into account. In this paper, a realistic photonic modeling based on Monte Carlo ray-tracing codes is used to predict the global response of the complete IR survey system. This also includes the complex vessel geometry and the thermal and optical surface properties, using the bidirectional reflectivity distribution function to model the photon-material interactions. The first results of this simulation applied to a reference torus are presented and are used as a benchmark to investigate the validity of the global model. Finally, the most critical key model parameters in the reflected signals are identified and their contribution is discussed.
Nonconvex model predictive control for commercial refrigeration
Gybel Hovgaard, Tobias; Boyd, Stephen; Larsen, Lars F. S.; Bagterp Jørgensen, John
2013-08-01
We consider the control of a commercial multi-zone refrigeration system, which consists of several cooling units that share a common compressor and is used to cool multiple areas or rooms. In each time period we choose the cooling capacity of each unit and a common evaporation temperature. The goal is to minimise the total energy cost, using real-time electricity prices, while obeying temperature constraints on the zones. We propose a variation on model predictive control to achieve this goal. When the right variables are used, the dynamics of the system are linear and the constraints are convex. The cost function, however, is nonconvex due to the temperature dependence of thermodynamic efficiency. To handle this nonconvexity we propose a sequential convex optimisation method, which typically converges in around five iterations or fewer. We employ a fast convex quadratic programming solver to carry out the iterations, which is more than fast enough to run in real time. We demonstrate our method on a realistic model, with a full year simulation and 15-minute time periods, using historical electricity prices and weather data, as well as random variations in thermal load. These simulations show substantial cost savings, on the order of 30%, compared to a standard thermostat-based control system. Perhaps more important, we see that the method exhibits a sophisticated response to real-time variations in electricity prices. This demand response is critical to help balance real-time uncertainties in generation capacity associated with large penetration of intermittent renewable energy sources in a future smart grid.
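The sequential convex optimisation idea, replacing the nonconvex term by a convex approximation around the current iterate and re-solving, can be illustrated on a scalar toy cost. The function below is a made-up stand-in for the temperature-dependent efficiency term, not the paper's refrigeration model:

```python
import math

# Toy nonconvex cost: convex quadratic plus a nonconvex cosine term
def f(x):
    return (x - 2.0) ** 2 + 0.5 * math.cos(x)

def df(x):
    return 2.0 * (x - 2.0) - 0.5 * math.sin(x)

# Sequential convex optimisation: linearise cos(x) at the iterate x_k, so each
# subproblem  min_x (x-2)^2 + 0.5*(cos(x_k) - sin(x_k)*(x - x_k))
# is convex quadratic with the closed-form minimiser x = 2 + 0.25*sin(x_k).
x = 0.0
for _ in range(30):
    x = 2.0 + 0.25 * math.sin(x)

print(f"stationary point ~ {x:.6f}, gradient {df(x):.2e}")
```

At a fixed point of the update, the subproblem optimality condition coincides with df(x) = 0, so the iterate is a stationary point of the original nonconvex cost.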
In vitro identification of four-element windkessel models based on iterated unscented Kalman filter.
Huang, Huan; Yang, Ming; Zang, Wangfu; Wu, Shunjie; Pang, Yafei
2011-09-01
Mock circulatory loops (MCLs) have been widely used to test left ventricular assist devices. The hydraulic properties of the mock systemic arterial system are usually described by two alternative four-element windkessel (W4) models. Compared with three-element windkessel model, their parameters, especially the inertial term, are much more difficult to estimate. In this paper, an estimator based on the iterated unscented Kalman filter (IUKF) algorithm is proposed to identify model parameters. Identifiability of these parameters for different measurements is described. Performance of the estimator for different model structures is first evaluated using numerical simulation data contaminated with artificial noise. An MCL is developed to test the proposed algorithm. Parameter estimates for different models are compared with the calculated values derived from the mechanical and hydraulic properties of the MCL to validate model structures. In conclusion, the W4 model with an inertance and an aortic characteristic resistance arranged in series is proposed to represent the mock systemic arterial system. Once model structure is appropriately selected, IUKF can provide reasonable estimation accuracy in a limited time and may be helpful for future clinical applications.
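The W4 structures themselves are compact ODE models. As a rough illustration, here is a forward simulation of the series-inertance W4 variant the authors favour, with invented parameter values and a prescribed half-sine ejection flow; nothing below reproduces the paper's MCL estimates:

```python
import numpy as np

# Illustrative series-W4 parameters (invented, not the paper's estimates):
# characteristic resistance Rc, inertance L, peripheral resistance R, compliance C
Rc, L, R, C = 0.05, 0.005, 1.1, 1.3
T, dt = 0.8, 1e-4                      # cardiac period [s], time step [s]
t = np.arange(0.0, 10 * T, dt)

# Prescribed half-sine ejection flow during systole, zero in diastole [mL/s]
Q = np.where((t % T) < 0.3, 350.0 * np.sin(np.pi * (t % T) / 0.3), 0.0)
dQ = np.gradient(Q, dt)

# Windkessel state: C*dPc/dt = Q - Pc/R (forward Euler); the inlet pressure
# adds the series terms: P_in = Pc + Rc*Q + L*dQ/dt
Pc = np.empty_like(t)
Pc[0] = 80.0
for i in range(len(t) - 1):
    Pc[i + 1] = Pc[i] + dt * (Q[i] - Pc[i] / R) / C
P_in = Pc + Rc * Q + L * dQ

print(f"mean inlet pressure over the last beat: {P_in[-8000:].mean():.1f} mmHg")
```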
Energy Technology Data Exchange (ETDEWEB)
Almansouri, Hani [Purdue University; Johnson, Christi R [ORNL; Clayton, Dwight A [ORNL; Polsky, Yarom [ORNL; Bouman, Charlie [Purdue University; Santos-Villalobos, Hector J [ORNL
2017-01-01
All commercial nuclear power plants (NPPs) in the United States contain concrete structures. These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and the degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Concrete structures in NPPs are often inaccessible and contain large volumes of massively thick concrete. While acoustic imaging using the synthetic aperture focusing technique (SAFT) works adequately well for thin specimens of concrete such as concrete transportation structures, enhancements are needed for heavily reinforced, thick concrete. We argue that image reconstruction quality for acoustic imaging in thick concrete could be improved with Model-Based Iterative Reconstruction (MBIR) techniques. MBIR works by designing a probabilistic model for the measurements (forward model) and a probabilistic model for the object (prior model). Both models are used to formulate an objective function (cost function). The final step in MBIR is to optimize the cost function. Previously, we have demonstrated a first implementation of MBIR for an ultrasonic transducer array system. The original forward model has been upgraded to account for direct arrival signal. Updates to the forward model will be documented and the new algorithm will be assessed with synthetic and empirical samples.
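The forward-model-plus-prior cost structure described is generic. A minimal sketch with a linear forward model, a Gaussian data-fit term, and a quadratic prior, minimised by gradient descent; the operator and data are random stand-ins, not an ultrasound model:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(40, 20))                 # stand-in (linearised) forward model
x_true = rng.normal(size=20)
y = A @ x_true + 0.05 * rng.normal(size=40)   # noisy measurements
lam = 0.1                                     # prior (regularisation) weight

def cost(x):
    """MBIR-style objective: data-fit term + prior term."""
    return 0.5 * np.sum((y - A @ x) ** 2) + 0.5 * lam * np.sum(x ** 2)

def grad(x):
    return A.T @ (A @ x - y) + lam * x

# Gradient descent with a step based on the Lipschitz constant of the gradient
L = np.linalg.eigvalsh(A.T @ A).max() + lam
x = np.zeros(20)
c0 = cost(x)
for _ in range(2000):
    x -= grad(x) / L

print(f"cost: {c0:.1f} -> {cost(x):.3f}, |grad| = {np.linalg.norm(grad(x)):.1e}")
```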
Confidence scores for prediction models
DEFF Research Database (Denmark)
Gerds, Thomas Alexander; van de Wiel, MA
2011-01-01
modelling strategy is applied to different training sets. For each modelling strategy we estimate a confidence score based on the same repeated bootstraps. A new decomposition of the expected Brier score is obtained, as well as the estimates of population average confidence scores. The latter can be used...... to distinguish rival prediction models with similar prediction performances. Furthermore, on the subject level a confidence score may provide useful supplementary information for new patients who want to base a medical decision on predicted risk. The ideas are illustrated and discussed using data from cancer...
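The Brier score underlying the decomposition is simply the mean squared difference between predicted risk and the 0/1 outcome. A minimal sketch with toy predictions, not the paper's data:

```python
import numpy as np

def brier_score(p, y):
    """Mean squared error between predicted probabilities p and 0/1 outcomes y."""
    p, y = np.asarray(p, float), np.asarray(y, float)
    return float(np.mean((p - y) ** 2))

# Toy example: a sharper, well-calibrated model scores lower (better)
# than a vague model that always predicts 0.5
y = np.array([1, 0, 1, 1, 0, 0])
sharp = np.array([0.9, 0.1, 0.8, 0.7, 0.2, 0.3])
vague = np.full(6, 0.5)
print(brier_score(sharp, y), brier_score(vague, y))  # sharp ~ 0.047, vague = 0.25
```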
Modelling, controlling, predicting blackouts
Wang, Chengwei; Baptista, Murilo S
2016-01-01
The electric power system is one of the cornerstones of modern society. One of its most serious malfunctions is the blackout, a catastrophic event that may disrupt a substantial portion of the system, wreaking havoc on human life and causing great economic losses. Thus, understanding the mechanisms leading to blackouts and creating a reliable and resilient power grid has been a major issue, attracting the attention of scientists, engineers and stakeholders. In this paper, we study the blackout problem in power grids by considering a practical phase-oscillator model. This model allows one to simultaneously consider different types of power sources (e.g., traditional AC power plants and renewable power sources connected by DC/AC inverters) and different types of loads (e.g., consumers connected to distribution networks and consumers directly connected to power plants). We propose two new control strategies based on our model, one for traditional power grids, and another one for smart grids. The control strategie...
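A minimal version of such a phase-oscillator (swing-equation) grid model: one generator and two loads with all-to-all coupling. The parameters are invented for illustration and do not come from the paper:

```python
import numpy as np

# Second-order Kuramoto / swing model:
#   M*theta_i'' + D*theta_i' = P_i + K * sum_j sin(theta_j - theta_i)
P = np.array([1.0, -0.5, -0.5])   # net power: one generator, two loads (sums to 0)
M, D, K = 1.0, 1.0, 2.0
dt, steps = 0.01, 5000

theta = np.array([0.1, 0.0, -0.1])
omega = np.zeros(3)
for _ in range(steps):
    # coupling[i] = K * sum_j sin(theta_j - theta_i)
    coupling = K * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    omega += dt * (P + coupling - D * omega) / M   # semi-implicit Euler
    theta += dt * omega

print("final frequency deviations:", np.round(omega, 6))
```

Because generation and load balance (the entries of P sum to zero) and the coupling is strong enough, the network settles into a synchronous state with all frequency deviations near zero; a blackout-like loss of synchrony appears when that balance or coupling is degraded.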
Energy Technology Data Exchange (ETDEWEB)
Los Alamos National Laboratory, Mailstop M888, Los Alamos, NM 87545, USA; Lawrence Berkeley National Laboratory, One Cyclotron Road, Building 64R0121, Berkeley, CA 94720, USA; Department of Haematology, University of Cambridge, Cambridge CB2 0XY, England; Terwilliger, Thomas; Terwilliger, T.C.; Grosse-Kunstleve, Ralf Wilhelm; Afonine, P.V.; Moriarty, N.W.; Zwart, P.H.; Hung, L.-W.; Read, R.J.; Adams, P.D.
2007-04-29
The PHENIX AutoBuild Wizard is a highly automated tool for iterative model-building, structure refinement and density modification using RESOLVE or TEXTAL model-building, RESOLVE statistical density modification, and phenix.refine structure refinement. Recent advances in the AutoBuild Wizard and phenix.refine include automated detection and application of NCS from models as they are built, extensive model completion algorithms, and automated solvent molecule picking. Model completion algorithms in the AutoBuild Wizard include loop-building, crossovers between chains in different models of a structure, and side-chain optimization. The AutoBuild Wizard has been applied to a set of 48 structures at resolutions ranging from 1.1 Å to 3.2 Å, resulting in a mean R-factor of 0.24 and a mean free R-factor of 0.29. The R-factor of the final model is dependent on the quality of the starting electron density, and relatively independent of resolution.
Testing and modeling of diffusion bonded prototype optical windows under ITER conditions
Jacobs, M.; Oost, G. van; Degrieck, J.; Baere, I. De; Gusarov, A.; Gubbels, F.; Massaut, V.
2011-01-01
Glass-metal joints are a part of ITER optical diagnostics windows. These joints must be leak tight for safety (presence of tritium in ITER) and to preserve the vacuum. They must also withstand the ITER environment: temperatures up to 220°C and fast neutron fluxes of ∼3·10⁹ n/cm²·s. At the mome
Melanoma Risk Prediction Models
Developing statistical models that estimate the probability of developing melanoma cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Barriers and strategies to an iterative model of advance care planning communication.
Ahluwalia, Sangeeta C; Bekelman, David B; Huynh, Alexis K; Prendergast, Thomas J; Shreve, Scott; Lorenz, Karl A
2015-12-01
Early and repeated patient-provider conversations about advance care planning (ACP) are now widely recommended. We sought to characterize barriers and strategies for realizing an iterative model of ACP patient-provider communication. A total of 2 multidisciplinary focus groups and 3 semistructured interviews with 20 providers at a large Veterans Affairs medical center. Thematic analysis was employed to identify salient themes. Barriers included variation among providers in approaches to ACP, lack of useful information about patient values to guide decision making, and ineffective communication between providers across settings. Strategies included eliciting patient values rather than specific treatment choices and an increased role for primary care in the ACP process. Greater attention to connecting providers across the continuum, maximizing the potential of the electronic health record, and linking patient experiences to their values may help to connect ACP communication across the continuum. © The Author(s) 2014.
Iterating block spin transformations of the O(3) nonlinear σ model
Energy Technology Data Exchange (ETDEWEB)
Gottlob, A.P. [Fachbereich Physik, Universitaet Kaiserslautern, D-67653 Kaiserslautern (Germany); Hasenbusch, M. [DAMTP, Silver Street, Cambridge, CB3 9EW (England); Pinn, K. [Institut fuer Theoretische Physik I, Universitaet Muenster, Wilhelm-Klemm-Strasse 9, D-48149 Muenster (Germany)
1996-07-01
We study the iteration of block spin transformations in the O(3) symmetric nonlinear σ model on a two-dimensional square lattice with the help of the Monte Carlo method. In contrast with the classical Monte Carlo renormalization group approach, we do attempt to explicitly compute the block spin effective actions. Using two different methods for the determination of effective couplings, we study the renormalization group flow for various parametrization and truncation schemes. The largest ansatz for the effective action contains thirteen coupling constants. Actions on the renormalized trajectory should describe theories with no lattice artifacts, even at a small correlation length. However, tests with the step scaling function of Lüscher, Weisz, and Wolff reveal that our truncated effective actions show sizable scaling violations, indicating that the Ansätze are still too small. © 1996 The American Physical Society.
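A block-spin transformation for O(3) spins can be sketched as follows: average each 2×2 block of unit vectors and project back onto the sphere. This is the simplest blocking rule, shown only for illustration; the paper's analysis concerns the effective actions, which this snippet does not compute:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_o3_spins(L):
    """L x L lattice of unit 3-vectors, uniform on the sphere."""
    s = rng.normal(size=(L, L, 3))
    return s / np.linalg.norm(s, axis=-1, keepdims=True)

def block_spin(s):
    """Average each 2x2 block of spins and renormalise to unit length."""
    blocked = (s[0::2, 0::2] + s[1::2, 0::2]
               + s[0::2, 1::2] + s[1::2, 1::2]) / 4.0
    return blocked / np.linalg.norm(blocked, axis=-1, keepdims=True)

spins = random_o3_spins(16)
coarse = block_spin(spins)
print(spins.shape, "->", coarse.shape)  # (16, 16, 3) -> (8, 8, 3)
```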
Implementation of Newton-Raphson iterations for parallel staggered-grid geodynamic models
Popov, A. A.; Kaus, B. J. P.
2012-04-01
Staggered-grid finite difference discretization has good potential for solving highly heterogeneous geodynamic models on parallel computers (e.g. Tackley, 2008; Gerya & Yuen, 2007). It is inherently stable, computationally inexpensive and relatively easy to implement. However, currently used staggered-grid geodynamic codes employ almost exclusively the sub-optimal Picard linearization scheme to deal with nonlinearities. It has been shown that Newton-Raphson linearization can lead to substantial improvements in solution quality in geodynamic problems, simultaneously with a reduction of computer time (e.g. Popov & Sobolev, 2008). This work is aimed at implementing Newton-Raphson linearization in the parallel geodynamic code LaMEM, together with staggered-grid discretization and visco-(elasto-)plastic rock rheologies. We present expressions for the approximate Jacobian matrix and give detailed comparisons with the currently employed Picard linearization scheme in terms of solution quality and number of iterations.
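The gap between Picard and Newton-Raphson iteration counts is easy to see on a scalar toy problem. This only illustrates the two linearization schemes, not the staggered-grid Stokes equations:

```python
import math

# Solve x = cos(x), i.e. f(x) = x - cos(x) = 0
tol = 1e-12

# Picard (fixed-point) iteration: linear convergence
x, picard_iters = 1.0, 0
while abs(x - math.cos(x)) > tol:
    x = math.cos(x)
    picard_iters += 1

# Newton-Raphson iteration: quadratic convergence near the root
x, newton_iters = 1.0, 0
while abs(x - math.cos(x)) > tol:
    x -= (x - math.cos(x)) / (1.0 + math.sin(x))
    newton_iters += 1

print(f"Picard: {picard_iters} iterations, Newton: {newton_iters} iterations")
```

Picard needs dozens of iterations (its error shrinks by a constant factor per step), while Newton reaches the same tolerance in a handful.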
Prediction models in complex terrain
DEFF Research Database (Denmark)
Marti, I.; Nielsen, Torben Skov; Madsen, Henrik
2001-01-01
The objective of the work is to investigate the performance of HIRLAM in complex terrain when used as input to energy production forecasting models, and to develop a statistical model to adapt HIRLAM predictions to the wind farm. The features of the terrain, especially the topography, influence...... are calculated using on-line measurements of power production as well as HIRLAM predictions as input, thus taking advantage of the auto-correlation which is present in the power production for shorter prediction horizons. Statistical models are used to describe the relationship between observed energy production...... and HIRLAM predictions. The statistical models belong to the class of conditional parametric models. The models are estimated using local polynomial regression, but the estimation method is here extended to be adaptive in order to allow for slow changes in the system, e.g. caused by the annual variations...
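Local polynomial regression, the estimation method mentioned, fits a low-order polynomial in a kernel-weighted neighbourhood of each query point. A minimal local-linear sketch with synthetic data and an illustrative bandwidth, not the wind-farm setup:

```python
import numpy as np

def local_linear(x, y, x0, h):
    """Local linear regression estimate at x0 with a Gaussian kernel of bandwidth h."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    X = np.column_stack([np.ones_like(x), x - x0])
    # Weighted least squares: solve (X' W X) beta = X' W y
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta[0]   # intercept = fitted value at x0

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, 200))
y = np.sin(x) + 0.1 * rng.normal(size=200)

x_grid = np.linspace(1, 9, 5)
y_hat = np.array([local_linear(x, y, x0, h=0.3) for x0 in x_grid])
print(np.round(y_hat, 2))
```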
Accuracy improvement of a hybrid robot for ITER application using POE modeling method
Energy Technology Data Exchange (ETDEWEB)
Wang, Yongbo, E-mail: yongbo.wang@hotmail.com [Laboratory of Intelligent Machines, Lappeenranta University of Technology, FIN-53851 Lappeenranta (Finland); Wu, Huapeng; Handroos, Heikki [Laboratory of Intelligent Machines, Lappeenranta University of Technology, FIN-53851 Lappeenranta (Finland)
2013-10-15
Highlights: ► The product of exponentials (POE) formula for error modeling of a hybrid robot. ► Differential Evolution (DE) algorithm for parameter identification. ► Simulation results are given to verify the effectiveness of the method. -- Abstract: This paper focuses on the kinematic calibration of a 10 degree-of-freedom (DOF) redundant serial–parallel hybrid robot to improve its accuracy. The robot was designed to perform the assembly and repair tasks of the vacuum vessel (VV) of the international thermonuclear experimental reactor (ITER). By employing the product of exponentials (POE) formula, we extended the POE-based calibration method from serial robots to redundant serial–parallel hybrid robots. The proposed method combines the forward and inverse kinematics to formulate a hybrid calibration method for the serial–parallel hybrid robot. Because of the highly nonlinear characteristics of the error model and the large number of error parameters to be identified, traditional iterative linear least-squares algorithms cannot be used to identify the parameter errors. This paper employs a global optimization algorithm, Differential Evolution (DE), to identify the parameter errors by solving the inverse kinematics of the hybrid robot. Furthermore, after the parameter errors were identified, the DE algorithm was adopted to numerically solve the forward kinematics of the hybrid robot to demonstrate the accuracy improvement of the end-effector. Numerical simulations were carried out by generating random parameter errors at the allowed tolerance limit and generating a number of configuration poses in the robot workspace. Simulation of the real experimental conditions shows that the accuracy of the end-effector can be improved to the same precision level as that of the given external measurement device.
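Differential Evolution itself is compact. A minimal DE/rand/1/bin sketch identifying two parameters of a toy exponential model from data; the settings are illustrative and nothing here reproduces the robot's kinematics:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy identification problem: recover (a, b) in y = a * exp(-b * t)
t = np.linspace(0, 4, 50)
y_obs = 2.0 * np.exp(-0.5 * t)

def objective(p):
    a, b = p
    return float(np.sum((a * np.exp(-b * t) - y_obs) ** 2))

# DE/rand/1/bin with illustrative settings
n_pop, n_dim, F, CR, gens = 25, 2, 0.7, 0.9, 300
low, high = np.array([0.0, 0.0]), np.array([5.0, 5.0])
pop = rng.uniform(low, high, size=(n_pop, n_dim))
costs = np.array([objective(p) for p in pop])

for _ in range(gens):
    for i in range(n_pop):
        r1, r2, r3 = rng.choice([j for j in range(n_pop) if j != i], 3, replace=False)
        mutant = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), low, high)
        cross = rng.random(n_dim) < CR
        cross[rng.integers(n_dim)] = True     # ensure at least one gene crosses over
        trial = np.where(cross, mutant, pop[i])
        c = objective(trial)
        if c <= costs[i]:                      # greedy selection
            pop[i], costs[i] = trial, c

best = pop[np.argmin(costs)]
print(f"identified parameters: a = {best[0]:.4f}, b = {best[1]:.4f}")
```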
Zhang, Ruoqiao; Thibault, Jean-Baptiste; Bouman, Charles A; Sauer, Ken D; Hsieh, Jiang
2014-01-01
Dual-energy X-ray CT (DECT) has the potential to improve contrast and reduce artifacts as compared to traditional CT. Moreover, by applying model-based iterative reconstruction (MBIR) to dual-energy data, one might also expect to reduce noise and improve resolution. However, the direct implementation of dual-energy MBIR requires the use of a nonlinear forward model, which increases both complexity and computation. Alternatively, simplified forward models have been used which treat the material-decomposed channels separately, but these approaches do not fully account for the statistical dependencies in the channels. In this paper, we present a method for joint dual-energy MBIR (JDE-MBIR), which simplifies the forward model while still accounting for the complete statistical dependency in the material-decomposed sinogram components. The JDE-MBIR approach works by using a quadratic approximation to the polychromatic log-likelihood and a simple but exact nonnegativity constraint in the image domain. We demonstrate that our method is particularly effective when the DECT system uses fast kVp switching, since in this case the model accounts for the inaccuracy of interpolated sinogram entries. Both phantom and clinical results show that the proposed model produces images that compare favorably in quality to previous decomposition-based methods, including FBP and other statistical iterative approaches.
Butt, Z.; Haberman, S
2009-01-01
We implement a specialised iterative regression methodology in R for the analysis of age-period mortality data based on a class of generalised Lee-Carter (LC) type modelling structures. The LC-based modelling framework is viewed in the current literature as among the most efficient and transparent methods of modelling and projecting mortality improvements. Thus, we make use of the modelling approach discussed in Renshaw and Haberman (2006), which extends the basic LC model and proposes to ma...
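The basic LC decomposition log m_{x,t} = a_x + b_x k_t is commonly initialised by an SVD of the centred log-rates. A sketch on synthetic data; the paper's iterative regression refinements are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)
n_age, n_year = 20, 30

# Synthetic log mortality rates built from known LC components
a = np.linspace(-6.0, -1.0, n_age)       # age pattern
b = np.full(n_age, 1.0 / n_age)          # age sensitivities, sum to 1
k = np.linspace(10.0, -10.0, n_year)     # period index, sums to 0
log_m = a[:, None] + np.outer(b, k)

# LC estimation: a_x = row means, (b, k) from the leading singular triplet
a_hat = log_m.mean(axis=1)
U, s, Vt = np.linalg.svd(log_m - a_hat[:, None], full_matrices=False)
b_hat = U[:, 0] / U[:, 0].sum()          # normalise so sum(b) = 1
k_hat = s[0] * Vt[0] * U[:, 0].sum()     # absorb the scale so the fit is unchanged

recon = a_hat[:, None] + np.outer(b_hat, k_hat)
print(f"max reconstruction error: {np.abs(recon - log_m).max():.2e}")
```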
Modelling and engineering aspects of the plasma shape control in ITER
Energy Technology Data Exchange (ETDEWEB)
Albanese, R.; Ambrosino, G.; Coccorese, E.; Pironti, A. [Naples Univ., Dip. di Ingegneria Elettrica, Consorzio CREATE, Naples (Italy); Lister, J.B.; Ward, D.J. [Ecole Polytechnique Federale, Lausanne (Switzerland). Centre de Recherche en Physique des Plasma (CRPP)
1996-10-01
As part of the ITER Engineering Design Activity, a number of questions related to plasma control have been addressed, using linearised and non-linear simulation codes to assess the control of the plasma shape given the particular design restrictions of ITER. (author) 5 figs., 1 tab., 2 refs.
Iterative approach to modeling subsurface stormflow based on nonlinear, hillslope-scale physics
Directory of Open Access Journals (Sweden)
J. H. Spaaks
2009-08-01
Soil water transport in small, humid, upland catchments is often dominated by subsurface stormflow. Recent studies of this process suggest that at the plot scale, generation of transient saturation may be governed by threshold behavior, and that transient saturation is a prerequisite for lateral flow. The interaction between these plot scale processes yields complex behavior at the hillslope scale. We argue that this complexity should be incorporated into our models. We take an iterative approach to developing our model, starting with a very simple representation of hillslope rainfall-runoff. Next, we design new virtual experiments with which we test our model, while adding more structural complexity. In this study, we present results from three such development cycles, corresponding to three different hillslope-scale, lumped models. Model 1 is a linear tank model, which assumes transient saturation to be homogeneously distributed over the hillslope. Model 2 assumes transient saturation to be heterogeneously distributed over the hillslope, and that the spatial distribution of the saturated zone does not vary with time. Model 3 assumes that transient saturation is heterogeneous both in space and in time. We found that the homogeneity assumption underlying Model 1 resulted in hillslope discharge being too steep during the first part of the rising limb, but not steep enough on the second part. Also, peak height was underestimated. The additional complexity in Model 2 improved the simulations in terms of the fit, but not in terms of the dynamics. The threshold-based Model 3 captured most of the hydrograph dynamics (Nash-Sutcliffe efficiency of 0.98). After having assessed our models in a lumped setup, we then compared Model 1 to Model 3 in a spatially explicit setup, and evaluated what patterns of subsurface flow were possible with model elements of each type. We found
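The linear tank (the first and simplest of the three lumped models) drains storage at a rate proportional to its volume. A discrete sketch with an invented recharge series and rate constant:

```python
import numpy as np

def linear_tank(rain, k, dt=1.0, s0=0.0):
    """Linear reservoir: dS/dt = R(t) - k*S, discharge Q = k*S (forward Euler)."""
    storage, q = s0, []
    for r in rain:
        storage += dt * (r - k * storage)
        q.append(k * storage)
    return np.array(q)

# Invented forcing: a 5-step rain pulse followed by a dry spell
rain = np.array([10.0] * 5 + [0.0] * 45)
q = linear_tank(rain, k=0.2)

# During the dry spell the recession is exponential: Q[t+1]/Q[t] = 1 - k*dt
ratios = q[10:20] / q[9:19]
print(np.round(ratios, 3))  # constant 0.8
```

The constant recession ratio is the signature of the linear reservoir; the threshold behavior of the third model breaks exactly this property.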
Energy Technology Data Exchange (ETDEWEB)
Nakaura, Takeshi; Iyama, Yuji; Kidoh, Masafumi; Yokoyama, Koichi [Amakusa Medical Center, Diagnostic Radiology, Amakusa, Kumamoto (Japan); Kumamoto University, Department of Diagnostic Radiology, Graduate School of Medical Sciences, Kumamoto (Japan); Oda, Seitaro; Yamashita, Yasuyuki [Kumamoto University, Department of Diagnostic Radiology, Graduate School of Medical Sciences, Kumamoto (Japan); Tokuyasu, Shinichi [Philips Electronics, Kumamoto (Japan); Harada, Kazunori [Amakusa Medical Center, Department of Surgery, Kumamoto (Japan)
2016-03-15
The purpose of this study was to evaluate the utility of iterative model reconstruction (IMR) in brain CT especially with thin-slice images. This prospective study received institutional review board approval, and prior informed consent to participate was obtained from all patients. We enrolled 34 patients who underwent brain CT and reconstructed axial images with filtered back projection (FBP), hybrid iterative reconstruction (HIR) and IMR with 1 and 5 mm slice thicknesses. The CT number, image noise, contrast, and contrast noise ratio (CNR) between the thalamus and internal capsule, and the rate of increase of image noise in 1 and 5 mm thickness images between the reconstruction methods, were assessed. Two independent radiologists assessed image contrast, image noise, image sharpness, and overall image quality on a 4-point scale. The CNRs in 1 and 5 mm slice thickness were significantly higher with IMR (1.2 ± 0.6 and 2.2 ± 0.8, respectively) than with FBP (0.4 ± 0.3 and 1.0 ± 0.4, respectively) and HIR (0.5 ± 0.3 and 1.2 ± 0.4, respectively) (p < 0.01). The mean rate of increasing noise from 5 to 1 mm thickness images was significantly lower with IMR (1.7 ± 0.3) than with FBP (2.3 ± 0.3) and HIR (2.3 ± 0.4) (p < 0.01). There were no significant differences in qualitative analysis of unfamiliar image texture between the reconstruction techniques. IMR offers significant noise reduction and higher contrast and CNR in brain CT, especially for thin-slice images, when compared to FBP and HIR. (orig.)
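The contrast-to-noise ratio reported above is a simple ratio of ROI statistics (definitions vary in the literature; this sketch divides the mean difference by the second ROI's noise). The values below are synthetic stand-ins, not the study's measurements:

```python
import numpy as np

def cnr(roi_a, roi_b):
    """Contrast-to-noise ratio between two regions of interest:
    |mean difference| divided by the noise of the second ROI.
    (One common convention; others use a pooled noise term.)"""
    return abs(np.mean(roi_a) - np.mean(roi_b)) / np.std(roi_b)

# Synthetic thalamus / internal-capsule stand-ins with different noise levels
rng = np.random.default_rng(0)
thalamus_fbp = rng.normal(33.0, 4.0, 500)   # higher noise (FBP-like)
capsule_fbp = rng.normal(30.0, 4.0, 500)
thalamus_imr = rng.normal(33.0, 1.5, 500)   # lower noise (IMR-like)
capsule_imr = rng.normal(30.0, 1.5, 500)

print(f"CNR FBP-like: {cnr(thalamus_fbp, capsule_fbp):.2f}, "
      f"IMR-like: {cnr(thalamus_imr, capsule_imr):.2f}")
```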
Directory of Open Access Journals (Sweden)
Jihang Sun
OBJECTIVE: To evaluate noise reduction and image quality improvement in low-radiation-dose chest CT images in children using adaptive statistical iterative reconstruction (ASIR) and a full model-based iterative reconstruction (MBIR) algorithm. METHODS: Forty-five children (age ranging from 28 days to 6 years, median of 1.8 years) who received low-dose chest CT scans were included. An age-dependent noise index (NI) was used for acquisition. Images were retrospectively reconstructed using three methods: MBIR, a blend of 60% ASIR and 40% conventional filtered back-projection (FBP), and FBP alone. The subjective quality of the images was independently evaluated by two radiologists. Objective noise in the left ventricle (LV), muscle, fat, descending aorta and lung field was measured at the slice with the largest cross-sectional area of the LV, with the region of interest about one fourth to half of the area of the descending aorta. The optimized signal-to-noise ratio (SNR) was calculated. RESULTS: In terms of subjective quality, MBIR images were significantly better than ASIR and FBP in image noise and visibility of tiny structures, but blurred edges were observed. In terms of objective noise, MBIR and ASIR reconstruction decreased the image noise by 55.2% and 31.8%, respectively, for the LV compared with FBP. Similarly, MBIR and ASIR reconstruction increased the SNR by 124.0% and 46.2%, respectively, compared with FBP. CONCLUSION: Compared with FBP and ASIR, overall image quality and noise reduction were significantly improved by MBIR. MBIR can reconstruct diagnostically acceptable chest CT images in children at a lower radiation dose.
Prediction models in complex terrain
DEFF Research Database (Denmark)
Marti, I.; Nielsen, Torben Skov; Madsen, Henrik
2001-01-01
The objective of the work is to investigate the performance of HIRLAM in complex terrain when used as input to energy production forecasting models, and to develop a statistical model to adapt HIRLAM predictions to the wind farm. The features of the terrain, especially the topography, influence...
Owens, David H.; Freeman, Chris T.; Chu, Bing
2014-08-01
Motivated by the commonly encountered problem in which tracking is only required at selected intermediate points within the time interval, a general optimisation-based iterative learning control (ILC) algorithm is derived that ensures convergence of tracking errors to zero whilst simultaneously minimising a specified quadratic objective function of the input signals and chosen auxiliary (state) variables. In practice, the proposed solutions enable a repeated tracking task to be accurately completed whilst simultaneously reducing undesirable effects such as payload spillage, vibration tendencies and actuator wear. The theory is developed using the well-known norm optimal ILC (NOILC) framework, using general linear, functional operators between real Hilbert spaces. Solutions are derived using feedforward action, convergence is proved and robustness bounds are presented using both norm bounds and positivity conditions. Algorithms are specified for both continuous and discrete-time state-space representations, with the latter including application to multi-rate sampled systems. Experimental results using a robotic manipulator confirm the practical utility of the algorithms and the closeness with which observed results match theoretical predictions.
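As a minimal illustration of the norm optimal update described above, the following sketch runs NOILC on an invented lifted plant; the impulse response, horizon, and weight w are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Toy lifted plant y = G u: G is lower-triangular Toeplitz, built from an
# assumed impulse response (h[0] = 1 makes perfect tracking attainable).
N = 8
h = 0.5 ** np.arange(N)
G = np.array([[h[i - j] if i >= j else 0.0 for j in range(N)] for i in range(N)])

def noilc_step(u, e, G, w):
    # Norm-optimal update u_{k+1} = u_k + G^T (G G^T + w I)^{-1} e_k, which
    # minimises ||e_{k+1}||^2 + w ||u_{k+1} - u_k||^2 over the next trial.
    return u + G.T @ np.linalg.solve(G @ G.T + w * np.eye(len(e)), e)

r = np.ones(N)          # reference to be tracked on every repetition
u = np.zeros(N)
errors = []
for _ in range(20):     # each pass is one repetition of the task
    e = r - G @ u
    errors.append(float(np.linalg.norm(e)))
    u = noilc_step(u, e, G, w=0.1)
```

Because the error evolves as e_{k+1} = w (G G^T + w I)^{-1} e_k, a contraction for any w > 0, the tracking error decreases monotonically to zero across trials.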
A systematic review of predictive modeling for bronchiolitis.
Luo, Gang; Nkoy, Flory L; Gesteland, Per H; Glasgow, Tiffany S; Stone, Bryan L
2014-10-01
Bronchiolitis is the most common cause of illness leading to hospitalization in young children. At present, many bronchiolitis management decisions are made subjectively, leading to significant practice variation among hospitals and physicians caring for children with bronchiolitis. To standardize care for bronchiolitis, researchers have proposed various models to predict the disease course to help determine a proper management plan. This paper reviews the existing state of the art of predictive modeling for bronchiolitis. Predictive modeling for respiratory syncytial virus (RSV) infection is covered whenever appropriate, as RSV accounts for about 70% of bronchiolitis cases. A systematic review was conducted through a PubMed search up to April 25, 2014. The literature on predictive modeling for bronchiolitis was retrieved using a comprehensive search query, which was developed through an iterative process. Search results were limited to human subjects, the English language, and children (birth to 18 years). The literature search returned 2312 references in total. After manual review, 168 of these references were determined to be relevant and are discussed in this paper. We identify several limitations and open problems in predictive modeling for bronchiolitis, and provide some preliminary thoughts on how to address them, with the hope to stimulate future research in this domain. Many problems remain open in predictive modeling for bronchiolitis. Future studies will need to address them to achieve optimal predictive models. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Koechl, F.; Loarte, A.; Parail, V.; Belo, P.; Brix, M.; Corrigan, G.; Harting, D.; Koskela, T.; Kukushkin, A. S.; Polevoi, A. R.; Romanelli, M.; Saibene, G.; Sartori, R.; Eich, T.; Contributors, JET
2017-08-01
The dynamics of the transition from L-mode to a stationary high-Q DT H-mode regime in ITER is expected to be qualitatively different from present experiments. Differences may be caused, on the one hand, by the low fuelling efficiency of recycling neutrals, which influences the post-transition plasma density evolution. On the other hand, the effect of the plasma density evolution on both the alpha heating power and the edge power flow required to sustain H-mode confinement needs to be considered. This paper presents results of modelling studies of the transition to a stationary high-Q DT H-mode regime in ITER with the JINTRAC suite of codes, including optimisation of the plasma density evolution to ensure robust achievement of high-Q DT regimes in ITER and avoidance of tungsten accumulation in this transient phase. As a first step, the JINTRAC integrated models have been validated in fully predictive simulations (excluding core momentum transport, which is prescribed) against core, pedestal and divertor plasma measurements in JET C-wall experiments for the transition from L-mode to stationary H-mode in partially ITER-relevant conditions (highest achievable current and power, H 98,y ~ 1.0, low collisionality, comparable evolution in P net/P L-H, but different ρ*, T i/T e, Mach number and plasma composition compared to ITER expectations). The selection of transport models (core: NCLASS + Bohm/gyroBohm in L-mode/GLF23 in H-mode) was determined by a trade-off between model complexity and efficiency. Good agreement between code predictions and measured plasma parameters is obtained if anomalous heat and particle transport in the edge transport barrier are assumed to be reduced at different rates with increasing edge power flow normalised to the H-mode threshold; in particular, the increase in edge plasma density is dominated by this edge transport reduction, as the calculated neutral influx across the
El-Amin, M F; Sun, Shuyu; Salama, Amgad
2013-01-01
In this paper, we introduce a mathematical model to describe nanoparticle transport carried by a two-phase flow in a porous medium, including gravity, capillary forces and Brownian diffusion. A nonlinear iterative IMPES scheme is used to solve the flow equation: saturation and pressure are calculated at the current iteration step, and then the transport equation is solved implicitly. Once the nanoparticle concentration is computed, the two equations for the volume of nanoparticles deposited on pore surfaces and the volume of nanoparticles entrapped in pore throats are solved implicitly. The porosity and permeability variations are updated after each iteration loop at every time step. Two numerical examples, namely regular heterogeneous permeability and random permeability, are considered. We monitor the changes in fluid and solid properties due to adding the nanoparticles. Variations of water saturation, water pressure, nanoparticle concentration and porosity are presented graphically.
Iterative reconstruction for few-view grating-based phase-contrast CT: an in vitro mouse model
Gaass, T.; Potdevin, G.; Bech, M.; Noël, P. B.; Willner, M.; Tapfer, A.; Pfeiffer, F.; Haase, A.
2013-05-01
The aim of this work is to investigate the improvement of image quality in few-view grating-based phase-contrast computed tomography (PCCT) applications via compressed sensing (CS) inspired iterative reconstruction on an in vitro mouse model. PCCT measurements are performed on a grating-based PCCT setup using a high-brilliance synchrotron source and a conventional tube source. The sampling density of the data is reduced by a factor of up to 20, and the data are iteratively reconstructed. It is demonstrated that grating-based PCCT intrinsically meets the major conditions for a successful application of CS. Contrast fidelity and the reproduction of details are demonstrated in all reconstructed objects. The feasibility of the iterative reconstruction on data generated with a conventional X-ray source is illustrated on a fluid phantom and a mouse specimen, undersampled by a factor of up to 20.
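The compressed-sensing principle such reconstructions rely on can be sketched with a toy undersampled linear system solved by iterative soft thresholding (ISTA); a random matrix stands in for the tomographic forward projector, and the problem sizes and regularisation weight are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, k = 80, 100, 3                       # fewer measurements than unknowns
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = [2.0, -1.5, 1.0]
y = A @ x_true                             # simulated few-view measurements

# ISTA: gradient step on ||Ax - y||^2 followed by soft thresholding, the
# sparsity prior that compressed sensing exploits.
L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
lam, x = 1e-3, np.zeros(n)
for _ in range(2000):
    g = x - (A.T @ (A @ x - y)) / L
    x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)

support = np.flatnonzero(np.abs(x) > 0.1)  # detected sparse support
x_deb, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)  # debias on support
```

In this toy instance the sparsity prior lets the iteration identify the support despite the undersampling, after which the least-squares debiasing step recovers the coefficients.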
Elsheikh, Ahmed H.
2013-06-01
We introduce a nonlinear orthogonal matching pursuit (NOMP) for sparse calibration of subsurface flow models. Sparse calibration is a challenging problem, as the unknowns are both the non-zero components of the solution and their associated weights. NOMP is a greedy algorithm that discovers at each iteration the basis function most correlated with the residual from a large pool of basis functions. The discovered basis (the support) is augmented across the nonlinear iterations. Once a set of basis functions is selected, the solution is obtained by applying Tikhonov regularization. The proposed algorithm relies on a stochastically approximated gradient computed with an iterative stochastic ensemble method (ISEM). In the current study, the search space is parameterized using an overcomplete dictionary of basis functions built using the K-SVD algorithm. The proposed algorithm is the first ensemble-based algorithm that tackles the sparse nonlinear parameter estimation problem. © 2013 Elsevier Ltd.
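The greedy support-growing core of matching pursuit can be sketched in its classical linear form; here a random orthonormal dictionary replaces the overcomplete K-SVD dictionary, and the paper's stochastic ensemble gradient is not modelled.

```python
import numpy as np

rng = np.random.default_rng(0)
D, _ = np.linalg.qr(rng.standard_normal((40, 40)))   # orthonormal toy dictionary

def omp(D, y, k):
    # Greedy loop: add the atom most correlated with the residual, then
    # re-fit all selected atoms by least squares (the "orthogonal" step).
    support, r = [], y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ r)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        r = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x, support

x_true = np.zeros(40)
x_true[[3, 17, 29]] = [2.0, -1.5, 0.7]
y = D @ x_true
x_hat, support = omp(D, y, k=3)
```

With an orthonormal dictionary each greedy pick is provably a true atom, so three iterations recover the 3-sparse signal exactly.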
Energy Technology Data Exchange (ETDEWEB)
Duchateau, J.L.; Ciazynski, D.; Guerber, O.; Park, S.H.; Zani, L. [Association Euratom-CEA Cadarache, 13 - Saint-Paul-lez-Durance (France). Dept. de Recherches sur la Fusion Controlee; Fietz, W.H.; Ulbricht, A.; Zahn, G. [Association Euratom-FZK Forschungszentrum, Karlsruhe (Germany)
2003-07-01
In the Phase II experiment of the International Thermonuclear Experimental Reactor (ITER) Toroidal Field Model Coil (TFMC), the operation limits of its 80 kA Nb3Sn conductor were explored. To increase the magnetic field on the conductor, the TFMC was tested in the presence of another large coil, the EURATOM-LCT coil. Under these conditions the maximum field reached on the conductor was around 10 tesla. This exploration was performed at constant current, by progressively increasing the coil temperature and monitoring the coil voltage drop in the current-sharing regime. Such an operation was made possible by the very high stability of the conductor. The aim of these tests was to compare the critical properties of the conductor with expectations and to assess the ITER TF conductor design. These expectations are based on the documented field- and temperature-dependent critical properties of the 720 superconducting strands which compose the conductor. In addition, the conductor properties are highly dependent on the strain, due to the compression appearing in Nb3Sn during the heat treatment of the pancakes and related to the differential thermal contraction between Nb3Sn and the stainless steel jacket. No precise model exists to predict this strain, which is therefore the main information expected from these tests. The method to deduce this strain from the different tests is presented, including a thermal-hydraulic analysis to identify the temperature of the critical point and a careful estimation of the field map across the conductor. The measured strain has been estimated in the range -0.75% to -0.79%. This information will be taken into account for the ITER design, and some adjustment of the ITER conductor design is under examination.
Chen, Tinggui; Xiao, Renbin
2014-01-01
Under fierce market competition, the ability to improve product quality and reduce development cost determines an enterprise's core competitiveness. However, design iteration generally increases product cost and delays development, so identifying and modelling couplings among tasks in product design and development has become an important issue for enterprises to settle. In this paper, the shortcomings of the WTM model are discussed, and a tearing approach together with an inner iteration method is used to complement the classic WTM model. In addition, the artificial bee colony (ABC) algorithm is introduced to find optimal decoupling schemes. Firstly, the tearing approach and inner iteration method are analyzed for solving coupled task sets. Secondly, a hybrid iteration model combining these two techniques is set up. Thirdly, a high-performance swarm intelligence algorithm, artificial bee colony, is adopted for problem-solving. Finally, an engineering design of a chemical processing system is given to verify the model's reasonableness and effectiveness.
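The classic WTM iteration that such models build on can be sketched directly; the coupling strengths below are invented, and the spectral radius of A must be below one for the rework series to converge.

```python
import numpy as np

# A[i, j] = fraction of task j's work that one rework pass creates for task i.
A = np.array([[0.0, 0.3],
              [0.4, 0.0]])          # illustrative coupling strengths
u0 = np.array([1.0, 1.0])           # initial work content of each task

u, total = u0.copy(), u0.copy()
for _ in range(100):                # iterate rework until it dies out
    u = A @ u
    total += u

# With spectral radius < 1, the rework series sums to (I - A)^{-1} u0.
closed_form = np.linalg.solve(np.eye(2) - A, u0)
```

The iterated total work converges geometrically to the closed-form limit, which is why decoupling (reducing the entries of A by tearing) shortens development time.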
An Iterative Inversion Technique to Compute Structural Martian Models for Refining Event Locations
Ceylan, S.; Khan, A.; van Driel, M.; Clinton, J. F.; Boese, M.; Euchner, F.; Giardini, D.; Garcia, R.; Lognonne, P. H.; Panning, M. P.; Banerdt, W. B.
2016-12-01
The InSight mission will deploy a single seismic station on Mars in 2018. The main task of the MarsQuake Service within the project includes detecting and locating quakes on Mars, and managing the event catalog. In preparation for the mission, we continually calibrate single station event location algorithms, employing seismic phase travel times computed for a suite of structural models. However, our knowledge about the interior structure of Mars is limited, which in turn will affect our ability to locate events accurately. Here, we present an iterative method to invert for the interior structure of Mars and revise event locations, consecutively. We first locate seismic events using differential arrival times (with respect to the first phase arrival) of all possible seismic phases, computed for a priori initial structural models. These models are built considering a one-dimensional average crust and current estimates of bulk mantle chemistry and areotherm. Phase picks and uncertainty assignments are done manually. Then, we invert for the interior structure employing the arrival times for the picked phases, and generate an updated suite of models, which are further used to revise the initial phase picks, and relocate events. We repeat this sequence for each additional and new entry in the travel time database to improve event locations and models for average Martian structure. In order to test our approach, we simulate the operational conditions we will encounter in practice: We compute synthetic waveforms for a realistic event catalog of 120 events, with magnitudes between 2.5 and 5.0 and double-couple source mechanisms only. 1-Hz seismograms are computed using AxiSEM and Instaseis, employing two Martian models with a thin (30 km) and thick (80 km) crust, both with and without seismic surface noise. The waveforms are hosted at the ETH servers, and are publicly accessible via FDSN web services.
Sparse Polynomial Chaos Surrogate for ACME Land Model via Iterative Bayesian Compressive Sensing
Sargsyan, K.; Ricciuto, D. M.; Safta, C.; Debusschere, B.; Najm, H. N.; Thornton, P. E.
2015-12-01
For computationally expensive climate models, Monte-Carlo approaches of exploring the input parameter space are often prohibitive due to slow convergence with respect to ensemble size. To alleviate this, we build inexpensive surrogates using uncertainty quantification (UQ) methods employing Polynomial Chaos (PC) expansions that approximate the input-output relationships using as few model evaluations as possible. However, when many uncertain input parameters are present, such UQ studies suffer from the curse of dimensionality. In particular, for 50-100 input parameters non-adaptive PC representations have infeasible numbers of basis terms. To this end, we develop and employ Weighted Iterative Bayesian Compressive Sensing to learn the most important input parameter relationships for efficient, sparse PC surrogate construction with posterior uncertainty quantified due to insufficient data. Besides drastic dimensionality reduction, the uncertain surrogate can efficiently replace the model in computationally intensive studies such as forward uncertainty propagation and variance-based sensitivity analysis, as well as design optimization and parameter estimation using observational data. We applied the surrogate construction and variance-based uncertainty decomposition to Accelerated Climate Model for Energy (ACME) Land Model for several output QoIs at nearly 100 FLUXNET sites covering multiple plant functional types and climates, varying 65 input parameters over broad ranges of possible values. This work is supported by the U.S. Department of Energy, Office of Science, Biological and Environmental Research, Accelerated Climate Modeling for Energy (ACME) project. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
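A drastically simplified stand-in for sparse PC surrogate construction is sketched below: a Legendre basis (the PC family for uniform inputs) is fitted by plain least squares and thresholded to keep the dominant terms. The Bayesian compressive sensing machinery of the paper is not reproduced, and all values are illustrative.

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 200)     # samples of one uniform input parameter
# "Model" output: a sparse combination of Legendre polynomials P1 and P3.
y = 2.0 * legendre.legval(x, [0, 1]) + 0.5 * legendre.legval(x, [0, 0, 0, 1])

V = legendre.legvander(x, 10)       # PC basis matrix, degrees 0..10
c, *_ = np.linalg.lstsq(V, y, rcond=None)
c[np.abs(c) < 1e-6] = 0.0           # keep only the dominant (sparse) terms
```

The surviving coefficients identify the two active basis terms, giving a cheap surrogate that can replace the model in sensitivity or propagation studies; the real difficulty the paper addresses is doing this with 65 inputs and far fewer samples than basis terms.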
The Silicon Trypanosome : A Test Case of Iterative Model Extension in Systems Biology
Achcar, Fiona; Fadda, Abeer; Haanstra, Jurgen R.; Kerkhoven, Eduard J.; Kim, Dong-Hyun; Leroux, Alejandro E.; Papamarkou, Theodore; Rojas, Federico; Bakker, Barbara M.; Barrett, Michael P.; Clayton, Christine; Girolami, Mark; Krauth-Siegel, R. Luise; Matthews, Keith R.; Breitling, Rainer; Poole, RK
2014-01-01
The African trypanosome, Trypanosoma brucei, is a unicellular parasite causing African trypanosomiasis (sleeping sickness in humans and nagana in animals). Due to some of its unique properties, it has emerged as a popular model organism in systems biology. A predictive quantitative model of glycolysis...
Predictive models of forest dynamics.
Purves, Drew; Pacala, Stephen
2008-06-13
Dynamic global vegetation models (DGVMs) have shown that forest dynamics could dramatically alter the response of the global climate system to increased atmospheric carbon dioxide over the next century. But there is little agreement between different DGVMs, making forest dynamics one of the greatest sources of uncertainty in predicting future climate. DGVM predictions could be strengthened by integrating the ecological realities of biodiversity and height-structured competition for light, facilitated by recent advances in the mathematics of forest modeling, ecological understanding of diverse forest communities, and the availability of forest inventory data.
Particle model of full-size ITER-relevant negative ion source.
Taccogna, F; Minelli, P; Ippolito, N
2016-02-01
This work represents the first attempt to model the full-size ITER-relevant negative ion source including the expansion, extraction, and part of the acceleration regions, keeping the mesh size fine enough to resolve every single aperture. The model consists of a 2.5D particle-in-cell Monte Carlo collision representation of the plane perpendicular to the filter field lines. The magnetic filter and electron deflection field have been included, and a negative ion current density of j_H- = 660 A/m^2 from the plasma grid (PG) is used as a parameter for the neutral conversion. The driver is not yet included; a fixed ambipolar flux is emitted from the driver exit plane. Results show a strong asymmetry along the PG driven by the electron Hall (E × B and diamagnetic) drift perpendicular to the filter field. This asymmetry creates a significant inhomogeneity in the electron current extracted from the different apertures. A steady state is not yet reached after 15 μs.
ROI reconstruction for model-based iterative reconstruction (MBIR) via a coupled dictionary learning
Ye, Dong Hye; Srivastava, Somesh; Thibault, Jean-Baptiste; Sauer, Ken D.; Bouman, Charles A.
2017-03-01
Model-based iterative reconstruction (MBIR) algorithms have shown significant improvement in CT image quality by increasing resolution as well as reducing noise and artifacts. In diagnostic protocols, radiologists often need a high-resolution reconstruction of a limited region of interest (ROI). ROI reconstruction is complicated for MBIR, which must reconstruct an image over the full field of view (FOV) given full sinogram measurements. Multi-resolution approaches are widely used for ROI reconstruction with MBIR: the full-FOV image is reconstructed at low resolution, and the forward projection of the non-ROI part is subtracted from the original sinogram measurements before high-resolution ROI reconstruction. However, a low-resolution reconstruction of the full FOV can be susceptible to streaking and blurring artifacts, which can propagate into the subsequent high-resolution ROI reconstruction. To tackle this challenge, we use a coupled dictionary representation model between low- and high-resolution training datasets for artifact removal and super-resolution of the low-resolution full-FOV reconstruction. Experimental results on phantom data show that the restored full-FOV reconstruction via coupled dictionary learning significantly improves the image quality of high-resolution ROI reconstruction for MBIR.
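The sinogram-subtraction step at the heart of the multi-resolution approach can be sketched with a toy linear model; a random matrix stands in for the CT projector, and plain least squares stands in for MBIR.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 80, 50
A = rng.standard_normal((m, n))     # toy linear "projector" (stand-in for CT)
x = rng.standard_normal(n)          # toy "image"
y = A @ x                           # full "sinogram"
roi = np.arange(10, 20)             # region-of-interest pixels
non = np.setdiff1d(np.arange(n), roi)

# Step 1: full-FOV estimate (plain least squares standing in for MBIR).
x_full, *_ = np.linalg.lstsq(A, y, rcond=None)
# Step 2: subtract the forward projection of the non-ROI content.
y_roi = y - A[:, non] @ x_full[non]
# Step 3: reconstruct only the ROI from the corrected sinogram.
x_roi, *_ = np.linalg.lstsq(A[:, roi], y_roi, rcond=None)
```

In this idealised linear setting the subtraction is exact; the paper's contribution addresses the realistic case where step 1 is a low-resolution reconstruction whose artifacts contaminate y_roi.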
Goncharsky, Alexander V.; Romanov, Sergey Y.
2017-02-01
We develop efficient iterative methods for solving inverse problems of wave tomography in models incorporating both diffraction effects and attenuation. In the inverse problem the aim is to reconstruct the velocity structure and the function that characterizes the distribution of attenuation properties in the object studied. We prove mathematically and rigorously the differentiability of the residual functional in normed spaces, and derive the corresponding formula for the Fréchet derivative. The computation of the Fréchet derivative includes solving both the direct problem with the Neumann boundary condition and the reversed-time conjugate problem. We develop efficient methods for numerical computations where the approximate solution is found using the detector measurements of the wave field and its normal derivative. The wave field derivative values at detector locations are found by solving the exterior boundary value problem with the Dirichlet boundary conditions. We illustrate the efficiency of this approach by applying it to model problems. The algorithms developed are highly parallelizable and designed to be run on supercomputers. Among the most promising medical applications of our results is the development of ultrasonic tomographs for differential diagnosis of breast cancer.
Zhou, Liming; Yang, Yuxing; Yuan, Shiying
2006-02-01
A new algorithm, a coordinate-transform iterative optimization method based on a least-squares curve-fitting model, is presented. The algorithm is used to extract bio-impedance model parameters. It is superior to other methods in that it converges more quickly and achieves higher precision. The model parameters Ri, Re, Cm and alpha are extracted rapidly and accurately. With the aim of lowering power consumption, decreasing cost and improving the price-to-performance ratio, a practical bio-impedance measurement system with dual CPUs has been built. Preliminary results show that the intracellular resistance Ri increased markedly with increasing work load during sitting, reflecting ischemic change in the lower limbs.
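A hedged sketch of iterative least-squares extraction of the Cole model parameters (Re, Ri, Cm, alpha) is given below; the circuit form, frequency range, and all numerical values are invented for illustration, and Gauss-Newton with backtracking replaces the paper's coordinate-transform iteration.

```python
import numpy as np

def cole_z(p, w):
    Re_, Ri_, Cm_, a = p
    zi = Ri_ + 1.0 / (1j * w * Cm_) ** a        # intracellular branch with a CPE
    return Re_ * zi / (Re_ + zi)                # in parallel with extracellular Re

def residuals(p, w, z):
    r = cole_z(p, w) - z
    return np.concatenate([r.real, r.imag])     # fit real and imaginary parts jointly

def fit(p, w, z, iters=30):
    for _ in range(iters):
        r = residuals(p, w, z)
        J = np.empty((r.size, p.size))
        for k in range(p.size):                 # forward-difference Jacobian
            dp = np.zeros_like(p)
            dp[k] = 1e-6 * abs(p[k])
            J[:, k] = (residuals(p + dp, w, z) - r) / dp[k]
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        t = 1.0                                 # backtracking keeps the iteration stable
        while np.linalg.norm(residuals(p + t * step, w, z)) > np.linalg.norm(r) and t > 1e-6:
            t /= 2
        p = p + t * step
    return p

w = 2 * np.pi * np.logspace(3, 6, 50)           # 1 kHz to 1 MHz
p_true = np.array([800.0, 300.0, 2e-9, 0.85])   # invented Re, Ri, Cm, alpha
z = cole_z(p_true, w)                           # synthetic, noise-free data
p0 = p_true * np.array([1.1, 0.9, 1.2, 1.0])    # deliberately perturbed start
p_fit = fit(p0, w, z)
```

On noise-free synthetic data the iteration recovers the generating parameters; with measured data, preconditioning such as the paper's coordinate transform matters for convergence speed and robustness.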
Setodji, Claude Messan; Le, Vi-Nhuan; Schaack, Diana
2013-04-01
Research linking high-quality child care programs and children's cognitive development has contributed to the growing popularity of child care quality benchmarking efforts such as quality rating and improvement systems (QRIS). Consequently, there has been an increased interest in and a need for approaches to identifying thresholds, or cutpoints, in the child care quality measures used in these benchmarking efforts that differentiate between different levels of children's cognitive functioning. To date, research has provided little guidance to policymakers as to where these thresholds should be set. Using the Early Childhood Longitudinal Study, Birth Cohort (ECLS-B) data set, this study explores the use of generalized additive modeling (GAM) as a method of identifying thresholds on the Infant/Toddler Environment Rating Scale (ITERS) in relation to toddlers' performance on the Mental Development subscale of the Bayley Scales of Infant Development (the Bayley Mental Development Scale Short Form-Research Edition, or BMDSF-R). The present findings suggest that simple linear models do not always correctly depict the relationships between ITERS scores and BMDSF-R scores and that GAM-derived thresholds were more effective at differentiating among children's performance levels on the BMDSF-R. Additionally, the present findings suggest that there is a minimum threshold on the ITERS that must be exceeded before significant improvements in children's cognitive development can be expected. There may also be a ceiling threshold on the ITERS, such that beyond a certain level, only marginal increases in children's BMDSF-R scores are observed.
Xiang, D.; Ni, W.; Zhang, H.; Wu, J.; Yan, W.; Su, Y.
2017-09-01
Superpixel segmentation has the advantage of preserving target shapes and details well. In this research, an adaptive polarimetric SLIC (Pol-ASLIC) superpixel segmentation method is proposed. First, the spherically invariant random vector (SIRV) product model is adopted to estimate the normalized covariance matrix and texture for each pixel. A new edge detector is then utilized to extract PolSAR image edges for the initialization of central seeds. In the local iterative clustering, multiple cues including polarimetric, texture, and spatial information are considered to define the similarity measure. Moreover, a polarimetric homogeneity measurement is used to automatically determine the trade-off factor, which can vary from homogeneous to heterogeneous areas. Finally, the SLIC superpixel segmentation scheme is applied to airborne Experimental SAR and PiSAR L-band PolSAR data to demonstrate the effectiveness of the proposed segmentation approach. The algorithm produces compact superpixels that adhere well to image boundaries in both natural and urban areas, and detail in heterogeneous areas is well preserved.
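The local iterative clustering at the core of SLIC can be sketched for a grayscale image; this is a strong simplification of the method above (no polarimetric or texture cues, and a fixed rather than adaptively determined trade-off factor m).

```python
import numpy as np

def slic_gray(img, S=10, m=0.1, iters=5):
    # Simplified SLIC: k-means restricted to 2S x 2S windows, with distance
    # D = (dc/m)^2 + (ds/S)^2 mixing intensity (dc) and spatial (ds) terms.
    H, W = img.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    cy, cx = np.meshgrid(np.arange(S // 2, H, S), np.arange(S // 2, W, S), indexing="ij")
    centers = np.stack([cy.ravel(), cx.ravel(), img[cy, cx].ravel()]).T.astype(float)
    labels = np.zeros((H, W), dtype=int)
    for _ in range(iters):
        dist = np.full((H, W), np.inf)
        for k, (r, c, g) in enumerate(centers):
            r0, r1 = int(max(r - S, 0)), int(min(r + S + 1, H))
            c0, c1 = int(max(c - S, 0)), int(min(c + S + 1, W))
            dc = (img[r0:r1, c0:c1] - g) / m
            ds = np.hypot(ys[r0:r1, c0:c1] - r, xs[r0:r1, c0:c1] - c) / S
            D = dc ** 2 + ds ** 2
            better = D < dist[r0:r1, c0:c1]
            dist[r0:r1, c0:c1][better] = D[better]
            labels[r0:r1, c0:c1][better] = k
        for k in range(len(centers)):           # recompute centers as cluster means
            mask = labels == k
            if mask.any():
                centers[k] = [ys[mask].mean(), xs[mask].mean(), img[mask].mean()]
    return labels
```

With a small m the intensity term dominates, so superpixels do not cross strong image edges; this is the boundary-adherence property the paper extends with polarimetric similarity cues.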
Test of the ITER Central Solenoid Model Coil and CS Insert
Energy Technology Data Exchange (ETDEWEB)
Martovetsky, N; Michael, P; Minervini, J; Radovinsky, A; Takayasu, M; Gung, C Y; Thome, R; Ando, T; Isono, T; Hamada, K; Kato, T; Kawano, K; Koizumi, N; Matsui, K; Nakajima, H; Nishijima, G; Nunoya, Y; Sugimoto, M; Takahashi, Y; Tsuji, H; Bessette, D; Okuno, K; Mitchell, N; Ricci, M; Zanino, R; Savoldi, L; Arai, K; Ninomiya, A
2001-09-25
The Central Solenoid Model Coil (CSMC) was designed and built from 1993 to 1999 by an ITER collaboration between the US and Japan, with contributions from the European Union and the Russian Federation. The main goal of the project was to establish the superconducting magnet technology necessary for a large-scale fusion experimental reactor. Three heavily instrumented insert coils were built to cover a wide operational space for testing. The CS Insert, built by Japan, was tested in April-August of 2000. The TF Insert, built by Russian Federation, will be tested in the fall of 2001. The NbAl Insert, built by Japan, will be tested in 2002. The testing takes place in the CSMC Test Facility at the Japan Atomic Energy Research Institute, Naka, Japan. The CSMC was charged successfully without training to its design current of 46 kA to produce 13 T in the magnet bore. The stored energy at 46 kA was 640 MJ. This paper presents the main results of the CSMC and the CS Insert testing--magnet critical parameters, ac losses, joint performance, quench characteristics and some results of the post-test analysis.
Thermal-Hydraulic Issues in the ITER Toroidal Field Model Coil (TFMC) Test and Analysis
Zanino, R.; Bagnasco, M.; Fillunger, H.; Heller, R.; Savoldi Richard, L.; Suesser, M.; Zahn, G.
2004-06-01
The International Thermonuclear Experimental Reactor (ITER) Toroidal Field Model Coil (TFMC) was tested in the Toska facility of Forschungszentrum Karlsruhe during 2001 (standalone) and 2002 (in the background magnetic field of the LCT coil). The TFMC is a racetrack coil wound in five double pancakes on stainless steel radial plates using Nb3Sn dual-channel cable-in-conduit conductor (CICC) with a thin circular SS jacket. The coil was cooled by supercritical helium in forced convection at nominally 4.5 K and 0.5 MPa. Instrumentation, all outside the coil, included voltage taps, pressure and temperature sensors, as well as flow meters. Additionally, differential pressure drop measurement was available on the two pancakes DP1.1 and DP1.2, equipped with heaters. Two major thermal-hydraulic issues in the TFMC tests are addressed here: 1) the pressure drop along heated pancakes and the comparison with friction factor correlations; 2) quench initiation and propagation. Other thermal-hydraulic issues, like heat generation and exchange in joints, radial plates, coil case, or the effects of the resistive heaters on the helium dynamics, have already been addressed elsewhere.
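The first issue reduces, in its simplest form, to a Darcy-Weisbach pressure-drop estimate; every number below is an invented illustration rather than TFMC data, and the friction factor would in practice come from a CICC-specific correlation.

```python
# Darcy-Weisbach pressure drop for forced-flow supercritical helium in a
# dual-channel CICC. All values are illustrative assumptions, not TFMC data.
f = 0.08        # friction factor from an assumed correlation (dimensionless)
L = 10.0        # flow path length along the pancake (m)
Dh = 5.0e-3     # hydraulic diameter of the channel (m)
rho = 130.0     # helium density near 4.5 K, 0.5 MPa (kg/m^3)
v = 0.5         # bulk flow velocity (m/s)

dp = f * (L / Dh) * 0.5 * rho * v ** 2   # pressure drop in Pa
```

Comparing such correlation-based estimates with the measured pancake pressure drops is exactly the kind of consistency check the first issue refers to.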
Indirect iterative learning control for a discrete visual servo without a camera-robot model.
Jiang, Ping; Bamforth, Leon C A; Feng, Zuren; Baruch, John E F; Chen, YangQuan
2007-08-01
This paper presents a discrete learning controller for vision-guided robot trajectory imitation with no prior knowledge of the camera-robot model. A teacher demonstrates a desired movement in front of a camera, and then, the robot is tasked to replay it by repetitive tracking. The imitation procedure is considered as a discrete tracking control problem in the image plane, with an unknown and time-varying image Jacobian matrix. Instead of updating the control signal directly, as is usually done in iterative learning control (ILC), a series of neural networks are used to approximate the unknown Jacobian matrix around every sample point in the demonstrated trajectory, and the time-varying weights of local neural networks are identified through repetitive tracking, i.e., indirect ILC. This makes repetitive segmented training possible, and a segmented training strategy is presented to retain the training trajectories solely within the effective region for neural network approximation. However, a singularity problem may occur if an unmodified neural-network-based Jacobian estimation is used to calculate the robot end-effector velocity. A new weight modification algorithm is proposed which ensures invertibility of the estimation, thus circumventing the problem. Stability is further discussed, and the relationship between the approximation capability of the neural network and the tracking accuracy is obtained. Simulations and experiments are carried out to illustrate the validity of the proposed controller for trajectory imitation of robot manipulators with unknown time-varying Jacobian matrices.
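The core of the indirect scheme, estimating the unknown Jacobian from repeated trials and then inverting a regularized estimate, can be sketched on a toy 2-D plant. This is not the paper's neural-network estimator: a Broyden rank-one update stands in for the local networks, and damped least squares stands in for the weight-modification step that guarantees invertibility.

```python
import numpy as np

J_true = np.array([[0.9, 0.2], [-0.3, 1.1]])      # unknown camera-robot Jacobian
T = 30                                            # samples per trial
s = np.linspace(0.0, 1.0, T + 1)
y_des = np.stack([s, np.sin(np.pi * s)], axis=1)  # demonstrated image trajectory

def damped_pinv(J, lam=1e-2):
    # Damped least-squares inverse: well-defined even for (nearly) singular J,
    # a simple stand-in for the paper's invertibility-preserving weight update.
    return J.T @ np.linalg.inv(J @ J.T + lam * np.eye(J.shape[0]))

J_hat = np.eye(2)                 # crude initial Jacobian estimate
errors = []
for trial in range(8):            # repetitive tracking (ILC trials)
    y = y_des[0].copy()
    err = 0.0
    for k in range(T):
        u = damped_pinv(J_hat) @ (y_des[k + 1] - y)   # control from estimate
        dy = J_true @ u                               # plant response
        if u @ u > 1e-12:         # Broyden rank-one update of the estimate
            J_hat += np.outer(dy - J_hat @ u, u) / (u @ u)
        y = y + dy
        err += float(np.linalg.norm(y_des[k + 1] - y))
    errors.append(err / T)
print("mean tracking error per trial:", [f"{e:.1e}" for e in errors])
```

The tracking error shrinks over trials as the Jacobian estimate is identified through repetition, mirroring the indirect-ILC idea of learning the model rather than the control signal.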
Malusek, Alexandr; Magnusson, Maria; Sandborg, Michael; Westin, Robin; Alm Carlsson, Gudrun
2014-03-01
Better knowledge of elemental composition of patient tissues may improve the accuracy of absorbed dose delivery in brachytherapy. Deficiencies of water-based protocols have been recognized and work is ongoing to implement patient-specific radiation treatment protocols. A model based iterative image reconstruction algorithm DIRA has been developed by the authors to automatically decompose patient tissues to two or three base components via dual-energy computed tomography. Performance of an updated version of DIRA was evaluated for the determination of prostate calcification. A computer simulation using an anthropomorphic phantom showed that the mass fraction of calcium in the prostate tissue was determined with accuracy better than 9%. The calculated mass fraction was little affected by the choice of the material triplet for the surrounding soft tissue. Relative differences between true and approximated values of linear attenuation coefficient and mass energy absorption coefficient for the prostate tissue were less than 6% for photon energies from 1 keV to 2 MeV. The results indicate that DIRA has the potential to improve the accuracy of dose delivery in brachytherapy despite the fact that base material triplets only approximate surrounding soft tissues.
Circuit model of the ITER-like antenna for JET and simulation of its control algorithms
Energy Technology Data Exchange (ETDEWEB)
Durodié, Frédéric, E-mail: frederic.durodie@rma.ac.be; Křivská, Alena [LPP-ERM/KMS, TEC Partner, Brussels (Belgium); Dumortier, Pierre; Lerche, Ernesto [LPP-ERM/KMS, TEC Partner, Brussels (Belgium); JET, Culham Science Centre, Abingdon, OX14 3DB (United Kingdom); Helou, Walid [CEA, IRFM, F-13108 St-Paul-Lez-Durance (France); Collaboration: EUROfusion Consortium
2015-12-10
The ITER-like Antenna (ILA) for JET [1] is a 2 toroidal by 2 poloidal array of Resonant Double Loops (RDL), each featuring in-vessel matching capacitors feeding RF current straps in a conjugate-T manner, a low-impedance quarter-wave impedance transformer, a service stub allowing hydraulic actuator and water cooling services to reach the aforementioned capacitors, and a 2nd-stage phase-shifter-stub matching circuit allowing the conjugate-T working impedance to be corrected or chosen. Toroidally adjacent RDLs are fed from a 3 dB hybrid splitter. The antenna was operated at 33, 42 and 47 MHz on plasma (2008-2009), while its presently estimated frequency range is 29 to 49 MHz. At the time of the design (2001-2004), as well as during the experiments, the circuit models of the ILA were quite basic. The Topica model of the ILA front face and strap array was relatively crude and failed to correctly represent the poloidal central septum, the Faraday Screen attachment, and the segmented antenna central septum limiter. The ILA matching capacitors, T-junction, Vacuum Transmission Line (VTL) and Service Stubs were represented by lumped circuit elements and simple transmission line models. The assessment of the ILA results, carried out to decide on the repair of the ILA, identified that achieving routine full-array operation requires a better understanding of the RF circuit, a feedback control algorithm for the 2nd-stage matching, and tighter calibrations of the RF measurements. The paper presents the progress in modelling of the ILA, comprising a more detailed Topica model of the front face for various plasma Scrape Off Layer profiles, a comprehensive HFSS model of the matching capacitors including internal bellows and electrode cylinders, 3D-EM models of the VTL including the vacuum ceramic window and Service Stub, a transmission line model of the 2nd-stage matching circuit, and main transmission lines including the 3 dB hybrid splitters. A time-evolving simulation using the improved circuit model allowed to design and
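The building blocks of such a circuit model are standard two-port ABCD (chain) matrices. A minimal sketch with generic impedance values (not the ILA's), verifying the quarter-wave transformer relation Zin = Z0^2/ZL used by the impedance transformer mentioned above:

```python
import numpy as np

def line_abcd(z0: float, beta_l: float) -> np.ndarray:
    """ABCD matrix of a lossless transmission line of electrical length beta*l."""
    return np.array([[np.cos(beta_l), 1j * z0 * np.sin(beta_l)],
                     [1j * np.sin(beta_l) / z0, np.cos(beta_l)]])

def input_impedance(abcd: np.ndarray, z_load: complex) -> complex:
    (a, b), (c, d) = abcd
    return (a * z_load + b) / (c * z_load + d)

# Quarter-wave transformer check: Zin = Z0^2 / ZL (illustrative ohm values)
z0, z_load = 30.0, 5.0
zin = input_impedance(line_abcd(z0, np.pi / 2), z_load)
print(f"Zin = {zin:.2f} ohm, expected {z0 ** 2 / z_load:.2f} ohm")
```

Cascading such matrices (straps, capacitors, VTL sections, stubs) by matrix multiplication is the standard way to assemble the kind of transmission line model the paper describes.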
Modelling of 3D fields due to ferritic inserts and test blanket modules in toroidal geometry at ITER
Liu, Yueqiang; Äkäslompolo, Simppa; Cavinato, Mario; Koechl, Florian; Kurki-Suonio, Taina; Li, Li; Parail, Vassili; Saibene, Gabriella; Särkimäki, Konsta; Sipilä, Seppo; Varje, Jari
2016-06-01
Computations in toroidal geometry are systematically performed for the plasma response to 3D magnetic perturbations produced by ferritic inserts (FIs) and test blanket modules (TBMs) for four ITER plasma scenarios: the 15 MA baseline, the 12.5 MA hybrid, the 9 MA steady state, and the 7.5 MA half-field helium plasma. Due to the broad toroidal spectrum of the FI and TBM fields, the plasma response for all the n = 1-6 field components is computed and compared. The plasma response is found to be weak for the high-n (n > 4) components. The response is not globally sensitive to the toroidal plasma flow speed, as long as the latter is not reduced by an order of magnitude. This is essentially due to the strong screening effect occurring at a finite flow, as predicted for ITER plasmas. The ITER error field correction coils (EFCC) are used to compensate the n = 1 field errors produced by FIs and TBMs for the baseline scenario for the purpose of avoiding mode locking. It is found that the middle row of the EFCC, with a suitable toroidal phase for the coil current, can provide the best correction of these field errors, according to various optimisation criteria. On the other hand, even without correction, it is predicted that these n = 1 field errors will not cause substantial flow damping for the 15 MA baseline scenario.
Boosting iterative stochastic ensemble method for nonlinear calibration of subsurface flow models
Elsheikh, Ahmed H.
2013-06-01
A novel parameter estimation algorithm is proposed. The inverse problem is formulated as a sequential data integration problem in which Gaussian process regression (GPR) is used to integrate the prior knowledge (static data). The search space is further parameterized using a Karhunen-Loève expansion to build a set of basis functions that spans the search space. Optimal weights of the reduced basis functions are estimated by an iterative stochastic ensemble method (ISEM). ISEM employs directional derivatives within a Gauss-Newton iteration for efficient gradient estimation. The resulting update equation relies on the inverse of the output covariance matrix, which is rank deficient. In the proposed algorithm we use an iterative regularization based on the ℓ2 Boosting algorithm. ℓ2 Boosting iteratively fits the residual, and the amount of regularization is controlled by the number of iterations. A termination criterion based on the Akaike information criterion (AIC) is utilized. This regularization method is very attractive in terms of performance and simplicity of implementation. The proposed algorithm combining ISEM and ℓ2 Boosting is evaluated on several nonlinear subsurface flow parameter estimation problems. The efficiency of the proposed algorithm is demonstrated by the small size of the utilized ensembles and in terms of error convergence rates. © 2013 Elsevier B.V.
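The ℓ2 Boosting regularization with AIC stopping can be sketched on a toy linear-Gaussian problem, with componentwise least squares as the weak learner. All data here are synthetic; this illustrates only the boosting/AIC mechanics, not the ISEM or subsurface-flow parts of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 120, 8
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, -1.0, 0.0, 0.0, 0.5, 0.0, 0.0, 0.0])
y = X @ beta_true + 0.3 * rng.normal(size=n)

nu = 0.1                       # shrinkage; regularization is controlled by
beta = np.zeros(p)             # the number of boosting iterations
B = np.zeros((n, n))           # boosting "hat" matrix, tracks model complexity
best_aic, best_beta, best_m = np.inf, beta.copy(), 0
col_ss = np.einsum('ij,ij->j', X, X)               # column squared norms

for m in range(300):
    r = y - X @ beta                               # fit the current residual
    j = int(np.argmax((X.T @ r) ** 2 / col_ss))    # best single coordinate
    c = (X[:, j] @ r) / col_ss[j]
    beta[j] += nu * c                              # shrunken update
    Hj = np.outer(X[:, j], X[:, j]) / col_ss[j]
    B = B + nu * Hj @ (np.eye(n) - B)              # update hat matrix
    rss = float(np.sum((y - X @ beta) ** 2))
    aic = n * np.log(rss / n) + 2.0 * np.trace(B)  # AIC termination criterion
    if aic < best_aic:
        best_aic, best_beta, best_m = aic, beta.copy(), m

print(f"AIC-selected iteration {best_m}, coefficients {np.round(best_beta, 2)}")
```

Stopping at the AIC minimum plays the role of the termination criterion described above: early iterations under-fit, late iterations over-fit, and AIC trades residual fit against the effective degrees of freedom tracked by the hat matrix.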
Quasi-linear modeling of lower hybrid current drive in ITER and DEMO
Energy Technology Data Exchange (ETDEWEB)
Cardinali, A., E-mail: alessandro.cardinali@enea.it; Cesario, R.; Panaccione, L.; Santini, F.; Amicucci, L.; Castaldo, C.; Ceccuzzi, S.; Mirizzi, F.; Tuccillo, A. A. [ENEA, Unità Tecnica Fusione, Via E Fermi 45 Rome (Italy)
2015-12-10
First-pass absorption of Lower Hybrid waves in thermonuclear devices like ITER and DEMO is modeled by coupling the ray tracing equations with the quasi-linear evolution of the electron distribution function in 2D velocity space. Lower Hybrid Current Drive is usually assumed to be ineffective in the plasma of a tokamak fusion reactor, owing to the accessibility condition, which, depending on the density, restricts the parallel wavenumber to values greater than n∥crit, and, at the same time, to the high electron temperature, which enhances the wave absorption and thus restricts the RF power deposition to the very periphery of the plasma column (near the separatrix). In this work, by extensively using the "ray*" code, a parametric study of the propagation and absorption of the LH wave as a function of the coupled wave spectrum (its width and peak value) has been performed very accurately. Such a careful investigation aims at controlling the power deposition layer, possibly in the external half radius of the plasma, thus providing a valuable aid to the solution of how to control the plasma current profile in a toroidal magnetic configuration, and how to help the suppression of MHD modes that can develop in the outer part of the plasma. This analysis is useful not only for exploring the possibility of profile control in a pulsed-operation reactor, as well as tearing mode stabilization, but also in order to reconsider the feasibility of a steady-state regime for DEMO.
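For orientation, the density dependence of the accessibility limit can be evaluated with the commonly quoted simplified (electron-term) form of the Golant condition. This is only a sketch under that simplification, not the paper's ray-tracing calculation; the 5.3 T field and the densities are illustrative.

```python
import math

E, ME, EPS0 = 1.602e-19, 9.109e-31, 8.854e-12   # SI constants

def n_par_crit(ne_m3: float, b_t: float) -> float:
    """Simplified accessibility limit: n_par,crit = x + sqrt(1 + x^2), x = w_pe/w_ce.

    High-frequency (electron-term) simplification of the Golant condition;
    the full expression carries an additional frequency-dependent ion term."""
    w_pe = math.sqrt(ne_m3 * E ** 2 / (EPS0 * ME))   # electron plasma frequency
    w_ce = E * b_t / ME                              # electron cyclotron frequency
    x = w_pe / w_ce
    return x + math.sqrt(1.0 + x * x)

for ne in (2e19, 5e19, 1e20):                 # illustrative densities [m^-3]
    print(f"ne = {ne:.0e} m^-3 -> n_par,crit = {n_par_crit(ne, 5.3):.2f}")
```

The rise of n∥crit with density is what pushes the usable part of the launched spectrum to higher parallel wavenumbers, which in turn degrades the current-drive efficiency discussed above.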
Huo, P; Coker, D F
2010-11-14
Rather than incoherent hopping between chromophores, experimental evidence suggests that the excitation energy transfer in some biological light harvesting systems initially occurs coherently, and involves coherent superposition states in which excitation spreads over multiple chromophores separated by several nanometers. Treating such delocalized coherent superposition states in the presence of decoherence and dissipation arising from coupling to an environment is a significant challenge for conventional theoretical tools that either use a perturbative approach or make the Markovian approximation. In this paper, we extend the recently developed iterative linearized density matrix (ILDM) propagation scheme [E. R. Dunkel et al., J. Chem. Phys. 129, 114106 (2008)] to study coherent excitation energy transfer in a model of the Fenna-Matthews-Olson light harvesting complex from green sulfur bacteria. This approach is nonperturbative and uses a discrete path integral description employing a short time approximation to the density matrix propagator that accounts for interference between forward and backward paths of the quantum excitonic system while linearizing the phase in the difference between the forward and backward paths of the environmental degrees of freedom resulting in a classical-like treatment of these variables. The approach avoids making the Markovian approximation and we demonstrate that it successfully describes the coherent beating of the site populations on different chromophores and gives good agreement with other methods that have been developed recently for going beyond the usual approximations, thus providing a new reliable theoretical tool to study coherent exciton transfer in light harvesting systems. We conclude with a discussion of decoherence in independent bilinearly coupled harmonic chromophore baths. The ILDM propagation approach in principle can be applied to more general descriptions of the environment.
Haneda, Eri; Luo, Jiajia; Can, Ali; Ramani, Sathish; Fu, Lin; De Man, Bruno
2016-05-01
In this study, we implement and compare model based iterative reconstruction (MBIR) with dictionary learning (DL) against MBIR with pairwise pixel-difference regularization, in the context of transportation security. DL is a technique of sparse signal representation using an overcomplete dictionary which has provided promising results in image processing applications including denoising [1], as well as medical CT reconstruction [2]. It has been previously reported that DL produces promising results in terms of noise reduction and preservation of structural details, especially for low dose and few-view CT acquisitions [2]. A distinguishing feature of transportation security CT is that scanned baggage may contain items with a wide range of material densities. While medical CT typically scans soft tissues, blood with and without contrast agents, and bones, luggage typically contains more high density materials (e.g. metals and glass), which can produce severe distortions such as metal streaking artifacts. Important factors in security CT are the emphasis on image quality such as resolution, contrast, noise level, and CT number accuracy for target detection. While MBIR has shown exemplary performance in the trade-off of noise reduction and resolution preservation, we demonstrate that DL may further improve this trade-off. In this study, we used KSVD-based DL [3] combined with the MBIR cost-minimization framework and compared results to Filtered Back Projection (FBP) and MBIR with pairwise pixel-difference regularization. We performed a parameter analysis to show the image quality impact of each parameter. We also investigated few-view CT acquisitions where DL can show an additional advantage relative to pairwise pixel-difference regularization.
Energy Technology Data Exchange (ETDEWEB)
Gordic, Sonja; Husarik, Daniela B.; Alkadhi, Hatem [University Hospital Zurich, University of Zurich, Institute of Diagnostic and Interventional Radiology, Zurich (Switzerland); Desbiolles, Lotus; Leschka, Sebastian [University Hospital Zurich, University of Zurich, Institute of Diagnostic and Interventional Radiology, Zurich (Switzerland); Kantonsspital, Division of Radiology and Nuclear Medicine, St. Gallen (Switzerland); Sedlmair, Martin; Schmidt, Bernhard [Siemens Healthcare, Computed Tomography Division, Forchheim (Germany); Manka, Robert [University Hospital Zurich, University of Zurich, Institute of Diagnostic and Interventional Radiology, Zurich (Switzerland); University Hospital Zurich, University of Zurich, Clinic of Cardiology, Zurich (Switzerland); University and ETH Zurich, Institute for Biomedical Engineering, Zurich (Switzerland); Plass, Andre; Maisano, Francesco [University Hospital Zurich, University of Zurich, Clinic for Cardiovascular Surgery, Zurich (Switzerland); Wildermuth, Simon [Kantonsspital, Division of Radiology and Nuclear Medicine, St. Gallen (Switzerland)
2016-02-15
To evaluate the potential of advanced modeled iterative reconstruction (ADMIRE) for optimizing radiation dose of high-pitch coronary CT angiography (CCTA). High-pitch 192-slice dual-source CCTA was performed in 25 patients (group 1) according to standard settings (ref. 100 kVp, ref. 270 mAs/rot). Images were reconstructed with filtered back projection (FBP) and ADMIRE (strength levels 1-5). In another 25 patients (group 2), high-pitch CCTA protocol parameters were adapted according to results from group 1 (ref. 160 mAs/rot), and images were reconstructed with ADMIRE level 4. In ten patients of group 1, vessel sharpness using full width at half maximum (FWHM) analysis was determined. Image quality was assessed by two independent, blinded readers. Interobserver agreements for attenuation and noise were excellent (r = 0.88/0.85, p < 0.01). In group 1, ADMIRE level 4 images were most often selected (84 %, 21/25) as preferred data set; at this level noise reduction was 40 % compared to FBP. Vessel borders showed increasing sharpness (FWHM) at increasing ADMIRE levels (p < 0.05). Image quality in group 2 was similar to that of group 1 at ADMIRE levels 2-3. Radiation dose in group 2 (0.3 ± 0.1 mSv) was significantly lower than in group 1 (0.5 ± 0.3 mSv; p < 0.05). In a selected population, ADMIRE can be used for optimizing high-pitch CCTA to an effective dose of 0.3 mSv. (orig.)
Benchmarking ICRF simulations for ITER
Energy Technology Data Exchange (ETDEWEB)
R. V. Budny, L. Berry, R. Bilato, P. Bonoli, M. Brambilla, R.J. Dumont, A. Fukuyama, R. Harvey, E.F. Jaeger, E. Lerche, C.K. Phillips, V. Vdovin, J. Wright, and members of the ITPA-IOS
2010-09-28
Benchmarking of full-wave solvers for ICRF simulations is performed using plasma profiles and equilibria obtained from integrated self-consistent modeling predictions of four ITER plasmas. One is for a high performance baseline (5.3 T, 15 MA) DT H-mode plasma. The others are for half-field, half-current plasmas of interest for the pre-activation phase with bulk plasma ion species being either hydrogen or He4. The predicted profiles are used by seven groups to predict the ICRF electromagnetic fields and heating profiles. Approximate agreement is achieved for the predicted heating power partitions for the DT and He4 cases. Profiles of the heating powers and electromagnetic fields are compared.
Directory of Open Access Journals (Sweden)
Yu Zhang
2015-10-01
In this article, we begin with the non-homogeneous model for the non-differentiable heat flow, which is described using the local fractional vector calculus, from the point of view of the first law of thermodynamics in fractal media. We employ the local fractional variational iteration algorithm II to solve the fractal heat equations. The obtained results show the non-differentiable behaviors of the temperature fields of fractal heat flow defined on Cantor sets.
Institute of Scientific and Technical Information of China (English)
Liu Zi-Xin; Wen Sheng-Hui; Li Ming
2008-01-01
A combination of the iterative perturbation theory (IPT) of the dynamical mean field theory (DMFT) and the coherent-potential approximation (CPA) is generalized to the double exchange model with orbital degeneracy. The Hubbard interaction and the off-diagonal components of the hopping matrix t_ij^mn (m ≠ n) are considered in our calculation of the spectrum and optical conductivity. The numerical results show that the effects of the non-diagonal hopping matrix elements are important.
Modified two-grid method for solving coupled Navier-Stokes/Darcy model based on Newton iteration
Institute of Scientific and Technical Information of China (English)
SHEN Yu-jing; HAN Dan-fu; SHAO Xin-ping
2015-01-01
A new decoupled two-grid algorithm with Newton iteration is proposed for solving the coupled Navier-Stokes/Darcy model which describes a fluid flow filtrating through porous media. Moreover, an error estimate is given, which shows that the same order of accuracy can be achieved as solving the system directly on the fine mesh when h = H^2. Both theoretical analysis and numerical experiments illustrate the efficiency of the algorithm for solving the coupled problem.
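The two-grid idea (a full Newton solve on the coarse mesh, then a single Newton linearization on the fine mesh with h = H^2) can be illustrated on a scalar model problem. This 1-D semilinear equation with a manufactured solution is only a stand-in for the Navier-Stokes/Darcy system of the paper.

```python
import numpy as np

def grid_and_laplacian(n):
    """Interior points and dense matrix of -u'' with Dirichlet BCs on (0, 1)."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h ** 2
    return x, A

def rhs(x):
    # manufactured so that u(x) = sin(pi x) solves  -u'' = u^2 + f
    return np.pi ** 2 * np.sin(np.pi * x) - np.sin(np.pi * x) ** 2

def newton(n, steps=30):
    """Full Newton solve of the nonlinear problem on an n-point grid."""
    x, A = grid_and_laplacian(n)
    f, u = rhs(x), np.zeros(n)
    for _ in range(steps):
        F = A @ u - u ** 2 - f
        u = u - np.linalg.solve(A - np.diag(2.0 * u), F)
    return x, u

NH = 15                                  # coarse grid: H = 1/16
xH, uH = newton(NH)                      # step 1: nonlinear solve, coarse only

Nh = (NH + 1) ** 2 - 1                   # fine grid: h = H^2 = 1/256
xh, Ah = grid_and_laplacian(Nh)
u0 = np.interp(xh, np.concatenate(([0.0], xH, [1.0])),
               np.concatenate(([0.0], uH, [0.0])))
# step 2: one Newton linearization (a single linear solve) on the fine grid
uh = u0 - np.linalg.solve(Ah - np.diag(2.0 * u0), Ah @ u0 - u0 ** 2 - rhs(xh))

err_interp = float(np.max(np.abs(u0 - np.sin(np.pi * xh))))
err_twogrid = float(np.max(np.abs(uh - np.sin(np.pi * xh))))
print(f"interpolated coarse error {err_interp:.2e} -> two-grid error {err_twogrid:.2e}")
```

Because one Newton step roughly squares the error of the linearization point, the O(H^2) coarse error becomes O(H^4) = O(h^2) after the single fine-grid solve, which is the mechanism behind the h = H^2 accuracy claim.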
DEFF Research Database (Denmark)
Van Daele, Timothy; Gernaey, Krist V.; Ringborg, Rolf Hoffmeyer
2017-01-01
The aim of model calibration is to estimate unique parameter values from available experimental data, here applied to a biocatalytic process. The traditional approach of first gathering data followed by performing a model calibration is inefficient, since the information gathered during experimen...
Heyvaert, S.; Meuret, Y.; Meulebroeck, W.; Thienpont, H.
2012-06-01
The propagation of coherent light through a heterogeneous medium is an often-encountered problem in optics. Analytical solutions, found by solving the appropriate differential equations, usually only exist for simplified and idealized situations, limiting their accuracy and applicability. A widely used approach is the Beam Propagation Method, in which the electric field is determined by solving the wave equation numerically, making the method time-consuming, a drawback exacerbated by the heterogeneity of the medium. In this work we propose an alternative approach which combines, in an iterative way, optical ray-tracing simulation in the software ASAP™ with numerical simulations in Matlab in order to model the change in light distribution in a medium with anisotropic absorption, exposed to partially coherent light with high irradiance. The medium under study is a photosensitive polymer in which photochemical reactions cause the local absorption to change as a function of the local light fluence. Under continuous illumination, this results in time-varying light distributions throughout the irradiation process. In our model the fluence-absorption interaction is modelled by splitting up each iteration step into two parts. In the first part the optical ray-tracing software determines the new light distribution in the medium using the absorption from the previous iteration step. In the second part, using the new light distribution, the new absorption coefficients are calculated and expressed as a set of polynomials. The evolution of the light distribution and absorption is presented and the change in total transmission is compared with experiments.
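The two-part iteration (compute the light distribution for the current absorption, then update the absorption from the new fluence) can be sketched in 1-D, with Beer-Lambert attenuation standing in for the ray-trace step and a purely hypothetical saturable photobleaching law standing in for the photochemistry:

```python
import numpy as np

nz = 200
dz = 1.0 / nz                     # 1-D slab, unit thickness (arbitrary units)
mu = np.full(nz, 5.0)             # initial absorption coefficient [1/unit length]
fluence = np.zeros(nz)
I0, dt = 1.0, 0.05                # incident irradiance and time step
transmission = []

for step in range(40):
    # part 1: light distribution for the current absorption profile
    # (Beer-Lambert attenuation stands in for the ray-trace step)
    tau = np.concatenate(([0.0], np.cumsum(mu * dz)))
    I = I0 * np.exp(-tau[:-1])                 # irradiance entering each cell
    transmission.append(float(np.exp(-tau[-1])))
    # part 2: new absorption coefficients from the updated local fluence
    fluence += I * dt
    mu = 5.0 / (1.0 + fluence)                 # hypothetical photobleaching law

print(f"transmission rises from {transmission[0]:.4f} to {transmission[-1]:.4f}")
```

As the front of the slab bleaches, light penetrates deeper and the total transmission grows over time, which is the qualitative behavior the iterative ASAP/Matlab scheme tracks in 3-D.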
Banerjee, S.; Vasu, P.; von Hellermann, M.; Jaspers, R. J. E.
2010-01-01
Contamination of optical signals by reflections from the tokamak vessel wall is a matter of great concern. For machines such as ITER and future reactors, where the vessel wall will be predominantly metallic, this is potentially a risk factor for quantitative optical emission spectroscopy. This is, i
Tahmasebi, Pejman; Sahimi, Muhammad
2016-03-01
This series addresses a fundamental issue in multiple-point statistical (MPS) simulation for generation of realizations of large-scale porous media. Past methods suffer from the fact that they generate discontinuities and patchiness in the realizations that, in turn, affect their flow and transport properties. Part I of this series addressed certain aspects of this fundamental issue, and proposed two ways of improving one such MPS method, namely, the cross correlation-based simulation (CCSIM) method that was proposed by the authors. In the present paper, a new algorithm is proposed to further improve the quality of the realizations. The method utilizes the realizations generated by the algorithm introduced in Part I, iteratively removes any possible remaining discontinuities in them, and addresses the problem with honoring hard (quantitative) data, using an error map. The map represents the differences between the patterns in the training image (TI) and the current iteration of a realization. The resulting iterative CCSIM—the iCCSIM algorithm—utilizes a random path and the error map to identify the locations in the current realization in the iteration process that need further "repairing;" that is, those locations at which discontinuities may still exist. The computational time of the new iterative algorithm is considerably lower than one in which every cell of the simulation grid is visited in order to repair the discontinuities. Furthermore, several efficient distance functions are introduced by which one extracts effectively key information from the TIs. To increase the quality of the realizations and extracting the maximum amount of information from the TIs, the distance functions can be used simultaneously. The performance of the iCCSIM algorithm is studied using very complex 2-D and 3-D examples, including those that are process-based. Comparison is made between the quality and accuracy of the results with those generated by the original CCSIM
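The error-map idea (distance between each local pattern of the realization and the closest training-image pattern, then a random path over the flagged cells) can be sketched on a toy binary training image. The patch size, distance function and repair selection below are drastic simplifications of the iCCSIM algorithm, for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)

def patches(img, w=3):
    """All w-by-w patches of a 2-D array, flattened to rows."""
    r, c = img.shape
    return np.array([img[i:i + w, j:j + w].ravel()
                     for i in range(r - w + 1) for j in range(c - w + 1)])

# training image: periodic binary stripes (a stand-in for a geological TI)
ti = np.tile(np.array([[0.0, 0.0, 1.0, 1.0]]), (16, 4))
real = ti.copy()
real[6:9, 6:9] = 1.0 - real[6:9, 6:9]        # inject a discontinuity

ti_patches = patches(ti)
err_map = np.zeros((ti.shape[0] - 2, ti.shape[1] - 2))
for i in range(err_map.shape[0]):
    for j in range(err_map.shape[1]):
        p = real[i:i + 3, j:j + 3].ravel()
        # distance to the closest training-image pattern
        err_map[i, j] = np.min(np.sum((ti_patches - p) ** 2, axis=1))

# random path over the cells still flagged as needing "repairing"
bad = np.argwhere(err_map > 0)
rng.shuffle(bad)
print(f"{len(bad)} candidate repair locations, worst mismatch {err_map.max():.0f}")
```

Visiting only the nonzero cells of the error map, rather than every cell of the grid, is what makes the iterative repair cheaper than a full sweep.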
Wazwaz, Abdul-Majid
2017-07-01
In this work we address the Lane-Emden boundary value problems which appear in chemical applications, biochemical applications, and scientific disciplines. We apply the variational iteration method to solve two specific models. The first problem models a reaction-diffusion equation in a spherical catalyst, while the second models the reaction-diffusion process in a spherical biocatalyst. We obtain reliable analytical expressions of the concentrations and the effectiveness factors. Graphs are used to illustrate the obtained results. The proposed analysis demonstrates the reliability and efficiency of the employed method.
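The variational iteration mechanics can be illustrated on a simpler linear ODE, u' + u = 0 with u(0) = 1, for which the Lagrange multiplier is simply -1. The Lane-Emden problems of the paper require an x-dependent multiplier, so this is only a sketch of how the correction functional generates successive analytic approximations.

```python
import sympy as sp

x, t = sp.symbols('x t')

# VIM correction functional for u' + u = 0, u(0) = 1, multiplier lambda = -1:
#   u_{n+1}(x) = u_n(x) - Integral_0^x (u_n'(t) + u_n(t)) dt
u = sp.Integer(1)                            # u_0 = u(0) = 1
for _ in range(6):
    integrand = (sp.diff(u, x) + u).subs(x, t)
    u = sp.expand(u - sp.integrate(integrand, (t, 0, x)))

print(u)     # successive iterates build the Taylor series of exp(-x)
```

Each iteration extends the polynomial by one order of the Taylor series of the exact solution exp(-x), which is the sense in which VIM delivers "reliable analytical expressions" rather than purely numerical ones.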
Energy Technology Data Exchange (ETDEWEB)
Ryu, Young Jin; Choi, Young Hun [Seoul National University Hospital, Department of Radiology, Seoul (Korea, Republic of); Seoul National University College of Medicine, Department of Radiology, Seoul (Korea, Republic of); Cheon, Jung-Eun; Kim, Woo Sun; Kim, In-One [Seoul National University Hospital, Department of Radiology, Seoul (Korea, Republic of); Seoul National University College of Medicine, Department of Radiology, Seoul (Korea, Republic of); Seoul National University Medical Research Center, Institute of Radiation Medicine, Seoul (Korea, Republic of); Ha, Seongmin [New York-Presbyterian Hospital and the Weill Cornell Medical College, Dalio Institute of Cardiovascular Imaging, New York, NY (United States)
2016-03-15
CT of pediatric phantoms can provide useful guidance to the optimization of knowledge-based iterative reconstruction CT. To compare radiation dose and image quality of CT images obtained at different radiation doses reconstructed with knowledge-based iterative reconstruction, hybrid iterative reconstruction and filtered back-projection. We scanned a 5-year-old anthropomorphic phantom at seven levels of radiation. We then reconstructed CT data with knowledge-based iterative reconstruction (iterative model reconstruction [IMR] levels 1, 2 and 3; Philips Healthcare, Andover, MA), hybrid iterative reconstruction (iDose4, levels 3 and 7; Philips Healthcare, Andover, MA) and filtered back-projection. The noise, signal-to-noise ratio and contrast-to-noise ratio were calculated. We evaluated low-contrast resolution and detectability using low-contrast targets, and subjective and objective spatial resolution using the line pairs and wire. With radiation at 100 peak kVp and 100 mAs (3.64 mSv), the relative doses ranged from 5% (0.19 mSv) to 150% (5.46 mSv). Lower noise and higher signal-to-noise, contrast-to-noise and objective spatial resolution were generally achieved in ascending order of filtered back-projection, iDose4 levels 3 and 7, and IMR levels 1, 2 and 3, at all radiation dose levels. Compared with filtered back-projection at 100% dose, similar noise levels were obtained on IMR level 2 images at 24% dose and iDose4 level 3 images at 50% dose, respectively. Regarding low-contrast resolution, low-contrast detectability and objective spatial resolution, IMR level 2 images at 24% dose showed comparable image quality with filtered back-projection at 100% dose. Subjective spatial resolution was not greatly affected by reconstruction algorithm. Reduced-dose IMR obtained at 0.92 mSv (24%) showed similar image quality to routine-dose filtered back-projection obtained at 3.64 mSv (100%), and half-dose iDose4 obtained at 1.81 mSv. (orig.)
PREDICT : model for prediction of survival in localized prostate cancer
Kerkmeijer, Linda G W; Monninkhof, Evelyn M.; van Oort, Inge M.; van der Poel, Henk G.; de Meerleer, Gert; van Vulpen, Marco
2016-01-01
Purpose: Current models for prediction of prostate cancer-specific survival do not incorporate all present-day interventions. In the present study, a pre-treatment prediction model for patients with localized prostate cancer was developed.Methods: From 1989 to 2008, 3383 patients were treated with I
Fast Nonconvex Model Predictive Control for Commercial Refrigeration
DEFF Research Database (Denmark)
Hovgaard, Tobias Gybel; Larsen, Lars F. S.; Jørgensen, John Bagterp
2012-01-01
in fewer than 5 or so iterations. We employ a fast convex quadratic programming solver to carry out the iterations, which is more than fast enough to run in real-time. We demonstrate our method on a realistic model, with a full year simulation, using real historical data. These simulations show substantial...
Calzado, A; Geleijns, J; Joemai, R M S; Veldkamp, W J H
2014-01-01
Objective: To compare low-contrast detectability (LCDet) performance between a model [non–pre-whitening matched filter with an eye filter (NPWE)] and human observers in CT images reconstructed with filtered back projection (FBP) and iterative [adaptive iterative dose reduction three-dimensional (AIDR 3D; Toshiba Medical Systems, Zoetermeer, Netherlands)] algorithms. Methods: Images of the Catphan® phantom (Phantom Laboratories, New York, NY) were acquired with Aquilion ONE™ 320-detector row CT (Toshiba Medical Systems, Tokyo, Japan) at five tube current levels (20–500 mA range) and reconstructed with FBP and AIDR 3D. Samples containing either low-contrast objects (diameters, 2–15 mm) or background were extracted and analysed by the NPWE model and four human observers in a two-alternative forced choice detection task study. Proportion correct (PC) values were obtained for each analysed object and used to compare human and model observer performances. An efficiency factor (η) was calculated to normalize NPWE to human results. Results: Human and NPWE model PC values (normalized by the efficiency, η = 0.44) were highly correlated for the whole dose range. The Pearson's product-moment correlation coefficients (95% confidence interval) between human and NPWE were 0.984 (0.972–0.991) for AIDR 3D and 0.984 (0.971–0.991) for FBP, respectively. Bland–Altman plots based on PC results showed excellent agreement between human and NPWE [mean absolute difference 0.5 ± 0.4%; range of differences (−4.7%, 5.6%)]. Conclusion: The NPWE model observer can predict human performance in LCDet tasks in phantom CT images reconstructed with FBP and AIDR 3D algorithms at different dose levels. Advances in knowledge: Quantitative assessment of LCDet in CT can accurately be performed using software based on a model observer. PMID:24837275
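For white noise and a unit eye filter, the NPWE decision model reduces to a matched-filter d' with PC = Φ(d'/√2) in the 2AFC task. A sketch with a synthetic Gaussian blob follows; the signal contrast, noise level and the omitted eye filter are all simplifications of the paper's phantom-based setup.

```python
import math
import numpy as np

rng = np.random.default_rng(3)

n = 32
xx, yy = np.meshgrid(np.arange(n) - n / 2, np.arange(n) - n / 2)
signal = 0.2 * np.exp(-(xx ** 2 + yy ** 2) / (2 * 4.0 ** 2))  # low-contrast blob
template = signal            # NPW template; eye filter omitted (E(f) = 1)
sigma = 1.0                  # white-noise standard deviation per pixel

# analytic 2AFC proportion correct: PC = Phi(d' / sqrt(2))
dprime = math.sqrt(float(np.sum(signal ** 2))) / sigma
pc_analytic = 0.5 * (1.0 + math.erf(dprime / 2.0))

trials, correct = 4000, 0
for _ in range(trials):      # two-alternative forced choice with the template
    r_sig = np.sum(template * (signal + sigma * rng.normal(size=signal.shape)))
    r_bkg = np.sum(template * (sigma * rng.normal(size=signal.shape)))
    correct += int(r_sig > r_bkg)
pc_mc = correct / trials
print(f"PC analytic {pc_analytic:.3f} vs Monte-Carlo {pc_mc:.3f}")
```

Replacing the white-noise assumption with the measured noise power spectrum of FBP or AIDR 3D images, and inserting the eye filter, turns this toy into the full NPWE observer whose PC values the study correlates with human readers.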
Relaxation Criteria for Iterated Traffic Simulations
Kelly, Terence; Nagel, Kai
Iterative transportation microsimulations adjust traveler route plans by iterating between a microsimulation and a route planner. At each iteration, the route planner adjusts individuals' route choices based on the preceding microsimulations. Empirically, this process yields good results, but it is usually unclear when to stop the iterative process when modeling real-world traffic. This paper investigates several criteria to judge relaxation of the iterative process, emphasizing criteria related to traveler decision-making.
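One such relaxation criterion, stopping when the relative change of the mean travel time between successive iterations falls below a tolerance, can be sketched on a toy two-route network with method-of-successive-averages reassignment. All numbers here are hypothetical; the congestion function is a BPR-like stand-in for the microsimulation.

```python
t0, cap = [10.0, 12.0], [600.0, 800.0]   # free-flow times and capacities
demand = 1000.0
flows = [demand, 0.0]                    # iteration 0: everyone on route 0
history = []

def travel_times(f):
    # BPR-like congestion function: t = t0 * (1 + (flow / capacity)^2)
    return [t0[i] * (1.0 + (f[i] / cap[i]) ** 2) for i in range(2)]

for it in range(200):
    tt = travel_times(flows)
    history.append(sum(f * t for f, t in zip(flows, tt)) / demand)
    # relaxation criterion: relative change of mean travel time below 0.5%
    if it > 0 and abs(history[-1] - history[-2]) / history[-2] < 5e-3:
        break
    # "route planner": all-or-nothing assignment to the currently fastest
    # route, blended by the method of successive averages (step = 1/(it+2))
    fast = tt.index(min(tt))
    target = [demand if i == fast else 0.0 for i in range(2)]
    step = 1.0 / (it + 2)
    flows = [(1 - step) * f + step * g for f, g in zip(flows, target)]

print(f"criterion met after {len(history)} iterations; mean time {history[-1]:.2f}")
```

The example also exposes the pitfall the paper discusses: an aggregate indicator can momentarily stop changing while the underlying route flows are still oscillating, so the choice of relaxation criterion matters.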
Predictive Modeling of Cardiac Ischemia
Anderson, Gary T.
1996-01-01
The goal of the Contextual Alarms Management System (CALMS) project is to develop sophisticated models to predict the onset of clinical cardiac ischemia before it occurs. The system will continuously monitor cardiac patients and set off an alarm when they appear about to suffer an ischemic episode. The models take as inputs information from patient history and combine it with continuously updated information extracted from blood pressure, oxygen saturation and ECG lines. Expert system, statistical, neural network and rough set methodologies are then used to forecast the onset of clinical ischemia before it transpires, thus allowing early intervention aimed at preventing morbid complications from occurring. The models will differ from previous attempts by including combinations of continuous and discrete inputs. A commercial medical instrumentation and software company has invested funds in the project with a goal of commercialization of the technology. The end product will be a system that analyzes physiologic parameters and produces an alarm when myocardial ischemia is present. If proven feasible, a CALMS-based system will be added to existing heart monitoring hardware.
Bagni, T.; Duchateau, J. L.; Breschi, M.; Devred, A.; Nijhuis, A.
2017-09-01
Cable-in-conduit conductors (CICCs) for ITER magnets are subjected to fast-changing magnetic fields during the plasma-operating scenario. In order to anticipate the limitations of conductors under the foreseen operating conditions, it is essential to have a better understanding of the stability margin of magnets. In the last decade ITER has launched a campaign for characterization of several types of NbTi and Nb3Sn CICCs comprising quench tests with a single fast sine-wave magnetic field pulse of relatively small amplitude. The stability tests, performed in the SULTAN facility, were reproduced and analyzed using two codes: JackPot-AC/DC, an electromagnetic-thermal numerical model for CICCs developed at the University of Twente (van Lanen and Nijhuis 2010 Cryogenics 50 139-148), and the multi-constant model (MCM) (Turck and Zani 2010 Cryogenics 50 443-9), an analytical model for CICC coupling losses. The outputs of both codes were combined with thermal, hydraulic and electric analysis of superconducting cables to predict the minimum quench energy (MQE) (Bottura et al 2000 Cryogenics 40 617-26). The experimental AC loss results were used to calibrate the JackPot and MCM models and to reproduce the energy deposited in the cable during an MQE test. The agreement between experiments and models confirms a good comprehension of the various CICC thermal and electromagnetic phenomena. The differences between the analytical MCM and numerical JackPot approaches are discussed. The results provide a good basis for further investigation of CICC stability under plasma scenario conditions using magnetic field pulses with lower ramp rate and higher amplitude.
Numerical weather prediction model tuning via ensemble prediction system
Jarvinen, H.; Laine, M.; Ollinaho, P.; Solonen, A.; Haario, H.
2011-12-01
This paper discusses a novel approach to tuning the predictive skill of numerical weather prediction (NWP) models. NWP models contain tunable parameters which appear in parameterization schemes of sub-grid scale physical processes. Currently, numerical values of these parameters are specified manually. In a recent dual manuscript (QJRMS, revised) we developed a new concept and method for on-line estimation of the NWP model parameters. The EPPES ("Ensemble prediction and parameter estimation system") method requires only minimal changes to the existing operational ensemble prediction infrastructure and is very cost-effective because practically no new computations are introduced. The approach provides an algorithmic decision-making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model tunable parameters is made by (i) generating each member of the ensemble of predictions using different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In the presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model, which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to improved forecast skill. Second, results with an atmospheric general circulation model based ensemble prediction system show that the NWP model tuning capacity of EPPES scales up to realistic models and ensemble prediction systems. Finally, preliminary results of a tuning exercise with a top-end global NWP model are presented.
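The two-step cycle described above — sample parameters from a proposal, then feed back their relative merits — can be caricatured with a toy importance-weighting loop. This is a hedged sketch in the spirit of the abstract, not the EPPES algorithm itself: the "model" is a trivial identity map, and the likelihood, ensemble size, and proposal-shrinkage schedule are all assumptions.

```python
import math
import random

random.seed(0)

def likelihood(forecast, obs):
    """Gaussian likelihood of a forecast against a verifying observation."""
    return math.exp(-0.5 * (forecast - obs) ** 2)

# Toy stand-in for an NWP model: the forecast equals the tunable parameter.
model = lambda theta: theta

obs = 2.0                # verifying observation
mu, sigma = 0.0, 1.0     # proposal distribution over the parameter

for _ in range(50):      # successive forecast/verification cycles
    ensemble = [random.gauss(mu, sigma) for _ in range(20)]
    weights = [likelihood(model(th), obs) for th in ensemble]
    total = sum(weights)
    # feed back relative merits: re-centre the proposal on the weighted mean
    mu = sum(w * th for w, th in zip(weights, ensemble)) / total
    sigma = max(0.1, 0.9 * sigma)   # slowly narrow the proposal
```

After the cycles, the proposal mean `mu` has drifted toward the parameter value that best explains the observations, which is the qualitative behaviour EPPES exploits at much larger scale.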
The Tungsten Project: Dielectronic Recombination Data For Collisional-Radiative Modelling In ITER
Preval, S P; O'Mullane, M G
2016-01-01
Tungsten is an important metal in nuclear fusion reactors. It will be used in the divertor component of ITER (Latin for 'the way'). The Tungsten Project aims to calculate partial and total dielectronic recombination (DR) rate coefficients for the isonuclear sequence of tungsten. The calculated data will be made available as and when they are produced via the open-access database OPEN-ADAS in the standard adf09 and adf48 file formats. We present our progress thus far, detailing calculational methods, and showing comparisons with other available data. We conclude with plans for the future.
Application of Homotopy Perturbation and Variational Iteration Methods to SIR Epidemic Model
DEFF Research Database (Denmark)
Ghotbi, Abdoul R.; Barari, Amin; Omidvar, M.;
2011-01-01
Children born are susceptible to various diseases such as mumps, chicken pox etc. These diseases are the most common form of infectious diseases. In recent years, scientists have been trying to devise strategies to fight against these diseases. Since vaccination is considered to be the most....... In this article two methods namely Homotopy Perturbation Method (HPM) and Variational Iteration Method (VIM) are employed to compute an approximation to the solution of non-linear system of differential equations governing the problem. The obtained results are compared with those obtained by Adomian Decomposition...
The Tungsten Project: Dielectronic recombination data for collisional-radiative modelling in ITER
Preval, S. P.; Badnell, N. R.; O'Mullane, M. G.
2017-03-01
Tungsten is an important metal in nuclear fusion reactors. It will be used in the divertor component of ITER (Latin for `the way'). The Tungsten Project aims to calculate partial and total DR rate coefficients for the isonuclear sequence of Tungsten. The calculated data will be made available as and when they are produced via the open access database OPEN-ADAS in the standard adf09 and adf48 file formats. We present our progress thus far, detailing calculational methods, and showing comparisons with other available data. We conclude with plans for the future.
Power deposition modelling of the ITER-like wall beryllium tiles at JET
Firdaouss, M.; Mitteau, R.; Villedieu, E.; Riccardo, V.; Lomas, P.; Vizvary, Z.; Portafaix, C.; Ferrand, L.; Thomas, P.; Nunes, I.; de Vries, P.; Chappuis, P.; Stephan, Y.
2009-06-01
A precise geometric method is used to calculate the power deposition on the future JET ITER-Like Wall beryllium tiles, with particular emphasis on the loads on internal edges. If over-heated surfaces are identified, they can be modified before machining or, failing that, actively monitored during operations. This paper presents the methodology applied to the assessment of the main chamber beryllium limiters. The detailed analysis of one limiter is described. The conclusion of this study is that operation will not be limited by edges exposed to plasma convective loads.
Directory of Open Access Journals (Sweden)
Waleed Albusaidi
2015-08-01
This paper introduces a new iterative method to predict the equivalent centrifugal compressor performance at various operating conditions. The presented theoretical analysis and empirical correlations provide a novel approach for deriving the entire compressor map corresponding to various suction conditions without prior knowledge of the detailed geometry. The efficiency model was derived to reflect the impact of physical gas properties, Mach number, and flow and work coefficients. One of the main features of the developed technique is that it accounts for variation in the gas properties and stage efficiency, which makes it suitable for hydrocarbon applications. The method was tested on two multistage centrifugal compressors, and the estimated characteristics were compared with measured data. The comparison revealed good agreement with the actual values, including the limits of the stable operating region. Furthermore, an optimization study was conducted to investigate the influence of suction conditions on stage efficiency and surge margin. Moreover, a new form of presentation was generated to obtain the equivalent performance characteristics for constant-discharge-pressure operation at variable suction pressure and temperature. A further validation is included in part two of this study to evaluate the prediction capability of the derived model at various gas compositions.
Return Predictability, Model Uncertainty, and Robust Investment
DEFF Research Database (Denmark)
Lukas, Manuel
Stock return predictability is subject to great uncertainty. In this paper we use the model confidence set approach to quantify uncertainty about expected utility from investment, accounting for potential return predictability. For monthly US data and six representative return prediction models, we...
Energy Technology Data Exchange (ETDEWEB)
Saadd, Y.
1994-12-31
In spite of the tremendous progress achieved in recent years in the general area of iterative solution techniques, there are still a few obstacles to the acceptance of iterative methods in a number of applications. These applications give rise to very indefinite or highly ill-conditioned non-Hermitian matrices. Trying to solve these systems with the simple-minded standard preconditioned Krylov subspace methods can be a frustrating experience. With the mathematical and physical models becoming more sophisticated, the typical linear systems which we encounter today are far more difficult to solve than those of just a few years ago. This trend is likely to intensify. This workshop will discuss (1) these applications and the types of problems that they give rise to; and (2) recent progress in solving these problems with iterative methods. The workshop will end with a hopefully stimulating panel discussion with the speakers.
Predictive Model Assessment for Count Data
2007-09-05
…critique count regression models for patent data, and assess the predictive performance of Bayesian age-period-cohort models for larynx cancer counts in Germany. We consider a recent suggestion by Baker and … (Snippet fragments also reference Figure 5, boxplots of various scores for the patent-data count regressions, and Table 1, four predictive models for larynx cancer counts in Germany, 1998–2002.)
Bergeot, Baptiste; Vergez, Christophe; Gazengel, Bruno
2014-01-01
Simple models of clarinet instruments based on iterated maps have been used in the past to successfully estimate the threshold of oscillation of this instrument as a function of a constant blowing pressure. However, when the blowing pressure gradually increases through time, the oscillations appear at a much higher value than what is predicted in the static case. This is known as bifurcation delay, a phenomenon studied in [1] for a clarinet model. In numerical simulations the bifurcation delay showed a strong sensitivity to numerical precision.
Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.
2013-01-01
obtained from the iterative two-stage method also improved predictive performance of the individual models and model averaging in both synthetic and experimental studies.
Gharamti, M. E.
2015-05-11
The ensemble Kalman filter (EnKF) is a popular method for state-parameter estimation of subsurface flow and transport models based on field measurements. The common filtering procedure is to directly update the state and parameters as one single vector, which is known as the Joint-EnKF. In this study, we follow the one-step-ahead smoothing formulation of the filtering problem to derive a new joint-based EnKF which involves a smoothing step of the state between two successive analysis steps. The new state-parameter estimation scheme is derived in a consistent Bayesian filtering framework and results in separate update steps for the state and the parameters. This new algorithm bears strong resemblance to the Dual-EnKF, but unlike the latter, which first propagates the state with the model and then updates it with the new observation, the proposed scheme starts with an update step, followed by a model integration step. We exploit this new formulation of the joint filtering problem and propose an efficient model-integration-free iterative procedure on the update step of the parameters only, for further improved performance. Numerical experiments are conducted with a two-dimensional synthetic subsurface transport model simulating the migration of a contaminant plume in a heterogeneous aquifer domain. Contaminant concentration data are assimilated to estimate both the contaminant state and the hydraulic conductivity field. Assimilation runs are performed under imperfect modeling conditions and various observational scenarios. Simulation results suggest that the proposed scheme efficiently recovers both the contaminant state and the aquifer conductivity, providing more accurate estimates than the standard Joint and Dual EnKFs in all tested scenarios. Iterating on the update step of the new scheme further enhances the proposed filter's behavior. In terms of computational cost, the new Joint-EnKF is almost equivalent to that of the Dual-EnKF, but requires twice more model
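To make the basic building block concrete, the sketch below shows a single stochastic EnKF analysis step for a scalar state. It is a deliberately minimal, hedged illustration of the generic update used in all EnKF variants, not the Joint-, Dual-, or smoothing-based schemes compared in the abstract; the ensemble size, prior spread, and observation error are assumptions.

```python
import random

random.seed(1)

def enkf_analysis(ensemble, obs, obs_std):
    """Stochastic EnKF analysis step for a scalar state: each member
    assimilates a perturbed copy of the observation."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)
    gain = var / (var + obs_std ** 2)          # scalar Kalman gain
    return [x + gain * (obs + random.gauss(0.0, obs_std) - x)
            for x in ensemble]

prior = [random.gauss(0.0, 2.0) for _ in range(200)]       # forecast ensemble
posterior = enkf_analysis(prior, obs=3.0, obs_std=0.5)     # analysis ensemble
```

The analysis ensemble mean moves toward the observation and its spread shrinks, exactly the contraction that, repeated over assimilation cycles (and extended to parameters), underlies the schemes discussed above.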
Moyer, R. A.; Paz-Soldan, C.; Nazikian, R.; Orlov, D. M.; Ferraro, N. M.; Grierson, B. A.; Knölker, M.; Lyons, B. C.; McKee, G. R.; Osborne, T. H.; Rhodes, T. L.; Meneghini, O.; Smith, S.; Evans, T. E.; Fenstermacher, M. E.; Groebner, R. J.; Hanson, J. M.; La Haye, R. J.; Luce, T. C.; Mordijck, S.; Solomon, W. M.; Turco, F.; Yan, Z.; Zeng, L.; DIII-D Team
2017-10-01
Experiments have been executed in the DIII-D tokamak to extend suppression of Edge Localized Modes (ELMs) with Resonant Magnetic Perturbations (RMPs) to ITER-relevant levels of beam torque. The results support the hypothesis for RMP ELM suppression based on transition from an ideal screened response to a tearing response at a resonant surface that prevents expansion of the pedestal to an unstable width [Snyder et al., Nucl. Fusion 51, 103016 (2011) and Wade et al., Nucl. Fusion 55, 023002 (2015)]. In ITER baseline plasmas with I/aB = 1.4 and pedestal ν* ˜ 0.15, ELMs are readily suppressed with co-Ip neutral beam injection. However, reducing the beam torque from 5 Nm to ≤ 3.5 Nm results in loss of ELM suppression and a shift in the zero-crossing of the electron perpendicular rotation ω⊥e ˜ 0 deeper into the plasma. The change in radius of ω⊥e ˜ 0 is due primarily to changes to the electron diamagnetic rotation frequency ωe*. Linear plasma response modeling with the resistive MHD code m3d-c1 indicates that the tearing response location tracks the inward shift in ω⊥e ˜ 0. At pedestal ν* ˜ 1, ELM suppression is also lost when the beam torque is reduced, but the ω⊥e change is dominated by collapse of the toroidal rotation vT. The hypothesis predicts that it should be possible to obtain ELM suppression at reduced beam torque by also reducing the height and width of the ωe* profile. This prediction has been confirmed experimentally with RMP ELM suppression at 0 Nm of beam torque and plasma normalized pressure βN ˜ 0.7. This opens the possibility of accessing ELM suppression in low torque ITER baseline plasmas by establishing suppression at low beta and then increasing beta while relying on the strong RMP-island coupling to maintain suppression.
Application of Gauss's law space-charge limited emission model in iterative particle tracking method
Energy Technology Data Exchange (ETDEWEB)
Altsybeyev, V.V., E-mail: v.altsybeev@spbu.ru; Ponomarev, V.A.
2016-11-01
The particle tracking method with a so-called gun iteration for modeling the space charge is discussed in the following paper. We suggest applying an emission model based on Gauss's law for the calculation of the space-charge-limited current density distribution within this method. Based on the presented emission model we have developed a numerical algorithm for these calculations. This approach allows us to perform accurate and inexpensive numerical simulations of different vacuum sources with curved emitting surfaces, including in the presence of additional physical effects such as bipolar flows and backscattered electrons. The results of simulations of a cylindrical diode and a diode with an elliptical emitter, using axisymmetric coordinates, are presented. The high efficiency and accuracy of the suggested approach are confirmed by the obtained results and by comparisons with analytical solutions.
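For orientation, the classical analytical benchmark that such space-charge-limited emission models must reproduce in the planar limit is the Child-Langmuir law, J = (4ε₀/9)·√(2e/m)·V^{3/2}/d². The sketch below simply evaluates this formula; it is a hedged reference calculation, not the Gauss's-law-based iterative scheme of the paper.

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity (F/m)
Q_E = 1.602176634e-19     # elementary charge (C)
M_E = 9.1093837015e-31    # electron mass (kg)

def child_langmuir(voltage, gap):
    """Space-charge-limited current density (A/m^2) of an ideal planar
    vacuum diode: J = (4*eps0/9) * sqrt(2e/m) * V^(3/2) / d^2."""
    return (4.0 * EPS0 / 9.0) * math.sqrt(2.0 * Q_E / M_E) \
        * voltage ** 1.5 / gap ** 2

j = child_langmuir(voltage=1.0e3, gap=1.0e-2)   # 1 kV across a 1 cm gap
```

For 1 kV across 1 cm this gives roughly 7×10² A/m², the kind of value against which simulated cylindrical and elliptical diode geometries can be sanity-checked.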
Institute of Scientific and Technical Information of China (English)
XIONG ZhiHua; DONG Jin; ZHANG Jie
2009-01-01
An optimal iterative learning control (ILC) strategy for improving endpoint products in semi-batch processes is presented, combined with a neural network model. A control-affine feed-forward neural network (CAFNN) is proposed to model the semi-batch process. The main advantage of the CAFNN is that the gradient of the endpoint products with respect to the input can be obtained analytically. Therefore, an optimal ILC law with direct error feedback is obtained explicitly, and the convergence of the tracking error can be analyzed theoretically. It has been proved that the tracking errors may converge to small values. The proposed modeling and control strategy is illustrated on a simulated isothermal semi-batch reactor, and the results show that the endpoint products can be improved gradually from batch to batch.
Nonlinear chaotic model for predicting storm surges
Directory of Open Access Journals (Sweden)
M. Siek
2010-09-01
This paper addresses the use of the methods of nonlinear dynamics and chaos theory for building a predictive chaotic model from time series. The chaotic model predictions are made by the adaptive local models based on the dynamical neighbors found in the reconstructed phase space of the observables. We implemented the univariate and multivariate chaotic models with direct and multi-steps prediction techniques and optimized these models using an exhaustive search method. The built models were tested for predicting storm surge dynamics for different stormy conditions in the North Sea, and are compared to neural network models. The results show that the chaotic models can generally provide reliable and accurate short-term storm surge predictions.
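The core idea of local modelling in a reconstructed phase space can be sketched in a few lines: delay-embed the series, find the dynamical neighbours of the current state, and average their successors. This is a hedged, minimal illustration on a noiseless sine wave, not the adaptive multivariate models or exhaustive-search optimization of the paper; embedding dimension, delay, and neighbour count are assumptions.

```python
import math

def embed(series, dim, tau):
    """Delay-coordinate embedding of a scalar time series."""
    return [tuple(series[i + j * tau] for j in range(dim))
            for i in range(len(series) - (dim - 1) * tau)]

def local_predict(series, dim=3, tau=1, k=4):
    """Predict the next value by averaging the successors of the k nearest
    dynamical neighbours of the current state in the embedded space."""
    vectors = embed(series, dim, tau)
    query = vectors[-1]
    # candidate neighbours (all but the query) each have a known successor
    cands = sorted((math.dist(v, query), i)
                   for i, v in enumerate(vectors[:-1]))
    horizon = (dim - 1) * tau   # offset of a vector's last coordinate
    return sum(series[i + horizon + 1] for _, i in cands[:k]) / k

# Noiseless toy signal: neighbours on the attractor predict the next sample.
series = [math.sin(0.3 * t) for t in range(200)]
pred = local_predict(series)
actual = math.sin(0.3 * 200)
```

On a clean periodic signal the neighbours' successors agree closely with the true next sample; for storm-surge data the same mechanism is applied to multivariate, noisy embeddings.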
Nonlinear chaotic model for predicting storm surges
Siek, M.; Solomatine, D.P.
This paper addresses the use of the methods of nonlinear dynamics and chaos theory for building a predictive chaotic model from time series. The chaotic model predictions are made by the adaptive local models based on the dynamical neighbors found in the reconstructed phase space of the observables.
EFFICIENT PREDICTIVE MODELLING FOR ARCHAEOLOGICAL RESEARCH
Balla, A.; Pavlogeorgatos, G.; Tsiafakis, D.; Pavlidis, G.
2014-01-01
The study presents a general methodology for designing, developing and implementing predictive modelling for identifying areas of archaeological interest. The methodology is based on documented archaeological data and geographical factors, geospatial analysis and predictive modelling, and has been applied to the identification of possible Macedonian tombs’ locations in Northern Greece. The model was tested extensively and the results were validated using a commonly used predictive gain,...
Operon Prediction Based on an Iterative Self-learning Algorithm
Institute of Scientific and Technical Information of China (English)
吴文琪; 郑晓斌; 刘永初; 汤凯; 朱怀球
2011-01-01
their training sets. Nevertheless, the lack of experimental operon datasets has been the bottleneck of operon prediction. The authors employ an iterative self-learning algorithm that does not require a training set of known operons. The algorithm is based on a probabilistic model using features including gene distance, gene-expression regulation signals, and functional annotation such as COG. The test results, compared against experimental operon data, indicate that the algorithm can reach the best accuracy without any training set. Moreover, this self-learning algorithm is superior to algorithms trained on any species with known operons. Accordingly, the algorithm can be applied to any newly sequenced genome. Finally, comparative analysis of bacteria and archaea enhances the knowledge of universal and genome-specific features of operons.
Nonlinear model predictive control based on collective neurodynamic optimization.
Yan, Zheng; Wang, Jun
2015-04-01
In general, nonlinear model predictive control (NMPC) entails solving a sequential global optimization problem with a nonconvex cost function or constraints. This paper presents a novel collective neurodynamic optimization approach to NMPC without linearization. Utilizing a group of recurrent neural networks (RNNs), the proposed collective neurodynamic optimization approach searches for optimal solutions to global optimization problems by emulating brainstorming. Each RNN is guaranteed to converge to a candidate solution by performing constrained local search. By exchanging information and iteratively improving the starting and restarting points of each RNN using the information of local and global best known solutions in a framework of particle swarm optimization, the group of RNNs is able to reach global optimal solutions to global optimization problems. The essence of the proposed collective neurodynamic optimization approach lies in the integration of capabilities of global search and precise local search. The simulation results of many cases are discussed to substantiate the effectiveness and the characteristics of the proposed approach.
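The key mechanism described above — several local searchers exchanging a global best to escape local minima — is closely related to particle swarm optimization, which the abstract cites as the information-exchange framework. The sketch below is a plain PSO on a nonconvex 1-D cost; it is a hedged proxy for the collective neurodynamic approach (plain particles stand in for the recurrent neural networks), and all coefficients are conventional assumed values.

```python
import math
import random

random.seed(42)

def cost(x):
    """Nonconvex cost with many local minima; global minimum at x = 0."""
    return x * x + 2.0 * (1.0 - math.cos(3.0 * x))

# Each particle plays the role of one local searcher; sharing gbest echoes
# the information exchange between the RNNs in the abstract.
n_particles, n_iter = 30, 200
pos = [random.uniform(-5.0, 5.0) for _ in range(n_particles)]
vel = [0.0] * n_particles
pbest = pos[:]                     # each searcher's best-known solution
gbest = min(pos, key=cost)         # globally best-known solution

for _ in range(n_iter):
    for i in range(n_particles):
        r1, r2 = random.random(), random.random()
        vel[i] = (0.7 * vel[i]                      # inertia
                  + 1.5 * r1 * (pbest[i] - pos[i])  # pull toward own best
                  + 1.5 * r2 * (gbest - pos[i]))    # pull toward global best
        pos[i] += vel[i]
        if cost(pos[i]) < cost(pbest[i]):
            pbest[i] = pos[i]
        if cost(pos[i]) < cost(gbest):
            gbest = pos[i]
```

Despite the surrounding local minima (e.g. near x ≈ ±2.1), the swarm's shared global best settles into the global basin at the origin, illustrating why combining many local searches with information exchange helps in nonconvex NMPC cost landscapes.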
Freire, Paulo G L; Ferrari, Ricardo J
2016-06-01
Multiple sclerosis (MS) is a demyelinating autoimmune disease that attacks the central nervous system (CNS) and affects more than 2 million people worldwide. The segmentation of MS lesions in magnetic resonance imaging (MRI) is a very important task to assess how a patient is responding to treatment and how the disease is progressing. Computational approaches have been proposed over the years to segment MS lesions and reduce the amount of time spent on manual delineation and inter- and intra-rater variability and bias. However, fully-automatic segmentation of MS lesions still remains an open problem. In this work, we propose an iterative approach using Student's t mixture models and probabilistic anatomical atlases to automatically segment MS lesions in Fluid Attenuated Inversion Recovery (FLAIR) images. Our technique resembles a refinement approach by iteratively segmenting brain tissues into smaller classes until MS lesions are grouped as the most hyperintense one. To validate our technique we used 21 clinical images from the 2015 Longitudinal Multiple Sclerosis Lesion Segmentation Challenge dataset. Evaluation using Dice Similarity Coefficient (DSC), True Positive Ratio (TPR), False Positive Ratio (FPR), Volume Difference (VD) and Pearson's r coefficient shows that our technique has a good spatial and volumetric agreement with raters' manual delineations. Also, a comparison between our proposal and the state-of-the-art shows that our technique is comparable and, in some cases, better than some approaches, thus being a viable alternative for automatic MS lesion segmentation in MRI.
Energy Technology Data Exchange (ETDEWEB)
Oda, Seitaro [MedStar Washington Hospital Center, Department of Cardiology, Washington, DC (United States); Kumamoto University, Department of Diagnostic Radiology, Faculty of Life Sciences, Kumamoto (Japan); Weissman, Gaby; Weigold, W. Guy [MedStar Washington Hospital Center, Department of Cardiology, Washington, DC (United States); Vembar, Mani [Philips Healthcare, CT Clinical Science, Cleveland, OH (United States)
2015-01-15
The purpose of this study was to investigate the effects of knowledge-based iterative model reconstruction (IMR) on image quality in cardiac CT performed for the planning of redo cardiac surgery by comparing IMR images with images reconstructed with filtered back-projection (FBP) and hybrid iterative reconstruction (HIR). We studied 31 patients (23 men, 8 women; mean age 65.1 ± 16.5 years) referred for redo cardiac surgery who underwent cardiac CT. Paired image sets were created using three types of reconstruction: FBP, HIR, and IMR. Quantitative parameters including CT attenuation, image noise, and contrast-to-noise ratio (CNR) of each cardiovascular structure were calculated. The visual image quality - graininess, streak artefact, margin sharpness of each cardiovascular structure, and overall image quality - was scored on a five-point scale. The mean image noise of FBP, HIR, and IMR images was 58.3 ± 26.7, 36.0 ± 12.5, and 14.2 ± 5.5 HU, respectively; there were significant differences in all comparison combinations among the three methods. The CNR of IMR images was better than that of FBP and HIR images in all evaluated structures. The visual scores were significantly higher for IMR than for the other images in all evaluated parameters. IMR can provide significantly improved qualitative and quantitative image quality in cardiac CT for the planning of reoperative cardiac surgery. (orig.)
DISOPE distributed model predictive control of cascade systems with network communication
Institute of Scientific and Technical Information of China (English)
Yan ZHANG; Shaoyuan LI
2005-01-01
A novel distributed model predictive control scheme based on dynamic integrated system optimization and parameter estimation (DISOPE) was proposed for nonlinear cascade systems under a network environment. Under the distributed control structure, online optimization of the cascade system was composed of several cascaded agents that cooperate and exchange information via network communication. By iterating on modified distributed linear optimal control problems while estimating parameters at every iteration, the correct optimal control action of the nonlinear model predictive control problem of the cascade system could be obtained, assuming the algorithm converges. This approach avoids solving the complex nonlinear optimization problem and significantly reduces the computational burden. Simulation results for a fossil fuel power unit are presented to verify the effectiveness and practicability of the proposed algorithm.
How to Establish Clinical Prediction Models
Directory of Open Access Journals (Sweden)
Yong-ho Lee
2016-03-01
A clinical prediction model can be applied to several challenging clinical scenarios: screening high-risk individuals for asymptomatic disease, predicting future events such as disease or death, and assisting medical decision-making and health education. Despite the impact of clinical prediction models on practice, prediction modeling is a complex process requiring careful statistical analyses and sound clinical judgement. Although there is no definite consensus on the best methodology for model development and validation, a few recommendations and checklists have been proposed. In this review, we summarize five steps for developing and validating a clinical prediction model: preparation for establishing clinical prediction models; dataset selection; handling variables; model generation; and model evaluation and validation. We also review several studies that detail methods for developing clinical prediction models with comparable examples from real practice. After model development and vigorous validation in relevant settings, possibly with evaluation of utility/usability and fine-tuning, good models can be ready for use in practice. We anticipate that this framework will revitalize the use of predictive or prognostic research in endocrinology, leading to active applications in real clinical practice.
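The later steps of the workflow summarized above (dataset selection, handling variables, model generation, and evaluation/validation) can be sketched end-to-end on a small example. Everything below is hypothetical: the cohort is simulated with a single risk factor, and the model is a plain univariate logistic regression fitted by gradient descent, not any specific method from the review.

```python
import math
import random

random.seed(7)

# Hypothetical cohort: one risk factor x; the outcome is more likely for x > 5.
xs = [random.uniform(0.0, 10.0) for _ in range(600)]
data = [(x, 1 if x + random.gauss(0.0, 1.0) > 5.0 else 0) for x in xs]

# Dataset selection: split into development and validation sets.
train, test = data[:400], data[400:]

# Handling variables: standardise the predictor on the development set.
mean = sum(x for x, _ in train) / len(train)
std = (sum((x - mean) ** 2 for x, _ in train) / len(train)) ** 0.5
z = lambda x: (x - mean) / std

# Model generation: univariate logistic regression by gradient descent.
b0, b1 = 0.0, 0.0
for _ in range(500):
    g0 = g1 = 0.0
    for x, y in train:
        p = 1.0 / (1.0 + math.exp(-(b0 + b1 * z(x))))
        g0 += p - y
        g1 += (p - y) * z(x)
    b0 -= 0.5 * g0 / len(train)
    b1 -= 0.5 * g1 / len(train)

# Evaluation and validation: accuracy on the held-out validation set.
predict = lambda x: 1.0 / (1.0 + math.exp(-(b0 + b1 * z(x)))) > 0.5
accuracy = sum(predict(x) == (y == 1) for x, y in test) / len(test)
```

A real clinical model would of course use richer variables, calibration and discrimination measures (not just accuracy), and external validation, as the review emphasizes.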
Energy Technology Data Exchange (ETDEWEB)
Bagheri, Saman; Nikkar, Ali [University of Tabriz, Tabriz (Iran, Islamic Republic of)
2014-11-15
This paper deals with the determination of approximate solutions for a model of column buckling using two efficient and powerful methods called He's variational approach and variational iteration algorithm-II. These methods are used to find analytical approximate solution of nonlinear dynamic equation of a model for the column buckling. First and second order approximate solutions of the equation of the system are achieved. To validate the solutions, the analytical results have been compared with those resulted from Runge-Kutta 4th order method. A good agreement of the approximate frequencies and periodic solutions with the numerical results and the exact solution shows that the present methods can be easily extended to other nonlinear oscillation problems in engineering. The accuracy and convenience of the proposed methods are also revealed in comparisons with the other solution techniques.
DEFF Research Database (Denmark)
Sokoler, Leo Emil; Frison, Gianluca; Skajaa, Anders
2015-01-01
We develop an efficient homogeneous and self-dual interior-point method (IPM) for the linear programs arising in economic model predictive control of constrained linear systems with linear objective functions. The algorithm is based on a Riccati iteration procedure, which is adapted to the linear… is significantly faster than several state-of-the-art IPMs based on sparse linear algebra, and 2) warm-start reduces the average number of iterations by 35-40%.
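For readers unfamiliar with Riccati-based procedures, the sketch below shows the generic backward Riccati recursion for a scalar finite-horizon LQR problem. This is a hedged illustration of the recursion family only, not the paper's IPM-adapted variant for LP-based economic MPC; the system and weights are arbitrary assumptions.

```python
# Scalar system x_{k+1} = a x_k + b u_k with stage cost q x^2 + r u^2.
a, b, q, r = 1.2, 1.0, 1.0, 0.1

P = q            # terminal cost weight
gains = []
for _ in range(50):                       # backward-in-time sweep
    K = (b * P * a) / (r + b * P * b)     # feedback gain, u = -K x
    P = q + a * P * a - a * P * b * K     # Riccati recursion
    gains.append(K)
```

The recursion converges geometrically to a stationary gain that stabilizes the (open-loop unstable) system; in an IPM, an analogous structured backward/forward sweep solves the KKT system at each interior-point iteration in time linear in the horizon length.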
Comparison of Prediction-Error-Modelling Criteria
DEFF Research Database (Denmark)
Jørgensen, John Bagterp; Jørgensen, Sten Bay
2007-01-01
Single and multi-step prediction-error methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which is a realization of a continuous-discrete multivariate stochastic transfer function model. The proposed prediction-error methods are demonstrated for a SISO system parameterized by transfer functions with time delays of a continuous-discrete-time linear stochastic system. The simulations for this case suggest … computational resources. The identification method is suitable for predictive control.
Institute of Scientific and Technical Information of China (English)
熊智华; ZHANG Jie; 董进
2008-01-01
A batch-to-batch optimal iterative learning control (ILC) strategy for the tracking control of product quality in batch processes is presented. A linear time-varying perturbation (LTVP) model is built for product quality around the nominal trajectories. To address model-plant mismatches, model prediction errors from the previous batch run are added to the model predictions for the current batch run. Tracking-error transition models can then be built, and the ILC law with direct error feedback is obtained explicitly. A rigorous theorem is proposed to prove the convergence of the tracking error under ILC. The proposed methodology is illustrated on a typical batch reactor, and the results show that the trajectory-tracking performance is gradually improved by the ILC.
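The batch-to-batch idea can be illustrated with a toy P-type ILC law, u_{k+1}(t) = u_k(t) + L·e_k(t), on an invented first-order plant; this is a deliberately simplified stand-in for the paper's model-based law with prediction-error correction:

```python
import numpy as np

T_len = 20                              # samples per batch
a, b = 0.2, 0.5                         # assumed plant: x(t+1) = a x(t) + b u(t), y = x
y_ref = np.ones(T_len)                  # desired product-quality trajectory

def run_batch(u):
    x, y = 0.0, np.zeros(T_len)
    for t in range(T_len):
        x = a * x + b * u[t]
        y[t] = x
    return y

u = np.zeros(T_len)
errors = []
for _ in range(30):                     # batch-to-batch iterations
    e = y_ref - run_batch(u)
    errors.append(float(np.max(np.abs(e))))
    u = u + 2.0 * e                     # P-type ILC update, learning gain L = 1/b
final_error = errors[-1]
```

With this gain the batch-to-batch error map is nilpotent, so the tracking error is driven to zero within T_len batches, mirroring the gradual improvement across batches that the paper demonstrates.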
Quality Prediction and Control of Reducing Pipe Based on EOS-ELM-RPLS Mathematics Modeling Method
Directory of Open Access Journals (Sweden)
Dong Xiao
2014-01-01
The inspection of the inhomogeneous transverse and longitudinal wall thickness, which determines the quality of a reducing pipe during the production of seamless steel reducing pipe, lags behind production, and it is difficult to establish a mechanism model for it. To address these problems, we propose a quality prediction model for the reducing pipe based on the EOS-ELM-RPLS algorithm, which takes into account the production characteristics of time variation, nonlinearity, rapid intermission, and echelon data distribution. Key steps such as analysis of the data time interval, solving of mean values, establishment of the regression model, and online model prediction are introduced, and the established prediction model is used in the quality prediction and iterative control of the reducing pipe. Experiments and simulations show that the prediction and iterative control method based on the EOS-ELM-RPLS model can effectively improve the quality of the steel reducing pipe; moreover, its maintenance cost is low, and it offers good real-time performance, reliability, and accuracy.
Predictive modelling of complex agronomic and biological systems.
Keurentjes, Joost J B; Molenaar, Jaap; Zwaan, Bas J
2013-09-01
Biological systems are tremendously complex in their functioning and regulation. Studying the multifaceted behaviour and describing the performance of such complexity has challenged the scientific community for years. The reduction of real-world intricacy into simple descriptive models has therefore convinced many researchers of the usefulness of introducing mathematics into the biological sciences. Predictive modelling takes such an approach another step further in that it takes advantage of existing knowledge to project the performance of a system in alternative scenarios. The ever-growing amounts of available data generated by assessing biological systems at increasingly high detail provide unique opportunities for future modelling and experiment design. Here we aim to provide an overview of the progress made in modelling over time and the currently prevalent approaches for iterative modelling cycles in modern biology. We further argue for the importance of versatility in modelling approaches, including parameter estimation, model reduction and network reconstruction. Finally, we discuss the difficulties of capturing in vivo complexity mathematically and address some of the future challenges lying ahead. © 2013 John Wiley & Sons Ltd.
Core-Level Modeling and Frequency Prediction for DSP Applications on FPGAs
Directory of Open Access Journals (Sweden)
Gongyu Wang
2015-01-01
Field-programmable gate arrays (FPGAs) provide a promising technology that can improve performance of many high-performance computing and embedded applications. However, unlike software design tools, the relatively immature state of FPGA tools significantly limits productivity and consequently prevents widespread adoption of the technology. For example, the lengthy design-translate-execute (DTE) process often must be iterated to meet the application requirements. Previous works have enabled model-based, design-space exploration to reduce DTE iterations but are limited by a lack of accurate model-based prediction of key design parameters, the most important of which is clock frequency. In this paper, we present a core-level modeling and design (CMD) methodology that enables modeling of FPGA applications at an abstract level and yet produces accurate predictions of parameters such as clock frequency, resource utilization (i.e., area), and latency. We evaluate CMD's prediction methods using several high-performance DSP applications on various families of FPGAs and show an average clock-frequency prediction error of 3.6%, with a worst-case error of 20.4%, compared to the best of existing high-level prediction methods, 13.9% average error with 48.2% worst-case error. We also demonstrate how such prediction enables accurate design-space exploration without coding in a hardware-description language (HDL), significantly reducing the total design time.
Case studies in archaeological predictive modelling
Verhagen, Jacobus Wilhelmus Hermanus Philippus
2007-01-01
In this thesis, a collection of papers is put together dealing with various quantitative aspects of predictive modelling and archaeological prospection. Among the issues covered are the effects of survey bias on the archaeological data used for predictive modelling, and the complexities of testing p
Childhood asthma prediction models: a systematic review.
Smit, Henriette A; Pinart, Mariona; Antó, Josep M; Keil, Thomas; Bousquet, Jean; Carlsen, Kai H; Moons, Karel G M; Hooft, Lotty; Carlsen, Karin C Lødrup
2015-12-01
Early identification of children at risk of developing asthma at school age is crucial, but the usefulness of childhood asthma prediction models in clinical practice is still unclear. We systematically reviewed all existing prediction models to identify preschool children with asthma-like symptoms at risk of developing asthma at school age. Studies were included if they developed a new prediction model or updated an existing model in children aged 4 years or younger with asthma-like symptoms, with assessment of asthma done between 6 and 12 years of age. 12 prediction models were identified in four types of cohorts of preschool children: those with health-care visits, those with parent-reported symptoms, those at high risk of asthma, or children in the general population. Four basic models included non-invasive, easy-to-obtain predictors only, notably family history, allergic disease comorbidities or precursors of asthma, and severity of early symptoms. Eight extended models included additional clinical tests, mostly specific IgE determination. Some models could better predict asthma development and others could better rule out asthma development, but no single model stood out in both aspects simultaneously. This finding suggests that there is a large proportion of preschool children with wheeze for whom prediction of asthma development is difficult.
Directory of Open Access Journals (Sweden)
Waleed Albusaidi
2015-08-01
This is the second part of a study conducted to model the aerothermodynamic impact of suction parameters and gas properties on the performance of a multi-stage centrifugal compressor. A new iterative method was developed in the first part to derive the equivalent performance at various operating conditions. This approach has been validated for predicting the compressor map at different suction pressures and temperatures, using the design characteristics as reference values. A further case is included in this paper to demonstrate the validity of the developed approach for obtaining the performance characteristics at various gas compositions. The provided example shows that the performance parameters for different gas mixtures can be predicted to within ±1.34%. Furthermore, the optimization conducted in this paper reveals that the proposed method can be applied to evaluate compressor designs against the expected variation in suction conditions. Moreover, the examined case study demonstrates the effect of variation in gas properties on the operating point and aerodynamic stability of the entire compression system. To this end, a simple approach has been established to assess the contribution of gas property variation to inefficient and unstable compressor performance, based on the available operational data.
Institute of Scientific and Technical Information of China (English)
陶华学; 郭金运
2003-01-01
Data coming from different sources have different types and temporal states. Relations between one type of data and another, or between data and unknown parameters, are almost always nonlinear. It is neither accurate nor reliable to process such data for building the digital Earth with the classical least squares method or with common nonlinear least squares. A generalized nonlinear dynamic least squares method was therefore put forward to process data in building the digital Earth. A separable solution model and an iterative calculation method were used to solve the generalized nonlinear dynamic least squares problem: a complex problem can be separated and then solved by converting it into two sub-problems, each of which involves a single group of variables. The dimension of the unknown parameters is thereby halved, which simplifies the original high-dimensional equations.
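The separation idea can be illustrated on a bilinear (rank-one) fitting problem, where alternating between the two groups of unknowns turns one nonlinear least squares problem into two linear sub-problems; the data below are synthetic, and this toy is not the paper's geodetic model:

```python
import numpy as np

rng = np.random.default_rng(1)
u_true = rng.normal(size=8)
v_true = rng.normal(size=6)
Y = np.outer(u_true, v_true)            # synthetic bilinear "observations" y_ij = u_i v_j

u = np.ones(8)
v = np.ones(6)
for _ in range(50):
    u = Y @ v / (v @ v)                 # sub-problem 1: v fixed, solve for u (linear)
    v = Y.T @ u / (u @ u)               # sub-problem 2: u fixed, solve for v (linear)
residual = float(np.linalg.norm(Y - np.outer(u, v)))
```

Each half-step is an ordinary linear least squares solve in one group of variables, so the alternation never has to treat the full nonlinear problem at its original dimension.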
Energy Technology Data Exchange (ETDEWEB)
Vdovin, V. L., E-mail: vdov@nfi.kiae.ru [National Research Centre ' Kurchatov Institute,' (Russian Federation)
2013-02-15
The innovative concept and 3D full-wave code modeling the off-axis current drive by radio-frequency (RF) waves in large-scale tokamaks, ITER and DEMO, for steady-state operation with high efficiency is proposed. The scheme uses the helicon radiation (fast magnetosonic waves at high (20-40) ion cyclotron frequency harmonics) at frequencies of 500-700 MHz propagating in the outer regions of the plasmas with a rotational transform. It is expected that the current generated by helicons, in conjunction with the bootstrap current, ensure the maintenance of a given value of the total current in the stability margin q(0) ≥ 2 and q(a) ≥ 4, and will help to have regimes with a negative magnetic shear and internal transport barrier to ensure stability at high normalized plasma pressure β_N > 3 (the so-called advanced scenarios) of interest for the commercial reactor. Modeling with full-wave three-dimensional codes PSTELION and STELEC showed flexible control of the current profile in the reactor plasmas of ITER and DEMO, using multiple frequencies, the positions of the antennae and toroidal wave slow down. Also presented are the results of simulations of current generation by helicons in the DIII-D, T-15MD, and JT-60AS tokamaks. Commercially available continuous-wave klystrons of the MW/tube range are promising for commercial stationary fusion reactors. The compact antennae of the waveguide type are proposed, and an example of a possible RF system for today's tokamaks is given. The advantages of the scheme (partially tested at lower frequencies in tokamaks) are a significant decline in the role of parametric instabilities in the plasma periphery, the use of electrically strong resonator-waveguide type antennae, and substantially greater antenna-plasma coupling.
Chang, Li-Der; Slatton, K. Clint; Krekeler, Carolyn
2010-08-01
In the last decade, various algorithms have been developed for extracting digital terrain models from LiDAR point clouds. Although most filters perform well in flat and uncomplicated landscapes, landscapes containing steep slopes and discontinuities remain problematic. In this research, we develop a novel bare-earth extraction algorithm consisting of segmentation modeling and surface modeling, based on our previous work on forest canopy removal. The proposed segmentation modeling is built on a triangulated irregular network and composed of triangle assimilation, edge clustering, and point classification to achieve better discrimination of objects and to preserve terrain discontinuities. The surface modeling iteratively corrects both Type I and Type II errors by estimating the roughness of the digital surface/terrain models, detecting bridges and sharp ridges, etc. Finally, we compared our filtering results with those of twelve other filters on the same fifteen study sites provided by the ISPRS. Our average error and kappa index of agreement in the automated process are 4.6% and 84.5%, respectively, outperforming all twelve other filters. Our kappa index of 84.5% can be interpreted as almost perfect agreement. In addition, applying this work with optimized parameters further improves performance.
Input-constrained model predictive control via the alternating direction method of multipliers
DEFF Research Database (Denmark)
Sokoler, Leo Emil; Frison, Gianluca; Andersen, Martin S.
2014-01-01
This paper presents an algorithm, based on the alternating direction method of multipliers, for the convex optimal control problem arising in input-constrained model predictive control. We develop an efficient implementation of the algorithm for the extended linear quadratic control problem (LQCP) with input and input-rate limits. The algorithm alternates between solving an extended LQCP and a highly structured quadratic program. These quadratic programs are solved using a Riccati iteration procedure and a structure-exploiting interior-point method, respectively. The computational cost per iteration is quadratic in the dimensions of the controlled system, and linear in the length of the prediction horizon. Simulations show that the approach proposed in this paper is more than an order of magnitude faster than several state-of-the-art quadratic programming algorithms, and that the difference in computation …
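The alternation described above can be sketched in its simplest form, an ADMM loop for a condensed box-constrained QP; the matrices, penalty ρ, and iteration count below are invented for illustration, and the paper's Riccati-based solve of the equality-constrained subproblem is replaced here by a plain dense solve:

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.normal(size=(6, 4))
H = M.T @ M + np.eye(4)                 # invented positive-definite Hessian
g = rng.normal(size=4)
lo, hi = -0.5, 0.5                      # input limits

rho = 1.0
u, z, w = np.zeros(4), np.zeros(4), np.zeros(4)
for _ in range(500):
    # u-update: unconstrained (equality-type) QP step, min 0.5 u'Hu + g'u + rho/2 |u - z + w|^2
    u = np.linalg.solve(H + rho * np.eye(4), rho * (z - w) - g)
    z = np.clip(u + w, lo, hi)          # z-update: projection onto the box
    w = w + u - z                       # scaled dual update
primal_residual = float(np.linalg.norm(u - z))
```

In a serious implementation the u-update matrix is factorized once (or, as in the paper, handled by a Riccati recursion that exploits the horizon structure) rather than re-solved every iteration.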
Convergence Guaranteed Nonlinear Constraint Model Predictive Control via I/O Linearization
Directory of Open Access Journals (Sweden)
Xiaobing Kong
2013-01-01
Constructing a reliable optimal solution is a key issue for nonlinear constrained model predictive control. Input-output feedback linearization is a popular method in nonlinear control. When an input-output feedback linearizing controller is used, the original linear input constraints become nonlinear constraints, and sometimes the constraints are state dependent. This paper presents an iterative quadratic program (IQP) routine for the continuous-time system. To guarantee its convergence, another iterative approach is incorporated. The proposed algorithm can reach a feasible solution over the entire prediction horizon. Simulation results on both a numerical example and a continuous stirred-tank reactor (CSTR) demonstrate the effectiveness of the proposed method.
Masuda, Naoki; Nakamura, Mitsuhiro
2011-06-07
Humans and other animals can adapt their social behavior in response to environmental cues, including the feedback obtained through experience. Nevertheless, the effects of experience-based learning by players on the evolution and maintenance of cooperation in social dilemma games remain relatively unclear. Some previous literature showed that mutual cooperation between learning players is difficult or requires a sophisticated learning model. In the context of the iterated Prisoner's dilemma, we numerically examine the performance of a reinforcement learning model. Our model modifies those of Karandikar et al. (1998), Posch et al. (1999), and Macy and Flache (2002), in which players are satisfied if the obtained payoff is larger than a dynamic threshold. We show that players obeying the modified learning rule mutually cooperate with high probability if the threshold dynamics are not too fast and the association between the reinforcement signal and the action in the next round is sufficiently strong. The learning players also perform efficiently against the reactive strategy. In evolutionary dynamics, they can invade a population of players adopting simpler but competitive strategies. Our version of the reinforcement learning model does not complicate the previous models and is sufficiently simple yet flexible. It may serve to explore the relationships between learning and evolution in social dilemma situations.
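A minimal aspiration-based (Bush-Mosteller-style) learner in the iterated Prisoner's dilemma can be sketched as follows; unlike the paper's model, the aspiration level here is held fixed rather than dynamic, and all parameter values are invented:

```python
import numpy as np

R_, S_, T_, P_ = 3.0, 0.0, 5.0, 1.0     # standard PD payoffs (reward, sucker, temptation, punishment)
payoff = {("C", "C"): (R_, R_), ("C", "D"): (S_, T_),
          ("D", "C"): (T_, S_), ("D", "D"): (P_, P_)}
aspiration, alpha = 2.0, 0.2            # fixed aspiration between P and R; learning rate

def update(p, action, pay):
    s = (pay - aspiration) / (T_ - S_)  # normalized stimulus in [-1, 1]
    if action == "C":
        return p + alpha * s * (1 - p) if s >= 0 else p + alpha * s * p
    return p - alpha * s * p if s >= 0 else p - alpha * s * (1 - p)

rng = np.random.default_rng(3)
p1 = p2 = 0.5                           # initial cooperation probabilities
coop = []
for _ in range(5000):
    a1 = "C" if rng.random() < p1 else "D"
    a2 = "C" if rng.random() < p2 else "D"
    pay1, pay2 = payoff[(a1, a2)]
    p1, p2 = update(p1, a1, pay1), update(p2, a2, pay2)
    coop.append(a1 == "C" and a2 == "C")
late_coop_rate = float(np.mean(coop[-1000:]))
```

With the aspiration between P and R, mutual cooperation is reinforcing and mutual defection self-correcting, which is the mechanism behind the cooperation lock-in the paper analyzes; individual runs remain stochastic.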
Energy Technology Data Exchange (ETDEWEB)
Millon, Domitille; Coche, Emmanuel E. [Universite Catholique de Louvain, Department of Radiology and Medical Imaging, Cliniques Universitaires Saint Luc, Brussels (Belgium); Vlassenbroek, Alain [Philips Healthcare, Brussels (Belgium); Maanen, Aline G. van; Cambier, Samantha E. [Universite Catholique de Louvain, Statistics Unit, King Albert II Cancer Institute, Brussels (Belgium)
2017-03-15
The aim was to compare the image quality [low-contrast (LC) detectability, noise, contrast-to-noise ratio (CNR) and spatial resolution (SR)] of MDCT images reconstructed with an iterative reconstruction (IR) algorithm and a filtered back projection (FBP) algorithm. The experimental study was performed on a 256-slice MDCT. LC detectability, noise, CNR and SR were measured on a Catphan phantom scanned with decreasing doses (48.8 down to 0.7 mGy) and parameters typical of a chest CT examination. Images were reconstructed with FBP and a model-based IR algorithm. Additionally, human chest cadavers were scanned and reconstructed using the same technical parameters, and the images were analyzed to illustrate the phantom results. LC detectability and noise were statistically significantly different between the techniques, favouring the model-based IR algorithm (p < 0.0001). At low doses, the noise in FBP images only enabled SR measurements of high-contrast objects. The superior CNR of the model-based IR algorithm enabled lower-dose measurements, which showed that SR was dose and contrast dependent. Cadaver images reconstructed with model-based IR illustrated that the visibility and delineation of anatomical structure edges can deteriorate at low doses. Model-based IR improved LC detectability and enabled dose reduction. At low dose, SR became dose and contrast dependent. (orig.)
Model predictive control classical, robust and stochastic
Kouvaritakis, Basil
2016-01-01
For the first time, a textbook that brings together classical predictive control with treatment of up-to-date robust and stochastic techniques. Model Predictive Control describes the development of tractable algorithms for uncertain, stochastic, constrained systems. The starting point is classical predictive control and the appropriate formulation of performance objectives and constraints to provide guarantees of closed-loop stability and performance. Moving on to robust predictive control, the text explains how similar guarantees may be obtained for cases in which the model describing the system dynamics is subject to additive disturbances and parametric uncertainties. Open- and closed-loop optimization are considered, and the state of the art in computationally tractable methods based on uncertainty tubes is presented for systems with additive model uncertainty. Finally, the tube framework is also applied to model predictive control problems involving hard or probabilistic constraints for the cases of multiplic...
A Simplified Approach to Multivariable Model Predictive Control
Directory of Open Access Journals (Sweden)
Michael Short
2015-01-01
The benefits of applying the range of technologies generally known as Model Predictive Control (MPC) to the control of industrial processes have been well documented in recent years. One of the principal drawbacks of MPC schemes is the relatively high on-line computational burden when used with adaptive, constrained and/or multivariable processes, which has led some researchers and practitioners to seek simplified approaches to its implementation. To date, several schemes have been proposed based around a simplified 1-norm formulation of multivariable MPC, which is solved online using the simplex algorithm in both the unconstrained and constrained cases. In this paper a 2-norm approach to simplified multivariable MPC is formulated, which is solved online using a vector-matrix product or a simple iterative coordinate descent algorithm for the unconstrained and constrained cases, respectively. A CARIMA model is employed to ensure offset-free control, and a simple scheme to produce the optimal predictions is described. A small simulation study and further discussion illustrate that this quadratic formulation performs well and can be considered a useful adjunct to its linear counterpart, while still retaining beneficial features such as ease of computer-based implementation.
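The constrained case, solved by iterative coordinate descent, can be sketched on a generic box-constrained QP; the matrices and bounds below are invented, and this plain cyclic scheme stands in for the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(5)
M = rng.normal(size=(8, 4))
H = M.T @ M + np.eye(4)                 # invented positive-definite Hessian
g = rng.normal(size=4)
lo, hi = -0.3, 0.3                      # box constraints on the inputs

u = np.zeros(4)
for _ in range(200):                    # cyclic coordinate-descent sweeps
    for i in range(4):
        u[i] = 0.0                      # exclude the diagonal term from the row sum
        # exact 1D minimization over u[i], then clip to the box
        u[i] = np.clip(-(g[i] + H[i] @ u) / H[i, i], lo, hi)
# Optimality check: a box-constrained QP optimum is a fixed point of the
# projected-gradient map for any positive step size.
opt_gap = float(np.linalg.norm(u - np.clip(u - 0.1 * (H @ u + g), lo, hi)))
```

Each inner step costs one row-vector product, which is the kind of lightweight online computation the simplified scheme is after.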
Energy based prediction models for building acoustics
DEFF Research Database (Denmark)
Brunskog, Jonas
2012-01-01
In order to reach robust and simplified yet accurate prediction models, energy based principles are commonly used in many fields of acoustics, especially in building acoustics. This includes simple energy flow models, the framework of statistical energy analysis (SEA), as well as more elaborate principles such as wave intensity analysis (WIA). The European standards for building acoustic predictions, the EN 12354 series, are based on energy flow and SEA principles. In the present paper, different energy based prediction models are discussed and critically reviewed. Special attention is placed …
Institute of Scientific and Technical Information of China (English)
陈海波; 慈云桂
1989-01-01
In this paper, we present an approximate formula for calculating the speedup of a concurrent non-DO loop. The execution pattern of a concurrent non-DO loop is analyzed. As a result, the optimal concurrent step for a non-DO loop is presented and proved. With the analysis of the speedup of a concurrent non-DO loop, a simple and useful approximate formula is deduced, which is just the mathematical limit of the speedup as the number of iterations approaches infinity.
Massive Predictive Modeling using Oracle R Enterprise
CERN. Geneva
2014-01-01
R is fast becoming the lingua franca for analyzing data via statistics, visualization, and predictive analytics. For enterprise-scale data, R users have three main concerns: scalability, performance, and production deployment. Oracle's R-based technologies - Oracle R Distribution, Oracle R Enterprise, Oracle R Connector for Hadoop, and the R package ROracle - address these concerns. In this talk, we introduce Oracle's R technologies, highlighting how each enables R users to achieve scalability and performance while making production deployment of R results a natural outcome of the data analyst/scientist efforts. The focus then turns to Oracle R Enterprise with code examples using the transparency layer and embedded R execution, targeting massive predictive modeling. One goal behind massive predictive modeling is to build models per entity, such as customers, zip codes, simulations, in an effort to understand behavior and tailor predictions at the entity level. Predictions...
de Oliveira, Tiago E.; Netz, Paulo A.; Kremer, Kurt; Junghans, Christoph; Mukherji, Debashish
2016-05-01
We present a coarse-graining strategy that we test for aqueous mixtures. The method uses pair-wise cumulative coordination as a target function within an iterative Boltzmann inversion (IBI) like protocol. We name this method coordination iterative Boltzmann inversion (C-IBI). While the underlying coarse-grained model is still structure based and, thus, preserves pair-wise solution structure, our method also reproduces solvation thermodynamics of binary and/or ternary mixtures. Additionally, we observe much faster convergence within C-IBI compared to IBI. To validate the robustness, we apply C-IBI to study test cases of solvation thermodynamics of aqueous urea and of triglycine in aqueous urea.
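The IBI-style update V ← V + kT·ln(g/g_target) can be watched in an exactly solvable toy setting, a 1D Boltzmann distribution standing in for a molecular simulation; the real method targets pair correlations (or, for C-IBI, cumulative coordinations) measured from MD runs, and the harmonic reference below is invented:

```python
import numpy as np

kT = 1.0
x = np.linspace(-3.0, 3.0, 601)
dx = x[1] - x[0]

def boltzmann(V):
    """'Simulate': for one particle in 1D the distribution is exactly exp(-V/kT)."""
    p = np.exp(-V / kT)
    return p / (p.sum() * dx)

g_target = boltzmann(0.5 * x**2)        # target distribution (harmonic reference)

V = np.zeros_like(x)                    # initial guess: flat potential
for _ in range(5):
    g = boltzmann(V)
    V = V + kT * np.log(g / g_target)   # the IBI update
max_dev = float(np.max(np.abs(boltzmann(V) - g_target)))
```

In this idealized setting the update converges essentially in one step; in real coarse-graining each iteration requires a fresh simulation, which is why the faster convergence of C-IBI matters.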
Liver Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing liver cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Colorectal Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing colorectal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Cervical Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing cervical cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Prostate Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing prostate cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Pancreatic Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing pancreatic cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Bladder Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing bladder cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Esophageal Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing esophageal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Lung Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing lung cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Breast Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing breast cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Ovarian Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing ovarian cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Testicular Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing testicular cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Evaluation of damage models by finite element prediction of fracture in cylindrical tensile test.
Eom, Jaegun; Kim, Mincheol; Lee, Seongwon; Ryu, Hoyeun; Joun, Mansoo
2014-10-01
In this research, tensile tests of cylindrical specimens of a mild steel are predicted via the finite element method, with emphasis on the fracture predictions of various damage models. An analytical model is introduced for this purpose. An iterative material identification procedure is used to obtain the flow stress, making it possible to exactly predict a tensile test up to the fracture point, in the engineering sense. A node-splitting technique is used to generate the cracks on the damaged elements. The damage models of McClintock, Rice-Tracey, Cockcroft-Latham, Freudenthal, Brozzo et al. and Oyane et al. are evaluated by comparing their predictions from the tensile test perspective.
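As one concrete example of the damage models listed, the normalized Cockcroft-Latham criterion accumulates the largest principal stress over equivalent plastic strain and predicts fracture when the integral reaches a critical value; the power-law flow stress and critical value below are invented, not the paper's identified material data:

```python
import numpy as np

eps_bar = np.linspace(0.0, 1.2, 1201)          # equivalent plastic strain path
sigma = 600.0 * (0.01 + eps_bar) ** 0.2        # hypothetical power-law flow stress (MPa)
sigma_1 = sigma                                # uniaxial tension: max principal = flow stress
C_crit = 450.0                                 # hypothetical critical damage value (MPa)

# Normalized Cockcroft-Latham damage: D = (1/C) * integral of max(sigma_1, 0) d(eps_bar);
# fracture is predicted where D first reaches 1.
deps = np.diff(eps_bar)
D = np.cumsum(np.maximum(sigma_1[:-1], 0.0) * deps) / C_crit
reached = np.nonzero(D >= 1.0)[0]
fracture_strain = float(eps_bar[1:][reached[0]]) if reached.size else None
```

In a finite element setting the same integral is accumulated per element along the computed strain path, and the node-splitting step is triggered where D reaches 1.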
Iterative participatory design
DEFF Research Database (Denmark)
Simonsen, Jesper; Hertzum, Morten
2010-01-01
The theoretical background in this chapter is information systems development in an organizational context. This includes theories from participatory design, human-computer interaction, and ethnographically inspired studies of work practices. The concept of design is defined as an experimental iterative process of mutual learning by designers and domain experts (users), who aim to change the users' work practices through the introduction of information systems. We provide an illustrative case example with an ethnographic study of clinicians experimenting with a new electronic patient record system, focussing on emergent and opportunity-based change enabled by appropriating the system into real work. The contribution to a general core of design research is a reconstruction of the iterative prototyping approach into a general model for sustained participatory design.
Energy Technology Data Exchange (ETDEWEB)
Carmignani, B.; Toselli, G. E-mail: toselli@bologna.enea.it; Interlandi, S.; Lucca, F. E-mail: lucca@tempe.mi.cnr.it; Marin, A
2001-11-01
In order to set up the welding techniques to be utilized for the ITER TF coil case, numerical simulation was considered an important tool for providing information on the deformations and residual stresses caused by the different phases of welding. TIG and SAW with filler material are the welding techniques considered. From the numerical point of view, until 1998 the simulation of these welding types was a practically uninvestigated problem. Studies, research and developments have been carried out with the aim of singling out the calculation tools and methodologies to be used. Reference has been made to experimental models with simple geometry and reduced dimensions, chosen, however, with a more complex experimental model in mind. The welds of these models were accompanied by measurements of several physical quantities of interest, to be compared with the corresponding numerical results. Subsequently, studies and research, still in progress, have concerned simplifications of the numerical procedures which nevertheless produce equivalent results and are able to simulate the welds of more complex pieces of large dimensions.
Energy Technology Data Exchange (ETDEWEB)
Bae, Jae Keon; Bae, Seung Bin; Lee, Ki Sung; Kim, Yong Kwon; Joung, Jin Hun [Korea Univ., Seoul (Korea, Republic of)]
2012-12-15
Diverse collimator designs are applied to single photon emission computed tomography (SPECT) according to the purpose of acquisition; it is therefore necessary to reflect the geometric characteristics of each collimator in image reconstruction. This study develops a reconstruction algorithm for nuclear medicine imaging systems with a pinhole collimator, focusing on the sparse sampling problem that arises in the pinhole system model. A system model for maximum likelihood expectation maximization (MLEM) was developed based on the geometry of the collimator. The projector and back-projector were implemented separately, based on the ray-driven and voxel-driven methods respectively, to overcome the sparse sampling problem. A phantom study for the pinhole collimator was performed using the Geant4 Application for Tomographic Emission (GATE) simulation tool. The reconstructed images show promising results: the designed iterative reconstruction algorithm with an unmatched system model effectively removes the sampling artefacts. The proposed algorithm can be used not only for pinhole collimators but also for various other collimator systems in nuclear medicine imaging.
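The unmatched projector/back-projector idea can be sketched with a toy MLEM loop. The matrices and function names below are illustrative stand-ins (the record does not give the actual ray-driven and voxel-driven operators); choosing B = A.T recovers the standard matched case used in the test.

```python
import numpy as np

def mlem(y, A, B, n_iter=200, eps=1e-12):
    """MLEM with a possibly unmatched projector A / back-projector B.

    y : measured projections, A : projection matrix (ray-driven stand-in),
    B : back-projection matrix (voxel-driven stand-in; B = A.T is matched).
    """
    x = np.ones(A.shape[1])            # flat initial image
    sens = B @ np.ones(A.shape[0])     # sensitivity image
    for _ in range(n_iter):
        ratio = y / (A @ x + eps)      # data / forward projection
        x *= (B @ ratio) / (sens + eps)
    return x

# toy 2-pixel image measured along 3 rays
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
y = A @ x_true                         # noiseless projections
x_hat = mlem(y, A, A.T)                # matched pair for this toy run
```

With noiseless, consistent data the multiplicative update converges to the true image; an unmatched B trades exact convergence for better conditioning under sparse sampling.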
Energy Technology Data Exchange (ETDEWEB)
Min, J.; Stubbins, J.; Collins, J. [Univ. of Illinois, Urbana, IL (United States); Rowcliffe, A.F. [Oak Ridge National Lab., TN (United States)
1998-09-01
The stress states that lead to failure of joints between GlidCop™ CuAl25 and 316L SS were examined using finite element modeling techniques to explain experimental observations of behavior of those joints. The joints were formed by hot isostatic pressing (HIP) and bend bar specimens were fabricated with the joint inclined 45° to the major axis of the specimen. The lower surface of the bend bar was notched in order to help induce a precrack for subsequent loading in bending. The precrack was intended to localize a high stress concentration in close proximity to the interface so that its behavior could be examined without complicating factors from the bulk materials and the specimen configuration. Preparatory work to grow acceptable precracks caused the specimen to fail prematurely while the precrack was still progressing into the specimen toward the interface. This prompted the finite element model calculations to help understand the reasons for this behavior from examination of the stress states throughout the specimen. An additional benefit sought from the finite element modeling effort was to understand if the stress states in this non-conventional specimen were representative of those that might be experienced during operation in ITER.
Posterior Predictive Model Checking in Bayesian Networks
Crawford, Aaron
2014-01-01
This simulation study compared the utility of various discrepancy measures within a posterior predictive model checking (PPMC) framework for detecting different types of data-model misfit in multidimensional Bayesian network (BN) models. The investigated conditions were motivated by an applied research program utilizing an operational complex…
A Course in... Model Predictive Control.
Arkun, Yaman; And Others
1988-01-01
Describes a graduate engineering course which specializes in model predictive control. Lists course outline and scope. Discusses some specific topics and teaching methods. Suggests final projects for the students. (MVL)
Impact of heating and current drive mix on the ITER hybrid scenario
Citrin, J.; Artaud, J. F.; Garcia, J.; Hogeweij, G. M. D.; Imbeaux, F.
2010-01-01
Hybrid scenario performance in ITER is studied with the CRONOS integrated modelling suite, using the GLF23 anomalous transport model for heat transport prediction. GLF23 predicted core confinement is optimized through tailoring the q-profile shape by a careful choice of current drive actuators, affe
Equivalency and unbiasedness of grey prediction models
Institute of Scientific and Technical Information of China (English)
Bo Zeng; Chuan Li; Guo Chen; Xianjun Long
2015-01-01
In order to study in depth the structural discrepancies and modeling mechanisms among different grey prediction models, the equivalence and unbiasedness of grey prediction models are analyzed and verified. The results show that all the grey prediction models that are strictly derived from x(0)(k) + az(1)(k) = b have identical model structure and simulation precision; moreover, unbiased simulation of a homogeneous exponential sequence can be accomplished. However, the models derived from dx(1)/dt + ax(1) = b are only close to those derived from x(0)(k) + az(1)(k) = b provided that |a| < 0.1, and unbiased simulation of a homogeneous exponential sequence cannot be achieved. These conclusions are proved and verified through theorems and examples.
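The discrete form x(0)(k) + a z(1)(k) = b can be fitted and simulated in a few lines. This is a generic sketch (the function names are ours, not the paper's): a, b are estimated by least squares, and the simulation recursion is derived directly from the discrete equation rather than from the whitened ODE. On an exactly homogeneous exponential sequence the recursion reproduces the data, illustrating the unbiasedness claim.

```python
import numpy as np

def fit_grey(x0):
    """Least-squares fit of a, b in x0(k) + a*z1(k) = b, for k = 2..n."""
    x1 = np.cumsum(x0)                      # accumulated sequence x1
    z1 = 0.5 * (x1[1:] + x1[:-1])           # background values z1(k)
    G = np.column_stack([-z1, np.ones_like(z1)])
    (a, b), *_ = np.linalg.lstsq(G, x0[1:], rcond=None)
    return a, b

def simulate_grey(x0_first, a, b, steps):
    """Recursion solved from x1(k)(1 + a/2) = x1(k-1)(1 - a/2) + b."""
    x1 = [x0_first]
    for _ in range(steps - 1):
        x1.append(((1 - a / 2) * x1[-1] + b) / (1 + a / 2))
    x1 = np.array(x1)
    return np.concatenate([[x0_first], np.diff(x1)])

x0 = 2.0 * 1.05 ** np.arange(1, 9)          # homogeneous exponential data
a, b = fit_grey(x0)
x_sim = simulate_grey(x0[0], a, b, len(x0))
```

Because x0(k) is exactly affine in z1(k) for exponential data, the fitted recursion reproduces the sequence without bias, whereas a model built from the continuous equation dx(1)/dt + ax(1) = b would not.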
Predictability of extreme values in geophysical models
Directory of Open Access Journals (Sweden)
A. E. Sterk
2012-09-01
Extreme value theory in deterministic systems is concerned with unlikely large (or small) values of an observable evaluated along evolutions of the system. In this paper we study the finite-time predictability of extreme values, such as convection, energy, and wind speeds, in three geophysical models. We study whether finite-time Lyapunov exponents are larger or smaller for initial conditions leading to extremes. General statements on whether extreme values are more or less predictable are not possible: the predictability of extreme values depends on the observable, the attractor of the system, and the prediction lead time.
Schmitz, O.; Becoulet, M.; Cahyna, P.; Evans, T. E.; Feng, Y.; Frerichs, H.; Loarte, A.; Pitts, R. A.; Reiser, D.; Fenstermacher, M. E.; Harting, D.; Kirschner, A.; Kukushkin, A.; Lunt, T.; Saibene, G.; Reiter, D.; Samm, U.; Wiesen, S.
2016-06-01
Results from three-dimensional modeling of plasma edge transport and plasma-wall interactions during application of resonant magnetic perturbation (RMP) fields for control of edge-localized modes in the ITER standard 15 MA Q = 10 H-mode are presented. The full 3D plasma fluid and kinetic neutral transport code EMC3-EIRENE is used for the modeling. Four characteristic perturbed magnetic topologies are considered and discussed with reference to the axisymmetric case without RMP fields. Two perturbation field amplitudes at full and half of the ITER ELM control coil current capability using the vacuum approximation are compared to a case including a strongly screening plasma response. In addition, a vacuum field case at high q 95 = 4.2 featuring increased magnetic shear has been modeled. Formation of a three-dimensional plasma boundary is seen for all four perturbed magnetic topologies. The resonant field amplitudes and the effective radial magnetic field at the separatrix define the shape and extension of the 3D plasma boundary. Opening of the magnetic field lines from inside the separatrix establishes scrape-off layer-like channels of direct parallel particle and heat flux towards the divertor yielding a reduction of the main plasma thermal and particle confinement. This impact on confinement is most accentuated at full RMP current and is strongly reduced when screened RMP fields are considered, as well as for the reduced coil current cases. The divertor fluxes are redirected into a three-dimensional pattern of helical magnetic footprints on the divertor target tiles. At maximum perturbation strength, these fingers stretch out as far as 60 cm across the divertor targets, yielding heat flux spreading and the reduction of peak heat fluxes by 30%. However, at the same time substantial and highly localized heat fluxes reach divertor areas well outside of the axisymmetric heat flux decay profile. Reduced RMP amplitudes due to screening or reduced RMP
Directory of Open Access Journals (Sweden)
Mohamed Mostafa R.
2016-01-01
Self-excited permanent magnet induction generators (SEPMIGs) are commonly used in wind energy generation systems. The difficulty of SEPMIG modeling is that the circuit parameters of the generator vary with load conditions due to changes in frequency and stator voltage. The paper introduces a new modeling of the SEPMIG using the Gauss-Seidel relaxation method. The SEPMIG characteristics using the proposed method are studied at different load conditions according to wind speed variation, load impedance changes, and different shunt capacitor values. The system modeling is investigated with respect to the variations of magnetizing current, efficiency, power, and power factor. The proposed modeling achieves a high degree of simplicity and accuracy.
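The record does not give the generator's circuit equations, so the sketch below shows only the generic Gauss-Seidel relaxation the paper builds on, applied to an assumed diagonally dominant toy system (variable names are illustrative).

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=500):
    """Gauss-Seidel relaxation: sweep through the equations, always
    using the freshest values of unknowns already updated this sweep."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

# assumed diagonally dominant toy system (guarantees convergence)
A = np.array([[4.0, 1.0],
              [2.0, 5.0]])
b = np.array([9.0, 12.0])
x = gauss_seidel(A, b)
```

In the paper's setting the unknowns would be the load-dependent circuit quantities, re-relaxed at each operating point as frequency and stator voltage change.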
Non-parametric iterative model constraint graph min-cut for automatic kidney segmentation.
Freiman, M; Kronman, A; Esses, S J; Joskowicz, L; Sosna, J
2010-01-01
We present a new non-parametric model constraint graph min-cut algorithm for automatic kidney segmentation in CT images. The segmentation is formulated as a maximum a-posteriori estimation of a model-driven Markov random field. A non-parametric hybrid shape and intensity model is treated as a latent variable in the energy functional. The latent model and labeling map that minimize the energy functional are then simultaneously computed with an expectation maximization approach. The main advantages of our method are that it does not assume a fixed parametric prior model, which is subject to inter-patient variability and registration errors, and that it combines both the model and the image information into a unified graph min-cut based segmentation framework. We evaluated our method on 20 kidneys from 10 CT datasets with and without contrast agent for which ground-truth segmentations were generated by averaging three manual segmentations. Our method yields an average volumetric overlap error of 10.95% and an average symmetric surface distance of 0.79 mm. These results indicate that our method is accurate and robust for kidney segmentation.
Hybrid modeling and prediction of dynamical systems
Lloyd, Alun L.; Flores, Kevin B.
2017-01-01
Scientific analysis often relies on the ability to make accurate predictions of a system’s dynamics. Mechanistic models, parameterized by a number of unknown parameters, are often used for this purpose. Accurate estimation of the model state and parameters prior to prediction is necessary, but may be complicated by issues such as noisy data and uncertainty in parameters and initial conditions. At the other end of the spectrum exist nonparametric methods, which rely solely on data to build their predictions. While these nonparametric methods do not require a model of the system, their performance is strongly influenced by the amount and noisiness of the data. In this article, we consider a hybrid approach to modeling and prediction which merges recent advancements in nonparametric analysis with standard parametric methods. The general idea is to replace a subset of a mechanistic model’s equations with their corresponding nonparametric representations, resulting in a hybrid modeling and prediction scheme. Overall, we find that this hybrid approach allows for more robust parameter estimation and improved short-term prediction in situations where there is a large uncertainty in model parameters. We demonstrate these advantages in the classical Lorenz-63 chaotic system and in networks of Hindmarsh-Rose neurons before application to experimentally collected structured population data. PMID:28692642
Risk terrain modeling predicts child maltreatment.
Daley, Dyann; Bachmann, Michael; Bachmann, Brittany A; Pedigo, Christian; Bui, Minh-Thuy; Coffman, Jamye
2016-12-01
As indicated by research on the long-term effects of adverse childhood experiences (ACEs), maltreatment has far-reaching consequences for affected children. Effective prevention measures have been elusive, partly due to difficulty in identifying vulnerable children before they are harmed. This study employs Risk Terrain Modeling (RTM), an analysis of the cumulative effect of environmental factors thought to be conducive for child maltreatment, to create a highly accurate prediction model for future substantiated child maltreatment cases in the City of Fort Worth, Texas. The model is superior to commonly used hotspot predictions and more beneficial in aiding prevention efforts in a number of ways: 1) it identifies the highest risk areas for future instances of child maltreatment with improved precision and accuracy; 2) it aids the prioritization of risk-mitigating efforts by informing about the relative importance of the most significant contributing risk factors; 3) since predictions are modeled as a function of easily obtainable data, practitioners do not have to undergo the difficult process of obtaining official child maltreatment data to apply it; 4) the inclusion of a multitude of environmental risk factors creates a more robust model with higher predictive validity; and, 5) the model does not rely on a retrospective examination of past instances of child maltreatment, but adapts predictions to changing environmental conditions. The present study introduces and examines the predictive power of this new tool to aid prevention efforts seeking to improve the safety, health, and wellbeing of vulnerable children.
DEFF Research Database (Denmark)
Sokoler, Leo Emil; Skajaa, Anders; Frison, Gianluca
2013-01-01
In this paper, we present a warm-started homogeneous and self-dual interior-point method (IPM) for the linear programs arising in economic model predictive control (MPC) of linear systems. To exploit the structure in the optimization problems, our algorithm utilizes a Riccati iteration procedure. We implement the algorithm in MATLAB and analyze its performance based on a smart grid power management case study. Closed-loop simulations show that 1) our algorithm is significantly faster than state-of-the-art IPMs based on sparse linear algebra routines, and 2) warm-starting reduces the number of iterations.
Radac, Mircea-Bogdan; Precup, Radu-Emil; Petriu, Emil M
2015-11-01
This paper proposes a novel model-free trajectory tracking of multiple-input multiple-output (MIMO) systems by the combination of iterative learning control (ILC) and primitives. The optimal trajectory tracking solution is obtained in terms of previously learned solutions to simple tasks called primitives. The library of primitives that are stored in memory consists of pairs of reference input/controlled output signals. The reference input primitives are optimized in a model-free ILC framework without using knowledge of the controlled process. The guaranteed convergence of the learning scheme is built upon a model-free virtual reference feedback tuning design of the feedback decoupling controller. Each new complex trajectory to be tracked is decomposed into the output primitives regarded as basis functions. The optimal reference input for the control system to track the desired trajectory is next recomposed from the reference input primitives. This is advantageous because the optimal reference input is computed straightforward without the need to learn from repeated executions of the tracking task. In addition, the optimization problem specific to trajectory tracking of square MIMO systems is decomposed in a set of optimization problems assigned to each separate single-input single-output control channel that ensures a convenient model-free decoupling. The new model-free primitive-based ILC approach is capable of planning, reasoning, and learning. A case study dealing with the model-free control tuning for a nonlinear aerodynamic system is included to validate the new approach. The experimental results are given.
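The paper's primitive library and model-free decoupling design are not reproduced here; as background, a minimal P-type iterative learning control sketch on an assumed first-order plant shows the trial-to-trial error contraction that such learning schemes rely on (plant, gains, and reference are all illustrative assumptions).

```python
import numpy as np

def plant(u, a=0.7, b=1.0):
    """Assumed first-order discrete plant: y(t) = a*y(t-1) + b*u(t)."""
    y = np.zeros(len(u) + 1)
    for t in range(len(u)):
        y[t + 1] = a * y[t] + b * u[t]
    return y[1:]

def ilc(ref, trials=30, gamma=0.5):
    """P-type ILC: after each full trial, u_{j+1} = u_j + gamma * e_j."""
    u = np.zeros(len(ref))
    errors = []
    for _ in range(trials):
        e = ref - plant(u)                 # error recorded over one trial
        errors.append(np.max(np.abs(e)))
        u = u + gamma * e                  # learn from this trial's error
    return u, errors

ref = np.sin(np.linspace(0, 2 * np.pi, 50))  # trajectory to track
u, errors = ilc(ref)
```

Because learning happens between complete executions, no model of the plant is needed, which is the property the primitive-based approach extends to MIMO trajectory composition.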
Property predictions using microstructural modeling
Energy Technology Data Exchange (ETDEWEB)
Wang, K.G. [Department of Materials Science and Engineering, Rensselaer Polytechnic Institute, CII 9219, 110 8th Street, Troy, NY 12180-3590 (United States)]. E-mail: wangk2@rpi.edu; Guo, Z. [Sente Software Ltd., Surrey Technology Centre, 40 Occam Road, Guildford GU2 7YG (United Kingdom); Sha, W. [Metals Research Group, School of Civil Engineering, Architecture and Planning, The Queen's University of Belfast, Belfast BT7 1NN (United Kingdom); Glicksman, M.E. [Department of Materials Science and Engineering, Rensselaer Polytechnic Institute, CII 9219, 110 8th Street, Troy, NY 12180-3590 (United States); Rajan, K. [Department of Materials Science and Engineering, Rensselaer Polytechnic Institute, CII 9219, 110 8th Street, Troy, NY 12180-3590 (United States)
2005-07-15
Precipitation hardening in an Fe-12Ni-6Mn maraging steel during overaging is quantified. First, applying our recent kinetic model of coarsening [Phys. Rev. E, 69 (2004) 061507], and incorporating the Ashby-Orowan relationship, we link quantifiable aspects of the microstructures of these steels to their mechanical properties, including especially the hardness. Specifically, hardness measurements allow calculation of the precipitate size as a function of time and temperature through the Ashby-Orowan relationship. Second, calculated precipitate sizes and thermodynamic data determined with Thermo-Calc© are used with our recent kinetic coarsening model to extract diffusion coefficients during overaging from hardness measurements. Finally, employing more accurate diffusion parameters, we determined the hardness of these alloys independently from theory, and found agreement with experimental hardness data. Diffusion coefficients determined during overaging of these steels are notably higher than those found during the aging - an observation suggesting that precipitate growth during aging and precipitate coarsening during overaging are not controlled by the same diffusion mechanism.
Spatial Economics Model Predicting Transport Volume
Directory of Open Access Journals (Sweden)
Lu Bo
2016-10-01
It is extremely important to predict logistics requirements in a scientific and rational way. In recent years, however, improvements to prediction methods have not been significant: traditional statistical prediction methods suffer from low precision and poor interpretability, and can neither guarantee the generalization ability of the prediction model theoretically nor explain the model effectively. Therefore, in combination with theories from spatial economics, industrial economics, and neo-classical economics, and taking the city of Zhuanghe as the research object, this study identifies the leading industry that can produce a large volume of cargo and predicts the static logistics generation of Zhuanghe and its hinterland. By integrating the various factors that affect regional logistics requirements, this study establishes a logistics requirements potential model based on spatial economic principles, expanding logistics requirements prediction from purely statistical principles to the domain of spatial and regional economics.
Iterative methods for mixed finite element equations
Nakazawa, S.; Nagtegaal, J. C.; Zienkiewicz, O. C.
1985-01-01
Iterative strategies for the solution of the indefinite systems of equations arising from the mixed finite element method are investigated in this paper, with application to linear and nonlinear problems in solid and structural mechanics. The augmented Hu-Washizu form is derived and then utilized to construct a family of iterative algorithms using the displacement method as the preconditioner. Two types of iterative algorithms are implemented: constant metric iterations, which do not involve updating the preconditioner, and variable metric iterations, in which the inverse of the preconditioning matrix is updated. A series of numerical experiments is conducted to evaluate the numerical performance with application to linear and nonlinear model problems.
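A "constant metric" iteration keeps one fixed, cheap-to-invert preconditioner P for every step. The sketch below is generic: it uses an assumed diagonally dominant toy matrix and a diagonal P, not the paper's displacement-method preconditioner or a mixed-FE saddle-point system.

```python
import numpy as np

def constant_metric_iteration(A, b, P, max_iter=200, tol=1e-10):
    """Constant metric: the preconditioner P never changes, so every
    step solves the cheap system P dx = r rather than updating P."""
    x = np.zeros(len(b))
    for _ in range(max_iter):
        r = b - A @ x                      # current residual
        if np.linalg.norm(r) < tol:
            break
        x += np.linalg.solve(P, r)         # preconditioned correction
    return x

# assumed diagonally dominant toy system (not a mixed-FE matrix)
A = np.array([[4.0, 1.0, 1.0],
              [1.0, 3.0, 1.0],
              [1.0, 1.0, 5.0]])
b = np.array([6.0, 5.0, 7.0])
P = np.diag(np.diag(A))                    # cheap fixed preconditioner
x = constant_metric_iteration(A, b, P)
```

A variable metric scheme would instead update (an approximation of) P⁻¹ as the iteration proceeds, trading per-step cost against iteration count.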
The ITER project construction status
Motojima, O.
2015-10-01
The pace of the ITER project in St Paul-lez-Durance, France is accelerating rapidly into its peak construction phase. With the completion of the B2 slab in August 2014, which will support about 400 000 metric tons of the tokamak complex structures and components, the construction is advancing on a daily basis. Magnet, vacuum vessel, cryostat, thermal shield, first wall and divertor structures are under construction or in prototype phase in the ITER member states of China, Europe, India, Japan, Korea, Russia, and the United States. Each of these member states has its own domestic agency (DA) to manage their procurements of components for ITER. Plant systems engineering is being transformed to fully integrate the tokamak and its auxiliary systems in preparation for the assembly and operations phase. CODAC, diagnostics, and the three main heating and current drive systems are also progressing, including the construction of the neutral beam test facility building in Padua, Italy. The conceptual design of the Chinese test blanket module system for ITER has been completed and those of the EU are well under way. Significant progress has been made addressing several outstanding physics issues including disruption load characterization, prediction, avoidance, and mitigation; first wall and divertor shaping; edge pedestal and SOL plasma stability; fuelling and plasma behaviour during confinement transients; and W impurity transport. Further development of the ITER Research Plan has included a definition of the required plant configuration for 1st plasma and subsequent phases of ITER operation, the major plasma commissioning activities, and the needs of the R&D program accompanying ITER construction to be carried out by the ITER parties.
Energy Technology Data Exchange (ETDEWEB)
Cwik, T.; Jamnejad, V.; Zuffada, C. [California Institute of Technology, Pasadena, CA (United States)
1994-12-31
The usefulness of finite element modeling follows from the ability to accurately simulate the geometry and three-dimensional fields on the scale of a fraction of a wavelength. To make this modeling practical for engineering design, it is necessary to integrate the stages of geometry modeling and mesh generation, numerical solution of the fields-a stage heavily dependent on the efficient use of a sparse matrix equation solver, and display of field information. The stages of geometry modeling, mesh generation, and field display are commonly completed using commercially available software packages. Algorithms for the numerical solution of the fields need to be written for the specific class of problems considered. Interior problems, i.e. simulating fields in waveguides and cavities, have been successfully solved using finite element methods. Exterior problems, i.e. simulating fields scattered or radiated from structures, are more difficult to model because of the need to numerically truncate the finite element mesh. To practically compute a solution to exterior problems, the domain must be truncated at some finite surface where the Sommerfeld radiation condition is enforced, either approximately or exactly. Approximate methods attempt to truncate the mesh using only local field information at each grid point, whereas exact methods are global, needing information from the entire mesh boundary. In this work, a method that couples three-dimensional finite element (FE) solutions interior to the bounding surface, with an efficient integral equation (IE) solution that exactly enforces the Sommerfeld radiation condition is developed. The bounding surface is taken to be a surface of revolution (SOR) to greatly reduce computational expense in the IE portion of the modeling.
Modeling and Prediction Using Stochastic Differential Equations
DEFF Research Database (Denmark)
Juhl, Rune; Møller, Jan Kloppenborg; Jørgensen, John Bagterp
2016-01-01
Pharmacokinetic/pharmacodynamic (PK/PD) modeling for a single subject is most often performed using nonlinear models based on deterministic ordinary differential equations (ODEs), and the variation between subjects in a population is described using a population (mixed effects) setup. The ODE setup implies that the variation for a single subject is described by a single parameter (or vector), namely the variance (covariance) of the residuals. Furthermore, the prediction of the states is given as the solution to the ODEs and hence assumed deterministic, able to predict the future perfectly. A more realistic approach would be to allow for randomness in the model due to, e.g., the model being too simple or errors in input. We describe a modeling and prediction setup which better reflects reality and suggests stochastic differential equations (SDEs).
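As a minimal illustration of the SDE alternative (not the authors' PK/PD models), an Euler-Maruyama simulation of an assumed Ornstein-Uhlenbeck process shows how state predictions become stochastic rather than perfectly deterministic; all parameter values here are illustrative.

```python
import numpy as np

def euler_maruyama(x0, drift, diffusion, dt, n_steps, rng):
    """Simulate dX = f(X) dt + g(X) dW with the Euler-Maruyama scheme."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))          # Brownian increment
        x[k + 1] = x[k] + drift(x[k]) * dt + diffusion(x[k]) * dw
    return x

# assumed Ornstein-Uhlenbeck process: mean-reverting toward mu
theta, mu, sigma = 2.0, 1.0, 0.3
rng = np.random.default_rng(0)
path = euler_maruyama(5.0, lambda x: theta * (mu - x),
                      lambda x: sigma, dt=0.01, n_steps=2000, rng=rng)
```

Replacing the ODE solution with sample paths like this one is what lets the SDE setup separate system noise from measurement noise instead of lumping everything into the residual variance.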
Precision Plate Plan View Pattern Predictive Model
Institute of Scientific and Technical Information of China (English)
ZHAO Yang; YANG Quan; HE An-rui; WANG Xiao-chen; ZHANG Yun
2011-01-01
According to the rolling features of a plate mill, a 3D elastic-plastic FEM (finite element model) based on the full restart method of ANSYS/LS-DYNA was established to study the inhomogeneous plastic deformation of multipass plate rolling. By analyzing the simulation results, the difference between the head and tail end predictive models was found and corrected. According to the numerical simulation results of 120 different conditions, a precision plate plan view pattern predictive model was established. Based on these models, the sizing MAS (Mizushima automatic plan view pattern control system) method was designed and used on a 2 800 mm plate mill. Comparing plates rolled with and without the PVPP (plan view pattern predictive) model, the reduced width deviation indicates that the plate plan view pattern predictive model is precise.
Unification and extension of monolithic state space and iterative cochlear models.
Rapson, Michael J; Tapson, Jonathan C; Karpul, David
2012-05-01
Time domain cochlear models have primarily followed a method introduced by Allen and Sondhi [J. Acoust. Soc. Am. 66, 123-132 (1979)]. Recently the "state space formalism" proposed by Elliott et al. [J. Acoust. Soc. Am. 122, 2759-2771 (2007)] has been used to simulate a wide range of nonlinear cochlear models. It used a one-dimensional approach that is extended to two dimensions in this paper, using the finite element method. The recently developed "state space formalism" in fact shares a close relationship to the earlier approach. Working from Diependaal et al. [J. Acoust. Soc. Am. 82, 1655-1666 (1987)] the two approaches are compared and the relationship formalized. Understanding this relationship allows models to be converted from one to the other in order to utilize each of their strengths. A second method to derive the state space matrices required for the "state space formalism" is also presented. This method offers improved numerical properties because it uses the information available about the model more effectively. Numerical results support the claims regarding fluid dimension and the underlying similarity of the two approaches. Finally, the recent advances in the state space formalism [Bertaccini and Sisto, J. Comp. Phys. 230, 2575-2587 (2011)] are discussed in terms of this relationship.
Comparison of Iterative Methods for Computing the Pressure Field in a Dynamic Network Model
DEFF Research Database (Denmark)
Mogensen, Kristian; Stenby, Erling Halfdan; Banerjee, Srilekha
1999-01-01
In dynamic network models, the pressure map (the pressure in the pores) must be evaluated at each time step. This calculation involves the solution of a large number of nonlinear algebraic systems of equations and accounts for more than 80% of the total CPU-time. Each nonlinear system requires
Core-SOL modelling of neon seeded JET discharges with the ITER-like wall
Energy Technology Data Exchange (ETDEWEB)
Telesca, G. [Department of Applied Physics, Ghent University (Belgium); EUROfusion Consortium, JET, Culham Science Centre, Abingdon (United Kingdom); Ivanova-Stanik, I.; Zagoerski, R.; Czarnecka, A. [Institute of Plasma Physics and Laser Microfusion, Warsaw (Poland); EUROfusion Consortium, JET, Culham Science Centre, Abingdon (United Kingdom); Brezinsek, S.; Huber, A.; Wiesen, S. [Forschungszentrum Juelich GmbH, Institut fuer Klima- und Energieforschung-Plasmaphysik, Juelich (Germany); EUROfusion Consortium, JET, Culham Science Centre, Abingdon (United Kingdom); Drewelow, P. [Max-Planck-Institut fuer Plasmaphysik, Greifswald (Germany); EUROfusion Consortium, JET, Culham Science Centre, Abingdon (United Kingdom); Giroud, C. [CCFE Culham, Abingdon (United Kingdom); EUROfusion Consortium, JET, Culham Science Centre, Abingdon (United Kingdom); Collaboration: JET EFDA contributors
2016-08-15
Five ELMy H-mode Ne-seeded JET pulses have been simulated with the self-consistent core-SOL model COREDIV. In this five-pulse series only the Ne seeding rate was changed shot by shot, allowing a thorough study of the effect of Ne seeding on the total radiated power and on its distribution between core and SOL to be made. Increasing the Ne seeding rate in the simulations above the level achieved in experiments shows saturation of the total radiated power at a relatively low radiated-to-heating power ratio (f{sub rad} = 0.60) and a further increase of the ratio of SOL to core radiation, in agreement with the reduction of W release at high Ne seeding levels. In spite of the uncertainties caused by the simplified SOL model of COREDIV (neutral model, absence of ELMs, and slab model for the SOL), the increase of the perpendicular transport in the SOL with increasing Ne seeding rate, which allows the experimental core-SOL distribution of the radiated power to be reproduced numerically, appears to be of general applicability. (copyright 2016 The Authors. Contributions to Plasma Physics published by Wiley-VCH Verlag GmbH and Co. KGaA Weinheim.)
Multi-Draft Composing: An Iterative Model for Academic Argument Writing
Eckstein, Grant; Chariton, Jessica; McCollum, Robb Mark
2011-01-01
Post-secondary writing teachers in composition and English as a second language (ESL) writing programs are likely familiar with multi-draft composing. Both composition and ESL writing programs share nearly identical multi-draft models despite the very unique and different cultures of each group. We argue that multi-draft composing as it is…
Repurposing and probabilistic integration of data: An iterative and data model independent approach
Wanders, B.
2016-01-01
Besides the scientific paradigms of empiricism, mathematical modelling, and simulation, the method of combining and analysing data in novel ways has become a main research paradigm capable of tackling research questions that could not be answered before. To speed up research in this new paradigm, sc
NBC Hazard Prediction Model Capability Analysis
1999-09-01
Puff (SCIPUFF) Model Verification and Evaluation Study, Air Resources Laboratory, NOAA, May 1998. Based on the NOAA review, the VLSTRACK developers... TO SUBSTANTIAL DIFFERENCES IN PREDICTIONS... HPAC uses a transport and dispersion (T&D) model called SCIPUFF and an associated mean wind field model... SCIPUFF is a model for atmospheric dispersion that uses the Gaussian puff method: an arbitrary time-dependent concentration field is represented
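The Gaussian puff representation named in the snippet above can be sketched briefly. This is a minimal illustration only, assuming a single release of mass q with fixed spreads; the actual SCIPUFF code evolves the puff moments with turbulence closures and handles puff splitting and merging, none of which is shown here:

```python
import math

def puff_concentration(q, pos, center, sigma):
    """Concentration at `pos` from one Gaussian puff of mass q centred
    at `center`, with spreads sigma = (sx, sy, sz). Illustrative only."""
    sx, sy, sz = sigma
    dx, dy, dz = (p - c for p, c in zip(pos, center))
    norm = q / ((2 * math.pi) ** 1.5 * sx * sy * sz)
    return norm * math.exp(-0.5 * (dx**2 / sx**2 + dy**2 / sy**2 + dz**2 / sz**2))

def field_concentration(puffs, pos):
    # A time-dependent concentration field is represented as a
    # superposition of puffs, each a (mass, center, sigma) triple.
    return sum(puff_concentration(q, pos, c, s) for q, c, s in puffs)
```

The peak value q/((2π)^{3/2} σxσyσz) occurs at the puff centre and the field decays as a Gaussian with distance.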
The craft of model making: PSPACE bounds for non-iterative modal logics
Schröder, Lutz
2008-01-01
For lack of general algorithmic methods that apply to wide classes of logics, establishing a complexity bound for a given modal logic is often a laborious task. The present work is a step towards a general theory of the complexity of modal logics. Our main result is that all rank-1 logics enjoy a shallow model property and thus are, under mild assumptions on the format of their axiomatisation, in PSPACE. This leads to a unified derivation of tight PSPACE-bounds for a number of logics including K, KD, coalition logic, graded modal logic, majority logic, and probabilistic modal logic. Our generic algorithm moreover finds tableau proofs that witness pleasant proof-theoretic properties including a weak subformula property. This generality is made possible by a coalgebraic semantics, which conveniently abstracts from the details of a given model class and thus allows covering a broad range of logics in a uniform way.
Corporate prediction models, ratios or regression analysis?
Bijnen, E.J.; Wijn, M.F.C.M.
1994-01-01
The models developed in the literature with respect to the prediction of a company's failure are based on ratios. It has been shown before that these models should be rejected on theoretical grounds. Our study of industrial companies in the Netherlands shows that the ratios which are used in
Modelling Chemical Reasoning to Predict Reactions
Segler, Marwin H S
2016-01-01
The ability to reason beyond established knowledge allows Organic Chemists to solve synthetic problems and to invent novel transformations. Here, we propose a model which mimics chemical reasoning and formalises reaction prediction as finding missing links in a knowledge graph. We have constructed a knowledge graph containing 14.4 million molecules and 8.2 million binary reactions, which represents the bulk of all chemical reactions ever published in the scientific literature. Our model outperforms a rule-based expert system in the reaction prediction task for 180,000 randomly selected binary reactions. We show that our data-driven model generalises even beyond known reaction types, and is thus capable of effectively (re-) discovering novel transformations (even including transition-metal catalysed reactions). Our model enables computers to infer hypotheses about reactivity and reactions by only considering the intrinsic local structure of the graph, and because each single reaction prediction is typically ac...
Built To Last: Using Iterative Development Models for Sustainable Scientific Software Development
Jasiak, M. E.; Truslove, I.; Savoie, M.
2013-12-01
In scientific research, software exists fundamentally for the results it creates; the core research must take focus. It seems natural to researchers, driven by grant deadlines, that every dollar invested in software development should be used to push the boundaries of problem solving. This system of values is frequently misaligned with creating software in a sustainable fashion: short-term optimizations create longer-term sustainability issues. The National Snow and Ice Data Center (NSIDC) has taken bold cultural steps in using agile and lean development and management methodologies to help its researchers meet critical deadlines, while building in the necessary support structure for the code to live far beyond its original milestones. Agile and lean software development methodologies, including Scrum, Kanban, Continuous Delivery and Test-Driven Development, have seen widespread adoption within NSIDC. This focus on development methods is combined with an emphasis on explaining to researchers why these methods produce more desirable results for everyone, as well as on promoting direct interaction between developers and researchers. This presentation will describe NSIDC's current scientific software development model, how it addresses the short-term versus sustainability dichotomy, the lessons learned and successes realized by transitioning to this agile- and lean-influenced model, and the current challenges faced by the organization.
Shen, Yijie; Gong, Mali; Ji, Encai; Fu, Xing; Sun, Licheng
2017-01-01
A new theoretical model, the spatial dynamic thermal iteration (SDTI) model, for diode-end-pumped solid-state laser systems is developed, applicable to both laser oscillators and amplifiers. The influences of pump beam quality, ground state absorption and depletion (GSA/GSD) and energy transfer upconversion (ETU) are included in our model. According to the basic principles of nonradiative transitions and population dynamics, we can obtain the spatial distribution of heat generation and temperature within the laser crystal by numerically solving the heat conduction equation with the finite element method (FEM). Furthermore, a spatial mesh iteration algorithm is designed to analyze the temperature dependence of the absorption cross section, emission cross section and thermal conductivity. Finally, the simulated results of our SDTI model were shown to coincide precisely with reported experimental results in classical 888 nm end-pumped Nd:YVO4 laser oscillator and amplifier systems.
Dasari, Nagamalleswararao; Mondal, Wasim Raja; Zhang, Peng; Moreno, Juana; Jarrell, Mark; Vidhyadhiraja, N. S.
2016-09-01
The dynamical mean field theory (DMFT) has emerged as one of the most important frameworks for theoretical investigations of strongly correlated lattice models and real material systems. Within DMFT, a lattice model can be mapped onto the problem of a magnetic impurity embedded in a self-consistently determined bath. The solution of this impurity problem is the most challenging step in this framework. The available numerically exact methods such as quantum Monte Carlo, numerical renormalization group or exact diagonalization are naturally unbiased and accurate, but are computationally expensive. Thus, approximate methods, based e.g. on diagrammatic perturbation theory have gained substantial importance. Although such methods are not always reliable in various parameter regimes such as in the proximity of phase transitions or for strong coupling, the advantages they offer, in terms of being computationally inexpensive, with real frequency output at zero and finite temperatures, compensate for their deficiencies and offer a quick, qualitative analysis of the system behavior. In this work, we have developed such a method, that can be classified as a multi-orbital iterated perturbation theory (MO-IPT) to study N-fold degenerate and non degenerate Anderson impurity models. As applications of the solver, we have embedded the MO-IPT within DMFT and explored lattice models like the single orbital Hubbard model, covalent band insulator and the multi-orbital Hubbard model for density-density type interactions in different parameter regimes. The Hund's coupling effects in case of multiple orbitals is also studied. The limitations and quality of results are gauged through extensive comparison with data from the numerically exact continuous time quantum Monte Carlo method (CTQMC). In the case of the single orbital Hubbard model, covalent band insulators and non degenerate multi-orbital Hubbard models, we obtained an excellent agreement between the Matsubara self-energies of MO
Studio Physics at the Colorado School of Mines: A model for iterative development and assessment
Kohl, Patrick; Kuo, Vincent
2009-05-01
The Colorado School of Mines (CSM) has taught its first-semester introductory physics course using a hybrid lecture/Studio Physics format for several years. Based on this previous success, over the past 18 months we have converted the second semester of our traditional calculus-based introductory physics course (Physics II) to a Studio Physics format. In this talk, we describe the recent history of the Physics II course and of Studio at Mines, discuss the PER-based improvements that we are implementing, and characterize our progress via several metrics, including pre/post Conceptual Survey of Electricity and Magnetism (CSEM) scores, Colorado Learning About Science Survey scores (CLASS), failure rates, and exam scores. We also report on recent attempts to involve students in the department's Senior Design program with our course. Our ultimate goal is to construct one possible model for a practical and successful transition from a lecture course to a Studio (or Studio-like) course.
Model-Based Iterative Reconstruction for Radial Fast Spin-Echo MRI
Block, Kai Tobias; Frahm, Jens
2016-01-01
In radial fast spin-echo MRI, a set of overlapping spokes with an inconsistent T2 weighting is acquired, which results in an averaged image contrast when employing conventional image reconstruction techniques. This work demonstrates that the problem may be overcome with the use of a dedicated reconstruction method that further allows for T2 quantification by extracting the embedded relaxation information. Thus, the proposed reconstruction method directly yields a spin-density and relaxivity map from only a single radial data set. The method is based on an inverse formulation of the problem and involves a modeling of the received MRI signal. Because the solution is found by numerical optimization, the approach exploits all data acquired. Further, it handles multi-coil data and optionally allows for the incorporation of additional prior knowledge. Simulations and experimental results for a phantom and human brain in vivo demonstrate that the method yields spin-density and relaxivity maps that are neither affect...
Evaluation of CASP8 model quality predictions
Cozzetto, Domenico
2009-01-01
The model quality assessment problem consists in the a priori estimation of the overall and per-residue accuracy of protein structure predictions. Over the past years, a number of methods have been developed to address this issue and CASP established a prediction category to evaluate their performance in 2006. In 2008 the experiment was repeated and its results are reported here. Participants were invited to infer the correctness of the protein models submitted by the registered automatic servers. Estimates could apply to both whole models and individual amino acids. Groups involved in the tertiary structure prediction categories were also asked to assign local error estimates to each predicted residue in their own models and their results are also discussed here. The correlation between the predicted and observed correctness measures was the basis of the assessment of the results. We observe that consensus-based methods still perform significantly better than those accepting single models, similarly to what was concluded in the previous edition of the experiment. © 2009 WILEY-LISS, INC.
Directory of Open Access Journals (Sweden)
Xuan Wu
2015-01-01
In order to control the permanent-magnet synchronous motor (PMSM) system under different disturbances and nonlinearity, an improved current control algorithm for PMSM systems using recursive model predictive control (RMPC) is developed in this paper. As conventional MPC has to be computed online, its iterative computational procedure needs long computing times. To enhance computational speed, a recursive method based on the recursive Levenberg-Marquardt algorithm (RLMA) and iterative learning control (ILC) is introduced to solve the learning issue in MPC. RMPC is able to significantly decrease the computation cost of traditional MPC in the PMSM system. The effectiveness of the proposed algorithm has been verified by simulation and experimental results.
Genetic models of homosexuality: generating testable predictions
Gavrilets, Sergey; Rice, William R.
2006-01-01
Homosexuality is a common occurrence in humans and other species, yet its genetic and evolutionary basis is poorly understood. Here, we formulate and study a series of simple mathematical models for the purpose of predicting empirical patterns that can be used to determine the form of selection that leads to polymorphism of genes influencing homosexuality. Specifically, we develop theory to make contrasting predictions about the genetic characteristics of genes influencing homosexuality inclu...
Wind farm production prediction - The Zephyr model
Energy Technology Data Exchange (ETDEWEB)
Landberg, L. [Risoe National Lab., Wind Energy Dept., Roskilde (Denmark); Giebel, G. [Risoe National Lab., Wind Energy Dept., Roskilde (Denmark); Madsen, H. [IMM (DTU), Kgs. Lyngby (Denmark); Nielsen, T.S. [IMM (DTU), Kgs. Lyngby (Denmark); Joergensen, J.U. [Danish Meteorologisk Inst., Copenhagen (Denmark); Lauersen, L. [Danish Meteorologisk Inst., Copenhagen (Denmark); Toefting, J. [Elsam, Fredericia (DK); Christensen, H.S. [Eltra, Fredericia (Denmark); Bjerge, C. [SEAS, Haslev (Denmark)
2002-06-01
This report describes a project - funded by the Danish Ministry of Energy and the Environment - which developed a next generation prediction system called Zephyr. The Zephyr system is a merging between two state-of-the-art prediction systems: Prediktor of Risoe National Laboratory and WPPT of IMM at the Danish Technical University. The numerical weather predictions were generated by DMI's HIRLAM model. Due to technical difficulties programming the system, only the computational core and a very simple version of the originally very complex system were developed. The project partners were: Risoe, DMU, DMI, Elsam, Eltra, Elkraft System, SEAS and E2. (au)
Structural model for the first wall W-based material in ITER project
Institute of Scientific and Technical Information of China (English)
Dehua Xu; Xinkui He; Shuiquan Deng; Yong Zhao
2014-01-01
The preparation, characterization, and testing of the first wall materials designed to be used in the fusion reactor have remained challenging problems in materials science. This work uses the first-principles method as implemented in the CASTEP package to study the influences of the doped titanium carbide on the structural stability of the W–TiC material. The calculated total energy and enthalpy have been used as criteria to judge the structural models built with consideration of symmetry. Our simulation indicates that the doped TiC tends to form its own domain up to the investigated nano-scale, which implies a possible phase separation. This result reveals the intrinsic reason for the composite nature of the W–TiC material and provides an explanation for the experimentally observed phase separation at the nano-scale. Our approach also sheds light on explaining the enhancing effects of doped components on the durability, reliability, corrosion resistance, etc., in many special steels.
Izumi, Kenji; Bartlein, Patrick J.
2016-10-01
The inverse modeling through iterative forward modeling (IMIFM) approach was used to reconstruct Last Glacial Maximum (LGM) climates from North American fossil pollen data. The approach was validated using modern pollen data and observed climate data. While the large-scale LGM temperature IMIFM reconstructions are similar to those calculated using conventional statistical approaches, the reconstructions of moisture variables differ between the two approaches. We used two vegetation models, BIOME4 and BIOME5-beta, with the IMIFM approach to evaluate the effects on the LGM climate reconstruction of differences in water use efficiency, carbon use efficiency, and atmospheric CO2 concentrations. Although lower atmospheric CO2 concentrations influence pollen-based LGM moisture reconstructions, they do not significantly affect temperature reconstructions over most of North America. This study implies that the LGM climate was very cold but not very much drier than present over North America, which is inconsistent with previous studies.
Predictive model for segmented poly(urea)
Directory of Open Access Journals (Sweden)
Frankl P.
2012-08-01
Segmented poly(urea) has been shown to be of significant benefit in protecting vehicles from blast and impact, and there have been several experimental studies to determine the mechanisms by which this protective function might occur. One suggested route is mechanical activation of the glass transition. In order to enable the design of protective structures using this material, a constitutive model and equation of state are needed for numerical simulation hydrocodes. Determination of such a predictive model may also help elucidate the beneficial mechanisms that occur in polyurea during high-rate loading. The tool deployed to do this has been Group Interaction Modelling (GIM), a mean field technique that has been shown to predict the mechanical and physical properties of polymers from their structure alone. The structure of polyurea has been used to characterise the parameters in the GIM scheme without recourse to experimental data, and the resulting equation of state and constitutive model predict the response over a wide range of temperatures and strain rates. The shock Hugoniot has been predicted and validated against existing data. Mechanical response in tensile tests has also been predicted and validated.
IHadoop: Asynchronous iterations for MapReduce
Elnikety, Eslam Mohamed Ibrahim
2011-11-01
MapReduce is a distributed programming framework designed to ease the development of scalable data-intensive applications for large clusters of commodity machines. Most machine learning and data mining applications involve iterative computations over large datasets, such as Web hyperlink structures and social network graphs. Yet, the MapReduce model does not efficiently support this important class of applications. The architecture of MapReduce, most critically its dataflow techniques and task scheduling, is completely unaware of the nature of iterative applications; tasks are scheduled according to a policy that optimizes the execution for a single iteration, which wastes bandwidth, I/O, and CPU cycles when compared with an optimal execution for a consecutive set of iterations. This work presents iHadoop, a modified MapReduce model, and an associated implementation, optimized for iterative computations. The iHadoop model schedules iterations asynchronously. It connects the output of one iteration to the next, allowing both to process their data concurrently. iHadoop's task scheduler exploits inter-iteration data locality by scheduling tasks that exhibit a producer/consumer relation on the same physical machine, allowing fast local data transfer. For those iterative applications that require satisfying certain criteria before termination, iHadoop runs the check concurrently during the execution of the subsequent iteration to further reduce the application's latency. This paper also describes our implementation of the iHadoop model, and evaluates its performance against Hadoop, the widely used open source implementation of MapReduce. Experiments using different data analysis applications over real-world and synthetic datasets show that iHadoop performs better than Hadoop for iterative algorithms, reducing the execution time of iterative applications by 25% on average. Furthermore, integrating iHadoop with HaLoop, a variant Hadoop implementation that caches
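The asynchronous-iteration idea in the abstract above can be illustrated with a toy pipeline: each iteration consumes the previous iteration's output chunk by chunk, so consecutive iterations overlap in time instead of running strictly one after another. The sketch below uses plain Python threads and queues purely as an analogy for iHadoop's producer/consumer scheduling; it is not based on iHadoop's actual Hadoop code:

```python
import queue
import threading

def run_pipelined(chunks, step, n_iters):
    """Apply `step` to each chunk, `n_iters` times, with iterations
    running concurrently: iteration i+1 starts consuming chunks as soon
    as iteration i emits them (toy analogue of asynchronous iterations)."""
    queues = [queue.Queue() for _ in range(n_iters + 1)]
    for c in chunks:
        queues[0].put(c)
    queues[0].put(None)  # end-of-stream marker

    def worker(i):
        while True:
            item = queues[i].get()
            if item is None:          # propagate the marker and stop
                queues[i + 1].put(None)
                return
            queues[i + 1].put(step(item))  # feed the next iteration immediately

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(n_iters)]
    for t in threads:
        t.start()
    out = []
    while True:                       # drain the final stage
        item = queues[n_iters].get()
        if item is None:
            break
        out.append(item)
    for t in threads:
        t.join()
    return out
```

Because each stage is a single FIFO consumer, chunk order is preserved while all stages run concurrently.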
PREDICTIVE CAPACITY OF ARCH FAMILY MODELS
Directory of Open Access Journals (Sweden)
Raphael Silveira Amaro
2016-03-01
In the last decades, a remarkable number of models, variants of the Autoregressive Conditional Heteroscedastic family, have been developed and empirically tested, making the process of choosing a particular model extremely complex. This research aims to compare the predictive capacity of five conditional heteroskedasticity models, using the Model Confidence Set procedure and considering eight different statistical probability distributions. The financial series used refer to the log-return series of the Bovespa index and the Dow Jones Industrial Index in the period between 27 October 2008 and 30 December 2014. The empirical evidence showed that, in general, competing models have a great homogeneity in making predictions, whether for the stock market of a developed country or for that of a developing country. An equivalent result can be inferred for the statistical probability distributions that were used.
Predictive QSAR modeling of phosphodiesterase 4 inhibitors.
Kovalishyn, Vasyl; Tanchuk, Vsevolod; Charochkina, Larisa; Semenuta, Ivan; Prokopenko, Volodymyr
2012-02-01
A series of diverse organic compounds, phosphodiesterase type 4 (PDE-4) inhibitors, have been modeled using a QSAR-based approach. 48 QSAR models were compared by following the same procedure with different combinations of descriptors and machine learning methods. QSAR methodologies used random forests and associative neural networks. The predictive ability of the models was tested through leave-one-out cross-validation, giving a Q² = 0.66-0.78 for regression models and total accuracies Ac=0.85-0.91 for classification models. Predictions for the external evaluation sets obtained accuracies in the range of 0.82-0.88 (for active/inactive classifications) and Q² = 0.62-0.76 for regressions. The method showed itself to be a potential tool for estimation of IC₅₀ of new drug-like candidates at early stages of drug development. Copyright © 2011 Elsevier Inc. All rights reserved.
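The leave-one-out Q² reported above is a generic cross-validation statistic, Q² = 1 − PRESS/TSS, and can be computed independently of the learning method. The sketch below is model-agnostic: the `fit` and `predict` callables are hypothetical stand-ins for the random forests and associative neural networks used in the study:

```python
def loo_q2(x, y, fit, predict):
    """Leave-one-out cross-validated Q^2 = 1 - PRESS / TSS.
    `fit(xs, ys)` returns a fitted model; `predict(model, xi)` returns
    a prediction for one sample. x and y are plain lists."""
    n = len(x)
    press = 0.0
    for i in range(n):
        xs = x[:i] + x[i + 1:]        # training set without sample i
        ys = y[:i] + y[i + 1:]
        model = fit(xs, ys)
        press += (y[i] - predict(model, x[i])) ** 2
    mean_y = sum(y) / n
    tss = sum((yi - mean_y) ** 2 for yi in y)
    return 1.0 - press / tss
```

A perfect predictor gives Q² = 1; a model no better than predicting the mean gives Q² near or below 0.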
Modelling the predictive performance of credit scoring
Directory of Open Access Journals (Sweden)
Shi-Wei Shen
2013-02-01
Orientation: The article discussed the importance of rigour in credit risk assessment. Research purpose: The purpose of this empirical paper was to examine the predictive performance of credit scoring systems in Taiwan. Motivation for the study: Corporate lending remains a major business line for financial institutions. However, in light of the recent global financial crises, it has become extremely important for financial institutions to implement rigorous means of assessing clients seeking access to credit facilities. Research design, approach and method: Using a data sample of 10 349 observations drawn between 1992 and 2010, logistic regression models were utilised to examine the predictive performance of credit scoring systems. Main findings: A test of goodness of fit demonstrated that credit scoring models that incorporated the Taiwan Corporate Credit Risk Index (TCRI) and micro- as well as macroeconomic variables possessed greater predictive power. This suggests that macroeconomic variables do have explanatory power for default credit risk. Practical/managerial implications: The originality of the study is that three models were developed to predict corporate firms' defaults based on different microeconomic and macroeconomic factors such as the TCRI, asset growth rates, stock index and gross domestic product. Contribution/value-add: The study utilises different goodness-of-fit measures and receiver operating characteristics in examining the robustness of the predictive power of these factors.
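The logistic regression underlying such scoring models estimates the probability of default from borrower features. A minimal single-feature sketch with plain gradient descent follows; the learning rate, step count, and toy data are illustrative assumptions, not details from the study, which fitted multi-variable models on real lending data:

```python
import math

def fit_logistic(xs, ys, lr=0.1, steps=2000):
    """Gradient-descent logistic regression with one feature plus an
    intercept. `ys` are 0/1 labels (e.g. non-default/default)."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # predicted probability
            gw += (p - y) * x                          # gradient of log-loss
            gb += (p - y)
        w -= lr * gw / len(xs)
        b -= lr * gb / len(xs)
    return w, b

def predict_proba(model, x):
    w, b = model
    return 1.0 / (1.0 + math.exp(-(w * x + b)))
```

With a feature positively associated with default, the fitted weight is positive and predicted probabilities increase monotonically in the feature.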
Calibrated predictions for multivariate competing risks models.
Gorfine, Malka; Hsu, Li; Zucker, David M; Parmigiani, Giovanni
2014-04-01
Prediction models for time-to-event data play a prominent role in assessing the individual risk of a disease, such as cancer. Accurate disease prediction models provide an efficient tool for identifying individuals at high risk, and provide the groundwork for estimating the population burden and cost of disease and for developing patient care guidelines. We focus on risk prediction of a disease in which family history is an important risk factor that reflects inherited genetic susceptibility, shared environment, and common behavior patterns. In this work family history is accommodated using frailty models, with the main novel feature being allowing for competing risks, such as other diseases or mortality. We show through a simulation study that naively treating competing risks as independent right censoring events results in non-calibrated predictions, with the expected number of events overestimated. Discrimination performance is not affected by ignoring competing risks. Our proposed prediction methodologies correctly account for competing events, are very well calibrated, and easy to implement.
Hossain, F.; Iqbal, N.; Lee, H.; Muhammad, A.
2015-12-01
When it comes to building durable capacity for implementing state-of-the-art technology and earth observation (EO) data for improved decision making, it has long been recognized that a unidirectional approach (from research to application) often does not work. Co-design of capacity building efforts has recently been recommended as a better alternative. This approach is a two-way street where scientists and stakeholders engage intimately along the entire chain of actions, from the design of research experiments to the packaging of decision-making tools, and each party provides an equal amount of input. Scientists execute research experiments based on boundary conditions and outputs that stakeholders define as tangible for decision making. On the other hand, decision-making tools are packaged by stakeholders, with scientists ensuring the application-specific science is relevant. In this talk, we will overview one such iterative capacity building approach that we have implemented for gravimetry-based satellite (GRACE) EO data for improved groundwater management in Pakistan. We call our approach a hybrid approach in which the initial step is a forward model involving a conventional short-term (3 day) capacity building workshop in the stakeholder environment addressing a very large audience. In this forward model, the net is cast wide to shortlist a set of highly motivated stakeholder agency staff members who are then engaged more directly in 1-1 training. In the next step (the backward model), these shortlisted staff members are brought back into the research environment of the scientists (supply) for 1-1 and long-term (6 month) intense brainstorming, training, and design of decision-making tools. The advantage of this backward model is that it gives scientists a much better understanding of the ground conditions and hurdles of making an EO-based scientific innovation work for a specific decision-making problem, which is otherwise fundamentally impossible in conventional
Energy Technology Data Exchange (ETDEWEB)
Hufnagel, Heike [Institut National de Recherche en Informatique et en Automatique (INRIA), Asclepios Project, Sophia Antipolis (France); University Medical Center Hamburg-Eppendorf, Department of Medical Informatics, Hamburg (Germany); Pennec, Xavier; Ayache, Nicholas [Institut National de Recherche en Informatique et en Automatique (INRIA), Asclepios Project, Sophia Antipolis (France); Ehrhardt, Jan; Handels, Heinz [University Medical Center Hamburg-Eppendorf, Department of Medical Informatics, Hamburg (Germany)
2008-03-15
Identification of point correspondences between shapes is required for statistical analysis of organ shape differences. Since manual identification of landmarks is not a feasible option in 3D, several methods were developed to automatically find one-to-one correspondences on shape surfaces. For unstructured point sets, however, one-to-one correspondences do not exist, but correspondence probabilities can be determined. A method was developed to compute a statistical shape model based on shapes which are represented by unstructured point sets with arbitrary point numbers. A fundamental problem when computing statistical shape models is the determination of correspondences between the points of the shape observations of the training data set. In the absence of landmarks, exact correspondences can only be determined between continuous surfaces, not between unstructured point sets. To overcome this problem, we introduce correspondence probabilities instead of exact correspondences. The correspondence probabilities are found by aligning the observation shapes with the affine expectation maximization-iterative closest points (EM-ICP) registration algorithm. In a second step, the correspondence probabilities are used as input to compute a mean shape (represented once again by an unstructured point set). Both steps are unified in a single optimization criterion which depends on the two parameters 'registration transformation' and 'mean shape'. In a last step, a variability model which best represents the variability in the training data set is computed. Experiments on synthetic data sets and in vivo brain structure data sets (MRI) are then designed to evaluate the performance of our algorithm. The new method was applied to brain MRI data sets, and the estimated point correspondences were compared to a statistical shape model built on exact correspondences. Based on established measures of 'generalization ability' and
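The correspondence probabilities at the heart of this approach can be sketched as a soft assignment between point sets. The minimal version below assumes the affine EM-ICP alignment has already been performed and uses an isotropic Gaussian kernel; the registration step itself, and the joint optimization over transformation and mean shape, are omitted:

```python
import math

def correspondence_probs(obs, mean_shape, sigma):
    """For each observation point, the probability of corresponding to
    each mean-shape point: a Gaussian in squared distance, normalized
    over the mean shape (the E-step of an EM-ICP-style scheme)."""
    probs = []
    for x in obs:
        w = [
            math.exp(-sum((a - b) ** 2 for a, b in zip(x, m)) / (2 * sigma ** 2))
            for m in mean_shape
        ]
        z = sum(w)
        probs.append([wi / z for wi in w])  # rows sum to 1
    return probs
```

Each row is a probability distribution over mean-shape points, so unstructured point sets of different sizes can be compared without one-to-one matching.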
Modelling language evolution: Examples and predictions.
Gong, Tao; Shuai, Lan; Zhang, Menghan
2014-06-01
We survey recent computer modelling research of language evolution, focusing on a rule-based model simulating the lexicon-syntax coevolution and an equation-based model quantifying the language competition dynamics. We discuss four predictions of these models: (a) correlation between domain-general abilities (e.g. sequential learning) and language-specific mechanisms (e.g. word order processing); (b) coevolution of language and relevant competences (e.g. joint attention); (c) effects of cultural transmission and social structure on linguistic understandability; and (d) commonalities between linguistic, biological, and physical phenomena. All these contribute significantly to our understanding of the evolutions of language structures, individual learning mechanisms, and relevant biological and socio-cultural factors. We conclude the survey by highlighting three future directions of modelling studies of language evolution: (a) adopting experimental approaches for model evaluation; (b) consolidating empirical foundations of models; and (c) multi-disciplinary collaboration among modelling, linguistics, and other relevant disciplines.
Global Solar Dynamo Models: Simulations and Predictions
Indian Academy of Sciences (India)
Mausumi Dikpati; Peter A. Gilman
2008-03-01
Flux-transport type solar dynamos have achieved considerable success in correctly simulating many solar cycle features, and are now being used for prediction of solar cycle timing and amplitude. We first define flux-transport dynamos and demonstrate how they work. The essential added ingredient in this class of models is meridional circulation, which governs the dynamo period and also plays a crucial role in determining the Sun’s memory about its past magnetic fields. We show that flux-transport dynamo models can explain many key features of solar cycles. Then we show that a predictive tool can be built from this class of dynamo that can be used to predict mean solar cycle features by assimilating magnetic field data from previous cycles.
Model Predictive Control of Sewer Networks
Pedersen, Einar B.; Herbertsson, Hannes R.; Niemann, Henrik; Poulsen, Niels K.; Falk, Anne K. V.
2017-01-01
The developments in solutions for management of urban drainage are of vital importance, as the amount of sewer water from urban areas continues to increase due to the growth of the world’s population and the change in climate conditions. How a sewer network is structured, monitored and controlled has thus become an essential factor for efficient performance of waste water treatment plants. This paper examines methods for simplified modelling and control of a sewer network. A practical approach to the problem is used by analysing a simplified design model, which is based on the Barcelona benchmark model. Due to the inherent constraints, the applied approach is based on Model Predictive Control.
Mazon, D.; Liegeard, C.; Jardin, A.; Barnsley, R.; Walsh, M.; O'Mullane, M.; Sirinelli, A.; Dorchies, F.
2016-11-01
Measuring Soft X-Ray (SXR) radiation [0.1 keV; 15 keV] in tokamaks is a standard way of extracting valuable information on the particle transport and magnetohydrodynamic activity. Generally, the analysis is performed with detectors positioned close to the plasma for a direct line of sight. A burning plasma, like the ITER deuterium-tritium phase, is too harsh an environment to permit the use of such detectors in close vicinity of the machine. We have thus investigated in this article the possibility of using polycapillary lenses in ITER to transport the SXR information several meters away from the plasma in the complex port-plug geometry.
DKIST Polarization Modeling and Performance Predictions
Harrington, David
2016-05-01
Calibrating the Mueller matrices of large aperture telescopes and associated coude instrumentation requires astronomical sources and several modeling assumptions to predict the behavior of the system polarization with field of view, altitude, azimuth and wavelength. The Daniel K Inouye Solar Telescope (DKIST) polarimetric instrumentation requires very high accuracy calibration of a complex coude path with an off-axis f/2 primary mirror, time dependent optical configurations and substantial field of view. Polarization predictions across a diversity of optical configurations, tracking scenarios, slit geometries and vendor coating formulations are critical to both construction and continued operations efforts. Recent daytime sky based polarization calibrations of the 4m AEOS telescope and HiVIS spectropolarimeter on Haleakala have provided system Mueller matrices over full telescope articulation for a 15-reflection coude system. AEOS and HiVIS are a DKIST analog with a many-fold coude optical feed and similar mirror coatings creating 100% polarization cross-talk with altitude, azimuth and wavelength. Polarization modeling predictions using Zemax have successfully matched the altitude-azimuth-wavelength dependence on HiVIS with the few percent amplitude limitations of several instrument artifacts. Polarization predictions for coude beam paths depend greatly on modeling the angle-of-incidence dependences in powered optics and the mirror coating formulations. A 6 month HiVIS daytime sky calibration plan has been analyzed for accuracy under a wide range of sky conditions and data analysis algorithms. Predictions of polarimetric performance for the DKIST first-light instrumentation suite have been created under a range of configurations. These new modeling tools and polarization predictions have substantial impact for the design, fabrication and calibration process in the presence of manufacturing issues, science use-case requirements and ultimate system calibration.
Modelling Chemical Reasoning to Predict Reactions
Segler, Marwin H. S.; Waller, Mark P.
2016-01-01
The ability to reason beyond established knowledge allows Organic Chemists to solve synthetic problems and to invent novel transformations. Here, we propose a model which mimics chemical reasoning and formalises reaction prediction as finding missing links in a knowledge graph. We have constructed a knowledge graph containing 14.4 million molecules and 8.2 million binary reactions, which represents the bulk of all chemical reactions ever published in the scientific literature. Our model outpe...
Predictive Modeling of the CDRA 4BMS
Coker, Robert; Knox, James
2016-01-01
Fully predictive models of the Four Bed Molecular Sieve of the Carbon Dioxide Removal Assembly on the International Space Station are being developed. This virtual laboratory will be used to help reduce mass, power, and volume requirements for future missions. In this paper we describe current and planned modeling developments in the area of carbon dioxide removal to support future crewed Mars missions as well as the resolution of anomalies observed in the ISS CDRA.
Raman Model Predicting Hardness of Covalent Crystals
Zhou, Xiang-Feng; Qian, Quang-Rui; Sun, Jian; Tian, Yongjun; Wang, Hui-Tian
2009-01-01
Based on the fact that both hardness and vibrational Raman spectrum depend on the intrinsic property of chemical bonds, we propose a new theoretical model for predicting hardness of a covalent crystal. The quantitative relationship between hardness and vibrational Raman frequencies deduced from the typical zincblende covalent crystals is validated to be also applicable for the complex multicomponent crystals. This model enables us to nondestructively and indirectly characterize the hardness o...
Predictive Modelling of Mycotoxins in Cereals
Fels, van der H.J.; Liu, C.
2015-01-01
This article presents the summaries of the presentations given at the 30th meeting of the Werkgroep Fusarium. The topics are: Predictive Modelling of Mycotoxins in Cereals; Microbial degradation of DON; Exposure to green leaf volatiles primes wheat against FHB but boosts
Unreachable Setpoints in Model Predictive Control
DEFF Research Database (Denmark)
Rawlings, James B.; Bonné, Dennis; Jørgensen, John Bagterp
2008-01-01
steady state is established for terminal constraint model predictive control (MPC). The region of attraction is the steerable set. Existing analysis methods for closed-loop properties of MPC are not applicable to this new formulation, and a new analysis method is developed. It is shown how to extend...
Prediction modelling for population conviction data
Tollenaar, N.
2017-01-01
In this thesis, the possibilities of using prediction models for judicial penal case data are investigated. The development and refinement of a risk assessment scale based on these data is discussed. When false positives are weighted equally severe as false negatives, 70% can be classified correctly.
A Predictive Model for MSSW Student Success
Napier, Angela Michele
2011-01-01
This study tested a hypothetical model for predicting both graduate GPA and graduation of University of Louisville Kent School of Social Work Master of Science in Social Work (MSSW) students entering the program during the 2001-2005 school years. The preexisting characteristics of demographics, academic preparedness and culture shock along with…
Predictability of extreme values in geophysical models
Sterk, A.E.; Holland, M.P.; Rabassa, P.; Broer, H.W.; Vitolo, R.
2012-01-01
Extreme value theory in deterministic systems is concerned with unlikely large (or small) values of an observable evaluated along evolutions of the system. In this paper we study the finite-time predictability of extreme values, such as convection, energy, and wind speeds, in three geophysical models.
A revised prediction model for natural conception
Bensdorp, A.J.; Steeg, J.W. van der; Steures, P.; Habbema, J.D.; Hompes, P.G.; Bossuyt, P.M.; Veen, F. van der; Mol, B.W.; Eijkemans, M.J.; Kremer, J.A.M.; et al.,
2017-01-01
One of the aims in reproductive medicine is to differentiate between couples that have favourable chances of conceiving naturally and those that do not. Since the development of the prediction model of Hunault, characteristics of the subfertile population have changed. The objective of this analysis
Distributed Model Predictive Control via Dual Decomposition
DEFF Research Database (Denmark)
Biegel, Benjamin; Stoustrup, Jakob; Andersen, Palle
2014-01-01
This chapter presents dual decomposition as a means to coordinate a number of subsystems coupled by state and input constraints. Each subsystem is equipped with a local model predictive controller while a centralized entity manages the subsystems via prices associated with the coupling constraints...
Leptogenesis in minimal predictive seesaw models
Björkeroth, Fredrik; Varzielas, Ivo de Medeiros; King, Stephen F
2015-01-01
We estimate the Baryon Asymmetry of the Universe (BAU) arising from leptogenesis within a class of minimal predictive seesaw models involving two right-handed neutrinos and simple Yukawa structures with one texture zero. The two right-handed neutrinos are dominantly responsible for the "atmospheric" and "solar" neutrino masses with Yukawa couplings to $(\
Energy Technology Data Exchange (ETDEWEB)
Hata, Akinori; Yanagawa, Masahiro; Honda, Osamu; Gyobu, Tomoko; Ueda, Ken; Tomiyama, Noriyuki [Osaka University Graduate School of Medicine, Department of Diagnostic and Interventional Radiology, Suita, Osaka (Japan)
2016-12-15
To assess image quality of filtered back-projection (FBP) and model-based iterative reconstruction (MBIR) with a conventional setting and a new lung-specific setting on submillisievert CT. A lung phantom with artificial nodules was scanned with 10 mA at 120 kVp and 80 kVp (0.14 mSv and 0.05 mSv, respectively); images were reconstructed using FBP and MBIR with conventional setting (MBIR{sub Stnd}) and lung-specific settings (MBIR{sub RP20/Tx} and MBIR{sub RP20}). Three observers subjectively scored overall image quality and image findings on a 5-point scale (1 = worst, 5 = best) compared with reference standard images (50 mA-FBP at 120, 100, 80 kVp). Image noise was measured objectively. MBIR{sub RP20/Tx} performed significantly better than MBIR{sub Stnd} for overall image quality in 80-kVp images (p < 0.01), blurring of the border between lung and chest wall in 120-kVp images (p < 0.05) and the ventral area of 80-kVp images (p < 0.001), and clarity of small vessels in the ventral area of 80-kVp images (p = 0.037). At 120 kVp, 10 mA-MBIR{sub RP20} and 10 mA-MBIR{sub RP20/Tx} showed similar performance to 50 mA-FBP. MBIR{sub Stnd} was better for noise reduction. Except for blurring in 120 kVp-MBIR{sub Stnd}, MBIRs performed better than FBP. Although a conventional setting was advantageous in noise reduction, a lung-specific setting can provide more appropriate image quality, even on submillisievert CT. (orig.)
A two-dimensional iterative panel method and boundary layer model for bio-inspired multi-body wings
Blower, Christopher J.; Dhruv, Akash; Wickenheiser, Adam M.
2014-03-01
The increased use of Unmanned Aerial Vehicles (UAVs) has created a continuous demand for improved flight capabilities and range of use. During the last decade, engineers have turned to bio-inspiration for new and innovative flow control methods for gust alleviation, maneuverability, and stability improvement using morphing aircraft wings. The bio-inspired wing design considered in this study mimics the flow manipulation techniques performed by birds to extend the operating envelope of UAVs through the installation of an array of feather-like panels across the airfoil's upper and lower surfaces while replacing the trailing edge flap. Each flap has the ability to deflect into both the airfoil and the inbound airflow using hinge points with a single degree-of-freedom, situated at 20%, 40%, 60% and 80% of the chord. The installation of the surface flaps offers configurations that enable advantageous maneuvers while alleviating gust disturbances. Due to the number of possible permutations available for the flap configurations, an iterative constant-strength doublet/source panel method has been developed with an integrated boundary layer model to calculate the pressure distribution and viscous drag over the wing's surface. As a result, the lift, drag and moment coefficients for each airfoil configuration can be calculated. The flight coefficients of this numerical method are validated using experimental data from a low speed suction wind tunnel operating at a Reynolds Number 300,000. This method enables the aerodynamic assessment of a morphing wing profile to be performed accurately and efficiently in comparison to Computational Fluid Dynamics methods and experiments as discussed herein.
Specialized Language Models using Dialogue Predictions
Popovici, C; Popovici, Cosmin; Baggia, Paolo
1996-01-01
This paper analyses language modeling in spoken dialogue systems for accessing a database. The use of several language models obtained by exploiting dialogue predictions gives better results than the use of a single model for the whole dialogue interaction. For this reason several models have been created, each one for a specific system question, such as the request or the confirmation of a parameter. The use of dialogue-dependent language models increases the performance both at the recognition and at the understanding level, especially on answers to system requests. Moreover, other methods to increase performance, like automatic clustering of vocabulary words or the use of better acoustic models during recognition, do not affect the improvements given by dialogue-dependent language models. The system used in our experiments is Dialogos, the Italian spoken dialogue system used for accessing railway timetable information over the telephone. The experiments were carried out on a large corpus of dialogues coll...
Caries risk assessment models in caries prediction
Directory of Open Access Journals (Sweden)
Amila Zukanović
2013-11-01
Objective. The aim of this research was to assess the efficiency of different multifactor models in caries prediction. Material and methods. Data from the questionnaire and objective examination of 109 examinees were entered into the Cariogram, PreViser and Caries-Risk Assessment Tool (CAT) multifactor risk assessment models. Caries risk was assessed with the help of all three models for each patient, classifying them as low-, medium- or high-risk patients. The development of new caries lesions over a period of three years [Decay Missing Filled Tooth (DMFT) increment = difference between the Decay Missing Filled Tooth Surface (DMFTS) index at baseline and follow-up] provided for examination of the predictive capacity of the different multifactor models. Results. The data gathered showed that the different multifactor risk assessment models give significantly different results (Friedman test: Chi square = 100.073, p=0.000). The Cariogram was the model which identified the majority of examinees as medium-risk patients (70%). The other two models were more radical in risk assessment, giving more unfavorable risk profiles for patients. In only 12% of the patients did the three multifactor models assess the risk in the same way. PreViser and CAT gave the same results in 63% of cases; the Wilcoxon test showed that there is no statistically significant difference in caries risk assessment between these two models (Z = -1.805, p=0.071). Conclusions. Evaluation of three different multifactor caries risk assessment models (Cariogram, PreViser and CAT) showed that only the Cariogram can successfully predict new caries development in 12-year-old Bosnian children.
Disease prediction models and operational readiness.
Directory of Open Access Journals (Sweden)
Courtney D Corley
The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. We define a disease event to be a biological event with focus on the One Health paradigm. These events are characterized by evidence of infection and/or disease condition. We reviewed models that attempted to predict a disease event, not merely its transmission dynamics, and we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). We searched commercial and government databases and harvested Google search results for eligible models, using terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. After removal of duplications and extraneous material, a core collection of 6,524 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. As a result, we systematically reviewed 44 papers, and the results are presented in this analysis. We identified 44 models, classified as one or more of the following: event prediction (4), spatial (26), ecological niche (28), diagnostic or clinical (6), spread or response (9), and reviews (3). The model parameters (e.g., etiology, climatic, spatial, cultural) and data sources (e.g., remote sensing, non-governmental organizations, expert opinion, epidemiological) were recorded and reviewed. A component of this review is the identification of verification and validation (V&V) methods applied to each model, if any V&V method was reported. All models were classified as either having undergone some verification or validation method, or no verification or validation. We close by outlining an initial set of operational readiness level guidelines for disease prediction models based upon established Technology
Model Predictive Control based on Finite Impulse Response Models
DEFF Research Database (Denmark)
Prasath, Guru; Jørgensen, John Bagterp
2008-01-01
We develop a regularized l2 finite impulse response (FIR) predictive controller with input and input-rate constraints. Feedback is based on a simple constant output disturbance filter. The performance of the predictive controller in the face of plant-model mismatch is investigated by simulations...
Rolando, G.; Hansheng, F.; Hongwei, L.; Lin, W.; Wu, W.; Foussat, A.; Ilin, Y.; Libeyre, P.; Nijhuis, A.
2014-01-01
The ITER correction coils (CC) system features shaking hands lap-type joints to interface the terminations of the conductors. The feasibility of operating plasma scenarios depends on the ability of the magnets to retain sufficient temperature and current margins. In this respect, the joints represen
ENSO Prediction using Vector Autoregressive Models
Chapman, D. R.; Cane, M. A.; Henderson, N.; Lee, D.; Chen, C.
2013-12-01
A recent comparison (Barnston et al., 2012, BAMS) shows the ENSO forecasting skill of dynamical models now exceeds that of statistical models, but the best statistical models are comparable to all but the very best dynamical models. In this comparison the leading statistical model is the one based on the Empirical Model Reduction (EMR) method. Here we report on experiments with multilevel Vector Autoregressive models using only sea surface temperatures (SSTs) as predictors. VAR(L) models generalize Linear Inverse Models (LIM), which are a VAR(1) method, as well as multilevel univariate autoregressive models. Optimal forecast skill is achieved using 12 to 14 months of prior state information (i.e., 12-14 levels), which allows SSTs alone to capture the effects of other variables such as heat content as well as seasonality. The use of multiple levels allows the model advancing one month at a time to perform at least as well for a 6 month forecast as a model constructed to explicitly forecast 6 months ahead. We infer that the multilevel model has fully captured the linear dynamics (cf. Penland and Magorian, 1993, J. Climate). Finally, while VAR(L) is equivalent to L-level EMR, we show in a 150 year cross validated assessment that we can increase forecast skill by improving on the EMR initialization procedure. The greatest benefit of this change is in allowing the prediction to make effective use of information over many more months.
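The VAR(L) construction the abstract describes, regressing the current state on L lagged copies of itself and then stepping the fitted model forward one month at a time, can be sketched by least squares. The code below is an illustrative toy; the function names and the use of synthetic data are my own, not taken from the paper:

```python
import numpy as np

def fit_var(X, L):
    """Least-squares fit of x_t = A_1 x_{t-1} + ... + A_L x_{t-L} + e_t.
    X is a (T, n) array of state vectors (e.g. gridded SST anomalies)."""
    T, n = X.shape
    # Regressor: L lagged copies of the state, newest lag first.
    Z = np.hstack([X[L - k - 1:T - k - 1] for k in range(L)])  # (T-L, n*L)
    Y = X[L:]
    A, *_ = np.linalg.lstsq(Z, Y, rcond=None)                  # (n*L, n)
    return [A[k * n:(k + 1) * n].T for k in range(L)]          # A_1..A_L

def forecast(X, coeffs, steps):
    """Advance the fitted VAR one step at a time for a multi-step forecast."""
    hist = [X[-1 - k] for k in range(len(coeffs))]  # x_t, x_{t-1}, ...
    for _ in range(steps):
        x_next = sum(A @ h for A, h in zip(coeffs, hist))
        hist = [x_next] + hist[:-1]
    return hist[0]
```

With L = 1 this reduces to the LIM / VAR(1) case mentioned in the abstract; larger L lets lagged SSTs stand in for unobserved variables such as heat content.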
Gas explosion prediction using CFD models
Energy Technology Data Exchange (ETDEWEB)
Niemann-Delius, C.; Okafor, E. [RWTH Aachen Univ. (Germany); Buhrow, C. [TU Bergakademie Freiberg Univ. (Germany)
2006-07-15
A number of CFD models are currently available to model gaseous explosions in complex geometries. Some of these tools allow the representation of complex environments within hydrocarbon production plants. In certain explosion scenarios, a correction is usually made for the presence of buildings and other complexities by using crude approximations to obtain realistic estimates of explosion behaviour as can be found when predicting the strength of blast waves resulting from initial explosions. With the advance of computational technology, and greater availability of computing power, computational fluid dynamics (CFD) tools are becoming increasingly available for solving such a wide range of explosion problems. A CFD-based explosion code - FLACS can, for instance, be confidently used to understand the impact of blast overpressures in a plant environment consisting of obstacles such as buildings, structures, and pipes. With its porosity concept representing geometry details smaller than the grid, FLACS can represent geometry well, even when using coarse grid resolutions. The performance of FLACS has been evaluated using a wide range of field data. In the present paper, the concept of computational fluid dynamics (CFD) and its application to gas explosion prediction is presented. Furthermore, the predictive capabilities of CFD-based gaseous explosion simulators are demonstrated using FLACS. Details about the FLACS-code, some extensions made to FLACS, model validation exercises, application, and some results from blast load prediction within an industrial facility are presented. (orig.)
Genetic models of homosexuality: generating testable predictions.
Gavrilets, Sergey; Rice, William R
2006-12-22
Homosexuality is a common occurrence in humans and other species, yet its genetic and evolutionary basis is poorly understood. Here, we formulate and study a series of simple mathematical models for the purpose of predicting empirical patterns that can be used to determine the form of selection that leads to polymorphism of genes influencing homosexuality. Specifically, we develop theory to make contrasting predictions about the genetic characteristics of genes influencing homosexuality including: (i) chromosomal location, (ii) dominance among segregating alleles and (iii) effect sizes that distinguish between the two major models for their polymorphism: the overdominance and sexual antagonism models. We conclude that the measurement of the genetic characteristics of quantitative trait loci (QTLs) found in genomic screens for genes influencing homosexuality can be highly informative in resolving the form of natural selection maintaining their polymorphism.
Characterizing Attention with Predictive Network Models.
Rosenberg, M D; Finn, E S; Scheinost, D; Constable, R T; Chun, M M
2017-04-01
Recent work shows that models based on functional connectivity in large-scale brain networks can predict individuals' attentional abilities. While being some of the first generalizable neuromarkers of cognitive function, these models also inform our basic understanding of attention, providing empirical evidence that: (i) attention is a network property of brain computation; (ii) the functional architecture that underlies attention can be measured while people are not engaged in any explicit task; and (iii) this architecture supports a general attentional ability that is common to several laboratory-based tasks and is impaired in attention deficit hyperactivity disorder (ADHD). Looking ahead, connectivity-based predictive models of attention and other cognitive abilities and behaviors may potentially improve the assessment, diagnosis, and treatment of clinical dysfunction. Copyright © 2017 Elsevier Ltd. All rights reserved.
A Study On Distributed Model Predictive Consensus
Keviczky, Tamas
2008-01-01
We investigate convergence properties of a proposed distributed model predictive control (DMPC) scheme, where agents negotiate to compute an optimal consensus point using an incremental subgradient method based on primal decomposition as described in Johansson et al. [2006, 2007]. The objective of the distributed control strategy is to agree upon and achieve an optimal common output value for a group of agents in the presence of constraints on the agent dynamics using local predictive controllers. Stability analysis using a receding horizon implementation of the distributed optimal consensus scheme is performed. Conditions are given under which convergence can be obtained even if the negotiations do not reach full consensus.
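The negotiation step described above, agents computing an optimal consensus point via an incremental subgradient method, can be illustrated on a toy problem with scalar quadratic local costs. This is a hedged sketch of the general idea, not the authors' algorithm (which handles constrained agent dynamics and receding-horizon control):

```python
def consensus_point(costs, theta0=0.0, passes=200, step=0.05):
    """Incremental gradient negotiation of a common output value theta.

    Each agent i holds a private quadratic cost f_i(theta) = a_i*(theta - b_i)**2
    and nudges the shared variable in turn; with a small constant step the
    iterate settles near the minimizer of sum_i f_i (a small residual
    oscillation remains, as is typical for incremental methods)."""
    theta = theta0
    for _ in range(passes):
        for a, b in costs:
            theta -= step * 2.0 * a * (theta - b)
    return theta
```

For two agents pulling toward 0 and 2 with equal weights, the negotiated value settles near 1, the minimizer of the summed cost.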
NONLINEAR MODEL PREDICTIVE CONTROL OF CHEMICAL PROCESSES
Directory of Open Access Journals (Sweden)
R. G. SILVA
1999-03-01
A new algorithm for model predictive control is presented. The algorithm utilizes a simultaneous solution and optimization strategy to solve the model's differential equations. The equations are discretized by equidistant collocation and, along with the algebraic model equations, are included as constraints in a nonlinear programming (NLP) problem. This algorithm is compared with the algorithm that uses orthogonal collocation on finite elements. The equidistant collocation algorithm results in simpler equations, providing a decrease in computation time for the control moves. Simulation results are presented and show a satisfactory performance of this algorithm.
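The discretization idea, grid-point states become decision variables and the discretized ODE enters as equality constraints, can be shown in miniature on the linear test equation dx/dt = -x. This is my own toy example using a trapezoidal residual on an equidistant grid; the paper embeds such constraints in a full NLP alongside the control moves:

```python
import numpy as np

def collocate_linear_decay(n=50, T=1.0, x0=1.0):
    """Equidistant collocation of dx/dt = -x on [0, T]: the state at each of
    n grid points is a decision variable, and each interval contributes one
    trapezoidal collocation residual, mimicking how the discretized ODE
    enters the NLP as equality constraints. Linear here, so a single solve."""
    h = T / (n - 1)
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0] = 1.0
    b[0] = x0  # initial condition as a constraint
    for i in range(n - 1):
        # residual: x_{i+1} - x_i - (h/2)*(f(x_i) + f(x_{i+1})) = 0, f(x) = -x
        A[i + 1, i] = -1.0 + h / 2.0
        A[i + 1, i + 1] = 1.0 + h / 2.0
    return np.linalg.solve(A, b)
```

The solved trajectory tracks the exact solution exp(-t) to second order in the grid spacing.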
Performance model to predict overall defect density
Directory of Open Access Journals (Sweden)
J Venkatesh
2012-08-01
Management by metrics is the expectation from IT service providers to stay as a differentiator. Given a project and its associated parameters and dynamics, the behaviour and outcome need to be predicted. There is a lot of focus on the end state and on minimizing defect leakage as much as possible. In most cases, the actions taken are reactive, too late in the life cycle. Root cause analysis and corrective actions can be implemented only to the benefit of the next project. The focus has to shift left, towards the execution phase, rather than waiting for lessons to be learnt after implementation. How do we proactively predict defect metrics and put a preventive action plan in place? This paper illustrates a process performance model to predict overall defect density based on data from projects in an organization.
Neuro-fuzzy modeling in bankruptcy prediction
Directory of Open Access Journals (Sweden)
Vlachos D.
2003-01-01
For the past 30 years the problem of bankruptcy prediction has been thoroughly studied. From the paper of Altman in 1968 to the recent papers in the '90s, the progress in prediction accuracy was not satisfactory. This paper investigates an alternative modeling of the system (firm), combining neural networks and fuzzy controllers, i.e. using neuro-fuzzy models. Classical modeling is based on mathematical models that describe the behavior of the firm under consideration. The main idea of fuzzy control, on the other hand, is to build a model of a human control expert who is capable of controlling the process without thinking in terms of a mathematical model. This control expert specifies his control action in the form of linguistic rules. These control rules are translated into the framework of fuzzy set theory, providing a calculus which can simulate the behavior of the control expert and enhance its performance. The accuracy of the model is studied using datasets from previous research papers.
Pressure prediction model for compression garment design.
Leung, W Y; Yuen, D W; Ng, Sun Pui; Shi, S Q
2010-01-01
Based on the application of Laplace's law to compression garments, an equation for predicting garment pressure, incorporating the body circumference, the cross-sectional area of fabric, applied strain (as a function of reduction factor), and its corresponding Young's modulus, is developed. Design procedures are presented to predict garment pressure using the aforementioned parameters for clinical applications. Compression garments have been widely used in treating burning scars. Fabricating a compression garment with a required pressure is important in the healing process. A systematic and scientific design method can enable the occupational therapist and compression garments' manufacturer to custom-make a compression garment with a specific pressure. The objectives of this study are 1) to develop a pressure prediction model incorporating different design factors to estimate the pressure exerted by the compression garments before fabrication; and 2) to propose more design procedures in clinical applications. Three kinds of fabrics cut at different bias angles were tested under uniaxial tension, as were samples made in a double-layered structure. Sets of nonlinear force-extension data were obtained for calculating the predicted pressure. Using the value at 0° bias angle as reference, the Young's modulus can vary by as much as 29% for fabric type P11117, 43% for fabric type PN2170, and even 360% for fabric type AP85120 at a reduction factor of 20%. When comparing the predicted pressure calculated from the single-layered and double-layered fabrics, the double-layered construction provides a larger range of target pressure at a particular strain. The anisotropic and nonlinear behaviors of the fabrics have thus been determined. Compression garments can be methodically designed by the proposed analytical pressure prediction model.
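The prediction equation the abstract describes can be sketched from Laplace's law. The formula below (tension per unit width T = E·ε·A/w, limb radius r = C/2π, pressure P = T/r) is one plausible reading of the stated ingredients, not the authors' exact model; all parameter names are my own:

```python
import math

def garment_pressure(youngs_modulus_pa, strain, fabric_area_m2,
                     fabric_width_m, body_circumference_m):
    """Laplace's law estimate P = T / r of garment interface pressure (Pa).
    Tension per unit width T = E * strain * (A / w), with A the fabric
    cross-sectional area and w the garment width; the limb is idealized
    as a cylinder of radius r = C / (2*pi). Illustrative reading only."""
    tension_per_width = youngs_modulus_pa * strain * fabric_area_m2 / fabric_width_m
    radius = body_circumference_m / (2.0 * math.pi)
    return tension_per_width / radius
```

The form makes the abstract's design levers explicit: pressure scales linearly with the strain (set by the reduction factor) and the bias-angle-dependent Young's modulus, and inversely with the body circumference.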
de Oliveira, Tiago E; Netz, Paulo A; Kremer, Kurt; Junghans, Christoph; Mukherji, Debashish
2016-05-01
We present a coarse-graining strategy that we test for aqueous mixtures. The method uses pair-wise cumulative coordination as a target function within an iterative Boltzmann inversion (IBI) like protocol. We name this method coordination iterative Boltzmann inversion (C-IBI). While the underlying coarse-grained model is still structure based and, thus, preserves pair-wise solution structure, our method also reproduces solvation thermodynamics of binary and/or ternary mixtures. Additionally, we observe much faster convergence within C-IBI compared to IBI. To validate the robustness, we apply C-IBI to study test cases of solvation thermodynamics of aqueous urea and a triglycine solvation in aqueous urea.
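The IBI step that C-IBI builds on can be illustrated with a minimal sketch. This is the generic IBI update in units of kT; the paper's C-IBI variant targets the cumulative coordination number rather than g(r), which is not reproduced here.

```python
import math

def ibi_update(V, g_current, g_target, alpha=1.0, kT=1.0):
    """One iterative Boltzmann inversion step on a tabulated potential:
    V_{i+1}(r) = V_i(r) + alpha * kT * ln(g_i(r) / g_target(r)).
    Where the current structure is over-coordinated (g_i > g_target),
    the potential becomes more repulsive, and vice versa."""
    return [v + alpha * kT * math.log(gc / gt)
            for v, gc, gt in zip(V, g_current, g_target)]
```

In practice the update is repeated until the coarse-grained g(r) (or, in C-IBI, the cumulative coordination) matches the atomistic target within tolerance.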
Directory of Open Access Journals (Sweden)
Minal Patel
2016-01-01
Service can be delivered anywhere and anytime in cloud computing using virtualization. The main issue in handling virtualized resources is balancing ongoing workloads. The migration of virtual machines relies on two major techniques: (i) reducing dirty pages using CPU scheduling and (ii) compressing memory pages. The available techniques for live migration are not able to predict dirty pages in advance. In the proposed framework, time-series prediction techniques are developed using historical analysis of past data. The time series is generated as memory pages are transferred iteratively. Two different regression-based time-series models are proposed. The first is a statistical probability-based regression model built on ARIMA (autoregressive integrated moving average); the second is a statistical learning-based regression model that uses SVR (support vector regression). These models are tested on a real Xen dataset to compute downtime, total number of pages transferred, and total migration time. The ARIMA model predicts dirty pages with 91.74% accuracy, and the SVR model with 94.61% accuracy, which is higher than that of the ARIMA model.
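The regression idea behind the first model can be sketched with a one-step AR(1) forecast. This toy stand-in fits y_t = a*y_{t-1} + b by least squares and is far simpler than a full ARIMA or SVR fit; the page counts below are invented.

```python
def ar1_forecast(series):
    """Least-squares AR(1) fit, y_t = a*y_{t-1} + b, followed by a
    one-step-ahead forecast (e.g., pages dirtied in the next pre-copy
    migration round). `series` is a list of past observations."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    b = my - a * mx
    return a * series[-1] + b

# Hypothetical dirty-page counts over successive migration iterations.
next_pages = ar1_forecast([1200.0, 960.0, 780.0, 640.0, 530.0])
```

A migration controller could use such a forecast to decide when the remaining dirty-page set is small enough for the stop-and-copy phase.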
Statistical assessment of predictive modeling uncertainty
Barzaghi, Riccardo; Marotta, Anna Maria
2017-04-01
When the results of geophysical models are compared with data, the uncertainties of the model are typically disregarded. We propose a method for defining the uncertainty of a geophysical model based on a numerical procedure that estimates the empirical auto- and cross-covariances of model-estimated quantities. These empirical values are then fitted by proper covariance functions and used to compute the covariance matrix associated with the model predictions. The method is tested using a geophysical finite element model in the Mediterranean region. Using a novel χ2 analysis in which both data and model uncertainties are taken into account, the model's estimated tectonic strain pattern due to the Africa-Eurasia convergence in the area extending from the Calabrian Arc to the Alpine domain is compared with that estimated from GPS velocities, taking into account the model uncertainty through its covariance structure and the covariance of the GPS estimates. The results indicate that including the estimated model covariance in the testing procedure leads to lower observed χ2 values with better statistical significance, and may help identify the best-fitting geophysical models more sharply.
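The proposed test statistic can be sketched for a two-component residual. The covariance functions themselves are not reproduced; the matrices below are placeholders for the fitted model and GPS covariances.

```python
def chi2_with_model_cov(residual, C_data, C_model):
    """chi^2 = r^T (C_data + C_model)^(-1) r for a 2-vector residual r
    (model-minus-observed components). Adding the model covariance
    inflates the total covariance and therefore lowers chi^2."""
    C = [[C_data[i][j] + C_model[i][j] for j in range(2)] for i in range(2)]
    det = C[0][0] * C[1][1] - C[0][1] * C[1][0]
    inv = [[C[1][1] / det, -C[0][1] / det],
           [-C[1][0] / det, C[0][0] / det]]
    r = residual
    return sum(r[i] * inv[i][j] * r[j] for i in range(2) for j in range(2))
```

With C_model set to zero this reduces to the conventional test that disregards model uncertainty, which is exactly the comparison the abstract describes.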
Energy Technology Data Exchange (ETDEWEB)
Hogan, J.T.; Hillis, D.L.; Galambos, J.; Uckan, N.A. (Oak Ridge National Lab., TN (USA)); Dippel, K.H.; Finken, K.H. (Forschungszentrum Juelich GmbH (Germany, F.R.). Inst. fuer Plasmaphysik); Hulse, R.A.; Budny, R.V. (Princeton Univ., NJ (USA). Plasma Physics Lab.)
1990-01-01
Many studies have shown the importance of the ratio τ_He/τ_E in determining the level of He ash accumulation in future reactor systems. Results of the first tokamak He removal experiments have been analysed, and a first estimate of the ratio τ_He/τ_E to be expected for future reactor systems has been made. The experiments were carried out for neutral-beam-heated plasmas in the TEXTOR tokamak at KFA Jülich. Helium was injected both as a short puff and continuously, and subsequently extracted with the Advanced Limiter Test-II (ALT-II) pump limiter. The rate at which the He density decays has been determined with absolutely calibrated charge exchange spectroscopy and compared with theoretical models using the Multiple Impurity Species Transport (MIST) code. An analysis of energy confinement has been made with the PPPL TRANSP code to distinguish beam from thermal confinement, especially for low density cases. The ALT-II pump limiter system is found to exhaust the He with a maximum exhaust efficiency (8 pumps) of approximately 8%. We find 1 < τ_He/τ_E < 3.3 for the database of cases analysed to date. Analysis with the ITER TETRA systems code shows that these values would be adequate to achieve the required He concentration with the present ITER divertor He extraction system.
Seasonal Predictability in a Model Atmosphere.
Lin, Hai
2001-07-01
The predictability of atmospheric mean-seasonal conditions in the absence of externally varying forcing is examined. A perfect-model approach is adopted, in which a global T21 three-level quasigeostrophic atmospheric model is integrated over 21 000 days to obtain a reference atmospheric orbit. The model is driven by a time-independent forcing, so that the only source of time variability is the internal dynamics. The forcing is set to perpetual winter conditions in the Northern Hemisphere (NH) and perpetual summer in the Southern Hemisphere. A significant temporal variability in the NH 90-day mean states is observed. The component of that variability associated with the higher-frequency motions, or climate noise, is estimated using a method developed by Madden. In the polar region, and to a lesser extent in the midlatitudes, the temporal variance of the winter means is significantly greater than the climate noise, suggesting some potential predictability in those regions. Forecast experiments are performed to see whether the presence of variance in the 90-day mean states that is in excess of the climate noise leads to some skill in the prediction of these states. Ensemble forecast experiments with nine members starting from slightly different initial conditions are performed for 200 different 90-day means along the reference atmospheric orbit. The serial correlation between the ensemble means and the reference orbit shows that there is skill in the 90-day mean predictions. The skill is concentrated in those regions of the NH that have the largest variance in excess of the climate noise. An EOF analysis shows that nearly all the predictive skill in the seasonal means is associated with one mode of variability with a strong axisymmetric component.
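The comparison of seasonal-mean variance against climate noise can be illustrated with a crude ratio. Madden's method additionally corrects for the autocorrelation of the daily data, which this sketch omits by assuming white noise.

```python
def mean_variance_ratio(series, season=90):
    """Compare the variance of non-overlapping 90-day means with the
    white-noise expectation var(daily)/90. A ratio well above 1 suggests
    low-frequency variance in excess of climate noise, i.e. potential
    predictability (crude stand-in for Madden's estimate)."""
    n = len(series) // season
    means = [sum(series[i * season:(i + 1) * season]) / season for i in range(n)]
    grand = sum(means) / n
    var_means = sum((m - grand) ** 2 for m in means) / (n - 1)
    mu = sum(series) / len(series)
    var_daily = sum((x - mu) ** 2 for x in series) / (len(series) - 1)
    return var_means / (var_daily / season)
```

Applied to a series whose 90-day blocks sit at different levels, the ratio is large; for fast day-to-day oscillations the seasonal means barely vary and the ratio collapses.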
A kinetic model for predicting biodegradation.
Dimitrov, S; Pavlov, T; Nedelcheva, D; Reuschenbach, P; Silvani, M; Bias, R; Comber, M; Low, L; Lee, C; Parkerton, T; Mekenyan, O
2007-01-01
Biodegradation plays a key role in the environmental risk assessment of organic chemicals. The need to assess the biodegradability of a chemical for regulatory purposes supports the development of a model for predicting the extent of biodegradation at different time frames, in particular the extent of ultimate biodegradation within a '10-day window' criterion, as well as estimating biodegradation half-lives. Conceptually this implies expressing the rate of catabolic transformations as a function of time. An attempt to correlate the kinetics of biodegradation with the molecular structure of chemicals is presented. A simplified biodegradation kinetic model was formulated by combining the probabilistic approach of the original formulation of the CATABOL model with the assumption of first-order kinetics of catabolic transformations. Nonlinear regression analysis was used to fit the model parameters to OECD 301F biodegradation kinetic data for a set of 208 chemicals. The new model allows the prediction of biodegradation multi-pathways, primary and ultimate half-lives, and the simulation of related kinetic biodegradation parameters such as biological oxygen demand (BOD), carbon dioxide production, and the nature and amount of metabolites as a function of time. The model may also be used for evaluating the OECD ready biodegradability potential of a chemical within the '10-day window' criterion.
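Under the stated first-order assumption, the extent of ultimate degradation and the half-life follow directly. The 60% pass level below reflects the OECD ready-biodegradability threshold for manometric tests; as a simplification, the window here is counted from the start of the test rather than from the 10% degradation level, and the rate constants are hypothetical.

```python
import math

def extent(t_days, k_per_day):
    """Extent of ultimate biodegradation under first-order kinetics:
    extent(t) = 1 - exp(-k * t)."""
    return 1.0 - math.exp(-k_per_day * t_days)

def half_life(k_per_day):
    """Biodegradation half-life: t_1/2 = ln(2) / k."""
    return math.log(2.0) / k_per_day

def passes_10_day_window(k_per_day, threshold=0.60):
    """Simplified ready-biodegradability check: >= 60% ultimate
    degradation within 10 days (window counted from test start)."""
    return extent(10.0, k_per_day) >= threshold
```

For example, a chemical with k = 0.2/day degrades about 86% in 10 days and passes, while k = 0.05/day gives only about 39% and fails.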
Disease Prediction Models and Operational Readiness
Energy Technology Data Exchange (ETDEWEB)
Corley, Courtney D.; Pullum, Laura L.; Hartley, David M.; Benedum, Corey M.; Noonan, Christine F.; Rabinowitz, Peter M.; Lancaster, Mary J.
2014-03-19
INTRODUCTION: The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. One of the primary goals of this research was to characterize the viability of biosurveillance models to provide operationally relevant information for decision makers and to identify areas for future research. Two critical characteristics differentiate this work from other infectious disease modeling reviews. First, we reviewed models that attempted to predict the disease event, not merely its transmission dynamics. Second, we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). METHODS: We searched dozens of commercial and government databases and harvested Google search results for eligible models, using terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. The publication dates of the returned search results are bounded by the dates of coverage of each database and the date on which the search was performed; however, all searching was completed by December 31, 2010. This returned 13,767 webpages and 12,152 citations. After de-duplication and removal of extraneous material, a core collection of 6,503 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. Next, PNNL's IN-SPIRE visual analytics software was used to cross-correlate these publications with the definition of a biosurveillance model, resulting in the selection of 54 documents that matched the criteria. Ten of these documents, however, dealt purely with disease spread models, inactivation of bacteria, or the modeling of human immune system responses to pathogens rather than predicting disease events. As a result, we systematically reviewed 44 papers and the
Nonlinear model predictive control theory and algorithms
Grüne, Lars
2017-01-01
This book offers readers a thorough and rigorous introduction to nonlinear model predictive control (NMPC) for discrete-time and sampled-data systems. NMPC schemes with and without stabilizing terminal constraints are detailed, and intuitive examples illustrate the performance of different NMPC variants. NMPC is interpreted as an approximation of infinite-horizon optimal control so that important properties like closed-loop stability, inverse optimality and suboptimality can be derived in a uniform manner. These results are complemented by discussions of feasibility and robustness. An introduction to nonlinear optimal control algorithms yields essential insights into how the nonlinear optimization routine—the core of any nonlinear model predictive controller—works. Accompanying software in MATLAB® and C++ (downloadable from extras.springer.com/), together with an explanatory appendix in the book itself, enables readers to perform computer experiments exploring the possibilities and limitations of NMPC. T...
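The receding-horizon principle at the heart of NMPC can be shown with a deliberately tiny example. This is not the book's accompanying MATLAB/C++ software: the sketch uses scalar integrator dynamics and brute-force search over a coarse control grid in place of a nonlinear optimizer.

```python
import itertools

def nmpc_step(x, horizon=3, candidates=(-1.0, -0.5, 0.0, 0.5, 1.0)):
    """One receding-horizon step for x+ = x + u with stage cost
    x^2 + u^2 and terminal cost x^2: enumerate every control sequence
    over the horizon, keep only the first move, then re-plan."""
    def cost(x0, seq):
        xk, c = x0, 0.0
        for u in seq:
            c += xk * xk + u * u
            xk += u
        return c + xk * xk  # terminal penalty
    best = min(itertools.product(candidates, repeat=horizon),
               key=lambda s: cost(x, s))
    return best[0]

# Closed loop: re-optimize at every step, apply only the first control.
x = 2.0
for _ in range(10):
    x += nmpc_step(x)
```

Even without terminal constraints, the closed loop drives the state to the origin here, which mirrors the stability results the book derives for suitable horizons and costs.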
Predictive Modeling in Actinide Chemistry and Catalysis
Energy Technology Data Exchange (ETDEWEB)
Yang, Ping [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-05-16
These are slides from a presentation on predictive modeling in actinide chemistry and catalysis. The following topics are covered in these slides: Structures, bonding, and reactivity (bonding can be quantified by optical probes and theory, and electronic structures and reaction mechanisms of actinide complexes); Magnetic resonance properties (transition metal catalysts with multi-nuclear centers, and NMR/EPR parameters); Moving to more complex systems (surface chemistry of nanomaterials, and interactions of ligands with nanoparticles); Path forward and conclusions.
Gang, Grace J.; Siewerdsen, Jeffrey H.; Webster Stayman, J.
2017-06-01
Tube current modulation (TCM) is routinely adopted on diagnostic CT scanners for dose reduction. Conventional TCM strategies are generally designed for filtered-backprojection (FBP) reconstruction to satisfy simple image quality requirements based on noise. This work investigates TCM designs for model-based iterative reconstruction (MBIR) to achieve optimal imaging performance as determined by a task-based image quality metric. Additionally, regularization is an important aspect of MBIR that is jointly optimized with TCM; it includes both the regularization strength, which controls overall smoothness, and directional weights, which permit control of the isotropy/anisotropy of the local noise and resolution properties. Initial investigations focus on a known imaging task at a single location in the image volume. The framework adopts Fourier and analytical approximations for fast estimation of the local noise power spectrum (NPS) and modulation transfer function (MTF), each carrying dependencies on TCM and regularization. For the single-location optimization, the local detectability index (d') of the specific task was directly adopted as the objective function. A covariance matrix adaptation evolution strategy (CMA-ES) algorithm was employed to identify the optimal combination of imaging parameters. Evaluations of both conventional and task-driven approaches were performed in an abdomen phantom for a mid-frequency discrimination task in the kidney. Among the conventional strategies, the TCM pattern optimal for FBP under a minimum-variance criterion yielded worse task-based performance than an unmodulated strategy when applied to MBIR. Moreover, task-driven TCM designs for MBIR were found to have the opposite behavior from conventional designs for FBP, with greater fluence assigned to the less attenuating views of the abdomen and less fluence to the more attenuating lateral views. Such TCM patterns exaggerate the intrinsic anisotropy of the MTF and NPS.
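The task-based objective can be illustrated with a one-dimensional prewhitening-observer detectability index. The actual work uses local 2-D/3-D NPS and MTF estimates carrying TCM and regularization dependencies, which this sketch does not model; the tabulated functions below are placeholders.

```python
def detectability_index(freqs, task, mtf, nps):
    """1-D prewhitening-observer detectability on a uniform frequency
    grid: d'^2 = sum( |W(f)|^2 * MTF(f)^2 / NPS(f) ) * df, where W is
    the task function. Returns d' (simplified sketch)."""
    df = freqs[1] - freqs[0]
    d2 = sum(w * w * m * m / n for w, m, n in zip(task, mtf, nps)) * df
    return d2 ** 0.5
```

An optimizer such as CMA-ES would then search over TCM and regularization parameters to maximize d' for the imaging task of interest.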
Probabilistic prediction models for aggregate quarry siting
Robinson, G.R.; Larkins, P.M.
2007-01-01
Weights-of-evidence (WofE) and logistic regression techniques were used in a GIS framework to predict the spatial likelihood (prospectivity) of crushed-stone aggregate quarry development. The joint conditional probability models, based on geology, transportation network, and population density variables, were defined using quarry location and time of development data for the New England States, North Carolina, and South Carolina, USA. The Quarry Operation models describe the distribution of active aggregate quarries, independent of the date of opening. The New Quarry models describe the distribution of aggregate quarries when they open. Because of the small number of new quarries developed in the study areas during the last decade, independent New Quarry models have low parameter estimate reliability. The performance of parameter estimates derived for Quarry Operation models, defined by a larger number of active quarries in the study areas, were tested and evaluated to predict the spatial likelihood of new quarry development. Population density conditions at the time of new quarry development were used to modify the population density variable in the Quarry Operation models to apply to new quarry development sites. The Quarry Operation parameters derived for the New England study area, Carolina study area, and the combined New England and Carolina study areas were all similar in magnitude and relative strength. The Quarry Operation model parameters, using the modified population density variables, were found to be a good predictor of new quarry locations. Both the aggregate industry and the land management community can use the model approach to target areas for more detailed site evaluation for quarry location. The models can be revised easily to reflect actual or anticipated changes in transportation and population features. © International Association for Mathematical Geology 2007.
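The weights-of-evidence combination rule can be sketched as log-odds updating over binary evidence layers. The layer weights below are placeholders, not values estimated in the study.

```python
import math

def weights_of_evidence(prior_prob, present_flags, w_plus, w_minus):
    """Posterior probability from WofE: start at the prior log-odds and
    add W+ where the evidence layer (e.g., favorable geology, proximity
    to roads) is present, or W- where it is absent."""
    logit = math.log(prior_prob / (1.0 - prior_prob))
    for flag, wp, wm in zip(present_flags, w_plus, w_minus):
        logit += wp if flag else wm
    return 1.0 / (1.0 + math.exp(-logit))
```

Mapping this posterior over a grid of cells yields the prospectivity surface that the abstract describes; logistic regression replaces the additive weights with fitted coefficients.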
Predicting Footbridge Response using Stochastic Load Models
DEFF Research Database (Denmark)
Pedersen, Lars; Frier, Christian
2013-01-01
Walking parameters such as step frequency, pedestrian mass, dynamic load factor, etc. are basically stochastic, although it is quite common to adapt deterministic models for these parameters. The present paper considers a stochastic approach to modeling the action of pedestrians, but when doing s...... as it pinpoints which decisions to be concerned about when the goal is to predict footbridge response. The studies involve estimating footbridge responses using Monte-Carlo simulations and focus is on estimating vertical structural response to single person loading....
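The stochastic approach can be sketched as follows: draw walking parameters from assumed distributions and propagate them through a single-mode bridge model. All distributions and modal properties here are illustrative assumptions, not the paper's values.

```python
import math
import random

def response_accel(f_step, m_ped, f_n=2.0, zeta=0.005, M_modal=40e3, dlf=0.4):
    """Steady-state acceleration amplitude of one footbridge mode under
    a harmonic walking load F = dlf * m * g * sin(2*pi*f_step*t):
    a = dlf * m * g * H(beta) * beta^2 / M, with frequency ratio
    beta = f_step / f_n and amplification H."""
    g = 9.81
    beta = f_step / f_n
    H = 1.0 / math.sqrt((1.0 - beta ** 2) ** 2 + (2.0 * zeta * beta) ** 2)
    return dlf * m_ped * g * H * beta ** 2 / M_modal

def monte_carlo(n=1000, seed=1):
    """Sample step frequency and pedestrian mass, return the
    95th-percentile acceleration across simulated crossings."""
    rng = random.Random(seed)
    acc = sorted(response_accel(rng.gauss(1.99, 0.17), rng.gauss(75.0, 15.0))
                 for _ in range(n))
    return acc[int(0.95 * n)]
```

The percentile response, rather than a single deterministic number, is the kind of quantity a probabilistic serviceability check would compare against a comfort limit.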
Optimization of tungsten castellated structures for the ITER divertor
Litnovsky, A.; Hellwig, M.; Matveev, D.; Komm, M.; van den Berg, M.; De Temmerman, G.; Rudakov, D.; Ding, F.; Luo, G.-N.; Krieger, K.; Sugiyama, K.; Pitts, R. A.; Petersson, P.
2015-08-01
In ITER, the plasma-facing components (PFCs) of the first wall and the divertor armor will be castellated to improve their thermo-mechanical stability and to limit forces due to induced currents. The fuel accumulated in the gaps may significantly contribute to the in-vessel fuel inventory. Castellation shaping may be the most straightforward way to minimize the fuel inventory and to alleviate the thermal loads on castellations. A new castellation shape was proposed, and comparative modeling of conventional (rectangular) and shaped castellations was performed for ITER conditions. The shaped castellation was predicted to be capable of operating under a stationary heat load of 20 MW/m2. An 11-fold decrease of beryllium (Be) content in the gaps of the shaped cells, along with a 7-fold decrease of carbon content, was predicted. In order to validate the predictive capabilities of the modeling tools used for ITER conditions, dedicated modeling with the same codes was performed for existing tokamaks and benchmarked against the results of multi-machine experiments. For the castellations exposed in TEXTOR and DIII-D, the carbon amount in the gaps of shaped cells was 1.9-2.3 times smaller than that of rectangular ones. Modeling for TEXTOR conditions yielded a 1.5-fold decrease of carbon content in the gaps of the shaped castellation, in fair agreement with the experiment. At the same time, a number of processes, such as enhanced erosion of the molten layer, still need to be implemented in the codes in order to increase the accuracy of predictions for ITER.
Modelling of steady state erosion of CFC actively water-cooled mock-up for the ITER divertor
Energy Technology Data Exchange (ETDEWEB)
Ogorodnikova, O.V. [Departement de Recherches sur la Fusion Controlee, Association Euratom-CEA, CEA-Cadarache, F-13108 Saint Paul Lez Durance cedex (France)], E-mail: igra32@rambler.ru
2008-04-15
Calculations of the physical and chemical erosion of CFC (carbon fibre composite) monoblocks as the outer vertical target of the ITER divertor during normal operation regimes have been performed. Off-normal events and ELMs are not considered here. For a set of components under thermal and particle loads at glancing incidence, variations in the material properties and/or assembly defects could result in different erosion of the actively cooled components and, thus, in temperature instabilities. Operation regimes where the temperature instability takes place are investigated. It is shown that the temperature and erosion instabilities are probably not a critical point for the present design of the ITER vertical target if a realistic variation of material properties is assumed, namely, that the difference in the thermal conductivities of neighbouring monoblocks is 20% and the maximum allowable size of a defect between the CFC armour and the cooling tube is ±90° in the circumferential direction from the apex.
Predictive In Vivo Models for Oncology.
Behrens, Diana; Rolff, Jana; Hoffmann, Jens
2016-01-01
Experimental oncology research and preclinical drug development both require specific, clinically relevant in vitro and in vivo tumor models. The increasing knowledge about the heterogeneity of cancer has required a substantial restructuring of the test systems for the different stages of development. To cope with the complexity of the disease, larger panels of patient-derived tumor models have to be implemented and extensively characterized. Together with individual genetically engineered tumor models, and supported by core functions for expression profiling and data analysis, an integrated discovery process has been generated for predictive and personalized drug development. Improved “humanized” mouse models should help to overcome the current limitations imposed by the xenogeneic barrier between humans and mice. Establishment of a functional human immune system and a corresponding human microenvironment in laboratory animals will strongly support further research. Drug discovery, systems biology, and translational research are moving closer together to address all the new hallmarks of cancer, increase the success rate of drug development, and increase the predictive value of preclinical models.
Constructing predictive models of human running.
Maus, Horst-Moritz; Revzen, Shai; Guckenheimer, John; Ludwig, Christian; Reger, Johann; Seyfarth, Andre
2015-02-06
Running is an essential mode of human locomotion, during which ballistic aerial phases alternate with phases when a single foot contacts the ground. The spring-loaded inverted pendulum (SLIP) provides a starting point for modelling running, and generates ground reaction forces that resemble those of the centre of mass (CoM) of a human runner. Here, we show that while SLIP reproduces within-step kinematics of the CoM in three dimensions, it fails to reproduce stability and predict future motions. We construct SLIP control models using data-driven Floquet analysis, and show how these models may be used to obtain predictive models of human running with six additional states comprising the position and velocity of the swing-leg ankle. Our methods are general, and may be applied to any rhythmic physical system. We provide an approach for identifying an event-driven linear controller that approximates an observed stabilization strategy, and for producing a reduced-state model which closely recovers the observed dynamics. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
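The SLIP stance phase that these control models build on can be sketched by direct integration. The parameter values and the explicit-Euler scheme are illustrative choices, and the data-driven Floquet analysis of the paper is not reproduced.

```python
import math

def slip_stance(x, z, vx, vz, k=20000.0, L0=1.0, m=80.0, dt=1e-4):
    """Planar spring-loaded inverted pendulum in stance, foot at the
    origin: m*a = k*(L0 - L)*r_hat - m*g*e_z, integrated by explicit
    Euler until the leg regains its rest length (takeoff). Returns the
    takeoff state (x, z, vx, vz), or None if the CoM hits the ground."""
    g = 9.81
    for _ in range(200000):
        L = math.hypot(x, z)
        F = k * (L0 - L)                    # radial spring force (N)
        ax = F * x / (L * m)
        az = F * z / (L * m) - g
        x, z = x + vx * dt, z + vz * dt
        vx, vz = vx + ax * dt, vz + az * dt
        if math.hypot(x, z) >= L0:          # leg at rest length: takeoff
            return x, z, vx, vz
        if z <= 0.0:                        # fell during stance
            return None
    return None

# Illustrative touchdown: leg at rest length, CoM moving forward and down.
takeoff = slip_stance(-0.2, (1.0 - 0.2 ** 2) ** 0.5, 3.0, -0.5)
```

Stance is conservative in the continuous model, so total mechanical energy at takeoff should closely match its touchdown value, which provides a quick sanity check on the integration.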
Statistical Seasonal Sea Surface based Prediction Model
Suarez, Roberto; Rodriguez-Fonseca, Belen; Diouf, Ibrahima
2014-05-01
The interannual variability of the sea surface temperature (SST) plays a key role in the strongly seasonal rainfall regime of the West African region. The predictability of the seasonal cycle of rainfall is widely discussed by the scientific community, with results that fail to be satisfactory due to the difficulty dynamical models have in reproducing the behavior of the Inter-Tropical Convergence Zone (ITCZ). To tackle this problem, a statistical model based on oceanic predictors has been developed at the Universidad Complutense de Madrid (UCM) with the aim of complementing and enhancing the predictability of the West African Monsoon (WAM) as an alternative to the coupled models. The model, called S4CAST (SST-based Statistical Seasonal Forecast), is based on discriminant analysis techniques, specifically Maximum Covariance Analysis (MCA) and Canonical Correlation Analysis (CCA). Beyond the application of the model to the prediction of rainfall in West Africa, its use extends to a range of oceanic, atmospheric and health-related parameters influenced by the sea surface temperature as a defining factor of variability.
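The MCA step at the core of such a model can be sketched as extracting the leading singular mode of the predictor-predictand cross-covariance matrix. This pure-Python power iteration assumes mean-centered anomaly fields and toy dimensions; real applications operate on large gridded SST and rainfall fields.

```python
def mca_leading_mode(X, Y, iters=100):
    """Leading MCA mode: X (n samples x p predictor points) and
    Y (n x q predictand points) give the cross-covariance C = X^T Y / n;
    power iteration returns its leading singular vectors u (predictor
    pattern), v (predictand pattern) and singular value sigma."""
    n, p, q = len(X), len(X[0]), len(Y[0])
    C = [[sum(X[k][i] * Y[k][j] for k in range(n)) / n for j in range(q)]
         for i in range(p)]
    u = [1.0] * p
    v = [0.0] * q
    for _ in range(iters):
        v = [sum(C[i][j] * u[i] for i in range(p)) for j in range(q)]
        nv = sum(val * val for val in v) ** 0.5
        v = [val / nv for val in v]
        u = [sum(C[i][j] * v[j] for j in range(q)) for i in range(p)]
        nu = sum(val * val for val in u) ** 0.5
        u = [val / nu for val in u]
    sigma = sum(u[i] * C[i][j] * v[j] for i in range(p) for j in range(q))
    return u, v, sigma

# Toy anomalies: a single SST pattern driving a single rainfall series.
u, v, sigma = mca_leading_mode([[1.0, 0.0], [2.0, 0.0], [3.0, 0.0]],
                               [[2.0], [4.0], [6.0]])
```

Projecting new SST anomalies onto u then provides the predictor time series used for the seasonal forecast.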
Institute of Scientific and Technical Information of China (English)
李铎; 杨小荟; 武强; 张志忠
2002-01-01
The purpose of this paper is to discuss the factors that influence iteration accuracy when iteration is used to determine the numerical model for predicting the water yield of deep-drawdown mines and calculating the groundwater level. The relationships among the calculation error of the groundwater level, the pumping rate, the limit of the iteration convergence error, the calculation time, and the aquifer parameters were examined using an idealized model. Finally, the water yield of the Dianzi iron mine was predicted using the verified numerical model. It is shown that the calculation error of the groundwater level is related to the limit of the iteration convergence error, the calculation time and the aquifer parameters, but not to the pumping rate or the variation of the groundwater level.
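The dependence of the solution error on the convergence limit can be illustrated with a generic fixed-point iteration. This is a schematic of the stopping-criterion effect only, not the mine's groundwater model.

```python
def iterate_to_tolerance(g, x0, tol, max_iter=100000):
    """Fixed-point iteration x_{n+1} = g(x_n), stopped when the change
    between successive iterates falls below `tol`. For a contraction
    with factor c, the remaining error is bounded by tol * c / (1 - c):
    the achievable accuracy is set by the convergence limit, not by
    the forcing (e.g., the pumping rate)."""
    x = x0
    for i in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new, i + 1
        x = x_new
    return x, max_iter
```

Tightening the tolerance shrinks the residual error at the cost of more iterations, mirroring the trade-off between convergence limit and calculation time described in the abstract.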
Bonaïti, C; Irlinger, F; Spinnler, H E; Engel, E
2005-05-01
The aim of this study was to develop and validate an iterative procedure based on odor assessment to select odor-active associations of microorganisms from a starting association of 82 strains (G1), which were chosen to be representative of Livarot cheese biodiversity. A 3-step dichotomous procedure was applied to reduce the starting association G1. At each step, 3 methods were used to evaluate the odor proximity between mother (n strains) and daughter (n/2 strains) associations: a direct assessment of odor dissimilarity using an original bidimensional scale system, and 2 indirect methods based on comparisons of odor profiles or hedonic scores. Odor dissimilarity ratings and odor profiles gave reliable and sometimes complementary criteria to select G3 and G4 at the first iteration, G31 and G42 at the second iteration, and G312 and G421 at the final iteration. Principal component analysis of the odor profile data permitted the interpretation, at least in part, of the 2D multidimensional scaling representation of the similarity data. The second part of the study was dedicated to 1) validating the choice of the dichotomous procedure made at each iteration, and 2) evaluating the overall magnitude of odor differences that may exist between G1 and its subsequent simplified associations. The strategy consisted of assessing odor similarity between the 13 cheese models by comparing the contents of their odor-active compounds. By using a purge-and-trap gas chromatography-olfactometry/mass spectrometry device, 50 potent odorants were identified in models G312 and G421, and in a typical Protected Denomination of Origin Livarot cheese. Their contributions to the odor profile of both selected model cheeses are discussed. These compounds were quantified by purge-and-trap gas chromatography-mass spectrometry in the 13 products, and the normalized data matrix was transformed to a between-product distance matrix. This instrumental assessment of odor similarities allowed validation of the choice
Zhang, W.; Bobkov, V.; Noterdaeme, J.-M.; Tierens, W.; Bilato, R.; Carralero, D.; Coster, D.; Jacquot, J.; Jacquet, P.; Lunt, T.; Pitts, R. A.; Rohde, V.; Siegl, G.; Fuenfgelder, H.; Aguiam, D.; Silva, A.; Colas, L.; Ceccuzzi, S.; the ASDEX Upgrade Team
2017-07-01
The influence of outer top gas injection on the scrape-off layer (SOL) density and ion cyclotron range of frequencies (ICRF) coupling has been studied in ASDEX Upgrade (AUG) L-mode plasmas for the first time. The three-dimensional (3D) edge plasma fluid and neutral transport code EMC3-EIRENE is used to simulate the SOL plasma density, and the 3D wave code RAPLICASOL is used to compute the ICRF coupling resistance with the calculated density. Improvements have been made in the EMC3-EIRENE simulations by fitting transport parameters separately for each gas puffing case. It is found that the calculated local density profiles and coupling resistances are in good agreement with the experimental ones. The results indicate that the SOL density increase depends sensitively on the spreading of the injected outer top gas. If more gas enters the main chamber through paths near the top of the vessel, the SOL density increase will be more toroidally uniform; if more gas follows paths closer to the mid-plane, the SOL density increase will be more localized and more significant. Among the various local gas puffing methods, the mid-plane gas valve close to the antenna is still the best option for improving ICRF coupling. Differences between outer top gas puffing in AUG and in ITER are briefly summarized. Suggestions for ITER and future plans for ITER gas injection simulations are discussed.
Approximate Modified Policy Iteration
Scherrer, Bruno; Ghavamzadeh, Mohammad; Geist, Matthieu
2012-01-01
Modified policy iteration (MPI) is a dynamic programming (DP) algorithm that contains the two celebrated policy and value iteration methods. Despite its generality, MPI has not been thoroughly studied, especially its approximate form, which is used when the state and/or action spaces are large or infinite. In this paper, we propose three approximate MPI (AMPI) algorithms that are extensions of the well-known approximate DP algorithms: fitted-value iteration, fitted-Q iteration, and classification-based policy iteration. We provide an error propagation analysis for AMPI that unifies those for approximate policy and value iteration. We also provide a finite-sample analysis for the classification-based implementation of AMPI (CBMPI), which is more general than (and in a sense subsumes) the analyses of the other AMPI algorithms presented. An interesting observation is that MPI's parameter allows us to control the balance of errors (in value function approximation and in estimating the greedy policy) in the fina...
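The exact MPI scheme that interpolates between value and policy iteration can be sketched on a toy finite MDP. This is generic tabular MPI, not the approximate (AMPI/CBMPI) variants analyzed in the paper; the example MDP is invented.

```python
def modified_policy_iteration(P, R, gamma=0.9, m=5, iters=100):
    """Tabular MPI: alternate a greedy policy improvement step with m
    steps of partial policy evaluation. m = 1 recovers value iteration;
    m -> infinity recovers policy iteration. P[s][a][s2] is a transition
    probability, R[s][a] an expected reward."""
    nS, nA = len(R), len(R[0])
    V = [0.0] * nS
    pi = [0] * nS
    for _ in range(iters):
        # Greedy improvement with respect to the current value estimate.
        Q = [[R[s][a] + gamma * sum(P[s][a][s2] * V[s2] for s2 in range(nS))
              for a in range(nA)] for s in range(nS)]
        pi = [max(range(nA), key=lambda a: Q[s][a]) for s in range(nS)]
        # m steps of partial evaluation of the greedy policy.
        for _ in range(m):
            V = [R[s][pi[s]] + gamma * sum(P[s][pi[s]][s2] * V[s2] for s2 in range(nS))
                 for s in range(nS)]
    return V, pi

# Toy 2-state MDP: action 1 moves state 0 -> 1 (reward 1);
# state 1 is absorbing, with reward 2 under action 0.
V, pi = modified_policy_iteration(
    [[[1.0, 0.0], [0.0, 1.0]], [[0.0, 1.0], [0.0, 1.0]]],
    [[0.0, 1.0], [2.0, 0.0]])
```

The parameter m is exactly the knob the paper's error analysis studies: it trades evaluation accuracy against the cost of each improvement step.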
Predictive modeling by the cerebellum improves proprioception.
Bhanpuri, Nasir H; Okamura, Allison M; Bastian, Amy J
2013-09-04
Because sensation is delayed, real-time movement control requires not just sensing, but also predicting limb position, a function hypothesized for the cerebellum. Such cerebellar predictions could contribute to perception of limb position (i.e., proprioception), particularly when a person actively moves the limb. Here we show that human cerebellar patients have proprioceptive deficits compared with controls during active movement, but not when the arm is moved passively. Furthermore, when healthy subjects move in a force field with unpredictable dynamics, they have active proprioceptive deficits similar to cerebellar patients. Therefore, muscle activity alone is likely insufficient to enhance proprioception and predictability (i.e., an internal model of the body and environment) is important for active movement to benefit proprioception. We conclude that cerebellar patients have an active proprioceptive deficit consistent with disrupted movement prediction rather than an inability to generally enhance peripheral proprioceptive signals during action and suggest that active proprioceptive deficits should be considered a fundamental cerebellar impairment of clinical importance.
Berry, Brandon; Moretto, Justin; Matthews, Thomas; Smelko, John; Wiltberger, Kelly
2015-01-01
Multi-component, multi-scale Raman spectroscopy modeling results from a monoclonal antibody producing CHO cell culture process including data from two development scales (3 L, 200 L) and a clinical manufacturing scale environment (2,000 L) are presented. Multivariate analysis principles are a critical component to partial least squares (PLS) modeling but can quickly turn into an overly iterative process, thus a simplified protocol is proposed for addressing necessary steps including spectral preprocessing, spectral region selection, and outlier removal to create models exclusively from cell culture process data without the inclusion of spectral data from chemically defined nutrient solutions or targeted component spiking studies. An array of single-scale and combination-scale modeling iterations were generated to evaluate technology capabilities and model scalability. Analysis of prediction errors across models suggests that glucose, lactate, and osmolality are well modeled. Model strength was confirmed via predictive validation and by examining performance similarity across single-scale and combination-scale models. Additionally, accurate predictive models were attained in most cases for viable cell density and total cell density; however, these components exhibited some scale-dependencies that hindered model quality in cross-scale predictions where only development data was used in calibration. Glutamate and ammonium models were also able to achieve accurate predictions in most cases. However, there are differences in the absolute concentration ranges of these components across the datasets of individual bioreactor scales. Thus, glutamate and ammonium PLS models were forced to extrapolate in cases where models were derived from small scale data only but used in cross-scale applications predicting against manufacturing scale batches. © 2014 American Institute of Chemical Engineers.
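PLS regression of the kind used for these calibration models can be sketched with a minimal single-response NIPALS implementation (an illustrative reconstruction; the study's chemometrics tooling and data are not reproduced here, and the test data are synthetic):

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """Minimal single-response PLS (PLS1) fitted by NIPALS with deflation."""
    xm, ym = X.mean(axis=0), y.mean()
    Xk, yk = X - xm, y - ym
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xk.T @ yk
        w /= np.linalg.norm(w)            # weight vector
        t = Xk @ w                        # scores
        tt = t @ t
        p = Xk.T @ t / tt                 # X loadings
        qk = (yk @ t) / tt                # y loading
        Xk = Xk - np.outer(t, p)          # deflate X
        yk = yk - qk * t                  # deflate y
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    coef = W @ np.linalg.solve(P.T @ W, q)  # coefficients on centered data
    return coef, xm, ym

def pls1_predict(model, Xnew):
    coef, xm, ym = model
    return (Xnew - xm) @ coef + ym
```

With as many components as predictors, PLS1 coincides with ordinary least squares; fewer components give the regularized models typically preferred for spectral data.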
Hageman, Louis A
2004-01-01
This graduate-level text examines the practical use of iterative methods in solving large, sparse systems of linear algebraic equations and in resolving multidimensional boundary-value problems. Assuming minimal mathematical background, it profiles the relative merits of several general iterative procedures. Topics include polynomial acceleration of basic iterative methods, Chebyshev and conjugate gradient acceleration procedures applicable to partitioning the linear system into a "red/black" block form, adaptive computational algorithms for the successive overrelaxation (SOR) method, and comp
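The successive overrelaxation (SOR) method discussed in the text can be sketched in a few lines (an illustrative implementation; the relaxation factor omega=1.5 and the test matrix are assumptions, not taken from the book):

```python
import numpy as np

def sor_solve(A, b, omega=1.5, tol=1e-10, max_iter=10_000):
    """Successive overrelaxation for Ax = b (A assumed SPD or diagonally dominant)."""
    x = np.zeros_like(b, dtype=float)
    n = len(b)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Gauss-Seidel sweep: new values for j < i, old values for j > i.
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            break
    return x
```

For 0 < omega < 2 the iteration converges on symmetric positive definite systems; omega = 1 recovers plain Gauss-Seidel.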
Accurate Holdup Calculations with Predictive Modeling & Data Integration
Energy Technology Data Exchange (ETDEWEB)
Azmy, Yousry [North Carolina State Univ., Raleigh, NC (United States). Dept. of Nuclear Engineering; Cacuci, Dan [Univ. of South Carolina, Columbia, SC (United States). Dept. of Mechanical Engineering
2017-04-03
Bayes’ Theorem, one must have a model y(x) that maps the state variables x (the solution in this case) to the measurements y. In this case, the unknown state variables are the configuration and composition of the held-up SNM. The measurements are the detector readings. Thus, the natural model is neutral-particle radiation transport, where a wealth of computational tools exists for performing these simulations accurately and efficiently. The combination of predictive model and Bayesian inference forms the Data Integration with Modeled Predictions (DIMP) method that serves as the foundation for this project. The cost functional Q describing the model-to-data misfit is computed via a norm created by the inverse of the covariance matrix of the model parameters and responses. Since the model y(x) for the holdup problem is nonlinear, a nonlinear optimization of Q is conducted via Newton-type iterative methods to find the optimal values of the model parameters x. This project comprised a collaboration between NC State University (NCSU), the University of South Carolina (USC), and Oak Ridge National Laboratory (ORNL). The project was originally proposed in seven main tasks, with an eighth contingency task to be performed if time and funding permitted; in fact time did not permit commencement of the contingency task and it was not performed. The remaining tasks involved holdup analysis with gamma detection strategies and, separately, with neutrons based on coincidence counting. Early in the project, and upon consultation with experts in coincidence counting, it became evident that this approach is not viable for holdup applications, and this task was replaced with an alternative but valuable investigation that was carried out by the USC partner. Nevertheless, the experimental measurements at ORNL of both gamma and neutron sources for the purpose of constructing Detector Response Functions (DRFs) with the associated uncertainties were indeed completed.
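A Newton-type iteration on a covariance-weighted misfit of the kind described can be sketched as a Gauss-Newton loop (an illustrative sketch; the exponential model, its Jacobian, and the identity covariance below are hypothetical stand-ins for the radiation-transport model y(x)):

```python
import numpy as np

def gauss_newton(model, jac, x0, y_obs, C_inv, n_iter=30):
    """Minimize Q(x) = (model(x) - y_obs)^T C_inv (model(x) - y_obs) by Gauss-Newton."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r = model(x) - y_obs                      # residual vector
        J = jac(x)                                # Jacobian of the model
        # Covariance-weighted normal equations give the Newton-type step.
        x = x - np.linalg.solve(J.T @ C_inv @ J, J.T @ C_inv @ r)
    return x
```

On a zero-residual problem the iteration converges quadratically from a sufficiently close starting point; in practice damping or trust regions are added for robustness.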
A prediction model for Clostridium difficile recurrence
Directory of Open Access Journals (Sweden)
Francis D. LaBarbera
2015-02-01
Background: Clostridium difficile infection (CDI) is a growing problem in the community and hospital setting. Its incidence has been on the rise over the past two decades, and it is quickly becoming a major concern for the health care system. A high rate of recurrence is one of the major hurdles in the successful treatment of C. difficile infection. There have been few studies that have looked at patterns of recurrence. The studies currently available have shown a number of risk factors associated with C. difficile recurrence (CDR); however, there is little consensus on the impact of most of the identified risk factors. Methods: Our study was a retrospective chart review of 198 patients diagnosed with CDI via polymerase chain reaction (PCR) from February 2009 to June 2013. In our study, we decided to use a machine learning algorithm called the Random Forest (RF) to analyze all of the factors proposed to be associated with CDR. This model is capable of making predictions based on a large number of variables, and has outperformed numerous other models and statistical methods. Results: We developed a model that was able to predict CDR with a sensitivity of 83.3%, a specificity of 63.1%, and an area under the curve of 82.6%. These results are in line with other studies that have used the RF model. Conclusions: We hope that in the future, machine learning algorithms, such as the RF, will see wider application.
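The reported figures of merit — sensitivity, specificity, and area under the ROC curve — can be computed from a classifier's outputs as follows (a stdlib-only sketch; the study's Random Forest itself is not reimplemented here, and the example labels are made up):

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP) for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

def auc(y_true, scores):
    """Probability a random positive outscores a random negative (ties count 1/2)."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

The rank-based AUC above is the Mann-Whitney form, equivalent to the area under the empirical ROC curve.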
Gamma-Ray Pulsars Models and Predictions
Harding, A K
2001-01-01
Pulsed emission from gamma-ray pulsars originates inside the magnetosphere, from radiation by charged particles accelerated near the magnetic poles or in the outer gaps. In polar cap models, the high energy spectrum is cut off by magnetic pair production above an energy that is dependent on the local magnetic field strength. While most young pulsars with surface fields in the range B = 10^{12} - 10^{13} G are expected to have high energy cutoffs around several GeV, the gamma-ray spectra of old pulsars having lower surface fields may extend to 50 GeV. Although the gamma-ray emission of older pulsars is weaker, detecting pulsed emission at high energies from nearby sources would be an important confirmation of polar cap models. Outer gap models predict more gradual high-energy turnovers at around 10 GeV, but also predict an inverse Compton component extending to TeV energies. Detection of pulsed TeV emission, which would not survive attenuation at the polar caps, is thus an important test of outer gap models. N...
Artificial Neural Network Model for Predicting Compressive Strength of Concrete
Directory of Open Access Journals (Sweden)
Salim T. Yousif
2013-05-01
Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, strength estimation of concrete at an early age is highly desirable. This study presents the effort in applying neural network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum aggregate size (MAS), and slump of fresh concrete. A back-propagation neural network model is developed, trained, and tested using actual data sets of concrete mix proportions gathered from the literature. Testing the model with unused data within the range of the input parameters shows that the maximum absolute error of the model is about 20%, and 88% of the outputs have absolute errors of less than 10%. The parametric study shows that the water/cement ratio (w/c) is the most significant factor affecting the output of the model. The results show that neural networks have strong potential as a feasible tool for predicting the compressive strength of concrete.
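A back-propagation network of the kind described can be sketched with a one-hidden-layer implementation (an illustrative reconstruction; the architecture, learning rate, and synthetic data are assumptions, not the study's mix-proportion dataset):

```python
import numpy as np

rng = np.random.default_rng(0)

def train_mlp(X, y, hidden=8, lr=0.05, epochs=3000):
    """One-hidden-layer back-propagation network: tanh hidden units, linear output."""
    n = len(y)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, hidden); b2 = 0.0
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)                # forward pass
        err = h @ W2 + b2 - y                   # gradient of 0.5*MSE w.r.t. output
        gW2 = h.T @ err / n; gb2 = err.mean()
        dh = np.outer(err, W2) * (1.0 - h**2)   # back-propagate through tanh
        gW1 = X.T @ dh / n; gb1 = dh.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    return lambda Xn: np.tanh(Xn @ W1 + b1) @ W2 + b2
```

Input scaling, validation splits, and early stopping, which a study like this would use, are omitted for brevity.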
Ground Motion Prediction Models for Caucasus Region
Jorjiashvili, Nato; Godoladze, Tea; Tvaradze, Nino; Tumanova, Nino
2016-04-01
Ground motion prediction models (GMPMs) relate ground motion intensity measures to variables describing earthquake source, path, and site effects. Estimation of expected ground motion is fundamental to earthquake hazard assessment. The most commonly used parameters for attenuation relations are peak ground acceleration and spectral acceleration, because these parameters give useful information for seismic hazard assessment. Development of the Georgian Digital Seismic Network began in 2003. In this study, new GMP models are obtained based on new data from the Georgian seismic network and also from neighboring countries. The models are estimated by classical statistical regression analysis. Site ground conditions are additionally considered, because the same earthquake recorded at the same distance may cause different damage depending on ground conditions. Empirical ground-motion prediction models require adjustment to make them appropriate for site-specific scenarios; however, the process of making such adjustments remains a challenge. This work presents a holistic framework for the development of a peak ground acceleration (PGA) or spectral acceleration (SA) model that is easily adjustable to different seismological conditions and does not suffer from the practical problems associated with adjustments in the response spectral domain.
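Regression estimation of an attenuation relation can be sketched by least-squares fitting of a simple functional form (the form log10(PGA) = c0 + c1*M + c2*log10(R) + c3*site is a common illustrative choice, not necessarily the one used in this study; the data below are synthetic):

```python
import numpy as np

def fit_gmpe(M, R, site, log_pga):
    """Least-squares fit of a simple attenuation relation.
    Hypothetical form: log10(PGA) = c0 + c1*M + c2*log10(R) + c3*site."""
    A = np.column_stack([np.ones_like(M), M, np.log10(R), site])
    coef, *_ = np.linalg.lstsq(A, log_pga, rcond=None)
    return coef
```

Real GMPM regressions additionally model inter- and intra-event error terms and magnitude-distance scaling breaks, which are omitted here.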
Modeling and Prediction of Krueger Device Noise
Guo, Yueping; Burley, Casey L.; Thomas, Russell H.
2016-01-01
This paper presents the development of a noise prediction model for aircraft Krueger flap devices that are considered as alternatives to leading edge slotted slats. The prediction model decomposes the total Krueger noise into four components, generated by the unsteady flows, respectively, in the cove under the pressure side surface of the Krueger, in the gap between the Krueger trailing edge and the main wing, around the brackets supporting the Krueger device, and around the cavity on the lower side of the main wing. For each noise component, the modeling follows a physics-based approach that aims at capturing the dominant noise-generating features in the flow and developing correlations between the noise and the flow parameters that control the noise generation processes. The far field noise is modeled using each of the four noise components' respective spectral functions, far field directivities, Mach number dependencies, component amplitudes, and other parametric trends. Preliminary validations are carried out by using small scale experimental data, and two applications are discussed: one for conventional aircraft and the other for advanced configurations. The former focuses on the parametric trends of Krueger noise on design parameters, while the latter reveals its importance in relation to other airframe noise components.
A generative model for predicting terrorist incidents
Verma, Dinesh C.; Verma, Archit; Felmlee, Diane; Pearson, Gavin; Whitaker, Roger
2017-05-01
A major concern in coalition peace-support operations is the incidence of terrorist activity. In this paper, we propose a generative model for the occurrence of terrorist incidents, and illustrate that an increase in diversity, as measured by the number of different social groups to which an individual belongs, is inversely correlated with the likelihood of a terrorist incident in the society. A generative model is one that can predict the likelihood of events in new contexts, as opposed to statistical models, which predict future incidents based on the history of incidents in an existing context. Generative models can be useful in planning for persistent Intelligence, Surveillance and Reconnaissance (ISR) since they allow an estimation of regions in the theater of operation where terrorist incidents may arise, and thus can be used to better allocate the assignment and deployment of ISR assets. In this paper, we present a taxonomy of terrorist incidents, identify factors related to the occurrence of terrorist incidents, and provide a mathematical analysis calculating the likelihood of occurrence of terrorist incidents in three common real-life scenarios arising in peace-keeping operations.
Abdou, M.; Baker, C.; Casini, G.
1991-07-01
The International Thermonuclear Experimental Reactor (ITER) was designed to operate in two phases. The first phase, which lasts for 6 years, is devoted to machine checkout and physics testing. The second phase lasts for 8 years and is devoted primarily to technology testing. This report describes the technology test program development for ITER, the ancillary equipment outside the torus necessary to support the test modules, the international collaboration aspects of conducting the test program on ITER, the requirements on the machine major parameters and the R and D program required to develop the test modules for testing in ITER.
Optimal feedback scheduling of model predictive controllers
Institute of Scientific and Technical Information of China (English)
Pingfang ZHOU; Jianying XIE; Xiaolong DENG
2006-01-01
Model predictive control (MPC) cannot be reliably applied to real-time control systems because its computation time is not well defined. Implemented as an anytime algorithm, an MPC task allows computation time to be traded for control performance, thus obtaining predictability in time. Optimal feedback scheduling (FS-CBS) of a set of MPC tasks is presented to maximize the global control performance subject to limited processor time. Each MPC task is assigned a constant bandwidth server (CBS), whose reserved processor time is adjusted dynamically. The constraints in the FS-CBS guarantee schedulability of the total task set and stability of each component. The FS-CBS is shown to be robust against variations in the execution time of MPC tasks at runtime. Simulation results illustrate its effectiveness.
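A receding-horizon MPC step, whose computation grows with the horizon length (the quantity an anytime formulation can trade against performance), can be sketched for an unconstrained scalar system (an illustrative sketch, not the paper's FS-CBS scheduler):

```python
import numpy as np

def mpc_step(x0, a, b, horizon):
    """One receding-horizon MPC move for x+ = a*x + b*u, cost sum(x^2 + u^2).
    Longer horizons cost more computation but improve performance -- the
    trade-off an anytime implementation exploits."""
    N = horizon
    # Batch prediction: stacked states x_1..x_N = F*x0 + G*u.
    F = np.array([a ** (k + 1) for k in range(N)])
    G = np.zeros((N, N))
    for k in range(N):
        for j in range(k + 1):
            G[k, j] = a ** (k - j) * b
    # Unconstrained minimizer of ||F*x0 + G*u||^2 + ||u||^2.
    u = np.linalg.solve(G.T @ G + np.eye(N), -G.T @ F * x0)
    return u[0]  # apply only the first input, then re-solve at the next sample
```

Applied in closed loop, the controller stabilizes even an unstable plant (a > 1) once the horizon is long enough.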
Klett, Hagen; Rodriguez-Fernandez, Maria; Dineen, Shauna; Leon, Lisa R; Timmer, Jens; Doyle, Francis J
2015-02-01
Heat stroke (HS) is a life-threatening illness caused by prolonged exposure to heat that causes severe hyperthermia and nervous system abnormalities. The long-term consequences of HS are poorly understood, and deeper insight is required to find possible treatment strategies. Elevated pro- and anti-inflammatory cytokines during HS recovery suggest that they play a major role in the immune response. In this study, we developed a mathematical model to understand the interactions and dynamics of cytokines in the hypothalamus, the main thermoregulatory center in the brain. Uncertainty and identifiability analysis of the calibrated model parameters revealed non-identifiable parameters due to the limited amount of data. To overcome the lack of identifiability of the parameters, an iterative cycle of optimal experimental design, data collection, re-calibration and model reduction was applied, and further informative experiments were suggested. Additionally, a new method of approximating the prior distribution of the parameters for Bayesian optimal experimental design based on the profile likelihood is presented.
Objective calibration of numerical weather prediction models
Voudouri, A.; Khain, P.; Carmona, I.; Bellprat, O.; Grazzini, F.; Avgoustoglou, E.; Bettems, J. M.; Kaufmann, P.
2017-07-01
Numerical weather prediction (NWP) and climate models use parameterization schemes for physical processes, which often include free or poorly confined parameters. Model developers normally calibrate the values of these parameters subjectively to improve the agreement of forecasts with available observations, a procedure referred to as expert tuning. A practicable objective multivariate calibration method built on a quadratic meta-model (MM) has been applied to a regional climate model (RCM) and shown to be at least as good as expert tuning. Based on these results, an approach to implementing the methodology in an NWP model is presented in this study. Challenges in transferring the methodology from RCM to NWP are not restricted to the use of higher resolution and different time scales. The sensitivity of NWP model quality with respect to the model parameter space has to be clarified, and the overall procedure optimized in terms of the computing resources required for the calibration of an NWP model. Three free model parameters affecting mainly the turbulence parameterization schemes were originally selected with respect to their influence on variables associated with daily forecasts, such as daily minimum and maximum 2 m temperature as well as 24 h accumulated precipitation. Preliminary results indicate that the approach is both affordable in terms of computer resources and meaningful in terms of improved forecast quality. In addition, the proposed methodology has the advantage of being a replicable procedure that can be applied when an updated model version is launched and/or to customize the same model implementation for different climatological areas.
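Fitting a quadratic meta-model to (parameter, skill-score) pairs from calibration runs can be sketched as a linear least-squares problem in the monomial basis (an illustrative sketch with synthetic data; the actual MM methodology involves more structure than this):

```python
import numpy as np

def fit_quadratic_metamodel(params, scores):
    """Fit score ~ c + sum_i b_i*p_i + sum_{i<=j} a_ij*p_i*p_j to calibration runs."""
    p = np.asarray(params, dtype=float)   # shape (n_runs, n_params)
    n, d = p.shape
    cols = [np.ones(n)] + [p[:, i] for i in range(d)]
    cols += [p[:, i] * p[:, j] for i in range(d) for j in range(i, d)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, scores, rcond=None)
    return coef
```

Once fitted, the cheap quadratic surrogate can be minimized over the parameter box instead of rerunning the expensive NWP model for every candidate parameter set.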
Prediction models from CAD models of 3D objects
Camps, Octavia I.
1992-11-01
In this paper we present a probabilistic prediction based approach for CAD-based object recognition. Given a CAD model of an object, the PREMIO system combines techniques of analytic graphics and physical models of lights and sensors to predict how features of the object will appear in images. In nearly 4,000 experiments on analytically-generated and real images, we show that in a semi-controlled environment, predicting the detectability of features of the image can successfully guide a search procedure to make informed choices of model and image features in its search for correspondences that can be used to hypothesize the pose of the object. Furthermore, we provide a rigorous experimental protocol that can be used to determine the optimal number of correspondences to seek so that the probability of failing to find a pose and of finding an inaccurate pose are minimized.
Model predictive control of MSMPR crystallizers
Moldoványi, Nóra; Lakatos, Béla G.; Szeifert, Ferenc
2005-02-01
A multi-input-multi-output (MIMO) control problem of isothermal continuous crystallizers is addressed in order to create an adequate model-based control system. The moment equation model of mixed suspension, mixed product removal (MSMPR) crystallizers, which forms a dynamical system, is used; its state is represented by a vector of six variables: the first four leading moments of the crystal size, the solute concentration, and the solvent concentration. Hence, the time evolution of the system occurs in a bounded region of the six-dimensional phase space. The controlled variables are the mean crystal size and the crystal size distribution; the manipulated variables are the input concentration of the solute and the flow rate. The controllability and observability, as well as the coupling between the inputs and the outputs, were analyzed by simulation using the linearized model. It is shown that the crystallizer is a nonlinear MIMO system with strong coupling between the state variables. Considering the possibilities of model reduction, a third-order model was found quite adequate for model estimation in model predictive control (MPC). The mean crystal size and the variance of the size distribution can be nearly separately controlled by the residence time and the inlet solute concentration, respectively. By seeding, the controllability of the crystallizer increases significantly, and the overshoots and oscillations become smaller. The results of the control study have shown that linear MPC is an adaptable and feasible controller for continuous crystallizers.
An Anisotropic Hardening Model for Springback Prediction
Zeng, Danielle; Xia, Z. Cedric
2005-08-01
As more Advanced High-Strength Steels (AHSS) are heavily used for automotive body structures and closures panels, accurate springback prediction for these components becomes more challenging because of their rapid hardening characteristics and ability to sustain even higher stresses. In this paper, a modified Mroz hardening model is proposed to capture realistic Bauschinger effect at reverse loading, such as when material passes through die radii or drawbead during sheet metal forming process. This model accounts for material anisotropic yield surface and nonlinear isotropic/kinematic hardening behavior. Material tension/compression test data are used to accurately represent Bauschinger effect. The effectiveness of the model is demonstrated by comparison of numerical and experimental springback results for a DP600 straight U-channel test.
DEFF Research Database (Denmark)
Engsted, Tom; Møller, Stig Vinther
2010-01-01
We suggest an iterated GMM approach to estimate and test the consumption based habit persistence model of Campbell and Cochrane, and we apply the approach on annual and quarterly Danish stock and bond returns. For comparative purposes we also estimate and test the standard constant relative risk-aversion (CRRA) model. In addition, we compare the pricing errors of the different models using Hansen and Jagannathan's specification error measure. The main result is that for Denmark the Campbell-Cochrane model does not seem to perform markedly better than the CRRA model. For the long annual sample period covering more than 80 years there is absolutely no evidence of superior performance of the Campbell-Cochrane model. For the shorter and more recent quarterly data over a 20-30 year period, there is some evidence of counter-cyclical time-variation in the degree of risk-aversion, in accordance...
Isma'eel, Hussain A; Sakr, George E; Almedawar, Mohamad M; Fathallah, Jihan; Garabedian, Torkom; Eddine, Savo Bou Zein; Nasreddine, Lara; Elhajj, Imad H
2015-06-01
High dietary salt intake is directly linked to hypertension and cardiovascular diseases (CVDs). Predicting behaviors regarding salt intake habits is vital to guide interventions and increase their effectiveness. We aim to compare the accuracy of an artificial neural network (ANN) based tool that predicts behavior from key knowledge questions along with clinical data in a high cardiovascular risk cohort, relative to the least squares modeling (LSM) method. We collected knowledge, attitude, and behavior data on 115 patients. A behavior score was calculated to classify patients' behavior towards reducing salt intake. The accuracy comparison between ANN and regression analysis was performed using the bootstrap technique with 200 iterations. Starting from a 69-item questionnaire, a reduced model was developed that included eight knowledge items found to result in the highest accuracy of 62% (CI 58-67%). The best prediction accuracy in the full and reduced models was attained by ANN, at 66% and 62%, respectively, compared with full and reduced LSM at 40% and 34%, respectively. The average relative increase in accuracy of ANN over LSM in the full and reduced models is 82% and 102%, respectively. Using ANN modeling, we can predict salt reduction behaviors with 66% accuracy. The statistical model has been implemented in an online calculator and can be used in clinics to estimate a patient's behavior. This will help future research further prove the clinical utility of this tool to guide therapeutic salt reduction interventions in high cardiovascular risk individuals.
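A bootstrap estimate of prediction accuracy with 200 iterations, as used in the study's comparison, can be sketched as follows (a stdlib-only illustration; the resample scheme, seed, and percentile confidence interval are assumptions):

```python
import random

def bootstrap_accuracy(y_true, y_pred, n_boot=200, seed=1):
    """Percentile bootstrap confidence interval for classification accuracy."""
    rng = random.Random(seed)
    n = len(y_true)
    accs = []
    for _ in range(n_boot):
        # Resample case indices with replacement and score the resample.
        idx = [rng.randrange(n) for _ in range(n)]
        accs.append(sum(y_true[i] == y_pred[i] for i in idx) / n)
    accs.sort()
    return accs[int(0.025 * n_boot)], accs[int(0.975 * n_boot)]  # ~95% CI
```

Running the same resampling for two competing models (e.g. ANN versus LSM predictions) gives directly comparable accuracy intervals.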
Predictive modelling of ferroelectric tunnel junctions
Velev, Julian P.; Burton, John D.; Zhuravlev, Mikhail Ye; Tsymbal, Evgeny Y.
2016-05-01
Ferroelectric tunnel junctions combine the phenomena of quantum-mechanical tunnelling and switchable spontaneous polarisation of a nanometre-thick ferroelectric film into novel device functionality. Switching the ferroelectric barrier polarisation direction produces a sizable change in resistance of the junction—a phenomenon known as the tunnelling electroresistance effect. From a fundamental perspective, ferroelectric tunnel junctions and their version with ferromagnetic electrodes, i.e., multiferroic tunnel junctions, are testbeds for studying the underlying mechanisms of tunnelling electroresistance as well as the interplay between electric and magnetic degrees of freedom and their effect on transport. From a practical perspective, ferroelectric tunnel junctions hold promise for disruptive device applications. In a very short time, they have traversed the path from basic model predictions to prototypes for novel non-volatile ferroelectric random access memories with non-destructive readout. This remarkable progress is to a large extent driven by a productive cycle of predictive modelling and innovative experimental effort. In this review article, we outline the development of the ferroelectric tunnel junction concept and the role of theoretical modelling in guiding experimental work. We discuss a wide range of physical phenomena that control the functional properties of ferroelectric tunnel junctions and summarise the state-of-the-art achievements in the field.
Simple predictions from multifield inflationary models.
Easther, Richard; Frazer, Jonathan; Peiris, Hiranya V; Price, Layne C
2014-04-25
We explore whether multifield inflationary models make unambiguous predictions for fundamental cosmological observables. Focusing on N-quadratic inflation, we numerically evaluate the full perturbation equations for models with 2, 3, and O(100) fields, using several distinct methods for specifying the initial values of the background fields. All scenarios are highly predictive, with the probability distribution functions of the cosmological observables becoming more sharply peaked as N increases. For N=100 fields, 95% of our Monte Carlo samples fall in the ranges ns∈(0.9455,0.9534), α∈(-9.741,-7.047)×10-4, r∈(0.1445,0.1449), and riso∈(0.02137,3.510)×10-3 for the spectral index, running, tensor-to-scalar ratio, and isocurvature-to-adiabatic ratio, respectively. The expected amplitude of isocurvature perturbations grows with N, raising the possibility that many-field models may be sensitive to postinflationary physics and suggesting new avenues for testing these scenarios.
Predictions of models for environmental radiological assessment
Energy Technology Data Exchange (ETDEWEB)
Peres, Sueli da Silva; Lauria, Dejanira da Costa, E-mail: suelip@ird.gov.br, E-mail: dejanira@irg.gov.br [Instituto de Radioprotecao e Dosimetria (IRD/CNEN-RJ), Servico de Avaliacao de Impacto Ambiental, Rio de Janeiro, RJ (Brazil); Mahler, Claudio Fernando [Coppe. Instituto Alberto Luiz Coimbra de Pos-Graduacao e Pesquisa de Engenharia, Universidade Federal do Rio de Janeiro (UFRJ) - Programa de Engenharia Civil, RJ (Brazil)
2011-07-01
In the field of environmental impact assessment, models are used for estimating source term, environmental dispersion and transfer of radionuclides, exposure pathways, radiation dose, and the risk to human beings. Although it is recognized that site-specific local data are important to improve the quality of dose assessment results, in practice obtaining such data can be very difficult and expensive. Sources of uncertainty are numerous, among which we can cite: the subjectivity of modelers, exposure scenarios and pathways, the codes used, and general parameters. The various models available utilize different mathematical approaches with different complexities that can result in different predictions. Thus, for the same inputs different models can produce very different outputs. This paper briefly presents the main advances in the field of environmental radiological assessment that aim to improve the reliability of the models used in the assessment of environmental radiological impact. A model intercomparison exercise supplied incompatible results for ¹³⁷Cs and ⁶⁰Co, reinforcing the need to develop reference methodologies for environmental radiological assessment that allow dose estimates to be compared on a common basis. The results of the intercomparison exercise are presented briefly. (author)
Predicting Protein Secondary Structure with Markov Models
DEFF Research Database (Denmark)
Fischer, Paul; Larsen, Simon; Thomsen, Claus
2004-01-01
The primary structure of a protein is the sequence of its amino acids. The secondary structure describes structural properties of the molecule such as which parts of it form sheets, helices or coils. Spatial and other properties are described by the higher-order structures. The classification task we are considering here is to predict the secondary structure from the primary one. To this end we train a Markov model on training data and then use it to classify parts of unknown protein sequences as sheets, helices or coils. We show how to exploit the directional information contained...
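Training one Markov model per structural class and classifying a sequence by the highest log-likelihood can be sketched as follows (an illustrative first-order model with add-one smoothing; the two-letter alphabet and toy sequences are assumptions, not real amino-acid data):

```python
from collections import defaultdict
import math

def train_markov(sequences):
    """First-order Markov chain over symbols; log-probabilities with add-one smoothing."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    alphabet = {c for seq in sequences for c in seq}
    logp = {}
    for a in alphabet:
        total = sum(counts[a].values()) + len(alphabet)
        for b in alphabet:
            logp[(a, b)] = math.log((counts[a][b] + 1) / total)
    return logp, alphabet

def score(model, seq):
    """Log-likelihood of seq under the chain; unseen symbols get a uniform fallback."""
    logp, alphabet = model
    default = math.log(1 / max(len(alphabet), 1))
    return sum(logp.get(pair, default) for pair in zip(seq, seq[1:]))

def classify(models, seq):
    """Pick the class (e.g. helix/sheet/coil) whose Markov model scores seq highest."""
    return max(models, key=lambda c: score(models[c], seq))
```

A full secondary-structure predictor would slide this scoring over windows of the sequence and could use higher-order chains to capture the directional information mentioned in the abstract.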
Hierarchical Model Predictive Control for Resource Distribution
DEFF Research Database (Denmark)
Bendtsen, Jan Dimon; Trangbæk, K; Stoustrup, Jakob
2010-01-01
This paper deals with hierarchical model predictive control (MPC) of distributed systems. A three-level hierarchical approach is proposed, consisting of a high-level MPC controller, a second level of so-called aggregators, controlled by an online MPC-like algorithm, and a lower level of autonomous... facilitates plug-and-play addition of subsystems without redesign of any controllers. The method is supported by a number of simulations featuring a three-level smart-grid power control system for a small isolated power grid.
Explicit model predictive control accuracy analysis
Knyazev, Andrew; Zhu, Peizhen; Di Cairano, Stefano
2015-01-01
Model Predictive Control (MPC) can efficiently control constrained systems in real-time applications. The MPC feedback law for a linear system with linear inequality constraints can be explicitly computed off-line, which results in an off-line partition of the state space into non-overlapping convex regions, with affine control laws associated with each region of the partition. An actual implementation of this explicit MPC in low-cost micro-controllers requires the data to be "quantized", i.e. repre...
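The online part of explicit MPC described above reduces to point location: find the polyhedral region containing the current state, then evaluate that region's affine law. A minimal sketch follows; the two one-dimensional regions and their gains are made-up illustrative numbers, not taken from any real controller.

```python
import numpy as np

# Hypothetical pre-computed partition: region i is {x : A @ x <= b},
# with affine feedback u = K @ x + g on that region.
regions = [
    {"A": np.array([[1.0], [-1.0]]), "b": np.array([0.0, 5.0]),   # -5 <= x <= 0
     "K": np.array([[-0.5]]), "g": np.array([0.0])},
    {"A": np.array([[1.0], [-1.0]]), "b": np.array([5.0, 0.0]),   # 0 <= x <= 5
     "K": np.array([[-0.8]]), "g": np.array([0.1])},
]

def explicit_mpc(x, regions, tol=1e-9):
    """Point-location step of explicit MPC: scan the partition for the
    region containing x and evaluate its affine control law."""
    for r in regions:
        if np.all(r["A"] @ x <= r["b"] + tol):
            return r["K"] @ x + r["g"]
    raise ValueError("state outside the feasible partition")

u = explicit_mpc(np.array([-2.0]), regions)  # lies in the first region
```

In practice the scan is replaced by a binary search tree over the regions, which is where the quantization issue mentioned in the abstract enters.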
Critical conceptualism in environmental modeling and prediction.
Christakos, G
2003-10-15
Many important problems in environmental science and engineering are of a conceptual nature. Research and development, however, often becomes so preoccupied with technical issues, which are themselves fascinating, that it neglects essential methodological elements of conceptual reasoning and theoretical inquiry. This work suggests that valuable insight into environmental modeling can be gained by means of critical conceptualism which focuses on the software of human reason and, in practical terms, leads to a powerful methodological framework of space-time modeling and prediction. A knowledge synthesis system develops the rational means for the epistemic integration of various physical knowledge bases relevant to the natural system of interest in order to obtain a realistic representation of the system, provide a rigorous assessment of the uncertainty sources, generate meaningful predictions of environmental processes in space-time, and produce science-based decisions. No restriction is imposed on the shape of the distribution model or the form of the predictor (non-Gaussian distributions, multiple-point statistics, and nonlinear models are automatically incorporated). The scientific reasoning structure underlying knowledge synthesis involves teleologic criteria and stochastic logic principles which have important advantages over the reasoning method of conventional space-time techniques. Insight is gained in terms of real world applications, including the following: the study of global ozone patterns in the atmosphere using data sets generated by instruments on board the Nimbus 7 satellite and secondary information in terms of total ozone-tropopause pressure models; the mapping of arsenic concentrations in the Bangladesh drinking water by assimilating hard and soft data from an extensive network of monitoring wells; and the dynamic imaging of probability distributions of pollutants across the Kalamazoo river.
Active beam spectroscopy for ITER
Energy Technology Data Exchange (ETDEWEB)
Hellermann, M.G. von, E-mail: mgvh@jet.u [FOM Institute Rijnhuizen, Euratom Association, 3430BE Nieuwegein (Netherlands); Barnsley, R. [ITER Organization, 13108 St.-Paul-Lez-Durance, Cadarache (France); Biel, W. [Institut fuer Energieforschung, Plasmaphysik, Forschungszentrum Juelich, Euratom Association, 52425 Juelich (Germany); Delabie, E. [FOM Institute Rijnhuizen, Euratom Association, 3430BE Nieuwegein (Netherlands); Hawkes, N. [Culham Centre for Fusion Energy, Euratom Association, Culham OX14 3DB (United Kingdom); Jaspers, R. [FOM Institute Rijnhuizen, Euratom Association, 3430BE Nieuwegein (Netherlands); Johnson, D. [Princeton Plasma Physics Laboratory, Princeton, NJ-08548 (United States); Klinkhamer, F. [TNO Science and Industry, Stieltjesweg 1, 2628CK Delft (Netherlands); Lischtschenko, O. [FOM Institute Rijnhuizen, Euratom Association, 3430BE Nieuwegein (Netherlands); Marchuk, O. [Institut fuer Energieforschung, Plasmaphysik, Forschungszentrum Juelich, Euratom Association, 52425 Juelich (Germany); Schunke, B. [ITER Organization, 13108 St.-Paul-Lez-Durance, Cadarache (France); Singh, M.J. [Institute for Plasma Research, Bhat, Gandhinagar, Gurajat 384828 (India); Snijders, B. [TNO Science and Industry, Stieltjesweg 1, 2628CK Delft (Netherlands); Summers, H.P. [Culham Centre for Fusion Energy, Euratom Association, Culham OX14 3DB (United Kingdom); Thomas, D. [ITER Organization, 13108 St.-Paul-Lez-Durance, Cadarache (France); Tugarinov, S. [TRINITI Troitsk, Moscow Region 142092 (Russian Federation); Vasu, P. [Institute for Plasma Research, Bhat, Gandhinagar, Gurajat 384828 (India)
2010-11-11
Since the first feasibility studies of active beam spectroscopy on ITER in 1995, the proposed diagnostic has developed into a well-advanced and mature system. Substantial progress has been achieved on the physics side, including comprehensive performance studies based on an advanced predictive code that simulates the active and passive features of the expected spectral ranges. The simulation has enabled detailed specifications for optimized instrumentation and has helped to specify suitable diagnostic neutral beam parameters. Four ITER partners presently share the task of developing a suite of ITER active beam diagnostics, which make use of the two 0.5 MeV/amu, 18 MW heating neutral beams and a dedicated 0.1 MeV/amu, 3.6 MW diagnostic neutral beam (DNB). The IN ITER team is responsible for the DNB development and also for beam-physics-related aspects of the diagnostic. The RF will be responsible for the edge CXRS system covering the outer region of the plasma (1 > r/a > 0.4) using an equatorial observation port, and the EU will develop the core CXRS system for the very core (0
Threshold power and energy confinement for ITER
Energy Technology Data Exchange (ETDEWEB)
Takizuka, T.
1996-12-31
In order to predict the threshold power for the L-H transition and the energy confinement performance in ITER, the assembly and analysis of databases have progressed. The ITER Threshold Database includes data from 10 divertor tokamaks. Investigation of the database gives a scaling of the threshold power of the form P_thr ∝ B_t n_e^0.75 R^2 × (n_e R^2)^(±0.25), which predicts P_thr = 100 × 2^(0±1) MW for ITER at n_e = 5 × 10^19 m^-3. The ITER L-mode Confinement Database has also been expanded with data from 14 tokamaks. A scaling of the thermal energy confinement time in L-mode and ohmic phases is obtained: τ_th ∝ I_p R^1.8 n_e^0.4 P^-0.73. At the ITER parameters it becomes about 2.2 s. For ignition in ITER, an improvement of more than 2.5 times over L-mode will be required. The ITER H-mode Confinement Database has been expanded from data of 6 tokamaks to 11 tokamaks. A τ_th scaling for ELMy H-mode obtained by a standard regression analysis predicts an ITER confinement time of τ_th = 6 × (1 ± 0.3) s. The degradation of τ_th with increasing n_e R^2 (or decreasing ρ_*) is not found for ELMy H-mode. An offset-linear-law scaling with a dimensionally correct form also predicts nearly the same τ_th value.
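Because the quoted scalings leave their proportionality constants unspecified, the safest way to exercise them numerically is as a ratio between two operating points, where the constant cancels. A small sketch using the L-mode exponents from the abstract; the parameter values are made up for illustration:

```python
def lmode_tau_ratio(Ip1, R1, ne1, P1, Ip2, R2, ne2, P2):
    """Ratio tau_th(1)/tau_th(2) for the quoted L-mode scaling
    tau_th ∝ Ip * R^1.8 * ne^0.4 * P^-0.73; the unknown
    proportionality constant cancels in the ratio."""
    s = lambda Ip, R, ne, P: Ip * R**1.8 * ne**0.4 * P**-0.73
    return s(Ip1, R1, ne1, P1) / s(Ip2, R2, ne2, P2)

# Illustrative numbers: doubling heating power at fixed Ip, R, n_e
# degrades confinement by a factor 2^-0.73 (about 0.60).
ratio = lmode_tau_ratio(15, 8, 5, 100, 15, 8, 5, 50)
```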
Predictive Capability Maturity Model for computational modeling and simulation.
Energy Technology Data Exchange (ETDEWEB)
Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.
2007-10-01
The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.
Dobbs, David E.
2009-01-01
The main purpose of this note is to present and justify proof via iteration as an intuitive, creative and empowering method that is often available and preferable as an alternative to proofs via either mathematical induction or the well-ordering principle. The method of iteration depends only on the fact that any strictly decreasing sequence of…
ITER at Cadarache; ITER a Cadarache
Energy Technology Data Exchange (ETDEWEB)
NONE
2005-06-15
This public information document presents the ITER project (International Thermonuclear Experimental Reactor): the definition of fusion, the international cooperation and the advantages of the project. It also presents the site of Cadarache, an appropriate scientific and economic environment. The last part of the document recalls the history of the project and the present mobilization of all partners. (A.L.B.)
A Predictive Maintenance Model for Railway Tracks
DEFF Research Database (Denmark)
Li, Rui; Wen, Min; Salling, Kim Bang
2015-01-01
For modern railways, maintenance is critical for ensuring safety, train punctuality and overall capacity utilization. The cost of railway maintenance in Europe is high, on average between 30,000 and 100,000 Euro per km per year [1]. Aiming to reduce such maintenance expenditure, this paper...... presents a mathematical model based on Mixed Integer Programming (MIP) which is designed to optimize predictive railway tamping activities for ballasted track over a time horizon of up to four years. The objective function is set up to minimize the actual costs for the tamping machine (measured by time...... recovery of the track quality after the tamping operation and (5) tamping machine operation factors. A 57.2 km Danish railway track between Odense and Fredericia is used over a time period of two to four years in the proposed maintenance model. The total cost can be reduced by up to 50
A predictive fitness model for influenza
Łuksza, Marta; Lässig, Michael
2014-03-01
The seasonal human influenza A/H3N2 virus undergoes rapid evolution, which produces significant year-to-year sequence turnover in the population of circulating strains. Adaptive mutations respond to human immune challenge and occur primarily in antigenic epitopes, the antibody-binding domains of the viral surface protein haemagglutinin. Here we develop a fitness model for haemagglutinin that predicts the evolution of the viral population from one year to the next. Two factors are shown to determine the fitness of a strain: adaptive epitope changes and deleterious mutations outside the epitopes. We infer both fitness components for the strains circulating in a given year, using population-genetic data of all previous strains. From fitness and frequency of each strain, we predict the frequency of its descendent strains in the following year. This fitness model maps the adaptive history of influenza A and suggests a principled method for vaccine selection. Our results call for a more comprehensive epidemiology of influenza and other fast-evolving pathogens that integrates antigenic phenotypes with other viral functions coupled by genetic linkage.
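The core prediction step described above, propagating each strain's frequency forward in proportion to its inferred fitness, has a simple discrete form. The sketch below shows that schematic step only (x_i(t+1) ∝ x_i(t)·exp(f_i), renormalized), not the paper's full fitness-inference pipeline, and the strain frequencies and fitness values are invented for illustration:

```python
import numpy as np

def predict_frequencies(freq, fitness):
    """One generation of fitness-weighted frequency propagation:
    each strain's frequency is multiplied by exp(fitness) and the
    result is renormalized to sum to 1."""
    w = np.asarray(freq, dtype=float) * np.exp(np.asarray(fitness, dtype=float))
    return w / w.sum()

# Three hypothetical strains: the fitter strain (index 1) gains
# frequency, the deleterious-mutation-loaded strain (index 2) loses it.
nxt = predict_frequencies([0.5, 0.3, 0.2], [0.0, 0.5, -0.5])
```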
Predictive Model of Radiative Neutrino Masses
Babu, K S
2013-01-01
We present a simple and predictive model of radiative neutrino masses. It is a special case of the Zee model which introduces two Higgs doublets and a charged singlet. We impose a family-dependent Z_4 symmetry acting on the leptons, which reduces the number of parameters describing neutrino oscillations to four. A variety of predictions follow: The hierarchy of neutrino masses must be inverted; the lightest neutrino mass is extremely small and calculable; one of the neutrino mixing angles is determined in terms of the other two; the phase parameters take CP-conserving values with \\delta_{CP} = \\pi; and the effective mass in neutrinoless double beta decay lies in a narrow range, m_{\\beta \\beta} = (17.6 - 18.5) meV. The ratio of vacuum expectation values of the two Higgs doublets, tan\\beta, is determined to be either 1.9 or 0.19 from neutrino oscillation data. Flavor-conserving and flavor-changing couplings of the Higgs doublets are also determined from neutrino data. The non-standard neutral Higgs bosons, if t...
A predictive model for dimensional errors in fused deposition modeling
DEFF Research Database (Denmark)
Stolfi, A.
2015-01-01
This work concerns the effect of deposition angle (a) and layer thickness (L) on the dimensional performance of FDM parts, using a predictive model based on the geometrical description of the FDM filament profile. An experimental validation over the whole a range from 0° to 177° in 3° steps and two...
Effect on Prediction when Modeling Covariates in Bayesian Nonparametric Models.
Cruz-Marcelo, Alejandro; Rosner, Gary L; Müller, Peter; Stewart, Clinton F
2013-04-01
In biomedical research, it is often of interest to characterize biologic processes giving rise to observations and to make predictions of future observations. Bayesian nonparametric methods provide a means for carrying out Bayesian inference making as few assumptions about restrictive parametric models as possible. There are several proposals in the literature for extending Bayesian nonparametric models to include dependence on covariates. Limited attention, however, has been directed to the following two aspects. In this article, we examine the effect on fitting and predictive performance of incorporating covariates in a class of Bayesian nonparametric models by one of two primary ways: either in the weights or in the locations of a discrete random probability measure. We show that different strategies for incorporating continuous covariates in Bayesian nonparametric models can result in big differences when used for prediction, even though they lead to otherwise similar posterior inferences. When one needs the predictive density, as in optimal design, and this density is a mixture, it is better to make the weights depend on the covariates. We demonstrate these points via a simulated data example and in an application in which one wants to determine the optimal dose of an anticancer drug used in pediatric oncology.
Simulation of divertor targets shielding during transients in ITER
Energy Technology Data Exchange (ETDEWEB)
Pestchanyi, Sergey, E-mail: serguei.pestchanyi@kit.edu [KIT, Hermann-von-Helmholtz-Platz 1, Eggenstein-Leopoldshafen (Germany); Pitts, Richard; Lehnen, Michael [ITER Organization,Route de Vinon-sur-Verdon, CS 90 046, 13067 St. Paul Lez Durance Cedex (France)
2016-11-01
Highlights: • We simulated the plasma shielding effect during a disruption in ITER using the TOKES code. • Vaporization is found to be unavoidable under the action of ITER transients, but plasma shielding drastically reduces the divertor target damage: the melt pool and vaporization region widths are reduced 10–15 times. • A simplified 1D model describing the melt pool depth and the shielded heat flux to the divertor targets has been developed. • The results of the TOKES simulations have been compared with the analytic model where the model is valid. - Abstract: Direct extrapolation of the disruptive heat flux to ITER conditions predicts severe melting and vaporization of the divertor targets, causing intolerable damage. However, tungsten vaporized from the target at the initial stage of the disruption can create a plasma shield in front of the target, which effectively protects the target surface from the rest of the heat flux. An estimation of this shielding efficiency has been performed using the TOKES code. The shielding effect under ITER conditions is found to be very strong: for an unmitigated disruption of a 350 MJ discharge, the maximal depth of the melt layer is reduced 4 times, the melt layer width more than 10 times, and the vaporization region shrinks 10–15 times due to shielding. The simulation results show complex, 2D plasma dynamics of the shield under ITER conditions. However, a simplified analytic model, valid for rough estimation of the maximum shielded flux to the target and of the melt depth at the target surface, has been developed.
Iterative Goal Refinement for Robotics
2014-06-01
Roberts, Mark; Vattam, Swaroop; Alford, Ronald; Auslander, Bryan; Karneeb, Justin; Molineaux, Matthew... robotics researchers and practitioners. We present a goal lifecycle and define a formal model for GR that (1) relates distinct disciplines concerning... researchers to collaborate in exploring this exciting frontier. 1. Introduction: Robotic systems often act using incomplete models in environments
DEFF Research Database (Denmark)
Jørgensen, John Bagterp; Jørgensen, Sten Bay
2007-01-01
A prediction-error method tailored for model-based predictive control is presented. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model. The linear discrete-time stochastic state space model is realized from a continuous-discrete-time linear stochastic system specified using transfer functions with time-delays. It is argued that the prediction-error criterion should be selected such that it is compatible with the objective function of the predictive controller in which the model......
IMM Iterated Extended Particle Filter Algorithm
Yang Wan; Shouyong Wang; Xing Qin
2013-01-01
To solve the problem of tracking a radar maneuvering target under a nonlinear system model and non-Gaussian noise, this paper puts forward an interacting multiple model (IMM) iterated extended particle filter algorithm (IMM-IEHPF). The algorithm uses multiple modes to model the target motion so as to track any maneuvering target, and each mode uses an iterated extended particle filter (IEHPF) to handle the state estimation problem of the nonlinear non-Gaussian system. IEH...
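For readers unfamiliar with the particle-filter core that each IMM mode builds on, here is a minimal bootstrap particle filter on the standard 1D textbook benchmark model. This sketch shows only the propagate-weight-resample cycle; the paper's IMM-IEHPF additionally mixes multiple motion models and uses an iterated-EKF proposal, and the model and numbers below are conventional illustrations, not taken from the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_pf(ys, n=500, q=1.0, r=1.0):
    """Bootstrap particle filter for the classic benchmark
    x' = 0.5x + 25x/(1+x^2) + 8cos(1.2t) + N(0,q),  y = x^2/20 + N(0,r).
    Returns the posterior-mean state estimate at each step."""
    parts = rng.normal(0.0, 2.0, n)            # initial particle cloud
    estimates = []
    for t, y in enumerate(ys):
        # propagate every particle through the nonlinear dynamics
        parts = (0.5 * parts + 25 * parts / (1 + parts**2)
                 + 8 * np.cos(1.2 * t) + rng.normal(0, np.sqrt(q), n))
        # weight particles by the likelihood of the measurement y
        w = np.exp(-0.5 * (y - parts**2 / 20) ** 2 / r)
        w /= w.sum()
        estimates.append(float(np.sum(w * parts)))
        # multinomial resampling to avoid weight degeneracy
        parts = parts[rng.choice(n, n, p=w)]
    return estimates

est = bootstrap_pf([0.1, 1.0, 3.0, 2.0])       # four synthetic measurements
```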
Two criteria for evaluating risk prediction models.
Pfeiffer, R M; Gail, M H
2011-09-01
We propose and study two criteria to assess the usefulness of models that predict the risk of disease incidence for screening and prevention, or the usefulness of prognostic models for management following disease diagnosis. The first criterion, the proportion of cases followed, PCF(q), is the proportion of individuals who will develop disease who are included in the proportion q of individuals in the population at highest risk. The second criterion is the proportion needed to follow up, PNF(p), namely the proportion of the general population at highest risk that one needs to follow in order that a proportion p of those destined to become cases will be followed. PCF(q) assesses the effectiveness of a program that follows 100q% of the population at highest risk. PNF(p) assesses the feasibility of covering 100p% of cases by indicating how much of the population at highest risk must be followed. We show the relationship of these two criteria to the Lorenz curve and its inverse, and present distribution theory for estimates of PCF and PNF. We develop new methods, based on influence functions, for inference for a single risk model, and also for comparing the PCFs and PNFs of two risk models, both of which were evaluated in the same validation data.
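Both criteria are straightforward to compute from a ranked cohort. The sketch below follows the definitions as stated in the abstract (the estimators and inference machinery of the paper are not reproduced); the toy risks and case labels are invented for illustration:

```python
import numpy as np

def pcf(risk, case, q):
    """PCF(q): proportion of all cases captured by following the
    fraction q of the population at highest predicted risk."""
    order = np.argsort(risk)[::-1]              # highest risk first
    n_follow = int(np.ceil(q * len(risk)))
    followed = np.asarray(case)[order][:n_follow]
    return followed.sum() / np.sum(case)

def pnf(risk, case, p):
    """PNF(p): smallest population fraction, taken from the top of the
    risk ranking, needed to capture a fraction p of the cases."""
    order = np.argsort(risk)[::-1]
    cum_cases = np.cumsum(np.asarray(case)[order]) / np.sum(case)
    k = np.searchsorted(cum_cases, p) + 1       # people that must be followed
    return k / len(risk)

# Toy cohort: cases (1) cluster at high predicted risk.
risk = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2, 0.1, 0.05]
case = [1,   1,   0,   1,   0,   0,   0,   0]
```

For example, following the top 25% of this toy cohort captures two of the three cases, while capturing essentially all cases requires following half the cohort.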
Methods for Handling Missing Variables in Risk Prediction Models
Held, Ulrike; Kessels, Alfons; Aymerich, Judith Garcia; Basagana, Xavier; ter Riet, Gerben; Moons, Karel G. M.; Puhan, Milo A.
2016-01-01
Prediction models should be externally validated before being used in clinical practice. Many published prediction models have never been validated. Uncollected predictor variables in otherwise suitable validation cohorts are the main factor precluding external validation. We used individual patient
Craniofacial reconstruction as a prediction problem using a Latent Root Regression model
Berar, Maxime; Glaunès, Joan Alexis; Rozenholc, Yves; 10.1016/j.forsciint.2011.03.010
2012-01-01
In this paper, we present a computer-assisted method for facial reconstruction. This method provides an estimation of the facial shape associated with unidentified skeletal remains. Current computer-assisted methods using a statistical framework rely on a common set of extracted points located on the bone and soft-tissue surfaces. Most of the facial reconstruction methods then consist of predicting the position of the soft-tissue surface points, when the positions of the bone surface points are known. We propose to use Latent Root Regression for prediction. The results obtained are then compared to those given by Principal Components Analysis linear models. In conjunction, we have evaluated the influence of the number of skull landmarks used. Anatomical skull landmarks are completed iteratively by points located upon geodesics which link these anatomical landmarks, thus enabling us to artificially increase the number of skull points. Facial points are obtained using a mesh-matching algorithm between a common ...
Energy Technology Data Exchange (ETDEWEB)
Erba, M.; Aniel, T.; Basiuk, V.; Becoulet, A.; Litaudon, X
1997-08-01
A new model based on a combination of a Bohm-like term plus a gyro-Bohm-like term is proposed for the electron and ion heat diffusivity in the L-mode regime, the commonest regime of Tokamak operation. This model is derived using the dimensionless analysis technique, taking into account the indications of scaling laws for the global confinement time and other experimental constraints on the diffusivity. The model has been successfully tested against data from several different experiments in the ITER database and the local Tore Supra database. Statistical analysis has shown it to perform better than purely Bohm or gyro-Bohm models and global scaling laws on the chosen dataset. (author) 36 refs.
Influence of impurity seeding on plasma burning scenarios for ITER
Energy Technology Data Exchange (ETDEWEB)
Ivanova-Stanik, I., E-mail: irena.ivanova-stanik@ifpilm.pl [Institute of Plasma Physics and Laser Microfusion, Hery 23, 01-497 Warsaw (Poland); Zagórski, R. [Institute of Plasma Physics and Laser Microfusion, Hery 23, 01-497 Warsaw (Poland); Voitsekhovitch, I. [CCFE, Culham Science Centre, Abingdon, Oxon OX14 3DB (United Kingdom); Brezinsek, S. [Forschungszentrum Jülich GmbH Institut für Energie-und Klimaforschung—Plasmaphysik, Jülich 52425 (Germany)
2016-11-01
Highlights: • The self-consistent (core-edge) COREDIV code has been used to analyze ITER standard inductive scenarios with neon and argon seeding. • In order to achieve a wide operational window, with the power crossing the separatrix above the H-L threshold and simultaneously a tolerable heat load to the target plates (<40 MW), relatively strong impurity transport in the core and SOL regions is necessary. • For argon seeding, the operational window is much smaller than for neon due to enhanced core radiation (in comparison to Ne). - Abstract: ITER expects to produce a fusion power of about 0.5 GW when operating with a tungsten (W) divertor and beryllium (Be) wall. The influx of W from the divertor can have a significant influence on the discharge performance. This work describes predictive integrated numerical modeling of ITER discharges using the COREDIV code, which self-consistently solves the 1D radial energy and particle transport in the core region and the 2D multi-fluid transport in the SOL. Calculations are performed for inductive ITER scenarios with intrinsic (W, Be and He) impurities and with seeded impurities (Ne and Ar), for different particle and heat transport in the core and different radial transport in the SOL. Simulations show that only for sufficiently high radial diffusion (both in the core and in the SOL regions) is it possible to achieve H-mode plasma operation (power to SOL > L-H threshold power) with an acceptably low level of power reaching the divertor plates. For argon seeding, the operational window is much smaller than for neon due to enhanced core radiation (in comparison to Ne). Particle transport in the core (characterized by the ratio of particle diffusion to thermal conductivity) has a strong influence on the predicted ITER performance.
Estimating the magnitude of prediction uncertainties for the APLE model
Models are often used to predict phosphorus (P) loss from agricultural fields. While it is commonly recognized that model predictions are inherently uncertain, few studies have addressed prediction uncertainties using P loss models. In this study, we conduct an uncertainty analysis for the Annual P ...
Prediction of Catastrophes: an experimental model
Peters, Randall D; Pomeau, Yves
2012-01-01
Catastrophes of all kinds can be roughly defined as short duration-large amplitude events following and followed by long periods of "ripening". Major earthquakes surely belong to the class of 'catastrophic' events. Because of the space-time scales involved, an experimental approach is often difficult, not to say impossible, however desirable it could be. Described in this article is a "laboratory" setup that yields data of a type that is amenable to theoretical methods of prediction. Observations are made of a critical slowing down in the noisy signal of a solder wire creeping under constant stress. This effect is shown to be a fair signal of the forthcoming catastrophe in both of two dynamical models. The first is an "abstract" model in which a time dependent quantity drifts slowly but makes quick jumps from time to time. The second is a realistic physical model for the collective motion of dislocations (the Ananthakrishna set of equations for creep). Hope thus exists that similar changes in the response to ...
Predictive modeling of low solubility semiconductor alloys
Rodriguez, Garrett V.; Millunchick, Joanna M.
2016-09-01
GaAsBi is of great interest for applications in high efficiency optoelectronic devices due to its highly tunable bandgap. However, the experimental growth of high Bi content films has proven difficult. Here, we model GaAsBi film growth using a kinetic Monte Carlo simulation that explicitly takes cation and anion reactions into account. The unique behavior of Bi droplets is explored, and a sharp decrease in Bi content upon Bi droplet formation is demonstrated. The high mobility of simulated Bi droplets on GaAsBi surfaces is shown to produce phase separated Ga-Bi droplets as well as depressions on the film surface. A phase diagram for a range of growth rates that predicts both Bi content and droplet formation is presented to guide the experimental growth of high Bi content GaAsBi films.
Distributed model predictive control made easy
Negenborn, Rudy
2014-01-01
The rapid evolution of computer science, communication, and information technology has enabled the application of control techniques to systems beyond the possibilities of control theory just a decade ago. Critical infrastructures such as electricity, water, traffic and intermodal transport networks are now in the scope of control engineers. The sheer size of such large-scale systems requires the adoption of advanced distributed control approaches. Distributed model predictive control (MPC) is one of the promising control methodologies for control of such systems. This book provides a state-of-the-art overview of distributed MPC approaches, while at the same time making clear directions of research that deserve more attention. The core and rationale of 35 approaches are carefully explained. Moreover, detailed step-by-step algorithmic descriptions of each approach are provided. These features make the book a comprehensive guide both for those seeking an introduction to distributed MPC as well as for those ...
Leptogenesis in minimal predictive seesaw models
Björkeroth, Fredrik; de Anda, Francisco J.; de Medeiros Varzielas, Ivo; King, Stephen F.
2015-10-01
We estimate the Baryon Asymmetry of the Universe (BAU) arising from leptogenesis within a class of minimal predictive seesaw models involving two right-handed neutrinos and simple Yukawa structures with one texture zero. The two right-handed neutrinos are dominantly responsible for the "atmospheric" and "solar" neutrino masses, with Yukawa couplings to ( ν e , ν μ , ν τ ) proportional to (0, 1, 1) and (1, n, n - 2), respectively, where n is a positive integer. The neutrino Yukawa matrix is therefore characterised by two proportionality constants, with their relative phase providing a leptogenesis-PMNS link, enabling the lightest right-handed neutrino mass to be determined from neutrino data and the observed BAU. We discuss an SU(5) SUSY GUT example, where A 4 vacuum alignment provides the required Yukawa structures with n = 3, while a Z_9 symmetry fixes the relative phase to be a ninth root of unity.
Institute of Scientific and Technical Information of China (English)
魏益民; 吴和兵
2001-01-01
We discuss the incomplete semi-iterative method (ISIM) for the approximate solution of a linear fixed-point equation x = Tx + c with a bounded linear operator T acting on a complex Banach space X such that its resolvent has a pole of order k at the point 1. Sufficient conditions for the convergence of ISIM to a solution of x = Tx + c, where c belongs to the range space of (I - T)^k, are established. We show that the ISIM has the attractive feature that it is usually convergent even when the spectral radius of the operator T is greater than 1 and ind_1(T) ≥ 1. Applications to finite Markov chains are considered and illustrative examples are reported, showing that the convergence rate of the ISIM is very high.
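The baseline that ISIM refines is the plain Picard iteration x_{k+1} = T x_k + c for the fixed-point equation x = Tx + c. A minimal finite-dimensional sketch of that baseline follows (it requires spectral radius of T below 1, precisely the restriction the abstract's method is designed to relax); the matrix and right-hand side are made-up toy values:

```python
import numpy as np

def picard(T, c, x0=None, tol=1e-12, maxit=10_000):
    """Plain fixed-point iteration x_{k+1} = T @ x_k + c.
    Converges to the solution of x = Tx + c when the spectral
    radius of T is below 1."""
    x = np.zeros(len(c)) if x0 is None else x0
    for _ in range(maxit):
        x_new = T @ x + c
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x

# Small contractive example: spectral radius of T is 0.5 < 1.
T = np.array([[0.2, 0.3], [0.1, 0.4]])
c = np.array([1.0, 1.0])
x = picard(T, c)          # solves (I - T) x = c
```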
Luo, Wei; Phung, Dinh; Tran, Truyen; Gupta, Sunil; Rana, Santu; Karmakar, Chandan; Shilton, Alistair; Yearwood, John; Dimitrova, Nevenka; Ho, Tu Bao; Venkatesh, Svetha; Berk, Michael
2016-12-16
As more and more researchers turn to big data for new opportunities in biomedical discovery, machine learning models, as the backbone of big data analysis, are mentioned more often in biomedical journals. However, owing to the inherent complexity of machine learning methods, they are prone to misuse. Because of the flexibility in specifying machine learning models, the results are often insufficiently reported in research articles, hindering reliable assessment of model validity and consistent interpretation of model outputs. Our objective was to attain a set of guidelines on the use of machine learning predictive models within clinical settings, to make sure the models are correctly applied and sufficiently reported so that true discoveries can be distinguished from random coincidence. A multidisciplinary panel of machine learning experts, clinicians, and traditional statisticians was interviewed, using an iterative process in accordance with the Delphi method. The process produced a set of guidelines that consists of (1) a list of reporting items to be included in a research article and (2) a set of practical sequential steps for developing predictive models. A set of guidelines was generated to enable the correct application of machine learning models and consistent reporting of model specifications and results in biomedical research. We believe that such guidelines will accelerate the adoption of big data analysis, particularly with machine learning methods, in the biomedical research community.
Jouanna, P.; Pèpe, G.; Dweik, J.; Gouze, P.
2010-11-01
Predicting the impact of underground engineering on the environment requires knowledge of natural media at different scales. In particular, understanding the basic phenomena controlling the properties of rocks in the presence of complex fluids necessitates a detailed atomic description of the solid/fluid/solid contacts, the subject of Part I of the present study. First, building the solid interspace between two different crystals in a non-periodic situation is achieved using the ab initio and molecular mechanics code GenMol™. A description of the fluid confined within the interspace is then derived from the original genetic iterative multi-species (GIMS) algorithm implemented in the same code. This approach consists of equilibrating chemical potentials, cycle after cycle and species after species, between the confined fluid and the free natural fluid. An elementary iteration for a species k consists of different steps incrementing the number Nk of particles k, with the other numbers Nk' remaining constant. At each step, an optimum fluid composition is obtained by a genetic process distributing the fluid particles on a grid by stochastic shots, followed finally by a refining process. The effectiveness of the GIMS approach is demonstrated in the case study of a fluid confined between two (0 0 1) kaolinite faces, with apertures h varying between 4 and 10 Å, connected to a 9-species external solution [H2O, Cl-, Na+, CO2(aq), NaCl(aq), Ca2+, Mg2+, HCO3-, H3O+] with concentrations ranging from 55 to 10^-4 mol/L. The results show a drastic variation in the solute/solvent and cation/ion ratios in the confined fluid when the aperture h is lowered to less than 1 nm. These results, obtained with a very rapid convergence of the iterative algorithm combined with a very competitive genetic optimizer, are validated with high precision on a free solution. This description of contacts between crystals is original and unattainable by standard crystal interface approaches.
Methodology for dimensional variation analysis of ITER integrated systems
Energy Technology Data Exchange (ETDEWEB)
Fuentes, F. Javier, E-mail: FranciscoJavier.Fuentes@iter.org [ITER Organization, Route de Vinon-sur-Verdon—CS 90046, 13067 St Paul-lez-Durance (France); Trouvé, Vincent [Assystem Engineering & Operation Services, rue J-M Jacquard CS 60117, 84120 Pertuis (France); Cordier, Jean-Jacques; Reich, Jens [ITER Organization, Route de Vinon-sur-Verdon—CS 90046, 13067 St Paul-lez-Durance (France)
2016-11-01
Highlights: • Tokamak dimensional management methodology, based on 3D variation analysis, is presented. • The Dimensional Variation Model implementation workflow is described. • The methodology phases are described in detail, and the application of this methodology to the tolerance analysis of the ITER Vacuum Vessel is presented. • Dimensional studies are a valuable tool for the assessment of Tokamak PCRs (Project Change Requests), DRs (Deviation Requests) and NCRs (Non-Conformance Reports). - Abstract: The ITER machine consists of a large number of highly integrated complex systems, with critical functional requirements and reduced design clearances to minimize the impact on cost and performance. Tolerances and assembly accuracies in critical areas could have a serious impact on the final performance, compromising machine assembly and plasma operation. The management of tolerances allocated to part manufacture and assembly processes, as well as the control of potential deviations and early mitigation of non-compliances with the technical requirements, is a critical activity in the project life cycle. A 3D tolerance simulation analysis of the ITER Tokamak machine has been developed based on the dedicated 3DCS software. This integrated dimensional variation model is representative of Tokamak manufacturing functional tolerances and assembly processes, predicting accurate values for the amount of variation in critical areas. This paper describes the detailed methodology used to implement and update the Tokamak Dimensional Variation Model. The model is managed at system level. The methodology phases are illustrated by application to the Vacuum Vessel (VV), considering the current maturity of the VV dimensional variation model. The following topics are described in this paper: • Model description and constraints. • Model implementation workflow. • Management of input and output data. • Statistical analysis and risk assessment. The management of the integration studies based on
Liu, Haiyan; Lu, Zhenyu; Cisneros, G Andres; Yang, Weitao
2004-07-08
The determination of reaction paths for enzyme systems remains a great challenge for current computational methods. In this paper we present an efficient method for the determination of minimum energy reaction paths with the ab initio quantum mechanical/molecular mechanical approach. Our method is based on an adaptation of the path optimization procedure by Ayala and Schlegel for small molecules in gas phase, the iterative quantum mechanical/molecular mechanical (QM/MM) optimization method developed earlier in our laboratory and the introduction of a new metric defining the distance between different structures in the configuration space. In this method we represent the reaction path by a discrete set of structures. For each structure we partition the atoms into a core set that usually includes the QM subsystem and an environment set that usually includes the MM subsystem. These two sets are optimized iteratively: the core set is optimized to approximate the reaction path while the environment set is optimized to the corresponding energy minimum. In the optimization of the core set of atoms for the reaction path, we introduce a new metric to define the distances between the points on the reaction path, which excludes the soft degrees of freedom from the environment set and includes extra weights on coordinates describing chemical changes. Because the reaction path is represented by discrete structures and the optimization for each can be performed individually with very limited coupling, our method can be executed in a natural and efficient parallelization, with each processor handling one of the structures. We demonstrate the applicability and efficiency of our method by testing it on two systems previously studied by our group, triosephosphate isomerase and 4-oxalocrotonate tautomerase. In both cases the minimum energy paths for both enzymes agree with the previously reported paths.
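The alternating core/environment idea can be caricatured on a one-dimensional double-well landscape (a toy, not the authors' QM/MM procedure; the energy function, spring constant, and step size below are invented for illustration): each discrete path image carries a "core" coordinate optimized toward the path and an "environment" coordinate relaxed exactly at every cycle.

```python
import numpy as np

# Toy sketch of alternating core/environment optimization on a discrete
# path (illustrative only).  Per-image energy: E(q, s) = (q^2 - 1)^2
# + 0.5 * (s - q)^2, where q is the "core" and s the "environment"
# coordinate.  Each cycle relaxes s exactly for fixed q, then takes a
# damped gradient step on the core path with simple springs keeping the
# images spread between the fixed endpoints (constants chosen for
# numerical stability, not taken from the paper).
n_img, k_spring, step = 5, 10.0, 0.02
q = np.linspace(-1.0, 1.0, n_img)      # path between the minima q = -1, +1
s = np.zeros(n_img)

for _ in range(500):
    s = q.copy()                       # environment step: exact argmin over s
    g = 4.0 * q * (q**2 - 1.0) - (s - q)          # dE/dq at the relaxed s
    lap = q[2:] - 2.0 * q[1:-1] + q[:-2]          # discrete spring force
    q[1:-1] -= step * (g[1:-1] - k_spring * lap)  # endpoints stay fixed

print(q)   # ordered images crossing the barrier at q = 0
```

The essential features survive even in this caricature: the two sets of coordinates are optimized iteratively with very limited coupling, and each image could be handled by a separate processor, which is the parallelization the authors exploit.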
Pickl, S.
2002-09-01
This paper is concerned with a mathematical derivation of the nonlinear time-discrete Technology-Emissions Means (TEM) model. A detailed introduction to the dynamics modelling a Joint Implementation Program under the Kyoto Protocol is given at the end of the paper. As the nonlinear time-discrete dynamics tends to chaotic behaviour, the necessary introduction of control parameters into the dynamics of the TEM model leads to new results in the field of time-discrete control systems. Furthermore, the numerical results give new insights into a Joint Implementation Program and may thereby improve this important economic tool. The iterative solution presented at the end might be a useful orientation for anticipating and supporting the Kyoto process.
Comparing model predictions for ecosystem-based management
DEFF Research Database (Denmark)
Jacobsen, Nis Sand; Essington, Timothy E.; Andersen, Ken Haste
2016-01-01
Ecosystem modeling is becoming an integral part of fisheries management, but there is a need to identify differences between predictions derived from models employed for scientific and management purposes. Here, we compared two models: a biomass-based food-web model (Ecopath with Ecosim, EwE) and a size-structured fish community model. The models were compared with respect to predicted ecological consequences of fishing, to identify commonalities and differences in model predictions for the California Current fish community. We compared the models regarding direct and indirect responses to fishing on one or more species. The size-based model predicted a higher fishing mortality needed to reach maximum sustainable yield than EwE for most species. The size-based model also predicted stronger top-down effects of predator removals than EwE. In contrast, EwE predicted stronger bottom-up effects...
Kochunas, Brendan; Fitzgerald, Andrew; Larsen, Edward
2017-09-01
A central problem in nuclear reactor analysis is calculating solutions of steady-state k-eigenvalue problems with thermal hydraulic feedback. In this paper we propose and utilize a model problem that permits the theoretical analysis of iterative schemes for solving such problems. To begin, we discuss a model problem (with nonlinear cross section feedback) and its justification. We proceed with a Fourier analysis for source iteration schemes applied to the model problem. Then we analyze commonly-used iteration schemes involving non-linear diffusion acceleration and feedback. For each scheme we show (1) that they are conditionally stable, (2) the conditions that lead to instability, and (3) that traditional relaxation approaches can improve stability. Lastly, we propose a new iteration scheme that theory predicts is an improvement upon the existing methods.
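A minimal numerical caricature of such a scheme (a toy, not the paper's model problem) is a one-group, 1-D diffusion slab whose absorption cross section depends on the local flux as a stand-in for thermal-hydraulic feedback, solved by source (power) iteration with an under-relaxed flux update; the feedback law and all constants below are invented for illustration.

```python
import numpy as np

# Toy k-eigenvalue problem with flux-dependent feedback (illustrative
# only): power iteration on a 1-D one-group diffusion slab, with a
# relaxation factor omega damping the nonlinear outer iteration, in the
# spirit of the relaxation approaches the paper analyzes.
n, h = 50, 1.0 / 50
D, nu_sigma_f = 1.0, 12.0

def loss_matrix(phi):
    sigma_a = 0.5 + 0.2 * phi            # assumed feedback law (invented)
    A = np.diag(2.0 * D / h**2 + sigma_a)
    off = -D / h**2 * np.ones(n - 1)
    A += np.diag(off, 1) + np.diag(off, -1)
    return A

phi, k, omega = np.ones(n), 1.0, 0.5
for _ in range(200):
    A = loss_matrix(phi)                                 # feedback update
    phi_new = np.linalg.solve(A, nu_sigma_f * phi) / k   # source iteration
    k *= phi_new.sum() / phi.sum()                       # eigenvalue update
    phi = (1.0 - omega) * phi + omega * phi_new          # relaxed flux update
    phi *= n / phi.sum()                                 # fix the power level

print(round(k, 4))   # converged multiplication factor of the toy problem
```

With omega = 1 (no relaxation) this toy converges too, but for stronger feedback laws the relaxed update is what keeps the outer iteration stable, which is the trade-off the paper's Fourier analysis quantifies.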
Collins, Michael G; Juvina, Ion; Gluck, Kevin A
2016-01-01
When playing games of strategic interaction, such as iterated Prisoner's Dilemma and iterated Chicken Game, people exhibit specific within-game learning (e.g., learning a game's optimal outcome) as well as transfer of learning between games (e.g., a game's optimal outcome occurring at a higher proportion when played after another game). The reciprocal trust players develop during the first game is thought to mediate transfer of learning effects. Recently, a computational cognitive model using a novel trust mechanism has been shown to account for human behavior in both games, including the transfer between games. We present the results of a study in which we evaluate the model's a priori predictions of human learning and transfer in 16 different conditions. The model's predictive validity is compared against five model variants that lacked a trust mechanism. The results suggest that a trust mechanism is necessary to explain human behavior across multiple conditions, even when a human plays against a non-human agent. The addition of a trust mechanism to the other learning mechanisms within the cognitive architecture, such as sequence learning, instance-based learning, and utility learning, leads to better prediction of the empirical data. It is argued that computational cognitive modeling is a useful tool for studying trust development, calibration, and repair.
Ozone Concentration Prediction via Spatiotemporal Autoregressive Model With Exogenous Variables
Kamoun, W.; Senoussi, R.
2009-04-01
concentration recorded in n=42 stations during the year 2005 within a southern region of France, covering an area of approximately 10565 km². Meteorological covariates are the daily maxima of temperature, wind speed, daily maxima of humidity, and atmospheric pressure. Since the meteorological factors are not recorded at the ozone monitoring sites, preliminary interpolation techniques were used and subsequently compared (Gaussian conditional simulation, ordinary kriging, or kriging with external drift). Concluding remarks: From the statistical point of view, both the simulation study and the data analysis showed fairly robust behaviour of the estimation procedures. In both cases, the analysis of residuals proved a significant improvement of prediction error within this framework. From the environmental point of view, the ability to account for pertinent local and dynamical meteorological covariates clearly provides a useful tool for prediction methods. Bib [1]: Pfeifer, P.E.; Deutsch, S.J. (1980). "A Three-Stage Iterative Procedure for Space-Time Modelling." Technometrics 22: 35-47. Bib [2]: Giacomini, R.; Granger, C.W.J. (2002). "Aggregation of Space-Time Processes." Department of Economics, University of California, San Diego.
Remaining Useful Lifetime (RUL): Probabilistic Predictive Model
Directory of Open Access Journals (Sweden)
Ephraim Suhir
2011-01-01
Reliability evaluations and assurances cannot be delayed until the device (system) is fabricated and put into operation. Reliability of an electronic product should be conceived at the early stages of its design; implemented during manufacturing; evaluated (considering customer requirements and the existing specifications) by electrical, optical and mechanical measurements and testing; checked (screened) during manufacturing (fabrication); and, if necessary and appropriate, maintained in the field during the product's operation. A simple and physically meaningful probabilistic predictive model is suggested for the evaluation of the remaining useful lifetime (RUL) of an electronic device (system) after an appreciable deviation from its normal operation conditions has been detected, and the increase in the failure rate and the change in the configuration of the wear-out portion of the bathtub curve have been assessed. The general concepts are illustrated by numerical examples. The model can be employed, along with other PHM forecasting and inference tools and means, to evaluate and maintain a high level of reliability (probability of non-failure) of a device (system) at the operation stage of its lifetime.
A Predictive Model of Geosynchronous Magnetopause Crossings
Dmitriev, A.; Chao, J.-K.
2013-01-01
We have developed a model predicting whether or not the magnetopause crosses geosynchronous orbit at a given location for a given solar wind pressure Psw, Bz component of the interplanetary magnetic field (IMF), and geomagnetic conditions characterized by the 1-min SYM-H index. The model is based on more than 300 geosynchronous magnetopause crossings (GMCs) and about 6000 minutes during which geosynchronous satellites of the GOES and LANL series were located in the magnetosheath (so-called MSh intervals) from 1994 to 2001. Minimizing the Psw required for GMCs and MSh intervals at various locations, Bz, and SYM-H allows describing both the effect of magnetopause dawn-dusk asymmetry and the saturation of the Bz influence for very large southward IMF. The asymmetry is strong for large negative Bz and almost disappears when Bz is positive. We found that the larger the amplitude of negative SYM-H, the lower the solar wind pressure required for GMCs. We attribute this effect to a depletion of the dayside magnetic field by a storm-time intensification of t...
Predictive modeling for EBPC in EBDW
Zimmermann, Rainer; Schulz, Martin; Hoppe, Wolfgang; Stock, Hans-Jürgen; Demmerle, Wolfgang; Zepka, Alex; Isoyan, Artak; Bomholt, Lars; Manakli, Serdar; Pain, Laurent
2009-10-01
We demonstrate a flow for e-beam proximity correction (EBPC) in e-beam direct write (EBDW) wafer manufacturing processes, presenting a solution that covers all steps from the generation of a test pattern for (experimental or virtual) measurement data creation, through e-beam model fitting and proximity effect correction (PEC), to verification of the results. We base our approach on a predictive, physical e-beam simulation tool, with the possibility to complement this with experimental data, and the goal of preparing EBPC methods for the advent of high-volume EBDW tools. As an example, we apply and compare dose correction and geometric correction for low and high electron energies on 1D and 2D test patterns. In particular, we show some results of model-based geometric correction as it is typical for the optical case, but enhanced for the particularities of e-beam technology. The results are used to discuss PEC strategies with respect to short- and long-range effects.
Approximate iterative algorithms
Almudevar, Anthony Louis
2014-01-01
Iterative algorithms often rely on approximate evaluation techniques, which may include statistical estimation, computer simulation or functional approximation. This volume presents methods for the study of approximate iterative algorithms, providing tools for the derivation of error bounds and convergence rates, and for the optimal design of such algorithms. Techniques of functional analysis are used to derive analytical relationships between approximation methods and convergence properties for general classes of algorithms. This work provides the necessary background in functional analysis a
Approximation of Iteration Number for Gauss-Seidel Using Redlich-Kister Polynomial
Directory of Open Access Journals (Sweden)
M. K. Hasan
2010-01-01
Problem statement: The development of mathematical models based on sets of observed data plays a crucial role in describing and predicting phenomena in science, engineering and economics. The main purpose of this study was therefore to compare the efficiency of the Arithmetic Mean (AM), Geometric Mean (GM) and Explicit Group (EG) iterative methods for solving systems of linear equations via estimation of the unknown parameters in linear models. Approach: The systems of linear equations for the linear models were generated using the least squares method based on (m+1) sets of observed data, with the number of Gauss-Seidel iterations recorded for various grid sizes. Two types of linear models were considered: piecewise linear polynomials and piecewise Redlich-Kister polynomials. All unknown parameters of these models were estimated and calculated using the three proposed iterative methods. Results: Through several numerical experiments, the formulations of the two proposed models showed that use of the third-order Redlich-Kister polynomial gives higher accuracy than the linear polynomial case. Conclusion: The efficiency of the AM and GM iterative methods based on the Redlich-Kister polynomial is superior to that of the EG iterative method.
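A bare-bones version of the iteration counted in the study can be sketched as follows (a hedged illustration: the data, the linear model, and the tolerance are invented, and only plain Gauss-Seidel is shown, not the AM/GM/EG variants):

```python
import numpy as np

# Estimate the parameters of a linear model y = a + b*t by least squares,
# solving the normal equations with Gauss-Seidel iteration and recording
# how many sweeps are needed (illustrative data and tolerance).
def gauss_seidel(M, rhs, tol=1e-10, max_iter=10_000):
    x = np.zeros_like(rhs)
    for sweep in range(1, max_iter + 1):
        x_old = x.copy()
        for i in range(len(rhs)):
            s = M[i] @ x - M[i, i] * x[i]      # off-diagonal contribution
            x[i] = (rhs[i] - s) / M[i, i]
        if np.max(np.abs(x - x_old)) < tol:
            return x, sweep
    return x, max_iter

t = np.linspace(0.0, 1.0, 21)                  # observation grid
y = 1.0 + 2.0 * t                              # noiseless "observed" data
A = np.column_stack([np.ones_like(t), t])      # design matrix for y = a + b*t
coef, n_sweeps = gauss_seidel(A.T @ A, A.T @ y)
print(coef, n_sweeps)                          # coefficients near (1, 2)
```

Because the normal matrix A^T A is symmetric positive definite, Gauss-Seidel is guaranteed to converge here; the study compares how quickly different iterative variants reach this point for its two polynomial model families.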
Kaplanis, S.; Kaplani, E.
2012-01-01
This paper outlines and formulates a compact and effective simulation model that predicts the performance of single- and double-glazed flat-plate collectors. The model uses an elaborated iterative simulation algorithm and provides the collector top losses, the glass cover temperatures, the collector absorber temperature, the collector fluid outlet temperature, the system efficiency, and the thermal gain for any operational and environmental conditions. It is a numerical approach based on simu...
DEFF Research Database (Denmark)
Sokoler, Leo Emil; Frison, Gianluca; Edlund, Kristian
2013-01-01
In this paper, we develop an efficient interior-point method (IPM) for the linear programs arising in economic model predictive control of linear systems. The novelty of our algorithm is that it combines a homogeneous and self-dual model with a specialized Riccati iteration procedure. We test the algorithm in a conceptual study of power systems management. Simulations show that, in comparison to state-of-the-art software implementations of IPMs, our method is significantly faster and scales in a favourable way.
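The structure exploited by a Riccati procedure can be illustrated on the standard finite-horizon LQR subproblem (a simplified stand-in: the paper embeds the recursion inside an interior-point method for linear-program MPC, which is not reproduced here; the system matrices below are invented):

```python
import numpy as np

# Backward Riccati recursion for finite-horizon LQR (illustrative):
# cost sum x'Qx + u'Ru, dynamics x+ = A x + B u.  The sweep costs O(N)
# in the horizon length N, versus O(N^3) for a dense KKT solve, which is
# the kind of structure exploitation Riccati-based MPC solvers rely on.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])          # discretized double integrator (toy)
B = np.array([[0.005],
              [0.1]])
Q = np.eye(2)
R = np.array([[0.1]])

N = 50
P = Q.copy()                        # terminal cost
gains = []
for _ in range(N):                  # backward Riccati sweep
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)
    gains.append(K)
gains.reverse()                     # gains[k] is the feedback at stage k

# forward simulation from x0 with u_k = -K_k x_k
x = np.array([1.0, 0.0])
for K in gains:
    x = A @ x + B @ (-K @ x)
print(x)                            # driven close to the origin
```

In the paper's setting the same backward/forward sweep pattern is applied to the Newton systems inside each interior-point iteration rather than to an LQR cost, but the per-stage factorization idea is the same.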
Federici, Gianfranco; Raffray, A. René
1997-04-01
The transient thermal model RACLETTE (acronym of Rate Analysis Code for pLasma Energy Transfer Transient Evaluation) described in Part I of this paper is applied here to analyse the heat transfer and erosion effects of various slow (100 ms-10 s) high power energy transients on the actively cooled plasma facing components (PFCs) of the International Thermonuclear Experimental Reactor (ITER). These have a strong bearing on the PFC design and need careful analysis. The relevant parameters affecting the heat transfer during the plasma excursions are established. The temperature variation with time and space is evaluated together with the extent of vaporisation and melting (the latter only for metals) for the different candidate armour materials considered for the design (i.e., Be for the primary first wall, Be and CFCs for the limiter, and Be, W, and CFCs for the divertor plates), including for certain cases low-density vapour shielding effects. The critical heat flux, the change of the coolant parameters, and the possible severe degradation of the coolant heat removal capability that could result under certain conditions during these transients, for example for the limiter, are also evaluated. Based on the results, the design implications for the heat removal performance and erosion damage of the various ITER PFCs are critically discussed and some recommendations are made for the selection of the most adequate protection materials and optimum armour thickness.
Model for predicting mountain wave field uncertainties
Damiens, Florentin; Lott, François; Millet, Christophe; Plougonven, Riwal
2017-04-01
Studying the propagation of acoustic waves through the troposphere requires knowledge of wind speed and temperature gradients from the ground up to about 10-20 km. Typical planetary boundary layer flows are known to present vertical low-level shears that can interact with mountain waves, thereby triggering small-scale disturbances. Resolving these fluctuations for long-range propagation problems is, however, not feasible because of computer memory/time restrictions, and thus they need to be parameterized. When the disturbances are small enough, these fluctuations can be described by linear equations. Previous works by the co-authors have shown that the critical layer dynamics that occur near the ground produce large horizontal flows and buoyancy disturbances that result in intense downslope winds and gravity wave breaking. While these phenomena manifest almost systematically for high Richardson numbers and when the boundary layer depth is relatively small compared to the mountain height, the process by which static stability affects downslope winds remains unclear. In the present work, new linear mountain gravity wave solutions are tested against numerical predictions obtained with the Weather Research and Forecasting (WRF) model. For Richardson numbers typically larger than unity, the mesoscale model is used to quantify the effect of neglected nonlinear terms on downslope winds and mountain wave patterns. At these regimes, the large downslope winds transport warm air, a so-called "Foehn" effect that can impact sound propagation properties. The sensitivity of small-scale disturbances to the Richardson number is quantified using two-dimensional spectral analysis. It is shown through a pilot study of subgrid-scale fluctuations of boundary layer flows over realistic mountains that the cross-spectrum of the mountain wave field is made up of the same components found in WRF simulations. The impact of each individual component on acoustic wave propagation is discussed in terms of
Directory of Open Access Journals (Sweden)
Jing Lu
2014-11-01
We propose a weather prediction model in this article based on a neural network and fuzzy inference system (NFIS-WPM), and then apply it to predict daily fuzzy precipitation given meteorological premises for testing. The model consists of two parts: the first part is the "fuzzy rule-based neural network", which simulates sequential relations among fuzzy sets using an artificial neural network; the second part is the "neural fuzzy inference system", which is based on the first part but can learn new fuzzy rules from the previous ones according to the algorithm we propose. NFIS-WPM (High Pro) and NFIS-WPM (Ave) are improved versions of this model. It is well known that the need for accurate weather prediction is apparent when considering the benefits. However, the excessive pursuit of accuracy in weather prediction makes some of the "accurate" prediction results meaningless, and numerical prediction models are often complex and time-consuming. By adapting this novel model to a precipitation prediction problem, we make the predicted outcomes of precipitation more accurate and the prediction methods simpler than with a complex numerical forecasting model that would occupy large computation resources, be time-consuming, and have a low predictive accuracy rate. Accordingly, we achieve more accurate predictive precipitation results than with traditional artificial neural networks that have low predictive accuracy.
RFI modeling and prediction approach for SATOP applications: RFI prediction models
Nguyen, Tien M.; Tran, Hien T.; Wang, Zhonghai; Coons, Amanda; Nguyen, Charles C.; Lane, Steven A.; Pham, Khanh D.; Chen, Genshe; Wang, Gang
2016-05-01
This paper describes a technical approach for the development of RFI prediction models using carrier synchronization loop when calculating Bit or Carrier SNR degradation due to interferences for (i) detecting narrow-band and wideband RFI signals, and (ii) estimating and predicting the behavior of the RFI signals. The paper presents analytical and simulation models and provides both analytical and simulation results on the performance of USB (Unified S-Band) waveforms in the presence of narrow-band and wideband RFI signals. The models presented in this paper will allow the future USB command systems to detect the RFI presence, estimate the RFI characteristics and predict the RFI behavior in real-time for accurate assessment of the impacts of RFI on the command Bit Error Rate (BER) performance. The command BER degradation model presented in this paper also allows the ground system operator to estimate the optimum transmitted SNR to maintain a required command BER level in the presence of both friendly and un-friendly RFI sources.
Selvin, Joseph; Sathiyanarayanan, Ganesan; Lipton, Anuj N; Al-Dhabi, Naif Abdullah; Valan Arasu, Mariadhas; Kiran, George S
2016-01-01
Marine actinobacteria producing important biological macromolecules, such as lipopeptide and glycolipid biosurfactants, were analyzed and their potential linkage to type II polyketide synthase (PKS) genes was explored. A unique feature of type II PKS genes is their high amino acid (AA) sequence homology and conserved gene organization. These enzymes mediate the biosynthesis of polyketide natural products of enormous structural complexity and chemical diversity by combinatorial use of various domains. Therefore, deciphering how the order of the AA sequences encoded by PKS domains tailors the chemical structure of polyketide analogs still remains a great challenge. The present work deals with an in vitro and in silico analysis of type II PKS genes from five actinobacterial species to correlate KS domain architecture and structural features. Our analysis reveals the unique protein domain organization of iterative type II PKS and the KS domain of marine actinobacteria. The findings of this study have implications for metabolic pathway reconstruction and the design of semi-synthetic genomes to achieve rational design of novel natural products.
Directory of Open Access Journals (Sweden)
George Seghal Kiran
2016-02-01
Marine actinobacteria producing important biological macromolecules, such as lipopeptide and glycolipid biosurfactants, were analyzed and their potential linkage to type II polyketide synthase (PKS) genes was also explored. A unique feature of type II PKS genes is their high amino acid sequence homology and conserved gene organization. These enzymes mediate the biosynthesis of polyketide natural products of enormous structural complexity and chemical diversity by combinatorial use of various domains. Therefore, deciphering how the order of the amino acid sequences encoded by PKS domains tailors the chemical structure of polyketide analogues still remains a great challenge. The present work deals with an in vitro and in silico analysis of type II PKS genes from five actinobacterial species with known PKS and metabolic products, to correlate the domain architecture and structural features shared with known PKS proteins. Our analysis reveals the unique protein domain organization of iterative type II PKS and the KS domain of marine actinobacteria. The findings of this study have implications for metabolic pathway reconstruction and the design of semi-synthetic genomes to achieve rational design of novel natural products.
Fitzgerald, M.; Sharapov, S. E.; Rodrigues, P.; Borba, D.
2016-11-01
We use the HAGIS code to compute the nonlinear stability of the Q = 10 ITER baseline scenario to toroidal Alfvén eigenmodes (TAE) and the subsequent effects of these modes on fusion alpha-particle redistribution. Our calculations build upon an earlier linear stability survey (Rodrigues et al 2015 Nucl. Fusion 55 083003) which provides accurate values of bulk ion, impurity ion and electron thermal Landau damping for our HAGIS calculations. Nonlinear calculations of up to 129 coupled TAEs with toroidal mode numbers in the range n = 1-35 have been performed. The effects of frequency sweeping were also included to examine possible phase-space hole and clump convective transport. We find that even-parity core-localised modes are dominant (expected from linear theory), and that linearly stable global modes are destabilised nonlinearly. Landau damping is found to be important in reducing the saturation amplitudes of coupled modes to below δB_r/B_0 ~ 3 × 10^-4. For these amplitudes, stochastic transport of alpha-particles occurs in a narrow region where predominantly core-localised modes are found, implying the formation of a transport barrier at r/a ≈ 0.5, beyond which the weakly driven global modes are found. We find that for flat q profiles in this baseline scenario, alpha-particle transport losses and redistribution by TAEs are minimal.
Prediction models: the right tool for the right problem
Kappen, Teus H.; Peelen, Linda M.
2016-01-01
PURPOSE OF REVIEW: Perioperative prediction models can help to improve personalized patient care by providing individual risk predictions to both patients and providers. However, the scientific literature on prediction model development and validation can be quite technical and challenging to unders
Energy Technology Data Exchange (ETDEWEB)
Vdovin, V. [NRC Kurchatov Institute Tokamak Physics Institute, Moscow (Russian Federation)
2014-02-12
An innovative concept and 3D full wave code modeling of off-axis current drive by RF waves in large-scale tokamaks and reactors (FNSF-AT, ITER and DEMO) for steady-state operation with high efficiency was proposed [1] to overcome problems well known for the LH method [2]. The scheme uses helicon radiation (fast magnetosonic waves at high (20-40) IC frequency harmonics) at frequencies of 500-1000 MHz, propagating in the outer regions of the plasmas with a rotational transform. It is expected that the current generated by helicons will help to obtain regimes with negative magnetic shear and an internal transport barrier to ensure stability at high normalized plasma pressure β_N > 3 (the so-called Advanced scenarios) of interest for FNSF and the commercial reactor. Modeling with the full wave three-dimensional codes PSTELION and STELEC2 showed flexible control of the current profile in the reactor plasmas of ITER, FNSF-AT and DEMO [2,3], using multiple frequencies, the positions of the antennae, and the toroidal wave slow-down. Also presented are the results of simulations of current generation by helicons in the tokamaks DIII-D, T-15MD and JT-60SA [3]. In DEMO and a Power Plant the antenna is strongly simplified, being an analogue of a mirror-based ECRF launcher, as will be shown. For spherical tokamaks the helicon excitation scheme does not provide efficient off-axis CD profile flexibility, owing to strong coupling of helicons with the O-mode (also through the boundary conditions in low-aspect machines) and the intrinsically large population of trapped electrons, as is shown by STELION modeling for the NSTX tokamak. A brief history of helicon experimental and modeling exploration in straight plasmas, tokamaks and tokamak-based fusion reactor projects is given, including the planned joint DIII-D - Kurchatov Institute experiment on helicon CD [1].
Foundation Settlement Prediction Based on a Novel NGM Model
Directory of Open Access Journals (Sweden)
Peng-Yu Chen
2014-01-01
Prediction of foundation or subgrade settlement is very important during engineering construction. Since many settlement-time sequences follow a nonhomogeneous exponential trend, a novel grey forecasting model called the NGM(1,1,k,c) model is proposed in this paper. With an optimized whitenization differential equation, the proposed NGM(1,1,k,c) model satisfies the white exponential law coincidence property and can predict a pure nonhomogeneous exponential sequence precisely. Two case studies verify the predictive performance of the NGM(1,1,k,c) model for settlement prediction. The results show that the model achieves excellent prediction accuracy; it is therefore well suited to the simulation and prediction of approximately nonhomogeneous exponential sequences and has strong application value in settlement prediction.
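The NGM(1,1,k,c) model extends the classical GM(1,1) grey model. As a hedged illustration of the base procedure only (not the paper's optimized variant), the sketch below fits GM(1,1) by accumulation, least squares for the grey parameters, and the exponential time-response function; the function name and sample data are ours.

```python
import math

def gm11_forecast(x0, n_ahead):
    """Fit the classical GM(1,1) grey model to the positive sequence x0
    and return fitted values plus n_ahead forecast steps."""
    n = len(x0)
    # 1-AGO: first-order accumulated generating operation
    x1 = [sum(x0[:i + 1]) for i in range(n)]
    # Background values: means of consecutive accumulated points
    z1 = [0.5 * (x1[i] + x1[i + 1]) for i in range(n - 1)]
    # Least squares for the grey equation x0(k) + a*z1(k-1) = b
    m, sz, szz = n - 1, sum(z1), sum(z * z for z in z1)
    sy = sum(x0[1:])
    szy = sum(z * y for z, y in zip(z1, x0[1:]))
    det = m * szz - sz * sz
    a = (sz * sy - m * szy) / det    # development coefficient
    b = (szz * sy - sz * szy) / det  # grey input
    # Time-response function of the whitenization ODE, then 1-IAGO
    def x1_hat(k):
        return (x0[0] - b / a) * math.exp(-a * k) + b / a
    return [x0[0]] + [x1_hat(k) - x1_hat(k - 1) for k in range(1, n + n_ahead)]
```

For a near-exponential settlement record the one-step errors stay within a few percent; the NGM(1,1,k,c) refinement targets sequences whose trend is nonhomogeneous exponential, which plain GM(1,1) cannot fit exactly.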
Predictability of the Indian Ocean Dipole in the coupled models
Liu, Huafeng; Tang, Youmin; Chen, Dake; Lian, Tao
2017-03-01
In this study, the Indian Ocean Dipole (IOD) predictability, measured by the Indian Dipole Mode Index (DMI), is comprehensively examined at the seasonal time scale, including its actual prediction skill and potential predictability, using the ENSEMBLES multi-model ensembles and the recently developed information-based theoretical framework of predictability. It was found that all model predictions have useful skill, normally defined as an anomaly correlation coefficient larger than 0.5, only at around 2-3 month leads. This is mainly because false alarms become more frequent in the predictions as lead time increases. The DMI predictability has significant seasonal variation, and predictions whose target seasons are boreal summer (JJA) and autumn (SON) are more reliable than those for other seasons. All of the models fail to predict the IOD onset before May and suffer from the winter (DJF) predictability barrier. The potential predictability study indicates that, with improvements in model development and initialization, the prediction of IOD onset is likely to improve, but the winter barrier cannot be overcome. The IOD predictability also has decadal variation, with high skill during the 1960s and the early 1990s and low skill during the early 1970s and early 1980s, which is consistent with the potential predictability. The main factors controlling the IOD predictability, including its seasonal and decadal variations, are also analyzed in this study.
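The "useful skill" threshold above is an anomaly correlation coefficient (ACC) greater than 0.5. A minimal sketch of how ACC is computed from paired forecast and observed series follows; the function name and any data are illustrative, not from the study.

```python
def anomaly_correlation(forecast, observed):
    """Pearson correlation between forecast and observed anomalies
    (deviations from each series' own mean)."""
    n = len(forecast)
    f_mean = sum(forecast) / n
    o_mean = sum(observed) / n
    f_anom = [f - f_mean for f in forecast]
    o_anom = [o - o_mean for o in observed]
    cov = sum(f * o for f, o in zip(f_anom, o_anom))
    norm = (sum(f * f for f in f_anom) * sum(o * o for o in o_anom)) ** 0.5
    return cov / norm
```

In operational practice the anomalies are usually taken relative to a long-term climatology rather than the sample mean, but the correlation step is the same.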
Kargoll, Boris; Omidalizarandi, Mohammad; Loth, Ina; Paffenholz, Jens-André; Alkhatib, Hamza
2017-09-01
In this paper, we investigate a linear regression time series model of possibly outlier-afflicted observations and autocorrelated random deviations. This colored noise is represented by a covariance-stationary autoregressive (AR) process, in which the independent error components follow a scaled (Student's) t-distribution. This error model allows for the stochastic modeling of multiple outliers and for an adaptive robust maximum likelihood (ML) estimation of the unknown regression and AR coefficients, the scale parameter, and the degrees of freedom of the t-distribution. This approach is meant to be an extension of known estimators, which tend to focus only on the regression model, or on the AR error model, or on normally distributed errors. For the purpose of ML estimation, we derive an expectation conditional maximization either (ECME) algorithm, which leads to an easy-to-implement version of iteratively reweighted least squares. The estimation performance of the algorithm is evaluated via Monte Carlo simulations for a Fourier as well as a spline model in connection with AR colored noise models of different orders and with three different sampling distributions generating the white noise components. We apply the algorithm to a vibration dataset recorded by a high-accuracy, single-axis accelerometer, focusing on the evaluation of the estimated AR colored noise model.
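The core of the scheme described above is iteratively reweighted least squares (IRLS) with Student-t weights. The sketch below is a deliberately simplified illustration: a plain straight-line model with fixed degrees of freedom nu and no AR(p) noise term, whereas the paper's ECME algorithm estimates all of these jointly. Function name and test data are ours.

```python
def irls_t_line(x, y, nu=4.0, iters=50):
    """Robustly fit y ~ a + b*x; t-based weights down-weight outliers."""
    n = len(x)
    w = [1.0] * n
    a = b = 0.0
    for _ in range(iters):
        # CM-step: weighted least squares with the current weights
        sw = sum(w)
        sx = sum(wi * xi for wi, xi in zip(w, x))
        sy = sum(wi * yi for wi, yi in zip(w, y))
        sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
        sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
        det = sw * sxx - sx * sx
        a = (sxx * sy - sx * sxy) / det
        b = (sw * sxy - sx * sy) / det
        # E-step: t-distribution weights w_i = (nu+1) / (nu + r_i^2 / s^2)
        r = [yi - a - b * xi for xi, yi in zip(x, y)]
        s2 = max(sum(wi * ri * ri for wi, ri in zip(w, r)) / n, 1e-12)
        w = [(nu + 1.0) / (nu + ri * ri / s2) for ri in r]
    return a, b
```

As nu grows the weights tend to 1 and the fit reduces to ordinary least squares; small nu makes the estimator increasingly outlier-resistant.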
Leptogenesis in minimal predictive seesaw models
Energy Technology Data Exchange (ETDEWEB)
Björkeroth, Fredrik [School of Physics and Astronomy, University of Southampton,Southampton, SO17 1BJ (United Kingdom); Anda, Francisco J. de [Departamento de Física, CUCEI, Universidad de Guadalajara,Guadalajara (Mexico); Varzielas, Ivo de Medeiros; King, Stephen F. [School of Physics and Astronomy, University of Southampton,Southampton, SO17 1BJ (United Kingdom)
2015-10-15
We estimate the Baryon Asymmetry of the Universe (BAU) arising from leptogenesis within a class of minimal predictive seesaw models involving two right-handed neutrinos and simple Yukawa structures with one texture zero. The two right-handed neutrinos are dominantly responsible for the "atmospheric" and "solar" neutrino masses, with Yukawa couplings to (ν_e, ν_μ, ν_τ) proportional to (0,1,1) and (1,n,n-2), respectively, where n is a positive integer. The neutrino Yukawa matrix is therefore characterised by two proportionality constants, with their relative phase providing a leptogenesis-PMNS link, enabling the lightest right-handed neutrino mass to be determined from neutrino data and the observed BAU. We discuss an SU(5) SUSY GUT example, where A_4 vacuum alignment provides the required Yukawa structures with n=3, while a ℤ_9 symmetry fixes the relative phase to be a ninth root of unity.
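As an explicit rendering of the structure quoted above (our notation, with overall constants a and b), the neutrino Yukawa matrix in the (ν_e, ν_μ, ν_τ) basis, with columns coupling to the "atmospheric" and "solar" right-handed neutrinos, is

```latex
Y_\nu \;=\;
\begin{pmatrix}
0 & b \\
a & n\,b \\
a & (n-2)\,b
\end{pmatrix}
\;\longrightarrow\;
\begin{pmatrix}
0 & b \\
a & 3b \\
a & b
\end{pmatrix}
\quad (n = 3),
\qquad
\eta \equiv \arg(b/a).
```

Only the magnitudes of a and b and their relative phase η are physical here; in the SU(5) example the ℤ_9 symmetry fixes η to a ninth root of unity, which is what makes the leptogenesis prediction sharp.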
QSPR Models for Octane Number Prediction
Directory of Open Access Journals (Sweden)
Jabir H. Al-Fahemi
2014-01-01
Quantitative structure-property relationship (QSPR) modeling is performed to predict the octane number of hydrocarbons by correlating the property with parameters calculated from molecular structure: molecular mass M, hydration energy EH, boiling point BP, octanol/water distribution coefficient logP, molar refractivity MR, critical pressure CP, critical volume CV, and critical temperature CT. Principal component analysis (PCA) and multiple linear regression (MLR) were performed to examine the relationship between these parameters and the octane number of hydrocarbons; the PCA results explain the interrelationships between the octane number and the individual variables, and the correlation coefficients were calculated in MS Excel. The data set was split into a training set of 40 hydrocarbons and a validation set of 25 hydrocarbons. The linear relationship between the selected descriptors and the octane number has coefficient of determination R² = 0.932, statistical significance F = 53.21, and standard error s = 7.7. Applying the QSPR model to the validation set gave R²_CV = 0.942 and s = 6.328.
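The MLR step of such a QSPR workflow can be sketched as follows: fit ordinary least squares on a training set of descriptors, then score a held-out validation set with the coefficient of determination R². The descriptor values below are synthetic and the function names are ours; only the procedure mirrors the study.

```python
def fit_mlr(X, y):
    """Ordinary least squares via the normal equations. X is a list of
    descriptor rows (no intercept column; one is added here). Returns
    coefficients [b0, b1, ...]."""
    A = [[1.0] + list(row) for row in X]          # prepend intercept column
    p = len(A[0])
    # Augmented normal equations: rows of [A^T A | A^T y]
    M = [[sum(A[k][i] * A[k][j] for k in range(len(A))) for j in range(p)]
         + [sum(A[k][i] * y[k] for k in range(len(A)))] for i in range(p)]
    for i in range(p):                            # Gaussian elimination
        piv = max(range(i, p), key=lambda r: abs(M[r][i]))
        M[i], M[piv] = M[piv], M[i]               # partial pivoting
        for r in range(i + 1, p):
            f = M[r][i] / M[i][i]
            M[r] = [mr - f * mi for mr, mi in zip(M[r], M[i])]
    beta = [0.0] * p
    for i in reversed(range(p)):                  # back substitution
        beta[i] = (M[i][p] - sum(M[i][j] * beta[j]
                                 for j in range(i + 1, p))) / M[i][i]
    return beta

def r_squared(X, y, beta):
    """Coefficient of determination R^2 of model beta on data (X, y)."""
    pred = [beta[0] + sum(b * v for b, v in zip(beta[1:], row)) for row in X]
    ybar = sum(y) / len(y)
    ss_res = sum((yi - pi) ** 2 for yi, pi in zip(y, pred))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot
```

Scoring the model on data it was not fitted to, as the study does with its 25-compound validation set, is what guards against the overfitting that a high training R² alone cannot rule out.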