Predictive integrated modelling for ITER scenarios
International Nuclear Information System (INIS)
Artaud, J.F.; Imbeaux, F.; Aniel, T.; Basiuk, V.; Eriksson, L.G.; Giruzzi, G.; Hoang, G.T.; Huysmans, G.; Joffrin, E.; Peysson, Y.; Schneider, M.; Thomas, P.
2005-01-01
The uncertainty on the prediction of ITER scenarios is evaluated. Two transport models that have been extensively validated against the multi-machine database are used to compute the transport coefficients. The first model is GLF23; the second, called Kiauto, is a model in which the profile of the diffusion coefficient is a gyro-Bohm-like analytical function, renormalized in order to get profiles consistent with a given global energy confinement scaling. The CRONOS package of codes is used; it gives access to the dynamics of the discharge and allows the study of the interplay between heat transport, current diffusion and sources. The main motivation of this work is to study the influence of parameters such as plasma current, heat, density, impurities and toroidal momentum transport. We can draw the following conclusions: 1) the target Q = 10 can be obtained in the ITER hybrid scenario at I_p = 13 MA, using either the DS03 two-term scaling or the GLF23 model based on the same pedestal; 2) at I_p = 11.3 MA, Q = 10 can be reached only by assuming a very peaked pressure profile and a low pedestal; 3) at fixed Greenwald fraction, Q increases with density peaking; 4) achieving a stationary q-profile with q > 1 requires a large non-inductive current fraction (80%) that could be provided by 20 to 40 MW of LHCD; and 5) owing to the high temperature the q-profile penetration is delayed and q = 1 is reached only after about 600 s in the ITER hybrid scenario at I_p = 13 MA, in the absence of active q-profile control. (A.C.)
Test of 1-D transport models, and their predictions for ITER
International Nuclear Information System (INIS)
Mikkelsen, D.; Bateman, G.; Boucher, D.
2001-01-01
A number of proposed tokamak thermal transport models are tested by comparing their predictions with measurements from several tokamaks. The necessary data have been provided for a total of 75 discharges from C-Mod, DIII-D, JET, JT-60U, T10, and TFTR. A standard prediction methodology has been developed, and three codes have been benchmarked; these 'standard' codes have been relied on for testing most of the transport models. While a wide range of physical transport processes has been tested, no single model has emerged as clearly superior to all competitors for simulating H-mode discharges. In order to winnow the field, further tests of the effect of sheared flows and of the 'stiffness' of transport are planned. Several of the models have been used to predict ITER performance, with widely varying results. With some transport models ITER's predicted fusion power depends strongly on the 'pedestal' temperature, but ∼1 GW (Q = 10) is predicted for most models if the pedestal temperature is at least 4 keV. (author)
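The benchmarking methodology summarized above — one figure of merit applied uniformly across discharges and models — can be sketched as follows. The discharge names come from the abstract, but every profile value and both "models" below are synthetic numbers invented purely to illustrate the scoring, not real tokamak data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "measured" temperature profiles for a few discharges
measured = {d: rng.random(20) + 1.0 for d in ("C-Mod", "DIII-D", "JET")}

# Two hypothetical transport models: one 5% high everywhere, one with an offset
models = {
    "model_A": {d: y * 1.05 for d, y in measured.items()},
    "model_B": {d: y + 0.5 for d, y in measured.items()},
}

def rms_fraction(pred, meas):
    """Normalized RMS deviation, a common profile figure of merit."""
    return np.sqrt(np.mean((pred - meas) ** 2) / np.mean(meas ** 2))

for name, preds in models.items():
    score = np.mean([rms_fraction(preds[d], measured[d]) for d in measured])
    print(name, round(score, 3))
```

With real data, averaging such normalized deviations over many discharges per model is essentially how standardized benchmark codes rank candidate transport models.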
Tests of 1-D transport models, and their predictions for ITER
International Nuclear Information System (INIS)
Mikkelsen, D.R.; Bateman, G.; Boucher, D.
1999-01-01
A number of proposed tokamak thermal transport models are tested by comparing their predictions with measurements from several tokamaks. The necessary data have been provided for a total of 75 discharges from C-Mod, DIII-D, JET, JT-60U, T10, and TFTR. A standard prediction methodology has been developed, and three codes have been benchmarked; these 'standard' codes have been relied on for testing most of the transport models. While a wide range of physical transport processes has been tested, no single model has emerged as clearly superior to all competitors for simulating H-mode discharges. In order to winnow the field, further tests of the effect of sheared flows and of the 'stiffness' of transport are planned. Several of the models have been used to predict ITER performance, with widely varying results. With some transport models ITER's predicted fusion power depends strongly on the 'pedestal' temperature, but ∼1 GW (Q = 10) is predicted for most models if the pedestal temperature is at least 4 keV. (author)
Comparison of ITER performance predicted by semi-empirical and theory-based transport models
International Nuclear Information System (INIS)
Mukhovatov, V.; Shimomura, Y.; Polevoi, A.
2003-01-01
The values of Q = (fusion power)/(auxiliary heating power) predicted for ITER by three different methods, i.e., a transport model based on empirical confinement scaling, a dimensionless scaling technique, and theory-based transport models, are compared. The energy confinement time given by the ITERH-98(y,2) scaling for an inductive scenario with a plasma current of 15 MA and a plasma density 15% below the Greenwald value is 3.6 s, with one technical standard deviation of ±14%. These data translate into a Q interval of [7, 13] at an auxiliary heating power P_aux = 40 MW, and [7, 28] at the minimum heating power that sustains a good-confinement ELMy H-mode. Predictions of dimensionless scalings and of theory-based transport models such as Weiland, MMM and IFS/PPPL overlap with the empirical scaling predictions within the margins of uncertainty. (author)
Iterated non-linear model predictive control based on tubes and contractive constraints.
Murillo, M; Sánchez, G; Giovanini, L
2016-05-01
This paper presents a predictive control algorithm for non-linear systems based on successive linearizations of the non-linear dynamics around a given trajectory. A linear time-varying model is obtained, and the non-convex constrained optimization problem is transformed into a sequence of locally convex ones. The robustness of the proposed algorithm is addressed by adding a convex contractive constraint. To account for linearization errors and to obtain more accurate results, an inner iteration loop is added to the algorithm. A simple methodology to obtain an outer bounding tube for state trajectories is also presented. The convergence of the iterative process and the stability of the closed-loop system are analyzed. The simulation results show the effectiveness of the proposed algorithm in controlling a quadcopter-type unmanned aerial vehicle. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
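The linearize-and-iterate scheme described in the abstract can be sketched in a few lines. The toy scalar system, horizon, cost weights and the damped inner update below are all illustrative assumptions — not the paper's quadcopter model, and a plain damping step stands in for its tube and contractive-constraint machinery:

```python
import numpy as np

def f(x, u, dt=0.1):
    """Toy scalar nonlinear dynamics (a stand-in for the real plant)."""
    return x + dt * (-x**3 + u)

def rollout(x0, u, dt=0.1):
    xs = [x0]
    for uk in u:
        xs.append(f(xs[-1], uk, dt))
    return np.array(xs)

def sl_mpc(x0, u0, n_inner=10, r=0.1, dt=0.1):
    """Successive linearization: re-linearize around the current trajectory,
    solve a locally convex quadratic subproblem, and repeat in an inner loop
    to absorb linearization error."""
    u = u0.copy()
    N = len(u)
    for _ in range(n_inner):
        xs = rollout(x0, u, dt)
        A = 1.0 - 3.0 * dt * xs[:-1] ** 2       # df/dx along the trajectory
        B = dt                                   # df/du (constant here)
        # Sensitivity S[k, j] = d x_{k+1} / d u_j of the linearized system
        S = np.zeros((N, N))
        for k in range(N):
            for j in range(k + 1):
                S[k, j] = np.prod(A[j + 1:k + 1]) * B
        # Minimize sum_k (x_{k+1} + (S du)_k)^2 + r ||u + du||^2
        H = S.T @ S + r * np.eye(N)
        g = S.T @ xs[1:] + r * u
        du = -np.linalg.solve(H, g)
        u = u + 0.5 * du                         # damped update
        if np.linalg.norm(du) < 1e-9:            # linearization converged
            break
    return u

u_opt = sl_mpc(x0=2.0, u0=np.zeros(15))
x_final = rollout(2.0, u_opt)[-1]
print(round(float(x_final), 3))  # terminal state after the optimized inputs
```

Each outer pass re-rolls out the true nonlinear dynamics, so the sequence of convex subproblems tracks the nonlinear trajectory rather than a single fixed linearization.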
Behaviors of impurity in ITER and DEMOs using BALDUR integrated predictive modeling code
International Nuclear Information System (INIS)
Onjun, Thawatchai; Buangam, Wannapa; Wisitsorasak, Apiwat
2015-01-01
The behaviors of impurities are investigated using self-consistent modeling with the 1.5D BALDUR integrated predictive modeling code, in which theory-based models are used for both the core and edge regions. In these simulations, a combination of the NCLASS neoclassical transport model and the Multi-Mode (MMM95) anomalous transport model is used to compute core transport. The boundary is taken to be at the top of the pedestal, where the pedestal values are described using a theory-based pedestal model. This pedestal temperature model is based on a combination of a magnetic- and flow-shear stabilization pedestal width scaling and an infinite-n ballooning pressure gradient model. The time evolution of the plasma current, temperature and density profiles is carried out for ITER and DEMO plasmas. As a result, impurity behaviors such as impurity accumulation and impurity transport can be investigated. (author)
Electron cyclotron current drive predictions for ITER: Comparison of different models
International Nuclear Information System (INIS)
Marushchenko, N.B.; Maassberg, H.; Beidler, C.D.; Turkin, Yu.
2007-01-01
Due to its high localization and operational flexibility, Electron Cyclotron Current Drive (ECCD) is envisaged for stabilizing the neoclassical tearing mode (NTM) in tokamaks and for correcting the rotational transform profile in stellarators. While the spatial location of the electron cyclotron resonant interaction is usually calculated by the ray-tracing technique, numerical tools for calculating the ECCD efficiency are not so common. Two different methods are often applied: i) direct calculation by Fokker-Planck modelling, and ii) the adjoint approach technique. In the present report we analyze and compare different models used in the adjoint approach from the point of view of ITER applications. The numerical tools for calculating the ECCD efficiency developed to date do not completely cover the range of collisional regimes of the electrons involved in the current drive; only the two opposite limits, collisional and collisionless, are well developed. Nevertheless, for the densities and temperatures expected for ECCD application in ITER, the collisionless-limit model (with trapped particles taken into account) is quite suitable. We analyze the requisite ECCD scenarios with the help of the new ray-tracing code TRAVIS, with the adjoint approach implemented. The (adjoint) Green's function applied for the current drive calculations is formulated with momentum conservation taken into account; this is especially important, and even crucial, for scenarios in which mainly bulk electrons are responsible for absorption of the RF power. For comparison, the most common 'high-speed-limit' model, in which the collision operator neglects the integral part and is approximated by terms valid only for tail electrons, produces an ECCD efficiency that is an underestimate, in some cases by a factor of about 2. In order to select the appropriate model, a rough criterion for the applicability of the 'high-speed-limit' model is formulated. The results are verified also by
An Iterative Approach for Distributed Model Predictive Control of Irrigation Canals
Doan, D.; Keviczky, T.; Negenborn, R.R.; De Schutter, B.
2009-01-01
Optimization techniques have played a fundamental role in designing automatic control systems for most of the past half century. This dependence is ever more obvious in today's widespread use of online optimization-based control methods, such as Model Predictive Control (MPC) [1]. The
Iterative method for Amado's model
International Nuclear Information System (INIS)
Tomio, L.
1980-01-01
A recently proposed iterative method for solving scattering integral equations is applied to spin-doublet and spin-quartet neutron-deuteron scattering in the Amado model. The method is tested numerically in the calculation of scattering lengths and phase shifts, and the results are found to be better than those obtained using the conventional Padé technique. (Author) [pt
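The flavor of such an iterative solution can be shown on a toy Fredholm equation f(x) = x + λ∫₀¹ x y f(y) dy, which for λ = 1/2 has the exact solution f(x) = 1.2x. The separable kernel, grid and driving term below are illustrative assumptions chosen so the exact answer is known — not the Amado-model equations themselves:

```python
import numpy as np

# Discretize [0, 1] with trapezoidal quadrature weights
n = 201
x = np.linspace(0.0, 1.0, n)
w = np.full(n, x[1] - x[0])
w[0] *= 0.5
w[-1] *= 0.5

K = np.outer(x, x)    # separable kernel K(x, y) = x * y
g = x.copy()          # driving (inhomogeneous) term
lam = 0.5

f = g.copy()          # zeroth iterate: the driving term itself
for _ in range(100):  # fixed-point (Neumann-series) iteration
    f_new = g + lam * (K * w) @ f
    if np.max(np.abs(f_new - f)) < 1e-12:
        break
    f = f_new

# Exact solution for this kernel and lambda: f(x) = 1.2 x
print(np.max(np.abs(f - 1.2 * x)))  # small quadrature error only
```

The iteration converges here because the kernel norm times λ is below one; methods like the one in the abstract are designed for cases where such a plain Neumann series converges slowly or not at all.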
International Nuclear Information System (INIS)
Pangione, L.; Lister, J.B.
2008-01-01
The ITER CODAC (COntrol, Data Access and Communication) conceptual design resulted from two years of activity. One result was a proposed functional partitioning of CODAC into different CODAC Systems, each of which is itself partitioned into further CODAC Systems. Considering the large size of this project, the simple use of human language assisted by figures would certainly be ineffective in creating an unambiguous description of all the interactions and relations between these Systems. Moreover, the underlying design is resident in the minds of the designers, who must consider all possible situations that could happen to each system. There is therefore a need to model the whole of CODAC with a clear and preferably graphical method which allows the designers to verify the correctness and consistency of their project. The aim of this paper is to describe the work started on ITER CODAC modelling using Matlab/Simulink. The main feature of this tool is the possibility of having a simple, graphical, intuitive representation of a complex system and ultimately of running a numerical simulation of it. Using Matlab/Simulink, each CODAC System was represented in a graphical and intuitive form, with its relations and interactions, through the definition of a small number of simple rules. In a Simulink diagram, each system was represented as a 'black box', both containing, and connected to, a number of other systems. In this way it is possible to move vertically between systems on different levels, to show the relation of membership, or horizontally to analyse the information exchange between systems at the same level. This process can be iterated, starting from a global diagram, in which only CODAC appears with the Plant Systems and the external sites, and going deeper down to the mathematical model of each CODAC system. The Matlab/Simulink features for simulating the whole top diagram encourage us to develop the idea of completing the functionalities of all systems in order to finally have a full
Predictive Simulations of ITER Including Neutral Beam Driven Toroidal Rotation
International Nuclear Information System (INIS)
Halpern, Federico D.; Kritz, Arnold H.; Bateman, G.; Pankin, Alexei Y.; Budny, Robert V.; McCune, Douglas C.
2008-01-01
Predictive simulations of ITER [R. Aymar et al., Plasma Phys. Control. Fusion 44, 519 (2002)] discharges are carried out for the 15 MA high-confinement-mode (H-mode) scenario using PTRANSP, the predictive version of the TRANSP code. The thermal and toroidal momentum transport equations are evolved using turbulent and neoclassical transport models. A predictive model is used to compute the temperature and width of the H-mode pedestal. The ITER simulations are carried out for neutral beam injection (NBI) heated plasmas, for ion cyclotron resonant frequency (ICRF) heated plasmas, and for plasmas heated with a mix of NBI and ICRF. It is shown that neutral beam injection drives toroidal rotation that improves the confinement and fusion power production in ITER. The scaling of fusion power with respect to the input power and to the pedestal temperature is studied. It is observed that, in simulations carried out using the momentum transport diffusivity computed with the GLF23 model [R. Waltz et al., Phys. Plasmas 4, 2482 (1997)], the fusion power increases with increasing injected beam power and central rotation frequency. It is found that the ITER target fusion power of 500 MW is produced with 20 MW of NBI power when the pedestal temperature is 3.5 keV. © 2008 American Institute of Physics. [DOI: 10.1063/1.2931037]
Predictive Variable Gain Iterative Learning Control for PMSM
Directory of Open Access Journals (Sweden)
Huimin Xu
2015-01-01
A predictive variable gain strategy in iterative learning control (ILC) is introduced. Predictive variable gain iterative learning control is constructed to improve the performance of trajectory tracking. A scheme based on predictive variable gain iterative learning control for eliminating undesirable vibrations of a PMSM system is proposed. The basic idea is that undesirable vibrations of the PMSM system are eliminated from two aspects: the iteration domain and the time domain. The predictive method is utilized to determine the learning gain in the ILC algorithm. The contraction mapping principle is used to prove the convergence of the algorithm. Simulation results demonstrate that the predictive variable gain is superior to a constant gain and to other variable gains.
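The iteration-domain idea can be sketched with a P-type ILC update whose gain varies over trials. The first-order plant and the simple decaying-gain rule below are illustrative assumptions standing in for the PMSM model and the paper's predictive gain selection:

```python
import numpy as np

def run_plant(u, a=0.3, b=0.5):
    """First-order plant x_{t+1} = a x_t + b u_t, y_t = x_{t+1} (illustrative)."""
    x, y = 0.0, np.zeros(len(u))
    for t in range(len(u)):
        x = a * x + b * u[t]
        y[t] = x
    return y

T = 50
t = np.arange(T)
r = np.sin(2 * np.pi * t / T)      # reference trajectory over one trial

u = np.zeros(T)
errs = []
for j in range(30):                 # repeated trials (iteration domain)
    y = run_plant(u)
    e = r - y                       # tracking error (time domain)
    errs.append(np.max(np.abs(e)))
    gamma = 1.0 / (1.0 + 0.1 * j)   # trial-varying learning gain
    u = u + gamma * e               # u_{j+1}(t) = u_j(t) + gamma_j e_j(t)

print(errs[0], errs[-1])  # tracking error shrinks across trials
```

For this plant the learning iteration is a contraction for all gains used, so the trial-to-trial error decreases monotonically — the same contraction-mapping argument the abstract invokes for convergence.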
Disruption modeling in support of ITER
International Nuclear Information System (INIS)
Bandyopadhyay, I.
2015-01-01
Plasma current disruptions and vertical displacement events (VDEs) are among the major concerns in any tokamak, as they lead to large electromagnetic forces on the tokamak first-wall components and vacuum vessel. Their occurrence also disrupts steady-state operation of tokamaks. Thus, future fusion reactors like ITER must ensure that disruptions and VDEs are minimized. However, since there remains a finite probability of their occurrence, one must be able to characterize disruptions and VDEs and to predict, for example, the plasma current quench time and halo current amplitude, which mainly determine the magnitude of the electromagnetic forces. There is a concerted effort globally to understand and predict plasma and halo current evolution during disruptions in tokamaks through MHD simulations. Even though disruptions and VDEs are often 3D MHD perturbations in nature, at present they are mostly simulated using 2D axisymmetric MHD codes such as the Tokamak Simulation Code (TSC) and DINA. These codes are also extensively benchmarked against experimental data from present-day tokamaks to improve the models and their ability to predict these events in ITER. More detailed 3D codes like M3D have only recently been developed, but they are yet to be benchmarked against experiments, and they are also computationally very expensive.
Modeling of ELM Dynamics in ITER
International Nuclear Information System (INIS)
Pankin, A.Y.; Bateman, G.; Kritz, A.H.; Brennan, D.P.; Snyder, P.B.; Kruger, S.
2007-01-01
Edge localized modes (ELMs) are large-scale instabilities that alter the H-mode pedestal, reduce the total plasma stored energy, and can result in heat pulses to the divertor plates. These modes can be triggered by pressure-driven ballooning modes or by current-driven peeling instabilities. In this study, stability analyses are carried out for a series of ITER equilibria that are generated with the TEQ and TOQ equilibrium codes. The H-mode pedestal pressure and the parallel component of the plasma current density are varied systematically in order to cover the relevant parameter space for a specific ITER discharge. The ideal MHD stability codes DCON, ELITE and BALOO are employed to determine whether each ITER equilibrium profile is unstable to peeling or ballooning modes in the pedestal region. Several equilibria that are close to the marginal stability boundary for peeling and ballooning modes are tested with the NIMROD non-ideal MHD code. The effects of finite resistivity are studied in a series of linear NIMROD computations. It is found that the peeling-ballooning stability threshold is very sensitive to the resistivity and viscosity profiles, which vary dramatically over a wide range near the separatrix. Due to the effects of finite resistivity and viscosity, the peeling-ballooning stability threshold is shifted compared with the ideal threshold. A fundamental question in the integrated modeling of ELMy H-mode discharges, namely how much plasma and current density is removed during each ELM crash, can be addressed with nonlinear non-ideal MHD simulations. In this study, the NIMROD computer simulations are continued into the nonlinear stage for several ITER equilibria that are marginally unstable to peeling or ballooning modes. The role of two-fluid and finite-Larmor-radius effects on the ELM dynamics in ITER geometry is examined. The formation of ELM filament structures, which are observed in many existing tokamak experiments, is demonstrated for ITER
Wall conditioning for ITER: Current experimental and modeling activities
Energy Technology Data Exchange (ETDEWEB)
Douai, D., E-mail: david.douai@cea.fr [CEA, IRFM, Association Euratom-CEA, 13108 St. Paul lez Durance (France); Kogut, D. [CEA, IRFM, Association Euratom-CEA, 13108 St. Paul lez Durance (France); Wauters, T. [LPP-ERM/KMS, Association Belgian State, 1000 Brussels (Belgium); Brezinsek, S. [FZJ, Institut für Energie- und Klimaforschung Plasmaphysik, 52441 Jülich (Germany); Hagelaar, G.J.M. [Laboratoire Plasma et Conversion d’Energie, UMR5213, Toulouse (France); Hong, S.H. [National Fusion Research Institute, Daejeon 305-806 (Korea, Republic of); Lomas, P.J. [CCFE, Culham Science Centre, OX14 3DB Abingdon (United Kingdom); Lyssoivan, A. [LPP-ERM/KMS, Association Belgian State, 1000 Brussels (Belgium); Nunes, I. [Associação EURATOM-IST, Instituto de Plasmas e Fusão Nuclear, 1049-001 Lisboa (Portugal); Pitts, R.A. [ITER International Organization, F-13067 St. Paul lez Durance (France); Rohde, V. [Max-Planck-Institut für Plasmaphysik, 85748 Garching (Germany); Vries, P.C. de [ITER International Organization, F-13067 St. Paul lez Durance (France)
2015-08-15
Wall conditioning will be required in ITER to control fuel and impurity recycling, as well as the tritium (T) inventory. An analysis of the conditioning cycle on JET with its ITER-Like Wall is presented, evidencing a reduced need for wall cleaning in ITER compared with JET-CFC. Using a novel 2D multi-fluid model, the current density during Glow Discharge Conditioning (GDC) on the in-vessel plasma-facing components (PFCs) of ITER is predicted to approach the simple expectation of the total anode current divided by the wall surface area. Baking of the divertor to 350 °C should desorb the majority of the co-deposited T. ITER foresees the use of low-temperature plasma-based techniques compatible with the permanent toroidal magnetic field, such as Ion (ICWC) or Electron Cyclotron Wall Conditioning (ECWC), for tritium removal between ITER plasma pulses. Extrapolation of JET ICWC results to ITER indicates removal comparable to the estimated T retention in nominal ITER D:T shots, whereas GDC may be unattractive for that purpose.
Iotti, Robert
2015-04-01
ITER is an international experimental facility being built by seven Parties to demonstrate the long-term potential of fusion energy. The ITER Joint Implementation Agreement (JIA) defines the structure and governance model of this cooperation. There are a number of necessary conditions for such international projects to be successful: a complete design, strong systems engineering working with an agreed set of requirements, an experienced organization with systems and plans in place to manage the project, a cost estimate backed by industry, and someone in charge. Unfortunately for ITER, many of these conditions were not present. The paper discusses the priorities in the JIA which led to setting up the project with a Central Integrating Organization (IO) in Cadarache, France, as the ITER HQ, and seven Domestic Agencies (DAs) located in the countries of the Parties, responsible for delivering 90%+ of the project hardware as Contributions-in-Kind, and also financial contributions to the IO as 'Contributions-in-Cash'. Theoretically the Director General (DG) is responsible for everything. In practice the DG does not have the power to control the work of the DAs, and there is no effective management structure enabling the IO and the DAs to arbitrate disputes, so the project is not really managed, but is a loose collaboration of competing interests. Any DA can effectively block a decision reached by the DG. Inefficiencies in completing the design while setting up a competent organization from scratch contributed to the delays and cost increases during the initial few years. So did the fact that the original estimate was not developed from industry input. Unforeseen inflation and market demand for certain commodities/materials further exacerbated the cost increases. Since then, improvements are debatable. Does this mean that the governance model of ITER is the wrong model for international scientific cooperation? I do not believe so. Had the necessary conditions for success
ITER Dynamic Tritium Inventory Modeling Code
International Nuclear Information System (INIS)
Cristescu, Ioana-R.; Doerr, L.; Busigin, A.; Murdoch, D.
2005-01-01
A tool for tritium inventory evaluation within each sub-system of the ITER fuel cycle is vital, with respect both to the process of licensing ITER and to its operation. It is very likely that measurements of total tritium inventories will not be possible for all sub-systems; however, tritium accounting may be achieved by modeling the tritium hold-up within each sub-system and by validating these models in real time against the monitored flows and tritium streams between the systems. To get reliable results, accurate dynamic modeling of the tritium content in each sub-system is necessary. In order to optimize the configuration and operation of the ITER fuel cycle, a dynamic fuel cycle model was developed progressively in the decade up to 2000-2001. As the designs of some sub-systems of the fuel cycle (i.e. vacuum pumping, Neutral Beam Injectors (NBI)) have meanwhile progressed substantially, a new code has been developed under a different platform to incorporate these modifications. The new code takes over the models and algorithms for some subsystems, such as the Isotope Separation System (ISS); where simplified models were previously considered, more detailed ones have been introduced, as for the Water Detritiation System (WDS). To reflect all these changes, the new code developed within the EU participating team was named TRIMO (Tritium Inventory Modeling), to emphasize the use of the code for assessing the tritium inventory within ITER.
CFTSIM-ITER dynamic fuel cycle model
International Nuclear Information System (INIS)
Busigin, A.; Gierszewski, P.
1998-01-01
Dynamic system models have been developed for specific tritium systems with considerable detail and for integrated fuel cycles with lesser detail (e.g. D. Holland, B. Merrill, Analysis of tritium migration and deposition in fusion reactor systems, Proceedings of the Ninth Symposium Eng. Problems of Fusion Research (1981); M.A. Abdou, E. Vold, C. Gung, M. Youssef, K. Shin, DT fuel self-sufficiency in fusion reactors, Fusion Technol. (1986); G. Spannagel, P. Gierszewski, Dynamic tritium inventory of a NET/ITER fuel cycle with lithium salt solution blanket, Fusion Eng. Des. (1991); W. Kuan, M.A. Abdou, R.S. Willms, Dynamic simulation of a proposed ITER tritium processing system, Fusion Technol. (1995)). In order to provide a tool to understand and optimize the behavior of the ITER fuel cycle, a dynamic fuel cycle model called CFTSIM is under development. The CFTSIM code incorporates more detailed ITER models, specifically for the important isotope separation system, and also has an easier-to-use graphical interface. This paper provides an overview of CFTSIM Version 1.0. The models included are those with significant and varying tritium inventories over a test campaign: fueling, plasma and first wall, pumping, fuel cleanup, isotope separation and storage. An illustration of the results is shown. (orig.)
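The core of any such dynamic fuel-cycle model is inventory bookkeeping between coupled sub-systems. A minimal sketch, assuming a hypothetical three-compartment loop with made-up transfer rates — not CFTSIM's actual sub-system models or parameters:

```python
import numpy as np

LAMBDA = np.log(2) / (12.3 * 365 * 24 * 3600)  # tritium decay constant, 1/s

# flow[i][j]: fraction of compartment i's inventory sent to j per second
# (hypothetical loop: fuelling -> plasma/wall -> processing -> fuelling)
flow = np.array([
    [0.0,   1e-3,  0.0 ],
    [0.0,   0.0,   5e-4],
    [2e-4,  0.0,   0.0 ],
])

I = np.array([100.0, 0.0, 0.0])  # grams of tritium per compartment
dt = 60.0                         # time step, s
total0 = I.sum()
decayed = 0.0
for _ in range(24 * 60):          # one day in minute steps
    transfer = flow * I[:, None]  # g/s leaving compartment i for j
    dI = transfer.sum(axis=0) - transfer.sum(axis=1) - LAMBDA * I
    decayed += LAMBDA * I.sum() * dt
    I = I + dt * dI               # explicit Euler update

# Mass balance: current inventory plus decayed tritium equals the charge
print(I.sum() + decayed - total0)  # ~0, up to floating-point rounding
```

Transfers between compartments cancel in the total, so the only true sink is radioactive decay — the same conservation property a code like CFTSIM relies on for tritium accounting.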
International Nuclear Information System (INIS)
Terwilliger, Thomas C.; Grosse-Kunstleve, Ralf W.; Afonine, Pavel V.; Moriarty, Nigel W.; Adams, Paul D.; Read, Randy J.; Zwart, Peter H.; Hung, Li-Wei
2008-01-01
An OMIT procedure is presented that has the benefits of iterative model building, density modification and refinement, yet is essentially unbiased by the atomic model that is built. A procedure for carrying out iterative model building, density modification and refinement is presented in which the density in an OMIT region is essentially unbiased by an atomic model. Density from a set of overlapping OMIT regions can be combined to create a composite 'iterative-build' OMIT map that is everywhere unbiased by an atomic model but also everywhere benefits from the model-based information present elsewhere in the unit cell. The procedure may have applications in the validation of specific features in atomic models as well as in overall model validation. The procedure is demonstrated with a molecular-replacement structure and with an experimentally phased structure, and a variation on the method is demonstrated by removing model bias from a structure from the Protein Data Bank.
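The compositing step — keeping, for every map point, only the density computed while that point was omitted — can be mimicked on a one-dimensional toy "unit cell". The bias model below is a deliberate caricature (a constant offset wherever the model was present), not real crystallographic density:

```python
import numpy as np

# Toy 1-D "unit cell": the true density we would like to recover
rng = np.random.default_rng(0)
true_density = rng.random(120)

def biased_map(omit):
    """Stand-in for a rebuilt map: biased toward the model everywhere
    except in the omitted region, where the density stays unbiased."""
    m = true_density + 0.3           # model bias outside the OMIT region
    m[omit] = true_density[omit]     # omitted points carry no model bias
    return m

# Overlapping OMIT regions covering the whole cell
regions = [np.arange(i, i + 30) % 120 for i in range(0, 120, 20)]

composite = np.zeros(120)
counts = np.zeros(120)
for omit in regions:
    m = biased_map(omit)
    composite[omit] += m[omit]       # keep only each map's unbiased points
    counts[omit] += 1
composite /= counts                  # average where regions overlap

print(np.allclose(composite, true_density))  # True: composite is unbiased
```

Because each point of the composite comes only from maps in which that point was omitted, the model bias never enters, while every map still benefited from the model elsewhere in the cell.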
ITER plasma safety interface models and assessments
International Nuclear Information System (INIS)
Uckan, N.A.; Bartels, H-W.; Honda, T.; Amano, T.; Boucher, D.; Post, D.; Wesley, J.
1996-01-01
Physics models and requirements to be used as a basis for safety analysis studies are developed and physics results motivated by safety considerations are presented for the ITER design. Physics specifications are provided for enveloping plasma dynamic events for Category I (operational event), Category II (likely event), and Category III (unlikely event). A safety analysis code SAFALY has been developed to investigate plasma anomaly events. The plasma response to ex-vessel component failure and machine response to plasma transients are considered
Active Player Modeling in the Iterated Prisoner's Dilemma
Park, Hyunsoo; Kim, Kyung-Joong
2016-01-01
The iterated prisoner's dilemma (IPD) is well known within the domain of game theory. Although it is relatively simple, it can also elucidate important problems related to cooperation and trust. Generally, players can predict their opponents' actions when they are able to build a precise model of their behavior based on their game playing experience. However, it is difficult to make such predictions based on a limited number of games. The creation of a precise model requires the use of not only an appropriate learning algorithm and framework but also a good dataset.
International Nuclear Information System (INIS)
Laan, J.G. van der; Akiba, M.; Seki, M.; Hassanein, A.; Tanchuk, V.
1991-01-01
An evaluation is given of the predictions for disruption erosion in the International Thermonuclear Experimental Reactor (ITER). First, a description is given of the relation between plasma operating parameters and system dimensions and the predicted loading parameters of the plasma-facing components (PFCs) in off-normal events. Numerical results from the ITER parties on the prediction of disruption erosion are compared for a few typical cases and discussed. Apart from some differences in the codes, the observed discrepancies can be ascribed to different input data for material properties and boundary conditions. Some physical models for vapour shielding and their effects on numerical results are mentioned. Experimental results from the ITER parties, obtained with electron and laser beams, are also compared. Erosion rates for the candidate ITER PFC materials are shown to depend very strongly on the energy deposition parameters, which are based on plasma physics considerations, and on the assumed material loss mechanisms. Lifetime estimates for the divertor plate and first-wall armour are given for carbon, tungsten and beryllium, based on the erosion in the thermal quench phase. (orig.)
Iterative and non-iterative solutions of engine flows using ASM and k-ε turbulence models
International Nuclear Information System (INIS)
Khaleghi, H.; Fallah, E.
2003-01-01
Various turbulence models have been widely developed in order to give good predictions of turbulence phenomena in different applications. The standard k-ε model shows poor predictions for some applications. The Reynolds Stress Model (RSM) is expected to give a better prediction of turbulent characteristics, because a separate differential equation is solved for each Reynolds stress component. In order to save both time and memory in this calculation, a new Algebraic Stress Model (ASM), developed by Lumley et al. in 1995, is used for calculating the flow characteristics in an internal combustion engine chamber. By applying turbulence realizability principles, this model becomes a powerful and reliable turbulence model. In this paper the abilities of the model are examined for internal combustion engine flows. The results of the ASM and k-ε models are compared with experimental data. It is shown that the poor predictions of the k-ε model are improved by the ASM. The non-iterative PISO and iterative SIMPLE solution algorithms are also compared; the results show that the PISO algorithm is the preferred and more efficient procedure for internal combustion engine calculations. (author)
Active Player Modeling in the Iterated Prisoner's Dilemma.
Park, Hyunsoo; Kim, Kyung-Joong
2016-01-01
The iterated prisoner's dilemma (IPD) is well known within the domain of game theory. Although it is relatively simple, it can also elucidate important problems related to cooperation and trust. Generally, players can predict their opponents' actions when they are able to build a precise model of their behavior based on their game playing experience. However, it is difficult to make such predictions based on a limited number of games. The creation of a precise model requires the use of not only an appropriate learning algorithm and framework but also a good dataset. Active learning approaches have recently been introduced to machine learning communities. The approach can usually produce informative datasets with relatively little effort. Therefore, we have proposed an active modeling technique to predict the behavior of IPD players. The proposed method can model the opponent player's behavior while taking advantage of interactive game environments. This experiment used twelve representative types of players as opponents, and an observer used an active modeling algorithm to model these opponents. This observer actively collected data and modeled the opponent's behavior online. Most of our data showed that the observer was able to build, through direct actions, a more accurate model of an opponent's behavior than when the data were collected through random actions.
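A stripped-down version of the active-probing idea — steering play so that under-observed contexts get sampled — can be written for a memory-one opponent. Tit-for-tat and the count-balancing rule below are illustrative stand-ins for the paper's twelve opponent types and its learning framework:

```python
C, D = 0, 1  # cooperate, defect

def tit_for_tat(observer_prev):
    """Opponent to be modeled: repeats the observer's previous move."""
    return observer_prev

# counts[ctx][move]: opponent responses observed after observer played ctx
counts = [[0, 0], [0, 0]]
my_prev = C
for _ in range(40):
    # Active choice: next round's context is our current move,
    # so probe the context that has been observed least so far.
    my_move = C if sum(counts[C]) <= sum(counts[D]) else D
    opp = tit_for_tat(my_prev)   # opponent reacts to our previous move
    counts[my_prev][opp] += 1    # credit the response to its context
    my_prev = my_move

# Learned conditional model of the opponent
p_c_after_c = counts[C][C] / sum(counts[C])
p_c_after_d = counts[D][C] / sum(counts[D])
print(p_c_after_c, p_c_after_d)  # 1.0 0.0 — tit-for-tat recovered exactly
```

A passive observer who happened to always cooperate would never see the opponent's response to defection; balancing the contexts actively is what makes the conditional model complete after few games.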
International Nuclear Information System (INIS)
Groebner, R.J.; Snyder, P.B.; Leonard, A.W.; Chang, C.S.; Maingi, R.; Boyle, D.P.; Diallo, A.; Hughes, J.W.; Davis, E.M.; Ernst, D.R.; Landreman, M.; Xu, X.Q.; Boedo, J.A.; Cziegler, I.; Diamond, P.H.; Eldon, D.P.; Callen, J.D.; Canik, J.M.; Elder, J.D.; Fulton, D.P.
2013-01-01
Joint experiment/theory/modelling research has led to increased confidence in predictions of the pedestal height in ITER. This work was performed as part of a US Department of Energy Joint Research Target in FY11 to identify physics processes that control the H-mode pedestal structure. The study included experiments on C-Mod, DIII-D and NSTX as well as interpretation of experimental data with theory-based modelling codes. This work provides increased confidence in the ability of models for peeling–ballooning stability, bootstrap current, pedestal width and pedestal height scaling to make correct predictions, with some areas needing further work also being identified. A model for pedestal pressure height has made good predictions in existing machines for a range in pressure of a factor of 20. This provides a solid basis for predicting the maximum pedestal pressure height in ITER, which is found to be an extrapolation of a factor of 3 beyond the existing data set. Models were studied for a number of processes that are proposed to play a role in the pedestal n_e and T_e profiles. These processes include neoclassical transport, paleoclassical transport, electron temperature gradient turbulence and neutral fuelling. All of these processes may be important, with the importance being dependent on the plasma regime. Studies with several electromagnetic gyrokinetic codes show that the gradients in and on top of the pedestal can drive a number of instabilities. (paper)
Plasma burn-through simulations using the DYON code and predictions for ITER
International Nuclear Information System (INIS)
Kim, Hyun-Tae; Sips, A C C; De Vries, P C
2013-01-01
This paper discusses simulations of the full ionization process (i.e. plasma burn-through), fundamental to creating a high temperature plasma. By means of an applied electric field, the gas is partially ionized by the electron avalanche process. In order for the electron temperature to increase, the remaining neutrals need to be fully ionized in the plasma burn-through phase, as radiation is the main contribution to the electron power loss. The radiated power loss can be significantly affected by impurities resulting from interaction with the plasma facing components. The DYON code is a plasma burn-through simulator developed at the Joint European Torus (JET) (Kim et al and EFDA-JET Contributors 2012 Nucl. Fusion 52 103016; Kim, Sips and EFDA-JET Contributors 2013 Nucl. Fusion 53 083024). The dynamic evolution of the plasma temperature and plasma densities, including the impurity content, is calculated in a self-consistent way using plasma wall interaction models. The recent installation of a beryllium wall at JET enabled validation of the plasma burn-through model in the presence of new, metallic plasma facing components. The simulation results of the plasma burn-through phase show consistently good agreement with experiments at JET, and explain differences observed during plasma initiation with the old carbon plasma facing components. In the International Thermonuclear Experimental Reactor (ITER), the allowable toroidal electric field is restricted to 0.35 V m^-1, which is significantly lower than the typical value (∼1 V m^-1) used in present devices. The limitation on toroidal electric field also reduces the range of other operation parameters during plasma formation in ITER. Thus, predictive simulations of plasma burn-through in ITER using a validated model are of crucial importance. This paper provides an overview of the DYON code and its validation, together with new predictive simulations for ITER using the DYON code. (paper)
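The competition the abstract describes, between ohmic heating and the power sunk into ionizing and radiating on the remaining neutrals, can be caricatured in a zero-dimensional sketch. All coefficients below are illustrative toy values, not the DYON model: the point is only that a weak electric field leaves the neutral fraction high while a strong one burns through further.

```python
# Grossly simplified 0-D power-balance sketch of burn-through (toy
# coefficients, not DYON): ohmic heating scales with E^2, the loss term
# scales with the remaining neutral fraction, and ionization depletes
# the neutrals at a rate that grows with electron temperature.
def burn_through(E_field, steps=2000, dt=1e-4):
    n_n = 1.0   # neutral fraction (normalized)
    T_e = 1.0   # electron temperature (arbitrary units)
    for _ in range(steps):
        p_oh = 10.0 * E_field**2 / T_e**0.5   # toy ohmic heating term
        p_loss = 5.0 * n_n * T_e              # toy ionization/radiation sink
        T_e = max(T_e + dt * (p_oh - p_loss) * 100.0, 0.1)
        n_n = max(n_n - dt * 0.5 * n_n * T_e, 0.0)  # ionization depletes neutrals
    return n_n

# a stronger applied field leaves fewer neutrals after the same time window
print(burn_through(1.0), burn_through(0.05))
```

With the weak field the temperature collapses and the neutral fraction barely moves, mirroring (very loosely) why ITER's low 0.35 V m^-1 limit makes burn-through predictions critical.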
Development of ITER 3D neutronics model and nuclear analyses
International Nuclear Information System (INIS)
Zeng, Q.; Zheng, S.; Lu, L.; Li, Y.; Ding, A.; Hu, H.; Wu, Y.
2007-01-01
ITER nuclear analyses rely on calculations with three-dimensional (3D) Monte Carlo codes, e.g. the widely-used MCNP. However, continuous changes in the design of the components require that the 3D neutronics model used for nuclear analyses be updated, and modeling a complex geometry with MCNP by hand is a very time-consuming task. An efficient alternative is to develop CAD-based interface codes for automatic conversion from CAD models to MCNP input files. Based on the latest CAD model and the available interface codes, two approaches to updating the 3D neutronics model have been discussed by the ITER IT (International Team): the first is to start with the existing MCNP model 'Brand' and update it through a combination of direct modification of the MCNP input file and generation of models for some components directly from the CAD data; the second is to start from the full CAD model, make the necessary simplifications, and generate the MCNP model with one of the interface codes. MCAM, an advanced CAD-based MCNP interface code developed by the FDS Team in China, has been successfully applied to update the ITER 3D neutronics model following both approaches. The Brand model has been updated by generating portions of the geometry from the newest CAD model with MCAM. MCAM has also successfully converted to an MCNP neutronics model a full ITER CAD model which was simplified and issued by the ITER IT to benchmark the interface codes. Based on the two updated 3D neutronics models, the related nuclear analyses have been performed. This paper presents the status of ITER 3D modeling using MCAM and its nuclear analyses, as well as a brief introduction to an advanced version of MCAM. (authors)
International Nuclear Information System (INIS)
1991-01-01
This report discusses the following topics on ITER research and development: tritium modeling; liquid metal blanket modeling; free surface liquid metal studies; and thermal conductance and thermal control experiments and modeling
Transient thermal hydraulic modeling and analysis of ITER divertor plate system
International Nuclear Information System (INIS)
El-Morshedy, Salah El-Din; Hassanein, Ahmed
2009-01-01
A mathematical model has been developed/updated to simulate the steady-state and transient thermal-hydraulics of the International Thermonuclear Experimental Reactor (ITER) divertor module. The model predicts the thermal response of the armour coating, the divertor plate structural materials and the coolant channels. The selected heat transfer correlations cover all operating conditions of ITER under both normal and off-normal situations. The model also accounts for the melting, vaporization, and solidification of the armour material, and provides a quick benchmark of the HEIGHTS multidimensional comprehensive simulation package. The model divides the coolant channels into specified axial regions and the divertor plate into specified radial zones; a two-dimensional heat conduction calculation then predicts the temperature distribution for both steady and transient states. The model is benchmarked against experimental data obtained at Sandia National Laboratory for both bare and swirl-tape coolant channel mockups, and the results show very good agreement with the data for steady and transient states. The model is then used to predict the thermal behavior of the ITER plasma facing and structural materials during a plasma instability event in which 60 MJ/m^2 of plasma energy is deposited over 500 ms. The resulting ITER divertor response is analyzed and compared with HEIGHTS results.
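The kind of two-dimensional explicit conduction update described above can be sketched as follows. The grid, material properties and loads are illustrative stand-ins, not ITER divertor values, and the boundary treatment (imposed flux on one face, a convective coolant wall on the opposite face) is deliberately minimal:

```python
import numpy as np

# Illustrative 2-D explicit heat-conduction sketch (toy numbers, not the
# authors' model): axial x radial nodes, surface heat flux on one face,
# convection to coolant on the other, insulated ends.
nx, ny = 20, 10                       # axial x radial nodes
dx = dy = 1e-3                        # m
k, rho, cp = 100.0, 8000.0, 400.0     # W/m/K, kg/m^3, J/kg/K (illustrative)
alpha = k / (rho * cp)
dt = 0.2 * dx**2 / alpha              # keeps the explicit scheme stable
q_surf = 1e6                          # W/m^2 flux on the heated surface
h, T_cool = 5e4, 300.0                # coolant film coefficient and temperature

T = np.full((nx, ny), 300.0)
for _ in range(500):
    Tn = T.copy()
    # interior five-point explicit update
    T[1:-1, 1:-1] = Tn[1:-1, 1:-1] + alpha * dt * (
        (Tn[2:, 1:-1] - 2 * Tn[1:-1, 1:-1] + Tn[:-2, 1:-1]) / dx**2
        + (Tn[1:-1, 2:] - 2 * Tn[1:-1, 1:-1] + Tn[1:-1, :-2]) / dy**2)
    T[:, -1] = T[:, -2] + q_surf * dy / k                     # imposed surface flux
    T[:, 0] = (k / dy * T[:, 1] + h * T_cool) / (k / dy + h)  # convective wall
    T[0, :], T[-1, :] = T[1, :], T[-2, :]                     # insulated ends

print(round(float(T.max()), 1))  # heated surface rises above the coolant temperature
```

The stability limit on `dt` is why production codes of this type often track the transient with many small explicit steps or switch to implicit solvers.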
Solving large mixed linear models using preconditioned conjugate gradient iteration.
Strandén, I; Lidauer, M
1999-12-01
Continuous evaluation of dairy cattle with a random regression test-day model requires a fast solving method and algorithm. A new computing technique feasible in Jacobi and conjugate gradient based iterative methods using iteration on data is presented. In the new computing technique, the calculations in the multiplication of a vector by a matrix were reorganized into three steps instead of the commonly used two steps. The three-step method was implemented in a general mixed linear model program that used preconditioned conjugate gradient iteration. Performance of this program in comparison to other general solving programs was assessed via estimation of breeding values using univariate, multivariate, and random regression test-day models. Central processing unit time per iteration with the new three-step technique was, at best, one-third of that needed with the old technique. Performance was best with the test-day model, which was the largest and most complex model used. The new program did well in comparison to other general software. Programs keeping the mixed model equations in random access memory required at least 20% and 435% more time to solve the univariate and multivariate animal models, respectively. Computations of the second best iteration-on-data program took approximately three and five times longer for the animal and test-day models, respectively, than did the new program. Good performance was due to fast computing time per iteration and quick convergence to the final solutions. Use of preconditioned conjugate gradient based methods in solving large breeding value problems is supported by our findings.
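For reference, the core preconditioned conjugate gradient iteration such a program is built around looks like this. This is a generic Jacobi-preconditioned sketch on a small synthetic system, not the paper's three-step iteration-on-data implementation:

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=200):
    """Jacobi-preconditioned conjugate gradient for a symmetric
    positive-definite matrix A; M_inv_diag is 1/diag(A)."""
    x = np.zeros_like(b)
    r = b - A @ x                 # residual
    z = M_inv_diag * r            # preconditioned residual
    p = z.copy()                  # search direction
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# small SPD system standing in for mixed model equations
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 8))
A = X.T @ X + np.eye(8)
b = X.T @ rng.normal(size=50)
x = pcg(A, b, 1.0 / np.diag(A))
print(np.allclose(A @ x, b, atol=1e-6))
```

The iteration-on-data refinement in the abstract avoids ever forming `A` explicitly; the matrix-vector product `A @ p` is instead assembled by passing over the data records, which is where the two-step versus three-step reorganization pays off.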
DISIS: prediction of drug response through an iterative sure independence screening.
Directory of Open Access Journals (Sweden)
Yun Fang
Prediction of drug response based on genomic alterations is an important task in the research of personalized medicine. The current elastic net model uses sure independence screening to select genomic features relevant to drug response, but it may neglect the combined effect of some marginally weak features. In this work, we applied an iterative sure independence screening scheme to select drug-response-relevant features from the Cancer Cell Line Encyclopedia (CCLE) dataset. For each drug in CCLE, we selected up to 40 features, including gene expression, mutation and copy number alterations of cancer-related genes; some of these are strong features despite showing weak marginal correlation with the drug response vector. Lasso regression based on the selected features showed that our prediction accuracies are higher than those of elastic net regression for most drugs.
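The iterative screening idea, re-ranking features by correlation with the current residual so that marginally weak but jointly informative features can still enter, can be sketched as follows. An ordinary least-squares refit stands in for the paper's lasso step to keep the sketch dependency-free, and the data are synthetic:

```python
import numpy as np

def isis_select(X, y, per_round=2, rounds=3):
    """Iterative sure independence screening sketch: rank features by
    absolute correlation with the current residual, add the top ones,
    refit, and repeat on the new residual."""
    selected = []
    residual = y.copy()
    for _ in range(rounds):
        corr = np.abs(X.T @ residual)
        if selected:
            corr[selected] = -np.inf        # never pick a feature twice
        picks = np.argsort(corr)[-per_round:]
        selected.extend(int(j) for j in picks)
        # least-squares refit stands in for the lasso step
        beta, *_ = np.linalg.lstsq(X[:, selected], y, rcond=None)
        residual = y - X[:, selected] @ beta
    return sorted(selected)

# synthetic check: three informative features among fifty
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
y = 3.0 * X[:, 0] + 2.0 * X[:, 7] + 1.5 * X[:, 23] + rng.normal(scale=0.1, size=200)
print(isis_select(X, y))  # the selected set contains features 0, 7 and 23
```

Screening against the residual rather than `y` itself is the key step: a feature whose marginal correlation with `y` is masked by stronger features becomes visible once those features are regressed out.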
Test facility TIMO for testing the ITER model cryopump
International Nuclear Information System (INIS)
Haas, H.; Day, C.; Mack, A.; Methe, S.; Boissin, J.C.; Schummer, P.; Murdoch, D.K.
2001-01-01
Within the framework of the European Fusion Technology Programme, FZK is involved in the research and development process for a vacuum pump system of a future fusion reactor. As a result of these activities, the concept and the necessary requirements for the primary vacuum system of the ITER fusion reactor were defined. Continuing that development process, FZK has been preparing the test facility TIMO (Test facility for ITER Model pump) since 1996. This test facility provides all the infrastructure needed for testing a cryopump, for example a process gas supply including a metering system, a test vessel, the cryogenic supply for the different temperature levels and a gas analysing system. For the manufacture of the ITER model pump, an order was placed with the company L'Air Liquide in the form of a NET contract. (author)
Mixed price and load forecasting of electricity markets by a new iterative prediction method
International Nuclear Information System (INIS)
Amjady, Nima; Daraeepour, Ali
2009-01-01
Load and price forecasting are the two key issues for the participants of current electricity markets. However, the load and price of electricity markets have complex characteristics such as nonlinearity, non-stationarity and multiple seasonality, to name a few (usually, more volatility is seen in the behavior of the electricity price signal). For these reasons, much research has been devoted to load and price forecasting, especially in recent years. However, previous research works in the area predicted the load and price signals separately. In this paper, a mixed model for load and price forecasting is presented, which can consider the interactions of these two forecast processes. The mixed model is based on an iterative neural network based prediction technique. It is shown that the proposed model yields lower forecast errors for both load and price compared with the previous separate frameworks. Another advantage of the mixed model is that all required forecast features (from load or price) are predicted within the model without assuming known values for these features, so the proposed model can better adapt to the real conditions of an electricity market. The forecast accuracy of the proposed mixed method is evaluated by means of real data from the New York and Spanish electricity markets. The method is also compared with some of the most recent load and price forecast techniques. (author)
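The mixed-forecast idea, letting the load predictor consume the latest price forecast and vice versa instead of assuming either is known, can be illustrated with linear models in place of the paper's neural networks. All coefficients and data below are synthetic:

```python
import numpy as np

# Hedged sketch of iterative mixed prediction: two coupled linear
# predictors refine each other's forecast to a fixed point, instead of
# either model assuming the other signal is already known.
rng = np.random.default_rng(2)
n = 300
load_lag = rng.normal(size=n)           # lagged load feature
price_lag = rng.normal(size=n)          # lagged price feature
e1 = rng.normal(scale=0.05, size=n)
e2 = rng.normal(scale=0.05, size=n)
# structural ground truth: each signal depends on the other's current value
u, v = 0.8 * load_lag + e1, 0.5 * price_lag + e2
load = (u + 0.3 * v) / (1 - 0.3 * 0.4)  # solve the 2x2 coupling exactly
price = v + 0.4 * load

# fit the two coupled linear predictors on history
w_load = np.linalg.lstsq(np.column_stack([load_lag, price]), load, rcond=None)[0]
w_price = np.linalg.lstsq(np.column_stack([price_lag, load]), price, rcond=None)[0]

# iterative mixed prediction for a new period
lh, ph = 1.0, 0.5                       # new lagged inputs
load_f = price_f = 0.0
for _ in range(50):
    load_f = float(w_load @ np.array([lh, price_f]))
    price_f = float(w_price @ np.array([ph, load_f]))
print(round(load_f, 2), round(price_f, 2))
```

Because the product of the cross-coupling coefficients is well below one, the alternating updates are a contraction and the load/price pair converges to a self-consistent forecast in a few iterations.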
Results of the ITER toroidal field model coil project
International Nuclear Information System (INIS)
Salpietro, E.; Maix, R.
2001-01-01
In the scope of the ITER EDA, one of the seven largest projects was devoted to the development, manufacture and testing of a Toroidal Field Model Coil (TFMC). The industry consortium AGAN manufactured the TFMC based on a conceptual design developed by the ITER EDA EU Home Team. The TFMC was completed and assembled in the test facility TOSKA of the Forschungszentrum Karlsruhe in the first half of 2001. The first testing phase started in June 2001 and lasted until October 2001. The first results have shown that the main goals of the project have been achieved.
RF modeling of the ITER-relevant lower hybrid antenna
International Nuclear Information System (INIS)
Hillairet, J.; Ceccuzzi, S.; Belo, J.; Marfisi, L.; Artaud, J.F.; Bae, Y.S.; Berger-By, G.; Bernard, J.M.; Cara, Ph.; Cardinali, A.; Castaldo, C.; Cesario, R.; Decker, J.; Delpech, L.; Ekedahl, A.; Garcia, J.; Garibaldi, P.; Goniche, M.; Guilhem, D.; Hoang, G.T.
2011-01-01
In the frame of the EFDA task HCD-08-03-01, a 5 GHz Lower Hybrid system that should be able to deliver 20 MW CW on ITER and sustain the expected high heat fluxes has been reviewed. The design and overall dimensions of the key RF elements of the launcher and its subsystems have been updated from the 2001 design in collaboration with the ITER Organization. Modeling of LH wave propagation and absorption in the plasma shows that the optimal parallel index must be chosen between 1.9 and 2.0 for the ITER steady-state scenario. The present study has been made with n || = 2.0 but can be adapted for n || = 1.9. Individual components have been studied separately, giving confidence in the global RF design of the whole antenna.
Iterative integral parameter identification of a respiratory mechanics model.
Schranz, Christoph; Docherty, Paul D; Chiew, Yeong Shiong; Möller, Knut; Chase, J Geoffrey
2012-07-18
Patient-specific respiratory mechanics models can support the evaluation of optimal lung protective ventilator settings during ventilation therapy. Clinical application requires that the individual's model parameter values must be identified with information available at the bedside. Multiple linear regression or gradient-based parameter identification methods are highly sensitive to noise and initial parameter estimates. Thus, they are difficult to apply at the bedside to support therapeutic decisions. An iterative integral parameter identification method is applied to a second order respiratory mechanics model. The method is compared to the commonly used regression methods and error-mapping approaches using simulated and clinical data. The clinical potential of the method was evaluated on data from 13 Acute Respiratory Distress Syndrome (ARDS) patients. The iterative integral method converged to error minima 350 times faster than the Simplex Search Method using simulation data sets and 50 times faster using clinical data sets. Established regression methods reported erroneous results due to sensitivity to noise. In contrast, the iterative integral method was effective independent of initial parameter estimations, and converged successfully in each case tested. These investigations reveal that the iterative integral method is beneficial with respect to computing time, operator independence and robustness, and thus applicable at the bedside for this clinical application.
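The integral reformulation that underlies the method can be shown on the single-compartment lung model P = E·V + R·Q + P0, a simplification of the paper's second-order model: integrating both sides replaces noisy derivatives with smooth cumulative sums, so the parameters follow from a linear least-squares solve on synthetic ventilation data. All parameter values are illustrative.

```python
import numpy as np

# Integral-formulation sketch on the single-compartment model
# P = E*V + R*Q + P0 (simplified from the paper's second-order model).
# Integrating gives  ∫P dt = E ∫V dt + R V(t) + P0 t,  which needs no
# differentiation of noisy data.
dt = 0.01
t = np.arange(0.0, 2.0, dt)
Q = np.where(t < 1.0, 0.5, -0.5)        # inspiration then expiration flow (L/s)
V = np.cumsum(Q) * dt                   # volume (L)
E_true, R_true, P0 = 25.0, 5.0, 5.0     # illustrative elastance, resistance, offset
rng = np.random.default_rng(3)
P = E_true * V + R_true * Q + P0 + rng.normal(scale=0.5, size=t.size)  # noisy pressure

iP = np.cumsum(P) * dt                  # cumulative integrals smooth the noise
iV = np.cumsum(V) * dt
A = np.column_stack([iV, V, t])
E_hat, R_hat, P0_hat = np.linalg.lstsq(A, iP, rcond=None)[0]
print(round(E_hat, 1), round(R_hat, 1), round(P0_hat, 1))
```

This robustness to measurement noise, compared with regression on differentiated signals, is what the abstract attributes to the method; the paper's iteration extends the same idea to parameters that enter nonlinearly.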
Iteration schemes for parallelizing models of superconductivity
Energy Technology Data Exchange (ETDEWEB)
Gray, P.A. [Michigan State Univ., East Lansing, MI (United States)
1996-12-31
The time dependent Lawrence-Doniach model, valid for high fields and high values of the Ginzburg-Landau parameter, is often used for studying vortex dynamics in layered high-T_c superconductors. When solving these equations numerically, the added degrees of complexity due to the coupling and nonlinearity of the model often warrant the use of high-performance computers for their solution. However, the interdependence between the layers can be manipulated so as to allow parallelization of the computations at an individual layer level. The reduced parallel tasks may then be solved independently using a heterogeneous cluster of networked workstations connected together with Parallel Virtual Machine (PVM) software. Here, this parallelization of the model is discussed and several computational implementations of varying degrees of parallelism are presented. Computational results are also given which contrast properties of convergence speed, stability, and consistency of these implementations. Included in these results are models involving the motion of vortices due to an applied current and pinning effects due to various material properties.
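The layer-decoupling manipulation can be sketched as a block-Jacobi sweep: each layer solves its own subproblem against the neighbours' values from the previous sweep, so all solves within a sweep are independent and could be distributed to separate workers (the abstract's cluster used PVM). The linear layer problem below is illustrative, not the Lawrence-Doniach equations:

```python
import numpy as np

# Block-Jacobi sketch of layer-parallel iteration (toy linear problem):
# within one sweep every layer solve uses only previous-sweep neighbour
# values, so the solves are embarrassingly parallel.
layers, n = 6, 20
A = (np.diag(np.full(n, 4.0))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1))     # per-layer operator
b = np.ones(n)
c = 0.5                                  # interlayer coupling strength
x = np.zeros((layers, n))
for sweep in range(200):
    x_old = x.copy()
    for i in range(layers):              # independent tasks: one per layer
        rhs = b.copy()
        if i > 0:
            rhs += c * x_old[i - 1]
        if i < layers - 1:
            rhs += c * x_old[i + 1]
        x[i] = np.linalg.solve(A, rhs)
    if np.max(np.abs(x - x_old)) < 1e-10:
        break
print(sweep < 199)  # converged before the sweep limit
```

The trade-off the abstract's results explore shows up even here: decoupling the layers buys parallelism at the cost of extra outer sweeps, and the convergence rate depends on how strongly the layers are coupled.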
Speeding up predictive electromagnetic simulations for ITER application
Energy Technology Data Exchange (ETDEWEB)
Alekseev, A.B. [ITER Organization, Route de Vinon sur Verdon, 13067 St. Paul Lez Durance Cedex (France); Amoskov, V.M. [JSC “NIIEFA”, Doroga na Metallostroy 3, St. Petersburg, 196641 (Russian Federation); Bazarov, A.M., E-mail: alexander.bazarov@gmail.com [JSC “NIIEFA”, Doroga na Metallostroy 3, St. Petersburg, 196641 (Russian Federation); Belov, A.V. [JSC “NIIEFA”, Doroga na Metallostroy 3, St. Petersburg, 196641 (Russian Federation); Belyakov, V.A. [JSC “NIIEFA”, Doroga na Metallostroy 3, St. Petersburg, 196641 (Russian Federation); St. Petersburg State University, 7/9 Universitetskaya Embankment, St. Petersburg, 199034 (Russian Federation); Gapionok, E.I. [JSC “NIIEFA”, Doroga na Metallostroy 3, St. Petersburg, 196641 (Russian Federation); Gornikel, I.V. [Alphysica GmbH, Unterreut, 6, D-76135, Karlsruhe (Germany); Gribov, Yu. V. [ITER Organization, Route de Vinon sur Verdon, 13067 St. Paul Lez Durance Cedex (France); Kukhtin, V.P.; Lamzin, E.A. [JSC “NIIEFA”, Doroga na Metallostroy 3, St. Petersburg, 196641 (Russian Federation); Sytchevsky, S.E. [JSC “NIIEFA”, Doroga na Metallostroy 3, St. Petersburg, 196641 (Russian Federation); St. Petersburg State University, 7/9 Universitetskaya Embankment, St. Petersburg, 199034 (Russian Federation)
2017-05-15
Highlights: • A general concept of an engineering EM simulator for tokamak applications is proposed. • The algorithm is based on influence functions and the superposition principle. • The software works with extensive databases and offers parallel processing. • The simulator obtains the solution hundreds of times faster. - Abstract: The paper presents an attempt to proceed to a general concept of a software environment for fast and consistent multi-task simulation of EM transients (an engineering simulator for tokamak applications). As an example, the ITER tokamak is taken to introduce the computational technique. The strategy exploits parallel processing with optimized simulation algorithms based on the use of influence functions and the superposition principle to take full advantage of parallelism. The software has been tested on a multi-core supercomputer. The results were compared with data obtained in TYPHOON computations; the discrepancy was found to be below 0.4%. The computation cost for the simulator is proportional to the number of observation points. The average computation time with the simulator is found to be hundreds of times less than the time required to solve numerically the relevant system of differential equations with known software tools.
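A toy version of the influence-function strategy: precompute the linear response of every observation point to a unit source once, then evaluate any excitation by superposition with a single matrix-vector product. The geometry and the 1/r kernel here are illustrative only, not the simulator's actual field model:

```python
import numpy as np

# Influence-function/superposition sketch: for a linear field problem,
# the response at each observation point to a unit current in each source
# is precomputed into a matrix G; any current pattern is then evaluated
# as G @ I instead of re-solving the field equations.
rng = np.random.default_rng(4)
sources = rng.uniform(-1, 1, size=(30, 3))    # source positions
points = rng.uniform(2, 3, size=(100, 3))     # observation points
r = np.linalg.norm(points[:, None, :] - sources[None, :, :], axis=2)
G = 1.0 / r                                   # influence matrix (unit currents)

I_t = np.sin(np.linspace(0, 1, 30))           # one current snapshot
field_fast = G @ I_t                          # superposition evaluation
# brute-force check: sum each source's contribution directly
field_slow = sum(I_t[j] / np.linalg.norm(points - sources[j], axis=1)
                 for j in range(30))
print(np.allclose(field_fast, field_slow))  # → True
```

This is why the simulator's cost scales with the number of observation points: once `G` is stored, each transient step is a matrix-vector product rather than a differential-equation solve.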
Plasma-safety assessment model and safety analyses of ITER
International Nuclear Information System (INIS)
Honda, T.; Okazaki, T.; Bartels, H.-H.; Uckan, N.A.; Sugihara, M.; Seki, Y.
2001-01-01
A plasma-safety assessment model has been provided on the basis of the plasma physics database of the International Thermonuclear Experimental Reactor (ITER) to analyze events involving plasma behavior. The model was implemented in a safety analysis code (SAFALY), which consists of a 0-D dynamic plasma model and a 1-D thermal behavior model of the in-vessel components. Unusual plasma events in ITER, e.g. overfueling, were calculated using the code, and plasma burning is found to be self-bounded by operation limits or passively shut down due to impurity ingress from overheated divertor targets. A sudden transition of the divertor plasma might lead to failure of the divertor target because of a sharp increase in the heat flux. However, the effects of such an aggravating failure can be safely handled by the confinement boundaries. (author)
Noda, Y; Goshima, S; Nagata, S; Miyoshi, T; Kawada, H; Kawai, N; Tanahashi, Y; Matsuo, M
2018-06-01
To compare right adrenal vein (RAV) visualisation and the degree of contrast enhancement on adrenal venous phase images reconstructed using adaptive statistical iterative reconstruction (ASiR) and model-based iterative reconstruction (MBIR) techniques. This prospective study was approved by the institutional review board, and written informed consent was waived. Fifty-seven consecutive patients who underwent adrenal venous phase imaging were enrolled. The same raw data were reconstructed using ASiR 40% and MBIR. An expert and a beginner independently reviewed the computed tomography (CT) images. RAV visualisation rates, background noise, and CT attenuation of the RAV, right adrenal gland, inferior vena cava (IVC), hepatic vein, and bilateral renal veins were compared between the two reconstruction techniques. RAV visualisation rates were higher with MBIR than with ASiR (95% versus 88%, p=0.13 for the expert and 93% versus 75%, p=0.002 for the beginner, respectively). RAV visualisation confidence ratings were significantly greater, and background noise significantly lower, with MBIR than with ASiR; differences in CT attenuation between the techniques were also significant (p=0.0013 and 0.02). Reconstruction of adrenal venous phase images using MBIR significantly reduces background noise, leading to an improvement in RAV visualisation compared with ASiR. Copyright © 2018 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
Hierarchical models and iterative optimization of hybrid systems
Energy Technology Data Exchange (ETDEWEB)
Rasina, Irina V. [Ailamazyan Program Systems Institute, Russian Academy of Sciences, Peter One str. 4a, Pereslavl-Zalessky, 152021 (Russian Federation); Baturina, Olga V. [Trapeznikov Control Sciences Institute, Russian Academy of Sciences, Profsoyuznaya str. 65, 117997, Moscow (Russian Federation); Nasatueva, Soelma N. [Buryat State University, Smolina str.24a, Ulan-Ude, 670000 (Russian Federation)
2016-06-08
A class of hybrid control systems based on a two-level discrete-continuous model is considered. The concept of this model was proposed and developed in preceding works as a concretization of the general multi-step system with related optimality conditions. A new iterative optimization procedure for such systems is developed on the basis of localization of the global optimality conditions via contraction of the control set.
Assessment and modeling of inductive and non-inductive scenarios for ITER
International Nuclear Information System (INIS)
Boucher, D.; Vayakis, G.; Moreau, D.
1999-01-01
This paper presents recent developments in the modeling and simulation of ITER performance and scenarios. The first part presents an improved modeling of coupled divertor/main plasma operation, including simulation of the measurements involved in the control loop. The second part explores the fusion performance predicted under non-inductive operation with an internal transport barrier. The final part covers a detailed scenario for non-inductive operation using a reversed shear configuration with lower hybrid and fast wave current drive. (author)
Modeling Results For the ITER Cryogenic Fore Pump. Final Report
Energy Technology Data Exchange (ETDEWEB)
Pfotenhauer, John M. [University of Wisconsin, Madison, WI (United States); Zhang, Dongsheng [University of Wisconsin, Madison, WI (United States)
2014-03-31
A numerical model characterizing the operation of a cryogenic fore-pump (CFP) for ITER has been developed at the University of Wisconsin-Madison during the period from March 15, 2011 through June 30, 2014. The purpose of the ITER-CFP is to separate hydrogen isotopes from helium gas, both making up the exhaust components from the ITER reactor. The model explicitly determines the amount of hydrogen that is captured by the supercritical-helium-cooled pump as a function of the inlet temperature of the supercritical helium, its flow rate, and the inlet conditions of the hydrogen gas flow. Furthermore, the model computes the location and amount of hydrogen captured in the pump as a function of time. Throughout the model's development, and as a calibration check for its results, it has been extensively compared with measurements of a CFP prototype tested at Oak Ridge National Lab. The results of the model demonstrate that the quantity of captured hydrogen is very sensitive to the inlet temperature of the helium coolant on the outside of the cryopump. Furthermore, the model can be utilized to refine those tests, and suggests methods that could be incorporated in the testing to enhance the usefulness of the measured data.
Towards fully authentic modelling of ITER divertor plasmas
International Nuclear Information System (INIS)
Maddison, G.P.; Hotston, E.S.; Reiter, D.; Boerner, P.
1991-01-01
Ignited next step tokamaks such as NET or ITER are expected to use a poloidal magnetic divertor to facilitate exhaust of plasma particles and energy. We report a development coupling together detailed computational models for both plasma and recycled neutral particle transport processes, to produce highly detailed and consistent design solutions. A particular aspect is involvement of an accurate specification of edge magnetic geometries, determined by an original equilibrium discretisation code, named LINDA. Initial results for a prototypical 22MA ITER double-null configuration are presented. Uncertainties in such modelling are considered, especially with regard to intrinsic physical scale lengths. Similar results produced with a simple, analytical treatment of recycling are also compared. Finally, a further extension allowing true oblique target sections is anticipated. (author) 8 refs., 5 figs
A fast and robust iterative algorithm for prediction of RNA pseudoknotted secondary structures
2014-01-01
Background Improving accuracy and efficiency of computational methods that predict pseudoknotted RNA secondary structures is an ongoing challenge. Existing methods based on free energy minimization tend to be very slow and are limited in the types of pseudoknots that they can predict. Incorporating known structural information can improve prediction accuracy; however, there are not many methods for prediction of pseudoknotted structures that can incorporate structural information as input. There is even less understanding of the relative robustness of these methods with respect to partial information. Results We present a new method, Iterative HFold, for pseudoknotted RNA secondary structure prediction. Iterative HFold takes as input a pseudoknot-free structure, and produces a possibly pseudoknotted structure whose energy is at least as low as that of any (density-2) pseudoknotted structure containing the input structure. Iterative HFold leverages strengths of earlier methods, namely the fast running time of HFold, a method that is based on the hierarchical folding hypothesis, and the energy parameters of HotKnots V2.0. Our experimental evaluation on a large data set shows that Iterative HFold is robust with respect to partial information, with average accuracy on pseudoknotted structures steadily increasing from roughly 54% to 79% as the user provides up to 40% of the input structure. Iterative HFold is much faster than HotKnots V2.0, while having comparable accuracy. Iterative HFold also has significantly better accuracy than IPknot on our HK-PK and IP-pk168 data sets. Conclusions Iterative HFold is a robust method for prediction of pseudoknotted RNA secondary structures, whose accuracy with more than 5% information about true pseudoknot-free structures is better than that of IPknot, and with about 35% information about true pseudoknot-free structures compares well with that of HotKnots V2.0 while being significantly faster. Iterative HFold and all data used in
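The core loop of Iterative HFold (re-fold using the previous output as the new input structure, stopping when the energy no longer improves) can be sketched in a few lines. Everything below is a toy stand-in: the greedy pair-adding fold_once and the pair-count energy are hypothetical placeholders for HFold's real constrained folding and energy model, kept only to make the control flow runnable.

```python
def complementary(a, b):
    # Watson-Crick pairs plus the G-U wobble pair
    return {a, b} in ({"G", "C"}, {"A", "U"}, {"G", "U"})

def energy(structure):
    return -len(structure)           # toy energy: more pairs = lower energy

def fold_once(seq, constraint):
    """Toy stand-in for a constrained folding step: keep the input pairs
    and greedily add at most one more compatible pair."""
    structure = set(constraint)
    used = {i for pair in structure for i in pair}
    for i in range(len(seq)):
        for j in range(len(seq) - 1, i + 3, -1):   # hairpin loop of >= 3
            if i not in used and j not in used and complementary(seq[i], seq[j]):
                structure.add((i, j))
                return frozenset(structure)
    return frozenset(structure)

def iterative_refine(seq, initial_structure):
    """Re-fold with the previous output as input until energy stops improving."""
    best = fold_once(seq, initial_structure)
    while True:
        candidate = fold_once(seq, best)
        if energy(candidate) >= energy(best):
            return best
        best = candidate

result = iterative_refine("GGGAAACCC", frozenset())
```

With these toy rules the hairpin "GGGAAACCC" converges to the three stacked G-C pairs (0,8), (1,7) and (2,6); the real method applies the same stopping logic to full energy-minimizing folds.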
Pipeline Processing with an Iterative, Context-Based Detection Model
2016-01-22
AFRL-RV-PS-TR-2016-0080: Pipeline Processing with an Iterative, Context-Based Detection Model, T. Kværna, et al. Distortion is reduced with the addition of more channels to the processed data stream; the limitations of fully automatic hypothesis evaluation are examined with a test case of two events in Central Asia, a deep Hindu Kush earthquake and a shallow earthquake.
Weld distortion prediction of the ITER Vacuum Vessel using Finite Element simulations
Energy Technology Data Exchange (ETDEWEB)
Caixas, Joan, E-mail: joan.caixas@f4e.europa.eu [F4E, c/ Josep Pla, n.2, Torres Diagonal Litoral, Edificio B3, E-08019 Barcelona (Spain); Guirao, Julio [Numerical Analysis Technologies, S. L., Marqués de San Esteban 52, Entlo, 33209 Gijon (Spain); Bayon, Angel; Jones, Lawrence; Arbogast, Jean François [F4E, c/ Josep Pla, n.2, Torres Diagonal Litoral, Edificio B3, E-08019 Barcelona (Spain); Barbensi, Andrea [Ansaldo Nucleare, Corso F.M. Perrone, 25, I-16152 Genoa (Italy); Dans, Andres [F4E, c/ Josep Pla, n.2, Torres Diagonal Litoral, Edificio B3, E-08019 Barcelona (Spain); Facca, Aldo [Mangiarotti, Pannellia di Sedegliano, I-33039 Sedegliano (UD) (Italy); Fernandez, Elena; Fernández, José [F4E, c/ Josep Pla, n.2, Torres Diagonal Litoral, Edificio B3, E-08019 Barcelona (Spain); Iglesias, Silvia [Numerical Analysis Technologies, S. L., Marqués de San Esteban 52, Entlo, 33209 Gijon (Spain); Jimenez, Marc; Jucker, Philippe; Micó, Gonzalo [F4E, c/ Josep Pla, n.2, Torres Diagonal Litoral, Edificio B3, E-08019 Barcelona (Spain); Ordieres, Javier [Numerical Analysis Technologies, S. L., Marqués de San Esteban 52, Entlo, 33209 Gijon (Spain); Pacheco, Jose Miguel [F4E, c/ Josep Pla, n.2, Torres Diagonal Litoral, Edificio B3, E-08019 Barcelona (Spain); Paoletti, Roberto [Walter Tosto, Via Erasmo Piaggio, 72, I-66100 Chieti Scalo (Italy); Sanguinetti, Gian Paolo [Ansaldo Nucleare, Corso F.M. Perrone, 25, I-16152 Genoa (Italy); Stamos, Vassilis [F4E, c/ Josep Pla, n.2, Torres Diagonal Litoral, Edificio B3, E-08019 Barcelona (Spain); Tacconelli, Massimiliano [Walter Tosto, Via Erasmo Piaggio, 72, I-66100 Chieti Scalo (Italy)
2013-10-15
Highlights: ► Computational simulations of the weld processes can rapidly assess different sequences. ► Prediction of welding distortion to optimize the manufacturing sequence. ► Accurate shape prediction after each manufacturing phase enables modified procedures to be generated and distortions to be pre-compensated. ► The simulation methodology is improved using condensed computation techniques with ANSYS in order to reduce computational resources. ► For each welding process, the models are calibrated with the results of coupons and mock-ups. -- Abstract: The as-welded surfaces of the ITER Vacuum Vessel sectors need to be within a very tight tolerance, without a full-scale prototype. In order to predict welding distortion and optimize the manufacturing sequence, the industrial contract includes extensive computational simulations of the weld processes which can rapidly assess different sequences. The accurate shape prediction, after each manufacturing phase, enables actual distortions to be compared with the welding simulations to generate modified procedures and pre-compensate distortions. While previous mock-ups used heavy welded-on jigs to try to restrain the distortions, this method allows the use of lightweight jigs and yields important cost and rework savings. In order to enable the optimization of different alternative welding sequences, the simulation methodology is improved using condensed computation techniques with ANSYS in order to reduce computational resources. For each welding process, the models are calibrated with the results of coupons and mock-ups. The calibration is used to construct representative models of each segment and sector. This paper describes the application to the construction of the Vacuum Vessel sector of the enhanced simulation methodology with condensed Finite Element computation techniques and results of the calibration on several test pieces for different types of welds.
Weld distortion prediction of the ITER Vacuum Vessel using Finite Element simulations
International Nuclear Information System (INIS)
Caixas, Joan; Guirao, Julio; Bayon, Angel; Jones, Lawrence; Arbogast, Jean François; Barbensi, Andrea; Dans, Andres; Facca, Aldo; Fernandez, Elena; Fernández, José; Iglesias, Silvia; Jimenez, Marc; Jucker, Philippe; Micó, Gonzalo; Ordieres, Javier; Pacheco, Jose Miguel; Paoletti, Roberto; Sanguinetti, Gian Paolo; Stamos, Vassilis; Tacconelli, Massimiliano
2013-01-01
Highlights: ► Computational simulations of the weld processes can rapidly assess different sequences. ► Prediction of welding distortion to optimize the manufacturing sequence. ► Accurate shape prediction after each manufacturing phase enables modified procedures to be generated and distortions to be pre-compensated. ► The simulation methodology is improved using condensed computation techniques with ANSYS in order to reduce computational resources. ► For each welding process, the models are calibrated with the results of coupons and mock-ups. -- Abstract: The as-welded surfaces of the ITER Vacuum Vessel sectors need to be within a very tight tolerance, without a full-scale prototype. In order to predict welding distortion and optimize the manufacturing sequence, the industrial contract includes extensive computational simulations of the weld processes which can rapidly assess different sequences. The accurate shape prediction, after each manufacturing phase, enables actual distortions to be compared with the welding simulations to generate modified procedures and pre-compensate distortions. While previous mock-ups used heavy welded-on jigs to try to restrain the distortions, this method allows the use of lightweight jigs and yields important cost and rework savings. In order to enable the optimization of different alternative welding sequences, the simulation methodology is improved using condensed computation techniques with ANSYS in order to reduce computational resources. For each welding process, the models are calibrated with the results of coupons and mock-ups. The calibration is used to construct representative models of each segment and sector. This paper describes the application to the construction of the Vacuum Vessel sector of the enhanced simulation methodology with condensed Finite Element computation techniques and results of the calibration on several test pieces for different types of welds.
First operation experiences with ITER-FEAT model pump
International Nuclear Information System (INIS)
Mack, A.; Day, Chr.; Haas, H.; Murdoch, D.K.; Boissin, J.C.; Schummer, P.
2001-01-01
Design and manufacturing of the model cryopump for ITER-FEAT have been finished. After acceptance tests at the contractor's premises, the pump was installed in the TIMO facility, which was prepared for testing the pump under ITER-FEAT relevant operating conditions. The procedures for the final acceptance tests are described. Travelling time, positioning accuracy and leak rate of the main valve are within the requirements. The heat loads to the 5 and 80 K circuits are a factor of two better than the design values. The maximum pumping speeds for H₂, D₂, He and Ne were measured. The value of 58 m³/s for D₂ is well above the contractually required value of 40 m³/s
ITER transient consequences for material damage: modelling versus experiments
International Nuclear Information System (INIS)
Bazylev, B; Janeschitz, G; Landman, I; Pestchanyi, S; Loarte, A; Federici, G; Merola, M; Linke, J; Zhitlukhin, A; Podkovyrov, V; Klimov, N; Safronov, V
2007-01-01
Carbon-fibre composite (CFC) and tungsten macrobrush armours are foreseen as plasma-facing components (PFCs) for the ITER divertor. In ITER the main mechanisms of metallic armour damage remain surface melting and melt motion erosion. In the case of CFC armour, due to the rather different heat conductivities of the CFC fibres, noticeable erosion of the PAN bundles may occur at rather small heat loads. Experiments carried out in the plasma gun facility QSPA-T for ITER-like edge localized mode (ELM) heat loads also demonstrated significant erosion of the frontal and lateral brush edges. Numerical simulations of the CFC and tungsten (W) macrobrush target damage, accounting for the heat loads at the face and lateral brush edges, were carried out for QSPA-T conditions using the three-dimensional (3D) code PHEMOBRID. The modelling results of CFC damage are in good qualitative and quantitative agreement with the experiments. An estimation of the droplet splashing caused by the Kelvin-Helmholtz (KH) instability was also performed
Modelling the physics in iterative reconstruction for transmission computed tomography
Nuyts, Johan; De Man, Bruno; Fessler, Jeffrey A.; Zbijewski, Wojciech; Beekman, Freek J.
2013-01-01
There is increasing interest in iterative reconstruction (IR) as a key tool to improve quality and increase the applicability of X-ray CT imaging. IR has the ability to significantly reduce patient dose; it provides the flexibility to reconstruct images from arbitrary X-ray system geometries, and it allows detailed models of photon transport and detection physics to be included, to accurately correct for a wide variety of image-degrading effects. This paper reviews discretisation issues and modelling of finite spatial resolution, Compton scatter in the scanned object, data noise and the energy spectrum. Widespread implementation of IR with highly accurate model-based correction, however, still requires significant effort. In addition, new hardware will provide new opportunities and challenges to improve CT with new modelling. PMID:23739261
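The iterative loop that such physics models plug into can be illustrated with a SART-style update on a toy linear system y = Ax. The matrix sizes, random seed and iteration count below are arbitrary choices for the sketch; a real CT implementation would replace A with a physically detailed forward projector of the kind this review discusses, and add statistical weighting.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.uniform(0.0, 1.0, (20, 8))   # toy system matrix (rays x pixels)
x_true = rng.uniform(0.5, 1.5, 8)    # "attenuation image", flattened
y = A @ x_true                       # noiseless line integrals

# SART-style iteration: x <- x + C A^T R (y - A x), where R and C hold
# the inverse row and column sums of A (the classical normalizations).
row_sums = A.sum(axis=1)
col_sums = A.sum(axis=0)
x = np.zeros_like(x_true)
for _ in range(5000):
    x = x + (A.T @ ((y - A @ x) / row_sums)) / col_sums
```

For this consistent, overdetermined toy system the iterates approach x_true; with noisy data one would stop early or add the statistical weights and penalty terms used in practice.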
Modelling Feedback in Virtual Patients: An Iterative Approach.
Stathakarou, Natalia; Kononowicz, Andrzej A; Henningsohn, Lars; McGrath, Cormac
2018-01-01
Virtual Patients (VPs) offer learners the opportunity to practice clinical reasoning skills and have recently been integrated in Massive Open Online Courses (MOOCs). Feedback is a central part of a branched VP, allowing the learner to reflect on the consequences of their decisions and actions. However, there is insufficient guidance on how to design feedback models within VPs, especially in the context of their application in MOOCs. In this paper, we share our experiences from building a feedback model for a bladder cancer VP in a Urology MOOC, following an iterative process in three steps. Our results demonstrate how we can systematize the process of improving the quality of VP components by the application of known literature frameworks and extend them with a feedback module. We illustrate the design and re-design process and exemplify with content from our VP. Our results can act as a starting point for discussions on modelling feedback in VPs and invite future research on the topic.
ITER-like current ramps in JET with ILW: experiments, modelling and consequences for ITER
Czech Academy of Sciences Publication Activity Database
Hogeweij, G.M.D.; Calabrò, G.; Sips, A.C.C.; Maggi, C.F.; De Tommasi, G.M.; Joffrin, E.; Loarte, A.; Maviglia, F.; Mlynář, Jan; Rimini, F.G.; Pütterich, T.
2015-01-01
Roč. 55, č. 1 (2015), 013009-013009 ISSN 0029-5515 Institutional support: RVO:61389021 Keywords : tokamak * ramp-up * JET * ITER Subject RIV: BL - Plasma and Gas Discharge Physics Impact factor: 4.040, year: 2015 http://iopscience.iop.org/article/10.1088/0029-5515/55/1/013009#metrics
Towards Automated Binding Affinity Prediction Using an Iterative Linear Interaction Energy Approach
Directory of Open Access Journals (Sweden)
C. Ruben Vosmeer
2014-01-01
Binding affinity prediction of potential drugs to target and off-target proteins is an essential asset in drug development. These predictions require the calculation of binding free energies. In such calculations, it is a major challenge to properly account for both the dynamic nature of the protein and the possible variety of ligand-binding orientations, while keeping computational costs tractable. Recently, an iterative Linear Interaction Energy (LIE) approach was introduced, in which results from multiple simulations of a protein-ligand complex are combined into a single binding free energy using a Boltzmann-weighting-based scheme. This method was shown to reach experimental accuracy for flexible proteins while retaining the computational efficiency of the general LIE approach. Here, we show that the iterative LIE approach can be used to predict binding affinities in an automated way. A workflow was designed using preselected protein conformations, automated ligand docking and clustering, and a (semi-)automated molecular dynamics simulation setup. We show that using this workflow, binding affinities of aryloxypropanolamines to the malleable Cytochrome P450 2D6 enzyme can be predicted without a priori knowledge of dominant protein-ligand conformations. In addition, we provide an outlook for an approach to assess the quality of the LIE predictions, based on simulation outcomes only.
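One common form of the Boltzmann-weighting step can be sketched as follows; the alpha/beta coefficients and the two interaction-energy pairs are purely illustrative numbers, not values from the paper.

```python
import math

K_B_T = 0.593  # kT in kcal/mol at ~298 K

def lie_single(dvdw, dele, alpha=0.18, beta=0.33, gamma=0.0):
    """LIE estimate from one simulation: van der Waals and electrostatic
    interaction-energy differences between the bound and free ligand
    (alpha/beta/gamma values here are illustrative, not fitted)."""
    return alpha * dvdw + beta * dele + gamma

def combine_boltzmann(dgs):
    """Combine per-simulation estimates into one binding free energy,
    exponentially favouring the lowest-energy poses."""
    return -K_B_T * math.log(sum(math.exp(-g / K_B_T) for g in dgs) / len(dgs))

# Two hypothetical binding poses: the tighter-binding one dominates.
dgs = [lie_single(-20.0, -10.0), lie_single(-5.0, -2.0)]
dg = combine_boltzmann(dgs)
```

The combined estimate lands close to the most favourable pose (about -6.5 kcal/mol here), which is the intended behaviour: poorly bound orientations contribute little to the weighted average.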
Modelling of radiation impact on ITER Beryllium wall
Landman, I. S.; Janeschitz, G.
2009-04-01
In the ITER H-Mode confinement regime, edge localized instabilities (ELMs) will perturb the discharge. Plasma lost after each ELM moves along magnetic field lines and impacts on divertor armour, causing plasma contamination by back propagating eroded carbon or tungsten. These impurities produce enhanced radiation flux distributed mainly over the beryllium main chamber wall. The simulation of the complicated processes involved are subject of the integrated tokamak code TOKES that is currently under development. This work describes the new TOKES model for radiation transport through confined plasma. Equations for level populations of the multi-fluid plasma species and the propagation of different kinds of radiation (resonance, recombination and bremsstrahlung photons) are implemented. First simulation results without account of resonance lines are presented.
Modelling of radiation impact on ITER Beryllium wall
Energy Technology Data Exchange (ETDEWEB)
Landman, I.S. [Forschungszentrum Karlsruhe, IHM, FUSION, P.O. Box 3640, 76021 Karlsruhe (Germany)], E-mail: igor.landman@ihm.fzk.de; Janeschitz, G. [Forschungszentrum Karlsruhe, IHM, FUSION, P.O. Box 3640, 76021 Karlsruhe (Germany)
2009-04-30
In the ITER H-Mode confinement regime, edge localized instabilities (ELMs) will perturb the discharge. Plasma lost after each ELM moves along magnetic field lines and impacts on divertor armour, causing plasma contamination by back propagating eroded carbon or tungsten. These impurities produce enhanced radiation flux distributed mainly over the beryllium main chamber wall. The simulation of the complicated processes involved are subject of the integrated tokamak code TOKES that is currently under development. This work describes the new TOKES model for radiation transport through confined plasma. Equations for level populations of the multi-fluid plasma species and the propagation of different kinds of radiation (resonance, recombination and bremsstrahlung photons) are implemented. First simulation results without account of resonance lines are presented.
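As a minimal illustration of what a level-population balance looks like, the sketch below solves a steady-state two-level system with invented rate coefficients; TOKES's actual multi-fluid, multi-level equations and atomic rates are far more involved.

```python
# Two-level collisional-radiative balance with illustrative rates (s^-1):
# collisional excitation C12 (level 1 -> 2), collisional de-excitation C21
# and spontaneous emission A21 (level 2 -> 1).
# Steady state: n1*C12 = n2*(C21 + A21).
C12, C21, A21 = 1.0e3, 2.0e2, 5.0e4
n_total = 1.0e19           # total species density, m^-3

ratio = C12 / (C21 + A21)  # n2/n1 in steady state
n1 = n_total / (1.0 + ratio)
n2 = n_total - n1
emissivity = n2 * A21      # spontaneous photons emitted per m^3 per s
```

The emissivity computed this way is the kind of source term a radiation transport solver would then propagate through the confined plasma.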
Weld distortion prediction and control of the ITER vacuum vessel manufacturing mock-ups
International Nuclear Information System (INIS)
Ottolini, Marco; Barbensi, Andrea
2014-01-01
The fabrication of the ITER Vacuum Vessel Sectors is an unprecedented challenge, due to their dimensions, the close tolerances and the complex 'D' shape. The technological issues were faced by producing full-scale mock-ups to confirm the manufacturing feasibility of achieving very tight tolerances and to qualify the main manufacturing processes, by step-by-step welding distortion control, by the qualification of non-conventional NDT inspection techniques and by innovative 3D dimensional inspections. The Supplier is required to fabricate at least two mock-ups, inboard and outboard, related to the manufacturing method of the VV Sectors, to demonstrate control of the welding distortions needed to achieve tolerances, optimizing welding sequences and calibrating computer simulations of welding distortions. The stages of this preparatory activity are: prediction of welding distortion for fabrication mock-ups representative of selected segments; demonstration that distortion predictions are consistent with experimental results from 3D dimensional inspection; understanding of the reasons for possible deviations between numerical and experimental results and definition of actions to solve these issues; demonstration that possible calculation simplifications, adopted to speed up the analysis process, do not significantly affect the welding distortion prediction. This paper describes the weld distortion prediction and control on the manufacturing mock-ups of the ITER Vacuum Vessel Sectors, with particular emphasis on the lessons learned. (authors)
Notohamiprodjo, S; Deak, Z; Meurer, F; Maertz, F; Mueck, F G; Geyer, L L; Wirth, S
2015-01-01
The purpose of this study was to compare cranial CT (CCT) image quality (IQ) of the MBIR algorithm with standard iterative reconstruction (ASiR). In this institutional review board (IRB)-approved study, raw data sets of 100 unenhanced CCT examinations (120 kV, 50-260 mAs, 20 mm collimation, 0.984 pitch) were reconstructed with both ASiR and MBIR. Signal-to-noise (SNR) and contrast-to-noise (CNR) ratios were calculated from attenuation values measured in the caudate nucleus, frontal white matter, anterior ventricle horn, fourth ventricle, and pons. Two radiologists, who were blinded to the reconstruction algorithms, evaluated anonymized multiplanar reformations of 2.5 mm with respect to the depiction of different parenchymal structures and the impact of artefacts on IQ with a five-point scale (0: unacceptable, 1: less than average, 2: average, 3: above average, 4: excellent). MBIR decreased artefacts more effectively than ASiR, and MBIR-reconstructed images were rated significantly higher for image quality than those reconstructed with ASiR. As CCT is an examination that is frequently required, the use of MBIR may allow for a substantial reduction of the radiation exposure caused by medical diagnostics. • Model-based iterative reconstruction (MBIR) effectively decreased artefacts in cranial CT. • MBIR-reconstructed images were rated with significantly higher scores for image quality. • Model-based iterative reconstruction may allow reduced-dose diagnostic examination protocols.
Predictive modeling of complications.
Osorio, Joseph A; Scheer, Justin K; Ames, Christopher P
2016-09-01
Predictive analytic algorithms are designed to identify patterns in the data that allow for accurate predictions without the need for a hypothesis. Therefore, predictive modeling can provide detailed and patient-specific information that can be readily applied when discussing the risks of surgery with a patient. There are few studies using predictive modeling techniques in the adult spine surgery literature. These types of studies represent the beginning of the use of predictive analytics in spine surgery outcomes. We will discuss the advancements in the field of spine surgery with respect to predictive analytics, the controversies surrounding the technique, and the future directions.
Model-based normalization for iterative 3D PET image
International Nuclear Information System (INIS)
Bai, B.; Li, Q.; Asma, E.; Leahy, R.M.; Holdsworth, C.H.; Chatziioannou, A.; Tai, Y.C.
2002-01-01
We describe a method for normalization in 3D PET for use with maximum a posteriori (MAP) or other iterative model-based image reconstruction methods. This approach is an extension of previous factored normalization methods in which we include separate factors for detector sensitivity, geometric response, block effects and deadtime. Since our MAP reconstruction approach already models some of the geometric factors in the forward projection, the normalization factors must be modified to account only for effects not already included in the model. We describe a maximum likelihood approach to joint estimation of the count-rate independent normalization factors, which we apply to data from a uniform cylindrical source. We then compute block-wise and block-profile deadtime correction factors using singles and coincidence data, respectively, from a multiframe cylindrical source. We have applied this method for reconstruction of data from the Concorde microPET P4 scanner. Quantitative evaluation of this method using well-counter measurements of activity in a multicompartment phantom compares favourably with normalization based directly on cylindrical source measurements. (author)
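The factored form of the normalization can be sketched with a toy example; the detector count, per-detector sensitivities, the flat geometric factor and the single deadtime value below are invented for illustration, and the real method also estimates block factors and count-rate effects omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)
n_det = 6
eps = rng.uniform(0.8, 1.2, n_det)   # per-detector sensitivities (invented)
geom = np.ones((n_det, n_det))       # geometric response (flat in this toy)
deadtime = 0.95                      # global live-time fraction (invented)

def norm_factor(i, j):
    """Factored normalization for a line of response (i, j):
    detector sensitivities x geometric response x deadtime."""
    return eps[i] * eps[j] * geom[i, j] * deadtime

# Uniform-source calibration idea: every LOR sees the same true rate, so
# measured counts divided by the factored model should be ~constant.
true_rate = 100.0
measured = np.array([[true_rate * norm_factor(i, j)
                      for j in range(n_det)] for i in range(n_det)])
recovered = measured / np.array([[norm_factor(i, j)
                                  for j in range(n_det)] for i in range(n_det)])
```

In the MAP setting described above, geometric effects already present in the forward model would be dropped from `geom` so they are not corrected twice.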
A comparison of linear interpolation models for iterative CT reconstruction.
Hahn, Katharina; Schöndube, Harald; Stierstorfer, Karl; Hornegger, Joachim; Noo, Frédéric
2016-12-01
Recent reports indicate that model-based iterative reconstruction methods may improve image quality in computed tomography (CT). One difficulty with these methods is the number of options available to implement them, including the selection of the forward projection model and the penalty term. Currently, the literature is fairly scarce in terms of guidance regarding this selection step, whereas these options impact image quality. Here, the authors investigate the merits of three forward projection models that rely on linear interpolation: the distance-driven method, Joseph's method, and the bilinear method. The authors' selection is motivated by three factors: (1) in CT, linear interpolation is often seen as a suitable trade-off between discretization errors and computational cost, (2) the first two methods are popular with manufacturers, and (3) the third method enables assessing the importance of a key assumption in the other methods. One approach to evaluate forward projection models is to inspect their effect on discretized images, as well as the effect of their transpose on data sets, but significance of such studies is unclear since the matrix and its transpose are always jointly used in iterative reconstruction. Another approach is to investigate the models in the context they are used, i.e., together with statistical weights and a penalty term. Unfortunately, this approach requires the selection of a preferred objective function and does not provide clear information on features that are intrinsic to the model. The authors adopted the following two-stage methodology. First, the authors analyze images that progressively include components of the singular value decomposition of the model in a reconstructed image without statistical weights and penalty term. Next, the authors examine the impact of weights and penalty on observed differences. Image quality metrics were investigated for 16 different fan-beam imaging scenarios that enabled probing various aspects
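Joseph's method itself is compact enough to sketch: step along the dominant axis one pixel at a time and linearly interpolate in the transverse direction. The 2-D parallel-ray implementation below is a simplified illustration (rays restricted to |theta| < 45 degrees, crude edge handling), not any of the authors' benchmarked implementations.

```python
import numpy as np

def joseph_forward_project(image, theta):
    """Line integrals through a square image by Joseph's method (2-D sketch).

    For a ray at angle theta (|theta| < 45 degrees from the x-axis), step
    one column at a time, linearly interpolate between the two rows the
    ray passes between, and weight by the intersection length 1/cos(theta).
    """
    n = image.shape[0]
    sums = np.zeros(n)
    t = np.tan(theta)
    for r in range(n):               # one ray entering at height r at x = 0
        total = 0.0
        for col in range(n):
            y = r + t * col          # ray height at this column (pixels)
            y0 = int(np.floor(y))
            w = y - y0
            if 0 <= y0 < n - 1:
                total += (1 - w) * image[y0, col] + w * image[y0 + 1, col]
            elif y0 == n - 1 and w == 0.0:
                total += image[y0, col]
        sums[r] = total / np.cos(theta)
    return sums

phantom = np.zeros((8, 8))
phantom[3:5, 3:5] = 1.0              # small square insert
p0 = joseph_forward_project(phantom, 0.0)
```

At theta = 0 the projector reduces to plain row sums, which makes a convenient sanity check before pairing the model with weights and a penalty.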
Goodenberger, Martin H; Wagner-Bartak, Nicolaus A; Gupta, Shiva; Liu, Xinming; Yap, Ramon Q; Sun, Jia; Tamm, Eric P; Jensen, Corey T
The purpose of this study was to compare abdominopelvic computed tomography images reconstructed with adaptive statistical iterative reconstruction-V (ASIR-V) with model-based iterative reconstruction (Veo 3.0), ASIR, and filtered back projection (FBP). Abdominopelvic computed tomography scans for 36 patients (26 males and 10 females) were reconstructed using FBP, ASIR (80%), Veo 3.0, and ASIR-V (30%, 60%, 90%). Mean ± SD patient age was 32 ± 10 years with mean ± SD body mass index of 26.9 ± 4.4 kg/m². Images were reviewed by 2 independent readers in a blinded, randomized fashion. Hounsfield unit, noise, and contrast-to-noise ratio (CNR) values were calculated for each reconstruction algorithm for further comparison. Phantom evaluation of low-contrast detectability (LCD) and high-contrast resolution was performed. Adaptive statistical iterative reconstruction-V 30%, ASIR-V 60%, and ASIR 80% were generally superior qualitatively compared with ASIR-V 90%, Veo 3.0, and FBP. Contrast-to-noise ratios differed significantly between algorithms, with respective CNR values of 5.54 ± 2.39, 8.78 ± 3.15, and 3.49 ± 1.77. Veo 3.0 and ASIR 80% had the best and worst spatial resolution, respectively. Adaptive statistical iterative reconstruction-V 30% and ASIR-V 60% provided the best combination of qualitative and quantitative performance. Adaptive statistical iterative reconstruction 80% was equivalent qualitatively, but demonstrated inferior spatial resolution and LCD.
Scalar flux modeling in turbulent flames using iterative deconvolution
Nikolaou, Z. M.; Cant, R. S.; Vervisch, L.
2018-04-01
In the context of large eddy simulations, deconvolution is an attractive alternative for modeling the unclosed terms appearing in the filtered governing equations. Such methods have been used in a number of studies for non-reacting and incompressible flows; however, their application in reacting flows is limited in comparison. Deconvolution methods originate from clearly defined operations, and in theory they can be used in order to model any unclosed term in the filtered equations including the scalar flux. In this study, an iterative deconvolution algorithm is used in order to provide a closure for the scalar flux term in a turbulent premixed flame by explicitly filtering the deconvoluted fields. The assessment of the method is conducted a priori using a three-dimensional direct numerical simulation database of a turbulent freely propagating premixed flame in a canonical configuration. In contrast to most classical a priori studies, the assessment is more stringent as it is performed on a much coarser mesh which is constructed using the filtered fields as obtained from the direct simulations. For the conditions tested in this study, deconvolution is found to provide good estimates both of the scalar flux and of its divergence.
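The explicit-filtering idea can be made concrete with a van Cittert-style iterative deconvolution on a 1-D periodic field; the Gaussian filter, resolution and iteration count are arbitrary stand-ins for an actual LES filter and mesh, and the paper's own algorithm and flame data are not reproduced here.

```python
import numpy as np

def les_filter(u, sigma=2.0):
    """Stand-in LES filter G: spectral Gaussian with periodic boundaries."""
    k = 2.0 * np.pi * np.fft.fftfreq(u.size)   # wavenumber, rad/sample
    g_hat = np.exp(-0.5 * (sigma * k) ** 2)    # Gaussian transfer function
    return np.fft.ifft(np.fft.fft(u) * g_hat).real

def van_cittert(u_filtered, n_iter=20, sigma=2.0):
    """Iteratively approximate the unfiltered field u from G*u:
    u_{k+1} = u_k + (u_filtered - G*u_k), truncated after n_iter steps."""
    u = u_filtered.copy()
    for _ in range(n_iter):
        u = u + (u_filtered - les_filter(u, sigma))
    return u

x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
truth = np.sin(3 * x) + 0.5 * np.cos(7 * x)
filtered = les_filter(truth)
deconvolved = van_cittert(filtered)
err_filtered = np.linalg.norm(filtered - truth)
err_deconvolved = np.linalg.norm(deconvolved - truth)
```

In an a priori test, one would deconvolve the filtered DNS fields, evaluate the unclosed term (here the scalar flux) from them, and explicitly re-filter the result for comparison against the exact filtered term.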
Parameter study on dynamic behavior of ITER tokamak scaled model
International Nuclear Information System (INIS)
Nakahira, Masataka; Takeda, Nobukazu
2004-12-01
This report summarizes a study on the dynamic behavior of the ITER tokamak scaled model, based on a parametric analysis of the base plate thickness, aimed at finding a reasonable solution that gives sufficient rigidity without affecting the dynamic behavior. For this purpose, modal analyses were performed changing the base plate thickness from the present design value of 55 mm to 100 mm, 150 mm and 190 mm. Using these results, a modification plan for the plate thickness was studied. It was found that a thickness of 150 mm brings the first natural frequency to about 90% of the ideal rigid case. A modification study was then performed to determine an adequate plate thickness. Considering material availability, transportation and weldability, it was found that 300 mm would be the limiting thickness. The analysis of the 300 mm case showed the first natural frequency reaching 97% of the ideal rigid case. It was, however, found that the bolts then became too long, introducing an additional twisting mode. As a result, it was concluded that a base plate thickness of 150 mm or 190 mm gives sufficient rigidity for the dynamic behavior of the scaled model. (author)
International Nuclear Information System (INIS)
Liu, Y B; Su, Y M; Ju, L; Huang, S L
2012-01-01
A new numerical method was developed for predicting the steady hydrodynamic performance of a propeller-rudder-bulb system. In the calculation, the rudder and bulb were treated as a whole, and the potential-based surface panel method was applied both to the propeller and to the rudder-bulb system. The interaction between the propeller and the rudder-bulb was taken into account by velocity potential iteration, in which the influence of propeller rotation was considered through the average influence coefficient. In the influence coefficient computation, the singular value should be found and deleted. Numerical results showed that the presented method is effective for predicting the steady hydrodynamic performance of propeller-rudder and propeller-rudder-bulb systems. Compared with the induced velocity iterative method, the presented method saves programming and calculation time. By changing the dimensions, the principal parameter affecting the energy-saving effect, the bulb size, was studied; the results show that the bulb on the rudder has an optimal size at the design advance coefficient.
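The velocity potential iteration amounts to a fixed-point coupling between the two solvers. The sketch below replaces each panel-method solve with a damped linear response (coefficients invented purely for illustration) to show the iterate-until-mutually-consistent structure.

```python
def coupled_iteration(tol=1e-10, max_iter=100):
    """Toy fixed-point coupling: each component's 'induced velocity' is a
    damped linear response to the other's, iterated until the two are
    mutually consistent. Coefficients are illustrative, not hydrodynamics."""
    u_prop, u_rudder = 0.0, 0.0
    for k in range(max_iter):
        u_prop_new = 1.0 + 0.3 * u_rudder       # propeller sees rudder influence
        u_rudder_new = 0.5 + 0.2 * u_prop_new   # rudder sees propeller slipstream
        if (abs(u_prop_new - u_prop) < tol
                and abs(u_rudder_new - u_rudder) < tol):
            return u_prop_new, u_rudder_new, k
        u_prop, u_rudder = u_prop_new, u_rudder_new
    return u_prop, u_rudder, max_iter

u_p, u_r, n_it = coupled_iteration()
```

Because each sweep contracts the error, only a handful of iterations are needed here; the real method's cost is dominated by the panel solves inside each sweep, which is why avoiding a full induced-velocity iteration saves time.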
Use of MCAM in creating 3D neutronics model for ITER building
International Nuclear Information System (INIS)
Zeng Qin; Wang Guozhong; Dang Tongqiang; Long Pengcheng; Loughlin, Michael
2012-01-01
Highlights: ► We created a 3D neutronics model of the ITER building. ► The model was produced from the engineering CAD model by MCAM software. ► The neutron flux map in the ITER building was calculated. - Abstract: The three dimensional (3D) neutronics reference model of International Thermonuclear Experimental Reactor (ITER) only defines the tokamak machine and extends to the bio-shield. In order to meet further 3D neutronics analysis needs, it is necessary to create a 3D reference model of the ITER building. Monte Carlo Automatic Modeling Program for Radiation Transport Simulation (MCAM) was developed as a computer aided design (CAD) based bi-directional interface program between general CAD systems and Monte Carlo radiation transport simulation codes. With the help of MCAM version 4.8, the 3D neutronics model of ITER building was created based on the engineering CAD model. The calculation of the neutron flux map in ITER building during operation showed the correctness and usability of the model. This model is the first detailed ITER building 3D neutronics model and it will be made available to all international organization collaborators as a reference model.
Use of MCAM in creating 3D neutronics model for ITER building
Energy Technology Data Exchange (ETDEWEB)
Zeng Qin [Institute of Nuclear Energy Safety Technology, Chinese Academy of Sciences, Hefei, Anhui 230031 (China); School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, Anhui 230027 (China); Wang Guozhong, E-mail: mango33@mail.ustc.edu.cn [School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, Anhui 230027 (China); Dang Tongqiang [School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, Anhui 230027 (China); Long Pengcheng [Institute of Nuclear Energy Safety Technology, Chinese Academy of Sciences, Hefei, Anhui 230031 (China); School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, Anhui 230027 (China); Loughlin, Michael [ITER Organization, Route de Vinon sur Verdon, 13115 St. Paul-lez-Durance (France)
2012-08-15
Highlights: ► We created a 3D neutronics model of the ITER building. ► The model was produced from the engineering CAD model by MCAM software. ► The neutron flux map in the ITER building was calculated. - Abstract: The three dimensional (3D) neutronics reference model of International Thermonuclear Experimental Reactor (ITER) only defines the tokamak machine and extends to the bio-shield. In order to meet further 3D neutronics analysis needs, it is necessary to create a 3D reference model of the ITER building. Monte Carlo Automatic Modeling Program for Radiation Transport Simulation (MCAM) was developed as a computer aided design (CAD) based bi-directional interface program between general CAD systems and Monte Carlo radiation transport simulation codes. With the help of MCAM version 4.8, the 3D neutronics model of ITER building was created based on the engineering CAD model. The calculation of the neutron flux map in ITER building during operation showed the correctness and usability of the model. This model is the first detailed ITER building 3D neutronics model and it will be made available to all international organization collaborators as a reference model.
Exploring the knowledge behind predictions in everyday cognition: an iterated learning study.
Stephens, Rachel G; Dunn, John C; Rao, Li-Lin; Li, Shu
2015-10-01
Making accurate predictions about events is an important but difficult task. Recent work suggests that people are adept at this task, making predictions that reflect surprisingly accurate knowledge of the distributions of real quantities. Across three experiments, we used an iterated learning procedure to explore the basis of this knowledge: to what extent is domain experience critical to accurate predictions and how accurate are people when faced with unfamiliar domains? In Experiment 1, two groups of participants, one resident in Australia, the other in China, predicted the values of quantities familiar to both (movie run-times), unfamiliar to both (the lengths of Pharaoh reigns), and familiar to one but unfamiliar to the other (cake baking durations and the lengths of Beijing bus routes). While predictions from both groups were reasonably accurate overall, predictions were inaccurate in the selectively unfamiliar domains and, surprisingly, predictions by the China-resident group were also inaccurate for a highly familiar domain: local bus route lengths. Focusing on bus routes, two follow-up experiments with Australia-resident groups clarified the knowledge and strategies that people draw upon, plus important determinants of accurate predictions. For unfamiliar domains, people appear to rely on extrapolating from (not simply directly applying) related knowledge. However, we show that people's predictions are subject to two sources of error: in the estimation of quantities in a familiar domain and extension to plausible values in an unfamiliar domain. We propose that the key to successful predictions is not simply domain experience itself, but explicit experience of relevant quantities.
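The prediction task studied here can be sketched as Bayesian inference over a prior distribution of everyday quantities. A minimal Python illustration; the normal prior standing in for domain experience is our own assumed example, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for "experienced" quantities, e.g. movie run-times
# in minutes, roughly normal around 100 min (an assumption for illustration).
prior_samples = rng.normal(100.0, 20.0, size=10_000)
prior_samples = prior_samples[prior_samples > 0]

def predict_total(t_current, prior):
    """Posterior-median prediction of a quantity's total value,
    given that it is at least t_current."""
    consistent = prior[prior >= t_current]
    return float(np.median(consistent))

# A participant with domain experience predicts the total run-time of a
# movie that has already run 90 minutes:
pred = predict_total(90.0, prior_samples)
```

Extrapolating from related knowledge, as the paper describes for unfamiliar domains, would correspond to substituting a prior learned from a different but related quantity.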
Parallelization of the model-based iterative reconstruction algorithm DIRA
International Nuclear Information System (INIS)
Oertenberg, A.; Sandborg, M.; Alm Carlsson, G.; Malusek, A.; Magnusson, M.
2016-01-01
New paradigms for parallel programming have been devised to simplify software development on multi-core processors and many-core graphical processing units (GPU). Despite their obvious benefits, the parallelization of existing computer programs is not an easy task. In this work, the use of the Open Multiprocessing (OpenMP) and Open Computing Language (OpenCL) frameworks is considered for the parallelization of the model-based iterative reconstruction algorithm DIRA with the aim to significantly shorten the code's execution time. Selected routines were parallelized using the OpenMP and OpenCL libraries; some routines were converted from MATLAB to C and optimised. Parallelization of the code with OpenMP was easy and resulted in an overall speedup of 15 on a 16-core computer. Parallelization with OpenCL was more difficult owing to differences between the central processing unit and GPU architectures. The resulting speedup was substantially lower than the theoretical peak performance of the GPU; the cause was explained. (authors)
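The reported OpenMP result can be sanity-checked against Amdahl's law. A small sketch; inferring the parallel fraction this way is our own back-of-the-envelope reading, not an analysis from the paper:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: speedup of a program whose parallelizable
    fraction p runs on n cores, the rest staying serial."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

def parallel_fraction_from_speedup(speedup, cores):
    """Invert Amdahl's law to estimate the parallel fraction."""
    return (1.0 - 1.0 / speedup) / (1.0 - 1.0 / cores)

# The reported DIRA figure: a speedup of 15 on a 16-core machine implies
# that roughly 99.6% of the runtime was in the parallelized routines.
p = parallel_fraction_from_speedup(15.0, 16)
```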
Archaeological predictive model set.
2015-03-01
This report is the documentation for Task 7 of the Statewide Archaeological Predictive Model Set. The goal of this project is to develop a set of statewide predictive models to assist the planning of transportation projects. PennDOT is developing t...
ITER Side Correction Coil Quench model and analysis
Nicollet, S.; Bessette, D.; Ciazynski, D.; Duchateau, J. L.; Gauthier, F.; Lacroix, B.
2016-12-01
Previous thermohydraulic studies performed for the ITER TF, CS and PF magnet systems have brought some important information on the detection and consequences of a quench as a function of the initial conditions (deposited energy, heated length). Even if the temperature margin of the Correction Coils is high, their behavior during a quench should also be studied since a quench is likely to be triggered by potential anomalies in joints, ground fault on the instrumentation wires, etc. A model has been developed with the SuperMagnet Code (Bagnasco et al., 2010) for a Side Correction Coil (SCC2) with four pancakes cooled in parallel, each of them represented by a Thea module (with the proper Cable In Conduit Conductor characteristics). All the other coils of the PF cooling loop are hydraulically connected in parallel (top/bottom correction coils and six Poloidal Field Coils) are modeled by Flower modules with equivalent hydraulics properties. The model and the analysis results are presented for five quench initiation cases with/without fast discharge: two quenches initiated by a heat input to the innermost turn of one pancake (case 1 and case 2) and two other quenches initiated at the innermost turns of four pancakes (case 3 and case 4). In the 5th case, the quench is initiated at the middle turn of one pancake. The impact on the cooling circuit, e.g. the exceedance of the opening pressure of the quench relief valves, is detailed in case of an undetected quench (i.e. no discharge of the magnet). Particular attention is also paid to a possible secondary quench detection system based on measured thermohydraulic signals (pressure, temperature and/or helium mass flow rate). The maximum cable temperature achieved in case of a fast current discharge (primary detection by voltage) is compared to the design hot spot criterion of 150 K, which includes the contribution of helium and jacket.
Levy, R.; Mcginness, H.
1976-01-01
Investigations were performed to predict the power available from the wind at the Goldstone, California, antenna site complex. The background for power prediction was derived from a statistical evaluation of available wind speed data records at this location and at nearby locations similarly situated within the Mojave desert. In addition to a model for power prediction over relatively long periods of time, an interim simulation model that produces sample wind speeds is described. The interim model furnishes uncorrelated sample speeds at hourly intervals that reproduce the statistical wind distribution at Goldstone. A stochastic simulation model to provide speed samples representative of both the statistical speed distributions and correlations is also discussed.
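A toy version of such an interim simulation model can be written down directly; the Weibull shape and scale below are generic illustrative values, not the Goldstone statistics evaluated in the report:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative parameters only: desert-site winds are often modeled as
# Weibull-distributed; these numbers are assumptions, not the report's data.
SHAPE_K = 2.0   # Weibull shape (Rayleigh-like)
SCALE_C = 6.0   # Weibull scale, m/s

def sample_hourly_speeds(n_hours):
    """Uncorrelated hourly wind-speed samples, as in the interim model."""
    return SCALE_C * rng.weibull(SHAPE_K, size=n_hours)

def mean_power_density(speeds, rho=1.2):
    """Mean wind power density in W/m^2: P/A = 0.5 * rho * v^3."""
    return 0.5 * rho * float(np.mean(speeds ** 3))

speeds = sample_hourly_speeds(24 * 365)   # one year of hourly samples
pd = mean_power_density(speeds)
```

A stochastic model with speed correlations, as discussed in the abstract, would replace the independent draws with an autocorrelated process fitted to the site data.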
Improving head and neck CTA with hybrid and model-based iterative reconstruction techniques
Niesten, J. M.; van der Schaaf, I. C.; Vos, P. C.; Willemink, MJ; Velthuis, B. K.
2015-01-01
AIM: To compare image quality of head and neck computed tomography angiography (CTA) reconstructed with filtered back projection (FBP), hybrid iterative reconstruction (HIR) and model-based iterative reconstruction (MIR) algorithms. MATERIALS AND METHODS: The raw data of 34 studies were
ITER central solenoid model coil heat treatment complete and assembly started
International Nuclear Information System (INIS)
Thome, R.J.; Okuno, K.
1998-01-01
A major R and D task in the ITER program is to fabricate a Superconducting Model Coil for the Central Solenoid to establish the design and fabrication methods for ITER-size coils and to demonstrate conductor performance. Completion of its components is expected in 1998, to be followed by assembly with structural components and testing in a facility at JAERI.
Analytical prediction of thermal performance of hypervapotron and its application to ITER
International Nuclear Information System (INIS)
Baxi, C.B.; Falter, H.
1992-09-01
A hypervapotron (HV) is a water-cooled device made of a high thermal conductivity material such as copper. A surface heat flux of up to 30 MW/m² has been achieved in copper hypervapotrons cooled by water at a velocity of 10 m/s and at a pressure of six bar. Hypervapotrons have been used in the past as beam dumps at the Joint European Torus (JET). It is planned to use them for divertor cooling during the Mark II upgrade of JET. Although a large amount of experimental data has been collected on these devices, an analytical performance prediction has not been done before due to the complexity of the heat transfer mechanisms. A method to analytically predict the thermal performance of the hypervapotron is described. The method uses a combination of a number of thermal hydraulic correlations and a finite element analysis. The analytical prediction shows an excellent agreement with experimental results over a wide range of velocities, pressures, subcooling, and geometries. The method was used to predict the performance of a hypervapotron made of beryllium. Merits of the use of hypervapotrons for the International Thermonuclear Experimental Reactor (ITER) and the Tokamak Physics Experiment (TPX) are discussed.
Wu, M. Q.; Pan, C. K.; Chan, V. S.; Li, G. Q.; Garofalo, A. M.; Jian, X.; Liu, L.; Ren, Q. L.; Chen, J. L.; Gao, X.; Gong, X. Z.; Ding, S. Y.; Qian, J. P.; Cfetr Physics Team
2018-04-01
Time-dependent integrated modeling of DIII-D ITER-like and high bootstrap current plasma ramp-up discharges has been performed with the equilibrium code EFIT and the transport codes TGYRO and ONETWO. Electron and ion temperature profiles are simulated by TGYRO with the TGLF (SAT0 or VX model) turbulent and NEO neoclassical transport models. The VX model is a new empirical extension of the TGLF turbulent model [Jian et al., Nucl. Fusion 58, 016011 (2018)], which captures the physics of multi-scale interaction between low-k and high-k turbulence from nonlinear gyro-kinetic simulation. This model is demonstrated to accurately model low Ip discharges from the EAST tokamak. The time evolution of the plasma current density profile is simulated by ONETWO with the experimental current ramp-up rate. The general trend of the predicted evolution of the current density profile is consistent with that obtained from the equilibrium reconstruction with Motional Stark effect constraints. The predicted evolution of βN, li, and βP also agrees well with the experiments. For the ITER-like cases, the predicted electron and ion temperature profiles using TGLF_Sat0 agree closely with the experimentally measured profiles, and are demonstrably better than those from other proposed transport models. For the high bootstrap current case, the VX model gives better predictions of the electron and ion temperature profiles. It is found that the SAT0 model works well at high IP (>0.76 MA) while the VX model covers a wider range of plasma current (IP > 0.6 MA). The results reported in this paper suggest that the developed integrated modeling could be a candidate for ITER and CFETR ramp-up engineering design modeling.
Inverse and Predictive Modeling
Energy Technology Data Exchange (ETDEWEB)
Syracuse, Ellen Marie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-09-27
The LANL Seismo-Acoustic team has a strong capability in developing data-driven models that accurately predict a variety of observations. These models range from the simple (one-dimensional models that are constrained by a single dataset and can be used for quick and efficient predictions) to the complex (multidimensional models that are constrained by several types of data and result in more accurate predictions). While team members typically build models of geophysical characteristics of Earth and source distributions at scales of 1 to 1000s of km, the techniques used are applicable to other types of physical characteristics at an even greater range of scales. The following cases provide a snapshot of some of the modeling work done by the Seismo-Acoustic team at LANL.
An iterative stochastic ensemble method for parameter estimation of subsurface flow models
International Nuclear Information System (INIS)
Elsheikh, Ahmed H.; Wheeler, Mary F.; Hoteit, Ibrahim
2013-01-01
Parameter estimation for subsurface flow models is an essential step for maximizing the value of numerical simulations for future prediction and the development of effective control strategies. We propose the iterative stochastic ensemble method (ISEM) as a general method for parameter estimation based on stochastic estimation of gradients using an ensemble of directional derivatives. ISEM eliminates the need for adjoint coding and deals with the numerical simulator as a black box. The proposed method employs directional derivatives within a Gauss–Newton iteration. The update equation in ISEM resembles the update step in the ensemble Kalman filter; however, the inverse of the output covariance matrix in ISEM is regularized using standard truncated singular value decomposition or Tikhonov regularization. We also investigate the performance of a set of shrinkage-based covariance estimators within ISEM. The proposed method is successfully applied on several nonlinear parameter estimation problems for subsurface flow models. The efficiency of the proposed algorithm is demonstrated by the small size of the utilized ensembles and in terms of error convergence rates.
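The ensemble-gradient Gauss-Newton update with TSVD regularization can be illustrated on a toy black-box problem; the forward model, ensemble size, and tolerances below are all invented for illustration and are not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def forward(theta):
    """Black-box simulator stand-in (ISEM wraps a real flow simulator)."""
    x = np.linspace(0.0, 1.0, 20)
    return theta[0] * np.exp(-theta[1] * x)

d_obs = forward(np.array([2.0, 3.0]))  # synthetic observations

def isem_step(theta, n_ens=30, eps=1e-3, trunc=1e-6):
    """One Gauss-Newton update using an ensemble-estimated Jacobian:
    directional derivatives along random perturbations, with the
    pseudo-inverse regularized by truncated SVD (as in ISEM)."""
    P = rng.standard_normal((n_ens, theta.size))            # directions
    D = np.stack([(forward(theta + eps * p) - forward(theta)) / eps
                  for p in P])                              # directional derivs
    J, *_ = np.linalg.lstsq(P, D, rcond=None)               # J.T estimate
    r = d_obs - forward(theta)
    U, s, Vt = np.linalg.svd(J.T, full_matrices=False)
    s_inv = np.where(s > trunc * s.max(), 1.0 / s, 0.0)     # TSVD pseudo-inverse
    return theta + Vt.T @ (s_inv * (U.T @ r))

theta = np.array([1.0, 1.0])
for _ in range(10):
    theta = isem_step(theta)
```

The adjoint-free character of the method shows up in `forward` being called only as a black box, never differentiated analytically.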
An iterative stochastic ensemble method for parameter estimation of subsurface flow models
Elsheikh, Ahmed H.
2013-06-01
Parameter estimation for subsurface flow models is an essential step for maximizing the value of numerical simulations for future prediction and the development of effective control strategies. We propose the iterative stochastic ensemble method (ISEM) as a general method for parameter estimation based on stochastic estimation of gradients using an ensemble of directional derivatives. ISEM eliminates the need for adjoint coding and deals with the numerical simulator as a black box. The proposed method employs directional derivatives within a Gauss-Newton iteration. The update equation in ISEM resembles the update step in the ensemble Kalman filter; however, the inverse of the output covariance matrix in ISEM is regularized using standard truncated singular value decomposition or Tikhonov regularization. We also investigate the performance of a set of shrinkage-based covariance estimators within ISEM. The proposed method is successfully applied on several nonlinear parameter estimation problems for subsurface flow models. The efficiency of the proposed algorithm is demonstrated by the small size of the utilized ensembles and in terms of error convergence rates. © 2013 Elsevier Inc.
Numerical modeling of 3D halo current path in ITER structures
Energy Technology Data Exchange (ETDEWEB)
Bettini, Paolo; Marconato, Nicolò; Furno Palumbo, Maurizio; Peruzzo, Simone [Consorzio RFX, EURATOM-ENEA Association, C.so Stati Uniti 4, 35127 Padova (Italy); Specogna, Ruben, E-mail: ruben.specogna@uniud.it [DIEGM, Università di Udine, Via delle Scienze, 208, 33100 Udine (Italy); Albanese, Raffaele; Rubinacci, Guglielmo; Ventre, Salvatore; Villone, Fabio [Consorzio CREATE, EURATOM-ENEA Association, Via Claudio 21, 80125 Napoli (Italy)
2013-10-15
Highlights: ► Two numerical codes for the evaluation of halo currents in 3D structures are presented. ► A simplified plasma model is adopted to provide the input (halo current injected into the FW). ► Two representative test cases of ITER symmetric and asymmetric VDEs have been analyzed. ► The proposed approaches provide results in excellent agreement for both cases. -- Abstract: Disruptions represent one of the main concerns for Tokamak operation, especially in view of fusion reactors, or experimental test reactors, due to the electro-mechanical loads induced by halo and eddy currents. The development of a predictive tool that allows estimation of the magnitude and spatial distribution of the halo current forces is of paramount importance in order to ensure robust vessel and in-vessel component design. With this aim, two numerical codes (CARIDDI, CAFE) have been developed, which allow calculation of the halo current path (resistive distribution) in the passive structures surrounding the plasma. The former is based on an integral formulation for the eddy currents problem particularized to the static case; the latter implements a pair of 3D FEM complementary formulations for the solution of the steady-state current conduction problem. A simplified plasma model is adopted to provide the inputs (halo current injected into the first wall). Two representative test cases (ITER symmetric and asymmetric VDEs) have been selected to cross-check the results of the proposed approaches.
When your words count: a discriminative model to predict approval of referrals
Directory of Open Access Journals (Sweden)
Adol Esquivel
2009-12-01
Conclusions Three iterations of the model correctly predicted at least 75% of the approved referrals in the validation set. A correct prediction of whether or not a referral will be approved can be made in three out of four cases.
Moment based model predictive control for systems with additive uncertainty
Saltik, M.B.; Ozkan, L.; Weiland, S.; Ludlage, J.H.A.
2017-01-01
In this paper, we present a model predictive control (MPC) strategy based on the moments of the state variables and the cost functional. The statistical properties of the state predictions are calculated through the open loop iteration of dynamics and used in the formulation of MPC cost function. We
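The moment propagation described above can be sketched for a linear system with additive noise; the dynamics, noise covariance, and cost weights below are invented illustrative values:

```python
import numpy as np

# Minimal sketch of moment propagation for x+ = A x + B u + w, w ~ (0, W):
# the first two moments of the state enter a moment-based MPC cost.
A = np.array([[1.0, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [0.1]])
W = 0.01 * np.eye(2)   # additive-uncertainty covariance

def propagate_moments(mu, Sigma, u_seq):
    """Open-loop iteration of the mean and covariance of the state."""
    mus, Sigmas = [mu], [Sigma]
    for u in u_seq:
        mu = A @ mu + B @ u
        Sigma = A @ Sigma @ A.T + W
        mus.append(mu)
        Sigmas.append(Sigma)
    return mus, Sigmas

def expected_quadratic_cost(mus, Sigmas, Q):
    """Sum over the horizon of E[x' Q x] = mu' Q mu + trace(Q Sigma)."""
    return sum(float(m @ Q @ m + np.trace(Q @ S))
               for m, S in zip(mus, Sigmas))

mus, Sigmas = propagate_moments(np.array([1.0, 0.0]), np.zeros((2, 2)),
                                [np.array([0.0])] * 10)
cost = expected_quadratic_cost(mus, Sigmas, np.eye(2))
```

An MPC layer would minimize this expected cost over the input sequence; the sketch only shows how the moments and the cost are formed.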
Directory of Open Access Journals (Sweden)
Raftery Adrian E
2009-02-01
Abstract Background Microarray technology is increasingly used to identify potential biomarkers for cancer prognostics and diagnostics. Previously, we have developed the iterative Bayesian Model Averaging (BMA) algorithm for use in classification. Here, we extend the iterative BMA algorithm for application to survival analysis on high-dimensional microarray data. The main goal in applying survival analysis to microarray data is to determine a highly predictive model of patients' time to event (such as death, relapse, or metastasis) using a small number of selected genes. Our multivariate procedure combines the effectiveness of multiple contending models by calculating the weighted average of their posterior probability distributions. Our results demonstrate that our iterative BMA algorithm for survival analysis achieves high prediction accuracy while consistently selecting a small and cost-effective number of predictor genes. Results We applied the iterative BMA algorithm to two cancer datasets: breast cancer and diffuse large B-cell lymphoma (DLBCL) data. On the breast cancer data, the algorithm selected a total of 15 predictor genes across 84 contending models from the training data. The maximum likelihood estimates of the selected genes and the posterior probabilities of the selected models from the training data were used to divide patients in the test (or validation) dataset into high- and low-risk categories. Using the genes and models determined from the training data, we assigned patients from the test data into highly distinct risk groups (as indicated by a p-value of 7.26e-05 from the log-rank test). Moreover, we achieved comparable results using only the 5 top selected genes with 100% posterior probabilities. On the DLBCL data, our iterative BMA procedure selected a total of 25 genes across 3 contending models from the training data. Once again, we assigned the patients in the validation set to significantly distinct risk groups (p
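The model-averaging step can be sketched in a few lines; the posterior probabilities and risk scores below are invented illustrative numbers, not the paper's breast cancer or DLBCL results:

```python
import numpy as np

# Illustrative posterior model probabilities P(model_k | training data)
# and each model's risk score per patient (one column per patient).
posterior = np.array([0.5, 0.3, 0.2])
model_scores = np.array([[1.2, 0.4, 0.9],    # model 1
                         [1.0, 0.6, 1.1],    # model 2
                         [1.4, 0.2, 0.8]])   # model 3

def bma_scores(posterior, scores):
    """Posterior-weighted average of the contending models' risk scores."""
    return posterior @ scores

def assign_risk_groups(scores, threshold):
    """Dichotomize averaged scores into high/low risk groups."""
    return np.where(scores > threshold, "high", "low")

avg = bma_scores(posterior, model_scores)
groups = assign_risk_groups(avg, threshold=np.median(avg))
```

In the paper the per-model scores come from Cox-type survival models on selected genes; here they are placeholders to show only the averaging and dichotomization.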
International Nuclear Information System (INIS)
Yasaka, Koichiro; Katsura, Masaki; Akahane, Masaaki; Sato, Jiro; Matsuda, Izuru; Ohtomo, Kuni
2016-01-01
Iterative reconstruction methods have attracted attention for reducing radiation doses in computed tomography (CT). To investigate the detectability of pancreatic calcification using dose-reduced CT reconstructed with model-based iterative reconstruction (MBIR) and adaptive statistical iterative reconstruction (ASIR). This prospective study approved by the Institutional Review Board included 85 patients (57 men, 28 women; mean age, 69.9 years; mean body weight, 61.2 kg). Unenhanced CT was performed three times with different radiation doses (reference-dose CT [RDCT], low-dose CT [LDCT], ultralow-dose CT [ULDCT]). From RDCT, LDCT, and ULDCT, images were reconstructed with filtered-back projection (R-FBP, used for establishing the reference standard), ASIR (L-ASIR), and MBIR and ASIR (UL-MBIR and UL-ASIR), respectively. A lesion (pancreatic calcification) detection test was performed by two blinded radiologists with a five-point certainty level scale. Dose-length products of RDCT, LDCT, and ULDCT were 410, 97, and 36 mGy·cm, respectively. Nine patients had pancreatic calcification. The sensitivity for detecting pancreatic calcification with UL-MBIR was high (0.67–0.89) compared to L-ASIR or UL-ASIR (0.11–0.44), and a significant difference was seen between UL-MBIR and UL-ASIR for one reader (P = 0.014). The area under the receiver-operating characteristic curve for UL-MBIR (0.818–0.860) was comparable to that for L-ASIR (0.696–0.844). The specificity was lower with UL-MBIR (0.79–0.92) than with L-ASIR or UL-ASIR (0.96–0.99), and a significant difference was seen for one reader (P < 0.01). With UL-MBIR, pancreatic calcification can be detected with high sensitivity; however, attention should be paid to the slightly lower specificity.
International Nuclear Information System (INIS)
Alekseev, A.; Arslanova, D.; Belov, A.; Belyakov, V.; Gapionok, E.; Gornikel, I.; Gribov, Y.; Ioki, K.; Kukhtin, V.; Lamzin, E.; Sugihara, M.; Sychevsky, S.; Terasawa, A.; Utin, Y.
2013-01-01
A set of detailed computational models is reviewed that integrally covers the system “vacuum vessel (VV), cryostat, and thermal shields (TS)” to study transient electromagnetics (EMs) in the ITER machine. The models have been developed in the course of activities requested and supervised by the ITER Organization. EM analysis is enabled for all ITER operational scenarios. The input data are derived from results of DINA code simulations. The external EM fields are modeled accurately to the input data description. The known magnetic shell approach can be effectively applied to simulate thin-walled structures of the ITER machine. Using an integral–differential formulation, a single unknown is determined within the shells in terms of the vector electric potential taken only at the nodes of a finite-element (FE) mesh of the conducting structures. As a result, the FE mesh encompasses only the system “VV + Cryostat + TS”. A 3D model requires much higher computational resources as compared to a shell model based on the equivalent approximation. The shell models have been developed for all principal conducting structures in the system “VV + Cryostat + TS”, including regular ports and neutral beam ports. The structures are described in detail in accordance with the latest design. The models have also been applied for simulations of EM transients in components of diagnostic systems and cryopumps, and for estimation of the 3D effects of the ITER structures on the plasma performance. The developed models have been elaborated and applied over the last 15 years to support the ITER design activities. The finalization of the ITER VV design enables this set of models to be considered ready for use in plasma-physics computations and the development of ITER simulators.
CSIR Research Space (South Africa)
Johnson, S
2010-02-01
metapopulations was the focus of a Bayesian Network (BN) modelling workshop in South Africa. Using a new heuristic, the Iterative Bayesian Network Development Cycle (IBNDC), described in this paper, several networks were formulated to distinguish between the unique...
Xu, Peng; Tian, Yin; Lei, Xu; Hu, Xiao; Yao, Dezhong
2008-12-01
How to localize neural electric activities within the brain effectively and precisely from scalp electroencephalogram (EEG) recordings is a critical issue for current studies in clinical neurology and cognitive neuroscience. In this paper, based on the charge source model and the iterative re-weighted strategy, a new maximum-neighbor-weight-based iterative sparse source imaging method is proposed, termed CMOSS (Charge source model based Maximum neighbOr weight Sparse Solution). Different from the weight used in the focal underdetermined system solver (FOCUSS), where the weight for each point in the discrete solution space is independently updated in iterations, the newly designed weight for each point in each iteration is determined by the source solution of the last iteration at both the point and its neighbors. Using such a new weight, the next iteration has a bigger chance to rectify the local source location bias that existed in the previous iteration's solution. Simulation studies with comparison to FOCUSS and LORETA for various source configurations were conducted on a realistic 3-shell head model, and the results confirmed the validity of CMOSS for sparse EEG source localization. Finally, CMOSS was applied to localize sources elicited in a visual stimuli experiment, and the result was consistent with the source areas involved in visual processing reported in previous studies.
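The neighbor-based reweighting can be illustrated with a FOCUSS-style iteration on a toy underdetermined problem; the lead-field matrix, dimensions, and the wrap-around 1-D neighborhood (via np.roll) are all simplifying assumptions for illustration, not the CMOSS implementation:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy problem: 8 sensors, 40 candidate source points on a line.
n_sens, n_src = 8, 40
L = rng.standard_normal((n_sens, n_src))     # stand-in lead-field matrix
x_true = np.zeros(n_src)
x_true[17] = 1.0                             # a single sparse source
b = L @ x_true                               # noiseless measurements

def reweighted_sparse(L, b, n_iter=30, lam=1e-6):
    """FOCUSS-style iterative reweighting where, following the CMOSS idea,
    each point's weight is the maximum of the previous solution over the
    point and its immediate neighbors (wrap-around 1-D neighborhood)."""
    x = np.ones(L.shape[1])
    for _ in range(n_iter):
        mag = np.abs(x)
        # "maximum neighbor weight": max over {i-1, i, i+1}
        w = np.maximum(mag, np.maximum(np.roll(mag, 1), np.roll(mag, -1)))
        Lw = L * w                           # column scaling by the weights
        y = Lw.T @ np.linalg.solve(Lw @ Lw.T + lam * np.eye(L.shape[0]), b)
        x = w * y                            # weighted minimum-norm update
    return x

x = reweighted_sparse(L, b)
```

With independent per-point weights this reduces to plain FOCUSS; the neighbor maximum keeps adjacent points alive so that a locally biased estimate can shift to a neighboring location in later iterations.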
Scaling of the MHD perturbation amplitude required to trigger a disruption and predictions for ITER
de Vries, P. C.; Pautasso, G.; Nardon, E.; Cahyna, P.; Gerasimov, S.; Havlicek, J.; Hender, T. C.; Huijsmans, G. T. A.; Lehnen, M.; Maraschek, M.; Markovič, T.; Snipes, J. A.; the COMPASS Team; the ASDEX Upgrade Team; Contributors, JET
2016-02-01
The amplitude of locked instabilities, likely magnetic islands, seen as precursors to disruptions has been studied using data from the JET, ASDEX Upgrade and COMPASS tokamaks. It was found that the thermal quench, which often initiates the disruption, is triggered when the amplitude has reached a distinct level. This information can be used to determine thresholds for simple disruption prediction schemes. The measured amplitude in part depends on the distance of the perturbation to the measurement coils. Hence the threshold for the measured amplitude depends on the mode location (i.e. the rational q-surface) and thus indirectly on parameters such as the edge safety factor, q95, and the internal inductance, li(3), that determine the shape of the q-profile. These dependencies can be used to set the disruption thresholds more precisely. For the ITER baseline scenario, with typically q95 = 3.2, li(3) = 0.9 and taking into account the position of the measurement coils on ITER, the maximum allowable measured locked mode amplitude normalized to engineering parameters was estimated to be a·B_ML(r_c)/I_p = 0.92 m·mT/MA, or directly as a fraction of the edge poloidal magnetic field: B_ML(r_c)/B_θ(a) = 5 × 10^-3. But these values decrease for operation at higher q95 or lower li(3). The analysis found furthermore that the above empirical criterion to trigger a thermal quench is more consistent with a criterion derived with the concept of a critical island size, i.e. the thermal quench seemed to be triggered at a distinct island width.
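The quoted normalized threshold converts directly into a measurable field amplitude; a trivial sketch, where the minor radius a = 2.0 m and I_p = 15 MA are our own assumed ITER baseline values, not figures from the paper:

```python
def locked_mode_threshold_mT(a_m, ip_ma, norm=0.92):
    """Maximum allowable measured locked-mode amplitude B_ML(r_c) in mT,
    from the reported scaling a * B_ML(r_c) / I_p = 0.92 m*mT/MA."""
    return norm * ip_ma / a_m

# Assumed ITER baseline values: minor radius ~2.0 m, I_p = 15 MA.
b_ml = locked_mode_threshold_mT(2.0, 15.0)
```

A disruption predictor would compare the measured locked-mode signal against this level, lowering it for higher q95 or lower li(3) as the abstract notes.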
Yasaka, Koichiro; Katsura, Masaki; Akahane, Masaaki; Sato, Jiro; Matsuda, Izuru; Ohtomo, Kuni
2013-12-01
To evaluate dose reduction and image quality of abdominopelvic computed tomography (CT) reconstructed with model-based iterative reconstruction (MBIR) compared to adaptive statistical iterative reconstruction (ASIR). In this prospective study, 85 patients underwent referential-, low-, and ultralow-dose unenhanced abdominopelvic CT. Images were reconstructed with ASIR for low-dose (L-ASIR) and ultralow-dose CT (UL-ASIR), and with MBIR for ultralow-dose CT (UL-MBIR). Image noise was measured in the abdominal aorta and iliopsoas muscle. Subjective image analyses and a lesion detection study (adrenal nodules) were conducted by two blinded radiologists. A reference standard was established by a consensus panel of two different radiologists using referential-dose CT reconstructed with filtered back projection. Compared to low-dose CT, there was a 63% decrease in dose-length product with ultralow-dose CT. UL-MBIR had significantly lower image noise than L-ASIR and UL-ASIR (all p < 0.01), and significantly fewer streak artifacts than L-ASIR and UL-ASIR (all p < 0.01). There was no significant difference between UL-MBIR and L-ASIR in diagnostic acceptability (p > 0.65) or diagnostic performance for adrenal nodules (p > 0.87). MBIR significantly improves image noise and streak artifacts compared to ASIR, and can achieve radiation dose reduction without severely compromising image quality.
Ehret, Phillip J; Monroe, Brian M; Read, Stephen J
2015-05-01
We present a neural network implementation of central components of the iterative reprocessing (IR) model. The IR model argues that the evaluation of social stimuli (attitudes, stereotypes) is the result of the IR of stimuli in a hierarchy of neural systems: The evaluation of social stimuli develops and changes over processing. The network has a multilevel, bidirectional feedback evaluation system that integrates initial perceptual processing and later developing semantic processing. The network processes stimuli (e.g., an individual's appearance) over repeated iterations, with increasingly higher levels of semantic processing over time. As a result, the network's evaluations of stimuli evolve. We discuss the implications of the network for a number of different issues involved in attitudes and social evaluation. The success of the network supports the IR model framework and provides new insights into attitude theory. © 2014 by the Society for Personality and Social Psychology, Inc.
Bias in iterative reconstruction of low-statistics PET data: benefits of a resolution model
Energy Technology Data Exchange (ETDEWEB)
Walker, M D; Asselin, M-C; Julyan, P J; Feldmann, M; Matthews, J C [School of Cancer and Enabling Sciences, Wolfson Molecular Imaging Centre, MAHSC, University of Manchester, Manchester M20 3LJ (United Kingdom); Talbot, P S [Mental Health and Neurodegeneration Research Group, Wolfson Molecular Imaging Centre, MAHSC, University of Manchester, Manchester M20 3LJ (United Kingdom); Jones, T, E-mail: matthew.walker@manchester.ac.uk [Academic Department of Radiation Oncology, Christie Hospital, University of Manchester, Manchester M20 4BX (United Kingdom)
2011-02-21
Iterative image reconstruction methods such as ordered-subset expectation maximization (OSEM) are widely used in PET. Reconstructions via OSEM are however reported to be biased for low-count data. We investigated this and considered the impact for dynamic PET. Patient listmode data were acquired in [11C]DASB and [15O]H2O scans on the HRRT brain PET scanner. These data were subsampled to create many independent, low-count replicates. The data were reconstructed and the images from low-count data were compared to the high-count originals (from the same reconstruction method). This comparison enabled low-statistics bias to be calculated for the given reconstruction, as a function of the noise-equivalent counts (NEC). Two iterative reconstruction methods were tested, one with and one without an image-based resolution model (RM). Significant bias was observed when reconstructing data of low statistical quality, for both subsampled human and simulated data. For human data, this bias was substantially reduced by including a RM. For [11C]DASB the low-statistics bias in the caudate head at 1.7 M NEC (approx. 30 s) was -5.5% and -13% with and without RM, respectively. We predicted biases in the binding potential of -4% and -10%. For quantification of cerebral blood flow for the whole-brain grey- or white-matter, using [15O]H2O and the PET autoradiographic method, a low-statistics bias of <2.5% and <4% was predicted for reconstruction with and without the RM. The use of a resolution model reduces low-statistics bias and can hence be beneficial for quantitative dynamic PET.
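The replicate-based bias measure used here is simple to state: reconstruct many statistically independent low-count replicates, average the regional values, and compare against the high-count reconstruction of the same data. A minimal sketch of that comparison (the replicate values below are placeholders, not real reconstructions):

```python
import numpy as np

def low_statistics_bias(replicate_values, high_count_value):
    """Percentage bias of the mean low-count estimate relative to the
    high-count reference from the same reconstruction method."""
    return 100.0 * (np.mean(replicate_values) - high_count_value) / high_count_value

# e.g. regional activity from three low-count replicates vs. the full-count value
bias = low_statistics_bias([0.92, 0.88, 0.90], 1.00)  # ~ -10 %
```

Repeating this over replicate sets of different sizes yields the bias-versus-NEC curves described in the abstract.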
Cunha-Filho, A. G.; Briend, Y. P. J.; de Lima, A. M. G.; Donadon, M. V.
2018-05-01
The flutter boundary prediction of complex aeroelastic systems is not an easy task. In some cases, these analyses may become prohibitive due to the high computational cost and time associated with the large number of degrees of freedom of the aeroelastic models, particularly when the aeroelastic model incorporates a control strategy aimed at suppressing the flutter phenomenon, such as the use of viscoelastic treatments. In this situation, the use of a model reduction method is essential. However, the construction of a modal reduction basis for aeroviscoelastic systems remains a challenge, owing to the inherent frequency- and temperature-dependent behavior of viscoelastic materials. Thus, the main contribution of the present study is to propose an efficient and accurate iterative enriched Ritz basis for aeroviscoelastic systems. The main features and capabilities of the proposed model reduction method are illustrated by predicting the flutter boundary of a thin three-layer sandwich flat panel and of a typical aeronautical stiffened panel, both under supersonic flow.
SNS Superconducting RF cavity modeling-iterative learning control
International Nuclear Information System (INIS)
Kwon, S.-I.; Regan, Amy; Wang, Y.-M.
2002-01-01
The Spallation Neutron Source (SNS) Superconducting RF (SRF) linear accelerator is operated with a pulsed beam. For the SRF control system to track the repetitive electromagnetic field reference trajectory, both feedback and feedforward controllers have been proposed. The feedback controller is utilized to guarantee closed-loop system stability, and the feedforward controller is used to improve the tracking performance for the repetitive reference trajectory and to suppress repetitive disturbances. As the iteration number increases, the feedforward controller decreases the tracking error. Numerical simulations demonstrate that inclusion of the feedforward controller significantly improves the control system performance over its performance with just the feedback controller.
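The feedforward update described here is, in spirit, an iterative learning control (ILC) law: after each repetitive pulse, the stored input is corrected with the previous trial's tracking error. The toy first-order plant, gain, and horizon below are illustrative assumptions, not the SNS SRF cavity model.

```python
import numpy as np

def run_plant(u, a=0.9, b=0.5):
    """Toy first-order discrete plant x[t+1] = a x[t] + b u[t], y = x."""
    x, y = 0.0, np.zeros(len(u))
    for t in range(len(u)):
        y[t] = x
        x = a * x + b * u[t]
    return y

def ilc_feedforward(ref, n_trials=40, gain=2.0):
    """P-type ILC: u_{k+1}(t) = u_k(t) + L * e_k(t+1), repeated over
    trials of the same reference trajectory."""
    u = np.zeros(len(ref))
    for _ in range(n_trials):
        e = ref - run_plant(u)
        u[:-1] += gain * e[1:]  # shift by one: u(t) first affects y(t+1)
    return u

ref = np.sin(np.linspace(0.0, np.pi, 25))  # repetitive reference pulse
u = ilc_feedforward(ref)
final_error = np.abs(ref - run_plant(u)).max()
```

With L = 1/b this update is deadbeat in the trial domain: the error is driven to zero one sample further into the pulse on each trial, so after more trials than samples it vanishes, mirroring the reported decrease of tracking error with iteration number.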
Application of MCAM in generating Monte Carlo model for ITER port limiter
International Nuclear Information System (INIS)
Lu Lei; Li Ying; Ding Aiping; Zeng Qin; Huang Chenyu; Wu Yican
2007-01-01
On the basis of the pre-processing and conversion functions supplied by MCAM (Monte-Carlo Particle Transport Calculated Automatic Modeling System), this paper presents the generation of the ITER Port Limiter MC (Monte-Carlo) calculation model from the CAD engineering model. The result was validated using the reverse function of MCAM and the MCNP PLOT 2D cross-section drawing program. The successful application of MCAM to the ITER Port Limiter demonstrates that, compared with the traditional manual modeling method, MCAM can dramatically increase the efficiency and accuracy of generating MC calculation models from CAD engineering models with complex geometry. (authors)
Modelling controlled VDE's and ramp-down scenarios in ITER
Lodestro, L. L.; Kolesnikov, R. A.; Meyer, W. H.; Pearlstein, L. D.; Humphreys, D. A.; Walker, M. L.
2011-10-01
Following the design reviews of recent years, the ITER poloidal-field coil-set design, including in-vessel coils (VS3), and the divertor configuration have settled down. The divertor and its material composition (the latter has not been finalized) affect the development of fiducial equilibria and scenarios together with the coils through constraints on strike-point locations and limits on the PF and control systems. Previously we have reported on our studies simulating controlled vertical events in ITER with the JCT 2001 controller to which we added a PID VS3 circuit. In this paper we report and compare controlled VDE results using an optimized integrated VS and shape controller in the updated configuration. We also present our recent simulations of alternate ramp-down scenarios, looking at the effects of ramp-down time and shape strategies, using these controllers. This work performed under the auspices of the U.S. Department of Energy by LLNL under Contract DE-AC52-07NA27344.
Energy Technology Data Exchange (ETDEWEB)
Virot, F., E-mail: francois.virot@irsn.fr; Barrachin, M.; Souvi, S.; Cantrel, L.
2014-10-15
Highlights: • Standard enthalpies of formation of BeH, BeH2, BeOH and Be(OH)2 have been calculated. • The impact of hydrogen isotopy on thermodynamic properties has been shown. • Speciation in the vacuum vessel shows that the main tritiated species is tritiated steam. • Beryllium hydroxide and hydride could exist during an accidental event. - Abstract: By quantum chemistry calculations, we have evaluated the standard enthalpies of formation of some gaseous species of the Be-O-H chemical system: BeH, BeH2, BeOH and Be(OH)2, for which the values in the reference thermodynamic databases (NIST-JANAF [1] or COACH [2]) were, owing to the lack of experimental data, estimated or reported with a large uncertainty. Comparison between post-HF and DFT approaches and the available experimental data validates the ability of an accurate exchange-correlation functional, VSXC, to predict the thermochemical properties of the beryllium species of interest. The deviation of the enthalpy of formation induced by changes in hydrogen isotopy has also been calculated. From these new theoretically determined data, we have calculated the chemical speciation in conditions simulating an accident of water ingress in the vacuum vessel of ITER.
Abawajy, Jemal; Kelarev, Andrei; Chowdhury, Morshed U; Jelinek, Herbert F
2016-01-01
Blood biochemistry attributes form an important class of tests, routinely collected several times per year for many patients with diabetes. The objective of this study is to investigate the role of blood biochemistry in improving the predictive accuracy of the diagnosis of cardiac autonomic neuropathy (CAN) progression. Blood biochemistry contributes to CAN, and so it is a causative factor that can provide additional power for the diagnosis of CAN, especially in the absence of a complete set of Ewing tests. We introduce automated iterative multitier ensembles (AIME) and investigate their performance in comparison to base classifiers and standard ensemble classifiers for blood biochemistry attributes. AIME incorporate diverse ensembles into several tiers simultaneously and combine them into one automatically generated integrated system, so that one ensemble acts as an integral part of another ensemble. We carried out extensive experimental analysis using large datasets from the diabetes screening research initiative (DiScRi) project. The results of our experiments show that several blood biochemistry attributes can be used to supplement the Ewing battery for the detection of CAN in situations where one or more of the Ewing tests cannot be completed because of the individual difficulties faced by each patient in performing the tests. The results show that AIME provide higher accuracy as a multitier CAN classification paradigm. The best predictive accuracy, 99.57%, was obtained by the AIME combining Decorate on the top tier with bagging on the middle tier, based on random forest. Practitioners can use these findings to increase the accuracy of CAN diagnosis.
Guillemaut, C.; Metzger, C.; Moulton, D.; Heinola, K.; O’Mullane, M.; Balboa, I.; Boom, J.; Matthews, G. F.; Silburn, S.; Solano, E. R.; contributors, JET
2018-06-01
The design and operation of future fusion devices relying on H-mode plasmas require reliable modelling of edge-localized modes (ELMs) for precise prediction of divertor target conditions. An extensive experimental validation of simple analytical predictions of the time evolution of target plasma loads during ELMs has been carried out in more than 70 JET ITER-like-wall H-mode experiments covering a wide range of conditions. Comparisons of these analytical predictions with diagnostic measurements of target ion flux density, power density, impact energy and electron temperature during ELMs are presented in this paper and show excellent agreement. The analytical predictions tested here are made with the 'free-streaming' kinetic model (FSM), which describes ELMs as a quasi-neutral plasma bunch expanding along the magnetic field lines into the scrape-off layer without collisions. Consequences of the FSM for energy reflection and deposition on divertor targets during ELMs are also discussed.
Experimental modelling of plasma-graphite surface interaction in ITER
Energy Technology Data Exchange (ETDEWEB)
Martynenko, Yu.V.; Guseva, M.I.; Gureev, V.M.; Danelyan, L.S.; Neumoin, V.E.; Petrov, V.B.; Khripunov, B.I.; Sokolov, Yu.A.; Stativkina, O.V.; Stolyarova, V.G. [Rossijskij Nauchnyj Tsentr ``Kurchatovskij Inst.``, Moscow (Russian Federation); Vasiliev, V.I.; Strunnikov, V.M. [TRINITI, Troizk (Russian Federation)
1998-10-01
The investigation of graphite erosion under the normal ITER operation regime and during disruptions was performed by exposing RGT graphite samples to a stationary deuterium plasma to a dose of 10^22 cm^-2 and subsequently irradiating them with a high-power (250 MW/cm^2) pulsed deuterium plasma flow imitating a disruption. The stationary plasma exposure was carried out in the LENTA installation with a deuterium ion energy of 200 eV at target temperatures of 770 °C and 1150 °C. Preliminary exposure in stationary plasma at the temperature of physical sputtering does not essentially change the erosion due to a disruption, whereas exposure at the temperature of radiation-enhanced sublimation dramatically increases it. In the latter case, the depth of erosion due to a disruption is determined by the depth of a layer with decreased strength. (orig.) 9 refs.
Helium embrittlement model and program plan for weldability of ITER materials
International Nuclear Information System (INIS)
Louthan, M.R. Jr.; Kanne, W.R. Jr.; Tosten, M.H.; Rankin, D.T.; Cross, B.J.
1997-02-01
This report presents a refined model of how helium embrittles irradiated stainless steel during welding. The model was developed from experimental observations drawn from experience at the Savannah River Site and from an extensive literature search. It shows how helium content, stress, and temperature interact to produce embrittlement, and it takes into account defect structure, time, and gradients in stress, temperature and composition. The report also proposes an experimental program based on the refined helium embrittlement model. A parametric study of the effect of initial defect density on the resulting helium bubble distribution and on the weldability of tritium-aged material is proposed to demonstrate the role that defects play in embrittlement. This study should include samples charged using vastly different aging times to obtain equivalent helium contents. Additionally, studies are needed to establish the minimum sample thickness and size for extrapolation to real structural materials. The results of these studies should provide a technical basis for using tritium-aged materials to predict the weldability of irradiated structures. Use of tritium-charged and aged material would provide a cost-effective approach to developing weld repair techniques for ITER components.
International Nuclear Information System (INIS)
Choo, Ji Yung; Goo, Jin Mo; Park, Chang Min; Park, Sang Joon; Lee, Chang Hyun; Shim, Mi-Suk
2014-01-01
To evaluate filtered back projection (FBP) and two iterative reconstruction (IR) algorithms and their effects on the quantitative analysis of lung parenchyma and airway measurements on computed tomography (CT) images. Low-dose chest CT obtained in 281 adult patients were reconstructed using three algorithms: FBP, adaptive statistical IR (ASIR) and model-based IR (MBIR). Measurements of each dataset were compared: total lung volume, emphysema index (EI), airway measurements of the lumen and wall area as well as average wall thickness. Accuracy of airway measurements of each algorithm was also evaluated using an airway phantom. EI using a threshold of -950 HU was significantly different among the three algorithms in decreasing order of FBP (2.30 %), ASIR (1.49 %) and MBIR (1.20 %) (P < 0.01). Wall thickness was also significantly different among the three algorithms with FBP (2.09 mm) demonstrating thicker walls than ASIR (2.00 mm) and MBIR (1.88 mm) (P < 0.01). Airway phantom analysis revealed that MBIR showed the most accurate value for airway measurements. The three algorithms presented different EIs and wall thicknesses, decreasing in the order of FBP, ASIR and MBIR. Thus, care should be taken in selecting the appropriate IR algorithm on quantitative analysis of the lung. (orig.)
Cultural Resource Predictive Modeling
2017-10-01
Abbreviations: CR = cultural resource; CRM = cultural resource management; CRPM = Cultural Resource Predictive Modeling; DoD = Department of Defense; ESTCP = Environmental... To meet cultural resource management (CRM) legal obligations under NEPA and the NHPA, military installations need to demonstrate that CRM decisions are based on objective... the maxim "one size does not fit all," and demonstrate that DoD installations have many different CRM needs that can and should be met through a variety...
Verification of SuperMC with ITER C-Lite neutronic model
Energy Technology Data Exchange (ETDEWEB)
Zhang, Shu [School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, Anhui, 230027 (China); Key Laboratory of Neutronics and Radiation Safety, Institute of Nuclear Energy Safety Technology, Chinese Academy of Sciences, Hefei, Anhui, 230031 (China); Yu, Shengpeng [Key Laboratory of Neutronics and Radiation Safety, Institute of Nuclear Energy Safety Technology, Chinese Academy of Sciences, Hefei, Anhui, 230031 (China); He, Peng, E-mail: peng.he@fds.org.cn [Key Laboratory of Neutronics and Radiation Safety, Institute of Nuclear Energy Safety Technology, Chinese Academy of Sciences, Hefei, Anhui, 230031 (China)
2016-12-15
Highlights: • Verification of the SuperMC Monte Carlo transport code with the ITER C-Lite model. • Modeling of the ITER C-Lite model using the latest SuperMC/MCAM. • All calculated quantities agree well with MCNP. • Efficient variance reduction methods are adopted to accelerate the calculation. - Abstract: In pursuit of accurate and high-fidelity simulation, the reference model of ITER is becoming more and more detailed and complicated. Due to the geometric complexity and thick shielding of the reference model, accurate modeling and precise simulation of fusion neutronics are very challenging. Facing these difficulties, SuperMC, the Monte Carlo simulation software system developed by the FDS Team, has optimized its CAD interface for the automatic conversion of more complicated models and increased its calculation efficiency with advanced variance reduction methods. To demonstrate its capabilities of automatic modeling, coupled neutron/photon simulation and visual analysis for the ITER facility, numerical benchmarks using the ITER C-Lite neutronic model were performed. The nuclear heating in the divertor and the inboard toroidal field (TF) coils and a global neutron flux map were evaluated. All calculated nuclear heating results are compared with those of the MCNP code, and good consistency between the two codes is shown. Using the global variance reduction methods in SuperMC, the average speed-up relative to the analog run is 292 times for the calculation of the inboard TF coil nuclear heating and 91 times for the calculation of the global flux map. These tests show that SuperMC is suitable for the design and analysis of the ITER facility.
Candidate Prediction Models and Methods
DEFF Research Database (Denmark)
Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik
2005-01-01
This document lists candidate prediction models for Work Package 3 (WP3) of the PSO project ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines the possibilities w.r.t. the different numerical weather predictions actually available to the project.
Ginsburg, Shiphra; Eva, Kevin; Regehr, Glenn
2013-10-01
Although scores on in-training evaluation reports (ITERs) are often criticized for poor reliability and validity, ITER comments may yield valuable information. The authors assessed across-rotation reliability of ITER scores in one internal medicine program, ability of ITER scores and comments to predict postgraduate year three (PGY3) performance, and reliability and incremental predictive validity of attendings' analysis of written comments. Numeric and narrative data from the first two years of ITERs for one cohort of residents at the University of Toronto Faculty of Medicine (2009-2011) were assessed for reliability and predictive validity of third-year performance. Twenty-four faculty attendings rank-ordered comments (without scores) such that each resident was ranked by three faculty. Mean ITER scores and comment rankings were submitted to regression analyses; dependent variables were PGY3 ITER scores and program directors' rankings. Reliabilities of ITER scores across nine rotations for 63 residents were 0.53 for both postgraduate year one (PGY1) and postgraduate year two (PGY2). Interrater reliabilities across three attendings' rankings were 0.83 for PGY1 and 0.79 for PGY2. There were strong correlations between ITER scores and comments within each year (0.72 and 0.70). Regressions revealed that PGY1 and PGY2 ITER scores collectively explained 25% of variance in PGY3 scores and 46% of variance in PGY3 rankings. Comment rankings did not improve predictions. ITER scores across multiple rotations showed decent reliability and predictive validity. Comment ranks did not add to the predictive ability, but correlation analyses suggest that trainee performance can be measured through these comments.
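One standard way to relate single-rotation and multi-rotation reliabilities is the Spearman-Brown prophecy formula; connecting it to the figures in this abstract is our illustrative assumption (the study may use a different estimate, e.g. from generalizability theory). Under that formula, a nine-rotation composite reliability of about 0.53 corresponds to a single-rotation reliability near 0.11:

```python
def spearman_brown(r_single, k):
    """Reliability of the mean of k parallel measurements, given the
    reliability r_single of a single measurement (Spearman-Brown)."""
    return k * r_single / (1.0 + (k - 1) * r_single)

composite = spearman_brown(0.11, 9)  # ~0.53
```

The formula also shows why aggregation helps: even individually noisy rotation scores can yield a moderately reliable composite when enough of them are averaged.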
Benchmarking of MCAM 4.0 with the ITER 3D Model
International Nuclear Information System (INIS)
Ying Li; Lei Lu; Aiping Ding; Haimin Hu; Qin Zeng; Shanliang Zheng; Yican Wu
2006-01-01
Monte Carlo particle transport simulations are widely employed in fields such as nuclear engineering, radiotherapy and space science. Describing and verifying the 3D geometry of fusion devices, however, are among the most complex tasks of MCNP calculation problems in nuclear analysis. The manual modeling of a complex geometry for the MCNP code, though a common practice, is an extensive, time-consuming, and error-prone task. An efficient solution is to shift the geometric modeling into Computer Aided Design (CAD) systems and to use an interface for MCNP to convert the CAD model to an MCNP file. The advantage of this approach lies in the fact that it allows access to the full features of modern CAD systems, facilitating the geometric modeling and utilizing existing CAD models. MCAM (MCNP Automatic Modeling System) is an integrated tool for CAD model preprocessing, accurate bi-directional conversion between CAD/MCNP models, neutronics property processing and geometric modeling, developed by the FDS team at ASIPP and Hefei University of Technology. MCAM 4.0 has been extended and enhanced to support various CAD file formats and the preprocessing of CAD models, including healing, automatic model reconstruction, overlap detection and correction, and automatic void modeling. The ITER international benchmark model is provided by the ITER international team to compare the CAD/MCNP programs being developed by the ITER participant teams. It was created in CATIA/V5, which has been chosen as the CAD system for ITER design, and includes all the important parts and components of the ITER device. The benchmark model contains vast curved surfaces, which can fully test the capability of MCNP/CAD codes. The whole processing procedure for this model is presented in this paper, including geometric model processing, neutronics property processing, conversion to an MCNP input file, calculation with MCNP, and analysis. The nuclear analysis results of the model are given at the end. Although these preliminary
Simulation of transport in the ignited ITER with 1.5-D predictive code
International Nuclear Information System (INIS)
Becker, G.
1995-01-01
The confinement in the bulk and scrape-off layer plasmas of the ITER EDA and CDA is investigated with special versions of the 1.5-D BALDUR predictive transport code for the case of peaked density profiles (C_v = 1.0). The code self-consistently computes 2-D equilibria and solves 1-D transport equations with empirical transport coefficients for the ohmic, L and ELMy H mode regimes. Self-sustained steady state thermonuclear burn is demonstrated for up to 500 s. It is shown to be compatible with the strong radiation losses for divertor heat load reduction caused by the seeded impurities iron, neon and argon. The corresponding global and local energy and particle transport are presented. The required radiation corrected energy confinement times of the EDA and CDA are found to be close to 4 s. In the reference cases, the steady state helium fraction is 7%. The fractions of iron, neon and argon needed for the prescribed radiative power loss are given. It is shown that high radiative losses from the confinement zone, mainly by bremsstrahlung, cannot be avoided. The radiation profiles of iron and argon are found to be the same, with two thirds of the total radiation being emitted from closed flux surfaces. Fuel dilution due to iron and argon is small. The neon radiation is more peripheral. But neon is found to cause high fuel dilution. The combined dilution effect by helium and neon conflicts with burn control, self-sustained burn and divertor power reduction. Raising the helium fraction above 10% leads to the same difficulties owing to fuel dilution. The high helium levels of the present EDA design are thus unacceptable. The bootstrap current has only a small impact on the current profile. The sawtooth dominated region is found to cover 35% of the plasma cross-section. Local stability analysis of ideal ballooning modes shows that the plasma is everywhere well below the stability limit. 23 refs, 34 figs, 3 tabs
Tritium inventory in the ITER PFCs: predictions, uncertainties, R and D status and priority needs
Energy Technology Data Exchange (ETDEWEB)
Federici, G. [ITER, Garching (Germany). JWS; Anderl, R.; Longhurst, G. [Idaho National Engineering and Environmental Laboratory, Idaho Falls, Idaho 83415 (United States); Brooks, J.N. [Argonne National Laboratory, 9700 S. Cass Avenue, Argonne, Illinois 60439 (United States); Causey, R.; Cowgill, D.; Wampler, W.; Wilson, K.; Youchison, D. [Sandia National Laboratories, Livermore California and Albuquerque, New Mexico (United States); Coad, J.P.; Peacock, A.; Pick, M. [JET Joint Undertaking, Abingdon, Oxfordshire OX14 3EA (United Kingdom); Doerner, R.; Luckhardt, S. [University of California San Diego, La Jolla, California 92093-0417 (United States); Haasz, A.A. [University of Toronto, Institute for Aerospace Studies, Ontario M3H 5T6 (Canada); Mueller, D.; Skinner, C.H. [Plasma Physics Laboratory, Princeton University, Princeton, New Jersey 08543 (United States); Wong, C. [General Atomics, PO Box 85608, San Diego, California 92186-9784 (United States); Wu, C. [NET Team, Boltzmannstrasse 2, 85748 Garching (Germany)
1998-09-01
New data on hydrogen isotope retention in beryllium and tungsten are now becoming available from various laboratories for conditions similar to those expected in the International Thermonuclear Experimental Reactor (ITER), where previous data were either missing or largely scattered. Together with a significant advancement in understanding, they have warranted a re-evaluation of the previous estimates of tritium inventory in ITER, with beryllium as the plasma facing material for the first-wall components, and tungsten in the divertor with some carbon-fibre-composite clad areas near the strike points. Based on these analyses, it is shown that the area of primary concern with respect to tritium inventory remains codeposition with carbon and possibly beryllium on the divertor surfaces. Here, modelling of ITER divertor conditions continues to show potentially large codeposition rates, which are confirmed by tokamak findings. Contrary to the tritium residing deep in the bulk of materials, this surface tritium represents a safety hazard, as it can be easily mobilised in the event of an accident. It could, however, possibly be removed and recovered. It is concluded that active and efficient methods to remove the codeposited layers are needed in ITER, and periodic conditioning/cleaning would be required to control the tritium inventory and avoid exhausting the available fuel supply. Some methods which could possibly be used for in-situ cleaning are briefly discussed in conjunction with the research and development work required to extrapolate their applicability to ITER. (orig.) 53 refs.
International Nuclear Information System (INIS)
Youssef, M.Z.; Feder, R.; Davis, I.
2007-01-01
The ITER IT has adopted the newly developed FEM, 3-D, CAD-based discrete ordinates code ATTILA for neutronics studies, contingent on its success in predicting key neutronics parameters and nuclear fields according to the stringent QA requirements set forth by the Management and Quality Program (MQP). ATTILA has the advantage of providing full flux and response-function mapping everywhere in a single run, in which components subjected to excessive radiation levels and strong streaming paths can be identified. The ITER neutronics community agreed to use a standard CAD model of ITER (a 40-degree sector, denoted the ''Benchmark CAD Model'') to compare results for several responses selected for benchmarking purposes, in order to test the efficiency and accuracy of the CAD-MCNP approach developed by each party. Since ATTILA lends itself to use as a powerful design tool with minimal turnaround time, it was decided to benchmark this model with ATTILA as well and to compare the results with those obtained from the CAD-MCNP calculations. In this paper we report such a comparison for five responses, namely: (1) neutron wall load on the surfaces of the 18 shield blanket modules (SBM); (2) neutron flux and nuclear heating rate in the divertor cassette; (3) nuclear heating rate in the winding pack of the inner leg of the TF coil; (4) radial flux profile across the dummy port plug and shield plug placed in the equatorial port; and (5) flux at seven point locations situated behind the equatorial port plug. (orig.)
Experiment and Modeling of ITER Demonstration Discharges in the DIII-D Tokamak
International Nuclear Information System (INIS)
Park, Jin Myung; Doyle, E. J.; Ferron, J.R.; Holcomb, C.T.; Jackson, G.L.; Lao, L.L.; Luce, T.C.; Owen, Larry W.; Murakami, Masanori; Osborne, T.H.; Politzer, P.A.; Prater, R.; Snyder, P.B.
2011-01-01
DIII-D is providing experimental evaluation of four leading ITER operational scenarios: the baseline scenario in ELMing H-mode, the advanced inductive scenario, the hybrid scenario, and the steady-state scenario. The anticipated ITER shape, aspect ratio, and value of I/aB were reproduced, with the size reduced by a factor of 3.7, while matching key performance targets for β_N and H_98. Since 2008, substantial experimental progress has been made in improving the match to other expected ITER parameters for the baseline scenario. A lower-density baseline discharge was developed with improved stationarity and density control to match the expected ITER edge pedestal collisionality (ν*_e ∼ 0.1). Target values for β_N and H_98 were maintained in lower-collisionality (lower-density) operation without loss of fusion performance, but with a significant change in ELM characteristics. The effects of lower plasma rotation were investigated by adding counter-neutral-beam power, resulting in only a modest reduction in confinement. Robust preemptive stabilization of 2/1 NTMs was demonstrated for the first time using ECCD under ITER-like conditions. Data from these experiments were used extensively to test and develop theory and modeling for realistic ITER projections and for further development of optimum scenarios in DIII-D. Theory-based modeling of core transport (TGLF), with an edge pedestal boundary condition provided by the EPED1 model, reproduces T_e and T_i profiles reasonably well for the four ITER scenarios developed in DIII-D. Modeling of the baseline scenario for low- and high-rotation discharges indicates that a modest performance increase of ∼ 15% is needed to compensate for the expected lower rotation of ITER. Modeling of the steady-state scenario reproduces a strong dependence of confinement, stability, and noninductive fraction (f_NI) on q_95, as found in the experimental I_p scan, indicating that optimization of the q profile is critical to simultaneously achieving the
Predictive Surface Complexation Modeling
Energy Technology Data Exchange (ETDEWEB)
Sverjensky, Dimitri A. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Earth and Planetary Sciences
2016-11-29
Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soilwaters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO2 and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.
Real-Time Optimization for Economic Model Predictive Control
DEFF Research Database (Denmark)
Sokoler, Leo Emil; Edlund, Kristian; Frison, Gianluca
2012-01-01
In this paper, we develop an efficient homogeneous and self-dual interior-point method for the linear programs arising in economic model predictive control. To exploit structure in the optimization problems, the algorithm employs a highly specialized Riccati iteration procedure. Simulations show...
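The structure exploitation the abstract alludes to can be illustrated with a plain backward Riccati recursion, whose cost is linear in the horizon length; the same stage-wise factorization underlies Riccati-based interior-point MPC solvers. The sketch below solves a finite-horizon LQR subproblem with invented example matrices; it is illustrative of the structure only, not the paper's homogeneous self-dual algorithm.

```python
import numpy as np

def lqr_riccati(A, B, Q, R, N):
    """Backward Riccati recursion for a finite-horizon LQR problem.

    Solves min sum x'Qx + u'Ru s.t. x_{k+1} = A x_k + B u_k by a
    stage-wise recursion whose cost grows linearly in the horizon N --
    the structural trick that Riccati-based MPC solvers exploit.
    """
    P = Q.copy()
    gains = []
    for _ in range(N):
        # feedback gain and cost-to-go update for one stage
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return P, gains[::-1]

# invented double-integrator-like example system
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])
P, Ks = lqr_riccati(A, B, Q, R, 50)
```

Each stage requires only one small linear solve, so the work per interior-point iteration scales with the horizon length rather than its cube.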
International Nuclear Information System (INIS)
Shimomura, Y.; Aymar, R.; Chuyanov, V.; Huguet, M.; Parker, R.R.
2001-01-01
This report summarizes six years of technical work by the ITER Joint Central Team and Home Teams under the terms of the Agreement on the ITER Engineering Design Activities. The major products are as follows: a complete and detailed engineering design with supporting assessments, industry-based cost estimates and schedule, a non-site-specific comprehensive safety and environmental assessment, and technology R and D to validate and qualify the design, including proof of technologies and industrial manufacture and testing of full-size or scalable models of key components. The ITER design is at an advanced stage of maturity and contains sufficient technical information for a construction decision. The operation of ITER will demonstrate the availability of a new energy source, fusion. (author)
International Nuclear Information System (INIS)
Shimomura, Y.; Aymar, R.; Chuyanov, V.; Huguet, M.; Parker, R.
1999-01-01
This report summarizes six years of technical work by the ITER Joint Central Team and Home Teams under the terms of the Agreement on the ITER Engineering Design Activities. The major products are as follows: a complete and detailed engineering design with supporting assessments, industry-based cost estimates and schedule, a non-site-specific comprehensive safety and environmental assessment, and technology R and D to validate and qualify the design, including proof of technologies and industrial manufacture and testing of full-size or scalable models of key components. The ITER design is at an advanced stage of maturity and contains sufficient technical information for a construction decision. The operation of ITER will demonstrate the availability of a new energy source, fusion. (author)
Directory of Open Access Journals (Sweden)
D. G. Patalakh
2018-02-01
Purpose. To develop a method for calculating electromagnetic and electromechanical transients in asynchronous motors without iterations. Methodology. Numerical methods for the integration of ordinary differential equations; programming. Findings. Since the system of equations describing the dynamics of an asynchronous motor contains products of rotor and stator currents, and products of the rotor speed and the currents, the system is nonlinear. The numerical solution of nonlinear differential equations normally requires an iterative process at every integration step. A time-consuming and poorly converging iterative process can slow the calculation considerably. An improvement of the numerical method that removes the iterative process is proposed; as a result, the modeling time is reduced. The improved numerical method is applied to the integration of the differential equations describing the dynamics of an asynchronous motor. Originality. An improvement of a numerical method is proposed that allows numerical integration of differential equations containing products of functions, avoiding an iterative process at every integration step and shortening the modeling time. Practical value. On the basis of the proposed methodology, a universal program for modeling electromechanical processes in asynchronous motors can be developed that takes advantage of this faster execution.
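The idea of removing the per-step iteration can be sketched on a scalar toy problem: when the right-hand side contains a product of state variables, evaluating one factor at the already-known previous step turns each implicit step into a single linear solve, with no inner Newton loop. This sketch uses the logistic equation rather than the paper's motor model, purely for illustration.

```python
def semi_implicit_logistic(x0, h, steps):
    """Integrate dx/dt = x(1 - x) without inner iterations.

    The product nonlinearity x*x is split: one factor is taken at the
    known step n, the other implicitly at step n+1, so each step
    reduces to solving one linear equation in closed form.
    """
    x = x0
    for _ in range(steps):
        # x_{n+1} = x_n + h * x_n * (1 - x_{n+1})  is linear in x_{n+1}
        x = x * (1.0 + h) / (1.0 + h * x)
    return x

x_final = semi_implicit_logistic(0.1, 0.01, 2000)
```

The scheme retains the stability benefit of an implicit treatment of the stiff factor while keeping the cost of an explicit step.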
Update of the ITER MELCOR model for the validation of the Cryostat design
Energy Technology Data Exchange (ETDEWEB)
Martínez, M.; Labarta, C.; Terrón, S.; Izquierdo, J.; Perlado, J.M.
2015-07-01
Some transients can compromise the vacuum in the ITER cryostat and cause significant loads. A MELCOR model has been updated in order to assess these loads. Transients have been run with this model, and the results will be used in the mechanical assessment of the cryostat. (Author)
Ichikawa, Yasutaka; Kitagawa, Kakuya; Nagasawa, Naoki; Murashima, Shuichi; Sakuma, Hajime
2013-08-09
The recently developed model-based iterative reconstruction (MBIR) enables significant reduction of image noise and artifacts compared with adaptive statistical iterative reconstruction (ASIR) and filtered back projection (FBP). The purpose of this study was to evaluate the lesion detectability of low-dose chest computed tomography (CT) with MBIR in comparison with ASIR and FBP. Chest CT was acquired with 64-slice CT (Discovery CT750HD) under standard-dose (5.7 ± 2.3 mSv) and low-dose (1.6 ± 0.8 mSv) conditions in 55 patients (aged 72 ± 7 years) who were suspected of lung disease on chest radiographs. Low-dose CT images were reconstructed with MBIR, ASIR 50% and FBP, and standard-dose CT images were reconstructed with FBP, using a reconstructed slice thickness of 0.625 mm. Two observers evaluated the image quality of abnormal lung and mediastinal structures on a 5-point scale (score 5 = excellent, score 1 = non-diagnostic). The objective image noise was also measured as the standard deviation of CT intensity in the descending aorta. The image quality score of enlarged mediastinal lymph nodes on low-dose MBIR CT (4.7 ± 0.5) was significantly improved in comparison with low-dose FBP and ASIR CT (3.0 ± 0.5, p = 0.004; 4.0 ± 0.5, p = 0.02, respectively), and was nearly identical to the score of the standard-dose FBP images (4.8 ± 0.4, p = 0.66). Concerning decreased lung attenuation (bulla, emphysema, or cyst), the image quality score on low-dose MBIR CT (4.9 ± 0.2) was slightly better than on low-dose FBP and ASIR CT (4.5 ± 0.6, p = 0.01; 4.6 ± 0.5, p = 0.01, respectively). There were no significant differences in the image quality scores for visualization of consolidation or mass, ground-glass attenuation, or reticular opacity among the low- and standard-dose CT series. Image noise with low-dose MBIR CT (11.6 ± 1.0 Hounsfield units (HU)) was significantly lower than with low-dose ASIR (21.1 ± 2.6 HU) and standard-dose FBP CT (16.6 ± 2.3 HU). Despite a dose reduction of about 70%, MBIR can provide
International Nuclear Information System (INIS)
Rosenbluth, M.N.
1999-01-01
The design of an experimental thermonuclear reactor requires both cutting-edge technology and physics predictions precise enough to carry forward the design. The past few years of worldwide physics studies have seen great progress in understanding, innovation and integration. We will discuss this progress and the remaining issues in several key physics areas. (1) Transport and plasma confinement. A worldwide database has led to an 'empirical scaling law' for tokamaks which predicts adequate confinement for the ITER fusion mission, albeit with considerable but acceptable uncertainty. The ongoing revolution in computer capabilities has given rise to new gyrofluid and gyrokinetic simulations of microphysics which may be expected in the near future to attain predictive accuracy. Important databases on H-mode characteristics and helium retention have also been assembled. (2) Divertors, heat removal and fuelling. A novel concept for heat removal - the radiative, baffled, partially detached divertor - has been designed for ITER. Extensive two-dimensional (2D) calculations have been performed and agree qualitatively with recent experiments. Preliminary studies of the interaction of this configuration with core confinement are encouraging and the success of inside pellet launch provides an attractive alternative fuelling method. (3) Macrostability. The ITER mission can be accomplished well within ideal magnetohydrodynamic (MHD) stability limits, except for internal kink modes. Comparisons with JET, as well as a theoretical model including kinetic effects, predict such sawteeth will be benign in ITER. Alternative scenarios involving delayed current penetration or off-axis current drive may be employed if required. The recent discovery of neoclassical beta limits well below ideal MHD limits poses a threat to performance. Extrapolation to reactor scale is as yet unclear. In theory such modes are controllable by current drive profile control or feedback and experiments should
DEFF Research Database (Denmark)
Van Daele, Timothy; Gernaey, Krist V.; Ringborg, Rolf Hoffmeyer
2017-01-01
The aim of model calibration is to estimate unique parameter values from available experimental data, here applied to a biocatalytic process. The traditional approach of first gathering data and then performing a model calibration is inefficient, since the information gathered during experimentation is not actively used to optimise the experimental design. By applying an iterative robust model-based optimal experimental design, the limited amount of data collected is used to design additional informative experiments. The algorithm is used here to calibrate the initial reaction rate of an ω-transaminase-catalysed reaction in a more accurate way. The parameter confidence region estimated from the Fisher Information Matrix is compared with the likelihood confidence region, which is a more accurate but also computationally more expensive method. As a result, an important deviation between both approaches...
Coupled iterated map models of action potential dynamics in a one-dimensional cable of cardiac cells
International Nuclear Information System (INIS)
Wang Shihong; Xie Yuanfang; Qu Zhilin
2008-01-01
Low-dimensional iterated map models have been widely used to study action potential dynamics in isolated cardiac cells. Coupled iterated map models have also been widely used to investigate action potential propagation dynamics in one-dimensional (1D) cables of coupled cardiac cells; however, these models are usually empirical and not carefully validated. In this study, we first developed two coupled iterated map models which are the standard forms of diffusively coupled maps and which overcome the limitations of the previous models. We then determined the coupling strength and space constant by quantitatively comparing the 1D action potential duration profile from the coupled cardiac cell model described by differential equations with that of the coupled iterated map models. To further validate the coupled iterated map models, we compared the stability conditions of the spatially uniform state of the coupled iterated maps with those of the 1D ionic model and showed that the coupled iterated map model recapitulates the stability conditions well, i.e. the spatially uniform state is stable unless the state is chaotic. Finally, we incorporated conduction into the developed coupled iterated map model to study the effects of coupling strength on wave stability and showed that diffusive coupling between cardiac cells tends to suppress instabilities during reentry in a 1D ring and the onset of discordant alternans in a periodically paced 1D cable.
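A diffusively coupled iterated-map cable of the kind described can be sketched in a few lines. The exponential restitution curve and all parameter values below are invented for illustration; the paper's calibrated maps differ.

```python
import numpy as np

def iterate_cable(apd0, bcl, ncells, nbeats, D):
    """Diffusively coupled iterated-map model of APD along a 1D cable.

    Each cell obeys a restitution map APD_{n+1} = f(DI_n), with
    diastolic interval DI_n = BCL - APD_n, plus nearest-neighbour
    diffusive coupling of strength D and no-flux boundaries.
    """
    f = lambda di: 300.0 - 150.0 * np.exp(-di / 80.0)  # toy restitution curve
    apd = np.full(ncells, apd0)
    for _ in range(nbeats):
        new = f(bcl - apd)
        # discrete Laplacian with no-flux boundary conditions
        lap = np.zeros(ncells)
        lap[1:-1] = new[2:] - 2 * new[1:-1] + new[:-2]
        lap[0] = new[1] - new[0]
        lap[-1] = new[-2] - new[-1]
        apd = new + D * lap
    return apd

apd = iterate_cable(250.0, 500.0, 50, 100, 0.1)
```

At this slow pacing rate the map slope is below one, so the cable settles onto a spatially uniform fixed point; shortening `bcl` until the slope exceeds one is the standard route to alternans in such models.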
An Overview of Recent Advances in the Iterative Analysis of Coupled Models for Wave Propagation
Directory of Open Access Journals (Sweden)
D. Soares
2014-01-01
Wave propagation problems can be solved using a variety of methods. However, in many cases, the joint use of different numerical procedures to model different parts of the problem may be advisable, and strategies to perform the coupling between them must be developed. Many works have been published on this subject, addressing the case of electromagnetic, acoustic, or elastic waves and making use of different strategies to perform this coupling. Both direct and iterative approaches can be used, and they may exhibit specific advantages and disadvantages. This work focuses on the use of iterative coupling schemes for the analysis of wave propagation problems, presenting an overview of the application of iterative procedures to perform the coupling between different methods. Both frequency- and time-domain analyses are addressed, and problems involving acoustic, mechanical, and electromagnetic wave propagation are illustrated.
Extending the reach of strong-coupling: an iterative technique for Hamiltonian lattice models
International Nuclear Information System (INIS)
Alberty, J.; Greensite, J.; Patkos, A.
1983-12-01
The authors propose an iterative method for doing lattice strong-coupling-like calculations in a range of medium to weak couplings. The method is a modified Lanczos scheme, with greatly improved convergence properties. The technique is tested on the Mathieu equation and on a Hamiltonian finite-chain XY model, with excellent results. (Auth.)
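For context, the textbook (unmodified) Lanczos iteration for the lowest eigenvalue of a symmetric Hamiltonian matrix looks as follows; the paper's modified scheme improves on the convergence of this baseline. The test matrix is an invented example with a well-separated lowest eigenvalue.

```python
import numpy as np

def lanczos_ground_state(H, v0, m):
    """Plain Lanczos estimate of the lowest eigenvalue of symmetric H.

    Builds an m-step Krylov tridiagonalization (with full
    reorthogonalization for numerical robustness) and diagonalizes the
    small tridiagonal matrix.
    """
    n = len(v0)
    V = np.zeros((m, n))
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    v = v0 / np.linalg.norm(v0)
    V[0] = v
    w = H @ v
    alpha[0] = v @ w
    w = w - alpha[0] * v
    for j in range(1, m):
        beta[j - 1] = np.linalg.norm(w)
        v = w / beta[j - 1]
        V[j] = v
        w = H @ v - beta[j - 1] * V[j - 1]
        alpha[j] = v @ w
        w = w - alpha[j] * v
        # full reorthogonalization against all previous basis vectors
        w -= V[: j + 1].T @ (V[: j + 1] @ w)
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return np.linalg.eigvalsh(T)[0]

# invented test Hamiltonian: dominant diagonal plus weak hopping
H = np.diag(np.arange(1.0, 101.0))
H += 0.1 * (np.diag(np.ones(99), 1) + np.diag(np.ones(99), -1))
rng = np.random.default_rng(0)
e0 = lanczos_ground_state(H, rng.standard_normal(100), 80)
```

Extremal eigenvalues converge long before the Krylov space spans the full Hilbert space, which is what makes Lanczos-type schemes attractive for lattice Hamiltonians.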
Polynomial factor models : non-iterative estimation via method-of-moments
Schuberth, Florian; Büchner, Rebecca; Schermelleh-Engel, Karin; Dijkstra, Theo K.
2017-01-01
We introduce a non-iterative method-of-moments estimator for non-linear latent variable (LV) models. Under the assumption of joint normality of all exogenous variables, we use the corrected moments of linear combinations of the observed indicators (proxies) to obtain consistent path coefficient and
Solving large test-day models by iteration on data and preconditioned conjugate gradient.
Lidauer, M; Strandén, I; Mäntysaari, E A; Pösö, J; Kettunen, A
1999-12-01
A preconditioned conjugate gradient method was implemented into an iteration-on-data program for the estimation of breeding values, and its convergence characteristics were studied. An algorithm was used as a reference in which one fixed effect was solved by the Gauss-Seidel method and the other effects by a second-order Jacobi method. Implementation of the preconditioned conjugate gradient required storing four vectors (of size equal to the number of unknowns in the mixed model equations) in random access memory and reading the data at each round of iteration. The preconditioner comprised diagonal blocks of the coefficient matrix. The comparison of algorithms was based on solutions of mixed model equations obtained with a single-trait animal model and a single-trait random regression test-day model. The data sets for both models used milk yield records of primiparous Finnish dairy cows. The animal model data comprised 665,629 lactation milk yields, and the random regression test-day model data 6,732,765 test-day milk yields. Both models included pedigree information on 1,099,622 animals. The animal model (random regression test-day model) required 122 (305) rounds of iteration to converge with the reference algorithm, but only 88 (149) with the preconditioned conjugate gradient. Solving the random regression test-day model with the preconditioned conjugate gradient required 237 megabytes of random access memory and took 14% of the computation time needed by the reference algorithm.
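The solver described above can be sketched generically: a preconditioned conjugate gradient that keeps only four work vectors. Here a plain diagonal preconditioner stands in for the paper's diagonal-block preconditioner, and the test system is an invented SPD example.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, maxit=1000):
    """Preconditioned conjugate gradient for a symmetric positive
    definite system A x = b.

    M_inv applies the inverse preconditioner. Only four work vectors
    (x, r, z, p) are kept, mirroring the low-memory layout used in
    iteration-on-data implementations.
    """
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for it in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, it + 1

# invented well-conditioned SPD test system
rng = np.random.default_rng(0)
G = rng.standard_normal((50, 50))
A = G @ G.T + 50.0 * np.eye(50)
b = rng.standard_normal(50)
x, iters = pcg(A, b, lambda r: r / np.diag(A))
```

In the mixed-model application, A is never formed explicitly; the product `A @ p` is accumulated by a pass over the data, which is where "iteration on data" gets its name.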
Development of Acoustic Model-Based Iterative Reconstruction Technique for Thick-Concrete Imaging
Energy Technology Data Exchange (ETDEWEB)
Almansouri, Hani [Purdue University; Clayton, Dwight A [ORNL; Kisner, Roger A [ORNL; Polsky, Yarom [ORNL; Bouman, Charlie [Purdue University; Santos-Villalobos, Hector J [ORNL
2015-01-01
Ultrasound signals have been used extensively for non-destructive evaluation (NDE). However, typical reconstruction techniques, such as the synthetic aperture focusing technique (SAFT), are limited to quasi-homogenous thin media. New ultrasonic systems and reconstruction algorithms are needed for one-sided NDE of non-homogenous thick objects. An example application space is imaging of reinforced concrete structures for commercial nuclear power plants (NPPs). These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Another example is geothermal and oil/gas production wells. These multi-layered structures are composed of steel, cement, and several types of soil and rocks. Ultrasound systems with greater penetration range and image quality will allow for better monitoring of the well's health and prediction of high-pressure hydraulic fracturing of the rock. These application challenges need to be addressed with an integrated imaging approach, in which the application, hardware, and reconstruction software are highly integrated and optimized. Therefore, we are developing an ultrasonic system with Model-Based Iterative Reconstruction (MBIR) as the image reconstruction backbone. This paper documents the first implementation of MBIR for ultrasonic signals and shows reconstruction results for synthetically generated data.
Development of Acoustic Model-Based Iterative Reconstruction Technique for Thick-Concrete Imaging
Energy Technology Data Exchange (ETDEWEB)
Almansouri, Hani [Purdue University; Clayton, Dwight A [ORNL; Kisner, Roger A [ORNL; Polsky, Yarom [ORNL; Bouman, Charlie [Purdue University; Santos-Villalobos, Hector J [ORNL
2016-01-01
Ultrasound signals have been used extensively for non-destructive evaluation (NDE). However, typical reconstruction techniques, such as the synthetic aperture focusing technique (SAFT), are limited to quasi-homogenous thin media. New ultrasonic systems and reconstruction algorithms are needed for one-sided NDE of non-homogenous thick objects. An example application space is imaging of reinforced concrete structures for commercial nuclear power plants (NPPs). These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Another example is geothermal and oil/gas production wells. These multi-layered structures are composed of steel, cement, and several types of soil and rocks. Ultrasound systems with greater penetration range and image quality will allow for better monitoring of the well's health and prediction of high-pressure hydraulic fracturing of the rock. These application challenges need to be addressed with an integrated imaging approach, in which the application, hardware, and reconstruction software are highly integrated and optimized. Therefore, we are developing an ultrasonic system with Model-Based Iterative Reconstruction (MBIR) as the image reconstruction backbone. This paper documents the first implementation of MBIR for ultrasonic signals and shows reconstruction results for synthetically generated data.
Development of acoustic model-based iterative reconstruction technique for thick-concrete imaging
Almansouri, Hani; Clayton, Dwight; Kisner, Roger; Polsky, Yarom; Bouman, Charles; Santos-Villalobos, Hector
2016-02-01
Ultrasound signals have been used extensively for non-destructive evaluation (NDE). However, typical reconstruction techniques, such as the synthetic aperture focusing technique (SAFT), are limited to quasi-homogenous thin media. New ultrasonic systems and reconstruction algorithms are needed for one-sided NDE of non-homogenous thick objects. An example application space is imaging of reinforced concrete structures for commercial nuclear power plants (NPPs). These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Another example is geothermal and oil/gas production wells. These multi-layered structures are composed of steel, cement, and several types of soil and rocks. Ultrasound systems with greater penetration range and image quality will allow for better monitoring of the well's health and prediction of high-pressure hydraulic fracturing of the rock. These application challenges need to be addressed with an integrated imaging approach, in which the application, hardware, and reconstruction software are highly integrated and optimized. Therefore, we are developing an ultrasonic system with Model-Based Iterative Reconstruction (MBIR) as the image reconstruction backbone. This paper documents the first implementation of MBIR for ultrasonic signals and shows reconstruction results for synthetically generated data.
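The optimization structure behind MBIR can be sketched with a linear forward model and a quadratic prior; real MBIR uses a physics-based acoustic forward model and an edge-preserving prior, so everything below is a simplified stand-in with invented dimensions.

```python
import numpy as np

def mbir_sketch(A, y, lam, n_iter=2000):
    """Toy model-based iterative reconstruction.

    Minimizes ||y - A x||^2 + lam * ||x||^2 (forward model fit plus a
    quadratic prior) by plain gradient descent with a step size chosen
    from the Lipschitz constant of the gradient.
    """
    _, n = A.shape
    x = np.zeros(n)
    L = np.linalg.norm(A, 2) ** 2 + lam   # Lipschitz constant of the gradient
    step = 1.0 / L
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y) + lam * x
        x -= step * grad
    return x

# invented overdetermined noiseless test problem
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
x_true = rng.standard_normal(10)
x_rec = mbir_sketch(A, A @ x_true, lam=1e-6)
```

In practice the gradient step is replaced by more efficient updates (e.g. iterative coordinate descent), and the prior term is non-quadratic to preserve edges.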
Systematic vacuum study of the ITER model cryopump by test particle Monte Carlo simulation
Energy Technology Data Exchange (ETDEWEB)
Luo, Xueli; Haas, Horst; Day, Christian [Institute for Technical Physics, Karlsruhe Institute of Technology, P.O. Box 3640, 76021 Karlsruhe (Germany)
2011-07-01
The primary pumping systems on the ITER torus are based on eight tailor-made cryogenic pumps, because no standard commercial vacuum pump can meet the ITER working criteria. This kind of cryopump provides high pumping speed, especially for light gases, by cryosorption on activated charcoal at 4.5 K. In this paper we present systematic Monte Carlo simulation results for the model pump at reduced scale, obtained with ProVac3D, a new Test Particle Monte Carlo simulation program developed by KIT. The simulation model includes the most important mechanical structures, such as the sixteen cryogenic panels working at 4.5 K, the 80 K radiation shield envelope with baffles, the pump housing, the inlet valve, and the TIMO (Test facility for the ITER Model Pump) test facility. Three typical gas species, i.e. deuterium, protium and helium, are simulated. The pumping characteristics have been obtained. The results are in good agreement with the experimental data up to a gas throughput of 1000 sccm, which marks the limit of free molecular flow. This means that ProVac3D is a useful tool in the design of the prototype cryopump of ITER. In addition, the capture factors at different critical positions are calculated. They can be used as important input parameters for a follow-up Direct Simulation Monte Carlo (DSMC) simulation at higher gas throughput.
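What a test-particle Monte Carlo code computes can be illustrated on the textbook case of free molecular flow through a cylindrical tube with diffusely reflecting walls. This is not ProVac3D, just a minimal sketch of the method on an idealized geometry.

```python
import numpy as np

def clausing_tpmc(L, n_particles, rng):
    """Test-particle Monte Carlo for free molecular flow in a tube.

    Cylinder of radius 1 and length L; molecules enter at z = 0 with a
    cosine-law angular distribution and are re-emitted diffusely at
    every wall collision. Returns the fraction transmitted through the
    far end (the Clausing transmission probability).
    """
    def cosine_dir(normal):
        # sample a cosine-law direction about the inward unit normal
        a = np.array([1.0, 0.0, 0.0]) if abs(normal[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
        t1 = np.cross(normal, a)
        t1 /= np.linalg.norm(t1)
        t2 = np.cross(normal, t1)
        u, phi = rng.random(), 2.0 * np.pi * rng.random()
        st = np.sqrt(u)
        return np.sqrt(1.0 - u) * normal + st * np.cos(phi) * t1 + st * np.sin(phi) * t2

    transmitted = 0
    for _ in range(n_particles):
        # uniform starting point on the entrance disc
        r, th = np.sqrt(rng.random()), 2.0 * np.pi * rng.random()
        p = np.array([r * np.cos(th), r * np.sin(th), 0.0])
        d = cosine_dir(np.array([0.0, 0.0, 1.0]))
        while True:
            # distance along d to the cylindrical wall
            a2 = d[0] ** 2 + d[1] ** 2
            if a2 > 1e-14:
                b = p[0] * d[0] + p[1] * d[1]
                disc = max(b * b - a2 * (p[0] ** 2 + p[1] ** 2 - 1.0), 0.0)
                t_wall = (-b + np.sqrt(disc)) / a2
            else:
                t_wall = np.inf
            # distance along d to whichever end plane lies ahead
            t_end = (L - p[2]) / d[2] if d[2] > 0 else (-p[2] / d[2] if d[2] < 0 else np.inf)
            if t_end <= t_wall:            # leaves through an end face
                if d[2] > 0:
                    transmitted += 1
                break
            p = p + t_wall * d             # wall hit: diffuse re-emission
            p[:2] /= np.hypot(p[0], p[1])  # snap back onto the unit circle
            d = cosine_dir(np.array([-p[0], -p[1], 0.0]))
    return transmitted / n_particles

rng = np.random.default_rng(7)
w = clausing_tpmc(1.0, 20000, rng)
```

For a length-to-radius ratio of 1 the tabulated Clausing factor is about 0.67, which the sampled estimate should approach to within statistical error.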
Completion of the ITER central solenoid model coils installation
International Nuclear Information System (INIS)
Tsuji, H.
1999-01-01
This short article details how dozens of problems regarding the installation of the central solenoid model coils were faced and successfully overcome, one by one, at JAERI-Naka. A black-and-white photograph shows K. Kawano, a staff member of the JAERI superconducting magnet laboratory, still inside the vacuum tank while the lid is already being lowered.
Electrostatic ion thrusters - towards predictive modeling
Energy Technology Data Exchange (ETDEWEB)
Kalentev, O.; Matyash, K.; Duras, J.; Lueskow, K.F.; Schneider, R. [Ernst-Moritz-Arndt Universitaet Greifswald, D-17489 (Germany); Koch, N. [Technische Hochschule Nuernberg Georg Simon Ohm, Kesslerplatz 12, D-90489 Nuernberg (Germany); Schirra, M. [Thales Electronic Systems GmbH, Soeflinger Strasse 100, D-89077 Ulm (Germany)
2014-02-15
The development of electrostatic ion thrusters has so far mainly been based on empirical and qualitative know-how and on evolutionary iteration steps. This has resulted in considerable effort for prototype design, construction and testing, and therefore in significant development and qualification costs and long development times. For future developments it is anticipated to implement simulation tools that allow quantitative prediction of ion thruster performance, long-term behavior and spacecraft interaction prior to hardware design and construction. Based on integrated numerical models combining self-consistent kinetic plasma models with plasma-wall interaction modules, a new quality in the description of electrostatic thrusters can be reached. These open the perspective for predictive modeling in this field. This paper reviews the application of a set of predictive numerical modeling tools to an ion thruster model of the HEMP-T (High Efficiency Multi-stage Plasma Thruster) type patented by Thales Electron Devices GmbH. (orig.)
Confidence scores for prediction models
DEFF Research Database (Denmark)
Gerds, Thomas Alexander; van de Wiel, MA
2011-01-01
In medical statistics, many alternative strategies are available for building a prediction model based on training data. Prediction models are routinely compared by means of their prediction performance in independent validation data. If only one data set is available for training and validation,...
ATHENA calculation model for the ITER-FEAT divertor cooling system. Final report with updates
International Nuclear Information System (INIS)
Eriksson, John; Sjoeberg, A.; Sponton, L.L.
2001-05-01
An ATHENA model of the ITER-FEAT divertor cooling system has been developed for the purpose of calculating and evaluating the consequences of different thermal-hydraulic accidents, as specified in the Accident Analysis Specifications for the ITER-FEAT Generic Site Safety Report. The model is able to assess a variety of conceivable operational transients, from small flow disturbances to more critical conditions such as a total blackout caused by loss of offsite and emergency power. The main objective of analyzing this type of scenario is to determine the margins against jeopardizing the integrity of the divertor cooling system components and piping. The model of the divertor primary heat transport system encompasses the divertor cassettes, the port limiter systems, the pressurizer, the heat exchanger, and all feed and return pipes of these components. The development was pursued according to the practices and procedures outlined in the ATHENA code manuals, using available modelling components such as volumes, junctions, heat structures and process controls.
A numerical model for the simulation of quench in the ITER magnets
International Nuclear Information System (INIS)
Bottura, L.
1996-01-01
A computational model describing the initiation and evolution of normal zones in the cable-in-conduit superconductors designed for the international thermonuclear experimental reactor (ITER) is presented. Because of the particular geometry of the ITER cables, the model treats separately the helium momenta in the two cooling channels and the temperatures of the cable constituents. The numerical implementation of the model is discussed in conjunction with the selection of a well-suited solution algorithm. In particular, the solution procedure chosen is based on an implicit upwind finite element technique with adaptive time step and mesh size adjustment possibilities. The time step and mesh adaption procedures are described. Examples of application of the model are also reported. 39 refs., 6 figs., 2 tabs
International Nuclear Information System (INIS)
Troyon, F.
1997-01-01
Recurrent attacks against ITER, the new generation of tokamaks, are a mix of political and scientific arguments. This short article gives a historical review of the European fusion programme, which has made it possible to build and operate several installations with the aim of obtaining the experimental results needed to move the programme forward. ITER will bring together a fusion reactor core with technologies such as materials, superconducting coils, heating devices and instrumentation in order to validate and delimit the operating range. ITER will be a logical and decisive step towards the use of controlled fusion. (A.C.)
Kazemi, Mahdi; Arefi, Mohammad Mehdi
2017-03-01
In this paper, an online identification algorithm is presented for nonlinear systems in the presence of output colored noise. The proposed method is based on the extended recursive least squares (ERLS) algorithm, where the identified system is in polynomial Wiener form. To this end, an unknown intermediate signal is estimated by using an inner iterative algorithm. The iterative recursive algorithm adaptively modifies the parameter vector of the presented Wiener model when the system parameters vary. In addition, to increase the robustness of the proposed method against variations, a robust RLS algorithm is applied to the model. Simulation results are provided to show the effectiveness of the proposed approach. The results confirm that the proposed method has a fast convergence rate with robust characteristics, which increases the efficiency of the proposed model and identification approach. For instance, a FIT criterion of 92% is achieved for the CSTR process, where about 400 data points are used. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
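The recursive least-squares core of such identification schemes is compact enough to sketch. The snippet below is a generic RLS recursion with a forgetting factor, fitted to a toy polynomial regressor; it is not the authors' ERLS/Wiener implementation, and the system, noise level, and all constants are illustrative assumptions.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.99):
    """One recursive-least-squares step: update the parameter estimate theta
    and covariance P given regressor phi and measurement y.
    lam is the forgetting factor (lam < 1 lets the estimate track drift)."""
    phi = phi.reshape(-1, 1)
    K = P @ phi / (lam + phi.T @ P @ phi)          # gain vector
    theta = theta + (K * (y - phi.T @ theta)).ravel()
    P = (P - K @ phi.T @ P) / lam                  # covariance update
    return theta, P

# Identify y = 2*u + 0.5*u**2, a toy polynomial static nonlinearity.
rng = np.random.default_rng(0)
theta = np.zeros(2)
P = 1e3 * np.eye(2)
for _ in range(500):
    u = rng.uniform(-1, 1)
    phi = np.array([u, u**2])                      # polynomial regressor
    y = 2.0 * u + 0.5 * u**2 + 0.01 * rng.normal()
    theta, P = rls_update(theta, P, phi, y)
print(theta)  # close to [2.0, 0.5]
```

The recursion needs no matrix inversion beyond a scalar divide, which is what makes it suitable for online use.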
Scaling of the MHD perturbation amplitude required to trigger a disruption and predictions for ITER
Czech Academy of Sciences Publication Activity Database
de Vries, P.C.; Pautasso, G.; Nardon, E.; Cahyna, Pavel; Gerasimov, S.; Havlíček, Josef; Hender, T.C.; Huijsmans, G.T.A.; Lehnen, M.; Maraschek, M.; Markovič, Tomáš; Snipes, J.A.
2016-01-01
Roč. 56, č. 2 (2016), č. článku 026007. ISSN 0029-5515 R&D Projects: GA MŠk(CZ) LM2011021 EU Projects: European Commission(XE) 633053 - EUROfusion Institutional support: RVO:61389021 Keywords : disruptions * locked modes * MHD instabilities * ITER * COMPASS tokamak Subject RIV: BL - Plasma and Gas Discharge Physics OBOR OECD: Fluids and plasma physics (including surface physics) Impact factor: 3.307, year: 2016 http://iopscience.iop.org/article/10.1088/0029-5515/56/2/026007/meta
International Nuclear Information System (INIS)
Huguet, M.
2003-01-01
The ITER magnets are long lead-time items, and the preparation of their construction is the subject of a major and coordinated effort of the ITER International Team and Participant Teams. The results of the ITER model coil programme constitute the basis and the main source of data for the preparation of the technical specifications for the procurement of the ITER magnets. A review of the salient results of the ITER model coil programme is given, and the significance of these results for the preparation of full-size industrial production is explained. The model coil programme has confirmed the validity of the design and the manufacturer's ability to produce the coils with the required quality level. The programme has also allowed the optimisation of the conductor design and the identification of further development which would lead to cost reductions for the toroidal field coil case. (author)
International Nuclear Information System (INIS)
Maekawa, Fujio; Konno, Chikara; Kasugai, Yoshimi; Oyama, Yukio; Uno, Yoshitomo; Maekawa, Hiroshi; Ikeda, Yujiro
2000-01-01
As an R and D task of the shielding neutronics experiments under the Engineering Design Activities of the International Thermonuclear Experimental Reactor (ITER), streaming experiments simulating the gap configuration formed by two neighboring blanket modules of ITER were carried out at the FNS (Fusion Neutron Source) facility. In this work, the prediction capability for various nuclear design parameters was investigated through analysis of the experiments. The Monte Carlo transport calculation code MCNP-4A and the FENDL/E-1.0 and JENDL Fusion File cross-section data libraries were used for the analysis, with detailed modeling of the experimental conditions. As a result, all the measured quantities were reproduced within about ±30% by the calculations. It was concluded that these calculation tools are capable of predicting nuclear design parameters, such as helium production rates at the connection legs of blanket modules to the back plate and nuclear responses in the toroidal field coils, with an uncertainty of ±30% for geometries where the gap-streaming effect is significant. (author)
Building generic anatomical models using virtual model cutting and iterative registration
Directory of Open Access Journals (Sweden)
Hallgrímsson Benedikt
2010-02-01
Abstract Background Using 3D generic models to statistically analyze trends in biological structure changes is an important tool in morphometrics research. Therefore, 3D generic models built for a range of populations are in high demand. However, due to the complexity of biological structures and the limited views of them that medical images can offer, it is still an exceptionally difficult task to quickly and accurately create 3D generic models (a model is a 3D graphical representation of a biological structure) based on medical image stacks (a stack is an ordered collection of 2D images). We show that the creation of a generic model that captures spatial information exploitable in statistical analyses is facilitated by coupling our generalized segmentation method to existing automatic image registration algorithms. Methods The method of creating generic 3D models consists of the following processing steps: (i) scanning subjects to obtain image stacks; (ii) creating individual 3D models from the stacks; (iii) interactively extracting sub-volumes by cutting each model to generate the sub-model of interest; (iv) creating image stacks that contain only the information pertaining to the sub-models; (v) iteratively registering the corresponding new 2D image stacks; (vi) averaging the newly created sub-models based on intensity to produce the generic model from all the individual sub-models. Results After several registration procedures are applied to the image stacks, we can create averaged image stacks with sharp boundaries. The averaged 3D model created from those image stacks is very close to the average representation of the population. The image registration time varies depending on the image size and the desired accuracy of the registration. Both volumetric data and a surface model for the generic 3D model are created at the final step. Conclusions Our method is very flexible and easy to use such that anyone can use image stacks to create models and
PREDICTED PERCENTAGE DISSATISFIED (PPD) MODEL ...
African Journals Online (AJOL)
their low power requirements, are relatively cheap and are environmentally friendly. ... PREDICTED PERCENTAGE DISSATISFIED MODEL EVALUATION OF EVAPORATIVE COOLING ... The performance of direct evaporative coolers is a.
Mechanical and Electrical Modeling of Strands in Two ITER CS Cable Designs
Torre, A; Ciazynski, D
2014-01-01
Following the test of the first Central Solenoid (CS) conductor short samples for the International Thermonuclear Experimental Reactor (ITER) in the SULTAN facility, the ITER Organization (IO) decided to manufacture and test two alternate samples using four different cable designs. These samples, while using the same Nb$_{3}$Sn strand, were meant to assess the influence of various cable design parameters on the conductor performance and behavior under mechanical cycling. In particular, the second of these samples, CSIO2, aimed at comparing designs with modified cabling twist pitch sequences. This sample has been tested, and the two legs exhibited very different behaviors. To help understand what could lead to such a difference, these two cables were mechanically modeled using the MULTIFIL code, and the resulting strain map was used as an input to the CEA electrical code CARMEN. This article presents the main data extracted from the mechanical simulation and their use in the electrical modeling of individual s...
Clustered iterative stochastic ensemble method for multi-modal calibration of subsurface flow models
Elsheikh, Ahmed H.
2013-05-01
A novel multi-modal parameter estimation algorithm is introduced. Parameter estimation is an ill-posed inverse problem that might admit many different solutions. This is attributed to the limited amount of measured data used to constrain the inverse problem. The proposed multi-modal model calibration algorithm uses an iterative stochastic ensemble method (ISEM) for parameter estimation. ISEM employs an ensemble of directional derivatives within a Gauss-Newton iteration for nonlinear parameter estimation. ISEM is augmented with a clustering step based on the k-means algorithm to form sub-ensembles. These sub-ensembles are used to explore different parts of the search space. Clusters are updated at regular intervals of the algorithm to allow merging of close clusters approaching the same local minimum. Numerical testing demonstrates the potential of the proposed algorithm in dealing with multi-modal nonlinear parameter estimation for subsurface flow models. © 2013 Elsevier B.V.
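The clustering idea can be illustrated in miniature. The sketch below is a simplified stand-in for ISEM, not the method itself: a double-well misfit with two local minima, an ensemble split into sub-ensembles by a tiny 1-D k-means, and a damped finite-difference descent step per sub-ensemble standing in for the Gauss-Newton update. The misfit function and all constants are illustrative assumptions.

```python
import numpy as np

def kmeans_1d(x, k=2, iters=20):
    """Tiny k-means for 1-D data with deterministic initialization."""
    centers = np.linspace(x.min(), x.max(), k)
    labels = np.zeros(len(x), dtype=int)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return centers, labels

def misfit(x):
    # synthetic double-well misfit with local minima at x = -1 and x = +1
    return (x**2 - 1.0) ** 2

rng = np.random.default_rng(1)
ens = rng.uniform(-2, 2, 40)                 # initial ensemble of parameter guesses
for _ in range(100):
    _, labels = kmeans_1d(ens, 2)            # split ensemble into sub-ensembles
    for j in range(2):
        sub = ens[labels == j]               # each sub-ensemble explores one mode
        g = (misfit(sub + 1e-4) - misfit(sub - 1e-4)) / 2e-4  # finite-difference slope
        ens[labels == j] = sub - 0.05 * g    # damped descent step per cluster
print(np.sort(np.round(kmeans_1d(ens, 2)[0], 2)))  # ≈ [-1., 1.]
```

A single unclustered ensemble would collapse into one of the two minima; the clustering step is what lets both modes survive.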
An Iterative Algorithm to Determine the Dynamic User Equilibrium in a Traffic Simulation Model
Gawron, C.
An iterative algorithm to determine the dynamic user equilibrium with respect to link costs defined by a traffic simulation model is presented. Each driver's route choice is modeled by a discrete probability distribution which is used to select a route in the simulation. After each simulation run, the probability distribution is adapted to minimize the travel costs. Although the algorithm does not depend on the simulation model, a queuing model is used for performance reasons. The stability of the algorithm is analyzed for a simple example network. As an application example, a dynamic version of Braess's paradox is studied.
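The iteration can be sketched on a toy two-route network. The snippet below adapts the route-choice probability distribution with the method of successive averages against analytic link costs rather than a traffic simulation; the free-flow times and congestion slopes are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Two alternative routes; travel cost grows with the flow share assigned to a route.
t0 = np.array([10.0, 15.0])        # free-flow travel times (illustrative)
slope = np.array([20.0, 5.0])      # congestion slopes (illustrative)

p = np.array([0.5, 0.5])           # route-choice probability distribution
for it in range(200):
    cost = t0 + slope * p          # link costs for the current split
    best = np.zeros(2)
    best[np.argmin(cost)] = 1.0    # all-or-nothing assignment to the cheaper route
    p = p + (best - p) / (it + 2)  # method of successive averages (step 1/n)
cost = t0 + slope * p
print(np.round(p, 3), np.round(cost, 1))  # at equilibrium both route costs are nearly equal
```

At the dynamic user equilibrium no driver can reduce travel cost by switching routes, which here means the two costs converge to the same value (p ≈ [0.4, 0.6] for these coefficients).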
Dynamic analysis of ITER tokamak. Based on results of vibration test using scaled model
International Nuclear Information System (INIS)
Takeda, Nobukazu; Kakudate, Satoshi; Nakahira, Masataka
2005-01-01
Vibration experiments on the support structures with flexible plates for the ITER major components, such as the toroidal field coil (TF coil) and vacuum vessel (VV), were performed using small-sized flexible plates in order to obtain their basic mechanical characteristics, such as the dependence of the stiffness on the loading angle. The experimental results were compared with analytical ones in order to establish an adequate analytical model for the ITER support structure with flexible plates. As a result, the bolted connection of the flexible plates to the base plate strongly affected the stiffness of the flexible plates. After studying how to model the bolted connection, it was found that analytical results modeling the bolts with finite stiffness only in the axial direction and infinite stiffness in the other directions agree well with the experimental ones. Based on this, numerical analysis of the actual support structures of the ITER VV and TF coil was performed. The support structure composed of flexible plates and connection bolts was modeled as a spring composed of only two spring elements, simulating the in-plane and out-of-plane stiffness of the support structure with flexible plates including the effect of the connection bolts. The stiffness of both spring models for the VV and TF coil agrees well with that of shell models simulating the actual structures, such as flexible plates and connection bolts, based on the experimental results. It is therefore found that the spring model, with only two values of stiffness, makes it possible to simplify the complicated support structure with flexible plates for the dynamic analysis of the VV and TF coil. Using the proposed spring model, dynamic analyses of the VV and TF coil for ITER were performed to assess their integrity under the design earthquake. As a result, it is found that the maximum relative displacement of 8.6 mm between the VV and TF coil is much less than 100 mm, so that the integrity of the VV and TF coil of the
Jackson, B. V.; Yu, H. S.; Hick, P. P.; Buffington, A.; Odstrcil, D.; Kim, T. K.; Pogorelov, N. V.; Tokumaru, M.; Bisi, M. M.; Kim, J.; Yun, J.
2017-12-01
The University of California, San Diego has developed an iterative remote-sensing time-dependent three-dimensional (3-D) reconstruction technique which provides volumetric maps of density, velocity, and magnetic field. We have applied this technique in near real time for over 15 years with a kinematic model approximation to fit data from ground-based interplanetary scintillation (IPS) observations. Our modeling concept extends volumetric data from an inner boundary placed above the Alfvén surface out to the inner heliosphere. We now use this technique to drive 3-D MHD models at their inner boundary and generate output 3-D data files that are fit to remotely-sensed observations (in this case IPS observations), and iterated. These analyses are also iteratively fit to in-situ spacecraft measurements near Earth. To facilitate this process, we have developed a traceback from input 3-D MHD volumes to yield an updated boundary in density, temperature, and velocity, which also includes magnetic-field components. Here we will show examples of this analysis using the ENLIL 3D-MHD and the University of Alabama Multi-Scale Fluid-Kinetic Simulation Suite (MS-FLUKSS) heliospheric codes. These examples help refine poorly-known 3-D MHD variables (i.e., density, temperature), and parameters (gamma) by fitting heliospheric remotely-sensed data between the region near the solar surface and in-situ measurements near Earth.
Energy Technology Data Exchange (ETDEWEB)
Notohamiprodjo, S.; Deak, Z.; Meurer, F.; Maertz, F.; Mueck, F.G.; Geyer, L.L.; Wirth, S. [Ludwig-Maximilians University Hospital of Munich, Institute for Clinical Radiology, Munich (Germany)
2015-01-15
The purpose of this study was to compare cranial CT (CCT) image quality (IQ) of the MBIR algorithm with standard iterative reconstruction (ASiR). In this institutional review board (IRB)-approved study, raw data sets of 100 unenhanced CCT examinations (120 kV, 50-260 mAs, 20 mm collimation, 0.984 pitch) were reconstructed with both ASiR and MBIR. Signal-to-noise (SNR) and contrast-to-noise (CNR) were calculated from attenuation values measured in caudate nucleus, frontal white matter, anterior ventricle horn, fourth ventricle, and pons. Two radiologists, who were blinded to the reconstruction algorithms, evaluated anonymized multiplanar reformations of 2.5 mm with respect to depiction of different parenchymal structures and impact of artefacts on IQ with a five-point scale (0: unacceptable, 1: less than average, 2: average, 3: above average, 4: excellent). MBIR decreased artefacts more effectively than ASiR (p < 0.01). The median depiction score for MBIR was 3, whereas the median value for ASiR was 2 (p < 0.01). SNR and CNR were significantly higher in MBIR than ASiR (p < 0.01). MBIR showed significant improvement of IQ parameters compared to ASiR. As CCT is an examination that is frequently required, the use of MBIR may allow for substantial reduction of radiation exposure caused by medical diagnostics. (orig.)
Numerical modeling of the radiative transfer in a turbid medium using the synthetic iteration.
Budak, Vladimir P; Kaloshin, Gennady A; Shagalov, Oleg V; Zheltov, Victor S
2015-07-27
In this paper we propose a fast but accurate algorithm for numerical modeling of light fields in a turbid medium slab. Numerical solution of the radiative transfer equation (RTE) requires its discretization, based on the elimination of the anisotropic part of the solution and the replacement of the scattering integral by a finite sum. The regular part of the solution is determined numerically. A good choice of the method for eliminating the anisotropic part of the solution gives the algorithm high convergence in the mean-square metric. The method of synthetic iterations can be used to improve the convergence in the uniform metric. The significant increase in solution accuracy obtained with synthetic iterations allows the two-stream approximation to be applied for determining the regular part. This approach permits the proposed method to be generalized to the case of an arbitrary 3D geometry of the medium.
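The plain fixed-point scheme that synthetic iterations accelerate can be sketched for a two-stream slab. The snippet below is ordinary source iteration with implicit upwind sweeps, shown only to make the slow-convergence baseline concrete (its error contracts roughly by the single-scattering albedo per sweep); the grid, albedo, and direction cosine are illustrative assumptions, and no synthetic acceleration step is included.

```python
import numpy as np

# Two-stream slab:  mu dI+/dx = -I+ + S,   -mu dI-/dx = -I- + S,
# with isotropic scattering source S = (w/2)(I+ + I-).
n, L, mu, w = 200, 5.0, 0.5, 0.9   # grid cells, slab optical depth, direction cosine, albedo
dx = L / n
Ip = np.zeros(n + 1)               # forward stream
Im = np.zeros(n + 1)               # backward stream
for sweep in range(500):           # source (fixed-point) iteration
    S = 0.5 * w * (Ip + Im)        # scattering source frozen from the previous sweep
    Ip_new = np.empty(n + 1)
    Im_new = np.empty(n + 1)
    Ip_new[0] = 1.0                # unit intensity incident at x = 0
    for i in range(n):             # implicit upwind sweep, left to right
        Ip_new[i + 1] = (Ip_new[i] + dx / mu * S[i + 1]) / (1 + dx / mu)
    Im_new[n] = 0.0                # vacuum boundary at x = L
    for i in range(n, 0, -1):      # implicit upwind sweep, right to left
        Im_new[i - 1] = (Im_new[i] + dx / mu * S[i - 1]) / (1 + dx / mu)
    if np.max(np.abs(Ip_new - Ip)) < 1e-10:
        break                      # converged
    Ip, Im = Ip_new, Im_new
print(round(Ip[n], 4))             # transmitted fraction of the incident intensity
```

With albedo w close to 1 this iteration becomes painfully slow, which is exactly the regime where a synthetic (e.g., two-stream-based) correction step pays off.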
ITER EDA newsletter. V. 7, no. 7
International Nuclear Information System (INIS)
1998-07-01
This newsletter contains the articles: 'Extraordinary ITER council meeting', 'ITER EDA final safety meeting' and 'Summary report of the 3rd combined workshop of the ITER confinement and transport and ITER confinement database and modeling expert groups'
Directory of Open Access Journals (Sweden)
C. Xu
2016-06-01
Automatic image registration is a vital yet challenging task, particularly for multi-sensor remote sensing images. Given the diversity of the data, it is unlikely that a single registration algorithm or a single image feature will work satisfactorily for all applications. Focusing on this issue, the main contribution of this paper is to propose an automatic optical-to-SAR image registration method using a multi-level and refinement model. Firstly, a multi-level strategy of coarse-to-fine registration is presented: visual saliency features are used to acquire a coarse registration, specific area and line features are then used to refine the registration result, and after that sub-pixel matching is applied using a KNN graph. Secondly, an iterative strategy that involves adaptive parameter adjustment for re-extracting and re-matching features is presented. Considering the fact that almost all feature-based registration methods rely on feature extraction results, the iterative strategy improves the robustness of feature matching, and all parameters can be automatically and adaptively adjusted in the iterative procedure. Thirdly, a uniform level-set segmentation model for optical and SAR images is presented to segment conjugate features, and a Voronoi diagram is introduced into spectral point matching (VSPM) to further enhance the matching accuracy between the two sets of matching points. Experimental results show that the proposed method can effectively and robustly generate sufficient, reliable point pairs and provide accurate registration.
Energy Technology Data Exchange (ETDEWEB)
Kaasalainen, Touko; Lampinen, Anniina [University of Helsinki and Helsinki University Hospital, HUS Medical Imaging Center, Radiology, POB 340, Helsinki (Finland); University of Helsinki, Department of Physics, Helsinki (Finland); Palmu, Kirsi [University of Helsinki and Helsinki University Hospital, HUS Medical Imaging Center, Radiology, POB 340, Helsinki (Finland); School of Science, Aalto University, Department of Biomedical Engineering and Computational Science, Helsinki (Finland); Reijonen, Vappu; Kortesniemi, Mika [University of Helsinki and Helsinki University Hospital, HUS Medical Imaging Center, Radiology, POB 340, Helsinki (Finland); Leikola, Junnu [University of Helsinki and Helsinki University Hospital, Department of Plastic Surgery, Helsinki (Finland); Kivisaari, Riku [University of Helsinki and Helsinki University Hospital, Department of Neurosurgery, Helsinki (Finland)
2015-09-15
Medical professionals need to exercise particular caution when developing CT scanning protocols for children who require multiple CT studies, such as those with craniosynostosis. To evaluate the utility of ultra-low-dose CT protocols with model-based iterative reconstruction techniques for craniosynostosis imaging. We scanned two pediatric anthropomorphic phantoms with a 64-slice CT scanner using different low-dose protocols for craniosynostosis. We measured organ doses in the head region with metal-oxide-semiconductor field-effect transistor (MOSFET) dosimeters. Numerical simulations served to estimate organ and effective doses. We objectively and subjectively evaluated the quality of images produced by adaptive statistical iterative reconstruction (ASiR) 30%, ASiR 50% and Veo (all by GE Healthcare, Waukesha, WI). Image noise and contrast were determined for different tissues. Mean organ dose with the newborn phantom was decreased up to 83% compared to the routine protocol when using ultra-low-dose scanning settings. Similarly, for the 5-year phantom the greatest radiation dose reduction was 88%. The numerical simulations supported the findings with MOSFET measurements. The image quality remained adequate with Veo reconstruction, even at the lowest dose level. Craniosynostosis CT with model-based iterative reconstruction could be performed with a 20-μSv effective dose, corresponding to the radiation exposure of plain skull radiography, without compromising required image quality. (orig.)
A fast iterative model for discrete velocity calculations on triangular grids
International Nuclear Information System (INIS)
Szalmas, Lajos; Valougeorgis, Dimitris
2010-01-01
A fast synthetic-type iterative model is proposed to speed up the slow convergence of discrete velocity algorithms for solving linear kinetic equations on triangular lattices. The efficiency of the scheme is verified both theoretically, by a discrete Fourier stability analysis, and computationally, by solving a rarefied gas flow problem. The stability analysis of the discrete kinetic equations yields the spectral radii of the typical and the proposed iterative algorithms and reveals the drastically improved performance of the latter for any grid resolution. This is the first time that a stability analysis of the full discrete kinetic equations related to rarefied gas theory has been formulated, providing the detailed dependency of the iteration scheme on the discretization parameters in the phase space. The corresponding characteristics of the model, deduced by solving numerically the rarefied gas flow through a duct with triangular cross section, are in complete agreement with the theoretical findings. The proposed approach may open a way for fast computation of rarefied gas flows on complex geometries in the whole range of gas rarefaction, including the hydrodynamic regime.
Computed tomography depiction of small pediatric vessels with model-based iterative reconstruction
Energy Technology Data Exchange (ETDEWEB)
Koc, Gonca; Courtier, Jesse L.; Phelps, Andrew; Marcovici, Peter A.; MacKenzie, John D. [UCSF Benioff Children' s Hospital, Department of Radiology and Biomedical Imaging, San Francisco, CA (United States)
2014-07-15
Computed tomography (CT) is extremely important in characterizing blood vessel anatomy and vascular lesions in children. Recent advances in CT reconstruction technology hold promise for improved image quality and also reductions in radiation dose. This report evaluates potential improvements in image quality for the depiction of small pediatric vessels with model-based iterative reconstruction (Veo™), a technique developed to improve image quality and reduce noise. To evaluate Veo™ as an improved method when compared to adaptive statistical iterative reconstruction (ASIR™) for the depiction of small vessels on pediatric CT. Seventeen patients (mean age: 3.4 years, range: 2 days to 10.0 years; 6 girls, 11 boys) underwent contrast-enhanced CT examinations of the chest and abdomen in this HIPAA-compliant and institutional review board approved study. Raw data were reconstructed into separate image datasets using Veo™ and ASIR™ algorithms (GE Medical Systems, Milwaukee, WI). Four blinded radiologists subjectively evaluated image quality. The pulmonary, hepatic, splenic and renal arteries were evaluated for the length and number of branches depicted. Datasets were compared with parametric and non-parametric statistical tests. Readers stated a preference for Veo™ over ASIR™ images when subjectively evaluating image quality criteria for vessel definition, image noise and resolution of small anatomical structures. The mean image noise in the aorta and fat was significantly less for Veo™ vs. ASIR™ reconstructed images. Quantitative measurements of mean vessel lengths and number of branch vessels delineated were significantly different for Veo™ and ASIR™ images. Veo™ consistently showed more of the vessel anatomy: longer vessel length and more branching vessels. When compared to the more established adaptive statistical iterative reconstruction algorithm, model
Bootstrap prediction and Bayesian prediction under misspecified models
Fushiki, Tadayoshi
2005-01-01
We consider a statistical prediction problem under misspecified models. In a sense, Bayesian prediction is an optimal prediction method when an assumed model is true. Bootstrap prediction is obtained by applying Breiman's `bagging' method to a plug-in prediction. Bootstrap prediction can be considered to be an approximation to the Bayesian prediction under the assumption that the model is true. However, in applications, there are frequently deviations from the assumed model. In this paper, bo...
Directory of Open Access Journals (Sweden)
Lei Yang
2014-01-01
Cliques (maximal complete subnets) in a protein-protein interaction (PPI) network are an important resource used to analyze protein complexes and functional modules. Clique-based methods of predicting PPIs compensate for the incompleteness of data from biological experiments. However, clique-based prediction methods depend only on the topology of the network, and the false-positive and false-negative interactions in a network usually interfere with prediction. Therefore, we propose a method combining clique-based prediction with gene ontology (GO) annotations to overcome this shortcoming and improve the accuracy of the predictions. According to different GO correcting rules, we generate two predicted interaction sets, which guarantees the quality and quantity of the predicted protein interactions. The proposed method is applied to the PPI network from the Database of Interacting Proteins (DIP), and most of the predicted interactions are verified by another biological database, BioGRID. The predicted protein interactions are appended to the original protein network, which leads to clique extension and demonstrates the significance of the biological meaning.
ITER containment design-assist analysis
International Nuclear Information System (INIS)
Nguyen, T.H.
1992-03-01
In this report, the analysis methods, models and assumptions used to predict the pressure and temperature transients in the ITER containment following a loss of coolant accident are presented. The ITER reactor building is divided into 10 different volumes (zones) based on their functional design. The base model presented in this report will be modified in volume 2 in order to determine the peak pressure, the required size of openings between various functional zones and the differential pressures on walls separating these zones
DEFF Research Database (Denmark)
Precht, Helle; Kitslaar, Pieter H.; Broersen, Alexander
2017-01-01
Purpose: Investigate the influence of adaptive statistical iterative reconstruction (ASIR) and the model-based IR (Veo) reconstruction algorithm in coronary computed tomography angiography (CCTA) images on quantitative measurements in coronary arteries for plaque volumes and intensities. Methods...
MODEL PREDICTIVE CONTROL FUNDAMENTALS
African Journals Online (AJOL)
2012-07-02
Jul 2, 2012 ... signal based on a process model, coping with constraints on inputs and ... paper, we will present an introduction to the theory and application of MPC with Matlab codes ... section 5 presents the simulation results and section 6.
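The idea in the snippet above, computing the control signal from a process model while coping with constraints on inputs, can be sketched without any toolbox. The example below is a brute-force receding-horizon controller for a scalar linear model with a box constraint on the input; practical MPC solves a quadratic program instead of enumerating a coarse input grid, and all model and tuning values here are illustrative assumptions.

```python
import itertools
import numpy as np

# Receding-horizon control of the scalar model x[k+1] = a*x[k] + b*u[k],
# minimizing sum(x^2 + r*u^2) over the horizon subject to |u| <= u_max.
a, b, r, u_max, H = 1.2, 1.0, 0.1, 0.5, 4      # open-loop unstable plant (a > 1)
candidates = np.linspace(-u_max, u_max, 9)     # coarse admissible input grid

def mpc_step(x0):
    """Enumerate all input sequences over the horizon and return the first
    move of the cheapest one (the receding-horizon principle)."""
    best_cost, best_u0 = np.inf, 0.0
    for seq in itertools.product(candidates, repeat=H):
        x, cost = x0, 0.0
        for u in seq:
            cost += x**2 + r * u**2
            x = a * x + b * u                  # roll the model forward
        cost += x**2                           # terminal state cost
        if cost < best_cost:
            best_cost, best_u0 = cost, seq[0]
    return best_u0

x = 2.0
for k in range(30):
    u = mpc_step(x)                            # re-plan at every time step
    x = a * x + b * u
print(round(abs(x), 3))  # small residual state despite the unstable plant
```

Only the first input of each optimized sequence is applied before re-planning, which is what distinguishes MPC from open-loop optimal control and lets it absorb disturbances and model error.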
Melanoma Risk Prediction Models
Developing statistical models that estimate the probability of developing melanoma cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Energy Technology Data Exchange (ETDEWEB)
Kuya, Keita; Shinohara, Yuki; Fujii, Shinya; Ogawa, Toshihide [Tottori University, Division of Radiology, Department of Pathophysiological Therapeutic Science, Faculty of Medicine, Yonago (Japan); Sakamoto, Makoto; Watanabe, Takashi [Tottori University, Division of Neurosurgery, Department of Brain and Neurosciences, Faculty of Medicine, Yonago (Japan); Iwata, Naoki; Kishimoto, Junichi [Tottori University, Division of Clinical Radiology Faculty of Medicine, Yonago (Japan); Kaminou, Toshio [Osaka Minami Medical Center, Department of Radiology, Osaka (Japan)
2014-11-15
Follow-up CT angiography (CTA) is routinely performed for post-procedure management after carotid artery stenting (CAS). However, the stent lumen tends to be underestimated because of stent artifacts on CTA reconstructed with the filtered back projection (FBP) technique. We assessed the utility of new iterative reconstruction techniques, such as adaptive statistical iterative reconstruction (ASIR) and model-based iterative reconstruction (MBIR), for CTA after CAS in comparison with FBP. In a phantom study, we evaluated the differences among the three reconstruction techniques with regard to the relationship between the stent luminal diameter and the degree of underestimation of stent luminal diameter. In a clinical study, 34 patients who underwent follow-up CTA after CAS were included. We compared the stent luminal diameters among FBP, ASIR, and MBIR, and performed visual assessment of low attenuation area (LAA) in the stent lumen using a three-point scale. In the phantom study, stent luminal diameter was increasingly underestimated as luminal diameter became smaller in all CTA images. Stent luminal diameter was larger with MBIR than with the other reconstruction techniques. Similarly, in the clinical study, stent luminal diameter was larger with MBIR than with the other reconstruction techniques. LAA detectability scores of MBIR were greater than or equal to those of FBP and ASIR in all cases. MBIR improved the accuracy of assessment of stent luminal diameter and LAA detectability in the stent lumen when compared with FBP and ASIR. We conclude that MBIR is a useful reconstruction technique for CTA after CAS. (orig.)
Modelling bankruptcy prediction models in Slovak companies
Directory of Open Access Journals (Sweden)
Kovacova Maria
2017-01-01
Intensive research by academics and practitioners has addressed models for bankruptcy prediction and credit risk management. In spite of the numerous studies on forecasting bankruptcy using traditional statistical techniques (e.g. discriminant analysis and logistic regression) and early artificial intelligence models (e.g. artificial neural networks), there is a trend toward machine learning models (support vector machines, bagging, boosting, and random forests) for predicting bankruptcy one year prior to the event. Comparing the performance of this unconventional approach with results obtained by discriminant analysis, logistic regression, and neural network applications, it has been found that bagging, boosting, and random forest models outperform the other techniques, and that prediction accuracy on the testing sample improves when additional variables are included. On the other hand, the prediction accuracy of old and well-known bankruptcy prediction models is quite high. Therefore, we aim to analyse these older models on a dataset of Slovak companies to validate their predictive ability under specific conditions. Furthermore, these models are modified in line with new trends by calculating the influence of eliminating selected variables on their overall predictive ability.
Directory of Open Access Journals (Sweden)
Junqiu Liu
2018-04-01
In order to mitigate the environmental and ecological impacts resulting from groundwater overexploitation, we developed a multiple-iterated dual control model, consisting of four modules, for groundwater exploitation and water level. First, a water resources allocation model integrating a calculation module for the allowable groundwater withdrawal was built to predict future groundwater recharge and discharge. Then, the results were input into a groundwater numerical model to simulate water levels. Groundwater exploitation was continuously optimized using the critical groundwater level as the feedback, and a multiple-iteration technique was applied to the feedback process. The proposed model was successfully applied to a typical region in Shenyang in northeast China. Results showed that the groundwater numerical model was verified in simulating water levels, with a mean absolute error of 0.44 m, an average relative error of 1.33%, and a root-mean-square error of 0.46 m. Groundwater exploitation was reduced from 290.33 million m3 to 116.76 million m3, and the average water level recovered from 34.27 m to 34.72 m in the planning year. Finally, we propose strategies for water resources management in which water levels should be kept above the critical groundwater level. The developed model provides a promising approach for water resources allocation and sustainable groundwater management, especially for regions with overexploited groundwater.
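The feedback loop described above, in which exploitation is repeatedly cut back until the simulated water level clears the critical level, can be sketched as follows. The linear `simulate` stand-in and all numbers are invented placeholders for the paper's groundwater numerical model:

```python
def optimize_exploitation(w0, critical_level, simulate, step=5.0, max_iter=100):
    """Iteratively cut groundwater withdrawal (million m3/yr) until the
    simulated water level recovers above the critical level."""
    w = w0
    for _ in range(max_iter):
        level = simulate(w)
        if level >= critical_level:
            return w, level
        w = max(w - step, 0.0)  # feedback: reduce withdrawal, re-simulate
    return w, simulate(w)

# Toy stand-in for the numerical model: level falls linearly with withdrawal.
simulate = lambda w: 36.0 - 0.005 * w

w, level = optimize_exploitation(w0=290.0, critical_level=34.72,
                                 simulate=simulate)
```

In the paper this inner `simulate` call is itself a full numerical groundwater model, so each feedback iteration is expensive; the loop structure, however, is the same.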
Nonconvex model predictive control for commercial refrigeration
Gybel Hovgaard, Tobias; Boyd, Stephen; Larsen, Lars F. S.; Bagterp Jørgensen, John
2013-08-01
We consider the control of a commercial multi-zone refrigeration system that consists of several cooling units sharing a common compressor and is used to cool multiple areas or rooms. In each time period we choose the cooling capacity of each unit and a common evaporation temperature. The goal is to minimise the total energy cost, using real-time electricity prices, while obeying temperature constraints on the zones. We propose a variation on model predictive control to achieve this goal. When the right variables are used, the dynamics of the system are linear and the constraints are convex. The cost function, however, is nonconvex due to the temperature dependence of thermodynamic efficiency. To handle this nonconvexity we propose a sequential convex optimisation method, which typically converges in five or so iterations. We employ a fast convex quadratic programming solver to carry out the iterations, which is more than fast enough to run in real time. We demonstrate our method on a realistic model, with a full-year simulation and 15-minute time periods, using historical electricity prices and weather data, as well as random variations in thermal load. These simulations show substantial cost savings, on the order of 30%, compared to a standard thermostat-based control system. Perhaps more importantly, we see that the method exhibits a sophisticated response to real-time variations in electricity prices. This demand response is critical to help balance real-time uncertainties in generation capacity associated with large penetration of intermittent renewable energy sources in a future smart grid.
Predictive models of moth development
Degree-day models link ambient temperature to insect life-stages, making such models valuable tools in integrated pest management. These models increase management efficacy by predicting pest phenology. In Wisconsin, the top insect pest of cranberry production is the cranberry fruitworm, Acrobasis v...
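A minimal degree-day accumulator, using the common min/max averaging method, might look like this (the 10 °C base temperature is an arbitrary illustration, not the cranberry fruitworm's actual developmental threshold):

```python
def degree_days(daily_min_max, base=10.0):
    """Accumulate growing degree-days with the simple averaging method:
    DD = max(0, (Tmin + Tmax)/2 - base), summed over days."""
    return sum(max(0.0, (tmin + tmax) / 2.0 - base)
               for tmin, tmax in daily_min_max)

# Three example days (deg C); only days whose mean exceeds the base count.
temps = [(8.0, 16.0), (5.0, 9.0), (12.0, 22.0)]
total = degree_days(temps, base=10.0)  # 2.0 + 0.0 + 7.0 = 9.0
```

Pest phenology models then map accumulated degree-days to life-stage transitions, which is what makes them useful for timing management interventions.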
Predictive Models and Computational Embryology
EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...
Predictive Modeling in Race Walking
Directory of Open Access Journals (Sweden)
Krzysztof Wiktorowicz
2015-01-01
This paper presents the use of linear and nonlinear multivariable models as tools to support the training process of race walkers. These models are calculated using data collected from race walkers' training events and are used to predict the result over a 3 km race based on training loads. The material consists of 122 training plans for 21 athletes. In order to choose the best model, the leave-one-out cross-validation method is used. The main contribution of the paper is to propose nonlinear modifications of linear models in order to achieve a smaller prediction error. It is shown that the best model is a modified LASSO regression with quadratic terms in the nonlinear part. This model has the smallest prediction error and a simplified structure, achieved by eliminating some of the predictors.
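Leave-one-out cross-validation, as used above for model selection, can be sketched generically: each observation is held out in turn, the model is fitted on the rest, and the squared error on the held-out point is averaged. The toy data and the two competing models below (a mean-only baseline and a least-squares line) are illustrative, not the paper's LASSO models:

```python
def loocv_error(xs, ys, fit, predict):
    """Leave-one-out cross-validation: hold out each sample, fit on the
    rest, and average the squared prediction error on the held-out point."""
    n = len(xs)
    err = 0.0
    for i in range(n):
        tx = [x for j, x in enumerate(xs) if j != i]
        ty = [y for j, y in enumerate(ys) if j != i]
        model = fit(tx, ty)
        err += (predict(model, xs[i]) - ys[i]) ** 2
    return err / n

# Competitor 1: a constant (mean) model.
fit_mean = lambda xs, ys: sum(ys) / len(ys)
pred_mean = lambda m, x: m

# Competitor 2: a least-squares line.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b  # intercept, slope

pred_line = lambda m, x: m[0] + m[1] * x

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.0, 9.8]   # roughly y = 2x
e_mean = loocv_error(xs, ys, fit_mean, pred_mean)
e_line = loocv_error(xs, ys, fit_line, pred_line)
# For near-linear data the line should have the smaller LOOCV error.
```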
International Nuclear Information System (INIS)
Banerjee, Santanu; Vasu, P; Von Hellermann, M; Jaspers, R J E
2010-01-01
Contamination of optical signals by reflections from the tokamak vessel wall is a matter of great concern. For machines such as ITER and future reactors, where the vessel wall will be predominantly metallic, this is potentially a risk factor for quantitative optical emission spectroscopy. This is, in particular, the case when bremsstrahlung continuum radiation from the bulk plasma is used as a common reference light source for the cross-calibration of visible spectroscopy. In this paper the reflected contribution to the continuum level in TEXTOR and ITER has been estimated for the detection channels meant for charge exchange recombination spectroscopy (CXRS). A model assuming diffuse reflection has been developed for the bremsstrahlung, which is a much extended source. Based on this model, it is shown that in the case of ITER upper port 3, a wall with a moderate reflectivity of 20% leads to the wall-reflected fraction being as high as 55-60% of the weak signals in the edge channels. In contrast, a complete bidirectional reflectance distribution function (BRDF) based model has been developed in order to estimate the reflections from more localized sources like the charge exchange (CX) emission from a neutral beam in tokamaks. The largest signal contamination of ∼15% is seen in the core CX channels, where the true CX signal level is much lower than that in the edge channels. Similar values are obtained for TEXTOR. These results indicate that the contributions from wall reflections may be large enough to significantly distort the overall spectral features of CX data, warranting an analysis at different wavelengths.
An Iterative Optimization Algorithm for Lens Distortion Correction Using Two-Parameter Models
Directory of Open Access Journals (Sweden)
Daniel Santana-Cedrés
2016-12-01
We present a method for the automatic estimation of two-parameter radial distortion models, considering polynomial as well as division models. The method first detects the longest distorted lines within the image by applying the Hough transform enriched with a radial distortion parameter. From these lines, the first distortion parameter is estimated; then we initialize the second distortion parameter to zero, and the two-parameter model is embedded into an iterative nonlinear optimization process to improve the estimation. This optimization aims at reducing the distance from the edge points to the lines, adjusting the two distortion parameters as well as the coordinates of the center of distortion. Furthermore, this allows detecting more points belonging to the distorted lines, so that the Hough transform is iteratively repeated to extract a better set of lines until no improvement is achieved. We present some experiments on real images with significant distortion to show the ability of the proposed approach to automatically correct this type of distortion, as well as a comparison between the polynomial and division models.
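A two-parameter polynomial radial model of the kind estimated above, together with a simple fixed-point iteration to invert it, can be sketched as follows (the distortion coefficients are arbitrary illustrative values, and this omits the Hough-based line detection and the full nonlinear optimization):

```python
def distort(x, y, k1, k2, cx=0.0, cy=0.0):
    """Two-parameter polynomial radial model:
    x_d = cx + (x - cx) * (1 + k1*r^2 + k2*r^4), and likewise for y."""
    dx, dy = x - cx, y - cy
    r2 = dx * dx + dy * dy
    f = 1.0 + k1 * r2 + k2 * r2 * r2
    return cx + dx * f, cy + dy * f

def undistort(xd, yd, k1, k2, cx=0.0, cy=0.0, iters=50):
    """Invert the model by fixed-point iteration on the undistorted point."""
    x, y = xd, yd  # initial guess: the distorted point itself
    for _ in range(iters):
        dx, dy = x - cx, y - cy
        r2 = dx * dx + dy * dy
        f = 1.0 + k1 * r2 + k2 * r2 * r2
        x = cx + (xd - cx) / f
        y = cy + (yd - cy) / f
    return x, y

# Round trip with mild distortion: undistort(distort(p)) should recover p.
k1, k2 = 1e-2, 1e-4
xd, yd = distort(0.8, -0.6, k1, k2)
x, y = undistort(xd, yd, k1, k2)
```

For mild distortion the iteration is a contraction and converges quickly; strong distortion may require damping or a Newton step instead.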
Upgrade of DC power supply system in ITER CS model coil test facility
International Nuclear Information System (INIS)
Shimono, Mitsugu; Uno, Yasuhiro; Yamazaki, Keita; Kawano, Katsumi; Isono, Takaaki
2014-03-01
The objective of the ITER CS Model Coil Test Facility is to evaluate large-scale superconducting conductors for fusion using the Central Solenoid (CS) Model Coil, which can generate a 13 T magnetic field in its inner bore with a 1.5 m diameter. The facility is composed of a helium refrigerator/liquefier system, a DC power supply system, a vacuum system and a data acquisition system. The DC power supply system supplies currents to two superconducting coils, the CS Model Coil and an insert coil. A 50 kA DC power supply is installed for the CS Model Coil and two 30 kA DC power supplies are installed for the insert coil. In order to evaluate the superconducting performance of the conductor used for the ITER Toroidal Field (TF) coils, whose operating current is 68 kA, the line for the insert coil was upgraded: a 10 kA DC power supply was added, the DC circuit breakers were upgraded, and the bus bars and current-measuring instrument were replaced. In line with the upgrade, the operation manual was revised. (author)
Mohd Fo'ad Rohani; Mohd Aizaini Maarof; Ali Selamat; Houssain Kettani
2010-01-01
This paper proposes a Multi-Level Sampling (MLS) approach for continuous Loss of Self-Similarity (LoSS) detection using an iterative window. The method defines LoSS based on the Second-Order Self-Similarity (SOSS) statistical model. The Optimization Method (OM) is used to estimate the self-similarity parameter, since it is fast and more accurate in comparison with other estimation methods known in the literature. A probability of LoSS detection is introduced to measure continuous LoSS detection performance...
Modeling of ITER edge plasma in the presence of resonant magnetic perturbations
Energy Technology Data Exchange (ETDEWEB)
Rozhansky, V.; Kaveeva, E.; Veselova, I.; Voskoboynikov, S. [Peter the Great St. Petersburg Polytechnic University, St. Petersburg (Russian Federation); Coster, D. [Max-Planck Institut fur Plasmaphysik, EURATOM Association, Garching (Germany)
2016-08-15
The modeling of the ITER edge is performed with the code B2SOLPS5.2 in the presence of the electron conductivity caused by RMPs, as well as for a reference case with the same input parameters but without RMPs. Without RMPs, a radial electric field close to the neoclassical one is obtained. Even a modest level of RMPs changes the direction of the electric field and causes toroidal spin-up of the edge plasma. At the same time, the pump-out effect is small. (copyright 2016 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)
Iterative optimisation of Monte Carlo detector models using measurements and simulations
Energy Technology Data Exchange (ETDEWEB)
Marzocchi, O., E-mail: olaf@marzocchi.net [European Patent Office, Rijswijk (Netherlands); Leone, D., E-mail: debora.leone@kit.edu [Institute for Nuclear Waste Disposal, Karlsruhe Institute of Technology, Karlsruhe (Germany)
2015-04-11
This work proposes a new technique to optimise Monte Carlo models of radiation detectors, offering the advantage of significantly lower user effort, and therefore improved work efficiency, compared to prior techniques. The method consists of four steps, two of which are iterative and suitable for automation using scripting languages: acquisition in the laboratory of measurement data to be used as reference; modification of a previously available detector model; simulation of a tentative model of the detector to obtain the coefficients of a set of linear equations; and solution of the system of equations and update of the detector model. Steps three and four can be repeated for more accurate results. This method avoids the “try and fail” approach typical of the prior techniques.
The construction of geological model using an iterative approach (Step 1 and Step 2)
International Nuclear Information System (INIS)
Matsuoka, Toshiyuki; Kumazaki, Naoki; Saegusa, Hiromitsu; Sasaki, Keiichi; Endo, Yoshinobu; Amano, Kenji
2005-03-01
One of the main goals of the Mizunami Underground Research Laboratory (MIU) Project is to establish appropriate methodologies for reliably investigating and assessing the deep subsurface. This report documents the results of geological modeling in Step 1 and Step 2 using the iterative investigation approach at the site scale (several 100 m to several km in area). For the Step 1 model, existing information (e.g. literature) and results from geological mapping and a reflection seismic survey were used. For the Step 2 model, additional information obtained from the geological investigation using an existing borehole and the shallow borehole investigation was incorporated. As a result of this study, the geological elements that should be represented in the model were defined, and several major faults with NNW, EW and NE trends were identified (or inferred) in the vicinity of the MIU site. (author)
International Nuclear Information System (INIS)
2001-10-01
This ITER CTA newsletter contains results of the ITER toroidal field model coil project presented by the ITER EU Home Team (Garching), and an article in commemoration of the late Dr. Charles Maisonnier, one of the former leaders of ITER, who made significant contributions to its development.
von Cramon-Taubadel, Noreen; Lycett, Stephen J
2008-05-01
Recent studies comparing craniometric and neutral genetic affinity matrices have concluded that, on average, human cranial variation fits a model of neutral expectation. While human craniometric and genetic data fit a model of isolation by geographic distance, it is not yet clear whether this is due to geographically mediated gene flow or human dispersal events. Recently, human genetic data have been shown to fit an iterative founder effect model of dispersal with an African origin, in line with the out-of-Africa replacement model for modern human origins, and Manica et al. (Nature 448 (2007) 346-349) have demonstrated that human craniometric data also fit this model. However, in contrast with the neutral model of cranial evolution suggested by previous studies, Manica et al. (2007) made the a priori assumption that cranial form has been subject to climatically driven natural selection and therefore correct for climate prior to conducting their analyses. Here we employ a modified theoretical and methodological approach to test whether human cranial variability fits the iterative founder effect model. In contrast with Manica et al. (2007) we employ size-adjusted craniometric variables, since climatic factors such as temperature have been shown to correlate with aspects of cranial size. Despite these differences, we obtain similar results to those of Manica et al. (2007), with up to 26% of global within-population craniometric variation being explained by geographic distance from sub-Saharan Africa. Comparative analyses using non-African origins do not yield significant results. The implications of these results are discussed in the light of the modern human origins debate. (c) 2007 Wiley-Liss, Inc.
Model-based iterative learning control of Parkinsonian state in thalamic relay neuron
Liu, Chen; Wang, Jiang; Li, Huiyan; Xue, Zhiqin; Deng, Bin; Wei, Xile
2014-09-01
Although the beneficial effects of chronic deep brain stimulation on Parkinson's disease motor symptoms are now largely confirmed, the underlying mechanisms of deep brain stimulation remain unclear and under debate; hence, the selection of stimulation parameters is full of challenges. Additionally, due to the complexity of the neural system, together with omnipresent noise, an accurate model of the thalamic relay neuron is unknown. Thus, iterative learning control of the thalamic relay neuron's Parkinsonian state based on various variables is presented. Combining iterative learning control with a typical proportional-integral control algorithm, a novel and efficient control strategy is proposed, which does not require any particular knowledge of the detailed physiological characteristics of the cortico-basal ganglia-thalamocortical loop and can automatically adjust the stimulation parameters. Simulation results demonstrate the feasibility of the proposed control strategy to restore the fidelity of thalamic relay in the Parkinsonian condition. Furthermore, by changing an important parameter, the maximum ionic conductance density of the low-threshold calcium current, it is further verified that the proposed method is independent of an accurate model.
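The core update of a P-type iterative learning controller, u_{k+1}[t] = u_k[t] + L * e_k[t], can be sketched on a deliberately trivial plant. The static-gain plant below is an invented placeholder for the thalamic relay neuron model, and the PI part of the paper's combined strategy is omitted:

```python
def ilc(reference, plant, trials=30, gain=0.5):
    """P-type iterative learning control: repeat a finite-horizon task and
    update the input profile with the previous trial's tracking error,
    u_{k+1}[t] = u_k[t] + gain * e_k[t]."""
    n = len(reference)
    u = [0.0] * n
    for _ in range(trials):
        y = plant(u)                                   # run one trial
        e = [r - yi for r, yi in zip(reference, y)]    # trial error
        u = [ui + gain * ei for ui, ei in zip(u, e)]   # learning update
    return u, max(abs(ei) for ei in e)

# Placeholder plant: a memoryless gain of 1.5 at every time step.
reference = [1.0, 2.0, 0.5]
plant = lambda u: [1.5 * ui for ui in u]
u, err = ilc(reference, plant)
```

The appeal of ILC here is exactly what the abstract stresses: the update needs only the measured error from the previous trial, not an accurate plant model, provided the learning gain keeps the trial-to-trial error dynamics contractive.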
A Block Iterative Finite Element Model for Nonlinear Leaky Aquifer Systems
Gambolati, Giuseppe; Teatini, Pietro
1996-01-01
A new quasi three-dimensional finite element model of groundwater flow is developed for highly compressible multiaquifer systems where aquitard permeability and elastic storage are dependent on hydraulic drawdown. The model is solved by a block iterative strategy, which is naturally suggested by the geological structure of the porous medium and can be shown to be mathematically equivalent to a block Gauss-Seidel procedure. As such it can be generalized into a block overrelaxation procedure and greatly accelerated by the use of the optimum overrelaxation factor. Results for both linear and nonlinear multiaquifer systems emphasize the excellent computational performance of the model and indicate that convergence in leaky systems can be improved up to as much as one order of magnitude.
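The (block) Gauss-Seidel iteration with overrelaxation that underlies the model's solution strategy can be illustrated in the pointwise case. The small diagonally dominant system below is an arbitrary example, not the aquifer equations:

```python
def sor(A, b, omega=1.0, iters=200):
    """Successive over-relaxation; omega = 1 recovers plain Gauss-Seidel."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            # Gauss-Seidel update using the freshest available values
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_gs = (b[i] - s) / A[i][i]
            # over-relaxed step toward the Gauss-Seidel value
            x[i] = x[i] + omega * (x_gs - x[i])
    return x

# Diagonally dominant toy system; the exact solution is x = [1, 1, 1].
A = [[4.0, -1.0, 0.0],
     [-1.0, 4.0, -1.0],
     [0.0, -1.0, 4.0]]
b = [3.0, 2.0, 3.0]
x = sor(A, b, omega=1.2)
```

In the block variant used by the paper, each "unknown" is an entire aquifer or aquitard subsystem rather than a scalar, but the sweep-and-overrelax structure is the same, and a well-chosen omega can accelerate convergence substantially.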
A proximity algorithm accelerated by Gauss-Seidel iterations for L1/TV denoising models
Li, Qia; Micchelli, Charles A.; Shen, Lixin; Xu, Yuesheng
2012-09-01
Our goal in this paper is to improve the computational performance of the proximity algorithms for the L1/TV denoising model. This leads us to a new characterization of all solutions to the L1/TV model via fixed-point equations expressed in terms of the proximity operators. Based upon this observation we develop an algorithm for solving the model and establish its convergence. Furthermore, we demonstrate that the proposed algorithm can be accelerated through the use of the componentwise Gauss-Seidel iteration so that the CPU time consumed is significantly reduced. Numerical experiments using the proposed algorithm for impulsive noise removal are included, with a comparison to three recently developed algorithms. The numerical results show that while the proposed algorithm enjoys a high quality of the restored images, as the other three known algorithms do, it performs significantly better in terms of computational efficiency measured in the CPU time consumed.
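The proximity operator central to these algorithms has a closed form for the L1 term: componentwise soft-thresholding. A minimal sketch follows (the full L1/TV algorithm with its Gauss-Seidel acceleration is not reproduced here):

```python
def soft_threshold(v, lam):
    """Proximity operator of lam*||.||_1, applied componentwise:
    prox(t) = sign(t) * max(|t| - lam, 0)."""
    out = []
    for t in v:
        if t > lam:
            out.append(t - lam)
        elif t < -lam:
            out.append(t + lam)
        else:
            out.append(0.0)   # small components are shrunk to exactly zero
    return out

x = soft_threshold([3.0, -0.5, 1.5, -2.0], 1.0)  # [2.0, 0.0, 0.5, -1.0]
```

Fixed-point iterations built from such proximity operators are what the paper accelerates with componentwise Gauss-Seidel sweeps.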
Using a web-based, iterative education model to enhance clinical clerkships.
Alexander, Erik K; Bloom, Nurit; Falchuk, Kenneth H; Parker, Michael
2006-10-01
Although most clinical clerkship curricula are designed to provide all students consistent exposure to defined course objectives, it is clear that individual students are diverse in their backgrounds and baseline knowledge. Ideally, the learning process should be individualized to the strengths and weaknesses of each student, but, until recently, this has proved prohibitively time-consuming. The authors describe a program to develop and evaluate an iterative, Web-based educational model assessing medical students' knowledge deficits and allowing targeted teaching shortly after their identification. Beginning in 2002, a new educational model was created, validated, and applied in a prospective fashion to medical students during an internal medicine clerkship at Harvard Medical School. Using a Web-based platform, five validated questions were delivered weekly and a specific knowledge deficiency identified. Teaching targeted to the deficiency was provided to an intervention cohort of five to seven students in each clerkship, though not to controls (the remaining 7-10 students). The effectiveness of this model was assessed by performance on the following week's posttest question. Specific deficiencies were readily identified weekly using this model. Throughout the year, however, deficiencies varied unpredictably. Teaching targeted to deficiencies resulted in significantly better performance on follow-up questioning compared to the performance of those who did not receive this intervention. This model was easily applied in an additive fashion to the current curriculum, and student acceptance was high. The authors conclude that a Web-based, iterative assessment model can effectively target specific curricular needs unique to each group; focus teaching in a rapid, formative, and highly efficient manner; and may improve the efficiency of traditional clerkship teaching.
International Nuclear Information System (INIS)
Scheffel, Hans; Stolzmann, Paul; Schlett, Christopher L.; Engel, Leif-Christopher; Major, Gyöngi Petra; Károlyi, Mihály; Do, Synho; Maurovich-Horvat, Pál; Hoffmann, Udo
2012-01-01
Objectives: To compare image quality of coronary artery plaque visualization at CT angiography with images reconstructed with filtered back projection (FBP), adaptive statistical iterative reconstruction (ASIR), and model based iterative reconstruction (MBIR) techniques. Methods: The coronary arteries of three ex vivo human hearts were imaged by CT and reconstructed with FBP, ASIR and MBIR. Coronary cross-sectional images were co-registered between the different reconstruction techniques and assessed for qualitative and quantitative image quality parameters. Readers were blinded to the reconstruction algorithm. Results: A total of 375 triplets of coronary cross-sectional images were co-registered. Using MBIR, 26% of the images were rated as having excellent overall image quality, which was significantly better as compared to ASIR and FBP (4% and 13%, respectively, all p < 0.001). Qualitative assessment of image noise demonstrated a noise reduction by using ASIR as compared to FBP (p < 0.01) and further noise reduction by using MBIR (p < 0.001). The contrast-to-noise-ratio (CNR) using MBIR was better as compared to ASIR and FBP (44 ± 19, 29 ± 15, 26 ± 9, respectively; all p < 0.001). Conclusions: Using MBIR improved image quality, reduced image noise and increased CNR as compared to the other available reconstruction techniques. This may further improve the visualization of coronary artery plaque and allow radiation reduction.
Energy Technology Data Exchange (ETDEWEB)
Lee, Eun Chae; Kim, Yeo Koon; Chun, Eun Ju; Choi, Sang IL [Dept. of of Radiology, Seoul National University Bundang Hospital, Seongnam (Korea, Republic of)
2016-05-15
To assess the performance of the model-based iterative reconstruction (MBIR) technique for evaluation of coronary artery stents on coronary CT angiography (CCTA), twenty-two patients with coronary stent implantation who underwent CCTA were retrospectively enrolled for comparison of image quality between filtered back projection (FBP), adaptive statistical iterative reconstruction (ASIR) and MBIR. In each data set, image noise was measured as the standard deviation of the measured attenuation units within circular regions of interest in the ascending aorta (AA) and left main coronary artery (LM). To objectively assess the noise and blooming artifacts in the coronary stents, we additionally measured the standard deviation of the measured attenuation and the intraluminal stent diameters of 35 stents in total with dedicated software. All image noise values measured in the AA (all p < 0.001), LM (p < 0.001, p = 0.001) and coronary stent (all p < 0.001) were significantly lower with MBIR than with FBP or ASIR. The intraluminal stent diameter was significantly larger with MBIR than with ASIR or FBP (p < 0.001, p = 0.001). MBIR can reduce image noise and blooming artifacts from the stent, leading to better in-stent assessment in patients with coronary artery stents.
Energy Technology Data Exchange (ETDEWEB)
Ortuno, J E; Kontaxakis, G; Rubio, J L; Santos, A [Departamento de Ingenieria Electronica (DIE), Universidad Politecnica de Madrid, Ciudad Universitaria s/n, 28040 Madrid (Spain)]; Guerra, P [Networking Research Center on Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN), Madrid (Spain)], E-mail: juanen@die.upm.es
2010-04-07
A fully 3D iterative image reconstruction algorithm has been developed for high-resolution PET cameras composed of pixelated scintillator crystal arrays and rotating planar detectors, based on the ordered subsets approach. The associated system matrix is precalculated with Monte Carlo methods that incorporate physical effects not included in analytical models, such as positron range effects and interaction of the incident gammas with the scintillator material. Custom Monte Carlo methodologies have been developed and optimized for modelling of system matrices for fast iterative image reconstruction adapted to specific scanner geometries, without redundant calculations. According to the methodology proposed here, only one-eighth of the voxels within two central transaxial slices need to be modelled in detail. The rest of the system matrix elements can be obtained with the aid of axial symmetries and redundancies, as well as in-plane symmetries within transaxial slices. Sparse matrix techniques for the non-zero system matrix elements are employed, allowing for fast execution of the image reconstruction process. This 3D image reconstruction scheme has been compared in terms of image quality to a 2D fast implementation of the OSEM algorithm combined with Fourier rebinning approaches. This work confirms the superiority of fully 3D OSEM in terms of spatial resolution, contrast recovery and noise reduction as compared to conventional 2D approaches based on rebinning schemes. At the same time it demonstrates that fully 3D methodologies can be efficiently applied to the image reconstruction problem for high-resolution rotational PET cameras by applying accurate pre-calculated system models and taking advantage of the system's symmetries.
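The ordered-subsets EM (OSEM) update at the core of the reconstruction described above can be sketched in a few lines. This is a toy version: the system matrix here is random and sparse rather than the Monte Carlo-derived, symmetry-compressed matrix of the paper, and the function name and subset scheme are my own.

```python
import numpy as np
from scipy.sparse import csr_matrix

def osem(A, y, n_subsets=4, n_iter=5):
    """Ordered-subsets EM for the Poisson model y ~ A @ x.
    A: sparse system matrix (n_bins x n_voxels); y: measured counts.
    Each sub-iteration uses only one subset of projection bins, which is
    what gives OSEM its speed-up over plain ML-EM."""
    n_bins, n_vox = A.shape
    x = np.ones(n_vox)                                  # positive start image
    subsets = [np.arange(s, n_bins, n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for idx in subsets:
            As = A[idx]                                 # rows of this subset
            sens = np.asarray(As.sum(axis=0)).ravel()   # sensitivity image
            proj = As @ x                               # forward projection
            ratio = np.divide(y[idx], proj,
                              out=np.zeros_like(proj), where=proj > 0)
            # multiplicative EM update restricted to this subset
            x *= np.divide(As.T @ ratio, sens,
                           out=np.zeros_like(x), where=sens > 0)
    return x
```

In the paper the expensive part is not this loop but building `A` accurately; precomputing it once (exploiting the scanner's axial and in-plane symmetries) and storing it sparsely is what makes the fully 3D iteration tractable.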
Nonconvex Model Predictive Control for Commercial Refrigeration
DEFF Research Database (Denmark)
Hovgaard, Tobias Gybel; Larsen, Lars F.S.; Jørgensen, John Bagterp
2013-01-01
function, however, is nonconvex due to the temperature dependence of thermodynamic efficiency. To handle this nonconvexity we propose a sequential convex optimization method, which typically converges in fewer than 5 or so iterations. We employ a fast convex quadratic programming solver to carry out...... the iterations, which is more than fast enough to run in real-time. We demonstrate our method on a realistic model, with a full year simulation and 15 minute time periods, using historical electricity prices and weather data, as well as random variations in thermal load. These simulations show substantial cost...... capacity associated with large penetration of intermittent renewable energy sources in a future smart grid....
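The sequential convex idea can be illustrated on a toy scalar problem (an invented example, not the refrigeration cost function): split a nonconvex objective into convex plus concave parts and linearize the concave part at each iterate, so that every subproblem is convex with a closed-form minimizer.

```python
def sequential_convex_min(x0, n_iter=50):
    """Convex-concave procedure on f(x) = x**2 - log(1 + x**2).
    The concave term h(x) = -log(1 + x**2) is replaced by its tangent at
    the current iterate, leaving the convex quadratic subproblem
    minimize_z  z**2 + slope * z, whose minimizer is z = -slope / 2."""
    x = x0
    for _ in range(n_iter):
        slope = -2.0 * x / (1.0 + x * x)   # h'(x_k)
        x = -slope / 2.0                   # exact solution of the subproblem
    return x
```

As in the abstract, each outer step solves only a convex problem; for this f the global minimum is at x = 0 and the iterates converge to it from any start.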
International Nuclear Information System (INIS)
Chen Jian-Lin; Li Lei; Wang Lin-Yuan; Cai Ai-Long; Xi Xiao-Qi; Zhang Han-Ming; Li Jian-Xin; Yan Bin
2015-01-01
The projection matrix model is used to describe the physical relationship between the reconstructed object and its projections. Such a model has a strong influence on projection and backprojection, two vital operations in iterative computed tomographic reconstruction. The distance-driven model (DDM) is a state-of-the-art technique for simulating forward and back projections. This model has a low computational complexity and a relatively high spatial resolution; however, few parallel implementations of it use a matched projector/backprojector pair. This study introduces a fast and parallelizable algorithm that improves the traditional DDM for computing parallel projection and backprojection operations. Our proposed model has been implemented on a GPU (graphics processing unit) platform and achieves satisfactory computational efficiency with no approximation. The runtimes for the projection and backprojection operations with our model are approximately 4.5 s and 10.5 s per loop, respectively, for an image size of 256×256×256 and 360 projections of size 512×512. We compare against several general algorithms that have been proposed to maximize GPU efficiency by using unmatched projection/backprojection models in parallel computation. Imaging resolution is not sacrificed and remains accurate during computed tomographic reconstruction. (paper)
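The matched/unmatched distinction above can be checked numerically with the adjoint identity ⟨Ax, y⟩ = ⟨x, Aᵀy⟩. A toy sketch with an explicit system matrix (the footprint model here is invented and far cruder than the distance-driven kernel):

```python
import numpy as np

def toy_system_matrix(n_pix, n_rays, seed=0):
    """Each ray averages a random contiguous span of pixels -- a crude
    1-D stand-in for a distance-driven footprint."""
    rng = np.random.default_rng(seed)
    A = np.zeros((n_rays, n_pix))
    for r in range(n_rays):
        a, b = sorted(rng.integers(0, n_pix, size=2))
        A[r, a:b + 1] = 1.0 / (b - a + 1)
    return A

# A matched pair uses A for projection and exactly A.T for backprojection,
# so <A x, y> == <x, A.T y> holds to machine precision. Unmatched schemes
# (e.g. a smoothed backprojector) break this identity, which is one reason
# matched models matter for iterative reconstruction convergence.
```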
International Nuclear Information System (INIS)
Kim, Jin Hyeok; Choo, Ki Seok; Moon, Tae Yong; Lee, Jun Woo; Jeon, Ung Bae; Kim, Tae Un; Hwang, Jae Yeon; Yun, Myeong-Ja; Jeong, Dong Wook; Lim, Soo Jin
2016-01-01
To evaluate the subjective and objective quality of computed tomography (CT) venography images at 80 kVp using model-based iterative reconstruction (MBIR) and to compare these with filtered back projection (FBP) and adaptive statistical iterative reconstruction (ASIR) using the same CT data sets. Forty-four patients (mean age: 56.1 ± 18.1 years) who underwent 80 kVp CT venography (CTV) for the evaluation of deep vein thrombosis (DVT) over a 4-month period were enrolled in this retrospective study. The same raw data were reconstructed using FBP, ASIR, and MBIR. Objective and subjective image analyses were performed at the inferior vena cava (IVC), femoral vein, and popliteal vein. The mean CNR of MBIR was significantly greater than those of FBP and ASIR, and images reconstructed using MBIR had significantly lower objective image noise (p < .001). Subjective image quality and confidence in detecting DVT were significantly greater with MBIR than with FBP and ASIR (p < .005), and MBIR had the lowest score for subjective image noise (p < .001). CTV at 80 kVp with MBIR was superior to FBP and ASIR regarding subjective and objective image quality. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Harder, Annemarie M. den, E-mail: a.m.denharder@umcutrecht.nl [Department of Radiology, University Medical Center Utrecht, Utrecht (Netherlands); Wolterink, Jelmer M. [Image Sciences Institute, University Medical Center Utrecht, Utrecht (Netherlands); Willemink, Martin J.; Schilham, Arnold M.R.; Jong, Pim A. de [Department of Radiology, University Medical Center Utrecht, Utrecht (Netherlands); Budde, Ricardo P.J. [Department of Radiology, Erasmus Medical Center, Rotterdam (Netherlands); Nathoe, Hendrik M. [Department of Cardiology, University Medical Center Utrecht, Utrecht (Netherlands); Išgum, Ivana [Image Sciences Institute, University Medical Center Utrecht, Utrecht (Netherlands); Leiner, Tim [Department of Radiology, University Medical Center Utrecht, Utrecht (Netherlands)
2016-11-15
Highlights: • Iterative reconstruction (IR) allows for low-dose coronary calcium scoring (CCS). • Radiation dose can be safely reduced to 0.4 mSv with hybrid and model-based IR. • FBP is not feasible at these dose levels due to excessive noise. - Abstract: Purpose: To determine the effect of model-based iterative reconstruction (IR) on coronary calcium quantification using different submillisievert CT acquisition protocols. Methods: Twenty-eight patients received a clinically indicated non-contrast-enhanced cardiac CT. After the routine dose acquisition, low-dose acquisitions were performed with 60%, 40% and 20% of the routine dose mAs. Images were reconstructed with filtered back projection (FBP), hybrid IR (HIR) and model-based IR (MIR), and Agatston scores, calcium volumes and calcium mass scores were determined. Results: Effective dose was 0.9, 0.5, 0.4 and 0.2 mSv, respectively. At 0.5 and 0.4 mSv, differences in Agatston scores with both HIR and MIR compared to FBP at routine dose were small (−0.1 to −2.9%), while at 0.2 mSv, differences in Agatston scores of −12.6 to −14.6% occurred. Reclassification of risk category at reduced dose levels was more frequent with MIR (21–25%) than with HIR (18%). Conclusions: Radiation dose for coronary calcium scoring can be safely reduced to 0.4 mSv using both HIR and MIR, while FBP is not feasible at these dose levels due to excessive noise. Further dose reduction can lead to an underestimation in Agatston score and subsequent reclassification to lower risk categories. Mass scores were unaffected by dose reductions.
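The Agatston score used throughout the study above weights each calcified lesion's area by a factor derived from its peak attenuation. A simplified sketch (it treats all supra-threshold pixels in a slice as one lesion, unlike clinical software that segments lesions individually; function names and the minimum-area value are assumptions):

```python
import numpy as np

def agatston_weight(peak_hu):
    """Standard Agatston density weighting by peak attenuation (HU)."""
    if peak_hu < 130:
        return 0
    if peak_hu < 200:
        return 1
    if peak_hu < 300:
        return 2
    if peak_hu < 400:
        return 3
    return 4

def agatston_score(slices, pixel_area_mm2, min_area_mm2=1.0):
    """Sum over slices of (lesion area x density weight) for pixels
    at or above the 130 HU calcium threshold."""
    score = 0.0
    for sl in slices:
        mask = sl >= 130
        area = mask.sum() * pixel_area_mm2
        if area >= min_area_mm2:
            score += area * agatston_weight(sl[mask].max())
    return score
```

The step-function weighting explains the reclassification effect noted in the conclusions: extra noise at very low dose can shift a lesion's apparent peak HU or area across a threshold and move the total score into a different risk category.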
Interpretation of ensembles created by multiple iterative rebuilding of macromolecular models
International Nuclear Information System (INIS)
Terwilliger, Thomas C.; Grosse-Kunstleve, Ralf W.; Afonine, Pavel V.; Adams, Paul D.; Moriarty, Nigel W.; Zwart, Peter; Read, Randy J.; Turk, Dusan; Hung, Li-Wei
2007-01-01
Heterogeneity in ensembles generated by independent model rebuilding principally reflects the limitations of the data and of the model-building process rather than the diversity of structures in the crystal. Automation of iterative model building, density modification and refinement in macromolecular crystallography has made it feasible to carry out this entire process multiple times. By using different random seeds in the process, a number of different models compatible with experimental data can be created. Sets of models were generated in this way using real data for ten protein structures from the Protein Data Bank and using synthetic data generated at various resolutions. Most of the heterogeneity among models produced in this way is in the side chains and loops on the protein surface. Possible interpretations of the variation among models created by repetitive rebuilding were investigated. Synthetic data were created in which a crystal structure was modelled as the average of a set of ‘perfect’ structures and the range of models obtained by rebuilding a single starting model was examined. The standard deviations of coordinates in models obtained by repetitive rebuilding at high resolution are small, while those obtained for the same synthetic crystal structure at low resolution are large, so that the diversity within a group of models cannot generally be a quantitative reflection of the actual structures in a crystal. Instead, the group of structures obtained by repetitive rebuilding reflects the precision of the models, and the standard deviation of coordinates of these structures is a lower bound estimate of the uncertainty in coordinates of the individual models.
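The per-atom coordinate spread that the abstract interprets as a precision estimate can be computed directly from a superposed ensemble; a minimal sketch (the array layout and function name are assumptions):

```python
import numpy as np

def ensemble_coordinate_spread(models):
    """models: array (n_models, n_atoms, 3) of superposed coordinates.
    Returns, per atom, the RMS deviation from the ensemble mean -- the
    quantity read as a lower bound on coordinate uncertainty."""
    mean = models.mean(axis=0)                           # (n_atoms, 3)
    dev = models - mean                                  # per-model deviations
    return np.sqrt((dev ** 2).sum(axis=2).mean(axis=0))  # (n_atoms,)
```

Atoms rebuilt identically in every run (well-ordered core) give a spread near zero; flexible surface side chains and loops, which the abstract identifies as the main source of heterogeneity, give large values.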
International Nuclear Information System (INIS)
Terwilliger, Thomas C.; Grosse-Kunstleve, Ralf W.; Afonine, Pavel V.; Moriarty, Nigel W.; Zwart, Peter H.; Hung, Li-Wei; Read, Randy J.; Adams, Paul D.
2008-01-01
The highly automated PHENIX AutoBuild wizard is described. The procedure can be applied equally well to phases derived from isomorphous/anomalous and molecular-replacement methods. The PHENIX AutoBuild wizard is a highly automated tool for iterative model building, structure refinement and density modification using RESOLVE model building, RESOLVE statistical density modification and phenix.refine structure refinement. Recent advances in the AutoBuild wizard and phenix.refine include automated detection and application of NCS from models as they are built, extensive model-completion algorithms and automated solvent-molecule picking. Model-completion algorithms in the AutoBuild wizard include loop building, crossovers between chains in different models of a structure and side-chain optimization. The AutoBuild wizard has been applied to a set of 48 structures at resolutions ranging from 1.1 to 3.2 Å, resulting in a mean R factor of 0.24 and a mean free R factor of 0.29. The R factor of the final model is dependent on the quality of the starting electron density and is relatively independent of resolution.
Energy Technology Data Exchange (ETDEWEB)
Terwilliger, T.C.; Grosse-Kunstleve, Ralf Wilhelm; Afonine, P.V.; Moriarty, N.W.; Zwart, P.H.; Hung, L.-W.; Read, R.J.; Adams, P.D. [Los Alamos National Laboratory, Mailstop M888, Los Alamos, NM 87545, USA; Lawrence Berkeley National Laboratory, One Cyclotron Road, Building 64R0121, Berkeley, CA 94720, USA; Department of Haematology, University of Cambridge, Cambridge CB2 0XY, England]
2007-04-29
The PHENIX AutoBuild Wizard is a highly automated tool for iterative model-building, structure refinement and density modification using RESOLVE or TEXTAL model-building, RESOLVE statistical density modification, and phenix.refine structure refinement. Recent advances in the AutoBuild Wizard and phenix.refine include automated detection and application of NCS from models as they are built, extensive model completion algorithms, and automated solvent molecule picking. Model completion algorithms in the AutoBuild Wizard include loop-building, crossovers between chains in different models of a structure, and side-chain optimization. The AutoBuild Wizard has been applied to a set of 48 structures at resolutions ranging from 1.1 Å to 3.2 Å, resulting in a mean R-factor of 0.24 and a mean free R-factor of 0.29. The R-factor of the final model is dependent on the quality of the starting electron density, and relatively independent of resolution.
International Nuclear Information System (INIS)
Gillia, O.; Bucci, Ph.; Vidotto, F.; Leibold, J.-M.; Boireau, B.; Boudot, C.; Cottin, A.; Lorenzetto, P.; Jacquinot, F.
2006-01-01
In components of blanket modules for ITER, intricate cooling networks are needed in order to evacuate the heat coming from the plasma. Hot Isostatic Pressing (HIPing) is a very convenient method to produce near-net-shape components with complex cooling networks through massive stainless steel parts, by bonding together tubes inserted in grooves machined in bulk stainless steel. Powder is often included in the process so as to alleviate difficulties arising with gap closure between tube and solid part or between several solid parts. At the same time, it relaxes the machining precision needed on the parts to be assembled before HIP. However, inserting powder in the assembly means densification, i.e. volume change of the powder during the HIP cycle. This leads to global and local shape changes of HIPed parts. In order to control the deformations, modelling and computer simulation are used. This modelling and computer simulation work has been done in support of the fabrication of a shield prototype for the ITER blanket. Problems such as global bending of the whole part and deformation of tubes in their powder bed are addressed. It is important that the part does not bend too much. It is also important to have a circular tube shape after HIP, firstly in order to avoid tube rupture during HIP, but also because non-destructive ultrasonic examination is needed to check the quality of the densification and of the bonding between tube and powder or solid parts; the insertion of a probe in the tubes requires a minimal circular tube shape. For simulation purposes, the behaviour of the different materials has to be modelled. Although the modelling of the massive stainless steel behaviour is not neglected, the most critical modelling is that of the powder. For this study, a thorough investigation of the powder behaviour has been performed with in-situ HIP dilatometry experiments and interrupted HIP cycles on trial parts. These experiments have allowed the identification of a
Barriers and strategies to an iterative model of advance care planning communication.
Ahluwalia, Sangeeta C; Bekelman, David B; Huynh, Alexis K; Prendergast, Thomas J; Shreve, Scott; Lorenz, Karl A
2015-12-01
Early and repeated patient-provider conversations about advance care planning (ACP) are now widely recommended. We sought to characterize barriers and strategies for realizing an iterative model of ACP patient-provider communication. A total of 2 multidisciplinary focus groups and 3 semistructured interviews with 20 providers at a large Veterans Affairs medical center. Thematic analysis was employed to identify salient themes. Barriers included variation among providers in approaches to ACP, lack of useful information about patient values to guide decision making, and ineffective communication between providers across settings. Strategies included eliciting patient values rather than specific treatment choices and an increased role for primary care in the ACP process. Greater attention to connecting providers across the continuum, maximizing the potential of the electronic health record, and linking patient experiences to their values may help to connect ACP communication across the continuum. © The Author(s) 2014.
Anti-alias filter in AORSA for modeling ICRF heating of DT plasmas in ITER
Berry, L. A.; Batchelor, D. B.; Jaeger, E. F.; RF SciDAC Team
2011-10-01
The spectral wave solver AORSA has been used extensively to model full-field ICRF heating scenarios for DT plasmas in ITER. In these scenarios, the tritium (T) second harmonic cyclotron resonance is positioned near the magnetic axis, where fast magnetosonic waves are efficiently absorbed by tritium ions. In some cases, a fundamental deuterium (D) cyclotron layer can also be located within the plasma, but close to the high field boundary. In this case, the existence of multiple ion cyclotron resonances presents a serious challenge for numerical simulation because short-wavelength, mode-converted waves can be excited close to the plasma edge at the ion-ion hybrid layer. Although the left hand circularly polarized component of the wave field is partially shielded from the fundamental D resonance, some power penetrates, and a small fraction (typically
Conductor fabrication for ITER Model Coils. Status of the EU cabling and jacketing activities
International Nuclear Information System (INIS)
Corte, A. della; Ricci, M.V.; Spadoni, M.; Bessette, D.; Duchateau, J.L.; Salpietro, E.; Garre, R.; Rossi, S.; Penco, R.; Laurenti, A.
1994-01-01
The conductors for the ITER magnets are being defined according to the operating requirements of the machine. To demonstrate the technological feasibility of the main features of the magnets, two model coils (central solenoid and toroidal field), with bores in the range 2-3 m, will be manufactured. This is the first significant industrial production of full-size conductor (a total of about 6.5 km for these coils). One cabling and one jacketing line have been assembled in Europe. The former can cable up to 1100 m (6 tons) unit lengths; the latter, which can also handle 1000 m conductor lengths, has been assembled in a shorter version (320 m). A description of the lines is reported, together with the results of the trials performed up to now. (author) 2 figs
Accuracy improvement of a hybrid robot for ITER application using POE modeling method
International Nuclear Information System (INIS)
Wang, Yongbo; Wu, Huapeng; Handroos, Heikki
2013-01-01
Highlights: ► The product of exponentials (POE) formula for error modeling of a hybrid robot. ► Differential Evolution (DE) algorithm for parameter identification. ► Simulation results are given to verify the effectiveness of the method. -- Abstract: This paper focuses on the kinematic calibration of a 10 degree-of-freedom (DOF) redundant serial–parallel hybrid robot to improve its accuracy. The robot was designed to perform assembly and repair tasks on the vacuum vessel (VV) of the International Thermonuclear Experimental Reactor (ITER). By employing the product of exponentials (POE) formula, we extend the POE-based calibration method from serial robots to redundant serial–parallel hybrid robots. The proposed method combines the forward and inverse kinematics to formulate a hybrid calibration method for serial–parallel hybrid robots. Because the error model is highly nonlinear and many error parameters need to be identified, traditional iterative linear least-squares algorithms cannot be used to identify the parameter errors. This paper employs a global optimization algorithm, Differential Evolution (DE), to identify the parameter errors by solving the inverse kinematics of the hybrid robot. Furthermore, after the parameter errors were identified, the DE algorithm was adopted to numerically solve the forward kinematics of the hybrid robot to demonstrate the accuracy improvement of the end-effector. Numerical simulations were carried out by generating random parameter errors at the allowed tolerance limit and generating a number of configuration poses in the robot workspace. Simulation under realistic experimental conditions shows that the accuracy of the end-effector can be improved to the precision level of the given external measurement device.
Accuracy improvement of a hybrid robot for ITER application using POE modeling method
Energy Technology Data Exchange (ETDEWEB)
Wang, Yongbo, E-mail: yongbo.wang@hotmail.com [Laboratory of Intelligent Machines, Lappeenranta University of Technology, FIN-53851 Lappeenranta (Finland)]; Wu, Huapeng; Handroos, Heikki [Laboratory of Intelligent Machines, Lappeenranta University of Technology, FIN-53851 Lappeenranta (Finland)]
2013-10-15
Highlights: ► The product of exponentials (POE) formula for error modeling of a hybrid robot. ► Differential Evolution (DE) algorithm for parameter identification. ► Simulation results are given to verify the effectiveness of the method. -- Abstract: This paper focuses on the kinematic calibration of a 10 degree-of-freedom (DOF) redundant serial–parallel hybrid robot to improve its accuracy. The robot was designed to perform assembly and repair tasks on the vacuum vessel (VV) of the International Thermonuclear Experimental Reactor (ITER). By employing the product of exponentials (POE) formula, we extend the POE-based calibration method from serial robots to redundant serial–parallel hybrid robots. The proposed method combines the forward and inverse kinematics to formulate a hybrid calibration method for serial–parallel hybrid robots. Because the error model is highly nonlinear and many error parameters need to be identified, traditional iterative linear least-squares algorithms cannot be used to identify the parameter errors. This paper employs a global optimization algorithm, Differential Evolution (DE), to identify the parameter errors by solving the inverse kinematics of the hybrid robot. Furthermore, after the parameter errors were identified, the DE algorithm was adopted to numerically solve the forward kinematics of the hybrid robot to demonstrate the accuracy improvement of the end-effector. Numerical simulations were carried out by generating random parameter errors at the allowed tolerance limit and generating a number of configuration poses in the robot workspace. Simulation under realistic experimental conditions shows that the accuracy of the end-effector can be improved to the precision level of the given external measurement device.
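Parameter identification by Differential Evolution, as in the calibration work above, can be sketched with SciPy's implementation. The forward model here is a hypothetical stand-in for the robot's kinematics, and the bounds and "true" parameter values are invented for illustration:

```python
import numpy as np
from scipy.optimize import differential_evolution

def forward_model(p, q):
    """Hypothetical nonlinear map from geometric parameters p to an
    end-effector measurement at joint configurations q (NOT the POE
    kinematics of the paper)."""
    return p[0] * np.sin(q + p[1]) + p[2] * q

q = np.linspace(0.0, np.pi, 30)          # sampled configurations
p_true = np.array([1.2, 0.3, -0.5])      # parameter errors to recover
meas = forward_model(p_true, q)          # simulated measurements

def cost(p):
    # sum-of-squares discrepancy between model and measurement,
    # the kind of nonconvex objective DE is used for here
    return np.sum((forward_model(p, q) - meas) ** 2)

res = differential_evolution(cost, bounds=[(-2.0, 2.0)] * 3, seed=1)
```

DE needs only cost evaluations within the parameter bounds, which is why it suits error models too nonlinear for iterative linearized least squares.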
International Nuclear Information System (INIS)
Kitsunezaki, Akio
1998-01-01
Through the cooperation of four parties (Japan, the USA, the EU and Russia), the ITER plan has proceeded as the Conceptual Design Activities from 1988 to 1990 and the Engineering Design Activities since 1992. To construct ITER, the legal and organizational aspects of ITER operation have been investigated by the four parties. However, their economic conditions have worsened, so construction of ITER could not begin after the end of the Engineering Design Activities in 1998. Accordingly, the parties decided to continue the design activities for three more years in order to study low-cost options and to test the superconducting model coil. (S.Y.)
Energy Technology Data Exchange (ETDEWEB)
Shirota, Go; Maeda, Eriko; Namiki, Yoko; Bari, Razibul; Abe, Osamu [The University of Tokyo, Department of Radiology, Graduate School of Medicine, Tokyo (Japan)]; Ino, Kenji [The University of Tokyo Hospital, Imaging Center, Tokyo (Japan)]; Torigoe, Rumiko [Toshiba Medical Systems, Tokyo (Japan)]
2017-10-15
A full iterative reconstruction algorithm is available, but its diagnostic quality in pediatric cardiac CT is unknown. To compare the image quality of two algorithms, full and hybrid iterative reconstruction, in pediatric cardiac CT, we included 49 children with congenital cardiac anomalies who underwent cardiac CT. We compared the quality of images reconstructed using the two algorithms based on a 3-point scale for the delineation of the following anatomical structures: atrial septum, ventricular septum, right atrium, right ventricle, left atrium, left ventricle, main pulmonary artery, ascending aorta, aortic arch including the patent ductus arteriosus, descending aorta, right coronary artery and left main trunk. We evaluated beam-hardening artifacts from contrast material using a 3-point scale, and we evaluated overall image quality using a 5-point scale. We also compared image noise, signal-to-noise ratio and contrast-to-noise ratio between the algorithms. The overall image quality was significantly higher with full iterative reconstruction than with hybrid iterative reconstruction (3.67±0.79 vs. 3.31±0.89, P=0.0072). The evaluation scores for most of the gross structures were higher with full iterative reconstruction than with hybrid iterative reconstruction. There was no significant difference between full and hybrid iterative reconstruction in the presence of beam-hardening artifacts. Image noise was significantly lower with full iterative reconstruction, while signal-to-noise ratio and contrast-to-noise ratio were significantly higher with full iterative reconstruction. The diagnostic quality was superior for cardiac CT images reconstructed with electrocardiogram-gated full iterative reconstruction. (orig.)
International Nuclear Information System (INIS)
Shirota, Go; Maeda, Eriko; Namiki, Yoko; Bari, Razibul; Abe, Osamu; Ino, Kenji; Torigoe, Rumiko
2017-01-01
A full iterative reconstruction algorithm is available, but its diagnostic quality in pediatric cardiac CT is unknown. To compare the image quality of two algorithms, full and hybrid iterative reconstruction, in pediatric cardiac CT, we included 49 children with congenital cardiac anomalies who underwent cardiac CT. We compared the quality of images reconstructed using the two algorithms based on a 3-point scale for the delineation of the following anatomical structures: atrial septum, ventricular septum, right atrium, right ventricle, left atrium, left ventricle, main pulmonary artery, ascending aorta, aortic arch including the patent ductus arteriosus, descending aorta, right coronary artery and left main trunk. We evaluated beam-hardening artifacts from contrast material using a 3-point scale, and we evaluated overall image quality using a 5-point scale. We also compared image noise, signal-to-noise ratio and contrast-to-noise ratio between the algorithms. The overall image quality was significantly higher with full iterative reconstruction than with hybrid iterative reconstruction (3.67±0.79 vs. 3.31±0.89, P=0.0072). The evaluation scores for most of the gross structures were higher with full iterative reconstruction than with hybrid iterative reconstruction. There was no significant difference between full and hybrid iterative reconstruction in the presence of beam-hardening artifacts. Image noise was significantly lower with full iterative reconstruction, while signal-to-noise ratio and contrast-to-noise ratio were significantly higher with full iterative reconstruction. The diagnostic quality was superior for cardiac CT images reconstructed with electrocardiogram-gated full iterative reconstruction. (orig.)
Iterative model reconstruction reduces calcified plaque volume in coronary CT angiography
Energy Technology Data Exchange (ETDEWEB)
Károlyi, Mihály, E-mail: mihaly.karolyi@cirg.hu [MTA-SE Cardiovascular Imaging Research Group, Heart and Vascular Center, Semmelweis University, 68. Varosmajor st, 1122, Budapest (Hungary); Szilveszter, Bálint, E-mail: szilveszter.balint@gmail.com [MTA-SE Cardiovascular Imaging Research Group, Heart and Vascular Center, Semmelweis University, 68. Varosmajor st, 1122, Budapest (Hungary); Kolossváry, Márton, E-mail: martonandko@gmail.com [MTA-SE Cardiovascular Imaging Research Group, Heart and Vascular Center, Semmelweis University, 68. Varosmajor st, 1122, Budapest (Hungary); Takx, Richard A.P, E-mail: richard.takx@gmail.com [Department of Radiology, University Medical Center Utrecht, 100 Heidelberglaan, 3584, CX Utrecht (Netherlands); Celeng, Csilla, E-mail: celengcsilla@gmail.com [MTA-SE Cardiovascular Imaging Research Group, Heart and Vascular Center, Semmelweis University, 68. Varosmajor st, 1122, Budapest (Hungary); Bartykowszki, Andrea, E-mail: bartyandi@gmail.com [MTA-SE Cardiovascular Imaging Research Group, Heart and Vascular Center, Semmelweis University, 68. Varosmajor st, 1122, Budapest (Hungary); Jermendy, Ádám L., E-mail: adam.jermendy@gmail.com [MTA-SE Cardiovascular Imaging Research Group, Heart and Vascular Center, Semmelweis University, 68. Varosmajor st, 1122, Budapest (Hungary); Panajotu, Alexisz, E-mail: panajotualexisz@gmail.com [MTA-SE Cardiovascular Imaging Research Group, Heart and Vascular Center, Semmelweis University, 68. Varosmajor st, 1122, Budapest (Hungary); Karády, Júlia, E-mail: karadyjulia@gmail.com [MTA-SE Cardiovascular Imaging Research Group, Heart and Vascular Center, Semmelweis University, 68. Varosmajor st, 1122, Budapest (Hungary); and others
2017-02-15
Objective: To assess the impact of iterative model reconstruction (IMR) on calcified plaque quantification as compared to filtered back projection reconstruction (FBP) and hybrid iterative reconstruction (HIR) in coronary computed tomography angiography (CTA). Methods: Raw image data of 52 patients who underwent 256-slice CTA were reconstructed with IMR, HIR and FBP. We evaluated qualitative and quantitative image quality parameters and quantified calcified and partially calcified plaque volumes using automated software. Results: Overall qualitative image quality significantly improved with HIR as compared to FBP, and further improved with IMR (p < 0.01 for all). Contrast-to-noise ratios were improved with IMR compared to HIR and FBP (51.0 [43.5–59.9], 20.3 [16.2–25.9] and 14.0 [11.2–17.7], respectively, all p < 0.01). Overall plaque volumes were lowest with IMR and highest with FBP (121.7 [79.3–168.4], 138.7 [90.6–191.7] and 147.0 [100.7–183.6], respectively). Similarly, calcified volumes (>130 HU) were decreased with IMR as compared to HIR and FBP (105.9 [62.1–144.6], 110.2 [63.8–166.6] and 115.9 [81.7–164.2], respectively, p < 0.05 for all). High-attenuation non-calcified volumes (90–129 HU) yielded similar values with FBP and HIR (p = 0.81); however, they were lower with IMR (p < 0.05 for both). Intermediate- (30–89 HU) and low-attenuation (<30 HU) non-calcified volumes showed no significant difference (p = 0.22 and p = 0.67, respectively). Conclusions: IMR improves the image quality of coronary CTA and decreases calcified plaque volumes.
Bioprocess iterative batch-to-batch optimization based on hybrid parametric/nonparametric models.
Teixeira, Ana P; Clemente, João J; Cunha, António E; Carrondo, Manuel J T; Oliveira, Rui
2006-01-01
This paper presents a novel method for iterative batch-to-batch dynamic optimization of bioprocesses. The relationship between process performance and control inputs is established by means of hybrid grey-box models combining parametric and nonparametric structures. The bioreactor dynamics are defined by material balance equations, whereas the cell population subsystem is represented by an adjustable mixture of nonparametric and parametric models. Thus optimizations are possible without detailed mechanistic knowledge concerning the biological system. A clustering technique is used to supervise the reliability of the nonparametric subsystem during the optimization. Whenever the nonparametric outputs are unreliable, the objective function is penalized. The technique was evaluated with three simulation case studies. The overall results suggest that the convergence to the optimal process performance may be achieved after a small number of batches. The model unreliability risk constraint along with sampling scheduling are crucial to minimize the experimental effort required to attain a given process performance. In general terms, it may be concluded that the proposed method broadens the application of the hybrid parametric/nonparametric modeling technique to "newer" processes with higher potential for optimization.
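The hybrid grey-box structure described above (parametric material balances driven by a data-driven kinetic submodel) can be sketched as follows. This is a generic invented example, not the paper's process: the balance equations, yield parameter and clipping rule are all assumptions.

```python
def simulate_hybrid(mu_hat, x0, s0, yield_xs, dt, n_steps):
    """Hybrid grey-box sketch: parametric (mechanistic) balances
        dX/dt = mu(S) * X,    dS/dt = -mu(S) * X / Y_xs
    for biomass X and substrate S, with the specific growth rate mu(S)
    supplied by an arbitrary data-driven (nonparametric) submodel mu_hat.
    Forward Euler integration; negative predictions are clipped, a crude
    analogue of penalizing unreliable nonparametric outputs."""
    x, s = x0, s0
    for _ in range(n_steps):
        mu = max(mu_hat(s), 0.0)   # clip unreliable rate predictions
        dx = dt * mu * x           # biomass increment from the balance
        x += dx
        s = max(s - dx / yield_xs, 0.0)
    return x, s
```

Because only `mu_hat` is data-driven, the batch-to-batch optimization can retune or retrain that submodel between runs while the mass balances guarantee physically consistent trajectories.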
ITER EDA Newsletter. V. 3, no. 8
International Nuclear Information System (INIS)
1994-08-01
This ITER EDA (Engineering Design Activities) Newsletter issue reports on the sixth ITER Council meeting, introduces the newly appointed ITER Director and reports on his address to the ITER Council. The vacuum tank for ITER model coil testing, installed at JAERI, Naka, Japan, is also briefly described.
International Nuclear Information System (INIS)
Duchateau, J.L.; Ciazynski, D.; Guerber, O.; Park, S.H.; Zani, L.
2003-01-01
In the Phase II experiment of the International Thermonuclear Experimental Reactor (ITER) Toroidal Field Model Coil (TFMC), the operation limits of its 80 kA Nb3Sn conductor were explored. To increase the magnetic field on the conductor, the TFMC was tested in the presence of another large coil, the EURATOM-LCT coil. Under these conditions the maximum field reached on the conductor was around 10 tesla. This exploration was performed at constant current, by progressively increasing the coil temperature and monitoring the coil voltage drop in the current sharing regime. Such an operation was made possible thanks to the very high stability of the conductor. The aim of these tests was to compare the critical properties of the conductor with expectations and to assess the ITER TF conductor design. These expectations are based on the documented critical field- and temperature-dependent properties of the 720 superconducting strands which compose the conductor. In addition, the conductor properties are highly dependent on the strain, due to the compression appearing on Nb3Sn during the heat treatment of the pancakes and related to the differential thermal compression between Nb3Sn and the stainless steel jacket. No precise model exists to predict this strain, which is therefore the main information expected from these tests. The method to deduce this strain from the different tests is presented, including a thermal-hydraulic analysis to identify the temperature of the critical point and a careful estimation of the field map across the conductor. The measured strain has been estimated in the range -0.75% to -0.79%. This information will be taken into account for the ITER design, and some adjustment of the ITER conductor design is under examination. (authors)
DEFF Research Database (Denmark)
Dieterle, Mischa; Horstmeyer, Thomas; Berthold, Jost
2012-01-01
Skeleton-based programming is an area of increasing relevance with upcoming highly parallel hardware, since it substantially facilitates parallel programming and separates concerns. When parallel algorithms expressed by skeletons involve iterations – applying the same algorithm repeatedly – … a particular skeleton ad-hoc for repeated execution turns out to be considerably complicated, and raises general questions about introducing state into a stateless parallel computation. In addition, one would strongly prefer an approach which leaves the original skeleton intact, and only uses it as a building block inside a bigger structure. In this work, we present a general framework for skeleton iteration and discuss requirements and variations of iteration control and iteration body. Skeleton iteration is expressed by synchronising a parallel iteration body skeleton with a (likewise parallel) state…
Energy Technology Data Exchange (ETDEWEB)
Nakaura, Takeshi; Iyama, Yuji; Kidoh, Masafumi; Yokoyama, Koichi [Amakusa Medical Center, Diagnostic Radiology, Amakusa, Kumamoto (Japan); Kumamoto University, Department of Diagnostic Radiology, Graduate School of Medical Sciences, Kumamoto (Japan); Oda, Seitaro; Yamashita, Yasuyuki [Kumamoto University, Department of Diagnostic Radiology, Graduate School of Medical Sciences, Kumamoto (Japan); Tokuyasu, Shinichi [Philips Electronics, Kumamoto (Japan); Harada, Kazunori [Amakusa Medical Center, Department of Surgery, Kumamoto (Japan)
2016-03-15
The purpose of this study was to evaluate the utility of iterative model reconstruction (IMR) in brain CT, especially with thin-slice images. This prospective study received institutional review board approval, and prior informed consent to participate was obtained from all patients. We enrolled 34 patients who underwent brain CT and reconstructed axial images with filtered back projection (FBP), hybrid iterative reconstruction (HIR) and IMR with 1 and 5 mm slice thicknesses. The CT number, image noise, contrast, and contrast-to-noise ratio (CNR) between the thalamus and internal capsule, and the rate of increase of image noise from 5 to 1 mm thickness images, were assessed for each reconstruction method. Two independent radiologists assessed image contrast, image noise, image sharpness, and overall image quality on a 4-point scale. The CNRs at 1 and 5 mm slice thickness were significantly higher with IMR (1.2 ± 0.6 and 2.2 ± 0.8, respectively) than with FBP (0.4 ± 0.3 and 1.0 ± 0.4, respectively) and HIR (0.5 ± 0.3 and 1.2 ± 0.4, respectively) (p < 0.01). The mean rate of increase in noise from 5 to 1 mm thickness images was significantly lower with IMR (1.7 ± 0.3) than with FBP (2.3 ± 0.3) and HIR (2.3 ± 0.4) (p < 0.01). There were no significant differences between the reconstruction techniques in the qualitative analysis of unfamiliar image texture. IMR offers significant noise reduction and higher contrast and CNR in brain CT, especially for thin-slice images, when compared to FBP and HIR. (orig.)
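For reference, a contrast-to-noise ratio between two regions of interest is conventionally computed as the absolute mean difference divided by the image noise. A minimal sketch follows; the ROI names echo the abstract (thalamus vs. internal capsule), but the exact formula and noise definition used by the authors are assumptions:

```python
import numpy as np

# CNR between two ROIs: |mean(A) - mean(B)| divided by image noise,
# here taken as the standard deviation within ROI B.
def cnr(roi_a, roi_b):
    a = np.asarray(roi_a, dtype=float)   # e.g. thalamus pixel values
    b = np.asarray(roi_b, dtype=float)   # e.g. internal capsule pixel values
    return abs(a.mean() - b.mean()) / b.std()

value = cnr([10.0, 10.0, 10.0, 10.0], [5.0, 7.0, 5.0, 7.0])
```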
Directory of Open Access Journals (Sweden)
Jihang Sun
OBJECTIVE: To evaluate noise reduction and image quality improvement in low-radiation-dose chest CT images in children using adaptive statistical iterative reconstruction (ASIR) and a full model-based iterative reconstruction (MBIR) algorithm. METHODS: Forty-five children (age ranging from 28 days to 6 years, median of 1.8 years) who received low-dose chest CT scans were included. An age-dependent noise index (NI) was used for acquisition. Images were retrospectively reconstructed using three methods: MBIR; a blend of 60% ASIR and 40% conventional filtered back-projection (FBP); and FBP. The subjective quality of the images was independently evaluated by two radiologists. Objective noise in the left ventricle (LV), muscle, fat, descending aorta and lung field was measured at the layer with the largest cross-sectional area of the LV, with the region of interest about one fourth to half of the area of the descending aorta. The optimized signal-to-noise ratio (SNR) was calculated. RESULTS: In terms of subjective quality, MBIR images were significantly better than ASIR and FBP in image noise and visibility of tiny structures, but blurred edges were observed. In terms of objective noise, MBIR and ASIR reconstruction decreased the image noise by 55.2% and 31.8%, respectively, for the LV compared with FBP. Similarly, MBIR and ASIR reconstruction increased the SNR by 124.0% and 46.2%, respectively, compared with FBP. CONCLUSION: Compared with FBP and ASIR, overall image quality and noise reduction were significantly improved by MBIR. MBIR can reconstruct eligible chest CT images in children at a lower radiation dose.
International Nuclear Information System (INIS)
Katsura, Masaki; Matsuda, Izuru; Akahane, Masaaki; Sato, Jiro; Akai, Hiroyuki; Yasaka, Koichiro; Kunimatsu, Akira; Ohtomo, Kuni
2012-01-01
To prospectively evaluate dose reduction and image quality characteristics of chest CT reconstructed with model-based iterative reconstruction (MBIR) compared with adaptive statistical iterative reconstruction (ASIR). One hundred patients underwent reference-dose and low-dose unenhanced chest CT with 64-row multidetector CT. Images were reconstructed with 50 % ASIR-filtered back projection blending (ASIR50) for reference-dose CT, and with ASIR50 and MBIR for low-dose CT. Two radiologists assessed the images in a blinded manner for subjective image noise, artefacts and diagnostic acceptability. Objective image noise was measured in the lung parenchyma. Data were analysed using the sign test and pair-wise Student's t-test. Compared with reference-dose CT, there was a 79.0 % decrease in dose-length product with low-dose CT. Low-dose MBIR images had significantly lower objective image noise (16.93 ± 3.00) than low-dose ASIR (49.24 ± 9.11, P < 0.01) and reference-dose ASIR images (24.93 ± 4.65, P < 0.01). Low-dose MBIR images were all diagnostically acceptable. Unique features of low-dose MBIR images included motion artefacts and pixellated blotchy appearances, which did not adversely affect diagnostic acceptability. Diagnostically acceptable chest CT images acquired with nearly 80 % less radiation can be obtained using MBIR. MBIR shows greater potential than ASIR for providing diagnostically acceptable low-dose CT images without severely compromising image quality. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Katsura, Masaki; Matsuda, Izuru; Akahane, Masaaki; Sato, Jiro; Akai, Hiroyuki; Yasaka, Koichiro; Kunimatsu, Akira; Ohtomo, Kuni [University of Tokyo, Department of Radiology, Graduate School of Medicine, Bunkyo-ku, Tokyo (Japan)
2012-08-15
To prospectively evaluate dose reduction and image quality characteristics of chest CT reconstructed with model-based iterative reconstruction (MBIR) compared with adaptive statistical iterative reconstruction (ASIR). One hundred patients underwent reference-dose and low-dose unenhanced chest CT with 64-row multidetector CT. Images were reconstructed with 50 % ASIR-filtered back projection blending (ASIR50) for reference-dose CT, and with ASIR50 and MBIR for low-dose CT. Two radiologists assessed the images in a blinded manner for subjective image noise, artefacts and diagnostic acceptability. Objective image noise was measured in the lung parenchyma. Data were analysed using the sign test and pair-wise Student's t-test. Compared with reference-dose CT, there was a 79.0 % decrease in dose-length product with low-dose CT. Low-dose MBIR images had significantly lower objective image noise (16.93 ± 3.00) than low-dose ASIR (49.24 ± 9.11, P < 0.01) and reference-dose ASIR images (24.93 ± 4.65, P < 0.01). Low-dose MBIR images were all diagnostically acceptable. Unique features of low-dose MBIR images included motion artefacts and pixellated blotchy appearances, which did not adversely affect diagnostic acceptability. Diagnostically acceptable chest CT images acquired with nearly 80 % less radiation can be obtained using MBIR. MBIR shows greater potential than ASIR for providing diagnostically acceptable low-dose CT images without severely compromising image quality. (orig.)
Elsheikh, Ahmed H.
2013-06-01
We introduce a nonlinear orthogonal matching pursuit (NOMP) for sparse calibration of subsurface flow models. Sparse calibration is a challenging problem as the unknowns are both the non-zero components of the solution and their associated weights. NOMP is a greedy algorithm that discovers at each iteration the basis function most correlated with the residual from a large pool of basis functions. The discovered basis (aka support) is augmented across the nonlinear iterations. Once a set of basis functions is selected, the solution is obtained by applying Tikhonov regularization. The proposed algorithm relies on a stochastically approximated gradient using an iterative stochastic ensemble method (ISEM). In the current study, the search space is parameterized using an overcomplete dictionary of basis functions built using the K-SVD algorithm. The proposed algorithm is the first ensemble-based algorithm that tackles the sparse nonlinear parameter estimation problem. © 2013 Elsevier Ltd.
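In the linear setting, the greedy support-growth loop that NOMP generalizes reduces to orthogonal matching pursuit with a Tikhonov-regularized solve on the current support. A sketch of that linear analogue follows; the actual NOMP uses ISEM-approximated gradients for a nonlinear forward model, so the dictionary `A` and all sizes here are illustrative:

```python
import numpy as np

# Greedy matching pursuit (linear analogue of NOMP): at each iteration
# pick the atom most correlated with the residual, grow the support,
# then re-solve on the support with Tikhonov regularization.
def omp_tikhonov(A, y, n_iters, reg=1e-8):
    m, n = A.shape
    support, residual = [], y.copy()
    x = np.zeros(n)
    for _ in range(n_iters):
        correlations = np.abs(A.T @ residual)
        correlations[support] = 0.0          # never pick an atom twice
        support.append(int(np.argmax(correlations)))
        As = A[:, support]
        # Tikhonov-regularized least squares on the current support
        coef = np.linalg.solve(As.T @ As + reg * np.eye(len(support)), As.T @ y)
        x = np.zeros(n)
        x[support] = coef
        residual = y - A @ x
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))            # illustrative dictionary
x_true = np.zeros(10)
x_true[[2, 7]] = [1.5, -2.0]                 # 2-sparse ground truth
x_hat = omp_tikhonov(A, A @ x_true, n_iters=2)
```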
International Nuclear Information System (INIS)
Onozuka, M.; Takeda, N.; Nakahira, M.; Shimizu, K.; Nakamura, T.
2003-01-01
The most recent assessment method to evaluate the dynamic behavior of the International Thermonuclear Experimental Reactor (ITER) tokamak assembly is outlined. Three experimental models, including a 1/5.8-scale tokamak model, have been considered to validate the numerical analysis methods for dynamic events, particularly seismic ones. The experimental model has been evaluated by numerical calculations and the results are presented. In the calculations, equivalent linearization has been applied for the non-linear characteristics of the support flange connection, caused by the effects of the bolt-fastening and the friction between the flanges. The detailed connecting conditions for the support flanges have been developed and validated for the analysis. Using these conditions, the eigen-mode analysis has shown that the first and second eigen-modes are horizontal vibration modes with a natural frequency of 39 Hz, while the vertical vibration mode is the fourth mode with a natural frequency of 86 Hz. Dynamic analysis for seismic events has shown a maximum acceleration approximately twice that of the applied acceleration, and a maximum stress of 104 MPa in the flange connecting bolt. These values will be compared with experimental results in order to validate the analysis methods.
Energy Technology Data Exchange (ETDEWEB)
Onozuka, M. E-mail: masanori_onozuka@mhi.co.jp; Takeda, N.; Nakahira, M.; Shimizu, K.; Nakamura, T
2003-09-01
The most recent assessment method to evaluate the dynamic behavior of the International Thermonuclear Experimental Reactor (ITER) tokamak assembly is outlined. Three experimental models, including a 1/5.8-scale tokamak model, have been considered to validate the numerical analysis methods for dynamic events, particularly seismic ones. The experimental model has been evaluated by numerical calculations and the results are presented. In the calculations, equivalent linearization has been applied for the non-linear characteristics of the support flange connection, caused by the effects of the bolt-fastening and the friction between the flanges. The detailed connecting conditions for the support flanges have been developed and validated for the analysis. Using these conditions, the eigen-mode analysis has shown that the first and second eigen-modes are horizontal vibration modes with a natural frequency of 39 Hz, while the vertical vibration mode is the fourth mode with a natural frequency of 86 Hz. Dynamic analysis for seismic events has shown a maximum acceleration approximately twice that of the applied acceleration, and a maximum stress of 104 MPa in the flange connecting bolt. These values will be compared with experimental results in order to validate the analysis methods.
Energy Technology Data Exchange (ETDEWEB)
Hawley, B.W.; Zandt, G.; Smith, R.B.
1981-08-10
An iterative inversion technique has been developed that uses the direct P and S wave arrival times from local earthquakes to compute simultaneously a three-dimensional velocity structure and relocated hypocenters. Crustal structure is modeled by subdividing flat layers into rectangular blocks. An interpolation function is used to smoothly vary velocities between blocks, allowing ray trace calculations of travel times in a three-dimensional medium. Tests using synthetic data from known models show that solutions are reasonably independent of block size and spatial distribution but are sensitive to the choice of layer thicknesses. Application of the technique to observed earthquake data from north-central Utah showed the following: (1) lateral velocity variations in the crust as large as 7% occur over 30-km distance, (2) earthquake epicenters computed with the three-dimensional velocity structure were shifted an average of 3.0 km from locations determined assuming homogeneous flat layered models, and (3) the laterally varying velocity structure correlates with anomalous variations in the local gravity and aeromagnetic fields, suggesting that the new velocity information can be valuable in acquiring a better understanding of crustal structure.
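The simultaneous structure/hypocenter iteration can be illustrated with a toy 1-D analogue in which travel times t_i = s·|x_i − x0| depend jointly on an unknown event position x0 and medium slowness s, improved together by Gauss-Newton steps. The real scheme traces rays through 3-D blocks with interpolated velocities, so everything below is a simplification:

```python
import numpy as np

# Toy joint inversion: stations at known positions record travel times
# from one event; solve for event position x0 and slowness s together.
def joint_invert(stations, t_obs, x0=0.0, s=1.0, n_iters=20):
    for _ in range(n_iters):
        d = np.abs(stations - x0)
        t_pred = s * d
        # Jacobian of t_i = s*|x_i - x0| with respect to (x0, s)
        J = np.column_stack([-s * np.sign(stations - x0), d])
        dx, ds = np.linalg.lstsq(J, t_obs - t_pred, rcond=None)[0]
        x0, s = x0 + dx, s + ds
    return x0, s

stations = np.array([-10.0, -4.0, 5.0, 12.0])
t = 0.25 * np.abs(stations - 2.0)        # synthetic data: x0=2.0, s=0.25
x0_hat, s_hat = joint_invert(stations, t)
```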
Chen, Tinggui; Xiao, Renbin
2014-01-01
Due to fierce market competition, the ability to improve product quality and reduce development cost determines the core competitiveness of an enterprise. However, design iteration generally increases product cost and delays development time, so identifying and modeling couplings among tasks in product design and development has become an important issue for enterprises to settle. In this paper, the shortcomings of the WTM model are discussed, and the tearing approach together with an inner-iteration method is used to complement the classic WTM model. In addition, the artificial bee colony (ABC) algorithm is introduced to find the optimal decoupling schemes. Firstly, the tearing approach and the inner-iteration method are analyzed for solving coupled sets. Secondly, a hybrid iteration model combining these two techniques is set up. Thirdly, a high-performance swarm intelligence algorithm, artificial bee colony, is adopted to realize problem-solving. Finally, an engineering design of a chemical processing system is given to verify the model's reasonability and effectiveness.
The Silicon Trypanosome : A Test Case of Iterative Model Extension in Systems Biology
Achcar, Fiona; Fadda, Abeer; Haanstra, Jurgen R.; Kerkhoven, Eduard J.; Kim, Dong-Hyun; Leroux, Alejandro E.; Papamarkou, Theodore; Rojas, Federico; Bakker, Barbara M.; Barrett, Michael P.; Clayton, Christine; Girolami, Mark; Krauth-Siegel, R. Luise; Matthews, Keith R.; Breitling, Rainer; Poole, RK
2014-01-01
The African trypanosome, Trypanosoma brucei, is a unicellular parasite causing African trypanosomiasis (sleeping sickness in humans and nagana in animals). Due to some of its unique properties, it has emerged as a popular model organism in systems biology. A predictive quantitative model of …
Application of Homotopy Perturbation and Variational Iteration Methods to SIR Epidemic Model
DEFF Research Database (Denmark)
Ghotbi, Abdoul R.; Barari, Amin; Omidvar, M.
2011-01-01
…effective strategy against childhood diseases, the development of a framework that would predict the optimal vaccine coverage level needed to prevent the spread of diseases is crucial. The SIR model is a standard compartmental model that has been used to describe many epidemiological diseases…
Modeling and analysis of alternative concept of ITER vacuum vessel primary heat transfer system
International Nuclear Information System (INIS)
Carbajo, Juan; Yoder, Graydon; Dell'Orco, G.; Curd, Warren; Kim, Seokho
2010-01-01
A RELAP5-3D model of the ITER (Latin for 'the way') vacuum vessel (VV) primary heat transfer system has been developed to evaluate a proposed design change that relocates the heat exchangers (HXs) from the exterior of the tokamak building to the interior. This alternative design protects the HXs from external hazards such as wind, tornado, and aircraft crash. The proposed design integrates the VV HXs into a VV pressure suppression system (VVPSS) tank that contains water to condense vapour in case of a leak into the plasma chamber. The proposal is to also use this water as the ultimate sink when removing decay heat from the VV system. The RELAP5-3D model has been run under normal operating and abnormal (decay heat) conditions. Results indicate that this alternative design is feasible: the VVPSS tank is unaffected under normal operation, while under decay heat conditions the tank temperature and pressure increase, so the steam generated must be removed if the low pressure of the VVPSS tank is to be maintained.
Overview of physics basis for ITER
International Nuclear Information System (INIS)
Mukhovatov, V; Shimada, M; Chudnovskiy, A N; Costley, A E; Gribov, Y; Federici, G; Kardaun, O; Kukushkin, A S; Polevoi, A; Pustovitov, V D; Shimomura, Y; Sugie, T; Sugihara, M; Vayakis, G
2003-01-01
ITER will be the first magnetic confinement device with burning DT plasma and fusion power of about 0.5 GW. Parameters of ITER plasma have been predicted using methodologies summarized in the ITER Physics Basis (1999 Nucl. Fusion 39 2175). During the past few years, new results have been obtained that substantiate confidence in achieving Q>=10 in ITER with inductive H-mode operation. These include achievement of a good H-mode confinement near the Greenwald density at high triangularity of the plasma cross section; improvements in theory-based confinement projections for the core plasma, even though further studies are needed for understanding the transport near the plasma edge; improvement in helium ash removal due to the elastic collisions of He atoms with D/T ions in the divertor predicted by modelling; demonstration of feedback control of neoclassical tearing modes and resultant improvement in the achievable beta values; better understanding of edge localized mode (ELM) physics and development of ELM mitigation techniques; and demonstration of mitigation of plasma disruptions. ITER will have a flexibility to operate also in steady-state and intermediate (hybrid) regimes. The 'advanced tokamak' regimes with weak or negative central magnetic shear and internal transport barriers are considered as potential scenarios for steady-state operation. The paper concentrates on inductively driven plasma performance and discusses requirements for steady-state operation in ITER
Model predictive control using fuzzy decision functions
Kaymak, U.; Costa Sousa, da J.M.
2001-01-01
Fuzzy predictive control integrates conventional model predictive control with techniques from fuzzy multicriteria decision making, translating the goals and the constraints to predictive control in a transparent way. The information regarding the (fuzzy) goals and the (fuzzy) constraints of the
Double folding model of nucleus-nucleus potential: formulae, iteration method and computer code
International Nuclear Information System (INIS)
Luk'yanov, K.V.
2008-01-01
A method of construction of the nucleus-nucleus double folding potential is described. An iteration procedure for the corresponding integral equation is presented, together with a computer code and numerical results
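The abstract gives no formulas, but a successive-substitution iteration for an integral equation of this general kind can be sketched as follows; the kernel, inhomogeneity and grid are illustrative assumptions, not the actual double-folding code:

```python
import numpy as np

# Fixed-point iteration for u(r) = f(r) + lam * \int K(r, r') u(r') dr'
# discretized on a uniform grid; iterate until successive substitutions
# agree to within a tolerance.
def solve_integral_eq(K, f, dr, lam=0.5, tol=1e-10, max_iter=1000):
    u = f.copy()
    for _ in range(max_iter):
        u_new = f + lam * (K @ u) * dr
        if np.max(np.abs(u_new - u)) < tol:
            return u_new
        u = u_new
    return u

r = np.linspace(0.0, 1.0, 101)
dr = r[1] - r[0]
K = np.exp(-np.abs(r[:, None] - r[None, :]))   # illustrative kernel
f = np.ones_like(r)
u = solve_integral_eq(K, f, dr)
```

Convergence of this simple scheme requires the iteration to be a contraction (here |lam| times the kernel norm is below one); otherwise a relaxed or Newton-type update would be needed.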
Multivariate statistical models for disruption prediction at ASDEX Upgrade
International Nuclear Information System (INIS)
Aledda, R.; Cannas, B.; Fanni, A.; Sias, G.; Pautasso, G.
2013-01-01
In this paper, a disruption prediction system for ASDEX Upgrade is proposed that does not require disruption-terminated experiments to be implemented. The system consists of a data-based model, which is built using only a few input signals coming from successfully terminated pulses. A fault detection and isolation approach has been used, where the prediction is based on the analysis of the residuals of an autoregressive exogenous input model. The prediction performance of the proposed system is encouraging when it is applied to the same set of campaigns used to implement the model. However, the false alarms increase significantly when the system is tested on discharges coming from experimental campaigns temporally far from those used to train the model. This is due to the well-known ageing effect inherent in data-based models. The main advantage of the proposed method, with respect to other data-based approaches in the literature, is that it does not need data on experiments terminated with a disruption, as it uses a normal-operating-conditions model. This is a big advantage in the perspective of a prediction system for ITER, where only a limited number of disruptions can be allowed.
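The fault-detection idea can be sketched: fit an ARX model on normally terminated pulses only, then monitor the one-step prediction residual against a band calibrated on the same normal data. The model orders, the synthetic "normal" dynamics and the 3-sigma threshold below are all assumptions for illustration:

```python
import numpy as np

# Least-squares fit of an ARX model y[t] = sum_i a_i*y[t-i] + sum_j b_j*u[t-j]
def fit_arx(y, u, na=2, nb=2):
    k = max(na, nb)
    rows = [np.concatenate([y[t - na:t][::-1], u[t - nb:t][::-1]])
            for t in range(k, len(y))]
    theta, *_ = np.linalg.lstsq(np.array(rows), y[k:], rcond=None)
    return theta, k

def residuals(y, u, theta, k, na=2, nb=2):
    preds = np.array([
        np.concatenate([y[t - na:t][::-1], u[t - nb:t][::-1]]) @ theta
        for t in range(k, len(y))])
    return y[k:] - preds                      # one-step prediction errors

rng = np.random.default_rng(1)
u = rng.standard_normal(500)                  # input signal of a "normal" pulse
y = np.zeros(500)
for t in range(2, 500):                       # synthetic normal-operation dynamics
    y[t] = 0.6 * y[t-1] - 0.2 * y[t-2] + 0.5 * u[t-1] + 0.1 * u[t-2]

theta, k = fit_arx(y, u)
r_train = residuals(y, u, theta, k)
threshold = 3.0 * r_train.std()               # alarm band from normal pulses only
```

A discharge whose residuals leave the band would raise a precursor alarm; no disruption-terminated pulses are needed to set the threshold, which mirrors the advantage claimed in the abstract.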
2-D Reflectometer Modeling for Optimizing the ITER Low-field Side Reflectometer System
International Nuclear Information System (INIS)
Kramer, G.J.; Nazikian, R.; Valeo, E.J.; Budny, R.V.; Kessel, C.; Johnson, D.
2005-01-01
The response of a low-field side reflectometer system for ITER is simulated with a 2-D reflectometer code using a realistic plasma equilibrium. It is found that the reflected beam will often miss its launch point by as much as 40 cm and that a vertical array of receiving antennas is essential in order to observe a reflection on the low-field side of ITER.
International Nuclear Information System (INIS)
Raeder, J.; Piet, S.; Buende, R.
1991-01-01
As part of the series of publications by the IAEA that summarize the results of the Conceptual Design Activities for the ITER project, this document describes the ITER safety analyses. It contains an assessment of normal operation effluents, accident scenarios, plasma chamber safety, tritium system safety, magnet system safety, external loss of coolant and coolant flow problems, and a waste management assessment, while it describes the implementation of the safety approach for ITER. The document ends with a list of major conclusions, a set of topical remarks on technical safety issues, and recommendations for the Engineering Design Activities, safety considerations for siting ITER, and recommendations with regard to the safety issues for the R and D for ITER. Refs, figs and tabs
Circuit model of the ITER-like antenna for JET and simulation of its control algorithms
Durodié, Frédéric; Dumortier, Pierre; Helou, Walid; Křivská, Alena; Lerche, Ernesto
2015-12-01
The ITER-like Antenna (ILA) for JET [1] is a 2 toroidal by 2 poloidal array of Resonant Double Loops (RDL) featuring in-vessel matching capacitors feeding RF current straps in conjugate-T manner, a low impedance quarter-wave impedance transformer, a service stub allowing hydraulic actuator and water cooling services to reach the aforementioned capacitors and a 2nd stage phase-shifter-stub matching circuit allowing to correct/choose the conjugate-T working impedance. Toroidally adjacent RDLs are fed from a 3dB hybrid splitter. It has been operated at 33, 42 and 47 MHz on plasma (2008-2009), while its presently estimated frequency range is 29 to 49 MHz. At the time of the design (2001-2004) as well as of the experiments, the circuit models of the ILA were quite basic. The ILA front face and strap array Topica model was relatively crude and failed to correctly represent the poloidal central septum, Faraday Screen attachment as well as the segmented antenna central septum limiter. The ILA matching capacitors, T-junction, Vacuum Transmission Line (VTL) and Service Stubs were represented by lumped circuit elements and simple transmission line models. The assessment of the ILA results carried out to decide on the repair of the ILA identified that achieving routine full array operation requires a better understanding of the RF circuit, a feedback control algorithm for the 2nd stage matching as well as tighter calibrations of RF measurements. The paper presents the progress in modelling of the ILA comprising a more detailed Topica model of the front face for various plasma Scrape Off Layer profiles, a comprehensive HFSS model of the matching capacitors including internal bellows and electrode cylinders, 3D-EM models of the VTL including vacuum ceramic window, Service stub, a transmission line model of the 2nd stage matching circuit and main transmission lines including the 3dB hybrid splitters. A time evolving simulation using the improved circuit model allowed to design and
Circuit model of the ITER-like antenna for JET and simulation of its control algorithms
Energy Technology Data Exchange (ETDEWEB)
Durodié, Frédéric, E-mail: frederic.durodie@rma.ac.be; Křivská, Alena [LPP-ERM/KMS, TEC Partner, Brussels (Belgium); Dumortier, Pierre; Lerche, Ernesto [LPP-ERM/KMS, TEC Partner, Brussels (Belgium); JET, Culham Science Centre, Abingdon, OX14 3DB (United Kingdom); Helou, Walid [CEA, IRFM, F-13108 St-Paul-Lez-Durance (France); Collaboration: EUROfusion Consortium
2015-12-10
The ITER-like Antenna (ILA) for JET [1] is a 2 toroidal by 2 poloidal array of Resonant Double Loops (RDL) featuring in-vessel matching capacitors feeding RF current straps in conjugate-T manner, a low impedance quarter-wave impedance transformer, a service stub allowing hydraulic actuator and water cooling services to reach the aforementioned capacitors and a 2nd stage phase-shifter-stub matching circuit allowing to correct/choose the conjugate-T working impedance. Toroidally adjacent RDLs are fed from a 3dB hybrid splitter. It has been operated at 33, 42 and 47 MHz on plasma (2008-2009), while its presently estimated frequency range is 29 to 49 MHz. At the time of the design (2001-2004) as well as of the experiments, the circuit models of the ILA were quite basic. The ILA front face and strap array Topica model was relatively crude and failed to correctly represent the poloidal central septum, Faraday Screen attachment as well as the segmented antenna central septum limiter. The ILA matching capacitors, T-junction, Vacuum Transmission Line (VTL) and Service Stubs were represented by lumped circuit elements and simple transmission line models. The assessment of the ILA results carried out to decide on the repair of the ILA identified that achieving routine full array operation requires a better understanding of the RF circuit, a feedback control algorithm for the 2nd stage matching as well as tighter calibrations of RF measurements. The paper presents the progress in modelling of the ILA comprising a more detailed Topica model of the front face for various plasma Scrape Off Layer profiles, a comprehensive HFSS model of the matching capacitors including internal bellows and electrode cylinders, 3D-EM models of the VTL including vacuum ceramic window, Service stub, a transmission line model of the 2nd stage matching circuit and main transmission lines including the 3dB hybrid splitters. A time evolving simulation using the improved circuit model allowed to design and
Xiang, D.; Ni, W.; Zhang, H.; Wu, J.; Yan, W.; Su, Y.
2017-09-01
Superpixel segmentation has the advantage that it can well preserve the target shape and details. In this research, an adaptive polarimetric SLIC (Pol-ASLIC) superpixel segmentation method is proposed. First, the spherically invariant random vector (SIRV) product model is adopted to estimate the normalized covariance matrix and texture for each pixel. A new edge detector is then utilized to extract PolSAR image edges for the initialization of central seeds. In the local iterative clustering, multiple cues including polarimetric, texture, and spatial information are considered to define the similarity measure. Moreover, a polarimetric homogeneity measurement is used to automatically determine the tradeoff factor, which can vary from homogeneous areas to heterogeneous areas. Finally, the SLIC superpixel segmentation scheme is applied to the airborne Experimental SAR and PiSAR L-band PolSAR data to demonstrate the effectiveness of the proposed segmentation approach. The proposed algorithm produces compact superpixels which adhere well to image boundaries in both natural and urban areas. The detail information in heterogeneous areas is well preserved.
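At the core of SLIC-style local iterative clustering is a combined distance mixing a feature term with a spatially scaled term through a tradeoff factor m. In the paper, m is adapted from a polarimetric homogeneity measure; the sketch below keeps it fixed and uses generic feature vectors, so it is illustrative only:

```python
import numpy as np

# SLIC-style combined distance: feature distance plus spatial distance
# scaled by (m/S), where S is the expected superpixel spacing and m the
# compactness/tradeoff factor (adaptive in the paper, fixed here).
def slic_distance(feat, xy, center_feat, center_xy, S, m=10.0):
    d_feat = np.linalg.norm(feat - center_feat, axis=-1)
    d_xy = np.linalg.norm(xy - center_xy, axis=-1)
    return np.sqrt(d_feat**2 + (m / S) ** 2 * d_xy**2)

feat = np.array([[0.1, 0.2], [0.9, 0.8]])    # per-pixel feature vectors
xy = np.array([[0.0, 0.0], [5.0, 5.0]])      # pixel coordinates
d = slic_distance(feat, xy,
                  center_feat=np.array([0.0, 0.0]),
                  center_xy=np.array([0.0, 0.0]),
                  S=5.0, m=10.0)
```

Raising m tightens the spatial term and yields more compact superpixels; lowering it lets clusters follow feature boundaries more freely, which is exactly the tradeoff the homogeneity measure automates in the paper.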
Directory of Open Access Journals (Sweden)
D. Xiang
2017-09-01
Superpixel segmentation has the advantage that it can well preserve the target shape and details. In this research, an adaptive polarimetric SLIC (Pol-ASLIC) superpixel segmentation method is proposed. First, the spherically invariant random vector (SIRV) product model is adopted to estimate the normalized covariance matrix and texture for each pixel. A new edge detector is then utilized to extract PolSAR image edges for the initialization of central seeds. In the local iterative clustering, multiple cues including polarimetric, texture, and spatial information are considered to define the similarity measure. Moreover, a polarimetric homogeneity measurement is used to automatically determine the tradeoff factor, which can vary from homogeneous areas to heterogeneous areas. Finally, the SLIC superpixel segmentation scheme is applied to the airborne Experimental SAR and PiSAR L-band PolSAR data to demonstrate the effectiveness of the proposed segmentation approach. The proposed algorithm produces compact superpixels which adhere well to image boundaries in both natural and urban areas. The detail information in heterogeneous areas is well preserved.
Baumes, Laurent A
2006-01-01
One of the main problems in high-throughput materials research is still the design of experiments. At early stages of discovery programs, purely exploratory methodologies coupled with fast screening tools should be employed. This should create opportunities to find unexpected catalytic results and to identify the "groups" of catalyst outputs, providing well-defined boundaries for future optimizations. However, very few recent papers deal with strategies that guide exploratory studies; mostly, traditional designs, homogeneous coverings, or simple random samplings are exploited. Typical catalytic output distributions exhibit unbalanced datasets on which efficient learning is hard to carry out, and interesting but rare classes usually go unrecognized. Here, a new iterative algorithm is suggested for characterizing the structure of the search space, working independently of the learning process. It enhances recognition rates by transferring catalysts to be screened from "performance-stable" zones of the space to "unsteady" ones, which require more experiments to be well modelled. Evaluating new algorithms through benchmarks is essential given the lack of prior evidence about their efficiency. The method is detailed and thoroughly tested on mathematical functions exhibiting different levels of complexity. The strategy is not only evaluated empirically; the effect of the sampling on subsequent machine learning performance is also quantified. The minimum sample size the algorithm requires to be statistically distinguishable from simple random sampling is investigated.
Indirect iterative learning control for a discrete visual servo without a camera-robot model.
Jiang, Ping; Bamforth, Leon C A; Feng, Zuren; Baruch, John E F; Chen, YangQuan
2007-08-01
This paper presents a discrete learning controller for vision-guided robot trajectory imitation with no prior knowledge of the camera-robot model. A teacher demonstrates a desired movement in front of a camera, and then, the robot is tasked to replay it by repetitive tracking. The imitation procedure is considered as a discrete tracking control problem in the image plane, with an unknown and time-varying image Jacobian matrix. Instead of updating the control signal directly, as is usually done in iterative learning control (ILC), a series of neural networks are used to approximate the unknown Jacobian matrix around every sample point in the demonstrated trajectory, and the time-varying weights of local neural networks are identified through repetitive tracking, i.e., indirect ILC. This makes repetitive segmented training possible, and a segmented training strategy is presented to retain the training trajectories solely within the effective region for neural network approximation. However, a singularity problem may occur if an unmodified neural-network-based Jacobian estimation is used to calculate the robot end-effector velocity. A new weight modification algorithm is proposed which ensures invertibility of the estimation, thus circumventing the problem. Stability is further discussed, and the relationship between the approximation capability of the neural network and the tracking accuracy is obtained. Simulations and experiments are carried out to illustrate the validity of the proposed controller for trajectory imitation of robot manipulators with unknown time-varying Jacobian matrices.
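The indirect update idea above, identifying the unknown Jacobian from trial data and correcting the control through the estimate rather than updating it directly, can be sketched in a scalar toy form. The plant, the finite-difference identification, and the gains are illustrative stand-ins for the paper's neural-network scheme:

```python
def indirect_ilc(target, plant, trials=25):
    # Indirect ILC sketch (scalar case): the Jacobian estimate J_hat is
    # identified from trial input/output increments, and the control is
    # corrected through it instead of being updated directly. `plant`
    # stands in for the unknown camera-robot map. The paper's weight
    # modification guards invertibility of the estimate; this toy
    # version simply assumes a nonzero plant slope.
    u_prev, y_prev = 0.0, plant(0.0)
    J_hat = 1.0                        # initial Jacobian guess
    u = u_prev + (target - y_prev) / J_hat
    for _ in range(trials):
        y = plant(u)
        du, dy = u - u_prev, y - y_prev
        if abs(du) > 1e-9:
            J_hat = dy / du            # identify Jacobian from the trial
        u_prev, y_prev = u, y
        u = u + (target - y) / J_hat   # indirect learning update
    return u_prev, y_prev
```

On a linear plant y = 2u, the identified Jacobian converges to 2 and the output tracks the target after a couple of trials.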
International Nuclear Information System (INIS)
Knoll, D.A.; McHugh, P.R.; Krasheninnikov, S.I.; Sigmar, D.J.
1996-01-01
A combined edge plasma/Navier-Stokes neutral transport model is used to simulate dissipative divertor plasmas in the collisional limit for neutrals on a simplified two-dimensional slab geometry with ITER-like plasma conditions and scale lengths. The neutral model contains three momentum equations which are coupled to the plasma through ionization, recombination, and ion-neutral elastic collisions. The neutral transport coefficients are evaluated including both ion-neutral and neutral-neutral collisions. (orig.)
Lalush, D. S.; Tsui, B. M. W.
1998-06-01
We study the statistical convergence properties of two fast iterative reconstruction algorithms, the rescaled block-iterative (RBI) and ordered subset (OS) EM algorithms, in the context of cardiac SPECT with 3D detector response modeling. The Monte Carlo method was used to generate nearly noise-free projection data modeling the effects of attenuation, detector response, and scatter from the MCAT phantom. One thousand noise realizations were generated with an average count level approximating a typical Tl-201 cardiac study. Each noise realization was reconstructed using the RBI and OS algorithms for cases with and without detector response modeling. For each iteration up to twenty, we generated mean and variance images, as well as covariance images for six specific locations. Both OS and RBI converged in the mean to results that were close to the noise-free ML-EM result using the same projection model. When detector response was not modeled in the reconstruction, RBI exhibited considerably lower noise variance than OS for the same resolution. When 3D detector response was modeled, the RBI-EM provided a small improvement in the tradeoff between noise level and resolution recovery, primarily in the axial direction, while OS required about half the number of iterations of RBI to reach the same resolution. We conclude that OS is faster than RBI, but may be sensitive to errors in the projection model. Both OS-EM and RBI-EM are effective alternatives to the ML-EM algorithm, but noise level and speed of convergence depend on the projection model used.
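For reference, the ML-EM multiplicative update and its ordered-subsets acceleration discussed in this record can be sketched on a toy system matrix; this is a minimal sketch without the detector-response, attenuation, and scatter modeling used in the study:

```python
import numpy as np

def ml_em(A, y, n_iter=50):
    # Classic ML-EM for emission tomography:
    #   x <- x / (A^T 1) * A^T (y / (A x))
    # A is the system (projection) matrix, y the measured counts.
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])          # sensitivity image
    for _ in range(n_iter):
        x *= (A.T @ (y / (A @ x))) / sens
    return x

def os_em(A, y, n_subsets=2, n_iter=25):
    # Ordered-subsets EM: the same update applied over row subsets of A,
    # giving roughly an n_subsets-fold acceleration per full pass.
    x = np.ones(A.shape[1])
    subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
    for _ in range(n_iter):
        for idx in subsets:
            As, ys = A[idx], y[idx]
            x *= (As.T @ (ys / (As @ x))) / (As.T @ np.ones(len(idx)))
    return x
```

With consistent, noise-free data both iterations drive the forward projection A x toward y; with noisy data OS-EM cycles rather than converging, which is the behavior studied above.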
Modelling of the edge of a fusion plasma towards ITER and experimental validation on JET
International Nuclear Information System (INIS)
Guillemaut, Christophe
2013-01-01
The conditions required for fusion can be obtained in tokamaks. In most of these machines, the plasma-wall interaction and the exhaust of heating power are handled in a cavity called the divertor. However, the high heat fluxes involved and the limitations of the materials of the plasma-facing components (PFC) are problematic. Much research is being done in this field in the context of ITER, which should demonstrate 500 MW of DT fusion power during ∼ 400 s. Such operation could raise the heat flux on the PFC beyond manageable levels. Its reduction relies on divertor detachment, which involves the reduction of the particle and heat fluxes on the PFC. Unfortunately, this phenomenon is still difficult to model. The aim of this PhD is to use the modelling of JET experiments with EDGE2D-EIRENE to make progress in the understanding of detachment. The simulations reproduce the observed detachment in C and Be/W environments. The distribution of the radiation is well reproduced by the code for C, but with some discrepancies for Be/W. The comparison between different sets of atomic physics processes shows that ion-molecule elastic collisions are responsible for the detachment seen in EDGE2D-EIRENE. This process provides good neutral confinement in the divertor and significant momentum losses at low temperature, when the plasma is recombining. Comparison between EDGE2D-EIRENE and SOLPS4.3 shows similar detachment trends, but the importance of the ion-molecule elastic collisions is reduced in SOLPS4.3. Both codes suggest that any process capable of improving the neutral confinement in the divertor should help to improve the modelling of detachment. (author) [fr
A family of small-world network models built by complete graph and iteration-function
Ma, Fei; Yao, Bing
2018-02-01
Small-world networks are common in real-life complex systems. In the past few decades, researchers have presented many small-world models, some stochastic and the rest deterministic. In comparison with random models, it is both convenient and interesting to study the topological properties of deterministic models in fields such as graph theory and theoretical computer science. Community structure (modular topology), another focus of current research, is a useful statistical property for uncovering the operating functions of a network, so building and studying models with both community structure and the small-world character is a worthwhile task. Hence, in this article we build a family of sparse networks N(t) that differs from previous deterministic models, even though our models are established in the same way, by iterative generation. Because of the random connecting manner in each time step, the resulting members of N(t) lack the strictly self-similar feature widely shared by many previous models. This shifts the focus from discussing one particular model to investigating a group of various models spanning a network space. Somewhat surprisingly, our results prove that all members of N(t) possess similar characteristics: (a) sparsity, (b) an exponential degree distribution P(k) ∼ α^(-k), and (c) the small-world property. We must stress a striking and intriguing phenomenon: the difference in average path length (APL) between any two members of N(t) is quite small, which indicates that the random connecting among members has little effect on the APL. At the end of this article, the number of spanning trees on a representative member NB(t) of N(t), a topological parameter correlated with the reliability, synchronization capability, and diffusion properties of networks, is studied in detail, and an exact analytical solution for its spanning-tree entropy is also obtained.
Integrated predictive modelling simulations of burning plasma experiment designs
International Nuclear Information System (INIS)
Bateman, Glenn; Onjun, Thawatchai; Kritz, Arnold H
2003-01-01
Models for the height of the pedestal at the edge of H-mode plasmas (Onjun T et al 2002 Phys. Plasmas 9 5018) are used together with the Multi-Mode core transport model (Bateman G et al 1998 Phys. Plasmas 5 1793) in the BALDUR integrated predictive modelling code to predict the performance of the ITER (Aymar A et al 2002 Plasma Phys. Control. Fusion 44 519), FIRE (Meade D M et al 2001 Fusion Technol. 39 336), and IGNITOR (Coppi B et al 2001 Nucl. Fusion 41 1253) fusion reactor designs. The simulation protocol used in this paper is tested by comparing predicted temperature and density profiles against experimental data from 33 H-mode discharges in the JET (Rebut P H et al 1985 Nucl. Fusion 25 1011) and DIII-D (Luxon J L et al 1985 Fusion Technol. 8 441) tokamaks. The sensitivities of the predictions are evaluated for the burning plasma experimental designs by using variations of the pedestal temperature model that are one standard deviation above and below the standard model. Simulations of the fusion reactor designs are carried out for scans in which the plasma density and auxiliary heating power are varied.
Energy Technology Data Exchange (ETDEWEB)
Jia, Qianjun, E-mail: jiaqianjun@126.com [Southern Medical University, Guangzhou, Guangdong (China); Department of Radiology, Guangdong General Hospital, Guangdong Academy of Medical Sciences, Guangzhou, Guangdong (China); Department of Catheterization Lab, Guangdong Cardiovascular Institute, Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong General Hospital, Guangdong Academy of Medical Sciences, Guangzhou, Guangdong (China); Zhuang, Jian, E-mail: zhuangjian5413@tom.com [Department of Cardiac Surgery, Guangdong Cardiovascular Institute, Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong General Hospital, Guangdong Academy of Medical Sciences, Guangzhou, Guangdong (China); Jiang, Jun, E-mail: 81711587@qq.com [Department of Radiology, Shenzhen Second People’s Hospital, Shenzhen, Guangdong (China); Li, Jiahua, E-mail: 970872804@qq.com [Department of Catheterization Lab, Guangdong Cardiovascular Institute, Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong General Hospital, Guangdong Academy of Medical Sciences, Guangzhou, Guangdong (China); Huang, Meiping, E-mail: huangmeiping_vip@163.com [Department of Catheterization Lab, Guangdong Cardiovascular Institute, Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong General Hospital, Guangdong Academy of Medical Sciences, Guangzhou, Guangdong (China); Southern Medical University, Guangzhou, Guangdong (China); Liang, Changhong, E-mail: cjr.lchh@vip.163.com [Department of Radiology, Guangdong General Hospital, Guangdong Academy of Medical Sciences, Guangzhou, Guangdong (China); Southern Medical University, Guangzhou, Guangdong (China)
2017-01-15
Purpose: To compare the image quality, rate of coronary artery visualization and diagnostic accuracy of 256-slice multi-detector computed tomography angiography (CTA) with prospective electrocardiographic (ECG) triggering at a tube voltage of 80 kVp between 3 reconstruction algorithms (filtered back projection (FBP), hybrid iterative reconstruction (iDose⁴) and iterative model reconstruction (IMR)) in infants with congenital heart disease (CHD). Methods: Fifty-one infants with CHD who underwent cardiac CTA in our institution between December 2014 and March 2015 were included. The effective radiation doses were calculated. Imaging data were reconstructed using the FBP, iDose⁴ and IMR algorithms. Parameters of objective image quality (noise, signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR)); subjective image quality (overall image quality, image noise and margin sharpness); coronary artery visibility; and diagnostic accuracy for the three algorithms were measured and compared. Results: The mean effective radiation dose was 0.61 ± 0.32 mSv. Compared to FBP and iDose⁴, IMR yielded significantly lower noise (P < 0.01), higher SNR and CNR values (P < 0.01), and a greater subjective image quality score (P < 0.01). The total number of coronary segments visualized was significantly higher for both iDose⁴ and IMR than for FBP (P = 0.002 and P = 0.025, respectively), but there was no significant difference in this parameter between iDose⁴ and IMR (P = 0.397). There was no significant difference in the diagnostic accuracy between the FBP, iDose⁴ and IMR algorithms (χ² = 0.343, P = 0.842). Conclusions: For infants with CHD undergoing cardiac CTA, the IMR reconstruction algorithm provided significantly increased objective and subjective image quality compared with the FBP and iDose⁴ algorithms. However, IMR did not improve the diagnostic accuracy or coronary artery visualization compared with iDose⁴.
Chuang, Hsueh-Hua
The purpose of this dissertation is to develop an iterative model for the analysis of the current distribution in vertical-cavity surface-emitting lasers (VCSELs) using a circuit network modeling approach. This iterative model divides the VCSEL structure into numerous annular elements and uses a circuit network consisting of resistors and diodes. The measured sheet resistance of the p-distributed Bragg reflector (DBR), the measured sheet resistance of the layers under the oxide layer, and two empirical adjustable parameters are used as inputs to the iterative model to determine the resistance of each resistor. The two empirical values are related to the anisotropy of the resistivity of the p-DBR structure. The spontaneous current, stimulated current, and surface recombination current are accounted for by the diodes. The lateral carrier transport in the quantum well region is analyzed using drift and diffusion currents. The optical gain is calculated as a function of wavelength and carrier density from fundamental principles. The predicted threshold current densities for these VCSELs match the experimentally measured current densities over the wavelength range of 0.83 μm to 0.86 μm with an error of less than 5%. This model includes the effects of the resistance of the p-DBR mirrors, the oxide current-confining layer and spatial hole burning. Our model shows that higher sheet resistance under the oxide layer reduces the threshold current, but also reduces the current range over which single transverse mode operation occurs. The spatial hole burning profile depends on the lateral drift and diffusion of carriers in the quantum wells but is dominated by the voltage drop across the p-DBR region. To my knowledge, for the first time, the drift current and the diffusion current are treated separately. Previous work uses an ambipolar approach, which underestimates the total charge transferred in the quantum well region, especially under the oxide region. However, the total
Boosting iterative stochastic ensemble method for nonlinear calibration of subsurface flow models
Elsheikh, Ahmed H.
2013-06-01
A novel parameter estimation algorithm is proposed. The inverse problem is formulated as a sequential data integration problem in which Gaussian process regression (GPR) is used to integrate the prior knowledge (static data). The search space is further parameterized using a Karhunen-Loève expansion to build a set of basis functions that spans the search space. Optimal weights of the reduced basis functions are estimated by an iterative stochastic ensemble method (ISEM). ISEM employs directional derivatives within a Gauss-Newton iteration for efficient gradient estimation. The resulting update equation relies on the inverse of the output covariance matrix, which is rank deficient. In the proposed algorithm we use an iterative regularization based on the ℓ2 Boosting algorithm. ℓ2 Boosting iteratively fits the residual, and the amount of regularization is controlled by the number of iterations. A termination criterion based on the Akaike information criterion (AIC) is utilized. This regularization method is very attractive in terms of performance and simplicity of implementation. The proposed algorithm combining ISEM and ℓ2 Boosting is evaluated on several nonlinear subsurface flow parameter estimation problems. The efficiency of the proposed algorithm is demonstrated by the small size of the utilized ensembles and by the error convergence rates. © 2013 Elsevier B.V.
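The ℓ2 Boosting regularization used here, iteratively fitting the residual so that the iteration count controls the amount of regularization, can be sketched with a componentwise least-squares base learner. This is an illustrative stand-in: the paper applies the idea inside the ISEM update, not to a plain linear model:

```python
import numpy as np

def l2_boost(X, y, nu=0.1, n_steps=200):
    # Componentwise L2 Boosting: repeatedly fit the current residual
    # with a weak base learner (one least-squares coordinate, shrunk
    # by nu). Stopping early regularizes; here the step count is fixed,
    # whereas the paper stops via an AIC criterion.
    n, p = X.shape
    beta = np.zeros(p)
    r = y.astype(float).copy()
    for _ in range(n_steps):
        j = np.argmax(np.abs(X.T @ r))            # most correlated column
        step = (X[:, j] @ r) / (X[:, j] @ X[:, j])
        beta[j] += nu * step                      # shrunken update
        r -= nu * step * X[:, j]                  # refit the residual
    return beta
```

Run long enough on consistent data, the boosted coefficients approach the least-squares fit; stopping earlier yields a shrunken, regularized estimate.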
Moyer, R. A.; Paz-Soldan, C.; Nazikian, R.; Orlov, D. M.; Ferraro, N. M.; Grierson, B. A.; Knölker, M.; Lyons, B. C.; McKee, G. R.; Osborne, T. H.; Rhodes, T. L.; Meneghini, O.; Smith, S.; Evans, T. E.; Fenstermacher, M. E.; Groebner, R. J.; Hanson, J. M.; La Haye, R. J.; Luce, T. C.; Mordijck, S.; Solomon, W. M.; Turco, F.; Yan, Z.; Zeng, L.; DIII-D Team
2017-10-01
Experiments have been executed in the DIII-D tokamak to extend suppression of Edge Localized Modes (ELMs) with Resonant Magnetic Perturbations (RMPs) to ITER-relevant levels of beam torque. The results support the hypothesis for RMP ELM suppression based on transition from an ideal screened response to a tearing response at a resonant surface that prevents expansion of the pedestal to an unstable width [Snyder et al., Nucl. Fusion 51, 103016 (2011) and Wade et al., Nucl. Fusion 55, 023002 (2015)]. In ITER baseline plasmas with I/aB = 1.4 and pedestal ν* ≈ 0.15, ELMs are readily suppressed with co-Ip neutral beam injection. However, reducing the beam torque from 5 Nm to ≤ 3.5 Nm results in loss of ELM suppression and a shift of the zero-crossing of the electron perpendicular rotation ω⊥e ≈ 0 deeper into the plasma. The change in radius of ω⊥e ≈ 0 is due primarily to changes in the electron diamagnetic rotation frequency ωe*. Linear plasma response modeling with the resistive MHD code M3D-C1 indicates that the tearing response location tracks the inward shift in ω⊥e ≈ 0. At pedestal ν* ≈ 1, ELM suppression is also lost when the beam torque is reduced, but the ω⊥e change is dominated by collapse of the toroidal rotation vT. The hypothesis predicts that it should be possible to obtain ELM suppression at reduced beam torque by also reducing the height and width of the ωe* profile. This prediction has been confirmed experimentally with RMP ELM suppression at 0 Nm of beam torque and plasma normalized pressure βN ≈ 0.7. This opens the possibility of accessing ELM suppression in low-torque ITER baseline plasmas by establishing suppression at low beta and then increasing beta while relying on the strong RMP-island coupling to maintain suppression.
Study of wall conditioning in tokamaks with application to ITER
International Nuclear Information System (INIS)
Kogut, Dmitri
2014-01-01
The thesis is devoted to studies of the performance and efficiency of wall-conditioning techniques in fusion reactors such as ITER. Conditioning is necessary to control the state of the surface of the plasma-facing components to ensure plasma initiation and performance. Conditioning and operation of the JET tokamak with the ITER-relevant material mix is extensively studied. A 2D model of glow-conditioning discharges is developed and validated; it predicts reasonably uniform discharges in ITER. In the nuclear phase of ITER operation, conditioning will be needed to control the tritium inventory. It is shown here that isotopic exchange is an efficient means of eliminating tritium from the walls by replacing it with deuterium. Extrapolations for tritium removal are comparable with the expected retention per nominal plasma pulse in ITER. A 1D model of hydrogen isotopic exchange in beryllium is developed and validated. It shows that the fluence and the temperature of the surface influence the efficiency of the isotopic exchange. (author) [fr
Towards an Iterated Game Model with Multiple Adversaries in Smart-World Systems
Directory of Open Access Journals (Sweden)
Xiaofei He
2018-02-01
Full Text Available Diverse and varied cyber-attacks challenge the operation of the smart-world system that is supported by the Internet-of-Things (IoT) (smart cities, smart grid, smart transportation, etc.) and must be carefully and thoughtfully addressed before widespread adoption of the smart-world system can be fully realized. Although a number of research efforts have been devoted to defending against these threats, a majority of existing schemes focus on the development of a specific defensive strategy to deal with specific, often singular threats. In this paper, we address the issue of coalitional attacks, which can be launched by multiple adversaries cooperatively against the smart-world system such as smart cities. Particularly, we propose a game-theory based model to capture the interaction among multiple adversaries, and quantify the capacity of the defender based on the extended Iterated Public Goods Game (IPGG) model. In the formalized game model, in each round of the attack, a participant can either cooperate by participating in the coalitional attack, or defect by standing aside. In our work, we consider the generic defensive strategy that has a probability to detect the coalitional attack. When the coalitional attack is detected, all participating adversaries are penalized. The expected payoff of each participant is derived through the equalizer strategy that provides participants with competitive benefits. The multiple adversaries with the collusive strategy are also considered. Via a combination of theoretical analysis and experimentation, our results show that no matter which strategies the adversaries choose (random strategy, win-stay-lose-shift strategy, or even the adaptive equalizer strategy), our formalized game model is capable of enabling the defender to greatly reduce the maximum value of the expected average payoff to the adversaries via provisioning sufficient defensive resources, which is reflected by setting a proper penalty factor against the adversaries.
Towards an Iterated Game Model with Multiple Adversaries in Smart-World Systems †
Yang, Xinyu; Yu, Wei; Lin, Jie; Yang, Qingyu
2018-01-01
Diverse and varied cyber-attacks challenge the operation of the smart-world system that is supported by Internet-of-Things (IoT) (smart cities, smart grid, smart transportation, etc.) and must be carefully and thoughtfully addressed before widespread adoption of the smart-world system can be fully realized. Although a number of research efforts have been devoted to defending against these threats, a majority of existing schemes focus on the development of a specific defensive strategy to deal with specific, often singular threats. In this paper, we address the issue of coalitional attacks, which can be launched by multiple adversaries cooperatively against the smart-world system such as smart cities. Particularly, we propose a game-theory based model to capture the interaction among multiple adversaries, and quantify the capacity of the defender based on the extended Iterated Public Goods Game (IPGG) model. In the formalized game model, in each round of the attack, a participant can either cooperate by participating in the coalitional attack, or defect by standing aside. In our work, we consider the generic defensive strategy that has a probability to detect the coalitional attack. When the coalitional attack is detected, all participating adversaries are penalized. The expected payoff of each participant is derived through the equalizer strategy that provides participants with competitive benefits. The multiple adversaries with the collusive strategy are also considered. Via a combination of theoretical analysis and experimentation, our results show that no matter which strategies the adversaries choose (random strategy, win-stay-lose-shift strategy, or even the adaptive equalizer strategy), our formalized game model is capable of enabling the defender to greatly reduce the maximum value of the expected average payoff to the adversaries via provisioning sufficient defensive resources, which is reflected by setting a proper penalty factor against the adversaries
Towards an Iterated Game Model with Multiple Adversaries in Smart-World Systems.
He, Xiaofei; Yang, Xinyu; Yu, Wei; Lin, Jie; Yang, Qingyu
2018-02-24
Diverse and varied cyber-attacks challenge the operation of the smart-world system that is supported by Internet-of-Things (IoT) (smart cities, smart grid, smart transportation, etc.) and must be carefully and thoughtfully addressed before widespread adoption of the smart-world system can be fully realized. Although a number of research efforts have been devoted to defending against these threats, a majority of existing schemes focus on the development of a specific defensive strategy to deal with specific, often singular threats. In this paper, we address the issue of coalitional attacks, which can be launched by multiple adversaries cooperatively against the smart-world system such as smart cities. Particularly, we propose a game-theory based model to capture the interaction among multiple adversaries, and quantify the capacity of the defender based on the extended Iterated Public Goods Game (IPGG) model. In the formalized game model, in each round of the attack, a participant can either cooperate by participating in the coalitional attack, or defect by standing aside. In our work, we consider the generic defensive strategy that has a probability to detect the coalitional attack. When the coalitional attack is detected, all participating adversaries are penalized. The expected payoff of each participant is derived through the equalizer strategy that provides participants with competitive benefits. The multiple adversaries with the collusive strategy are also considered. Via a combination of theoretical analysis and experimentation, our results show that no matter which strategies the adversaries choose (random strategy, win-stay-lose-shift strategy, or even the adaptive equalizer strategy), our formalized game model is capable of enabling the defender to greatly reduce the maximum value of the expected average payoff to the adversaries via provisioning sufficient defensive resources, which is reflected by setting a proper penalty factor against the adversaries
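A minimal sketch of the expected per-round payoff in such a public-goods attack game with a detection probability and a penalty factor; the specific payoff formula and all numbers are illustrative assumptions, not the paper's exact IPGG formulation:

```python
def ipgg_round_payoff(n_coop, N, c=1.0, r=3.0, p_detect=0.3, penalty=5.0):
    # Expected per-round payoff in a public-goods-style attack game:
    # each of n_coop cooperating adversaries contributes cost c to the
    # coalitional attack, the pooled gain is multiplied by r and shared
    # among all N participants, and with probability p_detect the
    # defender detects the coalition and each cooperator pays `penalty`.
    pot = r * c * n_coop / N                      # shared public good
    payoff_coop = pot - c - p_detect * penalty    # cooperator's expectation
    payoff_defect = pot                           # free-rider's expectation
    return payoff_coop, payoff_defect
```

Raising the penalty factor (or the detection probability) drives the cooperator's expected payoff negative, which is the lever the defender uses to discourage coalitional attacks.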
Implementation of the Iterative Proportion Fitting Algorithm for Geostatistical Facies Modeling
International Nuclear Information System (INIS)
Li Yupeng; Deutsch, Clayton V.
2012-01-01
In geostatistics, most stochastic algorithms for the simulation of categorical variables such as facies or rock types require a conditional probability distribution. The multivariate probability distribution of all the grouped locations, including the unsampled location, permits calculation of the conditional probability directly from its definition. In this article, the iterative proportion fitting (IPF) algorithm is implemented to infer this multivariate probability. Using the IPF algorithm, the multivariate probability is obtained by iterative modification of an initial estimate using lower-order bivariate probabilities as constraints. The imposed bivariate marginal probabilities are inferred from profiles along drill holes or wells. In the IPF process, a sparse matrix is used to calculate the marginal probabilities from the multivariate probability, which makes the iterative fitting more tractable and practical. This algorithm can be extended to higher-order marginal probability constraints, as used in multiple-point statistics. The theoretical framework is developed and illustrated with an estimation and simulation example.
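The classical IPF iteration, alternately rescaling a joint table so its marginals match imposed lower-order constraints, can be sketched for a two-variable case (the article applies the same idea to higher-order multivariate probabilities):

```python
import numpy as np

def ipf(table, row_targets, col_targets, n_iter=100, tol=1e-9):
    # Iterative proportional fitting: alternately rescale the rows and
    # columns of an initial joint table so that its marginals match the
    # imposed constraints. Converges when the targets are consistent.
    p = table.astype(float).copy()
    for _ in range(n_iter):
        p *= (row_targets / p.sum(axis=1))[:, None]   # fit row marginals
        p *= (col_targets / p.sum(axis=0))[None, :]   # fit column marginals
        if np.allclose(p.sum(axis=1), row_targets, atol=tol):
            break
    return p
```

Starting from a uniform table, the fitted joint distribution reproduces both sets of marginals while staying as close as possible (in KL divergence) to the initial estimate.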
International Nuclear Information System (INIS)
Lalush, D.S.; Tsui, B.M.W.; Karimi, S.S.
1996-01-01
We evaluate fast reconstruction algorithms including ordered subsets-EM (OS-EM) and Rescaled Block Iterative EM (RBI-EM) in fully 3D SPECT applications on the basis of their convergence and resolution recovery properties as iterations proceed. Using a 3D computer-simulated phantom consisting of 3D Gaussian objects, we simulated projection data that includes only the effects of sampling and detector response of a parallel-hole collimator. Reconstructions were performed using each of the three algorithms (ML-EM, OS-EM, and RBI-EM) modeling the 3D detector response in the projection function. Resolution recovery was evaluated by fitting Gaussians to each of the four objects in the iterated image estimates at selected intervals. Results show that OS-EM and RBI-EM behave identically in this case; their resolution recovery results are virtually indistinguishable. Their resolution behavior appears to be very similar to that of ML-EM, but accelerated by a factor of twenty. For all three algorithms, smaller objects take more iterations to converge. Next, we consider the effect noise has on convergence. For both noise-free and noisy data, we evaluate the log likelihood function at each subiteration of OS-EM and RBI-EM, and at each iteration of ML-EM. With noisy data, both OS-EM and RBI-EM give results for which the log-likelihood function oscillates. Especially for 180-degree acquisitions, RBI-EM oscillates less than OS-EM. Both OS-EM and RBI-EM appear to converge to solutions, but not to the ML solution. We conclude that both OS-EM and RBI-EM can be effective algorithms for fully 3D SPECT reconstruction. Both recover resolution similarly to ML-EM, only more quickly
Czech Academy of Sciences Publication Activity Database
Schmitz, O.; Becoulet, M.; Cahyna, Pavel; Evans, T.E.; Feng, Y.; Frerichs, H.; Loarte, A.; Pitts, R.A.; Reiser, D.; Fenstermacher, M.E.; Harting, D.; Kirschner, A.; Kukushkin, A.; Lunt, T.; Saibene, G.; Reiter, D.; Samm, U.; Wiesen, S.
2016-01-01
Roč. 56, č. 6 (2016), č. článku 066008. ISSN 0029-5515 Institutional support: RVO:61389021 Keywords : resonant magnetic perturbations * plasma edge physics * 3D modeling * neutral particle physics * ITER * divertor heat and particle loads * ELM control Subject RIV: BL - Plasma and Gas Discharge Physics OBOR OECD: Fluids and plasma physics (including surface physics) Impact factor: 3.307, year: 2016 http://iopscience.iop.org/article/10.1088/0029-5515/56/6/066008/meta
Directory of Open Access Journals (Sweden)
Yu Zhang
2015-10-01
Full Text Available In this article, we begin with the non-homogeneous model for non-differentiable heat flow, which is described using the local fractional vector calculus, from the point of view of the first law of thermodynamics in fractal media. We employ the local fractional variational iteration algorithm II to solve the fractal heat equations. The obtained results show the non-differentiable behavior of the temperature fields of fractal heat flow defined on Cantor sets.
International Nuclear Information System (INIS)
Wuerz, H.; Arkhipov, N.I.; Bakhtin, V.P.; Konkashbaev, I.; Landman, I.; Safronov, V.M.; Toporkov, D.A.; Zhitlukhin, A.M.
1995-01-01
The high divertor heat load during a tokamak plasma disruption results in sudden evaporation of a thin layer of divertor plate material, which acts as vapor shield and protects the target from further excessive evaporation. Formation and effectiveness of the vapor shield are theoretically modeled and are experimentally analyzed at the 2MK-200 facility under conditions simulating the thermal quench phase of ITER tokamak plasma disruptions. ((orig.))
Model Prediction Control For Water Management Using Adaptive Prediction Accuracy
Tian, X.; Negenborn, R.R.; Van Overloop, P.J.A.T.M.; Mostert, E.
2014-01-01
In the field of operational water management, Model Predictive Control (MPC) has gained popularity owing to its versatility and flexibility. The MPC controller, which takes predictions, time delay and uncertainties into account, can be designed for multi-objective management problems and for
Development of estrogen receptor beta binding prediction model using large sets of chemicals.
Sakkiah, Sugunadevi; Selvaraj, Chandrabose; Gong, Ping; Zhang, Chaoyang; Tong, Weida; Hong, Huixiao
2017-11-03
We developed an ERβ binding prediction model to facilitate identification of chemicals that specifically bind ERβ or ERα, complementing our previously developed ERα binding model. Decision Forest was used to train the ERβ binding prediction model on a large set of compounds obtained from EADB. Model performance was estimated through 1000 iterations of 5-fold cross-validation. Prediction confidence was analyzed using predictions from the cross-validations. Informative chemical features for ERβ binding were identified by analyzing the frequency of the chemical descriptors used in the models in the 5-fold cross-validations. 1000 permutations were conducted to assess chance correlation. The average accuracy of the 5-fold cross-validations was 93.14%, with a standard deviation of 0.64%. Prediction confidence analysis indicated that the higher the prediction confidence, the more accurate the predictions. Permutation testing revealed that the prediction model is unlikely to have been generated by chance. Eighteen informative descriptors were identified as important to ERβ binding prediction. Application of the prediction model to data from the ToxCast project yielded a very high sensitivity of 90-92%. Our results demonstrate that ERβ binding of chemicals can be accurately predicted using the developed model. Coupled with our previously developed ERα prediction model, this model can be expected to facilitate drug development through identification of chemicals that specifically bind ERβ or ERα.
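The validation protocol above (repeated 5-fold cross-validation with mean and standard deviation of accuracy) can be sketched independently of the classifier. Here a trivial threshold rule stands in for Decision Forest and the data are synthetic; both are assumptions for illustration only:

```python
# Sketch of repeated 5-fold cross-validation (50 repeats here, 1000 in the
# paper). The "classifier" is a hypothetical one-feature threshold rule.
import random
import statistics

def five_fold_accuracy(data, rng):
    rng.shuffle(data)
    folds = [data[i::5] for i in range(5)]
    accs = []
    for k in range(5):
        test = folds[k]
        train = [d for i, f in enumerate(folds) if i != k for d in f]
        thr = statistics.mean(x for x, _ in train)   # "train" the threshold
        correct = sum((x > thr) == y for x, y in test)
        accs.append(correct / len(test))
    return statistics.mean(accs)

rng = random.Random(0)
# synthetic "binders" (y=1) have larger feature values than "non-binders" (y=0)
data = [(rng.gauss(1.0, 0.5), 1) for _ in range(100)] + \
       [(rng.gauss(-1.0, 0.5), 0) for _ in range(100)]

runs = [five_fold_accuracy(list(data), rng) for _ in range(50)]
print(f"accuracy {statistics.mean(runs):.3f} +/- {statistics.stdev(runs):.4f}")
```

The repeat-to-repeat standard deviation quantifies the stability of the estimate, which is what the reported 93.14% ± 0.64% summarizes.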
Energy Technology Data Exchange (ETDEWEB)
Almansouri, Hani [Purdue University; Venkatakrishnan, Singanallur V. [ORNL; Clayton, Dwight A. [ORNL; Polsky, Yarom [ORNL; Bouman, Charles [Purdue University; Santos-Villalobos, Hector J. [ORNL
2018-04-01
One-sided non-destructive evaluation (NDE) is widely used to inspect materials, such as concrete structures in nuclear power plants (NPP). A widely used method for one-sided NDE is the synthetic aperture focusing technique (SAFT). The SAFT algorithm produces reasonable results when inspecting simple structures. However, for complex structures, such as heavily reinforced thick concrete structures, SAFT results in artifacts, and hence there is a need for a more sophisticated inversion technique. Model-based iterative reconstruction (MBIR) algorithms, which are typically equivalent to regularized inversion techniques, offer a powerful framework to incorporate complex models for the physics, detector miscalibrations and the materials being imaged to obtain high-quality reconstructions. Previously, we proposed an ultrasonic MBIR method that significantly improves reconstruction quality compared to SAFT. However, the method made some simplifying assumptions in the propagation model and did not discuss ways to handle data obtained by raster scanning a system over a surface to inspect large regions. In this paper, we propose a novel MBIR algorithm that incorporates an anisotropic forward model and allows the joint processing of data obtained from a system that raster scans a large surface. We demonstrate that the new MBIR method can produce dramatic improvements in reconstruction quality compared to SAFT and suppresses artifacts compared to the previously presented MBIR approach.
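The abstract notes that MBIR is typically equivalent to regularized inversion. In miniature, that means minimizing a data-fit term plus a prior term; the toy system below (tiny matrix, Tikhonov prior, plain gradient descent) is a sketch of the framework only, not of the paper's ultrasonic forward model:

```python
# Regularized inversion sketch: minimize ||Ax - b||^2 + lam*||x||^2
# by gradient descent. A, b, lam and the step size are all hypothetical.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def grad(A, b, x, lam):
    r = [fi - bi for fi, bi in zip(matvec(A, x), b)]      # residual Ax - b
    return [2 * sum(A[i][j] * r[i] for i in range(len(A))) + 2 * lam * x[j]
            for j in range(len(x))]

A = [[2.0, 1.0], [1.0, 3.0], [1.0, 1.0]]
b = [5.0, 10.0, 4.0]            # consistent with x = [1, 3]
x = [0.0, 0.0]
lam, step = 1e-3, 0.03
for _ in range(2000):
    g = grad(A, b, x, lam)
    x = [xj - step * gj for xj, gj in zip(x, g)]
print(x)                        # close to [1, 3] for small lam
```

Real MBIR replaces `A` with a physics-based (here, anisotropic ultrasonic) forward model and the quadratic prior with an image model, but the optimization structure is the same.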
Modelling ELM heat flux deposition on the ITER main chamber wall
Czech Academy of Sciences Publication Activity Database
Kočan, M.; Pitts, R.A.; Lisgo, S.W.; Loarte, A.; Gunn, J. P.; Fuchs, Vladimír
2015-01-01
Vol. 463, July (2015), pp. 709-713. ISSN 0022-3115. [International Conference on Plasma-Surface Interactions in Controlled Fusion Devices (PSI), 21st, Kanazawa, 26.05.2014-30.05.2014] Institutional support: RVO:61389021 Keywords: ELM * ITER Subject RIV: JF - Nuclear Energetics OBOR OECD: Nuclear related engineering Impact factor: 2.199, year: 2015
Edge database analysis for extrapolation to ITER
International Nuclear Information System (INIS)
Shimada, M.; Janeschitz, G.; Stambaugh, R.D.
1999-01-01
An edge database has been archived to facilitate cross-machine comparisons of SOL and edge pedestal characteristics, and to enable comparison with theoretical models with an aim to extrapolate to ITER. The SOL decay lengths of power, density and temperature become broader with increasing density and q95. The power decay length is predicted to be 1.4-3.5 cm (L-mode) and 1.4-2.7 cm (H-mode) at the midplane in ITER. Analysis of Type I ELMs suggests that each giant ELM on ITER would exceed the ablation threshold of the divertor plates. Theoretical models are proposed for the H-mode transition and for Type I and Type III ELMs, and are compared with the edge pedestal database. (author)
Power and particle control for ITER
Energy Technology Data Exchange (ETDEWEB)
Cohen, S A; Cummings, J; Post, D E; Redi, M H [Princeton Univ., NJ (USA). Plasma Physics Lab.; Braams, B J [New York Univ., NY (USA). Courant Inst. of Mathematical Sciences; Brooks, J [Argonne National Lab., IL (USA); Engelmann, F; Pacher, G W; Pacher, H D [Max-Planck-Institut fuer Plasmaphysik, Garching (Germany, F.R.). NET Design Team; Harrison, M; Hotston, E [AEA Fusion, Culham (UK).
1990-12-15
Achievement of ITER's objectives, long-pulse ignited operation and nuclear component testing in quasi-steady-state, requires exhaust of power and helium ash, control of impurity content, and long lifetimes for plasma-facing components. In this paper we describe the database and modeling results used to extrapolate present edge plasma parameters to ITER. Particular emphasis has been given to determining the uncertainties in predicted divertor performance. These analyses have been applied to four typical scenarios: A1 (ignited, reference Physics Phase), B1 (long pulse, hybrid, Technology Phase), B6 (steady-state, Technology Phase, impurity seeded) and B4 (steady-state, Technology Phase). 43 refs., 3 tabs.
Iowa calibration of MEPDG performance prediction models.
2013-06-01
This study aims to improve the accuracy of AASHTO Mechanistic-Empirical Pavement Design Guide (MEPDG) pavement : performance predictions for Iowa pavement systems through local calibration of MEPDG prediction models. A total of 130 : representative p...
Model complexity control for hydrologic prediction
Schoups, G.; Van de Giesen, N.C.; Savenije, H.H.G.
2008-01-01
A common concern in hydrologic modeling is overparameterization of complex models given limited and noisy data. This leads to problems of parameter nonuniqueness and equifinality, which may negatively affect prediction uncertainties. A systematic way of controlling model complexity is therefore
DEFF Research Database (Denmark)
Sokoler, Leo Emil; Frison, Gianluca; Edlund, Kristian
2013-01-01
In this paper, we develop an efficient interior-point method (IPM) for the linear programs arising in economic model predictive control of linear systems. The novelty of our algorithm is that it combines a homogeneous and self-dual model, and a specialized Riccati iteration procedure. We test...
International Nuclear Information System (INIS)
Ryu, Young Jin; Choi, Young Hun; Cheon, Jung-Eun; Kim, Woo Sun; Kim, In-One; Ha, Seongmin
2016-01-01
CT of pediatric phantoms can provide useful guidance to the optimization of knowledge-based iterative reconstruction CT. To compare radiation dose and image quality of CT images obtained at different radiation doses reconstructed with knowledge-based iterative reconstruction, hybrid iterative reconstruction and filtered back-projection. We scanned a 5-year anthropomorphic phantom at seven levels of radiation. We then reconstructed CT data with knowledge-based iterative reconstruction (iterative model reconstruction [IMR] levels 1, 2 and 3; Philips Healthcare, Andover, MA), hybrid iterative reconstruction (iDose(4), levels 3 and 7; Philips Healthcare, Andover, MA) and filtered back-projection. The noise, signal-to-noise ratio and contrast-to-noise ratio were calculated. We evaluated low-contrast resolutions and detectability by low-contrast targets and subjective and objective spatial resolutions by the line pairs and wire. With radiation at 100 peak kVp and 100 mAs (3.64 mSv), the relative doses ranged from 5% (0.19 mSv) to 150% (5.46 mSv). Lower noise and higher signal-to-noise, contrast-to-noise and objective spatial resolution were generally achieved in ascending order of filtered back-projection, iDose(4) levels 3 and 7, and IMR levels 1, 2 and 3, at all radiation dose levels. Compared with filtered back-projection at 100% dose, similar noise levels were obtained on IMR level 2 images at 24% dose and iDose(4) level 3 images at 50% dose, respectively. Regarding low-contrast resolution, low-contrast detectability and objective spatial resolution, IMR level 2 images at 24% dose showed comparable image quality with filtered back-projection at 100% dose. Subjective spatial resolution was not greatly affected by reconstruction algorithm. Reduced-dose IMR obtained at 0.92 mSv (24%) showed similar image quality to routine-dose filtered back-projection obtained at 3.64 mSv (100%) and half-dose iDose(4) obtained at 1.81 mSv. (orig.)
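The objective metrics compared in this study reduce to simple region-of-interest statistics. A minimal sketch with made-up pixel samples (the standard definitions are assumed: noise = standard deviation in a uniform ROI, SNR = mean/noise, CNR = |mean difference|/noise; these are not the paper's measurements):

```python
# ROI-statistics sketch for CT image-quality metrics; HU values are hypothetical.
import statistics

roi_object = [110, 112, 108, 111, 109, 110]      # e.g. a contrast insert
roi_background = [50, 52, 48, 51, 49, 50]        # uniform background

noise = statistics.stdev(roi_background)
snr = statistics.mean(roi_object) / noise
cnr = abs(statistics.mean(roi_object) - statistics.mean(roi_background)) / noise

print(f"noise={noise:.2f}  SNR={snr:.1f}  CNR={cnr:.1f}")
```

Iterative reconstruction at reduced dose aims to keep `noise` (and hence SNR/CNR) at the level of full-dose filtered back-projection, which is what the 24%-dose comparison above reports.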
ITER EDA newsletter. V. 8, no. 9
International Nuclear Information System (INIS)
1999-09-01
This edition of the ITER EDA Newsletter contains a contribution by the ITER Director, R. Aymar, on developments in ITER Physics R and D, and a report on the completion of the ITER central solenoid model coil installation by H. Tsuji, Head of the Superconducting Magnet Laboratory at JAERI in Naka, Japan. Individual abstracts are prepared for each of the two articles
International Nuclear Information System (INIS)
Gordon, C.W.
2005-01-01
ITER was fortunate to have four countries interested in ITER siting to the point where licensing discussions were initiated. This experience uncovered the challenges of licensing a first of a kind, fusion machine under different licensing regimes and helped prepare the way for the site specific licensing process. These initial steps in licensing ITER have allowed for refining the safety case and provide confidence that the design and safety approach will be licensable. With site-specific licensing underway, the necessary regulatory submissions have been defined and are well on the way to being completed. Of course, there is still work to be done and details to be sorted out. However, the informal international discussions to bring both the proponent and regulatory authority up to a common level of understanding have laid the foundation for a licensing process that should proceed smoothly. This paper provides observations from the perspective of the International Team. (author)
Nonlinear chaotic model for predicting storm surges
Directory of Open Access Journals (Sweden)
M. Siek
2010-09-01
This paper addresses the use of the methods of nonlinear dynamics and chaos theory for building a predictive chaotic model from time series. The chaotic model predictions are made by adaptive local models based on the dynamical neighbors found in the reconstructed phase space of the observables. We implemented univariate and multivariate chaotic models with direct and multi-step prediction techniques and optimized these models using an exhaustive search method. The built models were tested on predicting storm surge dynamics for different stormy conditions in the North Sea, and are compared to neural network models. The results show that the chaotic models can generally provide reliable and accurate short-term storm surge predictions.
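The core idea above, predicting from dynamical neighbors in a reconstructed phase space, can be sketched with a delay embedding and a single nearest neighbor. The series, embedding dimension and delay below are illustrative assumptions, not the storm-surge data:

```python
# Local-model prediction sketch: delay-embed a scalar series, find the nearest
# earlier state in phase space, and use its successor as the prediction.
import math

series = [math.sin(0.3 * t) for t in range(200)]   # stand-in observable
m, tau = 3, 1                                      # embedding dim and delay

def embed(s, i):
    return tuple(s[i - k * tau] for k in range(m))

target = embed(series, 189)                        # predict series[190]
best_i, best_d = None, float("inf")
for i in range(m * tau, 185):                      # search earlier states only
    d = sum((a - c) ** 2 for a, c in zip(embed(series, i), target))
    if d < best_d:
        best_i, best_d = i, d
pred = series[best_i + 1]                          # neighbor's successor
print(pred, series[190])
```

The full method replaces the single neighbor with an adaptive local model fitted over several dynamical neighbors, and searches over `m`, `tau` and neighborhood size exhaustively.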
Staying Power of Churn Prediction Models
Risselada, Hans; Verhoef, Peter C.; Bijmolt, Tammo H. A.
In this paper, we study the staying power of various churn prediction models. Staying power is defined as the predictive performance of a model in a number of periods after the estimation period. We examine two methods, logit models and classification trees, both with and without applying a bagging
Predictive user modeling with actionable attributes
Zliobaite, I.; Pechenizkiy, M.
2013-01-01
Different machine learning techniques have been proposed and used for modeling individual and group user needs, interests and preferences. In the traditional predictive modeling instances are described by observable variables, called attributes. The goal is to learn a model for predicting the target
EFFICIENT PREDICTIVE MODELLING FOR ARCHAEOLOGICAL RESEARCH
Balla, A.; Pavlogeorgatos, G.; Tsiafakis, D.; Pavlidis, G.
2014-01-01
The study presents a general methodology for designing, developing and implementing predictive modelling for identifying areas of archaeological interest. The methodology is based on documented archaeological data and geographical factors, geospatial analysis and predictive modelling, and has been applied to the identification of possible Macedonian tombs’ locations in Northern Greece. The model was tested extensively and the results were validated using a commonly used predictive gain, which...
Directory of Open Access Journals (Sweden)
Xingli Liu
To determine the optimal dose-reduction level of an iterative reconstruction technique for paediatric chest CT in pig models, 27 infant pigs underwent 640-slice volume chest CT at 80 kVp and different mAs. Automatic exposure control was used, and the noise index was set to SD10 (Group A, routine dose) and SD12.5, SD15, SD17.5 and SD20 (Groups B to E) to reduce dose, respectively. Group A was reconstructed with filtered back projection (FBP), and Groups B to E were reconstructed using iterative reconstruction (IR). Objective and subjective image quality (IQ) were compared among groups to determine an optimal radiation-reduction level. The noise and signal-to-noise ratio (SNR) in Group D had no statistically significant difference from those in Group A (P = 1.0). The subjective IQ scores in Group A were not significantly different from those in Group D (P > 0.05). There were no significant differences in the objective and subjective index values among the subgroups (small, medium and large) of Group D. The effective dose (ED) of Group D was 58.9% lower than that of Group A (0.20±0.05 mSv vs 0.48±0.10 mSv, p < 0.001). In infant pig chest CT, iterative reconstruction can provide diagnostic image quality while reducing the dose by 58.9%.
Ion orbit modelling of ELM heat loads on ITER divertor vertical targets.
Czech Academy of Sciences Publication Activity Database
Gunn, J. P.; Carpentier-Chouchana, S.; Dejarnac, Renaud; Escourbiac, F.; Hirai, T.; Komm, Michael; Kukushkin, A.; Panayotis, S.; Pitts, R.A.
2017-01-01
Vol. 12, August (2017), pp. 75-83. ISSN 2352-1791. [International Conference on Plasma Surface Interactions 2016, PSI2016, 22nd. Roma, 30.05.2016-03.06.2016] Institutional support: RVO:61389021 Keywords: ITER * Divertor * ELM heat loads Subject RIV: BL - Plasma and Gas Discharge Physics OBOR OECD: Fluids and plasma physics (including surface physics) http://www.sciencedirect.com/science/article/pii/S2352179116302745
Robust predictions of the interacting boson model
International Nuclear Information System (INIS)
Casten, R.F.; Koeln Univ.
1994-01-01
While most recognized for its symmetries and algebraic structure, the IBA model has other less-well-known but equally intrinsic properties which give unavoidable, parameter-free predictions. These predictions concern central aspects of low-energy nuclear collective structure. This paper outlines these "robust" predictions and compares them with the data.
Comparison of Prediction-Error-Modelling Criteria
DEFF Research Database (Denmark)
Jørgensen, John Bagterp; Jørgensen, Sten Bay
2007-01-01
Single and multi-step prediction-error-methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which is a r...
Bagni, T.; Duchateau, J. L.; Breschi, M.; Devred, A.; Nijhuis, A.
2017-09-01
Cable-in-conduit conductors (CICCs) for ITER magnets are subjected to fast-changing magnetic fields during the plasma-operating scenario. In order to anticipate the limitations of conductors under the foreseen operating conditions, it is essential to have a better understanding of the stability margin of the magnets. In the last decade ITER has launched a campaign for the characterization of several types of NbTi and Nb3Sn CICCs, comprising quench tests with a single fast sine-wave magnetic field pulse of relatively small amplitude. The stability tests, performed in the SULTAN facility, were reproduced and analyzed using two codes: JackPot-AC/DC, an electromagnetic-thermal numerical model for CICCs developed at the University of Twente (van Lanen and Nijhuis 2010 Cryogenics 50 139-148), and the multi-constant model (MCM) (Turck and Zani 2010 Cryogenics 50 443-9), an analytical model for CICC coupling losses. The outputs of both codes were combined with thermal, hydraulic and electric analysis of superconducting cables to predict the minimum quench energy (MQE) (Bottura et al 2000 Cryogenics 40 617-26). The experimental AC loss results were used to calibrate the JackPot and MCM models and to reproduce the energy deposited in the cable during an MQE test. The agreement between experiments and models confirms a good comprehension of the various CICC thermal and electromagnetic phenomena. The differences between the analytical MCM and numerical JackPot approaches are discussed. The results provide a good basis for further investigation of CICC stability under plasma scenario conditions using magnetic field pulses with lower ramp rate and higher amplitude.
MODELS OF LIVE MIGRATION WITH ITERATIVE APPROACH AND MOVE OF VIRTUAL MACHINES
Directory of Open Access Journals (Sweden)
S. M. Aleksankov
2015-11-01
Subject of Research. The processes of live migration without shared storage with the pre-copy approach, and of move migration, are researched. Migration of virtual machines is an important capability of virtualization technology. It enables applications to move transparently with their runtime environments between physical machines. Live migration has become a notable technology for efficient load balancing and for optimizing the deployment of virtual machines to physical hosts in data centres. Before the advent of live migration, only network migration (the so-called "move") was available, which entails stopping the virtual machine while it is copied to another physical server and, consequently, unavailability of the service. Method. Algorithms for live migration without shared storage with the pre-copy approach and for move migration of virtual machines are reviewed from the perspective of migration time and service unavailability. Main Results. Analytical models are proposed that predict the migration time of virtual machines and the unavailability of services when migrating with live migration (pre-copy, without shared storage) and with move migration. The latest works on estimating service unavailability and migration time for live migration without shared storage describe experimental results that support general conclusions about how these times change, but do not allow their values to be predicted. Practical Significance. The proposed models can be used for predicting the migration time and the time of unavailability of services, for example, when carrying out preventive and emergency works on the physical nodes in data centres.
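An analytical pre-copy model of the kind proposed above can be sketched as a geometric series: each round re-copies the pages dirtied during the previous round, until the remainder is small enough for a final stop-and-copy. All parameter values below (memory size, bandwidth, dirty rate, threshold) are hypothetical:

```python
# Pre-copy live-migration model sketch vs. "move" migration downtime.

def precopy_times(mem_mb, bw_mbps, dirty_mbps, stop_mb, max_rounds=30):
    total, v = 0.0, float(mem_mb)
    for _ in range(max_rounds):
        t = v / bw_mbps           # time to copy this round's data
        total += t
        v = dirty_mbps * t        # data dirtied meanwhile -> next round
        if v <= stop_mb:
            break
    downtime = v / bw_mbps        # final stop-and-copy round
    return total + downtime, downtime

migration_time, downtime = precopy_times(
    mem_mb=4096, bw_mbps=1000, dirty_mbps=200, stop_mb=64)
move_downtime = 4096 / 1000       # "move": VM is stopped for the whole copy
print(migration_time, downtime, move_downtime)
```

With a dirty rate below the bandwidth the per-round volume shrinks geometrically, so live migration trades a slightly longer total migration time for a downtime that is orders of magnitude shorter than move migration's.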
Extracting falsifiable predictions from sloppy models.
Gutenkunst, Ryan N; Casey, Fergal P; Waterfall, Joshua J; Myers, Christopher R; Sethna, James P
2007-12-01
Successful predictions are among the most compelling validations of any model. Extracting falsifiable predictions from nonlinear multiparameter models is complicated by the fact that such models are commonly sloppy, possessing sensitivities to different parameter combinations that range over many decades. Here we discuss how sloppiness affects the sorts of data that best constrain model predictions, makes linear uncertainty approximations dangerous, and introduces computational difficulties in Monte-Carlo uncertainty analysis. We also present a useful test problem and suggest refinements to the standards by which models are communicated.
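The "sensitivities ranging over many decades" can be seen in the smallest non-trivial example: a two-exponential model with nearby rates, whose Jacobian columns are nearly parallel so that the eigenvalues of J^T J (the fit Hessian, up to a factor) differ by orders of magnitude. The rates and time grid below are illustrative assumptions:

```python
# Sloppiness sketch: eigenvalue spread of J^T J for y(t) = exp(-p1 t) + exp(-p2 t).
import math

ts = [0.1 * k for k in range(1, 51)]
p1, p2 = 1.0, 1.2
J = [(-t * math.exp(-p1 * t), -t * math.exp(-p2 * t)) for t in ts]

# 2x2 normal matrix H = J^T J, eigenvalues in closed form
a = sum(j1 * j1 for j1, _ in J)
b = sum(j1 * j2 for j1, j2 in J)
c = sum(j2 * j2 for _, j2 in J)
disc = math.sqrt((a - c) ** 2 + 4 * b * b)
eig_hi, eig_lo = (a + c + disc) / 2, (a + c - disc) / 2
print(f"eigenvalue ratio ~ {eig_hi / eig_lo:.0f}")
```

The stiff direction (roughly p1 + p2) is well constrained by data while the sloppy direction (p1 - p2) is not, which is why linear uncertainty bars along bare parameters can be badly misleading.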
The prediction of epidemics through mathematical modeling.
Schaus, Catherine
2014-01-01
Mathematical models may be used in an endeavor to predict the development of epidemics. The SIR model is one such application. Still too approximate, these statistical approaches await more data in order to come closer to reality.
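The SIR model mentioned above can be integrated with a simple Euler step; the rates and initial fractions here are illustrative assumptions, not fitted to any epidemic:

```python
# SIR model sketch: susceptible/infected/recovered fractions, Euler integration.
beta, gamma = 0.3, 0.1          # infection and recovery rates per day (R0 = 3)
s, i, r = 0.99, 0.01, 0.0       # initial population fractions
dt = 0.1
peak_i = i
for _ in range(int(200 / dt)):  # simulate 200 days
    ds = -beta * s * i          # new infections leave S
    di = beta * s * i - gamma * i
    dr = gamma * i              # recoveries leave I
    s, i, r = s + dt * ds, i + dt * di, r + dt * dr
    peak_i = max(peak_i, i)
print(f"peak infected fraction: {peak_i:.2f}, final susceptible: {s:.2f}")
```

With these rates the epidemic peaks near 30% of the population infected and leaves only a small susceptible fraction, illustrating how such a model turns two rate parameters into a full epidemic trajectory.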
Calibration of PMIS pavement performance prediction models.
2012-02-01
Improve the accuracy of TxDOTs existing pavement performance prediction models through calibrating these models using actual field data obtained from the Pavement Management Information System (PMIS). : Ensure logical performance superiority patte...
Evaluating Predictive Uncertainty of Hyporheic Exchange Modelling
Chow, R.; Bennett, J.; Dugge, J.; Wöhling, T.; Nowak, W.
2017-12-01
Hyporheic exchange is the interaction of water between rivers and groundwater, and is difficult to predict. One of the largest contributions to predictive uncertainty for hyporheic fluxes has been attributed to the representation of heterogeneous subsurface properties. This research aims to evaluate which aspect of the subsurface representation - the spatial distribution of hydrofacies or the model for local-scale (within-facies) heterogeneity - most influences the predictive uncertainty. We also seek to identify data types that best help reduce this uncertainty. For this investigation, we conduct a modelling study of the Steinlach River meander in Southwest Germany, an experimental site established in 2010 to monitor hyporheic exchange at the meander scale. We use HydroGeoSphere, a fully integrated surface water-groundwater model, to model hyporheic exchange and to assess the predictive uncertainty of hyporheic exchange transit times (HETT). A highly parameterized complex model is built and treated as "virtual reality", which is in turn modelled with simpler subsurface parameterization schemes (Figure). Then, we conduct Monte-Carlo simulations with these models to estimate the predictive uncertainty. Results indicate that: uncertainty in HETT is relatively small for early times and increases with transit time; uncertainty from local-scale heterogeneity is negligible compared to uncertainty in the hydrofacies distribution; introducing more data to a poor model structure may reduce predictive variance, but does not reduce predictive bias; hydraulic head observations alone cannot constrain the uncertainty of HETT, whereas an estimate of hyporheic exchange flux proves more effective at reducing this uncertainty. Figure: Approach for evaluating predictive model uncertainty. A conceptual model is first developed from the field investigations. A complex model ("virtual reality") is then developed based on that conceptual model
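The Monte-Carlo logic above, propagating parameter draws through a model and comparing predictive spread with and without conditioning data, can be shown in miniature. Everything below (the toy transit-time model, the lognormal prior, the crude acceptance rule) is a made-up stand-in for the actual HydroGeoSphere workflow:

```python
# Monte-Carlo predictive-uncertainty sketch for a toy transit-time model.
import random
import statistics

rng = random.Random(42)

def transit_time(k):              # toy model: time inversely related to conductivity
    return 100.0 / k

prior = [rng.lognormvariate(1.0, 0.5) for _ in range(5000)]
prior_pred = [transit_time(k) for k in prior]

# crude "conditioning on a flux observation": keep draws whose predicted
# flux-proxy (here just k) matches the hypothetical observation within 20%
obs_k = 3.0
posterior = [k for k in prior if abs(k - obs_k) / obs_k < 0.2]
post_pred = [transit_time(k) for k in posterior]

print(statistics.stdev(prior_pred), statistics.stdev(post_pred))
```

The conditioned ensemble has a much smaller predictive spread, which mirrors the paper's finding that a flux estimate constrains transit-time uncertainty far better than head data alone, though variance reduction says nothing about bias under a wrong model structure.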
Directory of Open Access Journals (Sweden)
Waleed Albusaidi
2015-08-01
This paper introduces a new iterative method to predict the equivalent centrifugal compressor performance at various operating conditions. The presented theoretical analysis and empirical correlations provide a novel approach to derive the entire compressor map corresponding to various suction conditions without prior knowledge of the detailed geometry. The efficiency model was derived to reflect the impact of physical gas properties, Mach number, and flow and work coefficients. A key feature of the developed technique is that it accounts for the variation in gas properties and stage efficiency, which makes it appropriate for hydrocarbons. The method has been tested by predicting the performance of two multistage centrifugal compressors, and the estimated characteristics are compared with measured data. The comparison revealed good agreement with the actual values, including the limits of the stable operating region. Furthermore, an optimization study was conducted to investigate the influence of suction conditions on stage efficiency and surge margin. Moreover, a new form of presentation has been generated to obtain the equivalent performance characteristics for constant-discharge-pressure operation at variable suction pressure and temperature. A further validation is included in part two of this study in order to evaluate the prediction capability of the derived model at various gas compositions.
Case studies in archaeological predictive modelling
Verhagen, Jacobus Wilhelmus Hermanus Philippus
2007-01-01
In this thesis, a collection of papers is put together dealing with various quantitative aspects of predictive modelling and archaeological prospection. Among the issues covered are the effects of survey bias on the archaeological data used for predictive modelling, and the complexities of testing
Wessler, Benjamin S; Lai Yh, Lana; Kramer, Whitney; Cangelosi, Michael; Raman, Gowri; Lutz, Jennifer S; Kent, David M
2015-07-01
Clinical prediction models (CPMs) estimate the probability of clinical outcomes and hold the potential to improve decision making and individualize care. For patients with cardiovascular disease, there are numerous CPMs available, although the extent of this literature is not well described. We conducted a systematic review for articles containing CPMs for cardiovascular disease published between January 1990 and May 2012. Cardiovascular disease includes coronary heart disease, heart failure, arrhythmias, stroke, venous thromboembolism, and peripheral vascular disease. We created a novel database and characterized CPMs based on the stage of development, population under study, performance, covariates, and predicted outcomes. There are 796 models included in this database. The number of CPMs published each year is increasing steadily over time. Seven hundred seventeen (90%) are de novo CPMs, 21 (3%) are CPM recalibrations, and 58 (7%) are CPM adaptations. This database contains CPMs for 31 index conditions, including 215 CPMs for patients with coronary artery disease, 168 CPMs for population samples, and 79 models for patients with heart failure. There are 77 distinct index/outcome pairings. Of the de novo models in this database, 450 (63%) report a c-statistic and 259 (36%) report some information on calibration. There is an abundance of CPMs available for a wide assortment of cardiovascular disease conditions, with substantial redundancy in the literature. The comparative performance of these models, the consistency of effects and risk estimates across models, and the actual and potential clinical impact of this body of literature are poorly understood. © 2015 American Heart Association, Inc.
Iterative perceptual learning for social behavior synthesis
de Kok, I.A.; Poppe, Ronald Walter; Heylen, Dirk K.J.
We introduce Iterative Perceptual Learning (IPL), a novel approach to learn computational models for social behavior synthesis from corpora of human–human interactions. IPL combines perceptual evaluation with iterative model refinement. Human observers rate the appropriateness of synthesized
Iterative Perceptual Learning for Social Behavior Synthesis
de Kok, I.A.; Poppe, Ronald Walter; Heylen, Dirk K.J.
We introduce Iterative Perceptual Learning (IPL), a novel approach for learning computational models for social behavior synthesis from corpora of human-human interactions. The IPL approach combines perceptual evaluation with iterative model refinement. Human observers rate the appropriateness of
Modeling of the lithium based neutralizer for ITER neutral beam injector
Energy Technology Data Exchange (ETDEWEB)
Dure, F., E-mail: franck.dure@u-psud.fr [LPGP, Laboratoire de Physique des Gaz et Plasmas, CNRS-Universite Paris Sud, Orsay (France); Lifschitz, A.; Bretagne, J.; Maynard, G. [LPGP, Laboratoire de Physique des Gaz et Plasmas, CNRS-Universite Paris Sud, Orsay (France); Simonin, A. [IRFM, Institut de Recherche sur la Fusion Magnetique, CEA Cadarache, 13108 Saint-Paul lez Durance (France); Minea, T. [LPGP, Laboratoire de Physique des Gaz et Plasmas, CNRS-Universite Paris Sud, Orsay (France)
2012-04-04
Highlights: • We compare different lithium-based neutralizer configurations to the deuterium one. • We study characteristics of the secondary plasma and the propagation of the 1 MeV beam. • Using lithium increases the neutralization efficiency while keeping correct beam focusing. • Using lithium also reduces the backstreaming effect in the direction of the ion source. - Abstract: To achieve the thermonuclear temperatures necessary to produce fusion reactions in the ITER tokamak, additional heating systems are required. One of the main methods to heat the plasma ions in ITER will be the injection of energetic neutrals (NBI). In the neutral beam injector, negative ions (D⁻) are electrostatically accelerated to 1 MeV, and then stripped of their extra electron via collisions with a target gas, in a structure known as the neutralizer. In the current ITER specification, the target gas is deuterium. It has recently been proposed to use lithium vapor instead of deuterium as the target gas in the neutralizer. This would reduce the gas load in the NBI vessel and improve the neutralization efficiency. A particle-in-cell Monte Carlo code has been developed to study the transport of the beams and the plasma formation in the neutralizer. A comparison between Li and D₂ based neutralizers made with this code is presented here, as well as a parametric study on the geometry of the Li-based neutralizer. Results demonstrate the feasibility of a Li-based neutralizer, and its advantages with respect to the deuterium-based one.
Energy Technology Data Exchange (ETDEWEB)
Li, Ke; Tang, Jie [Department of Medical Physics, University of Wisconsin-Madison, 1111 Highland Avenue, Madison, Wisconsin 53705 (United States); Chen, Guang-Hong, E-mail: gchen7@wisc.edu [Department of Medical Physics, University of Wisconsin-Madison, 1111 Highland Avenue, Madison, Wisconsin 53705 and Department of Radiology, University of Wisconsin-Madison, 600 Highland Avenue, Madison, Wisconsin 53792 (United States)
2014-04-15
Purpose: To reduce radiation dose in CT imaging, the statistical model based iterative reconstruction (MBIR) method has been introduced for clinical use. Based on the principle of MBIR and its nonlinear nature, the noise performance of MBIR is expected to differ from that of the well-understood filtered backprojection (FBP) reconstruction method. The purpose of this work is to experimentally assess the unique noise characteristics of MBIR using a state-of-the-art clinical CT system. Methods: Three physical phantoms, including a water cylinder and two pediatric head phantoms, were scanned in axial scanning mode using a 64-slice CT scanner (Discovery CT750 HD, GE Healthcare, Waukesha, WI) at seven different mAs levels (5, 12.5, 25, 50, 100, 200, 300). At each mAs level, each phantom was repeatedly scanned 50 times to generate an image ensemble for noise analysis. Both the FBP method with a standard kernel and the MBIR method (Veo®, GE Healthcare, Waukesha, WI) were used for CT image reconstruction. Three-dimensional (3D) noise power spectrum (NPS), two-dimensional (2D) NPS, and zero-dimensional NPS (noise variance) were assessed both globally and locally. Noise magnitude, noise spatial correlation, noise spatial uniformity and their dose dependence were examined for the two reconstruction methods. Results: (1) At each dose level and at each frequency, the magnitude of the NPS of MBIR was smaller than that of FBP. (2) While the shape of the NPS of FBP was dose-independent, the shape of the NPS of MBIR was strongly dose-dependent; a lower dose led to a “redder” NPS with a lower mean frequency value. (3) The noise standard deviation (σ) of MBIR and dose were found to be related through a power law of σ ∝ (dose)^−β with the exponent β ≈ 0.25, which violates the classical σ ∝ (dose)^−0.5 power law of FBP. (4) With MBIR, noise reduction was most prominent for thin image slices. (5) MBIR led to better noise spatial
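The dose dependence in result (3) can be illustrated numerically: fitting the exponent β of a σ ∝ dose^−β power law is a straight-line fit in log-log space. A minimal sketch (the data below are synthetic values generated to follow the reported exponents, not the phantom measurements from the study):

```python
import numpy as np

def fit_noise_exponent(dose, sigma):
    """Fit beta in sigma = c * dose**(-beta) via linear regression in log-log space."""
    slope, _ = np.polyfit(np.log(dose), np.log(sigma), 1)
    return -slope  # slope of log(sigma) vs log(dose) is -beta

# Synthetic noise levels following the reported power laws (arbitrary scale)
dose = np.array([5.0, 12.5, 25.0, 50.0, 100.0, 200.0, 300.0])  # mAs levels
sigma_mbir = 10.0 * dose ** -0.25   # MBIR-like: beta ~ 0.25
sigma_fbp = 10.0 * dose ** -0.5     # FBP: classical beta = 0.5
print(fit_noise_exponent(dose, sigma_mbir))  # ~0.25
print(fit_noise_exponent(dose, sigma_fbp))   # ~0.5
```

On measured data the fit would be over the repeated-scan noise estimates at each mAs level rather than exact power-law values.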
Heinsch, Stephen C; Das, Siba R; Smanski, Michael J
2018-01-01
Increasing the final titer of a multi-gene metabolic pathway can be viewed as a multivariate optimization problem. While numerous multivariate optimization algorithms exist, few are specifically designed to accommodate the constraints posed by genetic engineering workflows. We present a strategy for optimizing expression levels across an arbitrary number of genes that requires few design-build-test iterations. We compare the performance of several optimization algorithms on a series of simulated expression landscapes. We show that optimal experimental design parameters depend on the degree of landscape ruggedness. This work provides a theoretical framework for designing and executing numerical optimization on multi-gene systems.
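The design-build-test loop described above can be caricatured as a discrete multivariate search over expression levels. The sketch below uses plain greedy coordinate ascent on a toy separable landscape; it illustrates the problem setting only and is not the authors' algorithm or landscape model:

```python
import random

def coordinate_ascent(titer, levels, n_genes, max_rounds=20, seed=0):
    """Greedy one-gene-at-a-time search over discrete expression levels.
    Each inner evaluation stands in for one build-test cycle."""
    rng = random.Random(seed)
    x = [rng.choice(levels) for _ in range(n_genes)]  # random initial design
    for _ in range(max_rounds):
        improved = False
        for g in range(n_genes):
            best = max(levels, key=lambda v: titer(x[:g] + [v] + x[g + 1:]))
            if titer(x[:g] + [best] + x[g + 1:]) > titer(x):
                x[g] = best
                improved = True
        if not improved:  # local optimum reached
            break
    return x, titer(x)

# Smooth toy landscape: titer peaks at intermediate expression of each gene
f = lambda x: -sum((v - 2) ** 2 for v in x)
print(coordinate_ascent(f, levels=[0, 1, 2, 3, 4], n_genes=3))  # ([2, 2, 2], 0)
```

On a rugged (non-separable) landscape this greedy scheme stalls in local optima, which is the trade-off the abstract's comparison of algorithms addresses.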
Predictive modelling of complex agronomic and biological systems.
Keurentjes, Joost J B; Molenaar, Jaap; Zwaan, Bas J
2013-09-01
Biological systems are tremendously complex in their functioning and regulation. Studying the multifaceted behaviour and describing the performance of such complexity has challenged the scientific community for years. The reduction of real-world intricacy into simple descriptive models has therefore convinced many researchers of the usefulness of introducing mathematics into biological sciences. Predictive modelling takes such an approach another step further in that it takes advantage of existing knowledge to project the performance of a system in alternating scenarios. The ever growing amounts of available data generated by assessing biological systems at increasingly higher detail provide unique opportunities for future modelling and experiment design. Here we aim to provide an overview of the progress made in modelling over time and the currently prevalent approaches for iterative modelling cycles in modern biology. We will further argue for the importance of versatility in modelling approaches, including parameter estimation, model reduction and network reconstruction. Finally, we will discuss the difficulties in overcoming the mathematical interpretation of in vivo complexity and address some of the future challenges lying ahead. © 2013 John Wiley & Sons Ltd.
Incorporating uncertainty in predictive species distribution modelling.
Beale, Colin M; Lennon, Jack J
2012-01-19
Motivated by the need to solve ecological problems (climate change, habitat fragmentation and biological invasions), there has been increasing interest in species distribution models (SDMs). Predictions from these models inform conservation policy, invasive species management and disease-control measures. However, predictions are subject to uncertainty, the degree and source of which is often unrecognized. Here, we review the SDM literature in the context of uncertainty, focusing on three main classes of SDM: niche-based models, demographic models and process-based models. We identify sources of uncertainty for each class and discuss how uncertainty can be minimized or included in the modelling process to give realistic measures of confidence around predictions. Because this has typically not been performed, we conclude that uncertainty in SDMs has often been underestimated and a false precision assigned to predictions of geographical distribution. We identify areas where development of new statistical tools will improve predictions from distribution models, notably the development of hierarchical models that link different types of distribution model and their attendant uncertainties across spatial scales. Finally, we discuss the need to develop more defensible methods for assessing predictive performance, quantifying model goodness-of-fit and for assessing the significance of model covariates.
Model Predictive Control for Smart Energy Systems
DEFF Research Database (Denmark)
Halvgaard, Rasmus
pumps, heat tanks, electrical vehicle battery charging/discharging, wind farms, power plants). 2.Embed forecasting methodologies for the weather (e.g. temperature, solar radiation), the electricity consumption, and the electricity price in a predictive control system. 3.Develop optimization algorithms....... Chapter 3 introduces Model Predictive Control (MPC) including state estimation, filtering and prediction for linear models. Chapter 4 simulates the models from Chapter 2 with the certainty equivalent MPC from Chapter 3. An economic MPC minimizes the costs of consumption based on real electricity prices...... that determined the flexibility of the units. A predictive control system easily handles constraints, e.g. limitations in power consumption, and predicts the future behavior of a unit by integrating predictions of electricity prices, consumption, and weather variables. The simulations demonstrate the expected...
Evaluating the Predictive Value of Growth Prediction Models
Murphy, Daniel L.; Gaertner, Matthew N.
2014-01-01
This study evaluates four growth prediction models--projection, student growth percentile, trajectory, and transition table--commonly used to forecast (and give schools credit for) middle school students' future proficiency. Analyses focused on vertically scaled summative mathematics assessments, and two performance standards conditions (high…
International Nuclear Information System (INIS)
Vasileiadis, N.; Tatsios, G.; Misdanitis, S.; Valougeorgis, D.
2016-01-01
Highlights: • An integrated s/w for modeling complex rarefied gas distribution systems is presented. • Analysis is based on kinetic theory of gases. • Code effectiveness is demonstrated by simulating the ITER divertor pumping system. • The present s/w has the potential to support design work in large vacuum systems. - Abstract: An integrated software tool for modeling and simulation of complex gas distribution systems operating under any vacuum conditions is presented and validated. The algorithm structure includes (a) the input geometrical and operational data of the network, (b) the definition of the fundamental set of network loops and pseudoloops, (c) the formulation and solution of the mass and energy conservation equations, (d) the kinetic data base of the flow rates for channels of any length in the whole range of the Knudsen number, supporting, in an explicit manner, the solution of the conservation equations and (e) the network output data (mainly node pressures and channel flow rates/conductance). The code validity is benchmarked under rough vacuum conditions by comparison with hydrodynamic solutions in the slip regime. Then, its feasibility, effectiveness and potential are demonstrated by simulating the ITER torus vacuum system with the six direct pumps based on the 2012 design of the ITER divertor. Detailed results of the flow patterns and paths in the cassettes, in the gaps between the cassettes and along the divertor ring, as well as of the total throughput for various pumping scenarios and dome pressures are provided. A comparison with previous results available in the literature is included.
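Step (c) above, the mass conservation equations at the network nodes, reduces to a linear system when each channel flow is proportional to the pressure difference across it. The hypothetical minimal solver below shows that step only; the actual code replaces the constant conductances with its kinetic, Knudsen-number-dependent database:

```python
import numpy as np

def solve_network(n_nodes, channels, fixed_pressures):
    """Node pressures from mass conservation with Q_ij = C_ij * (p_i - p_j).
    channels: list of (i, j, C_ij); fixed_pressures: {node: pressure}."""
    A = np.zeros((n_nodes, n_nodes))
    b = np.zeros(n_nodes)
    for i, j, C in channels:                 # assemble conductance matrix
        A[i, i] += C
        A[i, j] -= C
        A[j, j] += C
        A[j, i] -= C
    for node, p in fixed_pressures.items():  # pin known pressures (source, pumps)
        A[node, :] = 0.0
        A[node, node] = 1.0
        b[node] = p
    return np.linalg.solve(A, b)

# Source node 0 at 1 Pa feeds node 1, which is emptied by two pumps (nodes 2, 3)
p = solve_network(4, [(0, 1, 2.0), (1, 2, 1.0), (1, 3, 1.0)],
                  {0: 1.0, 2: 0.0, 3: 0.0})
print(p[1])  # intermediate node pressure: 0.5 Pa
```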
Drifts, currents, and power scrape-off width in SOLPS-ITER modeling of DIII-D
International Nuclear Information System (INIS)
Meier, E. T.; Goldston, R. J.; Kaveeva, E. G.; Makowski, M. A.; Mordijck, S.
2016-01-01
The effects of drifts and associated flows and currents on the width of the parallel heat flux channel (λ_q) in the tokamak scrape-off layer (SOL) are analyzed using the SOLPS-ITER 2D fluid transport code. Motivation is supplied by Goldston’s heuristic drift (HD) model for λ_q, which yields the same approximately inverse poloidal magnetic field dependence seen in multi-machine regression. The analysis, focusing on a DIII-D H-mode discharge, reveals HD-like features, including comparable density and temperature fall-off lengths in the SOL, and up-down ion pressure asymmetry that allows the net cross-separatrix ion magnetic drift flux to exceed the net anomalous ion flux. In experimentally relevant high-recycling cases, scans of both the toroidal and poloidal magnetic field (B_tor and B_pol) are conducted, showing minimal λ_q dependence on either component of the field. Insensitivity to B_tor is expected, and suggests that SOLPS-ITER is effectively capturing some aspects of HD physics. The absence of λ_q dependence on B_pol, however, is inconsistent with both the HD model and experimental results. The inconsistency is attributed to strong variation in the parallel Mach number, which violates one of the premises of the HD model.
Model predictive control classical, robust and stochastic
Kouvaritakis, Basil
2016-01-01
For the first time, a textbook that brings together classical predictive control with treatment of up-to-date robust and stochastic techniques. Model Predictive Control describes the development of tractable algorithms for uncertain, stochastic, constrained systems. The starting point is classical predictive control and the appropriate formulation of performance objectives and constraints to provide guarantees of closed-loop stability and performance. Moving on to robust predictive control, the text explains how similar guarantees may be obtained for cases in which the model describing the system dynamics is subject to additive disturbances and parametric uncertainties. Open- and closed-loop optimization are considered and the state of the art in computationally tractable methods based on uncertainty tubes presented for systems with additive model uncertainty. Finally, the tube framework is also applied to model predictive control problems involving hard or probabilistic constraints for the cases of multiplic...
Modeling, robust and distributed model predictive control for freeway networks
Liu, S.
2016-01-01
In Model Predictive Control (MPC) for traffic networks, traffic models are crucial since they are used as prediction models for determining the optimal control actions. In order to reduce the computational complexity of MPC for traffic networks, macroscopic traffic models are often used instead of
Deep Predictive Models in Interactive Music
Martin, Charles P.; Ellefsen, Kai Olav; Torresen, Jim
2018-01-01
Automatic music generation is a compelling task where much recent progress has been made with deep learning models. In this paper, we ask how these models can be integrated into interactive music systems; how can they encourage or enhance the music making of human users? Musical performance requires prediction to operate instruments, and perform in groups. We argue that predictive models could help interactive systems to understand their temporal context, and ensemble behaviour. Deep learning...
Hosseiny, S. M. H.; Smith, V.
2017-12-01
Darby Creek is an urbanized, highly flood-prone watershed in metro Philadelphia, PA. The floodplain and the main channel are composed of alluvial sediment and are subject to frequent geomorphological changes. The lower part of the channel lies within the coastal zone, subjecting the flow to a backwater condition. This study applies a multi-disciplinary approach to modeling the morphological alteration of the creek and floodplain in the presence of the backwater, using an iteration and integration of combined models. To do this, FaSTMECH (a two-dimensional quasi-unsteady flow solver) in the International River Interface Cooperative software (iRIC) is coupled with a one-dimensional backwater model to calculate hydraulic characteristics of the flow over a digital elevation model of the channel and floodplain. One USGS gage upstream and two NOAA gages downstream are used for model validation. The output of the model is then used to calculate sediment transport and morphological changes over the domain through time in an iterative process. The updated elevation data are incorporated into the hydraulic model again to recalculate the velocity field. The calculations continue reciprocally over discrete discharges of the hydrograph until the flood attenuates and the next flood event occurs. The results from this study demonstrate how to incorporate bathymetry and flow data to model floodplain evolution in the backwater through time, and provide a means to better understand the dynamics of the floodplain. This work is not only applicable to river management, but also provides insight to the geoscience community concerning the development of landscapes in the backwater.
Gharamti, M. E.
2015-05-11
The ensemble Kalman filter (EnKF) is a popular method for state-parameter estimation of subsurface flow and transport models based on field measurements. The common filtering procedure is to directly update the state and parameters as one single vector, which is known as the Joint-EnKF. In this study, we follow the one-step-ahead smoothing formulation of the filtering problem to derive a new joint-based EnKF which involves a smoothing step of the state between two successive analysis steps. The new state-parameter estimation scheme is derived in a consistent Bayesian filtering framework and results in separate update steps for the state and the parameters. This new algorithm bears a strong resemblance to the Dual-EnKF, but unlike the latter, which first propagates the state with the model and then updates it with the new observation, the proposed scheme starts with an update step, followed by a model integration step. We exploit this new formulation of the joint filtering problem and propose an efficient model-integration-free iterative procedure on the update step of the parameters only, for further improved performance. Numerical experiments are conducted with a two-dimensional synthetic subsurface transport model simulating the migration of a contaminant plume in a heterogeneous aquifer domain. Contaminant concentration data are assimilated to estimate both the contaminant state and the hydraulic conductivity field. Assimilation runs are performed under imperfect modeling conditions and various observational scenarios. Simulation results suggest that the proposed scheme efficiently recovers both the contaminant state and the aquifer conductivity, providing more accurate estimates than the standard Joint and Dual EnKFs in all tested scenarios. Iterating on the update step of the new scheme further enhances the proposed filter’s behavior. In terms of computational cost, the new Joint-EnKF is almost equivalent to that of the Dual-EnKF, but requires twice as many model
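For reference, the building block that the Joint-, Dual-, and proposed smoothing-based schemes all rearrange is the EnKF analysis step. A minimal sketch of the generic stochastic (perturbed-observations) form follows; this is the textbook update, not the paper's one-step-ahead smoothing variant:

```python
import numpy as np

def enkf_analysis(X, y, H, obs_var, rng):
    """One stochastic EnKF analysis step.
    X: (n_state, n_ens) forecast ensemble; y: (n_obs,) observation;
    H: (n_obs, n_state) linear observation operator; obs_var: obs error variance."""
    n_obs, n_ens = len(y), X.shape[1]
    HX = H @ X
    A = X - X.mean(axis=1, keepdims=True)         # state anomalies
    HA = HX - HX.mean(axis=1, keepdims=True)      # observed anomalies
    S = HA @ HA.T / (n_ens - 1) + obs_var * np.eye(n_obs)
    K = (A @ HA.T / (n_ens - 1)) @ np.linalg.inv(S)   # ensemble Kalman gain
    Yp = y[:, None] + rng.normal(0.0, np.sqrt(obs_var), (n_obs, n_ens))
    return X + K @ (Yp - HX)                      # updated (analysis) ensemble

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, size=(1, 100))   # forecast ensemble far from the truth
y = np.array([3.0])                       # accurate observation of the truth
Xa = enkf_analysis(X, y, np.eye(1), obs_var=0.01, rng=rng)
print(Xa.mean())  # analysis mean pulled close to 3.0
```

In the joint formulation, X would stack the state and the parameter fields into one vector; the paper's contribution is the ordering and smoothing of these update and propagation steps.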
Unreachable Setpoints in Model Predictive Control
DEFF Research Database (Denmark)
Rawlings, James B.; Bonné, Dennis; Jørgensen, John Bagterp
2008-01-01
In this work, a new model predictive controller is developed that handles unreachable setpoints better than traditional model predictive control methods. The new controller induces an interesting fast/slow asymmetry in the tracking response of the system. Nominal asymptotic stability of the optimal...... steady state is established for terminal constraint model predictive control (MPC). The region of attraction is the steerable set. Existing analysis methods for closed-loop properties of MPC are not applicable to this new formulation, and a new analysis method is developed. It is shown how to extend...
Bayesian Predictive Models for Rayleigh Wind Speed
DEFF Research Database (Denmark)
Shahirinia, Amir; Hajizadeh, Amin; Yu, David C
2017-01-01
predictive model of the wind speed aggregates the non-homogeneous distributions into a single continuous distribution. Therefore, the result is able to capture the variation among the probability distributions of the wind speeds at the turbines’ locations in a wind farm. More specifically, instead of using...... a wind speed distribution whose parameters are known or estimated, the parameters are considered as random whose variations are according to probability distributions. The Bayesian predictive model for a Rayleigh which only has a single model scale parameter has been proposed. Also closed-form posterior...... and predictive inferences under different reasonable choices of prior distribution in sensitivity analysis have been presented....
Schmitz, Oliver
2014-10-01
The constraints used in magneto-hydrodynamic (MHD) modeling of the plasma response to external resonant magnetic perturbation (RMP) fields have a profound impact on the three-dimensional (3-D) shape of the plasma boundary induced by RMP fields. In this contribution, the consequences of the plasma response for the actual 3D boundary structure and transport during RMP application at ITER are investigated. The 3D fluid plasma and kinetic neutral transport code EMC3-Eirene is used for edge transport modeling. Plasma response modeling is conducted with the M3D-C1 code using a single-fluid, non-linear and a two-fluid, linear MHD constraint. These approaches are compared to results with an ideal-MHD-like plasma response. A 3D plasma boundary is formed in all cases, consisting of magnetic finger structures at the X-point intersecting the divertor surface in a helical footprint pattern. The width of the helical footprint pattern is largely reduced compared to vacuum magnetic fields when using the ideal-MHD-like screening model. This yields increasing peak heat fluxes, in contrast to the beneficial heat flux spreading seen with vacuum fields. The particle pump-out as well as the loss of thermal energy is reduced by a factor of two compared to vacuum fields. In contrast, the impact of the plasma response obtained from both MHD constraints in M3D-C1 is nearly negligible at the plasma boundary, and only a small modification of the magnetic footprint topology is detected. Accordingly, heat and particle fluxes on the target plates as well as the edge transport characteristics are comparable to the vacuum solution. This span of modeling results with different plasma response models highlights the importance of thoroughly validating both plasma response and 3D edge transport models for a robust extrapolation towards ITER. Supported by ITER Grant IO/CT/11/4300000497 and F4E Grant GRT-055 (PMS-PE) and by Start-Up Funds of the University of Wisconsin - Madison.
Predictive Modelling and Time: An Experiment in Temporal Archaeological Predictive Models
David Ebert
2006-01-01
One of the most common criticisms of archaeological predictive modelling is that it fails to account for temporal or functional differences in sites. However, a practical solution to temporal or functional predictive modelling has proven to be elusive. This article discusses temporal predictive modelling, focusing on the difficulties of employing temporal variables, then introduces and tests a simple methodology for the implementation of temporal modelling. The temporal models thus created ar...
International Nuclear Information System (INIS)
Ulbricht, A.
2001-05-01
In the frame of a contract between the ITER (International Thermonuclear Experimental Reactor) Director and the European Home Team Director, the TOSKA facility of the Forschungszentrum Karlsruhe was extended as a test bed for the ITER toroidal field model coil (TFMC), one of the 7 large research and development projects of the ITER EDA (Engineering Design Activity). The report describes the work and development, performed together with industry, to extend the existing components and add new ones. In this frame a new 2 kW refrigerator was added to the TOSKA facility, including the cold lines to the helium dewar in the TOSKA experimental area. The measuring, control and data acquisition systems were renewed according to the state of the art. Two power supplies (30 kA, 50 kA) were switched in parallel across an Al bus bar system and combined with an 80 kA dump circuit. For the test of the TFMC in the background field of the EURATOM LCT coil, a new 20 kA power supply was taken into operation with the existing 20 kA discharge circuit. Two forced-flow-cooled 80 kA current leads for the TFMC were developed. The total lifting capacity for loads in the TOSKA building was increased to 130 t by a newly ordered 80 t crane with a suitable cross head (125 t lifting capacity + 5 t net mass) for assembling and installing the test arrangement. Numerous pre-tests as well as development and adaptation work were required to make the components suitable for application. The 1.8 K test of the EURATOM LCT coil and the test of the W 7-X prototype coil were among these overall pre-tests. (orig.)
Convergence Guaranteed Nonlinear Constraint Model Predictive Control via I/O Linearization
Directory of Open Access Journals (Sweden)
Xiaobing Kong
2013-01-01
Obtaining a reliable optimal solution is a key issue for nonlinear constrained model predictive control. Input-output feedback linearization is a popular method in nonlinear control. When an input-output feedback linearizing controller is used, the original linear input constraints become nonlinear, and sometimes state-dependent. This paper presents an iterative quadratic program (IQP) routine for the continuous-time system. To guarantee its convergence, another iterative approach is incorporated. The proposed algorithm can reach a feasible solution over the entire prediction horizon. Simulation results on both a numerical example and a continuous stirred tank reactor (CSTR) demonstrate the effectiveness of the proposed method.
Input-constrained model predictive control via the alternating direction method of multipliers
DEFF Research Database (Denmark)
Sokoler, Leo Emil; Frison, Gianluca; Andersen, Martin S.
2014-01-01
This paper presents an algorithm, based on the alternating direction method of multipliers, for the convex optimal control problem arising in input-constrained model predictive control. We develop an efficient implementation of the algorithm for the extended linear quadratic control problem (LQCP......) with input and input-rate limits. The algorithm alternates between solving an extended LQCP and a highly structured quadratic program. These quadratic programs are solved using a Riccati iteration procedure, and a structure-exploiting interior-point method, respectively. The computational cost per iteration...... is quadratic in the dimensions of the controlled system, and linear in the length of the prediction horizon. Simulations show that the approach proposed in this paper is more than an order of magnitude faster than several state-of-the-art quadratic programming algorithms, and that the difference in computation...
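The alternating-direction idea can be illustrated on a generic box-constrained QP. The sketch below replaces the paper's Riccati-based, structure-exploiting x-update with a dense linear solve, so it demonstrates only the splitting, not the stated per-iteration complexity:

```python
import numpy as np

def admm_box_qp(Q, q, lo, hi, rho=1.0, iters=200):
    """ADMM for min 0.5 x'Qx + q'x subject to lo <= x <= hi.
    Alternates an unconstrained quadratic step, a projection onto the box,
    and a scaled dual update."""
    n = len(q)
    x = z = w = np.zeros(n)
    M = np.linalg.inv(Q + rho * np.eye(n))   # factor once, reuse every iteration
    for _ in range(iters):
        x = M @ (rho * (z - w) - q)          # equality-free quadratic step
        z = np.clip(x + w, lo, hi)           # projection onto the input box
        w = w + x - z                        # scaled dual update
    return z

Q = np.array([[2.0, 0.0], [0.0, 2.0]])
q = np.array([-4.0, 4.0])                    # unconstrained minimizer is (2, -2)
print(admm_box_qp(Q, q, lo=-1.0, hi=1.0))    # clipped solution: [1, -1]
```

In the MPC setting of the paper, the quadratic step is the extended LQCP solved by a Riccati recursion, which is what makes the cost linear in the horizon length.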
Fingerprint verification prediction model in hand dermatitis.
Lee, Chew K; Chang, Choong C; Johor, Asmah; Othman, Puwira; Baba, Roshidah
2015-07-01
Hand dermatitis associated fingerprint changes is a significant problem and affects fingerprint verification processes. This study was done to develop a clinically useful prediction model for fingerprint verification in patients with hand dermatitis. A case-control study involving 100 patients with hand dermatitis. All patients verified their thumbprints against their identity card. Registered fingerprints were randomized into a model derivation and model validation group. Predictive model was derived using multiple logistic regression. Validation was done using the goodness-of-fit test. The fingerprint verification prediction model consists of a major criterion (fingerprint dystrophy area of ≥ 25%) and two minor criteria (long horizontal lines and long vertical lines). The presence of the major criterion predicts it will almost always fail verification, while presence of both minor criteria and presence of one minor criterion predict high and low risk of fingerprint verification failure, respectively. When none of the criteria are met, the fingerprint almost always passes the verification. The area under the receiver operating characteristic curve was 0.937, and the goodness-of-fit test showed agreement between the observed and expected number (P = 0.26). The derived fingerprint verification failure prediction model is validated and highly discriminatory in predicting risk of fingerprint verification in patients with hand dermatitis. © 2014 The International Society of Dermatology.
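The decision rule stated in the abstract can be transcribed directly. The function below is an illustrative encoding; the argument names and category labels are ours, not the published scoring instrument:

```python
def verification_risk(dystrophy_pct, long_horizontal, long_vertical):
    """Risk of fingerprint verification failure per the stated criteria.
    Major criterion: dystrophy area >= 25%; minors: long horizontal/vertical lines."""
    if dystrophy_pct >= 25:                      # major criterion present
        return "almost always fails"
    minors = int(long_horizontal) + int(long_vertical)
    if minors == 2:
        return "high risk of failure"
    if minors == 1:
        return "low risk of failure"
    return "almost always passes"                # no criteria met

print(verification_risk(30, False, False))  # almost always fails
print(verification_risk(10, True, True))    # high risk of failure
```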
Massive Predictive Modeling using Oracle R Enterprise
CERN. Geneva
2014-01-01
R is fast becoming the lingua franca for analyzing data via statistics, visualization, and predictive analytics. For enterprise-scale data, R users have three main concerns: scalability, performance, and production deployment. Oracle's R-based technologies - Oracle R Distribution, Oracle R Enterprise, Oracle R Connector for Hadoop, and the R package ROracle - address these concerns. In this talk, we introduce Oracle's R technologies, highlighting how each enables R users to achieve scalability and performance while making production deployment of R results a natural outcome of the data analyst/scientist efforts. The focus then turns to Oracle R Enterprise with code examples using the transparency layer and embedded R execution, targeting massive predictive modeling. One goal behind massive predictive modeling is to build models per entity, such as customers, zip codes, simulations, in an effort to understand behavior and tailor predictions at the entity level. Predictions...
Gharamti, M. E.; Ait-El-Fquih, Boujemaa; Hoteit, Ibrahim
2015-01-01
Numerical experiments are conducted with a two-dimensional synthetic subsurface transport model simulating the migration of a contaminant plume in a heterogeneous aquifer domain. Contaminant concentration data are assimilated to estimate both the contaminant state and the hydraulic conductivity field. Assimilation runs are performed under imperfect modeling conditions and various observational scenarios. Simulation results suggest that the proposed scheme efficiently recovers both the contaminant state and the aquifer conductivity, providing more accurate estimates than the standard Joint and Dual EnKFs in all tested scenarios. Iterating on the update step of the new scheme further enhances the proposed filter’s behavior. In terms of computational cost, the new Joint-EnKF is almost equivalent to that of the Dual-EnKF, but requires twice as many model integrations as the standard Joint-EnKF.
Application of Gauss's law space-charge limited emission model in iterative particle tracking method
Energy Technology Data Exchange (ETDEWEB)
Altsybeyev, V.V., E-mail: v.altsybeev@spbu.ru; Ponomarev, V.A.
2016-11-01
The particle tracking method with a so-called gun iteration for modeling the space charge is discussed in this paper. We suggest applying an emission model based on Gauss's law to calculate the space-charge-limited current density distribution within this method. Based on the presented emission model, we have developed a numerical algorithm for these calculations. This approach allows us to perform accurate and computationally inexpensive numerical simulations of different vacuum sources with curved emitting surfaces, including in the presence of additional physical effects such as bipolar flows and backscattered electrons. Results of simulations of a cylindrical diode and a diode with an elliptical emitter, using axisymmetric coordinates, are presented. The high efficiency and accuracy of the suggested approach are confirmed by the obtained results and by comparisons with analytical solutions.
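As a reference point for any space-charge-limited emission model, the analytical benchmark for the ideal planar diode is the Child-Langmuir law, J = (4ε₀/9)·√(2e/m)·V^{3/2}/d². A sketch of this textbook limit (the paper's curved-emitter geometries generalize it):

```python
import math

EPS0 = 8.8541878128e-12        # vacuum permittivity, F/m
E_CHARGE = 1.602176634e-19     # elementary charge, C
M_ELECTRON = 9.1093837015e-31  # electron mass, kg

def child_langmuir_j(voltage, gap):
    """Space-charge-limited current density (A/m^2) of an ideal planar diode:
    J = (4*eps0/9) * sqrt(2e/m) * V**1.5 / d**2."""
    return (4.0 * EPS0 / 9.0) * math.sqrt(2.0 * E_CHARGE / M_ELECTRON) \
           * voltage ** 1.5 / gap ** 2

# Example: 10 kV across a 1 cm gap
print(child_langmuir_j(1e4, 1e-2))  # ~2.33e4 A/m^2
```

Iterative particle-tracking codes use such limits as convergence checks: the gun iteration should reproduce the analytical current density for simple geometries.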
International Nuclear Information System (INIS)
Cohen, S.A.
1991-12-01
The exhaust of power and fusion-reaction products from ITER plasma are critical physics and technology issues from performance, safety, and reliability perspectives. Because of inadequate pulse length, fluence, flux, scrape-off layer plasma temperature and density, and other parameters, the present generation of tokamaks, linear plasma devices, or energetic beam facilities are unable to perform adequate technology testing of divertor components, though they are essential contributors to many physics issues such as edge-plasma transport and disruption effects and control. This Technical Requirements Documents presents a description of the capabilities and parameters divertor test facilities should have to perform accelerated life testing on predominantly technological divertor issues such as basic divertor concepts, heat load limits, thermal fatigue, tritium inventory and erosion/redeposition. The cost effectiveness of such divertor technology testing is also discussed
Use of the iterative solution method for coupled finite element and boundary element modeling
International Nuclear Information System (INIS)
Koteras, J.R.
1993-07-01
Tunnels buried deep within the earth constitute an important class of geomechanics problems. Two numerical techniques used for the analysis of geomechanics problems, the finite element method and the boundary element method, have complementary characteristics for applications to problems of this type. The usefulness of combining these two methods as a geomechanics analysis tool has been recognized for some time, and a number of coupling techniques have been proposed. However, not all of them lend themselves to efficient computational implementations for large-scale problems. This report examines a coupling technique that can form the basis for an efficient analysis tool for large-scale geomechanics problems through the use of an iterative equation solver
International Nuclear Information System (INIS)
Kocifaj, Miroslav
2016-01-01
The study of diffuse light of a night sky is undergoing a renaissance due to the development of inexpensive high performance computers which can significantly reduce the time needed for accurate numerical simulations. Apart from targeted field campaigns, numerical modeling appears to be one of the most attractive and powerful approaches for predicting the diffuse light of a night sky. However, computer-aided simulation of night-sky radiances over any territory and under arbitrary conditions is a complex problem that is difficult to solve. This study addresses three concepts for modeling the propagation of artificial light through a turbid stratified atmosphere. Specifically, these are the two-stream approximation, an iterative approach to the Radiative Transfer Equation (RTE), and the Method of Successive Orders of Scattering (MSOS). The principles of the methods and their strengths and weaknesses are reviewed with respect to their implications for night-light modeling in different environments. - Highlights: • Three methods for modeling night-sky radiance are reviewed. • The two-stream approximation allows for rapid calculation of radiative fluxes. • The above approach is convenient for modeling large uniformly emitting areas. • MSOS is applicable to heterogeneous deployment of well-separated cities or towns. • MSOS is generally less CPU-intensive than the traditional 3D RTE.
Multi-model analysis in hydrological prediction
Lanthier, M.; Arsenault, R.; Brissette, F.
2017-12-01
Hydrologic modelling is, by nature, a simplification of the real-world hydrologic system. Ensemble hydrological predictions obtained from a single model therefore do not present the full range of possible streamflow outcomes, producing ensembles with errors in variance such as under-dispersion. Past studies show that lumped models used in prediction mode can return satisfactory results, especially when there is not enough information available on the watershed to run a distributed model. But all lumped models greatly simplify the complex processes of the hydrologic cycle. To generate more spread in the hydrologic ensemble predictions, multi-model ensembles have been considered. In this study, the aim is to propose and analyse a method that gives an ensemble streamflow prediction that properly represents the forecast probabilities and reduces ensemble bias. To achieve this, three simple lumped models are used to generate an ensemble. These are also combined using multi-model averaging techniques, which generally produce a more accurate hydrograph than the best of the individual models in simulation mode. This new combined predictive hydrograph is added to the ensemble, creating a larger ensemble which may improve the variability while also improving the ensemble mean bias. The quality of the predictions is then assessed over different periods (2 weeks, 1 month, 3 months and 6 months) using a PIT histogram of the percentiles of the observed volumes with respect to the volumes of the ensemble members. Initially, the models were run using historical weather data to generate synthetic flows. This worked for the individual models, but not for the multi-model and for the large ensemble. Consequently, by performing data assimilation at each prediction period and thus adjusting the initial states of the models, the PIT histogram could be constructed using the observed flows while allowing the use of the multi-model predictions. The under-dispersion has been
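The PIT histogram used above reduces to ranking each observation within its forecast ensemble. A minimal sketch, with synthetic stand-in data rather than the authors' streamflow volumes; a well-calibrated ensemble should yield roughly uniform PIT values:

```python
import numpy as np

def pit_values(ensemble, observations):
    """Fraction of ensemble members falling below each observation.

    ensemble: (n_forecasts, n_members) array of predicted volumes
    observations: (n_forecasts,) array of observed volumes
    """
    return (ensemble < observations[:, None]).mean(axis=1)

# Toy example: members drawn from the same distribution as the observations,
# so the resulting PIT histogram should be close to flat.
rng = np.random.default_rng(0)
obs = rng.normal(size=500)
ens = rng.normal(size=(500, 50))
pit = pit_values(ens, obs)
hist, _ = np.histogram(pit, bins=10, range=(0.0, 1.0))
```

Under-dispersion would show up here as a U-shaped histogram, with observations falling disproportionately in the outer bins.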
Prostate Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing prostate cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Colorectal Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing colorectal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Esophageal Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing esophageal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Bladder Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing bladder cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Lung Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing lung cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Breast Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing breast cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Pancreatic Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing pancreatic cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Ovarian Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing ovarian cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Liver Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing liver cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Testicular Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing testicular cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Cervical Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing cervical cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Modeling and Prediction Using Stochastic Differential Equations
DEFF Research Database (Denmark)
Juhl, Rune; Møller, Jan Kloppenborg; Jørgensen, John Bagterp
2016-01-01
Pharmacokinetic/pharmacodynamic (PK/PD) modeling for a single subject is most often performed using nonlinear models based on deterministic ordinary differential equations (ODEs), and the variation between subjects in a population of subjects is described using a population (mixed effects) setup...... deterministic and can predict the future perfectly. A more realistic approach would be to allow for randomness in the model due to e.g. the model being too simple or errors in input. We describe a modeling and prediction setup which better reflects reality and suggests stochastic differential equations (SDEs...
Predictive Model of Systemic Toxicity (SOT)
In an effort to ensure chemical safety in light of regulatory advances away from reliance on animal testing, USEPA and L’Oréal have collaborated to develop a quantitative systemic toxicity prediction model. Prediction of human systemic toxicity has proved difficult and remains a ...
Spent fuel: prediction model development
International Nuclear Information System (INIS)
Almassy, M.Y.; Bosi, D.M.; Cantley, D.A.
1979-07-01
The need for spent fuel disposal performance modeling stems from a requirement to assess the risks involved with deep geologic disposal of spent fuel, and to support licensing and public acceptance of spent fuel repositories. Through the balanced program of analysis, diagnostic testing, and disposal demonstration tests, highlighted in this presentation, the goal of defining risks and of quantifying fuel performance during long-term disposal can be attained
Navy Recruit Attrition Prediction Modeling
2014-09-01
have high correlation with attrition, such as age, job characteristics, command climate, marital status, and behavior issues prior to recruitment. The additive logistic model was fitted in R as glm(formula = Outcome ~ Age + Gender + Marital + AFQTCat + Pay + Ed + Dep, family = binomial, data = ltraining), with a null deviance of 105441 on 85221 degrees of freedom (dispersion parameter for the binomial family taken to be 1).
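The R call above fits an additive logistic regression. A minimal Python analogue, using gradient ascent rather than R's IRLS; the simulated data below is illustrative, since the recruit dataset is not public:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, n_iter=2000):
    """Fit an additive logistic model (analogous to glm with binomial family)."""
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])  # intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(n_iter):
        p = sigmoid(Xb @ w)
        w += lr * Xb.T @ (y - p) / len(y)  # gradient ascent on the log-likelihood
    return w

def deviance(X, y, w):
    """-2 log-likelihood, the quantity R reports as (residual) deviance."""
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])
    p = np.clip(sigmoid(Xb @ w), 1e-12, 1 - 1e-12)
    return -2.0 * np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
```

A fitted model should always show a residual deviance below the null (zero-coefficient) deviance, which is the comparison the thesis output reports.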
Predicting and Modeling RNA Architecture
Westhof, Eric; Masquida, Benoît; Jossinet, Fabrice
2011-01-01
SUMMARY A general approach for modeling the architecture of large and structured RNA molecules is described. The method exploits the modularity and the hierarchical folding of RNA architecture that is viewed as the assembly of preformed double-stranded helices defined by Watson-Crick base pairs and RNA modules maintained by non-Watson-Crick base pairs. Despite the extensive molecular neutrality observed in RNA structures, specificity in RNA folding is achieved through global constraints like lengths of helices, coaxiality of helical stacks, and structures adopted at the junctions of helices. The Assemble integrated suite of computer tools allows for sequence and structure analysis as well as interactive modeling by homology or ab initio assembly with possibilities for fitting within electronic density maps. The local key role of non-Watson-Crick pairs guides RNA architecture formation and offers metrics for assessing the accuracy of three-dimensional models in a more useful way than usual root mean square deviation (RMSD) values. PMID:20504963
Predictive Models and Computational Toxicology (II IBAMTOX)
EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...
Finding furfural hydrogenation catalysts via predictive modelling
Strassberger, Z.; Mooijman, M.; Ruijter, E.; Alberts, A.H.; Maldonado, A.G.; Orru, R.V.A.; Rothenberg, G.
2010-01-01
We combine multicomponent reactions, catalytic performance studies and predictive modelling to find transfer hydrogenation catalysts. An initial set of 18 ruthenium-carbene complexes were synthesized and screened in the transfer hydrogenation of furfural to furfurol with isopropyl alcohol.
FINITE ELEMENT MODEL FOR PREDICTING RESIDUAL ...
African Journals Online (AJOL)
FINITE ELEMENT MODEL FOR PREDICTING RESIDUAL STRESSES IN ... the transverse residual stress in the x-direction (σx) had a maximum value of 375MPa ... the finite element method are in fair agreement with the experimental results.
Evaluation of CASP8 model quality predictions
Cozzetto, Domenico; Kryshtafovych, Andriy; Tramontano, Anna
2009-01-01
established a prediction category to evaluate their performance in 2006. In 2008 the experiment was repeated and its results are reported here. Participants were invited to infer the correctness of the protein models submitted by the registered automatic
Mental models accurately predict emotion transitions.
Thornton, Mark A; Tamir, Diana I
2017-06-06
Successful social interactions depend on people's ability to predict others' future actions and emotions. People possess many mechanisms for perceiving others' current emotional states, but how might they use this information to predict others' future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others' emotional dynamics. People could then use these mental models of emotion transitions to predict others' future emotions from currently observable emotions. To test this hypothesis, studies 1-3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants' ratings of emotion transitions predicted others' experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation-valence, social impact, rationality, and human mind-inform participants' mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants' accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone.
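The core measurement in studies 1-3, the empirical rate of transitions between emotions in experience-sampling data, reduces to a row-normalized count matrix. A toy sketch; the emotion labels are illustrative and not the study's item set:

```python
import numpy as np

EMOTIONS = ["happy", "calm", "sad", "anxious"]  # illustrative label set

def transition_matrix(sequence, labels):
    """Row-normalized counts of consecutive emotion pairs in a sampled sequence."""
    idx = {e: i for i, e in enumerate(labels)}
    counts = np.zeros((len(labels), len(labels)))
    for a, b in zip(sequence, sequence[1:]):
        counts[idx[a], idx[b]] += 1.0
    rows = counts.sum(axis=1, keepdims=True)
    # Rows with no observed transitions stay zero instead of dividing by zero.
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)
```

Participants' rated transition likelihoods can then be correlated against such an empirical matrix to quantify the accuracy of their mental models.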
International Nuclear Information System (INIS)
Oda, Seitaro; Weissman, Gaby; Weigold, W. Guy; Vembar, Mani
2015-01-01
The purpose of this study was to investigate the effects of knowledge-based iterative model reconstruction (IMR) on image quality in cardiac CT performed for the planning of redo cardiac surgery by comparing IMR images with images reconstructed with filtered back-projection (FBP) and hybrid iterative reconstruction (HIR). We studied 31 patients (23 men, 8 women; mean age 65.1 ± 16.5 years) referred for redo cardiac surgery who underwent cardiac CT. Paired image sets were created using three types of reconstruction: FBP, HIR, and IMR. Quantitative parameters including CT attenuation, image noise, and contrast-to-noise ratio (CNR) of each cardiovascular structure were calculated. The visual image quality - graininess, streak artefact, margin sharpness of each cardiovascular structure, and overall image quality - was scored on a five-point scale. The mean image noise of FBP, HIR, and IMR images was 58.3 ± 26.7, 36.0 ± 12.5, and 14.2 ± 5.5 HU, respectively; there were significant differences in all comparison combinations among the three methods. The CNR of IMR images was better than that of FBP and HIR images in all evaluated structures. The visual scores were significantly higher for IMR than for the other images in all evaluated parameters. IMR can provide significantly improved qualitative and quantitative image quality in cardiac CT for planning of reoperative cardiac surgery. (orig.)
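The CNR reported above is a simple ratio of the mean attenuation difference to the background noise. A generic sketch, not the authors' exact ROI protocol:

```python
import numpy as np

def cnr(roi, background):
    """Contrast-to-noise ratio: mean attenuation difference over background noise SD.

    roi, background: 1D arrays of CT numbers (HU) sampled from the two regions.
    """
    return abs(np.mean(roi) - np.mean(background)) / np.std(background)
```

Lower image noise (the denominator) is what drives the higher CNR observed with IMR relative to FBP and HIR.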
A noise power spectrum study of a new model-based iterative reconstruction system: Veo 3.0.
Li, Guang; Liu, Xinming; Dodge, Cristina T; Jensen, Corey T; Rong, X John
2016-09-08
The purpose of this study was to evaluate performance of the third generation of model-based iterative reconstruction (MBIR) system, Veo 3.0, based on noise power spectrum (NPS) analysis with various clinical presets over a wide range of clinically applicable dose levels. A CatPhan 600 surrounded by an oval, fat-equivalent ring to mimic patient size/shape was scanned 10 times at each of six dose levels on a GE HD 750 scanner. NPS analysis was performed on images reconstructed with various Veo 3.0 preset combinations for comparison with images reconstructed using Veo 2.0, filtered back projection (FBP) and adaptive statistical iterative reconstruction (ASiR). The new Target Thickness setting resulted in higher noise in thicker axial images. The new Texture Enhancement function achieved a more isotropic noise behavior with less image artifacts. Veo 3.0 provides additional reconstruction options designed to allow the user choice of balance between spatial resolution and image noise, relative to Veo 2.0. Veo 3.0 provides more user selectable options and in general improved isotropic noise behavior in comparison to Veo 2.0. The overall noise reduction performance of both versions of MBIR was improved in comparison to FBP and ASiR, especially at low-dose levels. © 2016 The Authors.
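An NPS estimate of this kind is typically the ensemble-averaged squared DFT of mean-subtracted noise ROIs, scaled by pixel area. A simplified 2D sketch; the pixel size and the scanner-specific ROI protocol here are assumptions:

```python
import numpy as np

def nps_2d(noise_rois, pixel_mm=0.5):
    """2D noise power spectrum from an ensemble of square noise-only ROIs.

    noise_rois: (n_rois, N, N) array of CT numbers (HU).
    Returns the NPS in HU^2 mm^2; its mean over frequency bins equals
    pixel_mm^2 times the noise variance (Parseval's relation).
    """
    N = noise_rois.shape[-1]
    rois = noise_rois - noise_rois.mean(axis=(1, 2), keepdims=True)  # detrend
    dft2 = np.abs(np.fft.fft2(rois)) ** 2
    return dft2.mean(axis=0) * pixel_mm**2 / (N * N)
```

For uncorrelated (white) noise the spectrum is flat; iterative reconstructions typically shift power toward low frequencies, which is what the texture comparisons in the study quantify.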
Iteration and Prototyping in Creating Technical Specifications.
Flynt, John P.
1994-01-01
Claims that the development process for computer software can be greatly aided by the writers of specifications if they employ basic iteration and prototyping techniques. Asserts that computer software configuration management practices provide ready models for iteration and prototyping. (HB)
Return Predictability, Model Uncertainty, and Robust Investment
DEFF Research Database (Denmark)
Lukas, Manuel
Stock return predictability is subject to great uncertainty. In this paper we use the model confidence set approach to quantify uncertainty about expected utility from investment, accounting for potential return predictability. For monthly US data and six representative return prediction models, we...... find that confidence sets are very wide, change significantly with the predictor variables, and frequently include expected utilities for which the investor prefers not to invest. The latter motivates a robust investment strategy maximizing the minimal element of the confidence set. The robust investor...... allocates a much lower share of wealth to stocks compared to a standard investor....
Model predictive Controller for Mobile Robot
Alireza Rezaee
2017-01-01
This paper proposes a Model Predictive Controller (MPC) for control of a P2AT mobile robot. MPC refers to a group of controllers that employ an explicit model of the process to predict its future behavior over an extended prediction horizon. The design of a MPC is formulated as an optimal control problem. This problem is then cast as a linear quadratic regulator (LQR) problem and solved by making use of the Riccati equation. To show the effectiveness of the proposed method this controller is...
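The LQR step amounts to iterating the discrete-time Riccati equation to a fixed point and reading off the state-feedback gain. A generic sketch with a double-integrator stand-in for the robot dynamics; the model matrices are hypothetical, not the P2AT's:

```python
import numpy as np

def dlqr(A, B, Q, R, n_iter=500):
    """Discrete-time LQR gain via fixed-point iteration of the Riccati equation:
    P <- Q + A'PA - A'PB (R + B'PB)^-1 B'PA,  K = (R + B'PB)^-1 B'PA.
    """
    P = Q.copy()
    for _ in range(n_iter):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Double-integrator example (position/velocity state, dt = 0.1 s).
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
K = dlqr(A, B, Q=np.eye(2), R=np.array([[1.0]]))
```

The closed-loop matrix A - BK should have all eigenvalues strictly inside the unit circle, i.e. the regulator is stabilizing.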
Spatial Economics Model Predicting Transport Volume
Directory of Open Access Journals (Sweden)
Lu Bo
2016-10-01
Full Text Available It is extremely important to predict logistics requirements in a scientific and rational way. However, in recent years the improvement in prediction methods has not been significant, and the traditional statistical prediction method suffers from low precision and poor interpretability, so it can neither guarantee the generalization ability of the prediction model theoretically nor explain the model effectively. Therefore, in combination with theories from spatial economics, industrial economics, and neo-classical economics, and taking the city of Zhuanghe as the research object, the study identifies the leading industry that can produce a large number of cargoes, and further predicts the static logistics generation of Zhuanghe and its hinterlands. By integrating various factors that can affect regional logistics requirements, this study established a logistics requirements potential model based on spatial economic principles, and expanded logistics requirements prediction from single statistical principles to a new area of spatial and regional economics.
Accuracy assessment of landslide prediction models
International Nuclear Information System (INIS)
Othman, A N; Mohd, W M N W; Noraini, S
2014-01-01
The increasing population and expansion of settlements over hilly areas has greatly increased the impact of natural disasters such as landslides. Therefore, it is important to develop models which can accurately predict landslide hazard zones. Over the years, various techniques and models have been developed to predict landslide hazard zones. The aim of this paper is to assess the accuracy of landslide prediction models developed by the authors. The methodology involved the selection of the study area, data acquisition, data processing, model development and data analysis. The development of these models is based on nine different landslide-inducing parameters, i.e. slope, land use, lithology, soil properties, geomorphology, flow accumulation, aspect, proximity to river and proximity to road. Rank sum, rating, pairwise comparison and AHP techniques are used to determine the weights for each of the parameters used. Four (4) different models, which consider different parameter combinations, are developed by the authors. Results obtained are compared to landslide history; the accuracies for Model 1, Model 2, Model 3 and Model 4 are 66.7%, 66.7%, 60% and 22.9% respectively. From the results, rank sum, rating and pairwise comparison can be useful techniques to predict landslide hazard zones
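The pairwise-comparison (AHP) weighting mentioned above extracts parameter weights from the principal eigenvector of a reciprocal comparison matrix. A small sketch; the example matrix and the three-parameter choice are illustrative, not the authors' data:

```python
import numpy as np

def ahp_weights(pairwise):
    """Criterion weights from a reciprocal pairwise comparison matrix.

    Returns the normalized principal eigenvector (Saaty's eigenvector method).
    """
    vals, vecs = np.linalg.eig(pairwise)
    w = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return w / w.sum()

# Hypothetical 3-parameter example: slope judged 3x as important as lithology
# and 5x as important as land use (reciprocals fill the lower triangle).
M = np.array([[1.0,   3.0,   5.0],
              [1/3.0, 1.0,   2.0],
              [1/5.0, 1/2.0, 1.0]])
w = ahp_weights(M)
```

The weights sum to one and preserve the judged ordering of the criteria, here with slope receiving the largest weight.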
International Nuclear Information System (INIS)
Beauwens, B.; Arkuszewski, J.; Boryszewicz, M.
1981-01-01
Results obtained in the field of linear iterative methods within the Coordinated Research Program on Transport Theory and Advanced Reactor Calculations are summarized. The general convergence theory of linear iterative methods is essentially based on the properties of nonnegative operators on ordered normed spaces. The following aspects of this theory have been improved: new comparison theorems for regular splittings, generalization of the notions of M- and H-matrices, new interpretations of classical convergence theorems for positive-definite operators. The estimation of asymptotic convergence rates was developed with two purposes: the analysis of model problems and the optimization of relaxation parameters. In the framework of factorization iterative methods, model problem analysis is needed to investigate whether the increased computational complexity of higher-order methods does not offset their increased asymptotic convergence rates, as well as to appreciate the effect of standard relaxation techniques (polynomial relaxation). On the other hand, the optimal use of factorization iterative methods requires the development of adequate relaxation techniques and their optimization. The relative performances of a few possibilities have been explored for model problems. Presently, the best results have been obtained with optimal diagonal-Chebyshev relaxation
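A regular splitting A = M - N yields the linear iteration x_{k+1} = M^{-1}(N x_k + b), convergent when the spectral radius of M^{-1}N is below one. The Jacobi splitting, with M the diagonal of A, is the simplest instance of the framework discussed above (a generic numerical sketch, not the program's reactor codes):

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, max_iter=10_000):
    """Jacobi iteration from the regular splitting A = D - (D - A)."""
    D = np.diag(A)            # diagonal part of A as a vector
    R = A - np.diag(D)        # off-diagonal remainder
    x = np.zeros_like(b) if x0 is None else x0.copy()
    for _ in range(max_iter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    return x
```

For a strictly diagonally dominant matrix the comparison theorems for regular splittings guarantee convergence; relaxation (e.g. Chebyshev polynomial acceleration) then improves the asymptotic rate.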
Predictive validation of an influenza spread model.
Directory of Open Access Journals (Sweden)
Ayaz Hyder
Full Text Available BACKGROUND: Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics, leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. METHODS AND FINDINGS: We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998-1999. Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type. Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks earlier with reasonable reliability, which depended on the method of forecasting (static or dynamic. CONCLUSIONS: Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve
Zhang, Langwen; Xie, Wei; Wang, Jingcheng
2017-11-01
In this work, synthesis of robust distributed model predictive control (MPC) is presented for a class of linear systems subject to structured time-varying uncertainties. By decomposing a global system into smaller dimensional subsystems, a set of distributed MPC controllers, instead of a centralised controller, are designed. To ensure the robust stability of the closed-loop system with respect to model uncertainties, distributed state feedback laws are obtained by solving a min-max optimisation problem. The design of robust distributed MPC is then transformed into solving a minimisation optimisation problem with linear matrix inequality constraints. An iterative online algorithm with adjustable maximum iteration is proposed to coordinate the distributed controllers to achieve a global performance. The simulation results show the effectiveness of the proposed robust distributed MPC algorithm.
Finding Furfural Hydrogenation Catalysts via Predictive Modelling.
Strassberger, Zea; Mooijman, Maurice; Ruijter, Eelco; Alberts, Albert H; Maldonado, Ana G; Orru, Romano V A; Rothenberg, Gadi
2010-09-10
We combine multicomponent reactions, catalytic performance studies and predictive modelling to find transfer hydrogenation catalysts. An initial set of 18 ruthenium-carbene complexes were synthesized and screened in the transfer hydrogenation of furfural to furfurol with isopropyl alcohol. The complexes gave varied yields, from 62% up to >99.9%, with no obvious structure/activity correlations. Control experiments proved that the carbene ligand remains coordinated to the ruthenium centre throughout the reaction. Deuterium-labelling studies showed a secondary isotope effect (k(H):k(D)=1.5). Further mechanistic studies showed that this transfer hydrogenation follows the so-called monohydride pathway. Using these data, we built a predictive model for 13 of the catalysts, based on 2D and 3D molecular descriptors. We tested and validated the model using the remaining five catalysts (cross-validation, R(2)=0.913). Then, with this model, the conversion and selectivity were predicted for four completely new ruthenium-carbene complexes. These four catalysts were then synthesized and tested. The results were within 3% of the model's predictions, demonstrating the validity and value of predictive modelling in catalyst optimization.
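A cross-validated R^2 of the kind reported above can be estimated for a linear model on molecular descriptors with a leave-one-out loop. This is a generic sketch, not the authors' descriptor set or learner:

```python
import numpy as np

def loo_q2(X, y):
    """Leave-one-out cross-validated R^2 (often called Q^2) for a linear model.

    X: (n_samples, n_descriptors) matrix; y: (n_samples,) response (e.g. yield).
    """
    n = len(y)
    Xb = np.hstack([np.ones((n, 1)), X])   # intercept column
    press = 0.0                            # predictive residual sum of squares
    for i in range(n):
        mask = np.arange(n) != i
        w, *_ = np.linalg.lstsq(Xb[mask], y[mask], rcond=None)
        press += (y[i] - Xb[i] @ w) ** 2
    return 1.0 - press / np.sum((y - y.mean()) ** 2)
```

Values near 1 indicate the model predicts held-out catalysts well; values near or below 0 indicate it does no better than predicting the mean.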
Corporate prediction models, ratios or regression analysis?
Bijnen, E.J.; Wijn, M.F.C.M.
1994-01-01
The models developed in the literature with respect to the prediction of a company's failure are based on ratios. It has been shown before that these models should be rejected on theoretical grounds. Our study of industrial companies in the Netherlands shows that the ratios which are used in
Predicting Protein Secondary Structure with Markov Models
DEFF Research Database (Denmark)
Fischer, Paul; Larsen, Simon; Thomsen, Claus
2004-01-01
we are considering here, is to predict the secondary structure from the primary one. To this end we train a Markov model on training data and then use it to classify parts of unknown protein sequences as sheets, helices or coils. We show how to exploit the directional information contained...... in the Markov model for this task. Classifications that are purely based on statistical models might not always be biologically meaningful. We present combinatorial methods to incorporate biological background knowledge to enhance the prediction performance....
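A per-class first-order Markov model of this kind can be trained on residue transitions and used to score fragments by log-likelihood. The toy training data below is illustrative; real predictors use sliding windows and richer features:

```python
import numpy as np

AMINO = "ACDEFGHIKLMNPQRSTVWY"  # standard 20-letter amino acid alphabet

def train(seqs_by_state):
    """First-order Markov transition probabilities per structure class (H/E/C)."""
    idx = {a: i for i, a in enumerate(AMINO)}
    models = {}
    for state, seqs in seqs_by_state.items():
        counts = np.ones((20, 20))  # Laplace smoothing avoids zero probabilities
        for s in seqs:
            for a, b in zip(s, s[1:]):
                counts[idx[a], idx[b]] += 1.0
        models[state] = counts / counts.sum(axis=1, keepdims=True)
    return models

def classify(fragment, models):
    """Assign the class whose Markov chain gives the fragment the highest log-likelihood."""
    idx = {a: i for i, a in enumerate(AMINO)}
    def loglik(P):
        return sum(np.log(P[idx[a], idx[b]]) for a, b in zip(fragment, fragment[1:]))
    return max(models, key=lambda s: loglik(models[s]))
```

The directional information mentioned in the abstract is exactly what the transition matrix captures: P[a, b] generally differs from P[b, a].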
Power flow prediction in vibrating systems via model reduction
Li, Xianhui
This dissertation focuses on power flow prediction in vibrating systems. Reduced order models (ROMs) are built based on rational Krylov model reduction which preserve power flow information in the original systems over a specified frequency band. Stiffness and mass matrices of the ROMs are obtained by projecting the original system matrices onto the subspaces spanned by forced responses. A matrix-free algorithm is designed to construct ROMs directly from the power quantities at selected interpolation frequencies. Strategies for parallel implementation of the algorithm via message passing interface are proposed. The quality of ROMs is iteratively refined according to the error estimate based on residual norms. Band capacity is proposed to provide a priori estimate of the sizes of good quality ROMs. Frequency averaging is recast as ensemble averaging and Cauchy distribution is used to simplify the computation. Besides model reduction for deterministic systems, details of constructing ROMs for parametric and nonparametric random systems are also presented. Case studies have been conducted on testbeds from Harwell-Boeing collections. Input and coupling power flow are computed for the original systems and the ROMs. Good agreement is observed in all cases.
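The projection step described above, building the ROM basis from forced responses at interpolation frequencies, can be sketched as follows. By construction, a Galerkin ROM of this kind reproduces the full response exactly at the interpolation frequencies (an undamped, real-symmetric sketch; the dissertation's matrix-free and parallel machinery is omitted):

```python
import numpy as np

def rom_from_forced_responses(K, M, f, freqs):
    """Reduced stiffness/mass matrices via projection onto forced responses.

    Solves (K - w^2 M) x = f at each interpolation frequency w, orthonormalizes
    the responses, and projects the system matrices onto that subspace.
    """
    X = np.column_stack([np.linalg.solve(K - w**2 * M, f) for w in freqs])
    V, _ = np.linalg.qr(X)  # orthonormal basis of the response subspace
    return V.T @ K @ V, V.T @ M @ V, V
```

Because the full response at an interpolation frequency lies in the span of V, the reduced solve recovers it exactly, and power quantities computed from the ROM match the original system there.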
Energy based prediction models for building acoustics
DEFF Research Database (Denmark)
Brunskog, Jonas
2012-01-01
In order to reach robust and simplified yet accurate prediction models, energy based principles are commonly used in many fields of acoustics, especially in building acoustics. This includes simple energy flow models, the framework of statistical energy analysis (SEA), as well as more elaborate principles such as wave intensity analysis (WIA). The European standards for building acoustic predictions, the EN 12354 series, are based on energy flow and SEA principles. In the present paper, different energy based prediction models are discussed and critically reviewed. Special attention is placed on underlying basic assumptions, such as diffuse fields, high modal overlap, and a dominant resonant field, and on the consequences of these in terms of limitations in the theory and in the practical use of the models.
Comparative Study of Bankruptcy Prediction Models
Directory of Open Access Journals (Sweden)
Isye Arieshanti
2013-09-01
Full Text Available Early indication of bankruptcy is important for a company. If a company is aware of its potential bankruptcy, it can take preventive action to anticipate it. In order to detect this potential, a company can utilize a bankruptcy prediction model. Such a model can be built using machine learning methods, but the choice of method must be made carefully, because the suitability of a model depends on the specific problem. Therefore, in this paper we perform a comparative study of several machine learning methods for bankruptcy prediction. Comparing the performance of several models based on machine learning methods (k-NN, fuzzy k-NN, SVM, Bagging Nearest Neighbour SVM, Multilayer Perceptron (MLP), and a hybrid of MLP + Multiple Linear Regression), we show that the fuzzy k-NN method achieves the best performance, with an accuracy of 77.5%
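As a minimal illustration of the kind of classifier compared in the study, here is a plain k-NN sketch in NumPy; the helper names and toy data are assumptions, and the paper's experiments of course used richer financial features.

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Majority vote among the k nearest training points (Euclidean)."""
    preds = []
    for x in X_test:
        dist = np.linalg.norm(X_train - x, axis=1)
        nearest = y_train[np.argsort(dist)[:k]]
        preds.append(np.bincount(nearest).argmax())
    return np.array(preds)

def accuracy(y_true, y_pred):
    """Fraction of correctly classified samples."""
    return float(np.mean(y_true == y_pred))
```

The fuzzy k-NN variant favoured by the study replaces the hard majority vote with distance-weighted class memberships; the comparison framework (fit, predict, score accuracy) is the same.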
International Nuclear Information System (INIS)
Choi, Hyeon Chang; Park, Jun Hyub
2005-01-01
In this study, the residual stress distribution in multi-stacked films produced by the MEMS (Micro-Electro Mechanical System) process is predicted using the Finite Element Method (FEM). We developed a finite element program for REsidual Stress Analysis (RESA) in multi-stacked films. The RESA predicts the distribution of the residual stress field in a multi-stacked film. Curvatures of the multi-stacked film and of the single layers which constitute it are used as input to the RESA. Measuring these curvatures is easier than measuring a residual stress distribution. To verify the RESA, mean stresses and stress gradients of single and multilayers were measured. The mean stresses were calculated from curvatures of the deposited wafer using Stoney's equation. The stress gradients were calculated from the vertical deflection at the end of a cantilever beam. To measure the mean stress of each layer in the multi-stacked film, we measured the curvature of the wafer with the film after etching it layer by layer
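Stoney's equation, used above to convert wafer curvature into mean film stress, reads σ = E_s h_s² κ / (6 (1 − ν_s) h_f). A direct transcription follows; the function name and the example numbers are illustrative assumptions.

```python
def stoney_stress(E_s, nu_s, h_s, h_f, curvature):
    """Mean film stress (Pa) from substrate curvature (1/m) via Stoney's
    equation: sigma = E_s * h_s**2 * kappa / (6 * (1 - nu_s) * h_f).

    E_s, nu_s: substrate Young's modulus (Pa) and Poisson ratio
    h_s, h_f:  substrate and film thicknesses (m)
    """
    return E_s * h_s ** 2 * curvature / (6.0 * (1.0 - nu_s) * h_f)
```

For example, a 1 µm film on a 500 µm substrate (E ≈ 130 GPa, ν ≈ 0.28) bending the wafer to κ = 0.01 m⁻¹ implies a mean stress of roughly 75 MPa.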
Prediction Models for Dynamic Demand Response
Energy Technology Data Exchange (ETDEWEB)
Aman, Saima; Frincu, Marc; Chelmis, Charalampos; Noor, Muhammad; Simmhan, Yogesh; Prasanna, Viktor K.
2015-11-02
As Smart Grids move closer to dynamic curtailment programs, Demand Response (DR) events will become necessary not only on fixed time intervals and weekdays predetermined by static policies, but also during changing decision periods and weekends to react to real-time demand signals. Unique challenges arise in this context vis-a-vis demand prediction and curtailment estimation and the transformation of such tasks into an automated, efficient dynamic demand response (D^{2}R) process. While existing work has concentrated on increasing the accuracy of prediction models for DR, there is a lack of studies on prediction models for D^{2}R, which we address in this paper. Our first contribution is the formal definition of D^{2}R and the description of its challenges and requirements. Our second contribution is a feasibility analysis of very-short-term prediction of electricity consumption for D^{2}R over a diverse, large-scale dataset that includes both small residential customers and large buildings. Our third and major contribution is a set of insights into the predictability of electricity consumption in the context of D^{2}R. Specifically, we focus on prediction models that can operate at a very small data granularity (here 15-min intervals), for both weekdays and weekends - all conditions that characterize scenarios for D^{2}R. We find that short-term time series and simple averaging models used by Independent Service Operators and utilities achieve superior prediction accuracy. We also observe that workdays are more predictable than weekends and holidays. Also, smaller customers have large variation in consumption and are less predictable than larger buildings. Key implications of our findings are that better models are required for small customers and for non-workdays, both of which are critical for D^{2}R. Also, prediction models require just a few days’ worth of data indicating that small amounts of
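The "simple averaging models" mentioned above can be sketched as a same-slot average over recent days; this is an assumed minimal form (the paper evaluates utility-grade variants), with 96 slots per day at 15-min granularity.

```python
import numpy as np

def averaging_forecast(history, n_days=3):
    """Predict each of the 96 15-min slots of the next day as the mean of
    the same slot over the last n_days of history (shape: days x 96)."""
    history = np.asarray(history, dtype=float)
    return history[-n_days:].mean(axis=0)
```

Despite its simplicity, a slot-wise average like this is a strong baseline precisely because daily load profiles repeat; it also needs only a few days of history, in line with the paper's finding.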
Integration of Tuyere, Raceway and Shaft Models for Predicting Blast Furnace Process
Fu, Dong; Tang, Guangwu; Zhao, Yongfu; D'Alessio, John; Zhou, Chenn Q.
2018-06-01
A novel modeling strategy is presented for simulating the blast furnace iron making process. The physical and chemical phenomena involved take place across a wide range of length and time scales, so three models are developed to simulate different regions of the blast furnace: the tuyere model, the raceway model and the shaft model. This paper focuses on the integration of the three models to predict the entire blast furnace process. Output-to-input mappings between models and an iterative scheme are developed to establish communication between them. The effects of tuyere operation and burden distribution on blast furnace fuel efficiency are investigated numerically. The integration of different models provides a way to simulate the blast furnace realistically by improving the modeling resolution of local phenomena and minimizing model assumptions.
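The iterative scheme that passes outputs between the tuyere, raceway and shaft models can be sketched as a fixed-point loop; the function signature and scalar state are illustrative assumptions (the real models exchange full field data, not a single number).

```python
def couple_models(tuyere, raceway, shaft, x0, tol=1e-8, max_iter=100):
    """Fixed-point iteration: feed each region model's output to the next
    until the circulating state stops changing."""
    x = x0
    for _ in range(max_iter):
        x_new = shaft(raceway(tuyere(x)))
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

Convergence of such a loop requires the composed mapping to be contractive; in practice this is usually encouraged by under-relaxing the exchanged quantities between iterations.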
Evaluation of CASP8 model quality predictions
Cozzetto, Domenico
2009-01-01
The model quality assessment problem consists in the a priori estimation of the overall and per-residue accuracy of protein structure predictions. Over the past years, a number of methods have been developed to address this issue and CASP established a prediction category to evaluate their performance in 2006. In 2008 the experiment was repeated and its results are reported here. Participants were invited to infer the correctness of the protein models submitted by the registered automatic servers. Estimates could apply to both whole models and individual amino acids. Groups involved in the tertiary structure prediction categories were also asked to assign local error estimates to each predicted residue in their own models and their results are also discussed here. The correlation between the predicted and observed correctness measures was the basis of the assessment of the results. We observe that consensus-based methods still perform significantly better than those accepting single models, similarly to what was concluded in the previous edition of the experiment. © 2009 WILEY-LISS, INC.
Finding Furfural Hydrogenation Catalysts via Predictive Modelling
Strassberger, Zea; Mooijman, Maurice; Ruijter, Eelco; Alberts, Albert H; Maldonado, Ana G; Orru, Romano V A; Rothenberg, Gadi
2010-01-01
We combine multicomponent reactions, catalytic performance studies and predictive modelling to find transfer hydrogenation catalysts. An initial set of 18 ruthenium-carbene complexes was synthesized and screened in the transfer hydrogenation of furfural to furfurol with isopropyl alcohol. The complexes gave varied yields, from 62% up to >99.9%, with no obvious structure/activity correlations. Control experiments proved that the carbene ligand remains coordinated to the ruthenium centre throughout the reaction. Deuterium-labelling studies showed a secondary isotope effect (kH:kD = 1.5). Further mechanistic studies showed that this transfer hydrogenation follows the so-called monohydride pathway. Using these data, we built a predictive model for 13 of the catalysts, based on 2D and 3D molecular descriptors. We tested and validated the model using the remaining five catalysts (cross-validation, R2 = 0.913). Then, with this model, the conversion and selectivity were predicted for four completely new ruthenium-carbene complexes. These four catalysts were then synthesized and tested. The results were within 3% of the model’s predictions, demonstrating the validity and value of predictive modelling in catalyst optimization. PMID:23193388
ITER council proceedings: 2001
International Nuclear Information System (INIS)
2001-01-01
Continuing the ITER EDA, two further ITER Council Meetings were held since the publication of ITER EDA documentation series no. 20, namely the ITER Council Meeting on 27-28 February 2001 in Toronto, and the ITER Council Meeting on 18-19 July in Vienna, which was the last one during the ITER EDA. This volume contains records of these meetings, including: records of decisions; lists of attendees; the ITER EDA status report; the ITER EDA technical activities report; the MAC report and advice; the final report of the ITER EDA; and a press release
Wind farm production prediction - The Zephyr model
Energy Technology Data Exchange (ETDEWEB)
Landberg, L. [Risoe National Lab., Wind Energy Dept., Roskilde (Denmark); Giebel, G. [Risoe National Lab., Wind Energy Dept., Roskilde (Denmark); Madsen, H. [IMM (DTU), Kgs. Lyngby (Denmark); Nielsen, T.S. [IMM (DTU), Kgs. Lyngby (Denmark); Joergensen, J.U. [Danish Meteorologisk Inst., Copenhagen (Denmark); Lauersen, L. [Danish Meteorologisk Inst., Copenhagen (Denmark); Toefting, J. [Elsam, Fredericia (DK); Christensen, H.S. [Eltra, Fredericia (Denmark); Bjerge, C. [SEAS, Haslev (Denmark)
2002-06-01
This report describes a project - funded by the Danish Ministry of Energy and the Environment - which developed a next generation prediction system called Zephyr. The Zephyr system is a merger of two state-of-the-art prediction systems: Prediktor of Risoe National Laboratory and WPPT of IMM at the Danish Technical University. The numerical weather predictions were generated by DMI's HIRLAM model. Due to technical difficulties in programming the system, only the computational core and a very simple version of the originally very complex system were developed. The project partners were: Risoe, DMU, DMI, Elsam, Eltra, Elkraft System, SEAS and E2. (au)
Model predictive controller design of hydrocracker reactors
GÖKÇE, Dila
2011-01-01
This study summarizes the design of a Model Predictive Controller (MPC) in Tüpraş, İzmit Refinery Hydrocracker Unit Reactors. Hydrocracking process, in which heavy vacuum gasoil is converted into lighter and valuable products at high temperature and pressure is described briefly. Controller design description, identification and modeling studies are examined and the model variables are presented. WABT (Weighted Average Bed Temperature) equalization and conversion increase are simulate...
Ott, Julien G.; Becce, Fabio; Monnin, Pascal; Schmidt, Sabine; Bochud, François O.; Verdun, Francis R.
2014-08-01
The state of the art to describe image quality in medical imaging is to assess the performance of an observer conducting a task of clinical interest. This can be done by using a model observer leading to a figure of merit such as the signal-to-noise ratio (SNR). Using the non-prewhitening (NPW) model observer, we objectively characterised the evolution of its figure of merit in various acquisition conditions. The NPW model observer usually requires the use of the modulation transfer function (MTF) as well as noise power spectra. However, although the computation of the MTF poses no problem when dealing with the traditional filtered back-projection (FBP) algorithm, this is not the case when using iterative reconstruction (IR) algorithms, such as adaptive statistical iterative reconstruction (ASIR) or model-based iterative reconstruction (MBIR). Given that the target transfer function (TTF) had already shown it could accurately express the system resolution even with non-linear algorithms, we decided to tune the NPW model observer, replacing the standard MTF by the TTF. It was estimated using a custom-made phantom containing cylindrical inserts surrounded by water. The contrast differences between the inserts and water were plotted for each acquisition condition. Then, mathematical transformations were performed leading to the TTF. As expected, the first results showed a dependency of the image contrast and noise levels on the TTF for both ASIR and MBIR. Moreover, FBP also proved to be dependent of the contrast and noise when using the lung kernel. Those results were then introduced in the NPW model observer. We observed an enhancement of SNR every time we switched from FBP to ASIR to MBIR. IR algorithms greatly improve image quality, especially in low-dose conditions. Based on our results, the use of MBIR could lead to further dose reduction in several clinical applications.
Energy Technology Data Exchange (ETDEWEB)
Millon, Domitille; Coche, Emmanuel E. [Universite Catholique de Louvain, Department of Radiology and Medical Imaging, Cliniques Universitaires Saint Luc, Brussels (Belgium); Vlassenbroek, Alain [Philips Healthcare, Brussels (Belgium); Maanen, Aline G. van; Cambier, Samantha E. [Universite Catholique de Louvain, Statistics Unit, King Albert II Cancer Institute, Brussels (Belgium)
2017-03-15
To compare the image quality [low contrast (LC) detectability, noise, contrast-to-noise ratio (CNR) and spatial resolution (SR)] of MDCT images reconstructed with an iterative reconstruction (IR) algorithm and with a filtered back projection (FBP) algorithm. The experimental study was performed on a 256-slice MDCT. LC detectability, noise, CNR and SR were measured on a Catphan phantom scanned with decreasing doses (48.8 down to 0.7 mGy) and parameters typical of a chest CT examination. Images were reconstructed with FBP and a model-based IR algorithm. Additionally, human chest cadavers were scanned and reconstructed using the same technical parameters, and the images were analyzed to illustrate the phantom results. LC detectability and noise were statistically significantly different between the techniques, favouring the model-based IR algorithm (p < 0.0001). At low doses, the noise in FBP images only enabled SR measurements of high contrast objects. The superior CNR of the model-based IR algorithm enabled lower dose measurements, which showed that SR was dose and contrast dependent. Cadaver images reconstructed with model-based IR illustrated that the visibility and delineation of anatomical structure edges could deteriorate at low doses. Model-based IR improved LC detectability and enabled dose reduction. At low dose, SR became dose and contrast dependent. (orig.)
Multi-Model Ensemble Wake Vortex Prediction
Koerner, Stephan; Holzaepfel, Frank; Ahmad, Nash'at N.
2015-01-01
Several multi-model ensemble methods are investigated for predicting wake vortex transport and decay. This study is a joint effort between National Aeronautics and Space Administration and Deutsches Zentrum fuer Luft- und Raumfahrt to develop a multi-model ensemble capability using their wake models. An overview of different multi-model ensemble methods and their feasibility for wake applications is presented. The methods include Reliability Ensemble Averaging, Bayesian Model Averaging, and Monte Carlo Simulations. The methodologies are evaluated using data from wake vortex field experiments.
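One of the simplest ensemble ideas listed above, weighting each wake model inversely to its historical error (in the spirit of Reliability Ensemble Averaging), can be sketched as follows; the function name and scalar predictions are illustrative assumptions.

```python
import numpy as np

def ensemble_forecast(preds, past_errors):
    """Combine model predictions with weights inversely proportional to
    each model's historical error (Reliability-Ensemble-Averaging-like)."""
    w = 1.0 / np.asarray(past_errors, dtype=float)
    w = w / w.sum()
    return float(w @ np.asarray(preds, dtype=float))
```

A model with a very large past error is thus effectively ignored, while equally reliable models are simply averaged; Bayesian Model Averaging replaces these heuristic weights with posterior model probabilities.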
Risk terrain modeling predicts child maltreatment.
Daley, Dyann; Bachmann, Michael; Bachmann, Brittany A; Pedigo, Christian; Bui, Minh-Thuy; Coffman, Jamye
2016-12-01
As indicated by research on the long-term effects of adverse childhood experiences (ACEs), maltreatment has far-reaching consequences for affected children. Effective prevention measures have been elusive, partly due to difficulty in identifying vulnerable children before they are harmed. This study employs Risk Terrain Modeling (RTM), an analysis of the cumulative effect of environmental factors thought to be conducive for child maltreatment, to create a highly accurate prediction model for future substantiated child maltreatment cases in the City of Fort Worth, Texas. The model is superior to commonly used hotspot predictions and more beneficial in aiding prevention efforts in a number of ways: 1) it identifies the highest risk areas for future instances of child maltreatment with improved precision and accuracy; 2) it aids the prioritization of risk-mitigating efforts by informing about the relative importance of the most significant contributing risk factors; 3) since predictions are modeled as a function of easily obtainable data, practitioners do not have to undergo the difficult process of obtaining official child maltreatment data to apply it; 4) the inclusion of a multitude of environmental risk factors creates a more robust model with higher predictive validity; and, 5) the model does not rely on a retrospective examination of past instances of child maltreatment, but adapts predictions to changing environmental conditions. The present study introduces and examines the predictive power of this new tool to aid prevention efforts seeking to improve the safety, health, and wellbeing of vulnerable children. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Model validation of GAMMA code with heat transfer experiment for KO TBM in ITER
International Nuclear Information System (INIS)
Yum, Soo Been; Lee, Eo Hwak; Lee, Dong Won; Park, Goon Cherl
2013-01-01
Highlights: ► In this study, a helium supplying system was constructed. ► Preparation for a heat transfer experiment under KO TBM conditions using the helium supplying system progressed. ► To get more applicable results, a test matrix was made to cover the KO TBM conditions. ► Using the CFD code CFX 11, validation and modification of the system code GAMMA were performed. -- Abstract: By considering the requirements for a DEMO-relevant blanket concept, Korea (KO) has proposed a He cooled molten lithium (HCML) test blanket module (TBM) for testing in ITER. A performance analysis for the thermal–hydraulics and a safety analysis for the KO TBM have been carried out using a commercial CFD code, ANSYS-CFX, and a system code, GAMMA (GAs multicomponent mixture analysis), which was developed by the gas cooled reactor group in Korea. To verify the codes, a preliminary study was performed by Lee using a single TBM first wall (FW) mock-up made from the same material as the KO TBM, ferritic martensitic steel, using a 6 MPa nitrogen gas loop. The test was performed at pressures of 1.1, 1.9 and 2.9 MPa, and over a range of flow rates from 0.0105 to 0.0407 kg/s with a constant wall temperature condition. In the present study, a thermal–hydraulic test was performed with the newly constructed helium supplying system, in which the design pressure and temperature were 9 MPa and 500 °C, respectively. In the experiment, the same mock-up was used, and the test was performed under the conditions of 3 MPa pressure, 30 °C inlet temperature and 70 m/s helium velocity, which are almost the same conditions as the KO TBM FW. One side of the mock-up was heated with a constant heat flux of 0.3–0.5 MW/m² using a graphite heating system, KoHLT-2 (Korea heat load test facility-2). Because the comparison between CFX 11 and GAMMA showed differing tendencies, the heat transfer correlation included in GAMMA was modified, and the modified GAMMA showed strong agreement with CFX
PREDICTIVE CAPACITY OF ARCH FAMILY MODELS
Directory of Open Access Journals (Sweden)
Raphael Silveira Amaro
2016-03-01
Full Text Available In the last decades, a remarkable number of models, variants of the Autoregressive Conditional Heteroscedastic family, have been developed and empirically tested, making the process of choosing a particular model extremely complex. This research aims to compare the predictive capacity, using the Model Confidence Set procedure, of five conditional heteroskedasticity models, considering eight different statistical probability distributions. The financial series used are the log-return series of the Bovespa index and the Dow Jones Industrial Index in the period between 27 October 2008 and 30 December 2014. The empirical evidence showed that, in general, the competing models have great homogeneity in their predictions, whether for a stock market of a developed country or for a stock market of a developing country. An equivalent result can be inferred for the statistical probability distributions that were used.
Alcator C-Mod predictive modeling
International Nuclear Information System (INIS)
Pankin, Alexei; Bateman, Glenn; Kritz, Arnold; Greenwald, Martin; Snipes, Joseph; Fredian, Thomas
2001-01-01
Predictive simulations for the Alcator C-mod tokamak [I. Hutchinson et al., Phys. Plasmas 1, 1511 (1994)] are carried out using the BALDUR integrated modeling code [C. E. Singer et al., Comput. Phys. Commun. 49, 275 (1988)]. The results are obtained for temperature and density profiles using the Multi-Mode transport model [G. Bateman et al., Phys. Plasmas 5, 1793 (1998)] as well as the mixed-Bohm/gyro-Bohm transport model [M. Erba et al., Plasma Phys. Controlled Fusion 39, 261 (1997)]. The simulated discharges are characterized by very high plasma density in both low and high modes of confinement. The predicted profiles for each of the transport models match the experimental data about equally well in spite of the fact that the two models have different dimensionless scalings. Average relative rms deviations are less than 8% for the electron density profiles and 16% for the electron and ion temperature profiles
ITER EDA newsletter. V. 5, no. 5
International Nuclear Information System (INIS)
1996-05-01
This issue of the ITER Engineering Design Activities Newsletter contains a report on the Tenth Meeting of the ITER Management Advisory Committee, held at JAERI Headquarters, Tokyo, June 5-6, 1996; on the Fourth ITER Divertor Physics and Divertor Modelling and Database Expert Group Workshop, held at the San Diego ITER Joint Worksite, March 11-15, 1996; and on the agenda for the 16th IAEA Fusion Energy Conference (7-11 October 1996)
ITER EDA newsletter. V. 9, no. 8
International Nuclear Information System (INIS)
2000-08-01
This ITER EDA Newsletter reports on the ITER meeting on 29-30 June 2000 in Moscow, summarizes the status report on the ITER EDA by R. Aymar, the ITER Director, and gives overviews of the expert group workshop on transport and internal barrier physics, confinement database and modelling and edge and pedestal physics, and the IEA workshop on transport barriers at edge and core. Individual abstracts have been prepared
Positive feedback : exploring current approaches in iterative travel demand model implementation.
2012-01-01
Currently, the models that TxDOT's Transportation Planning and Programming Division (TPP) has developed are traditional three-step models (i.e., trip generation, trip distribution, and traffic assignment) that are applied sequentially. A limitation...
Dillman, Jonathan R.; Goodsitt, Mitchell M.; Christodoulou, Emmanuel G.; Keshavarzi, Nahid; Strouse, Peter J.
2014-01-01
Purpose To retrospectively compare image quality and radiation dose between a reduced-dose computed tomographic (CT) protocol that uses model-based iterative reconstruction (MBIR) and a standard-dose CT protocol that uses 30% adaptive statistical iterative reconstruction (ASIR) with filtered back projection. Materials and Methods Institutional review board approval was obtained. Clinical CT images of the chest, abdomen, and pelvis obtained with a reduced-dose protocol were identified. Images were reconstructed with two algorithms: MBIR and 100% ASIR. All subjects had undergone standard-dose CT within the prior year, and those images were reconstructed with 30% ASIR. Reduced- and standard-dose images were evaluated objectively and subjectively. Reduced-dose images were evaluated for lesion detectability. Spatial resolution was assessed in a phantom. Radiation dose was estimated by using volumetric CT dose index (CTDIvol) and calculated size-specific dose estimates (SSDE). A combination of descriptive statistics, analysis of variance, and t tests was used for statistical analysis. Results In the 25 patients who underwent the reduced-dose protocol, the mean decrease in CTDIvol was 46% (range, 19%–65%) and the mean decrease in SSDE was 44% (range, 19%–64%). Reduced-dose MBIR images had less noise. Spatial resolution was superior for reduced-dose MBIR images. Reduced-dose MBIR images were equivalent to standard-dose images for lungs and soft tissues (P > .05) but were inferior for bones (P = .004). Reduced-dose 100% ASIR images were inferior for soft tissues compared with standard-dose 30% ASIR images. Conclusion CT performed with a reduced-dose protocol and MBIR is feasible in the pediatric population, and it maintains diagnostic quality. © RSNA, 2013 Online supplemental material is available for this article. PMID:24091359
Feedback and rotational stabilization of resistive wall modes in ITER
International Nuclear Information System (INIS)
Liu Yueqiang; Bondeson, A.; Chu, M.S.; La Haye, R.J.; Favez, J.-Y.; Lister, J.B.; Gribov, Y.; Gryaznevich, M.; Hender, T.C.; Howell, D.F.
2005-01-01
Different models have been introduced in the stability code MARS-F in order to study the damping effect of resistive wall modes (RWM) in rotating plasmas. Benchmark of MARS-F calculations with RWM experiments on JET and D3D indicates that the semi-kinetic damping model is a good candidate for explaining the damping mechanisms. Based on these results, the critical rotation speeds required for RWM stabilization in an advanced ITER scenario are predicted. Active feedback control of the n = 1 RWM in ITER is also studied using the MARS-F code. (author)
International Nuclear Information System (INIS)
Murakami, Yoshiki; Itami, Kiyoshi; Sugihara, Masayoshi; Fujieda, Hirobumi.
1992-09-01
Steady-state and hybrid mode operations of ITER are investigated by 0-D power balance calculations assuming no radiation and no charge-exchange cooling in the divertor region. Operation points are optimized with respect to the divertor heat load, which must be reduced to the level of the ignition mode (∼5 MW/m²). The dependence of the divertor heat load on the choice of model, i.e., the constant-χ model, the Bohm-type-χ model and the JT-60U empirical scaling model, is also discussed. The divertor heat load increases linearly with the fusion power (P FUS) in all models. The highest achievable fusion power under an allowable divertor heat load differs considerably between models. The heat load evaluated by the constant-χ model is, for example, about 1.8 times larger than that by the Bohm-type-χ model at P FUS = 750 MW. The effects of reducing helium accumulation and of improving the confinement capability and the current-drive efficiency are also investigated with the aim of lowering the divertor heat load. It is found that the NBI power should be larger than about 60 MW to obtain a burn time longer than 2000 s. The optimized operation point, where the minimum divertor heat load is achieved, does not depend on the model and is the point with minimum P FUS and maximum P NBI. When P FUS = 690 MW and P NBI = 110 MW, the divertor heat load can be reduced to the level of the ignition mode without impurity seeding if H = 2.2 is achieved. Controllability of the current profile is also discussed. (J.P.N.)
Hubeny, I.; Lanz, T.
1995-01-01
A new numerical method for computing non-Local Thermodynamic Equilibrium (non-LTE) model stellar atmospheres is presented. The method, called the hybrid complete linearization/accelerated lambda iteration (CL/ALI) method, combines the advantages of both of its constituents. Its rate of convergence is virtually as high as for the standard CL method, while the computer time per iteration is almost as low as for the standard ALI method. The method is formulated as the standard complete linearization, the only difference being that the radiation intensity at selected frequency points is not explicitly linearized; instead, it is treated by means of the ALI approach. The scheme offers a wide spectrum of options, ranging from the full CL to the full ALI method. We demonstrate that the method works optimally if the majority of frequency points are treated in the ALI mode, while the radiation intensity at a few (typically two to 30) frequency points is explicitly linearized. We show how this method can be applied to calculate metal line-blanketed non-LTE model atmospheres, using the idea of 'superlevels' and 'superlines' introduced originally by Anderson (1989). We calculate several illustrative models taking into account several tens of thousands of lines of Fe III to Fe IV and show that the hybrid CL/ALI method provides a robust method for calculating non-LTE line-blanketed model atmospheres for a wide range of stellar parameters. The results for individual stellar types will be presented in subsequent papers in this series.
Directory of Open Access Journals (Sweden)
Xuan Wu
2015-01-01
Full Text Available In order to control permanent-magnet synchronous motor (PMSM) systems subject to different disturbances and nonlinearity, an improved current control algorithm using recursive model predictive control (RMPC) is developed in this paper. As conventional MPC has to be computed online, its iterative computational procedure requires long computing times. To enhance computational speed, a recursive method based on the recursive Levenberg-Marquardt algorithm (RLMA) and iterative learning control (ILC) is introduced to solve the learning issue in MPC. RMPC is able to significantly decrease the computational cost of traditional MPC in the PMSM system. The effectiveness of the proposed algorithm has been verified by simulation and experimental results.
Modelling the predictive performance of credit scoring
Directory of Open Access Journals (Sweden)
Shi-Wei Shen
2013-07-01
Research purpose: The purpose of this empirical paper was to examine the predictive performance of credit scoring systems in Taiwan. Motivation for the study: Corporate lending remains a major business line for financial institutions. However, in light of the recent global financial crises, it has become extremely important for financial institutions to implement rigorous means of assessing clients seeking access to credit facilities. Research design, approach and method: Using a data sample of 10 349 observations drawn between 1992 and 2010, logistic regression models were utilised to examine the predictive performance of credit scoring systems. Main findings: A goodness-of-fit test demonstrated that credit scoring models that incorporated the Taiwan Corporate Credit Risk Index (TCRI) as well as micro- and macroeconomic variables possessed greater predictive power. This suggests that macroeconomic variables do have explanatory power for default credit risk. Practical/managerial implications: The originality of the study is that three models were developed to predict corporate firms’ defaults based on different microeconomic and macroeconomic factors such as the TCRI, asset growth rates, stock index and gross domestic product. Contribution/value-add: The study utilises different goodness-of-fit measures and receiver operating characteristics in examining the robustness of the predictive power of these factors.
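The study's workhorse, a logistic regression classifying defaults, can be sketched from scratch in NumPy; this toy version (function names, learning rate and synthetic data are assumptions) shows the fitting and thresholding steps.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, steps=2000):
    """Batch gradient descent on the logistic log-loss; returns weights
    with the intercept stored in position 0."""
    X1 = np.hstack([np.ones((len(X), 1)), X])
    w = np.zeros(X1.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X1 @ w))          # predicted default probability
        w -= lr * X1.T @ (p - y) / len(y)           # gradient of the log-loss
    return w

def predict_default(w, X):
    """Threshold the predicted probability at 0.5."""
    X1 = np.hstack([np.ones((len(X), 1)), X])
    return (1.0 / (1.0 + np.exp(-X1 @ w)) > 0.5).astype(int)
```

In the paper's setting the columns of X would hold the TCRI score and the micro/macro covariates; model comparison then proceeds via goodness-of-fit and ROC analysis of the fitted probabilities.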
International Nuclear Information System (INIS)
Wu, Xuedong; Zhu, Zhiyu; Su, Xunliang; Fan, Shaosheng; Du, Zhaoping; Chang, Yanchao; Zeng, Qingjun
2015-01-01
Wind speed prediction is an important means of guaranteeing that wind energy is integrated into the whole power system smoothly. However, wind power has a non-schedulable nature due to the strongly stochastic and dynamically uncertain nature of wind speed. Therefore, wind speed prediction is an indispensable requirement for power system operators. Two new approaches for hourly wind speed prediction are developed in this study by integrating the single multiplicative neuron model and iterated nonlinear filters for updating the wind speed sequence accurately. In the presented methods, a nonlinear state-space model is first formed based on the single multiplicative neuron model, and then the iterated nonlinear filters are employed to perform dynamic state estimation on the wind speed sequence with stochastic uncertainty. The suggested approaches are demonstrated using wind speed data from three cases and are compared with autoregressive moving average, artificial neural network, kernel ridge regression based residual active learning and single multiplicative neuron model methods. Three types of prediction errors, the mean absolute error improvement ratio and running time are employed to compare the models’ performance. Comparison results from Tables 1–3 indicate that the presented strategies perform much better for hourly wind speed prediction than the other techniques. - Highlights: • Developed two novel hybrid modeling methods for hourly wind speed prediction. • Uncertainty and fluctuations of wind speed can be better explained by the novel methods. • Proposed strategies have online adaptive learning ability. • Proposed approaches have shown better performance compared with existing approaches. • Comparison and analysis of the two proposed novel models for three cases are provided
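The single multiplicative neuron model at the core of the state-space formulation takes a product, rather than a sum, of weighted inputs; a forward pass can be sketched as below (the logistic activation and the particular weights are illustrative).

```python
import numpy as np

def smn_forward(x, w, b):
    # Single multiplicative neuron: logistic activation applied to the
    # product prod_i (w_i * x_i + b_i) over all inputs.
    z = np.prod(w * x + b)
    return 1.0 / (1.0 + np.exp(-z))
```

Here `x` would be a window of recent wind speeds; in the paper the weights and biases become the state vector that the iterated nonlinear filter estimates recursively.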
Comparison of two ordinal prediction models
DEFF Research Database (Denmark)
Kattan, Michael W; Gerds, Thomas A
2015-01-01
system (i.e. old or new), such as the level of evidence for one or more factors included in the system or the general opinions of expert clinicians. However, given the major objective of estimating prognosis on an ordinal scale, we argue that the rival staging system candidates should be compared...... on their ability to predict outcome. We sought to outline an algorithm that would compare two rival ordinal systems on their predictive ability. RESULTS: We devised an algorithm based largely on the concordance index, which is appropriate for comparing two models in their ability to rank observations. We...... demonstrate our algorithm with a prostate cancer staging system example. CONCLUSION: We have provided an algorithm for selecting the preferred staging system based on prognostic accuracy. It appears to be useful for the purpose of selecting between two ordinal prediction models....
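The concordance index underlying such an algorithm can be computed directly over all comparable pairs; a plain-Python sketch, with tied predictions given half credit:

```python
def concordance_index(pred, outcome):
    # Fraction of comparable pairs (differing outcomes) that the
    # predictions rank concordantly; tied predictions count 0.5.
    conc = ties = n = 0
    for i in range(len(pred)):
        for j in range(i + 1, len(pred)):
            if outcome[i] == outcome[j]:
                continue  # pair not comparable
            n += 1
            lo, hi = (i, j) if outcome[i] < outcome[j] else (j, i)
            if pred[lo] < pred[hi]:
                conc += 1
            elif pred[lo] == pred[hi]:
                ties += 1
    return (conc + 0.5 * ties) / n
```

Comparing two rival staging systems, as proposed here, amounts to computing this index for each system's predicted ranks against the observed outcomes.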
Energy Technology Data Exchange (ETDEWEB)
Price, Ryan G. [Department of Radiation Oncology, Henry Ford Health Systems, Detroit, Michigan 48202 and Wayne State University School of Medicine, Detroit, Michigan 48201 (United States); Vance, Sean; Cattaneo, Richard; Elshaikh, Mohamed A.; Chetty, Indrin J.; Glide-Hurst, Carri K., E-mail: churst2@hfhs.org [Department of Radiation Oncology, Henry Ford Health Systems, Detroit, Michigan 48202 (United States); Schultz, Lonni [Department of Public Health Sciences, Henry Ford Health Systems, Detroit, Michigan 48202 (United States)
2014-08-15
Purpose: Iterative reconstruction (IR) reduces noise, thereby allowing dose reduction in computed tomography (CT) while maintaining comparable image quality to filtered back-projection (FBP). This study sought to characterize image quality metrics, delineation, dosimetric assessment, and other aspects necessary to integrate IR into treatment planning. Methods: CT images (Brilliance Big Bore v3.6, Philips Healthcare) were acquired of several phantoms using 120 kVp and 25–800 mAs. IR was applied at levels corresponding to noise reduction of 0.89–0.55 with respect to FBP. Noise power spectrum (NPS) analysis was used to characterize noise magnitude and texture. CT to electron density (CT-ED) curves were generated over all IR levels. Uniformity as well as spatial and low contrast resolution were quantified using a CATPHAN phantom. Task specific modulation transfer functions (MTF{sub task}) were developed to characterize spatial frequency across objects of varied contrast. A prospective dose reduction study was conducted for 14 patients undergoing interfraction CT scans for high-dose rate brachytherapy. Three physicians performed image quality assessment using a six-point grading scale between the normal-dose FBP (reference), low-dose FBP, and low-dose IR scans for the following metrics: image noise, detectability of the vaginal cuff/bladder interface, spatial resolution, texture, segmentation confidence, and overall image quality. Contouring differences between FBP and IR were quantified for the bladder and rectum via overlap indices (OI) and Dice similarity coefficients (DSC). Line profile and region of interest analyses quantified noise and boundary changes. For two subjects, the impact of IR on external beam dose calculation was assessed via gamma analysis and changes in digitally reconstructed radiographs (DRRs) were quantified. Results: NPS showed large reduction in noise magnitude (50%), and a slight spatial frequency shift (∼0.1 mm{sup −1}) with
Fast Nonconvex Model Predictive Control for Commercial Refrigeration
DEFF Research Database (Denmark)
Hovgaard, Tobias Gybel; Larsen, Lars F. S.; Jørgensen, John Bagterp
2012-01-01
in fewer than 5 or so iterations. We employ a fast convex quadratic programming solver to carry out the iterations, which is more than fast enough to run in real-time. We demonstrate our method on a realistic model, with a full year simulation, using real historical data. These simulations show substantial...... cost savings, and reveal how the method exhibits sophisticated response to real-time variations in electricity prices. This demand response is critical to help balance real-time uncertainties associated with large penetration of intermittent renewable energy sources in a future smart grid....
Predictive analytics can support the ACO model.
Bradley, Paul
2012-04-01
Predictive analytics can be used to rapidly spot hard-to-identify opportunities to better manage care--a key tool in accountable care. When considering analytics models, healthcare providers should: Make value-based care a priority and act on information from analytics models. Create a road map that includes achievable steps, rather than major endeavors. Set long-term expectations and recognize that the effectiveness of an analytics program takes time, unlike revenue cycle initiatives that may show a quick return.
Predictive performance models and multiple task performance
Wickens, Christopher D.; Larish, Inge; Contorer, Aaron
1989-01-01
Five models that predict how performance of multiple tasks will interact in complex task scenarios are discussed. The models are shown in terms of the assumptions they make about human operator divided attention. The different assumptions about attention are then empirically validated in a multitask helicopter flight simulation. It is concluded from this simulation that the most important assumption relates to the coding of demand level of different component tasks.
Model Predictive Control of Sewer Networks
DEFF Research Database (Denmark)
Pedersen, Einar B.; Herbertsson, Hannes R.; Niemann, Henrik
2016-01-01
The developments in solutions for management of urban drainage are of vital importance, as the amount of sewer water from urban areas continues to increase due to the increase of the world’s population and the change in the climate conditions. How a sewer network is structured, monitored and cont...... benchmark model. Due to the inherent constraints the applied approach is based on Model Predictive Control....
Distributed Model Predictive Control via Dual Decomposition
DEFF Research Database (Denmark)
Biegel, Benjamin; Stoustrup, Jakob; Andersen, Palle
2014-01-01
This chapter presents dual decomposition as a means to coordinate a number of subsystems coupled by state and input constraints. Each subsystem is equipped with a local model predictive controller while a centralized entity manages the subsystems via prices associated with the coupling constraints...
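The price-coordination idea can be illustrated with quadratic subsystem costs coupled by a shared resource constraint; the cost function, targets and step size below are illustrative stand-ins, not the chapter's model.

```python
def dual_decomposition(targets, cap, alpha=0.2, iters=300):
    # Each subsystem i locally minimizes (x_i - c_i)^2 + lam * x_i;
    # a central entity adjusts the price lam on the coupling sum(x) = cap.
    lam = 0.0
    for _ in range(iters):
        x = [c - lam / 2.0 for c in targets]  # closed-form local optima
        lam += alpha * (sum(x) - cap)         # price (dual) update
    return x, lam
```

With targets [3, 5] and a shared capacity of 4, the iteration converges to the allocation [1, 3] at price 4: the subsystems never exchange their cost functions, only the scalar price.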
International Nuclear Information System (INIS)
Wuerz, H.; Arkhipov, N.I.; Bakhin, V.P.; Goel, B.; Hoebel, W.; Konkashbaev, I.; Landman, I.; Piazza, G.; Safronov, V.M.; Sherbakov, A.R.; Toporkov, D.A.; Zhitlukhin, A.M.
1994-01-01
The high divertor heat load during a tokamak plasma disruption results in sudden evaporation of a thin layer of divertor plate material, which acts as a vapor shield and protects the target from further excessive evaporation. Formation and effectiveness of the vapor shield are theoretically modeled and experimentally investigated at the 2MK-200 facility under conditions simulating the thermal quench phase of ITER tokamak plasma disruptions. In the optical wavelength range, C II, C III and C IV emission lines for graphite, Cu I and Cu II lines for copper, and continuum radiation for tungsten samples are observed in the target plasma. The plasma expands along the magnetic field lines with velocities of (4±1)x10{sup 6} cm/s for graphite and 10{sup 5} cm/s for copper. Modeling was done with a radiation hydrodynamics code in one-dimensional planar geometry. The multifrequency radiation transport is treated in the flux-limited diffusion and forward-reverse transport approximations. In these first modeling studies the overall shielding efficiency for carbon and tungsten, defined as the ratio of the incident energy to the vaporization energy, exceeds a factor of 30 for power densities of 10 MW/cm{sup 2}. The vapor shield is established within 2 μs, the power fraction to the target after 10 μs is below 3% and reaches a stationary value of around 1.5% after about 20 μs. ((orig.))
A stepwise model to predict monthly streamflow
Mahmood Al-Juboori, Anas; Guven, Aytac
2016-12-01
In this study, a stepwise model empowered with genetic programming is developed to predict the monthly flows of the Hurman River in Turkey and the Diyalah and Lesser Zab Rivers in Iraq. The model divides the monthly flow data into twelve intervals representing the number of months in a year. The flow of a month t is considered a function of the antecedent month's flow (t - 1) and is predicted by multiplying the antecedent monthly flow by a constant value called K. The optimum value of K is obtained by a stepwise procedure which employs Gene Expression Programming (GEP) and Nonlinear Generalized Reduced Gradient Optimization (NGRGO) as alternatives to the traditional nonlinear regression technique. The coefficient of determination and root mean squared error are used to evaluate the performance of the proposed models. The results of the proposed model are compared with the conventional Markovian and Auto Regressive Integrated Moving Average (ARIMA) models based on observed monthly flow data. The comparison results, based on five different statistical measures, show that the proposed stepwise model performed better than the Markovian and ARIMA models. The R2 values of the proposed model range between 0.81 and 0.92 for the three rivers in this study.
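The month-wise multiplier K can be fitted in closed form by least squares as a simple baseline; the paper instead tunes K with GEP and NGRGO, so the sketch below is only a simplified stand-in.

```python
import numpy as np

def fit_monthly_k(flow, months):
    # flow[t] ~ K[m] * flow[t-1]: closed-form least-squares K for each
    # calendar month m (assumes every month occurs at least once for t >= 1).
    flow = np.asarray(flow, dtype=float)
    K = np.zeros(12)
    for m in range(12):
        idx = [t for t in range(1, len(flow)) if months[t] == m]
        prev, curr = flow[[t - 1 for t in idx]], flow[idx]
        K[m] = (prev @ curr) / (prev @ prev)
    return K
```

On a synthetic series generated exactly by a month-dependent multiplier, this recovers the twelve K values; on real flows the residual error is what GEP/NGRGO would further reduce.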
Directory of Open Access Journals (Sweden)
Mohamed Mostafa R.
2016-01-01
Full Text Available The Self-Excited Permanent Magnet Induction Generator (SEPMIG) is commonly used in wind energy generation systems. The difficulty of SEPMIG modelling is that the circuit parameters of the generator vary with load conditions owing to changes in the frequency and stator voltage. The paper introduces a new model for the SEPMIG using the Gauss-Seidel relaxation method. The SEPMIG characteristics obtained with the proposed method are studied at different load conditions according to wind speed variation, load impedance changes and different shunt capacitor values. The model is investigated with respect to the variations of magnetizing current, efficiency, power and power factor. The proposed modeling system combines a high degree of simplicity with accuracy.
A model of lipid-free apolipoprotein A-I revealed by iterative molecular dynamics simulation.
Directory of Open Access Journals (Sweden)
Xing Zhang
Full Text Available Apolipoprotein A-I (apo A-I), the major protein component of high-density lipoprotein, has been shown over past decades to be inversely correlated with cardiovascular risk. The lipid-free state of apo A-I is the initial stage, which binds to lipids to form high-density lipoprotein. Molecular models of lipid-free apo A-I have been reported by methods like X-ray crystallography and chemical cross-linking/mass spectrometry (CCL/MS). Through structural analysis we found that those current models had limited consistency with other experimental results, such as those from hydrogen exchange with mass spectrometry. Through molecular dynamics simulations, we also found those models could not reach a stable equilibrium state. Therefore, by integrating various experimental results, we propose a new structural model for lipid-free apo A-I, which contains a bundled four-helix N-terminal domain (residues 1-192) that forms a variable hydrophobic groove and a mobile short-hairpin C-terminal domain (residues 193-243). This model exhibits an equilibrium state through molecular dynamics simulation and is consistent with most of the experimental results known from CCL/MS on lysine pairs, fluorescence resonance energy transfer and hydrogen exchange. This solution-state lipid-free apo A-I model may elucidate the possible conformational transitions of apo A-I binding with lipids in high-density lipoprotein formation.
An Intelligent Model for Stock Market Prediction
Directory of Open Access Journals (Sweden)
Ibrahim M. Hamed
2012-08-01
Full Text Available This paper presents an intelligent model for stock market signal prediction using Multi-Layer Perceptron (MLP) Artificial Neural Networks (ANNs). A blind source separation technique from signal processing is integrated with the learning phase of the constructed baseline MLP ANN to overcome the problems of prediction accuracy and lack of generalization. Kullback-Leibler Divergence (KLD) is used as the learning algorithm because it converges fast and provides generalization in the learning mechanism. Both the accuracy and the efficiency of the proposed model were confirmed on the Microsoft stock, from the Wall Street market, and on various data sets from different sectors of the Egyptian stock market. In addition, sensitivity analysis was conducted on the various parameters of the model to ensure coverage of the generalization issue. Finally, statistical significance was examined using an ANOVA test.
ITER primary cryopump test facility
International Nuclear Information System (INIS)
Petersohn, N.; Mack, A.; Boissin, J.C.; Murdoc, D.
1998-01-01
A cryopump as the ITER primary vacuum pump is being developed at FZK under the European fusion technology programme. The ITER vacuum system comprises 16 cryopumps operating in a cyclic mode which fulfills the vacuum requirements in all ITER operation modes. Prior to the construction of a prototype cryopump, the concept is being tested on a reduced-scale model pump. To test the model pump, the TIMO facility is being built at FZK, in which model pump operation under ITER environmental conditions, except for tritium exposure, neutron irradiation and magnetic fields, can be simulated. The TIMO facility mainly consists of a test vessel for ITER divertor duct simulation, a 600 W refrigerator system supplying helium for the 5 K stage and a 30 kW helium supply system for the 80 K stage. The model pump test programme will be performed with regard to the pumping performance and cryogenic operation of the pump. The results of the model pump testing will lead to the design of the full-scale ITER cryopump. (orig.)
Predictive Models, How good are they?
DEFF Research Database (Denmark)
Kasch, Helge
The WAD grading system has been used for more than 20 years by now. It has shown long-term viability, but with strengths and limitations. New bio-psychosocial assessment of the acute whiplash injured subject may provide better prediction of long-term disability and pain. Furthermore, the emerging......-up. It is important to obtain prospective identification of the relevant risk underreported disability could, if we were able to expose these hidden “risk-factors” during our consultations, provide us with better predictive models. New data from large clinical studies will present exciting new genetic risk markers...
NONLINEAR MODEL PREDICTIVE CONTROL OF CHEMICAL PROCESSES
Directory of Open Access Journals (Sweden)
SILVA R. G.
1999-01-01
Full Text Available A new algorithm for model predictive control is presented. The algorithm utilizes a simultaneous solution and optimization strategy to solve the model's differential equations. The equations are discretized by equidistant collocation and, along with the algebraic model equations, are included as constraints in a nonlinear programming (NLP) problem. This algorithm is compared with the algorithm that uses orthogonal collocation on finite elements. The equidistant collocation algorithm results in simpler equations, providing a decrease in computation time for the control moves. Simulation results are presented and show a satisfactory performance of this algorithm.
A statistical model for predicting muscle performance
Byerly, Diane Leslie De Caix
The objective of these studies was to develop a capability for predicting muscle performance and fatigue to be utilized for both space- and ground-based applications. To develop this predictive model, healthy test subjects performed a defined, repetitive dynamic exercise to failure using a Lordex spinal machine. Throughout the exercise, surface electromyography (SEMG) data were collected from the erector spinae using a Mega Electronics ME3000 muscle tester and surface electrodes placed on both sides of the back muscle. These data were analyzed using a 5th order Autoregressive (AR) model and statistical regression analysis. It was determined that an AR derived parameter, the mean average magnitude of AR poles, significantly correlated with the maximum number of repetitions (designated Rmax) that a test subject was able to perform. Using the mean average magnitude of AR poles, a test subject's performance to failure could be predicted as early as the sixth repetition of the exercise. This predictive model has the potential to provide a basis for improving post-space flight recovery, monitoring muscle atrophy in astronauts and assessing the effectiveness of countermeasures, monitoring astronaut performance and fatigue during Extravehicular Activity (EVA) operations, providing pre-flight assessment of the ability of an EVA crewmember to perform a given task, improving the design of training protocols and simulations for strenuous International Space Station assembly EVA, and enabling EVA work task sequences to be planned enhancing astronaut performance and safety. Potential ground-based, medical applications of the predictive model include monitoring muscle deterioration and performance resulting from illness, establishing safety guidelines in the industry for repetitive tasks, monitoring the stages of rehabilitation for muscle-related injuries sustained in sports and accidents, and enhancing athletic performance through improved training protocols while reducing
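The AR-pole feature described here can be reproduced with an ordinary least-squares AR fit; the order-5 model matches the text, but the fitting method below is a generic stand-in for the authors' estimator.

```python
import numpy as np

def mean_ar_pole_magnitude(x, order=5):
    # Least-squares AR(order) fit: x[t] = a1*x[t-1] + ... + ap*x[t-p] + e[t];
    # the feature is the mean magnitude of the AR characteristic roots (poles).
    x = np.asarray(x, dtype=float)
    X = np.column_stack([x[order - k - 1 : len(x) - k - 1] for k in range(order)])
    a, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    poles = np.roots(np.concatenate(([1.0], -a)))
    return float(np.abs(poles).mean())
```

Tracking this scalar repetition by repetition over the SEMG signal is what would, per the study, correlate with the maximum number of repetitions a subject can perform.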
An integrated model for the assessment of unmitigated fault events in ITER's superconducting magnets
Energy Technology Data Exchange (ETDEWEB)
McIntosh, S., E-mail: simon.mcintosh@ccfe.ac.uk [Culham Centre for Fusion Energy, Culham Science Center, Abingdon OX14 3DB, Oxfordshire (United Kingdom); Holmes, A. [Marcham Scientific Ltd., Sarum House, 10 Salisbury Rd., Hungerford RG17 0LH, Berkshire (United Kingdom); Cave-Ayland, K.; Ash, A.; Domptail, F.; Zheng, S.; Surrey, E.; Taylor, N. [Culham Centre for Fusion Energy, Culham Science Center, Abingdon OX14 3DB, Oxfordshire (United Kingdom); Hamada, K.; Mitchell, N. [ITER Organization, Magnet Division, St Paul Lez Durance Cedex (France)
2016-11-01
A large amount of energy is stored in the ITER superconducting magnet system. Faults which initiate a discharge are typically mitigated by quickly transferring the stored magnetic energy away for dissipation through a bank of resistors. In an extremely unlikely occurrence, an unmitigated fault event represents a potentially severe discharge of energy into the coils and the surrounding structure. A new simulation tool has been developed for the detailed study of these unmitigated fault events. The tool integrates: the propagation of multiple quench fronts initiated by an initial fault or by subsequent coil heating; the 3D convection and conduction of heat through the magnet structure; and the 3D conduction of current and Ohmic heating both along the conductor and via alternate pathways generated by arcing or material melt. Arcs linking broken sections of conductor or separate turns are simulated with a new unconstrained arc model to balance electrical current paths and heat generation within the arc column in the multi-physics model. The influence of the high Lorentz forces present is taken into account. Simulation results for an unmitigated fault in a poloidal field coil are presented.
Investigation of the dynamic behavior of the ITER tokamak assembly using a 1/5.8-scale model
Energy Technology Data Exchange (ETDEWEB)
Onozuka, M. [Mitsubishi Heavy Industries, Ltd., Nuclear Systems Engineering Department, Konan 2-16-5, Minato-ku, Tokyo 108-8215 (Japan)]. E-mail: masanori_onozuka@mhi.co.jp; Shimizu, K. [Mitsubishi Heavy Industries, Ltd., Nuclear Systems Engineering Department, Konan 2-16-5, Minato-ku, Tokyo 108-8215 (Japan); Nakamura, T. [Mitsubishi Heavy Industries, Ltd., Nuclear Systems Engineering Department, Konan 2-16-5, Minato-ku, Tokyo 108-8215 (Japan); Takeda, N. [Japan Atomic Energy Research Institute, Mukouyama 801-1, Naka-machi, Naka-gun, Ibaraki 311-0193 (Japan); Nakahira, M. [Japan Atomic Energy Research Institute, Mukouyama 801-1, Naka-machi, Naka-gun, Ibaraki 311-0193 (Japan); Tado, S. [Japan Atomic Energy Research Institute, Mukouyama 801-1, Naka-machi, Naka-gun, Ibaraki 311-0193 (Japan); Shibanuma, K. [Japan Atomic Energy Research Institute, Mukouyama 801-1, Naka-machi, Naka-gun, Ibaraki 311-0193 (Japan)
2006-02-15
The dynamic behavior of the ITER tokamak assembly has been investigated using a 1/5.8-scale model. In the static load test, the observed deformations and stresses were confirmed to be linearly proportional to the applied loads. In the vibration test, the first and third eigenmodes corresponded to horizontal vibration modes, with natural frequencies of about 29 and 81 Hz, respectively. The global-twisting and oval modes were found to be the second and fourth modes, with frequencies of about 60 and 86 Hz, respectively. A comparison of experimental and analytical results showed that the deflections and the induced stresses in the analysis were approximately half of those found in the experiment. Natural frequencies have also been compared. The frequency of the first mode was almost the same for both the experiment and the analysis. However, in higher vibration modes, the natural frequencies in the experiment were found to be smaller than those in the analysis. To further examine the dynamic behavior of the coil structure, a vibration test of the entire sub-scale model is planned using a vibration stage rather than the shaker.
PSO-MISMO modeling strategy for multistep-ahead time series prediction.
Bao, Yukun; Xiong, Tao; Hu, Zhongyi
2014-05-01
Multistep-ahead time series prediction is one of the most challenging research topics in the field of time series modeling and prediction, and is continually under research. Recently, the multiple-input several multiple-output (MISMO) modeling strategy has been proposed as a promising alternative for multistep-ahead time series prediction, exhibiting advantages over the two currently dominant strategies, the iterated and the direct strategies. Built on the established MISMO strategy, this paper proposes a particle swarm optimization (PSO)-based MISMO modeling strategy, which is capable of determining the number of sub-models in a self-adaptive mode, with varying prediction horizons. Rather than deriving crisp divides with equal-sized prediction horizons from the established MISMO, the proposed PSO-MISMO strategy, implemented with neural networks, employs a heuristic to create flexible divides with varying sizes of prediction horizons and to generate corresponding sub-models, providing considerable flexibility in model construction, which has been validated with simulated and real datasets.
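The iterated strategy that MISMO competes with feeds each one-step forecast back into the input window; a minimal sketch, with a placeholder one-step model standing in for a trained network:

```python
def iterated_forecast(model, history, horizon):
    # Iterated multistep strategy: predict one step ahead, append the
    # prediction to the window, and repeat until the horizon is reached.
    window = list(history)
    preds = []
    for _ in range(horizon):
        p = model(window)          # any one-step-ahead predictor
        preds.append(p)
        window = window[1:] + [p]  # slide the window over the prediction
    return preds
```

The direct and MISMO strategies avoid this feedback of predicted values (and its error accumulation) by training separate models for individual horizons or blocks of horizons.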
Prediction models : the right tool for the right problem
Kappen, Teus H.; Peelen, Linda M.
2016-01-01
PURPOSE OF REVIEW: Perioperative prediction models can help to improve personalized patient care by providing individual risk predictions to both patients and providers. However, the scientific literature on prediction model development and validation can be quite technical and challenging to
Neuro-fuzzy modeling in bankruptcy prediction
Directory of Open Access Journals (Sweden)
Vlachos D.
2003-01-01
Full Text Available For the past 30 years the problem of bankruptcy prediction has been thoroughly studied. From the paper of Altman in 1968 to the recent papers of the '90s, the progress in prediction accuracy was not satisfactory. This paper investigates an alternative modeling of the system (the firm), combining neural networks and fuzzy controllers, i.e. using neuro-fuzzy models. Classical modeling is based on mathematical models that describe the behavior of the firm under consideration. The main idea of fuzzy control, on the other hand, is to build a model of a human control expert who is capable of controlling the process without thinking in terms of a mathematical model. This control expert specifies his control actions in the form of linguistic rules. These control rules are translated into the framework of fuzzy set theory, providing a calculus which can simulate the behavior of the control expert and enhance its performance. The accuracy of the model is studied using datasets from previous research papers.
Repurposing and probabilistic integration of data: An iterative and data model independent approach
Wanders, B.
2016-01-01
Besides the scientific paradigms of empiricism, mathematical modelling, and simulation, the method of combining and analysing data in novel ways has become a main research paradigm capable of tackling research questions that could not be answered before. To speed up research in this new paradigm,
Core-SOL modelling of neon seeded JET discharges with the ITER-like wall
Energy Technology Data Exchange (ETDEWEB)
Telesca, G. [Department of Applied Physics, Ghent University (Belgium); EUROfusion Consortium, JET, Culham Science Centre, Abingdon (United Kingdom); Ivanova-Stanik, I.; Zagoerski, R.; Czarnecka, A. [Institute of Plasma Physics and Laser Microfusion, Warsaw (Poland); EUROfusion Consortium, JET, Culham Science Centre, Abingdon (United Kingdom); Brezinsek, S.; Huber, A.; Wiesen, S. [Forschungszentrum Juelich GmbH, Institut fuer Klima- und Energieforschung-Plasmaphysik, Juelich (Germany); EUROfusion Consortium, JET, Culham Science Centre, Abingdon (United Kingdom); Drewelow, P. [Max-Planck-Institut fuer Plasmaphysik, Greifswald (Germany); EUROfusion Consortium, JET, Culham Science Centre, Abingdon (United Kingdom); Giroud, C. [CCFE Culham, Abingdon (United Kingdom); EUROfusion Consortium, JET, Culham Science Centre, Abingdon (United Kingdom); Collaboration: JET EFDA contributors
2016-08-15
Five ELMy H-mode Ne-seeded JET pulses have been simulated with the self-consistent core-SOL model COREDIV. In this five-pulse series only the Ne seeding rate was changed shot by shot, allowing a thorough study to be made of the effect of Ne seeding on the total radiated power and on its distribution between core and SOL. Increasing the Ne seeding rate in the simulations above the level achieved in experiments shows saturation of the total radiated power at a relatively low radiated-to-heating power ratio (f{sub rad} = 0.60) and a further increase of the ratio of SOL to core radiation, in agreement with the reduction of W release at high Ne seeding levels. In spite of the uncertainties caused by the simplified SOL model of COREDIV (neutral model, absence of ELMs and slab model for the SOL), the increase of the perpendicular transport in the SOL with increasing Ne seeding rate, which makes it possible to reproduce numerically the experimental core-SOL distribution of the radiated power, appears to be of general applicability. (copyright 2016 The Authors. Contributions to Plasma Physics published by Wiley-VCH Verlag GmbH and Co. KGaA, Weinheim.)
Comparison of Iterative Methods for Computing the Pressure Field in a Dynamic Network Model
DEFF Research Database (Denmark)
Mogensen, Kristian; Stenby, Erling Halfdan; Banerjee, Srilekha
1999-01-01
In dynamic network models, the pressure map (the pressure in the pores) must be evaluated at each time step. This calculation involves the solution of a large number of nonlinear algebraic systems of equations and accounts for more than 80% of the total CPU time. Each nonlinear system requires
Automated main-chain model building by template matching and iterative fragment extension
International Nuclear Information System (INIS)
Terwilliger, Thomas C.
2003-01-01
An algorithm for the automated macromolecular model building of polypeptide backbones is described. The procedure is hierarchical. In the initial stages, many overlapping polypeptide fragments are built. In subsequent stages, the fragments are extended and then connected. Identification of the locations of helical and β-strand regions is carried out by FFT-based template matching. Fragment libraries of helices and β-strands from refined protein structures are then positioned at the potential locations of helices and strands, and the longest segments that fit the electron-density map are chosen. The helices and strands are then extended using fragment libraries consisting of sequences three amino acids long derived from refined protein structures. The resulting segments of polypeptide chain are then connected by choosing those which overlap at two or more Cα positions. The fully automated procedure has been implemented in RESOLVE and is capable of model building at resolutions as low as 3.5 Å. The algorithm is useful for building a preliminary main-chain model that can serve as a basis for refinement and side-chain addition.
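The FFT-based template matching used to locate helices and strands rests on computing a cross-correlation in frequency space; a 1-D sketch follows (the crystallographic case is 3-D and works on electron density, so this is only the core idea).

```python
import numpy as np

def fft_match(signal, template):
    # Circular cross-correlation via FFT: corr[k] = sum_i signal[i+k]*template[i].
    # The argmax gives the offset where the template best matches the signal.
    n = len(signal)
    t = np.zeros(n)
    t[: len(template)] = template  # zero-pad template to signal length
    corr = np.fft.ifft(np.fft.fft(signal) * np.conj(np.fft.fft(t))).real
    return int(np.argmax(corr))
```

The FFT makes this a single O(n log n) pass over the whole map instead of sliding the template point by point, which is what makes exhaustive template searches practical.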
Directory of Open Access Journals (Sweden)
Meng Zhi-Jun
2016-01-01
Full Text Available This paper addresses a new application of the local fractional variational iteration algorithm III to solve the local fractional diffusion equation defined on Cantor sets associated with non-differentiable heat transfer.
International Nuclear Information System (INIS)
Dell'Orco, G.; Canneta, A.; Cattadori, G.; Gaspari, G.P.; Merola, M.; Polazzi, G.; Vieider, G.; Zito, D.
2001-01-01
In 1998, in the frame of the European R and D on ITER high heat flux components, the fabrication of a full scale ITER Divertor Outboard mock-up was launched. It comprised a Cassette Body, designed with some mechanical and hydraulic simplifications with respect to the reference body, and the actively cooled Dummy Armour Prototype (DAP). This DAP consists of the Vertical Target, the Wing and the Dump Target, manufactured by the European industry, which are integrated with the Gas Box Liner supplied by the Russian Federation Home Team. In order to simplify the manufacturing, the DAP was layered with an equivalent CuCrZr thickness simulating the real armour (CFC or W tiles). In parallel with the manufacturing activity, the ITER European HT decided to assign to ENEA the Task EU-DV1 for the 'Component Integration and Thermal-Hydraulic Testing of the ITER Divertor Targets and Wing Dummy Prototypes and Cassette Body'
Predictive Models for Carcinogenicity and Mutagenicity ...
Mutagenicity and carcinogenicity are endpoints of major environmental and regulatory concern. These endpoints are also important targets for development of alternative methods for screening and prediction due to the large number of chemicals of potential concern and the tremendous cost (in time, money, animals) of rodent carcinogenicity bioassays. Both mutagenicity and carcinogenicity involve complex cellular processes that are only partially understood. Advances in technologies and generation of new data will permit a much deeper understanding. In silico methods for predicting mutagenicity and rodent carcinogenicity based on chemical structural features, along with current mutagenicity and carcinogenicity data sets, have performed well for local prediction (i.e., within specific chemical classes), but are less successful for global prediction (i.e., for a broad range of chemicals). The predictivity of in silico methods can be improved by improving the quality of the data base and endpoints used for modelling. In particular, in vitro assays for clastogenicity need to be improved to reduce false positives (relative to rodent carcinogenicity) and to detect compounds that do not interact directly with DNA or have epigenetic activities. New assays emerging to complement or replace some of the standard assays include VitotoxTM, GreenScreenGC, and RadarScreen. The needs of industry and regulators to assess thousands of compounds necessitate the development of high-t
Model-based radiation scalings for the ITER-like divertors of JET and ASDEX Upgrade
Energy Technology Data Exchange (ETDEWEB)
Aho-Mantila, L., E-mail: leena.aho-mantila@vtt.fi [VTT Technical Research Centre of Finland, FI-02044 VTT (Finland); Bonnin, X. [LSPM – CNRS, Université Paris 13, Sorbonne Paris Cité, F-93430 Villetaneuse (France); Coster, D.P. [Max-Planck Institut für Plasmaphysik, D-85748 Garching (Germany); Lowry, C. [EFDA JET CSU, Culham Science Centre, Abingdon OX14 3DB (United Kingdom); Wischmeier, M. [Max-Planck Institut für Plasmaphysik, D-85748 Garching (Germany); Brezinsek, S. [Forschungszentrum Jülich, Institut für Energie- und Klimaforschung Plasmaphysik, 52425 Jülich (Germany); Federici, G. [EFDA PPP& T Department, D-85748 Garching (Germany)
2015-08-15
Effects of N-seeding in L-mode experiments in ASDEX Upgrade and JET are analysed numerically with the SOLPS5.0 code package. The modelling yields three qualitatively different radiative regimes with increasing N concentration, when initially attached outer divertor conditions are studied. The radiation pattern is observed to evolve asymmetrically, with radiation increasing first in the inner divertor, then in the outer divertor, and finally on closed field lines above the X-point. The properties of these radiative regimes are observed to be sensitive to cross-field drifts, and they differ between the two devices. The modelled scaling of the divertor radiated power with the divertor neutral pressure is similar to an experimental scaling law for H-mode radiation. The same parametric dependencies are not observed in simulations without drifts.
Automated main-chain model building by template matching and iterative fragment extension.
Terwilliger, Thomas C
2003-01-01
An algorithm for the automated macromolecular model building of polypeptide backbones is described. The procedure is hierarchical. In the initial stages, many overlapping polypeptide fragments are built. In subsequent stages, the fragments are extended and then connected. Identification of the locations of helical and β-strand regions is carried out by FFT-based template matching. Fragment libraries of helices and β-strands from refined protein structures are then positioned at the potential locations of helices and strands and the longest segments that fit the electron-density map are chosen. The helices and strands are then extended using fragment libraries consisting of sequences three amino acids long derived from refined protein structures. The resulting segments of polypeptide chain are then connected by choosing those which overlap at two or more Cα positions. The fully automated procedure has been implemented in RESOLVE and is capable of model building at resolutions as low as 3.5 Å. The algorithm is useful for building a preliminary main-chain model that can serve as a basis for refinement and side-chain addition.
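The FFT-based template matching step can be illustrated in one dimension: by the correlation theorem, correlating a fragment template against the density at every offset costs only a few FFTs instead of an O(n²) scan. A minimal sketch on a 1-D toy signal (the actual map search is 3-D and includes an orientation search):

```python
import numpy as np

def fft_cross_correlation(density, template):
    """Score a template against a density profile at every offset.

    Correlation theorem: circular correlation = IFFT(FFT(a) * conj(FFT(b))).
    """
    n = len(density)
    padded = np.zeros(n)
    padded[:len(template)] = template
    return np.fft.ifft(np.fft.fft(density) * np.conj(np.fft.fft(padded))).real

density = np.zeros(64)
density[20:23] = [1.0, 2.0, 1.0]       # a helix-like peak embedded at offset 20
template = np.array([1.0, 2.0, 1.0])
best = int(np.argmax(fft_cross_correlation(density, template)))
# best == 20: the highest correlation score sits at the embedded position
```

The same trick underlies template matching in crystallographic map interpretation generally; only the dimensionality and the rotational search differ.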
ICP (ITER Collaborative Platform)
Energy Technology Data Exchange (ETDEWEB)
Capuano, C.; Carayon, F.; Patel, V. [ITER, 13 - St. Paul-Lez Durance (France)
2009-07-01
The ITER Organization needs to manage a massive amount of data and processes. Each team requires different processes and databases, often interconnected with those of other teams. ICP is the current central ITER repository of structured and unstructured data. All data in ICP are served and managed via a web interface that provides global accessibility with a common, user-friendly interface. This paper explains the model used by ICP and how it serves the ITER project by providing a robust and agile platform. ICP is developed in ASP.NET using MSSQL Server for data storage. It currently houses 15 data-driven applications, 150 different types of record, 500 k objects and 2.5 M references. During European working hours the system averages 150 concurrent users and 20 requests per second. ICP connects to external database applications to provide a single entry point to ITER data and a safe shared storage place to maintain these data long-term. The core model provides an easy-to-extend framework to meet the future needs of the Organization. ICP follows a multi-tier architecture, providing logical separation of processes. The standard three-tier architecture is expanded, with the data layer separated into data storage, data structure, and data access components. The business or application logic layer is broken up into a common business functionality layer, a type-specific logic layer, and a detached workflow layer. Finally, the presentation tier comprises a presentation adapter layer and an interface layer. Each layer is built up from small blocks which can be combined to create a wide range of more complex functionality. Each new object type developed gains access to a wealth of existing code functionality, while remaining free to adapt and extend it. The hardware structure is designed to provide complete redundancy and high availability and to handle high load. This document is composed of an abstract followed by the presentation transparencies. (authors)
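The layered separation described for ICP (storage, data access, business logic, presentation) can be sketched in a few lines. All class and method names below are illustrative, not taken from ICP, which is built in ASP.NET with MSSQL storage:

```python
class DataStore:
    """Data storage tier: holds raw records (stand-in for MSSQL)."""
    def __init__(self):
        self._rows = {}
    def put(self, key, value):
        self._rows[key] = value
    def get(self, key):
        return self._rows[key]

class RecordAccess:
    """Data access tier: the only layer that touches the store."""
    def __init__(self, store):
        self._store = store
    def save_record(self, rid, fields):
        self._store.put(rid, dict(fields))
    def load_record(self, rid):
        return self._store.get(rid)

class RecordService:
    """Business logic tier: type-specific rules live here, not in storage."""
    def __init__(self, access):
        self._access = access
    def create(self, rid, title):
        self._access.save_record(rid, {"title": title})
    def title_of(self, rid):
        return self._access.load_record(rid)["title"]

service = RecordService(RecordAccess(DataStore()))
service.create("DOC-1", "Divertor test plan")
```

Because each tier only talks to the one below it, any layer (e.g. the store) can be swapped without touching the others, which is the extensibility property the abstract attributes to the Core model.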
Federated learning of predictive models from federated Electronic Health Records.
Brisimi, Theodora S; Chen, Ruidi; Mela, Theofanie; Olshevsky, Alex; Paschalidis, Ioannis Ch; Shi, Wei
2018-04-01
In an era of "big data," computationally efficient and privacy-aware solutions for large-scale machine learning problems become crucial, especially in the healthcare domain, where large amounts of data are stored in different locations and owned by different entities. Past research has focused on centralized algorithms, which assume the existence of a central data repository (database) that stores and can process the data from all participants. Such an architecture, however, can be impractical when data are not centrally located, does not scale well to very large datasets, and introduces single-point-of-failure risks which could compromise the integrity and privacy of the data. Given scores of data widely spread across hospitals/individuals, a decentralized, computationally scalable methodology is very much needed. We aim at solving a binary supervised classification problem to predict hospitalizations for cardiac events using a distributed algorithm. We seek to develop a general decentralized optimization framework enabling multiple data holders to collaborate and converge to a common predictive model, without explicitly exchanging raw data. We focus on the soft-margin l1-regularized sparse Support Vector Machine (sSVM) classifier. We develop an iterative cluster Primal Dual Splitting (cPDS) algorithm for solving the large-scale sSVM problem in a decentralized fashion. Such a distributed learning scheme is relevant for multi-institutional collaborations or peer-to-peer applications, allowing the data holders to collaborate while keeping every participant's data private. We test cPDS on the problem of predicting hospitalizations due to heart diseases within a calendar year, based on information in the patients' Electronic Health Records prior to that year. cPDS converges faster than centralized methods at the cost of some communication between agents. It also converges faster and with less communication overhead compared to an alternative distributed
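As a rough illustration of the idea, the sketch below trains a soft-margin l1-regularized SVM across two simulated data holders: each takes a local subgradient step, then the models are averaged. This is a deliberate simplification: the paper's cPDS algorithm uses primal-dual splitting with peer-to-peer communication, and the data here are synthetic.

```python
import numpy as np

def hinge_l1_subgradient(w, X, y, lam):
    """Subgradient of (1/n) * sum max(0, 1 - y_i w.x_i) + lam * ||w||_1."""
    margins = y * (X @ w)
    mask = margins < 1                      # only violated margins contribute
    g = -(X[mask] * y[mask][:, None]).sum(axis=0) / len(y)
    return g + lam * np.sign(w)

def federated_svm(parts, lam=0.01, lr=0.1, rounds=200):
    """Local steps + model averaging; a toy stand-in for decentralized consensus."""
    w = np.zeros(parts[0][0].shape[1])
    for _ in range(rounds):
        local = [w - lr * hinge_l1_subgradient(w, X, y, lam) for X, y in parts]
        w = np.mean(local, axis=0)          # no raw data leaves a holder
    return w

rng = np.random.default_rng(0)
X1 = rng.normal(0, 0.2, (20, 2)) + [1, 1]   # "hospital 1": positive cases
X2 = rng.normal(0, 0.2, (20, 2)) - [1, 1]   # "hospital 2": negative cases
y1, y2 = np.ones(20), -np.ones(20)
w = federated_svm([(X1, y1), (X2, y2)])
acc = np.mean(np.sign(np.vstack([X1, X2]) @ w) == np.hstack([y1, y2]))
```

Only model vectors cross holder boundaries, which is the privacy property the abstract emphasizes; cPDS improves on plain averaging by converging with less communication.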
Basic features of boron isotope separation by SILARC method in the two-step iterative static model
Lyakhov, K. A.; Lee, H. J.
2013-05-01
In this paper we develop a new static model for boron isotope separation by the laser-assisted retardation of condensation method (SILARC), on the basis of the model proposed by Jeff Eerkens. Our model is intended for the so-called two-step iterative scheme for isotope separation. This rather simple model helps in understanding the combined action of all important parameters on boron separation by the SILARC method, and the relations between them. These parameters include the carrier gas, the molar fraction of BCl3 molecules in the carrier gas, laser pulse intensity, gas pulse duration, gas pressure and temperature in the reservoir and irradiation cells, optimal irradiation cell and skimmer chamber volumes, and optimal nozzle throughput. A method for finding optimal values of these parameters, based on a global minimum search of an objective function, is suggested. It turns out that the minimum of this objective function is directly related to the minimum of the total energy consumed and the total setup volume. Relations between nozzle throat area, IC volume, laser intensity, number of nozzles, number of vacuum pumps, and required isotope production rate were derived. Two types of industrial-scale irradiation cells are compared. The first has one large-throughput slit nozzle, while the second has numerous small nozzles arranged in parallel arrays for better overlap with the laser beam. It is shown that the latter significantly outperforms the former. It is argued that NO2 is the best carrier gas for boron isotope separation from the point of view of energy efficiency, and Ar from the point of view of setup compactness.
Validated predictive modelling of the environmental resistome.
Amos, Gregory C A; Gozzard, Emma; Carter, Charlotte E; Mead, Andrew; Bowes, Mike J; Hawkey, Peter M; Zhang, Lihong; Singer, Andrew C; Gaze, William H; Wellington, Elizabeth M H
2015-06-01
Multi-drug-resistant bacteria pose a significant threat to public health. The role of the environment in the overall rise in antibiotic-resistant infections and risk to humans is largely unknown. This study aimed to evaluate drivers of antibiotic-resistance levels across the River Thames catchment, model key biotic, spatial and chemical variables and produce predictive models for future risk assessment. Sediment samples from 13 sites across the River Thames basin were taken at four time points across 2011 and 2012. Samples were analysed for class 1 integron prevalence and enumeration of third-generation cephalosporin-resistant bacteria. Class 1 integron prevalence was validated as a molecular marker of antibiotic resistance; levels of resistance showed significant geospatial and temporal variation. The main explanatory variables of resistance levels at each sample site were the number, proximity, size and type of surrounding wastewater-treatment plants. Model 1 revealed treatment plants accounted for 49.5% of the variance in resistance levels. Other contributing factors were extent of different surrounding land cover types (for example, Neutral Grassland), temporal patterns and prior rainfall; when modelling all variables the resulting model (Model 2) could explain 82.9% of variations in resistance levels in the whole catchment. Chemical analyses correlated with key indicators of treatment plant effluent and a model (Model 3) was generated based on water quality parameters (contaminant and macro- and micro-nutrient levels). Model 2 was beta tested on independent sites and explained over 78% of the variation in integron prevalence showing a significant predictive ability. We believe all models in this study are highly useful tools for informing and prioritising mitigation strategies to reduce the environmental resistome.
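The "variance explained" figures quoted for Models 1-3 correspond to the coefficient of determination R². A minimal sketch with synthetic data (a single invented driver standing in for the wastewater-treatment-plant variables):

```python
import numpy as np

def variance_explained(y, y_pred):
    """R^2: fraction of variance in y captured by the predictions."""
    ss_res = np.sum((y - y_pred) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

# toy fit: resistance level vs. one hypothetical driver
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 1.9, 3.2, 3.9, 5.1])
slope, intercept = np.polyfit(x, y, 1)
r2 = variance_explained(y, slope * x + intercept)
# near-linear data, so r2 is close to 1; "82.9%" in the abstract
# means r2 = 0.829 for Model 2 over all variables
```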
Nonlinear model predictive control theory and algorithms
Grüne, Lars
2017-01-01
This book offers readers a thorough and rigorous introduction to nonlinear model predictive control (NMPC) for discrete-time and sampled-data systems. NMPC schemes with and without stabilizing terminal constraints are detailed, and intuitive examples illustrate the performance of different NMPC variants. NMPC is interpreted as an approximation of infinite-horizon optimal control so that important properties like closed-loop stability, inverse optimality and suboptimality can be derived in a uniform manner. These results are complemented by discussions of feasibility and robustness. An introduction to nonlinear optimal control algorithms yields essential insights into how the nonlinear optimization routine—the core of any nonlinear model predictive controller—works. Accompanying software in MATLAB® and C++ (downloadable from extras.springer.com/), together with an explanatory appendix in the book itself, enables readers to perform computer experiments exploring the possibilities and limitations of NMPC. T...
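The receding-horizon loop that NMPC is built on can be shown with a toy scalar system: optimize a short input sequence, apply only its first element, then re-optimize from the new state. Exhaustive search over a small input set stands in for the nonlinear optimization routine the book discusses; the dynamics and costs below are invented for illustration.

```python
import itertools

def nmpc_step(x, horizon=3, u_set=(-1.0, -0.5, 0.0, 0.5, 1.0)):
    """Return the first input of the best open-loop sequence.

    Toy problem: dynamics x+ = x + u, stage cost x^2 + 0.1 u^2,
    brute-force search instead of a nonlinear optimizer.
    """
    best_cost, best_u0 = float("inf"), 0.0
    for seq in itertools.product(u_set, repeat=horizon):
        xi, cost = x, 0.0
        for u in seq:
            xi = xi + u
            cost += xi ** 2 + 0.1 * u ** 2
        if cost < best_cost:
            best_cost, best_u0 = cost, seq[0]
    return best_u0

x = 3.0
for _ in range(10):
    x = x + nmpc_step(x)    # apply first input only, then re-optimize
# the closed loop drives x to the origin
```

Closed-loop stability of exactly this kind of scheme, with or without a terminal constraint, is what the book's theory chapters establish rigorously.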
Baryogenesis model predicting antimatter in the Universe
International Nuclear Information System (INIS)
Kirilova, D.
2003-01-01
Cosmic ray and gamma-ray data do not rule out antimatter domains in the Universe, separated at distances bigger than 10 Mpc from us. Hence, it is interesting to analyze the possible generation of vast antimatter structures during the early Universe evolution. We discuss a SUSY-condensate baryogenesis model, predicting large separated regions of matter and antimatter. The model provides generation of the small locally observed baryon asymmetry for natural initial conditions, and it predicts vast antimatter domains, separated from the matter ones by baryonically empty voids. The characteristic scale of the antimatter regions and their distance from the matter ones are in accordance with observational constraints from cosmic ray, gamma-ray and cosmic microwave background anisotropy data.
Energy Technology Data Exchange (ETDEWEB)
Tenant, Sean; Pang, Chun Lap; Dissanayake, Prageeth [Peninsula Radiology Academy, Plymouth (United Kingdom); Vardhanabhuti, Varut [Plymouth University Peninsula Schools of Medicine and Dentistry, Plymouth (United Kingdom); University of Hong Kong, Department of Diagnostic Radiology, Li Ka Shing Faculty of Medicine, Pokfulam (China); Stuckey, Colin; Gutteridge, Catherine [Plymouth Hospitals NHS Trust, Plymouth (United Kingdom); Hyde, Christopher [University of Exeter Medical School, St Luke's Campus, Exeter (United Kingdom); Roobottom, Carl [Plymouth University Peninsula Schools of Medicine and Dentistry, Plymouth (United Kingdom); Plymouth Hospitals NHS Trust, Plymouth (United Kingdom)]
2017-10-15
To evaluate the accuracy of reduced-dose CT scans reconstructed using a new generation of model-based iterative reconstruction (MBIR) in the imaging of urinary tract stone disease, compared with a standard-dose CT using 30% adaptive statistical iterative reconstruction. This single-institution prospective study recruited 125 patients presenting either with acute renal colic or for follow-up of known urinary tract stones. They underwent two immediately consecutive scans, one at standard dose settings and one at the lowest dose (highest noise index) the scanner would allow. The reduced-dose scans were reconstructed using both ASIR 30% and MBIR algorithms and reviewed independently by two radiologists. Objective and subjective image quality measures as well as diagnostic data were obtained. The reduced-dose MBIR scan was 100% concordant with the reference standard for the assessment of ureteric stones. It was extremely accurate at identifying calculi of 3 mm and above. The algorithm allowed a dose reduction of 58% without any loss of scan quality. A reduced-dose CT scan using MBIR is accurate in acute imaging for renal colic symptoms and for urolithiasis follow-up and allows a significant reduction in dose. (orig.)
Nam, S B; Jeong, D W; Choo, K S; Nam, K J; Hwang, J-Y; Lee, J W; Kim, J Y; Lim, S J
2017-12-01
To compare the image quality of computed tomography angiography (CTA) reconstructed by sinogram-affirmed iterative reconstruction (SAFIRE) with that of advanced modelled iterative reconstruction (ADMIRE) in children with congenital heart disease (CHD). Thirty-one children (8.23±13.92 months) with CHD who underwent CTA were enrolled. Images were reconstructed using SAFIRE (strength 5) and ADMIRE (strength 5). Objective image qualities (attenuation, noise) were measured in the great vessels and heart chambers. Two radiologists independently calculated the contrast-to-noise ratio (CNR) by measuring the intensity and noise of the myocardial walls. Subjective noise, diagnostic confidence, and sharpness at the level prior to the first branch of the main pulmonary artery were also graded by the two radiologists independently. The objective image noise of ADMIRE was significantly lower than that of SAFIRE in the right atrium, right ventricle, and myocardial wall (p < 0.05). The mean CNR values were 21.56±10.80 for ADMIRE and 18.21±6.98 for SAFIRE, which were significantly different (p < 0.05). CTA using ADMIRE was superior to SAFIRE when comparing the objective and subjective image quality in children with CHD. Copyright © 2017 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
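The contrast-to-noise ratio used in studies like this is commonly computed as the attenuation difference between a region of interest and background, divided by the image noise (standard deviation). Conventions vary between papers, and the HU values below are purely illustrative:

```python
def contrast_to_noise_ratio(roi_mean, background_mean, background_sd):
    """CNR = (mean ROI attenuation - mean background attenuation) / noise SD.

    One common CT convention; some studies use a pooled noise estimate instead.
    """
    return (roi_mean - background_mean) / background_sd

# hypothetical myocardial wall at 350 HU over a 100 HU background, noise SD 12 HU
cnr = contrast_to_noise_ratio(350.0, 100.0, 12.0)
```

Stronger iterative reconstruction lowers the noise SD in the denominator, which is why ADMIRE's lower noise translates directly into the higher CNR reported.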
Finding Furfural Hydrogenation Catalysts via Predictive Modelling
Strassberger, Zea; Mooijman, Maurice; Ruijter, Eelco; Alberts, Albert H; Maldonado, Ana G; Orru, Romano V A; Rothenberg, Gadi
2010-01-01
We combine multicomponent reactions, catalytic performance studies and predictive modelling to find transfer hydrogenation catalysts. An initial set of 18 ruthenium-carbene complexes was synthesized and screened in the transfer hydrogenation of furfural to furfurol with isopropyl alcohol. The complexes gave varied yields, from 62% up to >99.9%, with no obvious structure/activity correlations. Control experiments proved that the carbene ligand remains coordinated to the ruthenium centre t...
Predictive Modeling in Actinide Chemistry and Catalysis
Energy Technology Data Exchange (ETDEWEB)
Yang, Ping [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-05-16
These are slides from a presentation on predictive modeling in actinide chemistry and catalysis. The following topics are covered in these slides: Structures, bonding, and reactivity (bonding can be quantified by optical probes and theory, and electronic structures and reaction mechanisms of actinide complexes); Magnetic resonance properties (transition metal catalysts with multi-nuclear centers, and NMR/EPR parameters); Moving to more complex systems (surface chemistry of nanomaterials, and interactions of ligands with nanoparticles); Path forward and conclusions.
Tectonic predictions with mantle convection models
Coltice, Nicolas; Shephard, Grace E.
2018-04-01
Over the past 15 yr, numerical models of convection in Earth's mantle have made a leap forward: they can now produce self-consistent plate-like behaviour at the surface together with deep mantle circulation. These digital tools provide a new window into the intimate connections between plate tectonics and mantle dynamics, and can therefore be used for tectonic predictions, in principle. This contribution explores this assumption. First, initial conditions at 30, 20, 10 and 0 Ma are generated by driving a convective flow with imposed plate velocities at the surface. We then compute instantaneous mantle flows in response to the guessed temperature fields without imposing any boundary conditions. Plate boundaries self-consistently emerge at correct locations with respect to reconstructions, except for small plates close to subduction zones. As already observed for other types of instantaneous flow calculations, the structure of the top boundary layer and upper-mantle slab is the dominant character that leads to accurate predictions of surface velocities. Perturbations of the rheological parameters have little impact on the resulting surface velocities. We then compute fully dynamic model evolution from 30 and 10 to 0 Ma, without imposing plate boundaries or plate velocities. Contrary to instantaneous calculations, errors in kinematic predictions are substantial, although the plate layout and kinematics in several areas remain consistent with the expectations for the Earth. For these calculations, varying the rheological parameters makes a difference for plate boundary evolution. Also, identified errors in initial conditions contribute to first-order kinematic errors. This experiment shows that the tectonic predictions of dynamic models over 10 My are highly sensitive to uncertainties of rheological parameters and initial temperature field in comparison to instantaneous flow calculations. Indeed, the initial conditions and the rheological parameters can be good enough
Breast cancer risks and risk prediction models.
Engel, Christoph; Fischer, Christine
2015-02-01
BRCA1/2 mutation carriers have a considerably increased risk to develop breast and ovarian cancer. The personalized clinical management of carriers and other at-risk individuals depends on precise knowledge of the cancer risks. In this report, we give an overview of the present literature on empirical cancer risks, and we describe risk prediction models that are currently used for individual risk assessment in clinical practice. Cancer risks show large variability between studies. Breast cancer risks are at 40-87% for BRCA1 mutation carriers and 18-88% for BRCA2 mutation carriers. For ovarian cancer, the risk estimates are in the range of 22-65% for BRCA1 and 10-35% for BRCA2. The contralateral breast cancer risk is high (10-year risk after first cancer 27% for BRCA1 and 19% for BRCA2). Risk prediction models have been proposed to provide more individualized risk prediction, using additional knowledge on family history, mode of inheritance of major genes, and other genetic and non-genetic risk factors. User-friendly software tools have been developed that serve as basis for decision-making in family counseling units. In conclusion, further assessment of cancer risks and model validation is needed, ideally based on prospective cohort studies. To obtain such data, clinical management of carriers and other at-risk individuals should always be accompanied by standardized scientific documentation.
International Nuclear Information System (INIS)
Kim, Hyungjin; Kim, Seong Ho; Lee, Sang Min; Lee, Kyung Hee; Park, Chang Min; Park, Sang Joon; Goo, Jin Mo
2014-01-01
To compare the pulmonary subsolid nodule (SSN) classification agreement and measurement variability between filtered back projection (FBP) and model-based iterative reconstruction (MBIR). Low-dose CTs were reconstructed using FBP and MBIR for 47 patients with 47 SSNs. Two readers independently classified SSNs into pure or part-solid ground-glass nodules, and measured the size of the whole nodule and solid portion twice on both reconstruction algorithms. Nodule classification agreement was analyzed using Cohen's kappa and compared between reconstruction algorithms using McNemar's test. Measurement variability was investigated using Bland-Altman analysis and compared with the paired t-test. Cohen's kappa for inter-reader SSN classification agreement was 0.541-0.662 on FBP and 0.778-0.866 on MBIR. Between the two readers, nodule classification was consistent in 79.8 % (75/94) with FBP and 91.5 % (86/94) with MBIR (p = 0.027). The inter-reader measurement variability range was -5.0 to 2.1 mm on FBP and -3.3 to 1.8 mm on MBIR for whole nodule size, and -6.5 to 0.9 mm on FBP and -5.5 to 1.5 mm on MBIR for solid portion size. Inter-reader measurement differences were significantly smaller on MBIR (p = 0.027, whole nodule; p = 0.011, solid portion). MBIR significantly improved SSN classification agreement and reduced measurement variability of both whole nodules and solid portions between readers. (orig.)
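Cohen's kappa, used above for inter-reader agreement, corrects the observed agreement for the agreement expected by chance: κ = (p_o − p_e)/(1 − p_e). A small sketch with hypothetical nodule classifications by two readers:

```python
def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two raters over the same items.

    p_o = observed agreement; p_e = chance agreement derived from
    each rater's marginal class frequencies.
    """
    n = len(labels_a)
    categories = set(labels_a) | set(labels_b)
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    p_e = sum((labels_a.count(c) / n) * (labels_b.count(c) / n)
              for c in categories)
    return (p_o - p_e) / (1 - p_e)

# hypothetical classifications of six SSNs ("pure" vs "part"-solid)
reader1 = ["pure", "pure", "part", "part", "pure", "part"]
reader2 = ["pure", "part", "part", "part", "pure", "part"]
kappa = cohens_kappa(reader1, reader2)
# 5/6 observed agreement, 1/2 chance agreement, so kappa = 2/3
```

Values in the 0.54-0.66 band (FBP) are conventionally read as moderate agreement, and 0.78-0.87 (MBIR) as substantial to almost perfect, which is the practical meaning of the improvement reported.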
International Nuclear Information System (INIS)
Barras, Heloise; Dunet, Vincent; Hachulla, Anne-Lise; Grimm, Jochen; Beigelman-Aubry, Catherine
2016-01-01
Model-based iterative reconstruction (MBIR) reduces image noise and improves image quality (IQ), but its influence on post-processing tools, including maximal intensity projection (MIP) and minimal intensity projection (mIP), remains unknown. The aim was to evaluate the influence of MBIR on the IQ of native slices and of mIP and MIP axial and coronal reformats of reduced-dose computed tomography (RD-CT) chest acquisitions. Raw data of 50 patients, who underwent a standard-dose CT (SD-CT) and a follow-up RD-CT with a CT dose index (CTDI) of 2–3 mGy, were reconstructed by MBIR and FBP. Native slices, 4-mm-thick MIP, and 3-mm-thick mIP axial and coronal reformats were generated. The relative IQ, subjective IQ, image noise, and number of artifacts were determined in order to compare the different reconstructions of RD-CT with the reference SD-CT. The lowest noise was observed with MBIR. RD-CT reconstructed by MBIR exhibited the best relative and subjective IQ on coronal views regardless of the post-processing tool. MBIR generated the lowest rate of artifacts on coronal mIP/MIP reformats and the highest one on axial reformats, mainly represented by distortion and stair-step artifacts. The MBIR algorithm reduces image noise but generates more artifacts than FBP on axial mIP and MIP reformats of RD-CT. Conversely, it significantly improves IQ on coronal views, without increasing artifacts, regardless of the post-processing technique.
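The MIP and mIP reformats discussed above are simple voxel-wise reductions along the projection axis: MIP keeps the brightest voxel in each slab (highlighting dense structures such as vessels), mIP the darkest (highlighting air-filled structures such as airways). A minimal sketch on a toy volume with invented HU values:

```python
import numpy as np

def intensity_projections(volume, axis=0):
    """Return (MIP, mIP): voxel-wise max and min along the slab axis."""
    return volume.max(axis=axis), volume.min(axis=axis)

volume = np.zeros((4, 2, 2))     # 4 thin slices of 2x2 voxels
volume[2, 0, 0] = 300.0          # dense structure in slice 2
volume[1, 1, 1] = -800.0         # air-like voxel in slice 1
mip, mnp = intensity_projections(volume)
# mip[0, 0] == 300.0: the bright voxel survives the MIP
# mnp[1, 1] == -800.0: the dark voxel survives the mIP
```

Because each projected voxel is an extreme over a slab, slab-wise projections amplify whatever slice-to-slice inconsistencies the reconstruction introduces, which is one plausible reading of why MBIR's axial reformats show more distortion and stair-step artifacts here.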
Energy Technology Data Exchange (ETDEWEB)
Schmitz, O., E-mail: o.schmitz@fz-juelich.de [Forschungszentrum Jülich, IEK-4, Association EURATOM-FZJ, Jülich (Germany); Becoulet, M. [CEA/IRFM, Cadarache, 13108 St. Paul-lez-Durance Cedex (France); Cahyna, P. [IPP AS CR, Za Slovankou 3, 18200 Prague 8 (Czech Republic); Evans, T.E. [General Atomics, P.O. Box 85608, San Diego, CA 92186-5608 (United States); Feng, Y. [Max-Planck-Institut für Plasmaphysik, Greifswald (Germany); Frerichs, H.; Kirschner, A. [Forschungszentrum Jülich, IEK-4, Association EURATOM-FZJ, Jülich (Germany); Kukushkin, A. [ITER Organization, Route de Vinon sur Verdon, 13115 Saint Paul Lez Durance (France); Laengner, R. [Forschungszentrum Jülich, IEK-4, Association EURATOM-FZJ, Jülich (Germany); Lunt, T. [Max-Planck-Institut für Plasmaphysik, Greifswald (Germany); Loarte, A.; Pitts, R. [ITER Organization, Route de Vinon sur Verdon, 13115 Saint Paul Lez Durance (France); Reiser, D.; Reiter, D. [Forschungszentrum Jülich, IEK-4, Association EURATOM-FZJ, Jülich (Germany); Saibene, G. [Fusion for Energy Joint Undertaking, Barcelona (Spain); Samm, U. [Forschungszentrum Jülich, IEK-4, Association EURATOM-FZJ, Jülich (Germany)]
2013-07-15
First results from three-dimensional modeling of the divertor heat and particle flux pattern during application of resonant magnetic perturbation fields as an ELM control scheme in ITER, obtained with the EMC3-Eirene fluid plasma and kinetic neutral transport code, are discussed. The formation of a helical magnetic footprint breaks the toroidal symmetry of the heat and particle fluxes. Expansion of the flux pattern as far as 60 cm away from the unperturbed strike line is seen with vacuum RMP fields, resulting in a preferable spreading of the heat flux. Inclusion of plasma response reduces the radial extension of the heat and particle fluxes and results in a heat flux peaking closer to the unperturbed level. A strong reduction of the particle confinement is found. 3D flow channels are identified as a consistent cause, due to direct parallel outflow from inside the separatrix. Their radial inward expansion, and hence the level of particle pump-out, is shown to depend on the perturbation level.
International Nuclear Information System (INIS)
Carraretto, Cristian; Zigante, Andrea
2006-01-01
The liberalization of the electricity sector requires utilities to develop sound operation strategies for their power plants. In this paper, attention is focused on the problem of optimizing the management of the thermal power plants belonging to a strategic producer that competes with other strategic companies and a set of smaller non-strategic ones in the day-ahead market. The market model suggested here determines an equilibrium condition over the selected period of analysis, in which no producer can increase profits by changing its supply offers given all rivals' bids. Power plant technical and operating constraints are considered. An iterative procedure, based on dynamic programming, is used to find the optimum production plans of each producer. Several combinations of power plants and numbers of producers are analyzed, to simulate, for instance, the decommissioning of old expensive power plants, the installation of new, more efficient capacity, the severance of large dominant producers into smaller utilities, and the access of new producers to the market. Their effect on power plant management, market equilibrium, and the quantities and prices of electricity traded is discussed. (author)
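The dynamic-programming core of the per-producer optimization can be sketched for a single plant with discrete output levels and a ramp constraint; the prices, cost and levels below are invented for illustration, and the real procedure iterates such plans across producers until no one can profitably deviate:

```python
def best_dispatch(prices, levels=(0, 50, 100), cost_per_mwh=30.0, ramp=50):
    """DP over discrete output levels of one plant.

    State = previous hour's output; a transition q0 -> q is feasible
    if |q - q0| <= ramp. Stage profit = q * (price - marginal cost).
    The initial output is left unconstrained.
    """
    value = {q: 0.0 for q in levels}          # best profit ending at output q
    for p in prices:
        value = {
            q: max(value[q0] for q0 in levels if abs(q - q0) <= ramp)
               + q * (p - cost_per_mwh)
            for q in levels
        }
    return max(value.values())

# three hypothetical day-ahead prices (EUR/MWh) against a 30 EUR/MWh cost
profit = best_dispatch([20.0, 40.0, 60.0])
# the plan idles in the loss-making hour, then ramps 0 -> 50 -> 100
```

With the ramp limit of 50 MW, the plant cannot jump straight to full output for the high-price hour, so the DP accepts a smaller profit at mid output in hour 2 to position itself; this coupling between hours is exactly what dynamic programming captures and a myopic hour-by-hour rule misses.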
Gale, Maggie; Ball, Linden J
2012-04-01
Hypothesis-testing performance on Wason's (Quarterly Journal of Experimental Psychology 12:129-140, 1960) 2-4-6 task is typically poor, with only around 20% of participants announcing the to-be-discovered "ascending numbers" rule on their first attempt. Enhanced solution rates can, however, readily be observed with dual-goal (DG) task variants requiring the discovery of two complementary rules, one labeled "DAX" (the standard "ascending numbers" rule) and the other labeled "MED" ("any other number triples"). Two DG experiments are reported in which we manipulated the usefulness of a presented MED exemplar, where usefulness denotes cues that can establish a helpful "contrast class" that can stand in opposition to the presented 2-4-6 DAX exemplar. The usefulness of MED exemplars had a striking facilitatory effect on DAX rule discovery, which supports the importance of contrast-class information in hypothesis testing. A third experiment ruled out the possibility that the useful MED triple seeded the correct rule from the outset and obviated any need for hypothesis testing. We propose that an extension of Oaksford and Chater's (European Journal of Cognitive Psychology 6:149-169, 1994) iterative counterfactual model can neatly capture the mechanisms by which DG facilitation arises.
Built To Last: Using Iterative Development Models for Sustainable Scientific Software Development
Jasiak, M. E.; Truslove, I.; Savoie, M.
2013-12-01
In scientific research, software exists fundamentally for the results it produces; the core research must remain the focus. It seems natural to researchers, driven by grant deadlines, that every dollar invested in software development should be used to push the boundaries of problem solving. This system of values is frequently misaligned with the goal of creating software in a sustainable fashion: short-term optimizations create longer-term sustainability issues. The National Snow and Ice Data Center (NSIDC) has taken bold cultural steps in using agile and lean development and management methodologies to help its researchers meet critical deadlines, while building in the support structure necessary for the code to live far beyond its original milestones. Agile and lean software development methodologies, including Scrum, Kanban, Continuous Delivery, and Test-Driven Development, have seen widespread adoption within NSIDC. This focus on development methods is combined with an emphasis on explaining to researchers why these methods produce more desirable results for everyone, as well as on promoting direct interaction between developers and researchers. This presentation will describe NSIDC's current scientific software development model, how it addresses the short-term versus sustainability dichotomy, the lessons learned and successes realized by transitioning to this agile- and lean-influenced model, and the current challenges faced by the organization.
A predictive model for dimensional errors in fused deposition modeling
DEFF Research Database (Denmark)
Stolfi, A.
2015-01-01
This work concerns the effect of deposition angle (a) and layer thickness (L) on the dimensional performance of FDM parts, using a predictive model based on the geometrical description of the FDM filament profile. An experimental validation over the whole a range from 0° to 177° at 3° steps and two values of L (0.254 mm, 0.330 mm) was produced by comparing predicted values with external face-to-face measurements. After removing outliers, the results show that the developed two-parameter model can serve as a tool for modeling the FDM dimensional behavior over a wide range of deposition angles.
Two stage neural network modelling for robust model predictive control.
Patan, Krzysztof
2018-01-01
The paper proposes a novel robust model predictive control scheme realized by means of artificial neural networks. The neural networks are used twofold: to design the so-called fundamental model of a plant and to catch uncertainty associated with the plant model. In order to simplify the optimization process carried out within the framework of predictive control an instantaneous linearization is applied which renders it possible to define the optimization problem in the form of constrained quadratic programming. Stability of the proposed control system is also investigated by showing that a cost function is monotonically decreasing with respect to time. Derived robust model predictive control is tested and validated on the example of a pneumatic servomechanism working at different operating regimes. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
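The key simplification in the abstract, instantaneous linearization so that the MPC optimization becomes a constrained quadratic program, can be sketched in the smallest possible setting: a scalar linearized model x+ = a*x + b*u with a one-step horizon and a box constraint on u. For a one-dimensional quadratic cost, clipping the unconstrained minimizer to the box is the exact QP solution. The gains a, b and weights q, r below are illustrative, not taken from the paper.

```python
def mpc_step(x, a, b, q=1.0, r=0.1, u_min=-1.0, u_max=1.0):
    """One-step-ahead MPC for the scalar linearized model x+ = a*x + b*u.
    Minimizes q*(a*x + b*u)**2 + r*u**2 over the box [u_min, u_max].
    Setting dJ/du = 0 gives the unconstrained minimizer; for a 1-D
    quadratic, projecting (clipping) it onto the box is exact."""
    u_unc = -q * a * b * x / (q * b * b + r)
    return max(u_min, min(u_max, u_unc))

def simulate(x0, a, b, steps=20):
    """Closed-loop simulation: apply the MPC law to the plant itself."""
    x, traj = x0, [x0]
    for _ in range(steps):
        u = mpc_step(x, a, b)
        x = a * x + b * u
        traj.append(x)
    return traj

traj = simulate(5.0, a=0.9, b=0.5)
```

While the control is saturated at u_min the state decays linearly; once inside the box the closed loop contracts geometrically toward the origin, illustrating the monotone cost decrease used in the paper's stability argument.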
Predicting extinction rates in stochastic epidemic models
International Nuclear Information System (INIS)
Schwartz, Ira B; Billings, Lora; Dykman, Mark; Landsman, Alexandra
2009-01-01
We investigate the stochastic extinction processes in a class of epidemic models. Motivated by the process of natural disease extinction in epidemics, we examine the rate of extinction as a function of disease spread. We show that the effective entropic barrier for extinction in a susceptible–infected–susceptible epidemic model displays scaling with the distance to the bifurcation point, with an unusual critical exponent. We make a direct comparison between predictions and numerical simulations. We also consider the effect of non-Gaussian vaccine schedules, and show numerically how the extinction process may be enhanced when the vaccine schedules are Poisson distributed
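A minimal way to observe stochastic disease extinction of the kind studied above is a Gillespie (stochastic simulation) run of the susceptible-infected-susceptible model: infection and recovery occur as random events with state-dependent rates, and extinction is the first time the infected count hits zero. The rates below are illustrative (a subcritical epidemic, so extinction is fast); the paper's vaccine-schedule analysis is not reproduced here.

```python
import random

def sis_extinction_time(n=100, i0=5, beta=0.8, gamma=1.0, t_max=1000.0, rng=None):
    """Gillespie simulation of an SIS epidemic with population n.
    Events: infection S+I -> 2I at rate beta*S*I/n, recovery I -> S at
    rate gamma*I. Returns the extinction time (first time I == 0),
    or t_max if the epidemic outlives the simulation window."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    t, i = 0.0, i0
    while i > 0 and t < t_max:
        s = n - i
        rate_inf = beta * s * i / n
        rate_rec = gamma * i
        total = rate_inf + rate_rec
        t += rng.expovariate(total)           # time to next event
        if rng.random() < rate_inf / total:   # which event fired?
            i += 1
        else:
            i -= 1
    return t

t_ext = sis_extinction_time()
```

Averaging `sis_extinction_time` over many seeds estimates the mean extinction time, whose scaling with distance to the bifurcation point (beta/gamma -> 1) is the quantity the paper analyzes.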
Predictive Modeling of the CDRA 4BMS
Coker, Robert F.; Knox, James C.
2016-01-01
As part of NASA's Advanced Exploration Systems (AES) program and the Life Support Systems Project (LSSP), fully predictive models of the Four Bed Molecular Sieve (4BMS) of the Carbon Dioxide Removal Assembly (CDRA) on the International Space Station (ISS) are being developed. This virtual laboratory will be used to help reduce mass, power, and volume requirements for future missions. In this paper we describe current and planned modeling developments in the area of carbon dioxide removal to support future crewed Mars missions as well as the resolution of anomalies observed in the ISS CDRA.
ITER council proceedings: 1998
International Nuclear Information System (INIS)
1999-01-01
This volume contains documents of the 13th and the 14th ITER council meeting as well as of the 1st extraordinary ITER council meeting. Documents of the ITER meetings held in Vienna and Yokohama during 1998 are also included. The contents include an outline of the ITER objectives, the ITER parameters and design overview as well as operating scenarios and plasma performance. Furthermore, design features, safety and environmental characteristics are given
Studio Physics at the Colorado School of Mines: A model for iterative development and assessment
Kohl, Patrick; Kuo, Vincent
2009-05-01
The Colorado School of Mines (CSM) has taught its first-semester introductory physics course using a hybrid lecture/Studio Physics format for several years. Based on this previous success, over the past 18 months we have converted the second semester of our traditional calculus-based introductory physics course (Physics II) to a Studio Physics format. In this talk, we describe the recent history of the Physics II course and of Studio at Mines, discuss the PER-based improvements that we are implementing, and characterize our progress via several metrics, including pre/post Conceptual Survey of Electricity and Magnetism (CSEM) scores, Colorado Learning About Science Survey scores (CLASS), failure rates, and exam scores. We also report on recent attempts to involve students in the department's Senior Design program with our course. Our ultimate goal is to construct one possible model for a practical and successful transition from a lecture course to a Studio (or Studio-like) course.
Data Driven Economic Model Predictive Control
Directory of Open Access Journals (Sweden)
Masoud Kheradmandi
2018-04-01
Full Text Available This manuscript addresses the problem of data driven model based economic model predictive control (MPC design. To this end, first, a data-driven Lyapunov-based MPC is designed, and shown to be capable of stabilizing a system at an unstable equilibrium point. The data driven Lyapunov-based MPC utilizes a linear time invariant (LTI model cognizant of the fact that the training data, owing to the unstable nature of the equilibrium point, has to be obtained from closed-loop operation or experiments. Simulation results are first presented demonstrating closed-loop stability under the proposed data-driven Lyapunov-based MPC. The underlying data-driven model is then utilized as the basis to design an economic MPC. The economic improvements yielded by the proposed method are illustrated through simulations on a nonlinear chemical process system example.
Iteration of adjoint equations
International Nuclear Information System (INIS)
Lewins, J.D.
1994-01-01
Adjoint functions are the basis of variational methods and now widely used for perturbation theory and its extension to higher order theory as used, for example, in modelling fuel burnup and optimization. In such models, the adjoint equation is to be solved in a critical system with an adjoint source distribution that is not zero but has special properties related to ratios of interest in critical systems. Consequently the methods of solving equations by iteration and accumulation are reviewed to show how conventional methods may be utilized in these circumstances with adequate accuracy. (author). 3 refs., 6 figs., 3 tabs
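The iteration-and-accumulation idea above can be sketched for a generic fixed-point problem phi = M*phi + s: source iteration converges whenever the spectral radius of M is below one, and the corresponding adjoint equation is solved by iterating with the transposed operator and the adjoint source. The 2x2 matrix here is a toy stand-in for a discretized transport operator.

```python
def source_iteration(m, s, tol=1e-10, max_iter=1000):
    """Solve phi = M*phi + s by fixed-point (source) iteration.
    m: small dense matrix as nested lists; s: source vector.
    Converges geometrically when the spectral radius of M is < 1."""
    n = len(s)
    phi = [0.0] * n
    for _ in range(max_iter):
        new = [s[i] + sum(m[i][j] * phi[j] for j in range(n)) for i in range(n)]
        if max(abs(new[i] - phi[i]) for i in range(n)) < tol:
            return new
        phi = new
    return phi

m = [[0.2, 0.1], [0.0, 0.3]]          # toy operator, spectral radius 0.3
phi = source_iteration(m, [1.0, 1.0])  # forward solution

# adjoint equation: iterate with the transposed operator and adjoint source
mt = [[m[j][i] for j in range(2)] for i in range(2)]
phi_adj = source_iteration(mt, [1.0, 0.0])
```

For this toy case the exact forward solution is phi = (10/7, 10/7) and the adjoint solution is (1.25, 0.125/0.7), which the iteration reproduces to the requested tolerance.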
Structural model for the first wall W-based material in ITER project
Institute of Scientific and Technical Information of China (English)
Dehua Xu; Xinkui He; Shuiquan Deng; Yong Zhao
2014-01-01
The preparation, characterization, and testing of the first wall materials designed to be used in the fusion reactor have remained challenging problems in materials science. This work uses the first-principles method as implemented in the CASTEP package to study the influences of the doped titanium carbide on the structural stability of the W–TiC material. The calculated total energy and enthalpy have been used as criteria to judge the structural models built with consideration of symmetry. Our simulation indicates that the doped TiC tends to form its own domain up to the investigated nano-scale, which implies a possible phase separation. This result reveals the intrinsic reason for the composite nature of the W–TiC material and provides an explanation for the experimentally observed phase separation at the nano-scale. Our approach also sheds light on explaining the enhancing effects of doped components on the durability, reliability, corrosion resistance, etc., in many special steels.
Plant control using embedded predictive models
International Nuclear Information System (INIS)
Godbole, S.S.; Gabler, W.E.; Eschbach, S.L.
1990-01-01
B and W recently undertook the design of an advanced light water reactor control system. A concept new to nuclear steam system (NSS) control was developed. The concept, which is called the Predictor-Corrector, uses mathematical models of portions of the controlled NSS to calculate, at various levels within the system, demand and control element position signals necessary to satisfy electrical demand. The models give the control system the ability to reduce overcooling and undercooling of the reactor coolant system during transients and upsets. Two types of mathematical models were developed for use in designing and testing the control system. One model was a conventional, comprehensive NSS model that responds to control system outputs and calculates the resultant changes in plant variables that are then used as inputs to the control system. Two other models, embedded in the control system, were less conventional, inverse models. These models accept as inputs plant variables, equipment states, and demand signals and predict plant operating conditions and control element states that will satisfy the demands. This paper reports preliminary results of closed-loop Reactor Coolant (RC) pump trip and normal load reduction testing of the advanced concept. Results of additional transient testing, and of open and closed loop stability analyses will be reported as they are available
Ground Motion Prediction Models for Caucasus Region
Jorjiashvili, Nato; Godoladze, Tea; Tvaradze, Nino; Tumanova, Nino
2016-04-01
Ground motion prediction models (GMPMs) relate ground motion intensity measures to variables describing earthquake source, path, and site effects. Estimation of expected ground motion is a fundamental part of earthquake hazard assessment. The most commonly used parameters for attenuation relations are peak ground acceleration and spectral acceleration, because these parameters give useful information for seismic hazard assessment. Development of the Georgian Digital Seismic Network started in 2003. In this study, new GMP models are obtained based on new data from the Georgian seismic network and from neighboring countries. The models are estimated in the classical statistical way, by regression analysis. Site ground conditions are additionally considered, because the same earthquake recorded at the same distance may cause different damage depending on ground conditions. Empirical ground-motion prediction models (GMPMs) require adjustment to make them appropriate for site-specific scenarios; however, the process of making such adjustments remains a challenge. This work presents a holistic framework for the development of a peak ground acceleration (PGA) or spectral acceleration (SA) GMPE that is easily adjustable to different seismological conditions and does not suffer from the practical problems associated with adjustments in the response spectral domain.
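The regression step described above can be sketched with the classical GMPE functional form ln(PGA) = c0 + c1*M + c2*ln(R), fitted by ordinary least squares via the normal equations. The functional form is the textbook one, not necessarily the one used in this study, and the records below are synthetic, generated from assumed coefficients so the fit can be checked exactly.

```python
import math

def fit_gmpe(records):
    """Least-squares fit of ln(PGA) = c0 + c1*M + c2*ln(R).
    records: list of (magnitude, distance_km, pga) tuples.
    Solves the 3x3 normal equations A^T A c = A^T y by Gaussian
    elimination with partial pivoting."""
    rows = [[1.0, mag, math.log(r)] for mag, r, _ in records]
    y = [math.log(p) for _, _, p in records]
    ata = [[sum(a[i] * a[j] for a in rows) for j in range(3)] for i in range(3)]
    aty = [sum(a[i] * yi for a, yi in zip(rows, y)) for i in range(3)]
    for k in range(3):                      # forward elimination
        p = max(range(k, 3), key=lambda i: abs(ata[i][k]))
        ata[k], ata[p] = ata[p], ata[k]
        aty[k], aty[p] = aty[p], aty[k]
        for i in range(k + 1, 3):
            f = ata[i][k] / ata[k][k]
            for j in range(k, 3):
                ata[i][j] -= f * ata[k][j]
            aty[i] -= f * aty[k]
    c = [0.0] * 3                           # back substitution
    for i in (2, 1, 0):
        c[i] = (aty[i] - sum(ata[i][j] * c[j] for j in range(i + 1, 3))) / ata[i][i]
    return c

# synthetic records from assumed coefficients (-2.0, 1.0, -1.2)
records = [(m_, r_, math.exp(-2.0 + 1.0 * m_ - 1.2 * math.log(r_)))
           for m_ in (4.0, 5.0, 6.0, 7.0) for r_ in (10.0, 50.0, 100.0)]
coeffs = fit_gmpe(records)
```

Because the synthetic data are noise-free, the fit recovers the assumed coefficients to machine precision; real strong-motion data would add a residual standard deviation term.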
Modeling and Prediction of Krueger Device Noise
Guo, Yueping; Burley, Casey L.; Thomas, Russell H.
2016-01-01
This paper presents the development of a noise prediction model for aircraft Krueger flap devices that are considered as alternatives to leading edge slotted slats. The prediction model decomposes the total Krueger noise into four components, generated by the unsteady flows, respectively, in the cove under the pressure side surface of the Krueger, in the gap between the Krueger trailing edge and the main wing, around the brackets supporting the Krueger device, and around the cavity on the lower side of the main wing. For each noise component, the modeling follows a physics-based approach that aims at capturing the dominant noise-generating features in the flow and developing correlations between the noise and the flow parameters that control the noise generation processes. The far field noise is modeled using each of the four noise component's respective spectral functions, far field directivities, Mach number dependencies, component amplitudes, and other parametric trends. Preliminary validations are carried out by using small scale experimental data, and two applications are discussed; one for conventional aircraft and the other for advanced configurations. The former focuses on the parametric trends of Krueger noise on design parameters, while the latter reveals its importance in relation to other airframe noise components.
Prediction of Chemical Function: Model Development and ...
The United States Environmental Protection Agency’s Exposure Forecaster (ExpoCast) project is developing both statistical and mechanism-based computational models for predicting exposures to thousands of chemicals, including those in consumer products. The high-throughput (HT) screening-level exposures developed under ExpoCast can be combined with HT screening (HTS) bioactivity data for the risk-based prioritization of chemicals for further evaluation. The functional role (e.g. solvent, plasticizer, fragrance) that a chemical performs can drive both the types of products in which it is found and the concentration in which it is present, thereby impacting exposure potential. However, critical chemical use information (including functional role) is lacking for the majority of commercial chemicals for which exposure estimates are needed. A suite of machine-learning-based models for classifying chemicals in terms of their likely functional roles in products, based on structure, was developed. This effort required collection, curation, and harmonization of publicly available data sources of chemical functional use information from government and industry bodies. Physicochemical and structure descriptor data were generated for chemicals with function data. Machine-learning classifier models for function were then built in a cross-validated manner from the descriptor/function data using the method of random forests. The models were applied to: 1) predict chemi
Evaluating Predictive Models of Software Quality
Ciaschini, V.; Canaparo, M.; Ronchieri, E.; Salomoni, D.
2014-06-01
Applications from the High Energy Physics scientific community are constantly growing and implemented by a large number of developers. This implies a strong churn on the code and an associated risk of faults, which is unavoidable as long as the software undergoes active evolution. However, the necessities of production systems run counter to this. Stability and predictability are of paramount importance; in addition, a short turn-around time for the defect discovery-correction-deployment cycle is required. A way to reconcile these opposite foci is to use a software quality model to obtain an approximation of the risk before releasing a program, so as to only deliver software with a risk lower than an agreed threshold. In this article we evaluated two quality predictive models to identify the operational risk and the quality of some software products. We applied these models to the development history of several EMI packages with intent to discover the risk factor of each product and compare it with its real history. We attempted to determine whether the models reasonably map reality for the applications under evaluation, and we conclude by suggesting directions for further studies.
Predicting FLDs Using a Multiscale Modeling Scheme
Wu, Z.; Loy, C.; Wang, E.; Hegadekatte, V.
2017-09-01
The measurement of a single forming limit diagram (FLD) requires significant resources and is time consuming. We have developed a multiscale modeling scheme to predict FLDs using a combination of limited laboratory testing, crystal plasticity (VPSC) modeling, and dual sequential-stage finite element (ABAQUS/Explicit) modeling with the Marciniak-Kuczynski (M-K) criterion to determine the limit strain. We have established a means to work around existing limitations in ABAQUS/Explicit by using an anisotropic yield locus (e.g., BBC2008) in combination with the M-K criterion. We further apply a VPSC model to reduce the number of laboratory tests required to characterize the anisotropic yield locus. In the present work, we show that the predicted FLD is in excellent agreement with the measured FLD for AA5182 in the O temper. Instead of 13 different tests as for a traditional FLD determination within Novelis, our technique uses just four measurements: tensile properties in three orientations; plane strain tension; biaxial bulge; and the sheet crystallographic texture. The turnaround time is consequently far less than for the traditional laboratory measurement of the FLD.
PREDICTION MODELS OF GRAIN YIELD AND CHARACTERIZATION
Directory of Open Access Journals (Sweden)
Narciso Ysac Avila Serrano
2009-06-01
Full Text Available With the objective to characterize the grain yield of five cowpea cultivars and to find linear regression models to predict it, a study was developed in La Paz, Baja California Sur, Mexico. A complete randomized blocks design was used. Simple and multivariate analyses of variance were carried out using the canonical variables to characterize the cultivars. The variables clusters per plant, pods per plant, pods per cluster, seed weight per plant, seed hectoliter weight, 100-seed weight, seed length, seed width, seed thickness, pod length, pod width, pod weight, seeds per pod, and seed weight per pod showed significant differences (P ≤ 0.05) among cultivars. The Paceño and IT90K-277-2 cultivars showed the highest seed weight per plant. The linear regression models showed correlation coefficients ≥ 0.92. In these models, the seed weight per plant, pods per cluster, pods per plant, clusters per plant, and pod length showed significant correlations (P ≤ 0.05). In conclusion, the results showed that grain yield differs among cultivars and that, for its estimation, the prediction models showed highly dependable determination coefficients.
Evaluating predictive models of software quality
International Nuclear Information System (INIS)
Ciaschini, V; Canaparo, M; Ronchieri, E; Salomoni, D
2014-01-01
Applications from the High Energy Physics scientific community are constantly growing and implemented by a large number of developers. This implies a strong churn on the code and an associated risk of faults, which is unavoidable as long as the software undergoes active evolution. However, the necessities of production systems run counter to this. Stability and predictability are of paramount importance; in addition, a short turn-around time for the defect discovery-correction-deployment cycle is required. A way to reconcile these opposite foci is to use a software quality model to obtain an approximation of the risk before releasing a program, so as to only deliver software with a risk lower than an agreed threshold. In this article we evaluated two quality predictive models to identify the operational risk and the quality of some software products. We applied these models to the development history of several EMI packages with intent to discover the risk factor of each product and compare it with its real history. We attempted to determine whether the models reasonably map reality for the applications under evaluation, and we conclude by suggesting directions for further studies.
Gamma-Ray Pulsars Models and Predictions
Harding, A K
2001-01-01
Pulsed emission from gamma-ray pulsars originates inside the magnetosphere, from radiation by charged particles accelerated near the magnetic poles or in the outer gaps. In polar cap models, the high energy spectrum is cut off by magnetic pair production above an energy that is dependent on the local magnetic field strength. While most young pulsars with surface fields in the range B = 10^{12} - 10^{13} G are expected to have high energy cutoffs around several GeV, the gamma-ray spectra of old pulsars having lower surface fields may extend to 50 GeV. Although the gamma-ray emission of older pulsars is weaker, detecting pulsed emission at high energies from nearby sources would be an important confirmation of polar cap models. Outer gap models predict more gradual high-energy turnovers at around 10 GeV, but also predict an inverse Compton component extending to TeV energies. Detection of pulsed TeV emission, which would not survive attenuation at the polar caps, is thus an important test of outer gap models. N...
Artificial Neural Network Model for Predicting Compressive
Directory of Open Access Journals (Sweden)
Salim T. Yousif
2013-05-01
Full Text Available Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, strength estimation of concrete at an early age is highly desirable. This study presents the effort in applying neural-network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum aggregate size (MAS), and slump of fresh concrete. A back-propagation neural network model is successively developed, trained, and tested using actual data sets of concrete mix proportions gathered from the literature. Testing the model with unused data within the range of the input parameters shows that the maximum absolute error of the model is about 20% and 88% of the output results have absolute errors less than 10%. The parametric study shows that the water/cement ratio (w/c) is the most significant factor affecting the output of the model. The results show that neural networks have strong potential as a feasible tool for predicting the compressive strength of concrete.
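The back-propagation setup described above, inputs such as mix proportions mapped to a scalar strength, can be sketched with a minimal one-hidden-layer network trained by stochastic gradient descent. The training data here are synthetic (a simple function of two normalized features standing in for mix parameters, with targets scaled to [0, 1]); this is an illustration of the training loop, not the paper's model.

```python
import math, random

def train_mlp(data, hidden=4, epochs=1000, lr=0.1, seed=0):
    """Train a one-hidden-layer sigmoid network by back-propagation.
    data: list of (features, target) pairs with targets in [0, 1].
    Returns a predict(features) closure over the learned weights."""
    rng = random.Random(seed)
    n_in = len(data[0][0])
    w1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]
    b2 = 0.0
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    for _ in range(epochs):
        for x, t in data:
            h = [sig(sum(wi * xi for wi, xi in zip(w1[j], x)) + b1[j])
                 for j in range(hidden)]
            y = sum(w2[j] * h[j] for j in range(hidden)) + b2
            err = y - t                      # dLoss/dy for squared error
            for j in range(hidden):
                gh = err * w2[j] * h[j] * (1 - h[j])   # gradient at hidden unit
                w2[j] -= lr * err * h[j]
                b1[j] -= lr * gh
                for i in range(n_in):
                    w1[j][i] -= lr * gh * x[i]
            b2 -= lr * err
    def predict(x):
        h = [sig(sum(wi * xi for wi, xi in zip(w1[j], x)) + b1[j])
             for j in range(hidden)]
        return sum(w2[j] * h[j] for j in range(hidden)) + b2
    return predict

# synthetic training set: target = 0.3*x0 + 0.5*x1 on a grid in [0, 1]^2
data = [([i / 4, j / 4], 0.3 * i / 4 + 0.5 * j / 4)
        for i in range(5) for j in range(5)]
predict = train_mlp(data)
```

On this easy synthetic task the trained network reproduces the target function to within a few percent; real mix-design data would of course need normalization and a held-out test set, as the study describes.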
Clinical Predictive Modeling Development and Deployment through FHIR Web Services.
Khalilia, Mohammed; Choi, Myung; Henderson, Amelia; Iyengar, Sneha; Braunstein, Mark; Sun, Jimeng
2015-01-01
Clinical predictive modeling involves two challenging tasks: model development and model deployment. In this paper we demonstrate a software architecture for developing and deploying clinical predictive models using web services via the Health Level 7 (HL7) Fast Healthcare Interoperability Resources (FHIR) standard. The services enable model development using electronic health records (EHRs) stored in OMOP CDM databases and model deployment for scoring individual patients through FHIR resources. The MIMIC2 ICU dataset and a synthetic outpatient dataset were transformed into OMOP CDM databases for predictive model development. The resulting predictive models are deployed as FHIR resources, which receive requests of patient information, perform prediction against the deployed predictive model and respond with prediction scores. To assess the practicality of this approach we evaluated the response and prediction time of the FHIR modeling web services. We found the system to be reasonably fast with one second total response time per patient prediction.
Energy Technology Data Exchange (ETDEWEB)
Gang, G; Siewerdsen, J; Stayman, J [Johns Hopkins University, Baltimore, MD (United States)
2016-06-15
Purpose: There has been increasing interest in integrating fluence field modulation (FFM) devices with diagnostic CT scanners for dose reduction purposes. Conventional FFM strategies, however, are often either based on heuristics or the analysis of filtered-backprojection (FBP) performance. This work investigates a prospective task-driven optimization of FFM for model-based iterative reconstruction (MBIR) in order to improve imaging performance at the same total dose as conventional strategies. Methods: The task-driven optimization framework utilizes an ultra-low dose 3D scout as a patient-specific anatomical model and a mathematical formation of the imaging task. The MBIR method investigated is quadratically penalized-likelihood reconstruction. The FFM objective function uses detectability index, d’, computed as a function of the predicted spatial resolution and noise in the image. To optimize performance throughout the object, a maxi-min objective was adopted where the minimum d’ over multiple locations is maximized. To reduce the dimensionality of the problem, FFM is parameterized as a linear combination of 2D Gaussian basis functions over horizontal detector pixels and projection angles. The coefficients of these bases are found using the covariance matrix adaptation evolution strategy (CMA-ES) algorithm. The task-driven design was compared with three other strategies proposed for FBP reconstruction for a calcification cluster discrimination task in an abdomen phantom. Results: The task-driven optimization yielded FFM that was significantly different from those designed for FBP. Comparing all four strategies, the task-based design achieved the highest minimum d’ with an 8–48% improvement, consistent with the maxi-min objective. In addition, d’ was improved to a greater extent over a larger area within the entire phantom. Conclusion: Results from this investigation suggests the need to re-evaluate conventional FFM strategies for MBIR. The task
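The maxi-min objective above, maximize the minimum detectability d' over multiple locations, can be sketched with any derivative-free optimizer. Here a simple accept-if-better random search stands in for the CMA-ES algorithm used in the work, and the two objective functions are hypothetical smooth surrogates for location-wise d', not the transport-based predictors of the paper.

```python
import random

def maximin_search(objectives, n_params, iters=2000, sigma=0.1, seed=1):
    """Maximize min_k objectives[k](x) over x in [0, 1]^n_params using
    accept-if-better Gaussian perturbations (a toy stand-in for CMA-ES)."""
    rng = random.Random(seed)
    best_x = [rng.uniform(0.0, 1.0) for _ in range(n_params)]
    best_v = min(obj(best_x) for obj in objectives)
    for _ in range(iters):
        x = [min(1.0, max(0.0, xi + rng.gauss(0.0, sigma))) for xi in best_x]
        v = min(obj(x) for obj in objectives)   # worst-case location
        if v > best_v:                          # keep only improvements
            best_x, best_v = x, v
    return best_x, best_v

# two hypothetical location-wise detectability surrogates, each peaking at 1
objs = [lambda x: 1.0 - (x[0] - 0.3) ** 2,
        lambda x: 1.0 - (x[1] - 0.7) ** 2]
x_best, v_best = maximin_search(objs, n_params=2)
```

The search balances the two locations: improving one objective at the expense of the worst one is rejected, which is exactly the behavior the maxi-min design exploits to equalize d' across the phantom.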
International Nuclear Information System (INIS)
Gang, G; Siewerdsen, J; Stayman, J
2016-01-01
Purpose: There has been increasing interest in integrating fluence field modulation (FFM) devices with diagnostic CT scanners for dose reduction purposes. Conventional FFM strategies, however, are often either based on heuristics or the analysis of filtered-backprojection (FBP) performance. This work investigates a prospective task-driven optimization of FFM for model-based iterative reconstruction (MBIR) in order to improve imaging performance at the same total dose as conventional strategies. Methods: The task-driven optimization framework utilizes an ultra-low dose 3D scout as a patient-specific anatomical model and a mathematical formation of the imaging task. The MBIR method investigated is quadratically penalized-likelihood reconstruction. The FFM objective function uses detectability index, d’, computed as a function of the predicted spatial resolution and noise in the image. To optimize performance throughout the object, a maxi-min objective was adopted where the minimum d’ over multiple locations is maximized. To reduce the dimensionality of the problem, FFM is parameterized as a linear combination of 2D Gaussian basis functions over horizontal detector pixels and projection angles. The coefficients of these bases are found using the covariance matrix adaptation evolution strategy (CMA-ES) algorithm. The task-driven design was compared with three other strategies proposed for FBP reconstruction for a calcification cluster discrimination task in an abdomen phantom. Results: The task-driven optimization yielded FFM that was significantly different from those designed for FBP. Comparing all four strategies, the task-based design achieved the highest minimum d’ with an 8–48% improvement, consistent with the maxi-min objective. In addition, d’ was improved to a greater extent over a larger area within the entire phantom. Conclusion: Results from this investigation suggests the need to re-evaluate conventional FFM strategies for MBIR. The task
An analytical model for climatic predictions
International Nuclear Information System (INIS)
Njau, E.C.
1990-12-01
A climatic model based upon analytical expressions is presented. This model is capable of making long-range predictions of heat energy variations on regional or global scales. These variations can then be transformed into corresponding variations of some other key climatic parameters, since weather and climatic changes are basically driven by differential heating and cooling around the earth. On the basis of the mathematical expressions upon which the model is based, it is shown that the global heat energy structure (and hence the associated climatic system) is characterized by zonally as well as latitudinally propagating fluctuations at frequencies downward of 0.5 day⁻¹. We have calculated the propagation speeds for those particular frequencies that are well documented in the literature. The calculated speeds are in excellent agreement with the measured speeds. (author). 13 refs
An Anisotropic Hardening Model for Springback Prediction
Zeng, Danielle; Xia, Z. Cedric
2005-08-01
As more Advanced High-Strength Steels (AHSS) are heavily used for automotive body structures and closures panels, accurate springback prediction for these components becomes more challenging because of their rapid hardening characteristics and ability to sustain even higher stresses. In this paper, a modified Mroz hardening model is proposed to capture realistic Bauschinger effect at reverse loading, such as when material passes through die radii or drawbead during sheet metal forming process. This model accounts for material anisotropic yield surface and nonlinear isotropic/kinematic hardening behavior. Material tension/compression test data are used to accurately represent Bauschinger effect. The effectiveness of the model is demonstrated by comparison of numerical and experimental springback results for a DP600 straight U-channel test.
An Anisotropic Hardening Model for Springback Prediction
International Nuclear Information System (INIS)
Zeng, Danielle; Xia, Z. Cedric
2005-01-01
As more Advanced High-Strength Steels (AHSS) are heavily used for automotive body structures and closures panels, accurate springback prediction for these components becomes more challenging because of their rapid hardening characteristics and ability to sustain even higher stresses. In this paper, a modified Mroz hardening model is proposed to capture realistic Bauschinger effect at reverse loading, such as when material passes through die radii or drawbead during sheet metal forming process. This model accounts for material anisotropic yield surface and nonlinear isotropic/kinematic hardening behavior. Material tension/compression test data are used to accurately represent Bauschinger effect. The effectiveness of the model is demonstrated by comparison of numerical and experimental springback results for a DP600 straight U-channel test
Accurate Holdup Calculations with Predictive Modeling & Data Integration
Energy Technology Data Exchange (ETDEWEB)
Azmy, Yousry [North Carolina State Univ., Raleigh, NC (United States). Dept. of Nuclear Engineering; Cacuci, Dan [Univ. of South Carolina, Columbia, SC (United States). Dept. of Mechanical Engineering
2017-04-03
To apply Bayes’ Theorem, one must have a model y(x) that maps the state variables x (the solution in this case) to the measurements y. In this case, the unknown state variables are the configuration and composition of the held-up SNM. The measurements are the detector readings. Thus, the natural model is neutral-particle radiation transport, where a wealth of computational tools exists for performing these simulations accurately and efficiently. The combination of predictive model and Bayesian inference forms the Data Integration with Modeled Predictions (DIMP) method that serves as the foundation for this project. The cost functional describing the model-to-data misfit is computed via a norm created by the inverse of the covariance matrix of the model parameters and responses. Since the model y(x) for the holdup problem is nonlinear, a nonlinear optimization on Q is conducted via Newton-type iterative methods to find the optimal values of the model parameters x. This project comprised a collaboration between NC State University (NCSU), the University of South Carolina (USC), and Oak Ridge National Laboratory (ORNL). The project was originally proposed in seven main tasks, with an eighth contingency task to be performed if time and funding permitted; in fact, time did not permit commencement of the contingency task and it was not performed. The remaining tasks involved holdup analysis with gamma detection strategies and, separately, with neutrons based on coincidence counting. Early in the project, upon consultation with experts in coincidence counting, it became evident that this approach is not viable for holdup applications, and this task was replaced with an alternative but valuable investigation that was carried out by the USC partner. Nevertheless, the experimental measurements at ORNL of both gamma and neutron sources for the purpose of constructing Detector Response Functions (DRFs) with the associated uncertainties were indeed completed.
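The Newton-type minimization of the covariance-weighted misfit described above can be sketched in the simplest nonlinear case: a single parameter x, a diagonal covariance, and a hypothetical forward model f(x, d) = exp(-x*d) standing in for the radiation-transport model y(x). Gauss-Newton then reduces to a scalar update.

```python
import math

def gauss_newton(x0, data, iters=25):
    """Gauss-Newton minimization of the weighted misfit
    Q(x) = sum_i (y_i - f(x, d_i))^2 / var_i,
    with the illustrative forward model f(x, d) = exp(-x*d).
    data: list of (d_i, y_i, var_i). Returns the fitted parameter."""
    f = lambda x, d: math.exp(-x * d)
    df = lambda x, d: -d * math.exp(-x * d)   # sensitivity df/dx
    x = x0
    for _ in range(iters):
        # scalar versions of J^T C^-1 J and J^T C^-1 (f - y)
        jtj = sum(df(x, d) ** 2 / v for d, y, v in data)
        jtr = sum(df(x, d) * (f(x, d) - y) / v for d, y, v in data)
        x -= jtr / jtj                        # Newton-type update
    return x

# synthetic measurements generated with true parameter 0.7
data = [(d, math.exp(-0.7 * d), 1e-4) for d in range(1, 6)]
x_fit = gauss_newton(0.3, data)
```

Starting from x0 = 0.3, the iteration converges to the generating value 0.7; with noisy data the same machinery yields the optimal (minimum-misfit) estimate, and the inverse of J^T C^-1 J gives its posterior variance.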
ITER Council proceedings: 1993
International Nuclear Information System (INIS)
1994-01-01
Records of the third ITER Council Meeting (IC-3), held on 21-22 April 1993, in Tokyo, Japan, and the fourth ITER Council Meeting (IC-4) held on 29 September - 1 October 1993 in San Diego, USA, are presented, giving essential information on the evolution of the ITER Engineering Design Activities (EDA), such as the text of the draft of Protocol 2 further elaborated in ''ITER EDA Agreement and Protocol 2'' (ITER EDA Documentation Series No. 5), recommendations on future work programmes: a description of technology R and D tasks; the establishment of a trust fund for the ITER EDA activities; arrangements for Visiting Home Team Personnel; the general framework for the involvement of other countries in the ITER EDA; conditions for the involvement of Canada in the Euratom Contribution to the ITER EDA; and other attachments as parts of the Records of Decision of the aforementioned ITER Council Meetings
ITER council proceedings: 2000
International Nuclear Information System (INIS)
2001-01-01
No ITER Council Meetings were held during 2000. However, two ITER EDA Meetings were held, one in Tokyo, January 19-20, and one in Moscow, June 29-30. The parties participating in these meetings were those that partake in the extended ITER EDA, namely the EU, the Russian Federation, and Japan. This document contains, among other items, the records of these meetings, the lists of attendees, the agendas, the ITER EDA Status Reports issued during these meetings, the TAC (Technical Advisory Committee) Reports and Recommendations from both meetings, the MAC Reports and Advice (including those for the July 1999 Meeting), the ITER-FEAT Outline Design Report, the Site Requirements and Site Design Assumptions, the Tentative Sequence of Technical Activities 2000-2001, the Report of the ITER SWG-P2 on Joint Implementation of ITER, and the EU/ITER Canada Proposal for New ITER Identification.
Hossain, F.; Iqbal, N.; Lee, H.; Muhammad, A.
2015-12-01
When it comes to building durable capacity for implementing state-of-the-art technology and earth observation (EO) data for improved decision making, it has long been recognized that a unidirectional approach (from research to application) often does not work. Co-design of the capacity building effort has recently been recommended as a better alternative. This approach is a two-way street in which scientists and stakeholders engage intimately along the entire chain of actions, from the design of research experiments to the packaging of decision making tools, and each party provides an equal amount of input. Scientists execute research experiments based on boundary conditions and outputs that stakeholders define as tangible for decision making. Decision making tools, in turn, are packaged by stakeholders with scientists ensuring the application-specific science is relevant. In this talk, we will give an overview of one such iterative capacity building approach that we have implemented for gravimetry-based satellite (GRACE) EO data for improved groundwater management in Pakistan. We call our approach a hybrid approach in which the initial step is a forward model involving a conventional short-term (3 day) capacity building workshop in the stakeholder environment addressing a very large audience. In this forward model, the net is cast wide to shortlist a set of highly motivated stakeholder agency staff who are then engaged more directly in one-on-one training. In the next step (the backward model), these shortlisted staff are brought back into the research environment of the scientists (supply) for one-on-one, long-term (6 months) intense brainstorming, training, and design of decision making tools. The advantage of this backward model is that it allows scientists a much better understanding of the ground conditions and hurdles of making an EO-based scientific innovation work for a specific decision making problem, which is otherwise fundamentally impossible in conventional
International Nuclear Information System (INIS)
Hufnagel, Heike; Pennec, Xavier; Ayache, Nicholas; Ehrhardt, Jan; Handels, Heinz
2008-01-01
Identification of point correspondences between shapes is required for statistical analysis of organ shape differences. Since manual identification of landmarks is not a feasible option in 3D, several methods were developed to automatically find one-to-one correspondences on shape surfaces. For unstructured point sets, however, one-to-one correspondences do not exist, but correspondence probabilities can be determined. A method was developed to compute a statistical shape model based on shapes which are represented by unstructured point sets with arbitrary point numbers. A fundamental problem when computing statistical shape models is the determination of correspondences between the points of the shape observations of the training data set. In the absence of landmarks, exact correspondences can only be determined between continuous surfaces, not between unstructured point sets. To overcome this problem, we introduce correspondence probabilities instead of exact correspondences. The correspondence probabilities are found by aligning the observation shapes with the affine expectation maximization-iterative closest points (EM-ICP) registration algorithm. In a second step, the correspondence probabilities are used as input to compute a mean shape (represented once again by an unstructured point set). Both steps are unified in a single optimization criterion which depends on the two parameters 'registration transformation' and 'mean shape'. In a last step, a variability model which best represents the variability in the training data set is computed. Experiments on synthetic data sets and in vivo brain structure data sets (MRI) were designed to evaluate the performance of our algorithm. The new method was applied to brain MRI data sets, and the estimated point correspondences were compared to a statistical shape model built on exact correspondences. Based on established measures of 'generalization ability' and 'specificity', the estimates were very satisfactory.
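The core idea — replacing exact matches with correspondence probabilities, as in the E-step of EM-ICP — can be sketched as follows. The Gaussian affinity, the kernel width sigma, and the toy point sets are illustrative assumptions; the full algorithm also estimates an affine registration transformation jointly with the mean shape:

```python
import numpy as np

def correspondence_probabilities(P, Q, sigma=1.0):
    """Soft correspondences between point sets P (n x d) and Q (m x d).

    Entry (i, j) is the probability that P[i] corresponds to Q[j],
    modelled as a Gaussian in the inter-point distance (EM-ICP E-step).
    """
    d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(axis=2)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    return W / W.sum(axis=1, keepdims=True)    # each row sums to 1

def soft_targets(Q, probs):
    """Probability-weighted target positions in Q for each point of P."""
    return probs @ Q

# Two unstructured point sets with different point counts
P = np.array([[0.0, 0.0], [1.0, 0.0]])
Q = np.array([[0.1, 0.0], [0.9, 0.1], [5.0, 5.0]])
probs = correspondence_probabilities(P, Q, sigma=0.5)
targets = soft_targets(Q, probs)
```

Because the rows are probability distributions rather than hard assignments, the two sets need not have equal point counts, which is exactly the situation the abstract describes.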
Web tools for predictive toxicology model building.
Jeliazkova, Nina
2012-07-01
The development and use of web tools in chemistry has accumulated more than 15 years of history. Powered by advances in Internet technologies, the current generation of web systems is starting to expand into areas traditionally reserved for desktop applications. The web platforms integrate data storage, cheminformatics and data analysis tools. The ease of use and the collaborative potential of the web are compelling, despite the challenges. The topic of this review is a set of recently published web tools that facilitate predictive toxicology model building. The focus is on software platforms offering web access to chemical structure-based methods, although some of the frameworks could also provide bioinformatics or hybrid data analysis functionalities. A number of historical and current developments are cited. In order to provide a comparable assessment, the following characteristics are considered: support for workflows, descriptor calculations, visualization, modeling algorithms, data management and data sharing capabilities, availability of GUI or programmatic access, and implementation details. The success of the Web is largely due to its highly decentralized, yet sufficiently interoperable model for information access. The expected future convergence between cheminformatics and bioinformatics databases presents new challenges for the management and analysis of large data sets. The web tools in predictive toxicology will likely continue to evolve toward the right mix of flexibility, performance, scalability, interoperability, sets of unique features offered, friendly user interfaces, programmatic access for advanced users, platform independence, results reproducibility, curation and crowdsourcing utilities, collaborative sharing and secure access.
IHadoop: Asynchronous iterations for MapReduce
Elnikety, Eslam Mohamed Ibrahim
2011-11-01
MapReduce is a distributed programming framework designed to ease the development of scalable data-intensive applications for large clusters of commodity machines. Most machine learning and data mining applications involve iterative computations over large datasets, such as the Web hyperlink structures and social network graphs. Yet, the MapReduce model does not efficiently support this important class of applications. The architecture of MapReduce, most critically its dataflow techniques and task scheduling, is completely unaware of the nature of iterative applications; tasks are scheduled according to a policy that optimizes the execution for a single iteration, which wastes bandwidth, I/O, and CPU cycles when compared with an optimal execution for a consecutive set of iterations. This work presents iHadoop, a modified MapReduce model, and an associated implementation, optimized for iterative computations. The iHadoop model schedules iterations asynchronously. It connects the output of one iteration to the next, allowing both to process their data concurrently. iHadoop's task scheduler exploits inter-iteration data locality by scheduling tasks that exhibit a producer/consumer relation on the same physical machine, allowing a fast local data transfer. For those iterative applications that require satisfying certain criteria before termination, iHadoop runs the check concurrently during the execution of the subsequent iteration to further reduce the application's latency. This paper also describes our implementation of the iHadoop model, and evaluates its performance against Hadoop, the widely used open source implementation of MapReduce. Experiments using different data analysis applications over real-world and synthetic datasets show that iHadoop performs better than Hadoop for iterative algorithms, reducing execution time of iterative applications by 25% on average. Furthermore, integrating iHadoop with HaLoop, a variant Hadoop implementation that caches
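The asynchronous-iteration idea can be illustrated with a toy threads-and-queues analogue: the output of iteration i streams directly into iteration i+1, so consecutive iterations process records concurrently instead of waiting at a per-iteration barrier. This sketches the scheduling principle only, not the Hadoop implementation:

```python
import threading
from queue import Queue

SENTINEL = None

def run_iteration(step, source, sink):
    """Apply one iteration's transform to records streaming in from
    `source`, emitting each result to `sink` as soon as it is ready."""
    while True:
        record = source.get()
        if record is SENTINEL:
            sink.put(SENTINEL)     # propagate end-of-stream downstream
            return
        sink.put(step(record))

def asynchronous_iterations(data, step, n_iterations):
    """Chain n_iterations of `step`; consecutive iterations run
    concurrently, connected producer-to-consumer by queues."""
    queues = [Queue() for _ in range(n_iterations + 1)]
    workers = [threading.Thread(target=run_iteration,
                                args=(step, queues[i], queues[i + 1]))
               for i in range(n_iterations)]
    for w in workers:
        w.start()
    for record in data:            # feed the first iteration
        queues[0].put(record)
    queues[0].put(SENTINEL)
    out = []
    while True:                    # drain the last iteration's output
        record = queues[-1].get()
        if record is SENTINEL:
            break
        out.append(record)
    for w in workers:
        w.join()
    return out

# Two chained "iterations" of square root over a small dataset
result = asynchronous_iterations([1.0, 4.0, 9.0], lambda x: x ** 0.5, 2)
```

FIFO queues preserve record order, and because each stage forwards results immediately, iteration i+1 starts consuming while iteration i is still producing — the overlap iHadoop exploits at cluster scale.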
Predictions of models for environmental radiological assessment
International Nuclear Information System (INIS)
Peres, Sueli da Silva; Lauria, Dejanira da Costa; Mahler, Claudio Fernando
2011-01-01
In the field of environmental impact assessment, models are used for estimating the source term, environmental dispersion and transfer of radionuclides, exposure pathways, radiation dose and the risk to human beings. Although it is recognized that site-specific local data are important to improve the quality of dose assessment results, in practice obtaining them can be very difficult and expensive. Sources of uncertainty are numerous, among which we can cite: the subjectivity of modelers, exposure scenarios and pathways, the codes used, and general parameters. The various models available utilize different mathematical approaches with different complexities that can result in different predictions. Thus, for the same inputs different models can produce very different outputs. This paper briefly presents the main advances in the field of environmental radiological assessment that aim to improve the reliability of the models used in the assessment of environmental radiological impact. The model intercomparison exercise supplied incompatible results for 137Cs and 60Co, reinforcing the need to develop reference methodologies for environmental radiological assessment that allow dose estimates to be confronted on a common comparison basis. The results of the intercomparison exercise are presented briefly. (author)
Detection of Adverse Reaction to Drugs in Elderly Patients through Predictive Modeling
Directory of Open Access Journals (Sweden)
Rafael San-Miguel Carrasco
2016-03-01
Geriatric medicine constitutes a clinical research field in which data analytics, particularly predictive modeling, can deliver compelling, reliable and long-lasting benefits, as well as non-intuitive clinical insights and net new knowledge. The research work described in this paper leverages predictive modeling to uncover new insights related to adverse reactions to drugs in elderly patients. The differentiating factor that sets this research apart from traditional clinical research is that it was not designed around a particular hypothesis to be validated. Instead, it was data-centric, with data being mined to discover relationships or correlations among variables. Regression techniques were systematically applied to the data through multiple iterations and under different configurations. The results obtained after the process was completed are explained and discussed below.
A Predictive Maintenance Model for Railway Tracks
DEFF Research Database (Denmark)
Li, Rui; Wen, Min; Salling, Kim Bang
2015-01-01
This paper presents a mathematical model based on Mixed Integer Programming (MIP) which is designed to optimize the predictive railway tamping activities for ballasted track for a time horizon of up to four years. The objective function is set up to minimize the actual costs for the tamping machine (measured by time…). Five technical and economic aspects are taken into account to schedule tamping: (1) track degradation of the standard deviation of the longitudinal level over time; (2) track geometrical alignment; (3) track quality thresholds based on the train speed limits; (4) the dependency of the track quality…
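A toy version of the scheduling problem can illustrate the trade-off such a MIP captures: quality degrades each period, tamping restores it (bounded by the as-built quality), and the schedule must keep quality within the speed-limit threshold at minimum machine cost. The brute-force instance below is a hypothetical stand-in, not the paper's MIP formulation, and every number in it is an assumption:

```python
from itertools import product

# Toy instance. Quality is the standard deviation of the longitudinal
# level (mm); it grows over time and tamping reduces it.
periods = range(4)
segments = {"S1": {"q0": 1.0, "rate": 0.6},   # as-built quality, degradation/period
            "S2": {"q0": 1.5, "rate": 0.4}}
THRESHOLD = 2.5     # quality limit implied by the line's speed limit
RECOVERY = 1.2      # improvement per tamping intervention

def cost(schedule):
    """Machine visits needed, or None if quality ever exceeds the limit.

    schedule maps each segment name to the set of periods it is tamped.
    """
    visits = 0
    for name, seg in segments.items():
        q = seg["q0"]
        for t in periods:
            q += seg["rate"]                      # degradation this period
            if t in schedule[name]:
                visits += 1
                q = max(seg["q0"], q - RECOVERY)  # cannot beat as-built quality
            if q > THRESHOLD:
                return None                       # infeasible schedule
    return visits

def subsets(items):
    items = list(items)
    return [{items[i] for i in range(len(items)) if mask >> i & 1}
            for mask in range(1 << len(items))]

plans = [dict(zip(segments, combo))
         for combo in product(subsets(periods), repeat=len(segments))]
feasible = [(cost(p), p) for p in plans if cost(p) is not None]
best_cost, best_plan = min(feasible, key=lambda cp: cp[0])
```

Brute force is only viable for a handful of segments and periods; the point of the MIP formulation is precisely to solve realistic network sizes over a four-year horizon.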
Knoops, Paul G M; Borghi, Alessandro; Ruggiero, Federica; Badiali, Giovanni; Bianchi, Alberto; Marchetti, Claudio; Rodriguez-Florez, Naiara; Breakey, Richard W F; Jeelani, Owase; Dunaway, David J; Schievano, Silvia
2018-01-01
Repositioning of the maxilla in orthognathic surgery is carried out for functional and aesthetic purposes. Pre-surgical planning tools can predict 3D facial appearance by computing the response of the soft tissue to the changes to the underlying skeleton. The clinical use of commercial prediction software remains controversial, likely due to the deterministic nature of these computational predictions. A novel probabilistic finite element model (FEM) for the prediction of postoperative facial soft tissues is proposed in this paper. A probabilistic FEM was developed and validated on a cohort of eight patients who underwent maxillary repositioning and had pre- and postoperative cone beam computed tomography (CBCT) scans taken. Firstly, a correlation analysis of the variables assessed the various modelling parameters. Secondly, a design of experiments (DOE) provided a range of potential outcomes based on uniformly distributed input parameters, followed by an optimisation. Lastly, the second DOE iteration provided optimised predictions with a probability range. A range of 3D predictions was obtained using the probabilistic FEM and validated using reconstructed soft tissue surfaces from the postoperative CBCT data. The predictions in the nose and upper lip areas accurately include the true postoperative position, whereas the prediction underestimates the position of the cheeks and lower lip. A probabilistic FEM has been developed and validated for the prediction of the facial appearance following orthognathic surgery. This method shows how inaccuracies in the modelling and uncertainties in executing surgical planning influence the soft tissue prediction, and it provides a range of predictions including a minimum and maximum, which may be helpful for patients in understanding the impact of surgery on the face.
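The DOE step can be sketched as a Monte Carlo sweep: uniformly distributed input parameters are pushed through the prediction model, and the spread of outcomes gives the probability range rather than a single deterministic answer. The closed-form surrogate below is a hypothetical stand-in for the finite element model, and all parameter names and ranges are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def soft_tissue_response(advancement_mm, stiffness, ratio):
    """Stand-in for the FEM: soft-tissue displacement as a simple
    function of maxillary advancement and material parameters.
    (Hypothetical surrogate, NOT the paper's finite element model.)"""
    return advancement_mm * ratio / stiffness

# Uniformly distributed inputs, as in a space-filling design of experiments
n = 1000
advancement = 5.0                        # planned skeletal move (mm)
stiffness = rng.uniform(0.8, 1.2, n)     # normalized tissue stiffness
ratio = rng.uniform(0.6, 0.9, n)         # bone-to-soft-tissue movement ratio

predictions = soft_tissue_response(advancement, stiffness, ratio)
low, high = np.percentile(predictions, [2.5, 97.5])   # probability range
```

Reporting the interval (low, high) instead of a point estimate is what turns the deterministic planning tool into the probabilistic one the abstract argues for.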
Predictive Capability Maturity Model for computational modeling and simulation.
Energy Technology Data Exchange (ETDEWEB)
Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.
2007-10-01
The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.
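A minimal sketch of how such an assessment might be recorded: score each of the six PCMM elements on one of the four maturity levels (0-3) and summarize. The element names come from the report; the example scores and the aggregation rule (reporting the weakest element alongside the mean) are illustrative assumptions, since the PCMM itself presents per-element levels rather than a single aggregate score:

```python
# The six contributing elements named in the PCMM report
PCMM_ELEMENTS = [
    "representation and geometric fidelity",
    "physics and material model fidelity",
    "code verification",
    "solution verification",
    "model validation",
    "uncertainty quantification and sensitivity analysis",
]

def summarize(scores):
    """scores: element -> maturity level in {0, 1, 2, 3}."""
    assert set(scores) == set(PCMM_ELEMENTS)
    assert all(s in (0, 1, 2, 3) for s in scores.values())
    # Hypothetical aggregation: the weakest element bounds credibility.
    return {"min": min(scores.values()),
            "mean": sum(scores.values()) / len(scores)}

# Hypothetical assessment of an M&S effort
example = {e: 2 for e in PCMM_ELEMENTS}
example["model validation"] = 1       # validation lags the other elements
summary = summarize(example)
```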
Effective modelling for predictive analytics in data science ...
African Journals Online (AJOL)
Effective modelling for predictive analytics in data science. ... the near-absence of empirical or factual predictive analytics in the mainstream research going on ... Keywords: Predictive Analytics, Big Data, Business Intelligence, Project Planning.
International Nuclear Information System (INIS)
Aymar, R.
2000-01-01
Six years of joint work under the International Thermonuclear Experimental Reactor (ITER) EDA agreement yielded a mature design for ITER which met the objectives set for it (the ITER final design report, FDR), together with a corpus of scientific and technological data, large/full-scale models or prototypes of key components/systems, and progress in understanding, all of which both validated the specific design and are generally applicable to a next-step, reactor-oriented tokamak on the road to the development of fusion as an energy source. In response to requests from the parties to explore the scope for addressing ITER's programmatic objective at reduced cost, the study of options for cost reduction has been the main feature of ITER work since summer 1998, using the advances in physics and technology databases, understandings, and tools arising out of the ITER collaboration to date. A joint concept improvement task force drawn from the joint central team and home teams has overseen and co-ordinated studies of the key issues in physics and technology which control the possibility of reducing the overall investment while simultaneously achieving the required objectives. The aim of this task force is to achieve common understandings of these issues and their consequences, so as to inform and influence the best cost-benefit choice that will attract consensus between the ITER partners. A report to be submitted to the parties by the end of 1999 will present key elements of a specific design of minimum capital investment, with a target cost saving of about 50% of the cost of the ITER FDR design, and a restricted number of design variants. Outline conclusions from the work of the task force are presented in terms of physics, operations, and design of the main tokamak systems. Possible implications for the way forward are discussed.
Kim, Hyungjin; Park, Chang Min; Kim, Seong Ho; Lee, Sang Min; Park, Sang Joon; Lee, Kyung Hee; Goo, Jin Mo
2014-11-01
To compare the pulmonary subsolid nodule (SSN) classification agreement and measurement variability between filtered back projection (FBP) and model-based iterative reconstruction (MBIR). Low-dose CTs were reconstructed using FBP and MBIR for 47 patients with 47 SSNs. Two readers independently classified SSNs into pure or part-solid ground-glass nodules, and measured the size of the whole nodule and the solid portion twice on both reconstruction algorithms. Nodule classification agreement was analyzed using Cohen's kappa and compared between reconstruction algorithms using McNemar's test. Measurement variability was investigated using Bland-Altman analysis and compared with the paired t-test. Cohen's kappa for inter-reader SSN classification agreement was 0.541-0.662 on FBP and 0.778-0.866 on MBIR. Between the two readers, nodule classification was consistent in 79.8% (75/94) with FBP and 91.5% (86/94) with MBIR (p = 0.027). The inter-reader measurement variability range was -5.0 to 2.1 mm on FBP and -3.3 to 1.8 mm on MBIR for whole nodule size, and -6.5 to 0.9 mm on FBP and -5.5 to 1.5 mm on MBIR for solid portion size. Inter-reader measurement differences were significantly smaller on MBIR (p = 0.027, whole nodule; p = 0.011, solid portion). MBIR significantly improved SSN classification agreement and reduced measurement variability of both whole nodules and solid portions between readers. • Low-dose CT using the MBIR algorithm improves reproducibility in the classification of SSNs. • MBIR would enable more confident clinical planning according to the SSN type. • Reduced measurement variability on MBIR allows earlier detection of potentially malignant nodules.
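Cohen's kappa, used above for the inter-reader agreement, corrects the observed agreement for the agreement expected by chance alone. A self-contained sketch with hypothetical reader classifications (the ten labels below are made up for illustration):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two readers' categorical classifications."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    # Chance agreement from each reader's marginal category frequencies
    expected = sum(freq_a[c] * freq_b[c]
                   for c in set(freq_a) | set(freq_b)) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical readings: "pure" vs "part"-solid ground-glass, 10 SSNs
reader1 = ["pure", "pure", "part", "part", "pure",
           "part", "pure", "pure", "part", "pure"]
reader2 = ["pure", "pure", "part", "pure", "pure",
           "part", "pure", "pure", "part", "pure"]
kappa = cohens_kappa(reader1, reader2)
```

Here the readers agree on 9 of 10 nodules (observed agreement 0.9), but kappa is lower (about 0.78) because some of that agreement is expected by chance.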
A two-dimensional iterative panel method and boundary layer model for bio-inspired multi-body wings
Blower, Christopher J.; Dhruv, Akash; Wickenheiser, Adam M.
2014-03-01
The increased use of Unmanned Aerial Vehicles (UAVs) has created a continuous demand for improved flight capabilities and range of use. During the last decade, engineers have turned to bio-inspiration for new and innovative flow control methods for gust alleviation, maneuverability, and stability improvement using morphing aircraft wings. The bio-inspired wing design considered in this study mimics the flow manipulation techniques performed by birds to extend the operating envelope of UAVs through the installation of an array of feather-like panels across the airfoil's upper and lower surfaces while replacing the trailing edge flap. Each flap has the ability to deflect into both the airfoil and the inbound airflow using hinge points with a single degree-of-freedom, situated at 20%, 40%, 60% and 80% of the chord. The installation of the surface flaps offers configurations that enable advantageous maneuvers while alleviating gust disturbances. Due to the number of possible permutations available for the flap configurations, an iterative constant-strength doublet/source panel method has been developed with an integrated boundary layer model to calculate the pressure distribution and viscous drag over the wing's surface. As a result, the lift, drag and moment coefficients for each airfoil configuration can be calculated. The flight coefficients of this numerical method are validated using experimental data from a low speed suction wind tunnel operating at a Reynolds Number 300,000. This method enables the aerodynamic assessment of a morphing wing profile to be performed accurately and efficiently in comparison to Computational Fluid Dynamics methods and experiments as discussed herein.
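The combinatorial pressure that motivates an efficient iterative panel method is easy to see: with flaps hinged at 20%, 40%, 60% and 80% of the chord on both the upper and lower surfaces, the configuration space grows exponentially. The three commanded states per flap below are an illustrative assumption about the discretization, not a figure from the paper:

```python
from itertools import product

# Hypothetical discretization of the bio-inspired wing's flap array
hinge_points = [0.2, 0.4, 0.6, 0.8]            # fraction of chord
surfaces = ["upper", "lower"]
states = ["retracted", "into_airfoil", "into_flow"]

flaps = list(product(surfaces, hinge_points))  # 8 independent flaps
configurations = list(product(states, repeat=len(flaps)))
n_configs = len(configurations)                # 3 ** 8 configurations
```

Even this coarse discretization yields thousands of candidate configurations, each needing a lift/drag/moment evaluation — hence the appeal of a fast panel method over per-configuration CFD runs.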
Benz, Dominik C; Fuchs, Tobias A; Gräni, Christoph; Studer Bruengger, Annina A; Clerc, Olivier F; Mikulicic, Fran; Messerli, Michael; Stehli, Julia; Possner, Mathias; Pazhenkottil, Aju P; Gaemperli, Oliver; Kaufmann, Philipp A; Buechel, Ronny R
2018-02-01
Iterative reconstruction (IR) algorithms allow for a significant reduction in the radiation dose of coronary computed tomography angiography (CCTA). We performed a head-to-head comparison of adaptive statistical IR (ASiR) and model-based IR (MBIR) algorithms to assess their impact on quantitative image parameters and diagnostic accuracy for submillisievert CCTA. CCTA datasets of 91 patients were reconstructed using filtered back projection (FBP), increasing contributions of ASiR (20, 40, 60, 80, and 100%), and MBIR. Signal and noise were measured in the aortic root to calculate the signal-to-noise ratio (SNR). In a subgroup of 36 patients, the diagnostic accuracy of ASiR 40%, ASiR 100%, and MBIR for diagnosis of coronary artery disease (CAD) was compared with invasive coronary angiography. Median radiation dose was 0.21 mSv for CCTA. While increasing levels of ASiR gradually reduced image noise compared with FBP (up to -48%), MBIR provided the greatest noise reduction (-59% compared with ASiR 100%). ASiR 40% and ASiR 100% resulted in substantially lower diagnostic accuracy to detect CAD as diagnosed by invasive coronary angiography compared with MBIR: sensitivity and specificity were 100 and 37%, 100 and 57%, and 100 and 74% for ASiR 40%, ASiR 100%, and MBIR, respectively. MBIR offers substantial noise reduction with increased SNR, paving the way for implementation of submillisievert CCTA protocols in clinical routine. In contrast, the inferior noise reduction of ASiR negatively affects the diagnostic accuracy of submillisievert CCTA for CAD detection. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2017. For permissions, please email: journals.permissions@oup.com.
International Nuclear Information System (INIS)
Precht, H.; Kitslaar, P.H.; Broersen, A.; Gerke, O.; Dijkstra, J.; Thygesen, J.; Egstrup, K.; Lambrechtsen, J.
2017-01-01
Purpose: Investigate the influence of adaptive statistical iterative reconstruction (ASIR) and the model-based IR (Veo) reconstruction algorithm in coronary computed tomography angiography (CCTA) images on quantitative measurements in coronary arteries for plaque volumes and intensities. Methods: Three patients each had three independent dose-reduced CCTA scans performed, reconstructed with 30% ASIR (CTDIvol 6.7 mGy), 60% ASIR (CTDIvol 4.3 mGy) and Veo (CTDIvol 1.9 mGy). Coronary plaque analysis was performed for each CCTA: measured volumes, plaque burden and intensities. Results: Plaque volume and plaque burden show a decreasing tendency from ASIR to Veo, with median volumes of 314 mm³ and 337 mm³ for ASIR against 252 mm³ for Veo, and plaque burdens of 42% and 44% for ASIR against 39% for Veo. The lumen and vessel volumes decrease slightly from 30% ASIR to 60% ASIR, from 498 mm³ to 391 mm³ for lumen volume and from 939 mm³ to 830 mm³ for vessel volume. The intensities did not change overall between the different reconstructions for either lumen or plaque. Conclusion: We found a tendency of decreasing plaque volumes and plaque burden, but no change in intensities, with the use of low-dose Veo CCTA (1.9 mGy) compared to dose-reduced ASIR CCTA (6.7 mGy and 4.3 mGy), although more studies are warranted. - Highlights: • Veo decreases plaque volumes and plaque burden using low-dose CCTA. • Moving from ASIR 30% and ASIR 60% to Veo did not appear to influence the plaque intensities. • Studies with larger sample sizes are needed to investigate the effect on plaque.
Combining GPS measurements and IRI model predictions
International Nuclear Information System (INIS)
Hernandez-Pajares, M.; Juan, J.M.; Sanz, J.; Bilitza, D.
2002-01-01
The free electrons distributed in the ionosphere (between one hundred and thousands of km in height) produce a frequency-dependent effect on Global Positioning System (GPS) signals: a delay in the pseudo-range and an advance in the carrier phase. These effects are proportional to the columnar electron density between the satellite and receiver, i.e. the integrated electron density along the ray path. Global ionospheric TEC (total electron content) maps can be obtained with GPS data from a network of ground IGS (International GPS Service) reference stations with an accuracy of a few TEC units. The comparison with the TOPEX TEC, mainly measured over the oceans far from the IGS stations, shows a mean bias and standard deviation of about 2 and 5 TECUs, respectively. The discrepancies between the STEC predictions and the observed values show an RMS typically below 5 TECUs (which also includes the alignment code noise). The existence of a growing database of 2-hourly global TEC maps with a resolution of 5x2.5 degrees in longitude and latitude can be used to improve the IRI prediction capability of the TEC. When the IRI predictions and the GPS estimations are compared for a three-month period around the solar maximum, they are in good agreement for middle latitudes. An over-determination of IRI TEC has been found at the extreme latitudes, the IRI predictions being typically two times higher than the GPS estimations. Finally, local fits of the IRI model can be done by tuning the SSN from STEC GPS observations.
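The proportionality between the GPS effects and TEC follows from the standard first-order ionospheric term: the group delay (and the equal-magnitude phase advance) expressed as extra path in metres is 40.3·TEC/f², with TEC in electrons/m² and f in Hz. A quick sketch:

```python
# First-order ionospheric effect on GPS signals: the pseudo-range delay
# and the carrier-phase advance are proportional to slant TEC and 1/f^2.
TECU = 1.0e16            # electrons / m^2 per TEC unit
F_L1 = 1575.42e6         # GPS L1 carrier frequency (Hz)
F_L2 = 1227.60e6         # GPS L2 carrier frequency (Hz)

def group_delay_m(tec_tecu, freq_hz):
    """Extra pseudo-range path (metres) for a given slant TEC."""
    return 40.3 * tec_tecu * TECU / freq_hz ** 2

# 5 TECU is roughly the RMS of the STEC discrepancies quoted above
d1 = group_delay_m(5.0, F_L1)    # about 0.8 m of pseudo-range delay at L1
d2 = group_delay_m(5.0, F_L2)    # larger at the lower L2 frequency
```

The frequency dependence is what makes dual-frequency combinations of L1 and L2 observables yield the TEC itself, which is how the global maps described above are built.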
The ITER project technological challenges
CERN. Geneva; Lister, Joseph; Marquina, Miguel A; Todesco, Ezio
2005-01-01
The first lecture reminds us of the ITER challenges, presents hard engineering problems, typically due to mechanical forces and thermal loads and identifies where the physics uncertainties play a significant role in the engineering requirements. The second lecture presents soft engineering problems of measuring the plasma parameters, feedback control of the plasma and handling the physics data flow and slow controls data flow from a large experiment like ITER. The last three lectures focus on superconductors for fusion. The third lecture reviews the design criteria and manufacturing methods for 6 milestone-conductors of large fusion devices (T-7, T-15, Tore Supra, LHD, W-7X, ITER). The evolution of the designer approach and the available technologies are critically discussed. The fourth lecture is devoted to the issue of performance prediction, from a superconducting wire to a large size conductor. The role of scaling laws, self-field, current distribution, voltage-current characteristic and transposition are...
Effect on Prediction when Modeling Covariates in Bayesian Nonparametric Models.
Cruz-Marcelo, Alejandro; Rosner, Gary L; Müller, Peter; Stewart, Clinton F
2013-04-01
In biomedical research, it is often of interest to characterize the biologic processes giving rise to observations and to make predictions of future observations. Bayesian nonparametric methods provide a means for carrying out Bayesian inference while making as few restrictive parametric assumptions as possible. There are several proposals in the literature for extending Bayesian nonparametric models to include dependence on covariates. Limited attention, however, has been directed to how that extension is made. In this article, we examine the effect on fitting and predictive performance of incorporating covariates in a class of Bayesian nonparametric models in one of two primary ways: either in the weights or in the locations of a discrete random probability measure. We show that different strategies for incorporating continuous covariates in Bayesian nonparametric models can result in big differences when used for prediction, even though they lead to otherwise similar posterior inferences. When one needs the predictive density, as in optimal design, and this density is a mixture, it is better to make the weights depend on the covariates. We demonstrate these points via a simulated data example and in an application in which one wants to determine the optimal dose of an anticancer drug used in pediatric oncology.
Mathematical models for indoor radon prediction
International Nuclear Information System (INIS)
Malanca, A.; Pessina, V.; Dallara, G.
1995-01-01
It is known that the indoor radon (Rn) concentration can be predicted by means of mathematical models. The simplest model relies on two variables only: the Rn source strength and the air exchange rate. In the Lawrence Berkeley Laboratory (LBL) model several environmental parameters are combined into a complex equation; besides, a correlation between the ventilation rate and the Rn entry rate from the soil is admitted. The measurements were carried out using activated carbon canisters. Seventy-five measurements of Rn concentrations were made inside two rooms placed on the second floor of a building block. One of the rooms had a single-glazed window whereas the other room had a double-pane window. During three different experimental protocols, the mean Rn concentration was always higher in the room with the double-glazed window. That behavior can be accounted for by the simplest model. A further set of 450 Rn measurements was collected inside a ground-floor room with a grounding well in it. This trend may be accounted for by the LBL model.
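The two-variable model mentioned first reduces to a steady-state balance between Rn entry and ventilation; a minimal sketch (the function name and the example numbers are illustrative, not values from the study):

```python
def radon_concentration_bq_m3(entry_rate_bq_h: float,
                              volume_m3: float,
                              air_changes_h: float) -> float:
    """Steady-state indoor Rn concentration from the simplest model:
    source strength divided by the ventilated volume per hour.
    (Radioactive decay of Rn-222, ~0.0076 1/h, is neglected as it is
    small next to typical air-exchange rates.)"""
    return entry_rate_bq_h / (volume_m3 * air_changes_h)

# A tighter room (lower air-exchange rate) gives a higher concentration,
# consistent with the double-glazed room reading higher:
print(radon_concentration_bq_m3(2000.0, 50.0, 0.3))  # poorly ventilated
print(radon_concentration_bq_m3(2000.0, 50.0, 1.0))  # well ventilated
```

The LBL model replaces the fixed entry rate with one coupled to the ventilation rate, which is what is needed to explain the second data set.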
Towards predictive models for transitionally rough surfaces
Abderrahaman-Elena, Nabil; Garcia-Mayoral, Ricardo
2017-11-01
We analyze and model the previously presented decomposition for flow variables in DNS of turbulence over transitionally rough surfaces. The flow is decomposed into two contributions: one produced by the overlying turbulence, which has no footprint of the surface texture, and one induced by the roughness, which is essentially the time-averaged flow around the surface obstacles, but modulated in amplitude by the first component. The roughness-induced component closely resembles the laminar steady flow around the roughness elements at the same non-dimensional roughness size. For small - yet transitionally rough - textures, the roughness-free component is essentially the same as over a smooth wall. Based on these findings, we propose predictive models for the onset of the transitionally rough regime. Project supported by the Engineering and Physical Sciences Research Council (EPSRC).
Zhao, Dong; Sakoda, Hideyuki; Sawyer, W Gregory; Banks, Scott A; Fregly, Benjamin J
2008-02-01
Wear of ultrahigh molecular weight polyethylene remains a primary factor limiting the longevity of total knee replacements (TKRs). However, wear testing on a simulator machine is time-consuming and expensive, making it impractical for iterative design purposes. The objectives of this paper were first, to evaluate whether a computational model using a wear factor consistent with the TKR material pair can predict accurate TKR damage measured in a simulator machine, and second, to investigate how choice of surface evolution method (fixed or variable step) and material model (linear or nonlinear) affect the prediction. An iterative computational damage model was constructed for a commercial knee implant in an AMTI simulator machine. The damage model combined a dynamic contact model with a surface evolution model to predict how wear plus creep progressively alter tibial insert geometry over multiple simulations. The computational framework was validated by predicting wear in a cylinder-on-plate system for which an analytical solution was derived. The implant damage model was evaluated for 5 million cycles of simulated gait using damage measurements made on the same implant in an AMTI machine. Using a pin-on-plate wear factor for the same material pair as the implant, the model predicted tibial insert wear volume to within 2% error and damage depths and areas to within 18% and 10% error, respectively. Choice of material model had little influence, while inclusion of surface evolution affected damage depth and area but not wear volume predictions. Surface evolution method was important only during the initial cycles, where variable step was needed to capture rapid geometry changes due to the creep. Overall, our results indicate that accurate TKR damage predictions can be made with a computational model using a constant wear factor obtained from pin-on-plate tests for the same material pair, and furthermore, that surface evolution method matters only during the initial cycles.
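The constant-wear-factor approach validated above is, at its core, an Archard-type update applied over the contact surface; a minimal sketch of the per-element depth update (the numbers are generic illustrations, not the paper's values):

```python
def linear_wear_mm(wear_factor: float, pressure_mpa: float,
                   slide_mm_per_cycle: float, cycles: float) -> float:
    """Archard-type linear wear depth h = k * p * s * N.
    wear_factor: k in mm^3 / (N * mm), e.g. from pin-on-plate tests
    pressure_mpa: local contact pressure (N/mm^2) from the contact model
    slide_mm_per_cycle: sliding distance per gait cycle (mm)
    cycles: number of loading cycles simulated
    """
    return wear_factor * pressure_mpa * slide_mm_per_cycle * cycles

# 5 million gait cycles at 10 MPa with 20 mm of sliding per cycle and a
# UHMWPE-like wear factor of 1e-10 mm^3/(N*mm) gives ~0.1 mm of depth:
print(linear_wear_mm(1e-10, 10.0, 20.0, 5e6))
```

In the full damage model this depth (plus creep) is subtracted from the tibial insert surface between contact solves, which is the "surface evolution" step taken either after a fixed number of cycles or with an adaptive (variable) step.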
Resource-estimation models and predicted discovery
International Nuclear Information System (INIS)
Hill, G.W.
1982-01-01
Resources have been estimated by predictive extrapolation from past discovery experience, by analogy with better explored regions, or by inference from evidence of depletion of targets for exploration. Changes in technology and new insights into geological mechanisms have occurred sufficiently often in the long run to form part of the pattern of mature discovery experience. The criterion, that a meaningful resource estimate needs an objective measure of its precision or degree of uncertainty, excludes 'estimates' based solely on expert opinion. This is illustrated by development of error measures for several persuasive models of discovery and production of oil and gas in USA, both annually and in terms of increasing exploration effort. Appropriate generalizations of the models resolve many points of controversy. This is illustrated using two USA data sets describing discovery of oil and of U3O8; the latter set highlights an inadequacy of available official data. Review of the oil-discovery data set provides a warrant for adjusting the time-series prediction to a higher resource figure for USA petroleum. (author)
Iterative near-term ecological forecasting: Needs, opportunities, and challenges.
Dietze, Michael C; Fox, Andrew; Beck-Johnson, Lindsay M; Betancourt, Julio L; Hooten, Mevin B; Jarnevich, Catherine S; Keitt, Timothy H; Kenney, Melissa A; Laney, Christine M; Larsen, Laurel G; Loescher, Henry W; Lunch, Claire K; Pijanowski, Bryan C; Randerson, James T; Read, Emily K; Tredennick, Andrew T; Vargas, Rodrigo; Weathers, Kathleen C; White, Ethan P
2018-02-13
Two foundational questions about sustainability are "How are ecosystems and the services they provide going to change in the future?" and "How do human decisions affect these trajectories?" Answering these questions requires an ability to forecast ecological processes. Unfortunately, most ecological forecasts focus on centennial-scale climate responses, therefore neither meeting the needs of near-term (daily to decadal) environmental decision-making nor allowing comparison of specific, quantitative predictions to new observational data, one of the strongest tests of scientific theory. Near-term forecasts provide the opportunity to iteratively cycle between performing analyses and updating predictions in light of new evidence. This iterative process of gaining feedback, building experience, and correcting models and methods is critical for improving forecasts. Iterative, near-term forecasting will accelerate ecological research, make it more relevant to society, and inform sustainable decision-making under high uncertainty and adaptive management. Here, we identify the immediate scientific and societal needs, opportunities, and challenges for iterative near-term ecological forecasting. Over the past decade, data volume, variety, and accessibility have greatly increased, but challenges remain in interoperability, latency, and uncertainty quantification. Similarly, ecologists have made considerable advances in applying computational, informatic, and statistical methods, but opportunities exist for improving forecast-specific theory, methods, and cyberinfrastructure. Effective forecasting will also require changes in scientific training, culture, and institutions. The need to start forecasting is now; the time for making ecology more predictive is here, and learning by doing is the fastest route to drive the science forward.
Kassam-Adams, Nancy; Marsac, Meghan L; Kohser, Kristen L; Kenardy, Justin A; March, Sonja; Winston, Flaura K
2015-04-15
The advent of eHealth interventions to address psychological concerns and health behaviors has created new opportunities, including the ability to optimize the effectiveness of intervention activities and then deliver these activities consistently to a large number of individuals in need. Given that eHealth interventions grounded in a well-delineated theoretical model for change are more likely to be effective and that eHealth interventions can be costly to develop, assuring the match of final intervention content and activities to the underlying model is a key step. We propose to apply the concept of "content validity" as a crucial checkpoint to evaluate the extent to which proposed intervention activities in an eHealth intervention program are valid (eg, relevant and likely to be effective) for the specific mechanism of change that each is intended to target and the intended target population for the intervention. The aims of this paper are to define content validity as it applies to model-based eHealth intervention development, to present a feasible method for assessing content validity in this context, and to describe the implementation of this new method during the development of a Web-based intervention for children. We designed a practical 5-step method for assessing content validity in eHealth interventions that includes defining key intervention targets, delineating intervention activity-target pairings, identifying experts and using a survey tool to gather expert ratings of the relevance of each activity to its intended target, its likely effectiveness in achieving the intended target, and its appropriateness with a specific intended audience, and then using quantitative and qualitative results to identify intervention activities that may need modification. We applied this method during our development of the Coping Coach Web-based intervention for school-age children. In the evaluation of Coping Coach content validity, 15 experts from five countries
Prediction of pipeline corrosion rate based on grey Markov models
International Nuclear Information System (INIS)
Chen Yonghong; Zhang Dafa; Peng Guichu; Wang Yuemin
2009-01-01
Based on a model combining a grey model and a Markov model, the prediction of the corrosion rate of nuclear power pipelines was studied. Work was done to improve the grey model, yielding an optimized unbiased grey model. This new model was used to predict the trend of the corrosion rate, and the Markov model was used to predict the residual errors. In order to improve the prediction precision, a rolling operation method was used in these prediction processes. The results indicate that the improvement to the grey model is effective, that the prediction precision of the new model, which combines the optimized unbiased grey model with the Markov model, is better, and that the use of the rolling operation method may improve the prediction precision further. (authors)
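The grey half of such a combined model is typically a GM(1,1) fit; a minimal sketch of the plain GM(1,1) (not the authors' optimized unbiased variant), with the Markov residual correction and the rolling refit only indicated in comments:

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """Plain GM(1,1): fit dx1/dt + a*x1 = b to the cumulative series and
    forecast `steps` values ahead. In a grey-Markov scheme, a Markov
    chain would then classify and correct the residuals, and the whole
    fit would be repeated on a rolling window of recent data."""
    x0 = np.asarray(x0, dtype=float)        # positive corrosion-rate series
    x1 = np.cumsum(x0)                      # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])           # background (mean) sequence
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.diff(x1_hat)                # back to the original series
    return x0_hat[-steps:]

rates = [0.32, 0.35, 0.37, 0.40, 0.43]      # made-up corrosion rates, mm/yr
print(gm11_forecast(rates, steps=1))
```

GM(1,1) captures the smooth, near-exponential trend; the Markov correction handles the fluctuations around it, which is why the combination outperforms either part alone.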
An Operational Model for the Prediction of Jet Blast
2012-01-09
This paper presents an operational model for the prediction of jet blast. The model was developed based upon three modules including a jet exhaust model, jet centerline decay model and aircraft motion model. The final analysis was compared with d...
Cao, Jian; Chen, Jing-Bo; Dai, Meng-Xue
2018-01-01
An efficient finite-difference frequency-domain modeling of seismic wave propagation relies on the discrete schemes and appropriate solving methods. The average-derivative optimal scheme for the scalar wave modeling is advantageous in terms of the storage saving for the system of linear equations and the flexibility for arbitrary directional sampling intervals. However, using an LU-decomposition-based direct solver to solve its resulting system of linear equations is very costly for both memory and computational requirements. To address this issue, we consider establishing a multigrid-preconditioned BiCGSTAB iterative solver suited to the average-derivative optimal scheme. The choice of preconditioning matrix and its corresponding multigrid components is made with the help of Fourier spectral analysis and local mode analysis, respectively, which is important for the convergence. Furthermore, we find that for the computation with unequal directional sampling intervals, the anisotropic smoothing in the multigrid preconditioner may affect the convergence rate of this iterative solver. Successful numerical applications of this iterative solver for homogeneous and heterogeneous models in 2D and 3D are presented where the significant reduction of computer memory and the improvement of computational efficiency are demonstrated by comparison with the direct solver. In the numerical experiments, we also show that an unequal directional sampling interval will weaken the advantage of this multigrid-preconditioned iterative solver in computing speed or, even worse, could reduce its accuracy in some cases, which implies the need for a reasonable control of the directional sampling interval in the discretization.
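The solver structure described above, a Krylov method wrapped around a preconditioner, can be sketched with SciPy; here an incomplete-LU factorization stands in for the multigrid preconditioner, and a 2-D Laplacian stands in for the much harder, indefinite frequency-domain wave operator:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# 2-D 5-point Laplacian on an n x n grid (a simple stand-in test operator).
n = 50
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
I = sp.identity(n)
A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()
b = np.ones(n * n)

# Wrap an ILU factorization as the preconditioner M ~ A^-1
# (standing in for the multigrid preconditioner discussed above).
ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator(A.shape, ilu.solve)

x, info = spla.bicgstab(A, b, M=M)
residual = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
print(info, residual)   # info == 0 means the solver converged
```

Relative to LU decomposition, only the sparse matrix, the (incomplete) factors, and a few work vectors are stored, which is the memory saving the abstract refers to; a true multigrid preconditioner would further improve convergence for large wave-equation systems.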
Hoffman, John M; Noo, Frédéric; Young, Stefano; Hsieh, Scott S; McNitt-Gray, Michael
2018-06-01
To facilitate investigations into the impacts of acquisition and reconstruction parameters on quantitative imaging, radiomics and CAD using CT imaging, we previously released an open source implementation of a conventional weighted filtered backprojection reconstruction called FreeCT_wFBP. Our purpose was to extend that work by providing an open-source implementation of a model-based iterative reconstruction method using coordinate descent optimization, called FreeCT_ICD. Model-based iterative reconstruction offers the potential for substantial radiation dose reduction, but can impose substantial computational processing and storage requirements. FreeCT_ICD is an open source implementation of a model-based iterative reconstruction method that provides a reasonable tradeoff between these requirements. This was accomplished by adapting a previously proposed method that allows the system matrix to be stored with a reasonable memory requirement. The method amounts to describing the attenuation coefficient using rotating slices that follow the helical geometry. In the initially-proposed version, the rotating slices are themselves described using blobs. We have replaced this description by a unique model that relies on tri-linear interpolation together with the principles of Joseph's method. This model offers an improvement in memory requirement while still allowing highly accurate reconstruction for conventional CT geometries. The system matrix is stored column-wise and combined with an iterative coordinate descent (ICD) optimization. The result is FreeCT_ICD, which is a reconstruction program developed on the Linux platform using C++ libraries and the open source GNU GPL v2.0 license. The software is capable of reconstructing raw projection data of helical CT scans. In this work, the software has been described and evaluated by reconstructing datasets exported from a clinical scanner which consisted of an ACR accreditation phantom dataset and a clinical pediatric
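The iterative coordinate descent (ICD) optimization at the heart of FreeCT_ICD updates one unknown at a time against a running residual; a generic least-squares version of that update pattern (not the actual regularized, weighted CT objective) looks like:

```python
import numpy as np

def coordinate_descent_ls(A, b, sweeps=300):
    """Minimize ||A x - b||^2 one unknown at a time while keeping a
    running residual: the update pattern used by ICD reconstruction,
    where each column of A is one voxel's footprint in the projection
    data (the column-wise stored system matrix mentioned above)."""
    m, n = A.shape
    x = np.zeros(n)
    r = b - A @ x                          # current residual
    col_sq = (A ** 2).sum(axis=0)          # precomputed column norms
    for _ in range(sweeps):
        for j in range(n):
            if col_sq[j] == 0.0:
                continue
            step = (A[:, j] @ r) / col_sq[j]   # exact 1-D minimizer
            x[j] += step
            r -= step * A[:, j]
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = rng.standard_normal(10)
x_hat = coordinate_descent_ls(A, A @ x_true)
print(np.allclose(x_hat, x_true, atol=1e-4))
```

Because each update touches only one column of the system matrix, storing the matrix column-wise (as FreeCT_ICD does) makes every coordinate update a cheap, cache-friendly operation.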
Data driven propulsion system weight prediction model
Gerth, Richard J.
1994-10-01
The objective of the research was to develop a method to predict the weight of paper engines, i.e., engines that are in the early stages of development. The impetus for the project was the Single Stage To Orbit (SSTO) project, where engineers need to evaluate alternative engine designs. Since the SSTO is a performance-driven project, the performance models for alternative designs were well understood. The next tradeoff is weight. Since it is known that engine weight varies with thrust level, a model is required that would allow discrimination between engines that produce the same thrust. Above all, the model had to be rooted in data, with assumptions that could be justified based on the data. The general approach was to collect data on as many existing engines as possible and build a statistical model of engine weight as a function of various component performance parameters. This was considered a reasonable level to begin the project because the data would be readily available, and it would be at the level of most paper engines, prior to detailed component design.
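A statistical weight model of the kind described is often a power-law regression on performance parameters; a minimal sketch with entirely made-up engine records (the parameters, names, and numbers are illustrative, not from the study):

```python
import numpy as np

# Hypothetical engine records (fabricated for illustration):
# vacuum thrust (kN), chamber pressure (bar), dry mass (kg).
thrust = np.array([345.0, 650.0, 1000.0, 1860.0, 2280.0])
pc     = np.array([ 70.0, 100.0,  110.0,  190.0,  205.0])
mass   = np.array([1200.0, 1800.0, 2400.0, 3500.0, 3900.0])

# Power-law weight model fitted in log space:
#   log(W) = c0 + c1*log(F) + c2*log(Pc)
X = np.column_stack([np.ones_like(thrust), np.log(thrust), np.log(pc)])
coef, *_ = np.linalg.lstsq(X, np.log(mass), rcond=None)

def predict_mass_kg(thrust_kn: float, pc_bar: float) -> float:
    """Weight estimate for a paper engine from its performance parameters."""
    return float(np.exp(coef @ [1.0, np.log(thrust_kn), np.log(pc_bar)]))
```

The second regressor is what lets the model discriminate between engines of equal thrust, which is exactly the requirement stated above.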
Predictive modeling of emergency cesarean delivery.
Directory of Open Access Journals (Sweden)
Carlos Campillo-Artero
Full Text Available To increase discriminatory accuracy (DA) for emergency cesarean sections (ECSs). We prospectively collected data on and studied all 6,157 births occurring in 2014 at four public hospitals located in three different autonomous communities of Spain. To identify risk factors (RFs) for ECS, we used likelihood ratios and logistic regression, fitted a classification tree (CTREE), and analyzed a random forest model (RFM). We used the areas under the receiver-operating-characteristic (ROC) curves (AUCs) to assess their DA. The magnitude of the LR+ for all putative individual RFs and ORs in the logistic regression models was low to moderate. Except for parity, all putative RFs were positively associated with ECS, including hospital fixed-effects and night-shift delivery. The DA of all logistic models ranged from 0.74 to 0.81. The most relevant RFs (pH, induction, and previous C-section) in the CTREEs showed the highest ORs in the logistic models. The DA of the RFM and its most relevant interaction terms was even higher (AUC = 0.94; 95% CI: 0.93-0.95). Putative fetal, maternal, and contextual RFs alone fail to achieve reasonable DA for ECS. It is the combination of these RFs and the interactions between them at each hospital that makes it possible to improve the DA for the type of delivery and tailor interventions through prediction to improve the appropriateness of ECS indications.
ITER council proceedings: 1995
International Nuclear Information System (INIS)
1996-01-01
Records of the 8. ITER Council Meeting (IC-8), held on 26-27 July 1995, in San Diego, USA, and the 9. ITER Council Meeting (IC-9) held on 12-13 December 1995, in Garching, Germany, are presented, giving essential information on the evolution of the ITER Engineering Design Activities (EDA) and the ITER Interim Design Report Package and Relevant Documents. Figs, tabs
Model Predictive Control based on Finite Impulse Response Models
DEFF Research Database (Denmark)
Prasath, Guru; Jørgensen, John Bagterp
2008-01-01
We develop a regularized l2 finite impulse response (FIR) predictive controller with input and input-rate constraints. Feedback is based on a simple constant output disturbance filter. The performance of the predictive controller in the face of plant-model mismatch is investigated by simulations...... and related to the uncertainty of the impulse response coefficients. The simulations can be used to benchmark l2 MPC against FIR based robust MPC as well as to estimate the maximum performance improvements by robust MPC....
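The FIR predictor with a constant output-disturbance filter amounts to convolving past inputs with impulse-response coefficients and adding a bias estimated from the latest measured model error; a minimal one-step sketch (the coefficients are illustrative, not from the paper):

```python
import numpy as np

h = np.array([0.0, 0.10, 0.25, 0.20, 0.10])   # assumed FIR coefficients

def fir_one_step(u_past, d_hat=0.0):
    """One-step FIR prediction y[k] = sum_i h[i] * u[k-i] + d_hat, where
    d_hat is the constant output-disturbance estimate, i.e. the latest
    measured model error carried forward (the simple disturbance filter
    mentioned above)."""
    u = np.asarray(u_past[-len(h):], dtype=float)[::-1]  # newest input first
    return float(h[:len(u)] @ u) + d_hat

# A steady input of 1.0 predicts the steady-state gain sum(h) = 0.65:
print(fir_one_step([1.0] * 5))
```

In the full l2 MPC, these one-step predictions are stacked over a horizon and a regularized quadratic program is solved subject to input and input-rate constraints; plant-model mismatch then shows up directly as uncertainty in the coefficients h.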
DEFF Research Database (Denmark)
Engsted, Tom; Møller, Stig V.
We suggest an iterated GMM approach to estimate and test the consumption based habit persistence model of Campbell and Cochrane (1999), and we apply the approach on annual and quarterly Danish stock and bond returns. For comparative purposes we also estimate and test the standard CRRA model...... than 80 years there is absolutely no evidence of superior performance of the Campbell-Cochrane model. For the shorter and more recent quarterly data over a 20-30 year period, there is some evidence of counter-cyclical time-variation in the degree of risk-aversion, in accordance with the Campbell...
ITER council proceedings: 1999
International Nuclear Information System (INIS)
1999-01-01
In 1999 the ITER meeting in Cadarache (10-11 March 1999) and the Programme Directors Meeting in Grenoble (28-29 July 1999) took place. Both meetings were exclusively devoted to ITER engineering design activities and their agendas covered all issues important for the development of ITER. This volume presents the documents of these two important meetings
ITER council proceedings: 1996
International Nuclear Information System (INIS)
1997-01-01
Records of the 10. ITER Council Meeting (IC-10), held on 26-27 July 1996, in St. Petersburg, Russia, and the 11. ITER Council Meeting (IC-11) held on 17-18 December 1996, in Tokyo, Japan, are presented, giving essential information on the evolution of the ITER Engineering Design Activities (EDA) and the cost review and safety analysis. Figs, tabs
International Nuclear Information System (INIS)
Aymar, R.
1998-01-01
Six years of technical work under the ITER EDA Agreement have resulted in a design which constitutes a complete description of the ITER device and of its auxiliary systems and facilities. The ITER Council commented that the Final Design Report provides the first comprehensive design of a fusion reactor based on well established physics and technology