WorldWideScience

Sample records for model predictive iterative

  1. Prediction of MHC class II binding peptides based on an iterative learning model

    Science.gov (United States)

    Murugan, Naveen; Dai, Yang

    2005-01-01

    Background Prediction of the binding ability of antigen peptides to major histocompatibility complex (MHC) class II molecules is important in vaccine development. The variable length of each binding peptide complicates this prediction. Motivated by a text mining model designed for building a classifier from labeled and unlabeled examples, we have developed an iterative supervised learning model for the prediction of MHC class II binding peptides. Results A linear programming (LP) model was employed for the learning task at each iteration, since it is fast and can re-optimize the previous classifier when the training sets are altered. The performance of the new model has been evaluated with benchmark datasets. The outcome demonstrates that the model achieves an accuracy of prediction that is competitive compared to the advanced predictors (the Gibbs sampler and TEPITOPE). The average areas under the ROC curve obtained from one variant of our model are 0.753 and 0.715 for the original and homology reduced benchmark sets, respectively. The corresponding values are respectively 0.744 and 0.673 for the Gibbs sampler and 0.702 and 0.667 for TEPITOPE. Conclusion The iterative learning procedure appears to be effective in prediction of MHC class II binders. It offers an alternative approach to this important prediction problem. PMID:16351712
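
    The paper's LP formulation is not reproduced in the abstract, but the flavor of such an iterative scheme can be sketched. The following illustrative Python code (not the authors' implementation) trains a slack-minimizing linear-programming classifier and re-selects, at each iteration, the highest-scoring fixed-length candidate binding core of every peptide; all names and encodings are assumptions.

        import numpy as np
        from scipy.optimize import linprog

        def lp_classifier(X, y):
            # LP training: minimise sum of slacks s subject to
            # y_i * (w . x_i + b) >= 1 - s_i,  s_i >= 0.
            n, d = X.shape
            c = np.concatenate([np.zeros(d + 1), np.ones(n)])         # vars: [w, b, s]
            A_ub = np.hstack([-y[:, None] * X, -y[:, None], -np.eye(n)])
            b_ub = -np.ones(n)
            bounds = [(None, None)] * (d + 1) + [(0, None)] * n
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
            return res.x[:d], res.x[d]

        def iterative_train(windows, labels, n_iter=10):
            # windows: per peptide, an (m_i, d) array of candidate core encodings
            # (variable-length peptides yield different numbers of candidates);
            # labels: +1 binder / -1 non-binder. The training set is altered each
            # round by re-picking each peptide's highest-scoring core.
            chosen = [wnd[0] for wnd in windows]
            for _ in range(n_iter):
                X, y = np.vstack(chosen), np.asarray(labels, dtype=float)
                w, b = lp_classifier(X, y)
                new = [wnd[np.argmax(wnd @ w + b)] for wnd in windows]
                if all(np.array_equal(a, c) for a, c in zip(new, chosen)):
                    break                                             # core choice stabilised
                chosen = new
            return w, b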

  2. Comparison of ITER performance predicted by semi-empirical and theory-based transport models

    International Nuclear Information System (INIS)

    Mukhovatov, V.; Shimomura, Y.; Polevoi, A.

    2003-01-01

    The values of Q=(fusion power)/(auxiliary heating power) predicted for ITER by three different methods, i.e., transport model based on empirical confinement scaling, dimensionless scaling technique, and theory-based transport models are compared. The energy confinement time given by the ITERH-98(y,2) scaling for an inductive scenario with plasma current of 15 MA and plasma density 15% below the Greenwald value is 3.6 s with one technical standard deviation of ±14%. These data are translated into a Q interval of [7-13] at the auxiliary heating power P_aux = 40 MW and [7-28] at the minimum heating power satisfying a good confinement ELMy H-mode. Predictions of dimensionless scalings and theory-based transport models such as Weiland, MMM and IFS/PPPL overlap with the empirical scaling predictions within the margins of uncertainty. (author)
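
    For orientation, since Q = (fusion power)/(auxiliary heating power), the quoted interval Q = 7-13 at P_aux = 40 MW corresponds to a predicted fusion power of roughly 7 × 40 = 280 MW up to 13 × 40 = 520 MW.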

  3. Iterated non-linear model predictive control based on tubes and contractive constraints.

    Science.gov (United States)

    Murillo, M; Sánchez, G; Giovanini, L

    2016-05-01

    This paper presents a predictive control algorithm for non-linear systems based on successive linearizations of the non-linear dynamics around a given trajectory. A linear time varying model is obtained and the non-convex constrained optimization problem is transformed into a sequence of locally convex ones. The robustness of the proposed algorithm is addressed by adding a convex contractive constraint. To account for linearization errors and to obtain more accurate results, an inner iteration loop is added to the algorithm. A simple methodology to obtain an outer bounding-tube for state trajectories is also presented. The convergence of the iterative process and the stability of the closed-loop system are analyzed. The simulation results show the effectiveness of the proposed algorithm in controlling a quadcopter type unmanned aerial vehicle. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
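
    A minimal sketch of the successive-linearization idea (not the paper's full tube/contractive-constraint formulation) is given below; the dynamics f, horizon, and bounds are illustrative assumptions, and cvxpy is used to solve each locally convex subproblem.

        import numpy as np
        import cvxpy as cp

        def f(x, u):
            # illustrative discrete-time non-linear dynamics
            return np.array([x[0] + 0.1 * x[1],
                             x[1] + 0.1 * (u[0] - np.sin(x[0]))])

        def jacobians(x, u, eps=1e-6):
            # numerical A = df/dx and B = df/du around a nominal point
            n, m = x.size, u.size
            A, B = np.zeros((n, n)), np.zeros((n, m))
            for i in range(n):
                dx = np.zeros(n); dx[i] = eps
                A[:, i] = (f(x + dx, u) - f(x - dx, u)) / (2 * eps)
            for j in range(m):
                du = np.zeros(m); du[j] = eps
                B[:, j] = (f(x, u + du) - f(x, u - du)) / (2 * eps)
            return A, B

        def iterated_mpc(x0, T=20, inner_iters=5):
            n, m = 2, 1
            x_bar, u_bar = np.tile(x0, (T + 1, 1)), np.zeros((T, m))
            for _ in range(inner_iters):           # inner loop: re-linearize, re-solve
                x, u = cp.Variable((T + 1, n)), cp.Variable((T, m))
                cons = [x[0] == x0]
                for t in range(T):
                    A, B = jacobians(x_bar[t], u_bar[t])
                    c = f(x_bar[t], u_bar[t])
                    cons += [x[t + 1] == c + A @ (x[t] - x_bar[t]) + B @ (u[t] - u_bar[t]),
                             cp.abs(u[t]) <= 2.0]
                cp.Problem(cp.Minimize(cp.sum_squares(x) + 0.1 * cp.sum_squares(u)),
                           cons).solve()
                if np.max(np.abs(u.value - u_bar)) < 1e-3:
                    break                          # successive linearizations converged
                x_bar, u_bar = x.value, u.value
            return u_bar[0]                        # first move of the locally optimal plan

        # u0 = iterated_mpc(np.array([1.0, 0.0]))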

  4. Solubility prediction of carbon dioxide in water by an iterative equation of state/excess Gibbs energy model

    Science.gov (United States)

    Suleman, H.; Maulud, A. S.; Man, Z.

    2016-06-01

    The solubility of carbon dioxide in water has been predicted extensively by various models, owing to its wide application in the process industry. Henry's law has been widely utilized for solubility prediction, with good results at low pressure. However, the law shows large deviations at high pressure, even when supplemented with pressure corrections. In contrast, equation of state/excess Gibbs energy models are a promising class of thermodynamic models for predicting non-ideal equilibria at high pressure. These models can efficiently predict solubilities at high pressures, even where experimental solubilities are not available for corroboration. To this end, the models work iteratively, exploiting the mathematical redundancy of local composition excess Gibbs energy models. In this study, an iterative form of the Linear Combination of Vidal and Michelsen (LCVM) mixing rule has been used for the prediction of carbon dioxide solubility in water, in conjunction with UNIFAC and a translated modified Peng-Robinson equation of state. The proposed model, termed iterative LCVM (i-LCVM), predicts carbon dioxide solubility in water for a wide range of temperature (273 to 453 K) and pressure (101.3 to 7380 kPa). The i-LCVM shows good agreement with experimental values and predicts better than Henry's law (53% improvement).
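
    The i-LCVM equations themselves are not given in the abstract; the sketch below shows only the generic successive-substitution loop that iterative gamma-phi solubility models of this kind rest on. The fug_coeff, activity_coeff and henry callables are hypothetical placeholders for an equation of state, an excess Gibbs energy (activity) model and a Henry constant correlation.

        def solve_solubility(P, T, fug_coeff, activity_coeff, henry,
                             y_co2=1.0, tol=1e-10, max_iter=200):
            # Successive substitution on the iso-fugacity condition
            #   y * phi_v(T, P) * P = x * gamma(x, T) * H(T)
            # solved for the liquid-phase CO2 mole fraction x.
            x = 1e-3                       # initial guess
            for _ in range(max_iter):
                x_new = y_co2 * fug_coeff(T, P) * P / (activity_coeff(x, T) * henry(T))
                if abs(x_new - x) < tol:
                    break
                x = x_new
            return x

        # Toy usage with made-up property models:
        # x = solve_solubility(5e6, 313.0, lambda T, P: 0.8,
        #                      lambda x, T: 1.0 + 5.0 * x, lambda T: 2.0e8)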

  5. An iterative strategy combining biophysical criteria and duration hidden Markov models for structural predictions of Chlamydia trachomatis σ66 promoters

    Directory of Open Access Journals (Sweden)

    Ojcius David M

    2009-08-01

    Background Promoter identification is a first step in the quest to explain gene regulation in bacteria. It has been demonstrated that the initiation of bacterial transcription depends upon the stability and topology of DNA in the promoter region as well as the binding affinity between the RNA polymerase σ-factor and promoter. However, promoter prediction algorithms to date have not explicitly used an ensemble of these factors as predictors. In addition, most promoter models have been trained on data from Escherichia coli. Although it has been shown that transcriptional mechanisms are similar among various bacteria, it is quite possible that the differences between Escherichia coli and Chlamydia trachomatis are large enough to recommend an organism-specific modeling effort. Results Here we present an iterative stochastic model building procedure that combines such biophysical metrics as DNA stability, curvature, twist and stress-induced DNA duplex destabilization along with duration hidden Markov model parameters to model Chlamydia trachomatis σ66 promoters from 29 experimentally verified sequences. Initially, iterative duration hidden Markov modeling of the training set sequences provides a scoring algorithm for Chlamydia trachomatis RNA polymerase σ66/DNA binding. Subsequently, an iterative application of Stepwise Binary Logistic Regression selects multiple promoter predictors and deletes/replaces training set sequences to determine an optimal training set. The resulting model predicts the final training set with a high degree of accuracy and provides insights into the structure of the promoter region. Model based genome-wide predictions are provided so that optimal promoter candidates can be experimentally evaluated, and refined models developed. Co-predictions with three other algorithms are also supplied to enhance reliability. Conclusion This strategy and resulting model support the conjecture that DNA biophysical properties

  6. Sector analysis and predictive modelling reveal iterative shoot-like development in fern fronds.

    Science.gov (United States)

    Sanders, Heather L; Darrah, Peter R; Langdale, Jane A

    2011-07-01

    Plants colonized the terrestrial environment over 450 million years ago. Since then, shoot architecture has evolved in response to changing environmental conditions. Our current understanding of the innovations that altered shoot morphology is underpinned by developmental studies in a number of plant groups. However, the least is known about mechanisms that operate in ferns, a key group for understanding the evolution of plant development. Using a novel combination of sector analysis, conditional probability modelling methods and histology, we show that shoots, fronds ('leaves') and pinnae ('leaflets') of the fern Nephrolepis exaltata all develop from single apical initial cells. Shoot initials cleave on three faces to produce a pool of cells from which individual frond apical initials are sequentially specified. Frond initials then cleave in two planes to produce a series of lateral merophyte initials that each contributes a unit of three pinnae to half of the mediolateral frond axis. Notably, this iterative pattern in both shoots and fronds is similar to the developmental process that operates in shoots of other plant groups. Pinnae initials first cleave in two planes to generate lateral marginal initials. The apical and marginal initials then divide in three planes to coordinately generate the determinate pinna. These findings impact both on our understanding of fundamental plant developmental processes and on our perspective of how shoot systems evolved.

  7. Matlab modeling of ITER CODAC

    International Nuclear Information System (INIS)

    Pangione, L.; Lister, J.B.

    2008-01-01

    The ITER CODAC (COntrol, Data Access and Communication) conceptual design resulted from 2 years of activity. One result was a proposed functional partitioning of CODAC into different CODAC Systems, each of them partitioned into other CODAC Systems. Considering the large size of this project, simple use of human language assisted by figures would certainly be ineffective in creating an unambiguous description of all interactions and all relations between these Systems. Moreover, the underlying design is resident in the mind of the designers, who must consider all possible situations that could happen to each system. There is therefore a need to model the whole of CODAC with a clear and preferably graphical method, which allows the designers to verify the correctness and the consistency of their project. The aim of this paper is to describe the work started on ITER CODAC modeling using Matlab/Simulink. The main feature of this tool is the possibility of having a simple, graphical, intuitive representation of a complex system and ultimately of running a numerical simulation of it. Using Matlab/Simulink, each CODAC System was represented in a graphical and intuitive form with its relations and interactions through the definition of a small number of simple rules. In a Simulink diagram, each system was represented as a 'black box', both containing, and connected to, a number of other systems. In this way it is possible to move vertically between systems on different levels, to show the relation of membership, or horizontally to analyse the information exchange between systems at the same level. This process can be iterated, starting from a global diagram, in which only CODAC appears with the Plant Systems and the external sites, and going deeper down to the mathematical model of each CODAC system. The Matlab/Simulink features for simulating the whole top diagram encourage us to develop the idea of completing the functionalities of all systems in order to finally have a full

  8. Predicting Software Test Effort in Iterative Development Using a Dynamic Bayesian Network

    OpenAIRE

    Torkar, Richard; Awan, Nasir Majeed; Alvi, Adnan Khadem; Afzal, Wasif

    2010-01-01

    Projects following iterative software development methodologies must still be managed in such a way as to maximize quality and minimize costs. However, there are indications that predicting test effort in iterative development is challenging and currently there seem to be no models for test effort prediction. This paper introduces and validates a dynamic Bayesian network for predicting test effort in iterative software development. The proposed model is validated by the use of data from two indu...

  9. Wall conditioning for ITER: Current experimental and modeling activities

    Energy Technology Data Exchange (ETDEWEB)

    Douai, D., E-mail: david.douai@cea.fr [CEA, IRFM, Association Euratom-CEA, 13108 St. Paul lez Durance (France); Kogut, D. [CEA, IRFM, Association Euratom-CEA, 13108 St. Paul lez Durance (France); Wauters, T. [LPP-ERM/KMS, Association Belgian State, 1000 Brussels (Belgium); Brezinsek, S. [FZJ, Institut für Energie- und Klimaforschung Plasmaphysik, 52441 Jülich (Germany); Hagelaar, G.J.M. [Laboratoire Plasma et Conversion d’Energie, UMR5213, Toulouse (France); Hong, S.H. [National Fusion Research Institute, Daejeon 305-806 (Korea, Republic of); Lomas, P.J. [CCFE, Culham Science Centre, OX14 3DB Abingdon (United Kingdom); Lyssoivan, A. [LPP-ERM/KMS, Association Belgian State, 1000 Brussels (Belgium); Nunes, I. [Associação EURATOM-IST, Instituto de Plasmas e Fusão Nuclear, 1049-001 Lisboa (Portugal); Pitts, R.A. [ITER International Organization, F-13067 St. Paul lez Durance (France); Rohde, V. [Max-Planck-Institut für Plasmaphysik, 85748 Garching (Germany); Vries, P.C. de [ITER International Organization, F-13067 St. Paul lez Durance (France)

    2015-08-15

    Wall conditioning will be required in ITER to control fuel and impurity recycling, as well as tritium (T) inventory. Analysis of a conditioning cycle on JET, with its ITER-Like Wall, is presented, evidencing a reduced need for wall cleaning in ITER compared to JET-CFC. Using a novel 2D multi-fluid model, the current density during Glow Discharge Conditioning (GDC) on the in-vessel plasma-facing components (PFC) of ITER is predicted to approach the simple expectation of total anode current divided by wall surface area. Baking of the divertor to 350 °C should desorb the majority of the co-deposited T. ITER foresees the use of low temperature plasma based techniques compatible with the permanent toroidal magnetic field, such as Ion (ICWC) or Electron Cyclotron Wall Conditioning (ECWC), for tritium removal between ITER plasma pulses. Extrapolation of JET ICWC results to ITER indicates removal comparable to estimated T-retention in nominal ITER D:T shots, whereas GDC may be unattractive for that purpose.

  10. Iter

    Science.gov (United States)

    Iotti, Robert

    2015-04-01

    ITER is an international experimental facility being built by seven Parties to demonstrate the long term potential of fusion energy. The ITER Joint Implementation Agreement (JIA) defines the structure and governance model of such cooperation. There are a number of necessary conditions for such international projects to be successful: a complete design, strong systems engineering working with an agreed set of requirements, an experienced organization with systems and plans in place to manage the project, a cost estimate backed by industry, and someone in charge. Unfortunately for ITER many of these conditions were not present. The paper discusses the priorities in the JIA which led to setting up the project with a Central Integrating Organization (IO) in Cadarache, France as the ITER HQ, and seven Domestic Agencies (DAs) located in the countries of the Parties, responsible for delivering 90%+ of the project hardware as Contributions-in-Kind and also financial contributions to the IO, as 'Contributions-in-Cash'. Theoretically the Director General (DG) is responsible for everything. In practice the DG does not have the power to control the work of the DAs, and there is not an effective management structure enabling the IO and the DAs to arbitrate disputes, so the project is not really managed, but is a loose collaboration of competing interests. Any DA can effectively block a decision reached by the DG. Inefficiencies in completing design while setting up a competent organization from scratch contributed to the delays and cost increases during the initial few years. So did the fact that the original estimate was not developed from industry input. Unforeseen inflation and market demand on certain commodities/materials further exacerbated the cost increases. Since then, improvements are debatable. Does this mean that the governance model of ITER is a wrong model for international scientific cooperation? I do not believe so. Had the necessary conditions for success

  11. Model-based iterative learning control applied to an industrial robot with elasticity

    NARCIS (Netherlands)

    Hakvoort, Wouter; Aarts, Ronald G.K.M.; van Dijk, Johannes; Jonker, Jan B.; IEEE,

    2007-01-01

    In this paper, model-based Iterative Learning Control (ILC) is applied to improve the tracking accuracy of an industrial robot with elasticity. The ILC algorithm iteratively updates the reference trajectory for the robot such that the predicted tracking error in the next iteration is minimised. The
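
    The abstract is truncated, but the trial-to-trial update it describes is the classic lifted-system ILC law. Below is a self-contained numerical sketch with an illustrative first-order plant and an imperfect model supplying the learning gain; these are assumptions, not the authors' robot model.

        import numpy as np

        N = 50
        def impulse(pole):
            # impulse response of an illustrative first-order plant
            return 0.1 * pole ** np.arange(N)

        def lifted(h):
            # lower-triangular Toeplitz (convolution) matrix: y = G @ u over one trial
            G = np.zeros((N, N))
            for i in range(N):
                G[i:, i] = h[:N - i]
            return G

        G_true = lifted(impulse(0.95))     # actual plant
        G_model = lifted(impulse(0.90))    # imperfect model used for learning
        L = np.linalg.pinv(G_model)        # model-based learning gain
        r = np.sin(np.linspace(0, 2 * np.pi, N))
        u = np.zeros(N)
        for k in range(20):                # trial-to-trial (iteration-domain) update
            e = r - G_true @ u             # tracking error measured in trial k
            u = u + 0.5 * L @ e            # shape the next trial's input
            print(k, np.linalg.norm(e))    # error norm shrinks over the iterations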

  12. Iterative-build OMIT maps: map improvement by iterative model building and refinement without model bias

    International Nuclear Information System (INIS)

    Terwilliger, Thomas C.; Grosse-Kunstleve, Ralf W.; Afonine, Pavel V.; Moriarty, Nigel W.; Adams, Paul D.; Read, Randy J.; Zwart, Peter H.; Hung, Li-Wei

    2008-01-01

    An OMIT procedure is presented that has the benefits of iterative model building, density modification and refinement, yet is essentially unbiased by the atomic model that is built. A procedure for carrying out iterative model building, density modification and refinement is presented in which the density in an OMIT region is essentially unbiased by an atomic model. Density from a set of overlapping OMIT regions can be combined to create a composite ‘iterative-build’ OMIT map that is everywhere unbiased by an atomic model but also everywhere benefiting from the model-based information present elsewhere in the unit cell. The procedure may have applications in the validation of specific features in atomic models as well as in overall model validation. The procedure is demonstrated with a molecular-replacement structure and with an experimentally phased structure, and a variation on the method is demonstrated by removing model bias from a structure from the Protein Data Bank.

  13. ITER plasma safety interface models and assessments

    International Nuclear Information System (INIS)

    Uckan, N.A.; Bartels, H-W.; Honda, T.; Amano, T.; Boucher, D.; Post, D.; Wesley, J.

    1996-01-01

    Physics models and requirements to be used as a basis for safety analysis studies are developed and physics results motivated by safety considerations are presented for the ITER design. Physics specifications are provided for enveloping plasma dynamic events for Category I (operational event), Category II (likely event), and Category III (unlikely event). A safety analysis code SAFALY has been developed to investigate plasma anomaly events. The plasma response to ex-vessel component failure and machine response to plasma transients are considered

  14. An Iterative Uncertainty Assessment Technique for Environmental Modeling

    International Nuclear Information System (INIS)

    Engel, David W.; Liebetrau, Albert M.; Jarman, Kenneth D.; Ferryman, Thomas A.; Scheibe, Timothy D.; Didier, Brett T.

    2004-01-01

    The reliability of and confidence in predictions from model simulations are crucial: these predictions can significantly affect risk assessment decisions. For example, the fate of contaminants at the U.S. Department of Energy's Hanford Site has critical impacts on long-term waste management strategies. In the uncertainty estimation efforts for the Hanford Site-Wide Groundwater Modeling program, computational issues severely constrain both the number of uncertain parameters that can be considered and the degree of realism that can be included in the models. Substantial improvements in the overall efficiency of uncertainty analysis are needed to fully explore and quantify significant sources of uncertainty. We have combined state-of-the-art statistical and mathematical techniques in a unique iterative, limited sampling approach to efficiently quantify both local and global prediction uncertainties resulting from model input uncertainties. The approach is designed for application to widely diverse problems across multiple scientific domains. Results are presented both for an analytical model where the response surface is 'known' and for a simplified contaminant fate/transport and groundwater flow model. The results show that our iterative method for approximating a response surface (for subsequent calculation of uncertainty estimates) of specified precision requires less computing time than traditional approaches based upon noniterative sampling methods.
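
    A toy sketch of the general idea, iteratively refining a response surface by sampling where the current fit is least settled, is shown below; it is loosely inspired by, not taken from, the paper (1-D, polynomial surrogate, all parameters illustrative).

        import numpy as np

        def iterative_surrogate(model, lo, hi, n0=8, n_iter=6):
            # Fit a polynomial response surface to an expensive 1-D `model`, then
            # repeatedly add the sample point where two successive fits disagree most.
            x = np.linspace(lo, hi, n0)
            y = np.array([model(v) for v in x])
            grid = np.linspace(lo, hi, 400)
            prev = np.zeros_like(grid)
            for _ in range(n_iter):
                coef = np.polyfit(x, y, deg=min(4, len(x) - 1))
                cur = np.polyval(coef, grid)
                x_new = grid[np.argmax(np.abs(cur - prev))]   # largest change between fits
                x = np.append(x, x_new)
                y = np.append(y, model(x_new))                # one new expensive sample
                prev = cur
            return coef

        # coef = iterative_surrogate(lambda v: np.exp(-v) * np.sin(3 * v), 0.0, 2.0)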

  15. Prediction for disruption erosion of ITER plasma facing components; a comparison of experimental and numerical results

    International Nuclear Information System (INIS)

    Laan, J.G. van der; Akiba, M.; Seki, M.; Hassanein, A.; Tanchuk, V.

    1991-01-01

    An evaluation is given of predictions for disruption erosion in the International Thermonuclear Experimental Reactor (ITER). First, a description is given of the relation of plasma operating parameters and system dimensions to the predicted loading parameters of Plasma Facing Components (PFC) in off-normal events. Numerical results from the ITER parties on the prediction of disruption erosion are compared for a few typical cases and discussed. Apart from some differences in the codes, the observed discrepancies can be ascribed to different input data of material properties and boundary conditions. Some physical models for vapour shielding and their effects on numerical results are mentioned. Experimental results from the ITER parties, obtained with electron and laser beams, are also compared. Erosion rates for the candidate ITER PFC materials are shown to depend very strongly on the energy deposition parameters, which are based on plasma physics considerations, and on the assumed material loss mechanisms. Lifetime estimates for divertor plate and first wall armour are given for carbon, tungsten and beryllium, based on the erosion in the thermal quench phase. (orig.)

  16. Iterative and non-iterative solutions of engine flows using ASM and k-ε turbulence models

    International Nuclear Information System (INIS)

    Khaleghi, H.; Fallah, E.

    2003-01-01

    Various turbulence models have been developed in order to make good predictions of turbulence phenomena in different applications. The standard k-ε model shows poor predictions for some applications. The Reynolds Stress Model (RSM) is expected to give a better prediction of turbulent characteristics, because a separate differential equation for each Reynolds stress component is solved in this model. In order to save both time and memory in this calculation, a new Algebraic Stress Model (ASM), developed by Lumley et al. in 1995, is used for calculations of flow characteristics in the internal combustion engine chamber. By using turbulence realizability principles, this model becomes a powerful and reliable turbulence model. In this paper the abilities of the model are examined in internal combustion engine flows. The results of the ASM and k-ε models are compared with the experimental data. It is shown that the poor predictions of the k-ε model are improved upon by the ASM. Also in this paper the non-iterative PISO and iterative SIMPLE solution algorithms are compared. The results show that the PISO solution algorithm is the preferred and more efficient procedure in the calculation of internal combustion engine flows. (author)

  17. Dealing with noisy absences to optimize species distribution models: an iterative ensemble modelling approach.

    Directory of Open Access Journals (Sweden)

    Christine Lauzeral

    Species distribution models (SDMs) are widespread in ecology and conservation biology, but their accuracy can be lowered by non-environmental (noisy) absences that are common in species occurrence data. Here we propose an iterative ensemble modelling (IEM) method to deal with noisy absences and hence improve the predictive reliability of ensemble modelling of species distributions. In the IEM approach, outputs of a classical ensemble model (EM) were used to update the raw occurrence data. The revised data was then used as input for a new EM run. This process was iterated until the predictions stabilized. The outputs of the iterative method were compared to those of the classical EM using virtual species. The IEM process tended to converge rapidly. It increased the consensus between predictions provided by the different methods as well as between those provided by different learning data sets. Comparing IEM and EM showed that for high levels of non-environmental absences, iterations significantly increased prediction reliability measured by the Kappa and TSS indices, as well as the percentage of well-predicted sites. Compared to EM, IEM also reduced biases in estimates of species prevalence. Compared to the classical EM method, IEM improves the reliability of species predictions. It particularly deals with noisy absences that are replaced in the data matrices by simulated presences during the iterative modelling process. IEM thus constitutes a promising way to increase the accuracy of EM predictions of difficult-to-detect species, as well as of species that are not in equilibrium with their environment.
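
    A compact sketch of the IEM relabeling loop, with generic scikit-learn classifiers standing in for the paper's SDM ensemble (the threshold and learners are illustrative assumptions):

        import numpy as np
        from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
        from sklearn.linear_model import LogisticRegression

        def iem(X, y, n_iter=10, thresh=0.7):
            # Iterative ensemble modelling: absences that the ensemble consensus
            # strongly predicts as presences are treated as noisy and flipped to
            # simulated presences; the ensemble is refit until predictions stabilise.
            y = y.copy()
            models = [RandomForestClassifier(n_estimators=200, random_state=0),
                      GradientBoostingClassifier(random_state=0),
                      LogisticRegression(max_iter=1000)]
            for _ in range(n_iter):
                consensus = np.mean([m.fit(X, y).predict_proba(X)[:, 1]
                                     for m in models], axis=0)
                flip = (y == 0) & (consensus > thresh)   # suspected noisy absences
                if not flip.any():
                    break
                y[flip] = 1                              # replace with simulated presences
            return models, y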

  18. Modeling defect trends for iterative development

    Science.gov (United States)

    Powell, J. D.; Spanguolo, J. N.

    2003-01-01

    The Employment of Defects (EoD) approach to measuring and analyzing defects seeks to identify and capture trends and phenomena that are critical to managing software quality in the iterative software development lifecycle at JPL.

  19. Active Player Modeling in the Iterated Prisoner’s Dilemma

    Directory of Open Access Journals (Sweden)

    Hyunsoo Park

    2016-01-01

    The iterated prisoner’s dilemma (IPD) is well known within the domain of game theory. Although it is relatively simple, it can also elucidate important problems related to cooperation and trust. Generally, players can predict their opponents’ actions when they are able to build a precise model of their behavior based on their game playing experience. However, it is difficult to make such predictions based on a limited number of games. The creation of a precise model requires the use of not only an appropriate learning algorithm and framework but also a good dataset. Active learning approaches have recently been introduced to machine learning communities. The approach can usually produce informative datasets with relatively little effort. Therefore, we have proposed an active modeling technique to predict the behavior of IPD players. The proposed method can model the opponent player’s behavior while taking advantage of interactive game environments. This experiment used twelve representative types of players as opponents, and an observer used an active modeling algorithm to model these opponents. This observer actively collected data and modeled the opponent’s behavior online. Most of our data showed that the observer was able to build, through direct actions, a more accurate model of an opponent’s behavior than when the data were collected through random actions.
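
    The core of active opponent modeling can be sketched as choosing one's own moves so as to visit the least-observed context, rather than acting randomly. The toy code below (a noisy tit-for-tat opponent and a count-based conditional model, both assumptions rather than the paper's setup) illustrates the idea:

        import numpy as np

        rng = np.random.default_rng(0)

        def opponent(my_prev, opp_prev):
            # illustrative opponent: noisy tit-for-tat (copies our last move 90% of the time)
            return my_prev if rng.random() < 0.9 else 1 - my_prev

        counts = np.zeros((2, 2, 2))      # counts[my_prev, opp_prev, opp_move]
        my_prev, opp_prev = 1, 1          # 1 = cooperate, 0 = defect
        for t in range(200):
            opp_move = opponent(my_prev, opp_prev)
            counts[my_prev, opp_prev, opp_move] += 1
            # Active step: pick our move so that the context (our move, opponent's
            # last move) entering the next round is the one observed least so far.
            my_move = int(np.argmin(counts[:, opp_move].sum(axis=1)))
            my_prev, opp_prev = my_move, opp_move

        p_cooperate = counts[..., 1] / np.maximum(counts.sum(axis=2), 1)
        print(p_cooperate)                # learned model: P(opponent cooperates | context)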

  20. 3-Dimensional Iterative Forward Model for Microwave Imaging

    DEFF Research Database (Denmark)

    Kim, Oleksiy S.; Meincke, Peter

    2006-01-01

    The efficient solution of a forward scattering problem is the key point in nonlinear inversion schemes associated with microwave imaging. In this paper the solution is presented for the volume integral equation based on the method of moments (MoM) and accelerated with the adaptive integral method...... in each iteration of the forward solution. Thus, the presented technique allows us to avoid the time-consuming procedure of the MoM matrix filling in each inversion iteration. Furthermore, the forward solution from the previous inversion iteration can be utilized in the next one as an initial guess, thus...... reducing the solution time for the forward model....
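
    The warm-starting idea described above, reusing the previous inversion iteration's forward solution as the initial guess, can be illustrated with any Krylov solver; the sketch below uses SciPy's conjugate gradient on a mock sequence of slightly perturbed problems (the MoM/AIM machinery itself is not reproduced).

        import numpy as np
        from scipy.sparse.linalg import cg

        n = 500
        A = (np.diag(4.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1)
             + np.diag(-np.ones(n - 1), -1))              # stand-in forward operator
        x_prev = np.zeros(n)
        for it in range(3):                               # mock inversion iterations
            b = np.sin(np.linspace(0.0, 3.0 + 0.1 * it, n))   # slightly changing RHS
            its = []
            x_prev, info = cg(A, b, x0=x_prev,            # previous solution as guess
                              callback=lambda xk: its.append(0))
            print(it, len(its))                           # iteration count drops after warm start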

  1. Ab initio modeling of small proteins by iterative TASSER simulations

    Directory of Open Access Journals (Sweden)

    Zhang Yang

    2007-05-01

    Full Text Available Abstract Background Predicting 3-dimensional protein structures from amino-acid sequences is an important unsolved problem in computational structural biology. The problem becomes relatively easier if close homologous proteins have been solved, as high-resolution models can be built by aligning target sequences to the solved homologous structures. However, for sequences without similar folds in the Protein Data Bank (PDB library, the models have to be predicted from scratch. Progress in the ab initio structure modeling is slow. The aim of this study was to extend the TASSER (threading/assembly/refinement method for the ab initio modeling and examine systemically its ability to fold small single-domain proteins. Results We developed I-TASSER by iteratively implementing the TASSER method, which is used in the folding test of three benchmarks of small proteins. First, data on 16 small proteins (α-root mean square deviation (RMSD of 3.8Å, with 6 of them having a Cα-RMSD α-RMSD α-RMSD of the I-TASSER models was 3.9Å, whereas it was 5.9Å using TOUCHSTONE-II software. Finally, 20 non-homologous small proteins (α-RMSD of 3.9Å was obtained for the third benchmark, with seven cases having a Cα-RMSD Conclusion Our simulation results show that I-TASSER can consistently predict the correct folds and sometimes high-resolution models for small single-domain proteins. Compared with other ab initio modeling methods such as ROSETTA and TOUCHSTONE II, the average performance of I-TASSER is either much better or is similar within a lower computational time. These data, together with the significant performance of automated I-TASSER server (the Zhang-Server in the 'free modeling' section of the recent Critical Assessment of Structure Prediction (CASP7 experiment, demonstrate new progresses in automated ab initio model generation. The I-TASSER server is freely available for academic users http://zhang.bioinformatics.ku.edu/I-TASSER.

  2. Transient thermal hydraulic modeling and analysis of ITER divertor plate system

    International Nuclear Information System (INIS)

    El-Morshedy, Salah El-Din; Hassanein, Ahmed

    2009-01-01

    A mathematical model has been developed/updated to simulate the steady state and transient thermal-hydraulics of the International Thermonuclear Experimental Reactor (ITER) divertor module. The model predicts the thermal response of the armour coating, divertor plate structural materials and coolant channels. The selected heat transfer correlations cover all operating conditions of ITER under both normal and off-normal situations. The model also accounts for the melting, vaporization, and solidification of the armour material. The developed model provides a quick benchmark of the HEIGHTS multidimensional comprehensive simulation package. The present model divides the coolant channels into specified axial regions and the divertor plate into specified radial zones, and a two-dimensional heat conduction calculation is performed to predict the temperature distribution for both steady and transient states. The model is benchmarked against experimental data obtained at Sandia National Laboratory for both bare and swirl tape coolant channel mockups. The results show very good agreement with the data for steady and transient states. The model is then used to predict the thermal behavior of the ITER plasma facing and structural materials due to a plasma instability event in which 60 MJ/m² of plasma energy is deposited over 500 ms. The results for the ITER divertor response are analyzed and compared with HEIGHTS results.

  3. Transient thermal hydraulic modeling and analysis of ITER divertor plate system

    Energy Technology Data Exchange (ETDEWEB)

    El-Morshedy, Salah El-Din [Argonne National Laboratory, Argonne, IL (United States); Atomic Energy Authority, Cairo (Egypt)], E-mail: selmorshedy@etrr2-aea.org.eg; Hassanein, Ahmed [Purdue University, West Lafayette, IN (United States)], E-mail: hassanein@purdue.edu

    2009-12-15

    A mathematical model has been developed/updated to simulate the steady state and transient thermal-hydraulics of the International Thermonuclear Experimental Reactor (ITER) divertor module. The model predicts the thermal response of the armour coating, divertor plate structural materials and coolant channels. The selected heat transfer correlations cover all operating conditions of ITER under both normal and off-normal situations. The model also accounts for the melting, vaporization, and solidification of the armour material. The developed model provides a quick benchmark of the HEIGHTS multidimensional comprehensive simulation package. The present model divides the coolant channels into specified axial regions and the divertor plate into specified radial zones, and a two-dimensional heat conduction calculation is performed to predict the temperature distribution for both steady and transient states. The model is benchmarked against experimental data obtained at Sandia National Laboratory for both bare and swirl tape coolant channel mockups. The results show very good agreement with the data for steady and transient states. The model is then used to predict the thermal behavior of the ITER plasma facing and structural materials due to a plasma instability event in which 60 MJ/m² of plasma energy is deposited over 500 ms. The results for the ITER divertor response are analyzed and compared with HEIGHTS results.

  4. Predictive capabilities, analysis and experiments for Fusion Nuclear Technology, and ITER R D

    Energy Technology Data Exchange (ETDEWEB)

    1991-01-01

    This report discusses the following topics on ITER research and development: tritium modeling; liquid metal blanket modeling; free surface liquid metal studies; and thermal conductance and thermal control experiments and modeling. (LIP)

  5. Iterative prediction of chaotic time series using a recurrent neural network

    Energy Technology Data Exchange (ETDEWEB)

    Essawy, M.A.; Bodruzzaman, M. [Tennessee State Univ., Nashville, TN (United States). Dept. of Electrical and Computer Engineering; Shamsi, A.; Noel, S. [USDOE Morgantown Energy Technology Center, WV (United States)

    1996-12-31

    Chaotic systems are known for their unpredictability due to their sensitive dependence on initial conditions. When only time series measurements from such systems are available, neural network based models are preferred due to their simplicity, availability, and robustness. However, the type of neural network used should be capable of modeling the highly non-linear behavior and the multi-attractor nature of such systems. In this paper the authors use a special type of recurrent neural network called the 'Dynamic System Imitator' (DSI), which has been proven to be capable of modeling very complex dynamic behaviors. The DSI is a fully recurrent neural network that is specially designed to model a wide variety of dynamic systems. The prediction method presented in this paper is based upon predicting one step ahead in the time series, and using that predicted value to iteratively predict the following steps. This method was applied to chaotic time series generated from the logistic, Henon, and the cubic equations, in addition to experimental pressure drop time series measured from a Fluidized Bed Reactor (FBR), which is known to exhibit chaotic behavior. The time behavior and state space attractor of the actual and network synthetic chaotic time series were analyzed and compared. The correlation dimension and the Kolmogorov entropy for both the original and network synthetic data were computed. They were found to resemble each other, confirming the success of the DSI-based chaotic system modeling.
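
    A minimal sketch of the iterative (closed-loop) prediction scheme, with a plain feed-forward network standing in for the DSI recurrent network (an assumption), on the logistic map used in the paper:

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        x = np.empty(1000); x[0] = 0.3
        for t in range(999):
            x[t + 1] = 3.9 * x[t] * (1.0 - x[t])      # chaotic logistic map

        # train a one-step-ahead predictor x[t] -> x[t+1]
        net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000,
                           random_state=0).fit(x[:-1].reshape(-1, 1), x[1:])

        pred = [x[-1]]
        for _ in range(20):                           # iterative prediction: feed each
            nxt = net.predict(np.array([[pred[-1]]]))[0]   # output back in as the next input
            pred.append(float(nxt))
        print(pred)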

  6. DISIS: prediction of drug response through an iterative sure independence screening.

    Directory of Open Access Journals (Sweden)

    Yun Fang

    Prediction of drug response based on genomic alterations is an important task in the research of personalized medicine. The current elastic net model utilizes sure independence screening to select genomic features relevant to drug response, but it may neglect the combined effect of some marginally weak features. In this work, we applied an iterative sure independence screening scheme to select drug response relevant features from the Cancer Cell Line Encyclopedia (CCLE) dataset. For each drug in CCLE, we selected up to 40 features including gene expressions, mutation and copy number alterations of cancer-related genes, and some of them are strong features despite showing weak marginal correlation with the drug response vector. Lasso regression based on the selected features showed that our prediction accuracies are higher than those obtained by elastic net regression for most drugs.
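
    A sketch of the iterative sure independence screening idea: rank features by marginal correlation with the current residual, so that marginally weak but jointly strong features can enter in later rounds, and finish with a Lasso. The 40-feature budget loosely mirrors the abstract; the learner and alpha are assumptions.

        import numpy as np
        from sklearn.linear_model import Lasso

        def isis(X, y, k=40, n_rounds=3):
            # Iterative sure independence screening followed by Lasso regression.
            per_round = k // n_rounds
            selected, resid, model = [], y - y.mean(), None
            for _ in range(n_rounds):
                corr = np.abs(X.T @ resid) / (
                    np.linalg.norm(X, axis=0) * np.linalg.norm(resid) + 1e-12)
                corr[selected] = -np.inf               # do not re-select features
                selected += list(np.argsort(corr)[-per_round:])
                model = Lasso(alpha=0.05).fit(X[:, selected], y)
                resid = y - model.predict(X[:, selected])   # re-screen on the residual
            return selected, model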

  7. Mixed price and load forecasting of electricity markets by a new iterative prediction method

    International Nuclear Information System (INIS)

    Amjady, Nima; Daraeepour, Ali

    2009-01-01

    Load and price forecasting are the two key issues for the participants of current electricity markets. However, load and price of electricity markets have complex characteristics such as nonlinearity, non-stationarity and multiple seasonality, to name a few (usually, more volatility is seen in the behavior of the electricity price signal). For these reasons, much research has been devoted to load and price forecasting, especially in recent years. However, previous research works in the area predict load and price signals separately. In this paper, a mixed model for load and price forecasting is presented, which can consider interactions of these two forecast processes. The mixed model is based on an iterative neural network based prediction technique. It is shown that the proposed model can present lower forecast errors for both load and price compared with the previous separate frameworks. Another advantage of the mixed model is that all required forecast features (from load or price) are predicted within the model without assuming known values for these features. So, the proposed model can better be adapted to real conditions of an electricity market. The forecast accuracy of the proposed mixed method is evaluated by means of real data from the New York and Spanish electricity markets. The method is also compared with some of the most recent load and price forecast techniques. (author)
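
    A toy sketch of the mixed (cross-coupled) forecasting idea: each one-step model takes the other signal's contemporaneous value as a feature, and the two predictions are iterated to mutual consistency. The network sizes, lag length and data layout are assumptions, not the paper's design; load and price are assumed to be 1-D numpy arrays.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        def mixed_forecast(load, price, lags=24, rounds=4):
            # Each model predicts the next value of its own signal from its own
            # lags plus the other signal's *contemporaneous* value.
            def make_xy(a, b):
                X = np.array([np.r_[a[t - lags:t], b[t]] for t in range(lags, len(a))])
                return X, a[lags:]
            f_load = MLPRegressor((64,), max_iter=2000,
                                  random_state=0).fit(*make_xy(load, price))
            f_price = MLPRegressor((64,), max_iter=2000,
                                   random_state=1).fit(*make_xy(price, load))
            l_hat, p_hat = load[-1], price[-1]        # crude initial guesses
            for _ in range(rounds):                   # iterate the coupled forecasts
                l_hat = f_load.predict(np.r_[load[-lags:], p_hat].reshape(1, -1))[0]
                p_hat = f_price.predict(np.r_[price[-lags:], l_hat].reshape(1, -1))[0]
            return l_hat, p_hat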

  8. Iteration schemes for parallelizing models of superconductivity

    Energy Technology Data Exchange (ETDEWEB)

    Gray, P.A. [Michigan State Univ., East Lansing, MI (United States)

    1996-12-31

    The time dependent Lawrence-Doniach model, valid for high fields and high values of the Ginzburg-Landau parameter, is often used for studying vortex dynamics in layered high-T_c superconductors. When solving these equations numerically, the added degrees of complexity due to the coupling and nonlinearity of the model often warrant the use of high-performance computers for their solution. However, the interdependence between the layers can be manipulated so as to allow parallelization of the computations at an individual layer level. The reduced parallel tasks may then be solved independently using a heterogeneous cluster of networked workstations connected together with Parallel Virtual Machine (PVM) software. Here, this parallelization of the model is discussed and several computational implementations of varying degrees of parallelism are presented. Computational results are also given which contrast properties of convergence speed, stability, and consistency of these implementations. Included in these results are models involving the motion of vortices due to an applied current and pinning effects due to various material properties.

  9. Plasma-safety assessment model and safety analyses of ITER

    International Nuclear Information System (INIS)

    Honda, T.; Okazaki, T.; Bartels, H.-H.; Uckan, N.A.; Sugihara, M.; Seki, Y.

    2001-01-01

    A plasma-safety assessment model has been provided on the basis of the plasma physics database of the International Thermonuclear Experimental Reactor (ITER) to analyze events including plasma behavior. The model was implemented in a safety analysis code (SAFALY), which consists of a 0-D dynamic plasma model and a 1-D thermal behavior model of the in-vessel components. Unusual plasma events of ITER, e.g., overfueling, were calculated using the code and plasma burning is found to be self-bounded by operation limits or passively shut down due to impurity ingress from overheated divertor targets. Sudden transition of divertor plasma might lead to failure of the divertor target because of a sharp increase of the heat flux. However, the effects of the aggravating failure can be safely handled by the confinement boundaries. (author)

  10. Assessment and modeling of inductive and non-inductive scenarios for ITER

    International Nuclear Information System (INIS)

    Boucher, D.; Vayakis, G.; Moreau, D.

    1999-01-01

    This paper presents recent developments in the modeling and simulation of ITER performance and scenarios. The first part presents an improved modeling of coupled divertor/main plasma operation, including the simulation of the measurements involved in the control loop. The second part explores the fusion performance predicted under non-inductive operation with an internal transport barrier. The final part covers a detailed scenario for non-inductive operation using a reverse shear configuration with lower hybrid and fast wave current drive. (author)

  11. Modeling Results For the ITER Cryogenic Fore Pump. Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Pfotenhauer, John M. [University of Wisconsin, Madison, WI (United States); Zhang, Dongsheng [University of Wisconsin, Madison, WI (United States)

    2014-03-31

    A numerical model characterizing the operation of a cryogenic fore-pump (CFP) for ITER has been developed at the University of Wisconsin – Madison during the period from March 15, 2011 through June 30, 2014. The purpose of the ITER-CFP is to separate hydrogen isotopes from helium gas, both making up the exhaust components from the ITER reactor. The model explicitly determines the amount of hydrogen that is captured by the supercritical-helium-cooled pump as a function of the inlet temperature of the supercritical helium, its flow rate, and the inlet conditions of the hydrogen gas flow. Furthermore the model computes the location and amount of hydrogen captured in the pump as a function of time. Throughout the model’s development, and as a calibration check for its results, it has been extensively compared with the measurements of a CFP prototype tested at Oak Ridge National Lab. The results of the model demonstrate that the quantity of captured hydrogen is very sensitive to the inlet temperature of the helium coolant on the outside of the cryopump. Furthermore, the model can be utilized to refine those tests, and suggests methods that could be incorporated in the testing to enhance the usefulness of the measured data.

  12. Speeding up predictive electromagnetic simulations for ITER application

    Energy Technology Data Exchange (ETDEWEB)

    Alekseev, A.B. [ITER Organization, Route de Vinon sur Verdon, 13067 St. Paul Lez Durance Cedex (France); Amoskov, V.M. [JSC “NIIEFA”, Doroga na Metallostroy 3, St. Petersburg, 196641 (Russian Federation); Bazarov, A.M., E-mail: alexander.bazarov@gmail.com [JSC “NIIEFA”, Doroga na Metallostroy 3, St. Petersburg, 196641 (Russian Federation); Belov, A.V. [JSC “NIIEFA”, Doroga na Metallostroy 3, St. Petersburg, 196641 (Russian Federation); Belyakov, V.A. [JSC “NIIEFA”, Doroga na Metallostroy 3, St. Petersburg, 196641 (Russian Federation); St. Petersburg State University, 7/9 Universitetskaya Embankment, St. Petersburg, 199034 (Russian Federation); Gapionok, E.I. [JSC “NIIEFA”, Doroga na Metallostroy 3, St. Petersburg, 196641 (Russian Federation); Gornikel, I.V. [Alphysica GmbH, Unterreut, 6, D-76135, Karlsruhe (Germany); Gribov, Yu. V. [ITER Organization, Route de Vinon sur Verdon, 13067 St. Paul Lez Durance Cedex (France); Kukhtin, V.P.; Lamzin, E.A. [JSC “NIIEFA”, Doroga na Metallostroy 3, St. Petersburg, 196641 (Russian Federation); Sytchevsky, S.E. [JSC “NIIEFA”, Doroga na Metallostroy 3, St. Petersburg, 196641 (Russian Federation); St. Petersburg State University, 7/9 Universitetskaya Embankment, St. Petersburg, 199034 (Russian Federation)

    2017-05-15

    Highlights: • A general concept of an engineering EM simulator for tokamak applications is proposed. • The algorithm is based on influence functions and the superposition principle. • The software works with extensive databases and offers parallel processing. • The simulator allows the solution to be obtained hundreds of times faster. - Abstract: The paper presents an attempt to proceed to a general concept of a software environment for fast and consistent multi-task simulation of EM transients (an engineering simulator for tokamak applications). As an example, the ITER tokamak is taken to introduce a computational technique. The strategy exploits parallel processing with optimized simulation algorithms based on the use of influence functions and the superposition principle to take full advantage of parallelism. The software has been tested on a multi-core supercomputer. The results were compared with data obtained in TYPHOON computations. A discrepancy was found to be below 0.4%. The computation cost for the simulator is proportional to the number of observation points. The average computation time with the simulator is found to be hundreds of times less than the time required to solve numerically a relevant system of differential equations for known software tools.
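
    The influence-function/superposition strategy can be illustrated in a few lines: unit-source responses are tabulated once (a random stand-in matrix here, in place of detailed field solves), and any transient is then a weighted sum whose cost is proportional to the number of observation points, as stated above.

        import numpy as np

        n_sources, n_obs, n_steps = 6, 100, 400
        rng = np.random.default_rng(1)
        # response at each observation point to a unit current in each source,
        # tabulated once (random stand-in for the precomputed database)
        influence = rng.normal(size=(n_obs, n_sources))

        # arbitrary source transients; the fields follow by pure superposition
        currents = np.sin(np.linspace(0, 10, n_steps))[:, None] * rng.uniform(1, 2, n_sources)
        fields = currents @ influence.T               # shape (n_steps, n_obs)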

  13. Pipeline Processing with an Iterative, Context-Based Detection Model

    Science.gov (United States)

    2016-01-22

    AFRL-RV-PS-TR-2016-0080. Approved for public release; distribution is unlimited. The optimum choice for steering vectors is the adaptive beamformer weighting [Capon et al., 1967], also known as the minimum variance beamformer, which under favourable conditions gives the capability to detect events down to magnitude 2.0 [Gibbons et al., 2011]. Figure 3 of the report shows the location of the SPITS array in relation to Novaya Zemlya.

  14. Right adrenal vein: comparison between adaptive statistical iterative reconstruction and model-based iterative reconstruction.

    Science.gov (United States)

    Noda, Y; Goshima, S; Nagata, S; Miyoshi, T; Kawada, H; Kawai, N; Tanahashi, Y; Matsuo, M

    2018-02-16

    To compare right adrenal vein (RAV) visualisation and contrast enhancement degree on adrenal venous phase images reconstructed using adaptive statistical iterative reconstruction (ASiR) and model-based iterative reconstruction (MBIR) techniques. This prospective study was approved by the institutional review board, and written informed consent was waived. Fifty-seven consecutive patients who underwent adrenal venous phase imaging were enrolled. The same raw data were reconstructed using ASiR 40% and MBIR. An expert and a beginner independently reviewed the computed tomography (CT) images. RAV visualisation rates, background noise, and CT attenuation of the RAV, right adrenal gland, inferior vena cava (IVC), hepatic vein, and bilateral renal veins were compared between the two reconstruction techniques. RAV visualisation rates were higher with MBIR than with ASiR (95% versus 88%, p=0.13 for the expert and 93% versus 75%, p=0.002 for the beginner, respectively). RAV visualisation confidence ratings with MBIR were significantly greater than with ASiR (p<0.0001, for both the beginner and the expert). The mean background noise was significantly lower with MBIR than with ASiR (p<0.0001). Mean CT attenuation values of the RAV, right adrenal gland, IVC, and hepatic vein were comparable between the two techniques (p=0.12-0.91). Mean CT attenuation values of the bilateral renal veins were significantly higher with MBIR than with ASiR (p=0.0013 and 0.02). Reconstruction of adrenal venous phase images using MBIR significantly reduces background noise, leading to an improvement in RAV visualisation compared with ASiR. Copyright © 2018 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  15. Weld distortion prediction of the ITER Vacuum Vessel using Finite Element simulations

    International Nuclear Information System (INIS)

    Caixas, Joan; Guirao, Julio; Bayon, Angel; Jones, Lawrence; Arbogast, Jean François; Barbensi, Andrea; Dans, Andres; Facca, Aldo; Fernandez, Elena; Fernández, José; Iglesias, Silvia; Jimenez, Marc; Jucker, Philippe; Micó, Gonzalo; Ordieres, Javier; Pacheco, Jose Miguel; Paoletti, Roberto; Sanguinetti, Gian Paolo; Stamos, Vassilis; Tacconelli, Massimiliano

    2013-01-01

    Highlights: ► Computational simulations of the weld processes can rapidly assess different sequences. ► Prediction of welding distortion to optimize the manufacturing sequence. ► Accurate shape prediction after each manufacturing phase allows modified procedures to be generated and distortions to be pre-compensated. ► The simulation methodology is improved using condensed computation techniques with ANSYS in order to reduce computational resources. ► For each welding process, the models are calibrated with the results of coupons and mock-ups. -- Abstract: The as-welded surfaces of the ITER Vacuum Vessel sectors need to be within a very tight tolerance, without a full-scale prototype. In order to predict welding distortion and optimize the manufacturing sequence, the industrial contract includes extensive computational simulations of the weld processes, which can rapidly assess different sequences. The accurate shape prediction after each manufacturing phase enables actual distortions to be compared with the welding simulations to generate modified procedures and pre-compensate distortions. While previous mock-ups used heavy welded-on jigs to try to restrain the distortions, this method allows the use of lightweight jigs and yields important cost and rework savings. In order to enable the optimization of different alternative welding sequences, the simulation methodology is improved using condensed computation techniques with ANSYS in order to reduce computational resources. For each welding process, the models are calibrated with the results of coupons and mock-ups. The calibration is used to construct representative models of each segment and sector. This paper describes the application of the enhanced simulation methodology, with condensed Finite Element computation techniques, to the construction of the Vacuum Vessel sector, and presents results of the calibration on several test pieces for different types of welds.

  16. ITER transient consequences for material damage: modelling versus experiments

    Science.gov (United States)

    Bazylev, B.; Janeschitz, G.; Landman, I.; Pestchanyi, S.; Loarte, A.; Federici, G.; Merola, M.; Linke, J.; Zhitlukhin, A.; Podkovyrov, V.; Klimov, N.; Safronov, V.

    2007-03-01

    Carbon-fibre composite (CFC) and tungsten macrobrush armours are foreseen as PFCs for the ITER divertor. In ITER the main mechanisms of metallic armour damage remain surface melting and melt motion erosion. In the case of CFC armour, due to the rather different heat conductivities of CFC fibres, a noticeable erosion of the PAN bundles may occur at rather small heat loads. Experiments carried out in the plasma gun facility QSPA-T for the ITER-like edge localized mode (ELM) heat load also demonstrated significant erosion of the frontal and lateral brush edges. Numerical simulations of the CFC and tungsten (W) macrobrush target damage, accounting for the heat loads at the face and lateral brush edges, were carried out for QSPA-T conditions using the three-dimensional (3D) code PHEMOBRID. The modelling results of CFC damage are in good qualitative and quantitative agreement with the experiments. Estimation of the droplet splashing caused by the Kelvin-Helmholtz (KH) instability was performed.

  17. ITER transient consequences for material damage: modelling versus experiments

    International Nuclear Information System (INIS)

    Bazylev, B; Janeschitz, G; Landman, I; Pestchanyi, S; Loarte, A; Federici, G; Merola, M; Linke, J; Zhitlukhin, A; Podkovyrov, V; Klimov, N; Safronov, V

    2007-01-01

    Carbon-fibre composite (CFC) and tungsten macrobrush armours are foreseen as PFCs for the ITER divertor. In ITER the main mechanisms of metallic armour damage remain surface melting and melt motion erosion. In the case of CFC armour, due to the rather different heat conductivities of CFC fibres, a noticeable erosion of the PAN bundles may occur at rather small heat loads. Experiments carried out in the plasma gun facility QSPA-T for the ITER-like edge localized mode (ELM) heat load also demonstrated significant erosion of the frontal and lateral brush edges. Numerical simulations of the CFC and tungsten (W) macrobrush target damage, accounting for the heat loads at the face and lateral brush edges, were carried out for QSPA-T conditions using the three-dimensional (3D) code PHEMOBRID. The modelling results of CFC damage are in good qualitative and quantitative agreement with the experiments. Estimation of the droplet splashing caused by the Kelvin-Helmholtz (KH) instability was performed.

  18. Generalization of non-iterative numerical methods for damage-plastic behaviour modeling

    NARCIS (Netherlands)

    Graca-e-Costa, R.; Alfaiate, J.; Dias-da-Costa, D.; Sluys, L.J.

    2013-01-01

    Modelling fracture in concrete or masonry is known to be problematic regarding the robustness of iterative solution procedures, and the use of non-iterative methods (or methods that minimize the use of iterations) in quasi-brittle materials is now under strong development, due to the necessity to obtain

  19. ITER-like current ramps in JET with ILW: experiments, modelling and consequences for ITER

    Czech Academy of Sciences Publication Activity Database

    Hogeweij, G.M.D.; Calabrò, G.; Sips, A.C.C.; Maggi, C.F.; De Tommasi, G.M.; Joffrin, E.; Loarte, A.; Maviglia, F.; Mlynář, Jan; Rimini, F.G.; Pütterich, T.

    2015-01-01

    Vol. 55, No. 1 (2015), 013009. ISSN 0029-5515 Institutional support: RVO:61389021 Keywords: tokamak * ramp-up * JET * ITER Subject RIV: BL - Plasma and Gas Discharge Physics Impact factor: 4.040, year: 2015 http://iopscience.iop.org/article/10.1088/0029-5515/55/1/013009#metrics

  20. ITER physics-safety interface: models and assessments

    International Nuclear Information System (INIS)

    Uckan, N.A.; Putvinski, S.; Wesley, J.; Bartels, H-W.; Honda, T.; Boucher, D.; Fujisawa, N.; Post, D.; Rosenbluth, M.

    1996-01-01

    Plasma operation conditions and physics requirements to be used as a basis for safety analysis studies are developed, and physics results motivated by safety considerations are presented for the ITER design. Physics guidelines and specifications for enveloping plasma dynamic events for Category I (operational event), Category II (likely event), and Category III (unlikely event) are characterized. Safety related physics areas that are considered are: (i) the effect of plasma on machine and safety (disruptions, runaway electrons, fast plasma shutdown) and (ii) the plasma response to an ex-vessel LOCA from the first wall, providing a potential passive plasma shutdown due to Be evaporation. Physics models and expressions developed are implemented in the safety analysis code SAFALY, which couples a 0-D dynamic plasma model to the thermal response of the in-vessel components. Results from SAFALY are presented.

  1. ITER physics-safety interface: models and assessments

    Energy Technology Data Exchange (ETDEWEB)

    Uckan, N.A. [Oak Ridge National Lab., TN (United States); Putvinski, S.; Wesley, J.; Bartels, H-W. [ITER San Diego Joint Work Site, CA (United States); Honda, T. [Hitachi Ltd., Ibaraki (Japan). Hitachi Research Lab.; Amano, T. [National Inst. for Fusion Science, Nagoya (Japan); Boucher, D.; Fujisawa, N.; Post, D.; Rosenbluth, M. [ITER San Diego Joint Work Site, CA (United States)

    1996-10-01

    Plasma operation conditions and physics requirements to be used as a basis for safety analysis studies are developed, and physics results motivated by safety considerations are presented for the ITER design. Physics guidelines and specifications for enveloping plasma dynamic events for Category I (operational event), Category II (likely event), and Category III (unlikely event) are characterized. Safety related physics areas that are considered are: (i) the effect of plasma on machine and safety (disruptions, runaway electrons, fast plasma shutdown) and (ii) the plasma response to an ex-vessel LOCA from the first wall, providing a potential passive plasma shutdown due to Be evaporation. Physics models and expressions developed are implemented in the safety analysis code SAFALY, which couples a 0-D dynamic plasma model to the thermal response of the in-vessel components. Results from SAFALY are presented.

  2. Impact of model-based iterative reconstruction on image quality of contrast-enhanced neck CT.

    Science.gov (United States)

    Gaddikeri, S; Andre, J B; Benjert, J; Hippe, D S; Anzai, Y

    2015-02-01

    Improved image quality is clinically desired for contrast-enhanced CT of the neck. We compared 30% adaptive statistical iterative reconstruction and model-based iterative reconstruction algorithms for the assessment of image quality of contrast-enhanced CT of the neck. Neck contrast-enhanced CT data from 64 consecutive patients were reconstructed retrospectively by using 30% adaptive statistical iterative reconstruction and model-based iterative reconstruction. Objective image quality was assessed by comparing SNR, contrast-to-noise ratio, and background noise at levels 1 (mandible) and 2 (superior mediastinum). Two independent blinded readers subjectively graded the image quality on a scale of 1-5 (grade 5 = excellent image quality without artifacts and grade 1 = nondiagnostic image quality with significant artifacts). The percentage of agreement and disagreement between the 2 readers was assessed. Compared with 30% adaptive statistical iterative reconstruction, model-based iterative reconstruction significantly improved the SNR and contrast-to-noise ratio at levels 1 and 2. Model-based iterative reconstruction also decreased background noise at level 1 (P = .016), though there was no difference at level 2 (P = .61). Model-based iterative reconstruction was scored higher than 30% adaptive statistical iterative reconstruction by both reviewers at the nasopharynx and for overall image quality. Model-based iterative reconstruction offers improved subjective and objective image quality as evidenced by a higher SNR and contrast-to-noise ratio and lower background noise within the same dataset for contrast-enhanced neck CT. Model-based iterative reconstruction has the potential to reduce the radiation dose while maintaining the image quality, with a minor downside being prominent artifacts related to thyroid shield use on model-based iterative reconstruction. © 2015 by American Journal of Neuroradiology.

  3. Modelling of radiation impact on ITER Beryllium wall

    Science.gov (United States)

    Landman, I. S.; Janeschitz, G.

    2009-04-01

    In the ITER H-mode confinement regime, edge localized instabilities (ELMs) will perturb the discharge. Plasma lost after each ELM moves along magnetic field lines and impacts on the divertor armour, causing plasma contamination by back-propagating eroded carbon or tungsten. These impurities produce an enhanced radiation flux distributed mainly over the beryllium main chamber wall. The simulation of the complicated processes involved is the subject of the integrated tokamak code TOKES, which is currently under development. This work describes the new TOKES model for radiation transport through the confined plasma. Equations for the level populations of the multi-fluid plasma species and the propagation of different kinds of radiation (resonance, recombination and bremsstrahlung photons) are implemented. First simulation results without accounting for resonance lines are presented.

  4. Modeling of ITER related vacuum gas pumping distribution systems

    Energy Technology Data Exchange (ETDEWEB)

    Misdanitis, Serafeim [University of Thessaly, Department of Mechanical Engineering, Pedion Areos, 38334 Volos (Greece); Association EURATOM - Hellenic Republic (Greece); Valougeorgis, Dimitris, E-mail: diva@mie.uth.gr [University of Thessaly, Department of Mechanical Engineering, Pedion Areos, 38334 Volos (Greece); Association EURATOM - Hellenic Republic (Greece)

    2013-10-15

    Highlights: • An algorithm to simulate vacuum gas flows through pipe networks consisting of long channels and channels of moderate length has been developed. • Analysis and results are based on kinetic theory as described by the BGK kinetic model equation. • The algorithm is capable of computing the mass flow rates (or the conductance) through the pipes and the pressure at the nodes of the network. • Since a kinetic approach is implemented, the algorithm is valid in the whole range of the Knudsen number. • The developed algorithm will be useful for simulating the vacuum distribution systems of ITER and future fusion reactors. -- Abstract: A novel algorithm recently developed to solve steady-state isothermal vacuum gas dynamics flows through pipe networks consisting of long tubes is extended to include, in addition to long channels, channels of moderate length 10 < L/D < 50. This is achieved by implementing the so-called end effect treatment/correction. Analysis and results are based on kinetic theory as described by the Boltzmann equation or associated reliable kinetic model equations. For a pipe network of known geometry the algorithm is capable of computing the mass flow rates (or the conductance) through the pipes as well as the pressure heads at the nodes of the network. The feasibility of the approach is demonstrated by simulating two ITER related vacuum distribution systems, one in the viscous regime and a second one in a wide range of Knudsen numbers. Since a kinetic approach is implemented, the algorithm is valid and the results are accurate in the whole range of the Knudsen number, while the involved computational effort remains small.
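
    For illustration, a minimal Python sketch of the network solve described above, under simplifying assumptions: pipe conductances are taken as fixed numbers, so a single linear solve suffices, whereas the actual algorithm derives each conductance from the BGK kinetic model as a function of the local rarefaction and therefore iterates. All names and values are made up.

    ```python
    import numpy as np

    def solve_network(n_nodes, pipes, fixed):
        """pipes: list of (i, j, C); fixed: dict node -> prescribed pressure.

        Each pipe carries a throughput Q = C * (p_i - p_j); mass conservation
        at every free node gives a linear system for the node pressures."""
        A = np.zeros((n_nodes, n_nodes))
        b = np.zeros(n_nodes)
        for i, j, C in pipes:              # assemble conservation equations
            A[i, i] += C; A[i, j] -= C
            A[j, j] += C; A[j, i] -= C
        for node, p in fixed.items():      # overwrite rows of boundary nodes
            A[node, :] = 0.0
            A[node, node] = 1.0
            b[node] = p
        p = np.linalg.solve(A, b)
        Q = {(i, j): C * (p[i] - p[j]) for i, j, C in pipes}
        return p, Q

    # Toy 4-node network: pump chamber at node 0, source chamber at node 3.
    pressures, flows = solve_network(
        4, pipes=[(0, 1, 2.0), (1, 2, 1.0), (1, 3, 0.5)],
        fixed={0: 0.1, 3: 1.0})
    print(pressures, flows)
    ```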

  5. Approximating Attractors of Boolean Networks by Iterative CTL Model Checking.

    Science.gov (United States)

    Klarner, Hannes; Siebert, Heike

    2015-01-01

    This paper introduces the notion of approximating asynchronous attractors of Boolean networks by minimal trap spaces. We define three criteria for determining the quality of an approximation: "faithfulness" which requires that the oscillating variables of all attractors in a trap space correspond to their dimensions, "univocality" which requires that there is a unique attractor in each trap space, and "completeness" which requires that there are no attractors outside of a given set of trap spaces. Each is a reachability property for which we give equivalent model checking queries. Whereas faithfulness and univocality can be decided by model checking the corresponding subnetworks, the naive query for completeness must be evaluated on the full state space. Our main result is an alternative approach which is based on the iterative refinement of an initially poor approximation. The algorithm detects so-called autonomous sets in the interaction graph, variables that contain all their regulators, and considers their intersection and extension in order to perform model checking on the smallest possible state spaces. A benchmark, in which we apply the algorithm to 18 published Boolean networks, is given. In each case, the minimal trap spaces are faithful, univocal, and complete, which suggests that they are in general good approximations for the asymptotics of Boolean networks.
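
    The detection of autonomous sets lends itself to a compact illustration. The sketch below (plain Python, with a made-up three-variable network) computes the closure of a seed set under the "contains all its regulators" condition; the published algorithm additionally works with intersections and extensions of such sets before model checking the induced subnetworks.

    ```python
    def autonomous_closure(seed, regulators):
        """Smallest superset of `seed` containing every regulator of its
        members; model checking can be restricted to this subnetwork."""
        closed = set(seed)
        frontier = list(seed)
        while frontier:
            v = frontier.pop()
            for r in regulators[v]:
                if r not in closed:
                    closed.add(r)
                    frontier.append(r)
        return closed

    # regulators[v] = variables appearing in the update function of v
    regulators = {"x": {"x", "y"}, "y": {"x"}, "z": {"y", "z"}}
    print(autonomous_closure({"y"}, regulators))  # {'x', 'y'}; z not needed
    ```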

  6. Approximating attractors of Boolean networks by iterative CTL model checking

    Directory of Open Access Journals (Sweden)

    Hannes eKlarner

    2015-09-01

    Full Text Available This paper introduces the notion of approximating asynchronous attractors of Boolean networks by minimal trap spaces. We define three criteria for determining the quality of an approximation: faithfulness, which requires that the oscillating variables of all attractors in a trap space correspond to their dimensions; univocality, which requires that there is a unique attractor in each trap space; and completeness, which requires that there are no attractors outside of a given set of trap spaces. Each is a reachability property for which we give equivalent model checking queries. Whereas faithfulness and univocality can be decided by model checking the corresponding subnetworks, the naive query for completeness must be evaluated on the full state space. Our main result is an alternative approach which is based on the iterative refinement of an initially poor approximation. The algorithm detects so-called autonomous sets in the interaction graph, variables that contain all their regulators, and considers their intersection and extension in order to perform model checking on the smallest possible state spaces. A benchmark, in which we apply the algorithm to 18 published Boolean networks, is given. In each case, the minimal trap spaces are faithful, univocal and complete, which suggests that they are in general good approximations for the asymptotics of Boolean networks.

  7. Model-based normalization for iterative 3D PET image

    International Nuclear Information System (INIS)

    Bai, B.; Li, Q.; Asma, E.; Leahy, R.M.; Holdsworth, C.H.; Chatziioannou, A.; Tai, Y.C.

    2002-01-01

    We describe a method for normalization in 3D PET for use with maximum a posteriori (MAP) or other iterative model-based image reconstruction methods. This approach is an extension of previous factored normalization methods in which we include separate factors for detector sensitivity, geometric response, block effects and deadtime. Since our MAP reconstruction approach already models some of the geometric factors in the forward projection, the normalization factors must be modified to account only for effects not already included in the model. We describe a maximum likelihood approach to joint estimation of the count-rate independent normalization factors, which we apply to data from a uniform cylindrical source. We then compute block-wise and block-profile deadtime correction factors using singles and coincidence data, respectively, from a multiframe cylindrical source. We have applied this method for reconstruction of data from the Concorde microPET P4 scanner. Quantitative evaluation of this method using well-counter measurements of activity in a multicompartment phantom compares favourably with normalization based directly on cylindrical source measurements. (author)
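
    The factored form of the normalization can be sketched in a few lines. The toy Python example below assumes the simplest model, n_ij = eps_i * eps_j * g_ij (crystal efficiencies times a fixed geometric factor), and estimates the efficiencies from noise-free uniform-source data with a fan-sum fixed-point iteration; the method described above additionally includes block and deadtime factors and a proper maximum likelihood treatment.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_det = 32
    true_eps = rng.uniform(0.8, 1.2, n_det)       # crystal efficiencies
    geom = np.ones((n_det, n_det))                # flat geometry for the sketch
    counts = np.outer(true_eps, true_eps) * geom  # uniform-source coincidences
    np.fill_diagonal(counts, 0.0)

    eps = np.ones(n_det)
    for _ in range(50):                           # fan-sum fixed-point update
        expected = np.outer(eps, eps) * geom
        np.fill_diagonal(expected, 0.0)
        eps *= counts.sum(axis=1) / expected.sum(axis=1)
        eps /= eps.mean()                         # fix the arbitrary scale

    print(np.abs(eps - true_eps / true_eps.mean()).max())  # ~0: recovered
    ```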

  8. Railway track geometry degradation due to differential settlement of ballast/subgrade - Numerical prediction by an iterative procedure

    Science.gov (United States)

    Nielsen, Jens C. O.; Li, Xin

    2018-01-01

    An iterative procedure for numerical prediction of long-term degradation of railway track geometry (longitudinal level) due to accumulated differential settlement of ballast/subgrade is presented. The procedure is based on a time-domain model of dynamic vehicle-track interaction to calculate the contact loads between sleepers and ballast in the short-term, which are then used in an empirical model to determine the settlement of ballast/subgrade below each sleeper in the long-term. The number of load cycles (wheel passages) accounted for in each iteration step is determined by an adaptive step length given by a maximum settlement increment. To reduce the computational effort for the simulations of dynamic vehicle-track interaction, complex-valued modal synthesis with a truncated modal set is applied for the linear subset of the discretely supported track model with non-proportional spatial distribution of viscous damping. Gravity loads and state-dependent vehicle, track and wheel-rail contact conditions are accounted for as external loads on the modal model, including situations involving loss of (and recovered) wheel-rail contact, impact between hanging sleeper and ballast, and/or a prescribed variation of non-linear track support stiffness properties along the track model. The procedure is demonstrated by calculating the degradation of longitudinal level over time as initiated by a prescribed initial local rail irregularity (dipped welded rail joint).
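
    The structure of the iterative procedure, with its adaptive step length, can be conveyed with a toy Python sketch. Both the dynamic-load model and the settlement law below are made up; in the published procedure the loads come from a full time-domain vehicle-track simulation and the settlement law is empirical.

    ```python
    import numpy as np

    n_sleepers = 60
    level = np.zeros(n_sleepers)       # longitudinal level error [mm]
    level[30] = 1.0                    # prescribed initial irregularity (dip)
    settlement = np.zeros(n_sleepers)

    def sleeper_loads(level):
        # Stand-in for the short-term dynamic simulation: load amplification
        # grows with the local geometry gradient.
        return 80.0 * (1.0 + 2.0 * np.abs(np.gradient(level)))       # [kN]

    def settlement_per_cycle(load):
        # Made-up empirical ballast/subgrade law (power of sleeper load).
        return 1e-5 * (load / 80.0) ** 3                             # [mm]

    max_increment, cycles = 0.05, 0    # adaptive step: max settlement [mm]
    while settlement.max() < 2.0:      # degrade until 2 mm peak settlement
        rate = settlement_per_cycle(sleeper_loads(level + settlement))
        n = max(1, int(max_increment / rate.max()))   # cycles in this step
        settlement += rate * n
        cycles += n

    print(f"~{cycles} load cycles to reach "
          f"{settlement.max():.2f} mm peak settlement")
    ```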

  9. Iterative prediction of chaotic time series using a recurrent neural network. Quarterly progress report, January 1, 1995--March 31, 1995

    Energy Technology Data Exchange (ETDEWEB)

    Bodruzzaman, M.; Essawy, M.A.

    1996-03-31

    Chaotic systems are known for their unpredictability due to their sensitive dependence on initial conditions. When only time series measurements from such systems are available, neural network based models are preferred due to their simplicity, availability, and robustness. However, the type of neural network used should be capable of modeling the highly non-linear behavior and the multi-attractor nature of such systems. In this paper we use a special type of recurrent neural network called the "Dynamic System Imitator" (DSI), which has been proven to be capable of modeling very complex dynamic behaviors. The DSI is a fully recurrent neural network that is specially designed to model a wide variety of dynamic systems. The prediction method presented in this paper is based upon predicting one step ahead in the time series, and using that predicted value to iteratively predict the following steps. This method was applied to chaotic time series generated from the logistic, Henon, and cubic equations, in addition to experimental pressure drop time series measured from a Fluidized Bed Reactor (FBR), which is known to exhibit chaotic behavior. The time behavior and state space attractor of the actual and network synthetic chaotic time series were analyzed and compared. The correlation dimension and the Kolmogorov entropy for both the original and network synthetic data were computed. They were found to resemble each other, confirming the success of the DSI based chaotic system modeling.
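
    The iterated prediction scheme itself is simple to state in code. In the sketch below the exact logistic map stands in for the trained DSI network, and a slightly mistuned copy plays the role of an imperfect model, which makes the hallmark of chaos visible: iterated one-step predictions diverge from the truth after a few steps.

    ```python
    def logistic(x, r=3.9):
        return r * x * (1.0 - x)

    def iterate_prediction(model, x0, n_steps):
        """Feed each one-step-ahead prediction back as the next input."""
        xs = [x0]
        for _ in range(n_steps):
            xs.append(model(xs[-1]))
        return xs

    truth = iterate_prediction(logistic, 0.2, 20)
    approx = iterate_prediction(lambda x: logistic(x, r=3.899), 0.2, 20)
    for t, (a, b) in enumerate(zip(truth, approx)):
        print(t, round(abs(a - b), 4))   # error grows roughly exponentially
    ```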

  10. Parallelization of the model-based iterative reconstruction algorithm DIRA

    International Nuclear Information System (INIS)

    Oertenberg, A.; Sandborg, M.; Alm Carlsson, G.; Malusek, A.; Magnusson, M.

    2016-01-01

    New paradigms for parallel programming have been devised to simplify software development on multi-core processors and many-core graphical processing units (GPU). Despite their obvious benefits, the parallelization of existing computer programs is not an easy task. In this work, the use of the Open Multiprocessing (OpenMP) and Open Computing Language (OpenCL) frameworks is considered for the parallelization of the model-based iterative reconstruction algorithm DIRA with the aim to significantly shorten the code's execution time. Selected routines were parallelized using OpenMP and OpenCL libraries; some routines were converted from MATLAB to C and optimised. Parallelization of the code with the OpenMP was easy and resulted in an overall speedup of 15 on a 16-core computer. Parallelization with OpenCL was more difficult owing to differences between the central processing unit and GPU architectures. The resulting speedup was substantially lower than the theoretical peak performance of the GPU; the cause was explained. (authors)

  11. Standard and reduced radiation dose liver CT images: adaptive statistical iterative reconstruction versus model-based iterative reconstruction-comparison of findings and image quality.

    Science.gov (United States)

    Shuman, William P; Chan, Keith T; Busey, Janet M; Mitsumori, Lee M; Choi, Eunice; Koprowicz, Kent M; Kanal, Kalpana M

    2014-12-01

    To investigate whether reduced radiation dose liver computed tomography (CT) images reconstructed with model-based iterative reconstruction (MBIR) might compromise depiction of clinically relevant findings or might have decreased image quality when compared with clinical standard radiation dose CT images reconstructed with adaptive statistical iterative reconstruction (ASIR). With institutional review board approval, informed consent, and HIPAA compliance, 50 patients (39 men, 11 women) who underwent liver CT were prospectively included. After a portal venous pass with ASIR images, a 60% reduced radiation dose pass was added with MBIR images. One reviewer scored ASIR image quality and marked findings. Two additional independent reviewers noted whether marked findings were present on MBIR images and assigned scores for relative conspicuity, spatial resolution, image noise, and image quality. Liver and aorta Hounsfield units and image noise were measured. Volume CT dose index and size-specific dose estimate (SSDE) were recorded. Qualitative reviewer scores were summarized. Formal statistical inference for signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), volume CT dose index, and SSDE was made (paired t tests), with Bonferroni adjustment. Two independent reviewers identified all 136 ASIR image findings (n = 272) on MBIR images, scoring them as equal or better for conspicuity, spatial resolution, and image noise in 94.1% (256 of 272), 96.7% (263 of 272), and 99.3% (270 of 272), respectively. In 50 image sets, two reviewers

  12. The steady performance prediction of propeller-rudder-bulb system based on potential iterative method

    International Nuclear Information System (INIS)

    Liu, Y B; Su, Y M; Ju, L; Huang, S L

    2012-01-01

    A new numerical method was developed for predicting the steady hydrodynamic performance of a propeller-rudder-bulb system. In the calculation, the rudder and bulb were taken into account as a whole, and the potential based surface panel method was applied both to the propeller and to the rudder-bulb system. The interaction between the propeller and the rudder-bulb was taken into account by a velocity potential iteration, in which the influence of propeller rotation was considered through the average influence coefficient. In the influence coefficient computation, the singular value should be found and deleted. Numerical results showed that the presented method is effective for predicting the steady hydrodynamic performance of propeller-rudder and propeller-rudder-bulb systems. Compared with the induced velocity iterative method, the presented method saves programming and calculation time. By changing its dimensions, the bulb size, the principal parameter affecting the energy-saving effect, was studied; the results show that the bulb on the rudder has an optimal size at the design advance coefficient.
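
    The velocity potential iteration amounts to a fixed-point loop between the two solvers. The toy Python sketch below replaces the two panel-method solutions by linear stand-ins, purely to show the structure of the coupling iteration; all coefficients are made up.

    ```python
    def solve_propeller(v_from_rudder):
        # Stand-in for the propeller panel solution: returns the velocity it
        # induces at the rudder, given the rudder's induced velocity.
        return 1.0 + 0.3 * v_from_rudder

    def solve_rudder_bulb(v_from_propeller):
        # Stand-in for the rudder-bulb panel solution.
        return 0.2 * v_from_propeller

    v_p = v_r = 0.0
    for it in range(50):
        v_p_new = solve_propeller(v_r)
        v_r_new = solve_rudder_bulb(v_p_new)
        if abs(v_p_new - v_p) + abs(v_r_new - v_r) < 1e-12:
            break
        v_p, v_r = v_p_new, v_r_new

    print(f"converged in {it} iterations: v_p = {v_p:.6f}, v_r = {v_r:.6f}")
    ```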

  13. ITER central solenoid model coil heat treatment complete and assembly started

    International Nuclear Information System (INIS)

    Thome, R.J.; Okuno, K.

    1998-01-01

    A major R and D task in the ITER program is to fabricate a Superconducting Model Coil for the Central Solenoid to establish the design and fabrication methods for ITER size coils and to demonstrate conductor performance. Completion of its components is expected in 1998, to be followed by assembly with structural components and testing in a facility at JAERI

  14. Cultural Resource Predictive Modeling

    Science.gov (United States)

    2017-10-01

    refining formal, inductive predictive models is the quality of the archaeological and environmental data. To build models efficiently, relevant...geomorphology, and historic information. Lessons Learned: The original model was focused on the identification of prehistoric resources. This...system but uses predictive modeling informally. For example, there is no probability for buried archaeological deposits on the Burton Mesa, but there is

  15. BAKTRAK: backtracking drifting objects using an iterative algorithm with a forward trajectory model

    Science.gov (United States)

    Breivik, Øyvind; Bekkvik, Tor Christian; Wettre, Cecilie; Ommundsen, Atle

    2012-02-01

    The task of determining the origin of a drifting object after it has been located is highly complex due to the uncertainties in drift properties and environmental forcing (wind, waves, and surface currents). Usually, the origin is inferred by running a trajectory model (stochastic or deterministic) in reverse. However, this approach has some severe drawbacks, most notably the fact that many drifting objects go through nonlinear state changes underway (e.g., evaporating oil or a capsizing lifeboat). This makes it difficult to naively construct a reverse-time trajectory model which realistically predicts the earliest possible time the object may have started drifting. We propose instead a different approach where the original (forward) trajectory model is kept unaltered while an iterative seeding and selection process allows us to retain only those particles that end up within a certain time-space radius of the observation. An iterative refinement process named BAKTRAK is employed where those trajectories that do not make it to the goal are rejected, and new trajectories are spawned from successful trajectories. This allows the model to be run in the forward direction to determine the point of origin of a drifting object. The method is demonstrated using the leeway stochastic trajectory model for drifting objects due to its relative simplicity and the practical importance of being able to identify the origin of drifting objects. However, the methodology is general and even more applicable to oil drift trajectories, drifting ships, and hazardous material that exhibit nonlinear state changes such as evaporation, chemical weathering, capsizing, or swamping. The backtracking method is tested against the drift trajectory of a life raft and is shown to predict closely the initial release position of the raft and its subsequent trajectory.
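
    The seed-and-select loop at the heart of the method can be sketched compactly. In the toy Python example below a 1-D random drift stands in for the leeway trajectory model: seeds are candidate release points, only trajectories ending within a tolerance of the observed position survive, and the next generation of seeds is spawned around the survivors. All numbers are made up.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def drift(x0, n_steps=50):
        """Toy forward trajectory model: mean drift plus random forcing."""
        x = x0
        for _ in range(n_steps):
            x += 0.1 + 0.05 * rng.standard_normal()
        return x

    observed, tol = 7.0, 0.3            # located position, capture radius
    seeds = rng.uniform(-5.0, 5.0, 200) # first guesses of the origin
    for _ in range(5):
        endpoints = np.array([drift(s) for s in seeds])
        keep = np.abs(endpoints - observed) < tol        # selection
        if keep.any():                                   # re-seeding
            seeds = (rng.choice(seeds[keep], 200)
                     + 0.2 * rng.standard_normal(200))
        else:
            tol *= 2.0                                   # relax and retry

    # Mean displacement over 50 steps is ~5.0, so the origin is near 2.0.
    print(f"estimated origin ~ {seeds.mean():.2f}")
    ```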

  16. Simulations of material damage to divertor and first wall armour under ITER transient loads by modelling and experiments

    International Nuclear Information System (INIS)

    Bazylev, B.

    2008-01-01

    Operation of ITER at high fusion gain is assumed to be in the H-mode. A characteristic feature of this regime is the transient energy release (TE) from the confined plasma onto plasma facing components (PFCs), which can play a determining role in the lifetime of these components. The expected fluxes on the ITER PFCs during transients are: Type I ELM Q = 0.5-4 MJ/m² on timescales t = 0.3-0.6 ms, and thermal quench Q = 2-13 MJ/m² with t = 1-3 ms. CFC and tungsten macrobrush armour are foreseen as PFCs for the ITER divertor, and Be as FW armour. During intense TE in ITER, evaporation (CFC, W, Be) and surface melting and melt splashing (W and Be) are seen as the main mechanisms of PFC erosion. A noticeable erosion of CFC PAN fibres and rather intense crack formation for the W targets were observed in plasma gun experiments at rather small heat loads, at which the melt damage to W armour is not substantial. The expected erosion of the ITER PFCs under TE can be properly estimated by numerical simulations validated against erosion experiments at the plasma gun facilities QSPA-T, MK-200UG and QSPA-Kh50. Within the collaboration between the EU fusion programme and the Russian Federation, CFC and W macrobrush targets manufactured in the EU were exposed to multiple ITER TE-like loads with Q = 0.5-2.2 MJ/m² and t = 0.5 ms at QSPA-T. The measured erosion was used to validate the modelling codes developed at FZK (PEGASUS, MEMOS, and others), which are then applied to model the erosion of the divertor and main chamber ITER PFCs under the transient loads expected in ITER. Numerical simulations performed for the expected ITER-like loads predicted: a significant erosion of the CFC target for Q > 0.5 MJ/m², caused by the inhomogeneous structure of the CFC; and that the W macrobrush structure is effective in preventing gross melt layer displacement. Optimization of the macrobrush geometry to minimize melt splashing is done. Different mechanisms of melt splashing are compared with the results obtained in

  17. Predictive modeling of complications.

    Science.gov (United States)

    Osorio, Joseph A; Scheer, Justin K; Ames, Christopher P

    2016-09-01

    Predictive analytic algorithms are designed to identify patterns in the data that allow for accurate predictions without the need for a hypothesis. Therefore, predictive modeling can provide detailed and patient-specific information that can be readily applied when discussing the risks of surgery with a patient. There are few studies using predictive modeling techniques in the adult spine surgery literature. These types of studies represent the beginning of the use of predictive analytics in spine surgery outcomes. We will discuss the advancements in the field of spine surgery with respect to predictive analytics, the controversies surrounding the technique, and the future directions.

  18. An iterative stochastic ensemble method for parameter estimation of subsurface flow models

    International Nuclear Information System (INIS)

    Elsheikh, Ahmed H.; Wheeler, Mary F.; Hoteit, Ibrahim

    2013-01-01

    Parameter estimation for subsurface flow models is an essential step for maximizing the value of numerical simulations for future prediction and the development of effective control strategies. We propose the iterative stochastic ensemble method (ISEM) as a general method for parameter estimation based on stochastic estimation of gradients using an ensemble of directional derivatives. ISEM eliminates the need for adjoint coding and deals with the numerical simulator as a blackbox. The proposed method employs directional derivatives within a Gauss–Newton iteration. The update equation in ISEM resembles the update step in ensemble Kalman filter, however the inverse of the output covariance matrix in ISEM is regularized using standard truncated singular value decomposition or Tikhonov regularization. We also investigate the performance of a set of shrinkage based covariance estimators within ISEM. The proposed method is successfully applied on several nonlinear parameter estimation problems for subsurface flow models. The efficiency of the proposed algorithm is demonstrated by the small size of utilized ensembles and in terms of error convergence rates.
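
    A stripped-down Python sketch of the ISEM update is given below: the black-box simulator is probed with an ensemble of small random perturbations, a Jacobian is fitted to the resulting directional derivatives by least squares, and a Gauss-Newton step is taken through a truncated SVD. The quadratic toy "simulator" and all sizes are made up; the actual method also considers Tikhonov regularization and shrinkage covariance estimators.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def simulator(m):                   # black-box forward model, R^3 -> R^5
        G = np.arange(15.0).reshape(5, 3) / 10.0
        return G @ m + 0.3 * (G @ m) ** 2

    m_true = np.array([1.0, -0.5, 0.8])
    d_obs = simulator(m_true)

    m = np.zeros(3)
    for _ in range(50):
        r = d_obs - simulator(m)
        # Ensemble of directional derivatives around the current iterate.
        dM = 1e-3 * rng.standard_normal((8, 3))
        dD = np.array([simulator(m + dm) - simulator(m) for dm in dM])
        Jt, *_ = np.linalg.lstsq(dM, dD, rcond=None)   # fits dD ~ dM @ J^T
        J = Jt.T
        # Regularized Gauss-Newton step via truncated SVD.
        U, s, Vt = np.linalg.svd(J, full_matrices=False)
        s_inv = np.where(s > 1e-6 * s[0], 1.0 / s, 0.0)
        m = m + 0.5 * (Vt.T @ (s_inv * (U.T @ r)))     # damped update
    print(m, m_true)
    ```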

  19. An iterative stochastic ensemble method for parameter estimation of subsurface flow models

    KAUST Repository

    Elsheikh, Ahmed H.

    2013-06-01

    Parameter estimation for subsurface flow models is an essential step for maximizing the value of numerical simulations for future prediction and the development of effective control strategies. We propose the iterative stochastic ensemble method (ISEM) as a general method for parameter estimation based on stochastic estimation of gradients using an ensemble of directional derivatives. ISEM eliminates the need for adjoint coding and deals with the numerical simulator as a blackbox. The proposed method employs directional derivatives within a Gauss-Newton iteration. The update equation in ISEM resembles the update step in ensemble Kalman filter, however the inverse of the output covariance matrix in ISEM is regularized using standard truncated singular value decomposition or Tikhonov regularization. We also investigate the performance of a set of shrinkage based covariance estimators within ISEM. The proposed method is successfully applied on several nonlinear parameter estimation problems for subsurface flow models. The efficiency of the proposed algorithm is demonstrated by the small size of utilized ensembles and in terms of error convergence rates. © 2013 Elsevier Inc.

  20. Analytical prediction of thermal performance of hypervapotron and its application to ITER

    International Nuclear Information System (INIS)

    Baxi, C.B.; Falter, H.

    1992-09-01

    A hypervapotron (HV) is a water cooled device made of a high thermal conductivity material such as copper. A surface heat flux of up to 30 MW/m² has been achieved in copper hypervapotrons cooled by water at a velocity of 10 m/s and at a pressure of six bar. Hypervapotrons have been used in the past as beam dumps at the Joint European Torus (JET). It is planned to use them for divertor cooling during the Mark II upgrade of JET. Although a large amount of experimental data has been collected on these devices, an analytical performance prediction had not been done before due to the complexity of the heat transfer mechanisms. A method to analytically predict the thermal performance of the hypervapotron is described. The method uses a combination of a number of thermal hydraulic correlations and a finite element analysis. The analytical prediction shows excellent agreement with experimental results over a wide range of velocities, pressures, subcoolings, and geometries. The method was used to predict the performance of a hypervapotron made of beryllium. Merits of the use of hypervapotrons for the International Thermonuclear Experimental Reactor (ITER) and the Tokamak Physics Experiment (TPX) are discussed.

  1. Iterative Bayesian Model Averaging: a method for the application of survival analysis to high-dimensional microarray data

    Directory of Open Access Journals (Sweden)

    Raftery Adrian E

    2009-02-01

    Full Text Available Abstract Background Microarray technology is increasingly used to identify potential biomarkers for cancer prognostics and diagnostics. Previously, we have developed the iterative Bayesian Model Averaging (BMA) algorithm for use in classification. Here, we extend the iterative BMA algorithm for application to survival analysis on high-dimensional microarray data. The main goal in applying survival analysis to microarray data is to determine a highly predictive model of patients' time to event (such as death, relapse, or metastasis) using a small number of selected genes. Our multivariate procedure combines the effectiveness of multiple contending models by calculating the weighted average of their posterior probability distributions. Our results demonstrate that our iterative BMA algorithm for survival analysis achieves high prediction accuracy while consistently selecting a small and cost-effective number of predictor genes. Results We applied the iterative BMA algorithm to two cancer datasets: breast cancer and diffuse large B-cell lymphoma (DLBCL) data. On the breast cancer data, the algorithm selected a total of 15 predictor genes across 84 contending models from the training data. The maximum likelihood estimates of the selected genes and the posterior probabilities of the selected models from the training data were used to divide patients in the test (or validation) dataset into high- and low-risk categories. Using the genes and models determined from the training data, we assigned patients from the test data into highly distinct risk groups (as indicated by a p-value of 7.26e-05 from the log-rank test). Moreover, we achieved comparable results using only the 5 top selected genes with 100% posterior probabilities. On the DLBCL data, our iterative BMA procedure selected a total of 25 genes across 3 contending models from the training data. Once again, we assigned the patients in the validation set to significantly distinct risk groups (p
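
    The model-averaging step at the core of the procedure is easy to sketch. In the toy Python example below, a few "contending models", each a linear risk score over a small gene subset, are combined with weights given by their posterior model probabilities; all genes, coefficients, and weights are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((5, 10))          # 5 patients x 10 genes

    models = [                                # (gene subset, coefficients)
        ([0, 3], np.array([0.8, -0.4])),
        ([0, 7], np.array([0.9, 0.2])),
        ([2],    np.array([-1.1])),
    ]
    posterior = np.array([0.6, 0.3, 0.1])     # posterior model probabilities

    # Risk score = posterior-weighted average of the models' scores.
    scores = sum(w * X[:, genes] @ beta
                 for w, (genes, beta) in zip(posterior, models))
    high_risk = scores > np.median(scores)    # split into risk groups
    print(scores.round(2), high_risk)
    ```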

  2. Archaeological predictive model set.

    Science.gov (United States)

    2015-03-01

    This report is the documentation for Task 7 of the Statewide Archaeological Predictive Model Set. The goal of this project is to develop a set of statewide predictive models to assist the planning of transportation projects. PennDOT is developing t...

  3. Bias in iterative reconstruction of low-statistics PET data: benefits of a resolution model

    Energy Technology Data Exchange (ETDEWEB)

    Walker, M D; Asselin, M-C; Julyan, P J; Feldmann, M; Matthews, J C [School of Cancer and Enabling Sciences, Wolfson Molecular Imaging Centre, MAHSC, University of Manchester, Manchester M20 3LJ (United Kingdom); Talbot, P S [Mental Health and Neurodegeneration Research Group, Wolfson Molecular Imaging Centre, MAHSC, University of Manchester, Manchester M20 3LJ (United Kingdom); Jones, T, E-mail: matthew.walker@manchester.ac.uk [Academic Department of Radiation Oncology, Christie Hospital, University of Manchester, Manchester M20 4BX (United Kingdom)

    2011-02-21

    Iterative image reconstruction methods such as ordered-subset expectation maximization (OSEM) are widely used in PET. Reconstructions via OSEM are however reported to be biased for low-count data. We investigated this and considered the impact for dynamic PET. Patient listmode data were acquired in [¹¹C]DASB and [¹⁵O]H₂O scans on the HRRT brain PET scanner. These data were subsampled to create many independent, low-count replicates. The data were reconstructed and the images from low-count data were compared to the high-count originals (from the same reconstruction method). This comparison enabled low-statistics bias to be calculated for the given reconstruction, as a function of the noise-equivalent counts (NEC). Two iterative reconstruction methods were tested, one with and one without an image-based resolution model (RM). Significant bias was observed when reconstructing data of low statistical quality, for both subsampled human and simulated data. For human data, this bias was substantially reduced by including a RM. For [¹¹C]DASB the low-statistics bias in the caudate head at 1.7 M NEC (approx. 30 s) was -5.5% and -13% with and without RM, respectively. We predicted biases in the binding potential of -4% and -10%. For quantification of cerebral blood flow for the whole-brain grey or white matter, using [¹⁵O]H₂O and the PET autoradiographic method, a low-statistics bias of <2.5% and <4% was predicted for reconstruction with and without the RM. The use of a resolution model reduces low-statistics bias and can hence be beneficial for quantitative dynamic PET.
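
    The replicate-based bias measurement reduces to a simple computation once the reconstructions are in hand. The Python sketch below uses a synthetic stand-in for "reconstruct a low-count replicate and read out a region of interest", with its bias deliberately set to worsen as the NEC drops, loosely mimicking the behaviour reported above for OSEM without a resolution model; none of the numbers are real.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    roi_high = 100.0                          # ROI value, high-count image

    def reconstruct_roi(nec):
        bias = -0.13 * (1.7e6 / nec) ** 0.5   # synthetic low-statistics bias
        noise = rng.normal(0.0, 10.0 * (1.7e6 / nec) ** 0.5)
        return roi_high * (1.0 + bias) + noise

    for nec in (1.7e6, 6.8e6, 2.7e7):
        rois = [reconstruct_roi(nec) for _ in range(500)]
        rel_bias = 100.0 * (np.mean(rois) - roi_high) / roi_high
        print(f"NEC {nec:.1e}: low-statistics bias {rel_bias:+.1f} %")
    ```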

  4. Wind power prediction models

    Science.gov (United States)

    Levy, R.; Mcginness, H.

    1976-01-01

    Investigations were performed to predict the power available from the wind at the Goldstone, California, antenna site complex. The background for power prediction was derived from a statistical evaluation of available wind speed data records at this location and at nearby locations similarly situated within the Mojave desert. In addition to a model for power prediction over relatively long periods of time, an interim simulation model that produces sample wind speeds is described. The interim model furnishes uncorrelated sample speeds at hourly intervals that reproduce the statistical wind distribution at Goldstone. A stochastic simulation model to provide speed samples representative of both the statistical speed distributions and correlations is also discussed.
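
    An "interim" sampler of the kind described, producing uncorrelated hourly speeds from a fitted distribution, takes only a few lines. The Python sketch below assumes a Weibull fit purely for illustration (the actual model reproduced the empirical Goldstone distribution) and converts the samples to a wind power density via the usual (1/2) rho v^3 relation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    shape, scale = 2.0, 7.0               # assumed Weibull parameters [m/s]

    hourly_speeds = scale * rng.weibull(shape, size=24)   # one synthetic day
    power_density = 0.5 * 1.225 * hourly_speeds ** 3      # [W/m^2], rho=1.225
    print(hourly_speeds.round(1))
    print(f"mean wind power density: {power_density.mean():.0f} W/m^2")
    ```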

  5. Zephyr - the prediction models

    DEFF Research Database (Denmark)

    Nielsen, Torben Skov; Madsen, Henrik; Nielsen, Henrik Aalborg

    2001-01-01

    This paper briefly describes new models and methods for predicting the wind power output from wind farms. The system is being developed in a project which has the research organization Risø and the Department of Informatics and Mathematical Modelling (IMM) as the modelling team, and all the Danish utilities as partners and users. The new models are evaluated for five wind farms in Denmark as well as one wind farm in Spain. It is shown that the predictions based on conditional parametric models are superior to the predictions obtained by state-of-the-art parametric models.

  6. Quantitative analysis of emphysema and airway measurements according to iterative reconstruction algorithms: comparison of filtered back projection, adaptive statistical iterative reconstruction and model-based iterative reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Choo, Ji Yung [Seoul National University Medical Research Center, Department of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul (Korea, Republic of); Korea University Ansan Hospital, Ansan-si, Department of Radiology, Gyeonggi-do (Korea, Republic of); Goo, Jin Mo; Park, Chang Min; Park, Sang Joon [Seoul National University Medical Research Center, Department of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul (Korea, Republic of); Seoul National University, Cancer Research Institute, Seoul (Korea, Republic of); Lee, Chang Hyun; Shim, Mi-Suk [Seoul National University Medical Research Center, Department of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul (Korea, Republic of)

    2014-04-15

    To evaluate filtered back projection (FBP) and two iterative reconstruction (IR) algorithms and their effects on the quantitative analysis of lung parenchyma and airway measurements on computed tomography (CT) images. Low-dose chest CT scans obtained in 281 adult patients were reconstructed using three algorithms: FBP, adaptive statistical IR (ASIR) and model-based IR (MBIR). Measurements of each dataset were compared: total lung volume, emphysema index (EI), and airway measurements of the lumen and wall area as well as average wall thickness. The accuracy of the airway measurements for each algorithm was also evaluated using an airway phantom. The EI using a threshold of -950 HU was significantly different among the three algorithms, in decreasing order of FBP (2.30 %), ASIR (1.49 %) and MBIR (1.20 %) (P < 0.01). Wall thickness was also significantly different among the three algorithms, with FBP (2.09 mm) demonstrating thicker walls than ASIR (2.00 mm) and MBIR (1.88 mm) (P < 0.01). Airway phantom analysis revealed that MBIR gave the most accurate values for the airway measurements. The three algorithms presented different EIs and wall thicknesses, decreasing in the order of FBP, ASIR and MBIR. Thus, care should be taken in selecting the appropriate IR algorithm for quantitative analysis of the lung. (orig.)
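
    The emphysema index itself is a one-line computation, shown in the Python sketch below on synthetic voxel data. The second call illustrates why the reconstruction algorithm matters: with the same mean attenuation, a less noisy image pushes fewer voxels below the -950 HU threshold, consistent with the ordering FBP > ASIR > MBIR reported above.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def emphysema_index(hu, threshold=-950.0):
        """Percentage of lung voxels below the threshold."""
        return 100.0 * np.mean(hu < threshold)

    noisy = rng.normal(-860.0, 50.0, size=200_000)   # made-up lung voxels
    smooth = rng.normal(-860.0, 35.0, size=200_000)  # same mean, less noise
    print(f"EI (noisier reconstruction):  {emphysema_index(noisy):.2f} %")
    print(f"EI (smoother reconstruction): {emphysema_index(smooth):.2f} %")
    ```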

  7. Inverse and Predictive Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Syracuse, Ellen Marie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-27

    The LANL Seismo-Acoustic team has a strong capability in developing data-driven models that accurately predict a variety of observations. These models range from the simple – one-dimensional models that are constrained by a single dataset and can be used for quick and efficient predictions – to the complex – multidimensional models that are constrained by several types of data and result in more accurate predictions. While team members typically build models of geophysical characteristics of the Earth and source distributions at scales of 1 to 1000s of km, the techniques used are applicable to other types of physical characteristics at an even greater range of scales. The following cases provide a snapshot of some of the modeling work done by the Seismo-Acoustic team at LANL.

  8. Helium embrittlement model and program plan for weldability of ITER materials

    International Nuclear Information System (INIS)

    Louthan, M.R. Jr.; Kanne, W.R. Jr.; Tosten, M.H.; Rankin, D.T.; Cross, B.J.

    1997-02-01

    This report presents a refined model of how helium embrittles irradiated stainless steel during welding. The model was developed based on experimental observations drawn from experience at the Savannah River Site and from an extensive literature search. The model shows how helium content, stress, and temperature interact to produce embrittlement. The model takes into account defect structure, time, and gradients in stress, temperature and composition. The report also proposes an experimental program based on the refined helium embrittlement model. A parametric study of the effect of initial defect density on the resulting helium bubble distribution and the weldability of tritium-aged material is proposed to demonstrate the role that defects play in embrittlement. This study should include samples charged using vastly different aging times to obtain equivalent helium contents. Additionally, studies to establish the minimal sample thickness and size are needed for extrapolation to real structural materials. The results of these studies should provide a technical basis for the use of tritium-aged materials to predict the weldability of irradiated structures. Use of tritium-charged and aged material would provide a cost-effective approach to developing weld repair techniques for ITER components.

  9. Theoretical prediction of thermodynamic properties of tritiated beryllium molecules and application to ITER source term

    Energy Technology Data Exchange (ETDEWEB)

    Virot, F., E-mail: francois.virot@irsn.fr; Barrachin, M.; Souvi, S.; Cantrel, L.

    2014-10-15

    Highlights: • Standard enthalpies of formation of BeH, BeH₂, BeOH and Be(OH)₂ have been calculated. • The impact of hydrogen isotopy on thermodynamic properties has been shown. • Speciation in the vacuum vessel shows that the main tritiated species is tritiated steam. • Beryllium hydroxide and hydride could exist during an accidental event. - Abstract: By quantum chemistry calculations, we have evaluated the standard enthalpies of formation of some gaseous species of the Be-O-H chemical system: BeH, BeH₂, BeOH and Be(OH)₂, for which the values in the reference thermodynamic databases (NIST-JANAF [1] or COACH [2]) were, due to the lack of experimental data, estimated or reported with a large uncertainty. Comparison between post-HF and DFT approaches and the available experimental data validates the ability of an accurate exchange-correlation functional, VSXC, to predict the thermo-chemical properties of the beryllium species of interest. The deviation in the enthalpy of formation induced by changes in hydrogen isotopy has also been calculated. From these newly determined theoretical data, we have calculated the chemical speciation in conditions simulating an accident involving water ingress in the vacuum vessel of ITER.

  10. Benchmarking of MCAM 4.0 with the ITER 3D Model

    International Nuclear Information System (INIS)

    Ying Li; Lei Lu; Aiping Ding; Haimin Hu; Qin Zeng; Shanliang Zheng; Yican Wu

    2006-01-01

    Monte Carlo particle transport simulations are widely employed in fields such as nuclear engineering, radiotherapy and space science. Describing and verifying the 3D geometry of fusion devices, however, are among the most complex tasks of MCNP calculation problems in nuclear analysis. The manual modeling of a complex geometry for the MCNP code, though a common practice, is an extensive, time-consuming, and error-prone task. An efficient solution is to shift the geometric modeling into Computer Aided Design (CAD) systems and to use an interface for MCNP to convert the CAD model to an MCNP file. The advantage of this approach lies in the fact that it allows access to the full features of modern CAD systems, facilitating the geometric modeling and utilizing existing CAD models. MCAM (MCNP Automatic Modeling System) is an integrated tool for CAD model preprocessing, accurate bi-directional conversion between CAD/MCNP models, neutronics property processing and geometric modeling, developed by the FDS team at ASIPP and Hefei University of Technology. MCAM 4.0 has been extended and enhanced to support various CAD file formats and the preprocessing of CAD models, such as healing, automatic model reconstruction, overlap detection and correction, and automatic void modeling. The ITER international benchmark model is provided by the ITER international team to compare the CAD/MCNP programs being developed by the ITER participant teams. It is created in CATIA/V5, which has been chosen as the CAD system for ITER design, and includes all the important parts and components of the ITER device. The benchmark model contains a vast number of curved surfaces, which can fully test the ability of CAD/MCNP codes. The whole processing procedure for this model is presented in this paper, including geometric model processing, neutronics property processing, conversion to an MCNP input file, calculation with MCNP, and analysis. The nuclear analysis results of the model are given at the end. Although these preliminary

  11. Neutronics analysis of the International Thermonuclear Experimental Reactor (ITER) MCNP ''Benchmark CAD Model'' with the ATTILA discrete ordinates code

    International Nuclear Information System (INIS)

    Youssef, M.Z.; Feder, R.; Davis, I.

    2007-01-01

    The ITER IT has adopted the newly developed FEM, 3-D, CAD-based discrete ordinates code ATTILA for its neutronics studies, contingent on its success in predicting key neutronics parameters and the nuclear field according to the stringent QA requirements set forth by the Management and Quality Program (MQP). ATTILA has the advantage of providing a full mapping of the flux and response functions everywhere in one run, whereby components subjected to excessive radiation levels and strong streaming paths can be identified. The ITER neutronics community agreed to use a standard CAD model of ITER (40 degree sector, denoted the ''Benchmark CAD Model'') to compare results for several responses selected for calculation benchmarking purposes, to test the efficiency and accuracy of the CAD-MCNP approach developed by each party. Since ATTILA seems to lend itself as a powerful design tool with minimal turnaround time, it was decided to benchmark this model with ATTILA as well and compare the results to those obtained with the CAD MCNP calculations. In this paper we report such a comparison for five responses, namely: (1) neutron wall load on the surface of the 18 shield blanket modules (SBM), (2) neutron flux and nuclear heating rate in the divertor cassette, (3) nuclear heating rate in the winding pack of the inner leg of the TF coil, (4) radial flux profile across the dummy port plug and shield plug placed in the equatorial port, and (5) flux at seven point locations situated behind the equatorial port plug. (orig.)

  12. Combining Evolutionary Information and an Iterative Sampling Strategy for Accurate Protein Structure Prediction.

    Directory of Open Access Journals (Sweden)

    Tatjana Braun

    2015-12-01

    Full Text Available Recent work has shown that the accuracy of ab initio structure prediction can be significantly improved by integrating evolutionary information in form of intra-protein residue-residue contacts. Following this seminal result, much effort is put into the improvement of contact predictions. However, there is also a substantial need to develop structure prediction protocols tailored to the type of restraints gained by contact predictions. Here, we present a structure prediction protocol that combines evolutionary information with the resolution-adapted structural recombination approach of Rosetta, called RASREC. Compared to the classic Rosetta ab initio protocol, RASREC achieves improved sampling, better convergence and higher robustness against incorrect distance restraints, making it the ideal sampling strategy for the stated problem. To demonstrate the accuracy of our protocol, we tested the approach on a diverse set of 28 globular proteins. Our method is able to converge for 26 out of the 28 targets and improves the average TM-score of the entire benchmark set from 0.55 to 0.72 when compared to the top ranked models obtained by the EVFold web server using identical contact predictions. Using a smaller benchmark, we furthermore show that the prediction accuracy of our method is only slightly reduced when the contact prediction accuracy is comparatively low. This observation is of special interest for protein sequences that only have a limited number of homologs.

  13. Update of the ITER MELCOR model for the validation of the Cryostat design

    Energy Technology Data Exchange (ETDEWEB)

    Martínez, M.; Labarta, C.; Terrón, S.; Izquierdo, J.; Perlado, J.M.

    2015-07-01

    Some transients can compromise the vacuum in the cryostat of ITER and cause significant loads. A MELCOR model has been updated in order to assess these loads. Transients have been run with this model, and the results will be used in the mechanical assessment of the cryostat. (Author)

  14. Physics fundamentals for ITER

    International Nuclear Information System (INIS)

    Rosenbluth, M.N.

    1999-01-01

    The design of an experimental thermonuclear reactor requires both cutting-edge technology and physics predictions precise enough to carry forward the design. The past few years of worldwide physics studies have seen great progress in understanding, innovation and integration. We will discuss this progress and the remaining issues in several key physics areas. (1) Transport and plasma confinement. A worldwide database has led to an 'empirical scaling law' for tokamaks which predicts adequate confinement for the ITER fusion mission, albeit with considerable but acceptable uncertainty. The ongoing revolution in computer capabilities has given rise to new gyrofluid and gyrokinetic simulations of microphysics which may be expected in the near future to attain predictive accuracy. Important databases on H-mode characteristics and helium retention have also been assembled. (2) Divertors, heat removal and fuelling. A novel concept for heat removal - the radiative, baffled, partially detached divertor - has been designed for ITER. Extensive two-dimensional (2D) calculations have been performed and agree qualitatively with recent experiments. Preliminary studies of the interaction of this configuration with core confinement are encouraging and the success of inside pellet launch provides an attractive alternative fuelling method. (3) Macrostability. The ITER mission can be accomplished well within ideal magnetohydrodynamic (MHD) stability limits, except for internal kink modes. Comparisons with JET, as well as a theoretical model including kinetic effects, predict such sawteeth will be benign in ITER. Alternative scenarios involving delayed current penetration or off-axis current drive may be employed if required. The recent discovery of neoclassical beta limits well below ideal MHD limits poses a threat to performance. Extrapolation to reactor scale is as yet unclear. In theory such modes are controllable by current drive profile control or feedback and experiments should

  15. A Novel Iterative and Dynamic Trust Computing Model for Large Scaled P2P Networks

    Directory of Open Access Journals (Sweden)

    Zhenhua Tan

    2016-01-01

    Full Text Available Trust management has been emerging as an essential complement to the security mechanisms of P2P systems, and trustworthiness is one of the most important concepts driving decision making and establishing reliable relationships. Collusion attack is a main challenge to distributed P2P trust models. Large scale P2P systems have typical features, such as large scale data arriving at rapid speed, and this paper presents an iterative and dynamic trust computation model named IDTrust (Iterative and Dynamic Trust model) according to these properties. First of all, a three-layered distributed trust communication architecture is presented in IDTrust so as to separate evidence collection and trust decision from the P2P service. Then an iterative and dynamic trust computation method is presented to improve efficiency, where only the latest evidence is enrolled during one iterative computation. On the basis of these, direct, indirect, and global trust models are presented with both explicit and implicit evidence. We consider multiple factors in the IDTrust model according to different malicious behaviors, such as similarity, successful transaction rate, and time decay factors. Simulations and analysis demonstrate the correctness and efficiency of IDTrust against attacks, with quick response and sensitivity during trust decisions.
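
    Two of the ingredients named above, the successful-transaction rate and time decay, combine naturally into a direct-trust update, sketched below in Python. The weighting scheme is invented for illustration; IDTrust combines further factors, including similarity, across its direct, indirect, and global models.

    ```python
    import math

    def direct_trust(transactions, now, half_life=3600.0):
        """transactions: list of (timestamp, success_flag) pairs."""
        num = den = 0.0
        for t, ok in transactions:
            w = math.exp(-math.log(2.0) * (now - t) / half_life)  # decay
            num += w * (1.0 if ok else 0.0)
            den += w
        return num / den if den else 0.5   # neutral prior with no evidence

    history = [(0.0, True), (1800.0, True), (3500.0, False), (3590.0, True)]
    print(f"trust = {direct_trust(history, now=3600.0):.3f}")
    ```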

  16. Systematic iteration between model and methodology: A proposed approach to evaluating unintended consequences.

    Science.gov (United States)

    Morell, Jonathan A

    2017-09-18

    This article argues that evaluators could better deal with unintended consequences if they improved their methods of systematically and methodically combining empirical data collection and model building over the life cycle of an evaluation. This process would be helpful because it can increase the timespan from when the need for a change in methodology is first suspected to the time when the new element of the methodology is operational. The article begins with an explanation of why logic models are so important in evaluation, and why the utility of models is limited if they are not continually revised based on empirical evaluation data. It sets the argument within the larger context of the value and limitations of models in the scientific enterprise. Following will be a discussion of various issues that are relevant to model development and revision. What is the relevance of complex system behavior for understanding predictable and unpredictable unintended consequences, and the methods needed to deal with them? How might understanding of unintended consequences be improved with an appreciation of generic patterns of change that are independent of any particular program or change effort? What are the social and organizational dynamics that make it rational and adaptive to design programs around single-outcome solutions to multi-dimensional problems? How does cognitive bias affect our ability to identify likely program outcomes? Why is it hard to discern change as a result of programs being embedded in multi-component, continually fluctuating, settings? The last part of the paper outlines a process for actualizing systematic iteration between model and methodology, and concludes with a set of research questions that speak to how the model/data process can be made efficient and effective. Copyright © 2017. Published by Elsevier Ltd.

  17. Coupled iterated map models of action potential dynamics in a one-dimensional cable of cardiac cells

    International Nuclear Information System (INIS)

    Wang Shihong; Xie Yuanfang; Qu Zhilin

    2008-01-01

    Low-dimensional iterated map models have been widely used to study action potential dynamics in isolated cardiac cells. Coupled iterated map models have also been widely used to investigate action potential propagation dynamics in one-dimensional (1D) coupled cardiac cells; however, these models are usually empirical and not carefully validated. In this study, we first developed two coupled iterated map models which are the standard forms of diffusively coupled maps and which overcome the limitations of the previous models. We then determined the coupling strength and space constant by quantitatively comparing the 1D action potential duration profile from the coupled cardiac cell model described by differential equations with that of the coupled iterated map models. To further validate the coupled iterated map models, we compared the stability conditions of the spatially uniform state of the coupled iterated maps with those of the 1D ionic model and showed that the coupled iterated map model could well recapitulate the stability conditions, i.e. the spatially uniform state is stable unless the state is chaotic. Finally, we incorporated conduction into the developed coupled iterated map model to study the effects of coupling strength on wave stability and showed that the diffusive coupling between cardiac cells tends to suppress instabilities during reentry in a 1D ring and the onset of discordant alternans in a periodically paced 1D cable.
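
    The standard form of a diffusively coupled map cable is compact enough to sketch directly. In the Python example below each cell's action potential duration (APD) follows an illustrative restitution map from beat to beat, and diffusive coupling averages neighbouring cells; the parameters are made up and chosen in the stable regime, so the initially heterogeneous cable relaxes to a uniform state.

    ```python
    import numpy as np

    def restitution(apd, cycle_length=300.0):
        di = cycle_length - apd            # diastolic interval [ms]
        return 250.0 - 150.0 * np.exp(-di / 60.0)

    n_cells, D, n_beats = 50, 0.25, 40
    rng = np.random.default_rng(0)
    apd = 210.0 + rng.uniform(-10.0, 10.0, n_cells)  # heterogeneous start

    for _ in range(n_beats):
        apd = restitution(apd)             # local map, beat to beat
        pad = np.pad(apd, 1, mode="edge")  # no-flux ends
        apd = apd + D * (pad[:-2] - 2.0 * apd + pad[2:])

    # A steeper restitution curve would instead produce alternans, whose
    # concordant/discordant patterns depend on the coupling strength D.
    print(apd.round(1))
    ```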

  18. Polynomial factor models : non-iterative estimation via method-of-moments

    NARCIS (Netherlands)

    Schuberth, Florian; Büchner, Rebecca; Schermelleh-Engel, Karin; Dijkstra, Theo K.

    2017-01-01

    We introduce a non-iterative method-of-moments estimator for non-linear latent variable (LV) models. Under the assumption of joint normality of all exogenous variables, we use the corrected moments of linear combinations of the observed indicators (proxies) to obtain consistent path coefficient and

  19. Extending the reach of strong-coupling: an iterative technique for Hamiltonian lattice models

    International Nuclear Information System (INIS)

    Alberty, J.; Greensite, J.; Patkos, A.

    1983-12-01

    The authors propose an iterative method for doing lattice strong-coupling-like calculations in a range of medium to weak couplings. The method is a modified Lanczos scheme, with greatly improved convergence properties. The technique is tested on the Mathieu equation and on a Hamiltonian finite-chain XY model, with excellent results. (Auth.)
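
    For orientation, a plain (unmodified) Lanczos iteration is sketched below in Python on a random symmetric "Hamiltonian": it builds a small tridiagonal matrix whose extremal eigenvalues approximate those of the full matrix. The paper's contribution is a modified scheme with much better convergence; this sketch only shows the baseline that the modification starts from.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 200, 30                        # full dimension, Krylov steps
    H = rng.standard_normal((n, n))
    H = (H + H.T) / 2.0                   # toy symmetric Hamiltonian

    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    V, alpha, beta = [v], [], []
    w = H @ v
    for _ in range(m):
        alpha.append(v @ w)               # diagonal entry
        w = w - alpha[-1] * v - (beta[-1] * V[-2] if beta else 0.0)
        beta.append(np.linalg.norm(w))    # off-diagonal entry
        V.append(w / beta[-1])
        v = V[-1]
        w = H @ v

    T = np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)
    print(np.linalg.eigvalsh(T)[0], np.linalg.eigvalsh(H)[0])  # ground state
    ```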

  20. PedCut: an iterative framework for pedestrian segmentation combining shape models and multiple data cues

    NARCIS (Netherlands)

    Flohr, F.; Gavrila, D.M.; Burghardt, T.; Damen, D.; Mayol-Cuevas, W.; Mirmehdi, M.

    2013-01-01

    This paper presents an iterative, EM-like framework for accurate pedestrian segmentation, combining generative shape models and multiple data cues. In the E-step, shape priors are introduced in the unary terms of a Conditional Random Field (CRF) formulation, joining other data terms derived from

  1. Overview of the hydraulic characteristics of the ITER Central Solenoid Model Coil conductors after 15 years of test campaigns

    Science.gov (United States)

    Brighenti, A.; Bonifetto, R.; Isono, T.; Kawano, K.; Russo, G.; Savoldi, L.; Zanino, R.

    2017-12-01

    The ITER Central Solenoid Model Coil (CSMC) is a superconducting magnet, layer-wound two-in-hand using Nb3Sn cable-in-conduit conductors (CICCs) with the central channel typical of ITER magnets, cooled with supercritical He (SHe) at ∼4.5 K and 0.5 MPa, operating for approximately 15 years at the National Institutes for Quantum and Radiological Science and Technology in Naka, Japan. The aim of this work is to give an overview of the issues related to the hydraulic performance of the three different CICCs used in the CSMC based on the extensive experimental database put together during the past 15 years. The measured hydraulic characteristics are compared for the different test campaigns and compared also to those coming from the tests of short conductor samples when available. It is shown that the hydraulic performance of the CSMC conductors did not change significantly in the sequence of test campaigns with more than 50 cycles up to 46 kA and 8 cooldown/warmup cycles from 300 K to 4.5 K. The capability of the correlations typically used to predict the friction factor of the SHe for the design and analysis of ITER-like CICCs is also shown.

  2. Development of Acoustic Model-Based Iterative Reconstruction Technique for Thick-Concrete Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Almansouri, Hani [Purdue University; Clayton, Dwight A [ORNL; Kisner, Roger A [ORNL; Polsky, Yarom [ORNL; Bouman, Charlie [Purdue University; Santos-Villalobos, Hector J [ORNL

    2016-01-01

    Ultrasound signals have been used extensively for non-destructive evaluation (NDE). However, typical reconstruction techniques, such as the synthetic aperture focusing technique (SAFT), are limited to quasi-homogenous thin media. New ultrasonic systems and reconstruction algorithms are needed for one-sided NDE of non-homogenous thick objects. An application example space is imaging of reinforced concrete structures for commercial nuclear power plants (NPPs). These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Another example is geothermal and oil/gas production wells. These multi-layered structures are composed of steel, cement, and several types of soil and rocks. Ultrasound systems with greater penetration range and image quality will allow for better monitoring of the well's health and prediction of high-pressure hydraulic fracturing of the rock. These application challenges need to be addressed with an integrated imaging approach, where the application, hardware, and reconstruction software are highly integrated and optimized. Therefore, we are developing an ultrasonic system with Model-Based Iterative Reconstruction (MBIR) as the image reconstruction backbone. This paper documents the first implementation of the algorithm for ultrasonic signals and shows reconstruction results for synthetically generated data.
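
    Generically, MBIR estimates an image by minimizing a data-fit term plus a regularizing prior. The sketch below uses a toy 1-D blurring operator in place of the paper's ultrasound forward model and a quadratic smoothness prior; it is illustrative only.

        import numpy as np

        n = 128
        rng = np.random.default_rng(0)

        # Toy forward model: Gaussian blur as a dense matrix (stands in
        # for the ultrasound forward model of the paper).
        idx = np.arange(n)
        A = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 2.0) ** 2)
        A /= A.sum(axis=1, keepdims=True)

        x_true = np.zeros(n); x_true[40:60] = 1.0; x_true[90:95] = 0.5
        y = A @ x_true + rng.normal(scale=0.01, size=n)

        # Finite-difference matrix for the smoothness prior.
        D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]
        lam, step = 0.05, 0.5

        # MBIR as gradient descent on ||y - Ax||^2 + lam * ||Dx||^2.
        x = np.zeros(n)
        for it in range(500):
            grad = A.T @ (A @ x - y) + lam * (D.T @ (D @ x))
            x -= step * grad
        print(np.abs(x - x_true).mean())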

  4. Real-Time Optimization for Economic Model Predictive Control

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Edlund, Kristian; Frison, Gianluca

    2012-01-01

    In this paper, we develop an efficient homogeneous and self-dual interior-point method for the linear programs arising in economic model predictive control. To exploit structure in the optimization problems, the algorithm employs a highly specialized Riccati iteration procedure. Simulations show...
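
    For orientation, the sketch below poses a toy economic MPC step as a linear program and solves it with SciPy's generic LP solver; the paper's specialized homogeneous self-dual interior-point method with Riccati iterations is not reproduced. The scalar system, bounds and cost vector are assumptions.

        import numpy as np
        from scipy.optimize import linprog

        # Toy economic MPC: minimize the economic cost of inputs over a
        # horizon for x[k+1] = a*x[k] + b*u[k], with state/input bounds.
        a, b, x0, N = 0.9, 0.5, 1.0, 10
        price = np.linspace(1.0, 2.0, N)   # time-varying input cost, assumed

        # States eliminated by substitution:
        # x[k] = a^k x0 + sum_j a^(k-1-j) b u[j].
        A_ub, b_ub = [], []
        for k in range(1, N + 1):
            row = np.array([a ** (k - 1 - j) * b if j < k else 0.0
                            for j in range(N)])
            A_ub.append(row)           # x[k] <= 2.0
            b_ub.append(2.0 - a ** k * x0)
            A_ub.append(-row)          # x[k] >= 0.5
            b_ub.append(a ** k * x0 - 0.5)

        res = linprog(price, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                      bounds=[(0.0, 1.0)] * N)
        print(res.status, res.x.round(3))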

  5. Iteration Capping For Discrete Choice Models Using the EM Algorithm

    NARCIS (Netherlands)

    Kabatek, J.

    2013-01-01

    The Expectation-Maximization (EM) algorithm is a well-established estimation procedure which is used in many domains of econometric analysis. A recent application in a discrete choice framework (Train, 2008) facilitated estimation of latent class models allowing for very flexible treatment of unobserved heterogeneity.
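
    The following minimal sketch shows the general idea of capping EM iterations, here for a two-component Gaussian mixture rather than the latent class discrete choice setting of the paper; the specific capping heuristics of the paper are not reproduced.

        import numpy as np

        def em_gmm(x, max_iter=25, tol=1e-6):
            # EM for a two-component 1-D Gaussian mixture with a hard cap
            # on the number of iterations (max_iter).
            mu = np.array([x.min(), x.max()])
            sigma = np.array([x.std(), x.std()])
            pi = np.array([0.5, 0.5])
            ll_old = -np.inf
            for it in range(max_iter):           # iteration cap
                # E-step: responsibilities.
                dens = (pi / (sigma * np.sqrt(2 * np.pi))
                        * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2))
                resp = dens / dens.sum(axis=1, keepdims=True)
                # M-step: update parameters.
                nk = resp.sum(axis=0)
                mu = (resp * x[:, None]).sum(axis=0) / nk
                sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
                pi = nk / len(x)
                ll = np.log(dens.sum(axis=1)).sum()
                if ll - ll_old < tol:
                    break
                ll_old = ll
            return mu, sigma, pi, it + 1

        rng = np.random.default_rng(1)
        data = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 200)])
        print(em_gmm(data))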

  6. Completion of the ITER central solenoid model coils installation

    International Nuclear Information System (INIS)

    Tsuji, H.

    1999-01-01

    The short article details how dozens of problems regarding the central solenoid model coil installation were faced and successfully overcome one by one at JAERI-Naka. A black and white photograph shows K. Kawano, a staff member of the JAERI superconducting magnet laboratory, still inside the vacuum tank while the lid is already being brought down.

  7. ITER...ation

    International Nuclear Information System (INIS)

    Troyon, F.

    1997-01-01

    Recurrent attacks against ITER, the new generation of tokamak, are a mix of political and scientific arguments. This short article draws a historical review of the European fusion program. This program has made it possible to build and manage several installations with the aim of obtaining the experimental results necessary to move the program forward. ITER will bring together a fusion reactor core with technologies such as materials, superconductive coils, heating devices and instrumentation in order to validate and delimit the operating range. ITER will be a logical and decisive step towards the use of controlled fusion. (A.C.)

  8. Predictive Surface Complexation Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Sverjensky, Dimitri A. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Earth and Planetary Sciences

    2016-11-29

    Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soilwaters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO2 and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.

  9. The PRODIGY project--the iterative development of the release one model.

    Science.gov (United States)

    Purves, I N; Sugden, B; Booth, N; Sowerby, M

    1999-01-01

    We summarise the findings of the first two research phases of the PRODIGY project and describe the guidance model for Release One of the ensuing nationally available system. This model was the result of the iterative design process of the PRODIGY research project, which took place between 1995 and 1998 in up to 183 general practices in England. Release One of PRODIGY is now being rolled out to all (27,000) General Practitioners in England during 1999-2000.

  10. Time-dependent modeling of dust injection in semi-detached ITER divertor plasma

    Science.gov (United States)

    Smirnov, Roman; Krasheninnikov, Sergei

    2017-10-01

    At present, it is generally understood that dust related issues will play an important role in the operation of next step fusion devices, i.e. ITER, and in the development of future fusion reactors. Recent progress in research on dust in magnetic fusion devices has outlined several topics of particular concern: a) degradation of fusion plasma performance; b) impairment of in-vessel diagnostic instruments; and c) safety issues related to dust reactivity and tritium retention. In addition, observed dust events in fusion edge plasmas are highly irregular and require consideration of the temporal evolution of both the dust and the fusion plasma. In order to address the dust-related fusion performance issues, we have coupled the dust transport code DUSTT and the edge plasma transport code UEDGE in a time-dependent manner, allowing modeling of transient dust-induced phenomena in fusion edge plasmas. Using the coupled codes we simulate burst-like injection of tungsten dust into ITER divertor plasma in a semi-detached regime, which is considered the preferred ITER divertor operational mode based on plasma and heat load control restrictions. Analysis of the transport of the dust and the dust-produced impurities, and of the dynamics of the ITER divertor and edge plasma in response to the dust injection, will be presented. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Fusion Energy Sciences, under Award Number DE-FG02-06ER54852.

  11. A simple iterative method for estimating evapotranspiration with integrated surface/subsurface flow models

    Science.gov (United States)

    Hwang, H.-T.; Park, Y.-J.; Frey, S. K.; Berg, S. J.; Sudicky, E. A.

    2015-12-01

    This work presents an iterative, water balance based approach to estimate actual evapotranspiration (ET) with integrated surface/subsurface flow models. Traditionally, groundwater level fluctuation methods have been widely accepted and used for estimating ET and net groundwater recharge; however, in watersheds where interactions between surface and subsurface flow regimes are highly dynamic, the traditional method may be overly simplistic. Here, an innovative methodology is derived and demonstrated for using the water balance equation in conjunction with a fully-integrated surface and subsurface hydrologic model (HydroGeoSphere) in order to estimate ET at watershed and sub-watershed scales. The method invokes a simple and robust iterative numerical solution. For the proof of concept demonstrations, the method is used to estimate ET for a simple synthetic watershed and then for a real, highly-characterized 7000 km2 watershed in Southern Ontario, Canada (Grand River Watershed). The results for the Grand River Watershed show that with three to five iterations, the solution converges to a result where there is less than 1% relative error in stream flow calibration at 16 stream gauging stations. The spatially-averaged ET estimated using the iterative method shows a high level of agreement (R2 = 0.99) with that from a benchmark case simulated with an ET model embedded directly in HydroGeoSphere. The new approach presented here is applicable to any watershed that is suited for integrated surface water/groundwater flow modelling and where spatially-averaged ET estimates are useful for calibrating modelled stream discharge.
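
    A minimal sketch of the iterative water-balance idea follows: adjust basin-average actual ET until simulated stream discharge matches the observed value to within the stated tolerance. The run_model function is a hypothetical stand-in for a full integrated surface/subsurface model such as HydroGeoSphere, and all numbers are illustrative.

        # Iteratively correct ET with the discharge misfit until the
        # relative error in stream flow is below 1%.
        def run_model(et):
            # Toy watershed: discharge is precipitation minus ET and a
            # small ET-dependent storage term (mm/yr). Purely illustrative.
            precip = 900.0
            return precip - et - 0.05 * et

        def estimate_et(q_obs, et0=400.0, tol=0.01, max_iter=20):
            et = et0
            for i in range(max_iter):
                q_sim = run_model(et)
                if abs((q_sim - q_obs) / q_obs) < tol:
                    return et, i + 1
                et += (q_sim - q_obs)      # water-balance correction of ET
            return et, max_iter

        print(estimate_et(q_obs=450.0))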

  12. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines...

  13. Building generic anatomical models using virtual model cutting and iterative registration.

    Science.gov (United States)

    Xiao, Mei; Soh, Jung; Meruvia-Pastor, Oscar; Schmidt, Eric; Hallgrímsson, Benedikt; Sensen, Christoph W

    2010-02-08

    Using 3D generic models to statistically analyze trends in biological structure changes is an important tool in morphometrics research. Therefore, 3D generic models built for a range of populations are in high demand. However, due to the complexity of biological structures and the limited views of them that medical images can offer, it is still an exceptionally difficult task to quickly and accurately create 3D generic models (a model is a 3D graphical representation of a biological structure) based on medical image stacks (a stack is an ordered collection of 2D images). We show that the creation of a generic model that captures spatial information exploitable in statistical analyses is facilitated by coupling our generalized segmentation method to existing automatic image registration algorithms. The method of creating generic 3D models consists of the following processing steps: (i) scanning subjects to obtain image stacks; (ii) creating individual 3D models from the stacks; (iii) interactively extracting sub-volume by cutting each model to generate the sub-model of interest; (iv) creating image stacks that contain only the information pertaining to the sub-models; (v) iteratively registering the corresponding new 2D image stacks; (vi) averaging the newly created sub-models based on intensity to produce the generic model from all the individual sub-models. After several registration procedures are applied to the image stacks, we can create averaged image stacks with sharp boundaries. The averaged 3D model created from those image stacks is very close to the average representation of the population. The image registration time varies depending on the image size and the desired accuracy of the registration. Both volumetric data and surface model for the generic 3D model are created at the final step. Our method is very flexible and easy to use such that anyone can use image stacks to create models and retrieve a sub-region from it at their ease.

  14. Building generic anatomical models using virtual model cutting and iterative registration

    Directory of Open Access Journals (Sweden)

    Hallgrímsson Benedikt

    2010-02-01

    Background: Using 3D generic models to statistically analyze trends in biological structure changes is an important tool in morphometrics research. Therefore, 3D generic models built for a range of populations are in high demand. However, due to the complexity of biological structures and the limited views of them that medical images can offer, it is still an exceptionally difficult task to quickly and accurately create 3D generic models (a model is a 3D graphical representation of a biological structure) based on medical image stacks (a stack is an ordered collection of 2D images). We show that the creation of a generic model that captures spatial information exploitable in statistical analyses is facilitated by coupling our generalized segmentation method to existing automatic image registration algorithms. Methods: The method of creating generic 3D models consists of the following processing steps: (i) scanning subjects to obtain image stacks; (ii) creating individual 3D models from the stacks; (iii) interactively extracting sub-volumes by cutting each model to generate the sub-model of interest; (iv) creating image stacks that contain only the information pertaining to the sub-models; (v) iteratively registering the corresponding new 2D image stacks; (vi) averaging the newly created sub-models based on intensity to produce the generic model from all the individual sub-models. Results: After several registration procedures are applied to the image stacks, we can create averaged image stacks with sharp boundaries. The averaged 3D model created from those image stacks is very close to the average representation of the population. The image registration time varies depending on the image size and the desired accuracy of the registration. Both volumetric data and a surface model for the generic 3D model are created at the final step. Conclusions: Our method is very flexible and easy to use such that anyone can use image stacks to create models and retrieve sub-regions from them with ease.

  15. State space model-based trust evaluation over wireless sensor networks: an iterative particle filter approach

    Directory of Open Access Journals (Sweden)

    Bin Liu

    2017-03-01

    In this study, the authors propose a state space modelling approach for trust evaluation in wireless sensor networks. In their state space trust model (SSTM), each sensor node is associated with a trust metric, which measures to what extent the data transmitted from this node would better be trusted by the server node. Given the SSTM, they translate the trust evaluation problem into a non-linear state filtering problem. To estimate the state based on the SSTM, a component-wise iterative state inference procedure is proposed to work in tandem with the particle filter (PF), and thus the resulting algorithm is termed iterative PF (IPF). The computational complexity of the IPF algorithm is theoretically linearly related to the dimension of the state. This property is desirable especially for high-dimensional trust evaluation and state filtering problems. The performance of the proposed algorithm is evaluated by both simulations and real data analysis.
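
    For context, the sketch below implements a basic bootstrap particle filter for a scalar state-space model, the building block that the component-wise IPF extends to high-dimensional trust vectors; the model and noise levels are assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        def simulate(T=50):
            # Assumed scalar state-space model: AR(1) state, noisy output.
            x = np.zeros(T)
            for t in range(1, T):
                x[t] = 0.8 * x[t - 1] + rng.normal(scale=0.5)
            y = x + rng.normal(scale=0.3, size=T)
            return x, y

        def particle_filter(y, n_particles=500):
            T = len(y)
            particles = rng.normal(size=n_particles)
            est = np.zeros(T)
            for t in range(T):
                if t > 0:   # propagate through the state transition
                    particles = (0.8 * particles
                                 + rng.normal(scale=0.5, size=n_particles))
                # Weight by observation likelihood, then resample.
                w = np.exp(-0.5 * ((y[t] - particles) / 0.3) ** 2)
                w /= w.sum()
                est[t] = w @ particles
                particles = rng.choice(particles, size=n_particles, p=w)
            return est

        x_true, y_obs = simulate()
        x_hat = particle_filter(y_obs)
        print(np.sqrt(np.mean((x_hat - x_true) ** 2)))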

  16. Mechanical and Electrical Modeling of Strands in Two ITER CS Cable Designs

    CERN Document Server

    Torre, A; Ciazynski, D

    2014-01-01

    Following the test of the first Central Solenoid (CS) conductor short samples for the International Thermonuclear Experimental Reactor (ITER) in the SULTAN facility, the ITER Organization (IO) decided to manufacture and test two alternate samples using four different cable designs. These samples, while using the same Nb3Sn strand, were meant to assess the influence of various cable design parameters on the conductor performance and behavior under mechanical cycling. In particular, the second of these samples, CSIO2, aimed at comparing designs with modified cabling twist pitch sequences. This sample has been tested, and the two legs exhibited very different behaviors. To help understand what could lead to such a difference, these two cables were mechanically modeled using the MULTIFIL code, and the resulting strain map was used as an input to the CEA electrical code CARMEN. This article presents the main data extracted from the mechanical simulation and their use in the electrical modeling of individual strands...

  17. Clustered iterative stochastic ensemble method for multi-modal calibration of subsurface flow models

    KAUST Repository

    Elsheikh, Ahmed H.

    2013-05-01

    A novel multi-modal parameter estimation algorithm is introduced. Parameter estimation is an ill-posed inverse problem that might admit many different solutions. This is attributed to the limited amount of measured data used to constrain the inverse problem. The proposed multi-modal model calibration algorithm uses an iterative stochastic ensemble method (ISEM) for parameter estimation. ISEM employs an ensemble of directional derivatives within a Gauss-Newton iteration for nonlinear parameter estimation. ISEM is augmented with a clustering step based on k-means algorithm to form sub-ensembles. These sub-ensembles are used to explore different parts of the search space. Clusters are updated at regular intervals of the algorithm to allow merging of close clusters approaching the same local minima. Numerical testing demonstrates the potential of the proposed algorithm in dealing with multi-modal nonlinear parameter estimation for subsurface flow models. © 2013 Elsevier B.V.
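
    The clustering step can be sketched as follows: an ensemble of candidate parameter vectors is split into sub-ensembles with k-means, and each sub-ensemble is updated toward its own best member so that separate modes are explored in parallel. The Gauss-Newton directional-derivative update of ISEM is replaced here by a much simpler move, and the bimodal objective is an assumption.

        import numpy as np
        from scipy.cluster.vq import kmeans2

        def misfit(theta):
            # Bimodal toy objective with minima near (-2, 0) and (2, 0).
            return np.minimum(((theta - [-2, 0]) ** 2).sum(-1),
                              ((theta - [2, 0]) ** 2).sum(-1))

        rng = np.random.default_rng(3)
        ensemble = rng.uniform(-4, 4, size=(60, 2))

        for iteration in range(30):
            # Form sub-ensembles with k-means, as in the clustering step.
            centroids, labels = kmeans2(ensemble, k=2, minit='++', seed=1)
            for c in range(2):
                members = labels == c
                if not members.any():
                    continue
                best = ensemble[members][np.argmin(misfit(ensemble[members]))]
                # Pull the sub-ensemble toward its best member, plus jitter.
                ensemble[members] += 0.3 * (best - ensemble[members])
                ensemble[members] += rng.normal(scale=0.05,
                                                size=ensemble[members].shape)

        print(np.round(kmeans2(ensemble, k=2, minit='++', seed=1)[0], 2))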

  18. Dynamic analysis of ITER tokamak. Based on results of vibration test using scaled model

    International Nuclear Information System (INIS)

    Takeda, Nobukazu; Kakudate, Satoshi; Nakahira, Masataka

    2005-01-01

    The vibration experiments of the support structures with flexible plates for the ITER major components, such as the toroidal field coil (TF coil) and vacuum vessel (VV), were performed using small-sized flexible plates, aiming to obtain their basic mechanical characteristics such as the dependence of the stiffness on the loading angle. The experimental results were compared with the analytical ones in order to establish an adequate analytical model for the ITER support structure with flexible plates. As a result, the bolt connection of the flexible plates on the base plate strongly affected the stiffness of the flexible plates. After studies of modeling the connection of the bolts, it was found that the analytical results modeling the bolts with finite stiffness only in the axial direction and infinite stiffness in the other directions agree well with the experimental ones. Based on this, numerical analysis of the actual support structures of the ITER VV and TF coil was performed. The support structure composed of flexible plates and connection bolts was modeled as a spring composed of only two spring elements simulating the in-plane and out-of-plane stiffness of the support structure with flexible plates, including the effect of connection bolts. The stiffness of both spring models for the VV and TF coil agrees well with that of shell models simulating the actual structures, such as flexible plates and connection bolts, based on the experimental results. It is therefore found that the spring model with only two values of stiffness makes it possible to simplify the complicated support structure with flexible plates for the dynamic analysis of the VV and TF coil. Using the proposed spring model, the dynamic analysis of the VV and TF coil for ITER was performed to estimate their integrity under the design earthquake. As a result, it is found that the maximum relative displacement of 8.6 mm between the VV and TF coil is much less than 100 mm, so that the integrity of the VV and TF coil of the ITER is confirmed.

  19. An Iterative Algorithm to Determine the Dynamic User Equilibrium in a Traffic Simulation Model

    Science.gov (United States)

    Gawron, C.

    An iterative algorithm to determine the dynamic user equilibrium with respect to link costs defined by a traffic simulation model is presented. Each driver's route choice is modeled by a discrete probability distribution which is used to select a route in the simulation. After each simulation run, the probability distribution is adapted to minimize the travel costs. Although the algorithm does not depend on the simulation model, a queuing model is used for performance reasons. The stability of the algorithm is analyzed for a simple example network. As an application example, a dynamic version of Braess's paradox is studied.
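
    A minimal sketch of the day-to-day learning loop described above follows, with the traffic simulation replaced by a simple volume-delay function on a two-route network; all parameters are illustrative.

        import numpy as np

        # Each driver keeps a probability distribution over routes, samples
        # a route, observes the congested travel cost, and shifts
        # probability toward the cheaper route.
        rng = np.random.default_rng(0)
        n_drivers, n_routes, n_days = 1000, 2, 60
        free_flow = np.array([10.0, 15.0])      # minutes, assumed
        capacity = np.array([400.0, 800.0])     # vehicles, assumed
        probs = np.full((n_drivers, n_routes), 0.5)

        for day in range(n_days):
            choices = (rng.random(n_drivers) > probs[:, 0]).astype(int)
            flows = np.bincount(choices, minlength=n_routes)
            costs = free_flow * (1 + (flows / capacity) ** 2)  # congestion
            # Shift each distribution toward the currently cheaper route.
            better = np.argmin(costs)
            probs[:, better] += 0.05 * (1 - probs[:, better])
            probs[:, 1 - better] = 1 - probs[:, better]

        print(flows, np.round(costs, 2))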

  20. Scaling of the MHD perturbation amplitude required to trigger a disruption and predictions for ITER

    Czech Academy of Sciences Publication Activity Database

    de Vries, P.C.; Pautasso, G.; Nardon, E.; Cahyna, Pavel; Gerasimov, S.; Havlíček, Josef; Hender, T.C.; Huijsmans, G.T.A.; Lehnen, M.; Maraschek, M.; Markovič, Tomáš; Snipes, J.A.

    2016-01-01

    Vol. 56, No. 2 (2016), article no. 026007. ISSN 0029-5515. R&D Projects: GA MŠk(CZ) LM2011021. EU Projects: European Commission(XE) 633053 - EUROfusion. Institutional support: RVO:61389021. Keywords: disruptions * locked modes * MHD instabilities * ITER * COMPASS tokamak. Subject RIV: BL - Plasma and Gas Discharge Physics. OECD category: Fluids and plasma physics (including surface physics). Impact factor: 3.307, year: 2016. http://iopscience.iop.org/article/10.1088/0029-5515/56/2/026007/meta

  2. An Iterative Interplanetary Scintillation (IPS) Analysis Using Time-dependent 3-D MHD Models as Kernels

    Science.gov (United States)

    Jackson, B. V.; Yu, H. S.; Hick, P. P.; Buffington, A.; Odstrcil, D.; Kim, T. K.; Pogorelov, N. V.; Tokumaru, M.; Bisi, M. M.; Kim, J.; Yun, J.

    2017-12-01

    The University of California, San Diego has developed an iterative remote-sensing time-dependent three-dimensional (3-D) reconstruction technique which provides volumetric maps of density, velocity, and magnetic field. We have applied this technique in near real time for over 15 years with a kinematic model approximation to fit data from ground-based interplanetary scintillation (IPS) observations. Our modeling concept extends volumetric data from an inner boundary placed above the Alfvén surface out to the inner heliosphere. We now use this technique to drive 3-D MHD models at their inner boundary and generate output 3-D data files that are fit to remotely-sensed observations (in this case IPS observations); the process is then iterated. These analyses are also iteratively fit to in-situ spacecraft measurements near Earth. To facilitate this process, we have developed a traceback from input 3-D MHD volumes to yield an updated boundary in density, temperature, and velocity, which also includes magnetic-field components. Here we will show examples of this analysis using the ENLIL 3D-MHD and the University of Alabama Multi-Scale Fluid-Kinetic Simulation Suite (MS-FLUKSS) heliospheric codes. These examples help refine poorly-known 3-D MHD variables (i.e., density, temperature) and parameters (gamma) by fitting heliospheric remotely-sensed data between the region near the solar surface and in-situ measurements near Earth.

  3. An iterative representer-based scheme for data inversion in reservoir modeling

    International Nuclear Information System (INIS)

    Iglesias, Marco A; Dawson, Clint

    2009-01-01

    In this paper, we develop a mathematical framework for data inversion in reservoir models. A general formulation is presented for the identification of uncertain parameters in an abstract reservoir model described by a set of nonlinear equations. Given a finite number of measurements of the state and prior knowledge of the uncertain parameters, an iterative representer-based scheme (IRBS) is proposed to find improved parameters. In this approach, the representer method is used to solve a linear data assimilation problem at each iteration of the algorithm. We apply the theory of iterative regularization to establish conditions for which the IRBS will converge to a stable approximation of a solution to the parameter identification problem. These theoretical results are applied to the identification of the second-order coefficient of a forward model described by a parabolic boundary value problem. Numerical results are presented to show the capabilities of the IRBS for the reconstruction of hydraulic conductivity from the steady-state of groundwater flow, as well as the absolute permeability in the single-phase Darcy flow through porous media.

  4. Automatic Generation and Validation of an ITER Neutronics Model from CAD Data

    International Nuclear Information System (INIS)

    Tsige-Tamirat, H.; Fischer, U.; Serikov, A.; Stickel, S.

    2006-01-01

    Quality assurance rules request the consistency of the geometry model used in neutronics Monte Carlo calculations and the underlying engineering CAD model. This can be ensured by automatically converting the CAD geometry data into the representation used by Monte Carlo codes such as MCNP. Suitable conversion algorithms have been previously developed at FZK and were implemented into an interface program. This paper describes the application of the interface program to a CAD model of a 40 degree ITER torus sector for the generation of a neutronics geometry model for MCNP. A CAD model provided by ITER consisting of all significant components was analyzed, pre-processed, and converted into MCNP geometry representation. The analysis and pre-processing steps include checking the adequacy of the CAD model for neutronics calculations in terms of geometric representation and complexity, and applying corresponding corrections. This step is followed by the conversion of the CAD model into MCNP geometry including error detection and correction as well as the completion of the model by voids. The conversion process does not introduce any approximations so that the resulting MCNP geometry is fully equivalent to the original CAD geometry. However, there is a moderate increase of the complexity measured in terms of the number of cells and surfaces. The validity of the converted geometry model was shown by comparing the results of stochastic MCNP volume calculations and the volumes provided by the CAD kernel of the interface programme. Furthermore, successful MCNP test calculations have been performed for verifying the converted ITER model in application calculations. (author)

  5. Melanoma risk prediction models

    Directory of Open Access Journals (Sweden)

    Nikolić Jelena

    2014-01-01

    Background/Aim. The lack of effective therapy for advanced stages of melanoma emphasizes the importance of preventive measures and screenings of population at risk. Identifying individuals at high risk should allow targeted screenings and follow-up involving those who would benefit most. The aim of this study was to identify the most significant factors for melanoma prediction in our population and to create prognostic models for identification and differentiation of individuals at risk. Methods. This case-control study included 697 participants (341 patients and 356 controls) that underwent extensive interview and skin examination in order to check risk factors for melanoma. Pairwise univariate statistical comparison was used for the coarse selection of the most significant risk factors. These factors were fed into logistic regression (LR) and alternating decision tree (ADT) prognostic models that were assessed for their usefulness in identification of patients at risk to develop melanoma. Validation of the LR model was done by the Hosmer and Lemeshow test, whereas the ADT was validated by 10-fold cross-validation. The achieved sensitivity, specificity, accuracy and AUC for both models were calculated. The melanoma risk score (MRS) based on the outcome of the LR model was presented. Results. The LR model showed that the following risk factors were associated with melanoma: sunbeds (OR = 4.018; 95% CI 1.724-9.366 for those that sometimes used sunbeds), solar damage of the skin (OR = 8.274; 95% CI 2.661-25.730 for those with severe solar damage), hair color (OR = 3.222; 95% CI 1.984-5.231 for light brown/blond hair), the number of common naevi (over 100 naevi had OR = 3.57; 95% CI 1.427-8.931), the number of dysplastic naevi (from 1 to 10 dysplastic naevi OR was 2.672; 95% CI 1.572-4.540; for more than 10 naevi OR was 6.487; 95% CI 1.993-21.119), Fitzpatrick's phototype and the presence of congenital naevi. Red hair, phototype I and large congenital naevi were

  6. ITER EDA newsletter. V. 7, no. 7

    International Nuclear Information System (INIS)

    1998-07-01

    This newsletter contains the articles: 'Extraordinary ITER council meeting', 'ITER EDA final safety meeting' and 'Summary report of the 3rd combined workshop of the ITER confinement and transport and ITER confinement database and modeling expert groups'

  7. Limiting CT radiation dose in children with craniosynostosis: phantom study using model-based iterative reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Kaasalainen, Touko; Lampinen, Anniina [University of Helsinki and Helsinki University Hospital, HUS Medical Imaging Center, Radiology, POB 340, Helsinki (Finland); University of Helsinki, Department of Physics, Helsinki (Finland); Palmu, Kirsi [University of Helsinki and Helsinki University Hospital, HUS Medical Imaging Center, Radiology, POB 340, Helsinki (Finland); School of Science, Aalto University, Department of Biomedical Engineering and Computational Science, Helsinki (Finland); Reijonen, Vappu; Kortesniemi, Mika [University of Helsinki and Helsinki University Hospital, HUS Medical Imaging Center, Radiology, POB 340, Helsinki (Finland); Leikola, Junnu [University of Helsinki and Helsinki University Hospital, Department of Plastic Surgery, Helsinki (Finland); Kivisaari, Riku [University of Helsinki and Helsinki University Hospital, Department of Neurosurgery, Helsinki (Finland)

    2015-09-15

    Medical professionals need to exercise particular caution when developing CT scanning protocols for children who require multiple CT studies, such as those with craniosynostosis. To evaluate the utility of ultra-low-dose CT protocols with model-based iterative reconstruction techniques for craniosynostosis imaging. We scanned two pediatric anthropomorphic phantoms with a 64-slice CT scanner using different low-dose protocols for craniosynostosis. We measured organ doses in the head region with metal-oxide-semiconductor field-effect transistor (MOSFET) dosimeters. Numerical simulations served to estimate organ and effective doses. We objectively and subjectively evaluated the quality of images produced by adaptive statistical iterative reconstruction (ASiR) 30%, ASiR 50% and Veo (all by GE Healthcare, Waukesha, WI). Image noise and contrast were determined for different tissues. Mean organ dose with the newborn phantom was decreased up to 83% compared to the routine protocol when using ultra-low-dose scanning settings. Similarly, for the 5-year phantom the greatest radiation dose reduction was 88%. The numerical simulations supported the findings with MOSFET measurements. The image quality remained adequate with Veo reconstruction, even at the lowest dose level. Craniosynostosis CT with model-based iterative reconstruction could be performed with a 20-μSv effective dose, corresponding to the radiation exposure of plain skull radiography, without compromising required image quality. (orig.)

  8. A parametric AC loss model of the ITER coils for control optimization

    CERN Document Server

    Marinucci, C; Bottura, L

    2010-01-01

    A recent study on AC loss calculation in different operating regimes of relevance for ITER plasma control has shown that AC loss calculations can be performed on a model with a level of complexity and completeness not achieved so far [L. Bottura, P. Bruzzone, J.B. Lister, C. Marinucci, A. Portone, Computation of AC losses in the ITER magnets during fast field transients, IEEE Trans. Appl. Supercond. 17 (2) (2007) 2438-2441]. The model developed is, however, too large and computationally expensive for inclusion in a fast control optimization procedure. For this reason a second study was launched, aiming at developing a simplified calculation method. The simplified calculation method described here provides faster estimates of AC loss in the ITER coils. The method is based on a Fourier decomposition of the field excitation, a loss calculation for each Fourier component, and a sum over all frequencies to estimate the total loss associated with a given current waveform. Typical accuracy that can be reached is of the order of...
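
    The Fourier-decomposition idea can be sketched as follows: transform the field waveform, evaluate a per-frequency loss model, and sum over harmonics. The single-time-constant coupling-loss expression and the parameter values below are assumptions, not the ITER coil model of the paper.

        import numpy as np

        mu0 = 4e-7 * np.pi
        n_tau = 0.03        # coupling-loss time constant n*tau (s), assumed

        def ac_loss_per_volume(t, B):
            # Harmonic amplitudes of the field waveform via FFT.
            dt = t[1] - t[0]
            freqs = np.fft.rfftfreq(len(t), dt)
            amps = np.abs(np.fft.rfft(B)) * 2 / len(t)
            w = 2 * np.pi * freqs
            # Assumed average coupling-loss density per harmonic (W/m^3),
            # summed over all frequencies.
            p = (amps ** 2 / (2 * mu0)) * n_tau * w ** 2 / (1 + (w * n_tau) ** 2)
            return p.sum()

        t = np.linspace(0, 10, 20001)
        B = (0.2 * np.sin(2 * np.pi * 0.5 * t)
             + 0.05 * np.sin(2 * np.pi * 5 * t))
        print(ac_loss_per_volume(t, B))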

  9. Computed tomography depiction of small pediatric vessels with model-based iterative reconstruction

    International Nuclear Information System (INIS)

    Koc, Gonca; Courtier, Jesse L.; Phelps, Andrew; Marcovici, Peter A.; MacKenzie, John D.

    2014-01-01

    Computed tomography (CT) is extremely important in characterizing blood vessel anatomy and vascular lesions in children. Recent advances in CT reconstruction technology hold promise for improved image quality and also reductions in radiation dose. This report evaluates potential improvements in image quality for the depiction of small pediatric vessels with model-based iterative reconstruction (Veo™), a technique developed to improve image quality and reduce noise. To evaluate Veo™ as an improved method when compared to adaptive statistical iterative reconstruction (ASIR™) for the depiction of small vessels on pediatric CT. Seventeen patients (mean age: 3.4 years, range: 2 days to 10.0 years; 6 girls, 11 boys) underwent contrast-enhanced CT examinations of the chest and abdomen in this HIPAA compliant and institutional review board approved study. Raw data were reconstructed into separate image datasets using Veo™ and ASIR™ algorithms (GE Medical Systems, Milwaukee, WI). Four blinded radiologists subjectively evaluated image quality. The pulmonary, hepatic, splenic and renal arteries were evaluated for the length and number of branches depicted. Datasets were compared with parametric and non-parametric statistical tests. Readers stated a preference for Veo™ over ASIR™ images when subjectively evaluating image quality criteria for vessel definition, image noise and resolution of small anatomical structures. The mean image noise in the aorta and fat was significantly less for Veo™ vs. ASIR™ reconstructed images. Quantitative measurements of mean vessel lengths and number of branch vessels delineated were significantly different for Veo™ and ASIR™ images. Veo™ consistently showed more of the vessel anatomy: longer vessel length and more branching vessels. When compared to the more established adaptive statistical iterative reconstruction algorithm, model-based iterative reconstruction improved the depiction of small pediatric vessels.

  10. Electrostatic ion thrusters - towards predictive modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kalentev, O.; Matyash, K.; Duras, J.; Lueskow, K.F.; Schneider, R. [Ernst-Moritz-Arndt Universitaet Greifswald, D-17489 (Germany); Koch, N. [Technische Hochschule Nuernberg Georg Simon Ohm, Kesslerplatz 12, D-90489 Nuernberg (Germany); Schirra, M. [Thales Electronic Systems GmbH, Soeflinger Strasse 100, D-89077 Ulm (Germany)

    2014-02-15

    The development of electrostatic ion thrusters so far has mainly been based on empirical and qualitative know-how, and on evolutionary iteration steps. This resulted in considerable effort regarding prototype design, construction and testing and therefore in significant development and qualification costs and high time demands. For future developments it is anticipated to implement simulation tools which allow for quantitative prediction of ion thruster performance, long-term behavior and space craft interaction prior to hardware design and construction. Based on integrated numerical models combining self-consistent kinetic plasma models with plasma-wall interaction modules, a new quality in the description of electrostatic thrusters can be reached. These open up the prospect of predictive modeling in this field. This paper reviews the application of a set of predictive numerical modeling tools on an ion thruster model of the HEMP-T (High Efficiency Multi-stage Plasma Thruster) type patented by Thales Electron Devices GmbH. (copyright 2014 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  11. Dynamic modelling of flexibly supported gears using iterative convergence of tooth mesh stiffness

    Science.gov (United States)

    Xue, Song; Howard, Ian

    2016-12-01

    This paper presents a new gear dynamic model for flexibly supported gear sets, aiming to improve the accuracy of gear fault diagnostic methods. In the model, the operating gear centre distance, which affects gear design parameters such as the gear mesh stiffness, has been selected as the iteration criterion because it significantly deviates from its nominal value when a flexibly supported gearset is operating. An FEA method was developed to calculate the gear mesh stiffness for varying gear centre distance, which can then be incorporated by iteration into the gear dynamic model. The dynamic simulation results from previous models, which neglect the operating gear centre distance change, and those from the new model, which incorporates it, were obtained by numerical integration of the differential equations of motion using the Newmark method. Some common diagnostic tools were utilized to compare the fault diagnostic results of the two models. The results of this paper indicate that the major difference between the two diagnostic results for the cracked tooth lies in the extended duration of the crack event and in changes to the phase modulation of the coherent time synchronous averaged signal, even though other notable differences in other diagnostic results can also be observed.
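
    The centre-distance iteration can be sketched as a fixed point: the operating centre distance depends on the mesh stiffness, which in turn depends on the centre distance. The linear stiffness model and all numbers below stand in for the paper's FEA-computed mesh stiffness and are purely illustrative.

        # Fixed-point iteration on the operating centre distance of a
        # flexibly supported gear pair (all parameters assumed).
        def mesh_stiffness(centre_distance, nominal=0.100):
            # Assumed: stiffness (N/m) drops as the gears move apart.
            return 4e8 * (1 - 50.0 * (centre_distance - nominal))

        def operating_centre_distance(force=2000.0, support_k=5e7,
                                      nominal=0.100, tol=1e-9, max_iter=50):
            cd = nominal
            for i in range(max_iter):
                km = mesh_stiffness(cd, nominal)
                # The transmitted force pushes the shafts apart against the
                # support stiffness plus an assumed mesh contribution.
                cd_new = nominal + force / (support_k + km * 0.01)
                if abs(cd_new - cd) < tol:
                    return cd_new, i + 1
                cd = cd_new
            return cd, max_iter

        print(operating_centre_distance())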

  12. An iterative inverse method to estimate basal topography and initialize ice flow models

    Directory of Open Access Journals (Sweden)

    W. J. J. van Pelt

    2013-06-01

    We evaluate an inverse approach to reconstruct distributed bedrock topography and simultaneously initialize an ice flow model. The inverse method involves an iterative procedure in which an ice dynamical model (PISM) is run multiple times over a prescribed period, while being forced with space- and time-dependent climate input. After every iteration bed heights are adjusted using information on the remaining misfit between observed and modeled surface topography. The inverse method is first applied in synthetic experiments with a constant climate forcing to verify convergence and robustness of the approach in three dimensions. In a next step, the inverse approach is applied to Nordenskiöldbreen, Svalbard, forced with height- and time-dependent climate input since 1300 AD. An L-curve stopping criterion is used to prevent overfitting. Validation against radar data reveals a high correlation (up to R = 0.89) between modeled and observed thicknesses. Remaining uncertainties can mainly be ascribed to inaccurate model physics, in particular, uncertainty in the description of sliding. Results demonstrate the applicability of this inverse method to reconstruct the ice thickness distribution of glaciers and ice caps. In addition to reconstructing bedrock topography, the method provides a direct tool to initialize ice flow models for forecasting experiments.
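
    The core update can be sketched in a few lines: after each forward run, bed heights are lowered where the modeled surface is too high and raised where it is too low. The pointwise nonlinear forward model below stands in for PISM and is purely illustrative.

        import numpy as np

        def forward_surface(bed):
            # Hypothetical forward model mapping bed elevation to ice
            # surface elevation (stands in for an ice flow model run).
            return bed + 200.0 - 10.0 * np.tanh(bed / 100.0)

        def invert_bed(s_obs, n_iter=50, relax=0.5):
            bed = np.zeros_like(s_obs)         # initial flat-bed guess
            for _ in range(n_iter):
                misfit = forward_surface(bed) - s_obs
                bed -= relax * misfit          # lower bed where surface high
            return bed

        x = np.linspace(0.0, 10.0, 200)
        true_bed = 50.0 * np.sin(x)
        s_obs = forward_surface(true_bed)
        bed_est = invert_bed(s_obs)
        print(np.abs(bed_est - true_bed).max())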

  13. Reconstruction of physiological signals using iterative retraining and accumulated averaging of neural network models.

    Science.gov (United States)

    McBride, Joseph; Sullivan, Adam; Xia, Henian; Petrie, Adam; Zhao, Xiaopeng

    2011-06-01

    Real-time monitoring of vital physiological signals is of significant clinical relevance. Disruptions in the signals are frequently encountered and make it difficult for precise diagnosis. Thus, the ability to accurately predict/recover the lost signals could greatly impact medical research and application. We have developed new techniques of signal reconstructions based on iterative retraining and accumulated averaging of neural networks. The effectiveness and robustness of these techniques are demonstrated using data records from the Computing in Cardiology/PhysioNet Challenge 2010. The average correlation coefficient between prediction and target for 100 records of various target signals is about 0.9. We have also explored influences of a few important parameters on the accuracy of reconstructions. The developed techniques may be used to detect changes in patient state and to recognize intervals of signal corruption.
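
    A minimal sketch of the retraining-and-averaging idea follows, using a small scikit-learn network retrained on resampled data with the accumulated predictions averaged; the paper's exact scheme, signals and network settings are not reproduced, and the synthetic channels below are assumptions.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        t = np.linspace(0, 10, 2000)
        # Synthetic correlated channels and a target signal to reconstruct.
        X = np.column_stack([np.sin(t), np.cos(2 * t), np.sin(3 * t + 0.5)])
        y = (0.6 * X[:, 0] - 0.3 * X[:, 1] + 0.2 * X[:, 2]
             + rng.normal(0, 0.05, len(t)))

        train, test = slice(0, 1500), slice(1500, None)
        accumulated = np.zeros(500)
        n_rounds = 10
        for r in range(n_rounds):
            # Retrain on a bootstrap resample, accumulate the predictions.
            idx = rng.choice(1500, size=1500, replace=True)
            net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=500,
                               random_state=r).fit(X[train][idx], y[train][idx])
            accumulated += net.predict(X[test])
        recon = accumulated / n_rounds
        print(np.corrcoef(recon, y[test])[0, 1])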

  14. CT angiography after carotid artery stenting: assessment of the utility of adaptive statistical iterative reconstruction and model-based iterative reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Kuya, Keita; Shinohara, Yuki; Fujii, Shinya; Ogawa, Toshihide [Tottori University, Division of Radiology, Department of Pathophysiological Therapeutic Science, Faculty of Medicine, Yonago (Japan); Sakamoto, Makoto; Watanabe, Takashi [Tottori University, Division of Neurosurgery, Department of Brain and Neurosciences, Faculty of Medicine, Yonago (Japan); Iwata, Naoki; Kishimoto, Junichi [Tottori University, Division of Clinical Radiology Faculty of Medicine, Yonago (Japan); Kaminou, Toshio [Osaka Minami Medical Center, Department of Radiology, Osaka (Japan)

    2014-11-15

    Follow-up CT angiography (CTA) is routinely performed for post-procedure management after carotid artery stenting (CAS). However, the stent lumen tends to be underestimated because of stent artifacts on CTA reconstructed with the filtered back projection (FBP) technique. We assessed the utility of new iterative reconstruction techniques, such as adaptive statistical iterative reconstruction (ASIR) and model-based iterative reconstruction (MBIR), for CTA after CAS in comparison with FBP. In a phantom study, we evaluated the differences among the three reconstruction techniques with regard to the relationship between the stent luminal diameter and the degree of underestimation of stent luminal diameter. In a clinical study, 34 patients who underwent follow-up CTA after CAS were included. We compared the stent luminal diameters among FBP, ASIR, and MBIR, and performed visual assessment of low attenuation area (LAA) in the stent lumen using a three-point scale. In the phantom study, stent luminal diameter was increasingly underestimated as luminal diameter became smaller in all CTA images. Stent luminal diameter was larger with MBIR than with the other reconstruction techniques. Similarly, in the clinical study, stent luminal diameter was larger with MBIR than with the other reconstruction techniques. LAA detectability scores of MBIR were greater than or equal to those of FBP and ASIR in all cases. MBIR improved the accuracy of assessment of stent luminal diameter and LAA detectability in the stent lumen when compared with FBP and ASIR. We conclude that MBIR is a useful reconstruction technique for CTA after CAS. (orig.)

  15. Protein-Protein Interactions Prediction Based on Iterative Clique Extension with Gene Ontology Filtering

    Directory of Open Access Journals (Sweden)

    Lei Yang

    2014-01-01

    Cliques (maximal complete subnets) in a protein-protein interaction (PPI) network are an important resource used to analyze protein complexes and functional modules. Clique-based methods of predicting PPI complement the data deficiencies of biological experiments. However, clique-based prediction methods depend only on the topology of the network. The false-positive and false-negative interactions in a network usually interfere with prediction. Therefore, we propose a method combining a clique-based method of prediction and gene ontology (GO) annotations to overcome this shortcoming and improve the accuracy of predictions. According to different GO correcting rules, we generate two predicted interaction sets which guarantee the quality and quantity of predicted protein interactions. The proposed method is applied to the PPI network from the Database of Interacting Proteins (DIP) and most of the predicted interactions are verified by another biological database, BioGRID. The predicted protein interactions are appended to the original protein network, which leads to clique extension and shows the significance of biological meaning.

  16. Confidence scores for prediction models

    DEFF Research Database (Denmark)

    Gerds, Thomas Alexander; van de Wiel, MA

    2011-01-01

    In medical statistics, many alternative strategies are available for building a prediction model based on training data. Prediction models are routinely compared by means of their prediction performance in independent validation data. If only one data set is available for training and validation, then rival strategies can still be compared based on repeated bootstraps of the same data. Often, however, the overall performance of rival strategies is similar and it is thus difficult to decide for one model. Here, we investigate the variability of the prediction models that results when the same... to distinguish rival prediction models with similar prediction performances. Furthermore, on the subject level a confidence score may provide useful supplementary information for new patients who want to base a medical decision on predicted risk. The ideas are illustrated and discussed using data from cancer...

  17. An Iterative Optimization Algorithm for Lens Distortion Correction Using Two-Parameter Models

    Directory of Open Access Journals (Sweden)

    Daniel Santana-Cedrés

    2016-12-01

    We present a method for the automatic estimation of two-parameter radial distortion models, considering polynomial as well as division models. The method first detects the longest distorted lines within the image by applying the Hough transform enriched with a radial distortion parameter. From these lines, the first distortion parameter is estimated, then we initialize the second distortion parameter to zero and the two-parameter model is embedded into an iterative nonlinear optimization process to improve the estimation. This optimization aims at reducing the distance from the edge points to the lines, adjusting two distortion parameters as well as the coordinates of the center of distortion. Furthermore, this allows detecting more points belonging to the distorted lines, so that the Hough transform is iteratively repeated to extract a better set of lines until no improvement is achieved. We present some experiments on real images with significant distortion to show the ability of the proposed approach to automatically correct this type of distortion as well as a comparison between the polynomial and division models.
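
    For reference, the two-parameter polynomial radial distortion model and an iterative undistortion of image points can be sketched as below; the Hough-based line detection and the nonlinear refinement of the paper are omitted, and the k1, k2 values are illustrative.

        import numpy as np

        def distort(p, k1, k2, centre):
            # Two-parameter polynomial radial distortion model.
            d = p - centre
            r2 = (d ** 2).sum(-1, keepdims=True)
            return centre + d * (1 + k1 * r2 + k2 * r2 ** 2)

        def undistort(p_dist, k1, k2, centre, n_iter=20):
            # Fixed-point iteration: repeatedly divide out the radial factor.
            p = p_dist.copy()
            for _ in range(n_iter):
                d = p - centre
                r2 = (d ** 2).sum(-1, keepdims=True)
                p = centre + (p_dist - centre) / (1 + k1 * r2 + k2 * r2 ** 2)
            return p

        centre = np.array([0.5, 0.5])
        pts = np.array([[0.1, 0.2], [0.9, 0.8], [0.5, 0.95]])
        warped = distort(pts, k1=-0.3, k2=0.05, centre=centre)
        print(np.abs(undistort(warped, -0.3, 0.05, centre) - pts).max())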

  18. On the iterative solution of the gap equation in the Nambu-Jona-Lasinio model

    Science.gov (United States)

    Martinez, A.; Raya, A.

    2017-10-01

    In this work we revisit the standard iterative procedure used to find the solution of the gap equation in the Nambu-Jona-Lasinio model, within the most popular regularization schemes available in the literature, in the super-strong coupling regime. We observe that whereas for the hard cut-off regularization schemes the procedure smoothly converges to the physically relevant solution, for the Pauli-Villars and proper-time regularization schemes it becomes chaotic in the sense of discrete dynamical systems. We point out the need for an appropriate interpretation of the non-convergence of this procedure to the solution of the gap equation.
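
    In the hard cut-off scheme, where the abstract notes the iteration converges smoothly, the procedure can be sketched as a fixed-point loop; the parameter values below are typical textbook numbers quoted as assumptions, not the paper's choices.

        import numpy as np
        from scipy.integrate import quad

        # Fixed-point iteration of the NJL gap equation with a hard
        # 3-momentum cutoff:
        #   M = m + (2 G Nc Nf / pi^2) * Int_0^Lambda dp p^2 M / sqrt(p^2+M^2)
        Nc, Nf = 3, 2
        Lambda = 0.653     # GeV, cutoff (assumed)
        G = 5.0            # GeV^-2, coupling (assumed)
        m = 0.005          # GeV, current quark mass (assumed)

        def gap_rhs(M):
            integrand = lambda p: p ** 2 * M / np.sqrt(p ** 2 + M ** 2)
            integral, _ = quad(integrand, 0.0, Lambda)
            return m + 2 * G * Nc * Nf / np.pi ** 2 * integral

        M = 0.3            # initial guess for the constituent mass (GeV)
        for i in range(200):
            M_new = gap_rhs(M)
            if abs(M_new - M) < 1e-10:
                break
            M = M_new
        print(M, i)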

  19. ITER CTA newsletter. No. 2

    International Nuclear Information System (INIS)

    2001-10-01

    This ITER CTA newsletter contains results of the ITER toroidal field model coil project presented by ITER EU Home Team (Garching) and an article in commemoration of the late Dr. Charles Maisonnier, one of the former leaders of ITER who made significant contributions to its development

  20. Trial-by-trial identification of categorization strategy using iterative decision-bound modeling.

    Science.gov (United States)

    Hélie, Sébastien; Turner, Benjamin O; Crossley, Matthew J; Ell, Shawn W; Ashby, F Gregory

    2017-06-01

    Identifying the strategy that participants use in laboratory experiments is crucial in interpreting the results of behavioral experiments. This article introduces a new modeling procedure called iterative decision-bound modeling (iDBM), which iteratively fits decision-bound models to the trial-by-trial responses generated from single participants in perceptual categorization experiments. The goals of iDBM are to identify: (1) all response strategies used by a participant, (2) changes in response strategy, and (3) the trial number at which each change occurs. The new method is validated by testing its ability to identify the response strategies used in noisy simulated data. The benchmark simulation results show that iDBM is able to detect and identify strategy switches during an experiment and accurately estimate the trial number at which the strategy change occurs in low to moderate noise conditions. The new method is then used to reanalyze data from Ell and Ashby (2006). Applying iDBM revealed that increasing category overlap in an information-integration category learning task increased the proportion of participants who abandoned explicit rules, and reduced the number of training trials needed to abandon rules in favor of a procedural strategy. Finally, we discuss new research questions made possible through iDBM.

  1. TRIPOLI-4® Monte Carlo code ITER A-lite neutronic model validation

    Energy Technology Data Exchange (ETDEWEB)

    Jaboulay, Jean-Charles, E-mail: jean-charles.jaboulay@cea.fr [CEA, DEN, Saclay, DM2S, SERMA, F-91191 Gif-sur-Yvette (France); Cayla, Pierre-Yves; Fausser, Clement [MILLENNIUM, 16 Av du Québec Silic 628, F-91945 Villebon sur Yvette (France); Damian, Frederic; Lee, Yi-Kang; Puma, Antonella Li; Trama, Jean-Christophe [CEA, DEN, Saclay, DM2S, SERMA, F-91191 Gif-sur-Yvette (France)

    2014-10-15

    3D Monte Carlo transport codes are extensively used in neutronic analysis, especially in radiation protection and shielding analyses for fission and fusion reactors. TRIPOLI-4® is a Monte Carlo code developed by CEA. The aim of this paper is to show its capability to model a large-scale fusion reactor with complex neutron source and geometry. A benchmark between MCNP5 and TRIPOLI-4® on the ITER A-lite model was carried out; neutron flux, nuclear heating in the blankets and tritium production rate in the European TBMs were evaluated and compared. The methodology to build the TRIPOLI-4® A-lite model is based on MCAM and the MCNP A-lite model. Simplified TBMs, from KIT, were integrated in the equatorial port. A good agreement between MCNP and TRIPOLI-4® is shown; discrepancies are mainly within the statistical error.

  2. Tracking control of nonlinear lumped mechanical continuous-time systems: A model-based iterative learning approach

    Science.gov (United States)

    Smolders, K.; Volckaert, M.; Swevers, J.

    2008-11-01

    This paper presents a nonlinear model-based iterative learning control procedure to achieve accurate tracking control for nonlinear lumped mechanical continuous-time systems. The model structure used in this iterative learning control procedure is new and combines a linear state space model and a nonlinear feature space transformation. An intuitive two-step iterative algorithm to identify the model parameters is presented. It alternates between the estimation of the linear and the nonlinear model part. It is assumed that besides the input and output signals also the full state vector of the system is available for identification. A measurement and signal processing procedure to estimate these signals for lumped mechanical systems is presented. The iterative learning control procedure relies on the calculation of the input that generates a given model output, so-called offline model inversion. A new offline nonlinear model inversion method for continuous-time, nonlinear time-invariant, state space models based on Newton's method is presented and applied to the new model structure. This model inversion method is not restricted to minimum phase models. It requires only calculation of the first order derivatives of the state space model and is applicable to multivariable models. For periodic reference signals the method yields a compact implementation in the frequency domain. Moreover it is shown that a bandwidth can be specified up to which learning is allowed when using this inversion method in the iterative learning control procedure. Experimental results for a nonlinear single-input-single-output system corresponding to a quarter car on a hydraulic test rig are presented. It is shown that the new nonlinear approach outperforms the linear iterative learning control approach which is currently used in the automotive industry on durability test rigs.
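
    The core trial-to-trial update can be sketched as follows, with a deliberately mismatched inverse model so that several learning iterations are needed; the first-order plant and model parameters are assumptions, not the quarter-car system of the paper.

        import numpy as np

        def run_plant(u, a=0.8, b=0.5):          # "true" system
            y, x = np.zeros(len(u)), 0.0
            for t in range(len(u)):
                x = a * x + b * u[t]
                y[t] = x
            return y

        def inverse_model(y, a=0.75, b=0.55):    # deliberately mismatched
            # Exact inverse of the assumed model: recover u from desired y.
            u, prev = np.empty(len(y)), 0.0
            for t in range(len(y)):
                u[t] = (y[t] - a * prev) / b
                prev = y[t]
            return u

        T = 200
        ref = np.sin(np.linspace(0, 4 * np.pi, T))
        u = np.zeros(T)
        for trial in range(10):
            e = ref - run_plant(u)
            u += inverse_model(e)   # ILC update: u_{k+1} = u_k + M^{-1} e_k
            print(trial, np.round(np.abs(e).max(), 6))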

  3. A modeling framework for user-driven iterative design of autonomous systems

    NARCIS (Netherlands)

    Lohse, M.; Siepmann, Frederic; Wachsmuth, Sven

    Many researchers in human-robot interaction have acknowledged the fact that iterative design is necessary to optimize the robots for the interaction with the users. However, few iterative user studies have been reported. We believe that one reason for this is that setting up systems for iterative

  4. Testing and modeling of diffusion bonded prototype optical windows under ITER conditions

    NARCIS (Netherlands)

    Jacobs, M.; Oost, G. van; Degrieck, J.; Baere, I. De; Gusarov, A.; Gubbels, F.; Massaut, V.

    2011-01-01

    Glass-metal joints are a part of ITER optical diagnostics windows. These joints must be leak tight for safety (presence of tritium in ITER) and to preserve the vacuum. They must also withstand the ITER environment: temperatures up to 220°C and fast neutron fluxes of ∼3·10⁹ n/cm²·s. At the

  5. An iterative and targeted sampling design informed by habitat suitability models for detecting focal plant species over extensive areas.

    Science.gov (United States)

    Wang, Ophelia; Zachmann, Luke J; Sesnie, Steven E; Olsson, Aaryn D; Dickson, Brett G

    2014-01-01

    Prioritizing areas for management of non-native invasive plants is critical, as invasive plants can negatively impact plant community structure. Extensive and multi-jurisdictional inventories are essential to prioritize actions aimed at mitigating the impact of invasions and changes in disturbance regimes. However, previous work devoted little effort to devising sampling methods sufficient to assess the scope of multi-jurisdictional invasion over extensive areas. Here we describe a large-scale sampling design that used species occurrence data, habitat suitability models, and iterative and targeted sampling efforts to sample five species and satisfy two key management objectives: 1) detecting non-native invasive plants across previously unsampled gradients, and 2) characterizing the distribution of non-native invasive plants at landscape to regional scales. Habitat suitability models of five species were based on occurrence records and predictor variables derived from topography, precipitation, and remotely sensed data. We stratified and established field sampling locations according to predicted habitat suitability and phenological, substrate, and logistical constraints. Across previously unvisited areas, we detected at least one of our focal species on 77% of plots. In turn, we used detections from 2011 to improve habitat suitability models and sampling efforts in 2012, as well as additional spatial constraints to increase detections. These modifications resulted in a 96% detection rate at plots. The range of habitat suitability values that identified highly and less suitable habitats and their environmental conditions corresponded to field detections with mixed levels of agreement. Our study demonstrated that an iterative and targeted sampling framework can address sampling bias, reduce time costs, and increase detections. Other studies can extend the sampling framework to develop methods in other ecosystems to provide detection data. The sampling methods
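
    One round of such a targeted design can be sketched as stratified sampling weighted toward cells with higher predicted suitability; the suitability surface, strata edges and plot allocations below are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(5)
        suitability = rng.beta(2, 5, size=10_000)   # stand-in habitat suitability map

        # Stratify cells by predicted suitability and oversample the upper strata.
        strata = np.digitize(suitability, [0.2, 0.4, 0.6])   # 4 strata
        plots_per_stratum = [5, 10, 20, 40]                  # illustrative allocation
        sample = np.concatenate([
            rng.choice(np.flatnonzero(strata == s), size=n_s, replace=False)
            for s, n_s in enumerate(plots_per_stratum)
        ])
        print("sampled plot indices:", sample[:10])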

  6. Brief communication: human cranial variation fits iterative founder effect model with African origin.

    Science.gov (United States)

    von Cramon-Taubadel, Noreen; Lycett, Stephen J

    2008-05-01

    Recent studies comparing craniometric and neutral genetic affinity matrices have concluded that, on average, human cranial variation fits a model of neutral expectation. While human craniometric and genetic data fit a model of isolation by geographic distance, it is not yet clear whether this is due to geographically mediated gene flow or human dispersal events. Recently, human genetic data have been shown to fit an iterative founder effect model of dispersal with an African origin, in line with the out-of-Africa replacement model for modern human origins, and Manica et al. (Nature 448 (2007) 346-349) have demonstrated that human craniometric data also fit this model. However, in contrast with the neutral model of cranial evolution suggested by previous studies, Manica et al. (2007) made the a priori assumption that cranial form has been subject to climatically driven natural selection and therefore correct for climate prior to conducting their analyses. Here we employ a modified theoretical and methodological approach to test whether human cranial variability fits the iterative founder effect model. In contrast with Manica et al. (2007) we employ size-adjusted craniometric variables, since climatic factors such as temperature have been shown to correlate with aspects of cranial size. Despite these differences, we obtain similar results to those of Manica et al. (2007), with up to 26% of global within-population craniometric variation being explained by geographic distance from sub-Saharan Africa. Comparative analyses using non-African origins do not yield significant results. The implications of these results are discussed in the light of the modern human origins debate. (c) 2007 Wiley-Liss, Inc.

  7. Model-based iterative learning control of Parkinsonian state in thalamic relay neuron

    Science.gov (United States)

    Liu, Chen; Wang, Jiang; Li, Huiyan; Xue, Zhiqin; Deng, Bin; Wei, Xile

    2014-09-01

    Although the beneficial effects of chronic deep brain stimulation on Parkinson's disease motor symptoms are now largely confirmed, the underlying mechanisms behind deep brain stimulation remain unclear and under debate. Hence, the selection of stimulation parameters is full of challenges. Additionally, due to the complexity of the neural system, together with omnipresent noise, an accurate model of the thalamic relay neuron is unavailable. Thus, iterative learning control of the thalamic relay neuron's Parkinsonian state based on various variables is presented. Combining iterative learning control with a typical proportional-integral control algorithm, a novel and efficient control strategy is proposed, which does not require any particular knowledge of the detailed physiological characteristics of the cortico-basal ganglia-thalamocortical loop and can automatically adjust the stimulation parameters. Simulation results demonstrate the feasibility of the proposed control strategy for restoring the fidelity of thalamic relay in the Parkinsonian condition. Furthermore, by varying an important parameter, the maximum ionic conductance density of the low-threshold calcium current, it is further verified that the dominant characteristic of the proposed method is its independence of an accurate model.
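
    The combined strategy can be summarized by an update law of the form u_{k+1}(t) = u_k(t) + Kp·e_k(t) + Ki·∫e_k dt, applied across repeated stimulation trials. A minimal sketch, in which a placeholder first-order plant stands in for the thalamic relay neuron model and all gains are illustrative:

        import numpy as np

        dt = 0.01
        t = np.arange(0.0, 2.0, dt)
        y_ref = np.where(t > 0.5, 1.0, 0.0)   # illustrative target response profile

        def plant(u):
            """Placeholder first-order dynamics standing in for the neuron model."""
            y = np.zeros_like(u)
            for k in range(1, len(u)):
                y[k] = y[k - 1] + dt * (-2.0 * y[k - 1] + 10.0 * u[k - 1])
            return y

        Kp, Ki = 5.0, 1.0                     # illustrative PI-type learning gains
        u = np.zeros_like(t)
        for trial in range(30):               # learning across repeated trials
            e = y_ref - plant(u)
            # The input acts with a one-sample delay, so learn from the
            # shifted error, with a PI structure on the update.
            u[:-1] += Kp * e[1:] + Ki * (np.cumsum(e) * dt)[1:]
        print("final RMS tracking error:", np.sqrt(np.mean((y_ref - plant(u)) ** 2)))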

  8. Bootstrap prediction and Bayesian prediction under misspecified models

    OpenAIRE

    Fushiki, Tadayoshi

    2005-01-01

    We consider a statistical prediction problem under misspecified models. In a sense, Bayesian prediction is an optimal prediction method when an assumed model is true. Bootstrap prediction is obtained by applying Breiman's `bagging' method to a plug-in prediction. Bootstrap prediction can be considered to be an approximation to the Bayesian prediction under the assumption that the model is true. However, in applications, there are frequently deviations from the assumed model. In this paper, bo...

  9. Construction of robust dynamic genome-scale metabolic model structures of Saccharomyces cerevisiae through iterative re-parameterization.

    Science.gov (United States)

    Sánchez, Benjamín J; Pérez-Correa, José R; Agosin, Eduardo

    2014-09-01

    Dynamic flux balance analysis (dFBA) has been widely employed in metabolic engineering to predict the effect of genetic modifications and environmental conditions in the cell's metabolism during dynamic cultures. However, the importance of the model parameters used in these methodologies has not been properly addressed. Here, we present a novel and simple procedure to identify dFBA parameters that are relevant for model calibration. The procedure uses metaheuristic optimization and pre/post-regression diagnostics, fixing iteratively the model parameters that do not have a significant role. We evaluated this protocol in a Saccharomyces cerevisiae dFBA framework calibrated for aerobic fed-batch and anaerobic batch cultivations. The model structures achieved have only significant, sensitive and uncorrelated parameters and are able to calibrate different experimental data. We show that consumption, suboptimal growth and production rates are more useful for calibrating dynamic S. cerevisiae metabolic models than Boolean gene expression rules, biomass requirements and ATP maintenance. Copyright © 2014 International Metabolic Engineering Society. Published by Elsevier Inc. All rights reserved.
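
    The fix-insignificant-parameters loop can be reduced to a schematic: fit, compute linearized t-statistics from the Jacobian, fix any parameter whose estimate is not significant, and refit. The three-parameter toy model, the data and the threshold below are placeholders for the dFBA framework and its pre/post-regression diagnostics.

        import numpy as np
        from scipy.optimize import least_squares

        rng = np.random.default_rng(4)
        x = np.linspace(0, 5, 60)
        y = 2.0 * np.exp(-0.7 * x) + rng.normal(0, 0.02, x.size)  # no linear term

        def residual(p, free, p_full):
            q = p_full.copy()
            q[free] = p
            return q[0] * np.exp(-q[1] * x) + q[2] * x - y        # 3-parameter model

        free, p_full = [0, 1, 2], np.array([1.0, 1.0, 0.0])
        while True:
            fit = least_squares(residual, p_full[free], args=(free, p_full))
            p_full[free] = fit.x
            # Linearized standard errors from the Jacobian at the optimum.
            J = fit.jac
            cov = np.linalg.inv(J.T @ J) * np.var(fit.fun)
            tstat = np.abs(fit.x) / np.sqrt(np.diag(cov))
            drop = [f for f, tv in zip(free, tstat) if tv < 2.0]  # insignificant
            if not drop:
                break
            free = [f for f in free if f not in drop]             # fix them, refit
        print("significant parameters:", free, "estimates:", p_full[free])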

  10. Prediction models in complex terrain

    DEFF Research Database (Denmark)

    Marti, I.; Nielsen, Torben Skov; Madsen, Henrik

    2001-01-01

    The objective of the work is to investigate the performance of HIRLAM in complex terrain when used as input to energy production forecasting models, and to develop a statistical model to adapt HIRLAM predictions to the wind farm. The features of the terrain, especially the topography, influence...... the performance of HIRLAM in particular with respect to wind predictions. To estimate the performance of the model, two spatial resolutions (0.5 Deg. and 0.2 Deg.) and different sets of HIRLAM variables were used to predict wind speed and energy production. The predictions of energy production for the wind farms...... are calculated using on-line measurements of power production as well as HIRLAM predictions as input, thus taking advantage of the auto-correlation which is present in the power production for shorter prediction horizons. Statistical models are used to describe the relationship between observed energy production...

  11. Coronary stent on coronary CT angiography: Assessment with model-based iterative reconstruction technique

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Eun Chae; Kim, Yeo Koon; Chun, Eun Ju; Choi, Sang IL [Dept. of Radiology, Seoul National University Bundang Hospital, Seongnam (Korea, Republic of)

    2016-05-15

    To assess the performance of the model-based iterative reconstruction (MBIR) technique for evaluation of coronary artery stents on coronary CT angiography (CCTA). Twenty-two patients with coronary stent implantation who underwent CCTA were retrospectively enrolled for comparison of image quality between filtered back projection (FBP), adaptive statistical iterative reconstruction (ASIR) and MBIR. In each data set, image noise was measured as the standard deviation of the measured attenuation units within circular regions of interest in the ascending aorta (AA) and left main coronary artery (LM). To objectively assess the noise and blooming artifacts in coronary stents, we additionally measured the standard deviation of the measured attenuation and the intra-luminal stent diameters of a total of 35 stents with dedicated software. Image noise values measured in the AA (all p < 0.001), LM (p < 0.001, p = 0.001) and coronary stent (all p < 0.001) were significantly lower with MBIR in comparison to those with FBP or ASIR. Intraluminal stent diameter was significantly higher with MBIR, as compared with ASIR or FBP (p < 0.001, p = 0.001). MBIR can reduce image noise and blooming artifact from the stent, leading to better in-stent assessment in patients with coronary artery stents.

  12. Hydraulics of the ITER toroidal field model coil cable-in-conduit conductors

    International Nuclear Information System (INIS)

    Nicollet, S.; Cloez, H.; Duchateau, J.L.; Serries, J.P.

    1998-01-01

    The OTHELLO test facility (Operating Test facility for HELium LOop) built at CEA-Cadarache is described, and pressure drop measurements, all performed with nitrogen under pressure at room temperature, are presented. Tests have been carried out on a 5 m straight section of the superconducting cable of the Toroidal Field Model Coil. For the bundle region, a fit in the form of the general formula proposed by Katheder agrees well with the measurements. For the central hole, the friction factor measurements suggest a plateau at practical Reynolds numbers near 10⁶, which can be modelled with the empirical Colebrook formula with an equivalent relative rugosity. This behaviour is quite different from what has been used up to now in the design criteria of ITER. (author)
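
    The Colebrook formula invoked above has no closed-form solution and is itself evaluated iteratively. A fixed-point sketch, with a Reynolds number near the reported 10⁶ plateau and an assumed (illustrative) equivalent relative rugosity:

        import math

        def colebrook(re, rel_rough, tol=1e-10):
            """Fixed-point iteration for the Darcy friction factor f in
            1/sqrt(f) = -2 log10(rr/3.7 + 2.51/(Re sqrt(f)))."""
            x = 8.0  # initial guess for 1/sqrt(f), a typical turbulent value
            while True:
                x_new = -2.0 * math.log10(rel_rough / 3.7 + 2.51 * x / re)
                if abs(x_new - x) < tol:
                    return 1.0 / x_new**2
                x = x_new

        # Illustrative values: Re near the reported 1e6 plateau, with an
        # assumed equivalent relative rugosity for the central hole.
        print("friction factor:", colebrook(1.0e6, 0.05))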

  13. Modified Step Variational Iteration Method for Solving Fractional Biochemical Reaction Model

    Directory of Open Access Journals (Sweden)

    R. Yulita Molliq

    2011-01-01

    A new method called the modification of step variational iteration method (MoSVIM) is introduced and used to solve the fractional biochemical reaction model. The MoSVIM uses general Lagrange multipliers for the construction of the correction functional for the problems, and it runs by a step approach, which is to divide the interval into subintervals with a time step, with the solutions obtained at each subinterval, as well as adopting a nonzero auxiliary parameter ℏ to control the convergence region of the series solutions. The MoSVIM yields an analytical solution in the form of a rapidly convergent infinite power series with easily computable terms and produces a good approximate solution on enlarged intervals for solving the fractional biochemical reaction model. The accuracy of the results obtained is in excellent agreement with the Adams-Bashforth-Moulton method (ABMM).
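
    The correction functional at the heart of any variational iteration method is easy to demonstrate symbolically. For the toy problem u' + u = 0, u(0) = 1 (standing in for the fractional biochemical model), the general Lagrange multiplier is λ = −1 and each iteration reads u_{n+1}(t) = u_n(t) − ∫₀ᵗ (u_n'(s) + u_n(s)) ds:

        import sympy as sp

        t, s = sp.symbols("t s")
        u = sp.Integer(1)                 # u_0(t) = u(0) = 1
        for _ in range(5):
            us = u.subs(t, s)
            # Correction functional with general Lagrange multiplier lambda = -1:
            u = sp.expand(u - sp.integrate(sp.diff(us, s) + us, (s, 0, t)))
        print(u)   # truncated Maclaurin series of exp(-t)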

  14. MODEL PREDICTIVE CONTROL FUNDAMENTALS

    African Journals Online (AJOL)

    2012-07-02

    Linear MPC: (1) uses a linear model, ẋ = Ax + Bu; (2) a quadratic cost function, F = x^T Q x + u^T R u; (3) linear constraints, Hx + Gu < 0; (4) solved as a quadratic program. Nonlinear MPC: (1) a nonlinear model, ẋ = f(x, u); (2) a cost function that can be nonquadratic, F = F(x, u); (3) nonlinear constraints, h(x, u) < 0; (4) solved as a nonlinear program.
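
    A minimal sketch of the linear case posed as a quadratic program, using the cvxpy modelling package; the double-integrator model, horizon, weights and input bound are illustrative choices, not taken from the course notes.

        import numpy as np
        import cvxpy as cp

        # Double integrator x+ = A x + B u, regulated to the origin.
        A = np.array([[1.0, 0.1], [0.0, 1.0]])
        B = np.array([[0.005], [0.1]])
        Q, R, N = np.eye(2), 0.1 * np.eye(1), 20

        x = cp.Variable((2, N + 1))
        u = cp.Variable((1, N))
        x0 = np.array([1.0, 0.0])

        cost, constr = 0, [x[:, 0] == x0]
        for k in range(N):
            cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)
            constr += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                       cp.abs(u[:, k]) <= 1.0]   # linear input constraint
        cp.Problem(cp.Minimize(cost), constr).solve()
        # Receding horizon: apply the first move, then re-solve at the next step.
        print("first control move:", u.value[:, 0])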

  15. Fast parallel algorithm for three-dimensional distance-driven model in iterative computed tomography reconstruction

    International Nuclear Information System (INIS)

    Chen Jian-Lin; Li Lei; Wang Lin-Yuan; Cai Ai-Long; Xi Xiao-Qi; Zhang Han-Ming; Li Jian-Xin; Yan Bin

    2015-01-01

    The projection matrix model is used to describe the physical relationship between reconstructed object and projection. Such a model has a strong influence on projection and backprojection, two vital operations in iterative computed tomographic reconstruction. The distance-driven model (DDM) is a state-of-the-art technology that simulates forward and back projections. This model has a low computational complexity and a relatively high spatial resolution; however, it includes only a few methods in a parallel operation with a matched model scheme. This study introduces a fast and parallelizable algorithm to improve the traditional DDM for computing the parallel projection and backprojection operations. Our proposed model has been implemented on a GPU (graphic processing unit) platform and has achieved satisfactory computational efficiency with no approximation. The runtime for the projection and backprojection operations with our model is approximately 4.5 s and 10.5 s per loop, respectively, with an image size of 256×256×256 and 360 projections with a size of 512×512. We compare several general algorithms that have been proposed for maximizing GPU efficiency by using the unmatched projection/backprojection models in a parallel computation. The imaging resolution is not sacrificed and remains accurate during computed tomographic reconstruction. (paper)

  16. Modelling bankruptcy prediction models in Slovak companies

    Directory of Open Access Journals (Sweden)

    Kovacova Maria

    2017-01-01

    Intensive research from academics and practitioners has been devoted to models for bankruptcy prediction and credit risk management. In spite of numerous studies focusing on forecasting bankruptcy using traditional statistical techniques (e.g. discriminant analysis and logistic regression) and early artificial intelligence models (e.g. artificial neural networks), there is a trend of transition to machine learning models (support vector machines, bagging, boosting, and random forests) to predict bankruptcy one year prior to the event. Comparing the performance of this unconventional approach with results obtained by discriminant analysis, logistic regression, and neural networks, it has been found that bagging, boosting, and random forest models outperform the other techniques, and that prediction accuracy in the testing sample improves when additional variables are included. On the other hand, the prediction accuracy of old and well-known bankruptcy prediction models is quite high. Therefore, we aim to analyse these older models on a dataset of Slovak companies to validate their prediction ability in specific conditions. Furthermore, these models will be modified according to new trends by calculating the influence of the elimination of selected variables on their overall prediction ability.

  17. Submillisievert coronary calcium quantification using model-based iterative reconstruction: A within-patient analysis

    Energy Technology Data Exchange (ETDEWEB)

    Harder, Annemarie M. den, E-mail: a.m.denharder@umcutrecht.nl [Department of Radiology, University Medical Center Utrecht, Utrecht (Netherlands); Wolterink, Jelmer M. [Image Sciences Institute, University Medical Center Utrecht, Utrecht (Netherlands); Willemink, Martin J.; Schilham, Arnold M.R.; Jong, Pim A. de [Department of Radiology, University Medical Center Utrecht, Utrecht (Netherlands); Budde, Ricardo P.J. [Department of Radiology, Erasmus Medical Center, Rotterdam (Netherlands); Nathoe, Hendrik M. [Department of Cardiology, University Medical Center Utrecht, Utrecht (Netherlands); Išgum, Ivana [Image Sciences Institute, University Medical Center Utrecht, Utrecht (Netherlands); Leiner, Tim [Department of Radiology, University Medical Center Utrecht, Utrecht (Netherlands)

    2016-11-15

    Highlights: • Iterative reconstruction (IR) allows for low dose coronary calcium scoring (CCS). • Radiation dose can be safely reduced to 0.4 mSv with hybrid and model-based IR. • FBP is not feasible at these dose levels due to excessive noise. - Abstract: Purpose: To determine the effect of model-based iterative reconstruction (IR) on coronary calcium quantification using different submillisievert CT acquisition protocols. Methods: Twenty-eight patients received a clinically indicated non contrast-enhanced cardiac CT. After the routine dose acquisition, low-dose acquisitions were performed with 60%, 40% and 20% of the routine dose mAs. Images were reconstructed with filtered back projection (FBP), hybrid IR (HIR) and model-based IR (MIR) and Agatston scores, calcium volumes and calcium mass scores were determined. Results: Effective dose was 0.9, 0.5, 0.4 and 0.2 mSv, respectively. At 0.5 and 0.4 mSv, differences in Agatston scores with both HIR and MIR compared to FBP at routine dose were small (−0.1 to −2.9%), while at 0.2 mSv, differences in Agatston scores of −12.6 to −14.6% occurred. Reclassification of risk category at reduced dose levels was more frequent with MIR (21–25%) than with HIR (18%). Conclusions: Radiation dose for coronary calcium scoring can be safely reduced to 0.4 mSv using both HIR and MIR, while FBP is not feasible at these dose levels due to excessive noise. Further dose reduction can lead to an underestimation in Agatston score and subsequent reclassification to lower risk categories. Mass scores were unaffected by dose reductions.

  18. Melanoma Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing melanoma over a defined period of time will help clinicians identify individuals at higher risk, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  19. Predictive models of moth development

    Science.gov (United States)

    Degree-day models link ambient temperature to insect life-stages, making such models valuable tools in integrated pest management. These models increase management efficacy by predicting pest phenology. In Wisconsin, the top insect pest of cranberry production is the cranberry fruitworm, Acrobasis v...
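
    A degree-day model reduces to a running sum. A sketch using the simple averaging method, where the base temperature, the development threshold, and the daily temperature series are all placeholders rather than published cranberry fruitworm parameters:

        # Simple-average degree-day model: DD = max(0, (Tmax + Tmin)/2 - Tbase),
        # accumulated daily until a life-stage threshold is crossed.
        T_BASE = 10.0        # assumed developmental base temperature (deg C)
        THRESHOLD = 150.0    # assumed degree-days required for emergence

        # Illustrative (Tmax, Tmin) series repeated over the season.
        daily = [(14.0, 4.0), (18.0, 8.0), (22.0, 12.0), (25.0, 13.0)] * 10
        acc = 0.0
        for day, (tmax, tmin) in enumerate(daily, start=1):
            acc += max(0.0, (tmax + tmin) / 2.0 - T_BASE)
            if acc >= THRESHOLD:
                print("predicted emergence on day", day)
                break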

  20. Interpretation of ensembles created by multiple iterative rebuilding of macromolecular models

    International Nuclear Information System (INIS)

    Terwilliger, Thomas C.; Grosse-Kunstleve, Ralf W.; Afonine, Pavel V.; Adams, Paul D.; Moriarty, Nigel W.; Zwart, Peter; Read, Randy J.; Turk, Dusan; Hung, Li-Wei

    2007-01-01

    Heterogeneity in ensembles generated by independent model rebuilding principally reflects the limitations of the data and of the model-building process rather than the diversity of structures in the crystal. Automation of iterative model building, density modification and refinement in macromolecular crystallography has made it feasible to carry out this entire process multiple times. By using different random seeds in the process, a number of different models compatible with experimental data can be created. Sets of models were generated in this way using real data for ten protein structures from the Protein Data Bank and using synthetic data generated at various resolutions. Most of the heterogeneity among models produced in this way is in the side chains and loops on the protein surface. Possible interpretations of the variation among models created by repetitive rebuilding were investigated. Synthetic data were created in which a crystal structure was modelled as the average of a set of ‘perfect’ structures and the range of models obtained by rebuilding a single starting model was examined. The standard deviations of coordinates in models obtained by repetitive rebuilding at high resolution are small, while those obtained for the same synthetic crystal structure at low resolution are large, so that the diversity within a group of models cannot generally be a quantitative reflection of the actual structures in a crystal. Instead, the group of structures obtained by repetitive rebuilding reflects the precision of the models, and the standard deviation of coordinates of these structures is a lower bound estimate of the uncertainty in coordinates of the individual models

  1. Iterative model-building, structure refinement, and density modification with the PHENIX AutoBuild Wizard

    Energy Technology Data Exchange (ETDEWEB)

    Los Alamos National Laboratory, Mailstop M888, Los Alamos, NM 87545, USA; Lawrence Berkeley National Laboratory, One Cyclotron Road, Building 64R0121, Berkeley, CA 94720, USA; Department of Haematology, University of Cambridge, Cambridge CB2 0XY, England; Terwilliger, Thomas; Terwilliger, T.C.; Grosse-Kunstleve, Ralf Wilhelm; Afonine, P.V.; Moriarty, N.W.; Zwart, P.H.; Hung, L.-W.; Read, R.J.; Adams, P.D.

    2007-04-29

    The PHENIX AutoBuild Wizard is a highly automated tool for iterative model-building, structure refinement and density modification using RESOLVE or TEXTAL model-building, RESOLVE statistical density modification, and phenix.refine structure refinement. Recent advances in the AutoBuild Wizard and phenix.refine include automated detection and application of NCS from models as they are built, extensive model completion algorithms, and automated solvent molecule picking. Model completion algorithms in the AutoBuild Wizard include loop-building, crossovers between chains in different models of a structure, and side-chain optimization. The AutoBuild Wizard has been applied to a set of 48 structures at resolutions ranging from 1.1 Å to 3.2 Å, resulting in a mean R factor of 0.24 and a mean free R factor of 0.29. The R factor of the final model is dependent on the quality of the starting electron density, and relatively independent of resolution.

  2. Iterative model building, structure refinement and density modification with the PHENIX AutoBuild wizard

    International Nuclear Information System (INIS)

    Terwilliger, Thomas C.; Grosse-Kunstleve, Ralf W.; Afonine, Pavel V.; Moriarty, Nigel W.; Zwart, Peter H.; Hung, Li-Wei; Read, Randy J.; Adams, Paul D.

    2008-01-01

    The highly automated PHENIX AutoBuild wizard is described. The procedure can be applied equally well to phases derived from isomorphous/anomalous and molecular-replacement methods. The PHENIX AutoBuild wizard is a highly automated tool for iterative model building, structure refinement and density modification using RESOLVE model building, RESOLVE statistical density modification and phenix.refine structure refinement. Recent advances in the AutoBuild wizard and phenix.refine include automated detection and application of NCS from models as they are built, extensive model-completion algorithms and automated solvent-molecule picking. Model-completion algorithms in the AutoBuild wizard include loop building, crossovers between chains in different models of a structure and side-chain optimization. The AutoBuild wizard has been applied to a set of 48 structures at resolutions ranging from 1.1 to 3.2 Å, resulting in a mean R factor of 0.24 and a mean free R factor of 0.29. The R factor of the final model is dependent on the quality of the starting electron density and is relatively independent of resolution

  3. Predictive Models and Computational Embryology

    Science.gov (United States)

    EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...

  4. Conductor fabrication for ITER Model Coils. Status of the EU cabling and jacketing activities

    International Nuclear Information System (INIS)

    Corte, A. della; Ricci, M.V.; Spadoni, M.; Bessette, D.; Duchateau, J.L.; Salpietro, E.; Garre, R.; Rossi, S.; Penco, R.; Laurenti, A.

    1994-01-01

    The conductors for the ITER magnets are being defined according to the operating requirements of the machine. To demonstrate the technological feasibility of the main features of the magnets, two model coils (central solenoid and toroidal field), with bores in the range 2-3 m, will be manufactured. This is the first significant industrial production of full-size conductor (a total of about 6.5 km for these coils). One cabling and one jacketing line have been assembled in Europe. The former can cable up to 1100 m (6 tons) unit lengths; the latter, which can also handle 1000 m conductor lengths, has been assembled in a shorter version (320 m). A description of the lines is reported, together with the results of the trials performed up to now. (author) 2 figs

  5. Barriers and strategies to an iterative model of advance care planning communication.

    Science.gov (United States)

    Ahluwalia, Sangeeta C; Bekelman, David B; Huynh, Alexis K; Prendergast, Thomas J; Shreve, Scott; Lorenz, Karl A

    2015-12-01

    Early and repeated patient-provider conversations about advance care planning (ACP) are now widely recommended. We sought to characterize barriers and strategies for realizing an iterative model of ACP patient-provider communication. A total of 2 multidisciplinary focus groups and 3 semistructured interviews with 20 providers at a large Veterans Affairs medical center. Thematic analysis was employed to identify salient themes. Barriers included variation among providers in approaches to ACP, lack of useful information about patient values to guide decision making, and ineffective communication between providers across settings. Strategies included eliciting patient values rather than specific treatment choices and an increased role for primary care in the ACP process. Greater attention to connecting providers across the continuum, maximizing the potential of the electronic health record, and linking patient experiences to their values may help to connect ACP communication across the continuum. © The Author(s) 2014.

  6. Accuracy improvement of a hybrid robot for ITER application using POE modeling method

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Yongbo, E-mail: yongbo.wang@hotmail.com [Laboratory of Intelligent Machines, Lappeenranta University of Technology, FIN-53851 Lappeenranta (Finland); Wu, Huapeng; Handroos, Heikki [Laboratory of Intelligent Machines, Lappeenranta University of Technology, FIN-53851 Lappeenranta (Finland)

    2013-10-15

    Highlights: ► The product of exponential (POE) formula for error modeling of hybrid robot. ► Differential Evolution (DE) algorithm for parameter identification. ► Simulation results are given to verify the effectiveness of the method. -- Abstract: This paper focuses on the kinematic calibration of a 10 degree-of-freedom (DOF) redundant serial–parallel hybrid robot to improve its accuracy. The robot was designed to perform the assembling and repairing tasks of the vacuum vessel (VV) of the international thermonuclear experimental reactor (ITER). By employing the product of exponentials (POE) formula, we extended the POE-based calibration method from serial robots to redundant serial–parallel hybrid robots. The proposed method combines the forward and inverse kinematics together to formulate a hybrid calibration method for the serial–parallel hybrid robot. Because the error model is highly nonlinear and too many error parameters need to be identified, traditional iterative linear least-squares algorithms cannot be used to identify the parameter errors. This paper employs a global optimization algorithm, Differential Evolution (DE), to identify the parameter errors by solving the inverse kinematics of the hybrid robot. Furthermore, after the parameter errors were identified, the DE algorithm was adopted to numerically solve the forward kinematics of the hybrid robot to demonstrate the accuracy improvement of the end-effector. Numerical simulations were carried out by generating random parameter errors at the allowed tolerance limit and generating a number of configuration poses in the robot workspace. Simulation of the real experimental conditions shows that the accuracy of the end-effector can be improved to the same precision level as that of the given external measurement device.
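
    The identification step can be condensed to: simulate poses with the nominal model, compare with externally measured poses, and let Differential Evolution search the error parameters. The sketch below uses SciPy's differential_evolution on a toy 2-link planar arm; the real work identifies POE error parameters of a 10-DOF hybrid robot, so every model detail here is illustrative.

        import numpy as np
        from scipy.optimize import differential_evolution

        rng = np.random.default_rng(1)

        def fk(q, link_lengths):
            """Forward kinematics of a toy 2-link planar arm."""
            l1, l2 = link_lengths
            x = l1 * np.cos(q[:, 0]) + l2 * np.cos(q[:, 0] + q[:, 1])
            y = l1 * np.sin(q[:, 0]) + l2 * np.sin(q[:, 0] + q[:, 1])
            return np.column_stack([x, y])

        true_lengths = np.array([1.02, 0.57])         # "unknown" real geometry
        nominal = np.array([1.00, 0.55])              # nominal CAD values
        q = rng.uniform(-np.pi, np.pi, size=(40, 2))  # joint configurations
        meas = fk(q, true_lengths)                    # external measurement device poses

        def cost(p):
            return np.sum((fk(q, p) - meas) ** 2)     # pose error to minimize

        res = differential_evolution(cost, bounds=[(0.9, 1.1), (0.45, 0.65)], seed=1)
        print("identified geometric errors:", res.x - nominal)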

  7. Predictions models with neural nets

    Directory of Open Access Journals (Sweden)

    Vladimír Konečný

    2008-01-01

    This contribution addresses the prediction of basic trends in economic indicators using neural networks. The problems include the choice of a suitable model and, consequently, the configuration of the neural network, the choice of the computational function of the neurons, and the way prediction learning is carried out. The contribution contains two basic models that use the structure of multilayer neural nets and a way of determining their configuration. A simple rule is postulated for the training period of the neural net in order to obtain the most credible prediction. Experiments are executed with real data on the evolution of the Kč/Euro exchange rate. The main reason for choosing this time series is its availability over a sufficiently long period. In the experiments, both given basic kinds of prediction models are verified with the most frequently used neuron functions. The achieved prediction results are presented in both numerical and graphical form.
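
    The windowed prediction setup the abstract describes can be sketched with scikit-learn; the synthetic stand-in for the Kč/Euro series, the window length and the network size are all invented.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(2)
        # Synthetic stand-in for the exchange-rate series: slow wave plus noise.
        rate = 25.0 + 0.5 * np.sin(np.linspace(0, 12, 400)) + rng.normal(0, 0.05, 400)

        window = 10    # predict the next value from the last 10 observations
        X = np.array([rate[i:i + window] for i in range(len(rate) - window)])
        y = rate[window:]

        model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
        model.fit(X[:-50], y[:-50])          # hold out the last 50 points
        pred = model.predict(X[-50:])
        print("test MAE:", np.abs(pred - y[-50:]).mean())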

  8. Pediatric 320-row cardiac computed tomography using electrocardiogram-gated model-based full iterative reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Shirota, Go; Maeda, Eriko; Namiki, Yoko; Bari, Razibul; Abe, Osamu [The University of Tokyo, Department of Radiology, Graduate School of Medicine, Tokyo (Japan); Ino, Kenji [The University of Tokyo Hospital, Imaging Center, Tokyo (Japan); Torigoe, Rumiko [Toshiba Medical Systems, Tokyo (Japan)

    2017-10-15

    A full iterative reconstruction algorithm is available, but its diagnostic quality in pediatric cardiac CT is unknown. To compare the imaging quality of two algorithms, full and hybrid iterative reconstruction, in pediatric cardiac CT. We included 49 children with congenital cardiac anomalies who underwent cardiac CT. We compared the quality of images reconstructed using the two algorithms (full and hybrid iterative reconstruction) based on a 3-point scale for the delineation of the following anatomical structures: atrial septum, ventricular septum, right atrium, right ventricle, left atrium, left ventricle, main pulmonary artery, ascending aorta, aortic arch including the patent ductus arteriosus, descending aorta, right coronary artery and left main trunk. We evaluated beam-hardening artifacts from contrast-enhancement material using a 3-point scale, and we evaluated the overall image quality using a 5-point scale. We also compared image noise, signal-to-noise ratio and contrast-to-noise ratio between the algorithms. The overall image quality was significantly higher with full iterative reconstruction than with hybrid iterative reconstruction (3.67±0.79 vs. 3.31±0.89, P=0.0072). The evaluation scores for most of the gross structures were higher with full iterative reconstruction than with hybrid iterative reconstruction. There was no significant difference between full and hybrid iterative reconstruction for the presence of beam-hardening artifacts. Image noise was significantly lower in full iterative reconstruction, while signal-to-noise ratio and contrast-to-noise ratio were significantly higher in full iterative reconstruction. Diagnostic quality was superior in cardiac CT images reconstructed with electrocardiogram-gated full iterative reconstruction. (orig.)

  9. Iterating skeletons

    DEFF Research Database (Denmark)

    Dieterle, Mischa; Horstmeyer, Thomas; Berthold, Jost

    2012-01-01

    Skeleton-based programming is an area of increasing relevance with upcoming highly parallel hardware, since it substantially facilitates parallel programming and separates concerns. When parallel algorithms expressed by skeletons involve iterations – applying the same algorithm repeatedly – adapting a particular skeleton ad-hoc for repeated execution turns out to be considerably complicated, and raises general questions about introducing state into a stateless parallel computation. In addition, one would strongly prefer an approach which leaves the original skeleton intact, and only uses it as a building block inside a bigger structure. In this work, we present a general framework for skeleton iteration and discuss requirements and variations of iteration control and iteration body. Skeleton iteration is expressed by synchronising a parallel iteration body skeleton with a (likewise parallel) state...

  10. Iterative model reconstruction reduces calcified plaque volume in coronary CT angiography

    Energy Technology Data Exchange (ETDEWEB)

    Károlyi, Mihály, E-mail: mihaly.karolyi@cirg.hu [MTA-SE Cardiovascular Imaging Research Group, Heart and Vascular Center, Semmelweis University, 68. Varosmajor st, 1122, Budapest (Hungary); Szilveszter, Bálint, E-mail: szilveszter.balint@gmail.com [MTA-SE Cardiovascular Imaging Research Group, Heart and Vascular Center, Semmelweis University, 68. Varosmajor st, 1122, Budapest (Hungary); Kolossváry, Márton, E-mail: martonandko@gmail.com [MTA-SE Cardiovascular Imaging Research Group, Heart and Vascular Center, Semmelweis University, 68. Varosmajor st, 1122, Budapest (Hungary); Takx, Richard A.P, E-mail: richard.takx@gmail.com [Department of Radiology, University Medical Center Utrecht, 100 Heidelberglaan, 3584, CX Utrecht (Netherlands); Celeng, Csilla, E-mail: celengcsilla@gmail.com [MTA-SE Cardiovascular Imaging Research Group, Heart and Vascular Center, Semmelweis University, 68. Varosmajor st, 1122, Budapest (Hungary); Bartykowszki, Andrea, E-mail: bartyandi@gmail.com [MTA-SE Cardiovascular Imaging Research Group, Heart and Vascular Center, Semmelweis University, 68. Varosmajor st, 1122, Budapest (Hungary); Jermendy, Ádám L., E-mail: adam.jermendy@gmail.com [MTA-SE Cardiovascular Imaging Research Group, Heart and Vascular Center, Semmelweis University, 68. Varosmajor st, 1122, Budapest (Hungary); Panajotu, Alexisz, E-mail: panajotualexisz@gmail.com [MTA-SE Cardiovascular Imaging Research Group, Heart and Vascular Center, Semmelweis University, 68. Varosmajor st, 1122, Budapest (Hungary); Karády, Júlia, E-mail: karadyjulia@gmail.com [MTA-SE Cardiovascular Imaging Research Group, Heart and Vascular Center, Semmelweis University, 68. Varosmajor st, 1122, Budapest (Hungary); and others

    2017-02-15

    Objective: To assess the impact of iterative model reconstruction (IMR) on calcified plaque quantification as compared to filtered back projection reconstruction (FBP) and hybrid iterative reconstruction (HIR) in coronary computed tomography angiography (CTA). Methods: Raw image data of 52 patients who underwent 256-slice CTA were reconstructed with IMR, HIR and FBP. We evaluated qualitative, quantitative image quality parameters and quantified calcified and partially calcified plaque volumes using automated software. Results: Overall qualitative image quality significantly improved with HIR as compared to FBP, and further improved with IMR (p < 0.01 all). Contrast-to-noise ratios were improved with IMR, compared to HIR and FBP (51.0 [43.5–59.9], 20.3 [16.2–25.9] and 14.0 [11.2–17.7], respectively, all p < 0.01) Overall plaque volumes were lowest with IMR and highest with FBP (121.7 [79.3–168.4], 138.7 [90.6–191.7], 147.0 [100.7–183.6]). Similarly, calcified volumes (>130 HU) were decreased with IMR as compared to HIR and FBP (105.9 [62.1–144.6], 110.2 [63.8–166.6], 115.9 [81.7–164.2], respectively, p < 0.05 all). High-attenuation non-calcified volumes (90–129 HU) yielded similar values with FBP and HIR (p = 0.81), however it was lower with IMR (p < 0.05 both). Intermediate- (30–89 HU) and low-attenuation (<30 HU) non-calcified volumes showed no significant difference (p = 0.22 and p = 0.67, respectively). Conclusions: IMR improves image quality of coronary CTA and decreases calcified plaque volumes.

  11. Submillisievert Radiation Dose Coronary CT Angiography: Clinical Impact of the Knowledge-Based Iterative Model Reconstruction.

    Science.gov (United States)

    Iyama, Yuji; Nakaura, Takeshi; Kidoh, Masafumi; Oda, Seitaro; Utsunomiya, Daisuke; Sakaino, Naritsugu; Tokuyasu, Shinichi; Osakabe, Hirokazu; Harada, Kazunori; Yamashita, Yasuyuki

    2016-11-01

    The purpose of this study was to evaluate the noise and image quality of images reconstructed with a knowledge-based iterative model reconstruction (knowledge-based IMR) in ultra-low dose cardiac computed tomography (CT). We performed submillisievert radiation dose coronary CT angiography on 43 patients. We also performed a phantom study to evaluate the influence of object size with the automatic exposure control phantom. We reconstructed clinical and phantom studies with filtered back projection (FBP), hybrid iterative reconstruction (hybrid IR), and knowledge-based IMR. We measured the effective dose of patients and compared CT number, image noise, and contrast-to-noise ratio in the ascending aorta for each reconstruction technique. We compared the relationship between image noise and body mass index for the clinical study, and object size for the phantom study. The mean effective dose was 0.98 ± 0.25 mSv. The image noise of knowledge-based IMR images was significantly lower than that of FBP and hybrid IR images (knowledge-based IMR: 19.4 ± 2.8; FBP: 126.7 ± 35.0; hybrid IR: 48.8 ± 12.8, respectively) (P < 0.05). The contrast-to-noise ratio of knowledge-based IMR images was significantly higher than that of FBP and hybrid IR images (knowledge-based IMR: 29.1 ± 5.4; FBP: 4.6 ± 1.3; hybrid IR: 13.1 ± 3.5, respectively) (P < 0.05). Image noise was correlated with body mass index with knowledge-based IMR (r = 0.27, P < 0.05). In conclusion, knowledge-based IMR offers significant noise reduction and improvement in image quality in submillisievert radiation dose cardiac CT. Copyright © 2016 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  12. Bioprocess iterative batch-to-batch optimization based on hybrid parametric/nonparametric models.

    Science.gov (United States)

    Teixeira, Ana P; Clemente, João J; Cunha, António E; Carrondo, Manuel J T; Oliveira, Rui

    2006-01-01

    This paper presents a novel method for iterative batch-to-batch dynamic optimization of bioprocesses. The relationship between process performance and control inputs is established by means of hybrid grey-box models combining parametric and nonparametric structures. The bioreactor dynamics are defined by material balance equations, whereas the cell population subsystem is represented by an adjustable mixture of nonparametric and parametric models. Thus optimizations are possible without detailed mechanistic knowledge concerning the biological system. A clustering technique is used to supervise the reliability of the nonparametric subsystem during the optimization. Whenever the nonparametric outputs are unreliable, the objective function is penalized. The technique was evaluated with three simulation case studies. The overall results suggest that the convergence to the optimal process performance may be achieved after a small number of batches. The model unreliability risk constraint along with sampling scheduling are crucial to minimize the experimental effort required to attain a given process performance. In general terms, it may be concluded that the proposed method broadens the application of the hybrid parametric/nonparametric modeling technique to "newer" processes with higher potential for optimization.

  13. ITER EDA Newsletter. V. 3, no. 8

    International Nuclear Information System (INIS)

    1994-08-01

    This ITER EDA (Engineering Design Activities) Newsletter issue reports on the sixth ITER council meeting; introduces the newly appointed ITER director and reports on his address to the ITER council. The vacuum tank for the ITER model coil testing, installed at JAERI, Naka, Japan is also briefly described

  14. SU-E-I-33: Initial Evaluation of Model-Based Iterative CT Reconstruction Using Standard Image Quality Phantoms

    International Nuclear Information System (INIS)

    Gingold, E; Dave, J

    2014-01-01

    Purpose: The purpose of this study was to compare a new model-based iterative reconstruction with existing reconstruction methods (filtered backprojection and basic iterative reconstruction) using quantitative analysis of standard image quality phantom images. Methods: An ACR accreditation phantom (Gammex 464) and a CATPHAN600 phantom were scanned using 3 routine clinical acquisition protocols (adult axial brain, adult abdomen, and pediatric abdomen) on a Philips iCT system. Each scan was acquired using default conditions and 75%, 50% and 25% dose levels. Images were reconstructed using standard filtered backprojection (FBP), conventional iterative reconstruction (iDose4) and a prototype model-based iterative reconstruction (IMR). Phantom measurements included CT number accuracy, contrast to noise ratio (CNR), modulation transfer function (MTF), low contrast detectability (LCD), and noise power spectrum (NPS). Results: The choice of reconstruction method had no effect on CT number accuracy or MTF (p<0.01). The CNR of a 6 HU contrast target was improved by 1–67% with iDose4 relative to FBP, while IMR improved CNR by 145–367% across all protocols and dose levels. Within each scan protocol, the CNR improvement from IMR vs FBP showed a general trend of greater improvement at lower dose levels. NPS magnitude was greatest for FBP and lowest for IMR. The NPS of the IMR reconstruction showed a pronounced decrease with increasing spatial frequency, consistent with the unusual noise texture seen in IMR images. Conclusion: Iterative Model Reconstruction reduces noise and improves contrast-to-noise ratio without sacrificing spatial resolution in CT phantom images. This offers the possibility of radiation dose reduction and improved low contrast detectability compared with filtered backprojection or conventional iterative reconstruction.

  15. Image quality in children with low-radiation chest CT using adaptive statistical iterative reconstruction and model-based iterative reconstruction.

    Directory of Open Access Journals (Sweden)

    Jihang Sun

    OBJECTIVE: To evaluate noise reduction and image quality improvement in low-radiation dose chest CT images in children using adaptive statistical iterative reconstruction (ASIR) and a full model-based iterative reconstruction (MBIR) algorithm. METHODS: Forty-five children (age ranging from 28 days to 6 years, median of 1.8 years) who received low-dose chest CT scans were included. An age-dependent noise index (NI) was used for acquisition. Images were retrospectively reconstructed using three methods: MBIR, 60% of ASIR and 40% of conventional filtered back-projection (FBP), and FBP. The subjective quality of the images was independently evaluated by two radiologists. Objective noise in the left ventricle (LV), muscle, fat, descending aorta and lung field at the layer with the largest cross-section area of the LV was measured, with the region of interest about one fourth to half of the area of the descending aorta. The optimized signal-to-noise ratio (SNR) was calculated. RESULTS: In terms of subjective quality, MBIR images were significantly better than ASIR and FBP in image noise and visibility of tiny structures, but blurred edges were observed. In terms of objective noise, MBIR and ASIR reconstruction decreased the image noise by 55.2% and 31.8%, respectively, for the LV compared with FBP. Similarly, MBIR and ASIR reconstruction increased the SNR by 124.0% and 46.2%, respectively, compared with FBP. CONCLUSION: Compared with FBP and ASIR, overall image quality and noise reduction were significantly improved by MBIR. MBIR can reconstruct acceptable chest CT images in children at a lower radiation dose.

  16. Modelling of transitions between L- and H-mode in JET high plasma current plasmas and application to ITER scenarios including tungsten behaviour

    Science.gov (United States)

    Koechl, F.; Loarte, A.; Parail, V.; Belo, P.; Brix, M.; Corrigan, G.; Harting, D.; Koskela, T.; Kukushkin, A. S.; Polevoi, A. R.; Romanelli, M.; Saibene, G.; Sartori, R.; Eich, T.; Contributors, JET

    2017-08-01

    The dynamics for the transition from L-mode to a stationary high QDT H-mode regime in ITER is expected to be qualitatively different to present experiments. Differences may be caused by a low fuelling efficiency of recycling neutrals, which influences the post-transition plasma density evolution on the one hand. On the other hand, the effect of the plasma density evolution itself both on the alpha heating power and on the edge power flow required to sustain the H-mode confinement needs to be considered. This paper presents results of modelling studies of the transition to a stationary high QDT H-mode regime in ITER with the JINTRAC suite of codes, which include optimisation of the plasma density evolution to ensure a robust achievement of high QDT regimes in ITER on the one hand and the avoidance of tungsten accumulation in this transient phase on the other hand. As a first step, the JINTRAC integrated models have been validated in fully predictive simulations (excluding core momentum transport, which is prescribed) against core, pedestal and divertor plasma measurements in JET C-wall experiments for the transition from L-mode to stationary H-mode in partially ITER-relevant conditions (highest achievable current and power, H98,y ~ 1.0, low collisionality, comparable evolution in Pnet/PL-H, but different ρ*, Ti/Te, Mach number and plasma composition compared to ITER expectations). The selection of transport models (core: NCLASS + Bohm/gyroBohm in L-mode/GLF23 in H-mode) was determined by a trade-off between model complexity and efficiency. Good agreement between code predictions and measured plasma parameters is obtained if anomalous heat and particle transport in the edge transport barrier are assumed to be reduced at different rates with increasing edge power flow normalised to the H-mode threshold; in particular, the increase in edge plasma density is dominated by this edge transport reduction as the calculated neutral influx across the

  17. Sparse calibration of subsurface flow models using nonlinear orthogonal matching pursuit and an iterative stochastic ensemble method

    KAUST Repository

    Elsheikh, Ahmed H.

    2013-06-01

    We introduce a nonlinear orthogonal matching pursuit (NOMP) for sparse calibration of subsurface flow models. Sparse calibration is a challenging problem as the unknowns are both the non-zero components of the solution and their associated weights. NOMP is a greedy algorithm that discovers at each iteration the basis function most correlated with the residual, from a large pool of basis functions. The discovered basis (aka support) is augmented across the nonlinear iterations. Once a set of basis functions is selected, the solution is obtained by applying Tikhonov regularization. The proposed algorithm relies on a stochastically approximated gradient using an iterative stochastic ensemble method (ISEM). In the current study, the search space is parameterized using an overcomplete dictionary of basis functions built using the K-SVD algorithm. The proposed algorithm is the first ensemble-based algorithm that tackles the sparse nonlinear parameter estimation problem. © 2013 Elsevier Ltd.
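
    The greedy selection plus Tikhonov solve is the classic orthogonal matching pursuit in the linear setting; a sketch with a random dictionary and sparse signal (the stochastic-gradient/ISEM and K-SVD machinery of the paper is omitted):

        import numpy as np

        rng = np.random.default_rng(3)
        D = rng.normal(size=(100, 300))             # overcomplete dictionary
        D /= np.linalg.norm(D, axis=0)
        x_true = np.zeros(300)
        x_true[[7, 42, 199]] = [1.5, -2.0, 0.8]     # sparse "true" parameters
        y = D @ x_true + 0.01 * rng.normal(size=100)

        support, alpha = [], 1e-4                   # alpha: Tikhonov weight
        r = y.copy()
        for _ in range(3):                          # greedy iterations
            j = int(np.argmax(np.abs(D.T @ r)))     # basis most correlated with residual
            support.append(j)
            Ds = D[:, support]
            # Tikhonov-regularized solve on the current support.
            w = np.linalg.solve(Ds.T @ Ds + alpha * np.eye(len(support)), Ds.T @ y)
            r = y - Ds @ w                          # update the residual
        print("recovered support:", sorted(support))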

  18. Numerical evaluation of experimental models to investigate the dynamic behavior of the ITER tokamak assembly

    International Nuclear Information System (INIS)

    Onozuka, M.; Takeda, N.; Nakahira, M.; Shimizu, K.; Nakamura, T.

    2003-01-01

    The most recent assessment method to evaluate the dynamic behavior of the International Thermonuclear Experimental Reactor (ITER) tokamak assembly is outlined. Three experimental models, including a 1/5.8-scale tokamak model, have been considered to validate the numerical analysis methods for dynamic events, particularly seismic ones. The experimental model has been evaluated by numerical calculations and the results are presented. In the calculations, equivalent linearization has been applied for the non-linear characteristics of the support flange connection, caused by the effects of the bolt-fastening and the friction between the flanges. The detailed connecting conditions for the support flanges have been developed and validated for the analysis. Using these conditions, the eigen-mode analysis has shown that the first and second eigen-modes are horizontal vibration modes with a natural frequency of 39 Hz, while the vertical vibration mode is the fourth mode, with a natural frequency of 86 Hz. Dynamic analysis for seismic events has shown a maximum acceleration approximately twofold larger than the applied acceleration, and a maximum stress of 104 MPa in the flange connecting bolt. These values will be examined against experimental results in order to validate the analysis methods.

  19. Modeling Design Iteration in Product Design and Development and Its Solution by a Novel Artificial Bee Colony Algorithm

    Science.gov (United States)

    2014-01-01

    Due to fierce market competition, how to improve product quality and reduce development cost determines the core competitiveness of enterprises. However, design iteration generally causes increases in product cost and delays in development time as well, so how to identify and model couplings among tasks in product design and development has become an important issue for enterprises to settle. In this paper, the shortcomings of the WTM model are discussed, and the tearing approach as well as the inner iteration method are used to complement the classic WTM model. In addition, the ABC algorithm is introduced to find optimal decoupling schemes. Firstly, the tearing approach and the inner iteration method are analyzed for solving coupled sets. Secondly, a hybrid iteration model combining these two techniques is set up. Thirdly, a high-performance swarm intelligence algorithm, artificial bee colony, is adopted to realize problem-solving. Finally, an engineering design of a chemical processing system is given in order to verify its reasonability and effectiveness. PMID:25431584
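
    The work transformation matrix (WTM) iteration underlying the paper can be shown in a few lines: rework propagates as w_{k+1} = A·w_k, and the total work converges to (I − A)⁻¹·w₀ whenever the spectral radius of A is below one. The 3-task coupling values are invented:

        import numpy as np

        # Illustrative 3-task work transformation matrix: A[i, j] is the fraction
        # of task j's work that creates rework for task i in the next iteration.
        A = np.array([[0.0, 0.3, 0.1],
                      [0.2, 0.0, 0.2],
                      [0.1, 0.4, 0.0]])
        w0 = np.ones(3)                              # initial work vector

        assert max(abs(np.linalg.eigvals(A))) < 1.0  # guarantees convergence

        total, w = w0.copy(), w0.copy()
        for _ in range(100):                         # design iterations
            w = A @ w                                # rework generated this round
            total += w
        print(total, np.linalg.solve(np.eye(3) - A, w0))  # agree in the limit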

  20. Nonconvex Model Predictive Control for Commercial Refrigeration

    DEFF Research Database (Denmark)

    Hovgaard, Tobias Gybel; Larsen, Lars F.S.; Jørgensen, John Bagterp

    2013-01-01

    function, however, is nonconvex due to the temperature dependence of thermodynamic efficiency. To handle this nonconvexity we propose a sequential convex optimization method, which typically converges in fewer than 5 or so iterations. We employ a fast convex quadratic programming solver to carry out...... the iterations, which is more than fast enough to run in real-time. We demonstrate our method on a realistic model, with a full year simulation and 15 minute time periods, using historical electricity prices and weather data, as well as random variations in thermal load. These simulations show substantial cost...... capacity associated with large penetration of intermittent renewable energy sources in a future smart grid....

  1. ITER safety

    International Nuclear Information System (INIS)

    Raeder, J.; Piet, S.; Buende, R.

    1991-01-01

    As part of the series of publications by the IAEA that summarize the results of the Conceptual Design Activities for the ITER project, this document describes the ITER safety analyses. It contains an assessment of normal operation effluents, accident scenarios, plasma chamber safety, tritium system safety, magnet system safety, external loss of coolant and coolant flow problems, and a waste management assessment, while it describes the implementation of the safety approach for ITER. The document ends with a list of major conclusions, a set of topical remarks on technical safety issues, and recommendations for the Engineering Design Activities, safety considerations for siting ITER, and recommendations with regard to the safety issues for the R and D for ITER. Refs, figs and tabs

  2. What do saliency models predict?

    Science.gov (United States)

    Koehler, Kathryn; Guo, Fei; Zhang, Sheng; Eckstein, Miguel P.

    2014-01-01

    Saliency models have been frequently used to predict eye movements made during image viewing without a specified task (free viewing). Use of a single image set to systematically compare free viewing to other tasks has never been performed. We investigated the effect of task differences on the ability of three models of saliency to predict the performance of humans viewing a novel database of 800 natural images. We introduced a novel task where 100 observers made explicit perceptual judgments about the most salient image region. Other groups of observers performed a free viewing task, saliency search task, or cued object search task. Behavior on the popular free viewing task was not best predicted by standard saliency models. Instead, the models most accurately predicted the explicit saliency selections and eye movements made while performing saliency judgments. Observers' fixations varied similarly across images for the saliency and free viewing tasks, suggesting that these two tasks are related. The variability of observers' eye movements was modulated by the task (lowest for the object search task and greatest for the free viewing and saliency search tasks) as well as the clutter content of the images. Eye movement variability in saliency search and free viewing might be also limited by inherent variation of what observers consider salient. Our results contribute to understanding the tasks and behavioral measures for which saliency models are best suited as predictors of human behavior, the relationship across various perceptual tasks, and the factors contributing to observer variability in fixational eye movements. PMID:24618107

  3. Superpixel Segmentation for Polsar Images with Local Iterative Clustering and Heterogeneous Statistical Model

    Science.gov (United States)

    Xiang, D.; Ni, W.; Zhang, H.; Wu, J.; Yan, W.; Su, Y.

    2017-09-01

    Superpixel segmentation has the advantage that it can well preserve the target shape and details. In this research, an adaptive polarimetric SLIC (Pol-ASLIC) superpixel segmentation method is proposed. First, the spherically invariant random vector (SIRV) product model is adopted to estimate the normalized covariance matrix and texture for each pixel. A new edge detector is then utilized to extract PolSAR image edges for the initialization of central seeds. In the local iterative clustering, multiple cues including polarimetric, texture, and spatial information are considered to define the similarity measure. Moreover, a polarimetric homogeneity measurement is used to automatically determine the tradeoff factor, which can vary from homogeneous areas to heterogeneous areas. Finally, the SLIC superpixel segmentation scheme is applied to the airborne Experimental SAR and PiSAR L-band PolSAR data to demonstrate the effectiveness of this proposed segmentation approach. This proposed algorithm produces compact superpixels which can well adhere to image boundaries in both natural and urban areas. The detail information in heterogeneous areas can be well preserved.

  4. SUPERPIXEL SEGMENTATION FOR POLSAR IMAGES WITH LOCAL ITERATIVE CLUSTERING AND HETEROGENEOUS STATISTICAL MODEL

    Directory of Open Access Journals (Sweden)

    D. Xiang

    2017-09-01

    Full Text Available Superpixel segmentation has the advantage of preserving target shape and detail well. In this research, an adaptive polarimetric SLIC (Pol-ASLIC) superpixel segmentation method is proposed. First, the spherically invariant random vector (SIRV) product model is adopted to estimate the normalized covariance matrix and texture for each pixel. A new edge detector is then utilized to extract PolSAR image edges for the initialization of central seeds. In the local iterative clustering, multiple cues including polarimetric, texture, and spatial information are considered to define the similarity measure. Moreover, a polarimetric homogeneity measurement is used to automatically determine the tradeoff factor, which can vary from homogeneous areas to heterogeneous areas. Finally, the SLIC superpixel segmentation scheme is applied to the airborne Experimental SAR and PiSAR L-band PolSAR data to demonstrate the effectiveness of the proposed segmentation approach. The proposed algorithm produces compact superpixels which adhere well to image boundaries in both natural and urban areas, and detail in heterogeneous areas is well preserved.

  5. Circuit model of the ITER-like antenna for JET and simulation of its control algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Durodié, Frédéric, E-mail: frederic.durodie@rma.ac.be; Křivská, Alena [LPP-ERM/KMS, TEC Partner, Brussels (Belgium); Dumortier, Pierre; Lerche, Ernesto [LPP-ERM/KMS, TEC Partner, Brussels (Belgium); JET, Culham Science Centre, Abingdon, OX14 3DB (United Kingdom); Helou, Walid [CEA, IRFM, F-13108 St-Paul-Lez-Durance (France); Collaboration: EUROfusion Consortium

    2015-12-10

    The ITER-like Antenna (ILA) for JET [1] is a 2 toroidal by 2 poloidal array of Resonant Double Loops (RDL) featuring in-vessel matching capacitors feeding RF current straps in a conjugate-T manner, a low impedance quarter-wave impedance transformer, a service stub allowing hydraulic actuator and water cooling services to reach the aforementioned capacitors, and a 2nd stage phase-shifter-stub matching circuit allowing the conjugate-T working impedance to be corrected/chosen. Toroidally adjacent RDLs are fed from a 3dB hybrid splitter. It has been operated at 33, 42 and 47 MHz on plasma (2008-2009), while its presently estimated frequency range is 29 to 49 MHz. At the time of the design (2001-2004) as well as of the experiments, the circuit models of the ILA were quite basic. The ILA front face and strap array Topica model was relatively crude and failed to correctly represent the poloidal central septum, Faraday Screen attachment as well as the segmented antenna central septum limiter. The ILA matching capacitors, T-junction, Vacuum Transmission Line (VTL) and Service Stubs were represented by lumped circuit elements and simple transmission line models. The assessment of the ILA results carried out to decide on the repair of the ILA identified that achieving routine full array operation requires a better understanding of the RF circuit, a feedback control algorithm for the 2nd stage matching as well as tighter calibrations of RF measurements. The paper presents the progress in modelling of the ILA comprising a more detailed Topica model of the front face for various plasma Scrape Off Layer profiles, a comprehensive HFSS model of the matching capacitors including internal bellows and electrode cylinders, 3D-EM models of the VTL including vacuum ceramic window, Service stub, a transmission line model of the 2nd stage matching circuit and main transmission lines including the 3dB hybrid splitters. A time evolving simulation using the improved circuit model allowed to design and

  6. Circuit model of the ITER-like antenna for JET and simulation of its control algorithms

    Science.gov (United States)

    Durodié, Frédéric; Dumortier, Pierre; Helou, Walid; Křivská, Alena; Lerche, Ernesto

    2015-12-01

    The ITER-like Antenna (ILA) for JET [1] is a 2 toroidal by 2 poloidal array of Resonant Double Loops (RDL) featuring in-vessel matching capacitors feeding RF current straps in a conjugate-T manner, a low impedance quarter-wave impedance transformer, a service stub allowing hydraulic actuator and water cooling services to reach the aforementioned capacitors, and a 2nd stage phase-shifter-stub matching circuit allowing the conjugate-T working impedance to be corrected/chosen. Toroidally adjacent RDLs are fed from a 3dB hybrid splitter. It has been operated at 33, 42 and 47 MHz on plasma (2008-2009), while its presently estimated frequency range is 29 to 49 MHz. At the time of the design (2001-2004) as well as of the experiments, the circuit models of the ILA were quite basic. The ILA front face and strap array Topica model was relatively crude and failed to correctly represent the poloidal central septum, Faraday Screen attachment as well as the segmented antenna central septum limiter. The ILA matching capacitors, T-junction, Vacuum Transmission Line (VTL) and Service Stubs were represented by lumped circuit elements and simple transmission line models. The assessment of the ILA results carried out to decide on the repair of the ILA identified that achieving routine full array operation requires a better understanding of the RF circuit, a feedback control algorithm for the 2nd stage matching as well as tighter calibrations of RF measurements. The paper presents the progress in modelling of the ILA comprising a more detailed Topica model of the front face for various plasma Scrape Off Layer profiles, a comprehensive HFSS model of the matching capacitors including internal bellows and electrode cylinders, 3D-EM models of the VTL including vacuum ceramic window, Service stub, a transmission line model of the 2nd stage matching circuit and main transmission lines including the 3dB hybrid splitters. A time evolving simulation using the improved circuit model allowed to design and

  7. An iterative genetic and dynamical modelling approach identifies novel features of the gene regulatory network underlying melanocyte development.

    Science.gov (United States)

    Greenhill, Emma R; Rocco, Andrea; Vibert, Laura; Nikaido, Masataka; Kelsh, Robert N

    2011-09-01

    The mechanisms generating stably differentiated cell-types from multipotent precursors are key to understanding normal development and have implications for treatment of cancer and the therapeutic use of stem cells. Pigment cells are a major derivative of neural crest stem cells and a key model cell-type for our understanding of the genetics of cell differentiation. Several factors driving melanocyte fate specification have been identified, including the transcription factor and master regulator of melanocyte development, Mitf, and Wnt signalling and the multipotency and fate specification factor, Sox10, which drive mitf expression. While these factors together drive multipotent neural crest cells to become specified melanoblasts, the mechanisms stabilising melanocyte differentiation remain unclear. Furthermore, there is controversy over whether Sox10 has an ongoing role in melanocyte differentiation. Here we use zebrafish to explore in vivo the gene regulatory network (GRN) underlying melanocyte specification and differentiation. We use an iterative process of mathematical modelling and experimental observation to explore methodically the core melanocyte GRN we have defined. We show that Sox10 is not required for ongoing differentiation and expression is downregulated in differentiating cells, in response to Mitfa and Hdac1. Unexpectedly, we find that Sox10 represses Mitf-dependent expression of melanocyte differentiation genes. Our systems biology approach allowed us to predict two novel features of the melanocyte GRN, which we then validate experimentally. Specifically, we show that maintenance of mitfa expression is Mitfa-dependent, and identify Sox9b as providing an Mitfa-independent input to melanocyte differentiation. Our data supports our previous suggestion that Sox10 only functions transiently in regulation of mitfa and cannot be responsible for long-term maintenance of mitfa expression; indeed, Sox10 is likely to slow melanocyte differentiation in the

  8. Modelling of the edge of a fusion plasma towards ITER and experimental validation on JET

    International Nuclear Information System (INIS)

    Guillemaut, Christophe

    2013-01-01

    The conditions required for fusion can be obtained in tokamaks. In most of these machines, the plasma-wall interaction and the exhaust of heating power are handled in a cavity called the divertor. However, the high heat fluxes involved and the limitations of the materials of the plasma facing components (PFC) are problematic. Much research is being done in this field in the context of ITER, which should demonstrate 500 MW of DT fusion power during ∼ 400 s. Such operation could raise the heat flux on the PFC beyond what can be handled. Its reduction to manageable levels relies on divertor detachment, involving the reduction of the particle and heat fluxes on the PFC. Unfortunately, this phenomenon is still difficult to model. The aim of this PhD is to use the modelling of JET experiments with EDGE2D-EIRENE to make progress in the understanding of detachment. The simulations reproduce the observed detachment in C and Be/W environments. The distribution of the radiation is well reproduced by the code for C but with some discrepancies in Be/W. The comparison between different sets of atomic physics processes shows that ion-molecule elastic collisions are responsible for the detachment seen in EDGE2D-EIRENE. This process provides good neutral confinement in the divertor and significant momentum losses at low temperature, when the plasma is recombining. Comparison between EDGE2D-EIRENE and SOLPS4.3 shows similar detachment trends, but the importance of the ion-molecule elastic collisions is reduced in SOLPS4.3. Both codes suggest that any process capable of improving the neutral confinement in the divertor should help to improve the modelling of the detachment. (author) [fr

  9. A family of small-world network models built by complete graph and iteration-function

    Science.gov (United States)

    Ma, Fei; Yao, Bing

    2018-02-01

    Small-world networks are popular in real-life complex systems. In the past few decades, researchers have presented numerous small-world models, some stochastic and the rest deterministic. In comparison with random models, it is not only convenient but also interesting to study the topological properties of deterministic models in fields such as graph theory, theoretical computer science and so on. Community structure (modular topology), another focus of current research, is regarded as a useful statistical parameter for uncovering the operating functions of a network. Building and studying models with community structure and small-world character is therefore a task in demand. Hence, in this article, we build a family of sparse networks spanning a network space N(t) which differs from those previous deterministic models, even though our models are established in the same way, namely iterative generation. Because of the random connecting manner at each time step, the members of N(t) lack the strictly self-similar feature widely shared by a large number of previous models. This turns our attention from discussing one certain model to investigating a group of various ones spanning a network space. Somewhat surprisingly, our results prove that all members of N(t) possess some similar characters: (a) sparsity, (b) an exponential-scale degree distribution P(k) ∼ α^(-k), and (c) the small-world property. Here we must stress a striking but intriguing phenomenon: the difference in average path length (APL) between any two members of N(t) is quite small, which indicates that this random connecting way among members has no great effect on APL. At the end of this article, as a new topological parameter correlated to the reliability, synchronization capability and diffusion properties of networks, the number of spanning trees on a representative member NB(t) of N(t) is studied in detail, and an exact analytical solution for its spanning-tree entropy is also

  10. Image quality of ct angiography using model-based iterative reconstruction in infants with congenital heart disease: Comparison with filtered back projection and hybrid iterative reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Jia, Qianjun, E-mail: jiaqianjun@126.com [Southern Medical University, Guangzhou, Guangdong (China); Department of Radiology, Guangdong General Hospital, Guangdong Academy of Medical Sciences, Guangzhou, Guangdong (China); Department of Catheterization Lab, Guangdong Cardiovascular Institute, Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong General Hospital, Guangdong Academy of Medical Sciences, Guangzhou, Guangdong (China); Zhuang, Jian, E-mail: zhuangjian5413@tom.com [Department of Cardiac Surgery, Guangdong Cardiovascular Institute, Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong General Hospital, Guangdong Academy of Medical Sciences, Guangzhou, Guangdong (China); Jiang, Jun, E-mail: 81711587@qq.com [Department of Radiology, Shenzhen Second People’s Hospital, Shenzhen, Guangdong (China); Li, Jiahua, E-mail: 970872804@qq.com [Department of Catheterization Lab, Guangdong Cardiovascular Institute, Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong General Hospital, Guangdong Academy of Medical Sciences, Guangzhou, Guangdong (China); Huang, Meiping, E-mail: huangmeiping_vip@163.com [Department of Catheterization Lab, Guangdong Cardiovascular Institute, Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong General Hospital, Guangdong Academy of Medical Sciences, Guangzhou, Guangdong (China); Southern Medical University, Guangzhou, Guangdong (China); Liang, Changhong, E-mail: cjr.lchh@vip.163.com [Department of Radiology, Guangdong General Hospital, Guangdong Academy of Medical Sciences, Guangzhou, Guangdong (China); Southern Medical University, Guangzhou, Guangdong (China)

    2017-01-15

    Purpose: To compare the image quality, rate of coronary artery visualization and diagnostic accuracy of 256-slice multi-detector computed tomography angiography (CTA) with prospective electrocardiographic (ECG) triggering at a tube voltage of 80 kVp between 3 reconstruction algorithms (filtered back projection (FBP), hybrid iterative reconstruction (iDose⁴) and iterative model reconstruction (IMR)) in infants with congenital heart disease (CHD). Methods: Fifty-one infants with CHD who underwent cardiac CTA in our institution between December 2014 and March 2015 were included. The effective radiation doses were calculated. Imaging data were reconstructed using the FBP, iDose⁴ and IMR algorithms. Parameters of objective image quality (noise, signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR)); subjective image quality (overall image quality, image noise and margin sharpness); coronary artery visibility; and diagnostic accuracy for the three algorithms were measured and compared. Results: The mean effective radiation dose was 0.61 ± 0.32 mSv. Compared to FBP and iDose⁴, IMR yielded significantly lower noise (P < 0.01), higher SNR and CNR values (P < 0.01), and a greater subjective image quality score (P < 0.01). The total number of coronary segments visualized was significantly higher for both iDose⁴ and IMR than for FBP (P = 0.002 and P = 0.025, respectively), but there was no significant difference in this parameter between iDose⁴ and IMR (P = 0.397). There was no significant difference in the diagnostic accuracy between the FBP, iDose⁴ and IMR algorithms (χ² = 0.343, P = 0.842). Conclusions: For infants with CHD undergoing cardiac CTA, the IMR reconstruction algorithm provided significantly increased objective and subjective image quality compared with the FBP and iDose⁴ algorithms. However, IMR did not improve the diagnostic accuracy or coronary artery visualization compared with iDose⁴.

  11. Reduced Radiation Dose with Model-based Iterative Reconstruction versus Standard Dose with Adaptive Statistical Iterative Reconstruction in Abdominal CT for Diagnosis of Acute Renal Colic.

    Science.gov (United States)

    Fontarensky, Mikael; Alfidja, Agaïcha; Perignon, Renan; Schoenig, Arnaud; Perrier, Christophe; Mulliez, Aurélien; Guy, Laurent; Boyer, Louis

    2015-07-01

    To evaluate the accuracy of reduced-dose abdominal computed tomographic (CT) imaging by using a new generation model-based iterative reconstruction (MBIR) to diagnose acute renal colic compared with a standard-dose abdominal CT with 50% adaptive statistical iterative reconstruction (ASIR). This institutional review board-approved prospective study included 118 patients with symptoms of acute renal colic who underwent the following two successive CT examinations: standard-dose ASIR 50% and reduced-dose MBIR. Two radiologists independently reviewed both CT examinations for presence or absence of renal calculi, differential diagnoses, and associated abnormalities. The imaging findings, radiation dose estimates, and image quality of the two CT reconstruction methods were compared. Concordance was evaluated by κ coefficient, and descriptive statistics and t test were used for statistical analysis. Intraobserver correlation was 100% for the diagnosis of renal calculi (κ = 1). Renal calculus (τ = 98.7%; κ = 0.97) and obstructive upper urinary tract disease (τ = 98.16%; κ = 0.95) were detected, and differential or alternative diagnosis was performed (τ = 98.87%; κ = 0.95). MBIR allowed a dose reduction of 84% versus standard-dose ASIR 50% (mean volume CT dose index, 1.7 mGy ± 0.8 [standard deviation] vs 10.9 mGy ± 4.6; mean size-specific dose estimate, 2.2 mGy ± 0.7 vs 13.7 mGy ± 3.9; P < .001) without a conspicuous deterioration in image quality (reduced-dose MBIR vs ASIR 50% mean scores, 3.83 ± 0.49 vs 3.92 ± 0.27, respectively; P = .32) or increase in noise (reduced-dose MBIR vs ASIR 50% mean, respectively, 18.36 HU ± 2.53 vs 17.40 HU ± 3.42). Its main drawback remains the long time required for reconstruction (mean, 40 minutes). A reduced-dose protocol with MBIR allowed a dose reduction of 84% without increasing noise and without a conspicuous deterioration in image quality in patients suspected of having renal colic.

  12. Boosting iterative stochastic ensemble method for nonlinear calibration of subsurface flow models

    KAUST Repository

    Elsheikh, Ahmed H.

    2013-06-01

    A novel parameter estimation algorithm is proposed. The inverse problem is formulated as a sequential data integration problem in which Gaussian process regression (GPR) is used to integrate the prior knowledge (static data). The search space is further parameterized using Karhunen-Loève expansion to build a set of basis functions that spans the search space. Optimal weights of the reduced basis functions are estimated by an iterative stochastic ensemble method (ISEM). ISEM employs directional derivatives within a Gauss-Newton iteration for efficient gradient estimation. The resulting update equation relies on the inverse of the output covariance matrix, which is rank deficient. In the proposed algorithm we use an iterative regularization based on the ℓ2 Boosting algorithm. ℓ2 Boosting iteratively fits the residual, and the amount of regularization is controlled by the number of iterations. A termination criterion based on the Akaike information criterion (AIC) is utilized. This regularization method is very attractive in terms of performance and simplicity of implementation. The proposed algorithm combining ISEM and ℓ2 Boosting is evaluated on several nonlinear subsurface flow parameter estimation problems. The efficiency of the proposed algorithm is demonstrated by the small size of utilized ensembles and in terms of error convergence rates. © 2013 Elsevier B.V.
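
    The residual-fitting loop at the core of ℓ2 Boosting is compact enough to sketch. The snippet below is a generic componentwise least-squares variant under assumed inputs (a design matrix X and response y); a fixed iteration budget stands in for the paper's AIC-based stopping rule, and the sketch illustrates only the regularization idea, not the ISEM coupling:

    ```python
    import numpy as np

    def l2_boost(X, y, n_iter=50, nu=0.1):
        """L2 Boosting sketch: repeatedly fit the current residual with a weak
        least-squares learner (here the single best-fitting column of X) and
        add a shrunken copy of that fit; the iteration count acts as the
        regularization knob."""
        n, p = X.shape
        coef = np.zeros(p)
        residual = y.astype(float).copy()
        for _ in range(n_iter):
            # Componentwise least squares: coefficient of each column against
            # the residual, then keep the column with the smallest fit error.
            betas = (X.T @ residual) / (X ** 2).sum(axis=0)
            sse = ((residual[:, None] - X * betas[None, :]) ** 2).sum(axis=0)
            j = int(np.argmin(sse))
            coef[j] += nu * betas[j]
            residual -= nu * betas[j] * X[:, j]
        return coef
    ```

    Increasing `n_iter` weakens the regularization, mirroring the role the number of boosting iterations plays in the paper.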

  13. Study of wall conditioning in tokamaks with application to ITER

    International Nuclear Information System (INIS)

    Kogut, Dmitri

    2014-01-01

    This thesis is devoted to studies of the performance and efficiency of wall conditioning techniques in fusion reactors, such as ITER. Conditioning is necessary to control the state of the surface of plasma facing components to ensure plasma initiation and performance. Conditioning and operation of the JET tokamak with the ITER-relevant material mix is extensively studied. A 2D model of glow conditioning discharges is developed and validated; it predicts reasonably uniform discharges in ITER. In the nuclear phase of ITER operation, conditioning will be needed to control tritium inventory. It is shown here that isotopic exchange is an efficient means of eliminating tritium from the walls by replacing it with deuterium. Extrapolations for tritium removal are comparable with the expected retention per nominal plasma pulse in ITER. A 1D model of hydrogen isotopic exchange in beryllium is developed and validated. It shows that the fluence and temperature of the surface influence the efficiency of isotopic exchange. (author) [fr

  14. Towards an Iterated Game Model with Multiple Adversaries in Smart-World Systems.

    Science.gov (United States)

    He, Xiaofei; Yang, Xinyu; Yu, Wei; Lin, Jie; Yang, Qingyu

    2018-02-24

    Diverse and varied cyber-attacks challenge the operation of the smart-world system that is supported by Internet-of-Things (IoT) (smart cities, smart grid, smart transportation, etc.) and must be carefully and thoughtfully addressed before widespread adoption of the smart-world system can be fully realized. Although a number of research efforts have been devoted to defending against these threats, a majority of existing schemes focus on the development of a specific defensive strategy to deal with specific, often singular threats. In this paper, we address the issue of coalitional attacks, which can be launched by multiple adversaries cooperatively against the smart-world system such as smart cities. Particularly, we propose a game-theory based model to capture the interaction among multiple adversaries, and quantify the capacity of the defender based on the extended Iterated Public Goods Game (IPGG) model. In the formalized game model, in each round of the attack, a participant can either cooperate by participating in the coalitional attack, or defect by standing aside. In our work, we consider the generic defensive strategy that has a probability to detect the coalitional attack. When the coalitional attack is detected, all participating adversaries are penalized. The expected payoff of each participant is derived through the equalizer strategy that provides participants with competitive benefits. The multiple adversaries with the collusive strategy are also considered. Via a combination of theoretical analysis and experimentation, our results show that no matter which strategies the adversaries choose (random strategy, win-stay-lose-shift strategy, or even the adaptive equalizer strategy), our formalized game model is capable of enabling the defender to greatly reduce the maximum value of the expected average payoff to the adversaries via provisioning sufficient defensive resources, which is reflected by setting a proper penalty factor against the adversaries
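
    A minimal sketch of the payoff mechanics described above: cooperators contribute to an attack pool that is multiplied and shared, and a defender detects the coalition with some probability and penalizes the participants. All parameter names and values below are illustrative, and the paper's equalizer and collusive strategies are not reproduced:

    ```python
    import random

    def ipgg_round(strategies, r=1.6, cost=1.0, p_detect=0.3, penalty=4.0,
                   rng=random):
        """One round of a simplified iterated public goods game with a
        defender: cooperators (1) pay `cost` into the attack pool, the pool
        is multiplied by `r` and shared by all players; with probability
        `p_detect` the attack is detected and every cooperator is penalized."""
        pool = cost * sum(strategies)               # 1 = cooperate, 0 = defect
        share = r * pool / len(strategies)
        detected = rng.random() < p_detect
        return [share - cost * s - (penalty if detected and s else 0.0)
                for s in strategies]

    # Average payoff of an always-cooperating adversary among four players.
    rounds = [ipgg_round([1, 1, 0, 1]) for _ in range(10_000)]
    avg_coop = sum(r[0] for r in rounds) / len(rounds)
    ```

    Raising `penalty` or `p_detect` drives the cooperators' expected payoff down, which is the defender's lever discussed in the abstract.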

  15. Towards an Iterated Game Model with Multiple Adversaries in Smart-World Systems

    Directory of Open Access Journals (Sweden)

    Xiaofei He

    2018-02-01

    Full Text Available Diverse and varied cyber-attacks challenge the operation of the smart-world system that is supported by Internet-of-Things (IoT) (smart cities, smart grid, smart transportation, etc.) and must be carefully and thoughtfully addressed before widespread adoption of the smart-world system can be fully realized. Although a number of research efforts have been devoted to defending against these threats, a majority of existing schemes focus on the development of a specific defensive strategy to deal with specific, often singular threats. In this paper, we address the issue of coalitional attacks, which can be launched by multiple adversaries cooperatively against the smart-world system such as smart cities. Particularly, we propose a game-theory based model to capture the interaction among multiple adversaries, and quantify the capacity of the defender based on the extended Iterated Public Goods Game (IPGG) model. In the formalized game model, in each round of the attack, a participant can either cooperate by participating in the coalitional attack, or defect by standing aside. In our work, we consider the generic defensive strategy that has a probability to detect the coalitional attack. When the coalitional attack is detected, all participating adversaries are penalized. The expected payoff of each participant is derived through the equalizer strategy that provides participants with competitive benefits. The multiple adversaries with the collusive strategy are also considered. Via a combination of theoretical analysis and experimentation, our results show that no matter which strategies the adversaries choose (random strategy, win-stay-lose-shift strategy, or even the adaptive equalizer strategy), our formalized game model is capable of enabling the defender to greatly reduce the maximum value of the expected average payoff to the adversaries via provisioning sufficient defensive resources, which is reflected by setting a proper penalty factor against

  16. Towards an Iterated Game Model with Multiple Adversaries in Smart-World Systems †

    Science.gov (United States)

    Yang, Xinyu; Yu, Wei; Lin, Jie; Yang, Qingyu

    2018-01-01

    Diverse and varied cyber-attacks challenge the operation of the smart-world system that is supported by Internet-of-Things (IoT) (smart cities, smart grid, smart transportation, etc.) and must be carefully and thoughtfully addressed before widespread adoption of the smart-world system can be fully realized. Although a number of research efforts have been devoted to defending against these threats, a majority of existing schemes focus on the development of a specific defensive strategy to deal with specific, often singular threats. In this paper, we address the issue of coalitional attacks, which can be launched by multiple adversaries cooperatively against the smart-world system such as smart cities. Particularly, we propose a game-theory based model to capture the interaction among multiple adversaries, and quantify the capacity of the defender based on the extended Iterated Public Goods Game (IPGG) model. In the formalized game model, in each round of the attack, a participant can either cooperate by participating in the coalitional attack, or defect by standing aside. In our work, we consider the generic defensive strategy that has a probability to detect the coalitional attack. When the coalitional attack is detected, all participating adversaries are penalized. The expected payoff of each participant is derived through the equalizer strategy that provides participants with competitive benefits. The multiple adversaries with the collusive strategy are also considered. Via a combination of theoretical analysis and experimentation, our results show that no matter which strategies the adversaries choose (random strategy, win-stay-lose-shift strategy, or even the adaptive equalizer strategy), our formalized game model is capable of enabling the defender to greatly reduce the maximum value of the expected average payoff to the adversaries via provisioning sufficient defensive resources, which is reflected by setting a proper penalty factor against the adversaries

  17. Validation of the model for ELM suppression with 3D magnetic fields using low torque ITER baseline scenario discharges in DIII-D

    Science.gov (United States)

    Moyer, R. A.; Paz-Soldan, C.; Nazikian, R.; Orlov, D. M.; Ferraro, N. M.; Grierson, B. A.; Knölker, M.; Lyons, B. C.; McKee, G. R.; Osborne, T. H.; Rhodes, T. L.; Meneghini, O.; Smith, S.; Evans, T. E.; Fenstermacher, M. E.; Groebner, R. J.; Hanson, J. M.; La Haye, R. J.; Luce, T. C.; Mordijck, S.; Solomon, W. M.; Turco, F.; Yan, Z.; Zeng, L.; DIII-D Team

    2017-10-01

    Experiments have been executed in the DIII-D tokamak to extend suppression of Edge Localized Modes (ELMs) with Resonant Magnetic Perturbations (RMPs) to ITER-relevant levels of beam torque. The results support the hypothesis for RMP ELM suppression based on transition from an ideal screened response to a tearing response at a resonant surface that prevents expansion of the pedestal to an unstable width [Snyder et al., Nucl. Fusion 51, 103016 (2011) and Wade et al., Nucl. Fusion 55, 023002 (2015)]. In ITER baseline plasmas with I/aB = 1.4 and pedestal ν* ≈ 0.15, ELMs are readily suppressed with co-Ip neutral beam injection. However, reducing the beam torque from 5 Nm to ≤ 3.5 Nm results in loss of ELM suppression and a shift in the zero-crossing of the electron perpendicular rotation ω⊥e ≈ 0 deeper into the plasma. The change in radius of ω⊥e ≈ 0 is due primarily to changes to the electron diamagnetic rotation frequency ω*e. Linear plasma response modeling with the resistive MHD code m3d-c1 indicates that the tearing response location tracks the inward shift in ω⊥e ≈ 0. At pedestal ν* ≈ 1, ELM suppression is also lost when the beam torque is reduced, but the ω⊥e change is dominated by collapse of the toroidal rotation vT. The hypothesis predicts that it should be possible to obtain ELM suppression at reduced beam torque by also reducing the height and width of the ω*e profile. This prediction has been confirmed experimentally with RMP ELM suppression at 0 Nm of beam torque and plasma normalized pressure βN ≈ 0.7. This opens the possibility of accessing ELM suppression in low torque ITER baseline plasmas by establishing suppression at low beta and then increasing beta while relying on the strong RMP-island coupling to maintain suppression.

  18. Implementation of the Iterative Proportion Fitting Algorithm for Geostatistical Facies Modeling

    International Nuclear Information System (INIS)

    Li Yupeng; Deutsch, Clayton V.

    2012-01-01

    In geostatistics, most stochastic algorithms for the simulation of categorical variables such as facies or rock types require a conditional probability distribution. The multivariate probability distribution of all the grouped locations, including the unsampled location, permits calculation of the conditional probability directly based on its definition. In this article, the iterative proportion fitting (IPF) algorithm is implemented to infer this multivariate probability. Using the IPF algorithm, the multivariate probability is obtained by iterative modification of an initial estimated multivariate probability using lower order bivariate probabilities as constraints. The imposed bivariate marginal probabilities are inferred from profiles along drill holes or wells. In the IPF process, a sparse matrix is used to calculate the marginal probabilities from the multivariate probability, which makes the iterative fitting more tractable and practical. This algorithm can be extended to higher order marginal probability constraints as used in multiple point statistics. The theoretical framework is developed and illustrated with an estimation and simulation example.
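
    The bivariate-marginal fitting step described above can be made concrete. The sketch below is a minimal, generic IPF for a 2D probability table whose row and column sums are forced to match imposed marginals; names and example numbers are illustrative, and the article's sparse-matrix bookkeeping for higher-order constraints is not reproduced:

    ```python
    import numpy as np

    def ipf(table, row_marginals, col_marginals, tol=1e-8, max_iter=1000):
        """Iterative proportion fitting: alternately rescale rows and columns
        of an initial joint probability table until both sets of marginal
        sums match the imposed constraints."""
        p = table.astype(float).copy()
        for _ in range(max_iter):
            p *= (row_marginals / p.sum(axis=1))[:, None]   # fit row sums
            p *= (col_marginals / p.sum(axis=0))[None, :]   # fit column sums
            if (np.abs(p.sum(axis=1) - row_marginals).max() < tol and
                    np.abs(p.sum(axis=0) - col_marginals).max() < tol):
                break
        return p

    # Uniform initial estimate of a joint probability over 3 facies at two
    # locations, adjusted to honour marginals inferred from well profiles.
    initial = np.full((3, 3), 1.0 / 9.0)
    fitted = ipf(initial, np.array([0.5, 0.3, 0.2]), np.array([0.4, 0.4, 0.2]))
    ```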

  19. Three-dimensional modeling of plasma edge transport and divertor fluxes during application of resonant magnetic perturbations on ITER

    Czech Academy of Sciences Publication Activity Database

    Schmitz, O.; Becoulet, M.; Cahyna, Pavel; Evans, T.E.; Feng, Y.; Frerichs, H.; Loarte, A.; Pitts, R.A.; Reiser, D.; Fenstermacher, M.E.; Harting, D.; Kirschner, A.; Kukushkin, A.; Lunt, T.; Saibene, G.; Reiter, D.; Samm, U.; Wiesen, S.

    2016-01-01

    Vol. 56, No. 6 (2016), Article No. 066008. ISSN 0029-5515 Institutional support: RVO:61389021 Keywords: resonant magnetic perturbations * plasma edge physics * 3D modeling * neutral particle physics * ITER * divertor heat and particle loads * ELM control Subject RIV: BL - Plasma and Gas Discharge Physics Impact factor: 3.307, year: 2016 http://iopscience.iop.org/article/10.1088/0029-5515/56/6/066008/meta

  20. Three-dimensional modeling of plasma edge transport and divertor fluxes during application of resonant magnetic perturbations on ITER

    Czech Academy of Sciences Publication Activity Database

    Schmitz, O.; Becoulet, M.; Cahyna, Pavel; Evans, T.E.; Feng, Y.; Frerichs, H.; Loarte, A.; Pitts, R.A.; Reiser, D.; Fenstermacher, M.E.; Harting, D.; Kirschner, A.; Kukushkin, A.; Lunt, T.; Saibene, G.; Reiter, D.; Samm, U.; Wiesen, S.

    2016-01-01

    Vol. 56, No. 6 (2016), Article No. 066008. ISSN 0029-5515 Institutional support: RVO:61389021 Keywords: resonant magnetic perturbations * plasma edge physics * 3D modeling * neutral particle physics * ITER * divertor heat and particle loads * ELM control Subject RIV: BL - Plasma and Gas Discharge Physics OBOR OECD: Fluids and plasma physics (including surface physics) Impact factor: 3.307, year: 2016 http://iopscience.iop.org/article/10.1088/0029-5515/56/6/066008/meta

  1. Experimental simulation and numerical modeling of vapor shield formation and divertor material erosion for ITER typical plasma disruptions

    Energy Technology Data Exchange (ETDEWEB)

    Wuerz, H. [Kernforschungszentrum Karlsruhe, INR, Postfach 36 40, D-76021 Karlsruhe (Germany); Arkhipov, N.I. [Troitsk Institute for Innovation and Fusion Research, 142092 Troitsk (Russian Federation); Bakhtin, V.P. [Troitsk Institute for Innovation and Fusion Research, 142092 Troitsk (Russian Federation); Konkashbaev, I. [Troitsk Institute for Innovation and Fusion Research, 142092 Troitsk (Russian Federation); Landman, I. [Troitsk Institute for Innovation and Fusion Research, 142092 Troitsk (Russian Federation); Safronov, V.M. [Troitsk Institute for Innovation and Fusion Research, 142092 Troitsk (Russian Federation); Toporkov, D.A. [Troitsk Institute for Innovation and Fusion Research, 142092 Troitsk (Russian Federation); Zhitlukhin, A.M. [Troitsk Institute for Innovation and Fusion Research, 142092 Troitsk (Russian Federation)

    1995-04-01

    The high divertor heat load during a tokamak plasma disruption results in sudden evaporation of a thin layer of divertor plate material, which acts as a vapor shield and protects the target from further excessive evaporation. Formation and effectiveness of the vapor shield are theoretically modeled and experimentally analyzed at the 2MK-200 facility under conditions simulating the thermal quench phase of ITER tokamak plasma disruptions. ((orig.)).

  2. Application of Iterative Robust Model-based Optimal Experimental Design for the Calibration of Biocatalytic Models

    DEFF Research Database (Denmark)

    Van Daele, Timothy; Gernaey, Krist V.; Ringborg, Rolf Hoffmeyer

    2017-01-01

    The aim of model calibration is to estimate unique parameter values from available experimental data, here applied to a biocatalytic process. The traditional approach of first gathering data followed by performing a model calibration is inefficient, since the information gathered during experimen...

  3. Modelling ELM heat flux deposition on the ITER main chamber wall

    Czech Academy of Sciences Publication Activity Database

    Kočan, M.; Pitts, R.A.; Lisgo, S.W.; Loarte, A.; Gunn, J. P.; Fuchs, Vladimír

    2015-01-01

    Vol. 463, July (2015), pp. 709-713. ISSN 0022-3115. [International Conference on Plasma-Surface Interactions in Controlled Fusion Devices (PSI)/21./. Kanazawa, 26.05.2014-30.05.2014] Institutional support: RVO:61389021 Keywords: ELM * ITER Subject RIV: JF - Nuclear Energetics OBOR OECD: Nuclear related engineering Impact factor: 2.199, year: 2015

  4. ITER EDA newsletter. V. 8, no. 9

    International Nuclear Information System (INIS)

    1999-09-01

    This edition of the ITER EDA Newsletter contains a contribution by the ITER Director, R. Aymar, on the subject of developments in ITER Physics R and D, and a report on the completion of the ITER central solenoid model coil installation by H. Tsuji, Head of the Superconducting Magnet Laboratory at JAERI in Naka, Japan. Individual abstracts are prepared for each of the two articles

  5. Video compressed sensing using iterative self-similarity modeling and residual reconstruction

    Science.gov (United States)

    Kim, Yookyung; Oh, Han; Bilgin, Ali

    2013-04-01

    Compressed sensing (CS) has great potential for use in video data acquisition and storage because it makes it unnecessary to collect an enormous amount of data and to perform the computationally demanding compression process. We propose an effective CS algorithm for video that consists of two iterative stages. In the first stage, frames containing the dominant structure are estimated. These frames are obtained by thresholding the coefficients of similar blocks. In the second stage, refined residual frames are reconstructed from the original measurements and the measurements corresponding to the frames estimated in the first stage. These two stages are iterated until convergence. The proposed algorithm exhibits superior subjective image quality and significantly improves the peak-signal-to-noise ratio and the structural similarity index measure compared to other state-of-the-art CS algorithms.
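
    The two-stage structure (estimate the dominant structure by thresholding, then reconstruct a residual from the original measurements) belongs to the iterative-thresholding family. As a hedged stand-in for the paper's method, here is plain iterative hard thresholding on a generic sparse-recovery problem; simple coefficient thresholding replaces the similar-block grouping, and all names and sizes are illustrative:

    ```python
    import numpy as np

    def iht(A, y, sparsity, n_iter=200):
        """Iterative hard thresholding: gradient step toward the measurements,
        then keep only the `sparsity` largest coefficients."""
        x = np.zeros(A.shape[1])
        step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1 / squared spectral norm
        for _ in range(n_iter):
            x = x + step * (A.T @ (y - A @ x))
            keep = np.argsort(np.abs(x))[-sparsity:]
            pruned = np.zeros_like(x)
            pruned[keep] = x[keep]
            x = pruned
        return x

    rng = np.random.default_rng(1)
    n, m, k = 256, 100, 8
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_hat = iht(A, A @ x_true, k)                      # recovers x_true closely
    ```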

  6. Ion orbit modelling of ELM heat loads on ITER divertor vertical targets.

    Czech Academy of Sciences Publication Activity Database

    Gunn, J. P.; Carpentier-Chouchana, S.; Dejarnac, Renaud; Escourbiac, F.; Hirai, T.; Komm, Michael; Kukushkin, A.; Panayotis, S.; Pitts, R.A.

    2017-01-01

    Roč. 12, August (2017), s. 75-83 ISSN 2352-1791. [International Conference on Plasma Surface Interactions 2016, PSI2016 /22./. Roma, 30.05.2016-03.06.2016] Institutional support: RVO:61389021 Keywords : ITER * Divertor * ELM heat loads Subject RIV: BL - Plasma and Gas Discharge Physics OBOR OECD: Fluids and plasma physics (including surface physics) http://www.sciencedirect.com/science/article/pii/S2352179116302745

  7. Iterative Usage of Fixed and Random Effect Models for Powerful and Efficient Genome-Wide Association Studies

    Science.gov (United States)

    Liu, Xiaolei; Huang, Meng; Fan, Bin; Buckler, Edward S.; Zhang, Zhiwu

    2016-01-01

    False positives in a Genome-Wide Association Study (GWAS) can be effectively controlled by a fixed effect and random effect Mixed Linear Model (MLM) that incorporates population structure and kinship among individuals to adjust association tests on markers; however, the adjustment also compromises true positives. The modified MLM method, Multiple Loci Linear Mixed Model (MLMM), incorporates multiple markers simultaneously as covariates in a stepwise MLM to partially remove the confounding between testing markers and kinship. To completely eliminate the confounding, we divided MLMM into two parts, a Fixed Effect Model (FEM) and a Random Effect Model (REM), and use them iteratively. FEM contains testing markers, one at a time, and multiple associated markers as covariates to control false positives. To avoid the model over-fitting problem in FEM, the associated markers are estimated in REM by using them to define kinship. The P values of testing markers and the associated markers are unified at each iteration. We named the new method Fixed and random model Circulating Probability Unification (FarmCPU). Both real and simulated data analyses demonstrated that FarmCPU improves statistical power compared to current methods. Additional benefits include an efficient computing time that is linear in both the number of individuals and the number of markers. Now, a dataset with half a million individuals and half a million markers can be analyzed within three days. PMID:26828793
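
    The FEM/REM alternation can be caricatured in a few lines, with ordinary least squares standing in for the mixed-model machinery: scan all markers using the current pseudo-QTNs as covariates, then refresh the pseudo-QTN set from the top hits and repeat until the set stabilizes. Everything below (names, the OLS t-test, the selection rule, the synthetic data) is an illustrative simplification, not FarmCPU itself:

    ```python
    import numpy as np
    from scipy import stats

    def marker_pvalues(X, y, qtn_idx):
        """FEM-style scan: test each marker, one at a time, with the current
        pseudo-QTNs (other than itself) as fixed-effect covariates."""
        n, p = X.shape
        pvals = np.empty(p)
        for j in range(p):
            covs = [X[:, q] for q in qtn_idx if q != j]
            M = np.column_stack([np.ones(n)] + covs + [X[:, j]])
            beta = np.linalg.lstsq(M, y, rcond=None)[0]
            resid = y - M @ beta
            dof = n - M.shape[1]
            sigma2 = resid @ resid / dof
            cov = sigma2 * np.linalg.inv(M.T @ M)
            t_stat = beta[-1] / np.sqrt(cov[-1, -1])
            pvals[j] = 2.0 * stats.t.sf(abs(t_stat), dof)
        return pvals

    def farmcpu_like(X, y, n_qtn=2, max_rounds=10):
        """Iterate: scan markers, then take the top hits as the next round's
        pseudo-QTNs (a crude stand-in for the REM/kinship step)."""
        qtn = []
        for _ in range(max_rounds):
            pvals = marker_pvalues(X, y, qtn)
            new_qtn = sorted(np.argsort(pvals)[:n_qtn].tolist())
            if new_qtn == qtn:
                break
            qtn = new_qtn
        return qtn, pvals

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 50))             # 200 individuals, 50 markers
    y = X[:, 3] - 0.8 * X[:, 17] + 0.3 * rng.standard_normal(200)
    qtn, pvals = farmcpu_like(X, y)                # should recover markers 3, 17
    ```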

  8. MODELS OF LIVE MIGRATION WITH ITERATIVE APPROACH AND MOVE OF VIRTUAL MACHINES

    Directory of Open Access Journals (Sweden)

    S. M. Aleksankov

    2015-11-01

    Full Text Available Subject of Research. The processes of live migration without shared storage with the pre-copy approach and of move migration are researched. Migration of virtual machines is an important capability of virtualization technology. It enables applications to move transparently with their runtime environments between physical machines. Live migration has become a notable technology for efficient load balancing and for optimizing the deployment of virtual machines to physical hosts in data centres. Before the advent of live migration, only network migration (the so-called «Move») had been used, which entails stopping the virtual machine execution while copying to another physical server, and, consequently, unavailability of the service. Method. Algorithms for live migration without shared storage with the pre-copy approach and for move migration of virtual machines are reviewed from the perspective of migration time and unavailability of services during migration of virtual machines. Main Results. Analytical models are proposed that predict the migration time of virtual machines and the unavailability of services when migrating with such technologies as live migration with the pre-copy approach without shared storage and move migration. The latest works on assessing service unavailability time and migration time using live migration without shared storage describe experimental results that support general conclusions about how these times change, but do not allow their values to be predicted. Practical Significance. The proposed models can be used for predicting the migration time and the time of unavailability of services, for example, when implementing preventive and emergency works on the physical nodes in data centres.
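
    The pre-copy timing model lends itself to a short calculation: each round retransmits the memory dirtied during the previous round, so the volume shrinks geometrically whenever the dirty rate is below the link bandwidth, and the final stop-and-copy round sets the downtime. The sketch below is a generic version of that model with illustrative parameter names and values, not the paper's specific formulation:

    ```python
    def precopy_migration(mem_bytes, bandwidth, dirty_rate,
                          stop_threshold, max_rounds=30):
        """Estimate total migration time and downtime for pre-copy live
        migration: each round retransmits the memory dirtied during the
        previous round until the remainder is small enough to stop-and-copy."""
        assert dirty_rate < bandwidth, "pre-copy cannot converge otherwise"
        remaining = mem_bytes
        total_time = 0.0
        for _ in range(max_rounds):
            round_time = remaining / bandwidth
            total_time += round_time
            remaining = dirty_rate * round_time   # pages dirtied during the round
            if remaining <= stop_threshold:
                break
        downtime = remaining / bandwidth          # final stop-and-copy phase
        return total_time + downtime, downtime

    # 4 GiB VM, 1 GiB/s link, 100 MiB/s dirty rate, stop when < 64 MiB remain.
    GiB = 2 ** 30
    total, down = precopy_migration(4 * GiB, 1 * GiB, 100 * 2 ** 20, 64 * 2 ** 20)
    ```

    In the same notation, a «Move» migration simply has total time mem_bytes / bandwidth, all of it downtime, which is why pre-copy wins whenever the dirty rate is modest.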

  9. Analysis of ITER NbTi and Nb3Sn CICCs experimental minimum quench energy with JackPot, MCM and THEA models

    Science.gov (United States)

    Bagni, T.; Duchateau, J. L.; Breschi, M.; Devred, A.; Nijhuis, A.

    2017-09-01

    Cable-in-conduit conductors (CICCs) for ITER magnets are subjected to fast changing magnetic fields during the plasma-operating scenario. In order to anticipate the limitations of conductors under the foreseen operating conditions, it is essential to have a better understanding of the stability margin of magnets. In the last decade ITER has launched a campaign for characterization of several types of NbTi and Nb3Sn CICCs comprising quench tests with a single sine-wave fast magnetic field pulse of relatively small amplitude. The stability tests, performed in the SULTAN facility, were reproduced and analyzed using two codes: JackPot-AC/DC, an electromagnetic-thermal numerical model for CICCs developed at the University of Twente (van Lanen and Nijhuis 2010 Cryogenics 50 139-148), and the multi-constant-model (MCM) (Turck and Zani 2010 Cryogenics 50 443-9), an analytical model for CICC coupling losses. The outputs of both codes were combined with thermal, hydraulic and electric analysis of superconducting cables to predict the minimum quench energy (MQE) (Bottura et al 2000 Cryogenics 40 617-26). The experimental AC loss results were used to calibrate the JackPot and MCM models and to reproduce the energy deposited in the cable during an MQE test. The agreement between experiments and models confirms a good comprehension of the various CICC thermal and electromagnetic phenomena. The differences between the analytical MCM and numerical JackPot approaches are discussed. The results provide a good basis for further investigation of CICC stability under plasma scenario conditions using magnetic field pulses with lower ramp rate and higher amplitude.

  10. Iterative perceptual learning for social behavior synthesis

    NARCIS (Netherlands)

    de Kok, I.A.; Poppe, Ronald Walter; Heylen, Dirk K.J.

    We introduce Iterative Perceptual Learning (IPL), a novel approach to learn computational models for social behavior synthesis from corpora of human–human interactions. IPL combines perceptual evaluation with iterative model refinement. Human observers rate the appropriateness of synthesized

  11. Iterative Perceptual Learning for Social Behavior Synthesis

    NARCIS (Netherlands)

    de Kok, I.A.; Poppe, Ronald Walter; Heylen, Dirk K.J.

    We introduce Iterative Perceptual Learning (IPL), a novel approach for learning computational models for social behavior synthesis from corpora of human-human interactions. The IPL approach combines perceptual evaluation with iterative model refinement. Human observers rate the appropriateness of

  12. Development of estrogen receptor beta binding prediction model using large sets of chemicals.

    Science.gov (United States)

    Sakkiah, Sugunadevi; Selvaraj, Chandrabose; Gong, Ping; Zhang, Chaoyang; Tong, Weida; Hong, Huixiao

    2017-11-03

    We developed an ERβ binding prediction model to facilitate identification of chemicals that specifically bind ERβ or ERα, to be used together with our previously developed ERα binding model. Decision Forest was used to train the ERβ binding prediction model based on a large set of compounds obtained from EADB. Model performance was estimated through 1000 iterations of 5-fold cross validation. Prediction confidence was analyzed using predictions from the cross validations. Informative chemical features for ERβ binding were identified through analysis of the frequency data of the chemical descriptors used in the models in the 5-fold cross validations. 1000 permutations were conducted to assess chance correlation. The average accuracy of the 5-fold cross validations was 93.14% with a standard deviation of 0.64%. Prediction confidence analysis indicated that the higher the prediction confidence, the more accurate the predictions. Permutation testing revealed that the prediction model is unlikely to have been generated by chance. Eighteen informative descriptors were identified as important to ERβ binding prediction. Application of the prediction model to data from the ToxCast project yielded a very high sensitivity of 90-92%. Our results demonstrated that ERβ binding of chemicals can be accurately predicted using the developed model. Coupled with our previously developed ERα prediction model, this model can be expected to facilitate drug development through identification of chemicals that specifically bind ERβ or ERα.
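
    The validation protocol (repeated 5-fold cross validation plus permutation-style checks) is straightforward to reproduce generically. The sketch below uses scikit-learn with a random forest as a stand-in for Decision Forest and synthetic data in place of EADB descriptors; the repeat count is reduced from 1000 to 100 purely to keep the example fast:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

    # Synthetic stand-ins: 200 compounds, 18 descriptors, binary binding label.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 18))
    y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.standard_normal(200) > 0).astype(int)

    # Repeated stratified 5-fold cross validation.
    cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=100, random_state=0)
    scores = cross_val_score(RandomForestClassifier(n_estimators=100), X, y, cv=cv)
    print(f"accuracy: {scores.mean():.4f} +/- {scores.std():.4f}")
    ```

    A permutation test along the lines of the abstract would rerun the same loop with `y` shuffled and compare the resulting accuracy distribution to the unshuffled one.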

  13. A Riccati Based Homogeneous and Self-Dual Interior-Point Method for Linear Economic Model Predictive Control

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Frison, Gianluca; Edlund, Kristian

    2013-01-01

    In this paper, we develop an efficient interior-point method (IPM) for the linear programs arising in economic model predictive control of linear systems. The novelty of our algorithm is that it combines a homogeneous and self-dual model, and a specialized Riccati iteration procedure. We test...
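
    The Riccati iteration such solvers exploit is the standard backward recursion of a finite-horizon linear-quadratic problem; it is what allows each interior-point step to be computed stage by stage in O(N) work rather than by factorizing one large KKT matrix. The sketch below shows only that generic recursion under illustrative matrices, not the paper's homogeneous self-dual variant:

    ```python
    import numpy as np

    def riccati_recursion(A, B, Q, R, N):
        """Backward Riccati recursion for a finite-horizon LQ problem:
        returns the terminal-to-initial cost-to-go matrix P and the
        stage feedback gains."""
        P = Q.copy()
        gains = []
        for _ in range(N):
            S = R + B.T @ P @ B
            K = np.linalg.solve(S, B.T @ P @ A)   # feedback gain at this stage
            P = Q + A.T @ P @ (A - B @ K)
            gains.append(K)
        return P, gains[::-1]

    # Double integrator with unit state and input weights, horizon 20.
    A = np.array([[1.0, 1.0], [0.0, 1.0]])
    B = np.array([[0.0], [1.0]])
    P, K = riccati_recursion(A, B, np.eye(2), np.array([[1.0]]), N=20)
    ```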

  14. Iowa calibration of MEPDG performance prediction models.

    Science.gov (United States)

    2013-06-01

    This study aims to improve the accuracy of AASHTO Mechanistic-Empirical Pavement Design Guide (MEPDG) pavement performance predictions for Iowa pavement systems through local calibration of MEPDG prediction models. A total of 130 representative p...

  15. An iterative and integrative approach to modeling the morphological alterations in backwater condition: A case study of Darby Creek, PA

    Science.gov (United States)

    Hosseiny, S. M. H.; Smith, V.

    2017-12-01

    Darby Creek is an urbanized, highly flood-prone watershed in Metro-Philadelphia, PA. The floodplain and the main channel are composed of alluvial sediment and are subject to frequent geomorphological changes. The lower part of the channel is within the coastal zone, subjecting the flow to a backwater condition. This study applies a multi-disciplinary approach to modeling the morphological alteration of the creek and floodplain in the presence of the backwater, using an iteration and integration of combined models. To do this, FaSTMECH (a two-dimensional quasi-unsteady flow solver) in the International River Interface Cooperative software (iRIC) is coupled with a 1-dimensional backwater model to calculate hydraulic characteristics of the flow over a digital elevation model of the channel and floodplain. One USGS gage at the upstream end and two NOAA gages at the downstream end are used for model validation. The output of the model is afterward used to calculate sediment transport and morphological changes over the domain through time using an iterative process. The updated elevation data are incorporated into the hydraulic model again to calculate the velocity field. The calculations continue reciprocally over discrete discharges of the hydrograph until the flood attenuates and the next flood event occurs. The results from this study demonstrate how to incorporate bathymetry and flow data to model floodplain evolution in the backwater through time, and provide a means to better understand the dynamics of the floodplain. This work is not only applicable to river management, but also provides insight to the geoscience community concerning the development of landscapes in the backwater.
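
    The 1-dimensional backwater component can be illustrated with the gradually-varied-flow equation dh/dx = (S0 − Sf)/(1 − Fr²) for a wide rectangular channel, integrated upstream from a known downstream (tidal) stage. The sketch below is a generic textbook formulation with illustrative parameters, not the study's coupled FaSTMECH/iRIC setup:

    ```python
    import numpy as np

    def backwater_profile(q, n_man, s0, h_down, L, dx=10.0):
        """Gradually-varied-flow (backwater) profile for a wide rectangular
        channel: dh/dx = (S0 - Sf) / (1 - Fr^2), with Manning friction
        Sf = n^2 q^2 / h^(10/3), integrated upstream from a known
        downstream stage h_down."""
        g = 9.81
        x_up = np.arange(0.0, L + dx, dx)      # distance upstream of the mouth
        h = np.empty_like(x_up)
        h[0] = h_down
        for i in range(len(x_up) - 1):
            sf = n_man ** 2 * q ** 2 / h[i] ** (10.0 / 3.0)
            fr2 = q ** 2 / (g * h[i] ** 3)
            dhdx = (s0 - sf) / (1.0 - fr2)     # downstream-direction gradient
            h[i + 1] = h[i] - dhdx * dx        # stepping upstream flips the sign
        return x_up, h

    # 1 m^2/s unit discharge, Manning n = 0.03, bed slope 5e-4, 2 m tidal stage.
    x_up, depth = backwater_profile(q=1.0, n_man=0.03, s0=5e-4, h_down=2.0, L=5000.0)
    ```

    For subcritical flow the computed depth relaxes from the imposed tidal stage toward normal depth moving upstream, which is the backwater effect the study's coupling has to capture at each discharge of the hydrograph.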

  16. Effects of error covariance structure on estimation of model averaging weights and predictive performance

    Science.gov (United States)

    Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.

    2013-01-01

    obtained from the iterative two-stage method also improved predictive performance of the individual models and model averaging in both synthetic and experimental studies.

  17. Model complexity control for hydrologic prediction

    NARCIS (Netherlands)

    Schoups, G.; Van de Giesen, N.C.; Savenije, H.H.G.

    2008-01-01

    A common concern in hydrologic modeling is overparameterization of complex models given limited and noisy data. This leads to problems of parameter nonuniqueness and equifinality, which may negatively affect prediction uncertainties. A systematic way of controlling model complexity is therefore

  18. An Iterative Ensemble Kalman Filter with One-Step-Ahead Smoothing for State-Parameters Estimation of Contaminant Transport Models

    KAUST Repository

    Gharamti, M. E.

    2015-05-11

    The ensemble Kalman filter (EnKF) is a popular method for state-parameters estimation of subsurface flow and transport models based on field measurements. The common filtering procedure is to directly update the state and parameters as one single vector, which is known as the Joint-EnKF. In this study, we follow the one-step-ahead smoothing formulation of the filtering problem, to derive a new joint-based EnKF which involves a smoothing step of the state between two successive analysis steps. The new state-parameters estimation scheme is derived in a consistent Bayesian filtering framework and results in separate update steps for the state and the parameters. This new algorithm bears strong resemblance to the Dual-EnKF, but unlike the latter, which first propagates the state with the model then updates it with the new observation, the proposed scheme starts by an update step, followed by a model integration step. We exploit this new formulation of the joint filtering problem and propose an efficient model-integration-free iterative procedure on the update step of the parameters only for further improved performances. Numerical experiments are conducted with a two-dimensional synthetic subsurface transport model simulating the migration of a contaminant plume in a heterogeneous aquifer domain. Contaminant concentration data are assimilated to estimate both the contaminant state and the hydraulic conductivity field. Assimilation runs are performed under imperfect modeling conditions and various observational scenarios. Simulation results suggest that the proposed scheme efficiently recovers both the contaminant state and the aquifer conductivity, providing more accurate estimates than the standard Joint and Dual EnKFs in all tested scenarios. Iterating on the update step of the new scheme further enhances the proposed filter's behavior. In terms of computational cost, the new Joint-EnKF is almost equivalent to that of the Dual-EnKF, but requires twice more model
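
    The analysis step that all of these EnKF variants build on can be sketched compactly: perturb the observations, form ensemble covariances, and apply the Kalman gain to the joint state-parameter vectors. Below is a generic stochastic EnKF update with illustrative dimensions and names; it is not the one-step-ahead smoothing scheme itself, only the shared building block:

    ```python
    import numpy as np

    def enkf_update(ensemble, obs, obs_operator, obs_err_std, rng):
        """Stochastic EnKF analysis step on a joint state-parameter ensemble.
        ensemble: (n_members, n_vars); obs: (n_obs,); obs_operator: H matrix."""
        n_members = ensemble.shape[0]
        predicted = ensemble @ obs_operator.T            # H x for each member
        perturbed = obs + obs_err_std * rng.standard_normal(
            (n_members, obs.size))
        A = ensemble - ensemble.mean(axis=0)             # state anomalies
        Y = predicted - predicted.mean(axis=0)           # observation anomalies
        cov_xy = A.T @ Y / (n_members - 1)
        cov_yy = Y.T @ Y / (n_members - 1) + obs_err_std ** 2 * np.eye(obs.size)
        gain = cov_xy @ np.linalg.inv(cov_yy)            # ensemble Kalman gain
        return ensemble + (perturbed - predicted) @ gain.T

    rng = np.random.default_rng(0)
    ens = rng.standard_normal((100, 5))       # e.g. 3 state + 2 parameter vars
    H = np.eye(2, 5)                          # observe the first two variables
    updated = enkf_update(ens, np.array([0.5, -0.2]), H, 0.1, rng)
    ```

    The paper's scheme reorders this update relative to the model integration and iterates the parameter part of it without extra model runs; the linear algebra of the step is unchanged.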

  19. Moving mesh finite element simulation for phase-field modeling of brittle fracture and convergence of Newton's iteration

    Science.gov (United States)

    Zhang, Fei; Huang, Weizhang; Li, Xianping; Zhang, Shicheng

    2018-03-01

    A moving mesh finite element method is studied for the numerical solution of a phase-field model for brittle fracture. The moving mesh partial differential equation approach is employed to dynamically track crack propagation. Meanwhile, the decomposition of the strain tensor into tensile and compressive components is essential for the success of the phase-field modeling of brittle fracture but results in a non-smooth elastic energy and stronger nonlinearity in the governing equation. This makes the governing equation much more difficult to solve and, in particular, Newton's iteration often fails to converge. Three regularization methods are proposed to smooth out the decomposition of the strain tensor. Numerical examples of fracture propagation under quasi-static load demonstrate that all of the methods can effectively improve the convergence of Newton's iteration for relatively small values of the regularization parameter but without compromising the accuracy of the numerical solution. They also show that the moving mesh finite element method is able to adaptively concentrate the mesh elements around propagating cracks and handle multiple and complex crack systems.
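
    The regularization idea (smooth the tensile/compressive split so the residual has a continuous derivative for Newton's method) can be shown on a scalar toy problem: |u| is replaced by sqrt(u² + ε²) inside the positive-part operator. The problem, parameters, and names below are illustrative only, not any of the paper's three regularization methods in full:

    ```python
    import numpy as np

    def pos_part(u, eps=0.0):
        """Tensile part <u>+ of a scalar strain; eps > 0 smooths the kink at
        u = 0 via |u| ~ sqrt(u^2 + eps^2), giving Newton a continuous
        derivative."""
        return 0.5 * (u + np.sqrt(u * u + eps * eps))

    def newton(residual, jacobian, u0, tol=1e-10, max_iter=50):
        u = u0
        for k in range(max_iter):
            r = residual(u)
            if abs(r) < tol:
                return u, k
            u -= r / jacobian(u)
        return u, max_iter

    # Toy 1D problem: stiffness kp in tension, km in compression, load f.
    kp, km, f, eps = 1.0, 10.0, -0.3, 1e-3
    res = lambda u: kp * pos_part(u, eps) + km * (u - pos_part(u, eps)) - f
    dpos = lambda u: 0.5 * (1.0 + u / np.sqrt(u * u + eps * eps))
    jac = lambda u: kp * dpos(u) + km * (1.0 - dpos(u))
    u_star, iters = newton(res, jac, u0=1.0)   # converges across the kink
    ```

    Shrinking `eps` recovers the original non-smooth split, at the cost of a steeper derivative near the kink, which is the accuracy/convergence trade-off the paper examines numerically.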

  20. ITER toroidal field model coil (TFMC). Test and analysis summary report (testing handbook) chapter 3 TOSKA FACILITY

    International Nuclear Information System (INIS)

    Ulbricht, A.

    2001-05-01

    In the frame of a contract between the ITER (International Thermonuclear Experimental Reactor) Director and the European Home Team Director, the extension of the TOSKA facility of the Forschungszentrum Karlsruhe as a test bed for the ITER toroidal field model coil (TFMC), one of the 7 large research and development projects of the ITER EDA (Engineering Design Activity), was concluded. The report describes the work and development, performed together with industry, to extend the existing components and add new ones. In this frame a new 2 kW refrigerator was added to the TOSKA facility, including the cold lines to the helium dewar in the TOSKA experimental area. The measuring and control system as well as the data acquisition were renewed according to the state of the art. Two power supplies (30 kA, 50 kA) were switched in parallel across an Al bus bar system and combined with an 80 kA dump circuit. For the test of the TFMC in the background field of the EURATOM LCT coil, a new 20 kA power supply was taken into operation with the existing 20 kA discharge circuit. Two forced-flow-cooled 80 kA current leads for the TFMC were developed. The total lifting capacity for loads in the TOSKA building was increased to 130 t by a newly ordered 80 t crane with a suitable cross head (125 t lifting capacity + 5 t net mass) for assembly and installation of the test arrangement. Numerous pre-tests and much development and adaptation work were required to make the components suitable for application. The 1.8 K test of the EURATOM LCT coil and the test of the W 7-X prototype coil count among these as overall pre-tests. (orig.)

  1. Use of the iterative solution method for coupled finite element and boundary element modeling

    International Nuclear Information System (INIS)

    Koteras, J.R.

    1993-07-01

    Tunnels buried deep within the earth constitute an important class of geomechanics problems. Two numerical techniques used for the analysis of geomechanics problems, the finite element method and the boundary element method, have complementary characteristics for applications to problems of this type. The usefulness of combining these two methods as a geomechanics analysis tool has been recognized for some time, and a number of coupling techniques have been proposed. However, not all of them lend themselves to efficient computational implementations for large-scale problems. This report examines a coupling technique that can form the basis for an efficient analysis tool for large-scale geomechanics problems through the use of an iterative equation solver
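
    One common way to realize such a coupling is a block (staggered) iteration in which each solver repeatedly handles its own subsystem using the other's latest interface values. The sketch below shows the idea on a generic 2×2 block linear system with dense placeholders standing in for the FE and BE blocks; it illustrates only the iteration pattern, not the report's solver:

    ```python
    import numpy as np

    def block_gauss_seidel(A11, A12, A21, A22, b1, b2, tol=1e-10, max_iter=200):
        """Block Gauss-Seidel iteration for the coupled system
        [A11 A12; A21 A22][x1; x2] = [b1; b2]: each sweep solves one block
        using the other's most recent solution."""
        x1 = np.zeros(len(b1))
        x2 = np.zeros(len(b2))
        for k in range(max_iter):
            x1_new = np.linalg.solve(A11, b1 - A12 @ x2)      # "FEM" block
            x2_new = np.linalg.solve(A22, b2 - A21 @ x1_new)  # "BEM" block
            if max(np.abs(x1_new - x1).max(), np.abs(x2_new - x2).max()) < tol:
                return x1_new, x2_new, k
            x1, x2 = x1_new, x2_new
        return x1, x2, max_iter

    # Diagonally dominant blocks with weak coupling, so the sweep converges.
    rng = np.random.default_rng(2)
    A11 = np.eye(4) * 4 + rng.random((4, 4)) * 0.1
    A22 = np.eye(3) * 4 + rng.random((3, 3)) * 0.1
    A12 = rng.random((4, 3)) * 0.1
    A21 = rng.random((3, 4)) * 0.1
    x1, x2, iters = block_gauss_seidel(A11, A12, A21, A22, np.ones(4), np.ones(3))
    ```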

  2. Application of Homotopy Perturbation and Variational Iteration Methods to SIR Epidemic Model

    DEFF Research Database (Denmark)

    Ghotbi, Abdoul R.; Barari, Amin; Omidvar, M.

    2011-01-01

    Children born are susceptible to various diseases such as mumps, chicken pox etc. These diseases are the most common form of infectious diseases. In recent years, scientists have been trying to devise strategies to fight against these diseases. Since vaccination is considered to be the most....... In this article two methods namely Homotopy Perturbation Method (HPM) and Variational Iteration Method (VIM) are employed to compute an approximation to the solution of non-linear system of differential equations governing the problem. The obtained results are compared with those obtained by Adomian Decomposition...... Method (ADM). This research reveals that although the obtained results are the same, HPM and VIM are much more robust, more convenient and efficient in comparison to ADM....
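
    As a rough illustration of the VIM ingredient, the sympy sketch below applies the variational iteration correction functional, with Lagrange multiplier λ = -1, to the classical SIR system dS/dt = -βSI, dI/dt = βSI - γI, dR/dt = γI. The symbol names and the number of iterations are illustrative choices, not taken from the article.

```python
import sympy as sp

t, tau = sp.symbols('t tau')
beta, gamma = sp.symbols('beta gamma', positive=True)
S0, I0, R0 = sp.symbols('S0 I0 R0')

# Start from the initial conditions as zeroth approximations
S, I, R = S0, I0, R0

# VIM correction functional with lambda = -1:
# x_{n+1}(t) = x_n(t) - Integral_0^t [ x_n'(tau) - f(x_n(tau)) ] dtau
for _ in range(2):
    Ss, Is, Rs = (expr.subs(t, tau) for expr in (S, I, R))
    S_new = S - sp.integrate(sp.diff(Ss, tau) + beta * Ss * Is, (tau, 0, t))
    I_new = I - sp.integrate(sp.diff(Is, tau) - beta * Ss * Is + gamma * Is, (tau, 0, t))
    R_new = R - sp.integrate(sp.diff(Rs, tau) - gamma * Is, (tau, 0, t))
    S, I, R = (sp.expand(e) for e in (S_new, I_new, R_new))

print(S)  # polynomial-in-t series approximation of S(t)
```

    Each sweep adds one more power of t to the series, so a handful of iterations already gives a useful short-time approximation.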

  3. Facilities for technology testing of ITER divertor concepts, models, and prototypes in a plasma environment

    International Nuclear Information System (INIS)

    Cohen, S.A.

    1991-12-01

    The exhaust of power and fusion-reaction products from the ITER plasma is a critical physics and technology issue from performance, safety, and reliability perspectives. Because of inadequate pulse length, fluence, flux, scrape-off layer plasma temperature and density, and other parameters, the present generation of tokamaks, linear plasma devices, and energetic beam facilities are unable to perform adequate technology testing of divertor components, though they are essential contributors to many physics issues such as edge-plasma transport and disruption effects and control. This Technical Requirements Document presents a description of the capabilities and parameters divertor test facilities should have to perform accelerated life testing on predominantly technological divertor issues such as basic divertor concepts, heat load limits, thermal fatigue, tritium inventory, and erosion/redeposition. The cost effectiveness of such divertor technology testing is also discussed.

  4. Staying Power of Churn Prediction Models

    NARCIS (Netherlands)

    Risselada, Hans; Verhoef, Peter C.; Bijmolt, Tammo H. A.

    In this paper, we study the staying power of various churn prediction models. Staying power is defined as the predictive performance of a model in a number of periods after the estimation period. We examine two methods, logit models and classification trees, both with and without applying a bagging

  5. ITER magnets

    International Nuclear Information System (INIS)

    Bottura, L.; Hasegawa, M.; Heim, J.

    1991-01-01

    As part of the summary of the Conceptual Design Activities (CDA) for the International Thermonuclear Experimental Reactor (ITER), this document describes the magnet systems for ITER, including the Toroidal Field (TF) and Poloidal Field (PF) Magnets, the Structural Support System and Cryostat, the Cryogenic System, the TF and PF Power and Protection Systems, and Coil Services and Diagnostics. After an Introduction and Summary, the document discusses the (i) Design Basis, including General Requirements, Design Criteria, Design Philosophy, and the Database (among others, engineering data on key materials and components), and (ii) the Subsystem Design and Analysis, including Conductor Design, TF Coil and Structure Design, TF Structural Analysis, PF Coil and Structure Design, PF Structural Performance, Fatigue Assessment of Structures, AC Loss Performance, Thermohydraulic Performance, Stability, Cryogenic System, Power Supply Systems, and Coil Services. All magnets are superconducting (based on Nb3Sn), except the Active Control Coils inside the Vacuum Vessel. The fault analysis has been taken to a level consistent with the design definition, showing that the present design meets the requirement for passive safety or can be made to meet it with only minor modifications. A more detailed assessment in this regard is needed but must await further development of the design. In conclusion, the magnet design concepts presently proposed can be developed into an engineering design. Refs, figs and tabs

  6. Automatic iterative segmentation of multiple sclerosis lesions using Student's t mixture models and probabilistic anatomical atlases in FLAIR images.

    Science.gov (United States)

    Freire, Paulo G L; Ferrari, Ricardo J

    2016-06-01

    Multiple sclerosis (MS) is a demyelinating autoimmune disease that attacks the central nervous system (CNS) and affects more than 2 million people worldwide. The segmentation of MS lesions in magnetic resonance imaging (MRI) is a very important task to assess how a patient is responding to treatment and how the disease is progressing. Computational approaches have been proposed over the years to segment MS lesions and reduce the amount of time spent on manual delineation and inter- and intra-rater variability and bias. However, fully automatic segmentation of MS lesions still remains an open problem. In this work, we propose an iterative approach using Student's t mixture models and probabilistic anatomical atlases to automatically segment MS lesions in Fluid Attenuated Inversion Recovery (FLAIR) images. Our technique resembles a refinement approach by iteratively segmenting brain tissues into smaller classes until MS lesions are grouped as the most hyperintense one. To validate our technique we used 21 clinical images from the 2015 Longitudinal Multiple Sclerosis Lesion Segmentation Challenge dataset. Evaluation using the Dice Similarity Coefficient (DSC), True Positive Ratio (TPR), False Positive Ratio (FPR), Volume Difference (VD) and Pearson's r coefficient shows that our technique has good spatial and volumetric agreement with raters' manual delineations. A comparison between our proposal and the state of the art also shows that our technique is comparable with, and in some cases better than, existing approaches, thus being a viable alternative for automatic MS lesion segmentation in MRI. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Comparison of Prediction-Error-Modelling Criteria

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    Single and multi-step prediction-error-methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which is a r...

  8. A Homogeneous and Self-Dual Interior-Point Linear Programming Algorithm for Economic Model Predictive Control

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Frison, Gianluca; Skajaa, Anders

    2015-01-01

    We develop an efficient homogeneous and self-dual interior-point method (IPM) for the linear programs arising in economic model predictive control of constrained linear systems with linear objective functions. The algorithm is based on a Riccati iteration procedure, which is adapted to the linear...... is significantly faster than several state-of-the-art IPMs based on sparse linear algebra, and 2) warm-start reduces the average number of iterations by 35-40%.
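
    For intuition about the Riccati ingredient: the KKT systems solved at each IPM iteration inherit the stage-wise structure of the MPC problem, so a backward Riccati sweep factorizes them in O(N) stage operations instead of a generic sparse solve. The sketch below implements such a sweep for a generic linear-quadratic subproblem; the cost form and variable names are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def riccati_sweep(A, B, Q, R, q, r, x0, N):
    """Solve min sum_k 0.5 x_k'Q x_k + q_k'x_k + 0.5 u_k'R u_k + r_k'u_k
       s.t. x_{k+1} = A x_k + B u_k, by backward Riccati recursion.
    q is a list of N+1 linear state-cost terms, r a list of N input terms."""
    P, p = Q.copy(), q[N].copy()                    # terminal cost-to-go
    K, k = [None] * N, [None] * N
    for i in range(N - 1, -1, -1):                  # backward sweep: O(N) stages
        H = R + B.T @ P @ B
        K[i] = -np.linalg.solve(H, B.T @ P @ A)     # feedback gain
        k[i] = -np.linalg.solve(H, B.T @ p + r[i])  # feedforward term
        p = q[i] + A.T @ p + A.T @ P @ B @ k[i]
        P = Q + A.T @ P @ (A + B @ K[i])
    xs, us = [x0], []
    for i in range(N):                              # forward rollout of the solution
        us.append(K[i] @ xs[-1] + k[i])
        xs.append(A @ xs[-1] + B @ us[-1])
    return xs, us

# Tiny demo: regulate a double integrator over N = 20 stages
A = np.array([[1.0, 0.1], [0.0, 1.0]]); B = np.array([[0.0], [0.1]])
Q = np.eye(2); R = 0.1 * np.eye(1)
N = 20
xs, us = riccati_sweep(A, B, Q, R, [np.zeros(2)] * (N + 1), [np.zeros(1)] * N,
                       np.array([1.0, 0.0]), N)
```

    Roughly speaking, the barrier and centering terms of an IPM reweight the stage costs at every interior-point iteration, so the same O(N) sweep can be reused each time.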

  9. Higher order explicit solutions for nonlinear dynamic model of column buckling using variational approach and variational iteration algorithm-II

    Energy Technology Data Exchange (ETDEWEB)

    Bagheri, Saman; Nikkar, Ali [University of Tabriz, Tabriz (Iran, Islamic Republic of)

    2014-11-15

    This paper deals with the determination of approximate solutions for a model of column buckling using two efficient and powerful methods called He's variational approach and variational iteration algorithm-II. These methods are used to find analytical approximate solution of nonlinear dynamic equation of a model for the column buckling. First and second order approximate solutions of the equation of the system are achieved. To validate the solutions, the analytical results have been compared with those resulted from Runge-Kutta 4th order method. A good agreement of the approximate frequencies and periodic solutions with the numerical results and the exact solution shows that the present methods can be easily extended to other nonlinear oscillation problems in engineering. The accuracy and convenience of the proposed methods are also revealed in comparisons with the other solution techniques.

  10. Calibration of PMIS pavement performance prediction models.

    Science.gov (United States)

    2012-02-01

    Improve the accuracy of TxDOT's existing pavement performance prediction models through calibrating these models using actual field data obtained from the Pavement Management Information System (PMIS). Ensure logical performance superiority patte...

  11. Predictive Model Assessment for Count Data

    National Research Council Canada - National Science Library

    Czado, Claudia; Gneiting, Tilmann; Held, Leonhard

    2007-01-01

    .... In case studies, we critique count regression models for patent data, and assess the predictive performance of Bayesian age-period-cohort models for larynx cancer counts in Germany. Key words: Calibration...

  12. Modeling and Prediction Using Stochastic Differential Equations

    DEFF Research Database (Denmark)

    Juhl, Rune; Møller, Jan Kloppenborg; Jørgensen, John Bagterp

    2016-01-01

    deterministic and can predict the future perfectly. A more realistic approach would be to allow for randomness in the model due to e.g., the model be too simple or errors in input. We describe a modeling and prediction setup which better reflects reality and suggests stochastic differential equations (SDEs......) for modeling and forecasting. It is argued that this gives models and predictions which better reflect reality. The SDE approach also offers a more adequate framework for modeling and a number of efficient tools for model building. A software package (CTSM-R) for SDE-based modeling is briefly described....... that describes the variation between subjects. The ODE setup implies that the variation for a single subject is described by a single parameter (or vector), namely the variance (covariance) of the residuals. Furthermore the prediction of the states is given as the solution to the ODEs and hence assumed...

  13. Core-Level Modeling and Frequency Prediction for DSP Applications on FPGAs

    Directory of Open Access Journals (Sweden)

    Gongyu Wang

    2015-01-01

    Full Text Available Field-programmable gate arrays (FPGAs provide a promising technology that can improve performance of many high-performance computing and embedded applications. However, unlike software design tools, the relatively immature state of FPGA tools significantly limits productivity and consequently prevents widespread adoption of the technology. For example, the lengthy design-translate-execute (DTE process often must be iterated to meet the application requirements. Previous works have enabled model-based, design-space exploration to reduce DTE iterations but are limited by a lack of accurate model-based prediction of key design parameters, the most important of which is clock frequency. In this paper, we present a core-level modeling and design (CMD methodology that enables modeling of FPGA applications at an abstract level and yet produces accurate predictions of parameters such as clock frequency, resource utilization (i.e., area, and latency. We evaluate CMD’s prediction methods using several high-performance DSP applications on various families of FPGAs and show an average clock-frequency prediction error of 3.6%, with a worst-case error of 20.4%, compared to the best of existing high-level prediction methods, 13.9% average error with 48.2% worst-case error. We also demonstrate how such prediction enables accurate design-space exploration without coding in a hardware-description language (HDL, significantly reducing the total design time.

  14. Predictive models for arteriovenous fistula maturation.

    Science.gov (United States)

    Al Shakarchi, Julien; McGrogan, Damian; Van der Veer, Sabine; Sperrin, Matthew; Inston, Nicholas

    2016-05-07

    Haemodialysis (HD) is a lifeline therapy for patients with end-stage renal disease (ESRD). A critical factor in the survival of renal dialysis patients is the surgical creation of vascular access, and international guidelines recommend arteriovenous fistulas (AVF) as the gold standard of vascular access for haemodialysis. Despite this, AVFs have been associated with high failure rates. Although risk factors for AVF failure have been identified, their utility for predicting AVF failure through predictive models remains unclear. The objectives of this review are to systematically and critically assess the methodology and reporting of studies developing prognostic predictive models for AVF outcomes and to assess their suitability for clinical practice. Electronic databases were searched for studies reporting prognostic predictive models for AVF outcomes. Dual review was conducted to identify studies that reported on the development or validation of a model constructed to predict AVF outcome following creation. Data were extracted on study characteristics, risk predictors, statistical methodology, model type, as well as validation process. We included four different studies reporting five different predictive models. Parameters identified as common to all scoring systems were age and cardiovascular disease. This review has found a small number of predictive models in vascular access. The disparity between the studies limits the development of a unified predictive model.

  15. Model Predictive Control Fundamentals | Orukpe | Nigerian Journal ...

    African Journals Online (AJOL)

    Model Predictive Control (MPC) has developed considerably over the last two decades, both within the research control community and in industries. MPC strategy involves the optimization of a performance index with respect to some future control sequence, using predictions of the output signal based on a process model, ...

  16. Unreachable Setpoints in Model Predictive Control

    DEFF Research Database (Denmark)

    Rawlings, James B.; Bonné, Dennis; Jørgensen, John Bagterp

    2008-01-01

    In this work, a new model predictive controller is developed that handles unreachable setpoints better than traditional model predictive control methods. The new controller induces an interesting fast/slow asymmetry in the tracking response of the system. Nominal asymptotic stability of the optim...

  17. ITER council proceedings: 2001

    International Nuclear Information System (INIS)

    2001-01-01

    Continuing the ITER EDA, two further ITER Council Meetings were held since the publication of ITER EDA documentation series no. 20, namely the ITER Council Meeting on 27-28 February 2001 in Toronto and the ITER Council Meeting on 18-19 July 2001 in Vienna. The latter was the last Meeting during the ITER EDA. This volume contains records of these Meetings, including: Records of decisions; List of attendees; ITER EDA status report; ITER EDA technical activities report; MAC report and advice; Final report of ITER EDA; and Press release

  18. Clinical Prediction Models for Cardiovascular Disease: Tufts Predictive Analytics and Comparative Effectiveness Clinical Prediction Model Database.

    Science.gov (United States)

    Wessler, Benjamin S; Lai Yh, Lana; Kramer, Whitney; Cangelosi, Michael; Raman, Gowri; Lutz, Jennifer S; Kent, David M

    2015-07-01

    Clinical prediction models (CPMs) estimate the probability of clinical outcomes and hold the potential to improve decision making and individualize care. For patients with cardiovascular disease, there are numerous CPMs available although the extent of this literature is not well described. We conducted a systematic review for articles containing CPMs for cardiovascular disease published between January 1990 and May 2012. Cardiovascular disease includes coronary heart disease, heart failure, arrhythmias, stroke, venous thromboembolism, and peripheral vascular disease. We created a novel database and characterized CPMs based on the stage of development, population under study, performance, covariates, and predicted outcomes. There are 796 models included in this database. The number of CPMs published each year is increasing steadily over time. Seven hundred seventeen (90%) are de novo CPMs, 21 (3%) are CPM recalibrations, and 58 (7%) are CPM adaptations. This database contains CPMs for 31 index conditions, including 215 CPMs for patients with coronary artery disease, 168 CPMs for population samples, and 79 models for patients with heart failure. There are 77 distinct index/outcome pairings. Of the de novo models in this database, 450 (63%) report a c-statistic and 259 (36%) report some information on calibration. There is an abundance of CPMs available for a wide assortment of cardiovascular disease conditions, with substantial redundancy in the literature. The comparative performance of these models, the consistency of effects and risk estimates across models and the actual and potential clinical impact of this body of literature is poorly understood. © 2015 American Heart Association, Inc.

  19. Hybrid approaches to physiologic modeling and prediction

    Science.gov (United States)

    Olengü, Nicholas O.; Reifman, Jaques

    2005-05-01

    This paper explores how the accuracy of a first-principles physiological model can be enhanced by integrating data-driven, "black-box" models with the original model to form a "hybrid" model system. Both linear (autoregressive) and nonlinear (neural network) data-driven techniques are separately combined with a first-principles model to predict human body core temperature. Rectal core temperature data from nine volunteers, subject to four 30/10-minute cycles of moderate exercise/rest regimen in both CONTROL and HUMID environmental conditions, are used to develop and test the approach. The results show significant improvements in prediction accuracy, with average improvements of up to 30% for prediction horizons of 20 minutes. The models developed from one subject's data are also used in the prediction of another subject's core temperature. Initial results for this approach for a 20-minute horizon show no significant improvement over the first-principles model by itself.

  20. An iterated GMM procedure for estimating the Campbell-Cochrane habit formation model, with an application to Danish stock and bond returns

    DEFF Research Database (Denmark)

    Engsted, Tom; Møller, Stig V.

    We suggest an iterated GMM approach to estimate and test the consumption based habit persistence model of Campbell and Cochrane (1999), and we apply the approach on annual and quarterly Danish stock and bond returns. For comparative purposes we also estimate and test the standard CRRA model...

  1. Evaluating the Predictive Value of Growth Prediction Models

    Science.gov (United States)

    Murphy, Daniel L.; Gaertner, Matthew N.

    2014-01-01

    This study evaluates four growth prediction models--projection, student growth percentile, trajectory, and transition table--commonly used to forecast (and give schools credit for) middle school students' future proficiency. Analyses focused on vertically scaled summative mathematics assessments, and two performance standards conditions (high…

  2. Model predictive control classical, robust and stochastic

    CERN Document Server

    Kouvaritakis, Basil

    2016-01-01

    For the first time, a textbook that brings together classical predictive control with treatment of up-to-date robust and stochastic techniques. Model Predictive Control describes the development of tractable algorithms for uncertain, stochastic, constrained systems. The starting point is classical predictive control and the appropriate formulation of performance objectives and constraints to provide guarantees of closed-loop stability and performance. Moving on to robust predictive control, the text explains how similar guarantees may be obtained for cases in which the model describing the system dynamics is subject to additive disturbances and parametric uncertainties. Open- and closed-loop optimization are considered and the state of the art in computationally tractable methods based on uncertainty tubes presented for systems with additive model uncertainty. Finally, the tube framework is also applied to model predictive control problems involving hard or probabilistic constraints for the cases of multiplic...

  3. Low contrast detectability and spatial resolution with model-based iterative reconstructions of MDCT images: a phantom and cadaveric study

    Energy Technology Data Exchange (ETDEWEB)

    Millon, Domitille; Coche, Emmanuel E. [Universite Catholique de Louvain, Department of Radiology and Medical Imaging, Cliniques Universitaires Saint Luc, Brussels (Belgium); Vlassenbroek, Alain [Philips Healthcare, Brussels (Belgium); Maanen, Aline G. van; Cambier, Samantha E. [Universite Catholique de Louvain, Statistics Unit, King Albert II Cancer Institute, Brussels (Belgium)

    2017-03-15

    To compare image quality [low contrast (LC) detectability, noise, contrast-to-noise ratio (CNR) and spatial resolution (SR)] of MDCT images reconstructed with an iterative reconstruction (IR) algorithm and a filtered back projection (FBP) algorithm. The experimental study was performed on a 256-slice MDCT. LC detectability, noise, CNR and SR were measured on a Catphan phantom scanned with decreasing doses (48.8 down to 0.7 mGy) and parameters typical of a chest CT examination. Images were reconstructed with FBP and a model-based IR algorithm. Additionally, human chest cadavers were scanned and reconstructed using the same technical parameters. Images were analyzed to illustrate the phantom results. LC detectability and noise were statistically significantly different between the techniques, favoring the model-based IR algorithm (p < 0.0001). At low doses, the noise in FBP images only enabled SR measurements of high contrast objects. The superior CNR of the model-based IR algorithm enabled lower-dose measurements, which showed that SR was dose and contrast dependent. Cadaver images reconstructed with model-based IR illustrated that visibility and delineation of anatomical structure edges could be deteriorated at low doses. Model-based IR improved LC detectability and enabled dose reduction. At low dose, SR became dose and contrast dependent. (orig.)

  4. ITER EDA newsletter. V. 9, no. 8

    International Nuclear Information System (INIS)

    2000-08-01

    This ITER EDA Newsletter reports on the ITER meeting on 29-30 June 2000 in Moscow, summarizes the status report on the ITER EDA by R. Aymar, the ITER Director, and gives overviews of the expert group workshop on transport and internal barrier physics, confinement database and modelling and edge and pedestal physics, and the IEA workshop on transport barriers at edge and core. Individual abstracts have been prepared

  5. ITER EDA newsletter. V. 5, no. 5

    International Nuclear Information System (INIS)

    1996-05-01

    This issue of the ITER Engineering Design Activities Newsletter contains a report on the Tenth Meeting of the ITER Management Advisory Committee, held at JAERI Headquarters, Tokyo, June 5-6, 1996; a report on the Fourth ITER Divertor Physics and Divertor Modelling and Database Expert Group Workshop, held at the San Diego ITER Joint Worksite, March 11-15, 1996; and the Agenda for the 16th IAEA Fusion Energy Conference (7-11 October 1996)

  6. Positive feedback : exploring current approaches in iterative travel demand model implementation.

    Science.gov (United States)

    2012-01-01

    Currently, the models that TxDOT's Transportation Planning and Programming Division (TPP) developed are traditional three-step models (i.e., trip generation, trip distribution, and traffic assignment) that are sequentially applied. A limitation...

  7. A Global Model for Bankruptcy Prediction.

    Science.gov (United States)

    Alaminos, David; Del Castillo, Agustín; Fernández, Manuel Ángel

    2016-01-01

    The recent world financial crisis has increased the number of bankruptcies in numerous countries and has resulted in a new area of research which responds to the need to predict this phenomenon, not only at the level of individual countries, but also at a global level, offering explanations of the common characteristics shared by the affected companies. Nevertheless, few studies focus on the prediction of bankruptcies globally. In order to compensate for this lack of empirical literature, this study has used a methodological framework of logistic regression to construct predictive bankruptcy models for Asia, Europe and America, and other global models for the whole world. The objective is to construct a global model with a high capacity for predicting bankruptcy in any region of the world. The results obtained have allowed us to confirm the superiority of the global model in comparison to regional models over periods of up to three years prior to bankruptcy.

  8. Estimating the weight of Douglas-fir tree boles and logs with an iterative computer model.

    Science.gov (United States)

    Dale R. Waddell; Dale L Weyermann; Michael B. Lambert

    1987-01-01

    A computer model that estimates the green weights of standing trees was developed and validated for old-growth Douglas-fir. The model calculates the green weight for the entire bole, for the bole to any merchantable top, and for any log length within the bole. The model was validated by estimating the bias and accuracy of an independent subsample selected from the...

  9. Non-LTE line-blanketed model atmospheres of hot stars. 1: Hybrid complete linearization/accelerated lambda iteration method

    Science.gov (United States)

    Hubeny, I.; Lanz, T.

    1995-01-01

    A new numerical method for computing non-Local Thermodynamic Equilibrium (non-LTE) model stellar atmospheres is presented. The method, called the hybrid complete linearization/accelerated lambda iteration (CL/ALI) method, combines advantages of both its constituents. Its rate of convergence is virtually as high as for the standard CL method, while the computer time per iteration is almost as low as for the standard ALI method. The method is formulated as the standard complete linearization, the only difference being that the radiation intensity at selected frequency points is not explicitly linearized; instead, it is treated by means of the ALI approach. The scheme offers a wide spectrum of options, ranging from the full CL to the full ALI method. We demonstrate that the method works optimally if the majority of frequency points are treated in the ALI mode, while the radiation intensity at a few (typically two to 30) frequency points is explicitly linearized. We show how this method can be applied to calculate metal line-blanketed non-LTE model atmospheres, by using the idea of 'superlevels' and 'superlines' introduced originally by Anderson (1989). We calculate several illustrative models taking into account several tens of thousands of lines of Fe III to Fe IV and show that the hybrid CL/ALI method provides a robust method for calculating non-LTE line-blanketed model atmospheres for a wide range of stellar parameters. The results for individual stellar types will be presented in subsequent papers in this series.
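
    The ALI ingredient can be illustrated on a toy discretized problem S = (1-ε)ΛS + εB. Classical lambda iteration applies Λ directly and stalls when ε is small; ALI instead solves against a cheap approximate operator Λ*, commonly the diagonal of Λ. The following Python sketch is schematic only and is not the hybrid CL/ALI scheme of the paper:

```python
import numpy as np

def solve_ali(Lam, B, eps, tol=1e-10, max_it=1000):
    """Accelerated Lambda Iteration for S = (1 - eps) * Lam @ S + eps * B,
    preconditioned with the diagonal approximate operator Lam* = diag(Lam).
    Replacing M with 1 everywhere recovers plain (slow) lambda iteration."""
    S = B.copy()
    M = 1.0 - (1.0 - eps) * np.diag(Lam)   # (I - (1-eps) Lam*), stored as a vector
    for it in range(max_it):
        dS = ((1.0 - eps) * Lam @ S + eps * B - S) / M   # cheap diagonal solve
        S = S + dS
        if np.max(np.abs(dS)) < tol * np.max(np.abs(S)):
            return S, it + 1
    return S, max_it

# Toy operator with a dominant diagonal, mimicking an optically thick medium
n = 64
K = np.exp(-np.abs(np.subtract.outer(np.arange(n), np.arange(n))) / 2.0)
K /= K.sum(axis=1, keepdims=True)
Lam = 0.7 * np.eye(n) + 0.25 * K           # row sums < 1: photons can escape
S, iters = solve_ali(Lam, np.ones(n), eps=1e-3)
print(iters)
```

    Because the diagonal captures most of the local coupling, the preconditioned correction converges in far fewer sweeps than the unaccelerated iteration on the same problem.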

  10. Optimization for steady-state and hybrid operations of ITER by using scaling models of divertor heat load

    International Nuclear Information System (INIS)

    Murakami, Yoshiki; Itami, Kiyoshi; Sugihara, Masayoshi; Fujieda, Hirobumi.

    1992-09-01

    Steady-state and hybrid mode operations of ITER are investigated by 0-D power balance calculations assuming no radiation and charge-exchange cooling in the divertor region. Operation points are optimized with respect to the divertor heat load, which must be reduced to the level of the ignition mode (∼5 MW/m²). The dependence of the divertor heat load on the choice of model, i.e., the constant-χ model, the Bohm-type-χ model and the JT-60U empirical scaling model, is also discussed. The divertor heat load increases linearly with the fusion power (P_FUS) in all models. For a given allowable divertor heat load, the possible highest fusion power differs considerably between the models. The heat load evaluated by the constant-χ model is, for example, about 1.8 times larger than that by the Bohm-type-χ model at P_FUS = 750 MW. The effects of reducing the helium accumulation and of improving the confinement capability and the current-drive efficiency are also investigated with the aim of lowering the divertor heat load. It is found that the NBI power should be larger than about 60 MW to obtain a burn time longer than 2000 s. The optimized operation point, where the minimum divertor heat load is achieved, does not depend on the model and is the point with the minimum P_FUS and the maximum P_NBI. When P_FUS = 690 MW and P_NBI = 110 MW, the divertor heat load can be reduced to the level of the ignition mode without impurity seeding if H = 2.2 is achieved. Controllability of the current profile is also discussed. (J.P.N.)

  11. Fingerprint verification prediction model in hand dermatitis.

    Science.gov (United States)

    Lee, Chew K; Chang, Choong C; Johor, Asmah; Othman, Puwira; Baba, Roshidah

    2015-07-01

    Hand dermatitis associated fingerprint changes is a significant problem and affects fingerprint verification processes. This study was done to develop a clinically useful prediction model for fingerprint verification in patients with hand dermatitis. A case-control study involving 100 patients with hand dermatitis. All patients verified their thumbprints against their identity card. Registered fingerprints were randomized into a model derivation and model validation group. Predictive model was derived using multiple logistic regression. Validation was done using the goodness-of-fit test. The fingerprint verification prediction model consists of a major criterion (fingerprint dystrophy area of ≥ 25%) and two minor criteria (long horizontal lines and long vertical lines). The presence of the major criterion predicts it will almost always fail verification, while presence of both minor criteria and presence of one minor criterion predict high and low risk of fingerprint verification failure, respectively. When none of the criteria are met, the fingerprint almost always passes the verification. The area under the receiver operating characteristic curve was 0.937, and the goodness-of-fit test showed agreement between the observed and expected number (P = 0.26). The derived fingerprint verification failure prediction model is validated and highly discriminatory in predicting risk of fingerprint verification in patients with hand dermatitis. © 2014 The International Society of Dermatology.

  12. Massive Predictive Modeling using Oracle R Enterprise

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    R is fast becoming the lingua franca for analyzing data via statistics, visualization, and predictive analytics. For enterprise-scale data, R users have three main concerns: scalability, performance, and production deployment. Oracle's R-based technologies - Oracle R Distribution, Oracle R Enterprise, Oracle R Connector for Hadoop, and the R package ROracle - address these concerns. In this talk, we introduce Oracle's R technologies, highlighting how each enables R users to achieve scalability and performance while making production deployment of R results a natural outcome of the data analyst/scientist efforts. The focus then turns to Oracle R Enterprise with code examples using the transparency layer and embedded R execution, targeting massive predictive modeling. One goal behind massive predictive modeling is to build models per entity, such as customers, zip codes, simulations, in an effort to understand behavior and tailor predictions at the entity level. Predictions...

  13. Characterization of a commercial hybrid iterative and model-based reconstruction algorithm in radiation oncology

    Energy Technology Data Exchange (ETDEWEB)

    Price, Ryan G. [Department of Radiation Oncology, Henry Ford Health Systems, Detroit, Michigan 48202 and Wayne State University School of Medicine, Detroit, Michigan 48201 (United States); Vance, Sean; Cattaneo, Richard; Elshaikh, Mohamed A.; Chetty, Indrin J.; Glide-Hurst, Carri K., E-mail: churst2@hfhs.org [Department of Radiation Oncology, Henry Ford Health Systems, Detroit, Michigan 48202 (United States); Schultz, Lonni [Department of Public Health Sciences, Henry Ford Health Systems, Detroit, Michigan 48202 (United States)

    2014-08-15

    Purpose: Iterative reconstruction (IR) reduces noise, thereby allowing dose reduction in computed tomography (CT) while maintaining comparable image quality to filtered back-projection (FBP). This study sought to characterize image quality metrics, delineation, dosimetric assessment, and other aspects necessary to integrate IR into treatment planning. Methods: CT images (Brilliance Big Bore v3.6, Philips Healthcare) were acquired of several phantoms using 120 kVp and 25–800 mAs. IR was applied at levels corresponding to noise reduction of 0.89–0.55 with respect to FBP. Noise power spectrum (NPS) analysis was used to characterize noise magnitude and texture. CT to electron density (CT-ED) curves were generated over all IR levels. Uniformity as well as spatial and low contrast resolution were quantified using a CATPHAN phantom. Task specific modulation transfer functions (MTF_task) were developed to characterize spatial frequency across objects of varied contrast. A prospective dose reduction study was conducted for 14 patients undergoing interfraction CT scans for high-dose rate brachytherapy. Three physicians performed image quality assessment using a six-point grading scale between the normal-dose FBP (reference), low-dose FBP, and low-dose IR scans for the following metrics: image noise, detectability of the vaginal cuff/bladder interface, spatial resolution, texture, segmentation confidence, and overall image quality. Contouring differences between FBP and IR were quantified for the bladder and rectum via overlap indices (OI) and Dice similarity coefficients (DSC). Line profile and region of interest analyses quantified noise and boundary changes. For two subjects, the impact of IR on external beam dose calculation was assessed via gamma analysis and changes in digitally reconstructed radiographs (DRRs) were quantified. Results: NPS showed large reduction in noise magnitude (50%), and a slight spatial frequency shift (∼0.1 mm⁻¹) with

  14. Predictive Model of Systemic Toxicity (SOT)

    Science.gov (United States)

    In an effort to ensure chemical safety in light of regulatory advances away from reliance on animal testing, USEPA and L’Oréal have collaborated to develop a quantitative systemic toxicity prediction model. Prediction of human systemic toxicity has proved difficult and remains a ...

  15. Testicular Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing testicular cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  16. Pancreatic Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing pancreatic cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  17. Colorectal Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing colorectal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  18. Prostate Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing prostate cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  19. Bladder Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing bladder cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  20. Esophageal Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing esophageal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  1. Cervical Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing cervical cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  2. Breast Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing breast cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  3. Lung Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing lung cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  4. Liver Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing liver cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  5. Ovarian Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing ovarian cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  6. Posterior Predictive Model Checking in Bayesian Networks

    Science.gov (United States)

    Crawford, Aaron

    2014-01-01

    This simulation study compared the utility of various discrepancy measures within a posterior predictive model checking (PPMC) framework for detecting different types of data-model misfit in multidimensional Bayesian network (BN) models. The investigated conditions were motivated by an applied research program utilizing an operational complex…

  7. THELMA code electromagnetic model of ITER superconducting cables and application to the ENEA Stability Experiment

    NARCIS (Netherlands)

    Ciotti, M.; Nijhuis, Arend; Ribani, P.L.; Savoldi Richard, L.; Zanino, R.

    2006-01-01

    The new THELMA code, including a thermal-hydraulic (TH) and an electro-magnetic (EM) model of a cable-in-conduit conductor (CICC), has been developed. The TH model is at this stage relatively conventional, with two fluid components (He flowing in the annular cable region and He flowing in the

  8. Numerical modeling and experimental simulation of vapor shield formation and divertor material erosion for ITER typical plasma disruptions

    International Nuclear Information System (INIS)

    Wuerz, H.; Arkhipov, N.I.; Bakhin, V.P.; Goel, B.; Hoebel, W.; Konkashbaev, I.; Landman, I.; Piazza, G.; Safronov, V.M.; Sherbakov, A.R.; Toporkov, D.A.; Zhitlukhin, A.M.

    1994-01-01

    The high divertor heat load during a tokamak plasma disruption results in sudden evaporation of a thin layer of divertor plate material, which acts as a vapor shield and protects the target from further excessive evaporation. Formation and effectiveness of the vapor shield are theoretically modeled and experimentally investigated at the 2MK-200 facility under conditions simulating the thermal quench phase of ITER tokamak plasma disruptions. In the optical wavelength range, C II, C III and C IV emission lines for graphite, Cu I and Cu II lines for copper, and continuum radiation for tungsten samples are observed in the target plasma. The plasma expands along the magnetic field lines with velocities of (4±1)×10⁶ cm/s for graphite and 10⁵ cm/s for copper. Modeling was done with a radiation hydrodynamics code in one-dimensional planar geometry. The multifrequency radiation transport is treated in flux-limited diffusion and in forward-reverse transport approximation. In these first modeling studies the overall shielding efficiency for carbon and tungsten, defined as the ratio of the incident energy to the vaporization energy, exceeds a factor of 30 for power densities of 10 MW/cm². The vapor shield is established within 2 μs, the power fraction to the target after 10 μs is below 3% and reaches in the stationary state, after about 20 μs, a value of around 1.5%. (orig.)

  9. Predicting and Modeling RNA Architecture

    Science.gov (United States)

    Westhof, Eric; Masquida, Benoît; Jossinet, Fabrice

    2011-01-01

    SUMMARY A general approach for modeling the architecture of large and structured RNA molecules is described. The method exploits the modularity and the hierarchical folding of RNA architecture that is viewed as the assembly of preformed double-stranded helices defined by Watson-Crick base pairs and RNA modules maintained by non-Watson-Crick base pairs. Despite the extensive molecular neutrality observed in RNA structures, specificity in RNA folding is achieved through global constraints like lengths of helices, coaxiality of helical stacks, and structures adopted at the junctions of helices. The Assemble integrated suite of computer tools allows for sequence and structure analysis as well as interactive modeling by homology or ab initio assembly with possibilities for fitting within electronic density maps. The local key role of non-Watson-Crick pairs guides RNA architecture formation and offers metrics for assessing the accuracy of three-dimensional models in a more useful way than usual root mean square deviation (RMSD) values. PMID:20504963

  10. Multiple Steps Prediction with Nonlinear ARX Models

    OpenAIRE

    Zhang, Qinghua; Ljung, Lennart

    2007-01-01

    NLARX (NonLinear AutoRegressive with eXogenous inputs) models are frequently used in black-box nonlinear system identification. Though it is easy to make one step ahead prediction with such models, multiple steps prediction is far from trivial. The main difficulty is that in general there is no easy way to compute the mathematical expectation of an output conditioned by past measurements. An optimal solution would require intensive numerical computations related to nonlinear filtering. The pur...

  11. Predictability of extreme values in geophysical models

    Directory of Open Access Journals (Sweden)

    A. E. Sterk

    2012-09-01

    Full Text Available Extreme value theory in deterministic systems is concerned with unlikely large (or small) values of an observable evaluated along evolutions of the system. In this paper we study the finite-time predictability of extreme values, such as convection, energy, and wind speeds, in three geophysical models. We study whether finite-time Lyapunov exponents are larger or smaller for initial conditions leading to extremes. General statements on whether extreme values are more or less predictable are not possible: the predictability of extreme values depends on the observable, the attractor of the system, and the prediction lead time.

  12. An Iterated GMM Procedure for Estimating the Campbell-Cochrane Habit Formation Model, with an Application to Danish Stock and Bond Returns

    DEFF Research Database (Denmark)

    Engsted, Tom; Møller, Stig Vinther

    2010-01-01

    We suggest an iterated GMM approach to estimate and test the consumption based habit persistence model of Campbell and Cochrane, and we apply the approach on annual and quarterly Danish stock and bond returns. For comparative purposes we also estimate and test the standard constant relative risk...

  13. Model complexity control for hydrologic prediction

    Science.gov (United States)

    Schoups, G.; van de Giesen, N. C.; Savenije, H. H. G.

    2008-12-01

    A common concern in hydrologic modeling is overparameterization of complex models given limited and noisy data. This leads to problems of parameter nonuniqueness and equifinality, which may negatively affect prediction uncertainties. A systematic way of controlling model complexity is therefore needed. We compare three model complexity control methods for hydrologic prediction, namely, cross validation (CV), Akaike's information criterion (AIC), and structural risk minimization (SRM). Results show that simulation of water flow using non-physically-based models (polynomials in this case) leads to increasingly better calibration fits as the model complexity (polynomial order) increases. However, prediction uncertainty worsens for complex non-physically-based models because of overfitting of noisy data. Incorporation of physically based constraints into the model (e.g., storage-discharge relationship) effectively bounds prediction uncertainty, even as the number of parameters increases. The conclusion is that overparameterization and equifinality do not lead to a continued increase in prediction uncertainty, as long as models are constrained by such physical principles. Complexity control of hydrologic models reduces parameter equifinality and identifies the simplest model that adequately explains the data, thereby providing a means of hydrologic generalization and classification. SRM is a promising technique for this purpose, as it (1) provides analytic upper bounds on prediction uncertainty, hence avoiding the computational burden of CV, and (2) extends the applicability of classic methods such as AIC to finite data. The main hurdle in applying SRM is the need for an a priori estimation of the complexity of the hydrologic model, as measured by its Vapnik-Chervonenkis (VC) dimension. Further research is needed in this area.
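
    The polynomial illustration in the abstract is easy to reproduce in miniature: calibration error keeps falling as the order grows, while an information criterion such as AIC penalizes the added parameters and flags where extra complexity stops buying predictive skill. The data, orders, and noise level in this Python sketch are invented for illustration (CV or SRM would substitute a different scoring rule):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 40)
y = np.sin(2.0 * np.pi * x) + rng.normal(0.0, 0.3, x.size)  # noisy signal

def aic(y, yhat, n_params):
    """Akaike's information criterion under a Gaussian residual assumption."""
    n = y.size
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2 * n_params

results = []
for order in range(1, 12):
    coef = np.polyfit(x, y, order)
    fit = np.polyval(coef, x)
    results.append((aic(y, fit, order + 1), order, np.sum((y - fit) ** 2)))

best_aic, best_order, _ = min(results)
print(f"AIC selects polynomial order {best_order}")
for score, order, rss in results:
    print(f"order {order:2d}: RSS = {rss:6.3f}, AIC = {score:7.2f}")  # RSS falls monotonically; AIC does not
```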

  14. Modeling of Self-Excited Isolated Permanent Magnet Induction Generator Using Iterative Numerical Method

    Directory of Open Access Journals (Sweden)

    Mohamed Mostafa R.

    2016-01-01

    Full Text Available The Self-Excited Permanent Magnet Induction Generator (PMIG) is commonly used in wind energy generation systems. The difficulty of Self-Excited Permanent Magnet Induction Generator (SEPMIG) modeling is that the circuit parameters of the generator vary with the load conditions due to changes in the frequency and stator voltage. The paper introduces a new modeling approach for the SEPMIG using the Gauss-Seidel relaxation method. The SEPMIG characteristics obtained with the proposed method are studied at different load conditions according to the wind speed variation, load impedance changes and different shunt capacitor values. The system model is investigated with respect to the variation of the magnetizing current, the efficiency, the power and the power factor. The proposed modeling method achieves a high degree of simplicity and accuracy.
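
    The generator equations are not given in this record, so the sketch below only shows the Gauss-Seidel relaxation skeleton such a model rests on: each unknown is updated in turn using the most recent values of the others, and sweeps repeat until the largest change falls below a tolerance. The two-variable fixed point used in the demo is purely illustrative.

```python
import math

def gauss_seidel(update_fns, x0, tol=1e-10, max_it=500):
    """Generic Gauss-Seidel relaxation: update_fns[i] maps the current state
    vector to a new value of x[i]; updated values are reused immediately
    within the same sweep (unlike Jacobi iteration)."""
    x = list(x0)
    for sweep in range(max_it):
        delta = 0.0
        for i, f in enumerate(update_fns):
            new = f(x)
            delta = max(delta, abs(new - x[i]))
            x[i] = new
        if delta < tol:
            return x, sweep + 1
    return x, max_it

# Demo: solve the contractive fixed point x = cos(y), y = sin(x) / 2
sol, sweeps = gauss_seidel([lambda s: math.cos(s[1]),
                            lambda s: math.sin(s[0]) / 2.0], [0.0, 0.0])
print(sol, sweeps)
```

    In the generator context, the state entries would be quantities such as frequency, stator voltage and magnetizing current, each updated from the circuit equations in the same cyclic fashion.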

  15. Quantifying predictive accuracy in survival models.

    Science.gov (United States)

    Lirette, Seth T; Aban, Inmaculada

    2017-12-01

    For time-to-event outcomes in medical research, survival models are the most appropriate to use. Unlike logistic regression models, quantifying the predictive accuracy of these models is not a trivial task. We present the classes of concordance (C) statistics and R² statistics often used to assess the predictive ability of these models. The discussion focuses on Harrell's C, Kent and O'Quigley's R², and Royston and Sauerbrei's R². We present similarities and differences between the statistics, discuss the software options from the most widely used statistical analysis packages, and give a practical example using the Worcester Heart Attack Study dataset.
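
    As a usage illustration for Harrell's C, and assuming the Python lifelines package with its bundled Rossi recidivism dataset, the concordance index of a fitted Cox model can be computed as follows:

```python
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi
from lifelines.utils import concordance_index

df = load_rossi()  # durations in "week", event indicator in "arrest"
cph = CoxPHFitter().fit(df, duration_col="week", event_col="arrest")

# Harrell's C is the fraction of comparable pairs that the model ranks
# correctly; higher predicted hazard should go with shorter survival,
# hence the minus sign on the risk score.
c = concordance_index(df["week"], -cph.predict_partial_hazard(df), df["arrest"])
print(f"Harrell's C = {c:.3f}")  # 0.5 is random ranking, 1.0 is perfect
```

    In recent lifelines versions the fitted object also exposes this value directly as cph.concordance_index_.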

  16. Predictive power of nuclear-mass models

    Directory of Open Access Journals (Sweden)

    Yu. A. Litvinov

    2013-12-01

    Full Text Available Ten different theoretical models are tested for their predictive power in the description of nuclear masses. Two sets of experimental masses are used for the test: the older set of 2003 and the newer one of 2011. The predictive power is studied in two regions of nuclei: the global region (Z, N ≥ 8) and the heavy-nuclei region (Z ≥ 82, N ≥ 126). No clear correlation is found between the predictive power of a model and the accuracy of its description of the masses.

  17. Return Predictability, Model Uncertainty, and Robust Investment

    DEFF Research Database (Denmark)

    Lukas, Manuel

    Stock return predictability is subject to great uncertainty. In this paper we use the model confidence set approach to quantify uncertainty about expected utility from investment, accounting for potential return predictability. For monthly US data and six representative return prediction models, we...... find that confidence sets are very wide, change significantly with the predictor variables, and frequently include expected utilities for which the investor prefers not to invest. The latter motivates a robust investment strategy maximizing the minimal element of the confidence set. The robust investor...... allocates a much lower share of wealth to stocks compared to a standard investor....

  18. A Warm-Started Homogeneous and Self-Dual Interior-Point Method for Linear Economic Model Predictive Control

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Skajaa, Anders; Frison, Gianluca

    2013-01-01

    In this paper, we present a warm-started homogeneous and self-dual interior-point method (IPM) for the linear programs arising in economic model predictive control (MPC) of linear systems. To exploit the structure in the optimization problems, our algorithm utilizes a Riccati iteration procedure...... We implement the algorithm in MATLAB and its performance is analyzed based on a smart grid power management case study. Closed loop simulations show that 1) our algorithm is significantly faster than state-of-the-art IPMs based on sparse linear algebra routines, and 2) warm-starting reduces the number of iterations......

  19. Spatial Economics Model Predicting Transport Volume

    Directory of Open Access Journals (Sweden)

    Lu Bo

    2016-10-01

    Full Text Available It is extremely important to predict logistics requirements in a scientific and rational way. However, in recent years improvements to prediction methods have not been very significant, and the traditional statistical prediction methods suffer from low precision and poor interpretability: they can neither guarantee the generalization ability of the prediction model theoretically nor explain the models effectively. Therefore, in combination with theories from spatial economics, industrial economics, and neo-classical economics, and taking the city of Zhuanghe as the research object, this study identifies the leading industries that generate large cargo volumes and further predicts the static logistics generation of Zhuanghe and its hinterland. By integrating the various factors that can affect regional logistics requirements, this study establishes a logistics requirements potential model based on spatial economic principles and expands logistics requirements prediction from purely statistical principles to the new area of spatial and regional economics.

  20. Model-Free Primitive-Based Iterative Learning Control Approach to Trajectory Tracking of MIMO Systems With Experimental Validation.

    Science.gov (United States)

    Radac, Mircea-Bogdan; Precup, Radu-Emil; Petriu, Emil M

    2015-11-01

    This paper proposes a novel model-free trajectory tracking of multiple-input multiple-output (MIMO) systems by the combination of iterative learning control (ILC) and primitives. The optimal trajectory tracking solution is obtained in terms of previously learned solutions to simple tasks called primitives. The library of primitives that are stored in memory consists of pairs of reference input/controlled output signals. The reference input primitives are optimized in a model-free ILC framework without using knowledge of the controlled process. The guaranteed convergence of the learning scheme is built upon a model-free virtual reference feedback tuning design of the feedback decoupling controller. Each new complex trajectory to be tracked is decomposed into the output primitives regarded as basis functions. The optimal reference input for the control system to track the desired trajectory is next recomposed from the reference input primitives. This is advantageous because the optimal reference input is computed straightforward without the need to learn from repeated executions of the tracking task. In addition, the optimization problem specific to trajectory tracking of square MIMO systems is decomposed in a set of optimization problems assigned to each separate single-input single-output control channel that ensures a convenient model-free decoupling. The new model-free primitive-based ILC approach is capable of planning, reasoning, and learning. A case study dealing with the model-free control tuning for a nonlinear aerodynamic system is included to validate the new approach. The experimental results are given.
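
    Setting the primitive machinery aside, the underlying ILC mechanism is compact: the same task is executed repeatedly and the input is corrected from the previous trial's tracking error, e.g. with an Arimoto-type law u_{k+1}(t) = u_k(t) + L·e_k(t+1). The plant, learning gain, and reference in this Python sketch are assumed for illustration and are unrelated to the aerodynamic system in the paper:

```python
import numpy as np

# Lifted description y = P @ u of an assumed first-order plant
# x[t+1] = 0.3 x[t] + u[t], y[t] = x[t], over an N-sample task
N = 50
P = np.zeros((N, N))
for i in range(N):
    for j in range(i):
        P[i, j] = 0.3 ** (i - j - 1)

r = np.sin(np.linspace(0.0, np.pi, N))   # reference trajectory (r[0] = 0)
u = np.zeros(N)
L = 0.5                                   # learning gain

for trial in range(30):                   # repeated executions of the task
    e = r - P @ u                         # tracking error of this trial
    u[:-1] += L * e[1:]                   # u_{k+1}(t) = u_k(t) + L * e_k(t+1)

print(f"remaining tracking error: {np.linalg.norm(r - P @ u):.2e}")
```

    For this choice of plant and gain the update is a contraction across trials, so the error shrinks monotonically; the primitive-based approach avoids such repeated executions for each new trajectory by recombining previously learned input/output pairs.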

  1. Accuracy assessment of landslide prediction models

    International Nuclear Information System (INIS)

    Othman, A N; Mohd, W M N W; Noraini, S

    2014-01-01

    The increasing population and expansion of settlements over hilly areas have greatly increased the impact of natural disasters such as landslides. Therefore, it is important to develop models which can accurately predict landslide hazard zones. Over the years, various techniques and models have been developed to predict landslide hazard zones. The aim of this paper is to assess the accuracy of the landslide prediction models developed by the authors. The methodology involved the selection of the study area, data acquisition, data processing, model development and data analysis. The development of these models is based on nine different landslide-inducing parameters, i.e. slope, land use, lithology, soil properties, geomorphology, flow accumulation, aspect, proximity to river and proximity to road. Rank sum, rating, pairwise comparison and AHP techniques are used to determine the weights for each of the parameters used. Four (4) different models which consider different parameter combinations are developed by the authors. Results obtained are compared to landslide history, and the accuracies for Model 1, Model 2, Model 3 and Model 4 are 66.7%, 66.7%, 60% and 22.9% respectively. From the results, rank sum, rating and pairwise comparison can be useful techniques to predict landslide hazard zones
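
    Of the weighting techniques mentioned, the pairwise-comparison (AHP) step is straightforward to make concrete: the weights are the normalized principal eigenvector of a Saaty-scale comparison matrix, checked with a consistency ratio. The three factors and the judgments in this Python sketch are invented for illustration only:

```python
import numpy as np

# Pairwise comparisons on Saaty's 1-9 scale: A[i, j] states how much more
# important factor i is than factor j (factors here: slope, lithology, land use)
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                           # AHP weights: normalized principal eigenvector

n = A.shape[0]
CI = (eigvals.real[k] - n) / (n - 1)   # consistency index
CR = CI / 0.58                         # random index RI = 0.58 for n = 3
print(f"weights = {np.round(w, 3)}, consistency ratio = {CR:.3f}")
```

    A consistency ratio below 0.1 is conventionally taken to mean the pairwise judgments are acceptably consistent.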

  2. Iterative participatory design

    DEFF Research Database (Denmark)

    Simonsen, Jesper; Hertzum, Morten

    2010-01-01

    The theoretical background in this chapter is information systems development in an organizational context. This includes theories from participatory design, human-computer interaction, and ethnographically inspired studies of work practices. The concept of design is defined as an experimental iterative process of mutual learning by designers and domain experts (users), who aim to change the users’ work practices through the introduction of information systems. We provide an illustrative case example with an ethnographic study of clinicians experimenting with a new electronic patient record system, focussing on emergent and opportunity-based change enabled by appropriating the system into real work. The contribution to a general core of design research is a reconstruction of the iterative prototyping approach into a general model for sustained participatory design.

  3. An integrated model for the assessment of unmitigated fault events in ITER's superconducting magnets

    Energy Technology Data Exchange (ETDEWEB)

    McIntosh, S., E-mail: simon.mcintosh@ccfe.ac.uk [Culham Centre for Fusion Energy, Culham Science Center, Abingdon OX14 3DB, Oxfordshire (United Kingdom); Holmes, A. [Marcham Scientific Ltd., Sarum House, 10 Salisbury Rd., Hungerford RG17 0LH, Berkshire (United Kingdom); Cave-Ayland, K.; Ash, A.; Domptail, F.; Zheng, S.; Surrey, E.; Taylor, N. [Culham Centre for Fusion Energy, Culham Science Center, Abingdon OX14 3DB, Oxfordshire (United Kingdom); Hamada, K.; Mitchell, N. [ITER Organization, Magnet Division, St Paul Lez Durance Cedex (France)

    2016-11-01

    A large amount of energy is stored in the ITER superconducting magnet system. Faults which initiate a discharge are typically mitigated by quickly transferring the stored magnetic energy away for dissipation through a bank of resistors. In an extremely unlikely occurrence, an unmitigated fault event represents a potentially severe discharge of energy into the coils and the surrounding structure. A new simulation tool has been developed for the detailed study of these unmitigated fault events. The tool integrates: the propagation of multiple quench fronts initiated by an initial fault or by subsequent coil heating; the 3D convection and conduction of heat through the magnet structure; and the 3D conduction of current and Ohmic heating both along the conductor and via alternate pathways generated by arcing or material melt. Arcs linking broken sections of conductor or separate turns are simulated with a new unconstrained arc model that balances electrical current paths and heat generation within the arc column in the multi-physics model. The influence of the high Lorentz forces present is taken into account. Simulation results for an unmitigated fault in a poloidal field coil are presented.

  4. Predictive validation of an influenza spread model.

    Directory of Open Access Journals (Sweden)

    Ayaz Hyder

    Full Text Available BACKGROUND: Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics, leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. METHODS AND FINDINGS: We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998-1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks ahead with reasonable reliability, depending on the method of forecasting (static or dynamic). CONCLUSIONS: Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve

  5. Predictive Validation of an Influenza Spread Model

    Science.gov (United States)

    Hyder, Ayaz; Buckeridge, David L.; Leung, Brian

    2013-01-01

    Background Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics, leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. Methods and Findings We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998–1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks ahead with reasonable reliability, depending on the method of forecasting (static or dynamic). Conclusions Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve their predictive

  6. Automated main-chain model building by template matching and iterative fragment extension

    International Nuclear Information System (INIS)

    Terwilliger, Thomas C.

    2003-01-01

    A method for automated macromolecular main-chain model building is described: an algorithm for the automated building of polypeptide backbones. The procedure is hierarchical. In the initial stages, many overlapping polypeptide fragments are built. In subsequent stages, the fragments are extended and then connected. Identification of the locations of helical and β-strand regions is carried out by FFT-based template matching. Fragment libraries of helices and β-strands from refined protein structures are then positioned at the potential locations of helices and strands, and the longest segments that fit the electron-density map are chosen. The helices and strands are then extended using fragment libraries consisting of sequences three amino acids long derived from refined protein structures. The resulting segments of polypeptide chain are then connected by choosing those which overlap at two or more Cα positions. The fully automated procedure has been implemented in RESOLVE and is capable of model building at resolutions as low as 3.5 Å. The algorithm is useful for building a preliminary main-chain model that can serve as a basis for refinement and side-chain addition.
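
    FFT-based template matching of the kind used here amounts to a cross-correlation computed in Fourier space. A minimal 1D sketch with a synthetic density profile; the real algorithm works on 3D electron-density maps and rotated template libraries, which are not reproduced here.

        import numpy as np

        rng = np.random.default_rng(0)
        density = rng.normal(0.0, 0.1, 512)       # synthetic 1D "map"
        template = np.exp(-0.5 * ((np.arange(21) - 10) / 2.0) ** 2)
        density[200:221] += template              # bury the motif at position 200

        # Cross-correlation via FFT: corr = IFFT(FFT(map) * conj(FFT(template)))
        n = len(density)
        corr = np.real(np.fft.ifft(np.fft.fft(density) *
                                   np.conj(np.fft.fft(template, n))))
        print("best template match at index:", int(np.argmax(corr)))  # ~200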

  7. Repurposing and probabilistic integration of data: An iterative and data model independent approach

    NARCIS (Netherlands)

    Wanders, B.

    2016-01-01

    Besides the scientific paradigms of empiricism, mathematical modelling, and simulation, the method of combining and analysing data in novel ways has become a main research paradigm capable of tackling research questions that could not be answered before. To speed up research in this new paradigm,

  8. Comparison of Iterative Methods for Computing the Pressure Field in a Dynamic Network Model

    DEFF Research Database (Denmark)

    Mogensen, Kristian; Stenby, Erling Halfdan; Banerjee, Srilekha

    1999-01-01

    In dynamic network models, the pressure map (the pressure in the pores) must be evaluated at each time step. This calculation involves the solution of a large number of nonlinear algebraic systems of equations and accounts for more than 80% of the total CPU time. Each nonlinear system requires...
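
    The inner linear solves in such a comparison can be illustrated with a classic stationary iteration. A minimal Gauss-Seidel sketch on a toy 1D "pressure network" (a tridiagonal Laplacian); this is a simplified stand-in for the full-scale nonlinear network systems compared in the paper.

        import numpy as np

        def gauss_seidel(A, b, tol=1e-8, max_sweeps=2000):
            """Solve A x = b with Gauss-Seidel sweeps (A symmetric positive definite)."""
            x = np.zeros_like(b)
            for sweep in range(1, max_sweeps + 1):
                x_old = x.copy()
                for i in range(len(b)):
                    s = A[i] @ x - A[i, i] * x[i]   # uses freshly updated entries
                    x[i] = (b[i] - s) / A[i, i]
                if np.max(np.abs(x - x_old)) < tol:
                    break
            return x, sweep

        # Toy 1D "pressure network": a tridiagonal Laplacian with unit sources
        n = 20
        A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        b = np.ones(n)
        x, sweeps = gauss_seidel(A, b)
        print("sweeps:", sweeps, " max residual:", np.max(np.abs(A @ x - b)))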

  9. Tests on the integration of the ITER divertor dummy armour prototype on a simplified model of cassette body

    International Nuclear Information System (INIS)

    Dell'Orco, G.; Canneta, A.; Cattadori, G.; Gaspari, G.P.; Merola, M.; Polazzi, G.; Vieider, G.; Zito, D.

    2001-01-01

    In 1998, within the framework of the European R and D on ITER high heat flux components, the fabrication of a full-scale ITER Divertor Outboard mock-up was launched. It comprised a Cassette Body, designed with some mechanical and hydraulic simplifications with respect to the reference body, and the actively cooled Dummy Armour Prototype (DAP). The DAP consists of the Vertical Target, the Wing and the Dump Target, manufactured by European industry, which are integrated with the Gas Box Liner supplied by the Russian Federation Home Team. In order to simplify manufacturing, the DAP was layered with an equivalent CuCrZr thickness simulating the real armour (CFC or W tiles). In parallel with the manufacturing activity, the ITER European HT decided to assign to ENEA the Task EU-DV1 for the 'Component Integration and Thermal-Hydraulic Testing of the ITER Divertor Targets and Wing Dummy Prototypes and Cassette Body'.

  10. Posterior predictive checking of multiple imputation models.

    Science.gov (United States)

    Nguyen, Cattram D; Lee, Katherine J; Carlin, John B

    2015-07-01

    Multiple imputation is gaining popularity as a strategy for handling missing data, but there is a scarcity of tools for checking imputation models, a critical step in model fitting. Posterior predictive checking (PPC) has been recommended as an imputation diagnostic. PPC involves simulating "replicated" data from the posterior predictive distribution of the model under scrutiny. Model fit is assessed by examining whether the analysis from the observed data appears typical of results obtained from the replicates produced by the model. A proposed diagnostic measure is the posterior predictive "p-value", an extreme value of which (i.e., a value close to 0 or 1) suggests a misfit between the model and the data. The aim of this study was to evaluate the performance of the posterior predictive p-value as an imputation diagnostic. Using simulation methods, we deliberately misspecified imputation models to determine whether posterior predictive p-values were effective in identifying these problems. When estimating the regression parameter of interest, we found that more extreme p-values were associated with poorer imputation model performance, although the results highlighted that traditional thresholds for classical p-values do not apply in this context. A shortcoming of the PPC method was its reduced ability to detect misspecified models with increasing amounts of missing data. Despite the limitations of posterior predictive p-values, they appear to have a valuable place in the imputer's toolkit. In addition to automated checking using p-values, we recommend imputers perform graphical checks and examine other summaries of the test quantity distribution. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
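
    The posterior predictive p-value described above is straightforward to compute once replicated datasets have been drawn. A schematic sketch, assuming the replicates from the imputation model's posterior predictive distribution are already available and using the sample mean as a stand-in test quantity:

        import numpy as np

        def ppc_pvalue(observed, replicates, stat=np.mean):
            """Posterior predictive p-value: the fraction of replicated datasets
            whose test quantity is at least as large as the observed one."""
            t_obs = stat(observed)
            t_rep = np.array([stat(rep) for rep in replicates])
            return float(np.mean(t_rep >= t_obs))

        # Toy check with replicates from a correctly specified model:
        rng = np.random.default_rng(1)
        observed = rng.normal(0.0, 1.0, 100)
        replicates = [rng.normal(0.0, 1.0, 100) for _ in range(500)]
        print("posterior predictive p-value:", ppc_pvalue(observed, replicates))
        # values near 0 or 1 would flag a misfit between model and data

    As the abstract recommends, such automated checks would be complemented by graphical comparisons of the full distribution of the test quantity across replicates.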

  11. ITER ITA newsletter. No. 7, August 2003

    International Nuclear Information System (INIS)

    2003-10-01

    This issue of the ITER ITA (ITER Transitional Arrangements) newsletter contains concise information about ITER-related meetings, including the ninth meeting of the ITER Negotiators' Standing Sub-Group (NSSG-9), held on 28 and 29 July 2003 at the Mita International Conference Center in Tokyo, and the joint meeting of the Confinement Database and Modelling (CDB and M) and Transport and Internal Transport Barrier (T and ITB) Topical Groups (TGs), held in St. Petersburg, Russia, from 8 to 12 April 2003, as well as the retirement of Boris Kuvshinnikov, senior information officer, after 13 years at the ITER office in Vienna.

  12. Predicting Protein Secondary Structure with Markov Models

    DEFF Research Database (Denmark)

    Fischer, Paul; Larsen, Simon; Thomsen, Claus

    2004-01-01

    The task we are considering here is to predict the secondary structure from the primary one. To this end we train a Markov model on training data and then use it to classify parts of unknown protein sequences as sheets, helices or coils. We show how to exploit the directional information contained in the Markov model for this task. Classifications that are purely based on statistical models might not always be biologically meaningful. We present combinatorial methods to incorporate biological background knowledge to enhance the prediction performance.
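
    One simple form of such a classifier trains a separate first-order Markov chain per structure class and scores a window of sequence by log-likelihood. A toy sketch: the training segments below are invented placeholders rather than real labelled data, and the paper's directional and combinatorial refinements are omitted.

        import numpy as np

        AA = "ACDEFGHIKLMNPQRSTVWY"
        IDX = {a: i for i, a in enumerate(AA)}

        def train_markov(seqs, alpha=1.0):
            """Laplace-smoothed first-order transition matrix from segments."""
            counts = np.full((20, 20), alpha)
            for s in seqs:
                for a, b in zip(s, s[1:]):
                    counts[IDX[a], IDX[b]] += 1
            return counts / counts.sum(axis=1, keepdims=True)

        def log_likelihood(seq, P):
            return sum(np.log(P[IDX[a], IDX[b]]) for a, b in zip(seq, seq[1:]))

        # Hypothetical training segments per class (real use: labelled structures)
        models = {c: train_markov(segs)
                  for c, segs in {"helix": ["AELLKK", "LEELKA"],
                                  "sheet": ["VTVTIV", "ITVVTV"],
                                  "coil":  ["GPSGNP", "PGSGGN"]}.items()}

        query = "VTITVV"
        pred = max(models, key=lambda c: log_likelihood(query, models[c]))
        print("predicted class:", pred)   # sheet, given these toy segments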

  13. Energy based prediction models for building acoustics

    DEFF Research Database (Denmark)

    Brunskog, Jonas

    2012-01-01

    In order to reach robust and simplified yet accurate prediction models, energy-based principles are commonly used in many fields of acoustics, especially in building acoustics. This includes simple energy flow models, the framework of statistical energy analysis (SEA), as well as more elaborate principles such as wave intensity analysis (WIA). The European standards for building acoustic predictions, the EN 12354 series, are based on energy flow and SEA principles. In the present paper, different energy-based prediction models are discussed and critically reviewed. Special attention is placed on underlying basic assumptions, such as diffuse fields, high modal overlap, resonant field being dominant, etc., and the consequences of these in terms of limitations in the theory and in the practical use of the models.

  14. Comparative Study of Bankruptcy Prediction Models

    Directory of Open Access Journals (Sweden)

    Isye Arieshanti

    2013-09-01

    Full Text Available Early indication of bankruptcy is important for a company. If a company is aware of its bankruptcy potential, it can take preventive action to anticipate it. In order to detect this potential, a company can utilize a bankruptcy prediction model. The prediction model can be built using machine learning methods. However, the choice of machine learning method should be made carefully, because the suitability of a model depends on the specific problem. Therefore, in this paper we perform a comparative study of several machine learning methods for bankruptcy prediction. According to the comparative study of several models based on machine learning methods (k-NN, fuzzy k-NN, SVM, Bagging Nearest Neighbour SVM, Multilayer Perceptron (MLP), and a hybrid of MLP + Multiple Linear Regression), the fuzzy k-NN method achieves the best performance, with an accuracy of 77.5%.
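
    A comparison of this kind is routine with scikit-learn. A minimal sketch using synthetic data as a stand-in for financial-ratio features; the paper's dataset and tuning are not reproduced, and fuzzy k-NN and the bagging hybrid would require custom implementations.

        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.neural_network import MLPClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # Synthetic stand-in: 10 financial ratios, imbalanced bankrupt/healthy labels
        X, y = make_classification(n_samples=400, n_features=10, n_informative=5,
                                   weights=[0.8, 0.2], random_state=0)

        candidates = {
            "k-NN": KNeighborsClassifier(n_neighbors=5),
            "SVM (RBF)": SVC(kernel="rbf", C=1.0),
            "MLP": MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000,
                                 random_state=0),
        }
        for name, clf in candidates.items():
            pipe = make_pipeline(StandardScaler(), clf)   # scale, then classify
            acc = cross_val_score(pipe, X, y, cv=5, scoring="accuracy")
            print(f"{name}: {acc.mean():.3f} +/- {acc.std():.3f}")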

  15. ITER council proceedings: 1998

    International Nuclear Information System (INIS)

    1999-01-01

    This volume contains documents of the 13th and the 14th ITER council meeting as well as of the 1st extraordinary ITER council meeting. Documents of the ITER meetings held in Vienna and Yokohama during 1998 are also included. The contents include an outline of the ITER objectives, the ITER parameters and design overview as well as operating scenarios and plasma performance. Furthermore, design features, safety and environmental characteristics are given

  16. Intra-patient comparison of reduced-dose model-based iterative reconstruction with standard-dose adaptive statistical iterative reconstruction in the CT diagnosis and follow-up of urolithiasis

    Energy Technology Data Exchange (ETDEWEB)

    Tenant, Sean; Pang, Chun Lap; Dissanayake, Prageeth [Peninsula Radiology Academy, Plymouth (United Kingdom); Vardhanabhuti, Varut [Plymouth University Peninsula Schools of Medicine and Dentistry, Plymouth (United Kingdom); University of Hong Kong, Department of Diagnostic Radiology, Li Ka Shing Faculty of Medicine, Pokfulam (China); Stuckey, Colin; Gutteridge, Catherine [Plymouth Hospitals NHS Trust, Plymouth (United Kingdom); Hyde, Christopher [University of Exeter Medical School, St Luke' s Campus, Exeter (United Kingdom); Roobottom, Carl [Plymouth University Peninsula Schools of Medicine and Dentistry, Plymouth (United Kingdom); Plymouth Hospitals NHS Trust, Plymouth (United Kingdom)

    2017-10-15

    To evaluate the accuracy of reduced-dose CT scans reconstructed using a new generation of model-based iterative reconstruction (MBIR) in the imaging of urinary tract stone disease, compared with a standard-dose CT using 30% adaptive statistical iterative reconstruction. This single-institution prospective study recruited 125 patients presenting either with acute renal colic or for follow-up of known urinary tract stones. They underwent two immediately consecutive scans, one at standard dose settings and one at the lowest dose (highest noise index) the scanner would allow. The reduced-dose scans were reconstructed using both ASIR 30% and MBIR algorithms and reviewed independently by two radiologists. Objective and subjective image quality measures as well as diagnostic data were obtained. The reduced-dose MBIR scan was 100% concordant with the reference standard for the assessment of ureteric stones. It was extremely accurate at identifying calculi of 3 mm and above. The algorithm allowed a dose reduction of 58% without any loss of scan quality. A reduced-dose CT scan using MBIR is accurate in acute imaging for renal colic symptoms and for urolithiasis follow-up and allows a significant reduction in dose. (orig.)

  17. Basic features of boron isotope separation by SILARC method in the two-step iterative static model

    Science.gov (United States)

    Lyakhov, K. A.; Lee, H. J.

    2013-05-01

    In this paper we develop a new static model for boron isotope separation by the laser-assisted retardation of condensation (SILARC) method, on the basis of the model proposed by Jeff Eerkens. Our model is suited to the so-called two-step iterative scheme for isotope separation. This rather simple model helps in understanding the combined effect on boron separation by the SILARC method of all important parameters and the relations between them. These parameters include the carrier gas, the molar fraction of BCl3 molecules in the carrier gas, laser pulse intensity, gas pulse duration, gas pressure and temperature in the reservoir and irradiation cells, optimal irradiation cell and skimmer chamber volumes, and optimal nozzle throughput. A method for finding optimal values of these parameters, based on a global minimum search of an objective function, is suggested. It turns out that the minimum of this objective function is directly related to the minimum of the total energy consumed and the total setup volume. Relations between nozzle throat area, IC volume, laser intensity, number of nozzles, number of vacuum pumps, and the required isotope production rate are derived. Two types of industrial-scale irradiation cells are compared. The first has one large-throughput slit nozzle, while the second has numerous small nozzles arranged in parallel arrays for better overlap with the laser beam. It is shown that the latter significantly outperforms the former. It is argued that NO2 is the best carrier gas for boron isotope separation from the point of view of energy efficiency, and Ar from the point of view of setup compactness.

  18. Prediction Models for Dynamic Demand Response

    Energy Technology Data Exchange (ETDEWEB)

    Aman, Saima; Frincu, Marc; Chelmis, Charalampos; Noor, Muhammad; Simmhan, Yogesh; Prasanna, Viktor K.

    2015-11-02

    As Smart Grids move closer to dynamic curtailment programs, Demand Response (DR) events will become necessary not only on fixed time intervals and weekdays predetermined by static policies, but also during changing decision periods and weekends to react to real-time demand signals. Unique challenges arise in this context vis-a-vis demand prediction and curtailment estimation and the transformation of such tasks into an automated, efficient dynamic demand response (D2R) process. While existing work has concentrated on increasing the accuracy of prediction models for DR, there is a lack of studies of prediction models for D2R, which we address in this paper. Our first contribution is the formal definition of D2R, and the description of its challenges and requirements. Our second contribution is a feasibility analysis of very-short-term prediction of electricity consumption for D2R over a diverse, large-scale dataset that includes both small residential customers and large buildings. Our third, and major, contribution is a set of insights into the predictability of electricity consumption in the context of D2R. Specifically, we focus on prediction models that can operate at a very small data granularity (here 15-min intervals), for both weekdays and weekends - all conditions that characterize scenarios for D2R. We find that short-term time series and simple averaging models used by Independent Service Operators and utilities achieve superior prediction accuracy. We also observe that workdays are more predictable than weekends and holidays. Also, smaller customers have large variation in consumption and are less predictable than larger buildings. Key implications of our findings are that better models are required for small customers and for non-workdays, both of which are critical for D2R. Also, prediction models require just a few days' worth of data, indicating that small amounts of

  19. Are animal models predictive for humans?

    Directory of Open Access Journals (Sweden)

    Greek Ray

    2009-01-01

    Full Text Available Abstract It is one of the central aims of the philosophy of science to elucidate the meanings of scientific terms and also to think critically about their application. The focus of this essay is the scientific term predict and whether there is credible evidence that animal models, especially in toxicology and pathophysiology, can be used to predict human outcomes. Whether animals can be used to predict human response to drugs and other chemicals is apparently a contentious issue. However, when one empirically analyzes animal models using scientific tools, they fall far short of being able to predict human responses. This is not surprising considering what we have learned from fields such as evolutionary and developmental biology, gene regulation and expression, epigenetics, complexity theory, and comparative genomics.

  20. Built To Last: Using Iterative Development Models for Sustainable Scientific Software Development

    Science.gov (United States)

    Jasiak, M. E.; Truslove, I.; Savoie, M.

    2013-12-01

    In scientific research, software development exists fundamentally for the results it creates. The core research must take focus. It seems natural to researchers, driven by grant deadlines, that every dollar invested in software development should be used to push the boundaries of problem solving. This system of values is frequently misaligned with that of creating software in a sustainable fashion; short-term optimizations create longer-term sustainability issues. The National Snow and Ice Data Center (NSIDC) has taken bold cultural steps in using agile and lean development and management methodologies to help its researchers meet critical deadlines while building in the support structure necessary for the code to live far beyond its original milestones. Agile and lean software development methodologies, including Scrum, Kanban, Continuous Delivery and Test-Driven Development, have seen widespread adoption within NSIDC. This focus on development methods is combined with an emphasis on explaining to researchers why these methods produce more desirable results for everyone, as well as on promoting interaction between developers and researchers. This presentation will describe NSIDC's current scientific software development model, how this addresses the short-term versus sustainability dichotomy, the lessons learned and successes realized by transitioning to this agile- and lean-influenced model, and the current challenges faced by the organization.

  1. Evaluation of CASP8 model quality predictions

    KAUST Repository

    Cozzetto, Domenico

    2009-01-01

    The model quality assessment problem consists in the a priori estimation of the overall and per-residue accuracy of protein structure predictions. Over the past years, a number of methods have been developed to address this issue and CASP established a prediction category to evaluate their performance in 2006. In 2008 the experiment was repeated and its results are reported here. Participants were invited to infer the correctness of the protein models submitted by the registered automatic servers. Estimates could apply to both whole models and individual amino acids. Groups involved in the tertiary structure prediction categories were also asked to assign local error estimates to each predicted residue in their own models and their results are also discussed here. The correlation between the predicted and observed correctness measures was the basis of the assessment of the results. We observe that consensus-based methods still perform significantly better than those accepting single models, similarly to what was concluded in the previous edition of the experiment. © 2009 WILEY-LISS, INC.

  2. Interaction among competitive producers in the electricity market: An iterative market model for the strategic management of thermal power plants

    Energy Technology Data Exchange (ETDEWEB)

    Carraretto, Cristian; Zigante, Andrea [University of Padova (Italy). Department of Mechanical Engineering

    2006-12-15

    The liberalization of the electricity sector requires utilities to develop sound operation strategies for their power plants. In this paper, attention is focused on the problem of optimizing the management of the thermal power plants belonging to a strategic producer that competes with other strategic companies and a set of smaller non-strategic ones in the day-ahead market. The market model suggested here determines an equilibrium condition over the selected period of analysis, in which no producer can increase profits by changing its supply offers given all rivals' bids. Power plant technical and operating constraints are considered. An iterative procedure, based on dynamic programming, is used to find the optimum production plans of each producer. Several combinations of power plants and numbers of producers are analyzed, to simulate, for instance, the decommissioning of old expensive power plants, the installation of new more efficient capacity, the severance of large dominant producers into smaller utilities, and the access of new producers to the market. Their effect on power plant management, market equilibrium, and the electricity quantities traded and prices is discussed. (author)
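
    The fixed-point idea behind such an iterative equilibrium search can be illustrated with a textbook Cournot market solved by sequential best responses. This is a deliberately simplified stand-in for the paper's dynamic-programming procedure, with made-up demand and cost numbers.

        import numpy as np

        # Cournot-style toy market: inverse demand p(Q) = a - b*Q, linear costs
        a, b = 100.0, 1.0
        costs = np.array([10.0, 15.0, 20.0])   # marginal cost per producer

        def best_response(q, i):
            """Output that maximises producer i's profit, rivals' outputs fixed."""
            q_rivals = q.sum() - q[i]
            return max(0.0, (a - costs[i] - b * q_rivals) / (2.0 * b))

        q = np.zeros(3)
        for it in range(500):                  # iterate until no one deviates
            q_old = q.copy()
            for i in range(3):                 # sequential best responses
                q[i] = best_response(q, i)
            if np.max(np.abs(q - q_old)) < 1e-9:
                break
        print("equilibrium outputs:", q.round(2),
              " price:", round(a - b * q.sum(), 2))

    At the fixed point, no producer can improve its profit by unilaterally changing its output, which is the equilibrium condition the abstract describes for supply offers.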

  3. Influence of model based iterative reconstruction algorithm on image quality of multiplanar reformations in reduced dose chest CT

    International Nuclear Information System (INIS)

    Barras, Heloise; Dunet, Vincent; Hachulla, Anne-Lise; Grimm, Jochen; Beigelman-Aubry, Catherine

    2016-01-01

    Model-based iterative reconstruction (MBIR) reduces image noise and improves image quality (IQ), but its influence on post-processing tools, including maximal intensity projection (MIP) and minimal intensity projection (mIP), remains unknown. The aim was to evaluate the influence of MBIR on the IQ of native, mIP, and MIP axial and coronal reformats of reduced-dose computed tomography (RD-CT) chest acquisitions. Raw data of 50 patients, who underwent a standard-dose CT (SD-CT) and a follow-up RD-CT with a CT dose index (CTDI) of 2–3 mGy, were reconstructed by MBIR and FBP. Native slices, 4-mm-thick MIP, and 3-mm-thick mIP axial and coronal reformats were generated. The relative IQ, subjective IQ, image noise, and number of artifacts were determined in order to compare the different reconstructions of RD-CT with the reference SD-CT. The lowest noise was observed with MBIR. RD-CT reconstructed by MBIR exhibited the best relative and subjective IQ on coronal views regardless of the post-processing tool. MBIR generated the lowest rate of artifacts on coronal mIP/MIP reformats and the highest one on axial reformats, mainly represented by distortions and stair-step artifacts. The MBIR algorithm reduces image noise but generates more artifacts than FBP on axial mIP and MIP reformats of RD-CT. Conversely, it significantly improves IQ on coronal views, without increasing artifacts, regardless of the post-processing technique.

  4. Interaction among competitive producers in the electricity market: An iterative market model for the strategic management of thermal power plants

    International Nuclear Information System (INIS)

    Carraretto, Cristian; Zigante, Andrea

    2006-01-01

    The liberalization of the electricity sector requires utilities to develop sound operation strategies for their power plants. In this paper, attention is focused on the problem of optimizing the management of the thermal power plants belonging to a strategic producer that competes with other strategic companies and a set of smaller non-strategic ones in the day-ahead market. The market model suggested here determines an equilibrium condition over the selected period of analysis, in which no producer can increase profits by changing its supply offers given all rivals' bids. Power plant technical and operating constraints are considered. An iterative procedure, based on dynamic programming, is used to find the optimum production plans of each producer. Several combinations of power plants and numbers of producers are analyzed, to simulate, for instance, the decommissioning of old expensive power plants, the installation of new more efficient capacity, the severance of large dominant producers into smaller utilities, and the access of new producers to the market. Their effect on power plant management, market equilibrium, and the electricity quantities traded and prices is discussed. (author)

  5. Design and Implementation of Recursive Model Predictive Control for Permanent Magnet Synchronous Motor Drives

    Directory of Open Access Journals (Sweden)

    Xuan Wu

    2015-01-01

    Full Text Available In order to control the permanent-magnet synchronous motor (PMSM) system in the presence of disturbances and nonlinearity, an improved current control algorithm for PMSM systems using recursive model predictive control (RMPC) is developed in this paper. As conventional MPC has to be computed online, its iterative computational procedure requires long computing times. To enhance computational speed, a recursive method based on the recursive Levenberg-Marquardt algorithm (RLMA) and iterative learning control (ILC) is introduced to solve the learning issue in MPC. RMPC is able to significantly decrease the computational cost of traditional MPC in the PMSM system. The effectiveness of the proposed algorithm has been verified by simulation and experimental results.

  6. Model predictive controller design of hydrocracker reactors

    OpenAIRE

    GÖKÇE, Dila

    2014-01-01

    This study summarizes the design of a Model Predictive Controller (MPC) for the hydrocracker unit reactors at the Tüpraş İzmit Refinery. The hydrocracking process, in which heavy vacuum gasoil is converted into lighter and more valuable products at high temperature and pressure, is described briefly. The controller design is described, identification and modeling studies are examined, and the model variables are presented. WABT (Weighted Average Bed Temperature) equalization and conversion increase are simulated...

  7. Multi-Model Ensemble Wake Vortex Prediction

    Science.gov (United States)

    Koerner, Stephan; Holzaepfel, Frank; Ahmad, Nash'at N.

    2015-01-01

    Several multi-model ensemble methods are investigated for predicting wake vortex transport and decay. This study is a joint effort between the National Aeronautics and Space Administration and the Deutsches Zentrum fuer Luft- und Raumfahrt to develop a multi-model ensemble capability using their wake models. An overview of different multi-model ensemble methods and their feasibility for wake applications is presented. The methods include Reliability Ensemble Averaging, Bayesian Model Averaging, and Monte Carlo Simulations. The methodologies are evaluated using data from wake vortex field experiments.

  8. Thermodynamic modeling of activity coefficient and prediction of solubility: Part 1. Predictive models.

    Science.gov (United States)

    Mirmehrabi, Mahmoud; Rohani, Sohrab; Perry, Luisa

    2006-04-01

    A new activity coefficient model was developed from the excess Gibbs free energy in the form G^ex = c·A^a·x_1^b···x_n^b. The constants of the proposed model were considered to be functions of the solute and solvent dielectric constants, Hildebrand solubility parameters, and the specific volumes of the solute and solvent molecules. The proposed model obeys the Gibbs-Duhem condition for activity coefficient models. To generalize the model and make it purely predictive, without any adjustable parameters, its constants were found using the experimental activity coefficients and physical properties of 20 vapor-liquid systems. The predictive capability of the proposed model was tested by calculating the activity coefficients of 41 binary vapor-liquid equilibrium systems; it showed good agreement with the experimental data in comparison with two other predictive models, the UNIFAC and Hildebrand models. The only data used for the prediction of activity coefficients were the dielectric constants, Hildebrand solubility parameters, and specific volumes of the solute and solvent molecules. Furthermore, the proposed model was used to predict the activity coefficient of an organic compound, stearic acid, whose physical properties were available, in methanol and 2-butanone. The predicted activity coefficients, along with the thermal properties of stearic acid, were used to calculate the solubility of stearic acid in these two solvents, and resulted in better agreement with the experimental data compared to the UNIFAC and Hildebrand predictive models.

  9. ITER Council proceedings: 1993

    International Nuclear Information System (INIS)

    1994-01-01

    Records of the third ITER Council Meeting (IC-3), held on 21-22 April 1993, in Tokyo, Japan, and the fourth ITER Council Meeting (IC-4) held on 29 September - 1 October 1993 in San Diego, USA, are presented, giving essential information on the evolution of the ITER Engineering Design Activities (EDA), such as the text of the draft of Protocol 2 further elaborated in ''ITER EDA Agreement and Protocol 2'' (ITER EDA Documentation Series No. 5), recommendations on future work programmes: a description of technology R and D tasks; the establishment of a trust fund for the ITER EDA activities; arrangements for Visiting Home Team Personnel; the general framework for the involvement of other countries in the ITER EDA; conditions for the involvement of Canada in the Euratom Contribution to the ITER EDA; and other attachments as parts of the Records of Decision of the aforementioned ITER Council Meetings

  10. ITER council proceedings: 2000

    International Nuclear Information System (INIS)

    2001-01-01

    No ITER Council Meetings were held during 2000. However, two ITER EDA Meetings were held, one in Tokyo, January 19-20, and one in Moscow, June 29-30. The parties participating in these meetings were those that partake in the extended ITER EDA, namely the EU, the Russian Federation, and Japan. This document contains, among other items, the records of these meetings, the lists of attendees, the agendas, the ITER EDA Status Reports issued during these meetings, the TAC (Technical Advisory Committee) reports and recommendations, the MAC Reports and Advice (also for the July 1999 Meeting), the ITER-FEAT Outline Design Report, the TAC Reports and Recommendations (both meetings), the Site Requirements and Site Design Assumptions, the Tentative Sequence of Technical Activities 2000-2001, the Report of the ITER SWG-P2 on Joint Implementation of ITER, and the EU/ITER Canada Proposal for New ITER Identification.

  11. PREDICTIVE CAPACITY OF ARCH FAMILY MODELS

    Directory of Open Access Journals (Sweden)

    Raphael Silveira Amaro

    2016-03-01

    Full Text Available In the last decades, a remarkable number of models, variants from the Autoregressive Conditional Heteroscedastic family, have been developed and empirically tested, making the process of choosing a particular model extremely complex. This research aims to compare the predictive capacity of five conditional heteroskedasticity models, considering eight different statistical probability distributions, using the Model Confidence Set procedure. The financial series used refer to the log-return series of the Bovespa index and the Dow Jones Industrial Index in the period between 27 October 2008 and 30 December 2014. The empirical evidence showed that, in general, the competing models have great homogeneity in making predictions, whether for the stock market of a developed country or for the stock market of a developing country. An equivalent result can be inferred for the statistical probability distributions that were used.
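
    Fitting one member of this family is routine with, for example, the Python arch package (an assumption here; the paper does not state its software). A minimal sketch with synthetic returns standing in for the Bovespa/DJIA log-return series:

        import numpy as np
        from arch import arch_model

        # Synthetic stand-in for daily log-returns in percent
        rng = np.random.default_rng(0)
        returns = rng.standard_t(df=8, size=1500) * 1.2

        # GARCH(1,1) with Student-t innovations, one candidate specification
        am = arch_model(returns, vol="Garch", p=1, q=1, dist="t")
        res = am.fit(disp="off")
        print(res.params)                                  # omega, alpha, beta, nu
        print(res.forecast(horizon=5).variance.iloc[-1])   # 5-step variance forecast

    Repeating such fits across the candidate volatility models and error distributions yields the loss series that the Model Confidence Set procedure then compares.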

  12. A revised prediction model for natural conception.

    Science.gov (United States)

    Bensdorp, Alexandra J; van der Steeg, Jan Willem; Steures, Pieternel; Habbema, J Dik F; Hompes, Peter G A; Bossuyt, Patrick M M; van der Veen, Fulco; Mol, Ben W J; Eijkemans, Marinus J C

    2017-06-01

    One of the aims in reproductive medicine is to differentiate between couples that have favourable chances of conceiving naturally and those that do not. Since the development of the prediction model of Hunault, the characteristics of the subfertile population have changed. The objective of this analysis was to assess whether additional predictors can refine the Hunault model and extend its applicability. Consecutive subfertile couples with unexplained and mild male subfertility presenting in fertility clinics were asked to participate in a prospective cohort study. We constructed a multivariable prediction model with the predictors from the Hunault model and new potential predictors. The primary outcome, natural conception leading to an ongoing pregnancy, was observed in 1053 women of the 5184 included couples (20%). All predictors of the Hunault model were selected into the revised model, plus an additional seven: woman's body mass index, cycle length, basal FSH level, tubal status, history of previous pregnancies in the current relationship (ongoing pregnancies after natural conception, fertility treatment or miscarriages), semen volume, and semen morphology. Predictions from the revised model seem to concur better with observed pregnancy rates than those of the Hunault model: a c-statistic of 0.71 (95% CI 0.69 to 0.73) compared with 0.59 (95% CI 0.57 to 0.61). Copyright © 2017. Published by Elsevier Ltd.

  13. Fast Nonconvex Model Predictive Control for Commercial Refrigeration

    DEFF Research Database (Denmark)

    Hovgaard, Tobias Gybel; Larsen, Lars F. S.; Jørgensen, John Bagterp

    2012-01-01

    The resulting optimization problem is nonconvex, but it can be solved approximately by a sequence of convex problems that typically converges in fewer than 5 or so iterations. We employ a fast convex quadratic programming solver to carry out the iterations, which is more than fast enough to run in real time. We demonstrate our method on a realistic model, with a full-year simulation, using real historical data. These simulations show substantial cost savings, and reveal how the method exhibits sophisticated response to real-time variations in electricity prices. This demand response is critical to help balance the real-time uncertainties associated with large penetration of intermittent renewable energy sources in a future smart grid.

  14. Modelling the predictive performance of credit scoring

    Directory of Open Access Journals (Sweden)

    Shi-Wei Shen

    2013-07-01

    Research purpose: The purpose of this empirical paper was to examine the predictive performance of credit scoring systems in Taiwan. Motivation for the study: Corporate lending remains a major business line for financial institutions. However, in light of the recent global financial crises, it has become extremely important for financial institutions to implement rigorous means of assessing clients seeking access to credit facilities. Research design, approach and method: Using a data sample of 10 349 observations drawn between 1992 and 2010, logistic regression models were utilised to examine the predictive performance of credit scoring systems. Main findings: A goodness-of-fit test demonstrated that credit scoring models that incorporated the Taiwan Corporate Credit Risk Index (TCRI) and micro- as well as macroeconomic variables possessed greater predictive power. This suggests that macroeconomic variables do have explanatory power for default credit risk. Practical/managerial implications: The originality of the study is that three models were developed to predict corporate firms' defaults based on different microeconomic and macroeconomic factors, such as the TCRI, asset growth rates, stock index and gross domestic product. Contribution/value-add: The study utilises different goodness-of-fit measures and receiver operating characteristics during the examination of the robustness of the predictive power of these factors.

  15. Modelling language evolution: Examples and predictions

    Science.gov (United States)

    Gong, Tao; Shuai, Lan; Zhang, Menghan

    2014-06-01

    We survey recent computer modelling research of language evolution, focusing on a rule-based model simulating the lexicon-syntax coevolution and an equation-based model quantifying the language competition dynamics. We discuss four predictions of these models: (a) correlation between domain-general abilities (e.g. sequential learning) and language-specific mechanisms (e.g. word order processing); (b) coevolution of language and relevant competences (e.g. joint attention); (c) effects of cultural transmission and social structure on linguistic understandability; and (d) commonalities between linguistic, biological, and physical phenomena. All these contribute significantly to our understanding of the evolutions of language structures, individual learning mechanisms, and relevant biological and socio-cultural factors. We conclude the survey by highlighting three future directions of modelling studies of language evolution: (a) adopting experimental approaches for model evaluation; (b) consolidating empirical foundations of models; and (c) multi-disciplinary collaboration among modelling, linguistics, and other relevant disciplines.

  16. IHadoop: Asynchronous iterations for MapReduce

    KAUST Repository

    Elnikety, Eslam Mohamed Ibrahim

    2011-11-01

    MapReduce is a distributed programming framework designed to ease the development of scalable data-intensive applications for large clusters of commodity machines. Most machine learning and data mining applications involve iterative computations over large datasets, such as the Web hyperlink structures and social network graphs. Yet, the MapReduce model does not efficiently support this important class of applications. The architecture of MapReduce, most critically its dataflow techniques and task scheduling, is completely unaware of the nature of iterative applications; tasks are scheduled according to a policy that optimizes the execution for a single iteration, which wastes bandwidth, I/O, and CPU cycles when compared with an optimal execution for a consecutive set of iterations. This work presents iHadoop, a modified MapReduce model, and an associated implementation, optimized for iterative computations. The iHadoop model schedules iterations asynchronously. It connects the output of one iteration to the next, allowing both to process their data concurrently. iHadoop's task scheduler exploits inter-iteration data locality by scheduling tasks that exhibit a producer/consumer relation on the same physical machine, allowing a fast local data transfer. For those iterative applications that require satisfying certain criteria before termination, iHadoop runs the check concurrently during the execution of the subsequent iteration to further reduce the application's latency. This paper also describes our implementation of the iHadoop model, and evaluates its performance against Hadoop, the widely used open source implementation of MapReduce. Experiments using different data analysis applications over real-world and synthetic datasets show that iHadoop performs better than Hadoop for iterative algorithms, reducing the execution time of iterative applications by 25% on average. Furthermore, integrating iHadoop with HaLoop, a variant Hadoop implementation that caches

  17. Model Predictive Control of Sewer Networks

    DEFF Research Database (Denmark)

    Pedersen, Einar B.; Herbertsson, Hannes R.; Niemann, Henrik

    2016-01-01

    The developments in solutions for management of urban drainage are of vital importance, as the amount of sewer water from urban areas continues to increase due to the growth of the world's population and the change in climate conditions. How a sewer network is structured, monitored and controlled has thus become an essential factor for the efficient performance of waste water treatment plants. This paper examines methods for simplified modelling and control of a sewer network. A practical approach to the problem is used by analysing a simplified design model, which is based on the Barcelona benchmark model. Due to the inherent constraints, the applied approach is based on Model Predictive Control.

  18. Bayesian Predictive Models for Rayleigh Wind Speed

    DEFF Research Database (Denmark)

    Shahirinia, Amir; Hajizadeh, Amin; Yu, David C

    2017-01-01

    One of the major challenges with the increase in wind power generation is the uncertain nature of wind speed. So far the uncertainty about wind speed has been presented through probability distributions. Also, the existing models that consider the uncertainty of the wind speed primarily view ... The proposed Bayesian predictive model of the wind speed aggregates the non-homogeneous distributions into a single continuous distribution. Therefore, the result is able to capture the variation among the probability distributions of the wind speeds at the turbines' locations in a wind farm. More specifically, instead of using a wind speed distribution whose parameters are known or estimated, the parameters are considered as random, with variations according to probability distributions. A Bayesian predictive model for a Rayleigh distribution, which has only a single scale parameter, has been proposed. Also closed-form posterior ...
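
    For a Rayleigh likelihood, the standard closed-form Bayesian analysis places an inverse-gamma prior on the squared scale parameter. A sketch under that assumption (the paper's exact prior choices are not reproduced, and the hyperparameters below are made up):

        import numpy as np

        rng = np.random.default_rng(2)
        x = rng.rayleigh(scale=7.0, size=200)   # toy observed wind speeds

        # The Rayleigh likelihood in theta = sigma^2 is conjugate to an
        # inverse-gamma prior IG(a0, b0); the posterior is closed form:
        a0, b0 = 2.0, 10.0                      # made-up hyperparameters
        a_n = a0 + len(x)
        b_n = b0 + 0.5 * np.sum(x ** 2)

        # Posterior predictive draws: sample theta, then a Rayleigh speed
        theta = b_n / rng.gamma(a_n, 1.0, size=10_000)   # inverse-gamma draws
        pred = rng.rayleigh(scale=np.sqrt(theta))
        print("posterior mean of sigma^2:", b_n / (a_n - 1.0))
        print("5%/95% predictive wind speeds:",
              np.quantile(pred, [0.05, 0.95]).round(2))

    Averaging the predictive draws over the posterior of the scale parameter is what lets a single continuous distribution absorb the parameter uncertainty the abstract describes.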

  19. A Hybrid Backward-Forward Iterative Model for Improving Capacity Building of Earth Observations for Sustainable Societal Application

    Science.gov (United States)

    Hossain, F.; Iqbal, N.; Lee, H.; Muhammad, A.

    2015-12-01

    When it comes to building durable capacity for implementing state-of-the-art technology and earth observation (EO) data for improved decision making, it has long been recognized that a unidirectional approach (from research to application) often does not work. Co-design of the capacity-building effort has recently been recommended as a better alternative. This approach is a two-way street where scientists and stakeholders engage intimately along the entire chain of actions, from the design of research experiments to the packaging of decision-making tools, and each party provides an equal amount of input. Scientists execute research experiments based on boundary conditions and outputs that stakeholders define as tangible for decision making. On the other hand, decision-making tools are packaged by stakeholders, with scientists ensuring the application-specific science is relevant. In this talk, we will give an overview of one such iterative capacity-building approach that we have implemented for gravimetry-based satellite (GRACE) EO data for improved groundwater management in Pakistan. We call our approach a hybrid approach, where the initial step is a forward model involving a conventional short-term (3 day) capacity-building workshop in the stakeholder environment addressing a very large audience. In this forward model, the net is cast wide to shortlist a set of highly motivated stakeholder agency staff who are then engaged more directly in 1-1 training. In the next step (the backward model), these shortlisted staff are brought back into the research environment of the scientists (supply) for 1-1 and long-term (6 months) intense brainstorming, training, and design of decision-making tools. The advantage of this backward model is that it gives scientists a much better understanding of the ground conditions and hurdles of making an EO-based scientific innovation work for a specific decision-making problem, which is otherwise fundamentally impossible in conventional

  20. Generation of a statistical shape model with probabilistic point correspondences and the expectation maximization- iterative closest point algorithm

    International Nuclear Information System (INIS)

    Hufnagel, Heike; Pennec, Xavier; Ayache, Nicholas; Ehrhardt, Jan; Handels, Heinz

    2008-01-01

    Identification of point correspondences between shapes is required for statistical analysis of organ shape differences. Since manual identification of landmarks is not a feasible option in 3D, several methods were developed to automatically find one-to-one correspondences on shape surfaces. For unstructured point sets, however, one-to-one correspondences do not exist, but correspondence probabilities can be determined. A method was developed to compute a statistical shape model based on shapes which are represented by unstructured point sets with arbitrary point numbers. A fundamental problem when computing statistical shape models is the determination of correspondences between the points of the shape observations of the training data set. In the absence of landmarks, exact correspondences can only be determined between continuous surfaces, not between unstructured point sets. To overcome this problem, we introduce correspondence probabilities instead of exact correspondences. The correspondence probabilities are found by aligning the observation shapes with the affine expectation maximization-iterative closest point (EM-ICP) registration algorithm. In a second step, the correspondence probabilities are used as input to compute a mean shape (represented once again by an unstructured point set). Both steps are unified in a single optimization criterion which depends on the two parameters 'registration transformation' and 'mean shape'. In a last step, a variability model which best represents the variability in the training data set is computed. Experiments on synthetic data sets and in vivo brain structure data sets (MRI) are then designed to evaluate the performance of our algorithm. The new method was applied to brain MRI data sets, and the estimated point correspondences were compared to a statistical shape model built on exact correspondences. Based on established measures of 'generalization ability' and 'specificity', the estimates were very satisfactory.
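
    The E-step that produces those correspondence probabilities can be sketched compactly. A minimal illustration under an isotropic Gaussian noise assumption, with random point sets standing in for the brain-structure data (the affine update and mean-shape optimization of the full algorithm are omitted):

        import numpy as np

        def correspondence_probs(X, Y, sigma=1.0):
            """E-step of EM-ICP: soft correspondence probabilities between
            point sets X (n x d) and Y (m x d) under isotropic Gaussian noise."""
            d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
            W = np.exp(-d2 / (2.0 * sigma ** 2))
            return W / W.sum(axis=1, keepdims=True)   # each row sums to 1

        rng = np.random.default_rng(3)
        Y = rng.normal(size=(30, 3))                           # "mean shape"
        X = Y[rng.permutation(30)[:25]] + 0.05 * rng.normal(size=(25, 3))

        W = correspondence_probs(X, Y, sigma=0.2)
        # Each X point is matched to a probability-weighted barycentre of Y,
        # replacing the hard nearest-neighbour match of classical ICP.
        targets = W @ Y
        print("mean soft-match error:",
              np.linalg.norm(X - targets, axis=1).mean())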

  1. Comparison of two ordinal prediction models

    DEFF Research Database (Denmark)

    Kattan, Michael W; Gerds, Thomas A

    2015-01-01

    Various reasons may exist for preferring one staging system (i.e. old or new), such as the level of evidence for one or more factors included in the system or the general opinions of expert clinicians. However, given the major objective of estimating prognosis on an ordinal scale, we argue that the rival staging system candidates should be compared on their ability to predict outcome. We sought to outline an algorithm that would compare two rival ordinal systems on their predictive ability. RESULTS: We devised an algorithm based largely on the concordance index, which is appropriate for comparing two models in their ability to rank observations. We demonstrate our algorithm with a prostate cancer staging system example. CONCLUSION: We have provided an algorithm for selecting the preferred staging system based on prognostic accuracy. It appears to be useful for the purpose of selecting between two ordinal prediction models.
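
    The concordance index at the core of such a comparison is a simple pairwise count. A minimal sketch with two hypothetical staging assignments and an ordinal outcome (not the authors' prostate cancer data):

        from itertools import combinations

        def concordance_index(pred, outcome):
            """Fraction of usable pairs that prediction and outcome order the
            same way; prediction ties score 1/2, outcome ties are skipped."""
            conc, usable = 0.0, 0
            for i, j in combinations(range(len(pred)), 2):
                if outcome[i] == outcome[j]:
                    continue
                usable += 1
                if pred[i] == pred[j]:
                    conc += 0.5
                elif (pred[i] < pred[j]) == (outcome[i] < outcome[j]):
                    conc += 1.0
            return conc / usable

        stage_old = [1, 1, 2, 2, 3, 3]   # hypothetical rival staging systems
        stage_new = [1, 2, 2, 3, 3, 3]
        outcome = [0, 1, 1, 2, 2, 3]   # ordinal outcome severity
        print("c-index (old):", round(concordance_index(stage_old, outcome), 3))
        print("c-index (new):", round(concordance_index(stage_new, outcome), 3))

    The system with the higher concordance index ranks patients more consistently with their observed outcomes, which is the selection criterion the abstract proposes.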

  2. The ITER reduced cost design

    International Nuclear Information System (INIS)

    Aymar, R.

    2000-01-01

    Six years of joint work under the international thermonuclear experimental reactor (ITER) EDA agreement yielded a mature design for ITER which met the objectives set for it (ITER final design report (FDR)), together with a corpus of scientific and technological data, large/full scale models or prototypes of key components/systems and progress in understanding which both validated the specific design and are generally applicable to a next step, reactor-oriented tokamak on the road to the development of fusion as an energy source. In response to requests from the parties to explore the scope for addressing ITER's programmatic objective at reduced cost, the study of options for cost reduction has been the main feature of ITER work since summer 1998, using the advances in physics and technology databases, understandings, and tools arising out of the ITER collaboration to date. A joint concept improvement task force drawn from the joint central team and home teams has overseen and co-ordinated studies of the key issues in physics and technology which control the possibility of reducing the overall investment and simultaneously achieving the required objectives. The aim of this task force is to achieve common understandings of these issues and their consequences so as to inform and to influence the best cost-benefit choice, which will attract consensus between the ITER partners. A report to be submitted to the parties by the end of 1999 will present key elements of a specific design of minimum capital investment, with a target cost saving of about 50% the cost of the ITER FDR design, and a restricted number of design variants. Outline conclusions from the work of the task force are presented in terms of physics, operations, and design of the main tokamak systems. Possible implications for the way forward are discussed

  3. Predictive analytics can support the ACO model.

    Science.gov (United States)

    Bradley, Paul

    2012-04-01

    Predictive analytics can be used to rapidly spot hard-to-identify opportunities to better manage care--a key tool in accountable care. When considering analytics models, healthcare providers should: Make value-based care a priority and act on information from analytics models. Create a road map that includes achievable steps, rather than major endeavors. Set long-term expectations and recognize that the effectiveness of an analytics program takes time, unlike revenue cycle initiatives that may show a quick return.

  4. Predictive modeling in homogeneous catalysis: a tutorial

    NARCIS (Netherlands)

    Maldonado, A.G.; Rothenberg, G.

    2010-01-01

    Predictive modeling has become a practical research tool in homogeneous catalysis. It can help to pinpoint ‘good regions’ in the catalyst space, narrowing the search for the optimal catalyst for a given reaction. Just like any other new idea, in silico catalyst optimization is accepted by some

  5. Model predictive control of smart microgrids

    DEFF Research Database (Denmark)

    Hu, Jiefeng; Zhu, Jianguo; Guerrero, Josep M.

    2014-01-01

    required to realise high-performance of distributed generations and will realise innovative control techniques utilising model predictive control (MPC) to assist in coordinating the plethora of generation and load combinations, thus enable the effective exploitation of the clean renewable energy sources...

  6. Feedback model predictive control by randomized algorithms

    NARCIS (Netherlands)

    Batina, Ivo; Stoorvogel, Antonie Arij; Weiland, Siep

    2001-01-01

    In this paper we present a further development of an algorithm for stochastic disturbance rejection in model predictive control with input constraints based on randomized algorithms. The algorithm presented in our work can solve the problem of stochastic disturbance rejection approximately but with

  7. A Robustly Stabilizing Model Predictive Control Algorithm

    Science.gov (United States)

    Acikmese, A. Behcet; Carson, John M., III

    2007-01-01

    A model predictive control (MPC) algorithm that differs from prior MPC algorithms has been developed for controlling an uncertain nonlinear system. This algorithm guarantees the resolvability of an associated finite-horizon optimal-control problem in a receding-horizon implementation.

  8. Hierarchical Model Predictive Control for Resource Distribution

    DEFF Research Database (Denmark)

    Bendtsen, Jan Dimon; Trangbæk, K; Stoustrup, Jakob

    2010-01-01

    This paper deals with hierarchichal model predictive control (MPC) of distributed systems. A three level hierachical approach is proposed, consisting of a high level MPC controller, a second level of so-called aggregators, controlled by an online MPC-like algorithm, and a lower level of autonomous...

  9. Model Predictive Control based on Finite Impulse Response Models

    DEFF Research Database (Denmark)

    Prasath, Guru; Jørgensen, John Bagterp

    2008-01-01

    We develop a regularized l2 finite impulse response (FIR) predictive controller with input and input-rate constraints. Feedback is based on a simple constant output disturbance filter. The performance of the predictive controller in the face of plant-model mismatch is investigated by simulations ...

  10. PSO-MISMO modeling strategy for multistep-ahead time series prediction.

    Science.gov (United States)

    Bao, Yukun; Xiong, Tao; Hu, Zhongyi

    2014-05-01

    Multistep-ahead time series prediction is one of the most challenging research topics in the field of time series modeling and prediction, and is continually under research. Recently, the multiple-input several multiple-outputs (MISMO) modeling strategy has been proposed as a promising alternative for multistep-ahead time series prediction, exhibiting advantages compared with the two currently dominating strategies, the iterated and the direct strategies. Built on the established MISMO strategy, this paper proposes a particle swarm optimization (PSO)-based MISMO modeling strategy, which is capable of determining the number of sub-models in a self-adaptive mode, with varying prediction horizons. Rather than deriving crisp divides with equal-sized prediction horizons from the established MISMO, the proposed PSO-MISMO strategy, implemented with neural networks, employs a heuristic to create flexible divides with varying sizes of prediction horizons and to generate corresponding sub-models, providing considerable flexibility in model construction, which has been validated with simulated and real datasets.
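
    The MISMO idea of splitting an H-step horizon into sub-models with flexible divides can be sketched in a few lines. The code below is an illustration under simplifying assumptions, not the authors' PSO-MISMO implementation: the divide sizes are fixed by hand where PSO would search over them, and a small scikit-learn network stands in for the paper's neural networks.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        def embed(series, n_lags, horizon):
            # build (lag window -> next H values) training pairs
            X, Y = [], []
            for t in range(n_lags, len(series) - horizon + 1):
                X.append(series[t - n_lags:t])
                Y.append(series[t:t + horizon])
            return np.array(X), np.array(Y)

        rng = np.random.default_rng(0)
        series = np.sin(np.linspace(0, 60, 600)) + 0.1 * rng.normal(size=600)
        H = 12
        X, Y = embed(series, n_lags=24, horizon=H)

        divides = [4, 3, 5]          # flexible horizon chunks; PSO would tune these
        assert sum(divides) == H
        models, start = [], 0
        for size in divides:
            m = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
            m.fit(X, Y[:, start:start + size])   # one multi-output sub-model per chunk
            models.append(m)
            start += size

        forecast = np.hstack([m.predict(X[-1:]) for m in models])
        print(forecast.shape)        # (1, 12): the assembled multistep forecast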

  11. The ITER project technological challenges

    CERN Multimedia

    CERN. Geneva; Lister, Joseph; Marquina, Miguel A; Todesco, Ezio

    2005-01-01

    The first lecture reminds us of the ITER challenges, presents hard engineering problems, typically due to mechanical forces and thermal loads and identifies where the physics uncertainties play a significant role in the engineering requirements. The second lecture presents soft engineering problems of measuring the plasma parameters, feedback control of the plasma and handling the physics data flow and slow controls data flow from a large experiment like ITER. The last three lectures focus on superconductors for fusion. The third lecture reviews the design criteria and manufacturing methods for 6 milestone-conductors of large fusion devices (T-7, T-15, Tore Supra, LHD, W-7X, ITER). The evolution of the designer approach and the available technologies are critically discussed. The fourth lecture is devoted to the issue of performance prediction, from a superconducting wire to a large size conductor. The role of scaling laws, self-field, current distribution, voltage-current characteristic and transposition are...

  12. Submillisievert CT using model-based iterative reconstruction with lung-specific setting: An initial phantom study

    Energy Technology Data Exchange (ETDEWEB)

    Hata, Akinori; Yanagawa, Masahiro; Honda, Osamu; Gyobu, Tomoko; Ueda, Ken; Tomiyama, Noriyuki [Osaka University Graduate School of Medicine, Department of Diagnostic and Interventional Radiology, Suita, Osaka (Japan)

    2016-12-15

    To assess image quality of filtered back-projection (FBP) and model-based iterative reconstruction (MBIR) with a conventional setting and a new lung-specific setting on submillisievert CT. A lung phantom with artificial nodules was scanned with 10 mA at 120 kVp and 80 kVp (0.14 mSv and 0.05 mSv, respectively); images were reconstructed using FBP and MBIR with conventional setting (MBIR{sub Stnd}) and lung-specific settings (MBIR{sub RP20/Tx} and MBIR{sub RP20}). Three observers subjectively scored overall image quality and image findings on a 5-point scale (1 = worst, 5 = best) compared with reference standard images (50 mA-FBP at 120, 100, 80 kVp). Image noise was measured objectively. MBIR{sub RP20/Tx} performed significantly better than MBIR{sub Stnd} for overall image quality in 80-kVp images (p < 0.01), blurring of the border between lung and chest wall in 120-kVp images (p < 0.05) and the ventral area of 80-kVp images (p < 0.001), and clarity of small vessels in the ventral area of 80-kVp images (p = 0.037). At 120 kVp, 10 mA-MBIR{sub RP20} and 10 mA-MBIR{sub RP20/Tx} showed similar performance to 50 mA-FBP. MBIR{sub Stnd} was better for noise reduction. Except for blurring in 120 kVp-MBIR{sub Stnd}, MBIRs performed better than FBP. Although a conventional setting was advantageous in noise reduction, a lung-specific setting can provide more appropriate image quality, even on submillisievert CT. (orig.)

  13. Persistent pulmonary subsolid nodules: model-based iterative reconstruction for nodule classification and measurement variability on low-dose CT.

    Science.gov (United States)

    Kim, Hyungjin; Park, Chang Min; Kim, Seong Ho; Lee, Sang Min; Park, Sang Joon; Lee, Kyung Hee; Goo, Jin Mo

    2014-11-01

    To compare the pulmonary subsolid nodule (SSN) classification agreement and measurement variability between filtered back projection (FBP) and model-based iterative reconstruction (MBIR). Low-dose CTs were reconstructed using FBP and MBIR for 47 patients with 47 SSNs. Two readers independently classified SSNs into pure or part-solid ground-glass nodules, and measured the size of the whole nodule and solid portion twice on both reconstruction algorithms. Nodule classification agreement was analyzed using Cohen's kappa and compared between reconstruction algorithms using McNemar's test. Measurement variability was investigated using Bland-Altman analysis and compared with the paired t-test. Cohen's kappa for inter-reader SSN classification agreement was 0.541-0.662 on FBP and 0.778-0.866 on MBIR. Between the two readers, nodule classification was consistent in 79.8 % (75/94) with FBP and 91.5 % (86/94) with MBIR (p = 0.027). The inter-reader measurement variability range was -5.0 to 2.1 mm on FBP and -3.3 to 1.8 mm on MBIR for whole nodule size, and -6.5 to 0.9 mm on FBP and -5.5 to 1.5 mm on MBIR for solid portion size. Inter-reader measurement differences were significantly smaller on MBIR (p = 0.027, whole nodule; p = 0.011, solid portion). MBIR significantly improved SSN classification agreement and reduced measurement variability of both whole nodules and solid portions between readers. • Low-dose CT using MBIR algorithm improves reproducibility in the classification of SSNs. • MBIR would enable more confident clinical planning according to the SSN type. • Reduced measurement variability on MBIR allows earlier detection of potentially malignant nodules.
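
    The agreement statistics this study relies on are all one-liners in standard Python libraries. A minimal sketch with hypothetical reader data (the numbers are illustrative, not the study's):

        import numpy as np
        from sklearn.metrics import cohen_kappa_score
        from statsmodels.stats.contingency_tables import mcnemar

        # hypothetical classifications by two readers (0 = pure, 1 = part-solid)
        reader1 = np.array([0, 1, 1, 0, 1, 0, 0, 1, 1, 0])
        reader2 = np.array([0, 1, 0, 0, 1, 0, 1, 1, 1, 0])
        print("kappa:", cohen_kappa_score(reader1, reader2))

        # McNemar's test on paired consistent/inconsistent counts (FBP vs. MBIR)
        table = [[70, 5],    # consistent on both, consistent on FBP only
                 [16, 3]]    # consistent on MBIR only, consistent on neither
        print("McNemar p:", mcnemar(table, exact=True).pvalue)

        # Bland-Altman bias and limits of agreement for paired sizes (mm)
        m1 = np.array([8.1, 10.4, 6.2, 12.0, 9.3])
        m2 = np.array([7.6, 10.9, 6.0, 11.1, 9.8])
        diff = m1 - m2
        bias, loa = diff.mean(), 1.96 * diff.std(ddof=1)
        print(f"bias {bias:.2f} mm, limits [{bias - loa:.2f}, {bias + loa:.2f}] mm")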

  14. A two-dimensional iterative panel method and boundary layer model for bio-inspired multi-body wings

    Science.gov (United States)

    Blower, Christopher J.; Dhruv, Akash; Wickenheiser, Adam M.

    2014-03-01

    The increased use of Unmanned Aerial Vehicles (UAVs) has created a continuous demand for improved flight capabilities and range of use. During the last decade, engineers have turned to bio-inspiration for new and innovative flow control methods for gust alleviation, maneuverability, and stability improvement using morphing aircraft wings. The bio-inspired wing design considered in this study mimics the flow manipulation techniques performed by birds to extend the operating envelope of UAVs through the installation of an array of feather-like panels across the airfoil's upper and lower surfaces while replacing the trailing edge flap. Each flap has the ability to deflect into both the airfoil and the inbound airflow using hinge points with a single degree-of-freedom, situated at 20%, 40%, 60% and 80% of the chord. The installation of the surface flaps offers configurations that enable advantageous maneuvers while alleviating gust disturbances. Due to the number of possible permutations available for the flap configurations, an iterative constant-strength doublet/source panel method has been developed with an integrated boundary layer model to calculate the pressure distribution and viscous drag over the wing's surface. As a result, the lift, drag and moment coefficients for each airfoil configuration can be calculated. The flight coefficients of this numerical method are validated using experimental data from a low-speed suction wind tunnel operating at a Reynolds number of 300,000. This method enables the aerodynamic assessment of a morphing wing profile to be performed accurately and efficiently in comparison to Computational Fluid Dynamics methods and experiments as discussed herein.

  15. Persistent pulmonary subsolid nodules: model-based iterative reconstruction for nodule classification and measurement variability on low-dose CT

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyungjin; Kim, Seong Ho; Lee, Sang Min; Lee, Kyung Hee [Seoul National University College of Medicine, Department of Radiology, Seoul (Korea, Republic of); Seoul National University Medical Research Center, Institute of Radiation Medicine, Seoul (Korea, Republic of); Park, Chang Min; Park, Sang Joon; Goo, Jin Mo [Seoul National University College of Medicine, Department of Radiology, Seoul (Korea, Republic of); Seoul National University Medical Research Center, Institute of Radiation Medicine, Seoul (Korea, Republic of); Seoul National University, Cancer Research Institute, Seoul (Korea, Republic of)

    2014-11-15

    To compare the pulmonary subsolid nodule (SSN) classification agreement and measurement variability between filtered back projection (FBP) and model-based iterative reconstruction (MBIR). Low-dose CTs were reconstructed using FBP and MBIR for 47 patients with 47 SSNs. Two readers independently classified SSNs into pure or part-solid ground-glass nodules, and measured the size of the whole nodule and solid portion twice on both reconstruction algorithms. Nodule classification agreement was analyzed using Cohen's kappa and compared between reconstruction algorithms using McNemar's test. Measurement variability was investigated using Bland-Altman analysis and compared with the paired t-test. Cohen's kappa for inter-reader SSN classification agreement was 0.541-0.662 on FBP and 0.778-0.866 on MBIR. Between the two readers, nodule classification was consistent in 79.8 % (75/94) with FBP and 91.5 % (86/94) with MBIR (p = 0.027). The inter-reader measurement variability range was -5.0 to 2.1 mm on FBP and -3.3 to 1.8 mm on MBIR for whole nodule size, and -6.5 to 0.9 mm on FBP and -5.5 to 1.5 mm on MBIR for solid portion size. Inter-reader measurement differences were significantly smaller on MBIR (p = 0.027, whole nodule; p = 0.011, solid portion). MBIR significantly improved SSN classification agreement and reduced measurement variability of both whole nodules and solid portions between readers. (orig.)

  16. Disease prediction models and operational readiness.

    Directory of Open Access Journals (Sweden)

    Courtney D Corley

    Full Text Available The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. We define a disease event to be a biological event with focus on the One Health paradigm. These events are characterized by evidence of infection and/or disease condition. We reviewed models that attempted to predict a disease event, not merely its transmission dynamics, and we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). We searched commercial and government databases and harvested Google search results for eligible models, using terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. After removal of duplications and extraneous material, a core collection of 6,524 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. As a result, we systematically reviewed 44 papers, and the results are presented in this analysis. We identified 44 models, classified as one or more of the following: event prediction (4), spatial (26), ecological niche (28), diagnostic or clinical (6), spread or response (9), and reviews (3). The model parameters (e.g., etiology, climatic, spatial, cultural) and data sources (e.g., remote sensing, non-governmental organizations, expert opinion, epidemiological) were recorded and reviewed. A component of this review is the identification of verification and validation (V&V) methods applied to each model, if any V&V method was reported. All models were classified as either having undergone Some Verification or Validation method, or No Verification or Validation. We close by outlining an initial set of operational readiness level guidelines for disease prediction models based upon established Technology

  17. Caries risk assessment models in caries prediction

    Directory of Open Access Journals (Sweden)

    Amila Zukanović

    2013-11-01

    Full Text Available Objective. The aim of this research was to assess the efficiency of different multifactor models in caries prediction. Material and methods. Data from the questionnaire and objective examination of 109 examinees was entered into the Cariogram, Previser and Caries-Risk Assessment Tool (CAT) multifactor risk assessment models. Caries risk was assessed with the help of all three models for each patient, classifying them as low, medium or high-risk patients. The development of new caries lesions over a period of three years [Decay Missing Filled Tooth (DMFT) increment = difference between Decay Missing Filled Tooth Surface (DMFTS) index at baseline and follow-up], provided for examination of the predictive capacity concerning different multifactor models. Results. The data gathered showed that different multifactor risk assessment models give significantly different results (Friedman test: Chi square = 100.073, p=0.000). Cariogram is the model which identified the majority of examinees as medium risk patients (70%). The other two models were more radical in risk assessment, giving more unfavorable risk profiles for patients. In only 12% of the patients did the three multifactor models assess the risk in the same way. Previser and CAT gave the same results in 63% of cases; the Wilcoxon test showed that there is no statistically significant difference in caries risk assessment between these two models (Z = -1.805, p=0.071). Conclusions. Evaluation of three different multifactor caries risk assessment models (Cariogram, PreViser and CAT) showed that only the Cariogram can successfully predict new caries development in 12-year-old Bosnian children.
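
    The two non-parametric tests quoted in the results are available directly in SciPy. A hedged sketch on made-up risk categories (0 = low, 1 = medium, 2 = high), standing in for the study's 109 patients:

        import numpy as np
        from scipy.stats import friedmanchisquare, wilcoxon

        rng = np.random.default_rng(0)
        cariogram = rng.integers(0, 3, size=109)
        previser  = np.clip(cariogram + rng.integers(0, 2, size=109), 0, 2)
        cat       = np.clip(cariogram + rng.integers(0, 2, size=109), 0, 2)

        # do the three models assign systematically different risk profiles?
        stat, p = friedmanchisquare(cariogram, previser, cat)
        print(f"Friedman chi-square = {stat:.3f}, p = {p:.4f}")

        # pairwise follow-up between two of the models, as in the study
        stat, p = wilcoxon(previser, cat)
        print(f"Wilcoxon statistic = {stat:.1f}, p = {p:.4f}")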

  18. Link Prediction via Sparse Gaussian Graphical Model

    Directory of Open Access Journals (Sweden)

    Liangliang Zhang

    2016-01-01

    Full Text Available Link prediction is an important task in complex network analysis. Traditional link prediction methods are limited by network topology and lack of node property information, which makes predicting links challenging. In this study, we address link prediction using a sparse Gaussian graphical model and demonstrate its theoretical and practical effectiveness. In theory, link prediction is executed by estimating the inverse covariance matrix of samples to overcome information limits. The proposed method was evaluated with four small and four large real-world datasets. The experimental results show that the area under the curve (AUC) value obtained by the proposed method improved by an average of 3% and 12.5% on the small and large datasets, respectively, compared to 13 mainstream similarity methods. This method outperforms the baseline method, and the prediction accuracy is superior to mainstream methods when using only 80% of the training set. The method also provides significantly higher AUC values when using only 60% of the training set in the Dolphin and Taro datasets. Furthermore, the error rate of the proposed method demonstrates superior performance with all datasets compared to mainstream methods.
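
    The core estimation step, recovering a sparse precision (inverse covariance) matrix and reading candidate links off its off-diagonal structure, can be sketched with scikit-learn. This illustrates the general approach only, not the paper's exact estimator or scoring rule; the data are synthetic.

        import numpy as np
        from sklearn.covariance import GraphicalLasso

        rng = np.random.default_rng(1)
        X = rng.normal(size=(200, 8))      # rows = samples, columns = nodes
        X[:, 1] += 0.8 * X[:, 0]           # induce dependence between nodes 0 and 1

        P = GraphicalLasso(alpha=0.05).fit(X).precision_

        # score candidate links by partial-correlation magnitude
        d = np.sqrt(np.diag(P))
        pcorr = -P / np.outer(d, d)
        np.fill_diagonal(pcorr, 0.0)
        i, j = np.unravel_index(np.abs(pcorr).argmax(), pcorr.shape)
        print(f"strongest predicted link: ({i}, {j}), score {pcorr[i, j]:.3f}")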

  19. Coronary Computed Tomographic Angiography at 80 kVp and Knowledge-Based Iterative Model Reconstruction Is Non-Inferior to that at 100 kVp with Iterative Reconstruction.

    Directory of Open Access Journals (Sweden)

    Joohee Lee

    Full Text Available The aims of this study were to compare the image noise and quality of coronary computed tomographic angiography (CCTA) at 80 kVp with knowledge-based iterative model reconstruction (IMR) to those of CCTA at 100 kVp with hybrid iterative reconstruction (IR), and to evaluate the feasibility of a low-dose radiation protocol with IMR. Thirty subjects who underwent prospective electrocardiogram-gating CCTA at 80 kVp, 150 mAs, and IMR (Group A), and 30 subjects with 100 kVp, 150 mAs, and hybrid IR (Group B) were retrospectively enrolled after sample-size calculation. A BMI of less than 25 kg/m2 was required for inclusion. The attenuation value and image noise of CCTA were measured and the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were calculated at the proximal right coronary artery and left main coronary artery. The image noise was analyzed using a non-inferiority test. The CCTA images were qualitatively evaluated using a four-point scale. The radiation dose was significantly lower in Group A than Group B (0.69 ± 0.08 mSv vs. 1.39 ± 0.15 mSv, p < 0.001). The attenuation values were higher in Group A than Group B (p < 0.001). The SNR and CNR in Group A were higher than those of Group B. The image noise of Group A was non-inferior to that of Group B. Qualitative image quality of Group A was better than that of Group B (3.6 vs. 3.4, p = 0.017). CCTA at 80 kVp with IMR could reduce the radiation dose by about 50%, with non-inferior image noise and image quality compared to those of CCTA at 100 kVp with hybrid IR.

  20. ITER council proceedings: 1995

    International Nuclear Information System (INIS)

    1996-01-01

    Records of the 8. ITER Council Meeting (IC-8), held on 26-27 July 1995, in San Diego, USA, and the 9. ITER Council Meeting (IC-9) held on 12-13 December 1995, in Garching, Germany, are presented, giving essential information on the evolution of the ITER Engineering Design Activities (EDA) and the ITER Interim Design Report Package and Relevant Documents. Figs, tabs

  1. Characterizing Attention with Predictive Network Models.

    Science.gov (United States)

    Rosenberg, M D; Finn, E S; Scheinost, D; Constable, R T; Chun, M M

    2017-04-01

    Recent work shows that models based on functional connectivity in large-scale brain networks can predict individuals' attentional abilities. While being some of the first generalizable neuromarkers of cognitive function, these models also inform our basic understanding of attention, providing empirical evidence that: (i) attention is a network property of brain computation; (ii) the functional architecture that underlies attention can be measured while people are not engaged in any explicit task; and (iii) this architecture supports a general attentional ability that is common to several laboratory-based tasks and is impaired in attention deficit hyperactivity disorder (ADHD). Looking ahead, connectivity-based predictive models of attention and other cognitive abilities and behaviors may potentially improve the assessment, diagnosis, and treatment of clinical dysfunction. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Genetic models of homosexuality: generating testable predictions

    Science.gov (United States)

    Gavrilets, Sergey; Rice, William R

    2006-01-01

    Homosexuality is a common occurrence in humans and other species, yet its genetic and evolutionary basis is poorly understood. Here, we formulate and study a series of simple mathematical models for the purpose of predicting empirical patterns that can be used to determine the form of selection that leads to polymorphism of genes influencing homosexuality. Specifically, we develop theory to make contrasting predictions about the genetic characteristics of genes influencing homosexuality including: (i) chromosomal location, (ii) dominance among segregating alleles and (iii) effect sizes that distinguish between the two major models for their polymorphism: the overdominance and sexual antagonism models. We conclude that the measurement of the genetic characteristics of quantitative trait loci (QTLs) found in genomic screens for genes influencing homosexuality can be highly informative in resolving the form of natural selection maintaining their polymorphism. PMID:17015344

  3. Kinetic modeling of high-Z tungsten impurity transport in ITER plasmas using the IMPGYRO code in the trace impurity limit

    Science.gov (United States)

    Yamoto, S.; Bonnin, X.; Homma, Y.; Inoue, H.; Hoshino, K.; Hatayama, A.; Pitts, R. A.

    2017-11-01

    In order to obtain a better understanding of tungsten (W) transport processes, we are developing the Monte-Carlo W transport code IMPGYRO. The code has the following characteristics which are important for calculating W transport: (1) the exact Larmor motion of W ions is computed, so that the effects of drifts are automatically taken into account; (2) Coulomb collisions between W impurities and background plasma ions are modelled using the Binary Collision Model, which provides more precise kinetic calculations of the friction and thermal forces. Using the IMPGYRO code, W production/transport in the ITER geometry has been calculated under two different divertor operation modes (Case A: partially detached state and Case B: high recycling state) obtained from the SOLPS-ITER code suite calculation without the effect of drifts. The resulting W density in the upstream SOL (scrape-off layer) strongly depends on the divertor operation mode. From the comparison of W impurity transport between Case A and Case B, obtaining a partially detached state is shown to be effective in reducing W impurities in the upstream SOL. The limitations of the employed model and the validity of the above results are discussed and future problems are summarized for further applications of the IMPGYRO code to ITER plasmas.
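
    The first IMPGYRO characteristic, following the exact Larmor motion of W ions, is the kind of update a standard Boris pusher performs. The sketch below is a generic Boris step for one ion in uniform fields, offered purely as an illustration of full-orbit integration; it is not the IMPGYRO code, and the charge state and field values are hypothetical.

        import numpy as np

        def boris_push(x, v, q_m, E, B, dt):
            # half electric kick, magnetic rotation, half electric kick
            v_minus = v + 0.5 * dt * q_m * E
            t = 0.5 * dt * q_m * B
            s = 2.0 * t / (1.0 + t @ t)
            v_prime = v_minus + np.cross(v_minus, t)
            v_plus = v_minus + np.cross(v_prime, s)
            v_new = v_plus + 0.5 * dt * q_m * E
            return x + dt * v_new, v_new

        # hypothetical W10+ ion in an ITER-like 5.3 T field
        q_m = 10 * 1.602e-19 / (184 * 1.661e-27)   # charge-to-mass ratio [C/kg]
        E, B = np.zeros(3), np.array([0.0, 0.0, 5.3])
        x, v, dt = np.zeros(3), np.array([1.0e4, 0.0, 0.0]), 1e-9
        for _ in range(1000):
            x, v = boris_push(x, v, q_m, E, B, dt)
        print("speed drift:", abs(np.linalg.norm(v) - 1.0e4))   # ~0: gyration preserved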

  4. A new method for assessing content validity in model-based creation and iteration of eHealth interventions.

    Science.gov (United States)

    Kassam-Adams, Nancy; Marsac, Meghan L; Kohser, Kristen L; Kenardy, Justin A; March, Sonja; Winston, Flaura K

    2015-04-15

    The advent of eHealth interventions to address psychological concerns and health behaviors has created new opportunities, including the ability to optimize the effectiveness of intervention activities and then deliver these activities consistently to a large number of individuals in need. Given that eHealth interventions grounded in a well-delineated theoretical model for change are more likely to be effective and that eHealth interventions can be costly to develop, assuring the match of final intervention content and activities to the underlying model is a key step. We propose to apply the concept of "content validity" as a crucial checkpoint to evaluate the extent to which proposed intervention activities in an eHealth intervention program are valid (eg, relevant and likely to be effective) for the specific mechanism of change that each is intended to target and the intended target population for the intervention. The aims of this paper are to define content validity as it applies to model-based eHealth intervention development, to present a feasible method for assessing content validity in this context, and to describe the implementation of this new method during the development of a Web-based intervention for children. We designed a practical 5-step method for assessing content validity in eHealth interventions that includes defining key intervention targets, delineating intervention activity-target pairings, identifying experts and using a survey tool to gather expert ratings of the relevance of each activity to its intended target, its likely effectiveness in achieving the intended target, and its appropriateness with a specific intended audience, and then using quantitative and qualitative results to identify intervention activities that may need modification. We applied this method during our development of the Coping Coach Web-based intervention for school-age children. In the evaluation of Coping Coach content validity, 15 experts from five countries

  5. A statistical model for predicting muscle performance

    Science.gov (United States)

    Byerly, Diane Leslie De Caix

    The objective of these studies was to develop a capability for predicting muscle performance and fatigue to be utilized for both space- and ground-based applications. To develop this predictive model, healthy test subjects performed a defined, repetitive dynamic exercise to failure using a Lordex spinal machine. Throughout the exercise, surface electromyography (SEMG) data were collected from the erector spinae using a Mega Electronics ME3000 muscle tester and surface electrodes placed on both sides of the back muscle. These data were analyzed using a 5th order Autoregressive (AR) model and statistical regression analysis. It was determined that an AR derived parameter, the mean average magnitude of AR poles, significantly correlated with the maximum number of repetitions (designated Rmax) that a test subject was able to perform. Using the mean average magnitude of AR poles, a test subject's performance to failure could be predicted as early as the sixth repetition of the exercise. This predictive model has the potential to provide a basis for improving post-space flight recovery, monitoring muscle atrophy in astronauts and assessing the effectiveness of countermeasures, monitoring astronaut performance and fatigue during Extravehicular Activity (EVA) operations, providing pre-flight assessment of the ability of an EVA crewmember to perform a given task, improving the design of training protocols and simulations for strenuous International Space Station assembly EVA, and enabling EVA work task sequences to be planned enhancing astronaut performance and safety. Potential ground-based, medical applications of the predictive model include monitoring muscle deterioration and performance resulting from illness, establishing safety guidelines in the industry for repetitive tasks, monitoring the stages of rehabilitation for muscle-related injuries sustained in sports and accidents, and enhancing athletic performance through improved training protocols while reducing
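
    The fatigue indicator described here, the mean magnitude of the poles of a 5th-order autoregressive fit, is easy to reproduce on any windowed signal. A minimal sketch on synthetic data (the signal and window length are placeholders, not the study's SEMG protocol):

        import numpy as np
        from statsmodels.tsa.ar_model import AutoReg

        semg = np.random.default_rng(2).normal(size=1024)   # stand-in for one SEMG window

        res = AutoReg(semg, lags=5).fit()
        a = res.params[1:]          # AR coefficients (index 0 is the constant term)

        # poles are the roots of the characteristic polynomial z^5 - a1 z^4 - ... - a5
        poles = np.roots(np.concatenate(([1.0], -a)))
        print("mean AR pole magnitude:", np.abs(poles).mean())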

  6. ITER EDA technical activities

    International Nuclear Information System (INIS)

    Aymar, R.

    1998-01-01

    Six years of technical work under the ITER EDA Agreement have resulted in a design which constitutes a complete description of the ITER device and of its auxiliary systems and facilities. The ITER Council commented that the Final Design Report provides the first comprehensive design of a fusion reactor based on well established physics and technology

  7. ITER radio frequency systems

    International Nuclear Information System (INIS)

    Bosia, G.

    1998-01-01

    Neutral Beam Injection and RF heating are two of the methods for heating and current drive in ITER. The three ITER RF systems, which have been developed during the EDA, offer several complementary services and are able to fulfil ITER operational requirements

  8. ITER council proceedings: 1999

    International Nuclear Information System (INIS)

    1999-01-01

    In 1999 the ITER meeting in Cadarache (10-11 March 1999) and the Programme Directors Meeting in Grenoble (28-29 July 1999) took place. Both meetings were exclusively devoted to ITER engineering design activities and their agendas covered all issues important for the development of ITER. This volume presents the documents of these two important meetings

  9. ITER council proceedings: 1996

    International Nuclear Information System (INIS)

    1997-01-01

    Records of the 10. ITER Council Meeting (IC-10), held on 26-27 July 1996, in St. Petersburg, Russia, and the 11. ITER Council Meeting (IC-11) held on 17-18 December 1996, in Tokyo, Japan, are presented, giving essential information on the evolution of the ITER Engineering Design Activities (EDA) and the cost review and safety analysis. Figs, tabs

  10. Iterative near-term ecological forecasting: Needs, opportunities, and challenges.

    Science.gov (United States)

    Dietze, Michael C; Fox, Andrew; Beck-Johnson, Lindsay M; Betancourt, Julio L; Hooten, Mevin B; Jarnevich, Catherine S; Keitt, Timothy H; Kenney, Melissa A; Laney, Christine M; Larsen, Laurel G; Loescher, Henry W; Lunch, Claire K; Pijanowski, Bryan C; Randerson, James T; Read, Emily K; Tredennick, Andrew T; Vargas, Rodrigo; Weathers, Kathleen C; White, Ethan P

    2018-02-13

    Two foundational questions about sustainability are "How are ecosystems and the services they provide going to change in the future?" and "How do human decisions affect these trajectories?" Answering these questions requires an ability to forecast ecological processes. Unfortunately, most ecological forecasts focus on centennial-scale climate responses, therefore neither meeting the needs of near-term (daily to decadal) environmental decision-making nor allowing comparison of specific, quantitative predictions to new observational data, one of the strongest tests of scientific theory. Near-term forecasts provide the opportunity to iteratively cycle between performing analyses and updating predictions in light of new evidence. This iterative process of gaining feedback, building experience, and correcting models and methods is critical for improving forecasts. Iterative, near-term forecasting will accelerate ecological research, make it more relevant to society, and inform sustainable decision-making under high uncertainty and adaptive management. Here, we identify the immediate scientific and societal needs, opportunities, and challenges for iterative near-term ecological forecasting. Over the past decade, data volume, variety, and accessibility have greatly increased, but challenges remain in interoperability, latency, and uncertainty quantification. Similarly, ecologists have made considerable advances in applying computational, informatic, and statistical methods, but opportunities exist for improving forecast-specific theory, methods, and cyberinfrastructure. Effective forecasting will also require changes in scientific training, culture, and institutions. The need to start forecasting is now; the time for making ecology more predictive is here, and learning by doing is the fastest route to drive the science forward.
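
    The forecast cycle the authors advocate (predict, observe, assimilate, repeat) can be illustrated with the simplest possible state-space model. The sketch below is a generic local-level Kalman filter, not any particular ecological forecasting system; the noise variances are arbitrary.

        import numpy as np

        rng = np.random.default_rng(3)
        truth = 10.0 + np.cumsum(rng.normal(0, 0.1, 50))   # hidden ecological state
        obs = truth + rng.normal(0, 0.5, 50)               # noisy monitoring data

        m, P = 10.0, 1.0      # forecast mean and variance
        Q, R = 0.01, 0.25     # process and observation noise variances
        for y in obs:
            m_pred, P_pred = m, P + Q           # forecast step
            K = P_pred / (P_pred + R)           # gain: how much to trust new evidence
            m = m_pred + K * (y - m_pred)       # update in light of the observation
            P = (1.0 - K) * P_pred
        print(f"next forecast {m:.2f} +/- {np.sqrt(P + Q):.2f} (truth {truth[-1]:.2f})")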

  11. Prediction models : the right tool for the right problem

    NARCIS (Netherlands)

    Kappen, Teus H.; Peelen, Linda M.

    2016-01-01

    PURPOSE OF REVIEW: Perioperative prediction models can help to improve personalized patient care by providing individual risk predictions to both patients and providers. However, the scientific literature on prediction model development and validation can be quite technical and challenging to

  12. Modeling of frequency-domain scalar wave equation with the average-derivative optimal scheme based on a multigrid-preconditioned iterative solver

    Science.gov (United States)

    Cao, Jian; Chen, Jing-Bo; Dai, Meng-Xue

    2018-01-01

    An efficient finite-difference frequency-domain modeling of seismic wave propagation relies on the discrete schemes and appropriate solving methods. The average-derivative optimal scheme for scalar wave modeling is advantageous in terms of the storage saving for the system of linear equations and the flexibility for arbitrary directional sampling intervals. However, using a LU-decomposition-based direct solver to solve its resulting system of linear equations is very costly in both memory and computational requirements. To address this issue, we consider establishing a multigrid-preconditioned BI-CGSTAB iterative solver fit for the average-derivative optimal scheme. The choice of preconditioning matrix and its corresponding multigrid components is made with the help of Fourier spectral analysis and local mode analysis, respectively, which is important for the convergence. Furthermore, we find that for computation with unequal directional sampling intervals, the anisotropic smoothing in the multigrid preconditioner may affect the convergence rate of this iterative solver. Successful numerical applications of this iterative solver to homogeneous and heterogeneous models in 2D and 3D are presented, where the significant reduction of computer memory and the improvement of computational efficiency are demonstrated by comparison with the direct solver. In the numerical experiments, we also show that an unequal directional sampling interval will weaken the advantage of this multigrid-preconditioned iterative solver in computing speed or, even worse, could reduce its accuracy in some cases, which implies the need for a reasonable control of the directional sampling interval in the discretization.
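
    The solver architecture described here, a BI-CGSTAB Krylov iteration wrapped around a preconditioner, looks as follows in SciPy. For brevity the sketch substitutes an incomplete-LU factorization for the authors' multigrid cycle, and a plain 5-point Helmholtz-type matrix stands in for the average-derivative optimal scheme:

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import bicgstab, spilu, LinearOperator

        # 5-point (-Laplacian - k^2) operator on an n x n grid
        n, h, k = 64, 1.0 / 64, 10.0
        I = sp.identity(n)
        T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
        A = ((sp.kron(I, T) + sp.kron(T, I)) / h**2 - k**2 * sp.identity(n * n)).tocsc()
        b = np.zeros(n * n)
        b[(n * n + n) // 2] = 1.0                    # point source

        # incomplete LU standing in for the multigrid preconditioner
        ilu = spilu(A, drop_tol=1e-4, fill_factor=20)
        M = LinearOperator(A.shape, matvec=ilu.solve)

        x, info = bicgstab(A, b, M=M, maxiter=500)   # info == 0 means converged
        print("info:", info, "| residual:", np.linalg.norm(A @ x - b))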

  13. Neuro-fuzzy modeling in bankruptcy prediction

    Directory of Open Access Journals (Sweden)

    Vlachos D.

    2003-01-01

    Full Text Available For the past 30 years the problem of bankruptcy prediction has been thoroughly studied. From the paper of Altman in 1968 to the recent papers in the '90s, the progress of prediction accuracy was not satisfactory. This paper investigates an alternative modeling of the system (firm), combining neural networks and fuzzy controllers, i.e. using neuro-fuzzy models. Classical modeling is based on mathematical models that describe the behavior of the firm under consideration. The main idea of fuzzy control, on the other hand, is to build a model of a human control expert who is capable of controlling the process without thinking in terms of a mathematical model. This control expert specifies his control action in the form of linguistic rules. These control rules are translated into the framework of fuzzy set theory, providing a calculus which can simulate the behavior of the control expert and enhance its performance. The accuracy of the model is studied using datasets from previous research papers.
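
    The fuzzy-controller view of the firm, linguistic rules translated into fuzzy set operations, can be illustrated with a tiny hand-rolled Mamdani-style system. Everything below (the two financial ratios, the membership partitions, the rule base) is hypothetical and only shows how linguistic rules become a numerical risk score:

        def tri(x, a, b, c):
            # triangular membership function on [a, c] peaking at b
            return max(min((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

        def bankruptcy_risk(liquidity, profitability):
            # inputs are ratios normalized to [0, 1]; output is a risk score in [0, 1]
            liq_low,  liq_high  = tri(liquidity, -0.5, 0.0, 0.7), tri(liquidity, 0.3, 1.0, 1.5)
            prof_low, prof_high = tri(profitability, -0.5, 0.0, 0.7), tri(profitability, 0.3, 1.0, 1.5)
            r_high = min(liq_low, prof_low)    # IF liquidity low AND profit low THEN risk high
            r_med  = max(min(liq_low, prof_high), min(liq_high, prof_low))
            r_low  = min(liq_high, prof_high)
            w = r_high + r_med + r_low
            return (0.9 * r_high + 0.5 * r_med + 0.1 * r_low) / (w + 1e-12)

        print(bankruptcy_risk(0.2, 0.1))   # distressed firm: score near 0.9
        print(bankruptcy_risk(0.9, 0.8))   # healthy firm:    score near 0.1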

  14. ITER EDA Newsletter. V. 6, no. 9

    International Nuclear Information System (INIS)

    1997-09-01

    This issue of the Newsletter reports on the new ITER Home Page and contains a report on the combined workshop of the ITER confinement and transport expert group and the confinement modeling and database expert group, by D. Boucher, V. Mukhavatov (both ITER JCT), J.G. Cordey, JET Joint Undertaking, and M. Wakatani, Kyoto University, held at the Max-Planck-Institut für Plasmaphysik, Garching, Germany, on September 25-30, 1997

  15. Design of a robust model predictive controller with reduced computational complexity.

    Science.gov (United States)

    Razi, M; Haeri, M

    2014-11-01

    The practicality of robust model predictive control of systems with model uncertainties depends on the time consumed for solving a defined optimization problem. This paper presents a method for reducing the computational complexity of robust model predictive control. First, a scaled state vector is defined such that the objective function contours in the defined optimization problem become vertical or horizontal ellipses or circles, and then the control input is determined at each sampling time as a state feedback that minimizes the infinite-horizon objective function by solving some linear matrix inequalities. The simulation results show that the number of iterations to solve the problem at each sampling interval is reduced while the control performance does not alter noticeably. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
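
    The computational pattern the paper builds on, recomputing a stabilizing state feedback at each sample by solving linear matrix inequalities, can be sketched with CVXPY. The sketch solves only the basic discrete-time stabilization LMI for a fixed model; it is not the paper's scaled, uncertainty-aware formulation, and the system matrices are made up.

        import numpy as np
        import cvxpy as cp

        # hypothetical discrete-time model x+ = A x + B u
        A = np.array([[1.1, 0.2], [0.0, 0.9]])
        B = np.array([[0.0], [1.0]])
        n, m = A.shape[0], B.shape[1]

        Q = cp.Variable((n, n), symmetric=True)
        Y = cp.Variable((m, n))
        AQBY = A @ Q + B @ Y
        # Schur-complement form of (A + B K) Q (A + B K)^T - Q < 0, with K = Y Q^{-1}
        lmi = cp.bmat([[Q, AQBY.T], [AQBY, Q]])
        cons = [Q >> 1e-6 * np.eye(n), lmi >> 1e-6 * np.eye(2 * n)]
        cp.Problem(cp.Minimize(0), cons).solve(solver=cp.SCS)

        K = Y.value @ np.linalg.inv(Q.value)
        print("closed-loop eigenvalues:", np.linalg.eigvals(A + B @ K))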

  16. Assessing image quality and dose reduction of a new x-ray computed tomography iterative reconstruction algorithm using model observers

    International Nuclear Information System (INIS)

    Tseng, Hsin-Wu; Kupinski, Matthew A.; Fan, Jiahua; Sainath, Paavana; Hsieh, Jiang

    2014-01-01

    Purpose: A number of different techniques have been developed to reduce radiation dose in x-ray computed tomography (CT) imaging. In this paper, the authors will compare task-based measures of image quality of CT images reconstructed by two algorithms: conventional filtered back projection (FBP), and a new iterative reconstruction algorithm (IR). Methods: To assess image quality, the authors used the performance of a channelized Hotelling observer acting on reconstructed image slices. The selected channels are dense difference Gaussian channels (DDOG). A body phantom and a head phantom were imaged 50 times at different dose levels to obtain the data needed to assess image quality. The phantoms consisted of uniform backgrounds with low contrast signals embedded at various locations. The tasks the observer model performed included (1) detection of a signal of known location and shape, and (2) detection and localization of a signal of known shape. The employed DDOG channels are based on the response of the human visual system. Performance was assessed using the areas under ROC curves and areas under localization ROC curves. Results: For signal known exactly (SKE) and location unknown/signal shape known tasks with circular signals of different sizes and contrasts, the authors’ task-based measures showed that a FBP equivalent image quality can be achieved at lower dose levels using the IR algorithm. For the SKE case, the range of dose reduction is 50%–67% (head phantom) and 68%–82% (body phantom). For the study of location unknown/signal shape known, the dose reduction range can be reached at 67%–75% for head phantom and 67%–77% for body phantom case. These results suggest that the IR images at lower dose settings can reach the same image quality when compared to full dose conventional FBP images. Conclusions: The work presented provides an objective way to quantitatively assess the image quality of a newly introduced CT IR algorithm. The performance of the

  17. Assessing image quality and dose reduction of a new x-ray computed tomography iterative reconstruction algorithm using model observers.

    Science.gov (United States)

    Tseng, Hsin-Wu; Fan, Jiahua; Kupinski, Matthew A; Sainath, Paavana; Hsieh, Jiang

    2014-07-01

    A number of different techniques have been developed to reduce radiation dose in x-ray computed tomography (CT) imaging. In this paper, the authors will compare task-based measures of image quality of CT images reconstructed by two algorithms: conventional filtered back projection (FBP), and a new iterative reconstruction algorithm (IR). To assess image quality, the authors used the performance of a channelized Hotelling observer acting on reconstructed image slices. The selected channels are dense difference Gaussian channels (DDOG). A body phantom and a head phantom were imaged 50 times at different dose levels to obtain the data needed to assess image quality. The phantoms consisted of uniform backgrounds with low contrast signals embedded at various locations. The tasks the observer model performed included (1) detection of a signal of known location and shape, and (2) detection and localization of a signal of known shape. The employed DDOG channels are based on the response of the human visual system. Performance was assessed using the areas under ROC curves and areas under localization ROC curves. For signal known exactly (SKE) and location unknown/signal shape known tasks with circular signals of different sizes and contrasts, the authors' task-based measures showed that a FBP equivalent image quality can be achieved at lower dose levels using the IR algorithm. For the SKE case, the range of dose reduction is 50%-67% (head phantom) and 68%-82% (body phantom). For the study of location unknown/signal shape known, the dose reduction range can be reached at 67%-75% for head phantom and 67%-77% for body phantom case. These results suggest that the IR images at lower dose settings can reach the same image quality when compared to full dose conventional FBP images. The work presented provides an objective way to quantitatively assess the image quality of a newly introduced CT IR algorithm. The performance of the model observers using the IR images was always higher
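
    The channelized Hotelling observer used in both versions of this study reduces each image to a few channel outputs and applies a linear (Hotelling) template in that small space. The sketch below is a generic CHO on synthetic images with simple difference-of-Gaussians channels; the channel parameters and data are placeholders, not the authors' DDOG configuration.

        import numpy as np

        def dog_channels(side, sigmas=(2.0, 4.0, 8.0), ratio=1.67):
            # rotationally symmetric difference-of-Gaussians channels
            y, x = np.mgrid[:side, :side]
            r2 = (x - side // 2) ** 2 + (y - side // 2) ** 2
            ch = [np.exp(-r2 / (2 * (ratio * s) ** 2)) - np.exp(-r2 / (2 * s ** 2))
                  for s in sigmas]
            return np.stack([c.ravel() / np.linalg.norm(c) for c in ch], axis=1)

        rng = np.random.default_rng(4)
        side, n_img = 32, 200
        U = dog_channels(side)                                  # (pixels, channels)
        prof = np.exp(-(np.arange(side) - side // 2) ** 2 / 8.0)
        signal = 0.3 * np.outer(prof, prof).ravel()             # faint central blob
        g_noise  = rng.normal(size=(n_img, side * side))
        g_signal = rng.normal(size=(n_img, side * side)) + signal

        v_n, v_s = g_noise @ U, g_signal @ U                    # channelized data
        S = 0.5 * (np.cov(v_n.T) + np.cov(v_s.T))               # pooled covariance
        w = np.linalg.solve(S, v_s.mean(0) - v_n.mean(0))       # Hotelling template
        t_n, t_s = v_n @ w, v_s @ w
        print("CHO AUC:", (t_s[:, None] > t_n[None, :]).mean()) # Mann-Whitney AUC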

  18. Predictive Models for Carcinogenicity and Mutagenicity ...

    Science.gov (United States)

    Mutagenicity and carcinogenicity are endpoints of major environmental and regulatory concern. These endpoints are also important targets for development of alternative methods for screening and prediction due to the large number of chemicals of potential concern and the tremendous cost (in time, money, animals) of rodent carcinogenicity bioassays. Both mutagenicity and carcinogenicity involve complex, cellular processes that are only partially understood. Advances in technologies and generation of new data will permit a much deeper understanding. In silico methods for predicting mutagenicity and rodent carcinogenicity based on chemical structural features, along with current mutagenicity and carcinogenicity data sets, have performed well for local prediction (i.e., within specific chemical classes), but are less successful for global prediction (i.e., for a broad range of chemicals). The predictivity of in silico methods can be improved by improving the quality of the data base and endpoints used for modelling. In particular, in vitro assays for clastogenicity need to be improved to reduce false positives (relative to rodent carcinogenicity) and to detect compounds that do not interact directly with DNA or have epigenetic activities. New assays emerging to complement or replace some of the standard assays include VitotoxTM, GreenScreenGC, and RadarScreen. The needs of industry and regulators to assess thousands of compounds necessitate the development of high-t

  19. ITER-FEAT safety

    International Nuclear Information System (INIS)

    Gordon, C.W.; Bartels, H.-W.; Honda, T.; Raeder, J.; Topilski, L.; Iseli, M.; Moshonas, K.; Taylor, N.; Gulden, W.; Kolbasov, B.; Inabe, T.; Tada, E.

    2001-01-01

    Safety has been an integral part of the design process for ITER since the Conceptual Design Activities of the project. The safety approach adopted in the ITER-FEAT design and the complementary assessments underway, to be documented in the Generic Site Safety Report (GSSR), are expected to help demonstrate the attractiveness of fusion and thereby set a good precedent for future fusion power reactors. The assessments address ITER's radiological hazards taking into account fusion's favourable safety characteristics. The expectation that ITER will need regulatory approval has influenced the entire safety design and assessment approach. This paper summarises the ITER-FEAT safety approach and assessments underway. (author)

  20. ITER council proceedings: 1997

    International Nuclear Information System (INIS)

    1997-01-01

    This volume of the ITER EDA Documentation Series presents records of the 12th ITER Council Meeting, IC-12, which took place on 23-24 July, 1997 in Tampere, Finland. The Council received from the Parties (EU, Japan, Russia, US) positive responses on the Detailed Design Report. The Parties stated their willingness to fulfil their obligations in contributing to the ITER EDA. The summary discussions among the Parties led to the consensus that in July 1998 the ITER activities should proceed for an additional three years, with a general intent to enable an efficient start of possible future ITER construction

  1. Disease Prediction Models and Operational Readiness

    Energy Technology Data Exchange (ETDEWEB)

    Corley, Courtney D.; Pullum, Laura L.; Hartley, David M.; Benedum, Corey M.; Noonan, Christine F.; Rabinowitz, Peter M.; Lancaster, Mary J.

    2014-03-19

    INTRODUCTION: The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. One of the primary goals of this research was to characterize the viability of biosurveillance models to provide operationally relevant information for decision makers to identify areas for future research. Two critical characteristics differentiate this work from other infectious disease modeling reviews. First, we reviewed models that attempted to predict the disease event, not merely its transmission dynamics. Second, we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). Methods: We searched dozens of commercial and government databases and harvested Google search results for eligible models utilizing terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche-modeling. The publication dates of the search results returned are bounded by the dates of coverage of each database and the date on which the search was performed; however, all searching was completed by December 31, 2010. This returned 13,767 webpages and 12,152 citations. After de-duplication and removal of extraneous material, a core collection of 6,503 items was established and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. Next, PNNL’s IN-SPIRE visual analytics software was used to cross-correlate these publications with the definition of a biosurveillance model, resulting in the selection of 54 documents that matched the criteria. Ten of these documents, however, dealt purely with disease spread models, inactivation of bacteria, or the modeling of human immune system responses to pathogens rather than predicting disease events. As a result, we systematically reviewed 44 papers and the

  2. Nonlinear model predictive control theory and algorithms

    CERN Document Server

    Grüne, Lars

    2017-01-01

    This book offers readers a thorough and rigorous introduction to nonlinear model predictive control (NMPC) for discrete-time and sampled-data systems. NMPC schemes with and without stabilizing terminal constraints are detailed, and intuitive examples illustrate the performance of different NMPC variants. NMPC is interpreted as an approximation of infinite-horizon optimal control so that important properties like closed-loop stability, inverse optimality and suboptimality can be derived in a uniform manner. These results are complemented by discussions of feasibility and robustness. An introduction to nonlinear optimal control algorithms yields essential insights into how the nonlinear optimization routine—the core of any nonlinear model predictive controller—works. Accompanying software in MATLAB® and C++ (downloadable from extras.springer.com/), together with an explanatory appendix in the book itself, enables readers to perform computer experiments exploring the possibilities and limitations of NMPC. T...
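
    The receding-horizon mechanics at the core of any NMPC scheme (optimize over a finite horizon, apply the first input, shift, repeat) fit in a short script. This sketch drives a toy pendulum with SciPy's general-purpose optimizer; it is a pedagogical stand-in, not the book's accompanying MATLAB/C++ software.

        import numpy as np
        from scipy.optimize import minimize

        def step(x, u, dt=0.1):
            # toy nonlinear pendulum: states are angle and angular rate
            return x + dt * np.array([x[1], -np.sin(x[0]) + u])

        def cost(u_seq, x0):
            x, J = x0.copy(), 0.0
            for u in u_seq:                      # simulate over the horizon
                x = step(x, u)
                J += x @ x + 0.1 * u ** 2        # state and input penalties
            return J

        N, x = 10, np.array([1.0, 0.0])
        u_warm = np.zeros(N)
        for _ in range(50):
            res = minimize(cost, u_warm, args=(x,), method="SLSQP",
                           bounds=[(-2.0, 2.0)] * N)
            x = step(x, res.x[0])                        # apply first input only
            u_warm = np.roll(res.x, -1); u_warm[-1] = 0  # shifted warm start
        print("final state:", x)                         # should approach the origin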

  3. A predictive model for dimensional errors in fused deposition modeling

    DEFF Research Database (Denmark)

    Stolfi, A.

    2015-01-01

    values of L (0.254 mm, 0.330 mm) was produced by comparing predicted values with external face-to-face measurements. After removing outliers, the results show that the developed two-parameter model can serve as tool for modeling the FDM dimensional behavior in a wide range of deposition angles....

  4. A predictive model for dimensional errors in fused deposition modeling

    DEFF Research Database (Denmark)

    Stolfi, A.

    2015-01-01

    This work concerns the effect of deposition angle (a) and layer thickness (L) on the dimensional performance of FDM parts using a predictive model based on the geometrical description of the FDM filament profile. An experimental validation over the whole a range from 0° to 177° at 3° steps and two...... values of L (0.254 mm, 0.330 mm) was produced by comparing predicted values with external face-to-face measurements. After removing outliers, the results show that the developed two-parameter model can serve as tool for modeling the FDM dimensional behavior in a wide range of deposition angles....

  5. Predictive Modeling in Actinide Chemistry and Catalysis

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Ping [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-16

    These are slides from a presentation on predictive modeling in actinide chemistry and catalysis. The following topics are covered in these slides: Structures, bonding, and reactivity (bonding can be quantified by optical probes and theory, and electronic structures and reaction mechanisms of actinide complexes); Magnetic resonance properties (transition metal catalysts with multi-nuclear centers, and NMR/EPR parameters); Moving to more complex systems (surface chemistry of nanomaterials, and interactions of ligands with nanoparticles); Path forward and conclusions.

  6. Predictive modelling of evidence informed teaching

    OpenAIRE

    Zhang, Dell; Brown, C.

    2017-01-01

    In this paper, we analyse the questionnaire survey data collected from 79 English primary schools about the situation of evidence informed teaching, where the evidence could come from research journals or conferences. Specifically, we build a predictive model to see what external factors could help to close the gap between teachers’ belief and behaviour in evidence informed teaching, which is the first of its kind to our knowledge. The major challenge, from the data mining perspective, is th...

  7. A Predictive Model for Cognitive Radio

    Science.gov (United States)

    2006-09-14

    Vadde et al. have applied response surface methodology to produce a model for prediction of the response in a given situation. [The remainder of this abstract was garbled by two-column text extraction; the surviving fragments are reference-list entries citing K. K. Vadde and V. R. Syrotiuk, "Factor interaction on service delivery in mobile ad hoc networks," and related work by K. K. Vadde, M.-V. R. Syrotiuk, and D. C. Montgomery.]

  8. Tectonic predictions with mantle convection models

    Science.gov (United States)

    Coltice, Nicolas; Shephard, Grace E.

    2018-04-01

    Over the past 15 yr, numerical models of convection in Earth's mantle have made a leap forward: they can now produce self-consistent plate-like behaviour at the surface together with deep mantle circulation. These digital tools provide a new window into the intimate connections between plate tectonics and mantle dynamics, and can therefore be used for tectonic predictions, in principle. This contribution explores this assumption. First, initial conditions at 30, 20, 10 and 0 Ma are generated by driving a convective flow with imposed plate velocities at the surface. We then compute instantaneous mantle flows in response to the guessed temperature fields without imposing any boundary conditions. Plate boundaries self-consistently emerge at correct locations with respect to reconstructions, except for small plates close to subduction zones. As already observed for other types of instantaneous flow calculations, the structure of the top boundary layer and upper-mantle slab is the dominant character that leads to accurate predictions of surface velocities. Perturbations of the rheological parameters have little impact on the resulting surface velocities. We then compute fully dynamic model evolution from 30 and 10 to 0 Ma, without imposing plate boundaries or plate velocities. Contrary to instantaneous calculations, errors in kinematic predictions are substantial, although the plate layout and kinematics in several areas remain consistent with the expectations for the Earth. For these calculations, varying the rheological parameters makes a difference for plate boundary evolution. Also, identified errors in initial conditions contribute to first-order kinematic errors. This experiment shows that the tectonic predictions of dynamic models over 10 My are highly sensitive to uncertainties of rheological parameters and initial temperature field in comparison to instantaneous flow calculations. Indeed, the initial conditions and the rheological parameters can be good enough

  9. Predictive Modeling of the CDRA 4BMS

    Science.gov (United States)

    Coker, Robert F.; Knox, James C.

    2016-01-01

    As part of NASA's Advanced Exploration Systems (AES) program and the Life Support Systems Project (LSSP), fully predictive models of the Four Bed Molecular Sieve (4BMS) of the Carbon Dioxide Removal Assembly (CDRA) on the International Space Station (ISS) are being developed. This virtual laboratory will be used to help reduce mass, power, and volume requirements for future missions. In this paper we describe current and planned modeling developments in the area of carbon dioxide removal to support future crewed Mars missions as well as the resolution of anomalies observed in the ISS CDRA.

  10. ITER EDA Newsletter. V. 3, no. 9

    International Nuclear Information System (INIS)

    1994-09-01

    This ITER EDA (Engineering Design Activities) Newsletter issue contains a description of the ITER Physics Research and Development (F.Perkins), a report on the first meeting of the ITER Divertor Physics and Divertor Modelling and Database Expert Groups (D. Post, G. Janeschitz, R. Stambaugh, M. Shimada), a report on the first meeting of the ITER Physics Expert Group on Diagnostics (A.E. Costley and K.M. Young), and a contribution entitled ''to meet or not to meet? If yes, for how long?'' (L. Golubchikov)

  11. ITER EDA newsletter. V. 5, no. 12

    International Nuclear Information System (INIS)

    1996-12-01

    This issue of the newsletter on the Engineering Design Activities (EDA) for the ITER Tokamak project contains a report on the Eleventh ITER Council Meeting held on December 17-18, 1996 in Tokyo, Japan; a report on the Eleventh Meeting of the ITER Technical Advisory Committee (TAC-11) Meeting held 3-7 December, 1996, at the ITER Naka Joint Work Site, Japan; and a report on the Fifth Workshop of the Confinement Modelling and Database Expert Group held in Montreal, Canada, October 13-16, 1996

  12. Modelling of steady state erosion of CFC actively water-cooled mock-up for the ITER divertor

    Science.gov (United States)

    Ogorodnikova, O. V.

    2008-04-01

    Calculations of the physical and chemical erosion of CFC (carbon fibre composite) monoblocks as the outer vertical target of the ITER divertor during normal operation regimes have been performed. Off-normal events and ELMs are not considered here. For a set of components under thermal and particle loads at glancing incidence, variations in the material properties and/or assembly defects could result in different erosion of actively-cooled components and, thus, in temperature instabilities. Operation regimes where the temperature instability takes place are investigated. It is shown that the temperature and erosion instabilities are probably not a critical point for the present design of the ITER vertical target if a realistic variation of material properties is assumed, namely, the difference in the thermal conductivities of neighbouring monoblocks is 20% and the maximum allowable size of a defect between the CFC armour and the cooling tube is ±90° in the circumferential direction from the apex.

  13. Method for predicting homology modeling accuracy from amino acid sequence alignment: the power function.

    Science.gov (United States)

    Iwadate, Mitsuo; Kanou, Kazuhiko; Terashi, Genki; Umeyama, Hideaki; Takeda-Shitaka, Mayuko

    2010-01-01

    We have devised a power function (PF) that can predict the accuracy of a three-dimensional (3D) structure model of a protein using only amino acid sequence alignments. This power function consists of three parts: (1) the length of a model, (2) a percent sequence identity value and (3) the agreement rate between the PSI-PRED secondary structure prediction and the secondary structure assignment of a reference protein. The PF value is computed from the output of homology search tools, such as FASTA or the various BLAST programs, used to obtain the amino acid sequence alignments. There is a high correlation between the global distance test-total score (GDT_TS) value of the protein model selected by the PF score and the GDT_TS(MAX) value used as an index of protein modeling accuracy in the international contest Critical Assessment of Techniques for Protein Structure Prediction (CASP). Accordingly, the PF method is valuable for constructing a highly accurate model without the wasteful calculations of homology modeling that are normally performed iteratively to move the main chain and side chains in the modeling process. Moreover, a model with higher accuracy can be obtained by combining the models ranked by the PF score with models ranked by the CIRCLE score. The CIRCLE software is a 3D-1D program in which energetic stabilization is estimated based upon the environment of each amino acid residue in protein solution or protein crystals.
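
    As a reading aid, here is a minimal sketch of how a score built from the three parts described above could be combined and used for ranking. The weights and the length normalisation are invented for illustration; they are not the published power function.

      # Hypothetical sketch of a power-function-style accuracy predictor; the
      # weights w1..w3 and the 500-residue normalisation are placeholders.
      def power_function(model_length, identity_pct, ss_agreement_rate,
                         w1=0.2, w2=0.5, w3=0.3):
          """Combine the three alignment-derived parts into one PF score."""
          length_term = min(model_length / 500.0, 1.0)  # crude length normalisation
          return w1 * length_term + w2 * identity_pct / 100.0 + w3 * ss_agreement_rate

      # Rank candidate alignments by predicted model accuracy.
      candidates = [(320, 42.0, 0.81), (280, 35.5, 0.77), (350, 28.0, 0.69)]
      print(sorted(candidates, key=lambda c: power_function(*c), reverse=True))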

  14. An iterative sensory procedure to select odor-active associations in complex consortia of microorganisms: application to the construction of a cheese model.

    Science.gov (United States)

    Bonaïti, C; Irlinger, F; Spinnler, H E; Engel, E

    2005-05-01

    The aim of this study was to develop and validate an iterative procedure based on odor assessment to select odor-active associations of microorganisms from a starting association of 82 strains (G1), which were chosen to be representative of Livarot cheese biodiversity. A 3-step dichotomous procedure was applied to reduce the starting association G1. At each step, 3 methods were used to evaluate the odor proximity between mother (n strains) and daughter (n/2 strains) associations: a direct assessment of odor dissimilarity using an original bidimensional scale system and 2 indirect methods based on comparisons of odor profiles or hedonic scores. Odor dissimilarity ratings and odor profiles gave reliable and sometimes complementary criteria to select G3 and G4 at the first iteration, G31 and G42 at the second iteration, and G312 and G421 at the final iteration. Principal component analysis of the odor profile data permitted the interpretation, at least in part, of the 2D multidimensional scaling representation of the similarity data. The second part of the study was dedicated to 1) validating the choice of the dichotomous procedure made at each iteration, and 2) evaluating the overall magnitude of odor differences that may exist between G1 and its subsequent simplified associations. The strategy consisted of assessing odor similarity between the 13 cheese models by comparing the contents of their odor-active compounds. By using a purge-and-trap gas chromatography-olfactometry/mass spectrometry device, 50 potent odorants were identified in models G312, G421, and in a typical Protected Denomination of Origin Livarot cheese. Their contributions to the odor profile of both selected model cheeses are discussed. These compounds were quantified by purge-and-trap gas chromatography-mass spectrometry in the 13 products and the normalized data matrix was transformed to a between-product distance matrix. This instrumental assessment of odor similarities allowed validation of the choice

  15. Predictive Modeling by the Cerebellum Improves Proprioception

    Science.gov (United States)

    Bhanpuri, Nasir H.; Okamura, Allison M.

    2013-01-01

    Because sensation is delayed, real-time movement control requires not just sensing, but also predicting limb position, a function hypothesized for the cerebellum. Such cerebellar predictions could contribute to perception of limb position (i.e., proprioception), particularly when a person actively moves the limb. Here we show that human cerebellar patients have proprioceptive deficits compared with controls during active movement, but not when the arm is moved passively. Furthermore, when healthy subjects move in a force field with unpredictable dynamics, they have active proprioceptive deficits similar to cerebellar patients. Therefore, muscle activity alone is likely insufficient to enhance proprioception, and predictability (i.e., an internal model of the body and environment) is important for active movement to benefit proprioception. We conclude that cerebellar patients have an active proprioceptive deficit consistent with disrupted movement prediction rather than an inability to generally enhance peripheral proprioceptive signals during action, and suggest that active proprioceptive deficits should be considered a fundamental cerebellar impairment of clinical importance. PMID:24005283

  16. Accurate Holdup Calculations with Predictive Modeling & Data Integration

    Energy Technology Data Exchange (ETDEWEB)

    Azmy, Yousry [North Carolina State Univ., Raleigh, NC (United States). Dept. of Nuclear Engineering; Cacuci, Dan [Univ. of South Carolina, Columbia, SC (United States). Dept. of Mechanical Engineering

    2017-04-03

    To apply Bayes’ Theorem, one must have a model y(x) that maps the state variables x (the solution in this case) to the measurements y. In this case, the unknown state variables are the configuration and composition of the held-up SNM. The measurements are the detector readings. Thus, the natural model is neutral-particle radiation transport, where a wealth of computational tools exists for performing these simulations accurately and efficiently. The combination of a predictive model and Bayesian inference forms the Data Integration with Modeled Predictions (DIMP) method that serves as the foundation for this project. The cost functional Q describing the model-to-data misfit is computed via a norm created by the inverse of the covariance matrix of the model parameters and responses. Since the model y(x) for the holdup problem is nonlinear, a nonlinear optimization of Q is conducted via Newton-type iterative methods to find the optimal values of the model parameters x. This project comprised a collaboration between NC State University (NCSU), the University of South Carolina (USC), and Oak Ridge National Laboratory (ORNL). The project was originally proposed in seven main tasks with an eighth contingency task to be performed if time and funding permitted; in fact, time did not permit commencement of the contingency task and it was not performed. The remaining tasks involved holdup analysis with gamma detection strategies and, separately, with neutrons based on coincidence counting. Early in the project, and upon consultation with experts in coincidence counting, it became evident that this approach is not viable for holdup applications, and this task was replaced with an alternative, but valuable, investigation that was carried out by the USC partner. Nevertheless, the experimental measurements at ORNL of both gamma and neutron sources for the purpose of constructing Detector Response Functions (DRFs) with the associated uncertainties were indeed completed.
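
    A minimal sketch of a covariance-weighted misfit and a Newton-type (Gauss-Newton) update in the spirit described above; y_model, jac and C_inv are hypothetical stand-ins, not the project's actual transport model.

      import numpy as np

      # Covariance-weighted misfit Q(x) = r^T C^{-1} r, with r = y_meas - y(x).
      def cost(x, y_meas, y_model, C_inv):
          r = y_meas - y_model(x)
          return r @ C_inv @ r

      # One Gauss-Newton step for minimising Q when y(x) is nonlinear.
      def gauss_newton_step(x, y_meas, y_model, jac, C_inv):
          r = y_meas - y_model(x)
          J = jac(x)
          # Solve the normal equations (J^T C^{-1} J) dx = J^T C^{-1} r.
          return x + np.linalg.solve(J.T @ C_inv @ J, J.T @ C_inv @ r)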

  17. Development and Application of a Coarse-Grained Model for PNIPAM by Iterative Boltzmann Inversion and Its Combination with Lattice Boltzmann Hydrodynamics.

    Science.gov (United States)

    Boţan, Vitalie; Ustach, Vincent D; Leonhard, Kai; Faller, Roland

    2017-11-16

    The polymer poly(N-isopropylacrylamide) (PNIPAM) is studied using a novel combination of multiscale modeling methodologies. We develop an iterative Boltzmann inversion potential for concentrated PNIPAM solutions and combine it with lattice Boltzmann as a Navier-Stokes equation solver for the solvent. We study in detail the influence of the methodology on the statics and dynamics of the system. The combination is successful and is significantly simpler and faster than other mapping techniques for polymer solutions while keeping the correct hydrodynamics. The model can semiquantitatively describe the correct phase behavior and polymer dynamics.
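
    As a reading aid, the sketch below shows the standard iterative Boltzmann inversion update, V_{i+1}(r) = V_i(r) + kT ln(g_i(r)/g*(r)), which refines a coarse-grained pair potential until its radial distribution function matches a fine-grained target; run_cg_simulation is a hypothetical placeholder for the coarse-grained simulation engine, and the units are assumed.

      import numpy as np

      kT = 1.0  # thermal energy in simulation units (assumed)

      def ibi_update(V, g_cg, g_target, eps=1e-12):
          """One IBI correction: V_{i+1}(r) = V_i(r) + kT * ln(g_i(r)/g*(r))."""
          return V + kT * np.log((g_cg + eps) / (g_target + eps))

      # Typical driver loop (run_cg_simulation is engine-specific and not shown):
      # V = -kT * np.log(g_target + 1e-12)     # Boltzmann-inverted initial guess
      # for _ in range(30):
      #     g_cg = run_cg_simulation(V)        # measure the RDF of the CG model
      #     V = ibi_update(V, g_cg, g_target)  # refine until g_cg matches target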

  18. Prediction of Chemical Function: Model Development and ...

    Science.gov (United States)

    The United States Environmental Protection Agency’s Exposure Forecaster (ExpoCast) project is developing both statistical and mechanism-based computational models for predicting exposures to thousands of chemicals, including those in consumer products. The high-throughput (HT) screening-level exposures developed under ExpoCast can be combined with HT screening (HTS) bioactivity data for the risk-based prioritization of chemicals for further evaluation. The functional role (e.g., solvent, plasticizer, fragrance) that a chemical performs can drive both the types of products in which it is found and the concentration at which it is present, thereby impacting exposure potential. However, critical chemical use information (including functional role) is lacking for the majority of commercial chemicals for which exposure estimates are needed. A suite of machine-learning-based models for classifying chemicals in terms of their likely functional roles in products based on structure was developed. This effort required the collection, curation, and harmonization of publicly available data sources of chemical functional use information from government and industry bodies. Physicochemical and structure descriptor data were generated for chemicals with function data. Machine-learning classifier models for function were then built in a cross-validated manner from the descriptor/function data using the method of random forests. The models were applied to: 1) predict chemi

  19. Gamma-Ray Pulsars Models and Predictions

    CERN Document Server

    Harding, A K

    2001-01-01

    Pulsed emission from gamma-ray pulsars originates inside the magnetosphere, from radiation by charged particles accelerated near the magnetic poles or in the outer gaps. In polar cap models, the high energy spectrum is cut off by magnetic pair production above an energy that is dependent on the local magnetic field strength. While most young pulsars with surface fields in the range B = 10^{12} - 10^{13} G are expected to have high energy cutoffs around several GeV, the gamma-ray spectra of old pulsars having lower surface fields may extend to 50 GeV. Although the gamma-ray emission of older pulsars is weaker, detecting pulsed emission at high energies from nearby sources would be an important confirmation of polar cap models. Outer gap models predict more gradual high-energy turnovers at around 10 GeV, but also predict an inverse Compton component extending to TeV energies. Detection of pulsed TeV emission, which would not survive attenuation at the polar caps, is thus an important test of outer gap models. N...

  20. A prediction model for Clostridium difficile recurrence

    Directory of Open Access Journals (Sweden)

    Francis D. LaBarbera

    2015-02-01

    Full Text Available Background: Clostridium difficile infection (CDI) is a growing problem in the community and hospital setting. Its incidence has been on the rise over the past two decades, and it is quickly becoming a major concern for the health care system. A high rate of recurrence is one of the major hurdles in the successful treatment of C. difficile infection. There have been few studies that have looked at patterns of recurrence. The studies currently available have shown a number of risk factors associated with C. difficile recurrence (CDR); however, there is little consensus on the impact of most of the identified risk factors. Methods: Our study was a retrospective chart review of 198 patients diagnosed with CDI via polymerase chain reaction (PCR) from February 2009 to June 2013. In our study, we decided to use a machine learning algorithm called the random forest (RF) to analyze all of the factors proposed to be associated with CDR. This model is capable of making predictions based on a large number of variables, and has outperformed numerous other models and statistical methods. Results: We came up with a model that was able to accurately predict CDR with a sensitivity of 83.3%, a specificity of 63.1%, and an area under the curve of 82.6%. Like other similar studies that have used the RF model, we also obtained very impressive results. Conclusions: We hope that in the future, machine learning algorithms, such as the RF, will see a wider application.
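
    To make the approach concrete, the following scikit-learn sketch fits a random forest to synthetic stand-in data with the same shape as the study (198 patients, a set of candidate risk factors); the feature count and all values are placeholders, not the authors' data or pipeline.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(0)
      X = rng.normal(size=(198, 12))      # 198 patients, 12 hypothetical risk factors
      y = rng.integers(0, 2, size=198)    # 1 = recurrence (random stand-in labels)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
      rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
      print("AUC:", roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]))
      print("top factors:", np.argsort(rf.feature_importances_)[::-1][:3])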

  1. Artificial Neural Network Model for Predicting Compressive

    Directory of Open Access Journals (Sweden)

    Salim T. Yousif

    2013-05-01

    Full Text Available Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, strength estimation of concrete at an early age is highly desirable. This study presents the effort in applying neural-network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum aggregate size (MAS), and slump of fresh concrete. A back-propagation neural network model is successively developed, trained, and tested using actual data sets of concrete mix proportions gathered from the literature. The test of the model by unused data within the range of the input parameters shows that the maximum absolute error of the model is about 20% and 88% of the output results have absolute errors of less than 10%. The parametric study shows that the water/cement ratio (w/c) is the most significant factor affecting the output of the model. The results showed that neural networks have strong potential as a feasible tool for predicting the compressive strength of concrete.
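
    A minimal sketch of this kind of back-propagation regression network, using synthetic stand-in data (the input columns, units and toy strength relation are assumptions, not the study's data sets):

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(1)
      # Hypothetical inputs: cement, water, fine/coarse aggregate, MAS, slump.
      X = rng.uniform(size=(200, 6))
      # Toy 28-day strength relation in MPa, plus noise (illustration only).
      y = 20 + 30 * X[:, 0] - 15 * X[:, 1] + rng.normal(0, 2, 200)

      model = make_pipeline(StandardScaler(),
                            MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                                         random_state=1))
      model.fit(X, y)
      print("predicted strength (MPa):", model.predict(X[:3]))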

  2. Evaluating predictive models of software quality

    International Nuclear Information System (INIS)

    Ciaschini, V; Canaparo, M; Ronchieri, E; Salomoni, D

    2014-01-01

    Applications from the High Energy Physics scientific community are constantly growing and are implemented by a large number of developers. This implies a strong churn on the code and an associated risk of faults, which is unavoidable as long as the software undergoes active evolution. However, the necessities of production systems run counter to this. Stability and predictability are of paramount importance; in addition, a short turn-around time for the defect discovery-correction-deployment cycle is required. A way to reconcile these opposite foci is to use a software quality model to obtain an approximation of the risk before releasing a program, in order to only deliver software with a risk lower than an agreed threshold. In this article we evaluated two quality predictive models to identify the operational risk and the quality of some software products. We applied these models to the development history of several EMI packages with the intent of discovering the risk factor of each product and comparing it with its real history. We attempted to determine whether the models reasonably map reality for the applications under evaluation, and finally we conclude by suggesting directions for further studies.

  3. A generative model for predicting terrorist incidents

    Science.gov (United States)

    Verma, Dinesh C.; Verma, Archit; Felmlee, Diane; Pearson, Gavin; Whitaker, Roger

    2017-05-01

    A major concern in coalition peace-support operations is the incidence of terrorist activity. In this paper, we propose a generative model for the occurrence of terrorist incidents, and illustrate that an increase in diversity, as measured by the number of different social groups to which an individual belongs, is inversely correlated with the likelihood of a terrorist incident in the society. A generative model is one that can predict the likelihood of events in new contexts, as opposed to statistical models, which are used to predict future incidents based on the history of incidents in an existing context. Generative models can be useful in planning for persistent Information Surveillance and Reconnaissance (ISR) since they allow an estimation of the regions in the theater of operation where terrorist incidents may arise, and thus can be used to better allocate the assignment and deployment of ISR assets. In this paper, we present a taxonomy of terrorist incidents, identify factors related to the occurrence of terrorist incidents, and provide a mathematical analysis calculating the likelihood of occurrence of terrorist incidents in three common real-life scenarios arising in peace-keeping operations.

  4. PREDICTION MODELS OF GRAIN YIELD AND CHARACTERIZATION

    Directory of Open Access Journals (Sweden)

    Narciso Ysac Avila Serrano

    2009-06-01

    Full Text Available With the objective of characterizing the grain yield of five cowpea cultivars and finding linear regression models to predict it, a study was developed in La Paz, Baja California Sur, Mexico. A complete randomized block design was used. Simple and multivariate analyses of variance were carried out using the canonical variables to characterize the cultivars. The variables clusters per plant, pods per plant, pods per cluster, seed weight per plant, seed hectoliter weight, 100-seed weight, seed length, seed width, seed thickness, pod length, pod width, pod weight, seeds per pod, and seed weight per pod showed significant differences (P≤0.05) among cultivars. The Paceño and IT90K-277-2 cultivars showed the highest seed weight per plant. The linear regression models showed correlation coefficients ≥0.92. In these models, the seed weight per plant, pods per cluster, pods per plant, clusters per plant and pod length showed significant correlations (P≤0.05). In conclusion, the results showed that grain yield differs among cultivars and, for its estimation, the prediction models showed highly dependable determination coefficients.

  5. Detection of Adverse Reaction to Drugs in Elderly Patients through Predictive Modeling

    Directory of Open Access Journals (Sweden)

    Rafael San-Miguel Carrasco

    2016-03-01

    Full Text Available Geriatric medicine constitutes a clinical research field in which data analytics, particularly predictive modeling, can deliver compelling, reliable and long-lasting benefits, as well as non-intuitive clinical insights and net new knowledge. The research work described in this paper leverages predictive modeling to uncover new insights related to adverse reactions to drugs in elderly patients. The differentiating factor that sets this research exercise apart from traditional clinical research is the fact that it was not designed by formulating a particular hypothesis to be validated. Instead, it was data-centric, with data being mined to discover relationships or correlations among variables. Regression techniques were systematically applied to the data through multiple iterations and under different configurations. The results obtained after the process was completed are explained and discussed.

  6. The danger of iteration methods

    International Nuclear Information System (INIS)

    Villain, J.; Semeria, B.

    1983-01-01

    When a Hamiltonian H depends on variables φ_i, the values of these variables which minimize H satisfy the equations ∂H/∂φ_i = 0. If this set of equations is solved by iteration, there is no guarantee that the solution is the one which minimizes H. In the case of a harmonic system with a random potential periodic with respect to the φ_i's, the fluctuations have been calculated by Efetov and Larkin by means of the iteration method. The result is wrong in the case of strong disorder. Even in the weak-disorder case, it is wrong for a one-dimensional system and for a finite system of 2 particles. It is argued that the results obtained by iteration are always wrong, and that between 2 and 4 dimensions, spin-pair correlation functions decay like powers of the distance, as found by Aharony and Pytte for another model
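
    A toy numerical illustration of this warning, using an invented double-well Hamiltonian rather than the paper's random-potential model: iterating on the stationarity condition alone converges to a stationary point of H that is not a minimum.

      # H(phi) = phi**4/4 - phi**2/2 has minima at phi = +/-1 and a local
      # maximum at phi = 0. Newton iteration on dH/dphi = 0 can land on the
      # maximum, illustrating the abstract's point.
      dH  = lambda phi: phi**3 - phi        # first derivative of H
      d2H = lambda phi: 3.0 * phi**2 - 1.0  # second derivative of H

      phi = 0.1                             # start near the maximum
      for _ in range(50):                   # Newton iteration on dH = 0
          phi -= dH(phi) / d2H(phi)
      print(phi)   # ~0.0: a stationary point of H, but not a minimizer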

  7. Predictive Models for Normal Fetal Cardiac Structures.

    Science.gov (United States)

    Krishnan, Anita; Pike, Jodi I; McCarter, Robert; Fulgium, Amanda L; Wilson, Emmanuel; Donofrio, Mary T; Sable, Craig A

    2016-12-01

    Clinicians rely on age- and size-specific measures of cardiac structures to diagnose cardiac disease. No universally accepted normative data exist for fetal cardiac structures, and most fetal cardiac centers do not use the same standards. The aim of this study was to derive predictive models for Z scores for 13 commonly evaluated fetal cardiac structures using a large heterogeneous population of fetuses without structural cardiac defects. The study used archived normal fetal echocardiograms in representative fetuses aged 12 to 39 weeks. Thirteen cardiac dimensions were remeasured by a blinded echocardiographer from digitally stored clips. Studies with inadequate imaging views were excluded. Regression models were developed to relate each dimension to estimated gestational age (EGA) by dates, biparietal diameter, femur length, and estimated fetal weight by the Hadlock formula. Dimension outcomes were transformed (e.g., using the logarithm or square root) as necessary to meet the normality assumption. Higher-order terms, quadratic or cubic, were added as needed to improve model fit. Information criteria and adjusted R² values were used to guide final model selection. Each Z-score equation is based on measurements derived from 296 to 414 unique fetuses. EGA yielded the best predictive model for the majority of dimensions; adjusted R² values ranged from 0.72 to 0.893. However, each of the other highly correlated (r > 0.94) biometric parameters was an acceptable surrogate for EGA. In most cases, the best-fitting model included squared and cubic terms to introduce curvilinearity. For each dimension, models based on EGA provided the best fit for determining normal measurements of fetal cardiac structures. Nevertheless, other biometric parameters, including femur length, biparietal diameter, and estimated fetal weight provided results that were nearly as good. Comprehensive Z-score results are available on the basis of highly predictive models derived from gestational
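
    A schematic version of the Z-score construction described above, with synthetic data standing in for the 296 to 414 fetuses per dimension; the growth law, noise level and measurement values are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(2)
      ega = rng.uniform(12, 39, 300)                              # weeks
      dim = 0.02 * ega**2 + 0.1 * ega + rng.normal(0, 0.8, 300)   # mm, toy growth law

      coef = np.polyfit(ega, dim, deg=3)                 # cubic regression in EGA
      resid_sd = np.std(dim - np.polyval(coef, ega), ddof=4)

      def z_score(measured_mm, ega_weeks):
          """Z of a new measurement against the fitted mean and residual spread."""
          return (measured_mm - np.polyval(coef, ega_weeks)) / resid_sd

      print(z_score(25.0, 28.0))   # Z of a 25 mm measurement at 28 weeks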

  8. An analytical model for climatic predictions

    International Nuclear Information System (INIS)

    Njau, E.C.

    1990-12-01

    A climatic model based upon analytical expressions is presented. This model is capable of making long-range predictions of heat energy variations on regional or global scales. These variations can then be transformed into corresponding variations of some other key climatic parameters, since weather and climatic changes are basically driven by differential heating and cooling around the earth. On the basis of the mathematical expressions upon which the model is based, it is shown that the global heat energy structure (and hence the associated climatic system) is characterized by zonally as well as latitudinally propagating fluctuations at frequencies below 0.5 day⁻¹. We have calculated the propagation speeds for those particular frequencies that are well documented in the literature. The calculated speeds are in excellent agreement with the measured speeds. (author). 13 refs

  9. An Anisotropic Hardening Model for Springback Prediction

    International Nuclear Information System (INIS)

    Zeng, Danielle; Xia, Z. Cedric

    2005-01-01

    As more Advanced High-Strength Steels (AHSS) are heavily used for automotive body structures and closure panels, accurate springback prediction for these components becomes more challenging because of their rapid hardening characteristics and ability to sustain even higher stresses. In this paper, a modified Mroz hardening model is proposed to capture the realistic Bauschinger effect at reverse loading, such as when material passes through die radii or drawbeads during the sheet metal forming process. This model accounts for an anisotropic material yield surface and nonlinear isotropic/kinematic hardening behavior. Material tension/compression test data are used to accurately represent the Bauschinger effect. The effectiveness of the model is demonstrated by comparison of numerical and experimental springback results for a DP600 straight U-channel test

  10. US ITER limiter module design

    International Nuclear Information System (INIS)

    Mattas, R.F.; Billone, M.; Hassanein, A.

    1996-08-01

    The recent U.S. effort on the ITER (International Thermonuclear Experimental Reactor) shield has been focused on the limiter module design. This is a multi-disciplinary effort that covers design layout, fabrication, thermal hydraulics, materials evaluation, thermo-mechanical response, and predicted response during off-normal events. The results of design analyses are presented. Conclusions and recommendations are also presented concerning the capability of the limiter modules to meet performance goals and to be fabricated within design specifications using existing technology

  11. ITER EDA newsletter. V. 7, no. 5

    International Nuclear Information System (INIS)

    1998-05-01

    This newsletter contains the articles 'The materials selection in ITER and the first materials workshop', 'US fusion community discussion on fusion strategies', 'ITER central solenoid model coil heat treatment complete and assembly started' and 'Programme of the 17th IAEA fusion energy conference'. There is also a note in memoriam of Hiroshi Shibata, who died on the 5th of June 1998

  12. Predicting knee replacement damage in a simulator machine using a computational model with a consistent wear factor.

    Science.gov (United States)

    Zhao, Dong; Sakoda, Hideyuki; Sawyer, W Gregory; Banks, Scott A; Fregly, Benjamin J

    2008-02-01

    Wear of ultrahigh molecular weight polyethylene remains a primary factor limiting the longevity of total knee replacements (TKRs). However, wear testing on a simulator machine is time-consuming and expensive, making it impractical for iterative design purposes. The objectives of this paper were, first, to evaluate whether a computational model using a wear factor consistent with the TKR material pair can predict accurate TKR damage measured in a simulator machine, and second, to investigate how the choice of surface evolution method (fixed or variable step) and material model (linear or nonlinear) affects the prediction. An iterative computational damage model was constructed for a commercial knee implant in an AMTI simulator machine. The damage model combined a dynamic contact model with a surface evolution model to predict how wear plus creep progressively alter tibial insert geometry over multiple simulations. The computational framework was validated by predicting wear in a cylinder-on-plate system for which an analytical solution was derived. The implant damage model was evaluated for 5 million cycles of simulated gait using damage measurements made on the same implant in an AMTI machine. Using a pin-on-plate wear factor for the same material pair as the implant, the model predicted tibial insert wear volume to within 2% error and damage depths and areas to within 18% and 10% error, respectively. The choice of material model had little influence, while inclusion of surface evolution affected damage depth and area but not wear volume predictions. The surface evolution method was important only during the initial cycles, where a variable step was needed to capture rapid geometry changes due to creep. Overall, our results indicate that accurate TKR damage predictions can be made with a computational model using a constant wear factor obtained from pin-on-plate tests for the same material pair, and furthermore, that the surface evolution method matters only during the initial
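
    A schematic of the iterative damage-update idea (Archard-type wear accumulated between contact solutions); the wear factor, pressures and sliding distance below are invented placeholders, not the implant-specific values used in the paper.

      import numpy as np

      wear_factor = 1.0e-7          # mm^3/(N mm), e.g. from pin-on-plate tests (assumed)
      cycles_per_step = 250_000     # update the geometry every quarter-million cycles

      depth = np.zeros(100)         # wear depth at 100 contact nodes (mm)
      for step in range(20):        # 20 steps * 250k cycles = 5 million cycles
          p = np.random.uniform(5, 15, size=100)  # contact pressure (MPa), placeholder
          s = 20.0                                # sliding distance per cycle (mm)
          depth += wear_factor * p * s * cycles_per_step  # Archard: h += k * p * s * N
          # a full model would re-solve the contact problem here ("surface evolution")
      print(depth.max(), "mm maximum wear depth")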

  13. Web tools for predictive toxicology model building.

    Science.gov (United States)

    Jeliazkova, Nina

    2012-07-01

    The development and use of web tools in chemistry have accumulated more than 15 years of history already. Powered by advances in Internet technologies, the current generation of web systems is starting to expand into areas traditionally reserved for desktop applications. The web platforms integrate data storage, cheminformatics and data analysis tools. The ease of use and the collaborative potential of the web are compelling, despite the challenges. The topic of this review is a set of recently published web tools that facilitate predictive toxicology model building. The focus is on software platforms offering web access to chemical structure-based methods, although some of the frameworks could also provide bioinformatics or hybrid data analysis functionalities. A number of historical and current developments are cited. In order to provide a comparable assessment, the following characteristics are considered: support for workflows, descriptor calculations, visualization, modeling algorithms, data management and data sharing capabilities, availability of a GUI or programmatic access, and implementation details. The success of the Web is largely due to its highly decentralized, yet sufficiently interoperable model for information access. The expected future convergence between cheminformatics and bioinformatics databases provides new challenges toward the management and analysis of large data sets. The web tools in predictive toxicology will likely continue to evolve toward the right mix of flexibility, performance, scalability, interoperability, sets of unique features offered, friendly user interfaces, programmatic access for advanced users, platform independence, results reproducibility, curation and crowdsourcing utilities, collaborative sharing and secure access.

  14. [Endometrial cancer: Predictive models and clinical impact].

    Science.gov (United States)

    Bendifallah, Sofiane; Ballester, Marcos; Daraï, Emile

    2017-12-01

    In France, in 2015, endometrial cancer (EC) is the leading gynecological cancer in terms of incidence and the fourth most common cancer among women. About 8151 new cases and nearly 2179 deaths have been reported. Treatments (surgery, external radiotherapy, brachytherapy and chemotherapy) are currently delivered on the basis of an estimation of the recurrence risk, an estimation of lymph node metastasis or an estimate of survival probability. This risk is determined on the basis of prognostic factors (clinical, histological, imaging, biological) taken alone or grouped together in the form of classification systems, which are currently insufficient to account for the evolutionary and prognostic heterogeneity of endometrial cancer. For endometrial cancer, the concept of mathematical modeling and its application to prediction has developed in recent years. These biomathematical tools have opened a new era of care oriented towards the promotion of targeted therapies and personalized treatments. Many predictive models have been published to estimate the risk of recurrence and lymph node metastasis, but only a tiny fraction of them are sufficiently relevant and of clinical utility. The avenues for optimization are multiple and varied, suggesting that these mathematical models could find a place in clinical practice in the near future. The development of high-throughput genomics is likely to offer a more detailed molecular characterization of the disease and its heterogeneity. Copyright © 2017 Société Française du Cancer. Published by Elsevier Masson SAS. All rights reserved.

  15. Predictive Capability Maturity Model for computational modeling and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

    The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.

  16. Predictions of models for environmental radiological assessment

    International Nuclear Information System (INIS)

    Peres, Sueli da Silva; Lauria, Dejanira da Costa; Mahler, Claudio Fernando

    2011-01-01

    In the field of environmental impact assessment, models are used for estimating source terms, environmental dispersion and transfer of radionuclides, exposure pathways, radiation dose and the risk for human beings. Although it is recognized that specific local data are important to improve the quality of dose assessment results, in fact obtaining them can be very difficult and expensive. Sources of uncertainty are numerous, among which we can cite: the subjectivity of modelers, exposure scenarios and pathways, the codes used and general parameters. The various models available utilize different mathematical approaches with different complexities that can result in different predictions. Thus, for the same inputs, different models can produce very different outputs. This paper briefly presents the main advances in the field of environmental radiological assessment that aim to improve the reliability of the models used in the assessment of environmental radiological impact. A model intercomparison exercise supplied incompatible results for 137Cs and 60Co, underlining the need for developing reference methodologies for environmental radiological assessment that allow dose estimations to be confronted on a common comparison basis. The results of the intercomparison exercise are presented briefly. (author)

  17. ITER test programme

    International Nuclear Information System (INIS)

    Abdou, M.; Baker, C.; Casini, G.

    1991-01-01

    ITER has been designed to operate in two phases. The first phase, which lasts for 6 years, is devoted to machine checkout and physics testing. The second phase lasts for 8 years and is devoted primarily to technology testing. This report describes the development of the technology test program for ITER, the ancillary equipment outside the torus necessary to support the test modules, the international collaboration aspects of conducting the test program on ITER, the requirements on the machine's major parameters and the R and D program required to develop the test modules for testing in ITER. 15 refs, figs and tabs

  18. Effect on Prediction when Modeling Covariates in Bayesian Nonparametric Models.

    Science.gov (United States)

    Cruz-Marcelo, Alejandro; Rosner, Gary L; Müller, Peter; Stewart, Clinton F

    2013-04-01

    In biomedical research, it is often of interest to characterize biologic processes giving rise to observations and to make predictions of future observations. Bayesian nonparametric methods provide a means for carrying out Bayesian inference while making as few assumptions about restrictive parametric models as possible. There are several proposals in the literature for extending Bayesian nonparametric models to include dependence on covariates. Limited attention, however, has been directed to the following two aspects. In this article, we examine the effect on fitting and predictive performance of incorporating covariates in a class of Bayesian nonparametric models in one of two primary ways: either in the weights or in the locations of a discrete random probability measure. We show that different strategies for incorporating continuous covariates in Bayesian nonparametric models can result in big differences when used for prediction, even though they lead to otherwise similar posterior inferences. When one needs the predictive density, as in optimal design, and this density is a mixture, it is better to make the weights depend on the covariates. We demonstrate these points via a simulated data example and in an application in which one wants to determine the optimal dose of an anticancer drug used in pediatric oncology.

  19. Combining GPS measurements and IRI model predictions

    International Nuclear Information System (INIS)

    Hernandez-Pajares, M.; Juan, J.M.; Sanz, J.; Bilitza, D.

    2002-01-01

    The free electrons distributed in the ionosphere (between one hundred and thousands of km in height) produce a frequency-dependent effect on Global Positioning System (GPS) signals: a delay in the pseudorange and an advance in the carrier phase. These effects are proportional to the columnar electron density between the satellite and receiver, i.e. the integrated electron density along the ray path. Global ionospheric TEC (total electron content) maps can be obtained with GPS data from a network of ground IGS (International GPS Service) reference stations with an accuracy of a few TEC units. The comparison with the TOPEX TEC, mainly measured over the oceans far from the IGS stations, shows a mean bias and standard deviation of about 2 and 5 TECUs, respectively. The discrepancies between the STEC predictions and the observed values show an RMS typically below 5 TECUs (which also includes the alignment code noise). The existence of a growing database of 2-hourly global TEC maps with a resolution of 5x2.5 degrees in longitude and latitude can be used to improve the IRI prediction capability of the TEC. When the IRI predictions and the GPS estimations are compared for a three-month period around the solar maximum, they are in good agreement for middle latitudes. An overestimation of the IRI TEC has been found at extreme latitudes, the IRI predictions being typically two times higher than the GPS estimations. Finally, local fits of the IRI model can be done by tuning the SSN from STEC GPS observations
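
    For context, the slant TEC that such maps are built from is commonly estimated with the geometry-free dual-frequency pseudorange combination; a minimal sketch, with inter-frequency biases ignored for brevity (the pseudorange values are invented):

      F1, F2 = 1.57542e9, 1.22760e9   # GPS L1/L2 carrier frequencies (Hz)

      def slant_tec(p1, p2):
          """STEC in TEC units (1 TECU = 1e16 el/m^2) from pseudoranges in metres."""
          # Ionospheric delay on each band is 40.3*TEC/f^2, so differencing gives:
          stec = (p2 - p1) * (F1**2 * F2**2) / (40.3 * (F1**2 - F2**2))
          return stec / 1e16

      # ~5.2 m of differential delay corresponds to roughly 50 TECU.
      print(slant_tec(20_000_000.0, 20_000_005.2))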

  20. Mathematical models for indoor radon prediction

    International Nuclear Information System (INIS)

    Malanca, A.; Pessina, V.; Dallara, G.

    1995-01-01

    It is known that the indoor radon (Rn) concentration can be predicted by means of mathematical models. The simplest model relies on two variables only: the Rn source strength and the air exchange rate. In the Lawrence Berkeley Laboratory (LBL) model several environmental parameters are combined into a complex equation; besides, a correlation between the ventilation rate and the Rn entry rate from the soil is admitted. The measurements were carried out using activated carbon canisters. Seventy-five measurements of Rn concentrations were made inside two rooms placed on the second floor of a building block. One of the rooms had a single-glazed window whereas the other room had a double-pane window. During three different experimental protocols, the mean Rn concentration was always higher in the room with the double-glazed window. That behavior can be accounted for by the simplest model. A further set of 450 Rn measurements was collected inside a ground-floor room with a grounding well in it. This trend may be accounted for by the LBL model
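
    The "simplest model" mentioned above can be written as a steady-state balance between Rn entry and ventilation, C = S / (V * lambda_v); a minimal sketch with invented numbers, showing why the less-ventilated (double-glazed) room reads higher:

      S = 10.0          # Rn entry rate into the room (Bq/h), placeholder
      V = 40.0          # room volume (m^3), placeholder

      def steady_state_rn(entry_rate, volume, air_exchange_rate):
          """Steady-state concentration from a source/ventilation balance."""
          return entry_rate / (volume * air_exchange_rate)

      print(steady_state_rn(S, V, 0.5))   # single glazing, 0.5 1/h -> 0.5 Bq/m^3
      print(steady_state_rn(S, V, 0.2))   # double glazing, 0.2 1/h -> 1.25 Bq/m^3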

  1. Statistical model based iterative reconstruction (MBIR) in clinical CT systems. Part II. Experimental assessment of spatial resolution performance

    International Nuclear Information System (INIS)

    Li, Ke; Chen, Guang-Hong; Garrett, John; Ge, Yongshuai

    2014-01-01

    Purpose: Statistical model based iterative reconstruction (MBIR) methods have been introduced to clinical CT systems and are being used in some clinical diagnostic applications. The purpose of this paper is to experimentally assess the unique spatial resolution characteristics of this nonlinear reconstruction method and identify its potential impact on the detectabilities and the associated radiation dose levels for specific imaging tasks. Methods: The thoracic section of a pediatric phantom was repeatedly scanned 50 or 100 times using a 64-slice clinical CT scanner at four different dose levels [CTDIvol = 4, 8, 12, 16 mGy]. Both filtered backprojection (FBP) and MBIR (Veo®, GE Healthcare, Waukesha, WI) were used for image reconstruction and the results were compared with one another. Eight test objects in the phantom with contrast levels ranging from 13 to 1710 HU were used to assess spatial resolution. The axial spatial resolution was quantified with the point spread function (PSF), while the z resolution was quantified with the slice sensitivity profile. Both were measured locally on the test objects and in the image domain. The dependence of spatial resolution on contrast and dose levels was studied. The study also features a systematic investigation of the potential trade-off between spatial resolution and locally defined noise and their joint impact on the overall image quality, which was quantified by the image domain-based channelized Hotelling observer (CHO) detectability index d′. Results: (1) The axial spatial resolution of MBIR depends on both the radiation dose level and the image contrast level, whereas it is supposedly independent of these two factors in FBP. The axial spatial resolution of MBIR always improved with an increasing radiation dose level and/or contrast level. (2) The axial spatial resolution of MBIR became equivalent to that of FBP at some transitional contrast level, above which MBIR demonstrated spatial resolution superior to that of FBP (and

  2. A Predictive Maintenance Model for Railway Tracks

    DEFF Research Database (Denmark)

    Li, Rui; Wen, Min; Salling, Kim Bang

    2015-01-01

    This paper presents a mathematical model based on Mixed Integer Programming (MIP) which is designed to optimize predictive railway tamping activities for ballasted track for a time horizon of up to four years. The objective function is set up to minimize the actual costs for the tamping machine (measured by time). Five technical and economic aspects are taken into account to schedule tamping: (1) the degradation of the standard deviation of the longitudinal level over time; (2) track geometrical alignment; (3) track quality thresholds based on the train speed limits; (4) the dependency of the track quality recovery on the track quality after the tamping operation; and (5) tamping machine operation factors. The proposed maintenance model is applied to a 57.2 km Danish railway track between Odense and Fredericia for a time period of two to four years. The total cost can be reduced by up to 50

  3. An Operational Model for the Prediction of Jet Blast

    Science.gov (United States)

    2012-01-09

    This paper presents an operational model for the prediction of jet blast. The model was developed based upon three modules: a jet exhaust model, a jet centerline decay model and an aircraft motion model. The final analysis was compared with d...

  4. The Physics Basis of ITER Confinement

    International Nuclear Information System (INIS)

    Wagner, F.

    2009-01-01

    ITER will be the first fusion reactor, and the 50-year-old dream of fusion scientists will become reality. The quality of magnetic confinement will decide the success of ITER, directly in the form of the confinement time and indirectly because it determines the plasma parameters and the fluxes which cross the separatrix and have to be handled externally by technical means. This lecture portrays some of the basic principles which govern plasma confinement, uses dimensionless scaling to set the limits for the predictions for ITER, an approach which also shows the limitations of the predictions, and briefly describes the major characteristics and physics behind the H-mode, the preferred confinement regime of ITER.

  5. Continuous-Discrete Time Prediction-Error Identification Relevant for Linear Model Predictive Control

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    A prediction-error method tailored for model-based predictive control is presented. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state-space model. The linear discrete-time stochastic state-space model is realized from a continuous-discrete-time linear stochastic system specified using transfer functions with time delays. It is argued that the prediction-error criterion should be selected such that it is compatible with the objective function of the predictive controller in which the model

  6. United States rejoin ITER

    International Nuclear Information System (INIS)

    Roberts, M.

    2003-01-01

    Under pressure from the United States Congress, the US Department of Energy had to withdraw from further American participation in the ITER Engineering Design Activities after the end of its commitment to the EDA in July 1998. In the years since that time, changes have taken place in both the ITER activity and the US fusion community's position on burning plasma physics. Reflecting the interest in the United States in pursuing burning plasma physics, the DOE's Office of Science commissioned three studies as part of its examination of the option of entering the Negotiations on the Agreement on the Establishment of the International Fusion Energy Organization for the Joint Implementation of the ITER Project. These were a National Academy review panel report supporting the burning plasma mission; a Fusion Energy Sciences Advisory Committee (FESAC) report confirming the role of ITER in achieving fusion power production; and the Lehman Review of the ITER project costing and project management processes (for the latter, see ITER CTA Newsletter, no. 15, December 2002). All three studies have endorsed the US return to the ITER activities. This historic decision was announced by DOE Secretary Abraham during his remarks to employees of the Department's Princeton Plasma Physics Laboratory. The United States will be working with the other Participants in the ITER Negotiations on the Agreement and is preparing to participate in the ITA

  7. ITER at Cadarache; ITER a Cadarache

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2005-06-15

    This public information document presents the ITER project (International Thermonuclear Experimental Reactor), a definition of fusion, the international cooperation behind the project and its advantages. It also presents the site at Cadarache, an appropriate scientific and economic environment. The last part of the document recalls the history of the project and the current mobilization of all partners. (A.L.B.)

  8. ITER council proceedings: 1992

    International Nuclear Information System (INIS)

    1994-01-01

    At the signing of the ITER EDA Agreement in July 1992, each of the Parties presented to the Director General the names of their designated members of the ITER Council. Upon receiving those names, the Director General stated that the ITER Engineering Design Activities were ''ready to begin''. The next step in this process was the convening of the first meeting of the ITER Council. The first meeting of the Council, held in Vienna, was opened by Director General Hans Blix. The second meeting was held in Moscow, the formal seat of the Council. This volume presents records of these first two Council meetings and, together with the previous volumes on the text of the Agreement and Protocol 1 and the preparations for their signing, respectively, represents essential information on the evolution of the ITER EDA

  9. Predictive modeling: potential application in prevention services.

    Science.gov (United States)

    Wilson, Moira L; Tumen, Sarah; Ota, Rissa; Simmers, Anthony G

    2015-05-01

    In 2012, the New Zealand Government announced a proposal to introduce predictive risk models (PRMs) to help professionals identify and assess children at risk of abuse or neglect as part of a preventive early intervention strategy, subject to further feasibility study and trialing. The purpose of this study is to examine technical feasibility and predictive validity of the proposal, focusing on a PRM that would draw on population-wide linked administrative data to identify newborn children who are at high priority for intensive preventive services. Data analysis was conducted in 2013 based on data collected in 2000-2012. A PRM was developed using data for children born in 2010 and externally validated for children born in 2007, examining outcomes to age 5 years. Performance of the PRM in predicting administratively recorded substantiations of maltreatment was good compared to the performance of other tools reviewed in the literature, both overall, and for indigenous Māori children. Some, but not all, of the children who go on to have recorded substantiations of maltreatment could be identified early using PRMs. PRMs should be considered as a potential complement to, rather than a replacement for, professional judgment. Trials are needed to establish whether risks can be mitigated and PRMs can make a positive contribution to frontline practice, engagement in preventive services, and outcomes for children. Deciding whether to proceed to trial requires balancing a range of considerations, including ethical and privacy risks and the risk of compounding surveillance bias. Crown Copyright © 2015. Published by Elsevier Inc. All rights reserved.

  10. ITER CTA newsletter. No. 3

    International Nuclear Information System (INIS)

    2001-11-01

    This ITER CTA newsletter comprises reports by Dr. P. Barnard, Iter Canada Chairman and CEO, on the progress of the first formal ITER Negotiations and on the presentation of the details of Canada's ITER bid at workshops, and by Dr. V. Vlasenkov, Project Board Secretary, on the meeting of the ITER CTA Project Board

  11. Heuristic Modeling for TRMM Lifetime Predictions

    Science.gov (United States)

    Jordan, P. S.; Sharer, P. J.; DeFazio, R. L.

    1996-01-01

    Analysis time for computing the expected mission lifetimes of proposed frequently maneuvering, tightly altitude-constrained, Earth-orbiting spacecraft has been significantly reduced by means of a heuristic modeling method implemented in a commercial off-the-shelf spreadsheet product (QuattroPro) running on a personal computer (PC). The method uses a look-up table to estimate the maneuver frequency per month as a function of the spacecraft ballistic coefficient and the solar flux index, then computes the associated fuel use with a simple engine model. Maneuver frequency data points are produced by means of a single 1-month run of traditional mission analysis software for each of the 12 to 25 data points required for the table. As the data point computations are required only at mission design start-up and on the occasion of significant mission redesigns, the dependence on time-consuming traditional modeling methods is dramatically reduced. Results to date have agreed with traditional methods to within 1 to 1.5 percent. The spreadsheet approach is applicable to a wide variety of Earth-orbiting spacecraft with tight altitude constraints. It will be particularly useful for missions such as the Tropical Rainfall Measurement Mission scheduled for launch in 1997, whose mission lifetime calculations are heavily dependent on frequently revised solar flux predictions.
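
    A schematic of the spreadsheet method in code form: a small lookup table of maneuvers per month versus solar flux and ballistic coefficient, plus a simple engine model; every number below is an invented placeholder, not a TRMM value.

      import numpy as np

      flux_levels = np.array([70, 150, 230])           # F10.7 solar flux index
      bc_levels   = np.array([50, 100])                # ballistic coefficient, kg/m^2
      maneuvers_per_month = np.array([[4.0, 2.0],      # table built from single
                                      [8.0, 4.0],      # 1-month runs of the
                                      [14.0, 7.0]])    # traditional software

      fuel_per_maneuver = 0.12   # kg, simple engine model (assumed)
      fuel_budget = 80.0         # kg of maneuvering propellant (assumed)

      def months_of_life(flux, bc):
          i = np.abs(flux_levels - flux).argmin()      # nearest-node lookup
          j = np.abs(bc_levels - bc).argmin()
          monthly_fuel = maneuvers_per_month[i, j] * fuel_per_maneuver
          return fuel_budget / monthly_fuel

      print(months_of_life(flux=150, bc=100))   # ~167 months under these inputs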

  12. A Computational Model for Predicting Gas Breakdown

    Science.gov (United States)

    Gill, Zachary

    2017-10-01

    Pulsed-inductive discharges are a common method of producing a plasma. They provide a mechanism for quickly and efficiently generating a large volume of plasma for rapid use and are seen in applications including propulsion, fusion power, and high-power lasers. However, some common designs see a delayed response time due to the plasma forming when the magnitude of the magnetic field in the thruster is at a minimum. New designs are difficult to evaluate due to the amount of time needed to construct a new geometry and the high monetary cost of changing the power generation circuit. To more quickly evaluate new designs and better understand the shortcomings of existing designs, a computational model is developed. This model uses a modified single-electron model as the basis for a Mathematica code to determine how the energy distribution in a system changes with regard to time and location. By analyzing this energy distribution, the approximate time and location of initial plasma breakdown can be predicted. The results from this code are then compared to existing data to show the model's validity and shortcomings. Missouri S&T APLab.

  13. Distributed model predictive control made easy

    CERN Document Server

    Negenborn, Rudy

    2014-01-01

    The rapid evolution of computer science, communication, and information technology has enabled the application of control techniques to systems beyond the possibilities of control theory just a decade ago. Critical infrastructures such as electricity, water, traffic and intermodal transport networks are now in the scope of control engineers. The sheer size of such large-scale systems requires the adoption of advanced distributed control approaches. Distributed model predictive control (MPC) is one of the promising control methodologies for control of such systems.   This book provides a state-of-the-art overview of distributed MPC approaches, while at the same time making clear directions of research that deserve more attention. The core and rationale of 35 approaches are carefully explained. Moreover, detailed step-by-step algorithmic descriptions of each approach are provided. These features make the book a comprehensive guide both for those seeking an introduction to distributed MPC as well as for those ...

  14. Which method predicts recidivism best?: A comparison of statistical, machine learning, and data mining predictive models

    OpenAIRE

    Tollenaar, N.; van der Heijden, P.G.M.

    2012-01-01

    Using criminal population conviction histories of recent offenders, prediction models are developed that predict three types of criminal recidivism: general recidivism, violent recidivism and sexual recidivism. The research question is whether prediction techniques from modern statistics, data mining and machine learning provide an improvement in predictive performance over classical statistical methods, namely logistic regression and linear discriminant analysis. These models are compared ...

  15. ITER EDA newsletter. V. 7, no. 9

    International Nuclear Information System (INIS)

    1998-09-01

    Newsletter containing the two articles 'Parties working on continuation of ITER EDA' and 'ITER exhibit at the Austria Centre, Vienna'. The first article describes the efforts of the 4 ITER partners, the European Atomic Energy Community and the governments of Japan, the Russian Federation and the USA, to agree on a continuation of the ITER EDA. While the former 3 partners signed an extension to the EDA, the Americans were refused funding by the US Congress and will therefore be phased out within one year. Copies of the documents signed are provided. The second article reports on an exhibition featuring a model of ITER and various other means of information on nuclear fusion which took place at the IAEA Headquarters from the 21st to the 25th of September 1998. There is also an article in memoriam of Alexander V. Kashirski, who died on the 29th of September 1998

  16. A new iterative speech enhancement scheme based on Kalman filtering

    DEFF Research Database (Denmark)

    Li, Chunjian; Andersen, Søren Vang

    2005-01-01

    A new iterative speech enhancement scheme that can be seen as an approximation to the Expectation-Maximization (EM) algorithm is proposed. The algorithm employs a Kalman filter that models the excitation source as a spectrally white process with a rapidly time-varying variance, which calls for a high temporal resolution estimation of this variance. A Local Variance Estimator based on a Prediction Error Kalman Filter is designed for this high temporal resolution variance estimation. To achieve fast convergence and avoid local maxima of the likelihood function, a Weighted Power Spectral Subtraction filter is introduced as an initialization procedure. Iterations are then made sequential inter-frame, exploiting the fact that the AR model changes slowly between neighboring frames. The proposed algorithm is computationally more efficient than a baseline EM algorithm due to its fast convergence...
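
    To make the filtering core concrete, the sketch below runs a scalar-observation Kalman filter on a synthetic AR(2) signal whose excitation (process-noise) variance varies rapidly in time, loosely mirroring the excitation model described above. The AR coefficients, burst pattern, and noise levels are invented for illustration; the paper's variance estimator and initialization procedure are not reproduced.

```python
import numpy as np

# Minimal sketch: Kalman filtering of a noisy AR(2) signal whose
# excitation (process-noise) variance q[t] varies over time.
# All parameter values are illustrative, not taken from the paper.

rng = np.random.default_rng(0)
n = 500
a = np.array([1.6, -0.8])                     # assumed AR(2) coefficients
q = 0.05 + 0.5 * (rng.random(n) < 0.05)       # mostly small, occasional bursts
r = 0.1                                       # observation-noise variance

# Simulate a clean AR(2) signal and noisy observations of it
x = np.zeros(n)
for t in range(2, n):
    x[t] = a @ x[t-2:t][::-1] + rng.normal(0.0, np.sqrt(q[t]))
y = x + rng.normal(0.0, np.sqrt(r), n)

# Companion-form state space: s_t = F s_{t-1} + w_t,  y_t = H s_t + v_t
F = np.array([[a[0], a[1]],
              [1.0,  0.0]])
H = np.array([1.0, 0.0])
s, P = np.zeros(2), np.eye(2)
xhat = np.zeros(n)
for t in range(n):
    # Predict (process noise enters only the first state component)
    s = F @ s
    P = F @ P @ F.T
    P[0, 0] += q[t]
    # Update with the noisy observation
    k = P @ H / (H @ P @ H + r)               # Kalman gain
    s = s + k * (y[t] - H @ s)
    P = P - np.outer(k, H) @ P
    xhat[t] = s[0]

print("noisy MSE   :", np.mean((y - x) ** 2))
print("filtered MSE:", np.mean((xhat - x) ** 2))
```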

  17. Study of Heating and Fusion Power Production in ITER Discharges

    International Nuclear Information System (INIS)

    Rafiq, T.; Kritz, A. H.; Bateman, G.; Kessel, C.; McCune, D. C.; Budny, R. V.; Pankin, A. Y.

    2011-01-01

    ITER simulations, in which the temperatures, toroidal angular frequency and currents are evolved, are carried out using the PTRANSP code starting with initial profiles and boundary conditions obtained from TSC code studies. The dependence of heat deposition and current drive on ICRF frequency, number of poloidal modes, beam orientation, number of Monte Carlo particles and ECRH launch angles is studied in order to examine various possibilities and contingencies for ITER steady state and hybrid discharges. For the hybrid discharges, the fusion power production and fusion Q, computed using the Multi-Mode MMM v7.1 anomalous transport model, are compared with those predicted using the GLF23 model. The simulations of the hybrid scenario indicate that the fusion power production at 1000 sec will be approximately 500 MW corresponding to a fusion Q = 10.0. The discharge scenarios simulated aid in understanding the conditions for optimizing fusion power production and in examining measures of plasma performance.

  18. Fuzzy predictive filtering in nonlinear economic model predictive control for demand response

    DEFF Research Database (Denmark)

    Santos, Rui Mirra; Zong, Yi; Sousa, Joao M. C.

    2016-01-01

    The performance of a model predictive controller (MPC) is highly correlated with the model's accuracy. This paper introduces an economic model predictive control (EMPC) scheme based on a nonlinear model, which uses a branch-and-bound tree search for solving the inherent non-convex optimization...

  19. Cognitive Model of Trust Dynamics Predicts Human Behavior within and between Two Games of Strategic Interaction with Computerized Confederate Agents

    Science.gov (United States)

    Collins, Michael G.; Juvina, Ion; Gluck, Kevin A.

    2016-01-01

    When playing games of strategic interaction, such as iterated Prisoner's Dilemma and iterated Chicken Game, people exhibit specific within-game learning (e.g., learning a game's optimal outcome) as well as transfer of learning between games (e.g., a game's optimal outcome occurring at a higher proportion when played after another game). The reciprocal trust players develop during the first game is thought to mediate transfer of learning effects. Recently, a computational cognitive model using a novel trust mechanism has been shown to account for human behavior in both games, including the transfer between games. We present the results of a study in which we evaluate the model's a priori predictions of human learning and transfer in 16 different conditions. The model's predictive validity is compared against five model variants that lacked a trust mechanism. The results suggest that a trust mechanism is necessary to explain human behavior across multiple conditions, even when a human plays against a non-human agent. The addition of a trust mechanism to the other learning mechanisms within the cognitive architecture, such as sequence learning, instance-based learning, and utility learning, leads to better prediction of the empirical data. It is argued that computational cognitive modeling is a useful tool for studying trust development, calibration, and repair. PMID:26903892

  20. Cognitive Model of Trust Dynamics Predicts Human Behavior within and between Two Games of Strategic Interaction with Computerized Confederate Agents.

    Science.gov (United States)

    Collins, Michael G; Juvina, Ion; Gluck, Kevin A

    2016-01-01

    When playing games of strategic interaction, such as iterated Prisoner's Dilemma and iterated Chicken Game, people exhibit specific within-game learning (e.g., learning a game's optimal outcome) as well as transfer of learning between games (e.g., a game's optimal outcome occurring at a higher proportion when played after another game). The reciprocal trust players develop during the first game is thought to mediate transfer of learning effects. Recently, a computational cognitive model using a novel trust mechanism has been shown to account for human behavior in both games, including the transfer between games. We present the results of a study in which we evaluate the model's a priori predictions of human learning and transfer in 16 different conditions. The model's predictive validity is compared against five model variants that lacked a trust mechanism. The results suggest that a trust mechanism is necessary to explain human behavior across multiple conditions, even when a human plays against a non-human agent. The addition of a trust mechanism to the other learning mechanisms within the cognitive architecture, such as sequence learning, instance-based learning, and utility learning, leads to better prediction of the empirical data. It is argued that computational cognitive modeling is a useful tool for studying trust development, calibration, and repair.

  1. Methodology for dimensional variation analysis of ITER integrated systems

    Energy Technology Data Exchange (ETDEWEB)

    Fuentes, F. Javier, E-mail: FranciscoJavier.Fuentes@iter.org [ITER Organization, Route de Vinon-sur-Verdon—CS 90046, 13067 St Paul-lez-Durance (France); Trouvé, Vincent [Assystem Engineering & Operation Services, rue J-M Jacquard CS 60117, 84120 Pertuis (France); Cordier, Jean-Jacques; Reich, Jens [ITER Organization, Route de Vinon-sur-Verdon—CS 90046, 13067 St Paul-lez-Durance (France)

    2016-11-01

    Highlights: • Tokamak dimensional management methodology, based on 3D variation analysis, is presented. • Dimensional Variation Model implementation workflow is described. • Methodology phases are described in detail. The application of this methodology to the tolerance analysis of ITER Vacuum Vessel is presented. • Dimensional studies are a valuable tool for the assessment of Tokamak PCR (Project Change Requests), DR (Deviation Requests) and NCR (Non-Conformance Reports). - Abstract: The ITER machine consists of a large number of highly integrated complex systems, with critical functional requirements and reduced design clearances to minimize the impact on cost and performance. Tolerances and assembly accuracies in critical areas could have a serious impact on the final performance, compromising machine assembly and plasma operation. The management of tolerances allocated to part manufacture and assembly processes, as well as the control of potential deviations and early mitigation of non-compliances with the technical requirements, is a critical activity in the project life cycle. A 3D tolerance simulation analysis of the ITER Tokamak machine has been developed based on the dedicated 3DCS software. This integrated dimensional variation model is representative of Tokamak manufacturing functional tolerances and assembly processes, predicting accurate values for the amount of variation in critical areas. This paper describes the detailed methodology to implement and update the Tokamak Dimensional Variation Model. The model is managed at system level. The methodology phases are illustrated by its application to the Vacuum Vessel (VV), considering the status of maturity of the VV dimensional variation model. The following topics are described in this paper: • Model description and constraints. • Model implementation workflow. • Management of input and output data. • Statistical analysis and risk assessment. The management of the integration studies based on

  2. Methodology for dimensional variation analysis of ITER integrated systems

    International Nuclear Information System (INIS)

    Fuentes, F. Javier; Trouvé, Vincent; Cordier, Jean-Jacques; Reich, Jens

    2016-01-01

    Highlights: • Tokamak dimensional management methodology, based on 3D variation analysis, is presented. • Dimensional Variation Model implementation workflow is described. • Methodology phases are described in detail. The application of this methodology to the tolerance analysis of ITER Vacuum Vessel is presented. • Dimensional studies are a valuable tool for the assessment of Tokamak PCR (Project Change Requests), DR (Deviation Requests) and NCR (Non-Conformance Reports). - Abstract: The ITER machine consists of a large number of highly integrated complex systems, with critical functional requirements and reduced design clearances to minimize the impact on cost and performance. Tolerances and assembly accuracies in critical areas could have a serious impact on the final performance, compromising machine assembly and plasma operation. The management of tolerances allocated to part manufacture and assembly processes, as well as the control of potential deviations and early mitigation of non-compliances with the technical requirements, is a critical activity in the project life cycle. A 3D tolerance simulation analysis of the ITER Tokamak machine has been developed based on the dedicated 3DCS software. This integrated dimensional variation model is representative of Tokamak manufacturing functional tolerances and assembly processes, predicting accurate values for the amount of variation in critical areas. This paper describes the detailed methodology to implement and update the Tokamak Dimensional Variation Model. The model is managed at system level. The methodology phases are illustrated by its application to the Vacuum Vessel (VV), considering the status of maturity of the VV dimensional variation model. The following topics are described in this paper: • Model description and constraints. • Model implementation workflow. • Management of input and output data. • Statistical analysis and risk assessment. The management of the integration studies based on

  3. Ozone Concentration Prediction via Spatiotemporal Autoregressive Model With Exogenous Variables

    Science.gov (United States)

    Kamoun, W.; Senoussi, R.

    2009-04-01

    concentration recorded at n=42 stations during the year 2005 within a southern region of France, covering an area of approximately 10565 km2. Meteorological covariates are the daily maxima of temperature, wind speed, humidity and atmospheric pressure. The meteorological factors are not recorded at the ozone monitoring sites, and thus preliminary interpolation techniques were used and subsequently compared (Gaussian conditional simulation, ordinary kriging, or kriging with external drift). Concluding remarks: From the statistical point of view, both the simulation study and the data analysis showed fairly robust behaviour of the estimation procedures. In both cases, the analysis of residuals showed a significant improvement in prediction error within this framework. From the environmental point of view, the ability to account for pertinent local and dynamical meteorological covariates clearly provides a useful tool for prediction methods. References: [1] Pfeifer, P.E.; Deutsch, S.J. (1980). "A Three-Stage Iterative Procedure for Space-Time Modelling." Technometrics 22: 35-47. [2] Giacomini, R.; Granger, C.W.J. (2002). "Aggregation of Space-Time Processes." Department of Economics, University of California, San Diego.

  4. Model for predicting mountain wave field uncertainties

    Science.gov (United States)

    Damiens, Florentin; Lott, François; Millet, Christophe; Plougonven, Riwal

    2017-04-01

    Studying the propagation of acoustic waves throughout the troposphere requires knowledge of wind speed and temperature gradients from the ground up to about 10-20 km. Typical planetary boundary layer flows are known to present vertical low-level shears that can interact with mountain waves, thereby triggering small-scale disturbances. Resolving these fluctuations for long-range propagation problems is, however, not feasible because of computer memory/time restrictions, and thus they need to be parameterized. When the disturbances are small enough, these fluctuations can be described by linear equations. Previous works by the co-authors have shown that the critical layer dynamics that occur near the ground produce large horizontal flows and buoyancy disturbances that result in intense downslope winds and gravity wave breaking. While these phenomena manifest almost systematically for high Richardson numbers and when the boundary layer depth is relatively small compared to the mountain height, the process by which static stability affects downslope winds remains unclear. In the present work, new linear mountain gravity wave solutions are tested against numerical predictions obtained with the Weather Research and Forecasting (WRF) model. For Richardson numbers typically larger than unity, the mesoscale model is used to quantify the effect of neglected nonlinear terms on downslope winds and mountain wave patterns. At these regimes, the large downslope winds transport warm air, a so-called "Foehn" effect that can impact sound propagation properties. The sensitivity of small-scale disturbances to the Richardson number is quantified using two-dimensional spectral analysis. It is shown through a pilot study of subgrid-scale fluctuations of boundary layer flows over realistic mountains that the cross-spectrum of the mountain wave field is made up of the same components found in WRF simulations. The impact of each individual component on acoustic wave propagation is discussed in terms of

  5. Predictive spatio-temporal model for spatially sparse global solar radiation data

    International Nuclear Information System (INIS)

    André, Maïna; Dabo-Niang, Sophie; Soubdhan, Ted; Ould-Baba, Hanany

    2016-01-01

    This paper introduces a new approach for the forecasting of solar radiation series at a given station on very short time scales. We built a multivariate model using a few stations (3 stations) separated by irregular distances ranging from 26 km to 56 km. The proposed model is a spatio-temporal vector autoregressive (VAR) model specifically designed for the analysis of spatially sparse spatio-temporal data. This model differs from classic linear models in using spatial and temporal parameters, where the available predictors are the lagged values at each station. A spatial structure of stations is defined by the sequential introduction of predictors in the model. Moreover, an iterative strategy in the fitting process selects the necessary stations, removing uninteresting predictors and selecting the optimal order p. We studied the performance of this model. The error metric, the relative root mean squared error (rRMSE), is presented for different short time scales. Moreover, we compared the results of our model to the simple and well-known persistence model and to those found in the literature. - Highlights: • A spatio-temporal VAR forecast model is used for spatially sparse solar data. • Lags and locations are selected by an optimization strategy. • The definition of the spatial ordering of predictors influences forecasting results. • The model shows better predictive performance at 30 min ahead in our context. • A benchmarking study shows a more accurate forecast at 1 h ahead with the spatio-temporal VAR.
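
    As a concrete sketch of the lagged-predictor idea, the code below fits a VAR(p) model for three synthetic station series by ordinary least squares and issues a one-step forecast, reporting the in-sample rRMSE mentioned above. The data, lag order, and station count are placeholders, and the paper's iterative station/lag selection strategy is omitted.

```python
import numpy as np

# Minimal sketch of a VAR(p) forecast for a few irradiance stations:
# each station's next value is regressed on the last p values of every
# station, estimated by ordinary least squares. Station data and the
# lag order below are illustrative placeholders.

rng = np.random.default_rng(1)
T, k, p = 300, 3, 2                        # time steps, stations, lag order
Y = np.cumsum(rng.normal(size=(T, k)), axis=0) * 0.1 + 500  # fake series

# Build the lagged design matrix: row t holds [1, Y[t-1], Y[t-2], ...]
rows = []
for t in range(p, T):
    rows.append(np.concatenate([[1.0]] + [Y[t - l] for l in range(1, p + 1)]))
X = np.array(rows)                         # shape (T-p, 1 + k*p)
Z = Y[p:]                                  # targets, one column per station

B, *_ = np.linalg.lstsq(X, Z, rcond=None)  # OLS coefficient matrix

# One-step-ahead forecast from the most recent p observations
x_new = np.concatenate([[1.0]] + [Y[T - l] for l in range(1, p + 1)])
print("next-step forecast per station:", x_new @ B)

# In-sample relative RMSE (rRMSE), the error metric used in the paper
resid = Z - X @ B
rrmse = np.sqrt(np.mean(resid ** 2, axis=0)) / np.mean(Z, axis=0)
print("in-sample rRMSE per station:", rrmse)
```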

  6. Model for the prediction of subsurface strata movement due to underground mining

    Science.gov (United States)

    Cheng, Jianwei; Liu, Fangyuan; Li, Siyuan

    2017-12-01

    The problem of ground control stability due to large underground mining operations is often associated with large movements and deformations of strata. It is a complicated problem, and can induce severe safety or environmental hazards either at the surface or in strata. Hence, knowing the subsurface strata movement characteristics, and making any subsidence predictions in advance, are desirable for mining engineers to estimate any damage likely to affect the ground surface or subsurface strata. Based on previous research findings, this paper broadly applies a surface subsidence prediction model based on the influence function method to subsurface strata, in order to predict subsurface stratum movement. A step-wise prediction model is proposed, to investigate the movement of underground strata. The model involves a dynamic iteration calculation process to derive the movements and deformations for each stratum layer; modifications to the influence function method are also made for more precise calculations. The critical subsidence parameters, incorporating stratum mechanical properties and the spatial relationship of interest at the mining level, are thoroughly considered, with the purpose of improving the reliability of input parameters. Such research efforts can be very helpful to mining engineers' understanding of the movement behavior of all strata above underground excavations, and assist in making damage mitigation plans. In order to check the reliability of the model, two methods are carried out and cross-validation applied. One is to use borehole TV monitoring records to identify the progress of subsurface stratum bedding and caving in a coal mine; the other is to conduct physical modelling of the subsidence in underground strata. The results of these two methods are compared with theoretical results calculated by the proposed mathematical model. The testing results agree well with each other, and the acceptable accuracy and reliability of the
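
    The influence-function principle such a model builds on can be sketched in a few lines: subsidence at a point is the superposition of a bell-shaped influence kernel over every extracted element. The kernel, panel geometry, and parameters below are generic illustrations, not the paper's values, and the stratum-by-stratum iteration is reduced to a single layer.

```python
import numpy as np

# Illustrative sketch of the influence-function idea: surface (or stratum)
# subsidence is the superposition of the influence of each extracted
# element of the mined panel. Kernel and geometry are generic placeholders.

def influence(x, r):
    """Classic bell-shaped influence function with radius of influence r."""
    return np.exp(-np.pi * (x / r) ** 2) / r

s_max, r = 2.0, 150.0                     # max subsidence (m), radius (m), assumed
xs_panel = np.linspace(-100.0, 100.0, 400)  # extracted panel, -100 m..+100 m
dx = xs_panel[1] - xs_panel[0]

x_obs = np.linspace(-400.0, 400.0, 161)   # observation points on a stratum
# Superpose (numerically integrate) the influence of every extracted element
subsidence = np.array(
    [s_max * np.sum(influence(x - xs_panel, r)) * dx for x in x_obs]
)
print("max predicted subsidence: %.2f m" % subsidence.max())
```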

  7. RACLETTE: a model for evaluating the thermal response of plasma facing components to slow high power plasma transients. Part II: Analysis of ITER plasma facing components

    Science.gov (United States)

    Federici, Gianfranco; Raffray, A. René

    1997-04-01

    The transient thermal model RACLETTE (acronym of Rate Analysis Code for pLasma Energy Transfer Transient Evaluation) described in part I of this paper is applied here to analyse the heat transfer and erosion effects of various slow (100 ms-10 s) high power energy transients on the actively cooled plasma facing components (PFCs) of the International Thermonuclear Experimental Reactor (ITER). These have a strong bearing on the PFC design and need careful analysis. The relevant parameters affecting the heat transfer during the plasma excursions are established. The temperature variation with time and space is evaluated together with the extent of vaporisation and melting (the latter only for metals) for the different candidate armour materials considered for the design (i.e., Be for the primary first wall, Be and CFCs for the limiter, Be, W, and CFCs for the divertor plates) and including for certain cases low-density vapour shielding effects. The critical heat flux, the change of the coolant parameters and the possible severe degradation of the coolant heat removal capability that could result under certain conditions during these transients, for example for the limiter, are also evaluated. Based on the results, the design implications on the heat removal performance and erosion damage of the various ITER PFCs are critically discussed and some recommendations are made for the selection of the most adequate protection materials and optimum armour thickness.
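
    The kind of transient conduction problem such a code solves can be illustrated with a minimal 1D explicit finite-difference model: an armour slab receives a high-power surface flux pulse and is convectively cooled at the back face. The material properties, pulse parameters, and coolant heat transfer coefficient below are rough, invented values, not RACLETTE's models.

```python
import numpy as np

# Minimal 1D explicit finite-difference sketch of a slow high-power
# transient on an actively cooled armour slab: imposed heat flux on the
# front face, convective cooling at the back. Numbers are illustrative.

k, rho, cp = 120.0, 1850.0, 2500.0      # W/m/K, kg/m3, J/kg/K (Be-like)
alpha = k / (rho * cp)
L, nx = 0.01, 101                       # 10 mm armour, grid points
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha                # below the explicit stability limit
q_pulse, t_pulse = 20e6, 1.0            # 20 MW/m2 for 1 s (slow transient)
T_cool, h = 150.0, 3e4                  # coolant temp (C), HTC (W/m2/K)

T = np.full(nx, 200.0)                  # initial armour temperature (C)
t = 0.0
while t < 2.0:
    q = q_pulse if t < t_pulse else 0.0
    Tn = T.copy()
    Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2*T[1:-1] + T[:-2])
    # Front face: imposed heat flux; back face: convective cooling balance
    Tn[0] = Tn[1] + q * dx / k
    Tn[-1] = (k / dx * Tn[-2] + h * T_cool) / (k / dx + h)
    T, t = Tn, t + dt
print("front-face temperature 1 s after the pulse: %.0f C" % T[0])
```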

  8. RACLETTE: a model for evaluating the thermal response of plasma facing components to slow high power plasma transients. Pt. II. Analysis of ITER plasma facing components

    International Nuclear Information System (INIS)

    Federici, G.; Raffray, A.R.

    1997-01-01

    For pt.I see ibid., p.85-100, 1997. The transient thermal model RACLETTE (acronym of Rate Analysis Code for pLasma Energy Transfer Transient Evaluation) described in part I of this paper is applied here to analyse the heat transfer and erosion effects of various slow (100 ms-10 s) high power energy transients on the actively cooled plasma facing components (PFCs) of the International Thermonuclear Experimental Reactor (ITER). These have a strong bearing on the PFC design and need careful analysis. The relevant parameters affecting the heat transfer during the plasma excursions are established. The temperature variation with time and space is evaluated together with the extent of vaporisation and melting (the latter only for metals) for the different candidate armour materials considered for the design (i.e., Be for the primary first wall, Be and CFCs for the limiter, Be, W, and CFCs for the divertor plates) and including for certain cases low-density vapour shielding effects. The critical heat flux, the change of the coolant parameters and the possible severe degradation of the coolant heat removal capability that could result under certain conditions during these transients, for example for the limiter, are also evaluated. Based on the results, the design implications on the heat removal performance and erosion damage of the various ITER PFCs are critically discussed and some recommendations are made for the selection of the most adequate protection materials and optimum armour thickness. (orig.)

  9. Knowledge-based iterative model reconstruction technique in computed tomography of lumbar spine lowers radiation dose and improves tissue differentiation for patients with lower back pain.

    Science.gov (United States)

    Yang, Cheng Hui; Wu, Tung-Hsin; Lin, Chung-Jung; Chiou, Yi-You; Chen, Ying-Chou; Sheu, Ming-Huei; Guo, Wan-Yuo; Chiu, Chen Fen

    2016-10-01

    To evaluate the image quality and diagnostic confidence of reduced-dose computed tomography (CT) of the lumbar spine (L-spine) reconstructed with knowledge-based iterative model reconstruction (IMR). Prospectively, group A consisted of 55 patients imaged with standard acquisition reconstructed with filtered back-projection. Group B consisted of 58 patients imaged with half tube current, reconstructed with hybrid iterative reconstruction (iDose(4)) in Group B1 and knowledge-based IMR in Group B2. Signal-to-noise ratio (SNR) of different regions, the contrast-to-noise ratio between the intervertebral disc (IVD) and dural sac (D-D CNR), and subjective image quality of different regions were compared. Higher strength IMR was also compared in spinal stenosis cases. The SNR of the psoas muscle and D-D CNR were significantly higher in the IMR group. Except for the facet joint, subjective image quality of other regions including IVD, intervertebral foramen (IVF), dural sac, peridural fat, ligamentum flavum, and overall diagnostic acceptability were best for the IMR group. Diagnostic confidence of narrowing IVF and IVD was good (kappa=0.58-0.85). Higher strength IMR delineated IVD better in spinal stenosis cases. Lower dose CT of L-spine reconstructed with IMR demonstrates better tissue differentiation than iDose(4) and standard dose CT with FBP. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  10. CT Pulmonary Angiography at Reduced Radiation Exposure and Contrast Material Volume Using Iterative Model Reconstruction and iDose4 Technique in Comparison to FBP.

    Science.gov (United States)

    Laqmani, Azien; Kurfürst, Maximillian; Butscheidt, Sebastian; Sehner, Susanne; Schmidt-Holtz, Jakob; Behzadi, Cyrus; Nagel, Hans Dieter; Adam, Gerhard; Regier, Marc

    2016-01-01

    To assess image quality of CT pulmonary angiography (CTPA) at reduced radiation exposure (RD-CTPA) and contrast medium (CM) volume using two different iterative reconstruction (IR) algorithms (iDose4 and iterative model reconstruction (IMR)) in comparison to filtered back projection (FBP). 52 patients (body weight < 100 kg, mean BMI: 23.9) with suspected pulmonary embolism (PE) underwent RD-CTPA (tube voltage: 80 kV; mean CTDIvol: 1.9 mGy) using 40 ml CM. Data were reconstructed using FBP and two different IR algorithms (iDose4 and IMR). Subjective and objective image quality and conspicuity of PE were assessed in central, segmental, and subsegmental arteries. Noise reduction of 55% was achieved with iDose4 and of 85% with IMR compared to FBP. Contrast-to-noise ratio significantly increased with iDose4 and IMR compared to FBP (p<0.05). Subjective image quality was rated significantly higher at IMR reconstructions in comparison to iDose4 and FBP. Conspicuity of central and segmental PE significantly improved with the use of IMR. In subsegmental arteries, iDose4 was superior to IMR. CTPA at reduced radiation exposure and contrast medium volume is feasible with the use of IMR, which provides improved image quality and conspicuity of pulmonary embolism in central and segmental arteries.

  11. CT Pulmonary Angiography at Reduced Radiation Exposure and Contrast Material Volume Using Iterative Model Reconstruction and iDose4 Technique in Comparison to FBP.

    Directory of Open Access Journals (Sweden)

    Azien Laqmani

    Full Text Available To assess image quality of CT pulmonary angiography (CTPA) at reduced radiation exposure (RD-CTPA) and contrast medium (CM) volume using two different iterative reconstruction (IR) algorithms (iDose4 and iterative model reconstruction (IMR)) in comparison to filtered back projection (FBP). 52 patients (body weight < 100 kg, mean BMI: 23.9) with suspected pulmonary embolism (PE) underwent RD-CTPA (tube voltage: 80 kV; mean CTDIvol: 1.9 mGy) using 40 ml CM. Data were reconstructed using FBP and two different IR algorithms (iDose4 and IMR). Subjective and objective image quality and conspicuity of PE were assessed in central, segmental, and subsegmental arteries. Noise reduction of 55% was achieved with iDose4 and of 85% with IMR compared to FBP. Contrast-to-noise ratio significantly increased with iDose4 and IMR compared to FBP (p<0.05). Subjective image quality was rated significantly higher at IMR reconstructions in comparison to iDose4 and FBP. Conspicuity of central and segmental PE significantly improved with the use of IMR. In subsegmental arteries, iDose4 was superior to IMR. CTPA at reduced radiation exposure and contrast medium volume is feasible with the use of IMR, which provides improved image quality and conspicuity of pulmonary embolism in central and segmental arteries.

  12. Model Predictive Control for an Industrial SAG Mill

    DEFF Research Database (Denmark)

    Ohan, Valeriu; Steinke, Florian; Metzger, Michael

    2012-01-01

    We discuss Model Predictive Control (MPC) based on ARX models and a simple lower order disturbance model. The advantage of this MPC formulation is that it has few tuning parameters and is based on an ARX prediction model that can readily be identified using standard technologies from system identification ...
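
    As a sketch of the ARX workflow such an MPC rests on, the code below identifies a second-order ARX model from input/output data by least squares and uses it for multi-step prediction. The "plant" coefficients and signals are invented for illustration; the disturbance model and the MPC optimization itself are omitted.

```python
import numpy as np

# Minimal ARX sketch: identify y[t] = a1*y[t-1] + a2*y[t-2] + b1*u[t-1] + e[t]
# by least squares from input/output data, then predict ahead.
# The true coefficients below are made up for illustration.

rng = np.random.default_rng(2)
n = 400
u = rng.normal(size=n)                     # excitation input
y = np.zeros(n)
for t in range(2, n):                      # "plant" generating the data
    y[t] = 1.2*y[t-1] - 0.5*y[t-2] + 0.8*u[t-1] + 0.05*rng.normal()

# Regression matrix [y[t-1], y[t-2], u[t-1]] -> y[t]
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
print("estimated [a1, a2, b1]:", np.round(theta, 3))

def predict(y_hist, u_future, theta, steps=10):
    """Multi-step prediction with a known future input sequence."""
    yp = list(y_hist[-2:])
    for i in range(steps):
        yp.append(theta[0]*yp[-1] + theta[1]*yp[-2] + theta[2]*u_future[i])
    return np.array(yp[2:])

print("10-step prediction:",
      np.round(predict(y, rng.normal(size=10), theta), 2))
```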

  13. Uncertainties in spatially aggregated predictions from a logistic regression model

    NARCIS (Netherlands)

    Horssen, P.W. van; Pebesma, E.J.; Schot, P.P.

    2002-01-01

    This paper presents a method to assess the uncertainty of an ecological spatial prediction model which is based on logistic regression models, using data from the interpolation of explanatory predictor variables. The spatial predictions are presented as approximate 95% prediction intervals. The

  14. Dealing with missing predictor values when applying clinical prediction models.

    NARCIS (Netherlands)

    Janssen, K.J.; Vergouwe, Y.; Donders, A.R.T.; Harrell Jr, F.E.; Chen, Q.; Grobbee, D.E.; Moons, K.G.

    2009-01-01

    BACKGROUND: Prediction models combine patient characteristics and test results to predict the presence of a disease or the occurrence of an event in the future. In the event that test results (predictor) are unavailable, a strategy is needed to help users applying a prediction model to deal with

  15. Neural Fuzzy Inference System-Based Weather Prediction Model and Its Precipitation Predicting Experiment

    Directory of Open Access Journals (Sweden)

    Jing Lu

    2014-11-01

    Full Text Available We propose a weather prediction model in this article based on a neural network and fuzzy inference system (NFIS-WPM), and then apply it to predict daily fuzzy precipitation given meteorological premises for testing. The model consists of two parts: the first part is the “fuzzy rule-based neural network”, which simulates sequential relations among fuzzy sets using an artificial neural network; and the second part is the “neural fuzzy inference system”, which is based on the first part, but can learn new fuzzy rules from the previous ones according to the algorithm we proposed. NFIS-WPM (High Pro) and NFIS-WPM (Ave) are improved versions of this model. It is well known that the need for accurate weather prediction is apparent when considering the benefits. However, the excessive pursuit of accuracy in weather prediction makes some of the “accurate” prediction results meaningless, and the numerical prediction model is often complex and time-consuming. By adapting this novel model to a precipitation prediction problem, we make the predicted outcomes of precipitation more accurate and the prediction methods simpler than by using a complex numerical forecasting model that would occupy large computation resources, be time-consuming and have a low predictive accuracy rate. Accordingly, we achieve more accurate predictive precipitation results than by using traditional artificial neural networks that have low predictive accuracy.

  16. Foundation Settlement Prediction Based on a Novel NGM Model

    Directory of Open Access Journals (Sweden)

    Peng-Yu Chen

    2014-01-01

    Full Text Available Prediction of foundation or subgrade settlement is very important during engineering construction. According to the fact that there are lots of settlement-time sequences with a nonhomogeneous index trend, a novel grey forecasting model called the NGM(1,1,k,c) model is proposed in this paper. With an optimized whitenization differential equation, the proposed NGM(1,1,k,c) model has the property of white exponential law coincidence and can predict a pure nonhomogeneous index sequence precisely. We used two case studies to verify the predictive effect of the NGM(1,1,k,c) model for settlement prediction. The results show that this model can achieve excellent prediction accuracy; thus, the model is quite suitable for simulation and prediction of approximate nonhomogeneous index sequences and has excellent application value in settlement prediction.
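
    For orientation, the sketch below implements the classic GM(1,1) grey model that NGM(1,1,k,c) generalizes: accumulate the series, fit the grey differential equation by least squares, and predict by differencing the exponential solution. The settlement-like data are invented, and the paper's optimized whitenization equation is not reproduced.

```python
import numpy as np

# Sketch of the classic GM(1,1) grey model underlying NGM(1,1,k,c):
# it conveys the fit/predict mechanics on a short settlement-like series.
# The data values below are invented for illustration.

x0 = np.array([2.87, 3.28, 3.34, 3.42, 3.43, 3.47])   # observed settlements

x1 = np.cumsum(x0)                       # accumulated (1-AGO) series
z1 = 0.5 * (x1[1:] + x1[:-1])            # background values
# Grey differential equation x0[k] + a*z1[k] = b, solved by least squares
B = np.column_stack([-z1, np.ones(len(z1))])
(a, b), *_ = np.linalg.lstsq(B, x0[1:], rcond=None)

def predict(k):
    """Predicted original-series value at index k (k=0 is the first point)."""
    if k == 0:
        return x0[0]
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x1_prev = (x0[0] - b / a) * np.exp(-a * (k - 1)) + b / a
    return x1_hat - x1_prev

fitted = [predict(k) for k in range(len(x0))]
print("fitted:   ", np.round(fitted, 2))
print("next step:", round(predict(len(x0)), 2))
```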

  17. ITER Plasma Control System Development

    Science.gov (United States)

    Snipes, Joseph; ITER PCS Design Team

    2015-11-01

    The development of the ITER Plasma Control System (PCS) continues with the preliminary design phase for 1st plasma and early plasma operation in H/He up to Ip = 15 MA in L-mode. The design is being developed through a contract between the ITER Organization and a consortium of plasma control experts from EU and US fusion laboratories, which is expected to be completed in time for a design review at the end of 2016. This design phase concentrates on breakdown including early ECH power and magnetic control of the poloidal field null, plasma current, shape, and position. Basic kinetic control of the heating (ECH, ICH, NBI) and fueling systems is also included. Disruption prediction, mitigation, and maintaining stable operation are also included because of the high magnetic and kinetic stored energy present already for early plasma operation. Support functions for error field topology and equilibrium reconstruction are also required. All of the control functions also must be integrated into an architecture that will be capable of the required complexity of all ITER scenarios. A database is also being developed to collect and manage PCS functional requirements from operational scenarios that were defined in the Conceptual Design with links to proposed event handling strategies and control algorithms for initial basic control functions. A brief status of the PCS development will be presented together with a proposed schedule for design phases up to DT operation.

  18. Predictive capabilities of various constitutive models for arterial tissue.

    Science.gov (United States)

    Schroeder, Florian; Polzer, Stanislav; Slažanský, Martin; Man, Vojtěch; Skácel, Pavel

    2018-02-01

    The aim of this study is to validate some constitutive models by assessing their capabilities in describing and predicting the uniaxial and biaxial behavior of porcine aortic tissue. 14 samples from porcine aortas were used to perform 2 uniaxial and 5 biaxial tensile tests. Transversal strains were furthermore stored for the uniaxial data. The experimental data were fitted by four constitutive models: the Holzapfel-Gasser-Ogden model (HGO), a model based on the generalized structure tensor (GST), the Four-Fiber-Family model (FFF) and the Microfiber model. Fitting was performed to uniaxial and biaxial data sets separately and the descriptive capabilities of the models were compared. Their predictive capabilities were assessed in two ways. Firstly, each model was fitted to biaxial data and its accuracy (in terms of R 2 and NRMSE) in prediction of both uniaxial responses was evaluated. Then this procedure was performed conversely: each model was fitted to both uniaxial tests and its accuracy in prediction of the 5 biaxial responses was observed. Descriptive capabilities of all models were excellent. In predicting uniaxial response from biaxial data, the Microfiber model was the most accurate, while the other models also showed reasonable accuracy. The Microfiber and FFF models were capable of reasonably predicting biaxial responses from uniaxial data, while the HGO and GST models failed completely in this task. The HGO and GST models are not capable of predicting biaxial arterial wall behavior, while the FFF model is the most robust of the investigated constitutive models. Knowledge of transversal strains in uniaxial tests improves the robustness of constitutive models. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Current generation by helicons and LH waves in modern tokamaks and reactors FNSF-AT, ITER and DEMO. Scenarios, modeling and antennae

    Science.gov (United States)

    Vdovin, V.

    2014-02-01

    An innovative concept, supported by 3D full-wave code modeling, of off-axis current drive by RF waves in large-scale tokamaks and reactors (FNSF-AT, ITER and DEMO) for steady-state operation with high efficiency was proposed [1] to overcome the problems well known for the LH method [2]. The scheme uses helicon radiation (fast magnetosonic waves at high (20-40) IC frequency harmonics) at frequencies of 500-1000 MHz, propagating in the outer regions of plasmas with a rotational transform. It is expected that the current generated by helicons will help to obtain regimes with negative magnetic shear and an internal transport barrier, to ensure stability at high normalized plasma pressure βN > 3 (the so-called Advanced scenarios) of interest for FNSF and the commercial reactor. Modeling with the full-wave three-dimensional codes PSTELION and STELEC2 showed flexible control of the current profile in the reactor plasmas of ITER, FNSF-AT and DEMO [2,3], using multiple frequencies, the positions of the antennae and toroidal wave slow-down. Also presented are the results of simulations of current generation by helicons in the tokamaks DIII-D, T-15MD and JT-60SA [3]. In DEMO and a power plant the antenna is strongly simplified, being an analogue of a mirror-based ECRF launcher, as will be shown. For spherical tokamaks the helicon excitation scheme does not provide efficient off-axis CD profile flexibility, due to the strong coupling of helicons with the O-mode, also through the boundary conditions in low-aspect-ratio machines, and the intrinsically large fraction of trapped electrons, as is shown by STELION modeling for the NSTX tokamak. A brief history of experimental and modeling exploration of helicons in straight plasmas, tokamaks and tokamak-based fusion reactor projects is given, including the planned joint DIII-D - Kurchatov Institute experiment on helicon CD [1].

  20. ITER days in Moscow

    International Nuclear Information System (INIS)

    Golubchikov, L.

    2001-01-01

    In connection with the successful completion of the Engineering Design of the International Thermonuclear Reactor (ITER) and the 50th anniversary of fusion research in the USSR, the Ministry of the Russian Federation for Atomic Energy (Minatom) with the participation of the Russian Academy of Sciences, organized the International Symposium 'ITER days in Moscow' on 7-8 June 2001. About 250 people from more than 20 states took part in the Meeting. The participants welcomed the R and D results of the ITER project and considered it as a necessary step to establish a basis for a fusion energy source. There were also some scientific presentations on the following topics: ITER physics basis; Effect of fusion research on general physics; Fusion power reactors; US interests in burning plasma

  1. ITER definition phase

    International Nuclear Information System (INIS)

    1989-01-01

    The International Thermonuclear Experimental Reactor (ITER) is envisioned as a fusion device which would demonstrate the scientific and technological feasibility of fusion power. As a first step towards achieving this goal, the European Community, Japan, the Soviet Union, and the United States of America have entered into joint conceptual design activities under the auspices of the International Atomic Energy Agency. A brief summary of the Definition Phase of ITER activities is contained in this report. Included in this report are the background, objectives, organization, definition phase activities, and research and development plan of this endeavor in international scientific collaboration. A more extended technical summary is contained in the two-volume report, ''ITER Concept Definition,'' IAEA/ITER/DS/3. 2 figs, 2 tabs

  2. Power converters for ITER

    CERN Document Server

    Benfatto, I

    2006-01-01

    The International Thermonuclear Experimental Reactor (ITER) is a thermonuclear fusion experiment designed to provide long deuterium–tritium burning plasma operation. After a short description of ITER objectives, the main design parameters and the construction schedule, the paper describes the electrical characteristics of the French 400 kV grid at Cadarache: the European site proposed for ITER. Moreover, the paper describes the main requirements and features of the power converters designed for the ITER coil and additional heating power supplies, characterized by a total installed power of about 1.8 GVA, modular design with basic units up to 90 MVA continuous duty, dc currents up to 68 kA, and voltages from 1 kV to 1 MV dc.

  3. Comparing National Water Model Inundation Predictions with Hydrodynamic Modeling

    Science.gov (United States)

    Egbert, R. J.; Shastry, A.; Aristizabal, F.; Luo, C.

    2017-12-01

    The National Water Model (NWM) simulates the hydrologic cycle and produces streamflow forecasts, runoff, and other variables for 2.7 million reaches along the National Hydrography Dataset for the continental United States. NWM applies Muskingum-Cunge channel routing, which is based on the continuity equation. However, the momentum equation also needs to be considered to obtain better estimates of streamflow and stage in rivers, especially for applications such as flood inundation mapping. Simulation Program for River NeTworks (SPRNT) is a fully dynamic model for large scale river networks that solves the full nonlinear Saint-Venant equations for 1D flow and stage height in river channel networks with non-uniform bathymetry. For the current work, the steady-state version of the SPRNT model was leveraged. An evaluation of SPRNT's and NWM's abilities to predict inundation was conducted for the record flood of Hurricane Matthew in October 2016 along the Neuse River in North Carolina. This event was known to have been influenced by backwater effects from the hurricane's storm surge. Retrospective NWM discharge predictions were converted to stage using synthetic rating curves. The stages from both models were utilized to produce flood inundation maps using the Height Above Nearest Drainage (HAND) method, which uses the local relative heights to provide a spatial representation of inundation depths. In order to validate the inundation produced by the models, Sentinel-1A synthetic aperture radar data in the VV and VH polarizations, along with auxiliary data, were used to produce a reference inundation map. A preliminary, binary comparison of the inundation maps to the reference, limited to the five HUC-12 areas of Goldsboro, NC, showed that the flood inundation accuracies for NWM and SPRNT were 74.68% and 78.37%, respectively. The differences for all the relevant test statistics including accuracy, true positive rate, true negative rate, and positive predictive value were found
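
    The binary map comparison described above reduces to a confusion matrix over pixels; the sketch below computes accuracy, true positive rate, true negative rate, and positive predictive value for a modeled raster against a reference raster. The two random rasters are placeholders for the model and SAR-derived maps.

```python
import numpy as np

# Sketch of a binary inundation-map comparison: given a modeled raster
# and a reference raster (True = wet, False = dry), compute the four
# statistics named above. Random rasters stand in for real maps.

rng = np.random.default_rng(3)
reference = rng.random((200, 200)) < 0.3              # placeholder SAR map
modeled = reference ^ (rng.random((200, 200)) < 0.1)  # noisy model map

tp = np.sum(modeled & reference)
tn = np.sum(~modeled & ~reference)
fp = np.sum(modeled & ~reference)
fn = np.sum(~modeled & reference)

print("accuracy:", (tp + tn) / (tp + tn + fp + fn))
print("TPR:", tp / (tp + fn))           # hit rate for wet pixels
print("TNR:", tn / (tn + fp))           # hit rate for dry pixels
print("PPV:", tp / (tp + fp))           # precision of wet predictions
```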

  4. Approximate iterative algorithms

    CERN Document Server

    Almudevar, Anthony Louis

    2014-01-01

    Iterative algorithms often rely on approximate evaluation techniques, which may include statistical estimation, computer simulation or functional approximation. This volume presents methods for the study of approximate iterative algorithms, providing tools for the derivation of error bounds and convergence rates, and for the optimal design of such algorithms. Techniques of functional analysis are used to derive analytical relationships between approximation methods and convergence properties for general classes of algorithms. This work provides the necessary background in functional analysis a

  5. ITER EDA and technology

    International Nuclear Information System (INIS)

    Baker, C.C.

    2001-01-01

    The year 1998 was the culmination of the six-year Engineering Design Activities (EDA) of the International Thermonuclear Experimental Reactor (ITER) Project. The EDA's results in design and in validating technology R and D, plus the associated effort in voluntary physics research, are a significant achievement and a major milestone in the history of magnetic fusion energy development. Consequently, the ITER EDA was a major theme at this Conference, contributing almost 40 papers.

  6. ITER explorations started

    International Nuclear Information System (INIS)

    Golubchikov, L.

    2000-01-01

    Opening this first Explorers' Meeting, Minister Adamov welcomed the participants, thanked the ITER parties for their positive response to his invitation and expressed the desire of the Russian Federation to see ITER realized, stressing the importance of continued progress with the project as an outstanding example of international scientific co-operation. During the meeting, the exploration tasks were discussed and agreed upon, as well as the work plan and schedule

  7. Predictive models for moving contact line flows

    Science.gov (United States)

    Rame, Enrique; Garoff, Stephen

    2003-01-01

    Modeling flows with moving contact lines poses the formidable challenge that the usual assumptions of Newtonian fluid and no-slip condition give rise to a well-known singularity. This singularity prevents one from satisfying the contact angle condition to compute the shape of the fluid-fluid interface, a crucial calculation without which design parameters such as the pressure drop needed to move an immiscible 2-fluid system through a solid matrix cannot be evaluated. Some progress has been made for low Capillary number spreading flows. Combining experimental measurements of fluid-fluid interfaces very near the moving contact line with an analytical expression for the interface shape, we can determine a parameter that forms a boundary condition for the macroscopic interface shape when Ca ≪ 1. This parameter, which plays the role of an "apparent" or macroscopic dynamic contact angle, is shown by the theory to depend on the system geometry through the macroscopic length scale. This theoretically established dependence on geometry allows this parameter to be "transferable" from the geometry of the measurement to any other geometry involving the same material system. Unfortunately this prediction of the theory cannot be tested on Earth.

  8. Developmental prediction model for early alcohol initiation in Dutch adolescents

    NARCIS (Netherlands)

    Geels, L.M.; Vink, J.M.; Beijsterveldt, C.E.M. van; Bartels, M.; Boomsma, D.I.

    2013-01-01

    Objective: Multiple factors predict early alcohol initiation in teenagers. Among these are genetic risk factors, childhood behavioral problems, life events, lifestyle, and family environment. We constructed a developmental prediction model for alcohol initiation below the Dutch legal drinking age

  9. An iteratively reweighted least-squares approach to adaptive robust adjustment of parameters in linear regression models with autoregressive and t-distributed deviations

    Science.gov (United States)

    Kargoll, Boris; Omidalizarandi, Mohammad; Loth, Ina; Paffenholz, Jens-André; Alkhatib, Hamza

    2018-03-01

    In this paper, we investigate a linear regression time series model of possibly outlier-afflicted observations and autocorrelated random deviations. This colored noise is represented by a covariance-stationary autoregressive (AR) process, in which the independent error components follow a scaled (Student's) t-distribution. This error model allows for the stochastic modeling of multiple outliers and for an adaptive robust maximum likelihood (ML) estimation of the unknown regression and AR coefficients, the scale parameter, and the degrees of freedom of the t-distribution. This approach is meant to be an extension of known estimators, which tend to focus only on the regression model, or on the AR error model, or on normally distributed errors. For the purpose of ML estimation, we derive an expectation conditional maximization either (ECME) algorithm, which leads to an easy-to-implement version of iteratively reweighted least squares. The estimation performance of the algorithm is evaluated via Monte Carlo simulations for a Fourier as well as a spline model in connection with AR colored noise models of different orders and with three different sampling distributions generating the white noise components. We apply the algorithm to a vibration dataset recorded by a high-accuracy, single-axis accelerometer, focusing on the evaluation of the estimated AR colored noise model.
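
    The core of the resulting iteratively reweighted least-squares scheme can be sketched compactly: each sweep downweights observations with large standardized residuals using the t-model weights w_i = (nu+1)/(nu + r_i^2/sigma^2) and re-solves a weighted least-squares problem. In this toy version the degrees of freedom are fixed and the AR error model is omitted, both of which the paper's ECME algorithm estimates.

```python
import numpy as np

# Minimal IRLS sketch for linear regression with scaled t-distributed
# errors: weights w_i = (nu+1)/(nu + r_i^2/sigma^2) downweight outliers.
# nu is fixed here; the paper additionally estimates nu and an AR model.

rng = np.random.default_rng(4)
n = 200
t_grid = np.linspace(0.0, 1.0, n)
X = np.column_stack([np.ones(n), t_grid])        # straight-line design
y = X @ np.array([1.0, 2.0]) + 0.1 * rng.standard_t(df=3, size=n)
y[::25] += 3.0                                   # inject a few gross outliers

nu = 3.0                                         # fixed degrees of freedom
beta = np.linalg.lstsq(X, y, rcond=None)[0]      # OLS starting point
sigma2 = np.mean((y - X @ beta) ** 2)
for _ in range(50):
    r = y - X @ beta
    w = (nu + 1.0) / (nu + r**2 / sigma2)        # E-step: robust weights
    sigma2 = np.mean(w * r**2)                   # CM-step: scale update
    sw = np.sqrt(w)
    beta = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
print("robust estimate of [intercept, slope]:", np.round(beta, 3))
```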

  10. Iterative circle-inserting algorithm CST3D-OC of truly orthogonal curvilinear grid for coastal or river modelling

    Science.gov (United States)

    Kim, H.; Lee, S.; Lee, J.; Lim, H.-S.

    2017-08-01

    A geometric method to generate an orthogonal curvilinear grid is proposed here. Elliptic partial differential equations have frequently been solved to find orthogonal grid positions, but questions on orthogonality have remained so far. Algebraic methods have also been developed to improve orthogonality, but their applications have been limited to special situations. When two confronting boundary lines of the quadrilateral boundaries are straight, and their positions are known, and we assume that some degree of freedom exists on the other two confronting boundary curves under the condition that each curve passes through a point, we can assign a set of latitudinal curves in the domain using polynomials. The curves are expected not to fold on their own. The grid positions along longitudinal curves are found by inserting circles between two neighbouring latitudinal curves one by one. If the two curves are straight, the new grid point above the grid point of interest can be found geometrically. This algorithm involves iterations because the curves are not straight lines. The present new algorithm is applied to a domain and produces almost perfect orthogonality and a similar aspect ratio compared to an existing partial differential equation approach. The present algorithm can also represent an almost quadrant-shaped domain. The present algorithm seems useful for generation of orthogonal curvilinear grids along coasts or rivers. Some example grids are demonstrated.

  11. ITER Status and Plans

    Science.gov (United States)

    Greenfield, Charles M.

    2017-10-01

    The US Burning Plasma Organization is pleased to welcome Dr. Bernard Bigot, who will give an update on progress in the ITER Project. Dr. Bigot took over as Director General of the ITER Organization in early 2015 following a distinguished career that included serving as Chairman and CEO of the French Alternative Energies and Atomic Energy Commission and as High Commissioner for ITER in France. During his tenure at ITER the project has moved into high gear, with rapid progress evident on the construction site and preparation of a staged schedule and a research plan leading from where we are today all the way to full DT operation. In an unprecedented international effort, seven partners (China, the European Union, India, Japan, Korea, Russia and the United States) have pooled their financial and scientific resources to build the biggest fusion reactor in history. ITER will open the way to the next step: a demonstration fusion power plant. All DPP attendees are welcome to attend this ITER town meeting.

  12. Seasonal predictability of Kiremt rainfall in coupled general circulation models

    Science.gov (United States)

    Gleixner, Stephanie; Keenlyside, Noel S.; Demissie, Teferi D.; Counillon, François; Wang, Yiguo; Viste, Ellen

    2017-11-01

    The Ethiopian economy and population is strongly dependent on rainfall. Operational seasonal predictions for the main rainy season (Kiremt, June-September) are based on statistical approaches with Pacific sea surface temperatures (SST) as the main predictor. Here we analyse dynamical predictions from 11 coupled general circulation models for the Kiremt seasons from 1985-2005 with the forecasts starting from the beginning of May. We find skillful predictions from three of the 11 models, but no model beats a simple linear prediction model based on the predicted Niño3.4 indices. The skill of the individual models for dynamically predicting Kiremt rainfall depends on the strength of the teleconnection between Kiremt rainfall and concurrent Pacific SST in the models. Models that do not simulate this teleconnection fail to capture the observed relationship between Kiremt rainfall and the large-scale Walker circulation.
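
    The "simple linear prediction model" benchmark can be sketched directly: regress Kiremt-season rainfall on a predicted Niño3.4 index and hindcast in leave-one-out fashion. Both series below are synthetic stand-ins, and the leave-one-out protocol is an illustrative choice, not necessarily the paper's verification setup.

```python
import numpy as np

# Sketch of the linear benchmark: predict seasonal (Kiremt) rainfall from
# a predicted Nino3.4 index by linear regression. Both series below are
# synthetic stand-ins for the 1985-2005 values.

rng = np.random.default_rng(5)
nino34 = rng.normal(size=21)                    # predicted index, 21 seasons
rain = 4.0 - 1.5 * nino34 + rng.normal(scale=0.8, size=21)  # mm/day

# Leave-one-out hindcast with the linear model rain = c0 + c1 * nino34
pred = np.empty_like(rain)
for i in range(len(rain)):
    mask = np.arange(len(rain)) != i
    c1, c0 = np.polyfit(nino34[mask], rain[mask], 1)  # slope, intercept
    pred[i] = c0 + c1 * nino34[i]

corr = np.corrcoef(pred, rain)[0, 1]
print("leave-one-out skill (correlation): %.2f" % corr)
```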

  13. Protein secondary structure prediction for a single-sequence using hidden semi-Markov models

    Directory of Open Access Journals (Sweden)

    Borodovsky Mark

    2006-03-01

    Full Text Available Abstract Background The accuracy of protein secondary structure prediction has been improving steadily towards the 88% estimated theoretical limit. There are two types of prediction algorithms: Single-sequence prediction algorithms imply that information about other (homologous) proteins is not available, while algorithms of the second type imply that information about homologous proteins is available, and use it intensively. The single-sequence algorithms could make an important contribution to studies of proteins with no detected homologs, however the accuracy of protein secondary structure prediction from a single-sequence is not as high as when the additional evolutionary information is present. Results In this paper, we further refine and extend the hidden semi-Markov model (HSMM) initially considered in the BSPSS algorithm. We introduce an improved residue dependency model by considering the patterns of statistically significant amino acid correlation at structural segment borders. We also derive models that specialize on different sections of the dependency structure and incorporate them into HSMM. In addition, we implement an iterative training method to refine estimates of HSMM parameters. The three-state-per-residue accuracy and other accuracy measures of the new method, IPSSP, are shown to be comparable or better than ones for BSPSS as well as for PSIPRED, tested under the single-sequence condition. Conclusions We have shown that new dependency models and training methods bring further improvements to single-sequence protein secondary structure prediction. The results are obtained under cross-validation conditions using a dataset with no pair of sequences having significant sequence similarity. As new sequences are added to the database it is possible to augment the dependency structure and obtain even higher accuracy. Current and future advances should contribute to the improvement of function prediction for orphan proteins inscrutable

  14. Multi-objective radiomics model for predicting distant failure in lung SBRT

    Science.gov (United States)

    Zhou, Zhiguo; Folkert, Michael; Iyengar, Puneeth; Westover, Kenneth; Zhang, Yuanyuan; Choy, Hak; Timmerman, Robert; Jiang, Steve; Wang, Jing

    2017-06-01

    Stereotactic body radiation therapy (SBRT) has demonstrated high local control rates in early stage non-small cell lung cancer patients who are not ideal surgical candidates. However, distant failure after SBRT is still common. For patients at high risk of early distant failure after SBRT treatment, additional systemic therapy may reduce the risk of distant relapse and improve overall survival. Therefore, a strategy that can correctly stratify patients at high risk of failure is needed. The field of radiomics holds great potential in predicting treatment outcomes by using high-throughput extraction of quantitative imaging features. The construction of predictive models in radiomics is typically based on a single objective such as overall accuracy or the area under the curve (AUC). However, because of imbalanced positive and negative events in the training datasets, a single objective may not be ideal to guide model construction. To overcome these limitations, we propose a multi-objective radiomics model that simultaneously considers sensitivity and specificity as objective functions. To design a more accurate and reliable model, an iterative multi-objective immune algorithm (IMIA) was proposed to optimize these objective functions. The multi-objective radiomics model is more sensitive than the single-objective model, while maintaining the same levels of specificity and AUC. The IMIA performs better than the traditional immune-inspired multi-objective algorithm.
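
    The selection principle behind optimizing sensitivity and specificity jointly can be illustrated with a Pareto filter: keep every candidate model that no other candidate beats in both objectives. The scores below are random placeholders, and the sketch shows only the dominance test, not the IMIA optimizer itself.

```python
import numpy as np

# Sketch of multi-objective model selection: score candidate models by
# (sensitivity, specificity) and keep the non-dominated (Pareto-optimal)
# ones instead of ranking by a single number such as accuracy.
# Candidate scores are random placeholders.

rng = np.random.default_rng(6)
scores = rng.random((20, 2))        # rows: models, cols: (sens, spec)

def pareto_front(points):
    """Indices of points that no other point dominates in both objectives."""
    keep = []
    for i, p in enumerate(points):
        dominated = np.any(np.all(points >= p, axis=1) &
                           np.any(points > p, axis=1))
        if not dominated:
            keep.append(i)
    return keep

front = pareto_front(scores)
print("Pareto-optimal models:", front)
print(np.round(scores[front], 2))
```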

  15. MODELLING OF DYNAMIC SPEED LIMITS USING THE MODEL PREDICTIVE CONTROL

    Directory of Open Access Journals (Sweden)

    Andrey Borisovich Nikolaev

    2017-09-01

    Full Text Available The article considers the issues of traffic management using the intelligent system “Car-Road” (IVHS), which consists of interacting intelligent vehicles (IV) and intelligent roadside controllers. Vehicles are organized in convoys with small distances between them. All vehicles are assumed to be fully automated (throttle control, braking, steering). Approaches are proposed for determining speed limits for traffic on the motorway using model predictive control (MPC). The article proposes an approach to dynamic speed limits to minimize the downtime of vehicles in traffic.

  16. Quantification and reduction of the collimator-detector response effect in SPECT by applying a system model during iterative image reconstruction: a simulation study.

    Science.gov (United States)

    Kalantari, Faraz; Rajabi, Hossein; Saghari, Mohsen

    2012-03-01

    Detector blurring and non-ideal collimation decrease the spatial resolution of single-photon emission computed tomography (SPECT) images. Iterative reconstruction algorithms such as ordered subsets expectation maximization (OSEM) can incorporate degrading factors during reconstruction. We investigated the quantitative errors associated with poor SPECT resolution and evaluated the importance of two-dimensional (2D) and three-dimensional (3D) resolution recovery by modelling the system response during iterative image reconstruction. Phantoms consisting of the NURBS-based cardiac-torso (NCAT) liver phantom with small tumors, the Zubal brain phantom and the NCAT heart phantom were used in this study. Monte Carlo simulation was used to create SPECT projections. Gaussian functions were used to model the collimator-detector response (CDR). The modelled CDRs were applied during OSEM. Both noise-free and noisy projections were created. Even with noise-free projections, the conventional OSEM algorithm provided limited quantitative accuracy compared to both 2D and 3D resolution recovery. The 3D implementation of resolution recovery, however, yielded superior results compared to its 2D implementation. For the liver phantom, the ability to distinguish small tumors in both transverse and axial planes was improved. For the brain phantom, the gray-to-white matter activity ratio was increased from 3.14 ± 0.04 with simple OSEM to 3.84 ± 0.06 with 3D OSEM. For the NCAT heart phantom, 3D resolution recovery resulted in images with thinner walls and higher contrast for different noise levels. There are considerable quantitative errors associated with the CDR, especially when the size of the target is comparable with the spatial resolution of the system. Among the different reconstruction algorithms, 3D OSEM, which considers the 3D nature of the CDR, improves both the visual quality and the quantitative accuracy of SPECT studies.
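
    A 1D toy version of resolution modelling in iterative reconstruction is sketched below: ML-EM is run with a system matrix that includes a Gaussian collimator-detector blur, and compared with a run that ignores the blur. The object, blur width, and iteration count are illustrative, and noise, subsets, and the 2D/3D distinction are omitted.

```python
import numpy as np

# 1D sketch of resolution modelling in ML-EM: the system matrix A blurs
# the object with a Gaussian collimator-detector response (CDR);
# reconstructing with the correct A (resolution recovery) is compared to
# reconstructing while ignoring the blur. Sizes/widths are illustrative.

n = 64
x_true = np.zeros(n); x_true[28:36] = 1.0        # small hot structure

def gaussian_matrix(n, sigma):
    i = np.arange(n)
    A = np.exp(-0.5 * ((i[:, None] - i[None, :]) / sigma) ** 2)
    return A / A.sum(axis=1, keepdims=True)      # row-normalized blur

A_true = gaussian_matrix(n, sigma=3.0)           # actual CDR blur
y = A_true @ x_true                               # noise-free projection

def mlem(A, y, iters=200):
    x = np.ones(len(y))                           # positive starting image
    sens = A.sum(axis=0)                          # sensitivity image
    for _ in range(iters):
        x *= (A.T @ (y / np.maximum(A @ x, 1e-12))) / sens
    return x

x_rr = mlem(A_true, y)                            # with the CDR model
x_plain = mlem(np.eye(n), y)                      # blur ignored
print("RMSE with resolution recovery:", np.sqrt(np.mean((x_rr - x_true)**2)))
print("RMSE ignoring the CDR        :", np.sqrt(np.mean((x_plain - x_true)**2)))
```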

  17. Iterative near-term ecological forecasting: Needs, opportunities, and challenges

    Science.gov (United States)

    Dietze, Michael C.; Fox, Andrew; Beck-Johnson, Lindsay; Betancourt, Julio L.; Hooten, Mevin B.; Jarnevich, Catherine S.; Keitt, Timothy H.; Kenney, Melissa A.; Laney, Christine M.; Larsen, Laurel G.; Loescher, Henry W.; Lunch, Claire K.; Pijanowski, Bryan; Randerson, James T.; Read, Emily; Tredennick, Andrew T.; Vargas, Rodrigo; Weathers, Kathleen C.; White, Ethan P.

    2018-01-01

    Two foundational questions about sustainability are “How are ecosystems and the services they provide going to change in the future?” and “How do human decisions affect these trajectories?” Answering these questions requires an ability to forecast ecological processes. Unfortunately, most ecological forecasts focus on centennial-scale climate responses, therefore neither meeting the needs of near-term (daily to decadal) environmental decision-making nor allowing comparison of specific, quantitative predictions to new observational data, one of the strongest tests of scientific theory. Near-term forecasts provide the opportunity to iteratively cycle between performing analyses and updating predictions in light of new evidence. This iterative process of gaining feedback, building experience, and correcting models and methods is critical for improving forecasts. Iterative, near-term forecasting will accelerate ecological research, make it more relevant to society, and inform sustainable decision-making under high uncertainty and adaptive management. Here, we identify the immediate scientific and societal needs, opportunities, and challenges for iterative near-term ecological forecasting. Over the past decade, data volume, variety, and accessibility have greatly increased, but challenges remain in interoperability, latency, and uncertainty quantification. Similarly, ecologists have made considerable advances in applying computational, informatic, and statistical methods, but opportunities exist for improving forecast-specific theory, methods, and cyberinfrastructure. Effective forecasting will also require changes in scientific training, culture, and institutions. The need to start forecasting is now; the time for making ecology more predictive is here, and learning by doing is the fastest route to drive the science forward.

  18. Predictability in models of the atmospheric circulation

    NARCIS (Netherlands)

    Houtekamer, P.L.

    1992-01-01

    It will be clear from the above discussions that skill forecasts are still in their infancy. Operational skill predictions do not exist. One is still struggling to prove that skill predictions, at any range, have any quality at all. It is not clear what the statistics of the analysis error

  19. Iterative model reconstruction: Improved image quality of low-tube-voltage prospective ECG-gated coronary CT angiography images at 256-slice CT

    Energy Technology Data Exchange (ETDEWEB)

    Oda, Seitaro, E-mail: seisei0430@nifty.com [Department of Cardiology, MedStar Washington Hospital Center, 110 Irving Street, NW, Washington, DC 20010 (United States); Department of Diagnostic Radiology, Faculty of Life Sciences, Kumamoto University, 1-1-1 Honjyo, Chuo-ku, Kumamoto, 860-8556 (Japan); Weissman, Gaby, E-mail: Gaby.Weissman@medstar.net [Department of Cardiology, MedStar Washington Hospital Center, 110 Irving Street, NW, Washington, DC 20010 (United States); Vembar, Mani, E-mail: mani.vembar@philips.com [CT Clinical Science, Philips Healthcare, c595 Miner Road, Cleveland, OH 44143 (United States); Weigold, Wm. Guy, E-mail: Guy.Weigold@MedStar.net [Department of Cardiology, MedStar Washington Hospital Center, 110 Irving Street, NW, Washington, DC 20010 (United States)

    2014-08-15

    Objectives: To investigate the effects of a new model-based iterative reconstruction (M-IR) technique, iterative model reconstruction, on the image quality of prospectively gated coronary CT angiography (CTA) acquired at low tube voltage. Methods: Thirty patients (16 men, 14 women; mean age 52.2 ± 13.2 years) underwent coronary CTA at 100 kVp on a 256-slice CT scanner. Paired image sets were created using 3 types of reconstruction: filtered back projection (FBP), a hybrid type of iterative reconstruction (H-IR), and M-IR. Quantitative parameters including CT attenuation, image noise, and contrast-to-noise ratio (CNR) were measured. Visual image quality, i.e. graininess, beam hardening, vessel sharpness, and overall image quality, was scored on a 5-point scale. Lastly, coronary artery segments were evaluated using a 4-point scale to investigate the assessability of each segment. Results: There was no significant difference in coronary arterial CT attenuation among the 3 reconstruction methods. The mean image noise of the FBP, H-IR, and M-IR images was 29.3 ± 9.6, 19.3 ± 6.9, and 12.9 ± 3.3 HU, respectively; there were significant differences for all comparison combinations among the 3 methods (p < 0.01). The CNR of M-IR was significantly better than that of the FBP and H-IR images (13.5 ± 5.0 [FBP], 20.9 ± 8.9 [H-IR] and 39.3 ± 13.9 [M-IR]; p < 0.01). The visual scores were significantly higher for M-IR than for the other images (p < 0.01), and 95.3% of the coronary segments imaged with M-IR were of assessable quality, compared with 76.7% of FBP and 86.9% of H-IR images. Conclusions: M-IR can provide significantly improved qualitative and quantitative image quality in prospectively gated coronary CTA at low tube voltage.
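
    For reference, the CNR reported in such studies is typically the attenuation difference between the vessel lumen and a reference tissue divided by the image noise; a minimal sketch with made-up ROI statistics (the study's exact reference tissue is not stated in the abstract):

    # Common CNR definition for coronary CTA: attenuation difference between
    # the vessel lumen and reference tissue divided by image noise. The ROI
    # numbers below are made-up placeholders, not values from the study.
    hu_vessel, hu_reference, noise_sd = 450.0, -80.0, 12.9
    cnr = (hu_vessel - hu_reference) / noise_sd
    print(round(cnr, 1))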

  20. A fast and pragmatic approach for scatter correction in flat-detector CT using elliptic modeling and iterative optimization

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, Michael; Kalender, Willi A; Kyriakou, Yiannis [Institute of Medical Physics, University of Erlangen-Nuernberg (Germany)], E-mail: michael.meyer@imp.uni-erlangen.de

    2010-01-07

    Scattered radiation is a major source of artifacts in flat-detector computed tomography (FDCT) due to the increased irradiated volumes. We propose a fast projection-based algorithm for the correction of scatter artifacts. The algorithm combines a convolution method for determining the spatial distribution of the scatter intensity with an object-size-dependent scaling of the scatter intensity distributions, using a priori information generated by Monte Carlo simulations. A projection-based (PBSE) and an image-based (IBSE) strategy for size estimation of the scanned object are presented. Both strategies provide good correction and comparable results; the faster PBSE strategy is recommended. Even with such a fast and simple algorithm, which in the PBSE variant does not rely on reconstructed volumes or scatter measurements, it is possible to provide a reasonable scatter correction even for truncated scans. For both simulations and measurements, scatter artifacts were significantly reduced and the algorithm showed stable behavior in the z-direction. For simulated voxelized head, hip and thorax phantoms, figures of merit Q of 0.82, 0.76 and 0.77 were reached, respectively (Q = 0 for uncorrected, Q = 1 for ideal). For a water phantom with 15 cm diameter, for example, cupping was reduced from 10.8% down to 2.1%. The performance of the correction method has limitations in the case of measurements using non-ideal detectors, intensity calibration, etc. An iterative approach to overcome most of these limitations is proposed; it is based on root finding of a cupping metric and may be useful for other scatter correction methods as well. By this optimization, cupping of the measured water phantom was further reduced down to 0.9%. The algorithm was evaluated on a commercial system, including truncated and non-homogeneous clinically relevant objects.
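
    The iterative refinement can be pictured as one-dimensional root finding: adjust a scatter scaling factor until a cupping metric of the corrected image crosses zero. A hedged sketch, where `recon_fn` is a hypothetical stand-in for the whole correction-plus-reconstruction chain and monotonicity is assumed:

    # Sketch of the iterative refinement as 1-D root finding: tune a scatter
    # scaling factor s until a cupping metric of the corrected reconstruction
    # crosses zero. `recon_fn(s)` is a hypothetical stand-in; we assume
    # cupping decreases monotonically as s grows.

    def cupping(profile):
        """Relative centre-to-edge drop of a water-phantom profile."""
        centre = profile[len(profile) // 2]
        edge = 0.5 * (profile[0] + profile[-1])
        return (edge - centre) / edge

    def find_scatter_scale(recon_fn, lo=0.0, hi=2.0, tol=1e-3, max_iter=50):
        for _ in range(max_iter):
            mid = 0.5 * (lo + hi)
            c = cupping(recon_fn(mid))
            if abs(c) < tol:
                break
            if c > 0:           # still cupped: apply more correction
                lo = mid
            else:               # overcorrected (capping): back off
                hi = mid
        return 0.5 * (lo + hi)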

  1. Modeling of the negative ions extraction from a hydrogen plasma source. Application to ITER Neutral Beam Injector

    International Nuclear Information System (INIS)

    Mochalskyy, S.

    2011-12-01

    The development of a high-performance negative ion (NI) source constitutes a crucial step in the construction of the Neutral Beam Injector of the future fusion reactor ITER. The NI source should deliver 40 A of H⁻ or D⁻. To address this problem in a realistic way, a 3D particle-in-cell electrostatic collisional code was developed. Binary collisions between the particles are introduced using a Monte Carlo collision scheme. This code, called ONIX, was used to investigate the plasma properties and the transport of the charged particles close to a typical extraction aperture. Results obtained from this code are presented in this thesis. They include 3D trajectories of negative ions and electrons. The ion and electron current density profiles are shown for different local magnetic field configurations. Results on the production, destruction, and transport of H⁻ in the extraction region are also presented. The production of H⁻ is investigated via three atomic processes: 1) electron dissociative attachment to the vibrationally excited molecules H₂(v) in the volume, 2) interaction of the positive ions H⁺ and H₂⁺ with the aperture wall, and 3) collisions of the neutral gas H, H₂ with the aperture wall. The influence of each process on the total extracted NI current is discussed. The extraction efficiency of H⁻ from the volume is compared with that of H⁻ coming from the wall. Moreover, a parametric study of the H⁻ surface production is presented. Results show the role of sheath behavior in the vicinity of the aperture, where a double-layer structure develops that is responsible for the NI extraction limitations. The two following issues are also analysed: first, the influence of the extraction potential value on the formation of the negative sheath, and second, the strength of the magnetic filter on the total extracted NI and co-extracted electron current. The suppression of the electron beam by the negative ions produced at the plasma grid wall is also discussed. Results are in good agreement

  3. Statistical model based iterative reconstruction in myocardial CT perfusion: exploitation of the low dimensionality of the spatial-temporal image matrix

    Science.gov (United States)

    Li, Yinsheng; Niu, Kai; Chen, Guang-Hong

    2015-03-01

    Time-resolved CT imaging methods play an increasingly important role in clinical practice, particularly, in the diagnosis and treatment of vascular diseases. In a time-resolved CT imaging protocol, it is often necessary to irradiate the patients for an extended period of time. As a result, the cumulative radiation dose in these CT applications is often higher than that of the static CT imaging protocols. Therefore, it is important to develop new means of reducing radiation dose for time-resolved CT imaging. In this paper, we present a novel statistical model based iterative reconstruction method that enables the reconstruction of low noise time-resolved CT images at low radiation exposure levels. Unlike other well known statistical reconstruction methods, this new method primarily exploits the intrinsic low dimensionality of time-resolved CT images to regularize the reconstruction. Numerical simulations were used to validate the proposed method.
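
    The low-dimensionality idea can be illustrated with a truncated SVD of the spatial-temporal (Casorati) matrix; this generic rank-r projection is only a cartoon of the regularization used in the paper:

    import numpy as np

    # Cartoon of the low-dimensionality idea: stack the time frames as
    # columns of a spatial-temporal (Casorati) matrix and keep the top-r
    # singular components. A generic rank-r projection, not the authors'
    # full statistical reconstruction.

    def low_rank_project(frames, r):
        """frames: (n_voxels, n_time) matrix; returns its best rank-r approximation."""
        u, s, vt = np.linalg.svd(frames, full_matrices=False)
        return (u[:, :r] * s[:r]) @ vt[:r, :]

    frames = np.random.rand(1000, 30)        # placeholder dynamic series
    approx = low_rank_project(frames, r=5)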

  4. Required Collaborative Work in Online Courses: A Predictive Modeling Approach

    Science.gov (United States)

    Smith, Marlene A.; Kellogg, Deborah L.

    2015-01-01

    This article describes a predictive model that assesses whether a student will have greater perceived learning in group assignments or in individual work. The model produces correct classifications 87.5% of the time. The research is notable in that it is the first in the education literature to adopt a predictive modeling methodology using data…

  5. Models for predicting compressive strength and water absorption of ...

    African Journals Online (AJOL)

    This work presents a mathematical model for predicting the compressive strength and water absorption of laterite-quarry dust cement block using augmented Scheffe's simplex lattice design. The statistical models developed can predict the mix proportion that will yield the desired property. The models were tested for lack of ...

  6. Transient Calibration of a Variably-Saturated Groundwater Flow Model By Iterative Ensemble Smoothering: Synthetic Case and Application to the Flow Induced During Shaft Excavation and Operation of the Bure Underground Research Laboratory

    Science.gov (United States)

    Lam, D. T.; Kerrou, J.; Benabderrahmane, H.; Perrochet, P.

    2017-12-01

    The calibration of groundwater flow models in transient state can be motivated by the expected improved characterization of the aquifer hydraulic properties, especially when supported by a rich transient dataset. In the prospect of setting up a calibration strategy for a variably-saturated transient groundwater flow model of the area around ANDRA's Bure Underground Research Laboratory, we wish to take advantage of the long hydraulic head and flowrate time series collected near and at the access shafts in order to help inform the model hydraulic parameters. A promising inverse approach for such a high-dimensional nonlinear model, whose applicability has been illustrated more extensively in other scientific fields, is an iterative ensemble smoother algorithm initially developed for a reservoir engineering problem. Furthermore, the ensemble-based stochastic framework allows us to address to some extent the uncertainty of the calibration for a subsequent analysis of a flow-process-dependent prediction. Assimilating all available data in a single step, this method iteratively updates each member of an initial ensemble of stochastic realizations of parameters until an objective function is minimized. However, as is well known for ensemble-based Kalman methods, this correction, computed from approximations of covariance matrices, is most efficient when the ensemble realizations are multi-Gaussian. As shown by the comparison of the updated ensemble means obtained for our simplified synthetic model of 2D vertical flow using either multi-Gaussian or multipoint simulations of parameters, the ensemble smoother fails to preserve the initial connectivity of the facies and the bimodal parameter distribution. Given the geological structures depicted by the multi-layered geological model built for the real case, our goal is to find how to best leverage the performance of the ensemble smoother while using an initial ensemble of conditional multipoint realizations.
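
    For orientation, one iteration of an ensemble-smoother update has the familiar Kalman form, with covariances estimated from the ensemble itself. The sketch below uses a toy forward model g; it is a generic illustration, not the authors' configuration:

    import numpy as np

    # One Kalman-type ensemble-smoother update, with covariances estimated
    # from the ensemble itself. g() and all sizes are toy placeholders; a
    # real application would use the groundwater flow model as g.

    def es_update(M, d_obs, g, sigma_d):
        """M: (n_param, n_ens) parameter ensemble; d_obs: (n_data,) observations."""
        D = np.column_stack([g(M[:, j]) for j in range(M.shape[1])])
        dM = M - M.mean(axis=1, keepdims=True)
        dD = D - D.mean(axis=1, keepdims=True)
        n = M.shape[1] - 1
        C_md, C_dd = dM @ dD.T / n, dD @ dD.T / n
        K = C_md @ np.linalg.inv(C_dd + sigma_d**2 * np.eye(len(d_obs)))
        pert = d_obs[:, None] + sigma_d * np.random.randn(len(d_obs), M.shape[1])
        return M + K @ (pert - D)                 # updated parameter ensemble

    g = lambda m: np.array([m.sum(), m[0]])       # toy forward model
    M = np.random.randn(3, 50)                    # 3 parameters, 50 members
    M1 = es_update(M, np.array([1.0, 0.2]), g, sigma_d=0.1)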

  7. ITER EDA Newsletter. V.4, no.1

    International Nuclear Information System (INIS)

    1995-01-01

    This ITER EDA (Engineering Design Activities) Newsletter issue reports on (i) the seventh ITER Council Meeting held at the Naka Joint Work Site on 14-15 December 1994, (ii) the "Confinement Modelling and Database Expert Group Workshop" held in Seville, Spain, 3-4 October 1994, and (iii) the first meeting of the International Organizing Committee for the Seventh International Fusion Reactor Materials Conference

  8. ITER EDA newsletter. V. 7, no. 11

    International Nuclear Information System (INIS)

    1998-11-01

    This ITER EDA Newsletter contains a report by K. Okuno et al. on the delivery of the outer module of the CS model coil to Naka, a special lecture by H. Yoshikawa, president of the Science Council of Japan, on the future outlook of nuclear fusion, and a report on an ITER display during the 17th IAEA Fusion Energy Conference, held in Yokohama, Japan, from October 19 to 24, 1998

  9. Design iteration in construction projects – Review and directions

    Directory of Open Access Journals (Sweden)

    Purva Mujumdar

    2018-03-01

    Full Text Available The design phase of any construction project involves several designers who exchange information with each other, most often in an unstructured manner, throughout the design phase. When these information exchanges occur in cycles/loops, this is termed design iteration. Iteration is an inherent and unavoidable aspect of any design phase and requires proper planning. To date, very few researchers have explored design iteration ("complexity") in the construction sector. Hence, the objective of this paper was to document and review the complexities of iteration during the design phase of construction projects for efficient design planning. To achieve this objective, an exhaustive literature review on design iteration was done for four sectors – construction, manufacturing, aerospace, and software development. In addition, semi-structured interviews and discussions were conducted with a few design experts to verify the different dimensions of iteration. Finally, a design iteration framework that facilitates successful planning is presented in this study. Keywords: Design iteration, Types of iteration, Causes and impact of iteration, Models of iteration, Execution strategies of iteration

  10. Physics basis of ITER-FEAT

    International Nuclear Information System (INIS)

    Shimada, M.; Campbell, D.J.; Wakatani, M.; Ninomiya, H.; Ivanov, N.V.; Mukhovatov, V.

    2001-01-01

    This paper reviews Physics R and D results obtained since the publication of the ITER Physics Basis document. The heating power required for the LH transition has been re-assessed, including recent results from C-Mod and JT-60U, and it has been found that the predicted power is a factor of two lower than the previous projection. For predicting ITER-FEAT performance, a conservative scaling IPB98(y,2) has been adopted for the energy confinement, producing confinement times ∼20% lower than those derived from the IPB98(y,1) law. While energy confinement degradation at high density remains a serious issue, recent experiments suggest that good confinement is achievable in ITER at n/n_G ∼ 0.85 with high triangularity. The estimated runaway electron energy has been reduced to ∼20 MJ, since recent experiments show that runaway electrons disappear for q_95 ≤ 2. (author)

  11. Regression models for predicting anthropometric measurements of ...

    African Journals Online (AJOL)

    measure anthropometric dimensions to predict difficult-to-measure dimensions required for ergonomic design of school furniture. A total of 143 students aged between 16 and 18 years from eight public secondary schools in Ogbomoso, Nigeria ...

  12. FINITE ELEMENT MODEL FOR PREDICTING RESIDUAL ...

    African Journals Online (AJOL)

    direction (σx) had a maximum value of 375 MPa (tensile) and minimum value of ... These results show that the residual stresses obtained by prediction from the finite element method are in fair agreement with the experimental results.

  13. Probabilistic Modeling and Visualization for Bankruptcy Prediction

    DEFF Research Database (Denmark)

    Antunes, Francisco; Ribeiro, Bernardete; Pereira, Francisco Camara

    2017-01-01

    In accounting and finance domains, bankruptcy prediction is of great utility for all of the economic stakeholders. The challenge of accurate assessment of business failure prediction, especially under scenarios of financial crisis, is known to be complicated. Although there have been many successful … Using real-world bankruptcy data, an in-depth analysis is conducted showing that, in addition to a probabilistic interpretation, the GP can effectively improve the bankruptcy prediction performance with high accuracy when compared to the other approaches. We additionally generate a complete graphical … visualization to improve our understanding of the different attained performances, effectively compiling all the conducted experiments in a meaningful way. We complete our study with an entropy-based analysis that highlights the uncertainty-handling properties provided by the GP, crucial for prediction tasks …

  14. Prediction for Major Adverse Outcomes in Cardiac Surgery: Comparison of Three Prediction Models

    Directory of Open Access Journals (Sweden)

    Cheng-Hung Hsieh

    2007-09-01

    Conclusion: The Parsonnet score performed as well as the logistic regression models in predicting major adverse outcomes. The Parsonnet score appears to be a very suitable model for clinicians to use in risk stratification of cardiac surgery.

  15. ITER tokamak device

    International Nuclear Information System (INIS)

    Doggett, J.; Salpietro, E.; Shatalov, G.

    1991-01-01

    The results of the Conceptual Design Activities for the International Thermonuclear Experimental Reactor (ITER) are summarized. These activities, carried out between April 1988 and December 1990, produced a consistent set of technical characteristics and preliminary plans for co-ordinated research and development support of ITER; and a conceptual design, a description of design requirements and a preliminary construction schedule and cost estimate. After a description of the design basis, an overview is given of the tokamak device, its auxiliary systems, facility and maintenance. The interrelation and integration of the various subsystems that form the ITER tokamak concept are discussed. The 16 ITER equatorial port allocations, used for nuclear testing, diagnostics, fuelling, maintenance, and heating and current drive, are given, as well as a layout of the reactor building. Finally, brief descriptions are given of the major ITER sub-systems, i.e., (i) magnet systems (toroidal and poloidal field coils and cryogenic systems), (ii) containment structures (vacuum and cryostat vessels, machine gravity supports, attaching locks, passive loops and active coils), (iii) first wall, (iv) divertor plate (design and materials, performance and lifetime, among others), (v) blanket/shield system, (vi) maintenance equipment, (vii) current drive and heating, (viii) fuel cycle system, and (ix) diagnostics. 11 refs, figs and tabs

  16. Twelfth ITER negotiation meeting

    International Nuclear Information System (INIS)

    2006-01-01

    Delegations from China, European Union, Japan, the Republic of Korea, the Russian Federation and the United States of America gathered on Jeju Island, Korea, on 6 December 2005, to complete their negotiations on an Agreement on the joint implementation of the ITER international fusion energy project. At the start of the Meeting, the Delegations unanimously and enthusiastically welcomed India as a full Party to the ITER venture. A Delegation from India then joined the Meeting and participated fully in the discussions that followed. The seven ITER Delegations also welcomed to the Meeting the newly designated Nominee Director-General for the prospective ITER Organization, Ambassador Kaname Ikeda, who is to take up his duties as leader of the project. Based on the results of intensive working level meetings held throughout the previous week, the Delegations have succeeded in clearing the remaining key issues such as decision-making, intellectual property and management within the prospective ITER Organization and adjustments to the sharing of resources as a result of India's participation, including in particular cost sharing and in-kind contributions, leaving only a few legal points requiring resolution during the final lawyers' meeting to review the text for coherence and internal consistency

  17. Protein (multi-)location prediction: utilizing interdependencies via a generative model

    Science.gov (United States)

    Shatkay, Hagit

    2015-01-01

    Motivation: Proteins are responsible for a multitude of vital tasks in all living organisms. Given that a protein’s function and role are strongly related to its subcellular location, protein location prediction is an important research area. While proteins move from one location to another and can localize to multiple locations, most existing location prediction systems assign only a single location per protein. A few recent systems attempt to predict multiple locations for proteins, however, their performance leaves much room for improvement. Moreover, such systems do not capture dependencies among locations and usually consider locations as independent. We hypothesize that a multi-location predictor that captures location inter-dependencies can improve location predictions for proteins. Results: We introduce a probabilistic generative model for protein localization, and develop a system based on it—which we call MDLoc—that utilizes inter-dependencies among locations to predict multiple locations for proteins. The model captures location inter-dependencies using Bayesian networks and represents dependency between features and locations using a mixture model. We use iterative processes for learning model parameters and for estimating protein locations. We evaluate our classifier MDLoc, on a dataset of single- and multi-localized proteins derived from the DBMLoc dataset, which is the most comprehensive protein multi-localization dataset currently available. Our results, obtained by using MDLoc, significantly improve upon results obtained by an initial simpler classifier, as well as on results reported by other top systems. Availability and implementation: MDLoc is available at: http://www.eecis.udel.edu/∼compbio/mdloc. Contact: shatkay@udel.edu. PMID:26072505

  18. From Predictive Models to Instructional Policies

    Science.gov (United States)

    Rollinson, Joseph; Brunskill, Emma

    2015-01-01

    At their core, Intelligent Tutoring Systems consist of a student model and a policy. The student model captures the state of the student and the policy uses the student model to individualize instruction. Policies require different properties from the student model. For example, a mastery threshold policy requires the student model to have a way…

  19. Comparisons of Faulting-Based Pavement Performance Prediction Models

    Directory of Open Access Journals (Sweden)

    Weina Wang

    2017-01-01

    Full Text Available Faulting prediction is the core of concrete pavement maintenance and design. Highway agencies are often faced with low prediction accuracy, which leads to costly maintenance. Although many researchers have developed performance prediction models, the accuracy of prediction has remained a challenge. This paper reviews performance prediction models and JPCP faulting models that have been used in past research. Then three models, a multivariate nonlinear regression (MNLR) model, an artificial neural network (ANN) model, and a Markov chain (MC) model, are tested and compared using a set of actual pavement survey data taken on interstate highways with varying design features, traffic, and climate data. It is found that the MNLR model needs further recalibration, while the ANN model needs more data for training the network. The MC model seems a good tool for pavement performance prediction when data are limited, but it is based on visual inspections and not explicitly related to quantitative physical parameters. This paper then suggests that the way forward in developing performance prediction models is to combine the strengths of the different models to obtain better accuracy.

  20. Is the Bifactor Model a Better Model or Is It Just Better at Modeling Implausible Responses? Application of Iteratively Reweighted Least Squares to the Rosenberg Self-Esteem Scale.

    Science.gov (United States)

    Reise, Steven P; Kim, Dale S; Mansolf, Maxwell; Widaman, Keith F

    2016-01-01

    Although the structure of the Rosenberg Self-Esteem Scale (RSES) has been exhaustively evaluated, questions regarding dimensionality and direction of wording effects continue to be debated. To shed new light on these issues, we ask (a) for what percentage of individuals is a unidimensional model adequate, (b) what additional percentage of individuals can be modeled with multidimensional specifications, and (c) what percentage of individuals respond so inconsistently that they cannot be well modeled? To estimate these percentages, we applied iteratively reweighted least squares (IRLS) to examine the structure of the RSES in a large, publicly available data set. A distance measure, d_s, reflecting a distance between a response pattern and an estimated model, was used for case weighting. We found that a bifactor model provided the best overall model fit, with one general factor and two wording-related group factors. However, on the basis of d_r values, a distance measure based on individual residuals, we concluded that approximately 86% of cases were adequately modeled through a unidimensional structure, and only an additional 3% required a bifactor model. Roughly 11% of cases were judged as "unmodelable" due to their significant residuals in all models considered. Finally, analysis of d_s revealed that some, but not all, of the superior fit of the bifactor model is owed to that model's ability to better accommodate implausible and possibly invalid response patterns, and not necessarily because it better accounts for the effects of direction of wording.

  1. Prediction Accuracy Optimization of Chaotic Perturbation in the Analysis Model of Network-Oriented Consumption

    Directory of Open Access Journals (Sweden)

    Dakai Li

    2014-10-01

    Full Text Available To address the slow convergence and weak late-stage learning ability of network-oriented consumption prediction models based on neural networks, this paper proposes a network analysis neural model based on a chaotic-disturbance-optimized particle swarm. First, the particle swarm initialization is improved with a chaotic disturbance strategy in order to constrain the initial positions and velocities of the particles. Then, each individual in the swarm is perturbed with chaotic disturbance variables, so that particles trapped in local optima can jump out of them. Next, the PSO inertia weight is optimized with an adaptive adjustment strategy based on each particle's fitness value. Finally, the improved chaotic-disturbance PSO is combined with the neural network algorithm to construct the network-oriented consumption analysis model. Simulation results show that the proposed model improves both prediction accuracy and computational speed.
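
    One common reading of chaotic initialization is to iterate a logistic map and rescale its orbit to the search bounds, so that initial positions and velocities are well spread; a sketch under that assumption (the map parameters are ours, not the paper's):

    import numpy as np

    # Chaotic initialization for PSO via a logistic map. Map parameters
    # and the velocity scaling are assumptions for illustration.

    def logistic_sequence(n, x0=0.567, mu=4.0):
        xs, x = [], x0
        for _ in range(n):
            x = mu * x * (1.0 - x)            # chaotic regime for mu = 4
            xs.append(x)
        return np.array(xs)                   # values in (0, 1)

    def chaotic_init(n_particles, dim, lo, hi):
        z = logistic_sequence(n_particles * dim).reshape(n_particles, dim)
        pos = lo + z * (hi - lo)
        z2 = logistic_sequence(n_particles * dim, x0=0.321).reshape(n_particles, dim)
        vel = (z2 - 0.5) * 0.1 * (hi - lo)    # small, well-spread velocities
        return pos, vel

    pos, vel = chaotic_init(n_particles=30, dim=5, lo=-1.0, hi=1.0)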

  2. Reduction of metal artifacts due to dental hardware in computed tomography angiography: assessment of the utility of model-based iterative reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Kuya, Keita; Shinohara, Yuki; Ogawa, Toshihide [Tottori University, Division of Radiology, Department of Pathophysiological and Therapeutic Science, Faculty of Medicine, Yonago (Japan); Kato, Ayumi [Tottori Municipal Hospital, Department of Radiology, Yonago (Japan); Sakamoto, Makoto; Kurosaki, Masamichi [Tottori University, Division of Neurosurgery, Department of Neurological Sciences, Faculty of Medicine, Yonago (Japan)

    2017-03-15

    The aim of this study is to assess the value of adaptive statistical iterative reconstruction (ASIR) and model-based iterative reconstruction (MBIR) for reduction of metal artifacts due to dental hardware in carotid CT angiography (CTA). Thirty-seven patients with dental hardware who underwent carotid CTA were included. CTA was performed with a GE Discovery CT750 HD scanner and reconstructed with filtered back projection (FBP), ASIR, and MBIR. We measured the standard deviation at the cervical segment of the internal carotid artery that was affected most by dental metal artifacts (SD1) and the standard deviation at the common carotid artery that was not affected by the artifact (SD2). We calculated the artifact index (AI) as follows: AI = [(SD1)² − (SD2)²]^(1/2), and compared each AI for FBP, ASIR, and MBIR. Visual assessment of the internal carotid artery was also performed by two neuroradiologists using a five-point scale for each axial and reconstructed sagittal image. The inter-observer agreement was analyzed using weighted kappa analysis. MBIR significantly improved AI compared with FBP and ASIR (p < 0.001, each). We found no significant difference in AI between FBP and ASIR (p = 0.502). The visual score of MBIR was significantly better than those of FBP and ASIR (p < 0.001, each), whereas the scores of ASIR were the same as those of FBP. Kappa values indicated good inter-observer agreements in all reconstructed images (0.747-0.778). MBIR resulted in a significant reduction in artifact from dental hardware in carotid CTA. (orig.)
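
    The artifact index defined above is straightforward to compute from two ROI standard deviations; a short sketch (the ROI arrays are placeholders for pixel values sampled from the two regions):

    import numpy as np

    # Artifact index as defined above: AI = [(SD1)² − (SD2)²]^(1/2).

    def artifact_index(roi_affected, roi_reference):
        sd1, sd2 = np.std(roi_affected), np.std(roi_reference)
        return float(np.sqrt(max(sd1**2 - sd2**2, 0.0)))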

  3. Knowledge-based iterative model reconstruction technique in computed tomography of lumbar spine lowers radiation dose and improves tissue differentiation for patients with lower back pain

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Cheng Hui [Department of Medical Imaging, Pojen General Hospital, Taipei, Taiwan (China); School of Medicine, National Yang-Ming University, Taipei, Taiwan (China); Wu, Tung-Hsin [Department of Biomedical Imaging and Radiological Sciences, National Yang-Ming University, Taipei, Taiwan (China); Lin, Chung-Jung, E-mail: bcjlin@me.com [School of Medicine, National Yang-Ming University, Taipei, Taiwan (China); Department of Radiology, Taipei Veterans General Hospital, Taipei, Taiwan (China); Chiou, Yi-You; Chen, Ying-Chou; Sheu, Ming-Huei; Guo, Wan-Yuo; Chiu, Chen Fen [School of Medicine, National Yang-Ming University, Taipei, Taiwan (China); Department of Radiology, Taipei Veterans General Hospital, Taipei, Taiwan (China)

    2016-10-15

    Highlights: • Knowledge-based IMR improves tissue differentiation in CT of the L-spine better than hybrid IR (iDose⁴). • Higher-strength IMR improves image quality of the IVD and IVF in spinal stenosis. • IMR provides diagnostic lower-dose CT of the L-spine. - Abstract: Purpose: To evaluate the image quality and diagnostic confidence of reduced-dose computed tomography (CT) of the lumbar spine (L-spine) reconstructed with knowledge-based iterative model reconstruction (IMR). Materials and methods: Prospectively, group A consisted of 55 patients imaged with standard acquisition reconstructed with filtered back-projection. Group B consisted of 58 patients imaged with half tube current, reconstructed with hybrid iterative reconstruction (iDose⁴) in Group B1 and knowledge-based IMR in Group B2. The signal-to-noise ratio (SNR) of different regions, the contrast-to-noise ratio between the intervertebral disc (IVD) and dural sac (D-D CNR), and subjective image quality of different regions were compared. Higher-strength IMR was also compared in spinal stenosis cases. Results: The SNR of the psoas muscle and the D-D CNR were significantly higher in the IMR group. Except for the facet joint, the subjective image quality of the other regions, including the IVD, intervertebral foramen (IVF), dural sac, peridural fat, ligamentum flavum, and overall diagnostic acceptability, was best for the IMR group. Diagnostic confidence for narrowing of the IVF and IVD was good (kappa = 0.58–0.85). Higher-strength IMR delineated the IVD better in spinal stenosis cases. Conclusion: Lower-dose CT of the L-spine reconstructed with IMR demonstrates better tissue differentiation than iDose⁴ and standard-dose CT with FBP.

  4. Batch-to-batch model improvement for cooling crystallization

    OpenAIRE

    Forgione , Marco; Birpoutsoukis , Georgios; Bombois , Xavier; Mesbah , Ali; Daudey , Peter; Van Den Hof , Paul

    2015-01-01

    Two batch-to-batch model update strategies for model-based control of batch cooling crystallization are presented. In Iterative Learning Control, a nominal process model is adjusted by a non-parametric, additive correction term which depends on the difference between the measured output and the model prediction in the previous batch. In Iterative Identification Control, the uncertain model parameters are iteratively estimated using the measured batch data. Due to the d...

  5. A model to predict the beginning of the pollen season

    DEFF Research Database (Denmark)

    Toldam-Andersen, Torben Bo

    1991-01-01

    In order to predict the beginning of the pollen season, a model comprising the Utah phenoclimatography Chill Unit (CU) and ASYMCUR-Growing Degree Hour (GDH) submodels was used to predict the first bloom in Alnus, Ulmus and Betula. The model relates environmental temperatures to rest completion and bud development. As the phenologic parameter, 14 years of pollen counts were used. The observed dates for the beginning of the pollen seasons were defined from the pollen counts and compared with the model prediction. The CU and GDH submodels were used as: 1. A fixed day model, using only the GDH model... for fruit trees are generally applicable, and give a reasonable description of the growth processes of other trees. This type of model can therefore be of value in predicting the start of the pollen season. The predicted dates were generally within 3-5 days of the observed. Finally the possibility of frost...

  6. Risk prediction model: Statistical and artificial neural network approach

    Science.gov (United States)

    Paiman, Nuur Azreen; Hariri, Azian; Masood, Ibrahim

    2017-04-01

    Prediction models are increasingly gaining popularity and have been used in numerous areas of study to complement and support clinical reasoning and decision making. The adoption of such models assists physicians' decision making and individuals' behavior, and consequently improves individual outcomes and the cost-effectiveness of care. The objective of this paper is to review articles related to risk prediction models in order to understand the suitable approaches to, and the development and validation process of, risk prediction models. A qualitative review of the aims, methods and significant main outcomes of nineteen published articles that developed risk prediction models in numerous fields was done. This paper also reviews how researchers develop and validate risk prediction models based on statistical and artificial neural network approaches. From the review, some methodological recommendations for developing and validating prediction models are highlighted. According to the studies reviewed, artificial neural network approaches to developing prediction models were more accurate than statistical approaches. However, only limited published literature currently discusses which approach is more accurate for risk prediction model development.

  7. On the safety of ITER accelerators.

    Science.gov (United States)

    Li, Ge

    2013-01-01

    Three 1 MV/40 A accelerators in the heating neutral beams (HNB) are on track to be implemented in the International Thermonuclear Experimental Reactor (ITER). ITER may produce 500 MWt of power by 2026 and may serve as a green energy roadmap for the world. The accelerators will generate −1 MV, 1-h long-pulse ion beams to be neutralised for plasma heating. Because vacuum sparking occurs frequently in the accelerators, snubbers are used to limit the fault arc current and improve ITER safety. However, recent analyses of the reference design have raised concerns. A general nonlinear transformer theory is developed for the snubber to unify the former snubbers' different design models with a clear mechanism. Satisfactory agreement between theory and tests indicates that scaling up to a 1 MV voltage may be possible. These results confirm the nonlinear process behind the transformer theory and map out a reliable snubber design for a safer ITER.

  8. Evaluation of the US Army fallout prediction model

    International Nuclear Information System (INIS)

    Pernick, A.; Levanon, I.

    1987-01-01

    The US Army fallout prediction method was evaluated against an advanced fallout prediction model--SIMFIC (Simplified Fallout Interpretive Code). The danger zone areas of the US Army method were found to be significantly greater (up to a factor of 8) than the areas of corresponding radiation hazard as predicted by SIMFIC. Nonetheless, because the US Army's method predicts danger zone lengths that are commonly shorter than the corresponding hot line distances of SIMFIC, the US Army's method is not reliably conservative

  9. On the use of iterative re-weighting least-squares and outlier detection for empirically modelling rates of vertical displacement

    Science.gov (United States)

    Rangelova, E.; Fotopoulos, G.; Sideris, M. G.

    2009-06-01

    The proper identification and removal of outliers in the combination of rates of vertical displacements derived from GPS, tide gauges/satellite altimetry, and GRACE observations is presented. Outlier detection is a necessary pre-screening procedure in order to ensure reliable estimates of stochastic properties of the observations in the combined least-squares adjustment (via rescaling of covariance matrices) and to ensure that the final vertical motion model is not corrupted and/or distorted by erroneous data. Results from this study indicate that typical data snooping methods are inadequate in dealing with these heterogeneous data sets and their stochastic properties. Using simulated vertical displacement rates, it is demonstrated that a large variety of outliers (random scattered and adjacent, as well as jointly influential) can be dealt with if an iterative re-weighting least-squares adjustment is combined with a robust median estimator. Moreover, robust estimators are efficient in areas weakly constrained by the data, where even high quality observations may appear to be erroneous if their estimates are largely influenced by outliers. Four combined models for the vertical motion in the region of the Great Lakes are presented. The computed vertical displacements vary between −2 mm/year (subsidence) along the southern shores and 3 mm/year (uplift) along the northern shores. The derived models provide reliable empirical constraints and error bounds for postglacial rebound models in the region.
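
    A generic IRLS fit with Huber weights and a MAD scale estimate illustrates the mechanism: each pass refits with weights that shrink the influence of large residuals, so a gross outlier barely moves the solution. This is a simplified analogue of the adjustment described above, not the authors' exact scheme:

    import numpy as np

    # Generic IRLS line fit with Huber weights and a robust (MAD) scale.

    def irls(A, y, n_iter=20, k=1.345):
        w = np.ones(len(y))
        for _ in range(n_iter):
            W = np.diag(w)
            beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
            r = y - A @ beta
            s = 1.4826 * np.median(np.abs(r - np.median(r)))  # robust scale
            u = np.abs(r) / max(s, 1e-12)
            w = np.where(u <= k, 1.0, k / u)                  # Huber weights
        return beta

    t = np.arange(20.0)
    y = 1.0 + 0.5 * t
    y[3] += 30.0                                   # one gross outlier
    A = np.column_stack([np.ones_like(t), t])
    print(irls(A, y))                              # stays near [1.0, 0.5]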

  10. Comparative Evaluation of Some Crop Yield Prediction Models ...

    African Journals Online (AJOL)

    A computer program was adopted from the work of Hill et al. (1982) to calibrate and test three of the existing yield prediction models using tropical cowpea yield–weather data. The models tested were the Hanks Model (first and second versions), the Stewart Model (first and second versions) and the Hall–Butcher Model. Three sets of ...

  11. Comparative Evaluation of Some Crop Yield Prediction Models ...

    African Journals Online (AJOL)

    (1982) to calibrate and test three of the existing yield prediction models using tropical cowpea yield–weather data. The models tested were the Hanks Model (first and second versions), the Stewart Model (first and second versions) and the Hall–Butcher Model. Three sets of cowpea yield-water use and weather data were collected.

  12. Earthly sun called ITER

    International Nuclear Information System (INIS)

    Pozdeyev, Mikhail

    2002-01-01

    Full text: Participating in the film are Academicians Velikhov and Glukhikh, Mr. Filatof, ITER Director from Russia, and Mr. Sannikov from the Kurchatov Institute. The film tells about the starting point of the project (Mr. Lavrentyev), the pioneers of the project (Academicians Tamm, Sakharov, Artsimovich) and about where the project stands now. Participating in ITER now are the US, Russia, Japan and the European Union. There are two associated members as well - Kazakhstan and Canada. By now the engineering design phase has been finished. Computer animation used in the video gives an idea of how the first thermonuclear reactor based on the famous Russian tokamak concept works. (author)

  13. Iterated multidimensional wave conversion

    International Nuclear Information System (INIS)

    Brizard, A. J.; Tracy, E. R.; Johnston, D.; Kaufman, A. N.; Richardson, A. S.; Zobin, N.

    2011-01-01

    Mode conversion can occur repeatedly in a two-dimensional cavity (e.g., the poloidal cross section of an axisymmetric tokamak). We report on two novel concepts that allow for a complete and global visualization of the ray evolution under iterated conversions. First, iterated conversion is discussed in terms of ray-induced maps from the two-dimensional conversion surface to itself (which can be visualized in terms of three-dimensional rooms). Second, the two-dimensional conversion surface is shown to possess a symplectic structure derived from Dirac constraints associated with the two dispersion surfaces of the interacting waves.

  14. Prediction of speech intelligibility based on an auditory preprocessing model

    DEFF Research Database (Denmark)

    Christiansen, Claus Forup Corlin; Pedersen, Michael Syskind; Dau, Torsten

    2010-01-01

    Classical speech intelligibility models, such as the speech transmission index (STI) and the speech intelligibility index (SII), are based on calculations on the physical acoustic signals. The present study predicts speech intelligibility by combining a psychoacoustically validated model of auditory preprocessing...

  15. Modelling microbial interactions and food structure in predictive microbiology

    NARCIS (Netherlands)

    Malakar, P.K.

    2002-01-01

    Keywords: modelling, dynamic models, microbial interactions, diffusion, microgradients, colony growth, predictive microbiology.

    Growth response of microorganisms in foods is a complex process. Innovations in food production and preservation techniques have resulted in adoption of

  16. Ocean wave prediction using numerical and neural network models

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Prabaharan, N.

    This paper presents an overview of the development of the numerical wave prediction models and recently used neural networks for ocean wave hindcasting and forecasting. The numerical wave models express the physical concepts of the phenomena...

  17. A Prediction Model of the Capillary Pressure J-Function.

    Directory of Open Access Journals (Sweden)

    W S Xu

    Full Text Available The capillary pressure J-function is a dimensionless measure of the capillary pressure of a fluid in a porous medium. The function was derived based on a capillary bundle model. However, the dependence of the J-function on the saturation Sw is not well understood. A prediction model is presented based on a capillary pressure model, and the resulting J-function prediction model is a power function rather than an exponential or polynomial function. Relative permeability is calculated with the J-function prediction model, resulting in an easier calculation and results that are more representative.
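
    A power-law J-function can be fitted by ordinary least squares in log-log space; the data points below are invented purely for illustration:

    import numpy as np

    # Fitting a power-law J-function, J(Sw) = a * Sw**b, by linear least
    # squares in log-log space. The data points are invented placeholders.
    sw = np.array([0.2, 0.3, 0.4, 0.6, 0.8])
    j = np.array([1.90, 1.10, 0.75, 0.42, 0.28])

    b, log_a = np.polyfit(np.log(sw), np.log(j), 1)
    a = np.exp(log_a)
    print(f"J(Sw) = {a:.3f} * Sw**({b:.3f})")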

  18. Statistical model based gender prediction for targeted NGS clinical panels

    Directory of Open Access Journals (Sweden)

    Palani Kannan Kandavel

    2017-12-01

    A reference test dataset was used to test the model. The sensitivity of gender prediction is increased compared with the current approach based on genotype composition in ChrX. In addition, the prediction score given by the model can be used to evaluate the quality of a clinical dataset: a higher prediction score towards the respective gender indicates higher-quality sequencing data.

  19. comparative analysis of two mathematical models for prediction

    African Journals Online (AJOL)

    Abstract. A mathematical modeling for prediction of compressive strength of sandcrete blocks was performed using statistical analysis for the sandcrete block data obtained from experimental work done in this study. The models used are Scheffe's and Osadebe's optimization theories to predict the compressive strength of ...

  20. Comparison of predictive models for the early diagnosis of diabetes

    NARCIS (Netherlands)

    M. Jahani (Meysam); M. Mahdavi (Mahdi)

    2016-01-01

    Objectives: This study develops neural network models to improve the prediction of diabetes using clinical and lifestyle characteristics. Prediction models were developed using a combination of approaches and concepts. Methods: We used memetic algorithms to update weights and to improve

  1. Testing and analysis of internal hardwood log defect prediction models

    Science.gov (United States)

    R. Edward. Thomas

    2011-01-01

    The severity and location of internal defects determine the quality and value of lumber sawn from hardwood logs. Models have been developed to predict the size and position of internal defects based on external defect indicator measurements. These models were shown to predict approximately 80% of all internal knots based on external knot indicators. However, the size...

  2. Hidden Markov Model for quantitative prediction of snowfall

    Indian Academy of Sciences (India)

    A Hidden Markov Model (HMM) has been developed for prediction of quantitative snowfall in Pir-Panjal and Great Himalayan mountain ranges of Indian Himalaya. The model predicts snowfall for two days in advance using daily recorded nine meteorological variables of past 20 winters from 1992–2012. There are six ...
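
    For intuition, the sketch below runs a generic two-state HMM forward filter and produces one-step-ahead state probabilities; the matrices are invented and far simpler than the article's nine-variable model:

    import numpy as np

    # Generic two-state HMM forward filter with one-step-ahead prediction.
    A = np.array([[0.8, 0.2],          # transitions: [no-snow, snow]
                  [0.4, 0.6]])
    B = np.array([[0.9, 0.1],          # emissions: P(observation | state)
                  [0.3, 0.7]])
    pi = np.array([0.7, 0.3])          # initial state probabilities

    def forward_predict(obs):
        alpha = pi * B[:, obs[0]]
        alpha /= alpha.sum()
        for o in obs[1:]:
            alpha = (alpha @ A) * B[:, o]
            alpha /= alpha.sum()
        return alpha @ A               # P(state tomorrow | observations)

    print(forward_predict([0, 1, 1]))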

  3. Bayesian variable order Markov models: Towards Bayesian predictive state representations

    NARCIS (Netherlands)

    Dimitrakakis, C.

    2009-01-01

    We present a Bayesian variable order Markov model that shares many similarities with predictive state representations. The resulting models are compact and much easier to specify and learn than classical predictive state representations. Moreover, we show that they significantly outperform a more

  4. Demonstrating the improvement of predictive maturity of a computational model

    Energy Technology Data Exchange (ETDEWEB)

    Hemez, Francois M [Los Alamos National Laboratory; Unal, Cetin [Los Alamos National Laboratory; Atamturktur, Huriye S [CLEMSON UNIV.

    2010-01-01

    We demonstrate an improvement of predictive capability brought to a non-linear material model using a combination of test data, sensitivity analysis, uncertainty quantification, and calibration. A model that captures increasingly complicated phenomena, such as plasticity, temperature and strain rate effects, is analyzed. Predictive maturity is defined, here, as the accuracy of the model to predict multiple Hopkinson bar experiments. A statistical discrepancy quantifies the systematic disagreement (bias) between measurements and predictions. Our hypothesis is that improving the predictive capability of a model should translate into better agreement between measurements and predictions. This agreement, in turn, should lead to a smaller discrepancy. We have recently proposed to use discrepancy and coverage, that is, the extent to which the physical experiments used for calibration populate the regime of applicability of the model, as the basis for defining a Predictive Maturity Index (PMI). It was shown that predictive maturity could be improved when additional physical tests are made available to increase coverage of the regime of applicability. This contribution illustrates how the PMI changes as 'better' physics are implemented in the model. The application is the non-linear Preston-Tonks-Wallace (PTW) strength model applied to Beryllium metal. We demonstrate that our framework tracks the evolution of maturity of the PTW model. Robustness of the PMI with respect to the selection of coefficients needed in its definition is also studied.

  5. Refining the Committee Approach and Uncertainty Prediction in Hydrological Modelling

    NARCIS (Netherlands)

    Kayastha, N.

    2014-01-01

    Due to the complexity of hydrological systems a single model may be unable to capture the full range of a catchment response and accurately predict the streamflows. The multi-modelling approach opens up possibilities for handling such difficulties and allows improving the predictive capability of hydrological models.

  7. Wind turbine control and model predictive control for uncertain systems

    DEFF Research Database (Denmark)

    Thomsen, Sven Creutz

    as disturbance models for controller design. The theoretical study deals with Model Predictive Control (MPC). MPC is an optimal control method which is characterized by the use of a receding prediction horizon. MPC has risen in popularity due to its inherent ability to systematically account for time...

  9. Model predictive control of a 3-DOF helicopter system using ...

    African Journals Online (AJOL)

    ... by simulation, and its performance is compared with that achieved by linear model predictive control (LMPC). Keywords: nonlinear systems, helicopter dynamics, MIMO systems, model predictive control, successive linearization. International Journal of Engineering, Science and Technology, Vol. 2, No. 10, 2010, pp. 9-19 ...

  10. Models for predicting fuel consumption in sagebrush-dominated ecosystems

    Science.gov (United States)

    Clinton S. Wright

    2013-01-01

    Fuel consumption predictions are necessary to accurately estimate or model fire effects, including pollutant emissions during wildland fires. Fuel and environmental measurements on a series of operational prescribed fires were used to develop empirical models for predicting fuel consumption in big sagebrush (Artemisia tridentata Nutt.) ecosystems....

  11. Comparative Analysis of Two Mathematical Models for Prediction of ...

    African Journals Online (AJOL)

    A mathematical modeling for prediction of compressive strength of sandcrete blocks was performed using statistical analysis for the sandcrete block data obtained from experimental work done in this study. The models used are Scheffe's and Osadebe's optimization theories to predict the compressive strength of sandcrete ...

  12. Status of reliability in determining SDDR for manual maintenance activities in ITER: Quality assessment of relevant activation cross sections involved

    Energy Technology Data Exchange (ETDEWEB)

    Garcia, R., E-mail: rgarciam@ind.uned.es [UNED, Power Engineering Department, C/Juan del Rosal 12, 28040 Madrid (Spain); Garcia, M. [UNED, Power Engineering Department, C/Juan del Rosal 12, 28040 Madrid (Spain); Pampin, R. [F4E, Torres Diagonal Litoral B3, Barcelona (Spain); Sanz, J. [UNED, Power Engineering Department, C/Juan del Rosal 12, 28040 Madrid (Spain)

    2016-11-15

    Highlights: • Feasibility of manual maintenance activities in the ITER port cell and port interspace. • Activation of relevant materials and components placed in the current ITER model. • Dominant radionuclides and pathways for the shutdown dose rate in ITER. • Quality analysis of the typically used EAF and TENDL activation libraries is performed. • EAF performance is found to be trustworthy, with slight improvements recommended. - Abstract: This paper assesses the quality of the EAF-2007 and EAF-2010 activation cross sections for reactions relevant to the determination of the Shutdown Dose Rate (SDDR) in the Port Cell (PC) and Port Interspace (PI) areas of ITER. For each of the relevant ITER materials, the dominant radionuclides responsible for the SDDR and their production pathways are listed. This information comes from a review of recent reports and papers on the SDDR in ITER, together with our own calculations. A total of 26 relevant pathways are found. The quality of the cross sections for these pathways is assessed following the EAF validation procedure, and for those found not to be validated, the latest TENDL library versions have been investigated to check for possible improvements over EAF. The use of the EAF libraries is found to be trustworthy and is recommended for the prediction of the SDDR in the ITER PC and PI. However, three cross-section reactions are considered for further improvement: Co59(n,2n)Co58, Cu63(n,g)Cu64 and Cr50(n,g)Cr51.
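
    As background for how such pathways drive the SDDR (a generic sketch, not the paper's calculation): the induced activity of a single activation product follows the standard saturation-and-decay expression A = N·σ·φ·(1 − e^(−λ·t_irr))·e^(−λ·t_cool). The flux, cross section, and irradiation history below are placeholder values, not ITER data; Co59(n,2n)Co58 is used only because it is one of the flagged pathways.

        import numpy as np

        def induced_activity(n_atoms, sigma_cm2, flux, half_life_s, t_irr, t_cool):
            """Activity [Bq] after irradiation for t_irr and cooling for t_cool:
            A = N * sigma * phi * (1 - exp(-lambda*t_irr)) * exp(-lambda*t_cool)."""
            lam = np.log(2.0) / half_life_s
            return n_atoms * sigma_cm2 * flux * (1 - np.exp(-lam * t_irr)) * np.exp(-lam * t_cool)

        # 1 mol of Co-59, hypothetical 14 MeV cross section of 0.7 barn, flux
        # 1e10 n/cm^2/s, one year of irradiation, 12 days of cooling
        # (Co-58 half-life: about 70.86 days).
        A = induced_activity(n_atoms=6.022e23,
                             sigma_cm2=0.7e-24,
                             flux=1e10,
                             half_life_s=70.86 * 86400,
                             t_irr=3.15e7,
                             t_cool=12 * 86400)
        print(f"Co-58 activity: {A:.3e} Bq")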

  13. A mathematical model for predicting earthquake occurrence ...

    African Journals Online (AJOL)

    We consider the continental crust under damage. We use the observed microseism results from many seismic stations around the world, established to study the time series of the activities of the continental crust, with a view to predicting the possible time of occurrence of an earthquake. We consider microseism time series ...

  14. Model for predicting the injury severity score.

    Science.gov (United States)

    Hagiwara, Shuichi; Oshima, Kiyohiro; Murata, Masato; Kaneko, Minoru; Aoki, Makoto; Kanbe, Masahiko; Nakamura, Takuro; Ohyama, Yoshio; Tamura, Jun'ichi

    2015-07-01

    The aim of this study was to determine a formula that predicts the injury severity score from parameters obtained in the emergency department on arrival. We reviewed the medical records of trauma patients who were transferred to the emergency department of Gunma University Hospital between January 2010 and December 2010. The injury severity score, age, mean blood pressure, heart rate, Glasgow coma scale, hemoglobin, hematocrit, red blood cell count, platelet count, fibrinogen, international normalized ratio of prothrombin time, activated partial thromboplastin time, and fibrin degradation products were examined in those patients on arrival. To determine the formula that predicts the injury severity score, multiple linear regression analysis was carried out. The injury severity score was set as the dependent variable, and the other parameters were set as candidate explanatory variables. IBM SPSS Statistics 20 was used for the statistical analysis. Statistical significance was set at P < 0.05. The Durbin-Watson ratio was 2.200. A formula for predicting the injury severity score in trauma patients was developed with ordinary parameters such as fibrin degradation products and mean blood pressure. This formula is useful because we can predict the injury severity score easily in the emergency department.
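
    A minimal sketch of the modelling step described above, on synthetic data (the authors' actual coefficients are not reproduced): ordinary multiple linear regression with the injury severity score as the dependent variable and two of the reported predictors as explanatory variables.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)
        n = 100
        fdp = rng.gamma(2.0, 20.0, n)         # fibrin degradation products
        mbp = rng.normal(90.0, 15.0, n)       # mean blood pressure
        iss = 5 + 0.2 * fdp - 0.1 * mbp + rng.normal(0, 4, n)   # synthetic ISS

        X = np.column_stack([fdp, mbp])
        model = LinearRegression().fit(X, iss)
        print("coefficients:", model.coef_, "intercept:", model.intercept_)
        print("predicted ISS for FDP=80, MBP=70:", model.predict([[80.0, 70.0]])[0])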

  15. Econometric models for predicting confusion crop ratios

    Science.gov (United States)

    Umberger, D. E.; Proctor, M. H.; Clark, J. E.; Eisgruber, L. M.; Braschler, C. B. (Principal Investigator)

    1979-01-01

    Results for both the United States and Canada show that econometric models can provide estimates of confusion crop ratios that are more accurate than historical ratios. Whether these models can support the LACIE 90/90 accuracy criterion is uncertain. In the United States, experimenting with additional model formulations could provide improved models in some CRDs, particularly in winter wheat. Improved models may also be possible for the Canadian CDs. The more aggregated province/state models outperformed the individual CD/CRD models. This result was expected, partly because acreage statistics are based on sampling procedures, and the sampling precision declines from the province/state to the CD/CRD level. Declining sampling precision and the need to substitute province/state data for the CD/CRD data introduced measurement error into the CD/CRD models.

  16. Fixed recurrence and slip models better predict earthquake behavior than the time- and slip-predictable models 1: repeating earthquakes

    Science.gov (United States)

    Rubinstein, Justin L.; Ellsworth, William L.; Chen, Kate Huihsuan; Uchida, Naoki

    2012-01-01

    The behavior of individual events in repeating earthquake sequences in California, Taiwan and Japan is better predicted by a model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. Given that repeating earthquakes are highly regular in both inter-event time and seismic moment, the time- and slip-predictable models seem ideally suited to explain their behavior. Taken together with evidence from the companion manuscript that shows similar results for laboratory experiments, we conclude that the short-term predictions of the time- and slip-predictable models should be rejected in favor of earthquake models that assume either fixed slip or fixed recurrence interval. This implies that the elastic rebound model underlying the time- and slip-predictable models offers no additional value in describing earthquake behavior in an event-to-event sense, but its value in a long-term sense cannot be determined. These models likely fail because they rely on assumptions that oversimplify the earthquake cycle. We note that the time and slip of these events are predicted quite well by the fixed-slip and fixed-recurrence models, so in some sense they are time- and slip-predictable. While fixed recurrence and slip models better predict repeating earthquake behavior than the time- and slip-predictable models, we observe a correlation between slip and the preceding recurrence time for many repeating earthquake sequences in Parkfield, California. This correlation is not found in other regions, and the sequences with the correlative slip-predictable behavior are not distinguishable from nearby earthquake sequences that do not exhibit this behavior.
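
    The comparison can be made concrete with a toy experiment (synthetic data, not the authors' catalogs): forecast each inter-event time either with a fixed-recurrence model (running mean of past intervals) or with a time-predictable model (interval proportional to the preceding slip), and compare forecast errors.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 50
        slips = rng.normal(1.0, 0.1, n)           # seismic slip per event
        recurrence = rng.normal(2.0, 0.3, n)      # inter-event times (years)

        # Fixed-recurrence model: next interval = running mean of past intervals.
        fixed_pred = np.array([recurrence[:i].mean() for i in range(1, n)])

        # Time-predictable model: next interval proportional to the preceding slip.
        k = recurrence[1:].mean() / slips[:-1].mean()     # fitted scale factor
        tp_pred = k * slips[:-1]

        obs = recurrence[1:]
        rmse = lambda p: np.sqrt(np.mean((obs - p) ** 2))
        print(f"fixed-recurrence RMSE: {rmse(fixed_pred):.3f} yr")
        print(f"time-predictable RMSE: {rmse(tp_pred):.3f} yr")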

  17. Adding propensity scores to pure prediction models fails to improve predictive performance

    Directory of Open Access Journals (Sweden)

    Amy S. Nowacki

    2013-08-01

    Background. Propensity score usage seems to be growing in popularity, leading researchers to question the possible role of propensity scores in prediction modeling, despite the lack of a theoretical rationale. It is suspected that such requests are due to a lack of differentiation between the goals of predictive modeling and causal inference modeling. Therefore, the purpose of this study is to formally examine the effect of propensity scores on predictive performance. Our hypothesis is that a multivariable regression model that adjusts for all covariates will perform as well as or better than models utilizing propensity scores with respect to model discrimination and calibration. Methods. The most commonly encountered statistical scenarios for medical prediction (logistic and proportional hazards regression) were used to investigate this research question. Random cross-validation was performed 500 times to correct for optimism. The multivariable regression models adjusting for all covariates were compared with models that included adjustment for, or weighting with, the propensity scores. The methods were compared based on three predictive performance measures: (1) concordance indices; (2) Brier scores; and (3) calibration curves. Results. Multivariable models adjusting for all covariates had the highest average concordance index, the lowest average Brier score, and the best calibration. Propensity score adjustment and inverse probability weighting models without adjustment for all covariates performed worse than the full models and failed to improve predictive performance relative to full covariate adjustment. Conclusion. Propensity score techniques did not improve prediction performance measures beyond multivariable adjustment. Propensity scores are not recommended if the analytical goal is pure prediction modeling.
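
    A compressed sketch of the comparison on synthetic data (assumptions: a single in-sample fit rather than the authors' 500-fold cross-validation, and invented covariate effects): fit a full-covariate logistic model and a propensity-score-based model, then compare discrimination (AUC as a stand-in for the concordance index) and Brier score.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score, brier_score_loss

        rng = np.random.default_rng(2)
        n = 2000
        x = rng.normal(size=(n, 3))                               # covariates
        treat = rng.binomial(1, 1 / (1 + np.exp(-x[:, 0])))       # treatment
        logit = 0.8 * treat + x @ np.array([0.5, -0.7, 0.3])
        y = rng.binomial(1, 1 / (1 + np.exp(-logit)))             # outcome

        # Estimated propensity score: P(treatment | covariates).
        ps = LogisticRegression().fit(x, treat).predict_proba(x)[:, 1]

        full_feats = np.column_stack([treat, x])                  # all covariates
        ps_feats = np.column_stack([treat, ps])                   # propensity only
        full = LogisticRegression().fit(full_feats, y)
        ps_only = LogisticRegression().fit(ps_feats, y)

        for name, m, feats in [("full covariates ", full, full_feats),
                               ("propensity score", ps_only, ps_feats)]:
            p = m.predict_proba(feats)[:, 1]
            print(f"{name}: AUC={roc_auc_score(y, p):.3f}, "
                  f"Brier={brier_score_loss(y, p):.3f}")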

  18. PEEX Modelling Platform for Seamless Environmental Prediction

    Science.gov (United States)

    Baklanov, Alexander; Mahura, Alexander; Arnold, Stephen; Makkonen, Risto; Petäjä, Tuukka; Kerminen, Veli-Matti; Lappalainen, Hanna K.; Ezau, Igor; Nuterman, Roman; Zhang, Wen; Penenko, Alexey; Gordov, Evgeny; Zilitinkevich, Sergej; Kulmala, Markku

    2017-04-01

    The Pan-Eurasian EXperiment (PEEX) is a multidisciplinary, multi-scale research programme started in 2012 and aimed at resolving the major uncertainties in Earth System Science and global sustainability issues concerning the Arctic and boreal Northern Eurasian regions and China. Such challenges include climate change, air quality, biodiversity loss, chemicalization, food supply, and the use of natural resources by mining, industry, energy production and transport. The research infrastructure introduces the current state-of-the-art modeling platform and observation systems in the Pan-Eurasian region and presents the future baselines for the coherent and coordinated research infrastructures in the PEEX domain. The PEEX Modelling Platform is characterized by a complex, seamless, integrated Earth System Modeling (ESM) approach, in combination with specific models of different processes and elements of the system, acting on different temporal and spatial scales. An ensemble approach is taken to the integration of modeling results from different models, participants and countries. PEEX utilizes the full potential of a hierarchy of models: scenario analysis, inverse modeling, and modeling based on measurement needs and processes. The models are validated and constrained by available in-situ and remote sensing data of various spatial and temporal scales using data assimilation and top-down modeling. The analysis of the anticipated large volumes of data produced by the available models and sensors will be supported by a dedicated virtual research environment developed for this purpose.

  19. The second iteration of the Systems Prioritization Method: A systems prioritization and decision-aiding tool for the Waste Isolation Pilot Plant: Volume 2, Summary of technical input and model implementation

    International Nuclear Information System (INIS)

    Prindle, N.H.; Mendenhall, F.T.; Trauth, K.; Boak, D.M.; Beyeler, W.; Hora, S.; Rudeen, D.

    1996-05-01

    The Systems Prioritization Method (SPM) is a decision-aiding tool developed by Sandia National Laboratories (SNL). SPM provides an analytical basis for supporting programmatic decisions for the Waste Isolation Pilot Plant (WIPP) to meet selected portions of the applicable US EPA long-term performance regulations. The first iteration of SPM (SPM-1), the prototype for SPM, was completed in 1994. It served as a benchmark and a test bed for developing the tools needed for the second iteration of SPM (SPM-2). SPM-2, completed in 1995, is intended for programmatic decision making. This is Volume 2 of the three-volume final report on the second iteration of the SPM. It describes the technical input and model implementation for SPM-2, and presents the SPM-2 technical baseline and the activities, activity outcomes, outcome probabilities, and input parameters for the SPM-2 analysis.

  20. Submillisievert Computed Tomography of the Chest Using Model-Based Iterative Algorithm: Optimization of Tube Voltage With Regard to Patient Size.

    Science.gov (United States)

    Deák, Zsuzsanna; Maertz, Friedrich; Meurer, Felix; Notohamiprodjo, Susan; Mueck, Fabian; Geyer, Lucas L; Reiser, Maximilian F; Wirth, Stefan

    The aim of this study was to define the optimal tube potential for soft-tissue and vessel visualization in dose-reduced chest CT protocols using a model-based iterative algorithm in average-weight and overweight patients. Thirty-six patients receiving chest CT according to 3 protocols (120 kVp/noise index [NI], 60; 100 kVp/NI, 65; 80 kVp/NI, 70) were included in this prospective study, which was approved by the ethics committee. Patients' physical parameters and dose descriptors were recorded. Images were reconstructed with a model-based algorithm. Two radiologists evaluated image quality and lesion conspicuity; the protocols were intraindividually compared with a preceding control CT reconstructed with a statistical algorithm (120 kVp/NI, 20). The mean and standard deviation of the attenuation of muscle and fat tissue and the signal-to-noise ratio of the aorta were measured. Diagnostic images (lesion conspicuity, 95%-100%) were acquired in average-weight and overweight patients at 1.34, 1.02, and 1.08 mGy and at 3.41, 3.20, and 2.88 mGy at 120, 100, and 80 kVp, respectively. Data are given as CT dose index volume values. The model-based algorithm allows for submillisievert chest CT in average-weight patients; the use of 100 kVp is recommended.
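
    The ROI measurement mentioned in the record (mean, standard deviation, and signal-to-noise ratio of the aorta) can be sketched as follows; the synthetic array and ROI coordinates are stand-ins for real DICOM data.

        import numpy as np

        rng = np.random.default_rng(3)
        ct_slice = rng.normal(40.0, 12.0, size=(512, 512))   # fake HU image
        ct_slice[240:272, 240:272] += 300.0                  # "aorta" ROI, contrast-enhanced

        roi = ct_slice[240:272, 240:272]
        mean_hu, sd_hu = roi.mean(), roi.std()
        snr = mean_hu / sd_hu                                # signal-to-noise ratio
        print(f"aorta ROI: mean={mean_hu:.1f} HU, SD={sd_hu:.1f} HU, SNR={snr:.2f}")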