WorldWideScience

Sample records for accurate reduced models

  1. Accurate Modeling of Advanced Reflectarrays

    DEFF Research Database (Denmark)

    Zhou, Min

    Analysis and optimization methods for the design of advanced printed reflectarrays have been investigated, and the study is focused on developing an accurate and efficient simulation tool. For the analysis, a good compromise between accuracy and efficiency can be obtained using the spectral domain...

  2. Universality: Accurate Checks in Dyson's Hierarchical Model

    Science.gov (United States)

    Godina, J. J.; Meurice, Y.; Oktay, M. B.

    2003-06-01

    In this talk we present high-accuracy calculations of the susceptibility near β_c for Dyson's hierarchical model in D = 3. Using linear fitting, we estimate the leading (γ) and subleading (Δ) exponents. Independent estimates are obtained by calculating the first two eigenvalues of the linearized renormalization group transformation. We found γ = 1.29914073 ± 10⁻⁸ and Δ = 0.4259469 ± 10⁻⁷, independently of the choice of local integration measure (Ising or Landau-Ginzburg). After a suitable rescaling, the approximate fixed points for a large class of local measures coincide accurately with a fixed point constructed by Koch and Wittwer.

  3. Determining Reduced Order Models for Optimal Stochastic Reduced Order Models

    Energy Technology Data Exchange (ETDEWEB)

    Bonney, Matthew S. [Sandia National Lab. (SNL-CA), Livermore, CA (United States)]; Brake, Matthew R.W. [Sandia National Lab. (SNL-CA), Livermore, CA (United States)]

    2015-08-01

    The use of parameterized reduced order models (PROMs) within the stochastic reduced order model (SROM) framework is a logical progression for both methods. In this report, five different parameterized reduced order models are selected and critiqued against each other and against the truth model for the example of the Brake-Reuss beam. The models are: a Taylor series using finite differences, a proper orthogonal decomposition of the output, a Craig-Bampton representation of the model, a method that uses hyper-dual numbers to determine the sensitivities, and a meta-model method that uses the hyper-dual results and constructs a polynomial curve to better represent the output data. The methods are compared against a parameter sweep and a distribution propagation where the first four statistical moments are used as a comparison. Each method produces very accurate results, with the Craig-Bampton reduction being the least accurate. The models are also compared on the time required to evaluate each model, where the meta-model requires the least computation time by a significant amount. Each of the five models provided accurate results in a reasonable time frame. The determination of which model to use depends on the availability of the high-fidelity model and on how many evaluations can be performed. Analysis of the output distribution is examined by using a large Monte-Carlo simulation along with a reduced simulation using Latin Hypercube sampling and the stochastic reduced order model sampling technique. Both techniques produced accurate results. The stochastic reduced order modeling technique produced less error when compared to an exhaustive sampling for the majority of methods.
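
    A minimal sketch of the first of these surrogates, a first-order Taylor-series PROM built from finite-difference sensitivities, is given below. The quadratic truth_model, its nominal parameters, and the sampled distribution are hypothetical placeholders rather than the Brake-Reuss beam model used in the report.

```python
import numpy as np

def truth_model(p):
    """Hypothetical expensive model: returns a scalar response for parameters p."""
    return np.sin(p[0]) + 0.5 * p[0] * p[1] + p[1] ** 2

def taylor_prom(model, p0, h=1e-6):
    """Build a first-order Taylor surrogate around p0 using central finite differences."""
    p0 = np.asarray(p0, dtype=float)
    f0 = model(p0)
    grad = np.zeros_like(p0)
    for i in range(p0.size):
        dp = np.zeros_like(p0)
        dp[i] = h
        grad[i] = (model(p0 + dp) - model(p0 - dp)) / (2.0 * h)
    return lambda p: f0 + grad @ (np.asarray(p, dtype=float) - p0)

# Propagate a parameter distribution through the cheap surrogate and compare
# the first two statistical moments against the truth model.
rng = np.random.default_rng(0)
samples = rng.normal(loc=[1.0, 0.5], scale=[0.05, 0.05], size=(10000, 2))
surrogate = taylor_prom(truth_model, p0=[1.0, 0.5])
approx = np.array([surrogate(p) for p in samples])
exact = np.array([truth_model(p) for p in samples])
print("mean (surrogate / truth):", approx.mean(), exact.mean())
print("std  (surrogate / truth):", approx.std(), exact.std())
```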

  4. Accurate Electromagnetic Modeling Methods for Integrated Circuits

    NARCIS (Netherlands)

    Sheng, Z.

    2010-01-01

    The present development of modern integrated circuits (IC’s) is characterized by a number of critical factors that make their design and verification considerably more difficult than before. This dissertation addresses the important questions of modeling all electromagnetic behavior of features on t

  5. Spectropolarimetrically accurate magnetohydrostatic sunspot model for forward modelling in helioseismology

    CERN Document Server

    Przybylski, D; Cally, P S

    2015-01-01

    We present a technique to construct a spectropolarimetrically accurate magneto-hydrostatic model of a large-scale solar magnetic field concentration, mimicking a sunspot. Using the constructed model we perform a simulation of acoustic wave propagation, conversion and absorption in the solar interior and photosphere with the sunspot embedded into it. Using the $6173\mathrm{\AA}$ magnetically sensitive photospheric absorption line of neutral iron, we calculate observable quantities such as continuum intensities and Doppler velocities, as well as the full Stokes vector, for the simulation at various positions on the solar disk, and analyse the influence of the non-locality of radiative transport in the solar photosphere on helioseismic measurements. Bisector shapes were used to perform multi-height observations. The differences in acoustic power at different heights within the line formation region at different positions on the solar disk were simulated and characterised. An increase in acoustic power in the simulated observ...

  6. A new, accurate predictive model for incident hypertension

    DEFF Research Database (Denmark)

    Völzke, Henry; Fung, Glenn; Ittermann, Till;

    2013-01-01

    Data mining represents an alternative approach to identify new predictors of multifactorial diseases. This work aimed at building an accurate predictive model for incident hypertension using data mining procedures....

  7. Accurate Load Modeling Based on Analytic Hierarchy Process

    Directory of Open Access Journals (Sweden)

    Zhenshu Wang

    2016-01-01

    Establishing an accurate load model is a critical problem in power system modeling and is of great significance for power system digital simulation and dynamic security analysis. The synthesis load model (SLM) considers the impact of the power distribution network and compensation capacitors, while the randomness of the power load is more precisely described by the traction power system load model (TPSLM). On the basis of these two load models, a load modeling method that combines the synthesis load with the traction power load is proposed in this paper. The method uses the analytic hierarchy process (AHP) to combine the two load models: weight coefficients for the two models are calculated after formulating the criteria and judgment matrices, and a combined model is then established from these weights. The effectiveness of the proposed method was examined through simulation. The results show that load modeling based on AHP can effectively improve the accuracy of the load model, which proves the validity of the method.
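
    The AHP weighting step described above can be sketched as follows: the principal eigenvector of a pairwise judgment matrix gives the weight coefficients, which then blend the two load-model outputs. The 2x2 judgment matrix and the load values below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def ahp_weights(judgment):
    """Principal-eigenvector weights of a pairwise comparison (judgment) matrix."""
    judgment = np.asarray(judgment, dtype=float)
    vals, vecs = np.linalg.eig(judgment)
    k = np.argmax(vals.real)                 # principal eigenvalue
    w = np.abs(vecs[:, k].real)
    return w / w.sum()

# Hypothetical judgment: the synthesis load model (SLM) is judged 3x as
# important as the traction power system load model (TPSLM) for this bus.
J = [[1.0, 3.0],
     [1.0 / 3.0, 1.0]]
w_slm, w_tpslm = ahp_weights(J)

# Combine the two load-model predictions (illustrative numbers, in MW).
p_slm, p_tpslm = 102.0, 95.0
p_combined = w_slm * p_slm + w_tpslm * p_tpslm
print(w_slm, w_tpslm, p_combined)
```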

  8. Reduced Order Podolsky Model

    CERN Document Server

    Thibes, Ronaldo

    2016-01-01

    We perform the canonical and path integral quantizations of a lower-order derivatives model describing Podolsky's generalized electrodynamics. The physical content of the model shows an auxiliary massive vector field coupled to the usual electromagnetic field. The equivalence with Podolsky's original model is studied at classical and quantum levels. Concerning the dynamical time evolution we obtain a theory with two first-class and two second-class constraints in phase space. We calculate explicitly the corresponding Dirac brackets involving both vector fields. We use the Senjanovic procedure to implement the second-class constraints and the Batalin-Fradkin-Vilkovisky path integral quantization scheme to deal with the symmetries generated by the first-class constraints. The physical interpretation of the results turns out to be simpler due to the reduced derivatives order permeating the equations of motion, Dirac brackets and effective action.

  9. ACCURATE FORECAST AS AN EFFECTIVE WAY TO REDUCE THE ECONOMIC RISK OF AGRO-INDUSTRIAL COMPLEX

    Directory of Open Access Journals (Sweden)

    Kymratova A. M.

    2014-11-01

    This article discusses ways of reducing financial, economic and social risks on the basis of accurate prediction. We study the importance of natural time series of winter wheat yield and of minimum winter and winter-spring daily temperatures. A feature of this class of time series is that it does not follow a normal distribution and shows no visible trend.

  10. On nonlinear reduced order modeling

    International Nuclear Information System (INIS)

    When applied to a model that receives n input parameters and predicts m output responses, a reduced order model estimates the variations in the m outputs of the original model resulting from variations in its n inputs. While direct execution of the forward model could provide these variations, reduced order modeling plays an indispensable role for most real-world complex models. This follows because the solutions of complex models are expensive in terms of required computational overhead, thus rendering their repeated execution computationally infeasible. To overcome this problem, reduced order modeling determines a relationship (often referred to as a surrogate model) between the input and output variations that is much cheaper to evaluate than the original model. While it is desirable to seek highly accurate surrogates, the computational overhead quickly becomes intractable, especially for high-dimensional models, n ≫ 10. In this manuscript, we demonstrate a novel reduced order modeling method for building a surrogate model that employs only 'local first-order' derivatives and a new tensor-free expansion to efficiently identify all the important features of the original model to reach a predetermined level of accuracy. This is achieved via a hybrid approach in which local first-order derivatives (i.e., gradients) of a pseudo response (a pseudo response represents a random linear combination of the original model's responses) are randomly sampled utilizing a tensor-free expansion around some reference point, with the resulting gradient information aggregated in a subspace (denoted the active subspace) of dimension much less than the dimension of the input parameter space. The active subspace is then sampled employing state-of-the-art global sampling techniques. The proposed method hybridizes the use of global sampling methods for uncertainty quantification and local variational methods for sensitivity analysis. In a similar manner to
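
    A rough sketch of the gradient-aggregation step (building an active subspace from randomly sampled first-order derivatives of a pseudo response) is shown below. The analytic test gradient, the input dimension, and the retained subspace dimension are assumptions for illustration only.

```python
import numpy as np

def pseudo_response_grad(x):
    """Gradient of a hypothetical pseudo response f(x) = exp(a.x); it stands in
    for the first-order derivative information sampled from the original model."""
    a = np.linspace(0.1, 1.0, x.size)
    return a * np.exp(a @ x)

rng = np.random.default_rng(1)
n, n_samples = 20, 200                     # input dimension and gradient samples

# Aggregate the sampled gradients into C ~ E[grad grad^T].
grads = np.array([pseudo_response_grad(rng.uniform(-1, 1, n)) for _ in range(n_samples)])
C = grads.T @ grads / n_samples

# Eigenvectors with dominant eigenvalues span the active subspace.
eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]
active_dim = 2                             # assumed reduced dimension, << n
W = eigvecs[:, order[:active_dim]]         # columns span the active subspace

# Global sampling for UQ can now be done over the reduced coordinates y = W^T x.
x_samples = rng.uniform(-1, 1, (1000, n))
y = x_samples @ W                          # shape (1000, active_dim)
print("retained eigenvalue fraction:", eigvals[order[:active_dim]].sum() / eigvals.sum())
```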

  11. Accurate energy model for WSN node and its optimal design

    Institute of Scientific and Technical Information of China (English)

    Kan Baoqiang; Cai Li; Zhu Hongsong; Xu Yongjun

    2008-01-01

    With the development of CMOS and MEMS technologies, large numbers of wireless distributed micro-sensors can be easily and rapidly deployed to form highly redundant, self-configuring, ad hoc sensor networks. To facilitate ease of deployment, these sensors operate on batteries for extended periods of time. A particular challenge in maintaining extended battery lifetime lies in achieving communication with low power. For a better understanding of the design trade-offs of wireless sensor networks (WSNs), a more accurate energy model for the wireless sensor node is proposed, and an optimal design method for an energy-efficient wireless sensor node is described as well. Unlike previously published power models, which assume that the power cost of each component in a WSN node is constant, the new model takes into account the energy dissipation of the circuits in the practical physical layer. It shows that parameters such as data rate, carrier frequency, bandwidth and Tsw have a significant effect on the WSN node's energy consumption per useful bit (EPUB). For a given quality specification, it is shown how energy consumption can be reduced by adjusting one or more of these parameters.
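
    The energy-per-useful-bit idea can be illustrated with a generic radio energy budget. The parametrization below (start-up, electronics, and power-amplifier terms) is a common textbook form assumed here for illustration; it is not the specific model proposed in the paper.

```python
def energy_per_useful_bit(payload_bits, overhead_bits, data_rate,
                          p_electronics, p_pa, p_startup, t_sw):
    """Generic EPUB estimate: radio start-up energy plus transmit energy,
    divided by the number of useful (payload) bits. Illustrative only."""
    total_bits = payload_bits + overhead_bits
    t_on = total_bits / data_rate                      # time the radio is active [s]
    energy = p_startup * t_sw + (p_electronics + p_pa) * t_on
    return energy / payload_bits

# Example: a higher data rate shortens the on-time and reduces EPUB, until the
# fixed start-up cost p_startup * t_sw dominates the per-packet energy.
for rate in (50e3, 250e3, 1e6):                        # bit/s
    epub = energy_per_useful_bit(payload_bits=1024, overhead_bits=256,
                                 data_rate=rate, p_electronics=30e-3,
                                 p_pa=20e-3, p_startup=30e-3, t_sw=1e-3)
    print(f"{rate:8.0f} bit/s -> {epub * 1e9:.1f} nJ/bit")
```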

  12. An accurate and simple quantum model for liquid water.

    Science.gov (United States)

    Paesani, Francesco; Zhang, Wei; Case, David A; Cheatham, Thomas E; Voth, Gregory A

    2006-11-14

    The path-integral molecular dynamics and centroid molecular dynamics methods have been applied to investigate the behavior of liquid water at ambient conditions starting from a recently developed simple point charge/flexible (SPC/Fw) model. Several quantum structural, thermodynamic, and dynamical properties have been computed and compared to the corresponding classical values, as well as to the available experimental data. The path-integral molecular dynamics simulations show that the inclusion of quantum effects results in a less structured liquid with a reduced amount of hydrogen bonding in comparison to its classical analog. The nuclear quantization also leads to a smaller dielectric constant and a larger diffusion coefficient relative to the corresponding classical values. Collective and single molecule time correlation functions show a faster decay than their classical counterparts. Good agreement with the experimental measurements in the low-frequency region is obtained for the quantum infrared spectrum, which also shows a higher intensity and a redshift relative to its classical analog. A modification of the original parametrization of the SPC/Fw model is suggested and tested in order to construct an accurate quantum model, called q-SPC/Fw, for liquid water. The quantum results for several thermodynamic and dynamical properties computed with the new model are shown to be in a significantly better agreement with the experimental data. Finally, a force-matching approach was applied to the q-SPC/Fw model to derive an effective quantum force field for liquid water in which the effects due to the nuclear quantization are explicitly distinguished from those due to the underlying molecular interactions. Thermodynamic and dynamical properties computed using standard classical simulations with this effective quantum potential are found in excellent agreement with those obtained from significantly more computationally demanding full centroid molecular dynamics

  13. An accurate RLGC circuit model for dual tapered TSV structure

    International Nuclear Information System (INIS)

    A fast RLGC circuit model with analytical expression is proposed for the dual tapered through-silicon via (TSV) structure in three-dimensional integrated circuits under different slope angles at the wide frequency region. By describing the electrical characteristics of the dual tapered TSV structure, the RLGC parameters are extracted based on the numerical integration method. The RLGC model includes metal resistance, metal inductance, substrate resistance, outer inductance with skin effect and eddy effect taken into account. The proposed analytical model is verified to be nearly as accurate as the Q3D extractor but more efficient. (semiconductor integrated circuits)

  14. Bayesian calibration of power plant models for accurate performance prediction

    International Nuclear Information System (INIS)

    Highlights: • Bayesian calibration is applied to power plant performance prediction. • Measurements from a plant in operation are used for model calibration. • A gas turbine performance model and steam cycle model are calibrated. • An integrated plant model is derived. • Part load efficiency is accurately predicted as a function of ambient conditions. - Abstract: Gas turbine combined cycles are expected to play an increasingly important role in the balancing of supply and demand in future energy markets. Thermodynamic modeling of these energy systems is frequently applied to assist in decision making processes related to the management of plant operation and maintenance. In most cases, model inputs, parameters and outputs are treated as deterministic quantities and plant operators make decisions with limited or no regard for uncertainties. As the steady integration of wind and solar energy into the energy market induces extra uncertainties, part load operation and reliability are becoming increasingly important. In the current study, methods are proposed to not only quantify various types of uncertainties in measurements and plant model parameters using measured data, but to also assess their effect on various aspects of performance prediction. The authors aim to account for model parameter and measurement uncertainty, and for systematic discrepancy of models with respect to reality. For this purpose, the Bayesian calibration framework of Kennedy and O’Hagan is used, which is especially suitable for high-dimensional industrial problems. The article derives a calibrated model of the plant efficiency as a function of ambient conditions and operational parameters, which is also accurate in part load. The article shows that complete statistical modeling of power plants not only enhances process models, but can also increase confidence in operational decisions.

  15. More-Accurate Model of Flows in Rocket Injectors

    Science.gov (United States)

    Hosangadi, Ashvin; Chenoweth, James; Brinckman, Kevin; Dash, Sanford

    2011-01-01

    An improved computational model for simulating flows in liquid-propellant injectors in rocket engines has been developed. Models like this one are needed for predicting fluxes of heat in, and performances of, the engines. An important part of predicting performance is predicting fluctuations of temperature, fluctuations of concentrations of chemical species, and effects of turbulence on diffusion of heat and chemical species. Customarily, diffusion effects are represented by parameters known in the art as the Prandtl and Schmidt numbers. Prior formulations include ad hoc assumptions of constant values of these parameters, but these assumptions and, hence, the formulations, are inaccurate for complex flows. In the improved model, these parameters are neither constant nor specified in advance: instead, they are variables obtained as part of the solution. Consequently, this model represents the effects of turbulence on diffusion of heat and chemical species more accurately than prior formulations do, and may enable more-accurate prediction of mixing and flows of heat in rocket-engine combustion chambers. The model has been implemented within CRUNCH CFD, a proprietary computational fluid dynamics (CFD) computer program, and has been tested within that program. The model could also be implemented within other CFD programs.

  16. On the importance of having accurate data for astrophysical modelling

    Science.gov (United States)

    Lique, Francois

    2016-06-01

    The Herschel telescope and the ALMA and NOEMA interferometers have opened new windows of observation for wavelengths ranging from the far infrared to the sub-millimeter, with spatial and spectral resolutions previously unmatched. To make the most of these observations, an accurate knowledge of the physical and chemical processes occurring in the interstellar and circumstellar media is essential. In this presentation, I will discuss the current needs of astrophysics in terms of molecular data and I will show that accurate molecular data are crucial for the proper determination of the physical conditions in molecular clouds. First, I will focus on collisional excitation studies that are needed for molecular line modelling beyond the Local Thermodynamic Equilibrium (LTE) approach. In particular, I will show how new collisional data for the HCN and HNC isomers, two tracers of star forming conditions, have allowed solving the problem of their respective abundance in cold molecular clouds. I will also present the latest collisional data that have been computed in order to analyse new highly resolved observations provided by the ALMA interferometer. Then, I will present the calculation of accurate rate constants for the F+H2 → HF+H and Cl+H2 ↔ HCl+H reactions, which have allowed a more accurate determination of the physical conditions in diffuse molecular clouds. I will also present recent work on ortho-para-H2 conversion due to hydrogen exchange, which allows a more accurate determination of the ortho-to-para H2 ratio in the universe and implies a significant revision of the cooling mechanism in astrophysical media.

  17. Accurate macroscale modelling of spatial dynamics in multiple dimensions

    CERN Document Server

    Roberts, A. J.; Bunder, J. E.

    2011-01-01

    Developments in dynamical systems theory provide new support for the macroscale modelling of PDEs and other microscale systems such as Lattice Boltzmann, Monte Carlo or Molecular Dynamics simulators. By systematically resolving subgrid microscale dynamics, the dynamical systems approach constructs accurate closures of macroscale discretisations of the microscale system. Here we specifically explore reaction-diffusion problems in two spatial dimensions as a prototype of generic systems in multiple dimensions. Our approach unifies into one the modelling of systems by a type of finite elements, and the 'equation free' macroscale modelling of microscale simulators efficiently executing only on small patches of the spatial domain. Centre manifold theory ensures that a closed model exists on the macroscale grid, is emergent, and is systematically approximated. Dividing space either into overlapping finite elements or into spatially separated small patches, the specially crafted inter-element/patch coupling als...

  18. Simple Mathematical Models Do Not Accurately Predict Early SIV Dynamics

    Directory of Open Access Journals (Sweden)

    Cecilia Noecker

    2015-03-01

    Upon infection of a new host, human immunodeficiency virus (HIV) replicates in the mucosal tissues and is generally undetectable in circulation for 1–2 weeks post-infection. Several interventions against HIV, including vaccines and antiretroviral prophylaxis, target virus replication at this earliest stage of infection. Mathematical models have been used to understand how HIV spreads from mucosal tissues systemically and what impact vaccination and/or antiretroviral prophylaxis has on viral eradication. Because predictions of such models have rarely been compared to experimental data, it remains unclear which processes included in these models are critical for predicting early HIV dynamics. Here we modified the “standard” mathematical model of HIV infection to include two populations of infected cells: cells that are actively producing the virus and cells that are transitioning into virus production mode. We evaluated the effects of several poorly known parameters on infection outcomes in this model and compared model predictions to experimental data on infection of non-human primates with variable doses of simian immunodeficiency virus (SIV). First, we found that the mode of virus production by infected cells (budding vs. bursting) has a minimal impact on the early virus dynamics for a wide range of model parameters, as long as the parameters are constrained to provide the observed rate of SIV load increase in the blood of infected animals. Interestingly, and in contrast with previous results, we found that the bursting mode of virus production generally results in a higher probability of viral extinction than the budding mode of virus production. Second, this mathematical model was not able to accurately describe the change in experimentally determined probability of host infection with increasing viral doses. Third and finally, the model was also unable to accurately explain the decline in the time to virus detection with increasing viral
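
    The modified "standard" model described above, with one population of infected cells still transitioning into virus production and one actively producing virus, can be sketched as a small ODE system. The parameter values and initial conditions below are arbitrary placeholders, not the fitted values from the study.

```python
import numpy as np
from scipy.integrate import solve_ivp

def siv_eclipse_model(t, y, beta, k, delta, p, c):
    """Target cells T, transitioning (eclipse) cells I1, producing cells I2, virus V."""
    T, I1, I2, V = y
    dT  = -beta * T * V
    dI1 =  beta * T * V - k * I1      # infected, not yet producing virus
    dI2 =  k * I1 - delta * I2        # actively producing virus
    dV  =  p * I2 - c * V
    return [dT, dI1, dI2, dV]

# Placeholder parameters and a small initial inoculum.
params = dict(beta=1e-7, k=1.0, delta=0.5, p=1e3, c=10.0)
y0 = [1e6, 0.0, 0.0, 1e-3]
sol = solve_ivp(siv_eclipse_model, (0.0, 21.0), y0, args=tuple(params.values()),
                dense_output=True, rtol=1e-8)

t = np.linspace(0, 21, 200)
viral_load = sol.sol(t)[3]
print("peak log10 viral load:", np.log10(viral_load.max()))
```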

  19. Congestion Control in WMSNs by Reducing Congestion and Free Resources to Set Accurate Rates and Priority

    Directory of Open Access Journals (Sweden)

    Akbar Majidi

    2014-08-01

    The main intention of this paper is to focus on a mechanism for reducing congestion in the network by using free resources to set accurate rates and priorities according to data needs. If two nodes send their packets along the shortest path to the parent node in a congested area, a source node must prioritize its data and route lower-priority data through suitable detour nodes that are lightly loaded or inactive. The proposed algorithm is applied to the nodes near the base station (which convey more traffic) after the congestion detection mechanism has detected congestion. Results obtained from simulation tests with the NS-2 simulator demonstrate the novelty and validity of the proposed method, which shows better performance in comparison with the CCF, PCCP and DCCP protocols.

  20. Accurate high-harmonic spectra from time-dependent two-particle reduced density matrix theory

    CERN Document Server

    Lackner, Fabian; Sato, Takeshi; Ishikawa, Kenichi L; Burgdörfer, Joachim

    2016-01-01

    The accurate description of the non-linear response of many-electron systems to strong-laser fields remains a major challenge. Methods that bypass the unfavorable exponential scaling with particle number are required to address larger systems. In this paper we present a fully three-dimensional implementation of the time-dependent two-particle reduced density matrix (TD-2RDM) method for many-electron atoms. We benchmark this approach by a comparison with multi-configurational time-dependent Hartree-Fock (MCTDHF) results for the harmonic spectra of beryllium and neon. We show that the TD-2RDM is very well-suited to describe the non-linear atomic response and to reveal the influence of electron-correlation effects.

  1. Accurate Modeling of Buck Converters with Magnetic-Core Inductors

    DEFF Research Database (Denmark)

    Astorino, Antonio; Antonini, Giulio; Swaminathan, Madhavan

    2015-01-01

    In this paper, a modeling approach for buck converters with magnetic-core inductors is presented. Due to the high nonlinearity of magnetic materials, frequency domain analysis of such circuits is not suitable for an accurate description of their behaviour. Hence, in this work, a time-domain model of buck converters with magnetic-core inductors in a Simulink environment is proposed. As an example, the presented approach is used to simulate an eight-phase buck converter. The simulation results show that an unexpected system behaviour in terms of current ripple amplitude needs the inductor core...

  2. Accurate, low-cost 3D-models of gullies

    Science.gov (United States)

    Onnen, Nils; Gronz, Oliver; Ries, Johannes B.; Brings, Christine

    2015-04-01

    Soil erosion is a widespread problem in arid and semi-arid areas, and its most severe form is gully erosion. Gullies often cut into agricultural farmland and can make an area completely unproductive. To understand the development and processes inside and around gullies, we calculated detailed 3D models of gullies in the Souss Valley in South Morocco. Near Taroudant, we had four study areas with five gullies differing in size, volume and activity. Using a Canon HF G30 camcorder, we recorded series of Full HD videos at 25 fps. Afterwards, we used Structure from Motion (SfM) to create the models. To generate accurate models while maintaining feasible runtimes, it is necessary to select around 1500-1700 images from the video, while the overlap of neighboring images should be at least 80%. In addition, it is very important to avoid selecting photos that are blurry or out of focus. Nearby pixels of a blurry image tend to have similar color values, which is why we used a MATLAB script to compare the derivatives of the images: the higher the sum of the derivatives, the sharper the image. MATLAB subdivides the video into image intervals and, from each interval, the image with the highest sum is selected. For example, a 20 min video at 25 fps yields 30,000 single images; the program inspects the first 20 images, saves the sharpest, moves on to the next 20 images, and so on. Using this algorithm, we selected 1500 images for our modeling. With VisualSFM, we calculated features and the matches between all images and produced a point cloud. Then, MeshLab has been used to build a surface out of it using the Poisson surface reconstruction approach. Afterwards we are able to calculate the size and the volume of the gullies. It is also possible to determine soil erosion rates if we compare the data with old recordings. The final step would be the combination of the terrestrial data with the data from our aerial photography. So far, the method works well and we
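
    The interval-based frame selection described above (keep the frame with the largest summed image derivative in each interval) can be reproduced outside MATLAB; a minimal Python/OpenCV sketch with assumed file names follows.

```python
import cv2
import numpy as np

def sharpest_frames(video_path, out_dir, n_select=1500):
    """Split the video into n_select intervals and write, for each interval,
    the frame with the largest summed absolute image derivative (sharpness proxy)."""
    cap = cv2.VideoCapture(video_path)
    n_total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    interval = max(1, n_total // n_select)
    best_frame, best_score, saved, idx = None, -1.0, 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        # Sum of absolute horizontal and vertical derivatives; blurry frames score low.
        score = np.abs(np.diff(gray, axis=0)).sum() + np.abs(np.diff(gray, axis=1)).sum()
        if score > best_score:
            best_frame, best_score = frame, score
        idx += 1
        if idx % interval == 0:          # end of interval: keep the sharpest frame
            cv2.imwrite(f"{out_dir}/frame_{saved:04d}.jpg", best_frame)
            best_frame, best_score = None, -1.0
            saved += 1
    cap.release()
    return saved

# Hypothetical usage (paths are placeholders):
# n = sharpest_frames("gully_video.mp4", out_dir="frames_for_sfm", n_select=1500)
```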

  3. Velocity potential formulations of highly accurate Boussinesq-type models

    DEFF Research Database (Denmark)

    Bingham, Harry B.; Madsen, Per A.; Fuhrman, David R.

    2009-01-01

    The velocity potential formulation of highly accurate Boussinesq-type models is of interest because it reduces the computational effort by approximately a factor of two and facilitates a coupling to other potential flow solvers. A new shoaling enhancement operator is introduced to derive new models (in both formulations) with a velocity profile which is always consistent... An exact infinite series solution for the potential is obtained via a Taylor expansion about an arbitrary vertical position z = ẑ. For practical implementation, however, the solution is expanded based on a slow... Cited works include: A Boussinesq-type method for fully nonlinear waves interacting with a rapidly varying bathymetry (Coast. Eng. 53, 487-504, 2006) and Jamois, E., Fuhrman, D.R., Bingham, H.B., Molin, B., 2006, Wave-structure interactions and nonlinear wave processes on the weather side of reflective structures (Coast. Eng. 53, 929-945).

  4. A Method to Build a Super Small but Practically Accurate Language Model for Handheld Devices

    Institute of Scientific and Technical Information of China (English)

    WU GenQing (吴根清); ZHENG Fang (郑方)

    2003-01-01

    In this paper, an important question is raised: can a small language model be practically accurate enough? The purpose of a language model, the problems that a language model faces, and the factors that affect the performance of a language model are then analyzed. Finally, a novel method for language model compression is proposed, which makes a large language model usable for applications on handheld devices such as mobiles, smart phones, personal digital assistants (PDAs), and handheld personal computers (HPCs). The proposed language model compression method includes three aspects. First, the language model parameters are analyzed and a criterion based on an importance measure of n-grams is used to determine which n-grams should be kept and which removed. Second, a piecewise linear warping method is proposed to compress the uni-gram count values in the full language model. Third, a rank-based quantization method is adopted to quantize the bi-gram probability values. Experiments show that with this compression method the language model can be reduced dramatically to only about 1M bytes while the performance hardly decreases. This provides good evidence that a language model compressed by means of a well-designed compression technique is practically accurate enough, and it makes the language model usable in handheld devices.
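
    One of the three compression steps, rank-based quantization of the bi-gram probabilities, can be sketched roughly as below; the codebook size and the toy probability table are assumptions, not the settings used in the paper.

```python
import numpy as np

def rank_based_quantize(probs, n_levels=16):
    """Quantize probabilities by rank: sort the values, split the sorted list into
    n_levels equally populated bins, and replace each value by its bin's mean.
    Returns (codebook, indices) so each probability is stored in a few bits."""
    probs = np.asarray(probs, dtype=float)
    order = np.argsort(probs)
    bins = np.array_split(order, n_levels)
    codebook = np.array([probs[b].mean() for b in bins])
    indices = np.empty(len(probs), dtype=np.uint8)
    for level, b in enumerate(bins):
        indices[b] = level
    return codebook, indices

# Toy bi-gram probabilities (hypothetical); 4-bit indices replace full floats.
bigram_probs = np.random.default_rng(2).dirichlet(np.ones(1000))
codebook, idx = rank_based_quantize(bigram_probs, n_levels=16)
reconstructed = codebook[idx]
print("max abs error:", np.abs(reconstructed - bigram_probs).max())
```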

  5. Fast and accurate prediction of numerical relativity waveforms from binary black hole mergers using surrogate models

    CERN Document Server

    Blackman, Jonathan; Galley, Chad R; Szilagyi, Bela; Scheel, Mark A; Tiglio, Manuel; Hemberger, Daniel A

    2015-01-01

    Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. In this paper, we construct an accurate and fast-to-evaluate surrogate model for numerical relativity (NR) waveforms from non-spinning binary black hole coalescences with mass ratios from $1$ to $10$ and durations corresponding to about $15$ orbits before merger. Our surrogate, which is built using reduced order modeling techniques, is distinct from traditional modeling efforts. We find that the full multi-mode surrogate model agrees with waveforms generated by NR to within the numerical error of the NR code. In particular, we show that our modeling strategy produces surrogates which can correctly predict NR waveforms that were not used for the surrogate's training. For all practical purposes, then, the surrogate waveform model is equivalent to the high-accuracy, large-scale simulation waveform but can be evaluated in a millisecond to a second dependin...

  6. An accurate and simple large signal model of HEMT

    DEFF Research Database (Denmark)

    Liu, Qing

    1989-01-01

    A large-signal model of discrete HEMTs (high-electron-mobility transistors) has been developed. It is simple and suitable for SPICE simulation of hybrid digital ICs. The model parameters are extracted by using computer programs and data provided by the manufacturer. Based on this model, a hybrid...

  7. Can Raters with Reduced Job Descriptive Information Provide Accurate Position Analysis Questionnaire (PAQ) Ratings?

    Science.gov (United States)

    Friedman, Lee; Harvey, Robert J.

    1986-01-01

    Job-naive raters provided with job descriptive information made Position Analysis Questionnaire (PAQ) ratings which were validated against ratings of job analysts who were also job content experts. None of the reduced job descriptive information conditions enabled job-naive raters to obtain either acceptable levels of convergent validity with…

  8. Coupling Efforts to the Accurate and Efficient Tsunami Modelling System

    Science.gov (United States)

    Son, S.

    2015-12-01

    In the present study, we couple two different types of tsunami models, i.e., the nondispersive shallow water model in characteristic form (MOST ver. 4) and the dispersive Boussinesq model in non-characteristic form (Son et al. (2011)), in an attempt to improve modelling accuracy and efficiency. Since each model deals with a different type of primary variables, additional care in matching boundary conditions is required. Model coupling and integration is achieved using an absorbing-generating boundary condition developed by Van Dongeren and Svendsen (1997). Characteristic variables (i.e., Riemann invariants) in MOST are converted to non-characteristic variables for the Boussinesq solver without any loss of physical consistency. The established modelling system has been validated on cases ranging from typical test problems to realistic tsunami events, and the simulated results reveal good performance of the developed modelling system. Since the coupled modelling system offers flexibility during implementation, considerable gains in efficiency and accuracy are expected through spot-focused application of the Boussinesq model inside the entire domain of tsunami propagation.

  9. BWR stability using a reduced dynamical model

    International Nuclear Information System (INIS)

    BWR stability can be treated with reduced order dynamical models. When the parameters of the model come from experimental data, the predictions are accurate. In this work an alternative derivation of the void fraction equation is made, highlighting the physical structure of the parameters. As the poles of the power/reactivity transfer function are related to the parameters, measurement of the poles by other techniques such as noise analysis will lead to the parameters, although the resulting system of equations is non-linear. Simple parametric calculations of the decay ratio are performed, showing why BWRs become unstable when they are operated at low flow and high power. (Author). 7 refs

  10. A Simple and Accurate Closed-Form EGN Model Formula

    CERN Document Server

    Poggiolini, P; Carena, A; Forghieri, F

    2015-01-01

    The GN model of non-linear fiber propagation has been shown to overestimate the variance of non-linearity due to the signal Gaussianity approximation, leading to maximum reach predictions for typical optical systems which may be pessimistic by about 5% to 15%, depending on fiber type and system set-up. Various models have been proposed which improve on the GN model's accuracy. One of them is the EGN model, which completely removes the Gaussianity approximation from all non-linear interference (NLI) components. The EGN model is, however, substantially more complex than the GN model. Recently, we proposed a simple closed-form formula which makes it possible to approximate the EGN model, starting from the GN model. It was however limited to all-identical, equispaced channels, and did not correct single-channel NLI (also called SCI). In this follow-up contribution, we propose an improved version which both addresses non-identical channels and corrects the SCI contribution as well. Extensive simulative testing shows the n...

  11. A more accurate model of wetting transitions with liquid helium

    International Nuclear Information System (INIS)

    Up to now, analysis of the liquid helium prewetting line on alkali metal substrates has been made using the simple model proposed by Saam et al. Some improvements on this model are considered here within a mean-field, sharp-kink model. The temperature variations of the substrate-liquid interface energy and of the liquid density are considered, as well as a more realistic effective potential for the film-substrate interaction. A comparison is made with the experimental data on rubidium and cesium.

  12. Visual texture accurate material appearance measurement, representation and modeling

    CERN Document Server

    Haindl, Michal

    2013-01-01

    This book surveys the state of the art in multidimensional, physically-correct visual texture modeling. Features: reviews the entire process of texture synthesis, including material appearance representation, measurement, analysis, compression, modeling, editing, visualization, and perceptual evaluation; explains the derivation of the most common representations of visual texture, discussing their properties, advantages, and limitations; describes a range of techniques for the measurement of visual texture, including BRDF, SVBRDF, BTF and BSSRDF; investigates the visualization of textural info

  13. Accurate wind farm development and operation. Advanced wake modelling

    Energy Technology Data Exchange (ETDEWEB)

    Brand, A.; Bot, E.; Ozdemir, H. [ECN Unit Wind Energy, P.O. Box 1, NL 1755 ZG Petten (Netherlands)]; Steinfeld, G.; Drueke, S.; Schmidt, M. [ForWind, Center for Wind Energy Research, Carl von Ossietzky Universitaet Oldenburg, D-26129 Oldenburg (Germany)]; Mittelmeier, N. [REpower Systems SE, D-22297 Hamburg (Germany)]

    2013-11-15

    The ability to calculate wind farm wakes on the basis of ambient conditions calculated with an atmospheric model is demonstrated. Specifically, comparisons are described between predicted and observed ambient conditions, and between power predictions from three wind farm wake models and power measurements, for a single and a double wake situation. The comparisons are based on performance indicators and test criteria, with the objective of determining the percentage of predictions that fall within a given range about the observed value. The Alpha Ventus site is considered, which consists of a wind farm with the same name and the met mast FINO1. Data from the 6 REpower wind turbines and the FINO1 met mast were employed. The atmospheric model WRF predicted the ambient conditions at the location and the measurement heights of the FINO1 mast. While the predictability of the wind speed and the wind direction is reasonable if sufficiently sized tolerances are employed, it is practically impossible to predict the ambient turbulence intensity and vertical shear. Three wind farm wake models predicted the individual turbine powers: FLaP-Jensen and FLaP-Ainslie from ForWind Oldenburg, and FarmFlow from ECN. The reliabilities of the FLaP-Ainslie and FarmFlow wind farm wake models are of equal order, and higher than that of FLaP-Jensen. Any difference between the predictions from these models is most clear in the double wake situation, where FarmFlow slightly outperforms FLaP-Ainslie.

  14. An accurate halo model for fitting non-linear cosmological power spectra and baryonic feedback models

    CERN Document Server

    Mead, Alexander; Heymans, Catherine; Joudaki, Shahab; Heavens, Alan

    2015-01-01

    We present an optimised variant of the halo model, designed to produce accurate matter power spectra well into the non-linear regime for a wide range of cosmological models. To do this, we introduce physically-motivated free parameters into the halo-model formalism and fit these to data from high-resolution N-body simulations. For a variety of $\Lambda$CDM and $w$CDM models the halo-model power is accurate to $\simeq 5$ per cent for $k\leq 10h\,\mathrm{Mpc}^{-1}$ and $z\leq 2$. We compare our results with recent revisions of the popular HALOFIT model and show that our predictions are more accurate. An advantage of our new halo model is that it can be adapted to account for the effects of baryonic feedback on the power spectrum. We demonstrate this by fitting the halo model to power spectra from the OWLS hydrodynamical simulation suite via parameters that govern halo internal structure. We are able to fit all feedback models investigated at the 5 per cent level using only two free parameters, and we place limi...

  15. Accurate modelling of flow induced stresses in rigid colloidal aggregates

    Science.gov (United States)

    Vanni, Marco

    2015-07-01

    A method has been developed to estimate the motion and the internal stresses induced by a fluid flow on a rigid aggregate. The approach couples Stokesian dynamics and structural mechanics in order to accurately take into account the effect of the complex geometry of the aggregates on hydrodynamic forces and the internal redistribution of stresses. The intrinsic error of the method, due to the low-order truncation of the multipole expansion of the Stokes solution, has been assessed by comparison with the analytical solution for the case of a doublet in a shear flow. In addition, it has been shown that the error becomes smaller as the number of primary particles in the aggregate increases, and hence it is expected to be negligible for realistic reproductions of large aggregates. The evaluation of internal forces is performed by an adaptation of the matrix methods of structural mechanics to the geometric features of the aggregates and to the particular stress-strain relationship that occurs at intermonomer contacts. A preliminary investigation of the stress distribution in rigid aggregates and their mode of breakup has been performed by studying the response to an elongational flow of both realistic reproductions of colloidal aggregates (made of several hundred monomers) and highly simplified structures. A very different behaviour has been evidenced between low-density aggregates with isostatic or weakly hyperstatic structures and compact aggregates with a highly hyperstatic configuration. In low-density clusters breakup is caused directly by the failure of the most stressed intermonomer contact, which is typically located in the inner region of the aggregate and hence gives rise to fragments of similar size. On the contrary, breakup of compact and highly cross-linked clusters is seldom caused by the failure of a single bond. When this happens, it proceeds through the removal of a tiny fragment from the external part of the structure. More commonly, however

  16. Simulation model accurately estimates total dietary iodine intake

    NARCIS (Netherlands)

    Verkaik-Kloosterman, J.; Veer, van 't P.; Ocke, M.C.

    2009-01-01

    One problem with estimating iodine intake is the lack of detailed data about the discretionary use of iodized kitchen salt and iodization of industrially processed foods. To be able to take into account these uncertainties in estimating iodine intake, a simulation model combining deterministic and p

  17. Compact and Accurate Turbocharger Modelling for Engine Control

    DEFF Research Database (Denmark)

    Sorenson, Spencer C; Hendricks, Elbert; Magnússon, Sigurjón;

    2005-01-01

    With the current trend towards engine downsizing, the use of turbochargers to obtain extra engine power has become common. A great difficulty in the use of turbochargers is the modelling of the compressor map. In general this is done by inserting the compressor map directly into the engine ECU...

  18. Double Layered Sheath in Accurate HV XLPE Cable Modeling

    DEFF Research Database (Denmark)

    Gudmundsdottir, Unnur Stella; Silva, J. De; Bak, Claus Leth;

    2010-01-01

    This paper discusses modelling of high voltage AC underground cables. For long cables, when crossbonding points are present, not only the coaxial mode of propagation is excited during transient phenomena, but also the intersheath mode. This causes inaccurate simulation results for high frequency studies of crossbonded cables. For the intersheath mode, the correct physical representation of the cable's sheath as well as the proximity effect play a large role and will ensure correct calculation of the series impedance matrix and therefore a correct simulation of the actual cable. This paper gives a new, more correct method for modelling the actual physical layout of the sheath. It is shown by comparison to field measurements how the new method of simulating the cable's sheath results in simulations with less deviation from field test results.

  19. Simulation model accurately estimates total dietary iodine intake.

    Science.gov (United States)

    Verkaik-Kloosterman, Janneke; van 't Veer, Pieter; Ocké, Marga C

    2009-07-01

    One problem with estimating iodine intake is the lack of detailed data about the discretionary use of iodized kitchen salt and the iodization of industrially processed foods. To be able to take these uncertainties into account in estimating iodine intake, a simulation model combining deterministic and probabilistic techniques was developed. Data from the Dutch National Food Consumption Survey (1997-1998) and an update of the Food Composition database were used to simulate 3 different scenarios: Dutch iodine legislation until July 2008, Dutch iodine legislation after July 2008, and a potential future situation. Results from studies measuring iodine excretion during the former legislation are comparable with the iodine intakes estimated with our model. For both the former and current legislation, iodine intake was adequate for a large part of the Dutch population, but it was too low for some young children. In the scenario with lower salt iodine levels, the percentage of the Dutch population with intakes that were too low increased (almost 10% of young children). To keep iodine intakes adequate, salt iodine levels should not be decreased unless many more foods contain iodized salt. Our model should be useful in predicting the effects of food reformulation or fortification on habitual nutrient intakes.

  20. The slow-scale linear noise approximation: an accurate, reduced stochastic description of biochemical networks under timescale separation conditions

    Directory of Open Access Journals (Sweden)

    Thomas Philipp

    2012-05-01

    Background: It is well known that the deterministic dynamics of biochemical reaction networks can be more easily studied if timescale separation conditions are invoked (the quasi-steady-state assumption). In this case the deterministic dynamics of a large network of elementary reactions are well described by the dynamics of a smaller network of effective reactions. Each of the latter represents a group of elementary reactions in the large network and has associated with it an effective macroscopic rate law. A popular method to achieve model reduction in the presence of intrinsic noise consists of using the effective macroscopic rate laws to heuristically deduce effective probabilities for the effective reactions, which then enables simulation via the stochastic simulation algorithm (SSA). The validity of this heuristic SSA method is a priori doubtful because the reaction probabilities for the SSA have only been rigorously derived from microscopic physics arguments for elementary reactions. Results: We here obtain, by rigorous means and in closed form, a reduced linear Langevin equation description of the stochastic dynamics of monostable biochemical networks in conditions characterized by small intrinsic noise and timescale separation. The slow-scale linear noise approximation (ssLNA), as the new method is called, is used to calculate the intrinsic noise statistics of enzyme and gene networks. The results agree very well with SSA simulations of the non-reduced network of elementary reactions. In contrast, the conventional heuristic SSA is shown to overestimate the size of the noise for Michaelis-Menten kinetics, considerably underestimate the size of the noise for Hill-type kinetics, and in some cases even miss the prediction of noise-induced oscillations. Conclusions: A new general method, the ssLNA, is derived and shown to correctly describe the statistics of intrinsic noise about the macroscopic concentrations under timescale separation conditions.
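
    The linear noise approximation underlying the ssLNA reduces the noise calculation to a Lyapunov equation for the covariance of the fluctuations about the macroscopic steady state. The sketch below applies the plain (not slow-scale) LNA to a two-stage gene expression model with illustrative rate constants; it is not the reduced enzyme or gene networks analysed in the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Two-stage gene expression (mRNA m, protein p) with illustrative rate constants.
k_m, g_m = 10.0, 1.0      # mRNA synthesis / degradation
k_p, g_p = 5.0, 0.1       # protein synthesis (per mRNA) / degradation

# Deterministic steady state of the macroscopic rate equations.
m_ss = k_m / g_m
p_ss = k_p * m_ss / g_p

# Linear noise approximation at steady state: J*Sigma + Sigma*J^T + D = 0,
# with J the Jacobian of the rate equations and D the diffusion matrix.
J = np.array([[-g_m, 0.0],
              [k_p, -g_p]])
D = np.diag([k_m + g_m * m_ss, k_p * m_ss + g_p * p_ss])

Sigma = solve_continuous_lyapunov(J, -D)   # solves J X + X J^T = -D
print("mRNA Fano factor   :", Sigma[0, 0] / m_ss)   # ~1 (Poissonian)
print("protein Fano factor:", Sigma[1, 1] / p_ss)   # >1 due to transmitted mRNA noise
```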

  1. Accurate numerical solutions for elastic-plastic models

    International Nuclear Information System (INIS)

    The accuracy of two integration algorithms is studied for the common engineering condition of a von Mises, isotropic hardening model under plane stress. Errors in stress predictions for given total strain increments are expressed with contour plots of two parameters: an angle in the pi plane and the difference between the exact and computed yield-surface radii. The two methods are the tangent-predictor/radial-return approach and the elastic-predictor/radial-corrector algorithm originally developed by Mendelson. The accuracy of a combined tangent-predictor/radial-corrector algorithm is also investigated

  2. An accurate halo model for fitting non-linear cosmological power spectra and baryonic feedback models

    Science.gov (United States)

    Mead, A. J.; Peacock, J. A.; Heymans, C.; Joudaki, S.; Heavens, A. F.

    2015-12-01

    We present an optimized variant of the halo model, designed to produce accurate matter power spectra well into the non-linear regime for a wide range of cosmological models. To do this, we introduce physically motivated free parameters into the halo-model formalism and fit these to data from high-resolution N-body simulations. For a variety of Λ cold dark matter (ΛCDM) and wCDM models, the halo-model power is accurate to ≃ 5 per cent for k ≤ 10 h Mpc⁻¹ and z ≤ 2. An advantage of our new halo model is that it can be adapted to account for the effects of baryonic feedback on the power spectrum. We demonstrate this by fitting the halo model to power spectra from the OWLS (OverWhelmingly Large Simulations) hydrodynamical simulation suite via parameters that govern halo internal structure. We are able to fit all feedback models investigated at the 5 per cent level using only two free parameters, and we place limits on the range of these halo parameters for feedback models investigated by the OWLS simulations. Accurate predictions to high k are vital for weak-lensing surveys, and these halo parameters could be considered nuisance parameters to marginalize over in future analyses to mitigate uncertainty regarding the details of feedback. Finally, we investigate how lensing observables predicted by our model compare to those from simulations and from HALOFIT for a range of k-cuts and feedback models and quantify the angular scales at which these effects become important. Code to calculate power spectra from the model presented in this paper can be found at https://github.com/alexander-mead/hmcode.
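
    For reference, the decomposition that this halo-model variant optimizes is the standard two-term form (written here in generic notation, not the specific HMcode parametrization):

```latex
P(k) = P_{2\mathrm{h}}(k) + P_{1\mathrm{h}}(k), \qquad
P_{1\mathrm{h}}(k) = \int \frac{\mathrm{d}n}{\mathrm{d}M}
\left(\frac{M}{\bar{\rho}}\right)^{2} \left|\tilde{u}(k,M)\right|^{2} \mathrm{d}M, \qquad
P_{2\mathrm{h}}(k) \simeq \left[\int \frac{\mathrm{d}n}{\mathrm{d}M}\, b(M)\,
\frac{M}{\bar{\rho}}\, \tilde{u}(k,M)\, \mathrm{d}M\right]^{2} P_{\mathrm{lin}}(k),
```

    where dn/dM is the halo mass function, b(M) the linear halo bias, ρ̄ the mean matter density, and ũ(k, M) the normalized Fourier transform of the halo density profile; the fitted free parameters modify the ingredients of these two terms.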

  3. An Accurate and Computationally Efficient Model for Membrane-Type Circular-Symmetric Micro-Hotplates

    Directory of Open Access Journals (Sweden)

    Usman Khan

    2014-04-01

    Ideally, the design of high-performance micro-hotplates would require a large number of simulations because of the existence of many important design parameters as well as the possibly crucial effects of both spread and drift. However, the computational cost of FEM simulations, which are the only available tool for accurately predicting the temperature in micro-hotplates, is very high. As a result, micro-hotplate designers generally have no effective simulation tools for the optimization. In order to circumvent these issues, here, we propose a model for practical circular-symmetric micro-hotplates which takes advantage of modified Bessel functions, a computationally efficient matrix approach for considering the relevant boundary conditions, Taylor linearization for modeling the Joule heating and radiation losses, and an external-region-segmentation strategy in order to accurately take into account radiation losses in the entire micro-hotplate. The proposed model is almost as accurate as FEM simulations and two to three orders of magnitude more computationally efficient (e.g., 45 s versus more than 8 h). The residual errors, which are mainly associated with the undesired heating in the electrical contacts, are small (e.g., a few degrees Celsius for an 800 °C operating temperature) and, for important analyses, almost constant. Therefore, we also introduce a computationally easy single-FEM-compensation strategy in order to reduce the residual errors to about 1 °C. As illustrative examples of the power of our approach, we report the systematic investigation of a spread in the membrane thermal conductivity and of combined variations of both ambient and bulk temperatures. Our model enables a much faster characterization of micro-hotplates and, thus, a much more effective optimization prior to fabrication.

  4. A Biomechanical Model of the Scapulothoracic Joint to Accurately Capture Scapular Kinematics during Shoulder Movements.

    Directory of Open Access Journals (Sweden)

    Ajay Seth

    The complexity of shoulder mechanics combined with the movement of skin relative to the scapula makes it difficult to measure shoulder kinematics with sufficient accuracy to distinguish between symptomatic and asymptomatic individuals. Multibody skeletal models can improve motion capture accuracy by reducing the space of possible joint movements, and models are used widely to improve measurement of lower limb kinematics. In this study, we developed a rigid-body model of a scapulothoracic joint to describe the kinematics of the scapula relative to the thorax. This model describes scapular kinematics with four degrees of freedom: 1) elevation and 2) abduction of the scapula on an ellipsoidal thoracic surface, 3) upward rotation of the scapula normal to the thoracic surface, and 4) internal rotation of the scapula to lift the medial border of the scapula off the surface of the thorax. The surface dimensions and joint axes can be customized to match an individual's anthropometry. We compared the model to "gold standard" bone-pin kinematics collected during three shoulder tasks and found modeled scapular kinematics to be accurate to within 2 mm root-mean-squared error for individual bone-pin markers across all markers and movement tasks. As an additional test, we added random and systematic noise to the bone-pin marker data and found that the model reduced kinematic variability due to noise by 65% compared to Euler angles computed without the model. Our scapulothoracic joint model can be used for inverse and forward dynamics analyses and to compute joint reaction loads. The computational performance of the scapulothoracic joint model is well suited for real-time applications; it is freely available for use with OpenSim 3.2, and is customizable and usable with other OpenSim models.

  5. A Biomechanical Model of the Scapulothoracic Joint to Accurately Capture Scapular Kinematics during Shoulder Movements.

    Science.gov (United States)

    Seth, Ajay; Matias, Ricardo; Veloso, António P; Delp, Scott L

    2016-01-01

    The complexity of shoulder mechanics combined with the movement of skin relative to the scapula makes it difficult to measure shoulder kinematics with sufficient accuracy to distinguish between symptomatic and asymptomatic individuals. Multibody skeletal models can improve motion capture accuracy by reducing the space of possible joint movements, and models are used widely to improve measurement of lower limb kinematics. In this study, we developed a rigid-body model of a scapulothoracic joint to describe the kinematics of the scapula relative to the thorax. This model describes scapular kinematics with four degrees of freedom: 1) elevation and 2) abduction of the scapula on an ellipsoidal thoracic surface, 3) upward rotation of the scapula normal to the thoracic surface, and 4) internal rotation of the scapula to lift the medial border of the scapula off the surface of the thorax. The surface dimensions and joint axes can be customized to match an individual's anthropometry. We compared the model to "gold standard" bone-pin kinematics collected during three shoulder tasks and found modeled scapular kinematics to be accurate to within 2 mm root-mean-squared error for individual bone-pin markers across all markers and movement tasks. As an additional test, we added random and systematic noise to the bone-pin marker data and found that the model reduced kinematic variability due to noise by 65% compared to Euler angles computed without the model. Our scapulothoracic joint model can be used for inverse and forward dynamics analyses and to compute joint reaction loads. The computational performance of the scapulothoracic joint model is well suited for real-time applications; it is freely available for use with OpenSim 3.2, and is customizable and usable with other OpenSim models. PMID:26734761

  6. Parameterized reduced-order models using hyper-dual numbers.

    Energy Technology Data Exchange (ETDEWEB)

    Fike, Jeffrey A.; Brake, Matthew Robert

    2013-10-01

    The goal of most computational simulations is to accurately predict the behavior of a real, physical system. Accurate predictions often require very computationally expensive analyses and so reduced order models (ROMs) are commonly used. ROMs aim to reduce the computational cost of the simulations while still providing accurate results by including all of the salient physics of the real system in the ROM. However, real, physical systems often deviate from the idealized models used in simulations due to variations in manufacturing or other factors. One approach to this issue is to create a parameterized model in order to characterize the effect of perturbations from the nominal model on the behavior of the system. This report presents a methodology for developing parameterized ROMs, which is based on Craig-Bampton component mode synthesis and the use of hyper-dual numbers to calculate the derivatives necessary for the parameterization.
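
    The core idea can be illustrated in a few lines. The sketch below (our own illustration, not the report's implementation) shows a minimal hyper-dual number class: evaluating a function at x + ε1 + ε2 returns the value, the first derivative, and the exact second derivative in a single pass, with no finite-difference step-size issues, which is what makes hyper-dual numbers convenient for computing the derivatives needed to parameterize a ROM.

```python
# Minimal hyper-dual arithmetic sketch (illustrative, not the report's code).
import math


class HyperDual:
    def __init__(self, real, e1=0.0, e2=0.0, e12=0.0):
        self.real, self.e1, self.e2, self.e12 = real, e1, e2, e12

    def __add__(self, other):
        other = other if isinstance(other, HyperDual) else HyperDual(other)
        return HyperDual(self.real + other.real, self.e1 + other.e1,
                         self.e2 + other.e2, self.e12 + other.e12)

    def __mul__(self, other):
        # uses e1*e1 = e2*e2 = 0 and e1*e2 = e12
        other = other if isinstance(other, HyperDual) else HyperDual(other)
        return HyperDual(
            self.real * other.real,
            self.real * other.e1 + self.e1 * other.real,
            self.real * other.e2 + self.e2 * other.real,
            self.real * other.e12 + self.e1 * other.e2
            + self.e2 * other.e1 + self.e12 * other.real)

    def sin(self):
        s, c = math.sin(self.real), math.cos(self.real)
        return HyperDual(s, c * self.e1, c * self.e2,
                         c * self.e12 - s * self.e1 * self.e2)


# Example: f(x) = x*x*sin(x); value, first and second derivatives at x = 1.3.
x = HyperDual(1.3, 1.0, 1.0, 0.0)
f = (x * x) * x.sin()
print(f.real, f.e1, f.e12)   # f(1.3), f'(1.3), f''(1.3), all exact
```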

  7. A scalable and accurate method for classifying protein-ligand binding geometries using a MapReduce approach.

    Science.gov (United States)

    Estrada, T; Zhang, B; Cicotti, P; Armen, R S; Taufer, M

    2012-07-01

    We present a scalable and accurate method for classifying protein-ligand binding geometries in molecular docking. Our method is a three-step process: the first step encodes the geometry of a three-dimensional (3D) ligand conformation into a single 3D point in the space; the second step builds an octree by assigning an octant identifier to every single point in the space under consideration; and the third step performs an octree-based clustering on the reduced conformation space and identifies the most dense octant. We adapt our method for MapReduce and implement it in Hadoop. The load-balancing, fault-tolerance, and scalability in MapReduce allow screening of very large conformation spaces not approachable with traditional clustering methods. We analyze results for docking trials for 23 protein-ligand complexes for HIV protease, 21 protein-ligand complexes for Trypsin, and 12 protein-ligand complexes for P38alpha kinase. We also analyze cross docking trials for 24 ligands, each docking into 24 protein conformations of the HIV protease, and receptor ensemble docking trials for 24 ligands, each docking in a pool of HIV protease receptors. Our method demonstrates significant improvement over energy-only scoring for the accurate identification of native ligand geometries in all these docking assessments. The advantages of our clustering approach make it attractive for complex applications in real-world drug design efforts. We demonstrate that our method is particularly useful for clustering docking results using a minimal ensemble of representative protein conformational states (receptor ensemble docking), which is now a common strategy to address protein flexibility in molecular docking. PMID:22658682
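
    As an illustration of the octant-identifier step (a minimal sketch under our own assumptions about bounds and depth, not the authors' Hadoop implementation), the function below encodes a 3D point into an octree cell identifier by descending a fixed number of levels and appending three bits per level; conformations sharing an identifier fall in the same cell, and the most frequent identifier marks the densest octant.

```python
# Illustrative octant-identifier sketch; bounds and depth are arbitrary examples.
from collections import Counter


def octant_id(point, lo=(0.0, 0.0, 0.0), hi=(1.0, 1.0, 1.0), depth=4):
    """Return the octree cell identifier of `point` at the given depth."""
    cell_id = 0
    lo, hi = list(lo), list(hi)
    for _ in range(depth):
        octant = 0
        for axis in range(3):
            mid = 0.5 * (lo[axis] + hi[axis])
            if point[axis] >= mid:           # upper half along this axis
                octant |= 1 << axis
                lo[axis] = mid
            else:
                hi[axis] = mid
        cell_id = (cell_id << 3) | octant    # append 3 bits per level
    return cell_id


# Points with equal identifiers lie in the same leaf octant; the most common
# identifier corresponds to the dominant (densest) binding geometry.
points = [(0.12, 0.80, 0.33), (0.13, 0.81, 0.30), (0.90, 0.10, 0.55)]
counts = Counter(octant_id(p) for p in points)
print(counts.most_common(1))
```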

  8. Reducing the Need for Accurate Stream Flow Forecasting for Water Supply Planning by Augmenting Reservoir Operations with Seawater Desalination and Wastewater Recycling

    Science.gov (United States)

    Bhushan, R.; Ng, T. L.

    2014-12-01

    Accurate stream flow forecasts are critical for reservoir operations for water supply planning. As the world urban population increases, the demand for water in cities is also increasing, making accurate forecasts even more important. However, accurate forecasting of stream flows is difficult owing to short- and long-term weather variations. We propose to reduce this need for accurate stream flow forecasts by augmenting reservoir operations with seawater desalination and wastewater recycling. We develop a robust operating policy for the joint operation of the three sources. With the joint model, we tap into the unlimited reserve of seawater through desalination, and make use of local supplies of wastewater through recycling. However, both seawater desalination and recycling are energy intensive and relatively expensive. Reservoir water, on the other hand, is generally cheaper but is limited and variable in its availability, increasing the risk of water shortage during extreme climate events. We operate the joint system by optimizing it using a genetic algorithm to maximize water supply reliability and resilience while minimizing vulnerability, subject to a budget constraint and for a given stream flow forecast. To compute the total cost of the system, we take into account the pumping cost of transporting reservoir water to its final destination, and the capital and operating costs of desalinating seawater and recycling wastewater. We produce results for different hydroclimatic regions based on artificial stream flows we generate using a simple hydrological model and an autoregressive time series model. The artificial flows are generated from precipitation and temperature data from the Canadian Regional Climate Model for present and future scenarios. We observe that the joint operation is able to effectively minimize the negative effects of stream flow forecast uncertainty on system performance at an overall cost that is not significantly greater than the cost of a
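
    A minimal sketch of how reliability, resilience and vulnerability can be computed from a simulated supply/demand trace is given below; the metric definitions follow the commonly used shortage-based formulations and are our assumption, not necessarily the exact objective terms used by the authors. The supply and demand numbers are hypothetical.

```python
# Illustrative performance-metric sketch (assumed definitions, synthetic data).
import numpy as np


def performance_metrics(supply, demand):
    supply, demand = np.asarray(supply, float), np.asarray(demand, float)
    shortage = np.maximum(demand - supply, 0.0)
    failure = shortage > 0
    reliability = 1.0 - failure.mean()                 # fraction of periods fully met
    # resilience: probability that a failure period is followed by a success period
    recoveries = (~failure[1:]) & failure[:-1]
    resilience = recoveries.sum() / max(failure[:-1].sum(), 1)
    # vulnerability: average shortage magnitude during failure periods
    vulnerability = shortage[failure].mean() if failure.any() else 0.0
    return reliability, resilience, vulnerability


# Hypothetical 12-period trace with a constant demand of 10 units.
print(performance_metrics(
    supply=[9, 10, 8, 7, 10, 10, 6, 9, 10, 10, 10, 8],
    demand=[10] * 12))
```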

  9. Accurate mask model implementation in OPC model for 14nm nodes and beyond

    Science.gov (United States)

    Zine El Abidine, Nacer; Sundermann, Frank; Yesilada, Emek; Farys, Vincent; Huguennet, Frederic; Armeanu, Ana-Maria; Bork, Ingo; Chomat, Michael; Buck, Peter; Schanen, Isabelle

    2015-10-01

    In a previous work [1] we demonstrated that the current OPC model, which assumes the mask pattern to be analogous to the designed data, is no longer valid. Indeed, as depicted in figure 1, an extreme case of line-end shortening shows a gap of up to 10 nm (at mask level). For that reason, an accurate mask model for a 14nm logic gate level has been calibrated. A model with a total RMS of 1.38nm at mask level was obtained. 2D structures such as line-end shortening and corner rounding were well predicted using SEM pictures overlaid with simulated contours. The first part of this paper is dedicated to the implementation of our improved model in the current flow. The improved model consists of a mask model capturing mask process and writing effects, and a standard optical and resist model addressing the litho exposure and development effects at wafer level. The second part will focus on results from the comparison of the two models, the new and the regular, as depicted in figure 2.

  10. Accurate mask model implementation in optical proximity correction model for 14-nm nodes and beyond

    Science.gov (United States)

    Zine El Abidine, Nacer; Sundermann, Frank; Yesilada, Emek; Farys, Vincent; Huguennet, Frederic; Armeanu, Ana-Maria; Bork, Ingo; Chomat, Michael; Buck, Peter; Schanen, Isabelle

    2016-04-01

    In a previous work, we demonstrated that the current optical proximity correction model assuming the mask pattern to be analogous to the designed data is no longer valid. An extreme case of line-end shortening shows a gap up to 10 nm difference (at mask level). For that reason, an accurate mask model has been calibrated for a 14-nm logic gate level. A model with a total RMS of 1.38 nm at mask level was obtained. Two-dimensional structures, such as line-end shortening and corner rounding, were well predicted using scanning electron microscopy pictures overlaid with simulated contours. The first part of this paper is dedicated to the implementation of our improved model in current flow. The improved model consists of a mask model capturing mask process and writing effects, and a standard optical and resist model addressing the litho exposure and development effects at wafer level. The second part will focus on results from the comparison of the two models, the new and the regular.

  11. Generalized Reduced Order Model Generation Project

    Data.gov (United States)

    National Aeronautics and Space Administration — M4 Engineering proposes to develop a generalized reduced order model generation method. This method will allow for creation of reduced order aeroservoelastic state...

  12. A parallel high-order accurate finite element nonlinear Stokes ice sheet model and benchmark experiments

    Energy Technology Data Exchange (ETDEWEB)

    Leng, Wei [Chinese Academy of Sciences]; Ju, Lili [University of South Carolina]; Gunzburger, Max [Florida State University]; Price, Stephen [Los Alamos National Laboratory]; Ringler, Todd [Los Alamos National Laboratory]

    2012-01-01

    The numerical modeling of glacier and ice sheet evolution is a subject of growing interest, in part because of the potential for models to inform estimates of global sea level change. This paper focuses on the development of a numerical model that determines the velocity and pressure fields within an ice sheet. Our numerical model features a high-fidelity mathematical model involving the nonlinear Stokes system and combinations of no-sliding and sliding basal boundary conditions, high-order accurate finite element discretizations based on variable resolution grids, and highly scalable parallel solution strategies, all of which contribute to a numerical model that can achieve accurate velocity and pressure approximations in a highly efficient manner. We demonstrate the accuracy and efficiency of our model by analytical solution tests, established ice sheet benchmark experiments, and comparisons with other well-established ice sheet models.

  13. Energy-accurate simulation models for evaluating the energy efficiency; Energieexakte Simulationsmodelle zur Bewertung der Energieeffizienz

    Energy Technology Data Exchange (ETDEWEB)

    Blank, Frederic; Roth-Stielow, Joerg [Stuttgart Univ. (Germany). Inst. fuer Leistungselektronik und Elektrische Antriebe

    2011-07-01

    For the evaluation of the energy efficiency of electrical drive systems in start-stop operation, the amount of energy per cycle is used. This comparison variable "energy" is determined by simulating the whole drive system with special simulation models. These models have to be energy-accurate in order to capture the significant losses. Two simulation models optimized for these simulations are presented: a model of a permanent-magnet synchronous motor and a model of a frequency inverter. The models are parameterized with measurements and the calculations are verified. Using these models, motion cycles can be simulated and the necessary energy per cycle can be determined. (orig.)

  14. Development of an accurate cavitation coupled spray model for diesel engine simulation

    International Nuclear Information System (INIS)

    Highlights: • A new hybrid spray model was implemented into the KIVA4 CFD code. • A cavitation sub-model was coupled with the classic KHRT model. • The new model predicts better than classical spray models. • The new model predicts spray and combustion characteristics with accuracy. - Abstract: The combustion process in diesel engines is essentially controlled by the dynamics of the fuel spray. Thus, accurate modeling of the spray process is vital for accurately modeling the combustion process in diesel engines. In this work, a new hybrid spray model was developed by coupling a cavitation-induced spray sub-model to the KHRT spray model. This new model was implemented into the KIVA4 CFD code. The newly developed spray model was extensively validated against experimental data for non-vaporizing and vaporizing sprays obtained from a constant volume combustion chamber (CVCC), available in the literature. The results were compared on the basis of liquid length, spray penetration and spray images. The model was also validated against engine combustion characteristics such as in-cylinder pressure and heat release rate. The new spray model captures both spray characteristics and combustion characteristics very well

  15. Bilinear reduced order approximate model of parabolic distributed solar collectors

    KAUST Repository

    Elmetennani, Shahrazed

    2015-07-01

    This paper proposes a novel, low-dimensional and accurate approximate model for the distributed parabolic solar collector, by means of a modified Gaussian interpolation along the spatial domain. The proposed reduced model, taking the form of a low-dimensional bilinear state representation, enables the reproduction of the heat transfer dynamics along the collector tube for system analysis. Moreover, because the model is presented as a reduced-order bilinear state-space model, the well-established control theory for this class of systems can be applied. The approximation efficiency has been proven by several simulation tests, which were performed using parameters of the Acurex field with real external working conditions. Model accuracy has been evaluated by comparison to the analytical solution of the hyperbolic distributed model and its semi-discretized approximation, highlighting the benefits of the proposed numerical scheme. Furthermore, model sensitivity to the different parameters of the Gaussian interpolation has been studied.
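
    For readers unfamiliar with the model class, the sketch below simulates a generic low-dimensional bilinear state-space model x' = Ax + (Nx)u + Bu with a simple forward-Euler march; the matrices and input profile are arbitrary illustrative values, not the identified solar-collector model.

```python
# Illustrative bilinear state-space simulation (generic, synthetic matrices).
import numpy as np


def simulate_bilinear(A, N, B, u, x0, dt):
    x = np.array(x0, float)
    states = [x.copy()]
    for uk in u:                                   # scalar input at each step
        dx = A @ x + (N @ x) * uk + B.flatten() * uk
        x = x + dt * dx                            # forward-Euler update
        states.append(x.copy())
    return np.array(states)


# Toy 3-state example with arbitrary stable dynamics and a constant input.
A = np.array([[-1.0, 0.2, 0.0], [0.1, -0.8, 0.1], [0.0, 0.2, -0.5]])
N = 0.05 * np.eye(3)
B = np.array([[0.5], [0.3], [0.1]])
u = 0.8 * np.ones(200)
traj = simulate_bilinear(A, N, B, u, x0=[0, 0, 0], dt=0.05)
print(traj[-1])                                    # approximate steady state
```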

  16. Bottom-up coarse-grained models that accurately describe the structure, pressure, and compressibility of molecular liquids

    Energy Technology Data Exchange (ETDEWEB)

    Dunn, Nicholas J. H.; Noid, W. G., E-mail: wnoid@chem.psu.edu [Department of Chemistry, The Pennsylvania State University, University Park, Pennsylvania 16802 (United States)]

    2015-12-28

    The present work investigates the capability of bottom-up coarse-graining (CG) methods for accurately modeling both structural and thermodynamic properties of all-atom (AA) models for molecular liquids. In particular, we consider 1, 2, and 3-site CG models for heptane, as well as 1 and 3-site CG models for toluene. For each model, we employ the multiscale coarse-graining method to determine interaction potentials that optimally approximate the configuration dependence of the many-body potential of mean force (PMF). We employ a previously developed “pressure-matching” variational principle to determine a volume-dependent contribution to the potential, U_V(V), that approximates the volume-dependence of the PMF. We demonstrate that the resulting CG models describe AA density fluctuations with qualitative, but not quantitative, accuracy. Accordingly, we develop a self-consistent approach for further optimizing U_V, such that the CG models accurately reproduce the equilibrium density, compressibility, and average pressure of the AA models, although the CG models still significantly underestimate the atomic pressure fluctuations. Additionally, by comparing this array of models that accurately describe the structure and thermodynamic pressure of heptane and toluene at a range of different resolutions, we investigate the impact of bottom-up coarse-graining upon thermodynamic properties. In particular, we demonstrate that U_V accounts for the reduced cohesion in the CG models. Finally, we observe that bottom-up coarse-graining introduces subtle correlations between the resolution, the cohesive energy density, and the “simplicity” of the model.

  17. Bottom-up coarse-grained models that accurately describe the structure, pressure, and compressibility of molecular liquids

    Science.gov (United States)

    Dunn, Nicholas J. H.; Noid, W. G.

    2015-12-01

    The present work investigates the capability of bottom-up coarse-graining (CG) methods for accurately modeling both structural and thermodynamic properties of all-atom (AA) models for molecular liquids. In particular, we consider 1, 2, and 3-site CG models for heptane, as well as 1 and 3-site CG models for toluene. For each model, we employ the multiscale coarse-graining method to determine interaction potentials that optimally approximate the configuration dependence of the many-body potential of mean force (PMF). We employ a previously developed "pressure-matching" variational principle to determine a volume-dependent contribution to the potential, UV(V), that approximates the volume-dependence of the PMF. We demonstrate that the resulting CG models describe AA density fluctuations with qualitative, but not quantitative, accuracy. Accordingly, we develop a self-consistent approach for further optimizing UV, such that the CG models accurately reproduce the equilibrium density, compressibility, and average pressure of the AA models, although the CG models still significantly underestimate the atomic pressure fluctuations. Additionally, by comparing this array of models that accurately describe the structure and thermodynamic pressure of heptane and toluene at a range of different resolutions, we investigate the impact of bottom-up coarse-graining upon thermodynamic properties. In particular, we demonstrate that UV accounts for the reduced cohesion in the CG models. Finally, we observe that bottom-up coarse-graining introduces subtle correlations between the resolution, the cohesive energy density, and the "simplicity" of the model.

  18. An accurate and efficient system model of iterative image reconstruction in high-resolution pinhole SPECT for small animal research

    International Nuclear Information System (INIS)

    Accurate modeling of the photon acquisition process in pinhole SPECT is essential for optimizing resolution. In this work, the authors develop an accurate system model in which the finite pinhole aperture and depth-dependent geometric sensitivity are explicitly included. To achieve high-resolution pinhole SPECT, the voxel size is usually sub-millimeter, so the total number of image voxels increases accordingly. It is inevitable that a system matrix modeling a variety of favorable physical factors will become extremely sophisticated. An efficient implementation of such an accurate system model is proposed in this research. We first use geometric symmetries to reduce redundant entries in the matrix. Due to the sparseness of the matrix, only non-zero terms are stored. A novel center-to-radius recording rule is also developed to effectively describe the relation between a voxel and its related detectors at every projection angle. The proposed system matrix is also suitable for multi-threaded computing. Finally, the accuracy and effectiveness of the proposed system model are evaluated on a workstation equipped with two Quad-Core Intel Xeon processors.

  19. Efficient and Accurate Log-Levy Approximations of Levy-Driven LIBOR Models

    DEFF Research Database (Denmark)

    Papapantoleon, Antonis; Schoenmakers, John; Skovmand, David

    2012-01-01

    We consider a Lévy-driven LIBOR model and aim to develop accurate and efficient log-Lévy approximations for the dynamics of the rates. The approximations are based on the truncation of the drift term and on Picard approximation of suitable processes. Numerical experiments for forward-rate agreements, caps, swaptions and sticky...

  20. In-situ measurements of material thermal parameters for accurate LED lamp thermal modelling

    NARCIS (Netherlands)

    Vellvehi, M.; Perpina, X.; Jorda, X.; Werkhoven, R.J.; Kunen, J.M.G.; Jakovenko, J.; Bancken, P.; Bolt, P.J.

    2013-01-01

    This work deals with the extraction of key thermal parameters for accurate thermal modelling of LED lamps: air exchange coefficient around the lamp, emissivity and thermal conductivity of all lamp parts. As a case study, an 8W retrofit lamp is presented. To assess simulation results, temperature is

  1. Utilizing anisotropic Preisach-type models in the accurate simulation of magnetostriction

    Energy Technology Data Exchange (ETDEWEB)

    Adly, A.A. [Cairo Univ., Giza (Egypt). Electrical Power and Machines Dept.]; Mayergoyz, I.D. [Univ. of Maryland, College Park, MD (United States). Electrical Engineering Dept.]; Bergqvist, A. [Royal Inst. of Tech., Stockholm (Sweden). Dept. of Electrical Power Engineering]

    1997-09-01

    Magnetostriction models are being widely used in the development of fine positioning and active vibration damping devices. This paper presents a new approach for simulating 1-D magnetostriction using 2-D anisotropic Preisach-type models. In this approach, identification of the model takes into account measured flux density versus field and strain versus field curves for different stress values. Consequently, a more accurate magnetostriction model may be obtained. Details of the identification procedure as well as experimental testing of the proposed model are given.

  2. Accurate Modeling of the Spiral Bevel and Hypoid Gear with a New Tooth Profile

    Institute of Scientific and Technical Information of China (English)

    LI Yun-song; ADAYI Xieeryazidan; DING Han

    2014-01-01

    In contrast to the traditional tooth profile of spiral bevel and hypoid gears, a new tooth profile, the spherical involute, is proposed. Firstly, a new theory for forming the spherical involute tooth profile was proposed. Then, this theory was applied to complete the parametric derivation of each part of the tooth profile. To enhance precision, the SWEEP method for forming each part of the tooth surface and a G1 stitching scheme for obtaining a unified tooth surface are put forward and applied in the accurate modeling. Lastly, owing to the higher accuracy of the tooth surface of the output model, some optimization approaches are given. A numerical example shows that a gear designed with this spherical involute tooth profile can be modeled parametrically in a fast and accurate way, providing a foundation for tooth contact analysis (TCA) in digitized design and manufacture.

  3. Development of an Accurate Urban Modeling System Using CAD/GIS Data for Atmosphere Environmental Simulation

    Institute of Scientific and Technical Information of China (English)

    Tomosato Takada; Kazuo Kashiyama

    2008-01-01

    This paper presents an urban modeling system using CAD/GIS data for atmospheric environmental simulation, such as wind flow and contaminant spread in urban areas. The CAD data are used for shape modeling of high-rise buildings and civil structures with complicated shapes, since such data are not accurately included in the 3D-GIS data. An unstructured mesh based on tetrahedral elements is employed in order to express urban structures with complicated shapes accurately. It is difficult to assess the quality of the shape model and mesh with conventional visualization techniques. In this paper, stereoscopic visualization using virtual reality (VR) technology is employed for the verification of the quality of the shape model and mesh. The present system is applied to atmospheric environmental simulation in an urban area and is shown to be a useful planning and design tool for investigating atmospheric environmental problems.

  4. In-Situ Residual Tracking in Reduced Order Modelling

    Directory of Open Access Journals (Sweden)

    Joseph C. Slater

    2002-01-01

    Full Text Available Proper orthogonal decomposition (POD)-based reduced-order modelling is demonstrated to be a weighted residual technique similar to Galerkin's method. Estimates of the weighted residuals of neglected modes are used to determine the relative importance of the neglected modes to the model. The cumulative effects of neglected modes can be used to estimate the error in the reduced-order model. Thus, once the snapshots have been obtained under prescribed training conditions, the need to perform full-order simulations for comparison is eliminated. This has the potential to allow the analyst to initiate further training when the reduced modes are no longer sufficient to accurately represent the predominant phenomenon of interest. The response of a fluid moving at Mach 1.2 above a panel to a forced localized oscillation of the panel, at and away from the training operating conditions, is used to demonstrate the evaluation method.
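
    The POD construction itself is standard and easy to sketch: build a basis from the leading left singular vectors of a snapshot matrix and monitor the residual of a state outside the span of the retained modes. The code below is a generic illustration with synthetic snapshots, not the article's implementation.

```python
# Generic POD-by-SVD sketch with synthetic snapshot data (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
snapshots = rng.standard_normal((500, 40))         # 500 DOFs x 40 snapshots

# POD modes = left singular vectors; singular values rank their energy content.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1         # retain 99.9% of snapshot energy
Phi = U[:, :r]                                      # reduced basis

# A full-order state q is approximated by Phi @ a with a = Phi.T @ q; the norm of
# the remainder (the residual in the neglected modes) indicates when the basis,
# and hence the reduced-order model, is no longer adequate.
q = snapshots[:, 0]
a = Phi.T @ q
residual = np.linalg.norm(q - Phi @ a) / np.linalg.norm(q)
print(r, residual)
```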

  5. Automated Image-Based Procedures for Accurate Artifacts 3D Modeling and Orthoimage Generation

    Directory of Open Access Journals (Sweden)

    Marc Pierrot-Deseilligny

    2011-12-01

    Full Text Available The accurate 3D documentation of architecture and heritage is becoming very common and is required in different application contexts. The potential of the image-based approach is nowadays well known, but there is a lack of reliable, precise and flexible solutions, preferably open-source, that could be used for metric and accurate documentation or digital conservation and not only for simple visualization or web-based applications. The article presents a set of photogrammetric tools developed in order to derive accurate 3D point clouds and orthoimages for the digitization of archaeological and architectural objects. The aim is also to distribute free solutions (software, methodologies, guidelines, best practices, etc.) based on 3D surveying and modeling experiences, useful in different application contexts (architecture, excavations, museum collections, heritage documentation, etc.) and according to several representation needs (2D technical documentation, 3D reconstruction, web visualization, etc.).

  6. Applying an accurate spherical model to gamma-ray burst afterglow observations

    CERN Document Server

    Leventis, Konstantinos; van Eerten, Hendrik J; Wijers, Ralph A M J

    2013-01-01

    We present results of model fits to afterglow data sets of GRB970508, GRB980703 and GRB070125, characterized by long and broadband coverage. The model assumes synchrotron radiation (including self-absorption) from a spherical adiabatic blast wave and consists of analytic flux prescriptions based on numerical results. For the first time it combines the accuracy of hydrodynamic simulations through different stages of the outflow dynamics with the flexibility of simple heuristic formulas. The prescriptions are especially geared towards accurate description of the dynamical transition of the outflow from relativistic to Newtonian velocities in an arbitrary power-law density environment. We show that the spherical model can accurately describe the data only in the case of GRB970508, for which we find a circumburst medium density consistent with a stellar wind. We investigate in detail the implied spectra and physical parameters of that burst. For the microphysics we show evidence for equipartition between the frac...

  7. Towards more accurate wind and solar power prediction by improving NWP model physics

    Science.gov (United States)

    Steiner, Andrea; Köhler, Carmen; von Schumann, Jonas; Ritter, Bodo

    2014-05-01

    The growing importance and successive expansion of renewable energies raise new challenges for decision makers, economists, transmission system operators, scientists and many more. In this interdisciplinary field, the role of Numerical Weather Prediction (NWP) is to reduce the errors and provide an a priori estimate of remaining uncertainties associated with the large share of weather-dependent power sources. For this purpose it is essential to optimize NWP model forecasts with respect to those prognostic variables which are relevant for wind and solar power plants. An improved weather forecast serves as the basis for sophisticated power forecasts. Consequently, well-timed energy trading on the stock market and electrical grid stability can be maintained. The German Weather Service (DWD) is currently involved in two research projects in the field of renewable energy, namely ORKA and EWeLiNE. Whereas the latter is in collaboration with the Fraunhofer Institute (IWES), the project ORKA is led by energy & meteo systems (emsys). Both cooperate with German transmission system operators. The goal of the projects is to improve wind and photovoltaic (PV) power forecasts by combining optimized NWP and enhanced power forecast models. In this context, the German Weather Service aims to improve its model system, including the ensemble forecasting system, by working on data assimilation, model physics and statistical post processing. This presentation is focused on the identification of critical weather situations and the associated errors in the German regional NWP model COSMO-DE. First steps leading to improved physical parameterization schemes within the NWP model are presented. Wind mast measurements reaching up to 200 m height above ground are used for the estimation of the NWP wind forecast error at heights relevant for wind energy plants. One particular problem is the daily cycle in wind speed. The transition from stable stratification during

  8. The accurate and comprehensive model of thin fluid flows with inertia on curved substrates

    OpenAIRE

    Roberts, A J; Li, Zhenquan

    1999-01-01

    Consider the 3D flow of a viscous Newtonian fluid upon a curved 2D substrate when the fluid film is thin as occurs in many draining, coating and biological flows. We derive a comprehensive model of the dynamics of the film, the model being expressed in terms of the film thickness and the average lateral velocity. Based upon centre manifold theory, we are assured that the model accurately includes the effects of the curvature of substrate, gravitational body force, fluid inertia and dissipatio...

  9. Impact of an accurate modeling of primordial chemistry in high resolution studies

    CERN Document Server

    Bovino, S; Latif, M A; Schleicher, D R G

    2013-01-01

    The formation of the first stars in the Universe is regulated by a sensitive interplay of chemistry and cooling with the dynamics of a self-gravitating system. As the outcome of the collapse and the final stellar masses depend sensitively on the thermal evolution, it is necessary to accurately model the thermal evolution in high resolution simulations. As previous investigations raised doubts regarding the convergence of the temperature at high resolution, we investigate the role of the numerical method employed to model the chemistry and the thermodynamics. Here we compare the standard implementation in the adaptive-mesh refinement code ENZO, employing a first order backward differentiation formula (BDF), with the 5th order accurate BDF solver DLSODES. While the standard implementation in ENZO shows a strong dependence on the employed resolution, the results obtained with DLSODES are considerably more robust, both with respect to the chemistry and thermodynamics, but also for dyna...

  10. Final Report for "Accurate Numerical Models of the Secondary Electron Yield from Grazing-incidence Collisions".

    Energy Technology Data Exchange (ETDEWEB)

    Seth A Veitzer

    2008-10-21

    Effects of stray electrons are a main factor limiting performance of many accelerators. Because heavy-ion fusion (HIF) accelerators will operate in regimes of higher current and with walls much closer to the beam than accelerators operating today, stray electrons might have a large, detrimental effect on the performance of an HIF accelerator. A primary source of stray electrons is electrons generated when halo ions strike the beam pipe walls. There is some research on these types of secondary electrons for the HIF community to draw upon, but this work is missing one crucial ingredient: the effect of grazing incidence. The overall goal of this project was to develop the numerical tools necessary to accurately model the effect of grazing incidence on the behavior of halo ions in a HIF accelerator, and further, to provide accurate models of heavy ion stopping powers with applications to ICF, WDM, and HEDP experiments.

  11. Reducing the invasiveness of modelling frameworks

    Science.gov (United States)

    Donchyts, G.; Baart, F.

    2010-12-01

    Several modelling frameworks are available that allow environmental models to exchange data with other models. In recent years, many efforts have promoted solutions aimed at integrating different numerical models with each other as well as at simplifying the way they are set up, supplied with data, and run. While the development of many modelling frameworks concentrated on the interoperability of different model engines, several standards were introduced, such as ESMF, OMS and OpenMI. One issue with applying modelling frameworks is invasiveness: the more the model has to know about the framework, the more intrusive the framework is. Another issue is that many environmental models are written procedurally and in FORTRAN, one of the few languages that does not have a straightforward interface with other programming languages. Most modelling frameworks are written in object-oriented languages like java/c#, and the FORTRAN modelling framework ESMF is also object oriented. In this research we show how the application of domain-driven, object-oriented development techniques to environmental models can reduce the invasiveness of modelling frameworks. Our approach is based on four steps: 1) application of OO techniques and reflection to the existing model to allow introspection; 2) programming-language interoperability between a model written in a procedural language and a modelling framework written in an object-oriented language; 3) domain mapping between the data types used by the model and the other components being integrated; and 4) connecting the models using the framework (wrapper). We compare coupling of an existing model as it was to the same model adapted using the four-step approach, and we connect both versions of the model using two different integrated modelling frameworks. As an example model we use the coastal morphological model XBeach. By adapting this model it allows for

  12. Blasting Vibration Safety Criterion Analysis with Equivalent Elastic Boundary: Based on Accurate Loading Model

    OpenAIRE

    Qingwen Li; Lan Qiao; Gautam Dasgupta; Siwei Ma; Liping Wang; Jianghui Dong

    2015-01-01

    In tunnel and underground space engineering, the blasting wave attenuates from a shock wave to a stress wave to an elastic seismic wave in the host rock. Correspondingly, the host rock forms a crushed zone, a fractured zone, and an elastic seismic zone under the blasting loading and waves. In this paper, an accurate mathematical dynamic loading model was built, and the crushed zone as well as the fractured zone was considered as the blasting vi...

  13. Accurate and interpretable nanoSAR models from genetic programming-based decision tree construction approaches.

    Science.gov (United States)

    Oksel, Ceyda; Winkler, David A; Ma, Cai Y; Wilkins, Terry; Wang, Xue Z

    2016-09-01

    The number of engineered nanomaterials (ENMs) being exploited commercially is growing rapidly, due to the novel properties they exhibit. Clearly, it is important to understand and minimize any risks to health or the environment posed by the presence of ENMs. Data-driven models that decode the relationships between the biological activities of ENMs and their physicochemical characteristics provide an attractive means of maximizing the value of scarce and expensive experimental data. Although such structure-activity relationship (SAR) methods have become very useful tools for modelling nanotoxicity endpoints (nanoSAR), they have limited robustness and predictivity and, most importantly, interpretation of the models they generate is often very difficult. New computational modelling tools or new ways of using existing tools are required to model the relatively sparse and sometimes lower quality data on the biological effects of ENMs. The most commonly used SAR modelling methods work best with large datasets, are not particularly good at feature selection, can be relatively opaque to interpretation, and may not account for nonlinearity in the structure-property relationships. To overcome these limitations, we describe the application of a novel algorithm, a genetic programming-based decision tree construction tool (GPTree) to nanoSAR modelling. We demonstrate the use of GPTree in the construction of accurate and interpretable nanoSAR models by applying it to four diverse literature datasets. We describe the algorithm and compare model results across the four studies. We show that GPTree generates models with accuracies equivalent to or superior to those of prior modelling studies on the same datasets. GPTree is a robust, automatic method for generation of accurate nanoSAR models with important advantages that it works with small datasets, automatically selects descriptors, and provides significantly improved interpretability of models. PMID:26956430

  14. A rapid and accurate two-point ray tracing method in horizontally layered velocity model

    Institute of Scientific and Technical Information of China (English)

    TIAN Yue; CHEN Xiao-fei

    2005-01-01

    A rapid and accurate method for two-point ray tracing in a horizontally layered velocity model is presented in this paper. Numerical experiments show that this method provides stable and rapid convergence with high accuracy, regardless of the 1-D velocity structure, takeoff angle and epicentral distance. This two-point ray tracing method is compared with the pseudobending technique and the method advanced by Kim and Baag (2002). It turns out that the method in this paper is much more efficient and accurate than the pseudobending technique, but is only applicable to 1-D velocity models. Kim's method is equivalent to ours for cases without large takeoff angles, but it fails when the takeoff angle is close to 90°. The method presented in this paper, on the other hand, is applicable to any takeoff angle with rapid and accurate convergence. Therefore, this method is a good choice for two-point ray tracing problems in horizontally layered velocity models and is efficient enough to be applied to a wide range of seismic problems.
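
    For context, a textbook shooting-and-bisection approach to the same problem is sketched below (not the authors' algorithm): Snell's law gives the offset and travel time for a trial ray parameter, and bisection adjusts the ray parameter until the ray emerges at the requested epicentral distance. Layer velocities and thicknesses are arbitrary example values.

```python
# Illustrative two-point ray tracing by shooting + bisection (textbook approach).
import math


def offset_and_time(p, layers):
    """Offset and travel time of a transmitted ray with ray parameter p through
    `layers`, a list of (velocity, thickness) pairs, using Snell's law."""
    x = t = 0.0
    for v, h in layers:
        sin_i = p * v
        if sin_i >= 1.0:                       # ray turns before crossing this layer
            return math.inf, math.inf
        cos_i = math.sqrt(1.0 - sin_i**2)
        x += h * sin_i / cos_i
        t += h / (v * cos_i)
    return x, t


def trace_to_offset(target_x, layers, tol=1e-6):
    p_lo, p_hi = 0.0, 0.9999 / max(v for v, _ in layers)
    while p_hi - p_lo > tol * p_hi:
        p_mid = 0.5 * (p_lo + p_hi)
        x, _ = offset_and_time(p_mid, layers)
        p_lo, p_hi = (p_mid, p_hi) if x < target_x else (p_lo, p_mid)
    return offset_and_time(0.5 * (p_lo + p_hi), layers)


layers = [(3.0, 2.0), (4.5, 5.0), (6.0, 10.0)]     # velocity [km/s], thickness [km]
print(trace_to_offset(8.0, layers))                # (offset ~ 8 km, travel time)
```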

  15. Bayesian reduced-order models for multiscale dynamical systems

    CERN Document Server

    Koutsourelakis, P S

    2010-01-01

    While existing mathematical descriptions can accurately account for phenomena at microscopic scales (e.g. molecular dynamics), these are often high-dimensional, stochastic and their applicability over macroscopic time scales of physical interest is computationally infeasible or impractical. In complex systems, with limited physical insight on the coherent behavior of their constituents, the only available information is data obtained from simulations of the trajectories of huge numbers of degrees of freedom over microscopic time scales. This paper discusses a Bayesian approach to deriving probabilistic coarse-grained models that simultaneously address the problems of identifying appropriate reduced coordinates and the effective dynamics in this lower-dimensional representation. At the core of the models proposed lie simple, low-dimensional dynamical systems which serve as the building blocks of the global model. These approximate the latent, generating sources and parameterize the reduced-order dynamics. We d...

  16. Accurate Analytic Results for the Steady State Distribution of the Eigen Model

    Science.gov (United States)

    Huang, Guan-Rong; Saakian, David B.; Hu, Chin-Kun

    2016-04-01

    The Eigen model of molecular evolution is popular in studying complex biological and biomedical systems. Using the Hamilton-Jacobi equation method, we have calculated analytic equations for the steady state distribution of the Eigen model with a relative accuracy of O(1/N), where N is the length of the genome. Our results can be applied to the case of small genome length N, as well as to cases where direct numerics cannot give accurate results, e.g., the tail of the distribution.

  17. Fast and accurate calculations for cumulative first-passage time distributions in Wiener diffusion models

    DEFF Research Database (Denmark)

    Blurton, Steven Paul; Kesselmeier, M.; Gondan, Matthias

    2012-01-01

    We propose an improved method for calculating the cumulative first-passage time distribution in Wiener diffusion models with two absorbing barriers. This distribution function is frequently used to describe responses and error probabilities in choice reaction time tasks. The present work extends related work on the density of first-passage times [Navarro, D.J., Fuss, I.G. (2009). Fast and accurate calculations for first-passage times in Wiener diffusion models. Journal of Mathematical Psychology, 53, 222-230]. Two representations exist for the distribution, both including infinite series. We
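
    To give a concrete sense of such series, the sketch below evaluates the classical large-time expansion for the cumulative probability of absorption at the lower of two barriers, integrated term by term. This is our hedged transcription of the standard result for a Wiener process with unit diffusion, not the improved method proposed in the paper.

```python
# Illustrative large-time series for the first-passage CDF (standard result,
# unit diffusion, barriers at 0 and a, drift v, start z; not the paper's method).
import math


def fpt_cdf_lower(t, v, a, z, n_terms=100):
    """P(hit the lower barrier 0 before the upper barrier a, by time t)."""
    total = 0.0
    for k in range(1, n_terms + 1):
        lam = 0.5 * v**2 + 0.5 * (k * math.pi / a) ** 2
        total += k * math.sin(k * math.pi * z / a) * (1.0 - math.exp(-lam * t)) / lam
    return (math.pi / a**2) * math.exp(-v * z) * total


# Symmetric sanity check: no drift, start midway -> long-run probability ~ 1/2.
print(fpt_cdf_lower(t=50.0, v=0.0, a=1.0, z=0.5))
```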

  18. An Accurate Thermoviscoelastic Rheological Model for Ethylene Vinyl Acetate Based on Fractional Calculus

    Directory of Open Access Journals (Sweden)

    Marco Paggi

    2015-01-01

    Full Text Available The thermoviscoelastic rheological properties of ethylene vinyl acetate (EVA) used to embed solar cells have to be accurately described to assess the deformation and the stress state of photovoltaic (PV) modules and their durability. In the present work, considering the stress as dependent on a noninteger derivative of the strain, a two-parameter model is proposed to approximate the power-law relation between the relaxation modulus and time for a given temperature level. Experimental validation with EVA uniaxial relaxation data at different constant temperatures proves the great advantage of the proposed approach over classical rheological models based on exponential solutions.
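
    Assuming the two-parameter model takes the common power-law form E(t) = E0 (t/t0)^(-alpha) (our reading, not necessarily the article's exact formulation), the parameters can be estimated from relaxation-modulus data by a straight-line fit in log-log coordinates, as sketched below with synthetic data.

```python
# Illustrative power-law relaxation fit on synthetic data (assumed model form).
import numpy as np

t = np.logspace(0, 4, 30)                          # time [s]
E_true = 35.0 * (t / 1.0) ** -0.25                 # synthetic relaxation modulus [MPa]
E_meas = E_true * (1 + 0.02 * np.random.default_rng(1).standard_normal(t.size))

# log E = log E0 - alpha * log t  ->  ordinary least squares in log-log space
slope, intercept = np.polyfit(np.log(t), np.log(E_meas), 1)
alpha_hat = -slope                                 # power-law exponent
E0_hat = np.exp(intercept)                         # modulus at t = t0 = 1 s
print(alpha_hat, E0_hat)
```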

  19. Accurate halo-model matter power spectra with dark energy, massive neutrinos and modified gravitational forces

    Science.gov (United States)

    Mead, A. J.; Heymans, C.; Lombriser, L.; Peacock, J. A.; Steele, O. I.; Winther, H. A.

    2016-06-01

    We present an accurate non-linear matter power spectrum prediction scheme for a variety of extensions to the standard cosmological paradigm, which uses the tuned halo model previously developed in Mead et al. We consider dark energy models that are both minimally and non-minimally coupled, massive neutrinos and modified gravitational forces with chameleon and Vainshtein screening mechanisms. In all cases, we compare halo-model power spectra to measurements from high-resolution simulations. We show that the tuned halo-model method can predict the non-linear matter power spectrum measured from simulations of parametrized w(a) dark energy models at the few per cent level for k < 0.5 h Mpc^-1. An updated version of our publicly available HMCODE can be found at https://github.com/alexander-mead/hmcode.

  20. Accurate halo-model matter power spectra with dark energy, massive neutrinos and modified gravitational forces

    CERN Document Server

    Mead, Alexander; Lombriser, Lucas; Peacock, John; Steele, Olivia; Winther, Hans

    2016-01-01

    We present an accurate non-linear matter power spectrum prediction scheme for a variety of extensions to the standard cosmological paradigm, which uses the tuned halo model previously developed in Mead (2015b). We consider dark energy models that are both minimally and non-minimally coupled, massive neutrinos and modified gravitational forces with chameleon and Vainshtein screening mechanisms. In all cases we compare halo-model power spectra to measurements from high-resolution simulations. We show that the tuned halo model method can predict the non-linear matter power spectrum measured from simulations of parameterised $w(a)$ dark energy models at the few per cent level for $k < 0.5\,h\mathrm{Mpc}^{-1}$. An updated version of our publicly available HMcode can be found at https://github.com/alexander-mead/HMcode

  1. An improved model for reduced-order physiological fluid flows

    CERN Document Server

    San, Omer; 10.1142/S0219519411004666

    2012-01-01

    An improved one-dimensional mathematical model based on Pulsed Flow Equations (PFE) is derived by integrating the axial component of the momentum equation over the transient Womersley velocity profile, providing a dynamic momentum equation whose coefficients are smoothly varying functions of the spatial variable. The resulting momentum equation along with the continuity equation and pressure-area relation form our reduced-order model for physiological fluid flows in one dimension, and are aimed at providing accurate and fast-to-compute global models for physiological systems represented as networks of quasi one-dimensional fluid flows. The consequent nonlinear coupled system of equations is solved by the Lax-Wendroff scheme and is then applied to an open model arterial network of the human vascular system containing the largest fifty-five arteries. The proposed model with functional coefficients is compared with current classical one-dimensional theories which assume steady state Hagen-Poiseuille velocity pro...
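
    The Lax-Wendroff scheme mentioned above is standard; a minimal sketch for a generic 1D conservation law (linear advection on a periodic domain, not the paper's coupled arterial system) is shown below using the Richtmyer two-step form.

```python
# Illustrative Richtmyer two-step Lax-Wendroff update for q_t + f(q)_x = 0.
import numpy as np


def lax_wendroff_step(q, flux, dt, dx):
    f = flux(q)
    # half-step values at cell interfaces i+1/2
    q_half = 0.5 * (q + np.roll(q, -1)) - 0.5 * dt / dx * (np.roll(f, -1) - f)
    f_half = flux(q_half)
    # full-step update from interface fluxes
    return q - dt / dx * (f_half - np.roll(f_half, 1))


c = 1.0                                            # advection speed, f(q) = c*q
x = np.linspace(0.0, 1.0, 200, endpoint=False)
q = np.exp(-200 * (x - 0.3) ** 2)                  # initial Gaussian pulse
dx = x[1] - x[0]
dt = 0.8 * dx / c                                  # CFL number 0.8
for _ in range(100):
    q = lax_wendroff_step(q, lambda u: c * u, dt, dx)
print(q.max())
```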

  2. An Accurate Multimoment Constrained Finite Volume Transport Model on Yin-Yang Grids

    Institute of Scientific and Technical Information of China (English)

    LI Xingliang; SHEN Xueshun; PENG Xindong; XIAO Feng; ZHUANG Zhaorong; CHEN Chungang

    2013-01-01

    A global transport model is proposed in which a multimoment constrained finite volume (MCV) scheme is applied to a Yin-Yang overset grid. The MCV scheme defines 16 degrees of freedom (DOFs) within each element to build a 2D cubic reconstruction polynomial. The time evolution equations for DOFs are derived from constraint conditions on moments of line-integrated averages (LIA), point values (PV), and values of first-order derivatives (DV). The Yin-Yang grid eliminates polar singularities and results in a quasi-uniform mesh. A limiting projection is designed to remove nonphysical oscillations around discontinuities. Our model was tested against widely used benchmarks; the competitive results reveal that the model is accurate and promising for developing general circulation models.

  3. Development of accurate contact force models for use with Discrete Element Method (DEM) modelling of bulk fruit handling processes

    OpenAIRE

    Dintwa, Edward

    2006-01-01

    This thesis is primarily concerned with the development of accurate, simplified and validated contact force models for the discrete element modelling (DEM) of fruit bulk handling systems. The DEM is essentially a numerical technique to model a system of particles interacting with one another and with the system boundaries through collisions. The specific area of application envisaged is in postharvest agriculture, where DEM could be used in simulation of many unit operations with bulk fruit,...

  4. Towards an accurate model of the redshift space clustering of halos in the quasilinear regime

    CERN Document Server

    Reid, Beth A

    2011-01-01

    Observations of redshift-space distortions in spectroscopic galaxy surveys offer an attractive method for measuring the build-up of cosmological structure, which depends both on the expansion rate of the Universe and our theory of gravity. Galaxies occupy dark matter halos, whose redshift space clustering has a complex dependence on bias that cannot be inferred from the behavior of matter. We identify two distinct corrections on quasilinear scales (~ 30-80 Mpc/h): the non-linear mapping between real and redshift space positions, and the non-linear suppression of power in the velocity divergence field. We model the first non-perturbatively using the scale-dependent Gaussian streaming model, which we show is accurate for s > 10 (s > 25) Mpc/h for the monopole (quadrupole) halo correlation functions. We use perturbation theory to predict the real space pairwise halo velocity statistics. Our fully analytic model is accurate at the 2 per cent level only on scales s > 40 Mpc/h. Recent models that neglect the correctio...

  5. What input data are needed to accurately model electromagnetic fields from mobile phone base stations?

    Science.gov (United States)

    Beekhuizen, Johan; Kromhout, Hans; Bürgi, Alfred; Huss, Anke; Vermeulen, Roel

    2015-01-01

    The increase in mobile communication technology has led to concern about potential health effects of radio frequency electromagnetic fields (RF-EMFs) from mobile phone base stations. Different RF-EMF prediction models have been applied to assess population exposure to RF-EMF. Our study examines what input data are needed to accurately model RF-EMF, as detailed data are not always available for epidemiological studies. We used NISMap, a 3D radio wave propagation model, to test models with various levels of detail in building and antenna input data. The model outcomes were compared with outdoor measurements taken in Amsterdam, the Netherlands. Results showed good agreement between modelled and measured RF-EMF when 3D building data and basic antenna information (location, height, frequency and direction) were used: Spearman correlations were >0.6. Model performance was not sensitive to changes in building damping parameters. Antenna-specific information about down-tilt, type and output power did not significantly improve model performance compared with using average down-tilt and power values, or assuming one standard antenna type. We conclude that 3D radio wave propagation modelling is a feasible approach to predict outdoor RF-EMF levels for ranking exposure levels in epidemiological studies, when 3D building data and information on the antenna height, frequency, location and direction are available. PMID:24472756

  6. What input data are needed to accurately model electromagnetic fields from mobile phone base stations?

    Science.gov (United States)

    Beekhuizen, Johan; Kromhout, Hans; Bürgi, Alfred; Huss, Anke; Vermeulen, Roel

    2015-01-01

    The increase in mobile communication technology has led to concern about potential health effects of radio frequency electromagnetic fields (RF-EMFs) from mobile phone base stations. Different RF-EMF prediction models have been applied to assess population exposure to RF-EMF. Our study examines what input data are needed to accurately model RF-EMF, as detailed data are not always available for epidemiological studies. We used NISMap, a 3D radio wave propagation model, to test models with various levels of detail in building and antenna input data. The model outcomes were compared with outdoor measurements taken in Amsterdam, the Netherlands. Results showed good agreement between modelled and measured RF-EMF when 3D building data and basic antenna information (location, height, frequency and direction) were used: Spearman correlations were >0.6. Model performance was not sensitive to changes in building damping parameters. Antenna-specific information about down-tilt, type and output power did not significantly improve model performance compared with using average down-tilt and power values, or assuming one standard antenna type. We conclude that 3D radio wave propagation modelling is a feasible approach to predict outdoor RF-EMF levels for ranking exposure levels in epidemiological studies, when 3D building data and information on the antenna height, frequency, location and direction are available.

  7. Accurate Modeling of a Transverse Flux Permanent Magnet Generator Using 3D Finite Element Analysis

    DEFF Research Database (Denmark)

    Hosseini, Seyedmohsen; Moghani, Javad Shokrollahi; Jensen, Bogi Bech

    2011-01-01

    This paper presents an accurate modeling method that is applied to a single-sided outer-rotor transverse flux permanent magnet generator. The inductances and the induced electromotive force for a typical generator are calculated using the magnetostatic three-dimensional finite element method. A new method is then proposed that reveals the behavior of the generator under any load. Finally, torque calculations are carried out using three-dimensional finite element analyses. It is shown that although in the single-phase generator the cogging torque is very high, this can be improved significantly by combining three single-phase modules into a three-phase generator.

  8. Applying an accurate spherical model to gamma-ray burst afterglow observations

    Science.gov (United States)

    Leventis, K.; van der Horst, A. J.; van Eerten, H. J.; Wijers, R. A. M. J.

    2013-05-01

    We present results of model fits to afterglow data sets of GRB 970508, GRB 980703 and GRB 070125, characterized by long and broad-band coverage. The model assumes synchrotron radiation (including self-absorption) from a spherical adiabatic blast wave and consists of analytic flux prescriptions based on numerical results. For the first time it combines the accuracy of hydrodynamic simulations through different stages of the outflow dynamics with the flexibility of simple heuristic formulas. The prescriptions are especially geared towards accurate description of the dynamical transition of the outflow from relativistic to Newtonian velocities in an arbitrary power-law density environment. We show that the spherical model can accurately describe the data only in the case of GRB 970508, for which we find a circumburst medium density n ∝ r^-2. We investigate in detail the implied spectra and physical parameters of that burst. For the microphysics we show evidence for equipartition between the fraction of energy density carried by relativistic electrons and magnetic field. We also find that for the blast wave to be adiabatic, the fraction of electrons accelerated at the shock has to be smaller than 1. We present best-fitting parameters for the afterglows of all three bursts, including uncertainties in the parameters of GRB 970508, and compare the inferred values to those obtained by different authors.

  9. Fast and accurate computation of system matrix for area integral model-based algebraic reconstruction technique

    Science.gov (United States)

    Zhang, Shunli; Zhang, Dinghua; Gong, Hao; Ghasemalizadeh, Omid; Wang, Ge; Cao, Guohua

    2014-11-01

    Iterative algorithms, such as the algebraic reconstruction technique (ART), are popular for image reconstruction. For iterative reconstruction, the area integral model (AIM) is more accurate for better reconstruction quality than the line integral model (LIM). However, the computation of the system matrix for AIM is more complex and time-consuming than that for LIM. Here, we propose a fast and accurate method to compute the system matrix for AIM. First, we calculate the intersection of each boundary line of a narrow fan-beam with pixels in a recursive and efficient manner. Then, by grouping the beam-pixel intersection area into six types according to the slopes of the two boundary lines, we analytically compute the intersection area of the narrow fan-beam with the pixels in a simple algebraic fashion. Overall, experimental results show that our method is about three times faster than the Siddon algorithm and about two times faster than the distance-driven model (DDM) in computation of the system matrix. The reconstruction speed of our AIM-based ART is also faster than the LIM-based ART that uses the Siddon algorithm and DDM-based ART, for one iteration. The fast reconstruction speed of our method was accomplished without compromising the image quality.
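
    Once the system matrix is available, the ART update itself is simple; the sketch below shows a textbook Kaczmarz/ART sweep on a toy problem with a randomly generated weight matrix standing in for the area-integral weights (an illustration, not the authors' code).

```python
# Illustrative ART (Kaczmarz) sweep on a synthetic toy reconstruction problem.
import numpy as np


def art_sweep(A, b, x, relax=0.5):
    # one pass of the row-action update over all rays
    for a_i, b_i in zip(A, b):
        norm2 = a_i @ a_i
        if norm2 > 0.0:
            x += relax * (b_i - a_i @ x) / norm2 * a_i
    return x


rng = np.random.default_rng(0)
A = rng.random((300, 256)) * (rng.random((300, 256)) < 0.05)  # sparse-ish weights
x_true = rng.random(256)
b = A @ x_true                                     # noiseless projections
x = np.zeros(256)
for _ in range(20):
    x = art_sweep(A, b, x)
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))   # relative data residual
```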

  10. Particle Image Velocimetry Measurements in Anatomically-Accurate Models of the Mammalian Nasal Cavity

    Science.gov (United States)

    Rumple, C.; Richter, J.; Craven, B. A.; Krane, M.

    2012-11-01

    A summary of the research being carried out by our multidisciplinary team to better understand the form and function of the nose in different mammalian species that include humans, carnivores, ungulates, rodents, and marine animals will be presented. The mammalian nose houses a convoluted airway labyrinth, where two hallmark features of mammals occur, endothermy and olfaction. Because of the complexity of the nasal cavity, the anatomy and function of these upper airways remain poorly understood in most mammals. However, recent advances in high-resolution medical imaging, computational modeling, and experimental flow measurement techniques are now permitting the study of airflow and respiratory and olfactory transport phenomena in anatomically-accurate reconstructions of the nasal cavity. Here, we focus on efforts to manufacture transparent, anatomically-accurate models for stereo particle image velocimetry (SPIV) measurements of nasal airflow. Challenges in the design and manufacture of index-matched anatomical models are addressed and preliminary SPIV measurements are presented. Such measurements will constitute a validation database for concurrent computational fluid dynamics (CFD) simulations of mammalian respiration and olfaction. Supported by the National Science Foundation.

  11. A Biomechanical Model of the Scapulothoracic Joint to Accurately Capture Scapular Kinematics during Shoulder Movements

    OpenAIRE

    Seth, Ajay; Matias, Ricardo; António P Veloso; Delp, Scott L.

    2016-01-01

    The complexity of shoulder mechanics combined with the movement of skin relative to the scapula makes it difficult to measure shoulder kinematics with sufficient accuracy to distinguish between symptomatic and asymptomatic individuals. Multibody skeletal models can improve motion capture accuracy by reducing the space of possible joint movements, and models are used widely to improve measurement of lower limb kinematics. In this study, we developed a rigid-body model of a scapulothoracic join...

  12. A Multilayer Recurrent Fuzzy Neural Network for Accurate Dynamic System Modeling

    Institute of Scientific and Technical Information of China (English)

    LIU He; HUANG Dao

    2008-01-01

    A multilayer recurrent fuzzy neural network (MRFNN) is proposed for accurate dynamic system modeling. The proposed MRFNN has six layers combined with a T-S fuzzy model. The recurrent structures are formed by local feedback connections in the membership layer and the rule layer. With these feedbacks, the fuzzy sets are time-varying and the temporal problem of dynamic systems can be solved well. The parameters of the MRFNN are learned by chaotic search (CS) and least-squares estimation (LSE) simultaneously, where CS tunes the premise parameters and LSE updates the consequent coefficients accordingly. Simulation results show that the proposed approach is effective for dynamic system modeling with high accuracy.

  13. A probabilistic model for reducing medication errors.

    Directory of Open Access Journals (Sweden)

    Phung Anh Nguyen

    Full Text Available BACKGROUND: Medication errors are common, life-threatening, costly but preventable. Information technology and automated systems are highly efficient for preventing medication errors and therefore widely employed in hospital settings. The aim of this study was to construct a probabilistic model that can reduce medication errors by identifying uncommon or rare associations between medications and diseases. METHODS AND FINDINGS: Association rule mining techniques were applied to 103.5 million prescriptions from Taiwan's National Health Insurance database. The dataset included 204.5 million diagnoses with ICD9-CM codes and 347.7 million medications by using ATC codes. Disease-Medication (DM) and Medication-Medication (MM) associations were computed by their co-occurrence, and association strength was measured by the interestingness or lift values, referred to as Q values. The DMQs and MMQs were used to develop the AOP model to predict the appropriateness of a given prescription. Validation of this model was done by comparing the results of evaluation performed by the AOP model and verified by human experts. The results showed 96% accuracy for appropriate and 45% accuracy for inappropriate prescriptions, with a sensitivity and specificity of 75.9% and 89.5%, respectively. CONCLUSIONS: We successfully developed the AOP model as an efficient tool for automatic identification of uncommon or rare associations between disease-medication and medication-medication in prescriptions. The AOP model helps to reduce medication errors by alerting physicians, improving the patients' safety and the overall quality of care.
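
    As a rough illustration of the association strength used in the AOP model, the lift (Q value) of a disease-medication pair can be computed from co-occurrence counts as sketched below; the record tuples and counts are hypothetical placeholders, not data from the study.

      from collections import Counter

      def lift(pair_count, count_a, count_b, n_records):
          """Lift (interestingness, 'Q value') of an association A -> B.

          lift = P(A and B) / (P(A) * P(B)); values well below 1 flag
          uncommon or rare disease-medication combinations.
          """
          p_ab = pair_count / n_records
          p_a = count_a / n_records
          p_b = count_b / n_records
          return p_ab / (p_a * p_b)

      # hypothetical toy prescriptions: (diagnosis, medication) pairs
      records = [("asthma", "salbutamol"), ("asthma", "salbutamol"),
                 ("hypertension", "enalapril"), ("hypertension", "enalapril"),
                 ("hypertension", "salbutamol")]
      dm_pairs = Counter(records)
      diag = Counter(d for d, _ in records)
      meds = Counter(m for _, m in records)
      q = lift(dm_pairs[("hypertension", "salbutamol")],
               diag["hypertension"], meds["salbutamol"], len(records))
      print(f"Q value for (hypertension, salbutamol): {q:.2f}")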

  14. Accurate tissue area measurements with considerably reduced radiation dose achieved by patient-specific CT scan parameters

    DEFF Research Database (Denmark)

    Brandberg, J.; Bergelin, E.; Sjostrom, L.;

    2008-01-01

    ... for muscle tissue. Image noise was quantified by standard deviation measurements. The area deviation was ... The radiation dose of the low-dose technique was reduced to 2-3% for diameters of 31-35 cm and to 7.5-50% for diameters of 36-47 cm ... as compared with the integral dose of the standard diagnostic technique. The CT numbers of muscle tissue remained unchanged with reduced radiation dose. Image noise was on average 20.9 HU (Hounsfield units) for subjects with diameters of 31-35 cm and 11.2 HU for subjects with diameters in the range of 36...

  15. Digitalized accurate modeling of SPCB with multi-spiral surface based on CPC algorithm

    Science.gov (United States)

    Huang, Yanhua; Gu, Lizhi

    2015-09-01

    The main methods of existing multi-spiral surface geometry modeling include spatial analytic geometry algorithms, graphical methods, and interpolation and approximation algorithms. However, these modeling methods have shortcomings, such as a large amount of calculation, complex processes, and visible errors, which have considerably restricted the design and manufacture of premium, high-precision products with spiral surfaces. This paper introduces the concepts of spatially parallel coupling with a multi-spiral surface and of the spatially parallel coupling body. The typical geometric and topological features of each spiral surface forming the multi-spiral surface body are determined by using the extraction principle of the datum point cluster, the algorithm of the coupling point cluster with singular-point removal, and the "spatially parallel coupling" principle based on non-uniform B-splines for each spiral surface. The orientation and quantitative relationships of the datum point cluster and the coupling point cluster in Euclidean space are determined accurately and given a digital description and expression, and the surfaces with multiple coupling point clusters are coalesced under the Pro/E environment. Digitally accurate modeling of the spatially parallel coupling body with a multi-spiral surface is thus realized. Smoothing and fairing are applied to the end-section area of a three-blade end-milling cutter by applying the principle of spatially parallel coupling with a multi-spiral surface, and the resulting entity model is machined in a four-axis machining center after the end mill is set up. The algorithm is verified and then applied effectively to the transition area among the multiple spiral surfaces. The proposed model and algorithms may be used in the design and manufacture of multi-spiral surface body products, as well as in essentially solving the problems of considerable modeling errors in computer graphics and

  16. Towards an accurate understanding of UHMWPE visco-dynamic behaviour for numerical modelling of implants.

    Science.gov (United States)

    Quinci, Federico; Dressler, Matthew; Strickland, Anthony M; Limbert, Georges

    2014-04-01

    Considerable progress has been made in understanding implant wear and developing numerical models to predict wear for new orthopaedic devices. However, any model of wear could be improved through a more accurate representation of the biomaterial mechanics, including time-varying dynamic and inelastic behaviour such as viscosity and plastic deformation. In particular, most computational models of wear of UHMWPE implement a time-invariant version of Archard's law that links the volume of worn material to the contact pressure between the metal implant and the polymeric tibial insert. During in-vivo conditions, however, the contact area is a time-varying quantity and is therefore dependent upon the dynamic deformation response of the material. From this observation one can conclude that creep deformations of UHMWPE may be very important to consider when conducting computational wear analyses, in stark contrast to what can be found in the literature. In this study, different numerical modelling techniques are compared with experimental creep testing on a unicondylar knee replacement system in a physiologically representative context. Linear elastic, plastic and time-varying visco-dynamic models are benchmarked using literature data to predict contact deformations, pressures and areas. The aim of this study is to elucidate the contributions of viscoelastic and plastic effects on these surface quantities. It is concluded that creep deformations have a significant effect on the contact pressure measured (experiment) and calculated (computational models) at the surface of the UHMWPE unicondylar insert. The use of a purely elastoplastic constitutive model for UHMWPE leads to compressive deformations of the insert which are much smaller than those predicted by a creep-capturing viscoelastic model (and those measured experimentally). This shows again the importance of including creep behaviour into a constitutive model in order to predict the right level of surface deformation

  18. LogGPO: An accurate communication model for performance prediction of MPI programs

    Institute of Scientific and Technical Information of China (English)

    CHEN WenGuang; ZHAI JiDong; ZHANG Jin; ZHENG WeiMin

    2009-01-01

    Message passing interface (MPI) is the de facto standard in writing parallel scientific applications on distributed memory systems. Performance prediction of MPI programs on current or future parallel systems can help to find system bottlenecks or optimize programs. To effectively analyze and predict performance of a large and complex MPI program, an efficient and accurate communication model is highly needed. A series of communication models have been proposed, such as the LogP model family, which assume that the sending overhead, message transmission, and receiving overhead of a communication are not overlapped and that there is a maximum overlap degree between computation and communication. However, this assumption does not always hold for MPI programs because either sending or receiving overhead introduced by MPI implementations can decrease potential overlap for large messages. In this paper, we present a new communication model, named LogGPO, which captures the potential overlap between computation and communication of MPI programs. We design and implement a trace-driven simulator to verify the LogGPO model by predicting performance of point-to-point communication and two real applications, CG and Sweep3D. The average prediction errors of the LogGPO model are 2.4% and 2.0% for these two applications respectively, while the average prediction errors of the LogGP model are 38.3% and 9.1% respectively.
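
    For reference, the classic LogGP cost that the LogGPO model extends estimates the end-to-end time of a k-byte point-to-point message from the latency L, the send/receive overheads o, and the per-byte gap G. The sketch below implements one common formulation of that baseline cost with made-up parameters; it does not include the overlap terms introduced by LogGPO.

      def loggp_time(k, L, o_send, o_recv, G):
          """Classic LogGP estimate for one k-byte message (no overlap modeling).

          o_send / o_recv : CPU overheads on sender and receiver (seconds)
          L               : network latency (seconds)
          G               : gap per byte, i.e. inverse bandwidth (seconds/byte)
          """
          return o_send + (k - 1) * G + L + o_recv

      # hypothetical parameters: 5 us overheads, 10 us latency, 1 GB/s bandwidth
      t = loggp_time(k=1 << 20, L=10e-6, o_send=5e-6, o_recv=5e-6, G=1e-9)
      print(f"predicted transfer time: {t * 1e3:.3f} ms")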

  19. Physical modeling of real-world slingshots for accurate speed predictions

    CERN Document Server

    Yeats, Bob

    2016-01-01

    We discuss the physics and modeling of latex-rubber slingshots. The goal is to get accurate speed predictions in spite of the significant real-world difficulties of force drift, force hysteresis, rubber ageing, and the very non-linear, non-ideal force vs. pull-distance curves of slingshot rubber bands. Slingshots are known to shoot faster under some circumstances when the bands are tapered rather than having constant width and stiffness. We give both qualitative understanding and numerical predictions of this effect. We consider two models. The first is based on conservation of energy and is easier to implement, but cannot determine the speeds along the rubber bands without making assumptions. The second treats the bands as a series of mass points, each pulled by the immediately adjacent mass points according to how much the rubber has been stretched on its two adjacent sides. This is a classic many-body F=ma problem but convergence requires using a particular numerical technique. It gives accurate p...
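
    The second model described above is essentially a one-dimensional chain of mass points coupled by elastic segments. The sketch below integrates such a chain with a simple semi-implicit scheme, using a linear (Hookean), tension-only segment force purely as a placeholder for the measured non-linear rubber force curve; all parameters are illustrative.

      import numpy as np

      def simulate_band(n=50, m_band=0.02, m_proj=0.01, k_seg=2000.0,
                        rest=0.004, stretch=3.0, dt=1e-6, steps=200000):
          """Chain of n+1 mass points; the last point carries the projectile mass."""
          x = np.linspace(0.0, n * rest * stretch, n + 1)   # pre-stretched band
          v = np.zeros(n + 1)
          mass = np.full(n + 1, m_band / n)
          mass[-1] += m_proj                                # projectile at free end
          for _ in range(steps):
              ext = np.diff(x) - rest                       # segment extensions
              f_seg = k_seg * np.clip(ext, 0.0, None)       # rubber cannot push
              force = np.zeros(n + 1)
              force[:-1] += f_seg                           # pulled toward next point
              force[1:] -= f_seg                            # pulled toward previous point
              force[0] = 0.0                                # anchored end (fork)
              v[1:] += dt * force[1:] / mass[1:]
              x[1:] += dt * v[1:]
              if ext.max() <= 0.0:                          # band fully relaxed
                  break
          return abs(v[-1])                                 # projectile release speed

      print(f"release speed ~ {simulate_band():.1f} m/s")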

  20. Oxygen-Enhanced MRI Accurately Identifies, Quantifies, and Maps Tumor Hypoxia in Preclinical Cancer Models.

    Science.gov (United States)

    O'Connor, James P B; Boult, Jessica K R; Jamin, Yann; Babur, Muhammad; Finegan, Katherine G; Williams, Kaye J; Little, Ross A; Jackson, Alan; Parker, Geoff J M; Reynolds, Andrew R; Waterton, John C; Robinson, Simon P

    2016-02-15

    There is a clinical need for noninvasive biomarkers of tumor hypoxia for prognostic and predictive studies, radiotherapy planning, and therapy monitoring. Oxygen-enhanced MRI (OE-MRI) is an emerging imaging technique for quantifying the spatial distribution and extent of tumor oxygen delivery in vivo. In OE-MRI, the longitudinal relaxation rate of protons (ΔR1) changes in proportion to the concentration of molecular oxygen dissolved in plasma or interstitial tissue fluid. Therefore, well-oxygenated tissues show positive ΔR1. We hypothesized that the fraction of tumor tissue refractory to oxygen challenge (lack of positive ΔR1, termed "Oxy-R fraction") would be a robust biomarker of hypoxia in models with varying vascular and hypoxic features. Here, we demonstrate that OE-MRI signals are accurate, precise, and sensitive to changes in tumor pO2 in highly vascular 786-0 renal cancer xenografts. Furthermore, we show that Oxy-R fraction can quantify the hypoxic fraction in multiple models with differing hypoxic and vascular phenotypes, when used in combination with measurements of tumor perfusion. Finally, Oxy-R fraction can detect dynamic changes in hypoxia induced by the vasomodulator agent hydralazine. In contrast, more conventional biomarkers of hypoxia (derived from blood oxygenation-level dependent MRI and dynamic contrast-enhanced MRI) did not relate to tumor hypoxia consistently. Our results show that the Oxy-R fraction accurately quantifies tumor hypoxia noninvasively and is immediately translatable to the clinic.

  1. Accurate calibration of the velocity-dependent one-scale model for domain walls

    Energy Technology Data Exchange (ETDEWEB)

    Leite, A.M.M., E-mail: up080322016@alunos.fc.up.pt [Centro de Astrofisica, Universidade do Porto, Rua das Estrelas, 4150-762 Porto (Portugal); Ecole Polytechnique, 91128 Palaiseau Cedex (France); Martins, C.J.A.P., E-mail: Carlos.Martins@astro.up.pt [Centro de Astrofisica, Universidade do Porto, Rua das Estrelas, 4150-762 Porto (Portugal); Shellard, E.P.S., E-mail: E.P.S.Shellard@damtp.cam.ac.uk [Department of Applied Mathematics and Theoretical Physics, Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA (United Kingdom)

    2013-01-08

    We study the asymptotic scaling properties of standard domain wall networks in several cosmological epochs. We carry out the largest field theory simulations achieved to date, with simulation boxes of size 2048^3, and confirm that a scale-invariant evolution of the network is indeed the attractor solution. The simulations are also used to obtain an accurate calibration for the velocity-dependent one-scale model for domain walls: we numerically determine the two free model parameters to have the values c_w = 0.34 ± 0.16 and k_w = 0.98 ± 0.07, which are of higher precision than (but in agreement with) earlier estimates.

  2. Simple and accurate modelling of the gravitational potential produced by thick and thin exponential disks

    CERN Document Server

    Smith, Rory; Candlish, Graeme N; Fellhauer, Michael; Gibson, Bradley K

    2015-01-01

    We present accurate models of the gravitational potential produced by a radially exponential disk mass distribution. The models are produced by combining three separate Miyamoto-Nagai disks. Such models have been used previously to model the disk of the Milky Way, but here we extend this framework to allow its application to disks of any mass, scalelength, and a wide range of thickness from infinitely thin to near spherical (ellipticities from 0 to 0.9). The models have the advantage of simplicity of implementation, and we expect faster run speeds over a double exponential disk treatment. The potentials are fully analytical, and differentiable at all points. The mass distribution of our models deviates from the radial mass distribution of a pure exponential disk by <0.4% out to 4 disk scalelengths, and <1.9% out to 10 disk scalelengths. We tabulate fitting parameters which facilitate construction of exponential disks for any scalelength, and a wide range of disk thickness (a user-friendly, web-based int...
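
    For concreteness, a single Miyamoto-Nagai component has the analytic potential Phi(R,z) = -G*M / sqrt(R^2 + (a + sqrt(z^2 + b^2))^2), and the model above simply sums three such components. The sketch below evaluates that sum with placeholder masses and scale parameters; the actual fitting parameters are tabulated in the paper and are not reproduced here.

      import numpy as np

      G = 4.300917270e-6  # gravitational constant in kpc (km/s)^2 / Msun

      def miyamoto_nagai(R, z, M, a, b):
          """Potential of one Miyamoto-Nagai disk at cylindrical (R, z) in kpc."""
          return -G * M / np.sqrt(R**2 + (a + np.sqrt(z**2 + b**2))**2)

      def three_mn_disk(R, z, components):
          """Sum of three MN disks approximating an exponential disk."""
          return sum(miyamoto_nagai(R, z, M, a, b) for (M, a, b) in components)

      # placeholder (M [Msun], a [kpc], b [kpc]) triplets -- NOT the paper's table;
      # such fits can include a negative-mass component
      components = [(4.0e10, 3.0, 0.3), (2.0e10, 6.0, 0.3), (-1.0e10, 9.0, 0.3)]
      print(three_mn_disk(R=8.0, z=0.0, components=components))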

  3. Empirical Reduced-Order Modeling for Boundary Feedback Flow Control

    Directory of Open Access Journals (Sweden)

    Seddik M. Djouadi

    2008-01-01

    Full Text Available This paper deals with the practical and theoretical implications of model reduction for aerodynamic flow-based control problems. Various aspects of model reduction are discussed that apply to partial differential equation (PDE) based models in general. Specifically, the proper orthogonal decomposition (POD) of a high-dimension system as well as frequency domain identification methods are discussed for initial model construction. Projections on the POD basis give a nonlinear Galerkin model. Then, a model reduction method based on empirical balanced truncation is developed and applied to the Galerkin model. The rationale for doing so is that linear subspace approximations to exact submanifolds associated with nonlinear controllability and observability require only standard matrix manipulations utilizing simulation/experimental data. The proposed method uses a chirp signal as input to produce the output in the eigensystem realization algorithm (ERA). This method estimates the system's Markov parameters that accurately reproduce the output. Balanced truncation is used to show that model reduction is still effective on ERA-produced approximated systems. The method is applied to a prototype convective flow on obstacle geometry. An H∞ feedback flow controller is designed based on the reduced model to achieve tracking and then applied to the full-order model with excellent performance.
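
    As a minimal illustration of the first step described above, a POD basis can be extracted from a snapshot matrix by a thin singular value decomposition. The snippet below is a generic sketch on random low-rank snapshots with hypothetical dimensions; it does not reproduce the paper's flow data, ERA identification, or balanced-truncation stage.

      import numpy as np

      def pod_basis(snapshots, energy=0.99):
          """POD modes of a snapshot matrix (n_dof x n_snapshots) via thin SVD."""
          U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
          cum = np.cumsum(s**2) / np.sum(s**2)        # captured "energy"
          r = int(np.searchsorted(cum, energy)) + 1   # smallest rank reaching target
          return U[:, :r], s[:r]

      # toy snapshot matrix: 1000 DOFs, 60 snapshots of a rank-5 field plus noise
      rng = np.random.default_rng(1)
      X = rng.standard_normal((1000, 5)) @ rng.standard_normal((5, 60))
      X += 1e-3 * rng.standard_normal(X.shape)
      Phi, sigma = pod_basis(X)
      print(Phi.shape)          # (1000, r) with r ~ 5
      a = Phi.T @ X             # reduced (Galerkin) coordinates of the snapshots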

  4. Using the Neumann series expansion for assembling Reduced Order Models

    Directory of Open Access Journals (Sweden)

    Nasisi S.

    2014-06-01

    Full Text Available An efficient method to remove the limitation in selecting the master degrees of freedom in a finite element model by means of a model order reduction is presented. A major difficulty of the Guyan reduction and the IRS method (Improved Reduced System) is the need to appropriately select the master and slave degrees of freedom for the rate of convergence to be high. This study approaches the above limitation by using a particular arrangement of the rows and columns of the assembled matrices K and M and employing a combination of the IRS method and a variant of the analytical selection of masters presented in (Shah, V. N., Raymund, M., Analytical selection of masters for the reduced eigenvalue problem, International Journal for Numerical Methods in Engineering 18 (1), 1982) in case the first lowest frequencies have to be sought. One of the most significant characteristics of the approach is the use of the Neumann series expansion, which motivates this particular arrangement of the matrices' entries. The method shows a higher rate of convergence when compared to the standard IRS and very accurate results for the lowest reduced frequencies. To show the effectiveness of the proposed method, two testing structures and the human vocal tract model employed in (Vampola, T., Horacek, J., Svec, J. G., FE modeling of human vocal tract acoustics. Part I: Production of Czech vowels, Acta Acustica United with Acustica 94 (3), 2008) are presented.
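
    The Guyan (static) condensation that both the IRS method and the Neumann-series argument build on eliminates the slave DOFs through the transformation T = [I; -Kss^{-1} Ksm]. The sketch below shows that baseline reduction for an arbitrary master/slave partition on a toy spring-mass chain; the IRS correction and the particular row/column rearrangement proposed in the paper are not reproduced.

      import numpy as np

      def guyan_reduce(K, M, masters):
          """Guyan (static) condensation of K and M onto the master DOFs."""
          n = K.shape[0]
          slaves = [i for i in range(n) if i not in masters]
          Ksm = K[np.ix_(slaves, masters)]
          Kss = K[np.ix_(slaves, slaves)]
          # transformation u = T u_m: slave DOFs slaved statically to the masters
          T = np.vstack([np.eye(len(masters)), -np.linalg.solve(Kss, Ksm)])
          P = np.ix_(masters + slaves, masters + slaves)   # reorder to [masters, slaves]
          Kr = T.T @ K[P] @ T
          Mr = T.T @ M[P] @ T
          return Kr, Mr

      # toy 4-DOF spring-mass chain, keep DOFs 0 and 3 as masters
      K = np.array([[ 2., -1.,  0.,  0.],
                    [-1.,  2., -1.,  0.],
                    [ 0., -1.,  2., -1.],
                    [ 0.,  0., -1.,  1.]])
      M = np.eye(4)
      Kr, Mr = guyan_reduce(K, M, masters=[0, 3])
      print(np.sort(np.linalg.eigvals(np.linalg.solve(Mr, Kr)).real))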

  5. Fitmunk: improving protein structures by accurate, automatic modeling of side-chain conformations.

    Science.gov (United States)

    Porebski, Przemyslaw Jerzy; Cymborowski, Marcin; Pasenkiewicz-Gierula, Marta; Minor, Wladek

    2016-02-01

    Improvements in crystallographic hardware and software have allowed automated structure-solution pipelines to approach a near-`one-click' experience for the initial determination of macromolecular structures. However, in many cases the resulting initial model requires a laborious, iterative process of refinement and validation. A new method has been developed for the automatic modeling of side-chain conformations that takes advantage of rotamer-prediction methods in a crystallographic context. The algorithm, which is based on deterministic dead-end elimination (DEE) theory, uses new dense conformer libraries and a hybrid energy function derived from experimental data and prior information about rotamer frequencies to find the optimal conformation of each side chain. In contrast to existing methods, which incorporate the electron-density term into protein-modeling frameworks, the proposed algorithm is designed to take advantage of the highly discriminatory nature of electron-density maps. This method has been implemented in the program Fitmunk, which uses extensive conformational sampling. This improves the accuracy of the modeling and makes it a versatile tool for crystallographic model building, refinement and validation. Fitmunk was extensively tested on over 115 new structures, as well as a subset of 1100 structures from the PDB. It is demonstrated that the ability of Fitmunk to model more than 95% of side chains accurately is beneficial for improving the quality of crystallographic protein models, especially at medium and low resolutions. Fitmunk can be used for model validation of existing structures and as a tool to assess whether side chains are modeled optimally or could be better fitted into electron density. Fitmunk is available as a web service at http://kniahini.med.virginia.edu/fitmunk/server/ or at http://fitmunk.bitbucket.org/. PMID:26894674

  6. Fitmunk: improving protein structures by accurate, automatic modeling of side-chain conformations.

    Science.gov (United States)

    Porebski, Przemyslaw Jerzy; Cymborowski, Marcin; Pasenkiewicz-Gierula, Marta; Minor, Wladek

    2016-02-01

    Improvements in crystallographic hardware and software have allowed automated structure-solution pipelines to approach a near-`one-click' experience for the initial determination of macromolecular structures. However, in many cases the resulting initial model requires a laborious, iterative process of refinement and validation. A new method has been developed for the automatic modeling of side-chain conformations that takes advantage of rotamer-prediction methods in a crystallographic context. The algorithm, which is based on deterministic dead-end elimination (DEE) theory, uses new dense conformer libraries and a hybrid energy function derived from experimental data and prior information about rotamer frequencies to find the optimal conformation of each side chain. In contrast to existing methods, which incorporate the electron-density term into protein-modeling frameworks, the proposed algorithm is designed to take advantage of the highly discriminatory nature of electron-density maps. This method has been implemented in the program Fitmunk, which uses extensive conformational sampling. This improves the accuracy of the modeling and makes it a versatile tool for crystallographic model building, refinement and validation. Fitmunk was extensively tested on over 115 new structures, as well as a subset of 1100 structures from the PDB. It is demonstrated that the ability of Fitmunk to model more than 95% of side chains accurately is beneficial for improving the quality of crystallographic protein models, especially at medium and low resolutions. Fitmunk can be used for model validation of existing structures and as a tool to assess whether side chains are modeled optimally or could be better fitted into electron density. Fitmunk is available as a web service at http://kniahini.med.virginia.edu/fitmunk/server/ or at http://fitmunk.bitbucket.org/.

  7. A Simple and Accurate Model to Predict Responses to Multi-electrode Stimulation in the Retina.

    Directory of Open Access Journals (Sweden)

    Matias I Maturana

    2016-04-01

    Full Text Available Implantable electrode arrays are widely used in therapeutic stimulation of the nervous system (e.g. cochlear, retinal, and cortical implants). Currently, most neural prostheses use serial stimulation (i.e. one electrode at a time) despite this severely limiting the repertoire of stimuli that can be applied. Methods to reliably predict the outcome of multi-electrode stimulation have not been available. Here, we demonstrate that a linear-nonlinear model accurately predicts neural responses to arbitrary patterns of stimulation using in vitro recordings from single retinal ganglion cells (RGCs) stimulated with a subretinal multi-electrode array. In the model, the stimulus is projected onto a low-dimensional subspace and then undergoes a nonlinear transformation to produce an estimate of spiking probability. The low-dimensional subspace is estimated using principal components analysis, which gives the neuron's electrical receptive field (ERF), i.e. the electrodes to which the neuron is most sensitive. Our model suggests that stimulation proportional to the ERF yields a higher efficacy given a fixed amount of power when compared to equal amplitude stimulation on up to three electrodes. We find that the model captures the responses of all the cells recorded in the study, suggesting that it will generalize to most cell types in the retina. The model is computationally efficient to evaluate and, therefore, appropriate for future real-time applications including stimulation strategies that make use of recorded neural activity to improve the stimulation strategy.
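
    Schematically, the linear-nonlinear model above projects the vector of electrode amplitudes onto a low-dimensional filter and passes the result through a saturating nonlinearity. The sketch below illustrates that model class on synthetic data, using the spike-triggered average in place of the paper's PCA step and a logistic nonlinearity; it is not the authors' fitting procedure.

      import numpy as np

      rng = np.random.default_rng(2)

      # synthetic data: stimulation amplitudes on 20 electrodes, hidden linear filter
      n_elec, n_trials = 20, 5000
      erf_true = np.zeros(n_elec)
      erf_true[[4, 5, 6]] = [1.0, 2.0, 1.0]          # cell sensitive to three electrodes
      stim = rng.standard_normal((n_trials, n_elec))
      p_spike = 1.0 / (1.0 + np.exp(-(stim @ erf_true - 1.0)))
      spikes = (rng.random(n_trials) < p_spike).astype(float)

      # 1) linear stage: estimate the electrical receptive field direction
      #    (the paper uses PCA; the closely related spike-triggered average
      #    is used here for brevity)
      erf_hat = stim[spikes > 0].mean(axis=0)
      erf_hat /= np.linalg.norm(erf_hat)

      # 2) nonlinear stage: logistic map from projected stimulus to spike probability
      z = stim @ erf_hat
      w, b = 1.0, 0.0
      for _ in range(2000):                          # plain gradient ascent on log-likelihood
          p = 1.0 / (1.0 + np.exp(-(w * z + b)))
          w += 0.1 * np.mean((spikes - p) * z)
          b += 0.1 * np.mean(spikes - p)

      print("recovered ERF direction:", np.round(erf_hat[2:9], 2))
      print("nonlinearity parameters:", round(w, 2), round(b, 2))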

  8. A Simple and Accurate Model to Predict Responses to Multi-electrode Stimulation in the Retina.

    Science.gov (United States)

    Maturana, Matias I; Apollo, Nicholas V; Hadjinicolaou, Alex E; Garrett, David J; Cloherty, Shaun L; Kameneva, Tatiana; Grayden, David B; Ibbotson, Michael R; Meffin, Hamish

    2016-04-01

    Implantable electrode arrays are widely used in therapeutic stimulation of the nervous system (e.g. cochlear, retinal, and cortical implants). Currently, most neural prostheses use serial stimulation (i.e. one electrode at a time) despite this severely limiting the repertoire of stimuli that can be applied. Methods to reliably predict the outcome of multi-electrode stimulation have not been available. Here, we demonstrate that a linear-nonlinear model accurately predicts neural responses to arbitrary patterns of stimulation using in vitro recordings from single retinal ganglion cells (RGCs) stimulated with a subretinal multi-electrode array. In the model, the stimulus is projected onto a low-dimensional subspace and then undergoes a nonlinear transformation to produce an estimate of spiking probability. The low-dimensional subspace is estimated using principal components analysis, which gives the neuron's electrical receptive field (ERF), i.e. the electrodes to which the neuron is most sensitive. Our model suggests that stimulation proportional to the ERF yields a higher efficacy given a fixed amount of power when compared to equal amplitude stimulation on up to three electrodes. We find that the model captures the responses of all the cells recorded in the study, suggesting that it will generalize to most cell types in the retina. The model is computationally efficient to evaluate and, therefore, appropriate for future real-time applications including stimulation strategies that make use of recorded neural activity to improve the stimulation strategy. PMID:27035143

  9. A reduced model for shock and detonation waves. I. The inert case

    OpenAIRE

    Stoltz, G.

    2006-01-01

    We present a model of mesoparticles, very much in the Dissipative Particle Dynamics spirit, in which a molecule is replaced by a particle with an internal thermodynamic degree of freedom (temperature or energy). The model is shown to give quantitatively accurate results for the simulation of shock waves in a crystalline polymer, and opens the way to a reduced model of detonation waves.

  10. Optimal Cluster Mill Pass Scheduling With an Accurate and Rapid New Strip Crown Model

    Science.gov (United States)

    Malik, Arif S.; Grandhi, Ramana V.; Zipf, Mark E.

    2007-05-01

    Besides the requirement to roll coiled sheet at high levels of productivity, the optimal pass scheduling of cluster-type reversing cold mills presents the added challenge of assigning mill parameters that facilitate the best possible strip flatness. The pressures of intense global competition, and the requirements for increasingly thinner, higher quality specialty sheet products that are more difficult to roll, continue to force metal producers to commission innovative flatness-control technologies. This means that during the on-line computerized set-up of rolling mills, the mathematical model should not only determine the minimum total number of passes and maximum rolling speed, it should simultaneously optimize the pass-schedule so that desired flatness is assured, either by manual or automated means. In many cases today, however, on-line prediction of strip crown and corresponding flatness for the complex cluster-type rolling mills is typically addressed either by trial and error, by approximate deflection models for equivalent vertical roll-stacks, or by non-physical pattern recognition style models. The abundance of the aforementioned methods is largely due to the complexity of cluster-type mill configurations and the lack of deflection models with sufficient accuracy and speed for on-line use. Without adequate assignment of the pass-schedule set-up parameters, it may be difficult or impossible to achieve the required strip flatness. In this paper, we demonstrate optimization of cluster mill pass-schedules using a new accurate and rapid strip crown model. This pass-schedule optimization includes computations of the predicted strip thickness profile to validate mathematical constraints. In contrast to many of the existing methods for on-line prediction of strip crown and flatness on cluster mills, the demonstrated method requires minimal prior tuning and no extensive training with collected mill data. To rapidly and accurately solve the multi-contact problem

  11. Fast and accurate analytical model to solve inverse problem in SHM using Lamb wave propagation

    Science.gov (United States)

    Poddar, Banibrata; Giurgiutiu, Victor

    2016-04-01

    Lamb wave propagation is at the center of attention of researchers for structural health monitoring of thin walled structures. This is due to the fact that Lamb wave modes are natural modes of wave propagation in these structures with long travel distances and without much attenuation. This brings the prospect of monitoring large structures with few sensors/actuators. However the problem of damage detection and identification is an "inverse problem" where we do not have the luxury to know the exact mathematical model of the system. On top of that, the problem is more challenging due to the confounding factors of statistical variation of the material and geometric properties. Typically this problem may also be ill-posed. Due to all these complexities the direct solution of the problem of damage detection and identification in SHM is impossible. Therefore an indirect method using the solution of the "forward problem" is popular for solving the "inverse problem". This requires a fast forward problem solver. Due to the complexities involved with the forward problem of scattering of Lamb waves from damages researchers rely primarily on numerical techniques such as FEM, BEM, etc. But these methods are slow and practically impossible to use in structural health monitoring. We have developed a fast and accurate analytical forward problem solver for this purpose. This solver, CMEP (complex modes expansion and vector projection), can simulate scattering of Lamb waves from all types of damages in thin walled structures fast and accurately to assist the inverse problem solver.

  12. Accurate modeling of vector hysteresis using a superposition of Preisach-type models

    Energy Technology Data Exchange (ETDEWEB)

    Adly, A.A. [Cairo Univ., Giza (Egypt). Electrical Power and Machines Dept.; Mayergoyz, I.D. [Univ. of Maryland, College Park, MD (United States). Electrical Engineering Dept.

    1997-09-01

    Vector hysteresis models are generally regarded as helpful tools that can be utilized in simulating and/or predicting multi-dimensional field-media interactions. Simulations of energy loss in power devices having unoriented magnetic cores, read/write recording processes, as well as tape and disk erasure approaches are examples of such interactions that are currently of considerable interest. In this paper, simulation of vector hysteresis is proposed by using a superposition of isotropic Preisach-type models. This approach gives the opportunity to fully incorporate rotational experimental results in its identification procedure, thus leading to higher simulation accuracy. A detailed solution of the model identification problem and some experimental testing results are given in the paper.

  13. An accurate and efficient Lagrangian sub-grid model for multi-particle dispersion

    Science.gov (United States)

    Toschi, Federico; Mazzitelli, Irene; Lanotte, Alessandra S.

    2014-11-01

    Many natural and industrial processes involve the dispersion of particle in turbulent flows. Despite recent theoretical progresses in the understanding of particle dynamics in simple turbulent flows, complex geometries often call for numerical approaches based on eulerian Large Eddy Simulation (LES). One important issue related to the Lagrangian integration of tracers in under-resolved velocity fields is connected to the lack of spatial correlations at unresolved scales. Here we propose a computationally efficient Lagrangian model for the sub-grid velocity of tracers dispersed in statistically homogeneous and isotropic turbulent flows. The model incorporates the multi-scale nature of turbulent temporal and spatial correlations that are essential to correctly reproduce the dynamics of multi-particle dispersion. The new model is able to describe the Lagrangian temporal and spatial correlations in clouds of particles. In particular we show that pairs and tetrads dispersion compare well with results from Direct Numerical Simulations of statistically isotropic and homogeneous 3d turbulence. This model may offer an accurate and efficient way to describe multi-particle dispersion in under resolved turbulent velocity fields such as the one employed in eulerian LES. This work is part of the research programmes FP112 of the Foundation for Fundamental Research on Matter (FOM), which is part of the Netherlands Organisation for Scientific Research (NWO). We acknowledge support from the EU COST Action MP0806.

  14. Modeling methodology for the accurate and prompt prediction of symptomatic events in chronic diseases.

    Science.gov (United States)

    Pagán, Josué; Risco-Martín, José L; Moya, José M; Ayala, José L

    2016-08-01

    Prediction of symptomatic crises in chronic diseases allows decisions to be taken before the symptoms occur, such as the intake of drugs to avoid the symptoms or the activation of medical alarms. The prediction horizon is in this case an important parameter in order to fulfill the pharmacokinetics of medications, or the time response of medical services. This paper presents a study about the prediction limits of a chronic disease with symptomatic crises: the migraine. For that purpose, this work develops a methodology to build predictive migraine models and to improve these predictions beyond the limits of the initial models. The maximum prediction horizon is analyzed, and its dependency on the selected features is studied. A strategy for model selection is proposed to tackle the trade-off between conservative but robust predictive models and less accurate predictions with longer horizons. The obtained results show a prediction horizon close to 40 min, which is in the time range of the drug pharmacokinetics. Experiments have been performed in a realistic scenario where input data have been acquired in an ambulatory clinical study by the deployment of a non-intrusive Wireless Body Sensor Network. Our results provide an effective methodology for the selection of the future horizon in the development of prediction algorithms for diseases experiencing symptomatic crises. PMID:27260782

  15. A general pairwise interaction model provides an accurate description of in vivo transcription factor binding sites.

    Directory of Open Access Journals (Sweden)

    Marc Santolini

    Full Text Available The identification of transcription factor binding sites (TFBSs) on genomic DNA is of crucial importance for understanding and predicting regulatory elements in gene networks. TFBS motifs are commonly described by Position Weight Matrices (PWMs), in which each DNA base pair contributes independently to the transcription factor (TF) binding. However, this description ignores correlations between nucleotides at different positions, and is generally inaccurate: analysing fly and mouse in vivo ChIPseq data, we show that in most cases the PWM model fails to reproduce the observed statistics of TFBSs. To overcome this issue, we introduce the pairwise interaction model (PIM), a generalization of the PWM model. The model is based on the principle of maximum entropy and explicitly describes pairwise correlations between nucleotides at different positions, while being otherwise as unconstrained as possible. It is mathematically equivalent to considering a TF-DNA binding energy that depends additively on each nucleotide identity at all positions in the TFBS, like the PWM model, but also additively on pairs of nucleotides. We find that the PIM significantly improves over the PWM model, and even provides an optimal description of TFBS statistics within statistical noise. The PIM generalizes previous approaches to interdependent positions: it accounts for co-variation of two or more base pairs, and predicts secondary motifs, while outperforming multiple-motif models consisting of mixtures of PWMs. We analyse the structure of pairwise interactions between nucleotides, and find that they are sparse and dominantly located between consecutive base pairs in the flanking region of TFBS. Nonetheless, interactions between pairs of non-consecutive nucleotides are found to play a significant role in the obtained accurate description of TFBS statistics. The PIM is computationally tractable, and provides a general framework that should be useful for describing and predicting
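
    In energy terms, a PWM assigns a score that is a sum of independent per-position contributions, whereas the pairwise interaction model adds couplings between positions, E(s) = sum_i h_i(s_i) + sum_{i<j} J_ij(s_i, s_j). The snippet below scores a sequence under both forms with randomly generated, purely hypothetical parameters.

      import numpy as np

      BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

      def pwm_energy(seq, h):
          """Independent-position (PWM-like) score: sum_i h[i, base_i]."""
          idx = [BASES[c] for c in seq]
          return sum(h[i, b] for i, b in enumerate(idx))

      def pim_energy(seq, h, J):
          """Pairwise interaction model: adds couplings J[i, j, base_i, base_j]."""
          idx = [BASES[c] for c in seq]
          e = pwm_energy(seq, h)
          for i in range(len(idx)):
              for j in range(i + 1, len(idx)):
                  e += J[i, j, idx[i], idx[j]]
          return e

      # hypothetical parameters for a length-8 binding site
      rng = np.random.default_rng(3)
      L = 8
      h = rng.normal(scale=1.0, size=(L, 4))
      J = rng.normal(scale=0.1, size=(L, L, 4, 4))    # weak couplings, sparse in practice
      print(pwm_energy("ACGTACGT", h), pim_energy("ACGTACGT", h, J))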

  16. Ab initio calculations to support accurate modelling of the rovibronic spectroscopy calculations of vanadium monoxide (VO)

    CERN Document Server

    McKemmish, Laura K; Tennyson, Jonathan

    2016-01-01

    Accurate knowledge of the rovibronic near-infrared and visible spectra of vanadium monoxide (VO) is very important for studies of cool stellar and hot planetary atmospheres. Here, the required ab initio dipole moment and spin-orbit coupling curves for VO are produced. This data forms the basis of a new VO line list considering 13 different electronic states and containing over 277 million transitions. Open-shell transition-metal diatomics are challenging species to model through ab initio quantum mechanics due to the large number of low-lying electronic states, significant spin-orbit coupling and strong static and dynamic electron correlation. Multi-reference configuration interaction methodologies using orbitals from a complete active space self-consistent-field (CASSCF) calculation are the standard technique for these systems. We use different state-specific or minimal-state CASSCF orbitals for each electronic state to maximise the calculation accuracy. The off-diagonal dipole moment controls the intensity...

  17. An Accurately Stable Thermo-Hydro-Mechanical Model for Geo-Environmental Simulations

    Science.gov (United States)

    Gambolati, G.; Castelletto, N.; Ferronato, M.

    2011-12-01

    In real-world applications involving complex 3D heterogeneous domains the use of advanced numerical algorithms is of paramount importance to stably, accurately and efficiently solve the coupled system of partial differential equations governing the mass and the energy balance in deformable porous media. The present communication discusses a novel coupled 3-D numerical model based on a suitable combination of Finite Elements (FEs), Mixed FEs (MFEs), and Finite Volumes (FVs) developed with the aim of stabilizing the numerical solution. Elemental pressures and temperatures, nodal displacements and face normal Darcy and Fourier fluxes are the selected primary variables. Such an approach provides an element-wise conservative velocity field, with both pore pressure and stress having the same order of approximation, and allows for the accurate prediction of sharp temperature convective fronts. In particular, the flow-deformation problem is addressed jointly by FEs and MFEs and is coupled to the heat transfer equation using an ad hoc time splitting technique that separates the temperature evolution in time into two partial differential equations, accounting for the convective and the diffusive contribution, respectively. The convective part is addressed by a FV scheme which proves effective in treating sharp convective fronts, while the diffusive part is solved by a MFE formulation. A staggered technique is then implemented for the global solution of the coupled thermo-hydro-mechanical problem, solving iteratively the flow-deformation and the heat transport at each time step. Finally, the model is successfully tested in realistic applications dealing with geothermal energy extraction and injection.

  18. Parallel kinetic Monte Carlo simulation framework incorporating accurate models of adsorbate lateral interactions

    Science.gov (United States)

    Nielsen, Jens; d'Avezac, Mayeul; Hetherington, James; Stamatakis, Michail

    2013-12-01

    Ab initio kinetic Monte Carlo (KMC) simulations have been successfully applied for over two decades to elucidate the underlying physico-chemical phenomena on the surfaces of heterogeneous catalysts. These simulations necessitate detailed knowledge of the kinetics of elementary reactions constituting the reaction mechanism, and the energetics of the species participating in the chemistry. The information about the energetics is encoded in the formation energies of gas and surface-bound species, and the lateral interactions between adsorbates on the catalytic surface, which can be modeled at different levels of detail. The majority of previous works accounted for only pairwise-additive first nearest-neighbor interactions. More recently, cluster-expansion Hamiltonians incorporating long-range interactions and many-body terms have been used for detailed estimations of catalytic rate [C. Wu, D. J. Schmidt, C. Wolverton, and W. F. Schneider, J. Catal. 286, 88 (2012)]. In view of the increasing interest in accurate predictions of catalytic performance, there is a need for general-purpose KMC approaches incorporating detailed cluster expansion models for the adlayer energetics. We have addressed this need by building on the previously introduced graph-theoretical KMC framework, and we have developed Zacros, a FORTRAN2003 KMC package for simulating catalytic chemistries. To tackle the high computational cost in the presence of long-range interactions we introduce parallelization with OpenMP. We further benchmark our framework by simulating a KMC analogue of the NO oxidation system established by Schneider and co-workers [J. Catal. 286, 88 (2012)]. We show that taking into account only first nearest-neighbor interactions may lead to large errors in the prediction of the catalytic rate, whereas for accurate estimates thereof, one needs to include long-range terms in the cluster expansion.

  19. An Efficient Hybrid DSMC/MD Algorithm for Accurate Modeling of Micro Gas Flows

    KAUST Repository

    Liang, Tengfei

    2013-01-01

    Aiming at simulating micro gas flows with accurate boundary conditions, an efficient hybrid algorithm is developed by combining the molecular dynamics (MD) method with the direct simulation Monte Carlo (DSMC) method. The efficiency comes from the fact that the MD method is applied only within the gas-wall interaction layer, characterized by the cut-off distance of the gas-solid interaction potential, to resolve accurately the gas-wall interaction process, while the DSMC method is employed in the remaining portion of the flow field to efficiently simulate rarefied gas transport outside the gas-wall interaction layer. A unique feature of the present scheme is that the coupling between the two methods is realized by matching the molecular velocity distribution function at the DSMC/MD interface, hence there is no need for one-to-one mapping between a MD gas molecule and a DSMC simulation particle. Further improvement in efficiency is achieved by taking advantage of gas rarefaction inside the gas-wall interaction layer and by employing the "smart-wall model" proposed by Barisik et al. The developed hybrid algorithm is validated on two classical benchmarks, namely the 1-D Fourier thermal problem and the Couette shear flow problem. Both the accuracy and efficiency of the hybrid algorithm are discussed. As an application, the hybrid algorithm is employed to simulate the thermal transpiration coefficient in the free-molecule regime for a system with an atomically smooth surface. The result is utilized to validate the coefficients calculated from the pure DSMC simulation with Maxwell and Cercignani-Lampis gas-wall interaction models. © 2014 Global-Science Press.

  20. Accurate state estimation from uncertain data and models: an application of data assimilation to mathematical models of human brain tumors

    Directory of Open Access Journals (Sweden)

    Kostelich Eric J

    2011-12-01

    Full Text Available Abstract Background Data assimilation refers to methods for updating the state vector (initial condition) of a complex spatiotemporal model (such as a numerical weather model) by combining new observations with one or more prior forecasts. We consider the potential feasibility of this approach for making short-term (60-day) forecasts of the growth and spread of a malignant brain cancer (glioblastoma multiforme) in individual patient cases, where the observations are synthetic magnetic resonance images of a hypothetical tumor. Results We apply a modern state estimation algorithm (the Local Ensemble Transform Kalman Filter), previously developed for numerical weather prediction, to two different mathematical models of glioblastoma, taking into account likely errors in model parameters and measurement uncertainties in magnetic resonance imaging. The filter can accurately shadow the growth of a representative synthetic tumor for 360 days (six 60-day forecast/update cycles) in the presence of a moderate degree of systematic model error and measurement noise. Conclusions The mathematical methodology described here may prove useful for other modeling efforts in biology and oncology. An accurate forecast system for glioblastoma may prove useful in clinical settings for treatment planning and patient counseling. Reviewers This article was reviewed by Anthony Almudevar, Tomas Radivoyevitch, and Kristin Swanson (nominated by Georg Luebeck).
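
    As a schematic of the underlying ensemble update, the sketch below implements a plain stochastic ensemble Kalman filter analysis step (not the Local Ensemble Transform Kalman Filter actually used) on a toy one-dimensional field with a linear observation operator; all dimensions and noise levels are made up.

      import numpy as np

      def enkf_analysis(X_f, y, H, R, rng):
          """One stochastic EnKF analysis step (perturbed observations).

          X_f : (n_state, n_ens) forecast ensemble
          y   : (n_obs,) observations,  H : (n_obs, n_state),  R : obs error covariance
          """
          n_ens = X_f.shape[1]
          A = X_f - X_f.mean(axis=1, keepdims=True)       # ensemble anomalies
          P_HT = A @ (H @ A).T / (n_ens - 1)              # P_f H^T from the ensemble
          K = P_HT @ np.linalg.inv(H @ P_HT + R)          # Kalman gain
          Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
          return X_f + K @ (Y - H @ X_f)                  # analysis ensemble

      # toy setup: 100-point 1D "density" field observed at every 5th grid point
      rng = np.random.default_rng(4)
      grid = np.linspace(0.0, 3.0 * np.pi, 100)
      x_true = np.sin(grid)
      H = np.eye(100)[::5]
      R = 0.05**2 * np.eye(H.shape[0])
      y = H @ x_true + 0.05 * rng.standard_normal(H.shape[0])

      # forecast ensemble: truth plus smooth (low-rank) perturbations
      modes = np.array([np.sin((k + 1) * grid / 3.0) for k in range(5)])
      X_f = x_true[:, None] + modes.T @ (0.3 * rng.standard_normal((5, 40)))
      X_a = enkf_analysis(X_f, y, H, R, rng)
      print("forecast RMSE:", np.sqrt(np.mean((X_f.mean(1) - x_true) ** 2)))
      print("analysis RMSE:", np.sqrt(np.mean((X_a.mean(1) - x_true) ** 2)))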

  1. Fast and Accurate Radiative Transfer Calculations Using Principal Component Analysis for (Exo-)Planetary Retrieval Models

    Science.gov (United States)

    Kopparla, P.; Natraj, V.; Shia, R. L.; Spurr, R. J. D.; Crisp, D.; Yung, Y. L.

    2015-12-01

    Radiative transfer (RT) computations form the engine of atmospheric retrieval codes. However, full treatment of RT processes is computationally expensive, prompting usage of two-stream approximations in current exoplanetary atmospheric retrieval codes [Line et al., 2013]. Natraj et al. [2005, 2010] and Spurr and Natraj [2013] demonstrated the ability of a technique using principal component analysis (PCA) to speed up RT computations. In the PCA method for RT performance enhancement, empirical orthogonal functions are developed for binned sets of inherent optical properties that possess some redundancy; costly multiple-scattering RT calculations are only done for those few optical states corresponding to the most important principal components, and correction factors are applied to approximate radiation fields. Kopparla et al. [2015, in preparation] extended the PCA method to a broadband spectral region from the ultraviolet to the shortwave infrared (0.3-3 micron), accounting for major gas absorptions in this region. Here, we apply the PCA method to some typical (exo-)planetary retrieval problems. Comparisons between the new model, called Universal Principal Component Analysis Radiative Transfer (UPCART) model, two-stream models and line-by-line RT models are performed, for spectral radiances, spectral fluxes and broadband fluxes. Each of these is calculated at the top of the atmosphere for several scenarios with varying aerosol types, extinction and scattering optical depth profiles, and stellar and viewing geometries. We demonstrate that very accurate radiance and flux estimates can be obtained, with better than 1% accuracy in all spectral regions and better than 0.1% in most cases, as compared to a numerically exact line-by-line RT model. The accuracy is enhanced when the results are convolved to typical instrument resolutions. The operational speed and accuracy of UPCART can be further improved by optimizing binning schemes and parallelizing the codes, work

  2. Fast and Accurate Modeling of Molecular Atomization Energies with Machine Learning

    CERN Document Server

    Rupp, Matthias; Müller, Klaus-Robert; von Lilienfeld, O Anatole

    2011-01-01

    We introduce a machine learning model to predict atomization energies of a diverse set of organic molecules, based on nuclear charges and atomic positions only. The problem of solving the molecular Schrödinger equation is mapped onto a non-linear statistical regression problem of reduced complexity. Regression models are trained on and compared to atomization energies computed with hybrid density-functional theory. Cross-validation over more than seven thousand small organic molecules yields a mean absolute error of ~10 kcal/mol. Applicability is demonstrated for the prediction of molecular atomization potential energy curves.
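
    The regression behind this approach can be sketched as kernel ridge regression on per-molecule descriptors built from nuclear charges and positions; the published work uses a Coulomb-matrix descriptor, while the kernel choice, hyperparameters, geometries, and energies below are simplified placeholders.

      import numpy as np

      def coulomb_eigenvalues(Z, pos, size=8):
          """Sorted eigenvalue spectrum of the Coulomb matrix, zero-padded to `size`."""
          n = len(Z)
          C = np.zeros((n, n))
          for i in range(n):
              for j in range(n):
                  if i == j:
                      C[i, j] = 0.5 * Z[i] ** 2.4
                  else:
                      C[i, j] = Z[i] * Z[j] / np.linalg.norm(pos[i] - pos[j])
          eig = np.sort(np.linalg.eigvalsh(C))[::-1]
          return np.pad(eig, (0, size - n))

      def krr_fit(X, y, sigma=1.0, lam=1e-6):
          """Kernel ridge regression with a Gaussian kernel on descriptor vectors."""
          d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
          K = np.exp(-d2 / (2.0 * sigma**2))
          alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)
          return lambda x: np.exp(-((X - x) ** 2).sum(-1) / (2.0 * sigma**2)) @ alpha

      # made-up CH4-like "molecules" (charges, jittered coordinates) and fake energies
      rng = np.random.default_rng(5)
      base = np.array([[0, 0, 0], [1.1, 0, 0], [0, 1.1, 0], [0, 0, 1.1], [0.7, 0.7, 0.7]], float)
      mols = [(np.array([6, 1, 1, 1, 1]), base + 0.05 * rng.standard_normal(base.shape))
              for _ in range(50)]
      X = np.array([coulomb_eigenvalues(Z, p) for Z, p in mols])
      y = rng.normal(-40.0, 5.0, size=50)        # placeholder atomization energies
      predict = krr_fit(X, y)
      print(predict(X[0]), y[0])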

  3. A non-contact method based on multiple signal classification algorithm to reduce the measurement time for accurately heart rate detection

    Science.gov (United States)

    Bechet, P.; Mitran, R.; Munteanu, M.

    2013-08-01

    Non-contact methods for the assessment of vital signs are of great interest for specialists due to the benefits obtained in both medical and special applications, such as those for surveillance, monitoring, and search and rescue. This paper investigates the possibility of implementing a digital processing algorithm based on the MUSIC (Multiple Signal Classification) parametric spectral estimation in order to reduce the observation time needed to accurately measure the heart rate. It demonstrates that, by properly dimensioning the signal subspace, the MUSIC algorithm can be optimized in order to accurately assess the heart rate during an 8-28 s time interval. The validation of the processing algorithm performance was achieved by minimizing the mean error of the heart rate after performing simultaneous comparative measurements on several subjects. In order to calculate the error, the reference value of the heart rate was measured using a classic measurement system through direct contact.
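
    For illustration, the core of MUSIC is an eigendecomposition of the signal's sample correlation matrix followed by a search over the noise-subspace pseudospectrum. The sketch below estimates a single sinusoidal 'heart-rate' component from a short synthetic record, with the signal-subspace dimension set by hand, which is the tuning step discussed above.

      import numpy as np

      def music_spectrum(x, fs, m=40, p=2, freqs=None):
          """MUSIC pseudospectrum of a 1-D signal.

          m : correlation-matrix order,  p : assumed signal-subspace dimension
          (p = 2 for one real sinusoid, i.e. a pair of complex exponentials)
          """
          if freqs is None:
              freqs = np.linspace(0.5, 3.0, 500)                 # 30-180 bpm band
          # sample correlation matrix from overlapping snapshots
          snaps = np.array([x[i:i + m] for i in range(len(x) - m)])
          R = snaps.T @ snaps / len(snaps)
          eigval, eigvec = np.linalg.eigh(R)
          En = eigvec[:, :-p]                                    # noise subspace
          n = np.arange(m)
          P = []
          for f in freqs:
              a = np.exp(2j * np.pi * f / fs * n)                # steering vector
              P.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
          return freqs, np.array(P)

      # synthetic 10 s chest-motion record: 1.2 Hz (72 bpm) heartbeat plus noise
      fs = 50.0
      t = np.arange(0, 10, 1 / fs)
      x = 0.2 * np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.random.default_rng(6).standard_normal(t.size)
      f, P = music_spectrum(x - x.mean(), fs)
      print(f"estimated heart rate: {60 * f[np.argmax(P)]:.1f} bpm")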

  4. Blasting Vibration Safety Criterion Analysis with Equivalent Elastic Boundary: Based on Accurate Loading Model

    Directory of Open Access Journals (Sweden)

    Qingwen Li

    2015-01-01

    Full Text Available In tunnel and underground space engineering, a blasting wave will attenuate from a shock wave to a stress wave and then to an elastic seismic wave in the host rock. Correspondingly, the host rock will form a crushed zone, a fractured zone, and an elastic seismic zone under the blasting loading and waves. In this paper, an accurate mathematical dynamic loading model was built, and the crushed zone as well as the fractured zone was considered as the blasting vibration source, thus deducting the part of the energy consumed in breaking the host rock. This complicated dynamic problem of segmented differential blasting was therefore treated as an equivalent elastic boundary problem by taking advantage of Saint-Venant's Theorem. Finally, a 3D model in the finite element software FLAC3D, fed with the constitutive parameters, the uniformly distributed mutative loading, and the cylindrical attenuation law, was used to predict the velocity curves and effective tensile curves for calculating safety criterion formulas of the surrounding rock and tunnel liner, after being verified against the in situ monitoring data.

  5. New possibilities of accurate particle characterisation by applying direct boundary models to analytical centrifugation

    Science.gov (United States)

    Walter, Johannes; Thajudeen, Thaseem; Süß, Sebastian; Segets, Doris; Peukert, Wolfgang

    2015-04-01

    Analytical centrifugation (AC) is a powerful technique for the characterisation of nanoparticles in colloidal systems. As a direct and absolute technique it requires no calibration or measurements of standards. Moreover, it offers simple experimental design and handling, high sample throughput as well as moderate investment costs. However, the full potential of AC for nanoparticle size analysis requires the development of powerful data analysis techniques. In this study we show how the application of direct boundary models to AC data opens up new possibilities in particle characterisation. An accurate analysis method, successfully applied to sedimentation data obtained by analytical ultracentrifugation (AUC) in the past, was used for the first time in analysing AC data. Unlike traditional data evaluation routines for AC using a designated number of radial positions or scans, direct boundary models consider the complete sedimentation boundary, which results in significantly better statistics. We demonstrate that meniscus fitting, as well as the correction of radius and time invariant noise significantly improves the signal-to-noise ratio and prevents the occurrence of false positives due to optical artefacts. Moreover, hydrodynamic non-ideality can be assessed by the residuals obtained from the analysis. The sedimentation coefficient distributions obtained by AC are in excellent agreement with the results from AUC. Brownian dynamics simulations were used to generate numerical sedimentation data to study the influence of diffusion on the obtained distributions. Our approach is further validated using polystyrene and silica nanoparticles. In particular, we demonstrate the strength of AC for analysing multimodal distributions by means of gold nanoparticles.

  6. SPARC: Mass Models for 175 Disk Galaxies with Spitzer Photometry and Accurate Rotation Curves

    CERN Document Server

    Lelli, Federico; Schombert, James M

    2016-01-01

    We introduce SPARC (Spitzer Photometry & Accurate Rotation Curves): a sample of 175 nearby galaxies with new surface photometry at 3.6 um and high-quality rotation curves from previous HI/Halpha studies. SPARC spans a broad range of morphologies (S0 to Irr), luminosities (~5 dex), and surface brightnesses (~4 dex). We derive [3.6] surface photometry and study structural relations of stellar and gas disks. We find that both the stellar mass-HI mass relation and the stellar radius-HI radius relation have significant intrinsic scatter, while the HI mass-radius relation is extremely tight. We build detailed mass models and quantify the ratio of baryonic-to-observed velocity (Vbar/Vobs) for different characteristic radii and values of the stellar mass-to-light ratio (M/L) at [3.6]. Assuming M/L=0.5 Msun/Lsun (as suggested by stellar population models) we find that (i) the gas fraction linearly correlates with total luminosity, (ii) the transition from star-dominated to gas-dominated galaxies roughly correspond...

  7. ACCURATE UNIVERSAL MODELS FOR THE MASS ACCRETION HISTORIES AND CONCENTRATIONS OF DARK MATTER HALOS

    International Nuclear Information System (INIS)

    A large number of observations have constrained cosmological parameters and the initial density fluctuation spectrum to very high accuracy. However, cosmological parameters change with time and the power index of the power spectrum varies dramatically with mass scale in the so-called concordance ΛCDM cosmology. Thus, any successful model for its structural evolution should work well simultaneously for various cosmological models and different power spectra. We use a large set of high-resolution N-body simulations of a variety of structure formation models (scale-free, standard CDM, open CDM, and ΛCDM) to study the mass accretion histories, the mass and redshift dependence of concentrations, and the concentration evolution histories of dark matter halos. We find that there is significant disagreement between the much-used empirical models in the literature and our simulations. Based on our simulation results, we find that the mass accretion rate of a halo is tightly correlated with a simple function of its mass, the redshift, parameters of the cosmology, and of the initial density fluctuation spectrum, which correctly disentangles the effects of all these factors and halo environments. We also find that the concentration of a halo is strongly correlated with the universe age when its progenitor on the mass accretion history first reaches 4% of its current mass. According to these correlations, we develop new empirical models for both the mass accretion histories and the concentration evolution histories of dark matter halos, and the latter can also be used to predict the mass and redshift dependence of halo concentrations. These models are accurate and universal: the same set of model parameters works well for different cosmological models and for halos of different masses at different redshifts, and in the ΛCDM case the model predictions match the simulation results very well even though halo mass is traced to about 0.0005 times the final mass, when

  8. Inflation model building with an accurate measure of e-folding

    CERN Document Server

    Chongchitnan, Sirichai

    2016-01-01

    We revisit the problem of measuring the number of e-foldings during inflation. It has become standard practice to take the logarithmic growth of the scale factor as a measure of the amount of inflation. However, this is only an approximation of the true amount of inflation required to solve the horizon and flatness problems. The aim of this work is to quantify the error in this approximation and show how it can be avoided. We present an alternative framework for inflation model building using the inverse Hubble radius, aH, as the key parameter. We show that in this formalism the correct number of e-foldings arises naturally as a measure of inflation. As an application, we present an interesting model in which the entire inflationary dynamics can be solved analytically and exactly, and which, in special cases, reduces to the familiar class of power-law models.
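
    As a sketch of the distinction drawn above (using standard slow-roll definitions rather than anything taken from the paper itself), the usual scale-factor e-fold count differs from a count based on the growth of the inverse Hubble radius aH whenever H evolves:

```latex
N \;=\; \ln\frac{a_{\mathrm{end}}}{a_{\mathrm{i}}} \;=\; \int H\,\mathrm{d}t ,
\qquad
\tilde N \;=\; \ln\frac{(aH)_{\mathrm{end}}}{(aH)_{\mathrm{i}}}
       \;=\; \int \left(1-\epsilon_H\right)\mathrm{d}N ,
\qquad
\epsilon_H \equiv -\frac{\dot H}{H^{2}} .
```

    Since it is the comoving Hubble radius 1/(aH) that must shrink to solve the horizon and flatness problems, the second count is the more direct measure; the two coincide only in the limit of constant H.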

  9. The accurate simulation of the tension test for stainless steel sheet: the plasticity model

    International Nuclear Information System (INIS)

    Full text: The overall aim of this research project is to achieve an accurate simulation of a hydroforming process chain, in this case the manufacturing of a metal bellow. The work is done in cooperation with the project group for numerical research at the computer centre of the University of Karlsruhe, which is responsible for the simulation itself, while the Institute for Metal Forming Technology (IFU) of the University of Stuttgart is responsible for the material modeling and the resulting differential equations describing the material behavior. Hydroforming technology uses highly compressed fluid media (up to 4200 bar) to form the basic, mostly metallic, material. One hydroforming field is tube hydroforming (THF), which uses tubes or extrusions as basic material. The forming conditions created by hydroforming are quite different from those arising in other processes such as deep drawing. This is why currently available simulation software is not always able to produce satisfactory results when a hydroforming process is simulated. The partners in this project try to solve this problem with the FDEM simulation software, developed by W. Schoenauer at the University of Karlsruhe, Germany. It was designed to solve systems of partial differential equations, which in this project are supplied by the IFU. The manufacturing of a metal bellow by hydroforming leads to tensile stress in the longitudinal and tangential directions and to bending load due to the shifting and rollforming process. Therefore, as a first step, the standardized tensile test is simulated. For plastic deformation a material model developed by D. Banabic is used, which describes the plastic behavior of orthotropic sheet metal. For elastic deformation Hooke's law for isotropic materials is used. In continuous iteration with the simulation, the material model used has to be checked for validity and modified if necessary. Refs. 3 (author)

  10. Numerical simulation of pharyngeal airflow applied to obstructive sleep apnea: effect of the nasal cavity in anatomically accurate airway models.

    Science.gov (United States)

    Cisonni, Julien; Lucey, Anthony D; King, Andrew J C; Islam, Syed Mohammed Shamsul; Lewis, Richard; Goonewardene, Mithran S

    2015-11-01

    Repetitive brief episodes of soft-tissue collapse within the upper airway during sleep characterize obstructive sleep apnea (OSA), an extremely common and disabling disorder. Failure to maintain the patency of the upper airway is caused by the combination of sleep-related loss of compensatory dilator muscle activity and aerodynamic forces promoting closure. The prediction of soft-tissue movement in patient-specific airway 3D mechanical models is emerging as a useful contribution to clinical understanding and decision making. Such modeling requires reliable estimations of the pharyngeal wall pressure forces. While nasal obstruction has been recognized as a risk factor for OSA, the need to include the nasal cavity in upper-airway models for OSA studies requires consideration, as it is most often omitted because of its complex shape. A quantitative analysis of the flow conditions generated by the nasal cavity and the sinuses during inspiration upstream of the pharynx is presented. Results show that adequate velocity boundary conditions and simple artificial extensions of the flow domain can reproduce the essential effects of the nasal cavity on the pharyngeal flow field. Therefore, the overall complexity and computational cost of accurate flow predictions can be reduced.

  11. Warm gas in the rotating disk of the Red Rectangle: accurate models of molecular line emission

    CERN Document Server

    Bujarrabal, V

    2013-01-01

    We aim to study the excitation conditions of the molecular gas in the rotating disk of the Red Rectangle, the only post-Asymptotic-Giant-Branch object in which the existence of an equatorial rotating disk has been demonstrated. For this purpose, we developed a complex numerical code that accurately treats radiative transfer in 2-D, adapted to the study of molecular lines from rotating disks. We present far-infrared Herschel/HIFI observations of the 12CO and 13CO J=6-5, J=10-9, and J=16-15 transitions in the Red Rectangle. We also present our code in detail and discuss the accuracy of its predictions, from comparison with well-tested codes. Theoretical line profiles are compared with the empirical data to deduce the physical conditions in the disk by means of model fitting. We conclude that our code is very efficient and produces reliable results. The comparison of the theoretical predictions with our observations reveals that the temperature of the Red Rectangle disk is typically ~ 100-150 K, about twice as h...

  12. Accurate two-dimensional model of an arrayed-waveguide grating demultiplexer and optimal design based on the reciprocity theory.

    Science.gov (United States)

    Dai, Daoxin; He, Sailing

    2004-12-01

    An accurate two-dimensional (2D) model is introduced for the simulation of an arrayed-waveguide grating (AWG) demultiplexer by integrating the field distribution along the vertical direction. The equivalent 2D model has almost the same accuracy as the original three-dimensional model and is more accurate for the AWG considered here than the conventional 2D model based on the effective-index method. To further improve the computational efficiency, the reciprocity theory is applied to the optimal design of a flat-top AWG demultiplexer with a special input structure.

  13. Accurate modeling of cache replacement policies in a Data-Grid.

    Energy Technology Data Exchange (ETDEWEB)

    Otoo, Ekow J.; Shoshani, Arie

    2003-01-23

    Caching techniques have been used to narrow the performance gap between levels of storage hierarchies in computing systems. In data-intensive applications that access large data files over a wide area network environment, such as a data grid, a caching mechanism can significantly improve data access performance under appropriate workloads. In a data grid, it is envisioned that local disk storage resources retain or cache the data files being used by local applications. Under a workload of shared access and high locality of reference, the performance of caching techniques depends heavily on the replacement policies being used. A replacement policy effectively determines which set of objects must be evicted when space is needed. Unlike cache replacement policies in virtual memory paging or database buffering, developing an optimal replacement policy for data grids is complicated by the fact that the cached file objects have varying sizes and transfer and processing costs that vary with time. We present an accurate model for evaluating various replacement policies and propose a new replacement algorithm referred to as "Least Cost Beneficial based on K backward references" (LCB-K). Using this modeling technique, we compare LCB-K with various replacement policies such as Least Frequently Used (LFU), Least Recently Used (LRU), Greedy DualSize (GDS), etc., using synthetic and actual workloads of accesses to and from tertiary storage systems. The results obtained show that LCB-K and GDS are the most cost-effective cache replacement policies for storage resource management in data grids.
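
    The abstract does not spell out the LCB-K policy itself, so no attempt is made to reproduce it here; the sketch below shows GreedyDual-Size (GDS), one of the baseline policies the study compares against, with capacity, sizes and costs treated as abstract units.

```python
class GreedyDualSize:
    """Minimal GreedyDual-Size cache: each object carries a priority
    H = L + cost/size, where L is an inflation value that rises to the
    priority of the last evicted object, so long-idle objects lose ground."""

    def __init__(self, capacity):
        self.capacity = capacity   # total space available (abstract units)
        self.used = 0
        self.L = 0.0               # inflation value
        self.H = {}                # object id -> priority
        self.size = {}             # object id -> size
        self.cost = {}             # object id -> (re)fetch cost

    def access(self, obj, size, cost):
        """Return True on a hit; otherwise evict as needed and cache obj."""
        if obj in self.H:                        # hit: refresh the priority
            self.H[obj] = self.L + cost / size
            return True
        while self.used + size > self.capacity and self.H:
            victim = min(self.H, key=self.H.get)  # cheapest-to-lose object
            self.L = self.H.pop(victim)           # inflate L to its priority
            self.used -= self.size.pop(victim)
            self.cost.pop(victim)
        if size <= self.capacity:                 # skip objects too large to fit
            self.H[obj] = self.L + cost / size
            self.size[obj], self.cost[obj] = size, cost
            self.used += size
        return False
```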

  14. Stable, accurate and efficient computation of normal modes for horizontal stratified models

    Science.gov (United States)

    Wu, Bo; Chen, Xiaofei

    2016-08-01

    We propose an adaptive root-determining strategy that is very useful when dealing with trapped modes or Stoneley modes whose energies become very insignificant at the free surface in the presence of low-velocity layers or fluid layers in the model. Loss of modes in these cases, or inaccuracy in their calculation, may then be easily avoided. Built upon the generalized reflection/transmission coefficients, the concept of a 'family of secular functions', which we herein call 'adaptive mode observers', is naturally introduced to implement this strategy; the underlying idea has been distinctly noted here for the first time and may be generalized to other applications, such as free oscillations, or applied to other methods in use when these cases are encountered. Additionally, we have made further improvements upon the generalized reflection/transmission coefficient method; mode observers associated with only the free surface and low-velocity layers (and the fluid/solid interface if the model contains fluid layers) are adequate to guarantee both no loss and high precision of any physically existent modes without excessive calculations. Finally, the conventional definition of the fundamental mode is reconsidered, which is entailed by the cases under study. Some computational aspects are remarked on. With the additional help afforded by our superior root-searching scheme and the possibility of speeding up the calculation by using a smaller number of layers, aided by the concept of a 'turning point', our algorithm is remarkably efficient as well as stable and accurate and can be used as a powerful tool for widely related applications.

  15. Precise and accurate assessment of uncertainties in model parameters from stellar interferometry. Application to stellar diameters

    Science.gov (United States)

    Lachaume, Regis; Rabus, Markus; Jordan, Andres

    2015-08-01

    In stellar interferometry, the assumption that the observables can be treated as Gaussian, independent variables is the norm. In particular, neither the optical interferometry FITS (OIFITS) format nor the most popular fitting software in the field, LITpro, offers means to specify a covariance matrix or non-Gaussian uncertainties. Interferometric observables are, however, correlated by construction. Also, the calibration by an instrumental transfer function ensures that the resulting observables are not Gaussian, even if the uncalibrated ones happened to be so. While analytic frameworks have been published in the past, they are cumbersome and no generic implementation is available. We propose here a relatively simple way of dealing with correlated errors without the need to extend the OIFITS specification or to make Gaussian assumptions. By repeatedly picking at random which interferograms and which calibrator stars are used, and which errors are assumed on the calibrator diameters, and performing the data processing on the bootstrapped data, we derive a sampling of p(O), the multivariate probability density function (PDF) of the observables O. The results can be stored in a normal OIFITS file. Then, given a model m with parameters P predicting observables O = m(P), we can estimate the PDF of the model parameters f(P) = p(m(P)) by using a density estimation of the observables' PDF p. With observations repeated over different baselines, on nights several days apart, and with a significant set of calibrators, systematic errors are de facto taken into account. We apply the technique to a precise and accurate assessment of stellar diameters obtained at the Very Large Telescope Interferometer with PIONIER.
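
    The resampling idea lends itself to a compact sketch; the `process` callable, the data structures and the dictionary keys below are hypothetical placeholders for the reader's own calibration pipeline, not the authors' code.

```python
import numpy as np

def bootstrap_observables(process, interferograms, calibrators, n_boot=2000, rng=None):
    """Empirical sampling of the observables' PDF p(O) by bootstrap.

    process       -- user-supplied calibration pipeline: (ifg, cal) -> array of observables
    interferograms -- list of raw interferogram records
    calibrators    -- list of dicts with hypothetical keys "diameter" and "sigma"
    """
    rng = np.random.default_rng(rng)
    samples = []
    for _ in range(n_boot):
        # resample interferograms and calibrator stars with replacement
        ifg = [interferograms[i] for i in rng.integers(0, len(interferograms), len(interferograms))]
        cal = [calibrators[i] for i in rng.integers(0, len(calibrators), len(calibrators))]
        # perturb the calibrator diameters within their quoted uncertainties
        cal = [dict(c, diameter=rng.normal(c["diameter"], c["sigma"])) for c in cal]
        samples.append(process(ifg, cal))
    return np.asarray(samples)      # shape (n_boot, n_observables): sampling of p(O)
```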

  16. Studies of accurate multi-component lattice Boltzmann models on benchmark cases required for engineering applications

    CERN Document Server

    Otomo, Hiroshi; Li, Yong; Dressler, Marco; Staroselsky, Ilya; Zhang, Raoyang; Chen, Hudong

    2016-01-01

    We present recent developments in lattice Boltzmann modeling for multi-component flows, implemented on the platform of a general purpose, arbitrary geometry solver PowerFLOW. The presented benchmark cases demonstrate the method's accuracy and robustness necessary for handling real world engineering applications at practical resolution and computational cost. The key requirements for such an approach are that the relevant physical properties and flow characteristics do not strongly depend on numerics. In particular, the strength of surface tension obtained using our new approach is independent of viscosity and resolution, while the spurious currents are significantly suppressed. Using a much improved surface wetting model, undesirable numerical artifacts, including thin films and artificial droplet movement on inclined walls, are significantly reduced.

  17. Accurate Locally Conservative Discretizations for Modeling Multiphase Flow in Porous Media on General Hexahedra Grids

    KAUST Repository

    Wheeler, M.F.

    2010-09-06

    For many years there have been formulations considered for modeling single phase flow on general hexahedra grids. These include the extended mixed finite element method, and families of mimetic finite difference methods. In most of these schemes either no rate of convergence of the algorithm has been demonstrated both theoretically and computationally, or a more complicated saddle point system needs to be solved for an accurate solution. Here we describe a multipoint flux mixed finite element (MFMFE) method [5, 2, 3]. This method is motivated from the multipoint flux approximation (MPFA) method [1]. The MFMFE method is locally conservative with continuous flux approximations and is a cell-centered scheme for the pressure. Compared to the MPFA method, the MFMFE has a variational formulation, since it can be viewed as a mixed finite element with special approximating spaces and quadrature rules. The framework allows handling of hexahedral grids with non-planar faces by applying trilinear mappings from physical elements to reference cubic elements. In addition, there are several multiscale and multiphysics extensions such as the mortar mixed finite element method that allows the treatment of non-matching grids [4]. Extensions to the two-phase oil-water flow are considered. We reformulate the two-phase model in terms of total velocity, capillary velocity, water pressure, and water saturation. We choose water pressure and water saturation as primary variables. The total velocity is driven by the gradient of the water pressure and total mobility. Iterative coupling scheme is employed for the coupled system. This scheme allows treatments of different time scales for the water pressure and water saturation. In each time step, we first solve the pressure equation using the MFMFE method; we then

  18. Reduced order model of draft tube flow

    Science.gov (United States)

    Rudolf, P.; Štefan, D.

    2014-03-01

    Swirling flow with compact coherent structures is a very good candidate for proper orthogonal decomposition (POD), i.e. for decomposition into eigenmodes, which are the cornerstones of the flow field. The present paper focuses on POD of steady flows corresponding to different operating points of Francis turbine draft tube flow. A set of eigenmodes is built using a limited number of snapshots from computational simulations. The resulting reduced order model (ROM) describes the whole operating range of the draft tube. The ROM makes it possible to interpolate between the operating points, exploiting the knowledge of the significance of particular eigenmodes, and thus to reconstruct the velocity field at any operating point within the given range. A practical example, which employs axisymmetric simulations of the draft tube flow, illustrates the accuracy of the ROM in regions without vortex breakdown, together with the need for higher resolution of the snapshot database close to locations of sudden flow changes (e.g. vortex breakdown). A ROM based on POD interpolation is a very suitable tool for insight into the flow physics of draft tube flows (especially energy transfer between different operating points), for supplying data for subsequent stability analysis, or as an initialization database for advanced flow simulations.
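
    A minimal sketch of the snapshot-POD/interpolation idea follows, assuming the operating point is described by a single scalar parameter and that modal amplitudes are interpolated linearly between snapshots; the paper's own interpolation strategy may differ.

```python
import numpy as np

def pod_basis(snapshots, r):
    """snapshots: (n_dof, n_snap) matrix of velocity fields sampled at the
    operating points; returns the first r POD modes and their amplitudes."""
    U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
    modes = U[:, :r]                        # spatial eigenmodes
    coeffs = np.diag(s[:r]) @ Vt[:r, :]     # modal amplitudes per snapshot
    return modes, coeffs

def rom_interpolate(modes, coeffs, op_points, q):
    """Reconstruct the field at a new operating point q by interpolating the
    modal amplitudes; op_points must be sorted in increasing order."""
    a = np.array([np.interp(q, op_points, coeffs[k, :]) for k in range(coeffs.shape[0])])
    return modes @ a
```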

  19. Toward accurate tooth segmentation from computed tomography images using a hybrid level set model

    Energy Technology Data Exchange (ETDEWEB)

    Gan, Yangzhou; Zhao, Qunfei [Department of Automation, Shanghai Jiao Tong University, and Key Laboratory of System Control and Information Processing, Ministry of Education of China, Shanghai 200240 (China); Xia, Zeyang, E-mail: zy.xia@siat.ac.cn, E-mail: jing.xiong@siat.ac.cn; Hu, Ying [Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, and The Chinese University of Hong Kong, Shenzhen 518055 (China); Xiong, Jing, E-mail: zy.xia@siat.ac.cn, E-mail: jing.xiong@siat.ac.cn [Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 510855 (China); Zhang, Jianwei [TAMS, Department of Informatics, University of Hamburg, Hamburg 22527 (Germany)

    2015-01-15

    Purpose: A three-dimensional (3D) model of the teeth provides important information for orthodontic diagnosis and treatment planning. Tooth segmentation is an essential step in generating the 3D digital model from computed tomography (CT) images. The aim of this study is to develop an accurate and efficient tooth segmentation method from CT images. Methods: The 3D dental CT volumetric images are segmented slice by slice in a two-dimensional (2D) transverse plane. The 2D segmentation is composed of a manual initialization step and an automatic slice by slice segmentation step. In the manual initialization step, the user manually picks a starting slice and selects a seed point for each tooth in this slice. In the automatic slice segmentation step, a developed hybrid level set model is applied to segment tooth contours from each slice. Tooth contour propagation strategy is employed to initialize the level set function automatically. Cone beam CT (CBCT) images of two subjects were used to tune the parameters. Images of 16 additional subjects were used to validate the performance of the method. Volume overlap metrics and surface distance metrics were adopted to assess the segmentation accuracy quantitatively. The volume overlap metrics were volume difference (VD, mm³) and Dice similarity coefficient (DSC, %). The surface distance metrics were average symmetric surface distance (ASSD, mm), RMS (root mean square) symmetric surface distance (RMSSSD, mm), and maximum symmetric surface distance (MSSD, mm). Computation time was recorded to assess the efficiency. The performance of the proposed method has been compared with two state-of-the-art methods. Results: For the tested CBCT images, the VD, DSC, ASSD, RMSSSD, and MSSD for the incisor were 38.16 ± 12.94 mm³, 88.82 ± 2.14%, 0.29 ± 0.03 mm, 0.32 ± 0.08 mm, and 1.25 ± 0.58 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the canine were 49.12 ± 9.33 mm³, 91.57 ± 0.82%, 0.27 ± 0.02 mm, 0

  20. A fast and accurate implementation of tunable algorithms used for generation of fractal-like aggregate models

    Science.gov (United States)

    Skorupski, Krzysztof; Mroczka, Janusz; Wriedt, Thomas; Riefler, Norbert

    2014-06-01

    In many branches of science experiments are expensive, require specialist equipment or are very time consuming. Studying the light scattering phenomenon by fractal aggregates can serve as an example. Light scattering simulations can overcome these problems and provide theoretical, additional data that complete our study. For this reason a fractal-like aggregate model as well as fast aggregation codes are needed. Until now, various computer models that try to mimic the physics behind this phenomenon have been developed. However, their implementations are mostly based on a trial-and-error procedure. Such an approach is very time consuming, and the morphological parameters of the resulting aggregates are not exact because the postconditions (e.g. the position error) cannot be very strict. In this paper we present a very fast and accurate implementation of a tunable aggregation algorithm based on the work of Filippov et al. (2000). Randomization is reduced to its necessary minimum (our technique can be more than 1000 times faster than standard algorithms) and the position of a new particle, or a cluster, is calculated with algebraic methods. Therefore, the postconditions can be extremely strict and the resulting errors negligible (e.g. the position error can be regarded as non-existent). In our paper two different methods, based on the particle-cluster (PC) and the cluster-cluster (CC) aggregation processes, are presented.
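
    The paper's algebraic placement formulas are not reproduced in the abstract; as a minimal illustration, the sketch below only evaluates the fractal scaling law N = kf (Rg/a)^Df that such tunable algorithms enforce, which is the kind of postcondition the authors tighten (point-mass approximation for the primary particles assumed).

```python
import numpy as np

def radius_of_gyration(centers):
    """Radius of gyration of an aggregate of equal primary particles
    (point-mass approximation)."""
    com = centers.mean(axis=0)
    return np.sqrt(((centers - com) ** 2).sum(axis=1).mean())

def scaling_error(centers, a, kf, Df):
    """Relative deviation from the fractal scaling law N = kf * (Rg / a)**Df.

    centers -- (N, 3) primary-particle centres; a -- primary-particle radius;
    kf, Df  -- fractal prefactor and dimension (target morphology).
    """
    N = len(centers)
    Rg = radius_of_gyration(centers)
    return abs(kf * (Rg / a) ** Df - N) / N

# A strict postcondition would require scaling_error(...) to stay below a
# tiny tolerance after every particle or cluster is added.
```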

  1. A semi-implicit, second-order-accurate numerical model for multiphase underexpanded volcanic jets

    Directory of Open Access Journals (Sweden)

    S. Carcano

    2013-11-01

    Full Text Available An improved version of the PDAC (Pyroclastic Dispersal Analysis Code, Esposti Ongaro et al., 2007) numerical model for the simulation of multiphase volcanic flows is presented and validated for the simulation of multiphase volcanic jets in supersonic regimes. The present version of PDAC includes second-order time and space discretizations and fully multidimensional advection discretizations in order to reduce numerical diffusion and enhance the accuracy of the original model. The model is tested on the problem of jet decompression in both two and three dimensions. For homogeneous jets, numerical results are consistent with experimental results at the laboratory scale (Lewis and Carlson, 1964). For nonequilibrium gas–particle jets, we consider monodisperse and bidisperse mixtures, and we quantify nonequilibrium effects in terms of the ratio between the particle relaxation time and a characteristic jet timescale. For coarse particles and low particle load, numerical simulations reproduce well both laboratory experiments and numerical simulations carried out with an Eulerian–Lagrangian model (Sommerfeld, 1993). At the volcanic scale, we consider steady-state conditions associated with the development of Vulcanian and sub-Plinian eruptions. For the finest particles produced in these regimes, we demonstrate that the solid phase is in mechanical and thermal equilibrium with the gas phase and that the jet decompression structure is well described by a pseudogas model (Ogden et al., 2008). Coarse particles, on the other hand, display significant nonequilibrium effects, which are associated with their larger relaxation time. Deviations from the equilibrium regime, with maximum velocity and temperature differences on the order of 150 m s−1 and 80 K across shock waves, occur especially during the rapid acceleration phases, and are able to substantially modify the jet dynamics with respect to the homogeneous case.

  2. PSI/TM-Coffee: a web server for fast and accurate multiple sequence alignments of regular and transmembrane proteins using homology extension on reduced databases.

    Science.gov (United States)

    Floden, Evan W; Tommaso, Paolo D; Chatzou, Maria; Magis, Cedrik; Notredame, Cedric; Chang, Jia-Ming

    2016-07-01

    The PSI/TM-Coffee web server performs multiple sequence alignment (MSA) of proteins by combining homology extension with a consistency-based alignment approach. Homology extension is performed with Position Specific Iterative (PSI) BLAST searches against a choice of redundant and non-redundant databases. The main novelty of this server is to allow databases of reduced complexity to rapidly perform homology extension. The server also offers transmembrane protein (TMP) reference databases to allow even faster homology extension for this important category of proteins. Aside from an MSA, the server also outputs the topological prediction of TMPs using the HMMTOP algorithm. Previous benchmarking of the method has shown that this approach outperforms the most accurate alignment methods such as MSAProbs, Kalign, PROMALS, MAFFT, ProbCons and PRALINE™. The web server is available at http://tcoffee.crg.cat/tmcoffee. PMID:27106060

  3. The development and verification of a highly accurate collision prediction model for automated noncoplanar plan delivery

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Victoria Y.; Tran, Angelia; Nguyen, Dan; Cao, Minsong; Ruan, Dan; Low, Daniel A.; Sheng, Ke, E-mail: ksheng@mednet.ucla.edu [Department of Radiation Oncology, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, California 90024 (United States)

    2015-11-15

    Purpose: Significant dosimetric benefits had been previously demonstrated in highly noncoplanar treatment plans. In this study, the authors developed and verified an individualized collision model for the purpose of delivering highly noncoplanar radiotherapy and tested the feasibility of total delivery automation with Varian TrueBeam developer mode. Methods: A hand-held 3D scanner was used to capture the surfaces of an anthropomorphic phantom and a human subject, which were positioned with a computer-aided design model of a TrueBeam machine to create a detailed virtual geometrical collision model. The collision model included gantry, collimator, and couch motion degrees of freedom. The accuracy of the 3D scanner was validated by scanning a rigid cubical phantom with known dimensions. The collision model was then validated by generating 300 linear accelerator orientations corresponding to 300 gantry-to-couch and gantry-to-phantom distances, and comparing the corresponding distance measurements to their corresponding models. The linear accelerator orientations reflected uniformly sampled noncoplanar beam angles to the head, lung, and prostate. The distance discrepancies between measurements on the physical and virtual systems were used to estimate treatment-site-specific safety buffer distances with 0.1%, 0.01%, and 0.001% probability of collision between the gantry and couch or phantom. Plans containing 20 noncoplanar beams to the brain, lung, and prostate optimized via an in-house noncoplanar radiotherapy platform were converted into XML script for automated delivery and the entire delivery was recorded and timed to demonstrate the feasibility of automated delivery. Results: The 3D scanner measured the dimension of the 14 cm cubic phantom within 0.5 mm. The maximal absolute discrepancy between machine and model measurements for gantry-to-couch and gantry-to-phantom was 0.95 and 2.97 cm, respectively. The reduced accuracy of gantry-to-phantom measurements was

  4. Reduced form models of bond portfolios

    OpenAIRE

    Matti Koivu; Teemu Pennanen

    2010-01-01

    We derive simple return models for several classes of bond portfolios. With only one or two risk factors our models are able to explain most of the return variations in portfolios of fixed rate government bonds, inflation linked government bonds and investment grade corporate bonds. The underlying risk factors have natural interpretations which make the models well suited for risk management and portfolio design.

  5. Applicability of CFD Modelling in Determining Accurate Weir Discharge: Water Level Relationships

    NARCIS (Netherlands)

    Rombouts, P.M.M.; Tralli, A.; Langeveld, J.G.; Verhaart, F.; Clemens, F.H.L.R.

    2014-01-01

    Being able to accurately determine weir discharges is of key importance in urban water management. The most common method is performing a level measurement and calculating the discharge using the standard weir equation. Since this equation is only valid in certain conditions, this can lead to large

  6. Finite Element Modelling of a Field-Sensed Magnetic Suspended System for Accurate Proximity Measurement Based on a Sensor Fusion Algorithm with Unscented Kalman Filter.

    Science.gov (United States)

    Chowdhury, Amor; Sarjaš, Andrej

    2016-01-01

    The presented paper describes accurate distance measurement for a field-sensed magnetic suspension system. The proximity measurement is based on a Hall effect sensor. The proximity sensor is installed directly on the lower surface of the electro-magnet, which means that it is very sensitive to external magnetic influences and disturbances. External disturbances interfere with the information signal and reduce the usability and reliability of the proximity measurements and, consequently, the whole application operation. A sensor fusion algorithm is deployed for the aforementioned reasons. The sensor fusion algorithm is based on the Unscented Kalman Filter, where a nonlinear dynamic model was derived with the Finite Element Modelling approach. The advantage of such modelling is a more accurate dynamic model parameter estimation, especially in the case when the real structure, materials and dimensions of the real-time application are known. The novelty of the paper is the design of a compact electro-magnetic actuator with a built-in low cost proximity sensor for accurate proximity measurement of the magnetic object. The paper successively presents a modelling procedure with the finite element method, design and parameter settings of a sensor fusion algorithm with Unscented Kalman Filter and, finally, the implementation procedure and results of real-time operation. PMID:27649197

  7. Finite Element Modelling of a Field-Sensed Magnetic Suspended System for Accurate Proximity Measurement Based on a Sensor Fusion Algorithm with Unscented Kalman Filter.

    Science.gov (United States)

    Chowdhury, Amor; Sarjaš, Andrej

    2016-09-15

    The presented paper describes accurate distance measurement for a field-sensed magnetic suspension system. The proximity measurement is based on a Hall effect sensor. The proximity sensor is installed directly on the lower surface of the electro-magnet, which means that it is very sensitive to external magnetic influences and disturbances. External disturbances interfere with the information signal and reduce the usability and reliability of the proximity measurements and, consequently, the whole application operation. A sensor fusion algorithm is deployed for the aforementioned reasons. The sensor fusion algorithm is based on the Unscented Kalman Filter, where a nonlinear dynamic model was derived with the Finite Element Modelling approach. The advantage of such modelling is a more accurate dynamic model parameter estimation, especially in the case when the real structure, materials and dimensions of the real-time application are known. The novelty of the paper is the design of a compact electro-magnetic actuator with a built-in low cost proximity sensor for accurate proximity measurement of the magnetic object. The paper successively presents a modelling procedure with the finite element method, design and parameter settings of a sensor fusion algorithm with Unscented Kalman Filter and, finally, the implementation procedure and results of real-time operation.

  9. Finite Element Modelling of a Field-Sensed Magnetic Suspended System for Accurate Proximity Measurement Based on a Sensor Fusion Algorithm with Unscented Kalman Filter

    Directory of Open Access Journals (Sweden)

    Amor Chowdhury

    2016-09-01

    Full Text Available The presented paper describes accurate distance measurement for a field-sensed magnetic suspension system. The proximity measurement is based on a Hall effect sensor. The proximity sensor is installed directly on the lower surface of the electro-magnet, which means that it is very sensitive to external magnetic influences and disturbances. External disturbances interfere with the information signal and reduce the usability and reliability of the proximity measurements and, consequently, the whole application operation. A sensor fusion algorithm is deployed for the aforementioned reasons. The sensor fusion algorithm is based on the Unscented Kalman Filter, where a nonlinear dynamic model was derived with the Finite Element Modelling approach. The advantage of such modelling is a more accurate dynamic model parameter estimation, especially in the case when the real structure, materials and dimensions of the real-time application are known. The novelty of the paper is the design of a compact electro-magnetic actuator with a built-in low cost proximity sensor for accurate proximity measurement of the magnetic object. The paper successively presents a modelling procedure with the finite element method, design and parameter settings of a sensor fusion algorithm with Unscented Kalman Filter and, finally, the implementation procedure and results of real-time operation.
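
    The abstract does not give the FEM-derived dynamic model or the Hall-sensor measurement law, so they are not reproduced here; the sketch below is a generic unscented Kalman filter predict/update step into which such models would be plugged, with a hypothetical inverse-square measurement function shown only as an illustration.

```python
import numpy as np

def sigma_points(x, P, alpha=1.0, beta=2.0, kappa=0.0):
    """Symmetric sigma-point set and weights for the unscented transform."""
    n = x.size
    lam = alpha ** 2 * (n + kappa) - n
    L = np.linalg.cholesky((n + lam) * P)
    pts = [x] + [x + L[:, i] for i in range(n)] + [x - L[:, i] for i in range(n)]
    Wm = np.full(2 * n + 1, 0.5 / (n + lam))
    Wc = Wm.copy()
    Wm[0] = lam / (n + lam)
    Wc[0] = lam / (n + lam) + (1.0 - alpha ** 2 + beta)
    return np.array(pts), Wm, Wc

def ukf_step(x, P, z, f, h, Q, R):
    """One predict/update cycle; f maps the state forward, h maps state to
    measurement (h must return a 1-D array), Q and R are noise covariances."""
    X, Wm, Wc = sigma_points(x, P)                          # --- predict ---
    Xf = np.array([f(s) for s in X])
    x_pred = Wm @ Xf
    P_pred = Q + sum(w * np.outer(d, d) for w, d in zip(Wc, Xf - x_pred))
    X, Wm, Wc = sigma_points(x_pred, P_pred)                # --- update ---
    Zs = np.array([h(s) for s in X])
    z_pred = Wm @ Zs
    S = R + sum(w * np.outer(d, d) for w, d in zip(Wc, Zs - z_pred))
    C = sum(w * np.outer(dx, dz) for w, dx, dz in zip(Wc, X - x_pred, Zs - z_pred))
    K = C @ np.linalg.inv(S)
    return x_pred + K @ (np.atleast_1d(z) - z_pred), P_pred - K @ S @ K.T

# Hypothetical measurement model: a Hall-sensor reading falling off roughly
# with the inverse square of the gap d, for a state x = [gap, gap_rate].
# h = lambda x: np.array([k_sensor / (x[0] + d_offset) ** 2])
```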

  10. Fast and accurate conversion of atomic models into electron density maps

    Directory of Open Access Journals (Sweden)

    Carlos O.S. Sorzano

    2015-03-01

    Full Text Available New image processing methodologies and algorithms have greatly contributed to the significant progress in three-dimensional electron microscopy (3DEM) of biological complexes we have seen over the last decades. Naturally, the availability of accurate procedures for the objective testing of new algorithms is a crucial requirement for the further advancement of the field. A good and accepted testing workflow involves the generation of realistic 3DEM-like maps of biological macromolecules from which some measure of ground truth can be derived, ideally because their 3D atomic structure is already known. In this work we propose a very accurate generation of maps using atomic form factors for electron scattering. We thoroughly review current approaches in the field, quantitatively demonstrating the benefits of the new methodology. Additionally, we study a concrete example of the use of this approach for hypothesis testing in 3D Electron Microscopy.
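
    A toy version of the map-generation step can be written in a few lines; the sketch below places a single isotropic Gaussian per atom, weighted by an arbitrary scalar such as the atomic number, which is a deliberate simplification of the electron-scattering atomic form factors the paper advocates.

```python
import numpy as np

def atoms_to_map(coords, weights, box=64, voxel=1.0, sigma=1.5):
    """Toy density map: one isotropic Gaussian of width sigma per atom.

    coords  -- (N, 3) atomic coordinates (same length units as voxel)
    weights -- per-atom scalar weights, e.g. atomic numbers
    """
    axes = (np.arange(box) - box / 2.0) * voxel
    X, Y, Z = np.meshgrid(axes, axes, axes, indexing="ij")
    rho = np.zeros((box, box, box))
    for (x, y, z), w in zip(coords, weights):
        r2 = (X - x) ** 2 + (Y - y) ** 2 + (Z - z) ** 2
        rho += w * np.exp(-r2 / (2.0 * sigma ** 2))
    return rho
```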

  11. A simple and accurate model for Love wave based sensors: Dispersion equation and mass sensitivity

    OpenAIRE

    Jiansheng Liu

    2014-01-01

    Dispersion equation is an important tool for analyzing propagation properties of acoustic waves in layered structures. For Love wave (LW) sensors, the dispersion equation with an isotropic-considered substrate is too rough to get accurate solutions; the full dispersion equation with a piezoelectric-considered substrate is too complicated to get simple and practical expressions for optimizing LW-based sensors. In this work, a dispersion equation is introduced for Love waves in a layered struct...

  12. Causal transmission in reduced-form models

    OpenAIRE

    Vassili Bazinas; Bent Nielsen

    2015-01-01

    We propose a method to explore the causal transmission of a catalyst variable through two endogenous variables of interest. The method is based on the reduced-form system formed from the conditional distribution of the two endogenous variables given the catalyst. The method combines elements from instrumental variable analysis and Cholesky decomposition of structural vector autoregressions. We give conditions for uniqueness of the causal transmission.

  13. Refinement of reduced-models for dynamic systems

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    A refinement procedure for the reduced models of structural dynamic systems is presented in this article. The refinement procedure is to "tune" the parameters of a reduced model, which could be obtained from any traditional model reduction scheme, into an improved reduced model. Upon the completion of the refinement, the improved reduced model matches the dynamic characteristics - the chosen structural frequencies and their mode shapes - of the full order model. Mathematically, the procedure to implement the model refinement technique is an application of the recently developed cross-model cross-mode (CMCM) method for model updating. A numerical example of reducing a 5-DOF (degree-of-freedom) classical mass-spring (or shear-building) model into a 3-DOF generalized mass-spring model is demonstrated in this article.
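
    The CMCM refinement itself is not detailed in the abstract; as context for the numerical example it mentions, the following is a generic modal-truncation reduction of a 5-DOF shear-building model to 3 DOFs, with hypothetical mass and stiffness values.

```python
import numpy as np
from scipy.linalg import eigh

def modal_reduction(M, K, r):
    """Generic modal truncation (not the CMCM refinement itself): keep the r
    lowest modes of the full model and project M and K onto them."""
    w2, Phi = eigh(K, M)            # generalized eigenproblem K*phi = w^2 * M*phi
    T = Phi[:, :r]                  # retained mode shapes (lowest r frequencies)
    return T.T @ M @ T, T.T @ K @ T, T

# 5-DOF shear-building example with hypothetical equal masses and stiffnesses
m, k, n = 1.0, 1.0e4, 5
M = m * np.eye(n)
K = k * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
K[-1, -1] = k                       # free top storey
Mr, Kr, T = modal_reduction(M, K, 3)   # 3-DOF generalized mass/stiffness matrices
```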

  14. Reducing the Ising model to matchings

    CERN Document Server

    Huber, Mark

    2009-01-01

    Canonical paths is one of the most powerful tools available for showing that a Markov chain is rapidly mixing, thereby enabling approximate sampling from complex high-dimensional distributions. Two success stories for the canonical paths method are chains for drawing matchings in a graph, and a chain for a version of the Ising model called the subgraphs world. In this paper, it is shown that a subgraphs world draw can be obtained by taking a draw from matchings on a graph whose size is linear in the size of the original graph. This provides a partial answer to why canonical paths works so well for both problems, as well as providing a new source of algorithms for the Ising model. For instance, this new reduction immediately yields a fully polynomial time approximation scheme for the Ising model on a bounded degree graph when the magnetization is bounded away from 0.

  15. Surface electron density models for accurate ab initio molecular dynamics with electronic friction

    Science.gov (United States)

    Novko, D.; Blanco-Rey, M.; Alducin, M.; Juaristi, J. I.

    2016-06-01

    Ab initio molecular dynamics with electronic friction (AIMDEF) is a valuable methodology for studying the interaction of atomic particles with metal surfaces. This method, in which the effect of low-energy electron-hole (e-h) pair excitations is treated within the local density friction approximation (LDFA) [Juaristi et al., Phys. Rev. Lett. 100, 116102 (2008), 10.1103/PhysRevLett.100.116102], can provide an accurate description of both e-h pair and phonon excitations. In practice, its applicability becomes a complicated task in situations of substantial surface atom displacements because the LDFA requires knowledge of the bare surface electron density at each integration step. In this work, we propose three different methods of calculating on-the-fly the electron density of the distorted surface and we discuss their suitability under typical surface distortions. The investigated methods are used in AIMDEF simulations for three illustrative adsorption cases, namely, dissociated H2 on Pd(100), N on Ag(111), and N2 on Fe(110). Our AIMDEF calculations performed with the three approaches highlight the importance of going beyond the frozen surface density to accurately describe the energy released into e-h pair excitations in the case of large surface atom displacements.

  16. Effective and accurate approach for modeling of commensurate-incommensurate transition in krypton monolayer on graphite.

    Science.gov (United States)

    Ustinov, E A

    2014-10-01

    The commensurate-incommensurate (C-IC) transition of a krypton molecular layer on graphite has received much attention in recent decades in theoretical and experimental research. However, there still exists a possibility of generalizing the phenomenon from a thermodynamic viewpoint on the basis of accurate molecular simulation. Recently, a new technique was developed for the analysis of two-dimensional (2D) phase transitions in systems involving a crystalline phase, which is based on accounting for the effect of temperature and the chemical potential on the lattice constant of the 2D layer using the Gibbs-Duhem equation [E. A. Ustinov, J. Chem. Phys. 140, 074706 (2014)]. The technique has allowed for the determination of phase diagrams of 2D argon layers on a uniform surface and in slit pores. This paper extends the developed methodology to systems accounting for the periodic modulation of the substrate potential. The main advantage of the developed approach is that it provides a highly accurate evaluation of the chemical potential of crystalline layers, which allows reliable determination of the temperature and other parameters of various 2D phase transitions. The applicability of the methodology is demonstrated on the krypton-graphite system. An analysis of the phase diagram of the krypton molecular layer, the thermodynamic functions of coexisting phases, and a method for predicting adsorption isotherms are considered, accounting for compression of the graphite due to the krypton-carbon interaction. The temperature and heat of the C-IC transition have been reliably determined for the gas-solid and solid-solid systems.

  17. Accurate and fast table look-up models for leakage current analysis in 65 nm CMOS technology

    Institute of Scientific and Technical Information of China (English)

    薛冀颖; 李涛; 余志平

    2009-01-01

    Novel physical models for leakage current analysis in 65 nm technology are proposed. Taking into consideration the process variations and emerging effects in nano-scaled technology, the presented models are capable of accurately estimating the subthreshold leakage current and junction tunneling leakage current in 65 nm technology. Based on the physical models, new table look-up models are developed and first applied to leakage current analysis in pursuit of higher simulation speed. Simulation results show that the novel physical models are in excellent agreement with the data measured from the foundry in the 65 nm process, and the proposed table look-up models can provide great computational efficiency by using suitable interpolation techniques. Compared with the traditional physical-based models, the table look-up models can achieve 2.5X speedup on average on a variety of industry circuits.
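
    The structure of the tables is not given in the abstract, so the sketch below only illustrates the table look-up idea with bilinear interpolation over two hypothetical axes (supply voltage and temperature); the paper's tables are generated from its physical 65 nm leakage models and may use different axes and interpolation schemes.

```python
import numpy as np

def leakage_lookup(table, vdd_axis, temp_axis, vdd, temp):
    """Bilinear interpolation in a pre-characterized leakage table.

    table     -- 2-D array of leakage values on the (vdd_axis, temp_axis) grid
    vdd_axis, temp_axis -- monotonically increasing grid axes
    """
    i = int(np.clip(np.searchsorted(vdd_axis, vdd) - 1, 0, len(vdd_axis) - 2))
    j = int(np.clip(np.searchsorted(temp_axis, temp) - 1, 0, len(temp_axis) - 2))
    tv = (vdd - vdd_axis[i]) / (vdd_axis[i + 1] - vdd_axis[i])
    tt = (temp - temp_axis[j]) / (temp_axis[j + 1] - temp_axis[j])
    return ((1 - tv) * (1 - tt) * table[i, j] + tv * (1 - tt) * table[i + 1, j]
            + (1 - tv) * tt * table[i, j + 1] + tv * tt * table[i + 1, j + 1])
```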

  18. Can crop-climate models be accurate and precise? A case study for wheat production in Denmark

    DEFF Research Database (Denmark)

    Montesino San Martin, Manuel; Olesen, Jørgen E.; Porter, John Roy

    2015-01-01

    and mechanistic wheat models to assess how differences in the extent of process understanding in models affects uncertainties in projected impact. Predictive power of the models was tested via both accuracy (bias) and precision (or tightness of grouping) of yield projections for extrapolated weather conditions....... Yields predicted by the mechanistic model were generally more accurate than the empirical models for extrapolated conditions. This trend does not hold for all extrapolations; mechanistic and empirical models responded differently due to their sensitivities to distinct weather features. However, higher...... accuracy comes at the cost of precision of the mechanistic model to embrace all observations within given boundaries. The approaches showed complementarity in sensitivity to weather variables and in accuracy for different extrapolation domains. Their differences in model precision and accuracy make them...

  19. An accurate elasto-plastic frictional tangential force displacement model for granular-flow simulations: Displacement-driven formulation

    Science.gov (United States)

    Zhang, Xiang; Vu-Quoc, Loc

    2007-07-01

    We present in this paper the displacement-driven version of a tangential force-displacement (TFD) model that accounts for both elastic and plastic deformations together with interfacial friction occurring in collisions of spherical particles. This elasto-plastic frictional TFD model, with its force-driven version presented in [L. Vu-Quoc, L. Lesburg, X. Zhang. An accurate tangential force-displacement model for granular-flow simulations: contacting spheres with plastic deformation, force-driven formulation, Journal of Computational Physics 196(1) (2004) 298-326], is consistent with the elasto-plastic frictional normal force-displacement (NFD) model presented in [L. Vu-Quoc, X. Zhang. An elasto-plastic contact force-displacement model in the normal direction: displacement-driven version, Proceedings of the Royal Society of London, Series A 455 (1991) 4013-4044]. Both the NFD model and the present TFD model are based on the concept of additive decomposition of the radius of contact area into an elastic part and a plastic part. The effect of permanent indentation after impact is represented by a correction to the radius of curvature. The effect of material softening due to plastic flow is represented by a correction to the elastic moduli. The proposed TFD model is accurate, and is validated against nonlinear finite element analyses involving plastic flows in both the loading and unloading conditions. The proposed consistent displacement-driven, elasto-plastic NFD and TFD models are designed for implementation in computer codes using the discrete-element method (DEM) for granular-flow simulations. The model is shown to be accurate and is validated against nonlinear elasto-plastic finite-element analysis.

  20. Multi Sensor Data Integration for AN Accurate 3d Model Generation

    Science.gov (United States)

    Chhatkuli, S.; Satoh, T.; Tachibana, K.

    2015-05-01

    The aim of this paper is to introduce a novel technique for data integration between two different data sets, i.e. a laser scanned RGB point cloud and a 3D model derived from oblique imageries, to create a 3D model with more details and better accuracy. In general, aerial imageries are used to create a 3D city model. Aerial imageries produce overall decent 3D city models and are generally suited to generating 3D models of building roofs and some non-complex terrain. However, the 3D model automatically generated from aerial imageries generally suffers from a lack of accuracy in deriving the 3D model of roads under bridges, details under tree canopy, isolated trees, etc. Moreover, the automatically generated 3D model from aerial imageries also suffers in many cases from undulated road surfaces, non-conforming building shapes, and loss of minute details like street furniture. On the other hand, laser scanned data and images taken from a mobile vehicle platform can produce more detailed 3D road models, street furniture models, 3D models of details under bridges, etc. However, laser scanned data and images from a mobile vehicle are not suitable for acquiring detailed 3D models of tall buildings, roof tops, and so forth. Our proposed approach to integrating multi-sensor data compensates for each data set's weaknesses and helped to create a very detailed 3D model with better accuracy. Moreover, additional details like isolated trees, street furniture, etc., which were missing in the original 3D model derived from aerial imageries, could also be integrated into the final model automatically. During the process, the noise in the laser scanned data, for example people and vehicles on the road, was also automatically removed. Hence, even though the two data sets were acquired in different time periods, the integrated data set, or the final 3D model, was generally noise free and without unnecessary details.

  1. Minimum required number of specimen records to develop accurate species distribution models

    NARCIS (Netherlands)

    Proosdij, van A.S.J.; Sosef, M.S.M.; Wieringa, J.J.; Raes, N.

    2016-01-01

    Species distribution models (SDMs) are widely used to predict the occurrence of species. Because SDMs generally use presence-only data, validation of the predicted distribution and assessing model accuracy is challenging. Model performance depends on both sample size and species’ prevalence, being t

  2. Minimum required number of specimen records to develop accurate species distribution models

    NARCIS (Netherlands)

    Proosdij, van A.S.J.; Sosef, M.S.M.; Wieringa, J.J.; Raes, N.

    2015-01-01

    Species Distribution Models (SDMs) are widely used to predict the occurrence of species. Because SDMs generally use presence-only data, validation of the predicted distribution and assessing model accuracy is challenging. Model performance depends on both sample size and species’ prevalence, being t

  3. MULTI SENSOR DATA INTEGRATION FOR AN ACCURATE 3D MODEL GENERATION

    OpenAIRE

    S. Chhatkuli; Satoh, T; Tachibana, K

    2015-01-01

    The aim of this paper is to introduce a novel technique of data integration between two different data sets, i.e. laser scanned RGB point cloud and oblique imageries derived 3D model, to create a 3D model with more details and better accuracy. In general, aerial imageries are used to create a 3D city model. Aerial imageries produce an overall decent 3D city models and generally suit to generate 3D model of building roof and some non-complex terrain. However, the automatically generated 3D mod...

  4. Towards more accurate isoscapes encouraging results from wine, water and marijuana data/model and model/model comparisons.

    Science.gov (United States)

    West, J. B.; Ehleringer, J. R.; Cerling, T.

    2006-12-01

    Understanding how the biosphere responds to change is at the heart of biogeochemistry, ecology, and other Earth sciences. The dramatic increase in human population and technological capacity over the past 200 years or so has resulted in numerous, simultaneous changes to biosphere structure and function. This, then, has led to increased urgency in the scientific community to try to understand how systems have already responded to these changes, and how they might do so in the future. Since all biospheric processes exhibit some patchiness or patterns over space, as well as time, we believe that understanding the dynamic interactions between natural systems and human technological manipulations can be improved if these systems are studied in an explicitly spatial context. We present here results of some of our efforts to model the spatial variation in the stable isotope ratios (δ2H and δ18O) of plants over large spatial extents, and how these spatial model predictions compare to spatially explicit data. Stable isotopes trace and record ecological processes and, as such, if modeled correctly over Earth's surface, allow us insights into changes in biosphere states and processes across spatial scales. The data-model comparisons show good agreement, in spite of the remaining uncertainties (e.g., plant source water isotopic composition). For example, inter-annual changes in climate are recorded in wine stable isotope ratios. Also, a much simpler model of leaf water enrichment driven with spatially continuous global rasters of precipitation and climate normals largely agrees with complex GCM modeling that includes leaf water δ18O. Our results suggest that modeling plant stable isotope ratios across large spatial extents may be done with reasonable accuracy, including over time. These spatial maps, or isoscapes, can now be utilized to help understand spatially distributed data, as well as to help guide future studies designed to understand ecological change across

  5. An approach to estimating and extrapolating model error based on inverse problem methods: towards accurate numerical weather prediction

    International Nuclear Information System (INIS)

    Model error is one of the key factors restricting the accuracy of numerical weather prediction (NWP). Considering the continuous evolution of the atmosphere, the observed data (ignoring the measurement error) can be viewed as a series of solutions of an accurate model governing the actual atmosphere. Model error is represented as an unknown term in the accurate model, thus NWP can be considered as an inverse problem to uncover the unknown error term. The inverse problem models can absorb long periods of observed data to generate model error correction procedures. They thus resolve the deficiency and faultiness of the NWP schemes employing only the initial-time data. In this study we construct two inverse problem models to estimate and extrapolate the time-varying and spatial-varying model errors in both the historical and forecast periods by using recent observations and analogue phenomena of the atmosphere. Numerical experiment on Burgers' equation has illustrated the substantial forecast improvement using inverse problem algorithms. The proposed inverse problem methods of suppressing NWP errors will be useful in future high accuracy applications of NWP. (geophysics, astronomy, and astrophysics)

  6. The Impact of Accurate Extinction Measurements for X-ray Spectral Models

    CERN Document Server

    Smith, Randall K; Corrales, Lia

    2016-01-01

    Interstellar extinction includes both absorption and scattering of photons from interstellar gas and dust grains, and it has the effect of altering a source's spectrum and its total observed intensity. However, while multiple absorption models exist, there are no useful scattering models in standard X-ray spectrum fitting tools, such as XSPEC. Nonetheless, X-ray halos, created by scattering from dust grains, are detected around even moderately absorbed sources and the impact on an observed source spectrum can be significant, if modest, compared to direct absorption. By convolving the scattering cross section with dust models, we have created a spectral model as a function of energy, type of dust, and extraction region that can be used with models of direct absorption. This will ensure the extinction model is consistent and enable direct connections to be made between a source's X-ray spectral fits and its UV/optical extinction.

  7. Efficient and accurate approach to modeling the microstructure and defect properties of LaCoO3

    Science.gov (United States)

    Buckeridge, J.; Taylor, F. H.; Catlow, C. R. A.

    2016-04-01

    Complex perovskite oxides are promising materials for cathode layers in solid oxide fuel cells. Such materials have intricate electronic, magnetic, and crystalline structures that prove challenging to model accurately. We analyze a wide range of standard density functional theory approaches to modeling a highly promising system, the perovskite LaCoO3, focusing on optimizing the Hubbard U parameter to treat the self-interaction of the B-site cation's d states, in order to determine the most appropriate method to study defect formation and the effect of spin on local structure. By calculating structural and electronic properties for different magnetic states we determine that U = 4 eV for Co in LaCoO3 agrees best with available experiments. We demonstrate that the generalized gradient approximation (PBEsol+U) is most appropriate for studying structure versus spin state, while the local density approximation (LDA+U) is most appropriate for determining accurate energetics for defect properties.

  8. GLOBAL THRESHOLD AND REGION-BASED ACTIVE CONTOUR MODEL FOR ACCURATE IMAGE SEGMENTATION

    OpenAIRE

    Nuseiba M. Altarawneh; Suhuai Luo; Brian Regan; Changming Sun; Fucang Jia

    2014-01-01

    In this contribution, we develop a novel global threshold-based active contour model. This model deploys a new edge-stopping function to control the direction of the evolution and to stop the evolving contour at weak or blurred edges. An implementation of the model requires the use of selective binary and Gaussian filtering regularized level set (SBGFRLS) method. The method uses either a selective local or global segmentation property. It penalizes the level set function to force ...

  9. EXAMINING THE MOVEMENTS OF MOBILE NODES IN THE REAL WORLD TO PRODUCE ACCURATE MOBILITY MODELS

    Directory of Open Access Journals (Sweden)

    TANWEER ALAM

    2010-09-01

    Full Text Available All communication occurs through a wireless medium in an ad hoc network. Ad hoc networks are dynamically created and maintained by the individual nodes comprising the network. The Random Waypoint Mobility Model is a model that includes pause times between changes in destination and speed. To produce a real-world environment within which an ad hoc network can be formed among a set of nodes, there is a need for the development of realistic, generic and comprehensive mobility models. In this paper, we examine the movements of entities in the real world and present the production of a mobility model in an ad hoc network.
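
    Since the abstract contrasts real-world traces with the synthetic Random Waypoint Mobility Model, a minimal sketch of that baseline model may help orientation. The area size, speed range, and pause range below are illustrative assumptions, not values from the paper.

```python
import random

def random_waypoint(num_steps, area=(1000.0, 1000.0),
                    speed_range=(1.0, 20.0), pause_range=(0.0, 10.0),
                    dt=1.0, seed=0):
    """Generate one node's (time, x, y) trajectory under Random Waypoint."""
    rng = random.Random(seed)
    x, y = rng.uniform(0, area[0]), rng.uniform(0, area[1])
    trace, t = [], 0.0
    while len(trace) < num_steps:
        # Pick a new destination, travel speed, and pause time.
        dest_x, dest_y = rng.uniform(0, area[0]), rng.uniform(0, area[1])
        speed = rng.uniform(*speed_range)
        dist = ((dest_x - x) ** 2 + (dest_y - y) ** 2) ** 0.5
        steps = max(1, int(dist / speed / dt))
        # Move toward the destination in dt increments.
        for i in range(1, steps + 1):
            frac = i / steps
            trace.append((t + i * dt,
                          x + frac * (dest_x - x),
                          y + frac * (dest_y - y)))
            if len(trace) >= num_steps:
                return trace
        t += steps * dt
        x, y = dest_x, dest_y
        # Pause at the destination before choosing the next waypoint.
        pause_steps = int(rng.uniform(*pause_range) / dt)
        for i in range(1, pause_steps + 1):
            trace.append((t + i * dt, x, y))
            if len(trace) >= num_steps:
                return trace
        t += pause_steps * dt
    return trace

trace = random_waypoint(500)
print(trace[:3])
```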

  10. Accurate Impedance Calculation for Underground and Submarine Power Cables using MoM-SO and a Multilayer Ground Model

    OpenAIRE

    Patel, Utkarsh R.; Triverio, Piero

    2015-01-01

    An accurate knowledge of the per-unit length impedance of power cables is necessary to correctly predict electromagnetic transients in power systems. In particular, skin, proximity, and ground return effects must be properly estimated. In many applications, the medium that surrounds the cable is not uniform and can consist of multiple layers of different conductivity, such as dry and wet soil, water, or air. We introduce a multilayer ground model for the recently-proposed MoM-SO method, suita...

  11. Fault Tolerance for Industrial Actuators in Absence of Accurate Models and Hardware Redundancy

    DEFF Research Database (Denmark)

    Papageorgiou, Dimitrios; Blanke, Mogens; Niemann, Hans Henrik;

    2015-01-01

    This paper investigates Fault-Tolerant Control for closed-loop systems where only coarse models are available and there is lack of actuator and sensor redundancies. The problem is approached in the form of a typical servomotor in closed-loop. A linear model is extracted from input/output data to ...

  12. Active appearance model and deep learning for more accurate prostate segmentation on MRI

    Science.gov (United States)

    Cheng, Ruida; Roth, Holger R.; Lu, Le; Wang, Shijun; Turkbey, Baris; Gandler, William; McCreedy, Evan S.; Agarwal, Harsh K.; Choyke, Peter; Summers, Ronald M.; McAuliffe, Matthew J.

    2016-03-01

    Prostate segmentation on 3D MR images is a challenging task due to image artifacts, large inter-patient prostate shape and texture variability, and lack of a clear prostate boundary specifically at apex and base levels. We propose a supervised machine learning model that combines an atlas-based Active Appearance Model (AAM) with a Deep Learning model to segment the prostate on MR images. The performance of the segmentation method is evaluated on 20 unseen MR image datasets. The proposed method combining AAM and Deep Learning achieves a mean Dice Similarity Coefficient (DSC) of 0.925 for whole 3D MR images of the prostate using axial cross-sections. The proposed model utilizes the adaptive atlas-based AAM model and Deep Learning to achieve significant segmentation accuracy.
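
    The Dice Similarity Coefficient quoted above is twice the overlap of the predicted and reference masks divided by the sum of their sizes. A minimal sketch of that metric for binary 3D masks; the toy masks below are invented for illustration.

```python
import numpy as np

def dice_coefficient(pred_mask, gt_mask):
    """Dice Similarity Coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Toy 3D example: two overlapping boxes standing in for prostate segmentations.
pred = np.zeros((20, 64, 64), dtype=bool); pred[5:15, 10:40, 10:40] = True
gt   = np.zeros((20, 64, 64), dtype=bool); gt[5:15, 12:42, 12:42] = True
print(f"DSC = {dice_coefficient(pred, gt):.3f}")
```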

  13. Accurate characterization and modeling of transmission lines for GaAs MMIC's

    Science.gov (United States)

    Finlay, Hugh J.; Jansen, Rolf H.; Jenkins, John A.; Eddison, Ian G.

    1988-06-01

    The authors discuss computer-aided design (CAD) tools together with high-accuracy microwave measurements to realize improved design data for GaAs monolithic microwave integrated circuits (MMICs). In particular, a combined theoretical and experimental approach to the generation of an accurate design database for transmission lines on GaAs MMICs is presented. The theoretical approach is based on an improved transmission-line theory which is part of the spectral-domain hybrid-mode computer program MCLINE. The benefit of this approach in the design of multidielectric-media transmission lines is described. The program was designed to include loss mechanisms in all dielectric layers and to include conductor and surface roughness loss contributions. As an example, using GaAs ring resonator techniques covering 2 to 24 GHz, accuracies in effective dielectric constant and loss of 1 percent and 15 percent respectively, are presented. By combining theoretical and experimental techniques, a generalized MMIC microstrip design database is outlined.

  14. Parameterized Reduced Order Modeling of Misaligned Stacked Disks Rotor Assemblies

    OpenAIRE

    Ganine, Vladislav; Laxalde, Denis; Michalska, Hannah; Pierre, Christophe

    2011-01-01

    Light and flexible rotating parts of modern turbine engines operating at supercritical speeds necessitate application of more accurate but rather computationally expensive 3D FE modeling techniques. Stacked disks misalignment due to manufacturing variability in the geometry of individual components constitutes a particularly important aspect to be included in the analysis because of its impact on system dynamics. A new parametric model order reduction algorithm is presented to achieve this go...

  15. Highly Accurate Tree Models Derived from Terrestrial Laser Scan Data: A Method Description

    Directory of Open Access Journals (Sweden)

    Jan Hackenberg

    2014-05-01

    Full Text Available This paper presents a method for fitting cylinders into a point cloud, derived from a terrestrial laser-scanned tree. Utilizing high scan quality data as the input, the resulting models describe the branching structure of the tree, capable of detecting branches with a diameter smaller than a centimeter. The cylinders are stored as a hierarchical tree-like data structure encapsulating parent-child neighbor relations and incorporating the tree’s direction of growth. This structure enables the efficient extraction of tree components, such as the stem or a single branch. The method was validated both by applying a comparison of the resulting cylinder models with ground truth data and by an analysis between the input point clouds and the models. Tree models were accomplished representing more than 99% of the input point cloud, with an average distance from the cylinder model to the point cloud within sub-millimeter accuracy. After validation, the method was applied to build two allometric models based on 24 tree point clouds as an example of the application. Computation terminated successfully within less than 30 min. For the model predicting the total above ground volume, the coefficient of determination was 0.965, showing the high potential of terrestrial laser-scanning for forest inventories.
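
    The paper fits full cylinders into the 3D point cloud; as a simplified stand-in for the underlying idea, the sketch below fits a circle to a single cross-sectional slice of points with the algebraic (Kasa) least-squares method, assuming the slice is roughly perpendicular to the branch axis. All numbers are synthetic.

```python
import numpy as np

def fit_circle(points_2d):
    """Algebraic (Kasa) least-squares circle fit to 2D points.

    Solves x^2 + y^2 + D*x + E*y + F = 0 in the least-squares sense and
    converts (D, E, F) into a center and radius.
    """
    pts = np.asarray(points_2d, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x ** 2 + y ** 2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(cx ** 2 + cy ** 2 - F)
    return (cx, cy), r

# Noisy points on a 4 cm diameter branch cross-section centred at (1.0, 2.0) m.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 200)
pts = np.column_stack([0.02 * np.cos(theta) + 1.0,
                       0.02 * np.sin(theta) + 2.0]) + rng.normal(0, 5e-4, (200, 2))
center, radius = fit_circle(pts)
print(center, radius)  # ≈ (1.0, 2.0), ≈ 0.02 m
```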

  16. A new geometric-based model to accurately estimate arm and leg inertial estimates.

    Science.gov (United States)

    Wicke, Jason; Dumas, Geneviève A

    2014-06-01

    Segment estimates of mass, center of mass and moment of inertia are required input parameters to analyze the forces and moments acting across the joints. The objectives of this study were to propose a new geometric model for limb segments, to evaluate it against criterion values obtained from DXA, and to compare its performance to five other popular models. Twenty-five female and 24 male college students participated in the study. For the criterion measures, the participants underwent a whole body DXA scan, and estimates for segment mass, center of mass location, and moment of inertia (frontal plane) were directly computed from the DXA mass units. For the new model, the volume was determined from two standing frontal and sagittal photographs. Each segment was modeled as a stack of slices, the sections of which were ellipses if they were not adjoining another segment and sectioned ellipses if they were adjoining another segment (e.g. upper arm and trunk). The lengths of the axes of the ellipses were obtained from the photographs. In addition, a sex-specific, non-uniform density function was developed for each segment. A series of anthropometric measurements were also taken by directly following the definitions provided for the different body segment models tested, and the same parameters were determined for each model. Comparison of models showed that estimates from the new model were consistently closer to the DXA criterion than those from the other models, with an error of less than 5% for mass and moment of inertia and less than about 6% for center of mass location. PMID:24735506
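
    A minimal sketch of the slice-stacking idea described above: each slice is treated as an ellipse whose semi-axes come from the frontal and sagittal photographs, and slice volumes are weighted by a (possibly non-uniform) density profile. The dimensions and density below are illustrative assumptions, not the study's values.

```python
import numpy as np

def segment_mass_from_slices(a, b, slice_height, density):
    """Mass of a limb segment modeled as a stack of elliptical slices.

    a, b         : semi-axes of each slice (m), e.g. from frontal/sagittal photos
    slice_height : thickness of each slice (m)
    density      : per-slice density (kg/m^3), allowing a non-uniform profile
    """
    a, b, density = map(np.asarray, (a, b, density))
    slice_volumes = np.pi * a * b * slice_height   # ellipse area * thickness
    return float(np.sum(slice_volumes * density))

# Toy forearm: 20 slices, 1.5 cm thick, tapering widths, uniform density.
n = 20
a = np.linspace(0.045, 0.025, n)   # frontal-plane semi-axis
b = np.linspace(0.040, 0.022, n)   # sagittal-plane semi-axis
mass = segment_mass_from_slices(a, b, 0.015, np.full(n, 1050.0))
print(f"estimated segment mass: {mass:.2f} kg")
```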

  17. Simplified versus geometrically accurate models of forefoot anatomy to predict plantar pressures: A finite element study.

    Science.gov (United States)

    Telfer, Scott; Erdemir, Ahmet; Woodburn, James; Cavanagh, Peter R

    2016-01-25

    Integration of patient-specific biomechanical measurements into the design of therapeutic footwear has been shown to improve clinical outcomes in patients with diabetic foot disease. The addition of numerical simulations intended to optimise intervention design may help to build on these advances, however at present the time and labour required to generate and run personalised models of foot anatomy restrict their routine clinical utility. In this study we developed second-generation personalised simple finite element (FE) models of the forefoot with varying geometric fidelities. Plantar pressure predictions from barefoot, shod, and shod with insole simulations using simplified models were compared to those obtained from CT-based FE models incorporating more detailed representations of bone and tissue geometry. A simplified model including representations of metatarsals based on simple geometric shapes, embedded within a contoured soft tissue block with outer geometry acquired from a 3D surface scan was found to provide pressure predictions closest to the more complex model, with mean differences of 13.3 kPa (SD 13.4), 12.52 kPa (SD 11.9) and 9.6 kPa (SD 9.3) for barefoot, shod, and insole conditions respectively. The simplified model design could be produced in 3 h in the case of the more detailed model, and solved on average 24% faster. FE models of the forefoot based on simplified geometric representations of the metatarsal bones and soft tissue surface geometry from 3D surface scans may potentially provide a simulation approach with improved clinical utility, however further validity testing around a range of therapeutic footwear types is required.

  18. Accurate Modeling of Multilayer Transmission Lines for High-Speed Digital Interconnects

    Directory of Open Access Journals (Sweden)

    Sarhan M. Musa

    2014-03-01

    Full Text Available In this paper, we consider the finite element modeling of multilayer transmission lines for high-speed digital interconnects. We mainly focused on the modeling of transmission structures with both symmetric and asymmetric geometries. We specifically designed asymmetric coupled microstrips and four-line symmetric coupled microstrips with a two-layer substrate. We computed the capacitance matrix for the asymmetric coupled microstrips, and the capacitance and inductance matrices for the four-line symmetric coupled microstrips on a two-layer substrate. We also provide the potential distribution spectra of the models.

  19. Accurate Fabrication of Hydroxyapatite Bone Models with Porous Scaffold Structures by Using Stereolithography

    Energy Technology Data Exchange (ETDEWEB)

    Maeda, Chiaki; Tasaki, Satoko; Kirihara, Soshu, E-mail: c-maeda@jwri.osaka-u.ac.jp [Joining and Welding Research Institute, Osaka University, 11-1 Mihogaoka, Ibaraki City, Osaka 567-0047 (Japan)

    2011-05-15

    Computer graphic models of bioscaffolds with four-coordinate lattice structures of solid rods in artificial bones were designed by using computer aided design. The scaffold models, composed of acrylic resin with hydroxyapatite particles at 45 vol.%, were fabricated by using stereolithography, a computer aided manufacturing technique. After dewaxing and sintering heat treatment processes, the ceramic scaffold models with four-coordinate lattices and fine hydroxyapatite microstructures were obtained successfully. By using computer aided analysis, it was found that bio-fluids could flow extensively inside the sintered scaffolds. This result shows that the lattice structures will realize appropriate bio-fluid circulation and promote regeneration of new bone.

  20. HIGH ACCURATE LOW COMPLEX FACE DETECTION BASED ON KL TRANSFORM AND YCBCR GAUSSIAN MODEL

    Directory of Open Access Journals (Sweden)

    Epuru Nithish Kumar

    2013-05-01

    Full Text Available This paper presents a skin color model for face detection based on a YCbCr Gauss model and the KL transform. The simple Gauss model and the region model of the skin color are designed in both the KL color space and the YCbCr space according to clustering. Skin regions are segmented using an optimal threshold value obtained from an adaptive algorithm. The segmentation results are then used to eliminate likely skin regions in the Gauss-likelihood image. Different morphological processes are then used to eliminate noise from the binary image. In order to locate the face, the obtained regions are grouped with simple detection algorithms. The proposed algorithm works well for complex backgrounds and many faces.
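
    A minimal sketch of the Gaussian skin-likelihood step in YCbCr space, assuming a fixed mean and covariance for the (Cb, Cr) chrominance of skin pixels; the numbers below are illustrative and would in practice be estimated from labelled samples, with the threshold chosen adaptively as in the paper.

```python
import numpy as np

# Illustrative (Cb, Cr) mean and covariance for skin pixels -- assumed values,
# normally estimated from labelled skin samples.
SKIN_MEAN = np.array([117.4, 156.6])
SKIN_COV = np.array([[160.1, 12.1],
                     [12.1, 299.5]])

def skin_likelihood(cb, cr, mean=SKIN_MEAN, cov=SKIN_COV):
    """Gaussian skin-color likelihood for (Cb, Cr) arrays, scaled to [0, 1]."""
    x = np.stack([cb, cr], axis=-1).astype(float) - mean
    inv = np.linalg.inv(cov)
    # Mahalanobis distance per pixel.
    d2 = np.einsum('...i,ij,...j->...', x, inv, x)
    return np.exp(-0.5 * d2)

def segment_skin(cb, cr, threshold=0.4):
    """Binary skin mask from the likelihood image (fixed threshold here)."""
    return skin_likelihood(cb, cr) >= threshold

# Toy example: a 2x2 block of (Cb, Cr) values.
cb = np.array([[115.0, 80.0], [120.0, 200.0]])
cr = np.array([[155.0, 90.0], [160.0, 30.0]])
print(segment_skin(cb, cr))
```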

  1. Empirical approaches to more accurately predict benthic-pelagic coupling in biogeochemical ocean models

    Science.gov (United States)

    Dale, Andy; Stolpovsky, Konstantin; Wallmann, Klaus

    2016-04-01

    The recycling and burial of biogenic material in the sea floor plays a key role in the regulation of ocean chemistry. Proper consideration of these processes in ocean biogeochemical models is becoming increasingly recognized as an important step in model validation and prediction. However, the rate of organic matter remineralization in sediments and the benthic flux of redox-sensitive elements are difficult to predict a priori. In this communication, examples of empirical benthic flux models that can be coupled to earth system models to predict sediment-water exchange in the open ocean are presented. Large uncertainties hindering further progress in this field include knowledge of the reactivity of organic carbon reaching the sediment, the importance of episodic variability in bottom water chemistry and particle rain rates (for both the deep-sea and margins) and the role of benthic fauna. How do we meet the challenge?

  2. A tri-stage cluster identification model for accurate analysis of seismic catalogs

    Directory of Open Access Journals (Sweden)

    S. J. Nanda

    2013-02-01

    Full Text Available In this paper we propose a tri-stage cluster identification model that is a combination of a simple single iteration distance algorithm and an iterative K-means algorithm. In this study of earthquake seismicity, the model considers event location, time and magnitude information from earthquake catalog data to efficiently classify events as either background or mainshock and aftershock sequences. Tests on a synthetic seismicity catalog demonstrate the efficiency of the proposed model in terms of accuracy percentage (94.81% for background and 89.46% for aftershocks). The close agreement between lambda and cumulative plots for the ideal synthetic catalog and that generated by the proposed model also supports the accuracy of the proposed technique. There is flexibility in the model design to allow for proper selection of location and magnitude ranges, depending upon the nature of the mainshocks present in the catalog. The effectiveness of the proposed model also is evaluated by the classification of events in three historic catalogs: California, Japan and Indonesia. As expected, for both synthetic and historic catalog analysis it is observed that the density of events classified as background is almost uniform throughout the region, whereas the density of aftershock events is higher near the mainshocks.
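
    The iterative stage relies on standard K-means clustering of event features. The sketch below is a plain Lloyd's K-means applied to a synthetic catalog with scaled (longitude, latitude, time) columns; it illustrates only that stage, not the full tri-stage model, and the feature scaling and cluster count are arbitrary choices.

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Plain Lloyd's K-means; returns (labels, centers)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

# Synthetic catalog: (longitude, latitude, scaled time) so that spatial and
# temporal distances are comparable before clustering.
rng = np.random.default_rng(1)
background = np.column_stack([rng.uniform(0, 5, 300), rng.uniform(0, 5, 300),
                              rng.uniform(0, 365, 300) / 100.0])
aftershocks = np.column_stack([rng.normal(2.5, 0.1, 100), rng.normal(2.5, 0.1, 100),
                               rng.normal(180, 5, 100) / 100.0])
events = np.vstack([background, aftershocks])
labels, centers = kmeans(events, k=5)
print(np.bincount(labels))
```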

  3. A Two-Phase Space Resection Model for Accurate Topographic Reconstruction from Lunar Imagery with Pushbroom Scanners

    Directory of Open Access Journals (Sweden)

    Xuemiao Xu

    2016-04-01

    Full Text Available Exterior orientation parameters’ (EOP) estimation using space resection plays an important role in topographic reconstruction for push broom scanners. However, existing models of space resection are highly sensitive to errors in data. Unfortunately, for lunar imagery, the altitude data at the ground control points (GCPs) for space resection are error-prone. Thus, existing models fail to produce reliable EOPs. Motivated by a finding that for push broom scanners, angular rotations of EOPs can be estimated independently of the altitude data, involving only the geographic data at the GCPs, which are already provided, we divide the modeling of space resection into two phases. Firstly, we estimate the angular rotations based on the reliable geographic data using our proposed mathematical model. Then, with the accurate angular rotations, the collinear equations for space resection are simplified into a linear problem, and the global optimal solution for the spatial position of EOPs can always be achieved. Moreover, a certainty term is integrated to penalize the unreliable altitude data for increasing the error tolerance. Experimental results evidence that our model can obtain more accurate EOPs and topographic maps not only for the simulated data, but also for the real data from Chang’E-1, compared to the existing space resection model.

  4. A Two-Phase Space Resection Model for Accurate Topographic Reconstruction from Lunar Imagery with Pushbroom Scanners.

    Science.gov (United States)

    Xu, Xuemiao; Zhang, Huaidong; Han, Guoqiang; Kwan, Kin Chung; Pang, Wai-Man; Fang, Jiaming; Zhao, Gansen

    2016-01-01

    Exterior orientation parameters' (EOP) estimation using space resection plays an important role in topographic reconstruction for push broom scanners. However, existing models of space resection are highly sensitive to errors in data. Unfortunately, for lunar imagery, the altitude data at the ground control points (GCPs) for space resection are error-prone. Thus, existing models fail to produce reliable EOPs. Motivated by a finding that for push broom scanners, angular rotations of EOPs can be estimated independently of the altitude data, involving only the geographic data at the GCPs, which are already provided, we divide the modeling of space resection into two phases. Firstly, we estimate the angular rotations based on the reliable geographic data using our proposed mathematical model. Then, with the accurate angular rotations, the collinear equations for space resection are simplified into a linear problem, and the global optimal solution for the spatial position of EOPs can always be achieved. Moreover, a certainty term is integrated to penalize the unreliable altitude data for increasing the error tolerance. Experimental results evidence that our model can obtain more accurate EOPs and topographic maps not only for the simulated data, but also for the real data from Chang'E-1, compared to the existing space resection model. PMID:27077855

  5. Credit Risk Modelling Under the Reduced Form Approach

    OpenAIRE

    Cãlin Adrian Cantemir; Popovici Oana Cristina

    2012-01-01

    Credit risk is one of the most important aspects that need to be considered by financial institutions involved in credit-granting. It is defined as the risk of loss that arises from a borrower who does not make payments as promised. For modelling credit risk there are two main approaches: the structural models and the reduced form models. The purpose of this paper is to review the evolution of reduced form models from the pioneering days of Jarrow and Turnbull to the present.
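
    In the reduced form (intensity-based) approach initiated by Jarrow and Turnbull, default arrives as the first jump of a point process with some intensity, rather than being triggered by the firm's asset value. A minimal sketch with a constant hazard rate, a flat risk-free rate, and recovery of face value paid at maturity; all of these are simplifying assumptions for illustration.

```python
import math

def survival_probability(hazard_rate, t):
    """P(default time > t) under a constant default intensity (hazard rate)."""
    return math.exp(-hazard_rate * t)

def defaultable_zero_coupon_price(r, hazard_rate, recovery, t):
    """Price of a defaultable zero-coupon bond paying 1 at maturity t.

    Assumes independence of the default time and the (flat) risk-free rate r,
    and recovery of face value paid at maturity.
    """
    q_survive = survival_probability(hazard_rate, t)
    expected_payoff = q_survive * 1.0 + (1.0 - q_survive) * recovery
    return math.exp(-r * t) * expected_payoff

# Example: 2% flat risk-free rate, 150 bps default intensity, 40% recovery, 5y maturity.
print(defaultable_zero_coupon_price(r=0.02, hazard_rate=0.015, recovery=0.4, t=5.0))
```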

  6. Accurate modeling of a DOI capable small animal PET scanner using GATE

    International Nuclear Information System (INIS)

    In this work we developed a Monte Carlo (MC) model of the Sedecal Argus pre-clinical PET scanner, using GATE (Geant4 Application for Tomographic Emission). This is a dual-ring scanner which features DOI compensation by means of two layers of detector crystals (LYSO and GSO). Geometry of detectors and sources, pulses readout and selection of coincidence events were modeled with GATE, while a separate code was developed in order to emulate the processing of digitized data (for example, customized time windows and data flow saturation), the final binning of the lines of response and to reproduce the data output format of the scanner's acquisition software. Validation of the model was performed by modeling several phantoms used in experimental measurements, in order to compare the results of the simulations with the measured data. Spatial resolution, sensitivity, scatter fraction, count rates and NECR were tested. Moreover, the NEMA NU-4 phantom was modeled in order to check the image quality yielded by the model. Noise, contrast of cold and hot regions and recovery coefficient were calculated and compared using images of the NEMA phantom acquired with our scanner. The energy spectrum of coincidence events due to the small amount of 176Lu in LYSO crystals, which was suitably included in our model, was also compared with experimental measurements. Spatial resolution, sensitivity and scatter fraction showed an agreement within 7%. Comparison of the count-rate curves was satisfactory, with values within the uncertainties over the range of activities practically used in research scans. Analysis of the NEMA phantom images also showed a good agreement between simulated and acquired data, within 9% for all the tested parameters. This work shows that basic MC modeling of this kind of system is possible using GATE as a base platform; extension through suitably written customized code allows for an adequate level of accuracy in the results. Our careful validation against experimental

  7. Toward Accurate Modeling of the Effect of Ion-Pair Formation on Solute Redox Potential.

    Science.gov (United States)

    Qu, Xiaohui; Persson, Kristin A

    2016-09-13

    A scheme to model the dependence of a solute redox potential on the supporting electrolyte is proposed, and the results are compared to experimental observations and other reported theoretical models. An improved agreement with experiment is exhibited if the effect of the supporting electrolyte on the redox potential is modeled through a concentration change induced via ion pair formation with the salt, rather than by only considering the direct impact on the redox potential of the solute itself. To exemplify the approach, the scheme is applied to the concentration-dependent redox potential of select molecules proposed for nonaqueous flow batteries. However, the methodology is general and enables rational computational electrolyte design through tuning of the operating window of electrochemical systems by shifting the redox potential of its solutes; including potentially both salts as well as redox active molecules. PMID:27500744

  8. Modelling of Limestone Dissolution in Wet FGD Systems: The Importance of an Accurate Particle Size Distribution

    DEFF Research Database (Denmark)

    Kiil, Søren; Johnsson, Jan Erik; Dam-Johansen, Kim

    1999-01-01

    In wet flue gas desulphurisation (FGD) plants, the most common sorbent is limestone. Over the past 25 years, many attempts to model the transient dissolution of limestone particles in aqueous solutions have been performed, due to the importance for the development of reliable FGD simulation tools... Danish limestone types with very different particle size distributions (PSDs). All limestones were of a high purity. Model predictions were found to be qualitatively in good agreement with experimental data without any use of adjustable parameters. Deviations between measurements and simulations were... attributed primarily to the PSD measurements of the limestone particles, which were used as model inputs. The PSDs, measured using a laser diffraction-based Malvern analyser, were probably not representative of the limestone samples because agglomeration phenomena took place when the particles were...

  9. A Reduced Wind Power Grid Model for Research and Education

    DEFF Research Database (Denmark)

    Akhmatov, Vladislav; Lund, Torsten; Hansen, Anca Daniela;

    2007-01-01

    A reduced grid model of a transmission system with a number of central power plants, consumption centers, local wind turbines and a large offshore wind farm is developed and implemented in the simulation tool PowerFactory (DIgSILENT). The reduced grid model is given by Energinet.dk, Transmission...

  10. High-order accurate finite-volume formulations for the pressure gradient force in layered ocean models

    CERN Document Server

    Engwirda, Darren; Marshall, John

    2016-01-01

    The development of a set of high-order accurate finite-volume formulations for evaluation of the pressure gradient force in layered ocean models is described. A pair of new schemes are presented, both based on an integration of the contact pressure force about the perimeter of an associated momentum control-volume. The two proposed methods differ in their choice of control-volume geometries. High-order accurate numerical integration techniques are employed in both schemes to account for non-linearities in the underlying equation-of-state definitions and thermodynamic profiles, and details of an associated vertical interpolation and quadrature scheme are discussed in detail. Numerical experiments are used to confirm the consistency of the two formulations, and it is demonstrated that the new methods maintain hydrostatic and thermobaric equilibrium in the presence of strongly-sloping layer-wise geometry, non-linear equation-of-state definitions and non-uniform vertical stratification profiles. Additionally, one...

  11. Accurate Finite Element Modelling of Chipboard Single-Stud Floor Panels subjected to Dynamic Loads

    DEFF Research Database (Denmark)

    Sjöström, A.; Flodén, O.; Persson, K.;

    2012-01-01

    in constructing a building compliant with building codes vis-a-vis the propagation of sound and vibrations within the structure is a challenge. Focusing on junctions in a multi-storey lightweight buildings, a modular finite element model is developed to be used for analyses of vibration transmission...

  12. Analysis of computational models for an accurate study of electronic excitations in GFP

    DEFF Research Database (Denmark)

    Schwabe, Tobias; Beerepoot, Maarten; Olsen, Jógvan Magnus Haugaard;

    2015-01-01

    Using the chromophore of the green fluorescent protein (GFP), the performance of a hybrid RI-CC2 / polarizable embedding (PE) model is tested against a quantum chemical cluster approach. Moreover, the effect of the rest of the protein environment is studied by systematically increasing the size

  13. What input data are needed to accurately model electromagnetic fields from mobile phone base stations?

    NARCIS (Netherlands)

    Beekhuizen, Johan; Kromhout, Hans; Bürgi, Alfred; Huss, Anke; Vermeulen, Roel

    2015-01-01

    The increase in mobile communication technology has led to concern about potential health effects of radio frequency electromagnetic fields (RF-EMFs) from mobile phone base stations. Different RF-EMF prediction models have been applied to assess population exposure to RF-EMF. Our study examines what

  14. A fast and accurate SystemC-AMS model for PLL

    NARCIS (Netherlands)

    Ma, K.; Leuken, R. van; Vidojkovic, M.; Romme, J.; Rampu, S.; Pflug, H.; Huang, L.; Dolmans, G.

    2011-01-01

    PLLs have become an important part of electrical systems. When designing a PLL, an efficient and reliable simulation platform for system evaluation is needed. However, the closed loop simulation of a PLL is time consuming. To address this problem, in this paper, a new PLL model containing both digit

  15. Compact and accurate linear and nonlinear autoregressive moving average model parameter estimation using Laguerre functions

    DEFF Research Database (Denmark)

    Chon, K H; Cohen, R J; Holstein-Rathlou, N H

    1997-01-01

    A linear and nonlinear autoregressive moving average (ARMA) identification algorithm is developed for modeling time series data. The algorithm uses Laguerre expansion of kernels (LEK) to estimate Volterra-Wiener kernels. However, instead of estimating linear and nonlinear system dynamics via movi...

  16. Improved image quality in pinhole SPECT by accurate modeling of the point spread function in low magnification systems

    Energy Technology Data Exchange (ETDEWEB)

    Pino, Francisco [Unitat de Biofísica, Facultat de Medicina, Universitat de Barcelona, Barcelona 08036, Spain and Servei de Física Mèdica i Protecció Radiològica, Institut Català d’Oncologia, L’Hospitalet de Llobregat 08907 (Spain); Roé, Nuria [Unitat de Biofísica, Facultat de Medicina, Universitat de Barcelona, Barcelona 08036 (Spain); Aguiar, Pablo, E-mail: pablo.aguiar.fernandez@sergas.es [Fundación Ramón Domínguez, Complexo Hospitalario Universitario de Santiago de Compostela 15706, Spain and Grupo de Imagen Molecular, Instituto de Investigacións Sanitarias de Santiago de Compostela (IDIS), Galicia 15782 (Spain); Falcon, Carles; Ros, Domènec [Institut d’Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona 08036, Spain and CIBER en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Barcelona 08036 (Spain); Pavía, Javier [Institut d’Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona 080836 (Spain); CIBER en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Barcelona 08036 (Spain); and Servei de Medicina Nuclear, Hospital Clínic, Barcelona 08036 (Spain)

    2015-02-15

    Purpose: Single photon emission computed tomography (SPECT) has become an important noninvasive imaging technique in small-animal research. Due to the high resolution required in small-animal SPECT systems, the spatially variant system response needs to be included in the reconstruction algorithm. Accurate modeling of the system response should result in a major improvement in the quality of reconstructed images. The aim of this study was to quantitatively assess the impact that an accurate modeling of spatially variant collimator/detector response has on image-quality parameters, using a low magnification SPECT system equipped with a pinhole collimator and a small gamma camera. Methods: Three methods were used to model the point spread function (PSF). For the first, only the geometrical pinhole aperture was included in the PSF. For the second, the septal penetration through the pinhole collimator was added. In the third method, the measured intrinsic detector response was incorporated. Tomographic spatial resolution was evaluated and contrast, recovery coefficients, contrast-to-noise ratio, and noise were quantified using a custom-built NEMA NU 4–2008 image-quality phantom. Results: A high correlation was found between the experimental data corresponding to intrinsic detector response and the fitted values obtained by means of an asymmetric Gaussian distribution. For all PSF models, resolution improved as the distance from the point source to the center of the field of view increased and when the acquisition radius diminished. An improvement of resolution was observed after a minimum of five iterations when the PSF modeling included more corrections. Contrast, recovery coefficients, and contrast-to-noise ratio were better for the same level of noise in the image when more accurate models were included. Ring-type artifacts were observed when the number of iterations exceeded 12. Conclusions: Accurate modeling of the PSF improves resolution, contrast, and recovery

  17. An accurate two-phase approximate solution to the acute viral infection model

    Energy Technology Data Exchange (ETDEWEB)

    Perelson, Alan S [Los Alamos National Laboratory

    2009-01-01

    During an acute viral infection, virus levels rise, reach a peak and then decline. Data and numerical solutions suggest the growth and decay phases are linear on a log scale. While viral dynamic models are typically nonlinear with analytical solutions difficult to obtain, the exponential nature of the solutions suggests approximations can be found. We derive a two-phase approximate solution to the target cell limited influenza model and illustrate the accuracy using data and previously established parameter values of six patients infected with influenza A. For one patient, the subsequent fall in virus concentration was not consistent with our predictions during the decay phase and an alternate approximation is derived. We find expressions for the rate and length of initial viral growth in terms of the parameters, the extent each parameter is involved in viral peaks, and the single parameter responsible for virus decay. We discuss applications of this analysis in antiviral treatments and investigating host and virus heterogeneities.
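
    The target cell limited model referred to above is usually written as a small ODE system for target cells T, infected cells I, and virus V. The sketch below integrates it numerically with illustrative parameter values (not the fitted patient values from the paper), showing the roughly log-linear rise and decay that motivates the two-phase approximation.

```python
import numpy as np
from scipy.integrate import solve_ivp

def target_cell_limited(t, y, beta, delta, p, c):
    """Target-cell-limited model: susceptible cells T, infected cells I, virus V."""
    T, I, V = y
    dT = -beta * T * V
    dI = beta * T * V - delta * I
    dV = p * I - c * V
    return [dT, dI, dV]

# Illustrative parameter values (not those fitted in the paper).
params = dict(beta=2.7e-5, delta=4.0, p=1.2e-2, c=3.0)
y0 = [4e8, 0.0, 7.5e-2]           # initial target cells, infected cells, virus titer
t_eval = np.linspace(0, 12, 200)  # days post infection

sol = solve_ivp(target_cell_limited, (0, 12), y0, t_eval=t_eval,
                args=tuple(params.values()), rtol=1e-8, atol=1e-10)
V = sol.y[2]
peak_day = sol.t[np.argmax(V)]
print(f"peak viral load ~ 10^{np.log10(V.max()):.1f} at day {peak_day:.1f}")
# On a log scale the rise before the peak and the decay after it are close to
# linear, which is what motivates the two-phase exponential approximation.
```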

  18. Accurate Simulation of 802.11 Indoor Links: A "Bursty" Channel Model Based on Real Measurements

    Directory of Open Access Journals (Sweden)

    Agüero Ramón

    2010-01-01

    Full Text Available We propose a novel channel model to be used for simulating indoor wireless propagation environments. An extensive measurement campaign was carried out to assess the performance of different transport protocols over 802.11 links. This enabled us to better adjust our approach, which is based on an autoregressive filter. One of the main advantages of this proposal lies in its ability to reflect the "bursty" behavior which characterizes indoor wireless scenarios, having a great impact on the behavior of upper layer protocols. We compare this channel model, integrated within the Network Simulator (ns-2 platform, with other traditional approaches, showing that it is able to better reflect the real behavior which was empirically assessed.
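
    As a rough illustration of how an autoregressive filter can reproduce bursty link behaviour, the sketch below thresholds an AR(1) process to generate correlated frame losses; the filter order, coefficient, and loss rate are arbitrary here, whereas the paper derives its filter from 802.11 measurements.

```python
import numpy as np
from scipy.stats import norm

def bursty_loss_trace(n, rho=0.95, loss_rate=0.1, seed=0):
    """Correlated frame-loss trace from a thresholded AR(1) process.

    rho controls burstiness (rho = 0 gives independent losses) and loss_rate
    sets the long-run loss probability; in practice the filter would be fitted
    to real measurements rather than chosen by hand.
    """
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = rng.normal()
    sigma = np.sqrt(1.0 - rho ** 2)          # keeps the marginal variance at 1
    for k in range(1, n):
        x[k] = rho * x[k - 1] + sigma * rng.normal()
    threshold = norm.ppf(loss_rate)          # marginal P(loss) == loss_rate
    return x < threshold                     # True -> frame lost

def mean_burst_length(mask):
    """Average length of runs of consecutive losses."""
    lengths, run = [], 0
    for lost in mask:
        if lost:
            run += 1
        else:
            if run:
                lengths.append(run)
            run = 0
    if run:
        lengths.append(run)
    return float(np.mean(lengths)) if lengths else 0.0

trace = bursty_loss_trace(20_000)
print("loss rate:", round(trace.mean(), 3))
print("mean loss-burst length:", round(mean_burst_length(trace), 2))
```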

  19. Features of creation of highly accurate models of triumphal pylons for archaeological reconstruction

    Science.gov (United States)

    Grishkanich, A. S.; Sidorov, I. S.; Redka, D. N.

    2015-12-01

    A measuring procedure for determining the geometric characteristics of objects in space and a geodetic survey of objects on the ground are described. In the course of the work, data were obtained on the relative positioning of the pylons in space; deviations from verticality were found. In comparison with traditional surveying, this testing method is preferable because it yields, in a semi-automated mode, a CAD model of the object suitable for subsequent analysis, which is more economically advantageous.

  20. Morphometric analysis of Russian Plain's small lakes on the basis of accurate digital bathymetric models

    Science.gov (United States)

    Naumenko, Mikhail; Guzivaty, Vadim; Sapelko, Tatiana

    2016-04-01

    Lake morphometry refers to physical factors (shape, size, structure, etc) that determine the lake depression. Morphology has a great influence on lake ecological characteristics, especially on water thermal conditions and mixing depth. Depth analyses, including sediment measurement at various depths, volumes of strata and shoreline characteristics, are often critical to the investigation of biological, chemical and physical properties of fresh waters, as well as theoretical retention time. Management techniques such as loading capacity for effluents and selective removal of undesirable components of the biota are also dependent on detailed knowledge of the morphometry and flow characteristics. In recent years, lake bathymetric surveys were carried out using an echo sounder with high bottom depth resolution and GPS coordinate determination. A few digital bathymetric models have been created with a 10*10 m spatial grid for some small lakes of the Russian Plain whose areas do not exceed 1-2 sq. km. The statistical characteristics of the depth and slope distributions of these lakes were calculated on an equidistant grid. This will provide the level-surface-volume variations of small lakes and reservoirs, calculated through a combination of various satellite images. We discuss the methodological aspects of creating morphometric models of the depths and slopes of small lakes, as well as the advantages of digital models over traditional methods.

  1. Parameterized reduced order modeling of misaligned stacked disks rotor assemblies

    Science.gov (United States)

    Ganine, Vladislav; Laxalde, Denis; Michalska, Hannah; Pierre, Christophe

    2011-01-01

    Light and flexible rotating parts of modern turbine engines operating at supercritical speeds necessitate application of more accurate but rather computationally expensive 3D FE modeling techniques. Stacked disks misalignment due to manufacturing variability in the geometry of individual components constitutes a particularly important aspect to be included in the analysis because of its impact on system dynamics. A new parametric model order reduction algorithm is presented to achieve this goal at affordable computational costs. It is shown that the disks misalignment leads to significant changes in nominal system properties that manifest themselves as additional blocks coupling neighboring spatial harmonics in Fourier space. Consequently, the misalignment effects can no longer be accurately modeled as equivalent forces applied to a nominal unperturbed system. The fact that the mode shapes become heavily distorted by extra harmonic content renders the nominal modal projection-based methods inaccurate and thus numerically ineffective in the context of repeated analysis of multiple misalignment realizations. The significant numerical bottleneck is removed by employing an orthogonal projection onto the subspace spanned by first few Fourier harmonic basis vectors. The projected highly sparse systems are shown to accurately approximate the specific misalignment effects, to be inexpensive to solve using direct sparse methods and easy to parameterize with a small set of measurable eccentricity and tilt angle parameters. Selected numerical examples on an industrial scale model are presented to illustrate the accuracy and efficiency of the algorithm implementation.

  2. Accurate 3d Textured Models of Vessels for the Improvement of the Educational Tools of a Museum

    Science.gov (United States)

    Soile, S.; Adam, K.; Ioannidis, C.; Georgopoulos, A.

    2013-02-01

    Besides the demonstration of the findings, modern museums organize educational programs which aim at experience and knowledge sharing combined with entertainment rather than pure learning. Toward that effort, 2D and 3D digital representations are gradually replacing the traditional recording of the findings through photos or drawings. The present paper refers to a project that aims to create 3D textured models of two lekythoi that are exhibited in the National Archaeological Museum of Athens in Greece; on the surfaces of these lekythoi scenes of the adventures of Odysseus are depicted. The project is expected to support the production of an educational movie and some other relevant interactive educational programs for the museum. The creation of accurate developments of the paintings and of accurate 3D models is the basis for the visualization of the adventures of the mythical hero. The data collection was made by using a structured light scanner consisting of two machine vision cameras that are used for the determination of the geometry of the object, a high resolution camera for the recording of the texture, and a DLP projector. The creation of the final accurate 3D textured model is a complicated and tiring procedure which includes the collection of geometric data, the creation of the surface, the noise filtering, the merging of individual surfaces, the creation of a c-mesh, the creation of the UV map, the provision of the texture and, finally, the general processing of the 3D textured object. For a better result, a combination of commercial and in-house software made for the automation of various steps of the procedure was used. The results derived from the above procedure were especially satisfactory in terms of accuracy and quality of the model. However, the procedure proved to be time consuming, while the use of various software packages presumes the services of a specialist.

  3. An accurate higher order displacement model with shear and normal deformations effects for functionally graded plates

    Energy Technology Data Exchange (ETDEWEB)

    Jha, D.K., E-mail: dkjha@barc.gov.in [Civil Engineering Division, Bhabha Atomic Research Centre, Trombay, Mumbai 400 085 (India); Kant, Tarun [Department of Civil Engineering, Indian Institute of Technology Bombay, Powai, Mumbai 400 076 (India); Srinivas, K. [Civil Engineering Division, Bhabha Atomic Research Centre, Mumbai 400 085 (India); Singh, R.K. [Reactor Safety Division, Bhabha Atomic Research Centre, Mumbai 400 085 (India)

    2013-12-15

    Highlights: • We model through-thickness variation of material properties in functionally graded (FG) plates. • Effect of material grading index on deformations, stresses and natural frequency of FG plates is studied. • Effect of higher order terms in displacement models is studied for plate statics. • The benchmark solutions for the static analysis and free vibration of thick FG plates are presented. -- Abstract: Functionally graded materials (FGMs) are the potential candidates under consideration for designing the first wall of fusion reactors with a view to make best use of potential properties of available materials under severe thermo-mechanical loading conditions. A higher order shear and normal deformations plate theory is employed for stress and free vibration analyses of functionally graded (FG) elastic, rectangular, and simply (diaphragm) supported plates. Although FGMs are highly heterogeneous in nature, they are generally idealized as continua with mechanical properties changing smoothly with respect to spatial coordinates. The material properties of FG plates are assumed here to vary through the thickness of the plate in a continuous manner. Young's moduli and material densities are considered to be varying continuously in the thickness direction according to the volume fraction of constituents, which are mathematically modeled here as exponential and power law functions. The effects of variation of material properties in terms of material gradation index on deformations, stresses and natural frequency of FG plates are investigated. The accuracy of present numerical solutions has been established with respect to exact three-dimensional (3D) elasticity solutions and the other models’ solutions available in literature.
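
    A minimal sketch of the power-law grading mentioned in the highlights, in which the ceramic volume fraction varies through the thickness as (z/h + 1/2)^n; the aluminium/alumina property values and the exponent below are illustrative assumptions, not those of the paper.

```python
import numpy as np

def fgm_property(z, h, prop_metal, prop_ceramic, n):
    """Power-law grading of an effective property through the plate thickness.

    z ranges over [-h/2, h/2]; the ceramic volume fraction is
    Vc(z) = (z/h + 1/2)**n, so the bottom face is pure metal and the top
    face pure ceramic.
    """
    Vc = (z / h + 0.5) ** n
    return prop_metal + (prop_ceramic - prop_metal) * Vc

# Illustrative values: aluminium / alumina plate, 20 mm thick, grading index n = 2.
h = 0.02
z = np.linspace(-h / 2, h / 2, 5)
E = fgm_property(z, h, prop_metal=70e9, prop_ceramic=380e9, n=2.0)
rho = fgm_property(z, h, prop_metal=2700.0, prop_ceramic=3800.0, n=2.0)
print(np.round(E / 1e9, 1))   # Young's modulus in GPa at five stations
print(np.round(rho, 1))       # density in kg/m^3
```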

  4. A simple iterative model accurately captures complex trapline formation by bumblebees across spatial scales and flower arrangements.

    Science.gov (United States)

    Reynolds, Andrew M; Lihoreau, Mathieu; Chittka, Lars

    2013-01-01

    Pollinating bees develop foraging circuits (traplines) to visit multiple flowers in a manner that minimizes overall travel distance, a task analogous to the travelling salesman problem. We report on an in-depth exploration of an iterative improvement heuristic model of bumblebee traplining previously found to accurately replicate the establishment of stable routes by bees between flowers distributed over several hectares. The critical test for a model is its predictive power for empirical data for which the model has not been specifically developed, and here the model is shown to be consistent with observations from different research groups made at several spatial scales and using multiple configurations of flowers. We refine the model to account for the spatial search strategy of bees exploring their environment, and test several previously unexplored predictions. We find that the model predicts accurately 1) the increasing propensity of bees to optimize their foraging routes with increasing spatial scale; 2) that bees cannot establish stable optimal traplines for all spatial configurations of rewarding flowers; 3) the observed trade-off between travel distance and prioritization of high-reward sites (with a slight modification of the model); 4) the temporal pattern with which bees acquire approximate solutions to travelling salesman-like problems over several dozen foraging bouts; 5) the instability of visitation schedules in some spatial configurations of flowers; 6) the observation that in some flower arrays, bees' visitation schedules are highly individually different; 7) the searching behaviour that leads to efficient location of flowers and routes between them. Our model constitutes a robust theoretical platform to generate novel hypotheses and refine our understanding about how small-brained insects develop a representation of space and use it to navigate in complex and dynamic environments.
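
    The published heuristic reinforces route segments that shorten the overall path; the sketch below is a deliberately simplified stand-in that keeps a candidate visit order and accepts a random 2-opt style reversal only when it shortens the nest-to-nest tour. Flower coordinates and the number of bouts are invented for illustration.

```python
import numpy as np

def route_length(route, coords):
    """Total tour length starting and ending at the nest (index 0)."""
    order = [0] + list(route) + [0]
    pts = coords[order]
    return float(np.linalg.norm(np.diff(pts, axis=0), axis=1).sum())

def iterative_trapline(coords, n_bouts=200, seed=0):
    """After each 'foraging bout', propose a segment reversal and keep it
    only if the route gets shorter (a simplified iterative improvement)."""
    rng = np.random.default_rng(seed)
    n_flowers = len(coords) - 1
    route = list(rng.permutation(np.arange(1, n_flowers + 1)))
    best = route_length(route, coords)
    history = [best]
    for _ in range(n_bouts):
        i, j = sorted(rng.choice(n_flowers, size=2, replace=False))
        candidate = route[:i] + route[i:j + 1][::-1] + route[j + 1:]
        length = route_length(candidate, coords)
        if length < best:
            route, best = candidate, length
        history.append(best)
    return route, history

# Nest at the origin plus 8 flowers scattered over a few hundred metres.
rng = np.random.default_rng(3)
coords = np.vstack([[0.0, 0.0], rng.uniform(-300, 300, size=(8, 2))])
route, history = iterative_trapline(coords)
print("visit order:", route)
print("route length shrank from %.0f m to %.0f m" % (history[0], history[-1]))
```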

  5. A simple and accurate numerical network flow model for bionic micro heat exchangers

    Energy Technology Data Exchange (ETDEWEB)

    Pieper, M.; Klein, P. [Fraunhofer Institute (ITWM), Kaiserslautern (Germany)

    2011-05-15

    Heat exchangers are often associated with drawbacks like a large pressure drop or a non-uniform flow distribution. Recent research shows that bionic structures can provide possible improvements. We considered a set of such structures that were designed with M. Hermann's FracTherm® algorithm. In order to optimize and compare them with conventional heat exchangers, we developed a numerical method to determine their performance. We simulated the flow in the heat exchanger applying a network model and coupled these results with a finite volume method to determine the heat distribution in the heat exchanger. (orig.)

  6. Considering mask pellicle effect for more accurate OPC model at 45nm technology node

    Science.gov (United States)

    Wang, Ching-Heng; Liu, Qingwei; Zhang, Liguo

    2008-11-01

    The 45 nm technology node is expected to be the first generation to use immersion microlithography, and the new lithography tools mean that many optical effects which could be ignored at the 90 nm and 65 nm nodes now have a significant impact on the pattern transmission process from design to silicon. Among these effects, one that needs attention is the impact of the mask pellicle on critical dimension variation. With the introduction of hyper-NA lithography tools, the assumption that light passes through the mask pellicle vertically is no longer a good approximation, and the image blurring induced by the mask pellicle should be taken into account in computational microlithography. In this work, we investigate how the mask pellicle impacts the accuracy of the OPC model, and we show that, considering the extremely tight critical dimension control specifications of the 45 nm node, it becomes necessary to include the mask pellicle effect in the OPC model.

  7. A new method based on the subpixel Gaussian model for accurate estimation of asteroid coordinates

    CERN Document Server

    Savanevych, V E; Sokovikova, N S; Bezkrovny, M M; Vavilova, I B; Ivashchenko, Yu M; Elenin, L V; Khlamov, S V; Movsesian, Ia S; Dashkova, A M; Pogorelov, A V

    2015-01-01

    We describe a new iteration method to estimate asteroid coordinates, which is based on the subpixel Gaussian model of a discrete object image. The method operates on continuous parameters (asteroid coordinates) in a discrete observational space (the set of pixel potentials) of the CCD frame. In this model, the form of the coordinate distribution of the photons hitting a pixel of the CCD frame is known a priori, while the associated parameters are determined from a real digital object image. The developed method, being more flexible in adapting to any form of the object image, has a high measurement accuracy along with a low computational complexity due to a maximum likelihood procedure, which is implemented to obtain the best fit instead of a least-squares method and Levenberg-Marquardt algorithm for the minimisation of the quadratic form. Since 2010, the method was tested as the basis of our CoLiTec (Collection Light Technology) software, which has been installed at several observatories of the world with the ai...
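
    For orientation, the sketch below recovers a subpixel centre by fitting a symmetric 2D Gaussian to a simulated, Poisson-noisy frame. Note that it uses ordinary least squares via scipy's curve_fit (Levenberg-Marquardt), i.e. exactly the kind of fit the paper replaces with a maximum likelihood procedure, so it illustrates the task rather than the authors' estimator. All frame data and parameters are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_2d(xy, amplitude, x0, y0, sigma, offset):
    """Symmetric 2D Gaussian evaluated on pixel coordinates, flattened."""
    x, y = xy
    return (offset + amplitude *
            np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2.0 * sigma ** 2))).ravel()

# Simulate a faint object centred at a non-integer pixel position.
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:21, 0:21].astype(float)
true = gaussian_2d((xx, yy), 500.0, 10.37, 9.81, 1.6, 50.0).reshape(21, 21)
frame = rng.poisson(true).astype(float)

p0 = [frame.max() - np.median(frame), 10.0, 10.0, 2.0, np.median(frame)]
popt, _ = curve_fit(gaussian_2d, (xx, yy), frame.ravel(), p0=p0)
print("estimated centre: x = %.3f, y = %.3f" % (popt[1], popt[2]))
```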

  8. Extrapolation of Urn Models via Poissonization: Accurate Measurements of the Microbial Unknown

    CERN Document Server

    Lladser, Manuel; Reeder, Jens; 10.1371/journal.pone.0021105

    2011-01-01

    The availability of high-throughput parallel methods for sequencing microbial communities is increasing our knowledge of the microbial world at an unprecedented rate. Though most attention has focused on determining lower-bounds on the alpha-diversity i.e. the total number of different species present in the environment, tight bounds on this quantity may be highly uncertain because a small fraction of the environment could be composed of a vast number of different species. To better assess what remains unknown, we propose instead to predict the fraction of the environment that belongs to unsampled classes. Modeling samples as draws with replacement of colored balls from an urn with an unknown composition, and under the sole assumption that there are still undiscovered species, we show that conditionally unbiased predictors and exact prediction intervals (of constant length in logarithmic scale) are possible for the fraction of the environment that belongs to unsampled classes. Our predictions are based on a P...
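
    The paper derives unbiased predictors and exact prediction intervals via Poissonization; a closely related and much simpler point estimate of the unsampled fraction is the Good-Turing coverage estimate, the fraction of observations that are singletons. The sketch below applies it to a synthetic long-tailed community; all data are invented.

```python
import random
from collections import Counter

def unsampled_fraction_estimate(sample):
    """Good-Turing style estimate of the fraction of the environment that
    belongs to classes not yet observed: (# singleton classes) / (sample size)."""
    counts = Counter(sample)
    singletons = sum(1 for c in counts.values() if c == 1)
    return singletons / len(sample)

# Toy community: species labels drawn with unequal (long-tailed) abundances.
random.seed(0)
species_pool = [f"sp{i}" for i in range(200)]
weights = [1.0 / (i + 1) for i in range(200)]
sample = random.choices(species_pool, weights=weights, k=500)
print(f"observed species: {len(set(sample))}")
print(f"estimated unsampled fraction: {unsampled_fraction_estimate(sample):.3f}")
```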

  9. How to build accurate macroscopic models of actinide ions in aqueous solvents?

    International Nuclear Information System (INIS)

    Classical molecular dynamics (MD) based on parameterized force fields allow one to simulate large molecular systems on significantly long simulation times (usually, at the ns scale and above). Hence, they provide statistically relevant sampled sets of data, which may then be post-processed to estimate specific properties. However, the study of the ligand coordination dynamics around heavy ions requires the use of sophisticated force fields accounting for in particular polarization phenomena, as well as for the charge-transfer effects affecting ion/ligand interactions, which are shown to be significant in several heavy element systems. Our current efforts focus on the development of force-field models for radionuclides, with the intention of pushing as far as possible the accuracy of all competing interactions between the various elements present in solution, that is the metal, the ligands, the solvent, and the counter-ions

  10. Combined model of non-conformal layer growth for accurate optical simulation of thin-film silicon solar cells

    Energy Technology Data Exchange (ETDEWEB)

    Sever, M.; Lipovsek, B.; Krc, J.; Campa, A.; Topic, M. [University of Ljubljana, Faculty of Electrical Engineering Trzaska cesta 25, Ljubljana 1000 (Slovenia); Sanchez Plaza, G. [Technical University of Valencia, Valencia Nanophotonics Technology Center (NTC) Valencia 46022 (Spain); Haug, F.J. [Ecole Polytechnique Federale de Lausanne EPFL, Institute of Microengineering IMT, Photovoltaics and Thin-Film Electronics Laboratory, Neuchatel 2000 (Switzerland); Duchamp, M. [Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons Institute for Microstructure Research, Research Centre Juelich, Juelich D-52425 (Germany); Soppe, W. [ECN-Solliance, High Tech Campus 5, Eindhoven 5656 AE (Netherlands)

    2013-12-15

    In thin-film silicon solar cells, textured interfaces are introduced, leading to improved antireflection and light trapping capabilities of the devices. Thin layers are deposited on surface-textured substrates or superstrates and the texture is translated to internal interfaces. For accurate optical modelling of thin-film silicon solar cells it is important to define and include the morphology of the textured interfaces as realistically as possible. In this paper we present a model of thin-layer growth on textured surfaces which combines two growth principles: a conformal and an isotropic one. With the model we can predict the morphology of subsequent internal interfaces in thin-film silicon solar cells based on the known morphology of the substrate or superstrate. Calibration of the model for different materials grown under certain conditions is done on various cross-sectional scanning electron microscopy images of realistic devices. Advantages over existing growth modelling approaches are demonstrated - one of them is the ability of the model to predict and omit textures with a high possibility of defective region formation inside the Si absorber layers. The developed model of layer growth is used in rigorous 3-D optical simulations employing the COMSOL simulator. A sinusoidal texture of the substrate is optimised for the case of a micromorph silicon solar cell. More than a 50% increase in the short-circuit current density of the bottom cell with respect to the flat case is predicted, considering defect-free absorber layers. The developed approach enables accurate prediction and powerful design of current-matched top and bottom cells.

  11. Error Estimation for Reduced Order Models of Dynamical Systems

    Energy Technology Data Exchange (ETDEWEB)

    Homescu, C; Petzold, L; Serban, R

    2004-01-22

    The use of reduced order models to describe a dynamical system is pervasive in science and engineering. Often these models are used without an estimate of their error or range of validity. In this paper we consider dynamical systems and reduced models built using proper orthogonal decomposition. We show how to compute estimates and bounds for these errors, by a combination of small sample statistical condition estimation and error estimation using the adjoint method. Most importantly, the proposed approach allows the assessment of regions of validity for reduced models, i.e., ranges of perturbations in the original system over which the reduced model is still appropriate. Numerical examples validate our approach: the error norm estimates approximate well the forward error while the derived bounds are within an order of magnitude.
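
    For context, a minimal sketch of how a proper orthogonal decomposition reduced model is built: collect snapshots of the full system, take an SVD, keep the leading modes, and Galerkin-project the operator. The toy 400-DOF linear system below is invented for illustration, and the sketch does not include the paper's adjoint-based error estimates or bounds.

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """Proper orthogonal decomposition basis from a snapshot matrix.

    snapshots: (n_dof, n_snapshots) array of full-order solution states.
    Returns the leading left singular vectors capturing `energy` of the
    squared singular-value sum, plus the singular values.
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cumulative = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(cumulative, energy)) + 1
    return U[:, :r], s

# Toy full-order system x' = A x, simulated with explicit Euler to collect snapshots.
rng = np.random.default_rng(0)
n = 400
A = -np.diag(np.linspace(0.5, 5.0, n)) + 0.01 * rng.standard_normal((n, n))
x = rng.standard_normal(n)
dt, n_steps, keep_every = 1e-3, 2000, 40
snaps = [x.copy()]
for k in range(1, n_steps + 1):
    x = x + dt * (A @ x)
    if k % keep_every == 0:
        snaps.append(x.copy())
snaps = np.column_stack(snaps)

Phi, s = pod_basis(snaps)
A_r = Phi.T @ A @ Phi            # Galerkin-projected reduced operator
print("reduced dimension:", Phi.shape[1], "of", n)
```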

  12. Communication: Accurate higher-order van der Waals coefficients between molecules from a model dynamic multipole polarizability

    International Nuclear Information System (INIS)

    Due to the absence of the long-range van der Waals (vdW) interaction, conventional density functional theory (DFT) often fails in the description of molecular complexes and solids. In recent years, considerable progress has been made in the development of the vdW correction. However, a vdW correction based on the leading-order coefficient C6 alone can only achieve limited accuracy, while accurate modeling of higher-order coefficients remains a formidable task due to the strong non-additivity effect. Here, we apply a model dynamic multipole polarizability within a modified single-frequency approximation to calculate C8 and C10 between small molecules. We find that the higher-order vdW coefficients from this model can achieve remarkable accuracy, with mean absolute relative deviations of 5% for C8 and 7% for C10. Inclusion of accurate higher-order contributions in the vdW correction will effectively enhance the predictive power of DFT in condensed matter physics and quantum chemistry.
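
    For orientation, the higher-order dispersion coefficients discussed here are defined by the standard Casimir-Polder integrals over the imaginary-frequency multipole polarizabilities of the two monomers (these are the textbook expressions, not the paper's modified single-frequency approximation, which is designed to avoid the full frequency integration):

      C_6^{AB}    = \frac{3}{\pi}   \int_0^\infty \alpha_1^A(i\omega)\,\alpha_1^B(i\omega)\, d\omega
      C_8^{AB}    = \frac{15}{2\pi} \int_0^\infty \left[ \alpha_1^A(i\omega)\,\alpha_2^B(i\omega) + \alpha_2^A(i\omega)\,\alpha_1^B(i\omega) \right] d\omega
      C_{10}^{AB} = \frac{14}{\pi}  \int_0^\infty \left[ \alpha_1^A(i\omega)\,\alpha_3^B(i\omega) + \alpha_3^A(i\omega)\,\alpha_1^B(i\omega) \right] d\omega + \frac{35}{\pi} \int_0^\infty \alpha_2^A(i\omega)\,\alpha_2^B(i\omega)\, d\omega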

  13. Wind-tunnel tests and modeling indicate that aerial dispersant delivery operations are highly accurate

    Energy Technology Data Exchange (ETDEWEB)

    Hoffman, C.; Fritz, B. [United States Dept. of Agriculture, College Station, TX (United States); Nedwed, T. [ExxonMobil Upstream Research Co., Houston, TX (United States); Coolbaugh, T. [ExxonMobil Research and Engineering Co., Fairfax, VA (United States); Huber, C.A. [CAH Inc., Williamsburg, VA (United States)

    2009-07-01

    Oil dispersants are used to accelerate the dispersion of floating oil slicks. This study was conducted to select application equipment that will help to optimize the application of oil dispersants from aircraft. Oil spill responders have a broad range of oil dispersants at their disposal, because the physical and chemical interaction between the oil and the dispersant is critical to successful mitigation. In order to make efficient use of dispersants, it is important to evaluate how each one atomizes once released from an aircraft. The specific goal of this study was to evaluate current spray nozzles used to spray oil dispersants from aircraft. The United States Department of Agriculture's high-speed wind tunnel facility in College Station, Texas, was used to determine droplet size distributions generated by dispersant delivery nozzles at wind speeds similar to those used in aerial dispersant applications. The droplet distribution was quantified using a laser particle size analyzer. Wind-tunnel tests were conducted using water, Corexit 9500 and 9527, as well as a new dispersant gel being developed by ExxonMobil. The measured drop-size distributions were then used in an agriculture spray model to predict the delivery efficiency and swath width of dispersant delivered at flight speeds and altitudes commonly used for dispersant application. It was concluded that current practices for aerial application of dispersants lead to very efficient application. 19 refs., 5 tabs., 10 figs.

  14. GPS satellite and receiver instrumental biases estimation using least squares method for accurate ionosphere modelling

    Indian Academy of Sciences (India)

    G Sasibhushana Rao

    2007-10-01

    The positional accuracy of the Global Positioning System (GPS) is limited by several error sources, the largest of which is the ionosphere. By augmenting GPS, the Category I (CAT I) Precision Approach (PA) requirements can be achieved. The Space-Based Augmentation System (SBAS) in India is known as GPS Aided Geo Augmented Navigation (GAGAN). One of the prominent errors in GAGAN that limits positional accuracy is the set of instrumental biases. Calibration of these biases is particularly important for achieving CAT I PA landings. In this paper, a new algorithm is proposed to estimate the instrumental biases by modelling the total electron content (TEC) with a 4th-order polynomial. The algorithm uses data from a single station over a one-month period, and the results confirm its validity. The experimental results indicate that the estimation precision of the satellite-plus-receiver instrumental bias is of the order of ±0.17 ns. The observed mean bias errors are of the order of −3.638 ns and −4.71 ns for satellites 1 and 31, respectively. The results are found to be consistent over the period.
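
    A minimal sketch of this kind of joint estimation (illustrative only; the 2-D polynomial in latitude and local time, the mapping-function treatment, and the per-satellite combined bias are assumptions about the setup): slant TEC observations are modelled as a low-order polynomial in the ionospheric pierce-point coordinates scaled by a mapping function, plus one satellite-plus-receiver bias per satellite, and all unknowns are estimated in a single least-squares solve.

      import numpy as np

      def design_row(lat, lt, mf, sat_idx, n_sats, order=4):
          """One row of the design matrix: polynomial-in-(lat, local time) VTEC
          terms scaled by the mapping function, plus one bias column per satellite."""
          poly = [mf * lat**i * lt**j
                  for i in range(order + 1) for j in range(order + 1 - i)]
          bias = np.zeros(n_sats)
          bias[sat_idx] = 1.0            # combined satellite-plus-receiver bias
          return np.array(poly + list(bias))

      def estimate_biases(obs, n_sats, order=4):
          """obs: iterable of (slant_TEC, lat, local_time, mapping_fn, sat_idx)."""
          A = np.vstack([design_row(lat, lt, mf, s, n_sats, order)
                         for _, lat, lt, mf, s in obs])
          y = np.array([stec for stec, *_ in obs])
          x, *_ = np.linalg.lstsq(A, y, rcond=None)
          n_poly = A.shape[1] - n_sats
          return x[:n_poly], x[n_poly:]   # polynomial coefficients, biases [TECU]

      # Biases in TECU can be converted to time units using the L1/L2 differential
      # delay, roughly ~0.35 ns per TECU for the GPS frequency pair.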

  15. The human skin/chick chorioallantoic membrane model accurately predicts the potency of cosmetic allergens.

    Science.gov (United States)

    Slodownik, Dan; Grinberg, Igor; Spira, Ram M; Skornik, Yehuda; Goldstein, Ronald S

    2009-04-01

    The current standard method for predicting contact allergenicity is the murine local lymph node assay (LLNA). Public objection to the use of animals in testing of cosmetics makes the development of a system that does not use sentient animals highly desirable. The chorioallantoic membrane (CAM) of the chick egg has been extensively used for the growth of normal and transformed mammalian tissues. The CAM is not innervated, and embryos are sacrificed before the development of pain perception. The aim of this study was to determine whether the sensitization phase of contact dermatitis to known cosmetic allergens can be quantified using CAM-engrafted human skin and how these results compare with published EC3 data obtained with the LLNA. We studied six common molecules used in allergen testing and quantified migration of epidermal Langerhans cells (LC) as a measure of their allergic potency. All agents with known allergic potential induced statistically significant migration of LC. The data obtained correlated well with published data for these allergens generated using the LLNA test. The human-skin CAM model therefore has great potential as an inexpensive, non-radioactive, in vivo alternative to the LLNA, which does not require the use of sentient animals. In addition, this system has the advantage of testing the allergic response of human, rather than animal skin. PMID:19054059

  16. A fuzzy-logic-based approach to accurate modeling of a double gate MOSFET for nanoelectronic circuit design

    Institute of Scientific and Technical Information of China (English)

    F. Djeffal; A. Ferdi; M. Chahdi

    2012-01-01

    The double gate (DG) silicon MOSFET with an extremely short channel length has the appropriate features to constitute the devices for nanoscale circuit design. To develop a physical model for extremely scaled DG MOSFETs, the drain current in the channel must be accurately determined under the application of drain and gate voltages. However, modeling the transport mechanism for nanoscale structures requires methods and models that are excessively demanding in terms of complexity and computation time (self-consistent, quantum computations). Therefore, new methods and techniques are required to overcome these constraints. In this paper, a new approach based on fuzzy logic computation is proposed to investigate nanoscale DG MOSFETs. The proposed approach has been implemented in a device simulator to show its impact on nanoelectronic circuit design. The approach is general and thus suitable for any type of nanoscale structure investigation problem in the nanotechnology industry.
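
    To make the fuzzy-logic idea concrete, here is a minimal zero-order Sugeno-type inference sketch that maps normalized gate and drain voltages to a normalized drain-current level; the membership functions, the three rules, and the output constants are purely illustrative placeholders, not the model developed in the paper:

      def tri(x, a, b, c):
          """Triangular membership function with support [a, c] and peak at b."""
          if x <= a or x >= c:
              return 0.0
          return (x - a) / (b - a) if x < b else (c - x) / (c - b)

      def fuzzy_drain_current(vgs, vds):
          """Zero-order Sugeno inference: rule firing strengths (product t-norm)
          weight constant output levels; defuzzify by weighted average."""
          low_g, high_g = tri(vgs, -0.2, 0.0, 0.6), tri(vgs, 0.4, 1.0, 1.2)
          low_d, high_d = tri(vds, -0.2, 0.0, 0.6), tri(vds, 0.4, 1.0, 1.2)

          rules = [
              (low_g * low_d,   0.0),   # off region         -> ~0 (normalized Id)
              (high_g * low_d,  0.4),   # linear/triode-like -> medium current
              (high_g * high_d, 1.0),   # saturation-like    -> high current
          ]
          num = sum(w * out for w, out in rules)
          den = sum(w for w, _ in rules)
          return num / den if den > 0 else 0.0

      print(fuzzy_drain_current(vgs=0.8, vds=0.9))   # normalized example point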

  17. Reduced-Order Modeling for Flutter/LCO Using Recurrent Artificial Neural Network

    Science.gov (United States)

    Yao, Weigang; Liou, Meng-Sing

    2012-01-01

    The present study demonstrates the efficacy of a recurrent artificial neural network in providing a high-fidelity, time-dependent, nonlinear reduced-order model (ROM) for flutter/limit-cycle oscillation (LCO) modeling. An artificial neural network is a relatively straightforward nonlinear method for modeling an input-output relationship from a set of known data, for which we use the radial basis function (RBF) with its parameters determined through a training process. The resulting RBF neural network, however, is only static and is not yet adequate for application to problems of a dynamic nature. The recurrent neural network method [1] is therefore applied to construct a reduced-order model from a series of high-fidelity time-dependent data of aero-elastic simulations. Once the RBF neural network ROM is constructed properly, an accurate approximate solution can be obtained at a fraction of the cost of a full-order computation. The method derived during the study has been validated for predicting nonlinear aerodynamic forces in transonic flow and is capable of accurate flutter/LCO simulations. The obtained results indicate that the present recurrent RBF neural network is accurate and efficient for nonlinear aero-elastic system analysis.
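
    A bare-bones recurrent RBF surrogate can be sketched as follows (illustrative only; the centre selection, kernel width, and scalar test signal are assumptions): delayed outputs are fed back as inputs, the linear output weights are fitted by least squares, and prediction then runs in free-running (recurrent) mode.

      import numpy as np

      def rbf_features(X, centers, width):
          """Gaussian RBF features for each row of X."""
          d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
          return np.exp(-d2 / (2.0 * width**2))

      # Toy training signal standing in for a time history of aerodynamic force.
      t = np.linspace(0, 20 * np.pi, 4000)
      y = np.sin(t) + 0.3 * np.sin(3 * t)

      lag = 4                                    # recurrence: feed back 4 past outputs
      X = np.column_stack([y[i:len(y) - lag + i] for i in range(lag)])
      target = y[lag:]

      centers = X[:: len(X) // 50][:50]          # 50 centres picked from the data
      width = 0.5
      Phi = rbf_features(X, centers, width)
      w, *_ = np.linalg.lstsq(Phi, target, rcond=None)

      # Free-running (recurrent) prediction: the network's own outputs are fed back.
      state = list(y[:lag])
      pred = []
      for _ in range(len(target)):
          phi = rbf_features(np.array([state[-lag:]]), centers, width)
          nxt = float(phi @ w)
          pred.append(nxt)
          state.append(nxt)
      print("free-run RMS error:", np.sqrt(np.mean((np.array(pred) - target) ** 2)))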

  18. Improved dimensionally-reduced visual cortical network using stochastic noise modeling.

    Science.gov (United States)

    Tao, Louis; Praissman, Jeremy; Sornborger, Andrew T

    2012-04-01

    In this paper, we extend our framework for constructing low-dimensional dynamical system models of large-scale neuronal networks of mammalian primary visual cortex. Our dimensional reduction procedure consists of performing a suitable linear change of variables and then systematically truncating the new set of equations. The extended framework includes modeling the effect of neglected modes as a stochastic process. By parametrizing and including stochasticity in one of two ways we show that we can improve the systems-level characterization of our dimensionally reduced neuronal network model. We examined orientation selectivity maps calculated from the firing rate distribution of large-scale simulations and stochastic dimensionally reduced models and found that by using stochastic processes to model the neglected modes, we were able to better reproduce the mean and variance of firing rates in the original large-scale simulations while still accurately predicting the orientation preference distribution.
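
    The general recipe can be illustrated schematically (this is not the cortical network model itself; the full operator A, the mode count, and the Ornstein-Uhlenbeck parameters are placeholders): project the dynamics onto a few leading modes, truncate, and represent the influence of the neglected modes by a fitted stochastic forcing.

      import numpy as np

      rng = np.random.default_rng(1)
      n, r, dt, nsteps = 200, 8, 1e-3, 5000

      B = rng.standard_normal((n, n))
      A = -np.eye(n) + 0.02 * (B + B.T) / 2.0   # symmetric stand-in for the full system
      eigvals, V = np.linalg.eigh(A)            # eigenvalues in ascending order
      Q = V[:, -r:]                             # r slowest-decaying (leading) modes

      Ar = Q.T @ A @ Q                          # reduced deterministic dynamics
      tau, sigma = 0.05, 0.3                    # OU parameters for the neglected modes

      z = Q.T @ rng.standard_normal(n)
      eta = np.zeros(r)
      for _ in range(nsteps):
          # Ornstein-Uhlenbeck forcing standing in for the effect of truncated modes.
          eta += dt * (-eta / tau) + sigma * np.sqrt(dt) * rng.standard_normal(r)
          z += dt * (Ar @ z + eta)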

  19. Error Estimation for Reduced Order Models of Dynamical systems

    Energy Technology Data Exchange (ETDEWEB)

    Homescu, C; Petzold, L R; Serban, R

    2003-12-16

    The use of reduced order models to describe a dynamical system is pervasive in science and engineering. Often these models are used without an estimate of their error or range of validity. In this paper we consider dynamical systems and reduced models built using proper orthogonal decomposition. We show how to compute estimates and bounds for these errors, by a combination of the small sample statistical condition estimation method and of error estimation using the adjoint method. More importantly, the proposed approach allows the assessment of so-called regions of validity for reduced models, i.e., ranges of perturbations in the original system over which the reduced model is still appropriate. This question is particularly important for applications in which reduced models are used not just to approximate the solution to the system that provided the data used in constructing the reduced model, but rather to approximate the solution of systems perturbed from the original one. Numerical examples validate our approach: the error norm estimates approximate well the forward error while the derived bounds are within an order of magnitude.

  1. The Reduced RUM as a Logit Model: Parameterization and Constraints.

    Science.gov (United States)

    Chiu, Chia-Yi; Köhn, Hans-Friedrich

    2016-06-01

    Cognitive diagnosis models (CDMs) for educational assessment are constrained latent class models. Examinees are assigned to classes of intellectual proficiency defined in terms of cognitive skills called attributes, which an examinee may or may not have mastered. The Reduced Reparameterized Unified Model (Reduced RUM) has received considerable attention among psychometricians. Markov Chain Monte Carlo (MCMC) or Expectation Maximization (EM) are typically used for estimating the Reduced RUM. Commercial implementations of the EM algorithm are available in the latent class analysis (LCA) routines of Latent GOLD and Mplus, for example. Fitting the Reduced RUM with an LCA routine requires that it be reparameterized as a logit model, with constraints imposed on the parameters. For models involving two attributes, these have been worked out. However, for models involving more than two attributes, the parameterization and the constraints are nontrivial and currently unknown. In this article, the general parameterization of the Reduced RUM as a logit model involving any number of attributes and the associated parameter constraints are derived. As a practical illustration, the LCA routine in Mplus is used for fitting the Reduced RUM to two synthetic data sets and to a real-world data set; for comparison, the results obtained by using the MCMC implementation in OpenBUGS are also provided. PMID:25838247

  2. Antenna modeling considerations for accurate SAR calculations in human phantoms in close proximity to GSM cellular base station antennas.

    Science.gov (United States)

    van Wyk, Marnus J; Bingle, Marianne; Meyer, Frans J C

    2005-09-01

    International bodies such as the International Commission on Non-Ionizing Radiation Protection (ICNIRP) and the Institute of Electrical and Electronics Engineers (IEEE) make provision for human exposure assessment based on SAR calculations (or measurements) and basic restrictions. In the case of base station exposure, this is mostly applicable to occupational exposure scenarios in the very near field of these antennas, where the conservative reference level criteria could be unnecessarily restrictive. This study presents a variety of critical aspects that need to be considered when calculating SAR in a human body close to a mobile phone base station antenna. A hybrid FEM/MoM technique is proposed as a suitable numerical method to obtain accurate results. The verification of the FEM/MoM implementation has been presented in a previous publication; the focus of this study is an investigation into the detail that must be included in a numerical model of the antenna, to accurately represent the real-world scenario. This is accomplished by comparing numerical results to measurements for a generic GSM base station antenna and appropriate, representative canonical and human phantoms. The results show that it is critical to take the disturbance effect of the human phantom (a large conductive body) on the base station antenna into account when the antenna-phantom spacing is less than 300 mm. For these small spacings, the antenna structure must be modeled in detail. The conclusion is that it is feasible to calculate, using the proposed techniques and methodology, accurate occupational compliance zones around base station antennas based on a SAR profile and basic restriction guidelines. PMID:15931680

  3. What makes an accurate and reliable subject-specific finite element model? A case study of an elephant femur.

    Science.gov (United States)

    Panagiotopoulou, O; Wilshin, S D; Rayfield, E J; Shefelbine, S J; Hutchinson, J R

    2012-02-01

    Finite element modelling is well entrenched in comparative vertebrate biomechanics as a tool to assess the mechanical design of skeletal structures and to better comprehend the complex interaction of their form-function relationships. But what makes a reliable subject-specific finite element model? To approach this question, we here present a set of convergence and sensitivity analyses and a validation study as an example, for finite element analysis (FEA) in general, of ways to ensure a reliable model. We detail how choices of element size, type and material properties in FEA influence the results of simulations. We also present an empirical model for estimating heterogeneous material properties throughout an elephant femur (but of broad applicability to FEA). We then use an ex vivo experimental validation test of a cadaveric femur to check our FEA results and find that the heterogeneous model matches the experimental results extremely well, and far better than the homogeneous model. We emphasize how considering heterogeneous material properties in FEA may be critical, so this should become standard practice in comparative FEA studies along with convergence analyses, consideration of element size, type and experimental validation. These steps may be required to obtain accurate models and derive reliable conclusions from them. PMID:21752810

  4. Rolling mill optimization using an accurate and rapid new model for mill deflection and strip thickness profile

    Science.gov (United States)

    Malik, Arif Sultan

    This work presents improved technology for attaining high-quality rolled metal strip. The new technology is based on an innovative method to model both the static and dynamic characteristics of rolling mill deflection, and it applies equally to both cluster-type and non cluster-type rolling mill configurations. By effectively combining numerical Finite Element Analysis (FEA) with analytical solid mechanics, the devised approach delivers a rapid, accurate, flexible, high-fidelity model useful for optimizing many important rolling parameters. The associated static deflection model enables computation of the thickness profile and corresponding flatness of the rolled strip. Accurate methods of predicting the strip thickness profile and strip flatness are important in rolling mill design, rolling schedule set-up, control of mill flatness actuators, and optimization of ground roll profiles. The corresponding dynamic deflection model enables solution of the standard eigenvalue problem to determine natural frequencies and modes of vibration. The presented method for solving the roll-stack deflection problem offers several important advantages over traditional methods. In particular, it includes continuity of elastic foundations, non-iterative solution when using pre-determined elastic foundation moduli, continuous third-order displacement fields, simple stress-field determination, the ability to calculate dynamic characteristics, and a comparatively faster solution time. Consistent with the most advanced existing methods, the presented method accommodates loading conditions that represent roll crowning, roll bending, roll shifting, and roll crossing mechanisms. Validation of the static model is provided by comparing results and solution time with large-scale, commercial finite element simulations. In addition to examples with the common 4-high vertical stand rolling mill, application of the presented method to the most complex of rolling mill configurations is demonstrated
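
    The dynamic part of such a model reduces to the standard structural eigenvalue problem K·v = ω²·M·v; the generic sketch below (the 4-degree-of-freedom stiffness and mass matrices are placeholders, not a roll-stack model) shows the computation of natural frequencies and mode shapes.

      import numpy as np
      from scipy.linalg import eigh

      # Placeholder stiffness [N/m] and mass [kg] matrices of a small chain model.
      k, m = 2.0e8, 500.0
      K = k * np.array([[ 2, -1,  0,  0],
                        [-1,  2, -1,  0],
                        [ 0, -1,  2, -1],
                        [ 0,  0, -1,  1]], dtype=float)
      M = m * np.eye(4)

      # Generalized eigenvalue problem K v = w^2 M v.
      w2, modes = eigh(K, M)
      freqs_hz = np.sqrt(w2) / (2 * np.pi)
      print("natural frequencies [Hz]:", np.round(freqs_hz, 1))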

  5. Research on Accurate Gear Modeling Based on UG

    Institute of Scientific and Technical Information of China (English)

    张磊; 赵韩; 刘迪祥; 崔港

    2011-01-01

    Based on an analysis of the equations of two types of gear transition curves, a method for accurate parametric modeling of gears is studied. An accurate parametric modeling system for involute gears is implemented using the combined development technologies of UG/Open API, UG/Open GRIP and MFC. This prepares for the subsequent finite element analysis and dynamic simulation of the gear, and has practical engineering significance.
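
    For context, the accuracy of such a model hinges on generating the true involute flank rather than a circular-arc approximation; a small sketch of the parametric involute equations follows (the module, tooth number, and addendum convention are arbitrary example values, independent of the UG implementation):

      import numpy as np

      def involute_profile(module, teeth, pressure_angle_deg=20.0, n_points=50):
          """Points (x, y) of one involute flank generated from the base circle:
          x = rb*(cos t + t*sin t), y = rb*(sin t - t*cos t)."""
          alpha = np.radians(pressure_angle_deg)
          r_pitch = module * teeth / 2.0
          r_base = r_pitch * np.cos(alpha)
          r_tip = r_pitch + module                 # addendum = 1 module
          t_max = np.sqrt((r_tip / r_base) ** 2 - 1.0)
          t = np.linspace(0.0, t_max, n_points)
          x = r_base * (np.cos(t) + t * np.sin(t))
          y = r_base * (np.sin(t) - t * np.cos(t))
          return np.column_stack([x, y])

      flank = involute_profile(module=2.0, teeth=20)   # 2 mm module, 20 teeth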

  6. Projection-Based Reduced Order Modeling for Spacecraft Thermal Analysis

    Science.gov (United States)

    Qian, Jing; Wang, Yi; Song, Hongjun; Pant, Kapil; Peabody, Hume; Ku, Jentung; Butler, Charles D.

    2015-01-01

    This paper presents a mathematically rigorous, subspace projection-based reduced order modeling (ROM) methodology and an integrated framework to automatically generate reduced order models for spacecraft thermal analysis. Two key steps in the reduced order modeling procedure are described: (1) the acquisition of a full-scale spacecraft model in the ordinary differential equation (ODE) and differential algebraic equation (DAE) form to resolve its dynamic thermal behavior; and (2) the ROM to markedly reduce the dimension of the full-scale model. Specifically, proper orthogonal decomposition (POD) in conjunction with the discrete empirical interpolation method (DEIM) and trajectory piece-wise linear (TPWL) methods are developed to address the strong nonlinear thermal effects due to coupled conductive and radiative heat transfer in the spacecraft environment. Case studies using NASA-relevant satellite models are undertaken to verify the capability and to assess the computational performance of the ROM technique in terms of speed-up and error relative to the full-scale model. The ROM exhibits excellent agreement in spatiotemporal thermal profiles relative to the full-scale analysis. These findings establish the feasibility of ROM to perform rational and computationally affordable thermal analysis, develop reliable thermal control strategies for spacecraft, and greatly reduce the development cycle times and costs.
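
    Of the ingredients listed above, the discrete empirical interpolation step is the easiest to sketch in isolation. Given a basis U for snapshots of the nonlinear term, DEIM greedily selects interpolation indices; the generic version below is illustrative and not the authors' implementation:

      import numpy as np

      def deim_indices(U):
          """Greedy DEIM point selection for a nonlinear-term basis U (n x m)."""
          n, m = U.shape
          idx = [int(np.argmax(np.abs(U[:, 0])))]
          for j in range(1, m):
              # Interpolate the new basis vector at the already-selected points...
              c = np.linalg.solve(U[idx, :j], U[idx, j])
              residual = U[:, j] - U[:, :j] @ c
              # ...and add the point where the interpolation error is largest.
              idx.append(int(np.argmax(np.abs(residual))))
          return np.array(idx)

      # Example on a random orthonormal basis of 6 modes over 300 grid points.
      rng = np.random.default_rng(2)
      U, _ = np.linalg.qr(rng.standard_normal((300, 6)))
      print(deim_indices(U))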

  7. An Accurate GPS-IMU/DR Data Fusion Method for Driverless Car Based on a Set of Predictive Models and Grid Constraints.

    Science.gov (United States)

    Wang, Shiyao; Deng, Zhidong; Yin, Gang

    2016-01-01

    A high-performance differential global positioning system (GPS) receiver with real-time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to the multipath effect but also unable to effectively fulfill precise error correction in a wide range of driving areas. This paper proposes an accurate GPS-inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive and moving average (ARMA) equations with different structural parameters to build maximum likelihood models of raw navigation. Second, both grid constraints and spatial consensus checks on all predictive results and current measurements are applied to remove outliers. Navigation data that satisfy a stationary stochastic process are then fused to achieve accurate localization results. Third, the standard deviation of multimodal data fusion can be pre-specified by the grid size. Finally, we perform extensive field tests in a variety of real urban scenarios. The experimental results demonstrate that the method can significantly smooth small jumps in bias and considerably reduce accumulated position errors due to DR. With low computational complexity, the position accuracy of our method surpasses existing state-of-the-art methods on the same dataset, and the new data fusion method is practically applied in our driverless car. PMID:26927108
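
    As a toy illustration of the predictive-gating idea (the AR(2) model fitted below, the synthetic track, and the 3-sigma gate are assumptions, not the parameters of the deployed system), an autoregressive model fitted to recent navigation data predicts the next value, and a measurement that disagrees with the prediction beyond a threshold is rejected before fusion:

      import numpy as np

      def fit_ar(series, p=2):
          """Least-squares AR(p) coefficients for a 1-D series."""
          X = np.column_stack([series[p - k - 1:len(series) - k - 1] for k in range(p)])
          y = series[p:]
          coef, *_ = np.linalg.lstsq(X, y, rcond=None)
          resid = y - X @ coef
          return coef, resid.std()

      def gate(measurement, history, coef, sigma, n_sigma=3.0):
          """Accept the new measurement only if it is consistent with the AR prediction."""
          pred = coef @ history[-len(coef):][::-1]
          return abs(measurement - pred) <= n_sigma * sigma, pred

      # Example with a synthetic easting series containing one multipath jump.
      rng = np.random.default_rng(3)
      track = np.cumsum(0.5 + 0.05 * rng.standard_normal(200))
      coef, sigma = fit_ar(track[:150])
      ok, pred = gate(track[150] + 5.0, track[:150], coef, sigma)   # inject 5 m jump
      print("measurement accepted:", ok)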

  9. An Accurate GPS-IMU/DR Data Fusion Method for Driverless Car Based on a Set of Predictive Models and Grid Constraints

    Directory of Open Access Journals (Sweden)

    Shiyao Wang

    2016-02-01

    Full Text Available A high-performance differential global positioning system (GPS) receiver with real-time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to the multipath effect but also unable to effectively fulfill precise error correction in a wide range of driving areas. This paper proposes an accurate GPS-inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive and moving average (ARMA) equations with different structural parameters to build maximum likelihood models of raw navigation. Second, both grid constraints and spatial consensus checks on all predictive results and current measurements are applied to remove outliers. Navigation data that satisfy a stationary stochastic process are then fused to achieve accurate localization results. Third, the standard deviation of multimodal data fusion can be pre-specified by the grid size. Finally, we perform extensive field tests in a variety of real urban scenarios. The experimental results demonstrate that the method can significantly smooth small jumps in bias and considerably reduce accumulated position errors due to DR. With low computational complexity, the position accuracy of our method surpasses existing state-of-the-art methods on the same dataset, and the new data fusion method is practically applied in our driverless car.

  10. Ignition calculations using a reduced coupled-mode electron-ion energy exchange model

    Science.gov (United States)

    Garbett, W. J.; Chapman, D. A.

    2016-03-01

    Coupled-mode models for electron-ion energy exchange can predict large deviations from standard binary collision models in some regimes. A recently developed reduced coupled-mode model for electron-ion energy exchange, which accurately reproduces full numerical results over a wide range of density and temperature space, has been implemented in the Nym hydrocode and used to assess the impact on ICF capsule fuel assembly and performance. Simulations show a lack of sensitivity to the model, consistent with results from a range of simpler alternative models. Since the coupled-mode model is conceptually distinct from models based on binary collision theory, this result provides increased confidence that uncertainty in electron-ion energy exchange will not impact ignition attempts.

  11. Reduced order modeling of some fluid flows of industrial interest

    Energy Technology Data Exchange (ETDEWEB)

    Alonso, D; Terragni, F; Velazquez, A; Vega, J M, E-mail: josemanuel.vega@upm.es [E.T.S.I. Aeronauticos, Universidad Politecnica de Madrid, 28040 Madrid (Spain)

    2012-06-01

    Some basic ideas are presented for the construction of robust, computationally efficient reduced order models amenable to be used in industrial environments, combined with somewhat rough computational fluid dynamics solvers. These ideas result from a critical review of the basic principles of proper orthogonal decomposition-based reduced order modeling of both steady and unsteady fluid flows. In particular, the extent to which some artifacts of the computational fluid dynamics solvers can be ignored is addressed, which opens up the possibility of obtaining quite flexible reduced order models. The methods are illustrated with the steady aerodynamic flow around a horizontal tail plane of a commercial aircraft in transonic conditions, and the unsteady lid-driven cavity problem. In both cases, the approximations are fairly good, thus reducing the computational cost by a significant factor. (review)

  12. Accurate Segmentation of CT Male Pelvic Organs via Regression-Based Deformable Models and Multi-Task Random Forests.

    Science.gov (United States)

    Gao, Yaozong; Shao, Yeqin; Lian, Jun; Wang, Andrew Z; Chen, Ronald C; Shen, Dinggang

    2016-06-01

    Segmenting male pelvic organs from CT images is a prerequisite for prostate cancer radiotherapy. The efficacy of radiation treatment highly depends on segmentation accuracy. However, accurate segmentation of male pelvic organs is challenging due to low tissue contrast of CT images, as well as large variations of shape and appearance of the pelvic organs. Among existing segmentation methods, deformable models are the most popular, as shape prior can be easily incorporated to regularize the segmentation. Nonetheless, the sensitivity to initialization often limits their performance, especially for segmenting organs with large shape variations. In this paper, we propose a novel approach to guide deformable models, thus making them robust against arbitrary initializations. Specifically, we learn a displacement regressor, which predicts 3D displacement from any image voxel to the target organ boundary based on the local patch appearance. This regressor provides a non-local external force for each vertex of deformable model, thus overcoming the initialization problem suffered by the traditional deformable models. To learn a reliable displacement regressor, two strategies are particularly proposed. 1) A multi-task random forest is proposed to learn the displacement regressor jointly with the organ classifier; 2) an auto-context model is used to iteratively enforce structural information during voxel-wise prediction. Extensive experiments on 313 planning CT scans of 313 patients show that our method achieves better results than alternative classification or regression based methods, and also several other existing methods in CT pelvic organ segmentation. PMID:26800531

  13. A new model of dispersion for metals leading to a more accurate modeling of plasmonic structures using the FDTD method

    Energy Technology Data Exchange (ETDEWEB)

    Vial, A.; Dridi, M.; Cunff, L. le [Universite de Technologie de Troyes, Institut Charles Delaunay, CNRS UMR 6279, Laboratoire de Nanotechnologie et d' Instrumentation Optique, 12, rue Marie Curie, BP-2060, Troyes Cedex (France); Laroche, T. [Universite de Franche-Comte, Institut FEMTO-ST, CNRS UMR 6174, Departement de Physique et de Metrologie des Oscillateurs, Besancon Cedex (France)

    2011-06-15

    We present FDTD simulations results obtained using the Drude critical points model. This model enables spectroscopic studies of metallic structures over wider wavelength ranges than usually used, and it facilitates the study of structures made of several metals. (orig.)

  14. SU-E-T-475: An Accurate Linear Model of Tomotherapy MLC-Detector System for Patient Specific Delivery QA

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Y; Mo, X; Chen, M; Olivera, G; Parnell, D; Key, S; Lu, W [21st Century Oncology, Madison, WI (United States); Reeher, M [21st Century Oncology, Naples, FL (United States); Galmarini, D [21st Century Oncology, Fort Myers, FL (United States)

    2014-06-01

    Purpose: An accurate leaf fluence model can be used in applications such as patient specific delivery QA and in-vivo dosimetry for TomoTherapy systems. It is known that the total fluence is not a linear combination of individual leaf fluences due to leakage-transmission, tongue-and-groove, and source occlusion effects. Here we propose a method to model the nonlinear effects as linear terms, thus making the MLC-detector system a linear system. Methods: A leaf pattern basis (LPB) consisting of no-leaf-open, single-leaf-open, double-leaf-open and triple-leaf-open patterns is chosen to represent the linear and major nonlinear effects of leaf fluence as a linear system. An arbitrary leaf pattern can be expressed as (or decomposed into) a linear combination of the LPB, either pulse by pulse or weighted by dwell time. The exit detector responses to the LPB are obtained by processing returned detector signals resulting from the predefined leaf patterns for each jaw setting. Through forward transformation, the detector signal can be predicted given a delivery plan. An equivalent leaf open time (LOT) sinogram containing output variation information can also be inversely calculated from the measured detector signals. Twelve patient plans were delivered in air. The equivalent LOT sinograms were compared with their planned sinograms. Results: The whole calibration process was done in 20 minutes. For two randomly generated leaf patterns, 98.5% of the active channels showed differences within 0.5% of the local maximum between the predicted and measured signals. Averaged over the twelve plans, 90% of LOT errors were within ±10 ms. The LOT systematic error increases and shows an oscillating pattern when LOT is shorter than 50 ms. Conclusion: The LPB method models the MLC-detector response accurately, which improves patient specific delivery QA and in-vivo dosimetry for TomoTherapy systems. It is sensitive enough to detect systematic LOT errors as small as 10 ms.
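
    The linear-system idea can be sketched generically (illustrative; the basis size and signals are synthetic): once the detector response to each leaf-pattern basis element has been measured, the response to an arbitrary pattern is predicted by a matrix-vector product, and equivalent leaf-open-time information is recovered from a measured signal by solving the same system in the least-squares sense.

      import numpy as np

      rng = np.random.default_rng(4)
      n_channels, n_basis = 640, 64

      # R[:, j] = measured exit-detector response to basis leaf pattern j.
      R = rng.random((n_channels, n_basis))

      # Forward model: predicted detector signal for given basis weights
      # (e.g., a dwell-time-weighted decomposition of a planned leaf pattern).
      weights_planned = rng.random(n_basis)
      signal_predicted = R @ weights_planned

      # Inverse: recover equivalent weights (leaf-open-time information)
      # from a measured signal by linear least squares.
      signal_measured = signal_predicted + 0.01 * rng.standard_normal(n_channels)
      weights_recovered, *_ = np.linalg.lstsq(R, signal_measured, rcond=None)
      print("max weight error:", np.max(np.abs(weights_recovered - weights_planned)))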

  15. Comment on ''Accurate analytic model potentials for D2 and H2 based on the perturbed-Morse-oscillator model''

    International Nuclear Information System (INIS)

    Huffaker and Cohen (ref.1) claim that the perturbed-Morse-oscillator (PMO) model, for the potential energy function for hydrogen, gives very high accuracy results; surpassing that of the RKR potential. A more efficient approach to formulating analytical functions based on the PMO model is given, and some defects of the PMO model are discussed

  16. Fast and accurate Monte Carlo modeling of a kilovoltage X-ray therapy unit using a photon-source approximation for treatment planning in complex media

    Directory of Open Access Journals (Sweden)

    B Zeinali-Rafsanjani

    2015-01-01

    Full Text Available To accurately recompute dose distributions in chest-wall radiotherapy with 120 kVp kilovoltage X-rays, an MCNP4C Monte Carlo model is presented using a fast method that obviates the need to fully model the tube components. To validate the model, half-value layer (HVL, percentage depth doses (PDDs and beam profiles were measured. Dose measurements were performed for a more complex situation using thermoluminescence dosimeters (TLDs placed within a Rando phantom. The measured and computed first and second HVLs were 3.8, 10.3 mm Al and 3.8, 10.6 mm Al, respectively. The differences between measured and calculated PDDs and beam profiles in water were within 2 mm/2% for all data points. In the Rando phantom, differences for majority of data points were within 2%. The proposed model offered an approximately 9500-fold reduced run time compared to the conventional full simulation. The acceptable agreement, based on international criteria, between the simulations and the measurements validates the accuracy of the model for its use in treatment planning and radiobiological modeling studies of superficial therapies including chest-wall irradiation using kilovoltage beam.

  18. Accurate prediction of interfacial residues in two-domain proteins using evolutionary information: implications for three-dimensional modeling.

    Science.gov (United States)

    Bhaskara, Ramachandra M; Padhi, Amrita; Srinivasan, Narayanaswamy

    2014-07-01

    With the preponderance of multidomain proteins in eukaryotic genomes, it is essential to recognize the constituent domains and their functions. Often function involves communications across the domain interfaces, and the knowledge of the interacting sites is essential to our understanding of the structure-function relationship. Using evolutionary information extracted from homologous domains in at least two diverse domain architectures (single and multidomain), we predict the interface residues corresponding to domains from the two-domain proteins. We also use information from the three-dimensional structures of individual domains of two-domain proteins to train naïve Bayes classifier model to predict the interfacial residues. Our predictions are highly accurate (∼85%) and specific (∼95%) to the domain-domain interfaces. This method is specific to multidomain proteins which contain domains in at least more than one protein architectural context. Using predicted residues to constrain domain-domain interaction, rigid-body docking was able to provide us with accurate full-length protein structures with correct orientation of domains. We believe that these results can be of considerable interest toward rational protein and interaction design, apart from providing us with valuable information on the nature of interactions. PMID:24375512

  19. A homotopy-based sparse representation for fast and accurate shape prior modeling in liver surgical planning.

    Science.gov (United States)

    Wang, Guotai; Zhang, Shaoting; Xie, Hongzhi; Metaxas, Dimitris N; Gu, Lixu

    2015-01-01

    Shape prior plays an important role in accurate and robust liver segmentation. However, liver shapes have complex variations and accurate modeling of liver shapes is challenging. Using large-scale training data can improve the accuracy but it limits the computational efficiency. In order to obtain accurate liver shape priors without sacrificing the efficiency when dealing with large-scale training data, we investigate effective and scalable shape prior modeling method that is more applicable in clinical liver surgical planning system. We employed the Sparse Shape Composition (SSC) to represent liver shapes by an optimized sparse combination of shapes in the repository, without any assumptions on parametric distributions of liver shapes. To leverage large-scale training data and improve the computational efficiency of SSC, we also introduced a homotopy-based method to quickly solve the L1-norm optimization problem in SSC. This method takes advantage of the sparsity of shape modeling, and solves the original optimization problem in SSC by continuously transforming it into a series of simplified problems whose solution is fast to compute. When new training shapes arrive gradually, the homotopy strategy updates the optimal solution on the fly and avoids re-computing it from scratch. Experiments showed that SSC had a high accuracy and efficiency in dealing with complex liver shape variations, excluding gross errors and preserving local details on the input liver shape. The homotopy-based SSC had a high computational efficiency, and its runtime increased very slowly when repository's capacity and vertex number rose to a large degree. When repository's capacity was 10,000, with 2000 vertices on each shape, homotopy method cost merely about 11.29 s to solve the optimization problem in SSC, nearly 2000 times faster than interior point method. The dice similarity coefficient (DSC), average symmetric surface distance (ASD), and maximum symmetric surface distance measurement

  20. Reduced order model for binary neutron star waveforms with tidal interactions

    Science.gov (United States)

    Lackey, Benjamin; Bernuzzi, Sebastiano; Galley, Chad

    2016-03-01

    Observations of inspiralling binary neutron star (BNS) systems with Advanced LIGO can be used to determine the unknown neutron-star equation of state by measuring the phase shift in the gravitational waveform due to tidal interactions. Unfortunately, this requires computationally efficient waveform models for use in parameter estimation codes that typically require 10^6-10^7 sequential waveform evaluations, as well as accurate waveform models with phase errors less than 1 radian over the entire inspiral to avoid systematic errors in the measured tidal deformability. The effective one body waveform model with l = 2, 3, and 4 tidal multipole moments is currently the most accurate model for BNS systems, but takes several minutes to evaluate. We develop a reduced order model of this waveform by constructing separate orthonormal bases for the amplitude and phase evolution. We find that only 10-20 bases are needed to reconstruct any BNS waveform with a starting frequency of 10 Hz. The coefficients of these bases are found with Chebyshev interpolation over the waveform parameter space. This reduced order model has maximum errors of 0.2 radians, and results in a speedup factor of more than 10^3, allowing parameter estimation codes to run in days to weeks rather than decades.
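
    The basic recipe (an orthonormal basis for the phase evolution, with basis coefficients interpolated over parameter space by Chebyshev fits) can be sketched on a toy one-parameter waveform family; everything below, including the placeholder phase model, is a stand-in for the actual effective-one-body waveforms:

      import numpy as np
      from numpy.polynomial import chebyshev as C

      # Toy "waveform phase" family over one parameter lam (stand-in for tidal deformability).
      t = np.linspace(0.0, 1.0, 2000)
      def phase(lam):
          return 100.0 * t**1.5 + lam * t**3          # placeholder phase model

      lams_train = np.linspace(0.0, 10.0, 25)
      training = np.array([phase(l) for l in lams_train])        # (25, 2000)

      # Orthonormal reduced basis from an SVD of the training phases.
      U, s, Vt = np.linalg.svd(training, full_matrices=False)
      r = int(np.sum(s / s[0] > 1e-10))                           # keep significant modes
      basis = Vt[:r]                                              # (r, 2000)

      # Project training data and fit each projection coefficient with a Chebyshev
      # polynomial in lam, so new parameters need only a cheap polynomial evaluation.
      coeffs = training @ basis.T                                 # (25, r)
      cheb_fits = [C.chebfit(lams_train, coeffs[:, k], deg=6) for k in range(r)]

      def rom_phase(lam):
          c = np.array([C.chebval(lam, f) for f in cheb_fits])
          return c @ basis

      print("max phase error:", np.max(np.abs(rom_phase(3.3) - phase(3.3))))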

  1. User Guide for SMARTIES: Spheroids Modelled Accurately with a Robust T-matrix Implementation for Electromagnetic Scattering

    CERN Document Server

    Somerville, W R C; Ru, E C Le

    2015-01-01

    We provide a detailed user guide for SMARTIES, a suite of Matlab codes for the calculation of the optical properties of oblate and prolate spheroidal particles, with comparable capabilities and ease-of-use as Mie theory for spheres. SMARTIES is a Matlab implementation of an improved T-matrix algorithm for the theoretical modelling of electromagnetic scattering by particles of spheroidal shape. The theory behind the improvements in numerical accuracy and convergence is briefly summarised, with reference to the original publications. Instructions of use, and a detailed description of the code structure, its range of applicability, as well as guidelines for further developments by advanced users are discussed in separate sections of this user guide. The code may be useful to researchers seeking a fast, accurate and reliable tool to simulate the near-field and far-field optical properties of elongated particles, but will also appeal to other developers of light-scattering software seeking a reliable benchmark for...

  2. Lightning arrester models enabling highly accurate lightning surge analysis; Koseidona kaminari surge kaiseki wo kano ni suru hiraiki model

    Energy Technology Data Exchange (ETDEWEB)

    Ueda, T. [Chubu Electric Power Co. Inc., Nagoya (Japan); Funabashi, T.; Hagiwara, T.; Watanabe, H. [Meidensha Corp., Tokyo (Japan)

    1998-12-28

    Introduced herein are a dynamic behavior model for lightning arresters designed for power stations and substations, and a flashover model for lightning arresting devices designed for transmission lines, both developed by the authors. The zinc oxide lightning arrester model is based on the conventional static V-I characteristics, supplemented with the difference in voltage between the static and dynamic characteristics. The model is easily simulated using EMTP (Electromagnetic Transients Program) and similar tools, and there is good agreement between the results calculated with this model and actually measured values. Lightning arresting devices for transmission have come into practical use, and their effectiveness is reported on various occasions. For the proper application of such devices, an analysis model capable of faithfully describing the flashover characteristics of arcing horns, which are installed in great numbers along transmission lines, and of lightning arresting devices for transmission is required. The authors have newly developed a flashover model for these devices and use it for the analysis of lightning surges. The actually measured discharge characteristics of lightning arresting devices for transmission agree well with the values calculated using the model. (NEDO)

  3. Towards an accurate model of the redshift-space clustering of haloes in the quasi-linear regime

    Science.gov (United States)

    Reid, Beth A.; White, Martin

    2011-11-01

    Observations of redshift-space distortions in spectroscopic galaxy surveys offer an attractive method for measuring the build-up of cosmological structure, which depends both on the expansion rate of the Universe and on our theory of gravity. The statistical precision with which redshift-space distortions can now be measured demands better control of our theoretical systematic errors. While many recent studies focus on understanding dark matter clustering in redshift space, galaxies occupy special places in the universe: dark matter haloes. In our detailed study of halo clustering and velocity statistics in 67.5 h^-3 Gpc^3 of N-body simulations, we uncover a complex dependence of redshift-space clustering on halo bias. We identify two distinct corrections which affect the halo redshift-space correlation function on quasi-linear scales (~30-80 h^-1 Mpc): the non-linear mapping between real-space and redshift-space positions, and the non-linear suppression of power in the velocity divergence field. We model the first non-perturbatively using the scale-dependent Gaussian streaming model, which we show is accurate for s > 10 (s > 25) h^-1 Mpc for the monopole (quadrupole) halo correlation functions. The dominant correction to the Kaiser limit in this model scales like b^3. We use standard perturbation theory to predict the real-space pairwise halo velocity statistics. Our fully analytic model is accurate at the 2 per cent level only on scales s > 40 h^-1 Mpc for the range of halo masses we studied (with b = 1.4-2.8). We find that recent models of halo redshift-space clustering that neglect the corrections from the bispectrum and higher order terms from the non-linear real-space to redshift-space mapping will not have the accuracy required for current and future observational analyses. Finally, we note that our simulation results confirm the essential but non-trivial assumption that on large scales, the bias inferred from the real-space clustering of haloes is the same as the

  4. Accurate modeling of size and strain broadening in the Rietveld refinement: The "double-Voigt" approach

    Energy Technology Data Exchange (ETDEWEB)

    Balzar, D. [Ruder Boskovic Inst., Zagreb (Croatia); Ledbetter, H. [National Inst. of Standards and Technology, Boulder, CO (United States)

    1995-12-31

    In the "double-Voigt" approach, an exact Voigt function describes both size- and strain-broadened profiles. The lattice strain is defined in terms of physically credible mean-square strain averaged over a distance in the diffracting domains. Analysis of Fourier coefficients in a harmonic approximation for strain coefficients leads to the Warren-Averbach method for the separation of size and strain contributions to diffraction line broadening. The model is introduced in the Rietveld refinement program in the following way: Line widths are modeled with only four parameters in the isotropic case. Varied parameters are both surface- and volume-weighted domain sizes and root-mean-square strains averaged over two distances. Refined parameters determine the physically broadened Voigt line profile. Instrumental Voigt line profile parameters are added to obtain the observed (Voigt) line profile. To speed computation, the corresponding pseudo-Voigt function is calculated and used as a fitting function in refinement. This approach allows for both fast computer code and accurate modeling in terms of physically identifiable parameters.
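
    A Voigt profile itself is inexpensive to evaluate via the Faddeeva function, which is convenient when experimenting with size-strain fits of this kind; the Gaussian and Lorentzian widths below are arbitrary example values:

      import numpy as np
      from scipy.special import wofz

      def voigt(x, sigma, gamma):
          """Voigt profile: convolution of a Gaussian (std sigma) and a
          Lorentzian (HWHM gamma), evaluated with the Faddeeva function."""
          z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
          return wofz(z).real / (sigma * np.sqrt(2.0 * np.pi))

      two_theta = np.linspace(-1.0, 1.0, 401)       # degrees about the peak centre
      profile = voigt(two_theta, sigma=0.05, gamma=0.03)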

  5. On Modeling CPU Utilization of MapReduce Applications

    CERN Document Server

    Rizvandi, Nikzad Babaii; Zomaya, Albert Y

    2012-01-01

    In this paper, we present an approach to predict the total CPU utilization, in terms of CPU clock ticks, of applications running on the MapReduce framework. Our approach has two key phases: profiling and modeling. In the profiling phase, an application is run several times with different sets of MapReduce configuration parameters to profile the total CPU clock ticks of the application on a given platform. In the modeling phase, multiple linear regression is used to map the sets of MapReduce configuration parameters (number of Mappers, number of Reducers, size of the file system (HDFS) and the size of the input file) to the total CPU clock ticks of the application. The derived model can then be used to predict the total CPU requirements of the same application when using the MapReduce framework on the same platform. Our approach aims to eliminate error-prone manual processes and presents a fully automated solution. Three standard applications (WordCount, Exim Mainlog parsing and Terasort) are used to evaluate our modeling technique on pseu...
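
    The modeling phase amounts to an ordinary multiple linear regression; a minimal sketch with synthetic profiling data follows (the numbers are invented placeholders, and the columns mirror the four configuration parameters listed above):

      import numpy as np

      # Columns: [n_mappers, n_reducers, hdfs_size_GB, input_size_GB] per profiling run.
      runs = np.array([
          [4,  2,  64,  1.0],
          [8,  2,  64,  2.0],
          [8,  4, 128,  2.0],
          [16, 4, 128,  4.0],
          [16, 8, 256,  8.0],
          [32, 8, 256, 16.0],
      ], dtype=float)
      cpu_ticks = np.array([1.1e9, 1.9e9, 2.1e9, 3.8e9, 7.2e9, 14.5e9])  # synthetic totals

      # Multiple linear regression with an intercept term.
      A = np.column_stack([np.ones(len(runs)), runs])
      beta, *_ = np.linalg.lstsq(A, cpu_ticks, rcond=None)

      def predict(n_map, n_red, hdfs_gb, input_gb):
          return beta @ np.array([1.0, n_map, n_red, hdfs_gb, input_gb])

      print("predicted CPU ticks:", predict(32, 16, 256, 16.0))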

  6. Use of a clay modeling task to reduce chocolate craving.

    Science.gov (United States)

    Andrade, Jackie; Pears, Sally; May, Jon; Kavanagh, David J

    2012-06-01

    Elaborated Intrusion theory (EI theory; Kavanagh, Andrade, & May, 2005) posits two main cognitive components in craving: associative processes that lead to intrusive thoughts about the craved substance or activity, and elaborative processes supporting mental imagery of the substance or activity. We used a novel visuospatial task to test the hypothesis that visual imagery plays a key role in craving. Experiment 1 showed that spending 10 min constructing shapes from modeling clay (plasticine) reduced participants' craving for chocolate compared with spending 10 min 'letting your mind wander'. Increasing the load on verbal working memory using a mental arithmetic task (counting backwards by threes) did not reduce craving further. Experiment 2 compared effects on craving of a simpler verbal task (counting by ones) and clay modeling. Clay modeling reduced overall craving strength and strength of craving imagery, and reduced the frequency of thoughts about chocolate. The results are consistent with EI theory, showing that craving is reduced by loading the visuospatial sketchpad of working memory but not by loading the phonological loop. Clay modeling might be a useful self-help tool to help manage craving for chocolate, snacks and other foods. PMID:22369958

  7. Accurate Monte Carlo simulations on FCC and HCP Lennard-Jones solids at very low temperatures and high reduced densities up to 1.30

    Science.gov (United States)

    Adidharma, Hertanto; Tan, Sugata P.

    2016-07-01

    Canonical Monte Carlo simulations on face-centered cubic (FCC) and hexagonal closed packed (HCP) Lennard-Jones (LJ) solids are conducted at very low temperatures (0.10 ≤ T∗ ≤ 1.20) and high densities (0.96 ≤ ρ∗ ≤ 1.30). A simple and robust method is introduced to determine whether or not the cutoff distance used in the simulation is large enough to provide accurate thermodynamic properties, which enables us to distinguish the properties of FCC from that of HCP LJ solids with confidence, despite their close similarities. Free-energy expressions derived from the simulation results are also proposed, not only to describe the properties of those individual structures but also the FCC-liquid, FCC-vapor, and FCC-HCP solid phase equilibria.
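
    A bare-bones NVT Metropolis sweep for a Lennard-Jones solid is sketched below (minimum-image convention, a plainly truncated potential, a small FCC cell, and an arbitrary move size; the cutoff handling and system sizes here are far simpler than what the reported accuracy requires):

      import numpy as np

      rng = np.random.default_rng(5)

      def fcc_lattice(n_cells, rho):
          """FCC positions for 4*n_cells^3 atoms at reduced density rho."""
          a = (4.0 / rho) ** (1.0 / 3.0)                     # cubic cell edge
          base = np.array([[0, 0, 0], [0, .5, .5], [.5, 0, .5], [.5, .5, 0]])
          cells = np.array([[i, j, k] for i in range(n_cells)
                            for j in range(n_cells) for k in range(n_cells)])
          pos = (cells[:, None, :] + base[None, :, :]).reshape(-1, 3) * a
          return pos, a * n_cells

      def energy_of(i, pos, box, rc2=5.29):
          """LJ energy of particle i with all others (cutoff 2.3 sigma, minimum image)."""
          d = pos - pos[i]
          d -= box * np.round(d / box)
          r2 = (d * d).sum(1)
          r2[i] = np.inf
          r2 = r2[r2 < rc2]
          inv6 = (1.0 / r2) ** 3
          return 4.0 * np.sum(inv6 * inv6 - inv6)

      pos, box = fcc_lattice(3, rho=1.0)                     # 108 atoms, rho* = 1.0
      T_star = 0.5                                           # reduced temperature
      beta, dmax = 1.0 / T_star, 0.05
      for sweep in range(200):
          for i in range(len(pos)):
              old = pos[i].copy()
              e_old = energy_of(i, pos, box)
              pos[i] = old + dmax * (rng.random(3) - 0.5)
              if rng.random() >= np.exp(-beta * (energy_of(i, pos, box) - e_old)):
                  pos[i] = old                               # reject the trial move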

  9. Testing whether humans have an accurate model of their own motor uncertainty in a speeded reaching task.

    Directory of Open Access Journals (Sweden)

    Hang Zhang

    Full Text Available In many motor tasks, optimal performance presupposes that human movement planning is based on an accurate internal model of the subject's own motor error. We developed a motor choice task that allowed us to test whether the internal model implicit in a subject's choices differed from the actual in isotropy (elongation) and variance. Subjects were first trained to hit a circular target on a touch screen within a time limit. After training, subjects were repeatedly shown pairs of targets differing in size and shape and asked to choose the target that was easier to hit. On each trial they simply chose a target - they did not attempt to hit the chosen target. For each subject, we tested whether the internal model implicit in her target choices was consistent with her true error distribution in isotropy and variance. For all subjects, movement end points were anisotropic, distributed as vertically elongated bivariate Gaussians. However, in choosing targets, almost all subjects effectively assumed an isotropic distribution rather than their actual anisotropic distribution. Roughly half of the subjects chose as though they correctly estimated their own variance and the other half effectively assumed a variance that was more than four times larger than the actual, essentially basing their choices merely on the areas of the targets. The task and analyses we developed allowed us to characterize the internal model of motor error implicit in how humans plan reaching movements. In this task, human movement planning - even after extensive training - is based on an internal model of human motor error that includes substantial and qualitative inaccuracies.

  10. Prognostic breast cancer signature identified from 3D culture model accurately predicts clinical outcome across independent datasets

    Energy Technology Data Exchange (ETDEWEB)

    Martin, Katherine J.; Patrick, Denis R.; Bissell, Mina J.; Fournier, Marcia V.

    2008-10-20

    One of the major tenets in breast cancer research is that early detection is vital for patient survival by increasing treatment options. To that end, we have previously used a novel unsupervised approach to identify a set of genes whose expression predicts prognosis of breast cancer patients. The predictive genes were selected in a well-defined three-dimensional (3D) cell culture model of non-malignant human mammary epithelial cell morphogenesis as down-regulated during breast epithelial cell acinar formation and cell cycle arrest. Here we examine the ability of this gene signature (3D-signature) to predict prognosis in three independent breast cancer microarray datasets having 295, 286, and 118 samples, respectively. Our results show that the 3D-signature accurately predicts prognosis in three unrelated patient datasets. At 10 years, the probability of positive outcome was 52, 51, and 47 percent in the group with a poor-prognosis signature and 91, 75, and 71 percent in the group with a good-prognosis signature for the three datasets, respectively (Kaplan-Meier survival analysis, p<0.05). Hazard ratios for poor outcome were 5.5 (95% CI 3.0 to 12.2, p<0.0001), 2.4 (95% CI 1.6 to 3.6, p<0.0001) and 1.9 (95% CI 1.1 to 3.2, p = 0.016) and remained significant for the two larger datasets when corrected for estrogen receptor (ER) status. Hence the 3D-signature accurately predicts breast cancer outcome in both ER-positive and ER-negative tumors, though individual genes differed in their prognostic ability in the two subtypes. Genes that were prognostic in ER+ patients are AURKA, CEP55, RRM2, EPHA2, FGFBP1, and VRK1, while genes prognostic in ER− patients include ACTB, FOXM1 and SERPINE2 (Kaplan-Meier p<0.05). Multivariable Cox regression analysis in the largest dataset showed that the 3D-signature was a strong independent factor in predicting breast cancer outcome. The 3D-signature accurately predicts breast cancer outcome across multiple datasets and holds prognostic

  11. Accurate tracking control in LOM application

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    The fabrication of an accurate prototype directly from a CAD model in a short time depends on accurate tracking control and reference trajectory planning in Laminated Object Manufacturing (LOM) applications. An improvement in contour accuracy is achieved by introducing a tracking controller and a trajectory generation policy. A model of the X-Y positioning system of the LOM machine is developed as the design basis of the tracking controller. The ZPETC (Zero Phase Error Tracking Controller) is used to eliminate single-axis following error and thus reduce the contour error. The simulation is developed on a Matlab model based on a retrofitted LOM machine and satisfactory results are obtained.

  12. Observed allocations of productivity and biomass, and turnover times in tropical forests are not accurately represented in CMIP5 Earth system models

    International Nuclear Information System (INIS)

    A significant fraction of anthropogenic CO2 emissions is assimilated by tropical forests and stored as biomass, slowing the accumulation of CO2 in the atmosphere. Because different plant tissues have different functional roles and turnover times, predictions of carbon balance of tropical forests depend on how earth system models (ESMs) represent the dynamic allocation of productivity to different tree compartments. This study shows that observed allocation of productivity, biomass, and turnover times of main tree compartments (leaves, wood, and roots) are not accurately represented in Coupled Model Intercomparison Project Phase 5 ESMs. In particular, observations indicate that biomass saturates with increasing productivity. In contrast, most models predict continuous increases in biomass with increases in productivity. This bias may lead to an over-prediction of carbon uptake in response to CO2 or climate-driven changes in productivity. Compartment-specific productivity and biomass are useful benchmarks to assess terrestrial ecosystem model performance. Improvements in the predicted allocation patterns and turnover times by ESMs will reduce uncertainties in climate predictions. (letter)

  13. Reduced Models in Chemical Kinetics via Nonlinear Data-Mining

    Directory of Open Access Journals (Sweden)

    Eliodoro Chiavazzo

    2014-01-01

    Full Text Available The adoption of detailed mechanisms for chemical kinetics often poses two types of severe challenges: First, the number of degrees of freedom is large; and second, the dynamics is characterized by widely disparate time scales. As a result, reactive flow solvers with detailed chemistry often become intractable even for large clusters of CPUs, especially when dealing with direct numerical simulation (DNS) of turbulent combustion problems. This has motivated the development of several techniques for reducing the complexity of such kinetics models, where, eventually, only a few variables are considered in the development of the simplified model. Unfortunately, no generally applicable a priori recipe for selecting suitable parameterizations of the reduced model is available, and the choice of slow variables often relies upon intuition and experience. We present an automated approach to this task, consisting of three main steps. First, the low dimensional manifold of slow motions is (approximately) sampled by brief simulations of the detailed model, starting from a rich enough ensemble of admissible initial conditions. Second, a global parametrization of the manifold is obtained through the Diffusion Map (DMAP) approach, which has recently emerged as a powerful tool in data analysis/machine learning. Finally, a simplified model is constructed and solved on the fly in terms of the above reduced (slow) variables. Clearly, closing this latter model requires nontrivial interpolation calculations, enabling restriction (mapping from the full ambient space to the reduced one) and lifting (mapping from the reduced space to the ambient one). This is a key step in our approach, and a variety of interpolation schemes are reported and compared. The scope of the proposed procedure is presented and discussed by means of an illustrative combustion example.
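
    As a rough illustration of the parametrization step, the sketch below computes a minimal diffusion-map embedding (Gaussian kernel, density normalization, leading non-trivial eigenvectors) of a toy data set standing in for sampled trajectories of a detailed kinetic model. It omits the restriction/lifting machinery described above, and the bandwidth eps and toy data are arbitrary choices.

```python
import numpy as np

def diffusion_map(X, eps, n_coords=2):
    """Minimal diffusion-map embedding of samples X (rows = points)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    K = np.exp(-d2 / eps)                                  # Gaussian kernel
    q = K.sum(axis=1)
    K = K / np.outer(q, q)                                 # remove sampling-density effects
    P = K / K.sum(axis=1, keepdims=True)                   # row-stochastic Markov matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    vals, vecs = vals.real[order], vecs.real[:, order]
    # skip the trivial constant eigenvector; scale coordinates by their eigenvalues
    return vecs[:, 1:n_coords + 1] * vals[1:n_coords + 1]

# Toy data: points on a noisy one-dimensional curve embedded in 3-D,
# standing in for trajectory samples of a high-dimensional kinetic model.
t = np.linspace(0, 3 * np.pi, 400)
X = np.c_[np.cos(t), np.sin(t), 0.1 * t]
X += 0.01 * np.random.default_rng(1).normal(size=X.shape)
coords = diffusion_map(X, eps=0.5)
print(coords.shape)          # (400, 2) slow coordinates parameterizing the manifold
```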

  14. An accurate locally active memristor model for S-type negative differential resistance in NbOx

    Science.gov (United States)

    Gibson, Gary A.; Musunuru, Srinitya; Zhang, Jiaming; Vandenberghe, Ken; Lee, James; Hsieh, Cheng-Chih; Jackson, Warren; Jeon, Yoocharn; Henze, Dick; Li, Zhiyong; Stanley Williams, R.

    2016-01-01

    A number of important commercial applications would benefit from the introduction of easily manufactured devices that exhibit current-controlled, or "S-type," negative differential resistance (NDR). A leading example is emerging non-volatile memory based on crossbar array architectures. Due to the inherently linear current vs. voltage characteristics of candidate non-volatile memristor memory elements, individual memory cells in these crossbar arrays can be addressed only if a highly non-linear circuit element, termed a "selector," is incorporated in the cell. Selectors based on a layer of niobium oxide sandwiched between two electrodes have been investigated by a number of groups because the NDR they exhibit provides a promisingly large non-linearity. We have developed a highly accurate compact dynamical model for their electrical conduction that shows that the NDR in these devices results from a thermal feedback mechanism. A series of electrothermal measurements and numerical simulations corroborate this model. These results reveal that the leakage currents can be minimized by thermally isolating the selector or by incorporating materials with larger activation energies for electron motion.
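
    A generic thermal-feedback selector model of the kind described can be sketched with a thermally activated conductance and lumped Newton cooling; sweeping the current and solving the steady heat balance yields an S-shaped V(I) characteristic. All parameter values below are illustrative, not fitted NbOx values, and the model is far simpler than the compact dynamical model of the paper.

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative (not fitted) parameters for a thermally activated selector
kB   = 8.617e-5      # eV/K
Ea   = 0.15          # activation energy for conduction, eV
G0   = 1.0e-3        # conductance prefactor, S
Rth  = 2.0e5         # lumped thermal resistance, K/W
Tamb = 300.0         # ambient temperature, K

def conductance(T):
    return G0 * np.exp(-Ea / (kB * T))

def steady_temperature(I):
    """Solve T = Tamb + Rth * I**2 / G(T): Joule heating balanced by Newton cooling."""
    f = lambda T: T - Tamb - Rth * I**2 / conductance(T)
    return brentq(f, Tamb, 3000.0)

# Quasi-static, current-controlled sweep: the V(I) curve folds back (S-type NDR)
for I in np.logspace(-5, -3, 9):
    T = steady_temperature(I)
    V = I / conductance(T)
    print(f"I = {I:8.2e} A   T = {T:6.1f} K   V = {V:6.2f} V")
```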

  15. In pursuit of an accurate spatial and temporal model of biomolecules at the atomistic level: a perspective on computer simulation

    Energy Technology Data Exchange (ETDEWEB)

    Gray, Alan [The University of Edinburgh, Edinburgh EH9 3JZ, Scotland (United Kingdom); Harlen, Oliver G. [University of Leeds, Leeds LS2 9JT (United Kingdom); Harris, Sarah A., E-mail: s.a.harris@leeds.ac.uk [University of Leeds, Leeds LS2 9JT (United Kingdom); University of Leeds, Leeds LS2 9JT (United Kingdom); Khalid, Syma; Leung, Yuk Ming [University of Southampton, Southampton SO17 1BJ (United Kingdom); Lonsdale, Richard [Max-Planck-Institut für Kohlenforschung, Kaiser-Wilhelm-Platz 1, 45470 Mülheim an der Ruhr (Germany); Philipps-Universität Marburg, Hans-Meerwein Strasse, 35032 Marburg (Germany); Mulholland, Adrian J. [University of Bristol, Bristol BS8 1TS (United Kingdom); Pearson, Arwen R. [University of Leeds, Leeds LS2 9JT (United Kingdom); University of Hamburg, Hamburg (Germany); Read, Daniel J.; Richardson, Robin A. [University of Leeds, Leeds LS2 9JT (United Kingdom); The University of Edinburgh, Edinburgh EH9 3JZ, Scotland (United Kingdom)

    2015-01-01

    The current computational techniques available for biomolecular simulation are described, and the successes and limitations of each with reference to the experimental biophysical methods that they complement are presented. Despite huge advances in the computational techniques available for simulating biomolecules at the quantum-mechanical, atomistic and coarse-grained levels, there is still a widespread perception amongst the experimental community that these calculations are highly specialist and are not generally applicable by researchers outside the theoretical community. In this article, the successes and limitations of biomolecular simulation and the further developments that are likely in the near future are discussed. A brief overview is also provided of the experimental biophysical methods that are commonly used to probe biomolecular structure and dynamics, and the accuracy of the information that can be obtained from each is compared with that from modelling. It is concluded that progress towards an accurate spatial and temporal model of biomacromolecules requires a combination of all of these biophysical techniques, both experimental and computational.

  16. An accurate numerical solution to the Saint-Venant-Hirano model for mixed-sediment morphodynamics in rivers

    Science.gov (United States)

    Stecca, Guglielmo; Siviglia, Annunziato; Blom, Astrid

    2016-07-01

    We present an accurate numerical approximation to the Saint-Venant-Hirano model for mixed-sediment morphodynamics in one space dimension. Our solution procedure originates from the fully-unsteady matrix-vector formulation developed in [54]. The principal part of the problem is solved by an explicit Finite Volume upwind method of the path-conservative type, by which all the variables are updated simultaneously in a coupled fashion. The solution to the principal part is embedded into a splitting procedure for the treatment of frictional source terms. The numerical scheme is extended to second-order accuracy and includes a bookkeeping procedure for handling the evolution of size stratification in the substrate. We develop a concept of balancedness for the vertical mass flux between the substrate and active layer under bed degradation, which prevents the occurrence of non-physical oscillations in the grainsize distribution of the substrate. We suitably modify the numerical scheme to respect this principle. We finally verify the accuracy in our solution to the equations, and its ability to reproduce one-dimensional morphodynamics due to streamwise and vertical sorting, using three test cases. In detail, (i) we empirically assess the balancedness of vertical mass fluxes under degradation; (ii) we study the convergence to the analytical linearised solution for the propagation of infinitesimal-amplitude waves [54], which is here employed for the first time to assess a mixed-sediment model; (iii) we reproduce Ribberink's E8-E9 flume experiment [46].

  17. An accurate locally active memristor model for S-type negative differential resistance in NbO{sub x}

    Energy Technology Data Exchange (ETDEWEB)

    Gibson, Gary A.; Musunuru, Srinitya; Zhang, Jiaming; Lee, James; Hsieh, Cheng-Chih; Jackson, Warren; Jeon, Yoocharn; Henze, Dick; Li, Zhiyong; Stanley Williams, R. [Hewlett-Packard Laboratories, 1501 Page Mill Road, Palo Alto, California 94304 (United States); Vandenberghe, Ken [PTD-PPS, Hewlett-Packard Company, 1070 NE Circle Boulevard, Corvallis, Oregon 97330 (United States)

    2016-01-11

    A number of important commercial applications would benefit from the introduction of easily manufactured devices that exhibit current-controlled, or “S-type,” negative differential resistance (NDR). A leading example is emerging non-volatile memory based on crossbar array architectures. Due to the inherently linear current vs. voltage characteristics of candidate non-volatile memristor memory elements, individual memory cells in these crossbar arrays can be addressed only if a highly non-linear circuit element, termed a “selector,” is incorporated in the cell. Selectors based on a layer of niobium oxide sandwiched between two electrodes have been investigated by a number of groups because the NDR they exhibit provides a promisingly large non-linearity. We have developed a highly accurate compact dynamical model for their electrical conduction that shows that the NDR in these devices results from a thermal feedback mechanism. A series of electrothermal measurements and numerical simulations corroborate this model. These results reveal that the leakage currents can be minimized by thermally isolating the selector or by incorporating materials with larger activation energies for electron motion.

  18. Toward an Accurate Modeling of Hydrodynamic Effects on the Translational and Rotational Dynamics of Biomolecules in Many-Body Systems.

    Science.gov (United States)

    Długosz, Maciej; Antosiewicz, Jan M

    2015-07-01

    Proper treatment of hydrodynamic interactions is of importance in evaluation of rigid-body mobility tensors of biomolecules in Stokes flow and in simulations of their folding and solution conformation, as well as in simulations of the translational and rotational dynamics of either flexible or rigid molecules in biological systems at low Reynolds numbers. With macromolecules conveniently modeled in calculations or in dynamic simulations as ensembles of spherical frictional elements, various approximations to hydrodynamic interactions, such as the two-body, far-field Rotne-Prager approach, are commonly used, either without concern or as a compromise between the accuracy and the numerical complexity. Strikingly, even though the analytical Rotne-Prager approach fails to describe (both in the qualitative and quantitative sense) mobilities in the simplest system consisting of two spheres, when the distance between their surfaces is of the order of their size, it is commonly applied to model hydrodynamic effects in macromolecular systems. Here, we closely investigate hydrodynamic effects in two- and three-body systems, consisting of bead-shell molecular models, using either the analytical Rotne-Prager approach or an accurate numerical scheme that correctly accounts for the many-body character of hydrodynamic interactions and their short-range behavior. We analyze mobilities, and translational and rotational velocities of bodies resulting from direct forces acting on them. We show that, with a sufficient number of frictional elements in hydrodynamic models of interacting bodies, the far-field approximation is able to provide a description of hydrodynamic effects that is in reasonable qualitative as well as quantitative agreement with the description resulting from the application of the virtually exact numerical scheme, even for small separations between bodies. PMID:26068580
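
    For reference, the far-field two-body approximation discussed above has a compact closed form. The sketch below assembles the Rotne-Prager(-Yamakawa) diffusion matrix for equal, non-overlapping beads; bead positions, radius and viscosity are arbitrary, and this is precisely the level of theory the abstract contrasts with near-field-corrected many-body schemes.

```python
import numpy as np

def rotne_prager_diffusion(pos, a, kT=1.0, eta=1.0):
    """3N x 3N Rotne-Prager(-Yamakawa) diffusion matrix for N equal, non-overlapping beads.

    Far-field, two-body approximation only (no lubrication or many-body corrections).
    """
    n = len(pos)
    D = np.zeros((3 * n, 3 * n))
    d_self = kT / (6.0 * np.pi * eta * a)
    for i in range(n):
        D[3*i:3*i+3, 3*i:3*i+3] = d_self * np.eye(3)
        for j in range(i + 1, n):
            rij = pos[j] - pos[i]
            r = np.linalg.norm(rij)
            rr = np.outer(rij, rij) / r**2
            pref = kT / (8.0 * np.pi * eta * r)
            block = pref * ((1.0 + 2.0 * a**2 / (3.0 * r**2)) * np.eye(3)
                            + (1.0 - 2.0 * a**2 / r**2) * rr)
            D[3*i:3*i+3, 3*j:3*j+3] = block
            D[3*j:3*j+3, 3*i:3*i+3] = block
    return D

# Three beads of radius a = 1 at illustrative positions
pos = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0], [0.0, 4.0, 0.0]])
D = rotne_prager_diffusion(pos, a=1.0)
forces = np.array([1.0, 0.0, 0.0] * 3)      # unit force on each bead along x
velocities = (D / 1.0) @ forces             # v = (D / kT) F, here with kT = 1
print(velocities.reshape(3, 3))
```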

  19. Mobile phone model with metamaterials to reduce the exposure

    Science.gov (United States)

    Pinto, Yenny; Begaud, Xavier

    2016-04-01

    This work presents a mobile terminal model in which an Inverted-F Antenna (IFA) is associated with three different kinds of metamaterials: artificial magnetic conductor (AMC), electromagnetic band gap (EBG) and resistive high-impedance surface (RHIS). The objective was to evaluate whether some metamaterials may be used to reduce exposure while preserving the antenna performance. The exposure has been evaluated using a simplified phantom model. Two configurations, antenna in front of the phantom and antenna hidden by the ground plane, have been evaluated. Results show that, using an optimized RHIS, the SAR 10 g is reduced and the antenna performance is preserved. With the RHIS solution, the SAR 10 g peak is reduced by 8 % when the antenna is located in front of the phantom and by 6 % when the antenna is hidden by the ground plane.

  20. Development of Boundary Condition Independent Reduced Order Thermal Models using Proper Orthogonal Decomposition

    Science.gov (United States)

    Raghupathy, Arun; Ghia, Karman; Ghia, Urmila

    2008-11-01

    Compact Thermal Models (CTMs) to represent IC packages have traditionally been developed using the DELPHI-based (DEvelopment of Libraries of PHysical models for an Integrated design) methodology. The drawbacks of this method are presented, and an alternative method is proposed. A reduced-order model that provides the complete thermal information accurately with fewer computational resources can be used effectively in system-level simulations. Proper Orthogonal Decomposition (POD), a statistical method, can be used to reduce the number of degrees of freedom, or variables, in the computation of such a problem. POD along with the Galerkin projection allows us to create reduced-order models that reproduce the characteristics of the system with a considerable reduction in computational resources while maintaining a high level of accuracy. The goal of this work is to show that this method can be applied to obtain a boundary condition independent reduced-order thermal model for complex components. The methodology is applied to the 1D transient heat equation.
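
    The POD/Galerkin workflow mentioned here can be illustrated on the 1D transient heat equation: collect snapshots from a full finite-difference solve, extract POD modes from the SVD of the snapshot matrix, and project the operator onto a handful of modes. The sketch below is a minimal illustration with arbitrary parameters, not the boundary-condition-independent methodology of the paper.

```python
import numpy as np

# Full-order model: 1-D transient heat equation u_t = alpha * u_xx, Dirichlet BCs
nx, nt, alpha = 200, 2000, 1.0
dx, dt = 1.0 / 201, 2.0e-6                              # dt below the explicit-Euler limit
A = alpha / dx**2 * (np.diag(-2.0 * np.ones(nx)) +
                     np.diag(np.ones(nx - 1), 1) + np.diag(np.ones(nx - 1), -1))
x = np.linspace(dx, 1 - dx, nx)
u = np.sin(np.pi * x) + 0.5 * np.sin(3 * np.pi * x)     # initial temperature field

snapshots = [u.copy()]
for _ in range(nt):
    u = u + dt * A @ u                                  # explicit Euler, full-order solve
    snapshots.append(u.copy())
S = np.array(snapshots).T                               # nx x (nt+1) snapshot matrix

# POD basis from the SVD of the snapshot matrix
U, s, _ = np.linalg.svd(S, full_matrices=False)
r = 4                                                   # retain a handful of modes
Phi = U[:, :r]

# Galerkin projection: reduced operator and reduced initial state
Ar = Phi.T @ A @ Phi
a = Phi.T @ S[:, 0]
for _ in range(nt):
    a = a + dt * Ar @ a                                 # reduced-order time stepping
u_rom = Phi @ a

print("relative error of ROM at final time:",
      np.linalg.norm(u_rom - S[:, -1]) / np.linalg.norm(S[:, -1]))
```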

  1. Reduced order modeling of fluid/structure interaction.

    Energy Technology Data Exchange (ETDEWEB)

    Barone, Matthew Franklin; Kalashnikova, Irina; Segalman, Daniel Joseph; Brake, Matthew Robert

    2009-11-01

    This report describes work performed from October 2007 through September 2009 under the Sandia Laboratory Directed Research and Development project titled 'Reduced Order Modeling of Fluid/Structure Interaction.' This project addresses fundamental aspects of techniques for construction of predictive Reduced Order Models (ROMs). A ROM is defined as a model, derived from a sequence of high-fidelity simulations, that preserves the essential physics and predictive capability of the original simulations but at a much lower computational cost. Techniques are developed for construction of provably stable linear Galerkin projection ROMs for compressible fluid flow, including a method for enforcing boundary conditions that preserves numerical stability. A convergence proof and error estimates are given for this class of ROM, and the method is demonstrated on a series of model problems. A reduced order method, based on the method of quadratic components, for solving the von Karman nonlinear plate equations is developed and tested. This method is applied to the problem of nonlinear limit cycle oscillations encountered when the plate interacts with an adjacent supersonic flow. A stability-preserving method for coupling the linear fluid ROM with the structural dynamics model for the elastic plate is constructed and tested. Methods for constructing efficient ROMs for nonlinear fluid equations are developed and tested on a one-dimensional convection-diffusion-reaction equation. These methods are combined with a symmetrization approach to construct a ROM technique for application to the compressible Navier-Stokes equations.

  2. Reduced Complexity Volterra Models for Nonlinear System Identification

    Directory of Open Access Journals (Sweden)

    Hacıoğlu Rıfat

    2001-01-01

    Full Text Available A broad class of nonlinear systems and filters can be modeled by the Volterra series representation. However, its practical use in nonlinear system identification is sometimes limited due to the large number of parameters associated with the Volterra filter's structure. The parametric complexity also complicates design procedures based upon such a model. This limitation for system identification is addressed in this paper using a Fixed Pole Expansion Technique (FPET) within the Volterra model structure. The FPET approach employs orthonormal basis functions derived from fixed (real or complex) pole locations to expand the Volterra kernels and reduce the number of estimated parameters. The ability of the FPET to considerably reduce the number of estimated parameters is demonstrated by a digital satellite channel example in which we use the proposed method to identify the channel dynamics. Furthermore, a gradient-descent procedure that adaptively selects the pole locations in the FPET structure is developed in the paper.
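
    The fixed-pole idea can be sketched with discrete Laguerre filters: a bank of orthonormal filters sharing one fixed pole provides the basis in which first- and second-order Volterra kernels are expanded, and the expansion coefficients are found by ordinary least squares. The system, pole value and basis sizes below are hypothetical, and the adaptive pole selection described in the abstract is not included.

```python
import numpy as np
from scipy.signal import lfilter

def laguerre_outputs(u, pole, n_basis):
    """Outputs of a discrete Laguerre filter bank (fixed real pole) driven by input u."""
    outs = []
    x = lfilter([np.sqrt(1.0 - pole**2)], [1.0, -pole], u)   # first Laguerre filter
    outs.append(x)
    for _ in range(n_basis - 1):
        x = lfilter([-pole, 1.0], [1.0, -pole], x)           # all-pass cascade
        outs.append(x)
    return np.array(outs)                                     # n_basis x N

rng = np.random.default_rng(0)
N = 2000
u = rng.normal(size=N)
# Hypothetical nonlinear system to identify: mild second-order (Wiener-type) nonlinearity
lin = lfilter([0.5, 0.25], [1.0, -0.6], u)
y = lin + 0.3 * lin**2 + 0.01 * rng.normal(size=N)

# Fixed-pole expansion of the first- and second-order Volterra kernels
L = laguerre_outputs(u, pole=0.6, n_basis=4)
quad = np.array([L[i] * L[j] for i in range(len(L)) for j in range(i, len(L))])
X = np.vstack([np.ones(N), L, quad]).T                        # only 1 + 4 + 10 parameters
theta, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ theta
print("fit NMSE:", np.mean((y - y_hat)**2) / np.var(y))
```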

  3. Accelerating transient simulation of linear reduced order models.

    Energy Technology Data Exchange (ETDEWEB)

    Thornquist, Heidi K.; Mei, Ting; Keiter, Eric Richard; Bond, Brad

    2011-10-01

    Model order reduction (MOR) techniques have been used to facilitate the analysis of dynamical systems for many years. Although existing model reduction techniques are capable of providing huge speedups in the frequency domain analysis (i.e. AC response) of linear systems, such speedups are often not obtained when performing transient analysis on the systems, particularly when coupled with other circuit components. Reduced system size, which is the ostensible goal of MOR methods, is often insufficient to improve transient simulation speed on realistic circuit problems. It can be shown that making the correct reduced order model (ROM) implementation choices is crucial to the practical application of MOR methods. In this report we investigate methods for accelerating the simulation of circuits containing ROM blocks using the circuit simulator Xyce.

  4. Reduced order modeling and analysis of combustion instabilities

    Science.gov (United States)

    Tamanampudi, Gowtham Manikanta Reddy

    The coupling between unsteady heat release and pressure fluctuations in a combustor leads to the complex phenomenon of combustion instability. Combustion instability can lead to enormous pressure fluctuations and high rates of combustor heat transfer, which play a very important role in determining the life and performance of the engine. Although high fidelity simulations are starting to yield detailed understanding of the underlying physics of combustion instability, the enormous computing power required restricts their application to a few runs and fairly simple geometries. To overcome this, low order models are being employed for prediction and analysis. Since low order models cannot account for the coupling between heat release and pressure fluctuations on their own, lower-order combustion response models are required. One such attempt is made through the work presented here using the commercial software COMSOL. The linearized Euler equations with combustion response models were solved in the frequency domain, implementing the Arnoldi algorithm in the 3D finite element solver COMSOL. This work is part of a larger effort to investigate a low order, computationally inexpensive and accurate solver which accounts for mean flow effects, complex boundary conditions and combustion response. This tool was tested against a number of cases presenting longitudinal instabilities. Further, combustion instabilities in a transverse instability chamber were studied and compared with experiments. Both sets of results are in good agreement with experiment. In addition, the effect of nozzle length on the mode shapes in the transverse instability chamber was studied and presented.

  5. The i-V curve characteristics of burner-stabilized premixed flames: detailed and reduced models

    KAUST Repository

    Han, Jie

    2016-07-17

    The i-V curve describes the current drawn from a flame as a function of the voltage difference applied across the reaction zone. Since combustion diagnostics and flame control strategies based on electric fields depend on the amount of current drawn from flames, there is significant interest in modeling and understanding i-V curves. We implement and apply a detailed model for the simulation of the production and transport of ions and electrons in one-dimensional premixed flames. An analytical reduced model is developed based on the detailed one, and analytical expressions are used to gain insight into the characteristics of the i-V curve for various flame configurations. In order for the reduced model to capture the spatial distribution of the electric field accurately, the concept of a dead zone region, where voltage is constant, is introduced, and a suitable closure for the spatial extent of the dead zone is proposed and validated. The results from the reduced modeling framework are found to be in good agreement with those from the detailed simulations. The saturation voltage is found to depend significantly on the flame location relative to the electrodes, and on the sign of the voltage difference applied. Furthermore, at sub-saturation conditions, the current is shown to increase linearly or quadratically with the applied voltage, depending on the flame location. These limiting behaviors exhibited by the reduced model elucidate the features of i-V curves observed experimentally. The reduced model relies on the existence of a thin layer where charges are produced, corresponding to the reaction zone of a flame. Consequently, the analytical model we propose is not limited to the study of premixed flames, and may be applied easily to other configurations, e.g. nonpremixed counterflow flames.

  6. Modelling the Constraints of Spatial Environment in Fauna Movement Simulations: Comparison of a Boundaries Accurate Function and a Cost Function

    Science.gov (United States)

    Jolivet, L.; Cohen, M.; Ruas, A.

    2015-08-01

    Landscape influences fauna movement at different levels, from habitat selection to choices of movements' direction. Our goal is to provide a development frame in order to test simulation functions for animal's movement. We describe our approach for such simulations and we compare two types of functions to calculate trajectories. To do so, we first modelled the role of landscape elements to differentiate between elements that facilitate movements and the ones being hindrances. Different influences are identified depending on landscape elements and on animal species. Knowledge was gathered from ecologists, literature and observation datasets. Second, we analysed the description of animal movement recorded with GPS at fine scale, corresponding to high temporal frequency and good location accuracy. Analysing this type of data provides information on the relation between landscape features and movements. We implemented an agent-based simulation approach to calculate potential trajectories constrained by the spatial environment and individual's behaviour. We tested two functions that consider space differently: one function takes into account the geometry and the types of landscape elements and one cost function sums up the spatial surroundings of an individual. Results highlight the fact that the cost function exaggerates the distances travelled by an individual and simplifies movement patterns. The geometry accurate function represents a good bottom-up approach for discovering interesting areas or obstacles for movements.

  7. MODELLING THE CONSTRAINTS OF SPATIAL ENVIRONMENT IN FAUNA MOVEMENT SIMULATIONS: COMPARISON OF A BOUNDARIES ACCURATE FUNCTION AND A COST FUNCTION

    Directory of Open Access Journals (Sweden)

    L. Jolivet

    2015-08-01

    Full Text Available Landscape influences fauna movement at different levels, from habitat selection to choices of movements’ direction. Our goal is to provide a development frame in order to test simulation functions for animal’s movement. We describe our approach for such simulations and we compare two types of functions to calculate trajectories. To do so, we first modelled the role of landscape elements to differentiate between elements that facilitate movements and the ones being hindrances. Different influences are identified depending on landscape elements and on animal species. Knowledge was gathered from ecologists, literature and observation datasets. Second, we analysed the description of animal movement recorded with GPS at fine scale, corresponding to high temporal frequency and good location accuracy. Analysing this type of data provides information on the relation between landscape features and movements. We implemented an agent-based simulation approach to calculate potential trajectories constrained by the spatial environment and individual’s behaviour. We tested two functions that consider space differently: one function takes into account the geometry and the types of landscape elements and one cost function sums up the spatial surroundings of an individual. Results highlight the fact that the cost function exaggerates the distances travelled by an individual and simplifies movement patterns. The geometry accurate function represents a good bottom-up approach for discovering interesting areas or obstacles for movements.

  8. Anatomically accurate high resolution modeling of human whole heart electromechanics: A strongly scalable algebraic multigrid solver method for nonlinear deformation

    Science.gov (United States)

    Augustin, Christoph M.; Neic, Aurel; Liebmann, Manfred; Prassl, Anton J.; Niederer, Steven A.; Haase, Gundolf; Plank, Gernot

    2016-01-01

    Electromechanical (EM) models of the heart have been used successfully to study fundamental mechanisms underlying a heart beat in health and disease. However, in all modeling studies reported so far numerous simplifications were made in terms of representing biophysical details of cellular function and its heterogeneity, gross anatomy and tissue microstructure, as well as the bidirectional coupling between electrophysiology (EP) and tissue distension. One limiting factor is the employed spatial discretization methods which are not sufficiently flexible to accommodate complex geometries or resolve heterogeneities, but, even more importantly, the limited efficiency of the prevailing solver techniques, which are not sufficiently scalable to deal with the resulting increase in degrees of freedom (DOF) when modeling cardiac electromechanics at high spatio-temporal resolution. This study reports on the development of a novel methodology for solving the nonlinear equation of finite elasticity using human whole organ models of cardiac electromechanics, discretized at a high para-cellular resolution. Three patient-specific, anatomically accurate, whole heart EM models were reconstructed from magnetic resonance (MR) scans at resolutions of 220 μm, 440 μm and 880 μm, yielding meshes of approximately 184.6, 24.4 and 3.7 million tetrahedral elements and 95.9, 13.2 and 2.1 million displacement DOF, respectively. The same mesh was used for discretizing the governing equations of both electrophysiology (EP) and nonlinear elasticity. A novel algebraic multigrid (AMG) preconditioner for an iterative Krylov solver was developed to deal with the resulting computational load. The AMG preconditioner was designed under the primary objective of achieving favorable strong scaling characteristics for both setup and solution runtimes, as this is key for exploiting current high performance computing hardware. Benchmark results using the 220 μm, 440 μm and 880 μm meshes demonstrate
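
    As a small illustration of pairing an algebraic multigrid hierarchy with a Krylov solver, the sketch below preconditions conjugate gradients with smoothed-aggregation AMG on a stand-in sparse Poisson system, using the pyamg and SciPy libraries; this is only a toy stand-in, not the custom, strong-scaling AMG preconditioner developed in the study.

```python
import numpy as np
import pyamg
from scipy.sparse.linalg import cg

# Stand-in sparse SPD system (2-D Poisson); the study's elasticity systems are far
# larger and come from finite-element discretizations of whole-heart meshes.
A = pyamg.gallery.poisson((300, 300), format='csr')
b = np.random.default_rng(0).normal(size=A.shape[0])

# Algebraic multigrid hierarchy used as a preconditioner for a Krylov (CG) solve
ml = pyamg.smoothed_aggregation_solver(A)
M = ml.aspreconditioner(cycle='V')

iters = []
x, info = cg(A, b, M=M, callback=lambda xk: iters.append(1))
print("converged:", info == 0, " CG iterations with AMG preconditioning:", len(iters))
```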

  9. Identification of the reduced order models of a BWR reactor

    International Nuclear Information System (INIS)

    The objective of the present work is to analyze the relative stability of a BWR-type reactor. The suitability of identifying the parameters of a reduced-order model so that it reproduces a given instability condition is examined. The case considered is a real event that occurred at the LaSalle plant under certain power and coolant-flow operating conditions. The parametric identification is carried out by means of a recursive least-squares algorithm and an Output Error model, measuring the output power of the reactor while the instability is present, and assuming that the instability is produced by a step-like change in the reactivity of the system. An analytical comparison of the relative stability is also carried out for two responses: the original instability response of the reactor versus the response obtained by identifying the parameters of the reduced-order model. The conclusion is that it is quite viable to fit a reduced-order model to study the stability of a reactor, under the sole condition that the reactivity dynamics is assumed to be of step type. (Author)
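
    The identification step can be illustrated with a textbook recursive least-squares loop. The sketch below fits an ARX-structured second-order model to simulated data; the dynamics, noise level and excitation are hypothetical, and the actual study used an Output Error model structure driven by the measured reactor power.

```python
import numpy as np

def rls_identify(u, y, na=2, nb=2, lam=0.99):
    """Recursive least squares for an ARX-structured model
    y(k) = -a1*y(k-1) - a2*y(k-2) + b1*u(k-1) + b2*u(k-2)
    (a simpler regression structure than the Output Error model used in the study)."""
    n = na + nb
    theta = np.zeros(n)
    P = 1e4 * np.eye(n)
    for k in range(max(na, nb), len(y)):
        phi = np.concatenate([-y[k - na:k][::-1], u[k - nb:k][::-1]])
        K = P @ phi / (lam + phi @ P @ phi)
        theta = theta + K * (y[k] - phi @ theta)
        P = (P - np.outer(K, phi @ P)) / lam
    return theta

# Hypothetical stable second-order dynamics (not plant data); a persistently
# exciting input is used here so that all four parameters are identifiable.
rng = np.random.default_rng(0)
N = 1000
u = rng.normal(size=N)
y = np.zeros(N)
for k in range(2, N):
    y[k] = 1.5 * y[k-1] - 0.7 * y[k-2] + 0.05 * u[k-1] + 0.03 * u[k-2] + 1e-3 * rng.normal()

# True parameters in this convention: a1 = -1.5, a2 = 0.7, b1 = 0.05, b2 = 0.03
print("estimated [a1, a2, b1, b2]:", np.round(rls_identify(u, y), 3))
```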

  10. Accurate Monte Carlo modeling of cyclotrons for optimization of shielding and activation calculations in the biomedical field

    Science.gov (United States)

    Infantino, Angelo; Marengo, Mario; Baschetti, Serafina; Cicoria, Gianfranco; Longo Vaschetto, Vittorio; Lucconi, Giulia; Massucci, Piera; Vichi, Sara; Zagni, Federico; Mostacci, Domiziano

    2015-11-01

    Biomedical cyclotrons for production of Positron Emission Tomography (PET) radionuclides and radiotherapy with hadrons or ions are widespread and well established in hospitals as well as in industrial facilities and research sites. Guidelines for site planning and installation, as well as for radiation protection assessment, are given in a number of international documents; however, these well-established guides typically offer analytic methods of calculation of both shielding and materials activation, in approximate or idealized geometry set up. The availability of Monte Carlo codes with accurate and up-to-date libraries for transport and interactions of neutrons and charged particles at energies below 250 MeV, together with the continuously increasing power of today's computers, makes systematic use of simulations with realistic geometries possible, yielding equipment- and site-specific evaluation of the source terms, shielding requirements and all quantities relevant to radiation protection. In this work, the well-known Monte Carlo code FLUKA was used to simulate two representative models of cyclotron for PET radionuclides production, including their targetry; and one type of proton therapy cyclotron including the energy selection system. Simulations yield estimates of various quantities of radiological interest, including the effective dose distribution around the equipment, the effective number of neutrons produced per incident proton and the activation of target materials, the structure of the cyclotron, the energy degrader, the vault walls and the soil. The model was validated against experimental measurements and comparison with well-established reference data. Neutron ambient dose equivalent H*(10) was measured around a GE PETtrace cyclotron: an average ratio between experimental measurement and simulations of 0.99±0.07 was found. Saturation yield of 18F, produced by the well-known 18O(p,n)18F reaction, was calculated and compared with the IAEA recommended

  11. Enhanced Constraints for Accurate Lower Bounds on Many-Electron Quantum Energies from Variational Two-Electron Reduced Density Matrix Theory

    Science.gov (United States)

    Mazziotti, David A.

    2016-10-01

    A central challenge of physics is the computation of strongly correlated quantum systems. The past ten years have witnessed the development and application of the variational calculation of the two-electron reduced density matrix (2-RDM) without the wave function. In this Letter we present an orders-of-magnitude improvement in the accuracy of 2-RDM calculations without an increase in their computational cost. The advance is based on a low-rank, dual formulation of an important constraint on the 2-RDM, the T2 condition. Calculations are presented for metallic chains and a cadmium-selenide dimer. The low-scaling T2 condition will have significant applications in atomic and molecular, condensed-matter, and nuclear physics.

  12. Reducing component estimation for varying coefficient models with longitudinal data

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Varying-coefficient models with longitudinal observations are very useful in epidemiology and some other practical fields. In this paper, a reducing component procedure is proposed for estimating the unknown functions and their derivatives in very general models, in which the unknown coefficient functions admit different or the same degrees of smoothness and the covariates can be time-dependent. The asymptotic properties of the estimators, such as consistency, rate of convergence and asymptotic distribution, are derived. The asymptotic results show that the asymptotic variance of the reducing component estimators is smaller than that of the existing estimators when the coefficient functions admit different degrees of smoothness. Finite sample properties of our procedures are studied through Monte Carlo simulations.

  13. Model protocells photochemically reduce carbonate to organic carbon

    Energy Technology Data Exchange (ETDEWEB)

    Folsome, C.; Brittain, A.

    1981-06-11

    Synthetic cell-sized organic microstructures effect the long-wavelength uv photosynthesis of organic products from carbonate. Formaldehyde is the most abundant photoproduct and water is the major proton donor for this reduced form of carbon. We show here that these results for model phase-bounded systems are consistent with the postulate that metabolism of progenitors to the earliest living cells could have been, at least in part, photosynthetic.

  14. Predictive modeling and reducing cyclic variability in autoignition engines

    Energy Technology Data Exchange (ETDEWEB)

    Hellstrom, Erik; Stefanopoulou, Anna; Jiang, Li; Larimore, Jacob

    2016-08-30

    Methods and systems are provided for controlling a vehicle engine to reduce cycle-to-cycle combustion variation. A predictive model is applied to predict cycle-to-cycle combustion behavior of an engine based on observed engine performance variables. Conditions are identified, based on the predicted cycle-to-cycle combustion behavior, that indicate high cycle-to-cycle combustion variation. Corrective measures are then applied to prevent the predicted high cycle-to-cycle combustion variation.

  15. Anchanling reduces pathology in a lactacystin-induced Parkinson's disease model

    Institute of Scientific and Technical Information of China (English)

    Yinghong Li; Zhengzhi Wu; Xiaowei Gao; Qingwei Zhu; Yu Jin; Anmin Wu; Andrew C. J. Huang

    2012-01-01

    A rat model of Parkinson's disease was induced by injecting lactacystin stereotaxically into the left mesencephalic ventral tegmental area and substantia nigra pars compacta. After rats were intragastrically perfused with Anchanling, a Chinese medicine, mainly composed of magnolol, for 5 weeks, when compared with Parkinson's disease model rats, tyrosine hydroxylase expression was increased, α-synuclein and ubiquitin expression was decreased, substantia nigra cell apoptosis was reduced, and apomorphine-induced rotational behavior was improved. Results suggested that Anchanling can ameliorate Parkinson's disease pathology possibly by enhancing degradation activity of the ubiquitin-proteasome system.

  16. Multi-model analysis of terrestrial carbon cycles in Japan: reducing uncertainties in model outputs among different terrestrial biosphere models using flux observations

    Directory of Open Access Journals (Sweden)

    K. Ichii

    2009-08-01

    Full Text Available Terrestrial biosphere models show large uncertainties when simulating carbon and water cycles, and reducing these uncertainties is a priority for developing more accurate estimates of both terrestrial ecosystem statuses and future climate changes. To reduce uncertainties and improve the understanding of these carbon budgets, we investigated the ability of flux datasets to improve model simulations and reduce variabilities among multi-model outputs of terrestrial biosphere models in Japan. Using 9 terrestrial biosphere models (Support Vector Machine-based regressions, TOPS, CASA, VISIT, Biome-BGC, DAYCENT, SEIB, LPJ, and TRIFFID), we conducted two simulations: (1) point simulations at four flux sites in Japan and (2) spatial simulations for Japan with a default model (based on original settings) and an improved model (based on calibration using flux observations). Generally, models using default model settings showed large deviations in model outputs from observation with large model-by-model variability. However, after we calibrated the model parameters using flux observations (GPP, RE and NEP), most models successfully simulated seasonal variations in the carbon cycle, with less variability among models. We also found that interannual variations in the carbon cycle are mostly consistent among models and observations. Spatial analysis also showed a large reduction in the variability among model outputs, and model calibration using flux observations significantly improved the model outputs. These results show that to reduce uncertainties among terrestrial biosphere models, we need to conduct careful validation and calibration with available flux observations. Flux observation data significantly improved terrestrial biosphere models, not only on a point scale but also on spatial scales.

  17. Low-dose biplanar radiography can be used in children and adolescents to accurately assess femoral and tibial torsion and greatly reduce irradiation

    Energy Technology Data Exchange (ETDEWEB)

    Meyrignac, Olivier; Baunin, Christiane; Vial, Julie; Sans, Nicolas [CHU Toulouse Purpan, Department of Radiology, Toulouse Cedex 9 (France); Moreno, Ramiro [ALARA Expertise, Oberhausbergen (France); Accadbled, Franck; Gauzy, Jerome Sales de [Hopital des Enfants, Department of Orthopedics, Toulouse Cedex 9 (France); Sommet, Agnes [Universite Paul Sabatier, Department of Fundamental Pharmaco-Clinical Pharmacology, Toulouse (France)

    2015-06-01

    To evaluate in children the agreement between femoral and tibial torsion measurements obtained with low-dose biplanar radiography (LDBR) and CT, and to study dose reduction ratio between these two techniques both in vitro and in vivo. Thirty children with lower limb torsion abnormalities were included in a prospective study. Biplanar radiographs and CTs were performed for measurements of lower limb torsion on each patient. Values were compared using Bland-Altman plots. Interreader and intrareader agreements were evaluated by intraclass correlation coefficients. Comparative dosimetric study was performed using an ionization chamber in a tissue-equivalent phantom, and with thermoluminescent dosimeters in 5 patients. Average differences between CT and LDBR measurements were -0.1 ±1.1 for femoral torsion and -0.7 ±1.4 for tibial torsion. Interreader agreement for LDBR measurements was very good for both femoral torsion (FT) (0.81) and tibial torsion (TT) (0.87). Intrareader agreement was excellent for FT (0.97) and TT (0.89). The ratio between CT scan dose and LDBR dose was 22 in vitro (absorbed dose) and 32 in vivo (skin dose). Lower limb torsion measurements obtained with LDBR are comparable to CT measurements in children and adolescents, with a considerably reduced radiation dose. (orig.)
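
    The agreement analysis reported here rests on the standard Bland-Altman construction: the bias is the mean of the paired differences and the 95% limits of agreement are the bias plus or minus 1.96 standard deviations of the differences. A minimal sketch with made-up paired torsion values (not the study's data):

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two measurement methods."""
    diff = np.asarray(a) - np.asarray(b)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired femoral torsion measurements (degrees), CT vs LDBR
ct   = np.array([24.1, 31.5, 18.2, 27.9, 35.4, 22.0, 29.3, 15.8])
ldbr = np.array([24.5, 30.9, 18.0, 28.6, 35.1, 22.7, 28.8, 16.4])

bias, (lo, hi) = bland_altman(ct, ldbr)
print(f"bias = {bias:+.2f} deg, 95% limits of agreement = [{lo:+.2f}, {hi:+.2f}] deg")
```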

  18. Accurate Spectral Fits of Jupiter's Great Red Spot: VIMS Visual Spectra Modelled with Chromophores Created by Photolyzed Ammonia Reacting with Acetylene

    Science.gov (United States)

    Baines, Kevin; Sromovsky, Lawrence A.; Fry, Patrick M.; Carlson, Robert W.; Momary, Thomas W.

    2016-10-01

    We report results incorporating the red-tinted photochemically-generated aerosols of Carlson et al (2016, Icarus 274, 106-115) in spectral models of Jupiter's Great Red Spot (GRS). Spectral models of the 0.35-1.0-micron spectrum show good agreement with Cassini/VIMS near-center-meridian and near-limb GRS spectra for model morphologies incorporating an optically-thin layer of Carlson (2016) aerosols at high altitudes, either at the top of the tropospheric GRS cloud, or in a distinct stratospheric haze layer. Specifically, a two-layer "crème brûlée" structure of the Mie-scattering Carlson et al (2016) chromophore attached to the top of a conservatively scattering (hereafter, "white") optically-thick cloud fits the spectra well. Currently, best agreement (reduced χ2 of 0.89 for the central-meridian spectrum) is found for a 0.195-0.217-bar, 0.19 ± 0.02 opacity layer of chromophores with mean particle radius of 0.14 ± 0.01 micron. As well, a structure with a detached stratospheric chromophore layer ~0.25 bar above a white tropospheric GRS cloud provides a good spectral match (reduced χ2 of 1.16). Alternatively, a cloud morphology with the chromophore coating white particles in a single optically- and physically-thick cloud (the "coated-shell model", initially explored by Carlson et al 2016) was found to give significantly inferior fits (best reduced χ2 of 2.9). Overall, we find that models accurately fit the GRS spectrum if (1) most of the optical depth of the chromophore is in a layer near the top of the main cloud or in a distinct separated layer above it, but is not uniformly distributed within the main cloud, (2) the chromophore consists of relatively small, 0.1-0.2-micron-radius particles, and (3) the chromophore layer optical depth is small, ~ 0.1-0.2. Thus, our analysis supports the exogenic origin of the red chromophore consistent with the Carlson et al (2016) photolytic production mechanism rather than an endogenic origin, such as upwelling of material

  19. The capabilities and limitations of conductance-based compartmental neuron models with reduced branched or unbranched morphologies and active dendrites.

    Science.gov (United States)

    Hendrickson, Eric B; Edgerton, Jeremy R; Jaeger, Dieter

    2011-04-01

    Conductance-based neuron models are frequently employed to study the dynamics of biological neural networks. For speed and ease of use, these models are often reduced in morphological complexity. Simplified dendritic branching structures may process inputs differently than full branching structures, however, and could thereby fail to reproduce important aspects of biological neural processing. It is not yet well understood which processing capabilities require detailed branching structures. Therefore, we analyzed the processing capabilities of full or partially branched reduced models. These models were created by collapsing the dendritic tree of a full morphological model of a globus pallidus (GP) neuron while preserving its total surface area and electrotonic length, as well as its passive and active parameters. Dendritic trees were either collapsed into single cables (unbranched models) or the full complement of branch points was preserved (branched models). Both reduction strategies allowed us to compare dynamics between all models using the same channel density settings. Full model responses to somatic inputs were generally preserved by both types of reduced model while dendritic input responses could be more closely preserved by branched than unbranched reduced models. However, features strongly influenced by local dendritic input resistance, such as active dendritic sodium spike generation and propagation, could not be accurately reproduced by any reduced model. Based on our analyses, we suggest that there are intrinsic differences in processing capabilities between unbranched and branched models. We also indicate suitable applications for different levels of reduction, including fast searches of full model parameter space. PMID:20623167

  20. Reduced Complexity Modeling (RCM): toward more use of less

    Science.gov (United States)

    Paola, Chris; Voller, Vaughan

    2014-05-01

    Although not exact, there is a general correspondence between reductionism and detailed, high-fidelity models, while 'synthesism' is often associated with reduced-complexity modeling. There is no question that high-fidelity reduction-based computational models are extremely useful in simulating the behaviour of complex natural systems. In skilled hands they are also a source of insight and understanding. We focus here on the case for the other side (reduced-complexity models), not because we think they are 'better' but because their value is more subtle, and their natural constituency less clear. What kinds of problems and systems lend themselves to the reduced-complexity approach? RCM is predicated on the idea that the mechanism of the system or phenomenon in question is, for whatever reason, insensitive to the full details of the underlying physics. There are multiple ways in which this can happen. B.T. Werner argued for the importance of process hierarchies in which processes at larger scales depend on only a small subset of everything going on at smaller scales. Clear scale breaks would seem like a way to test systems for this property but to our knowledge this has not been used in this way. We argue that scale-independent physics, as for example exhibited by natural fractals, is another. We also note that the same basic criterion - independence of the process in question from details of the underlying physics - underpins 'unreasonably effective' laboratory experiments. There is thus a link between suitability for experimentation at reduced scale and suitability for RCM. Examples from RCM approaches to erosional landscapes, braided rivers, and deltas illustrate these ideas, and suggest that they are insufficient. There is something of a 'wild west' nature to RCM that puts some researchers off by suggesting a departure from traditional methods that have served science well for centuries. We offer two thoughts: first, that in the end the measure of a model is its

  1. Dose Addition Models Based on Biologically Relevant Reductions in Fetal Testosterone Accurately Predict Postnatal Reproductive Tract Alterations by a Phthalate Mixture in Rats.

    Science.gov (United States)

    Howdeshell, Kembra L; Rider, Cynthia V; Wilson, Vickie S; Furr, Johnathan R; Lambright, Christy R; Gray, L Earl

    2015-12-01

    Challenges in cumulative risk assessment of anti-androgenic phthalate mixtures include a lack of data on all the individual phthalates and difficulty determining the biological relevance of reduction in fetal testosterone (T) on postnatal development. The objectives of the current study were 2-fold: (1) to test whether a mixture model of dose addition based on the fetal T production data of individual phthalates would predict the effects of a 5 phthalate mixture on androgen-sensitive postnatal male reproductive tract development, and (2) to determine the biological relevance of the reductions in fetal T to induce abnormal postnatal reproductive tract development using data from the mixture study. We administered a dose range of the mixture (60, 40, 20, 10, and 5% of the top dose used in the previous fetal T production study consisting of 300 mg/kg per chemical of benzyl butyl (BBP), di(n)butyl (DBP), diethyl hexyl phthalate (DEHP), di-isobutyl phthalate (DiBP), and 100 mg dipentyl (DPP) phthalate/kg; the individual phthalates were present in equipotent doses based on their ability to reduce fetal T production) via gavage to Sprague Dawley rat dams on GD8-postnatal day 3. We compared observed mixture responses to predictions of dose addition based on the previously published potencies of the individual phthalates to reduce fetal T production relative to a reference chemical and published postnatal data for the reference chemical (called DAref). In addition, we predicted DA (called DAall) and response addition (RA) based on logistic regression analysis of all 5 individual phthalates when complete data were available. DAref and DAall accurately predicted the observed mixture effect for 11 of 14 endpoints. Furthermore, reproductive tract malformations were seen in 17-100% of F1 males when fetal T production was reduced by about 25-72%, respectively. PMID:26350170
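
    The basic dose-addition bookkeeping can be sketched as follows: each component dose is converted into reference-chemical equivalents through its relative potency, the equivalents are summed, and the reference dose-response curve is evaluated at the summed dose. All potencies and curve parameters below are hypothetical placeholders, not the published values or the study's fitted logistic models.

```python
import numpy as np

def hill_response(dose, ed50, slope, top=100.0):
    """Hypothetical Hill-type dose-response, e.g. percent reduction in fetal testosterone."""
    return top * dose**slope / (ed50**slope + dose**slope)

# Illustrative relative potencies (reference chemical = 1.0); not the published values
relative_potency = {"DPP": 1.0, "DEHP": 0.30, "DBP": 0.28, "BBP": 0.25, "DiBP": 0.22}
ref_ed50, ref_slope = 250.0, 2.0   # hypothetical reference-chemical parameters (mg/kg/d)

def dose_addition_prediction(doses_mg_per_kg):
    """Dose addition: convert each dose to reference-equivalents, sum, evaluate the reference curve."""
    equivalent = sum(relative_potency[c] * d for c, d in doses_mg_per_kg.items())
    return hill_response(equivalent, ref_ed50, ref_slope)

# A mixture dilution, e.g. 20% of a hypothetical top dose per phthalate
mixture = {"BBP": 60.0, "DBP": 60.0, "DEHP": 60.0, "DiBP": 60.0, "DPP": 20.0}
print(f"predicted effect: {dose_addition_prediction(mixture):.1f}% reduction")
```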

  2. Formulation of a 1D finite element of heat exchanger for accurate modelling of the grouting behaviour: Application to cyclic thermal loading

    OpenAIRE

    Cerfontaine, Benjamin; Radioti, Georgia; Collin, Frédéric; Charlier, Robert

    2016-01-01

    This paper presents a comprehensive formulation of a finite element for the modelling of borehole heat exchangers. This work focuses on the accurate modelling of the grouting and the temperature field near a single borehole. Therefore the grouting of the BHE is explicitly modelled. The purpose of this work is to provide tools necessary to the further modelling of thermo-mechanical couplings. The finite element discretises the classical governing equation of advection-diffusion of heat w...

  3. Construction of energy-stable Galerkin reduced order models.

    Energy Technology Data Exchange (ETDEWEB)

    Kalashnikova, Irina; Barone, Matthew Franklin; Arunajatesan, Srinivasan; van Bloemen Waanders, Bart Gustaaf

    2013-05-01

    This report aims to unify several approaches for building stable projection-based reduced order models (ROMs). Attention is focused on linear time-invariant (LTI) systems. The model reduction procedure consists of two steps: the computation of a reduced basis, and the projection of the governing partial differential equations (PDEs) onto this reduced basis. Two kinds of reduced bases are considered: the proper orthogonal decomposition (POD) basis and the balanced truncation basis. The projection step of the model reduction can be done in two ways: via continuous projection or via discrete projection. First, an approach for building energy-stable Galerkin ROMs for linear hyperbolic or incompletely parabolic systems of PDEs using continuous projection is proposed. The idea is to apply to the set of PDEs a transformation induced by the Lyapunov function for the system, and to build the ROM in the transformed variables. The resulting ROM will be energy-stable for any choice of reduced basis. It is shown that, for many PDE systems, the desired transformation is induced by a special weighted L2 inner product, termed the "symmetry inner product". Attention is then turned to building energy-stable ROMs via discrete projection. A discrete counterpart of the continuous symmetry inner product, a weighted L2 inner product termed the "Lyapunov inner product", is derived. The weighting matrix that defines the Lyapunov inner product can be computed in a black-box fashion for a stable LTI system arising from the discretization of a system of PDEs in space. It is shown that a ROM constructed via discrete projection using the Lyapunov inner product will be energy-stable for any choice of reduced basis. Connections between the Lyapunov inner product and the inner product induced by the balanced truncation algorithm are made. Comparisons are also made between the symmetry inner product and the Lyapunov inner product. The performance of ROMs constructed
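
    The Lyapunov-inner-product construction can be sketched for a small stable LTI system: solve A^T P + P A = -Q for the weighting matrix P, then perform the Galerkin projection in the P-weighted inner product; the projected operator stays stable for any reduced basis. The sketch below uses a random stable operator and a random basis purely for illustration, not a discretized PDE from the report.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(0)

# A stable (Hurwitz) full-order LTI operator, standing in for a discretized PDE
n = 50
A = rng.normal(size=(n, n))
A = A - (np.max(np.linalg.eigvals(A).real) + 1.0) * np.eye(n)   # shift spectrum into left half-plane

# Weighting matrix of the "Lyapunov inner product": solve A^T P + P A = -Q with Q > 0
Q = np.eye(n)
P = solve_continuous_lyapunov(A.T, -Q)

# Arbitrary reduced basis (orthonormalized random vectors)
Phi, _ = np.linalg.qr(rng.normal(size=(n, 8)))

# Galerkin projection in the weighted inner product: (Phi^T P Phi) da/dt = Phi^T P A Phi a
Ar = np.linalg.solve(Phi.T @ P @ Phi, Phi.T @ P @ A @ Phi)
print("max Re(eig) of full operator:", np.max(np.linalg.eigvals(A).real))
print("max Re(eig) of projected ROM:", np.max(np.linalg.eigvals(Ar).real))   # negative -> stable
```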

  4. Probabilistic Rotor Life Assessment Using Reduced Order Models

    Directory of Open Access Journals (Sweden)

    Brian K. Beachkofski

    2009-01-01

    Full Text Available Probabilistic failure assessments for integrally bladed disks are system reliability problems where a failure in at least one blade constitutes a rotor system failure. Turbine engine fan and compressor blade life is dominated by High Cycle Fatigue (HCF) initiated either by pure HCF or Foreign Object Damage (FOD). To date performing an HCF life assessment for the entire rotor system has been too costly in analysis time to be practical. Although the substantial run-time has previously precluded a full-rotor probabilistic analysis, reduced order models make this process tractable as demonstrated in this work. The system model includes frequency prediction, modal stress variation, mistuning amplification, FOD effect, and random material capability. The model has many random variables which are most easily handled through simple random sampling.
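
    The series-system logic ("a failure in at least one blade constitutes a rotor system failure") combined with simple random sampling can be sketched directly. The blade count and the stress and capability distributions below are invented for illustration and bear no relation to the paper's reduced order models.

```python
import numpy as np

rng = np.random.default_rng(0)
n_blades, n_samples = 24, 100_000

# Hypothetical blade vibratory capability and applied stress (arbitrary units)
capability = rng.lognormal(mean=np.log(100.0), sigma=0.08, size=(n_samples, n_blades))
stress = rng.normal(loc=70.0, scale=8.0, size=(n_samples, n_blades))

blade_fails = stress > capability
rotor_fails = blade_fails.any(axis=1)       # series system: any one blade failure fails the rotor
print("per-blade failure probability:   ", blade_fails.mean())
print("rotor (series-system) probability:", rotor_fails.mean())
```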

  5. Reduced order component models for flexible multibody dynamics simulations

    Science.gov (United States)

    Tsuha, Walter S.; Spanos, John T.

    1990-01-01

    Many flexible multibody dynamics simulation codes require some form of component description that properly characterizes the dynamic behavior of the system. A model reduction procedure for producing low order component models for flexible multibody simulation is described. Referred to as projection and assembly, the method is a Rayleigh-Ritz approach that uses partitions of the system modal matrix as component Ritz transformation matrices. It is shown that the projection and assembly method yields a reduced system model that preserves a specified set of the full order system modes. Unlike classical component mode synthesis methods, the exactness of the method described is obtained at the expense of having to compute the full order system modes. The paper provides a comprehensive description of the method, a proof of exactness, and numerical results demonstrating the method's effectiveness.

  6. Reduced order methods for modeling and computational reduction

    CERN Document Server

    Rozza, Gianluigi

    2014-01-01

    This monograph addresses the state of the art of reduced order methods for modeling and computational reduction of complex parametrized systems, governed by ordinary and/or partial differential equations, with a special emphasis on real time computing techniques and applications in computational mechanics, bioengineering and computer graphics.  Several topics are covered, including: design, optimization, and control theory in real-time with applications in engineering; data assimilation, geometry registration, and parameter estimation with special attention to real-time computing in biomedical engineering and computational physics; real-time visualization of physics-based simulations in computer science; the treatment of high-dimensional problems in state space, physical space, or parameter space; the interactions between different model reduction and dimensionality reduction approaches; the development of general error estimation frameworks which take into account both model and discretization effects. This...

  7. Model-based design approach to reducing mechanical vibrations

    Directory of Open Access Journals (Sweden)

    P. Czop

    2013-09-01

    Full Text Available Purpose: The paper presents a sensitivity analysis method based on a first-principle model in order to reduce the mechanical vibrations of a hydraulic damper. Design/methodology/approach: The first-principle model is formulated using a system of continuous ordinary differential equations capturing the usually nonlinear relations among the variables of the hydraulic damper model. The model applies three categories of parameters: geometrical, physical and phenomenological. Geometrical and physical parameters are deduced from construction and operational documentation. The phenomenological parameters are the adjustable ones, which are estimated or adjusted based on their roughly known values, e.g. friction/damping coefficients. Findings: The sensitivity analysis method identifies the major contributors to vibration and their magnitudes. Research limitations/implications: The method's accuracy is limited by the model accuracy and inherent nonlinear effects. Practical implications: The proposed model-based sensitivity method can be used to optimize prototypes of hydraulic dampers. Originality/value: The proposed sensitivity-analysis method minimizes the risk that a hydraulic damper does not meet the customer specification.

  8. Package Equivalent Reactor Networks as Reduced Order Models for Use with CAPE-OPEN Compliant Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Meeks, E.; Chou, C. -P.; Garratt, T.

    2013-03-31

    Engineering simulations of coal gasifiers are typically performed using computational fluid dynamics (CFD) software, where a 3-D representation of the gasifier equipment is used to model the fluid flow in the gasifier and source terms from the coal gasification process are captured using discrete-phase model source terms. Simulations using this approach can be very time consuming, making it difficult to embed such models into overall system simulations for plant design and optimization. For such system-level designs, process flowsheet software is typically used, such as Aspen Plus® [1], where each component is modeled using a reduced-order model. For advanced power-generation systems, such as integrated gasifier/gas-turbine combined-cycle systems (IGCC), the critical components determining overall process efficiency and emissions are usually the gasifier and combustor. Providing more accurate and more computationally efficient reduced-order models for these components, then, enables much more effective plant-level design optimization and design for control. Based on the CHEMKIN-PRO and ENERGICO software, we have developed an automated methodology for generating an advanced form of reduced-order model for gasifiers and combustors. The reduced-order model offers representation of key unit operations in flowsheet simulations, while allowing simulation that is fast enough to be used in iterative flowsheet calculations. Using high-fidelity fluid-dynamics models as input, Reaction Design’s ENERGICO® [2] software can automatically extract equivalent reactor networks (ERNs) from a CFD solution. For the advanced reduced-order concept, we introduce into the ERN a much more detailed kinetics model than can be included practically in the CFD simulation. The state-of-the-art chemistry solver technology within CHEMKIN-PRO allows that to be accomplished while still maintaining a very fast model turn-around time. In this way, the ERN becomes the basis for

  9. Linear stability analysis of flow instabilities with a nodalized reduced order model in heated channel

    International Nuclear Information System (INIS)

    The prime objective of the presented work is to develop a Nodalized Reduced Order Model (NROM) to carry out a linear stability analysis of flow instabilities in a two-phase flow system. The model is developed by dividing the single-phase and two-phase regions of a uniformly heated channel into N nodes, followed by time-dependent spatially linear approximations for the single-phase enthalpy and two-phase quality between consecutive nodes. A moving boundary scheme has been adopted in the model, where all the node boundaries vary with time due to the variation of the boiling boundary inside the heated channel. Using a state-space approach, the instability thresholds are delineated by stability maps plotted in the parameter plane of phase change number (Npch) and subcooling number (Nsub). The prime feature of the present model is that, although the model equations are simpler due to the presence of linear approximations for the single-phase enthalpy and two-phase quality, the results are in good agreement with the existing models (Karve [33]; Dokhane [34]), whose model equations run for several pages, and with experimental data (Solberg [41]). Unlike the existing ROMs, different two-phase friction factor multiplier correlations have been incorporated in the model. The applicability of various two-phase friction factor multipliers and their effects on stability behaviour have been depicted by carrying out a comparative study. It is also observed that the Friedel model for friction factor calculations produces the most accurate results with respect to the available experimental data. (authors)
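
    A minimal sketch of the stability-map construction described above is given below: the (Npch, Nsub) plane is scanned, and an operating point is declared linearly stable when all eigenvalues of the linearized state matrix have negative real parts. The 2x2 matrix used here is only a placeholder, not the nodalized model of the paper.

```python
# Hedged sketch: delineating a stability map from a linearized state-space model.
# The placeholder matrix below stands in for the (unavailable) nodalized system matrix.
import numpy as np

def state_matrix(n_pch, n_sub):
    """Placeholder linearized system matrix A(N_pch, N_sub); illustrative only."""
    return np.array([[-1.0,          n_pch - n_sub],
                     [-n_sub, 0.5 * n_pch - 2.0]])

n_pch_grid = np.linspace(2.0, 12.0, 60)
n_sub_grid = np.linspace(0.5, 8.0, 60)
stable = np.zeros((len(n_sub_grid), len(n_pch_grid)), dtype=bool)

for i, n_sub in enumerate(n_sub_grid):
    for j, n_pch in enumerate(n_pch_grid):
        eigs = np.linalg.eigvals(state_matrix(n_pch, n_sub))
        stable[i, j] = bool(np.all(eigs.real < 0.0))

# The stability boundary is the contour separating stable from unstable grid cells.
print(f"{stable.sum()} of {stable.size} grid points are linearly stable")
```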

  10. Glyburide reduces bacterial dissemination in a mouse model of melioidosis.

    Directory of Open Access Journals (Sweden)

    Gavin C K W Koh

    Full Text Available BACKGROUND: Burkholderia pseudomallei infection (melioidosis) is an important cause of community-acquired Gram-negative sepsis in Northeast Thailand, where it is associated with a ~40% mortality rate despite antimicrobial chemotherapy. We showed in a previous cohort study that patients taking glyburide (= glibenclamide) prior to admission have lower mortality and attenuated inflammatory responses compared to patients not taking glyburide. We sought to define the mechanism underlying this observation in a murine model of melioidosis. METHODS: Mice (C57BL/6) with streptozocin-induced diabetes were inoculated with ~6 × 10^2 cfu B. pseudomallei intranasally, then treated with therapeutic ceftazidime (600 mg/kg intraperitoneally twice daily) starting 24 h after inoculation in order to mimic the clinical scenario. Glyburide (50 mg/kg) or vehicle was started 7 d before inoculation and continued until sacrifice. The minimum inhibitory concentration of glyburide for B. pseudomallei was determined by broth microdilution. We also examined the effect of glyburide on interleukin (IL) 1β production by bone-marrow-derived macrophages (BMDMs). RESULTS: Diabetic mice had increased susceptibility to melioidosis, with increased bacterial dissemination, but diabetes had no effect on inflammation compared to non-diabetic controls. Glyburide treatment did not affect glucose levels but was associated with reduced pulmonary cellular influx, reduced bacterial dissemination to both liver and spleen and reduced IL1β production when compared to untreated controls. Other cytokines were not different in glyburide-treated animals. There was no direct effect of glyburide on B. pseudomallei growth in vitro or in vivo. Glyburide directly reduced the secretion of IL1β by BMDMs in a dose-dependent fashion. CONCLUSIONS: Diabetes increases the susceptibility to melioidosis. We further show, for the first time in any model of sepsis, that glyburide acts as an anti-inflammatory agent by

  11. Randomized Wilson loops, reduced models and the large D expansion

    OpenAIRE

    Evnin, Oleg

    2011-01-01

    Reduced models are matrix integrals believed to be related to the large N limit of gauge theories. These integrals are known to simplify further when the number of matrices D (corresponding to the number of space-time dimensions in the gauge theory) becomes large. Even though this limit appears to be of little use for computing the standard rectangular Wilson loop (which always singles out two directions out of D), a meaningful large D limit can be defined for a randomized Wilson loop (in whi...

  12. Reduced parameter model on trajectory tracking data with applications

    Institute of Scientific and Technical Information of China (English)

    王正明; 朱炬波

    1999-01-01

    The data fusion involved in tracking the same trajectory by a multi-measurement unit (MMU) is considered. Firstly, the reduced parameter models (RPM) of the trajectory parameters (TP), system error and random error are presented, and then the RPM on trajectory tracking data (TTD) is obtained; a weighting method for the measuring elements (ME) is studied, and criteria for the selection of ME based on residuals and accuracy estimation are put forward. According to the RPM, the problems of ME selection and self-calibration of TTD are thoroughly investigated. The method markedly improves data accuracy in trajectory tracking and simultaneously provides an accuracy evaluation of the trajectory tracking system.

  13. Fragile DNA Repair Mechanism Reduces Ageing in Multicellular Model

    DEFF Research Database (Denmark)

    Bendtsen, Kristian Moss; Juul, Jeppe Søgaard; Trusina, Ala

    2012-01-01

    DNA damages, as well as mutations, increase with age. It is believed that these result from increased genotoxic stress and decreased capacity for DNA repair. The two causes are not independent, DNA damage can, for example, through mutations, compromise the capacity for DNA repair, which in turn increases the amount of unrepaired DNA damage. Despite this vicious circle, we ask, can cells maintain a high DNA repair capacity for some time or is repair capacity bound to continuously decline with age? We here present a simple mathematical model for ageing in multicellular systems where cells subjected to DNA damage can undergo full repair, go apoptotic, or accumulate mutations thus reducing DNA repair capacity. Our model predicts that at the tissue level repair rate does not continuously decline with age, but instead has a characteristic extended period of high and non-declining DNA repair...

  14. Reduced Modeling of Electron Trapping Nonlinearity in Raman Scattering

    Science.gov (United States)

    Strozzi, D. J.; Berger, R. L.; Rose, H. A.; Langdon, A. B.; Williams, E. A.

    2009-11-01

    The trapping of resonant electrons in Langmuir waves generated by stimulated Raman scattering (SRS) gives rise to several nonlinear effects, which can either increase or decrease the reflectivity. We have implemented a reduced model of these nonlinearities in the paraxial propagation code pF3D [R. L. Berger et al., Phys. Plasmas 5 (1998)], consisting of a Landau damping reduction and Langmuir-wave frequency downshift. Both effects depend on the local wave amplitude, and gradually turn on with amplitude. This model is compared with 1D seeded Vlasov simulations, that include a Krook relaxation operator to mimic, e.g., transverse sideloss out of a multi-D, finite laser speckle. SRS in these runs develops from a counter-propagating seed light wave. Applications to ICF experiments will also be presented.

  15. A Reduced Order, One Dimensional Model of Joint Response

    Energy Technology Data Exchange (ETDEWEB)

    DOHNER,JEFFREY L.

    2000-11-06

    As a joint is loaded, the tangent stiffness of the joint reduces due to slip at interfaces. This stiffness reduction continues until the direction of the applied load is reversed or the total interface slips. Total interface slippage in joints is called macro-slip. For joints not undergoing macro-slip, when load reversal occurs the tangent stiffness immediately rebounds to its maximum value. This occurs due to stiction effects at the interface. Thus, for periodic loads, a softening and rebound hardening cycle is produced which defines a hysteretic, energy absorbing trajectory. For many jointed sub-structures, this hysteretic trajectory can be approximated using simple polynomial representations. This allows for complex joint substructures to be represented using simple non-linear models. In this paper a simple one dimensional model is discussed.
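
    A minimal sketch of the kind of reduced one-dimensional hysteretic joint model described above is given below: between load reversals the tangent stiffness softens with slip, and at each reversal it rebounds to its maximum value. The polynomial softening law and all numerical values are illustrative assumptions, not taken from the report.

```python
# Hedged sketch of a one-dimensional softening/rebound joint model under periodic loading.
# The quadratic softening law and the constants are placeholders, not the paper's fit.
import numpy as np

K_MAX = 1.0e6      # N/m, tangent stiffness immediately after a load reversal
ALPHA = 2.0e11     # softening coefficient of the polynomial stiffness law

def tangent_stiffness(slip_since_reversal):
    """Polynomial softening: k = k_max - alpha * slip^2, floored at zero."""
    return max(K_MAX - ALPHA * slip_since_reversal**2, 0.0)

# Impose a periodic displacement and integrate the joint force incrementally.
t = np.linspace(0.0, 2.0, 4001)
u = 1.0e-3 * np.sin(2.0 * np.pi * t)
force = np.zeros_like(u)
u_rev = u[0]                              # displacement at the last load reversal
for k in range(1, len(u)):
    du = u[k] - u[k - 1]
    if k > 1 and np.sign(du) != np.sign(u[k - 1] - u[k - 2]):
        u_rev = u[k - 1]                  # reversal detected: stiffness rebounds
    force[k] = force[k - 1] + tangent_stiffness(abs(u[k] - u_rev)) * du

# The force-displacement trajectory traces a hysteresis loop; its enclosed area is the
# energy dissipated over the imposed cycles.
energy = float(np.sum(0.5 * (force[1:] + force[:-1]) * np.diff(u)))
print(f"dissipated energy over the imposed cycles ~ {energy:.3e} J")
```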

  16. Fast and Accurate Icepak-PSpice Co-Simulation of IGBTs under Short-Circuit with an Advanced PSpice Model

    DEFF Research Database (Denmark)

    Wu, Rui; Iannuzzo, Francesco; Wang, Huai;

    2014-01-01

    A basic problem in the IGBT short-circuit failure mechanism study is to obtain realistic temperature distribution inside the chip, which demands accurate electrical simulation to obtain power loss distribution as well as detailed IGBT geometry and material information. This paper describes an unp...

  17. A Taxonomic Reduced-Space Pollen Model for Paleoclimate Reconstruction

    Science.gov (United States)

    Wahl, E. R.; Schoelzel, C.

    2010-12-01

    Paleoenvironmental reconstruction from fossil pollen often attempts to take advantage of the rich taxonomic diversity in such data. Here, a taxonomically "reduced-space" reconstruction model is explored that would be parsimonious in introducing parameters needing to be estimated within a Bayesian Hierarchical Modeling context. This work involves a refinement of the traditional pollen ratio method. This method is useful when one (or a few) dominant pollen type(s) in a region have a strong positive correlation with a climate variable of interest and another (or a few) dominant pollen type(s) have a strong negative correlation. When, e.g., counts of pollen taxa a and b (r > 0) are combined with pollen types c and d (r < 0) into a ratio of the form r = (a+b)/(a+b+c+d), the counts can be described by a binomial logistic generalized linear model (GLM). The GLM can readily model this relationship in the forward form, pollen = g(climate), which is more physically realistic than inverse models often used in paleoclimate reconstruction [climate = f(pollen)]. The specification of the model is: rnum ~ Bin(n, p), where E(r|T) = p = exp(η)/[1+exp(η)], and η = α + βT; r is the pollen ratio formed as above, rnum is the ratio numerator, n is the ratio denominator (i.e., the sum of pollen counts), the denominator-specific count is (n - rnum), and T is the temperature at each site corresponding to a specific value of r. Ecological and empirical screening identified the model (Spruce+Birch) / (Spruce+Birch+Oak+Hickory) for use in temperate eastern N. America. α and β were estimated using both "traditional" and Bayesian GLM algorithms (in R). Although it includes only four pollen types, the ratio model yields more explained variation (~80%) in the pollen-temperature relationship of the study region than a 64-taxon modern analog technique (MAT). Thus, the new pollen ratio method represents an information-rich, reduced space data model that can be efficiently employed in a BHM framework. The ratio model can directly reconstruct past temperature by solving the GLM
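
    For illustration, the forward binomial logistic GLM described above can be fitted in a few lines. The sketch below uses statsmodels on synthetic pollen counts and temperatures, not the study's calibration data; the simulated coefficients are arbitrary.

```python
# Hedged sketch: fitting the forward pollen-ratio model rnum ~ Bin(n, p), logit(p) = a + b*T,
# on synthetic data. The counts and "true" coefficients are stand-ins for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_sites = 120
temperature = rng.uniform(5.0, 25.0, n_sites)           # site temperatures, deg C

# Simulate counts from the same model family: eta = alpha + beta*T, p = logistic(eta).
alpha_true, beta_true = -3.0, 0.25
p_true = 1.0 / (1.0 + np.exp(-(alpha_true + beta_true * temperature)))
n_grains = rng.integers(150, 400, n_sites)               # denominator: total counted grains
r_num = rng.binomial(n_grains, p_true)                   # numerator taxa (e.g. Spruce+Birch)

# Forward model pollen = g(climate): endog = [successes, failures], exog = [1, T].
endog = np.column_stack([r_num, n_grains - r_num])
exog = sm.add_constant(temperature)
fit = sm.GLM(endog, exog, family=sm.families.Binomial()).fit()
print(fit.params)    # estimates of alpha and beta, close to the simulated values
```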

  18. Optimizing Crawler4j using MapReduce Programming Model

    Science.gov (United States)

    Siddesh, G. M.; Suresh, Kavya; Madhuri, K. Y.; Nijagal, Madhushree; Rakshitha, B. R.; Srinivasa, K. G.

    2016-08-01

    The World Wide Web is a decentralized system that consists of a repository of information in the form of web pages. These web pages act as a source of information or data in the present analytics world. Web crawlers are used for extracting useful information from web pages for different purposes. Firstly, they are used in web search engines, where web pages are indexed to form a corpus of information that users can query. Secondly, they are used for web archiving, where web pages are stored for later analysis. Thirdly, they can be used for web mining, where web pages are monitored for copyright purposes. The amount of information processed by a web crawler needs to be increased by using the capabilities of modern parallel processing technologies. In order to address the parallelism and throughput of crawling, this work proposes to optimize Crawler4j using the Hadoop MapReduce programming model by parallelizing the processing of large input data. Crawler4j is a web crawler that retrieves useful information about the pages that it visits. Crawler4j, coupled with the data and computational parallelism of the Hadoop MapReduce programming model, improves the throughput and accuracy of web crawling. The experimental results demonstrate that the proposed solution achieves significant improvements with respect to performance and throughput. Hence the proposed approach carves out a new methodology for optimizing web crawling by achieving a significant performance gain.
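
    For illustration, one round of link extraction can be phrased in the map/shuffle/reduce style that Hadoop MapReduce provides. The sketch below is a toy stand-in with a regex-based link extractor; it does not use Crawler4j's actual API, and the sample records are invented.

```python
# Hedged sketch of one crawl round in Hadoop-Streaming-style MapReduce semantics:
# the mapper emits (out_link, source_url) pairs, the reducer aggregates per link.
import re
from itertools import groupby

LINK_RE = re.compile(r'href="(https?://[^"]+)"')

def mapper(records):
    """records: iterable of 'url<TAB>html' lines. Emits (out_link, source_url) pairs."""
    for line in records:
        url, _, html = line.rstrip("\n").partition("\t")
        for link in LINK_RE.findall(html):
            yield link, url

def reducer(pairs):
    """pairs grouped by key after the shuffle. Emits each link with its in-degree."""
    for link, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
        yield link, len({src for _, src in group})

if __name__ == "__main__":
    # Local stand-in for the map -> shuffle/sort -> reduce pipeline.
    sample = ['http://a.example\t<a href="http://b.example">b</a>',
              'http://c.example\t<a href="http://b.example">b</a>']
    for link, indegree in reducer(mapper(sample)):
        print(f"{link}\t{indegree}")
```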

  19. Triptolide reduces cystogenesis in a model of ADPKD.

    Science.gov (United States)

    Leuenroth, Stephanie J; Bencivenga, Natasha; Igarashi, Peter; Somlo, Stefan; Crews, Craig M

    2008-09-01

    Mutations in PKD1 result in autosomal dominant polycystic kidney disease, which is characterized by increased proliferation of tubule cells leading to cyst initiation and subsequent expansion. Given the cell proliferation associated with cyst growth, an attractive therapeutic strategy has been to target the hyperproliferative nature of the disease. We previously demonstrated that the small molecule triptolide induces cellular calcium release through a polycystin-2-dependent pathway, arrests Pkd1(-/-) cell growth, and reduces cystic burden in Pkd1(-/-) embryonic mice. To assess cyst progression in neonates, we used the kidney-specific Pkd1(flox/-);Ksp-Cre mouse model of autosomal dominant polycystic kidney disease, in which the burden of cysts is negligible at birth but then progresses rapidly over days. The number, size, and proliferation rate of cysts were examined. Treatment with triptolide significantly improved renal function at postnatal day 8 by inhibition of the early phases of cyst growth. Because the proliferative index of kidney epithelium in neonates versus adults is significantly different, future studies will need to address whether triptolide delays or reduces cyst progression in the Pkd1 adult model. PMID:18650476

  20. A reduced model of pulsatile flow in an arterial compartment

    International Nuclear Information System (INIS)

    In this article we propose a reduced model of the input-output behaviour of an arterial compartment, including the short systolic phase where wave phenomena are predominant. The objective is to provide a basis for model-based signal processing methods for the estimation from non-invasive measurements and the interpretation of the characteristics of these waves. Due to phenomena such as peaking and steepening, the considered pressure pulse waves behave more like solitons generated by a Korteweg-de Vries (KdV) model than like linear waves. So we start with a quasi-1D Navier-Stokes equation taking into account the radial acceleration of the wall: the radial acceleration term being supposed small, a two-scale singular perturbation technique is used to separate the fast wave propagation phenomena, taking place in a boundary layer in time and space and described by a KdV equation, from the slow phenomena represented by a parabolic equation leading to two-element windkessel models. Some particular solutions of the KdV equation, the 2- and 3-soliton solutions, seem to be good candidates to match the observed pressure pulse waves. Some very promising preliminary comparisons of numerical results obtained along this line with real pressure data are shown

  1. Informing Investment to Reduce Inequalities: A Modelling Approach

    Science.gov (United States)

    McAuley, Andrew; Denny, Cheryl; Taulbut, Martin; Mitchell, Rory; Fischbacher, Colin; Graham, Barbara; Grant, Ian; O’Hagan, Paul; McAllister, David; McCartney, Gerry

    2016-01-01

    Background Reducing health inequalities is an important policy objective but there is limited quantitative information about the impact of specific interventions. Objectives To provide estimates of the impact of a range of interventions on health and health inequalities. Materials and Methods Literature reviews were conducted to identify the best evidence linking interventions to mortality and hospital admissions. We examined interventions across the determinants of health: a ‘living wage’; changes to benefits, taxation and employment; active travel; tobacco taxation; smoking cessation, alcohol brief interventions, and weight management services. A model was developed to estimate mortality and years of life lost (YLL) in intervention and comparison populations over a 20-year time period following interventions delivered only in the first year. We estimated changes in inequalities using the relative index of inequality (RII). Results Introduction of a ‘living wage’ generated the largest beneficial health impact, with modest reductions in health inequalities. Benefits increases had modest positive impacts on health and health inequalities. Income tax increases had negative impacts on population health but reduced inequalities, while council tax increases worsened both health and health inequalities. Active travel increases had minimally positive effects on population health but widened health inequalities. Increases in employment reduced inequalities only when targeted to the most deprived groups. Tobacco taxation had modestly positive impacts on health but little impact on health inequalities. Alcohol brief interventions had modestly positive impacts on health and health inequalities only when strongly socially targeted, while smoking cessation and weight-reduction programmes had minimal impacts on health and health inequalities even when socially targeted. Conclusions Interventions have markedly different effects on mortality, hospitalisations and

  2. Parameterized reduced order models from a single mesh using hyper-dual numbers

    Science.gov (United States)

    Brake, M. R. W.; Fike, J. A.; Topping, S. D.

    2016-06-01

    In order to assess the predicted performance of a manufactured system, analysts must consider random variations (both geometric and material) in the development of a model, instead of a single deterministic model of an idealized geometry with idealized material properties. The incorporation of random geometric variations, however, potentially could necessitate the development of thousands of nearly identical solid geometries that must be meshed and separately analyzed, which would require an impractical number of man-hours to complete. This research advances a recent approach to uncertainty quantification by developing parameterized reduced order models. These parameterizations are based upon Taylor series expansions of the system's matrices about the ideal geometry, and a component mode synthesis representation for each linear substructure is used to form an efficient basis with which to study the system. The numerical derivatives required for the Taylor series expansions are obtained via hyper-dual numbers, and are compared to parameterized models constructed with finite difference formulations. The advantage of using hyper-dual numbers is two-fold: accuracy of the derivatives to machine precision, and the need to only generate a single mesh of the system of interest. The theory is applied to a stepped beam system in order to demonstrate proof of concept. The results demonstrate that the hyper-dual number multivariate parameterization of geometric variations, which is largely neglected in the literature, is accurate for both sensitivity and optimization studies. As model and mesh generation can constitute the greatest expense of time in analyzing a system, the foundation to create a parameterized reduced order model based on a single mesh is expected to reduce dramatically the necessary time to analyze multiple realizations of a component's possible geometry.
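
    A minimal sketch of hyper-dual-number arithmetic is given below; it implements only addition and multiplication, which is enough to show that first and second derivatives drop out of ordinary arithmetic exactly, with no finite-difference step size to choose. The example function is an arbitrary polynomial, not the stepped-beam model of the paper.

```python
# Hedged sketch: a minimal hyper-dual number x = f + f1*e1 + f2*e2 + f12*e1*e2 with
# e1^2 = e2^2 = (e1*e2)^2 = 0, so evaluating g(x) yields g, g', and g'' exactly.
class HyperDual:
    def __init__(self, f, f1=0.0, f2=0.0, f12=0.0):
        self.f, self.f1, self.f2, self.f12 = f, f1, f2, f12

    def __add__(self, other):
        o = other if isinstance(other, HyperDual) else HyperDual(other)
        return HyperDual(self.f + o.f, self.f1 + o.f1, self.f2 + o.f2, self.f12 + o.f12)

    __radd__ = __add__

    def __mul__(self, other):
        o = other if isinstance(other, HyperDual) else HyperDual(other)
        return HyperDual(self.f * o.f,
                         self.f * o.f1 + self.f1 * o.f,
                         self.f * o.f2 + self.f2 * o.f,
                         self.f * o.f12 + self.f1 * o.f2 + self.f2 * o.f1 + self.f12 * o.f)

    __rmul__ = __mul__

def func(x):
    return 3.0 * x * x * x + 2.0 * x     # illustrative function f(x) = 3x^3 + 2x

x0 = 1.5
x = HyperDual(x0, 1.0, 1.0, 0.0)         # seed both perturbation directions with 1
y = func(x)
print(y.f)      # f(x0)   = 13.125
print(y.f1)     # f'(x0)  = 9*x0^2 + 2 = 22.25, exact
print(y.f12)    # f''(x0) = 18*x0     = 27.0,  exact
```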

  3. Reduced Lorenz models for anomalous transport and profile resilience

    DEFF Research Database (Denmark)

    Rypdal, K.; Garcia, Odd Erik

    2007-01-01

    The physical basis for the Lorenz equations for convective cells in stratified fluids, and for magnetized plasmas imbedded in curved magnetic fields, is reexamined with emphasis on anomalous transport. It is shown that the Galerkin truncation leading to the Lorenz equations for the closed boundary problem is incompatible with finite fluxes through the system in the limit of vanishing diffusion. An alternative formulation leading to the Lorenz equations is proposed, invoking open boundaries and the notion of convective streamers and their back-reaction on the profile gradient, giving rise to resilience of the profile. Particular emphasis is put on the diffusionless limit, where these equations reduce to a simple dynamical system depending only on one single forcing parameter. This model is studied numerically, stressing experimentally observable signatures, and some of the perils of dimension...

  4. Stoichiometric modeling of oxidation of reduced inorganic sulfur compounds (Riscs) in Acidithiobacillus thiooxidans.

    Science.gov (United States)

    Bobadilla Fazzini, Roberto A; Cortés, Maria Paz; Padilla, Leandro; Maturana, Daniel; Budinich, Marko; Maass, Alejandro; Parada, Pilar

    2013-08-01

    The prokaryotic oxidation of reduced inorganic sulfur compounds (RISCs) is a topic of utmost importance from a biogeochemical and industrial perspective. Although sulfur-oxidizing bacterial activity is largely known, no quantitative approaches to biological RISC oxidation have been made that gather all the complex abiotic and enzymatic stoichiometry involved. Even though in the case of neutrophilic bacteria such as Paracoccus and Beggiatoa species the RISC oxidation systems are well described, there is a lack of knowledge for acidophilic microorganisms. Here, we present the first experimentally validated stoichiometric model able to assess RISC oxidation quantitatively in Acidithiobacillus thiooxidans (strain DSM 17318), the archetype of the sulfur-oxidizing acidophilic chemolithoautotrophs. This model was built based on literature and genomic analysis, considering a widespread mix of formerly proposed RISC oxidation models combined and evaluated experimentally. Thiosulfate partial oxidation by the Sox system (SoxABXYZ) was placed as the central step of the sulfur oxidation model, along with abiotic reactions. This model was coupled with a detailed stoichiometry of biomass production, providing accurate bacterial growth predictions. In silico deletion/inactivation highlights the role of sulfur dioxygenase as the main catalyzer and a moderate function of tetrathionate hydrolase in elemental sulfur catabolism, demonstrating that this model constitutes an advanced instrument for the optimization of At. thiooxidans biomass production with potential use in biohydrometallurgical and environmental applications.

  5. Improved Reduced Models for Single-Pass and Reflective Semiconductor Optical Amplifiers

    CERN Document Server

    Dúill, Seán P Ó

    2014-01-01

    We present highly accurate and easy to implement, improved lumped semiconductor optical amplifier (SOA) models for both single-pass and reflective semiconductor optical amplifiers (RSOA). The key feature of the model is the inclusion of the internal losses and we show that a few subdivisions are required to achieve an accuracy of 0.12 dB. For the case of RSOAs, we generalize a recently published model to account for the internal losses that are vital to replicate observed RSOA behavior. The results of the improved reduced RSOA model show large overlap when compared to a full bidirectional travelling wave model over a 40 dB dynamic range of input powers and a 20 dB dynamic range of reflectivity values. The models would be useful for the rapid system simulation of signals in communication systems, i.e. passive optical networks that employ RSOAs, signal processing using SOAs and for implementing digital back propagation to undo amplifier induced signal distortions.

  6. An ONIOM study of the Bergman reaction: a computationally efficient and accurate method for modeling the enediyne anticancer antibiotics

    Science.gov (United States)

    Feldgus, Steven; Shields, George C.

    2001-10-01

    The Bergman cyclization of large polycyclic enediyne systems that mimic the cores of the enediyne anticancer antibiotics was studied using the ONIOM hybrid method. Tests on small enediynes show that ONIOM can accurately match experimental data. The effect of the triggering reaction in the natural products is investigated, and we support the argument that it is strain effects that lower the cyclization barrier. The barrier for the triggered molecule is very low, leading to a reasonable half-life at biological temperatures. No evidence is found that would suggest a concerted cyclization/H-atom abstraction mechanism is necessary for DNA cleavage.

  7. Fragile DNA repair mechanism reduces ageing in multicellular model.

    Directory of Open Access Journals (Sweden)

    Kristian Moss Bendtsen

    Full Text Available DNA damages, as well as mutations, increase with age. It is believed that these result from increased genotoxic stress and decreased capacity for DNA repair. The two causes are not independent: DNA damage can, for example, through mutations, compromise the capacity for DNA repair, which in turn increases the amount of unrepaired DNA damage. Despite this vicious circle, we ask, can cells maintain a high DNA repair capacity for some time or is repair capacity bound to continuously decline with age? We here present a simple mathematical model for ageing in multicellular systems where cells subjected to DNA damage can undergo full repair, go apoptotic, or accumulate mutations thus reducing DNA repair capacity. Our model predicts that at the tissue level repair rate does not continuously decline with age, but instead has a characteristic extended period of high and non-declining DNA repair capacity, followed by a rapid decline. Furthermore, the time of high functionality increases, and consequently slows down the ageing process, if the DNA repair mechanism itself is vulnerable to DNA damages. Although counterintuitive at first glance, a fragile repair mechanism allows for a faster removal of compromised cells, thus freeing the space for healthy peers. This finding might be a first step toward understanding why a mutation in a single DNA repair protein (e.g. Wrn or Blm) is not buffered by other repair proteins and therefore leads to severe ageing disorders.
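
    A minimal Monte Carlo sketch of this type of multicellular ageing model is given below: each cell carries a repair capacity, and damaged cells either repair fully, die and are replaced by copies of surviving cells, or persist with a mutation that lowers their capacity. The rates and the capacity-loss step are illustrative assumptions, not the values used in the paper.

```python
# Hedged sketch of a simple cell-population ageing model with damage, repair,
# apoptosis/replacement, and mutation-driven loss of repair capacity.
import numpy as np

rng = np.random.default_rng(6)
N_CELLS, N_STEPS = 2000, 600
DAMAGE_RATE = 0.3        # probability a cell is hit by damage per step (assumed)
APOPTOSIS_PROB = 0.2     # chance an unrepaired, damaged cell dies and is replaced (assumed)
CAPACITY_LOSS = 0.05     # repair capacity lost per accumulated mutation (assumed)

capacity = np.ones(N_CELLS)             # 1.0 = fully functional repair machinery
mean_capacity = np.empty(N_STEPS)

for step in range(N_STEPS):
    damaged = rng.random(N_CELLS) < DAMAGE_RATE
    repaired = damaged & (rng.random(N_CELLS) < capacity)        # full repair
    unrepaired = damaged & ~repaired
    dies = unrepaired & (rng.random(N_CELLS) < APOPTOSIS_PROB)   # apoptosis
    mutates = unrepaired & ~dies                                 # mutation accumulates

    capacity[mutates] = np.maximum(capacity[mutates] - CAPACITY_LOSS, 0.0)
    if dies.any():
        # Dead cells are replaced by copies of randomly chosen surviving cells.
        donors = rng.choice(np.flatnonzero(~dies), size=int(dies.sum()))
        capacity[dies] = capacity[donors]
    mean_capacity[step] = capacity.mean()

# Tissue-level repair capacity stays high for an extended period, then declines.
print(mean_capacity[::100])
```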

  8. High-resolution LES of the rotating stall in a reduced scale model pump-turbine

    International Nuclear Information System (INIS)

    Extending the operating range of modern pump-turbines becomes increasingly important in the course of the integration of renewable energy sources in the existing power grid. However, at partial load conditions in pumping mode, the occurrence of rotating stall is critical to the operational safety of the machine and to grid stability. The understanding of the mechanisms behind this flow phenomenon still remains vague and incomplete. Past numerical simulations using a RANS approach often led to inconclusive results concerning the physical background. For the first time, the rotating stall is investigated by performing a large scale LES calculation on the HYDRODYNA pump-turbine scale model featuring approximately 100 million elements. The computations were performed on the PRIMEHPC FX10 of the University of Tokyo using the overset Finite Element open source code FrontFlow/blue with the dynamic Smagorinsky turbulence model and the no-slip wall condition. The internal flow computed is the one obtained when operating the pump-turbine at 76% of the best efficiency point in pumping mode, as previous experimental research showed the presence of four rotating cells. The rotating stall phenomenon is accurately reproduced for a reduced Reynolds number using the LES approach with acceptable computing resources. The results show an excellent agreement with available experimental data from the reduced scale model testing at the EPFL Laboratory for Hydraulic Machines. The number of stall cells as well as the propagation speed corroborates the experiment

  9. Reduced-dimension model of liquid plug propagation in tubes

    Science.gov (United States)

    Fujioka, Hideki; Halpern, David; Ryans, Jason; Gaver, Donald P.

    2016-09-01

    We investigate the flow resistance caused by the propagation of a liquid plug in a liquid-lined tube and propose a simple semiempirical formula for the flow resistance as a function of the plug length, the capillary number, and the precursor film thickness. These formulas are based on computational investigations of three key contributors to the plug resistance: the front meniscus, the plug core, and the rear meniscus. We show that the nondimensional flow resistance in the front meniscus varies as a function of the capillary number and the precursor film thickness. For a fixed capillary number, the flow resistance increases with decreasing precursor film thickness. The flow in the core region is modeled as Poiseuille flow and the flow resistance is a linear function of the plug length. For the rear meniscus, the flow resistance increases monotonically with decreasing capillary number. We investigate the maximum mechanical stress behavior at the wall, such as the wall pressure gradient, the wall shear stress, and the wall shear stress gradient, and propose empirical formulas for the maximum stresses in each region. These wall mechanical stresses vary as a function of the capillary number: For semi-infinite fingers of air propagating through pulmonary airways, the epithelial cell damage correlates with the pressure gradient. However, for shorter plugs the front meniscus may provide substantial mechanical stresses that could modulate this behavior and provide a major cause of cell injury when liquid plugs propagate in pulmonary airways. Finally, we propose that the reduced-dimension models developed herein may be of importance for the creation of large-scale models of interfacial flows in pulmonary networks, where full computational fluid dynamics calculations are untenable.

  10. Modelling obesity outcomes : reducing obesity risk in adulthood may have greater impact than reducing obesity prevalence in childhood

    NARCIS (Netherlands)

    Lhachimi, S. K.; Nusselder, W. J.; Lobstein, T. J.; Smit, H. A.; Baili, P.; Bennett, K.; Kulik, M. C.; Jackson-Leach, R.; Boshuizen, H. C.; Mackenbach, J. P.

    2013-01-01

    A common policy response to the rise in obesity prevalence is to undertake interventions in childhood, but it is an open question whether this is more effective than reducing the risk of becoming obese during adulthood. In this paper, we model the effect on health outcomes of (i) reducing the preval

  11. Design of multivariable feedback control systems via spectral assignment using reduced-order models and reduced-order observers

    Science.gov (United States)

    Mielke, R. R.; Tung, L. J.; Carraway, P. I., III

    1985-01-01

    The feasibility of using reduced order models and reduced order observers with eigenvalue/eigenvector assignment procedures is investigated. A review of spectral assignment synthesis procedures is presented. Then, a reduced order model which retains essential system characteristics is formulated. A constant state feedback matrix which assigns desired closed loop eigenvalues and approximates specified closed loop eigenvectors is calculated for the reduced order model. It is shown that the eigenvalue and eigenvector assignments made in the reduced order system are retained when the feedback matrix is implemented about the full order system. In addition, those modes and associated eigenvectors which are not included in the reduced order model remain unchanged in the closed loop full order system. The full state feedback design is then implemented by using a reduced order observer. It is shown that the eigenvalue and eigenvector assignments of the closed loop full order system remain unchanged when a reduced order observer is used. The design procedure is illustrated by an actual design problem.
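
    As a rough sketch of eigenvalue assignment on a reduced-order model, the code below truncates a small modal system and places the retained poles with scipy's place_poles. It does not reproduce the paper's procedure for approximating eigenvectors or for mapping the gain and reduced-order observer back to the full-order system; the matrices are illustrative placeholders.

```python
# Hedged sketch: modal truncation of a symmetric stable system followed by pole placement
# on the reduced pair (A_r, B_r). Numbers and sizes are arbitrary stand-ins.
import numpy as np
from scipy.signal import place_poles

rng = np.random.default_rng(3)
n, r = 10, 3

# Symmetric stable A so all modes are real and the modal truncation stays simple.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
decay = np.linspace(0.5, 12.0, n)
A = -(Q @ np.diag(decay) @ Q.T)          # eigenvalues -0.5 ... -12.0
B = rng.standard_normal((n, 2))

# Modal truncation: retain the r slowest modes as the reduction basis.
T = Q[:, :r]
A_r = T.T @ A @ T                        # = diag(-0.5, ...), the retained modes
B_r = T.T @ B

# Assign the eigenvalues of the reduced closed loop A_r - B_r K.
desired = np.array([-2.0, -2.5, -3.0])
K = place_poles(A_r, B_r, desired).gain_matrix
print(np.sort(np.linalg.eigvals(A_r - B_r @ K).real))   # ~ [-3.0, -2.5, -2.0]
```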

  12. Modelling obesity outcomes: reducing obesity risk in adulthood may have greater impact than reducing obesity prevalence in childhood

    NARCIS (Netherlands)

    Lhachimi, S.K.; Nusselder, W.J.; Lobstein, T.J.; Smit, H.A.; Baili, P.; Bennett, K.; Kulik, M.C.; Jackson-Leach, R.; Boshuizen, H.C.; Mackenbach, J.P.

    2013-01-01

    A common policy response to the rise in obesity prevalence is to undertake interventions in childhood, but it is an open question whether this is more effective than reducing the risk of becoming obese during adulthood. In this paper, we model the effect on health outcomes of (i) reducing the preval

  13. Reduced Moment-Based Models for Oxygen Precipitates and Dislocation Loops in Silicon

    Science.gov (United States)

    Trzynadlowski, Bart

    The demand for ever smaller, higher-performance integrated circuits and more efficient, cost-effective solar cells continues to push the frontiers of process technology. Fabrication of silicon devices requires extremely precise control of impurities and crystallographic defects. Failure to do so not only reduces performance, efficiency, and yield, it threatens the very survival of commercial enterprises in today's fiercely competitive and price-sensitive global market. The presence of oxygen in silicon is an unavoidable consequence of the Czochralski process, which remains the most popular method for large-scale production of single-crystal silicon. Oxygen precipitates that form during thermal processing cause distortion of the surrounding silicon lattice and can lead to the formation of dislocation loops. Localized deformation caused by both of these defects introduces potential wells that trap diffusing impurities such as metal atoms, which is highly desirable if done far away from sensitive device regions. Unfortunately, dislocations also reduce the mechanical strength of silicon, which can cause wafer warpage and breakage. Engineers must negotiate this and other complex tradeoffs when designing fabrication processes. Accomplishing this in a complex, modern process involving a large number of thermal steps is impossible without the aid of computational models. In this dissertation, new models for oxygen precipitation and dislocation loop evolution are described. An oxygen model using kinetic rate equations to evolve the complete precipitate size distribution was developed first. This was then used to create a reduced model tracking only the moments of the size distribution. The moment-based model was found to run significantly faster than its full counterpart while accurately capturing the evolution of oxygen precipitates. The reduced model was fitted to experimental data and a sensitivity analysis was performed to assess the robustness of the results. Source

  14. Wind Farm Flow Modeling using an Input-Output Reduced-Order Model

    Energy Technology Data Exchange (ETDEWEB)

    Annoni, Jennifer; Gebraad, Pieter; Seiler, Peter

    2016-08-01

    Wind turbines in a wind farm operate individually to maximize their own power regardless of the impact of aerodynamic interactions on neighboring turbines. There is the potential to increase power and reduce overall structural loads by properly coordinating turbines. To perform control design and analysis, a model needs to be of low computational cost but retain the necessary dynamics seen in high-fidelity models. The objective of this work is to obtain a reduced-order model that represents the full-order flow computed using a high-fidelity model. A variety of methods, including proper orthogonal decomposition and dynamic mode decomposition, can be used to extract the dominant flow structures and obtain a reduced-order model. In this paper, we combine proper orthogonal decomposition with a system identification technique to produce an input-output reduced-order model. This technique is used to construct a reduced-order model of the flow within a two-turbine array computed using a large-eddy simulation.
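
    A minimal sketch of the two ingredients named above is given below: POD modes computed from a snapshot matrix via the SVD, followed by a least-squares identification of input-output dynamics for the modal coefficients (a DMDc-like fit). The data are synthetic stand-ins for LES snapshots, and this is not necessarily the specific system identification technique used in the paper.

```python
# Hedged sketch: POD of a snapshot matrix plus a least-squares fit of discrete-time
# input-output dynamics for the POD coefficients.
import numpy as np

rng = np.random.default_rng(4)
n_cells, n_snaps, r = 2000, 200, 5

# Synthetic snapshot matrix X (one flow snapshot per column) and input history u(k).
X = rng.standard_normal((n_cells, n_snaps))
u = rng.standard_normal((1, n_snaps))

# (i) POD: left singular vectors of the mean-subtracted snapshots are the modes.
Xc = X - X.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(Xc, full_matrices=False)
Phi = U[:, :r]                          # dominant spatial structures
a = Phi.T @ Xc                          # modal coefficients over time, shape (r, n_snaps)

# (ii) Input-output model: a(k+1) ~ A_r a(k) + B_r u(k), solved in the least-squares sense.
Z = np.vstack([a[:, :-1], u[:, :-1]])   # regressors
AB, *_ = np.linalg.lstsq(Z.T, a[:, 1:].T, rcond=None)
A_r, B_r = AB.T[:, :r], AB.T[:, r:]
print(A_r.shape, B_r.shape)             # (r, r) and (r, 1)
```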

  15. A pharmacokinetic/pharmacodynamic mathematical model accurately describes the activity of voriconazole against Candida spp. in vitro

    OpenAIRE

    Li, Yanjun; Nguyen, M. Hong; Cheng, Shaoji; Schmidt, Stephan; Zhong, Li; Derendorf, Hartmut; Clancy, Cornelius J.

    2008-01-01

    We developed a pharmacokinetic/pharmacodynamic (PK/PD) mathematical model that fits voriconazole time–kill data against Candida isolates in vitro and used the model to simulate the expected kill curves for typical intravenous and oral dosing regimens. A series of Emax mathematical models were used to fit time–kill data for two isolates each of Candida albicans, Candida glabrata and Candida parapsilosis. PK parameters extracted from human data sets were used in the model to simulate kill curve...

  16. A reduced order aerothermodynamic modeling framework for hypersonic vehicles based on surrogate and POD

    Directory of Open Access Journals (Sweden)

    Chen Xin

    2015-10-01

    Full Text Available Aerothermoelasticity is one of the key technologies for hypersonic vehicles. Accurate and efficient computation of the aerothermodynamics is one of the primary challenges for hypersonic aerothermoelastic analysis. Aimed at overcoming the shortcomings of engineering calculation, computational fluid dynamics (CFD) and experimental investigation, a reduced order modeling (ROM) framework for aerothermodynamics based on CFD predictions using an enhanced algorithm of fast maximin Latin hypercube design is developed. Both proper orthogonal decomposition (POD) and surrogate approaches are considered and compared to construct ROMs. Two surrogate approaches, named Kriging and optimized radial basis function (ORBF), are utilized to construct ROMs. Furthermore, an enhanced algorithm of fast maximin Latin hypercube design is proposed, which proves helpful in improving the precision of the ROMs. Test results for the three-dimensional aerothermodynamics over a hypersonic surface indicate that the precision of ROMs based on Kriging is better than that of ROMs based on ORBF, and ROMs based on Kriging are marginally more accurate than ROMs based on POD-Kriging. In short, the ROM framework for hypersonic aerothermodynamics has good precision and efficiency.
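
    A rough sketch of the surrogate-ROM workflow described above is given below: a Latin hypercube design, evaluation of an expensive model at the design points (replaced here by a cheap stand-in function), and a Kriging fit, i.e. a Gaussian-process regressor. The scipy/scikit-learn objects are generic stand-ins, not the authors' enhanced maximin design or their aerothermodynamic solver.

```python
# Hedged sketch: Latin hypercube sampling + Kriging (Gaussian-process) surrogate.
# The "expensive" model is a placeholder for a CFD aerothermodynamic evaluation.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_aerothermal_model(x):
    """Stand-in response over three normalized inputs in [0, 1]^3 (illustrative)."""
    return np.sin(3.0 * x[:, 0]) + x[:, 1] ** 2 + 0.5 * x[:, 0] * x[:, 2]

sampler = qmc.LatinHypercube(d=3, seed=7)
X_train = sampler.random(n=60)                    # space-filling design in [0, 1]^3
y_train = expensive_aerothermal_model(X_train)

kriging = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=0.3),
                                   normalize_y=True)
kriging.fit(X_train, y_train)

X_test = sampler.random(n=20)
y_pred, y_std = kriging.predict(X_test, return_std=True)
print("max abs surrogate error:",
      float(np.max(np.abs(y_pred - expensive_aerothermal_model(X_test)))))
```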

  17. A reduced order aerothermodynamic modeling framework for hypersonic vehicles based on surrogate and POD

    Institute of Scientific and Technical Information of China (English)

    Chen Xin; Liu Li; Long Teng; Yue Zhenjiang

    2015-01-01

    Aerothermoelasticity is one of the key technologies for hypersonic vehicles. Accurate and efficient computation of the aerothermodynamics is one of the primary challenges for hypersonic aerothermoelastic analysis. Aimed at overcoming the shortcomings of engineering calculation, computational fluid dynamics (CFD) and experimental investigation, a reduced order modeling (ROM) framework for aerothermodynamics based on CFD predictions using an enhanced algorithm of fast maximin Latin hypercube design is developed. Both proper orthogonal decomposition (POD) and surrogate approaches are considered and compared to construct ROMs. Two surrogate approaches, named Kriging and optimized radial basis function (ORBF), are utilized to construct ROMs. Furthermore, an enhanced algorithm of fast maximin Latin hypercube design is proposed, which proves helpful in improving the precision of the ROMs. Test results for the three-dimensional aerothermodynamics over a hypersonic surface indicate that the precision of ROMs based on Kriging is better than that of ROMs based on ORBF, and ROMs based on Kriging are marginally more accurate than ROMs based on POD-Kriging. In short, the ROM framework for hypersonic aerothermodynamics has good precision and efficiency.

  18. Fast and Accurate Practical Positioning Method using Enhanced-Lateration Technique and Adaptive Propagation Model in GSM Mode

    Directory of Open Access Journals (Sweden)

    Mohamed H. Abdel Meniem

    2012-03-01

    Full Text Available In this paper, we consider the problem of positioning mobile phones; different approaches have been developed for this purpose using GPS, WiFi, GSM, UMTS and other sensors that exist in today's smartphones. Location awareness in general is attracting tremendous interest in different fields and scopes. Position is the key element of context awareness. However, although GPS produces an accurate position, it requires an open sky and does not work indoors. We present an innovative, robust technique for positioning that can be applied in a terminal-based or network-based architecture. It depends only on Received Signal Strength (RSS) and the location of the Base Transceiver Station (BTS). This work has been completely tested and analyzed on roads in Egypt using realistic data and a commercial Android smartphone. In general, all performance evaluation results were good. The mean positioning error was about 120 m in urban areas and 394 m in rural areas.

  19. Testing the importance of accurate meteorological input fields and parameterizations in atmospheric transport modelling using DREAM - Validation against ETEX-1

    DEFF Research Database (Denmark)

    Brandt, J.; Bastrup-Birk, A.; Christensen, J.H.;

    1998-01-01

    transport and dispersion of air pollutants caused by a single but strong source as, e.g. an accidental release from a nuclear power plant. The model system including the coupling of the Lagrangian model with the Eulerian model are described. Various simple and comprehensive parameterizations of the mixing...

  20. Building relationships between plant traits and leaf spectra to reduce uncertainty in terrestrial ecosystem models

    Science.gov (United States)

    Lieberman-Cribbin, W.; Rogers, A.; Serbin, S.; Ely, K.

    2015-12-01

    Despite climate projections, there is uncertainty in how terrestrial ecosystems will respond to warming temperatures and increased atmospheric carbon dioxide concentrations. Earth system models are used to determine how ecosystems will respond in the future, but there is considerable variation in how plant traits are represented within these models. A potential approach to reducing uncertainty is the establishment of spectra-trait linkages among plant species. These relationships allow the accurate estimation of biochemical characteristics of plants from their shortwave spectral profiles. Remote sensing approaches can then be implemented to acquire spectral data and estimate plant traits over large spatial and temporal scales. This paper describes a greenhouse experiment conducted at Brookhaven National Laboratory in which spectra-trait relationships were investigated for 8 different plant species. This research was designed to generate a broad gradient in plant traits, using a range of species grown in different-sized pots with different soil types. Fertilizer was also applied in different amounts to generate variation in plant C and N status that would be reflected in the traits measured, as well as the spectra observed. Leaves were sampled at different developmental stages to increase variation. Spectra and plant traits were then measured and a partial least-squares regression (PLSR) modeling approach was used to establish spectra-trait relationships. Despite the variability in growing conditions and plant species, our PLSR models could be used to accurately estimate plant traits from spectral signatures, yielding model calibration R2 and root mean square error (RMSE) values, respectively, of 0.85 and 0.30 for percent nitrogen by mass (Nmass%), 0.78 and 0.75 for the carbon-to-nitrogen (C:N) ratio, 0.87 and 2.39 for leaf mass per area (LMA), and 0.76 and 15.16 for water (H2O) content. This research forms the basis for establishing new and more comprehensive spectra
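
    A minimal sketch of the PLSR calibration/validation pattern described above is given below, with simulated spectra and a simulated trait standing in for the greenhouse data; the number of latent components and all data are illustrative choices.

```python
# Hedged sketch: PLSR calibration of a leaf trait against reflectance spectra,
# with R2 and RMSE reported on a held-out validation split. Data are simulated.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(5)
n_leaves, n_bands = 300, 500                   # samples x spectral bands (illustrative)
spectra = rng.standard_normal((n_leaves, n_bands))
true_w = rng.standard_normal(n_bands) / np.sqrt(n_bands)
nmass = spectra @ true_w + 0.1 * rng.standard_normal(n_leaves)   # stand-in for %N by mass

X_cal, X_val, y_cal, y_val = train_test_split(spectra, nmass, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=10)
pls.fit(X_cal, y_cal)
y_hat = pls.predict(X_val).ravel()

print("validation R2  :", r2_score(y_val, y_hat))
print("validation RMSE:", mean_squared_error(y_val, y_hat) ** 0.5)
```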

  1. The use of a new 3D splint and double CT scan procedure to obtain an accurate anatomic virtual augmented model of the skull.

    Science.gov (United States)

    Swennen, G R J; Barth, E-L; Eulzer, C; Schutyser, F

    2007-02-01

    Three-dimensional (3D) virtual planning of orthognathic surgery requires detailed visualization of the interocclusal relationship. The purpose of this study was to introduce a modification of the double computed tomography (CT) scan procedure using a newly designed 3D splint in order to obtain a detailed anatomic 3D virtual augmented model of the skull. A total of 10 dry adult human cadaver skulls were used to evaluate the accuracy of the automatic rigid registration method for fusion of both CT datasets (Maxilim, version 1.3.0). The overall mean registration error was 0.1355 ± 0.0323 mm (range 0.0760-0.1782 mm). Analysis of variance showed a registration method error of 0.0564 mm. The 3D splint combined with the double CT scan procedure allowed accurate registration and the set-up of an accurate anatomic 3D virtual augmented model of the skull with a detailed dental surface.

  2. A Two-Scale Reduced Model for Darcy Flow in Fractured Porous Media

    KAUST Repository

    Chen, Huangxin

    2016-06-01

    In this paper, we develop a two-scale reduced model for simulating the Darcy flow in two-dimensional porous media with conductive fractures. We apply the approach motivated by the embedded fracture model (EFM) to simulate the flow on the coarse scale, and the effect of fractures on each coarse-scale grid cell intersecting with fractures is represented by the discrete fracture model (DFM) on the fine scale. In the DFM used on the fine scale, the matrix-fracture system is resolved on an unstructured grid which represents the fractures accurately, while in the EFM used on the coarse scale, the flux interaction between fractures and matrix is dealt with as a source term, and the matrix-fracture system can be resolved on a structured grid. The Raviart-Thomas mixed finite element methods are used for the solution of the coupled flows in the matrix and the fractures on both fine and coarse scales. Numerical results are presented to demonstrate the efficiency of the proposed model for simulation of flow in fractured porous media.

  3. Reduced-order model based feedback control of the modified Hasegawa-Wakatani model

    Energy Technology Data Exchange (ETDEWEB)

    Goumiri, I. R.; Rowley, C. W.; Ma, Z. [Department of Mechanical and Aerospace Engineering, Princeton University, Princeton, New Jersey 08544 (United States); Gates, D. A.; Krommes, J. A.; Parker, J. B. [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08544 (United States)

    2013-04-15

    In this work, the development of model-based feedback control that stabilizes an unstable equilibrium is obtained for the Modified Hasegawa-Wakatani (MHW) equations, a classic model in plasma turbulence. First, a balanced truncation (a model reduction technique that has proven successful in flow control design problems) is applied to obtain a low dimensional model of the linearized MHW equation. Then, a model-based feedback controller is designed for the reduced order model using linear quadratic regulators. Finally, a linear quadratic Gaussian controller which is more resistant to disturbances is deduced. The controller is applied on the non-reduced, nonlinear MHW equations to stabilize the equilibrium and suppress the transition to drift-wave induced turbulence.
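
    A minimal sketch of the linear-quadratic-regulator step mentioned above is given below, applied to a small placeholder model rather than the balanced-truncation ROM of the MHW equations; the LQG extension and the application to the nonlinear equations are omitted.

```python
# Hedged sketch: an LQR gain for a small (already reduced) unstable linear model,
# computed from the continuous algebraic Riccati equation. Matrices are placeholders.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[ 0.2,  1.0, 0.0],
              [-1.0,  0.1, 0.5],
              [ 0.0, -0.5, -0.3]])      # unstable reduced dynamics (illustrative)
B = np.array([[0.0], [1.0], [0.5]])
Q = np.eye(3)                           # state weighting
R = np.array([[1.0]])                   # actuation weighting

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)         # LQR gain, control law u = -K x

print("open-loop   max Re(eig):", np.linalg.eigvals(A).real.max())
print("closed-loop max Re(eig):", np.linalg.eigvals(A - B @ K).real.max())
```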

  4. Prediction of a Francis turbine prototype full load instability from investigations on the reduced scale model

    Science.gov (United States)

    Alligné, S.; Maruzewski, P.; Dinh, T.; Wang, B.; Fedorov, A.; Iosfin, J.; Avellan, F.

    2010-08-01

    The growing development of renewable energies, combined with the process of privatization, leads to a change in the economic strategies of the energy market. Instantaneous pricing of electricity as a function of demand or predictions induces profitable peak production, which is mainly covered by hydroelectric power plants. Therefore, operators harness more hydroelectric facilities at full load operating conditions. However, the Francis turbine features an axi-symmetric vortex rope leaving the runner which may act under certain conditions as an internal energy source leading to instability. Undesired power and pressure fluctuations are induced which may limit the maximum available power output. BC Hydro experiences such constraints in a hydroelectric power plant consisting of four 435 MW Francis turbine generating units, which is located in Canada's province of British Columbia. Under specific full load operating conditions, one unit experiences power and pressure fluctuations at 0.46 Hz. The aim of the paper is to present a methodology allowing prediction of this prototype's instability frequency from investigations on the reduced scale model. A new hydro-acoustic vortex rope model has been developed in the SIMSEN software, taking into account the energy dissipation due to the thermodynamic exchange between the gas and the surrounding liquid. A combination of measurements, CFD simulations and computation of eigenmodes of the reduced scale model installed on the test rig allows the accurate calibration of the vortex rope model parameters at the model scale. Then, transposition of the parameters to the prototype according to similitude laws is applied and a stability analysis of the power plant is performed. The eigenfrequency of 0.39 Hz related to the first eigenmode of the power plant is determined to be unstable. The predicted frequency of the full load power and pressure fluctuations at the unit's unstable operating point is found to be in general agreement with the prototype measurements.

  5. An Accurate GPS-IMU/DR Data Fusion Method for Driverless Car Based on a Set of Predictive Models and Grid Constraints

    OpenAIRE

    Shiyao Wang; Zhidong Deng; Gang Yin

    2016-01-01

    A high-performance differential global positioning system (GPS)  receiver with real time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to multipath effect but also unable to effectively fulfill precise error correction in a wide range of driving areas. This paper proposes an accurate GPS–inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ...

  6. Nonlinear dynamics of an electrically actuated imperfect microbeam resonator: Experimental investigation and reduced-order modeling

    KAUST Repository

    Ruzziconi, Laura

    2013-06-10

    We present a study of the dynamic behavior of a microelectromechanical systems (MEMS) device consisting of an imperfect clamped-clamped microbeam subjected to electrostatic and electrodynamic actuation. Our objective is to develop a theoretical analysis, which is able to describe and predict all the main relevant aspects of the experimental response. Extensive experimental investigation is conducted, where the main imperfections coming from microfabrication are detected, the first four experimental natural frequencies are identified and the nonlinear dynamics are explored at increasing values of electrodynamic excitation, in a neighborhood of the first symmetric resonance. Several backward and forward frequency sweeps are acquired. The nonlinear behavior is highlighted, which includes ranges of multistability, where the nonresonant and the resonant branch coexist, and intervals where superharmonic resonances are clearly visible. Numerical simulations are performed. Initially, two single mode reduced-order models are considered. One is generated via the Galerkin technique, and the other one via the combined use of the Ritz method and the Padé approximation. Both of them are able to provide a satisfactory agreement with the experimental data. This occurs not only at low values of electrodynamic excitation, but also at higher ones. Their computational efficiency is discussed in detail, since this is an essential aspect for systematic local and global simulations. Finally, the theoretical analysis is further improved and a two-degree-of-freedom reduced-order model is developed, which is also capable of capturing the measured second symmetric superharmonic resonance. Despite the apparent simplicity, it is shown that all the proposed reduced-order models are able to describe the experimental complex nonlinear dynamics of the device accurately and properly, which validates the proposed theoretical approach. © 2013 IOP Publishing Ltd.

  7. Random forest algorithm yields accurate quantitative prediction models of benthic light at intertidal sites affected by toxic Lyngbya majuscula blooms

    NARCIS (Netherlands)

    M.J. Kehoe; K. O’ Brien; A. Grinham; D. Rissik; K.S. Ahern; P. Maxwell

    2012-01-01

    It is shown that targeted high frequency monitoring and modern machine learning methods lead to highly predictive models of benthic light flux. A state-of-the-art machine learning technique was used in conjunction with a high frequency data set to calibrate and test predictive benthic light models.
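
    As a sketch of the modelling step named in the title, the snippet below trains a random forest regressor on synthetic water-quality drivers and reports hold-out R^2; the feature names and the toy attenuation relationship are assumptions for illustration only.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import r2_score

      rng = np.random.default_rng(1)
      X = rng.uniform(size=(500, 3))                   # e.g. turbidity, depth, surface irradiance
      y = X[:, 2] * np.exp(-3.0 * X[:, 0] * X[:, 1])   # toy light-attenuation relationship
      y += rng.normal(0.0, 0.02, size=y.shape)         # measurement noise

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
      print("hold-out R^2:", round(r2_score(y_te, model.predict(X_te)), 3))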

  8. Efficient and accurate modeling of multi-wavelength propagation in SOAs: a generalized coupled-mode approach

    CERN Document Server

    Antonelli, Cristian; Li, Wangzhe; Coldren, Larry

    2015-01-01

    We present a model for multi-wavelength mixing in semiconductor optical amplifiers (SOAs) based on coupled-mode equations. The proposed model applies to all kinds of SOA structures, takes into account the longitudinal dependence of carrier density caused by saturation, accommodates arbitrary functional dependencies of the material gain and carrier recombination rate on the local carrier density, and is computationally more efficient by orders of magnitude compared with the standard full model based on space-time equations. We apply the coupled-mode equations model to a recently demonstrated phase-sensitive amplifier based on an integrated SOA and show its results to be consistent with the experimental data. The accuracy of the proposed model is verified by means of a meticulous comparison with the results obtained by integrating the space-time equations.

  9. An Accurate Analytical Model for 802.11e EDCA under Different Traffic Conditions with Contention-Free Bursting

    Directory of Open Access Journals (Sweden)

    Nada Chendeb Taher

    2011-01-01

    Full Text Available Extensive research addressing IEEE 802.11e enhanced distributed channel access (EDCA) performance analysis by means of analytical models exists in the literature. Unfortunately, the currently proposed models, even though numerous, do not reach the required accuracy due to the great number of simplifications that have been made. In particular, none of these models considers the 802.11e contention free burst (CFB) mode, which allows a given station to transmit a burst of frames without contention during a given transmission opportunity limit (TXOPLimit) time interval. Despite its influence on the global performance, TXOPLimit is ignored in almost all existing models. To fill this gap, we develop in this paper a new and complete analytical model that (i) reflects the correct functioning of EDCA, (ii) includes all the 802.11e EDCA differentiation parameters, (iii) takes into account all the features of the protocol, and (iv) can be applied to all network conditions, from nonsaturation to saturation. Additionally, this model is intended to be used in an admission control procedure, so it was designed to have low complexity and an acceptable response time. The proposed model is validated by means of both calculations and extensive simulations.

  10. Reduced Order Aeroservoelastic Models with Rigid Body Modes Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Complex aeroelastic and aeroservoelastic phenomena can be modeled on complete aircraft configurations generating models with millions of degrees of freedom....

  11. Prognostic models and risk scores: can we accurately predict postoperative nausea and vomiting in children after craniotomy?

    Science.gov (United States)

    Neufeld, Susan M; Newburn-Cook, Christine V; Drummond, Jane E

    2008-10-01

    Postoperative nausea and vomiting (PONV) is a problem for many children after craniotomy. Prognostic models and risk scores help identify who is at risk for an adverse event such as PONV to help guide clinical care. The purpose of this article is to assess whether an existing prognostic model or risk score can predict PONV in children after craniotomy. The concepts of transportability, calibration, and discrimination are presented to identify what is required to have a valid tool for clinical use. Although previous work may inform clinical practice and guide future research, existing prognostic models and risk scores do not appear to be options for predicting PONV in children undergoing craniotomy. However, until risk factors are further delineated, followed by the development and validation of prognostic models and risk scores that include children after craniotomy, clinical judgment in the context of current research may serve as a guide for clinical care in this population. PMID:18939320

  12. Some combinatorial models for reduced expressions in Coxeter groups

    CERN Document Server

    Denoncourt, Hugh

    2011-01-01

    Stanley's formula for the number of reduced expressions of a permutation regarded as a Coxeter group element raises the question of how to enumerate the reduced expressions of an arbitrary Coxeter group element. We provide a framework for answering this question by constructing combinatorial objects that represent the inversion set and the reduced expressions for an arbitrary Coxeter group element. The framework also provides a formula for the length of an element formed by deleting a generator from a Coxeter group element. Fan and Hagiwara, et al. showed that for certain Coxeter groups, the short-braid avoiding elements characterize those elements that give reduced expressions when any generator is deleted from a reduced expression. We provide a characterization that holds in all Coxeter groups. Lastly, we give applications to the freely braided elements introduced by Green and Losonczy, generalizing some of their results that hold in simply-laced Coxeter groups to the arbitrary Coxeter group setting.

  13. What makes an accurate and reliable subject-specific finite element model? A case study of an elephant femur

    OpenAIRE

    Panagiotopoulou, O.; Wilshin, S. D.; Rayfield, E J; Shefelbine, S. J.; Hutchinson, J. R.

    2014-01-01

    Finite element modelling is well entrenched in comparative vertebrate biomechanics as a tool to assess the mechanical design of skeletal structures and to better comprehend the complex interaction of their form–function relationships. But what makes a reliable subject-specific finite element model? To approach this question, we here present a set of convergence and sensitivity analyses and a validation study as an example, for finite element analysis (FEA) in general, of ways to ensure a reli...

  14. A hybrid stochastic-deterministic computational model accurately describes spatial dynamics and virus diffusion in HIV-1 growth competition assay.

    Science.gov (United States)

    Immonen, Taina; Gibson, Richard; Leitner, Thomas; Miller, Melanie A; Arts, Eric J; Somersalo, Erkki; Calvetti, Daniela

    2012-11-01

    We present a new hybrid stochastic-deterministic, spatially distributed computational model to simulate growth competition assays on a relatively immobile monolayer of peripheral blood mononuclear cells (PBMCs), commonly used for determining ex vivo fitness of human immunodeficiency virus type-1 (HIV-1). The novel features of our approach include incorporation of viral diffusion through a deterministic diffusion model while simulating cellular dynamics via a stochastic Markov chain model. The model accounts for multiple infections of target cells, CD4-downregulation, and the delay between the infection of a cell and the production of new virus particles. The minimum threshold level of infection induced by a virus inoculum is determined via a series of dilution experiments, and is used to determine the probability of infection of a susceptible cell as a function of local virus density. We illustrate how this model can be used for estimating the distribution of cells infected by either a single virus type or two competing viruses. Our model captures experimentally observed variation in the fitness difference between two virus strains, and suggests a way to minimize variation and dual infection in experiments.
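
    To make the hybrid stochastic-deterministic idea concrete, the toy sketch below combines a deterministic finite-difference diffusion step for the virus field with a stochastic Bernoulli infection step whose probability saturates with the local virus density. The grid size, rates and dose-response curve are invented and are not the parameters of the published model.

      import numpy as np

      rng = np.random.default_rng(2)
      n, dt, D = 100, 0.1, 0.5
      virus = np.zeros(n); virus[n // 2] = 50.0        # initial inoculum in the middle
      infected = np.zeros(n, dtype=bool)

      for step in range(200):
          # deterministic diffusion (explicit finite differences on interior points)
          lap = np.zeros(n)
          lap[1:-1] = virus[2:] - 2.0 * virus[1:-1] + virus[:-2]
          virus += dt * D * lap
          # stochastic infection of susceptible cells
          p_inf = 1.0 - np.exp(-0.05 * dt * virus)
          newly = (~infected) & (rng.random(n) < p_inf)
          infected |= newly
          virus += 0.2 * dt * infected                 # infected cells release new virus

      print("fraction of cells infected:", infected.mean())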

  15. Reduced-Order Model Based Feedback Control For Modified Hasegawa-Wakatani Model

    Energy Technology Data Exchange (ETDEWEB)

    Goumiri, I. R.; Rowley, C. W.; Ma, Z.; Gates, D. A.; Krommes, J. A.; Parker, J. B.

    2013-01-28

    In this work, model-based feedback control that stabilizes an unstable equilibrium is developed for the Modified Hasegawa-Wakatani (MHW) equations, a classic model in plasma turbulence. First, balanced truncation (a model reduction technique that has proven successful in flow control design problems) is applied to obtain a low dimensional model of the linearized MHW equations. Then a model-based feedback controller is designed for the reduced order model using linear quadratic regulators (LQR). Finally, a linear quadratic Gaussian (LQG) controller, which is more resistant to disturbances, is deduced. The controller is applied to the non-reduced, nonlinear MHW equations to stabilize the equilibrium and suppress the transition to drift-wave induced turbulence.
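
    The reduced-model LQR step described above can be sketched in a few lines: solve the continuous-time algebraic Riccati equation for a small linear system and form the state-feedback gain. The 2-state matrices below are placeholders, not the balanced-truncation model of the MHW equations.

      import numpy as np
      from scipy.linalg import solve_continuous_are

      A = np.array([[0.1, 1.0],
                    [0.0, -0.5]])       # placeholder reduced dynamics with one unstable mode
      B = np.array([[0.0], [1.0]])
      Q = np.eye(2)                     # state weighting
      R = np.array([[1.0]])             # control weighting

      P = solve_continuous_are(A, B, Q, R)
      K = np.linalg.inv(R) @ B.T @ P    # LQR gain, u = -K x
      print("open-loop eigenvalues:  ", np.linalg.eigvals(A))
      print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))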

  16. RCK: accurate and efficient inference of sequence- and structure-based protein–RNA binding models from RNAcompete data

    Science.gov (United States)

    Orenstein, Yaron; Wang, Yuhao; Berger, Bonnie

    2016-01-01

    Motivation: Protein–RNA interactions, which play vital roles in many processes, are mediated through both RNA sequence and structure. CLIP-based methods, which measure protein–RNA binding in vivo, suffer from experimental noise and systematic biases, whereas in vitro experiments capture a clearer signal of protein–RNA binding. Among them, RNAcompete provides binding affinities of a specific protein to more than 240 000 unstructured RNA probes in one experiment. The computational challenge is to infer RNA structure- and sequence-based binding models from these data. The state-of-the-art in sequence models, Deepbind, does not model structural preferences. RNAcontext models both sequence and structure preferences, but is outperformed by GraphProt. Unfortunately, GraphProt cannot detect structural preferences from RNAcompete data due to the unstructured nature of the data, as noted by its developers, nor can it be tractably run on the full RNAcompete dataset. Results: We develop RCK, an efficient, scalable algorithm that infers both sequence and structure preferences based on a new k-mer based model. Remarkably, even though RNAcompete data is designed to be unstructured, RCK can still learn structural preferences from it. RCK significantly outperforms both RNAcontext and Deepbind in in vitro binding prediction for 244 RNAcompete experiments. Moreover, RCK is also faster and uses less memory, which enables scalability. While currently on par with existing methods in in vivo binding prediction on a small scale test, we demonstrate that RCK will increasingly benefit from experimentally measured RNA structure profiles as compared to computationally predicted ones. By running RCK on the entire RNAcompete dataset, we generate and provide as a resource a set of protein–RNA structure-based models on an unprecedented scale. Availability and Implementation: Software and models are freely available at http://rck.csail.mit.edu/ Contact: bab@mit.edu Supplementary information

  17. AN ACCURATE MODELING OF DELAY AND SLEW METRICS FOR ON-CHIP VLSI RC INTERCONNECTS FOR RAMP INPUTS USING BURR’S DISTRIBUTION FUNCTION

    Directory of Open Access Journals (Sweden)

    Rajib Kar

    2010-09-01

    Full Text Available This work presents an accurate and efficient model to compute the delay and slew metrics of on-chip interconnects in high-speed CMOS circuits for ramp inputs. Our metric is based on Burr's distribution function, which is used to characterize the normalized homogeneous portion of the step response. We use the PERI (Probability distribution function Extension for Ramp Inputs) technique, which extends delay and slew metrics for step inputs to the more general and realistic non-step inputs. The accuracy of our models is justified by comparison with SPICE simulations.
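
    As an illustration of how delay and slew can be read off a Burr-type CDF once its parameters are known, the sketch below inverts a Burr type XII distribution, F(t) = 1 - (1 + (t/s)**c)**(-k), for the 50% delay and the 10-90% slew. The parameter values are illustrative; in the paper they would be matched to the moments of the RC step response.

      def burr_cdf(t, s, c, k):
          return 1.0 - (1.0 + (t / s) ** c) ** (-k)

      def burr_quantile(p, s, c, k):
          # analytical inverse of F(t) = p
          return s * ((1.0 - p) ** (-1.0 / k) - 1.0) ** (1.0 / c)

      s, c, k = 1.0, 2.0, 1.5                          # illustrative shape/scale parameters
      t50 = burr_quantile(0.5, s, c, k)
      slew = burr_quantile(0.9, s, c, k) - burr_quantile(0.1, s, c, k)
      print(f"50% delay = {t50:.3f}, 10-90% slew = {slew:.3f}")
      print("check F(t50):", round(burr_cdf(t50, s, c, k), 3))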

  18. A three-dimensional nonlinear reduced-order predictive joint model

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    Mechanical joints can have significant effects on the dynamics of assembled structures. However, the lack of efficacious predictive dynamic models for joints hinders accurate prediction of their dynamic behavior. The goal of our work is to develop physics-based, reduced-order, finite element models that are capable of replicating the effects of joints on vibrating structures. The authors recently developed the so-called two-dimensional adjusted Iwan beam element (2-D AIBE) to simulate the hysteretic behavior of bolted joints in 2-D beam structures. In this paper, 2-D AIBE is extended to three-dimensional cases by formulating a three-dimensional adjusted Iwan beam element (3-D AIBE). Impulsive loading experiments are applied to a jointed frame structure and a beam structure containing the same joint. The frame is subjected to excitation out of plane so that the joint is under rotation and single axis bending. By assuming that the rotation in the joint is linear elastic, the parameters of the joint associated with bending in the frame are identified from acceleration responses of the jointed beam structure, using a multi-layer feed-forward neural network (MLFF). Numerical simulation is then performed on the frame structure using the identified parameters. The good agreement between the simulated and experimental impulsive acceleration responses of the frame structure validates the efficacy of the presented 3-D AIBE, and indicates that the model can potentially be applied to more complex structural systems with joint parameters identified from a relatively simple structure.

  19. Development of accurate UWB dielectric properties dispersion at CST simulation tool for modeling microwave interactions with numerical breast phantoms

    International Nuclear Information System (INIS)

    In this paper, a reformulation of the recently published dielectric property dispersion models of breast tissues is carried out for use in the CST simulation tool. The reformulation includes tabulation of the real and imaginary parts versus frequency over the ultra-wideband (UWB) range for these models using MATLAB programs. The tables are imported into the CST simulation tool and fitted to second- or first-order general equations. The results show good agreement between the original and the imported data. The MATLAB programs are included in the appendix.

  20. Fast and accurate two-dimensional modelling of high-current, high-voltage air-cored transformers

    International Nuclear Information System (INIS)

    This paper presents a detailed two-dimensional model for high-voltage air-cored pulse transformers of two quite different designs. A filamentary technique takes magnetic diffusion fully into account and enables the resistances and self and mutual inductances that are effective under fast transient conditions to be calculated. Very good agreement between calculated and measured results for typical transformers has been obtained in several cases, and the model is now regularly used in the design of compact high-power sources

  1. Transgenic Mouse Model for Reducing Oxidative Damage in Bone

    Science.gov (United States)

    Schreurs, A.-S.; Torres, S.; Truong, T.; Kumar, A.; Alwood, J. S.; Limoli, C. L.; Globus, R. K.

    2014-01-01

    Exposure to musculoskeletal disuse and radiation result in bone loss; we hypothesized that these catabolic treatments cause excess reactive oxygen species (ROS), and thereby alter the tight balance between bone resorption by osteoclasts and bone formation by osteoblasts, culminating in bone loss. To test this, we used transgenic mice which over-express the human gene for catalase, targeted to mitochondria (MCAT). Catalase is an anti-oxidant that converts the ROS hydrogen peroxide into water and oxygen. MCAT mice were shown previously to display reduced mitochondrial oxidative stress and radiosensitivity of the CNS compared to wild-type (WT) controls. As expected, MCAT mice expressed the transgene in skeletal tissue, and in marrow-derived osteoblasts and osteoclast precursors cultured ex vivo, and also showed greater catalase activity compared to WT mice (3-6 fold). Colony expansion in marrow cells cultured under osteoblastogenic conditions was 2-fold greater in the MCAT mice compared to WT mice, while the extent of mineralization was unaffected. MCAT mice had slightly longer tibiae than WT mice (2%, P less than 0.01), although cortical bone area was slightly lower in MCAT mice than WT mice (10%, p=0.09). To challenge the skeletal system, mice were treated by exposure to combined disuse (2 wk hindlimb unloading) and total body irradiation with Cs(137) (2 Gy, 0.8 Gy/min), then bone parameters were analyzed by 2-factor ANOVA to detect possible interaction effects. Treatment caused a 2-fold increase (p=0.015) in malondialdehyde levels of bone tissue (ELISA) in WT mice, but had no effect in MCAT mice. These findings indicate that the transgene conferred protection from oxidative damage caused by treatment. Unexpected differences between WT and MCAT mice emerged in skeletal responses to treatment. In WT mice, treatment did not alter osteoblastogenesis, cortical bone area, moment of inertia, or bone perimeter, whereas in MCAT mice, treatment increased these

  2. How accurately can subject-specific finite element models predict strains and strength of human femora? Investigation using full-field measurements.

    Science.gov (United States)

    Grassi, Lorenzo; Väänänen, Sami P; Ristinmaa, Matti; Jurvelin, Jukka S; Isaksson, Hanna

    2016-03-21

    Subject-specific finite element models have been proposed as a tool to improve fracture risk assessment in individuals. A thorough laboratory validation against experimental data is required before introducing such models into clinical practice. Results from digital image correlation can provide full-field strain distribution over the specimen surface during in vitro tests, instead of at a few pre-defined locations as with strain gauges. The aim of this study was to validate finite element models of human femora against experimental data from three cadaver femora, both in terms of femoral strength and of the full-field strain distribution collected with digital image correlation. The results showed a high accuracy between predicted and measured principal strains (R^2=0.93, RMSE=10%, 1600 validated data points per specimen). Femoral strength was predicted using a rate dependent material model with specific strain limit values for yield and failure. This provided an accurate prediction; the strain accuracy was comparable to that obtained in state-of-the-art studies which validated their prediction accuracy against 10-16 strain gauge measurements. Fracture force was accurately predicted, with the predicted failure location being very close to the experimental fracture rim. Despite the low sample size and the single loading condition tested, the present combined numerical-experimental method showed that finite element models can predict femoral strength by providing a thorough description of the local bone mechanical response. PMID:26944687

  3. Even faster and even more accurate first-passage time densities and distributions for the Wiener diffusion model

    DEFF Research Database (Denmark)

    Gondan, Matthias; Blurton, Steven Paul; Kesselmeier, Miriam

    2014-01-01

    The Wiener diffusion model with two absorbing barriers is often used to describe response times and error probabilities in two-choice decisions. Different representations exist for the density and cumulative distribution of first-passage times, all including infinite series, but with different...

  4. Can Impacts of Climate Change and Agricultural Adaptation Strategies Be Accurately Quantified if Crop Models Are Annually Re-Initialized?

    Science.gov (United States)

    Basso, Bruno; Hyndman, David W; Kendall, Anthony D; Grace, Peter R; Robertson, G Philip

    2015-01-01

    Estimates of climate change impacts on global food production are generally based on statistical or process-based models. Process-based models can provide robust predictions of agricultural yield responses to changing climate and management. However, applications of these models often suffer from bias due to the common practice of re-initializing soil conditions to the same state for each year of the forecast period. If simulations neglect to include year-to-year changes in initial soil conditions and water content related to agronomic management, adaptation and mitigation strategies designed to maintain stable yields under climate change cannot be properly evaluated. We apply a process-based crop system model that avoids re-initialization bias to demonstrate the importance of simulating both year-to-year and cumulative changes in pre-season soil carbon, nutrient, and water availability. Results are contrasted with simulations using annual re-initialization, and differences are striking. We then demonstrate the potential for the most likely adaptation strategy to offset climate change impacts on yields using continuous simulations through the end of the 21st century. Simulations that annually re-initialize pre-season soil carbon and water contents introduce an inappropriate yield bias that obscures the potential for agricultural management to ameliorate the deleterious effects of rising temperatures and greater rainfall variability.

  5. Accurate determination of the superfluid-insulator transition in the one-dimensional Bose-Hubbard model

    OpenAIRE

    Zakrzewski, Jakub; Delande, Dominique

    2007-01-01

    The quantum phase transition point between the insulator and the superfluid phase at unit filling factor of the infinite one-dimensional Bose-Hubbard model is numerically computed with a high accuracy, better than current state of the art calculations. The method uses the infinite system version of the time evolving block decimation algorithm, here tested in a challenging case.

  6. A Parameterized yet Accurate Model of Ozone and Water Vapor Transmittance in the Solar-to-near-infrared Spectrum

    Institute of Scientific and Technical Information of China (English)

    LIU Weiyi; QIU Jinhuan

    2012-01-01

    A parameterized transmittance model (PTR) for ozone and water vapor monochromatic transmittance calculations in the solar-to-near-infrared spectrum (0.3-4 μm, with a spectral resolution of 5 cm^-1) was developed based on transmittance data calculated with the Moderate-resolution Transmittance model (MODTRAN). Polynomial equations were derived to represent the transmittance as functions of path length and airmass for every wavelength, based on the least-squares method. Comparisons between the transmittances calculated using PTR and MODTRAN were made, using the results of MODTRAN as the reference. The relative root-mean-square error (RMSre) was 0.823% for ozone transmittance. RMSre values were 8.84% and 3.48% for water vapor transmittance ranges of 1-1 × 10^-18 and 1-1 × 10^-3, respectively. In addition, the Stratospheric Aerosol and Gas Experiment II (SAGE II) ozone profiles and University of Wyoming (UWYO) water vapor profiles were applied to validate the applicability of the PTR model. RMSre was 0.437% for ozone transmittance. RMSre values were 8.89% and 2.43% for water vapor transmittance ranges of 1-1 × 10^-18 and 1-1 × 10^-6, respectively. Furthermore, the optical depth profiles calculated using the PTR model were compared to the results of MODTRAN. Absolute RMS errors (RMSab) were within 0.0055 for ozone optical depths and within 0.0523 for water vapor at all of the tested altitudes. Finally, a comparison between the solar heating rates calculated from the transmittance of PTR and of the Line-by-Line radiative transfer model (LBLRTM) was performed, showing a maximum deviation of 0.238 K d^-1 (6% of the corresponding solar heating rate calculated using LBLRTM). In the troposphere all of the deviations were within 0.08 K d^-1. The computational speed of the PTR model is nearly two orders of magnitude faster than that of MODTRAN.
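
    The core of the parameterization is fitting, for each wavelength, a simple function of path length or airmass to reference transmittances. The sketch below fits a cubic polynomial to a synthetic Beer-Lambert-style transmittance curve and reports the relative RMSE; the synthetic curve merely stands in for MODTRAN output.

      import numpy as np

      airmass = np.linspace(1.0, 10.0, 50)
      tau_ref = np.exp(-0.12 * airmass ** 0.85)        # synthetic reference transmittance

      coeffs = np.polyfit(airmass, tau_ref, deg=3)     # least-squares polynomial fit
      tau_fit = np.polyval(coeffs, airmass)

      rmse_rel = 100.0 * np.sqrt(np.mean(((tau_fit - tau_ref) / tau_ref) ** 2))
      print(f"relative RMSE of the fit: {rmse_rel:.3f} %")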

  7. ACCURATE 3D TEXTURED MODELS OF VESSELS FOR THE IMPROVEMENT OF THE EDUCATIONAL TOOLS OF A MUSEUM

    OpenAIRE

    S. Soile; Adam, K.; C. Ioannidis; A. Georgopoulos

    2013-01-01

    Besides the demonstration of the findings, modern museums organize educational programs which aim at experience and knowledge sharing combined with entertainment rather than pure learning. Toward that end, 2D and 3D digital representations are gradually replacing the traditional recording of the findings through photos or drawings. The present paper refers to a project that aims to create 3D textured models of two lekythoi that are exhibited in the National Archaeological Museu...

  8. Accurate small and wide angle x-ray scattering profiles from atomic models of proteins and nucleic acids

    Energy Technology Data Exchange (ETDEWEB)

    Nguyen, Hung T. [BioMaPS Institute for Quantitative Biology, Rutgers University, Piscataway, New Jersey 08854 (United States); Pabit, Suzette A.; Meisburger, Steve P.; Pollack, Lois [School of Applied and Engineering Physics, Cornell University, Ithaca, New York 14853 (United States); Case, David A., E-mail: case@biomaps.rutgers.edu [BioMaPS Institute for Quantitative Biology, Rutgers University, Piscataway, New Jersey 08854 (United States); Department of Chemistry and Chemical Biology, Rutgers University, Piscataway, New Jersey 08854 (United States)

    2014-12-14

    A new method is introduced to compute X-ray solution scattering profiles from atomic models of macromolecules. The three-dimensional version of the Reference Interaction Site Model (RISM) from liquid-state statistical mechanics is employed to compute the solvent distribution around the solute, including both water and ions. X-ray scattering profiles are computed from this distribution together with the solute geometry. We describe an efficient procedure for performing this calculation employing a Lebedev grid for the angular averaging. The intensity profiles (which involve no adjustable parameters) match experiment and molecular dynamics simulations up to wide angle for two proteins (lysozyme and myoglobin) in water, as well as the small-angle profiles for a dozen biomolecules taken from the BioIsis.net database. The RISM model is especially well-suited for studies of nucleic acids in salt solution. Use of fiber-diffraction models for the structure of duplex DNA in solution yields close agreement with the observed scattering profiles in both the small and wide angle scattering (SAXS and WAXS) regimes. In addition, computed profiles of anomalous SAXS signals (for Rb+ and Sr2+) emphasize the ionic contribution to scattering and are in reasonable agreement with experiment. In cases where an absolute calibration of the experimental data at q = 0 is available, one can extract a count of the excess number of waters and ions; computed values depend on the closure that is assumed in the solution of the Ornstein–Zernike equations, with results from the Kovalenko–Hirata closure being closest to experiment for the cases studied here.

  9. Dixon sequence with superimposed model-based bone compartment provides highly accurate PET/MR attenuation correction of the brain

    OpenAIRE

    Koesters, Thomas; Friedman, Kent P.; Fenchel, Matthias; Zhan, Yiqiang; Hermosillo, Gerardo; Babb, James; Jelescu, Ileana O.; Faul, David; Boada, Fernando E.; Shepherd, Timothy M.

    2016-01-01

    Simultaneous PET/MR of the brain is a promising new technology for characterizing patients with suspected cognitive impairment or epilepsy. Unlike CT though, MR signal intensities do not provide a direct correlate to PET photon attenuation correction (AC) and inaccurate radiotracer standard uptake value (SUV) estimation could limit future PET/MR clinical applications. We tested a novel AC method that supplements standard Dixon-based tissue segmentation with a superimposed model-based bone com...

  10. Traveled Distance Is a Sensitive and Accurate Marker of Motor Dysfunction in a Mouse Model of Multiple Sclerosis

    OpenAIRE

    Takemiya, Takako; Takeuchi, Chisen

    2013-01-01

    Multiple sclerosis (MS) is a common central nervous system disease associated with progressive physical impairment. To study the mechanisms of the disease, we used experimental autoimmune encephalomyelitis (EAE), an animal model of MS. EAE is induced by myelin oligodendrocyte glycoprotein 35–55 peptide, and the severity of paralysis in the disease is generally measured using the EAE score. Here, we compared EAE scores and traveled distance using the open-field test for an assessment of EAE pro...

  11. Rotating Arc Jet Test Model: Time-Accurate Trajectory Heat Flux Replication in a Ground Test Environment

    Science.gov (United States)

    Laub, Bernard; Grinstead, Jay; Dyakonov, Artem; Venkatapathy, Ethiraj

    2011-01-01

    Though arc jet testing has been the proven method employed for development testing and certification of TPS and TPS instrumentation, the operational aspects of arc jets limit testing to selected, but constant, conditions. Flight, on the other hand, produces time-varying entry conditions in which the heat flux increases, peaks, and recedes as a vehicle descends through an atmosphere. As a result, we are unable to "test as we fly." Attempts to replicate the time-dependent aerothermal environment of atmospheric entry by varying the arc jet facility operating conditions during a test have proven to be difficult, expensive, and only partially successful. A promising alternative is to rotate the test model exposed to a constant-condition arc jet flow to yield a time-varying test condition at a point on a test article (Fig. 1). The model shape and rotation rate can be engineered so that the heat flux at a point on the model replicates the predicted profile for a particular point on a flight vehicle. This simple concept will enable, for example, calibration of the TPS sensors on the Mars Science Laboratory (MSL) aeroshell for anticipated flight environments.

  12. Wide-range and accurate modeling of linear alkylbenzene sulfonate (LAS) adsorption/desorption on agricultural soil.

    Science.gov (United States)

    Oliver-Rodríguez, B; Zafra-Gómez, A; Reis, M S; Duarte, B P M; Verge, C; de Ferrer, J A; Pérez-Pascual, M; Vílchez, J L

    2015-11-01

    In this paper, rigorous data and adequate models about linear alkylbenzene sulfonate (LAS) adsorption/desorption on agricultural soil are presented, contributing with a substantial improvement over available adsorption works. The kinetics of the adsorption/desorption phenomenon and the adsorption/desorption equilibrium isotherms were determined through batch studies for total LAS amount and also for each homologue series: C10, C11, C12 and C13. The proposed multiple pseudo-first order kinetic model provides the best fit to the kinetic data, indicating the presence of two adsorption/desorption processes in the general phenomenon. Equilibrium adsorption and desorption data have been properly fitted by a model consisting of a Langmuir plus quadratic term, which provides a good integrated description of the experimental data over a wide range of concentrations. At low concentrations, the Langmuir term explains the adsorption of LAS on soil sites which are highly selective of the n-alkyl groups and cover a very small fraction of the soil surface area, whereas the quadratic term describes adsorption on the much larger part of the soil surface and on LAS retained at moderate to high concentrations. Since adsorption/desorption phenomenon plays a major role in the LAS behavior in soils, relevant conclusions can be drawn from the obtained results.

  14. Charging and discharging tests for obtaining an accurate dynamic electro-thermal model of high power lithium-ion pack system for hybrid and EV applications

    DEFF Research Database (Denmark)

    Mihet-Popa, Lucian; Camacho, Oscar Mauricio Forero; Nørgård, Per Bromand

    2013-01-01

    This paper presents a battery test platform including two Li-ion batteries designed for hybrid and EV applications, and charging/discharging tests under different operating conditions carried out for developing an accurate dynamic electro-thermal model of a high-power Li-ion battery pack system. The aim of the tests has been to study the impact of battery degradation and to find out the dynamic characteristics of the cells, including nonlinear open circuit voltage, series resistance and parallel transient circuit, at different charge/discharge currents and cell temperatures. An equivalent circuit model, based on the runtime battery model and the Thevenin circuit model, with parameters obtained from the tests and depending on SOC, current and temperature, has been implemented in MATLAB/Simulink and Power Factory. A good alignment between simulations and measurements has been found.
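
    A minimal sketch of the kind of equivalent circuit described above (open-circuit voltage plus a series resistance and one RC transient branch) is given below; all parameter values, including the OCV curve and the cell capacity, are invented placeholders rather than the identified pack parameters.

      def ocv(soc):                          # illustrative open-circuit-voltage curve
          return 3.0 + 1.2 * soc

      R0, R1, C1 = 0.01, 0.02, 2000.0        # series resistance and RC branch (placeholders)
      capacity_As = 40.0 * 3600.0            # 40 Ah cell, in ampere-seconds

      dt, soc, v1 = 1.0, 1.0, 0.0
      for t in range(3600):                  # 1 h discharge at a constant 20 A
          i = 20.0
          soc -= i * dt / capacity_As
          v1 += dt * (-v1 / (R1 * C1) + i / C1)
          v_term = ocv(soc) - i * R0 - v1

      print("terminal voltage after 1 h:", round(v_term, 3), "V, SOC:", round(soc, 3))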

  15. Can AERONET data be used to accurately model the monochromatic beam and circumsolar irradiances under cloud-free conditions in desert environment?

    Science.gov (United States)

    Eissa, Y.; Blanc, P.; Wald, L.; Ghedira, H.

    2015-12-01

    Routine measurements of the beam irradiance at normal incidence include the irradiance originating from within the extent of the solar disc only (DNIS), whose angular extent is 0.266° ± 1.7 %, and from a larger circumsolar region, called the circumsolar normal irradiance (CSNI). This study investigates whether the spectral aerosol optical properties of the AERONET stations are sufficient for an accurate modelling of the monochromatic DNIS and CSNI under cloud-free conditions in a desert environment. The data from an AERONET station in Abu Dhabi, United Arab Emirates, and the collocated Sun and Aureole Measurement instrument which offers reference measurements of the monochromatic profile of solar radiance were exploited. Using the AERONET data both the radiative transfer models libRadtran and SMARTS offer an accurate estimate of the monochromatic DNIS, with a relative root mean square error (RMSE) of 6 % and a coefficient of determination greater than 0.96. The observed relative bias obtained with libRadtran is +2 %, while that obtained with SMARTS is -1 %. After testing two configurations in SMARTS and three in libRadtran for modelling the monochromatic CSNI, libRadtran exhibits the most accurate results when the AERONET aerosol phase function is presented as a two-term Henyey-Greenstein phase function. In this case libRadtran exhibited a relative RMSE and a bias of respectively 27 and -24 % and a coefficient of determination of 0.882. Therefore, AERONET data may very well be used to model the monochromatic DNIS and the monochromatic CSNI. The results are promising and pave the way towards reporting the contribution of the broadband circumsolar irradiance to standard measurements of the beam irradiance.
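
    Since the best circumsolar results above were obtained when the aerosol phase function was represented as a two-term Henyey-Greenstein (TTHG) function, the sketch below evaluates a TTHG phase function and checks its normalization over the sphere. The asymmetry parameters and the forward-scattering weight are illustrative values, not AERONET retrievals.

      import numpy as np

      def hg(cos_theta, g):
          # single Henyey-Greenstein phase function, normalized over the sphere
          return (1.0 - g**2) / (4.0 * np.pi * (1.0 + g**2 - 2.0 * g * cos_theta) ** 1.5)

      def tthg(cos_theta, f, g1, g2):
          # forward-peaked term plus a backscattering term
          return f * hg(cos_theta, g1) + (1.0 - f) * hg(cos_theta, -g2)

      theta = np.linspace(0.0, np.pi, 10001)
      p = tthg(np.cos(theta), f=0.95, g1=0.8, g2=0.5)

      # trapezoidal check: the integral over solid angle should be close to 1
      integrand = p * np.sin(theta)
      norm = 2.0 * np.pi * np.sum(0.5 * (integrand[1:] + integrand[:-1])) * (theta[1] - theta[0])
      print("integral over solid angle:", round(norm, 4))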

  16. Minocycline reduces reactive gliosis in the rat model of hydrocephalus

    Directory of Open Access Journals (Sweden)

    Xu Hao

    2012-12-01

    Full Text Available Abstract Background Reactive gliosis has been implicated in injury and recovery patterns associated with hydrocephalus. Our aim is to determine the efficacy of minocycline, an antibiotic known for its anti-inflammatory properties, to reduce reactive gliosis and inhibit the development of hydrocephalus. Results Ventricular dilatation was evaluated by MRI at 1 week post drug treatment, while GFAP and Iba-1 were detected by RT-PCR, immunohistochemistry and Western blot. The expression of GFAP and Iba-1 was significantly higher in the hydrocephalic group compared with the saline control group (p . Minocycline treatment of hydrocephalic animals reduced the expression of GFAP and Iba-1 significantly (p . Likewise, the severity of ventricular dilatation was lower in minocycline-treated hydrocephalic animals compared with the no-minocycline group (p . Conclusion Minocycline treatment is effective in reducing gliosis and delaying the development of hydrocephalus, with the prospect of becoming an auxiliary therapeutic method for hydrocephalus.

  17. Models of emergency departments for reducing patient waiting times.

    Directory of Open Access Journals (Sweden)

    Marek Laskowski

    Full Text Available In this paper, we apply both agent-based models and queuing models to investigate patient access and patient flow through emergency departments. The objective of this work is to gain insights into the comparative contributions and limitations of these complementary techniques, in their ability to contribute empirical input into healthcare policy and practice guidelines. The models were developed independently, with a view to compare their suitability to emergency department simulation. The current models implement relatively simple general scenarios, and rely on a combination of simulated and real data to simulate patient flow in a single emergency department or in multiple interacting emergency departments. In addition, several concepts from telecommunications engineering are translated into this modeling context. The framework of multiple-priority queue systems and the genetic programming paradigm of evolutionary machine learning are applied as a means of forecasting patient wait times and as a means of evolving healthcare policy, respectively. The models' utility lies in their ability to provide qualitative insights into the relative sensitivities and impacts of model input parameters, to illuminate scenarios worthy of more complex investigation, and to iteratively validate the models as they continue to be refined and extended. The paper discusses future efforts to refine, extend, and validate the models with more data and real data relative to physical (spatial-topographical) and social inputs (staffing, patient care models, etc.). Real data obtained through proximity location and tracking system technologies is one example discussed.

  18. Models of emergency departments for reducing patient waiting times.

    Science.gov (United States)

    Laskowski, Marek; McLeod, Robert D; Friesen, Marcia R; Podaima, Blake W; Alfa, Attahiru S

    2009-01-01

    In this paper, we apply both agent-based models and queuing models to investigate patient access and patient flow through emergency departments. The objective of this work is to gain insights into the comparative contributions and limitations of these complementary techniques, in their ability to contribute empirical input into healthcare policy and practice guidelines. The models were developed independently, with a view to compare their suitability to emergency department simulation. The current models implement relatively simple general scenarios, and rely on a combination of simulated and real data to simulate patient flow in a single emergency department or in multiple interacting emergency departments. In addition, several concepts from telecommunications engineering are translated into this modeling context. The framework of multiple-priority queue systems and the genetic programming paradigm of evolutionary machine learning are applied as a means of forecasting patient wait times and as a means of evolving healthcare policy, respectively. The models' utility lies in their ability to provide qualitative insights into the relative sensitivities and impacts of model input parameters, to illuminate scenarios worthy of more complex investigation, and to iteratively validate the models as they continue to be refined and extended. The paper discusses future efforts to refine, extend, and validate the models with more data and real data relative to physical (spatial-topographical) and social inputs (staffing, patient care models, etc.). Real data obtained through proximity location and tracking system technologies is one example discussed. PMID:19572015
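
    To illustrate the multiple-priority-queue framing used in these papers, the toy discrete-event sketch below simulates a single treatment bay serving urgent and routine patients non-preemptively, with urgent patients given head-of-line priority. The arrival and service rates are invented, and the sketch is far simpler than the agent-based and queuing models of the paper.

      import heapq, random

      random.seed(0)
      SIM_TIME, LAMBDA, MU = 10_000.0, 0.8, 1.0      # arrival and service rates (invented)
      URGENT_SHARE = 0.3

      events = [(random.expovariate(LAMBDA), "arrival")]
      queue = {0: [], 1: []}                         # 0 = urgent, 1 = routine (FIFO within class)
      waits = {0: [], 1: []}
      server_busy = False

      while events:
          t, kind = heapq.heappop(events)
          if kind == "arrival":
              prio = 0 if random.random() < URGENT_SHARE else 1
              queue[prio].append(t)
              if t < SIM_TIME:                       # stop generating arrivals after SIM_TIME
                  heapq.heappush(events, (t + random.expovariate(LAMBDA), "arrival"))
          else:                                      # departure: the server becomes free
              server_busy = False
          if not server_busy and (queue[0] or queue[1]):
              p = 0 if queue[0] else 1               # urgent class served first
              arrival_time = queue[p].pop(0)
              waits[p].append(t - arrival_time)
              server_busy = True
              heapq.heappush(events, (t + random.expovariate(MU), "departure"))

      for p, label in [(0, "urgent"), (1, "routine")]:
          print(label, "mean wait:", round(sum(waits[p]) / len(waits[p]), 2))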

  20. Development of an accurate molecular mechanics model for buckling behavior of multi-walled carbon nanotubes under axial compression.

    Science.gov (United States)

    Safaei, B; Naseradinmousavi, P; Rahmani, A

    2016-04-01

    In the present paper, an analytical solution based on a molecular mechanics model is developed to evaluate the elastic critical axial buckling strain of chiral multi-walled carbon nanotubes (MWCNTs). To this end, the total potential energy of the system is calculated with consideration of both bond stretching and bond angle variations. Density functional theory (DFT) in the form of the generalized gradient approximation (GGA) is implemented to evaluate the force constants used in the molecular mechanics model. After that, based on the principles of molecular mechanics, explicit expressions are proposed to obtain the elastic surface Young's modulus and Poisson's ratio of single-walled carbon nanotubes corresponding to different types of chirality. Selected numerical results are presented to indicate the influence of the type of chirality, tube diameter, and number of tube walls in detail. An excellent agreement is found between the present numerical results and those found in the literature, which confirms the validity as well as the accuracy of the present closed-form solution. It is found that the value of the critical axial buckling strain exhibits a significant dependency on the type of chirality and the number of tube walls.

  1. A new expression of Ns versus Ef to an accurate control charge model for AlGaAs/GaAs

    Science.gov (United States)

    Bouneb, I.; Kerrour, F.

    2016-03-01

    Semiconductor components have become the privileged support of information and communication, particularly thanks to the development of the internet. Today, MOS transistors on silicon largely dominate the semiconductor market; however, reducing the transistor gate length is not enough to enhance performance and keep pace with Moore's law. This is particularly true for broadband telecommunication systems, where faster components are required. For this reason, alternative structures such as IV-IV or III-V heterostructures [1] have been proposed. The most effective components in this area are high electron mobility transistors (HEMTs) on III-V substrates. This work contributes to the development of a numerical model based on physical modelling of the potential at the AlGaAs/GaAs heterostructure interface. We have developed a calculation using projective methods that allows integration of the Hamiltonian using Green functions in the Schrödinger equation, for a rigorous self-consistent resolution with the Poisson equation. A simple analytical approach for charge control in the quantum well region of an AlGaAs/GaAs HEMT structure is presented. A charge-control equation accounting for a variable average distance of the 2-DEG from the interface is introduced. Our approach, which aims to obtain ns-Vg characteristics, is mainly based on a new linear expression for the Fermi-level variation with the two-dimensional electron gas density in high electron mobility structures, on the notion of effective doping, and on a new expression of ΔEc.

  2. Reduced model-based decision-making in schizophrenia.

    Science.gov (United States)

    Culbreth, Adam J; Westbrook, Andrew; Daw, Nathaniel D; Botvinick, Matthew; Barch, Deanna M

    2016-08-01

    Individuals with schizophrenia have a diminished ability to use reward history to adaptively guide behavior. However, tasks traditionally used to assess such deficits often rely on multiple cognitive and neural processes, leaving etiology unresolved. In the current study, we adopted recent computational formalisms of reinforcement learning to distinguish between model-based and model-free decision-making in hopes of specifying mechanisms associated with reinforcement-learning dysfunction in schizophrenia. Under this framework, decision-making is model-free to the extent that it relies solely on prior reward history, and model-based if it relies on prospective information such as motivational state, future consequences, and the likelihood of obtaining various outcomes. Model-based and model-free decision-making was assessed in 33 schizophrenia patients and 30 controls using a 2-stage 2-alternative forced choice task previously demonstrated to discern individual differences in reliance on the 2 forms of reinforcement-learning. We show that, compared with controls, schizophrenia patients demonstrate decreased reliance on model-based decision-making. Further, parameter estimates of model-based behavior correlate positively with IQ and working memory measures, suggesting that model-based deficits seen in schizophrenia may be partially explained by higher-order cognitive deficits. These findings demonstrate specific reinforcement-learning and decision-making deficits and thereby provide valuable insights for understanding disordered behavior in schizophrenia. (PsycINFO Database Record PMID:27175984
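
    The two-step task mentioned above is usually analysed with a hybrid learner in which a weight w mixes model-based values, computed from the known transition structure, with cached model-free values. The sketch below shows only that valuation step; the transition matrix, second-stage values and cached values are illustrative numbers, not fitted parameters from the study.

      import numpy as np

      # P[a, s2] = probability that first-stage action a leads to second-stage state s2
      P = np.array([[0.7, 0.3],
                    [0.3, 0.7]])
      Q_stage2 = np.array([0.6, 0.2])    # current value of each second-stage state
      Q_mf = np.array([0.5, 0.4])        # cached model-free values of the two actions

      Q_mb = P @ Q_stage2                # model-based: expectation over the transition model
      for w in (0.0, 0.5, 1.0):          # w = degree of model-based control
          Q_hybrid = w * Q_mb + (1.0 - w) * Q_mf
          print(f"w={w}: hybrid action values {np.round(Q_hybrid, 3)}")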

  3. Numerical simulations of a reduced model for blood coagulation

    Science.gov (United States)

    Pavlova, Jevgenija; Fasano, Antonio; Sequeira, Adélia

    2016-04-01

    In this work, the three-dimensional numerical resolution of a complex mathematical model for the blood coagulation process is presented. The model was illustrated in Fasano et al. (Clin Hemorheol Microcirc 51:1-14, 2012) and Pavlova et al. (Theor Biol 380:367-379, 2015). It incorporates the action of the biochemical and cellular components of blood as well as the effects of the flow. The model is characterized by a reduction in the biochemical network and considers the impact of blood slip at the vessel wall. Numerical results showing the capacity of the model to predict different perturbations in the hemostatic system are discussed.

  5. Damage Detection in Flexible Plates through Reduced-Order Modeling and Hybrid Particle-Kalman Filtering.

    Science.gov (United States)

    Capellari, Giovanni; Azam, Saeed Eftekhar; Mariani, Stefano

    2015-12-22

    Health monitoring of lightweight structures, like thin flexible plates, is of interest in several engineering fields. In this paper, a recursive Bayesian procedure is proposed to monitor the health of such structures through data collected by a network of optimally placed inertial sensors. As a main drawback of standard monitoring procedures is linked to the computational costs, two remedies are jointly considered: first, an order-reduction of the numerical model used to track the structural dynamics, enforced with proper orthogonal decomposition; and, second, an improved particle filter, which features an extended Kalman updating of each evolving particle before the resampling stage. The former remedy can reduce the number of effective degrees-of-freedom of the structural model to a few only (depending on the excitation), whereas the latter one allows to track the evolution of damage and to locate it thanks to an intricate formulation. To assess the effectiveness of the proposed procedure, the case of a plate subject to bending is investigated; it is shown that, when the procedure is appropriately fed by measurements, damage is efficiently and accurately estimated.
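
    The order-reduction step named above, proper orthogonal decomposition, can be sketched directly with the SVD of a snapshot matrix: keep the leading modes that capture nearly all of the snapshot energy and use them as the reduced basis. The snapshot data below are synthetic and merely stand in for the plate's simulated response.

      import numpy as np

      rng = np.random.default_rng(3)
      n_dof, n_snap = 500, 200
      patterns = rng.normal(size=(n_dof, 3))                    # 3 dominant spatial patterns
      snapshots = patterns @ rng.normal(size=(3, n_snap))
      snapshots += 0.01 * rng.normal(size=(n_dof, n_snap))      # small broadband noise

      U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
      energy = np.cumsum(s**2) / np.sum(s**2)
      r = int(np.searchsorted(energy, 0.999)) + 1               # modes capturing 99.9% of energy
      Phi = U[:, :r]                                            # reduced basis
      print("degrees of freedom reduced from", n_dof, "to", r)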

  7. Damage Detection in Flexible Plates through Reduced-Order Modeling and Hybrid Particle-Kalman Filtering

    Directory of Open Access Journals (Sweden)

    Giovanni Capellari

    2015-12-01

    Full Text Available Health monitoring of lightweight structures, like thin flexible plates, is of interest in several engineering fields. In this paper, a recursive Bayesian procedure is proposed to monitor the health of such structures through data collected by a network of optimally placed inertial sensors. As a main drawback of standard monitoring procedures is linked to the computational costs, two remedies are jointly considered: first, an order-reduction of the numerical model used to track the structural dynamics, enforced with proper orthogonal decomposition; and, second, an improved particle filter, which features an extended Kalman updating of each evolving particle before the resampling stage. The former remedy can reduce the number of effective degrees-of-freedom of the structural model to a few only (depending on the excitation), whereas the latter one allows to track the evolution of damage and to locate it thanks to an intricate formulation. To assess the effectiveness of the proposed procedure, the case of a plate subject to bending is investigated; it is shown that, when the procedure is appropriately fed by measurements, damage is efficiently and accurately estimated.

  8. SEMICONDUCTOR INTEGRATED CIRCUITS: Accurate metamodels of device parameters and their applications in performance modeling and optimization of analog integrated circuits

    Science.gov (United States)

    Tao, Liang; Xinzhang, Jia; Junfeng, Chen

    2009-11-01

    Techniques for constructing metamodels of device parameters at BSIM3v3 level accuracy are presented to improve knowledge-based circuit sizing optimization. Based on the analysis of the prediction error of analytical performance expressions, operating point driven (OPD) metamodels of MOSFETs are introduced to capture the circuit's characteristics precisely. In the algorithm of metamodel construction, radial basis functions are adopted to interpolate the scattered multivariate data obtained from a well tailored data sampling scheme designed for MOSFETs. The OPD metamodels can be used to automatically bias the circuit at a specific DC operating point. Analytical-based performance expressions composed by the OPD metamodels show obvious improvement for most small-signal performances compared with simulation-based models. Both operating-point variables and transistor dimensions can be optimized in our nesting-loop optimization formulation to maximize design flexibility. The method is successfully applied to a low-voltage low-power amplifier.
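
    A minimal sketch of the metamodel-construction step is given below: scattered samples of a device quantity are interpolated with radial basis functions via scipy's RBFInterpolator. The sampled variables, the toy response surface and the kernel choice are assumptions made for illustration, not the BSIM3v3-level data of the paper.

      import numpy as np
      from scipy.interpolate import RBFInterpolator

      rng = np.random.default_rng(4)
      X = rng.uniform([0.2, 1e-6], [1.2, 1e-5], size=(200, 2))   # e.g. overdrive voltage and drain current
      y = 2.0 * X[:, 1] / X[:, 0]                                # toy device-parameter response

      metamodel = RBFInterpolator(X, y, kernel="thin_plate_spline")
      X_test = np.array([[0.5, 5e-6]])
      print("metamodel prediction:", metamodel(X_test)[0], "   exact:", 2.0 * 5e-6 / 0.5)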

  9. The CPA Equation of State and an Activity Coefficient Model for Accurate Molar Enthalpy Calculations of Mixtures with Carbon Dioxide and Water/Brine

    Energy Technology Data Exchange (ETDEWEB)

    Myint, P. C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Hao, Y. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Firoozabadi, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-03-27

    Thermodynamic property calculations of mixtures containing carbon dioxide (CO2) and water, including brines, are essential in theoretical models of many natural and industrial processes. The properties of greatest practical interest are density, solubility, and enthalpy. Many models for density and solubility calculations have been presented in the literature, but there exists only one study, by Spycher and Pruess, that has compared theoretical molar enthalpy predictions with experimental data [1]. In this report, we recommend two different models for enthalpy calculations: the CPA equation of state by Li and Firoozabadi [2], and the CO2 activity coefficient model by Duan and Sun [3]. We show that the CPA equation of state, which has been demonstrated to provide good agreement with density and solubility data, also accurately calculates molar enthalpies of pure CO2, pure water, and both CO2-rich and aqueous (H2O-rich) mixtures of the two species. It is applicable to a wider range of conditions than the Spycher and Pruess model. In aqueous sodium chloride (NaCl) mixtures, we show that Duan and Sun’s model yields accurate results for the partial molar enthalpy of CO2. It can be combined with another model for the brine enthalpy to calculate the molar enthalpy of H2O-CO2-NaCl mixtures. We conclude by explaining how the CPA equation of state may be modified to further improve agreement with experiments. This generalized CPA is the basis of our future work on this topic.

  10. Can AERONET data be used to accurately model the monochromatic beam and circumsolar irradiances under cloud-free conditions in desert environment?

    Directory of Open Access Journals (Sweden)

    Y. Eissa

    2015-07-01

    Full Text Available Routine measurements of the beam irradiance at normal incidence (DNI) include the irradiance originating from within the extent of the solar disc only (DNIS), whose angular extent is 0.266° ± 1.7 %, and that from a larger circumsolar region, called the circumsolar normal irradiance (CSNI). This study investigates if the spectral aerosol optical properties of the AERONET stations are sufficient for an accurate modelling of the monochromatic DNIS and CSNI under cloud-free conditions in a desert environment. The data from an AERONET station in Abu Dhabi, United Arab Emirates, and a collocated Sun and Aureole Measurement (SAM) instrument, which offers reference measurements of the monochromatic profile of solar radiance, were exploited. Using the AERONET data, both the radiative transfer models libRadtran and SMARTS offer an accurate estimate of the monochromatic DNIS, with a relative root mean square error (RMSE) of 5 %, a relative bias of +1 % and a coefficient of determination greater than 0.97. After testing two configurations in SMARTS and three in libRadtran for modelling the monochromatic CSNI, libRadtran exhibits the most accurate results when the AERONET aerosol phase function is presented as a two-term Henyey–Greenstein phase function. In this case libRadtran exhibited a relative RMSE of 22 %, a bias of −19 % and a coefficient of determination of 0.89. The results are promising and pave the way towards reporting the contribution of the broadband circumsolar irradiance to standard DNI measurements.
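    The two-term Henyey–Greenstein representation mentioned above combines a forward-scattering and a backward-scattering Henyey–Greenstein lobe. A small sketch of that standard functional form (the asymmetry parameters and weight below are placeholders, not the AERONET-derived values used in the study):

```python
import numpy as np

def henyey_greenstein(mu, g):
    """Henyey-Greenstein phase function of mu = cos(theta),
    normalized so that its integral over mu in [-1, 1] equals 1."""
    return 0.5 * (1.0 - g**2) / (1.0 + g**2 - 2.0 * g * mu)**1.5

def tthg(mu, g1, g2, w):
    """Two-term HG: weight w on the forward lobe (g1 > 0),
    1 - w on the backward lobe (g2 < 0)."""
    return w * henyey_greenstein(mu, g1) + (1.0 - w) * henyey_greenstein(mu, g2)

# Near-forward scattering angles relevant to the circumsolar region.
theta = np.radians(np.linspace(0.0, 20.0, 200))
mu = np.cos(theta)

# Placeholder lobe parameters; in practice they would be fitted to the
# AERONET phase function at each wavelength.
p = tthg(mu, g1=0.75, g2=-0.35, w=0.95)
print("phase function at 0 deg and 20 deg:", p[0], p[-1])

# Sanity check: the weighted combination stays normalized over mu in [-1, 1].
mu_full = np.linspace(-1.0, 1.0, 20001)
print("normalization ~", np.trapz(tthg(mu_full, 0.75, -0.35, 0.95), mu_full))
```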

  11. Model-reduced gradient-based history matching

    NARCIS (Netherlands)

    Kaleta, M.P.

    2011-01-01

    Since the world's energy demand increases every year, the oil & gas industry makes a continuous effort to improve fossil fuel recovery. Physics-based petroleum reservoir modeling and closed-loop model-based reservoir management concept can play an important role here. In this concept measured data a

  12. Effects of Modeling and Desensitization in Reducing Dentist Phobia

    Science.gov (United States)

    Shaw, David W.; Thoresen, Carl E.

    1974-01-01

    Many persons avoid dentists and dental work. The present study explored the effects of systematic desensitization and social-modeling treatments with placebo and assessment control groups. Modeling was more effective than desensitization as shown by the number of subjects who went to a dentist. (Author)

  13. A Hybrid Mode Model of the Blazhko Effect, Shown to Accurately Fit Kepler Data for RR Lyr

    CERN Document Server

    Bryant, Paul H

    2013-01-01

    A new hypothesis is presented for the Blazhko effect in RRab stars. A nonlinear model is developed for the first overtone mode, which, if excited to large amplitude, is found to drop strongly in frequency while becoming highly nonsinusoidal. Its frequency is shown to drop sufficiently to become equal to that of the fundamental mode. It is proposed that this may lead to phase-locking between the fundamental and the overtone, forming a hybrid mode at the fundamental frequency. The fundamental mode, excited less strongly than the overtone, remains nearly sinusoidal and constant in frequency. By varying the fundamental's peak amplitude and its phase relative to the overtone, the hybrid mode can produce a variety of forms that match those observed in various parts of the Blazhko cycle. The presence of the fundamental also serves to stabilize the period of the hybrid, which is found in real Blazhko data to be extremely stable. It is proposed that the variations in amplitude and phase might result from a nonlinear intera...

  14. The type IIP supernova 2012aw in M95: Hydrodynamical modeling of the photospheric phase from accurate spectrophotometric monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Dall'Ora, M.; Botticella, M. T.; Della Valle, M. [INAF, Osservatorio Astronomico di Capodimonte, Napoli (Italy); Pumo, M. L.; Zampieri, L.; Tomasella, L.; Cappellaro, E.; Benetti, S. [INAF, Osservatorio Astronomico di Padova, I-35122 Padova (Italy); Pignata, G.; Bufano, F. [Departamento de Ciencias Fisicas, Universidad Andres Bello, Avda. Republica 252, Santiago (Chile); Bayless, A. J. [Southwest Research Institute, Department of Space Science, 6220 Culebra Road, San Antonio, TX 78238 (United States); Pritchard, T. A. [Department of Astronomy and Astrophysics, Penn State University, 525 Davey Lab, University Park, PA 16802 (United States); Taubenberger, S.; Benitez, S. [Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, D-85741 Garching (Germany); Kotak, R.; Inserra, C.; Fraser, M. [Astrophysics Research Centre, School of Mathematics and Physics, Queen's University Belfast, Belfast, BT7 1NN (United Kingdom); Elias-Rosa, N. [Institut de Ciències de l'Espai (CSIC-IEEC) Campus UAB, Torre C5, 2a planta, E-08193 Bellaterra, Barcelona (Spain); Haislip, J. B. [Department of Physics and Astronomy, University of North Carolina at Chapel Hill, 120 E. Cameron Ave., Chapel Hill, NC 27599 (United States); Harutyunyan, A. [Fundación Galileo Galilei - Telescopio Nazionale Galileo, Rambla José Ana Fernández Pérez 7, E-38712 Breña Baja, TF - Spain (Spain); and others

    2014-06-01

    We present an extensive optical and near-infrared photometric and spectroscopic campaign of the Type IIP supernova SN 2012aw. The data set densely covers the evolution of SN 2012aw shortly after the explosion through the end of the photospheric phase, with two additional photometric observations collected during the nebular phase, to fit the radioactive tail and estimate the ^56Ni mass. Also included in our analysis is the previously published Swift UV data, therefore providing a complete view of the ultraviolet-optical-infrared evolution of the photospheric phase. On the basis of our data set, we estimate all the relevant physical parameters of SN 2012aw with our radiation-hydrodynamics code: envelope mass M_env ∼ 20 M_☉, progenitor radius R ∼ 3 × 10^13 cm (∼430 R_☉), explosion energy E ∼ 1.5 foe, and initial ^56Ni mass ∼0.06 M_☉. These mass and radius values are reasonably well supported by independent evolutionary models of the progenitor, and may suggest a progenitor mass higher than the observational limit of 16.5 ± 1.5 M_☉ of the Type IIP events.

  15. Genome-Scale Metabolic Model for the Green Alga Chlorella vulgaris UTEX 395 Accurately Predicts Phenotypes under Autotrophic, Heterotrophic, and Mixotrophic Growth Conditions.

    Science.gov (United States)

    Zuñiga, Cristal; Li, Chien-Ting; Huelsman, Tyler; Levering, Jennifer; Zielinski, Daniel C; McConnell, Brian O; Long, Christopher P; Knoshaug, Eric P; Guarnieri, Michael T; Antoniewicz, Maciek R; Betenbaugh, Michael J; Zengler, Karsten

    2016-09-01

    The green microalga Chlorella vulgaris has been widely recognized as a promising candidate for biofuel production due to its ability to store high lipid content and its natural metabolic versatility. Compartmentalized genome-scale metabolic models constructed from genome sequences enable quantitative insight into the transport and metabolism of compounds within a target organism. These metabolic models have long been utilized to generate optimized design strategies for an improved production process. Here, we describe the reconstruction, validation, and application of a genome-scale metabolic model for C. vulgaris UTEX 395, iCZ843. The reconstruction represents the most comprehensive model for any eukaryotic photosynthetic organism to date, based on the genome size and number of genes in the reconstruction. The highly curated model accurately predicts phenotypes under photoautotrophic, heterotrophic, and mixotrophic conditions. The model was validated against experimental data and lays the foundation for model-driven strain design and medium alteration to improve yield. Calculated flux distributions under different trophic conditions show that a number of key pathways are affected by nitrogen starvation conditions, including central carbon metabolism and amino acid, nucleotide, and pigment biosynthetic pathways. Furthermore, model prediction of growth rates under various medium compositions and subsequent experimental validation showed an increased growth rate with the addition of tryptophan and methionine. PMID:27372244
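    Phenotype and flux predictions from a genome-scale reconstruction of this kind are typically obtained with flux balance analysis, i.e., a linear program that maximizes a biomass objective subject to steady-state mass balance and flux bounds (the abstract does not spell out the solver, and the toy three-reaction network below is invented for illustration; it is not the iCZ843 stoichiometry):

```python
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometric matrix S (metabolites x reactions):
# R1: uptake -> A, R2: A -> B, R3: B -> biomass
S = np.array([
    [ 1.0, -1.0,  0.0],   # metabolite A
    [ 0.0,  1.0, -1.0],   # metabolite B
])

lower = np.array([0.0, 0.0, 0.0])       # irreversible reactions
upper = np.array([10.0, 8.0, 1000.0])   # uptake limited to 10, R2 capacity 8

# Maximize flux through the biomass reaction (R3) => minimize its negative.
c = np.array([0.0, 0.0, -1.0])

res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]),
              bounds=list(zip(lower, upper)), method="highs")
print("optimal biomass flux:", -res.fun)   # limited by the R2 capacity: 8.0
print("flux distribution   :", res.x)
```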

  16. More Accurate Prediction of Metastatic Pancreatic Cancer Patients' Survival with Prognostic Model Using Both Host Immunity and Tumor Metabolic Activity.

    Directory of Open Access Journals (Sweden)

    Younak Choi

    Full Text Available Neutrophil to lymphocyte ratio (NLR) and standard uptake value (SUV) by 18F-FDG PET represent host immunity and tumor metabolic activity, respectively. We investigated NLR and maximum SUV (SUVmax) as prognostic markers in metastatic pancreatic cancer (MPC) patients who receive palliative chemotherapy. We reviewed 396 MPC patients receiving palliative chemotherapy. NLR was obtained before and after the first cycle of chemotherapy. In 118 patients with PET prior to chemotherapy, SUVmax was collected. Cut-off values were determined by ROC curve. In multivariate analysis of all patients, NLR and change in NLR after the first cycle of chemotherapy (ΔNLR) were independent prognostic factors for overall survival (OS). We scored the risk considering NLR and ΔNLR and identified 4 risk groups with different prognosis (risk score 0 vs 1 vs 2 vs 3: OS 9.7 vs 7.9 vs 5.7 vs 2.6 months, HR 1 vs 1.329 vs 2.137 vs 7.915, respectively; P<0.001). In the PET cohort, NLR and SUVmax were independently prognostic for OS. A prognostication model using both NLR and SUVmax could define 4 risk groups with different OS (risk score 0 vs 1 vs 2 vs 3: OS 11.8 vs 9.8 vs 7.2 vs 4.6 months, HR 1 vs 1.536 vs 2.958 vs 5.336, respectively; P<0.001). NLR and SUVmax, as simple parameters of host immunity and metabolic activity of tumor cells, respectively, are independent prognostic factors for OS in MPC patients undergoing palliative chemotherapy.

  17. Crop Model Improvement Reduces the Uncertainty of the Response to Temperature of Multi-Model Ensembles

    Science.gov (United States)

    Maiorano, Andrea; Martre, Pierre; Asseng, Senthold; Ewert, Frank; Mueller, Christoph; Roetter, Reimund P.; Ruane, Alex C.; Semenov, Mikhail A.; Wallach, Daniel; Wang, Enli

    2016-01-01

    To improve climate change impact estimates and to quantify their uncertainty, multi-model ensembles (MMEs) have been suggested. Model improvements can improve the accuracy of simulations and reduce the uncertainty of climate change impact assessments. Furthermore, they can reduce the number of models needed in an MME. Herein, 15 wheat growth models of a larger MME were improved through re-parameterization and/or incorporating or modifying heat stress effects on phenology, leaf growth and senescence, biomass growth, and grain number and size using detailed field experimental data from the USDA Hot Serial Cereal experiment (calibration data set). Simulation results from before and after model improvement were then evaluated with independent field experiments from a CIMMYT worldwide field trial network (evaluation data set). Model improvements decreased the variation (10th to 90th model ensemble percentile range) of grain yields simulated by the MME on average by 39% in the calibration data set and by 26% in the independent evaluation data set for crops grown in mean seasonal temperatures greater than 24 °C. MME mean squared error in simulating grain yield decreased by 37%. A reduction in MME uncertainty range by 27% increased MME prediction skills by 47%. Results suggest that the mean level of variation observed in field experiments and used as a benchmark can be reached with half the number of models in the MME. Improving crop models is therefore important to increase the certainty of model-based impact assessments and allow more practical, i.e., smaller, MMEs to be used effectively.
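    The uncertainty metric used above, the 10th-to-90th percentile range across the ensemble, is straightforward to compute; a short sketch with made-up ensemble yields (not the actual MME output) shows how the reported reduction in variation would be quantified:

```python
import numpy as np

def ensemble_range(yields, lo=10, hi=90):
    """10th-to-90th percentile range across the model dimension (axis 0)."""
    p_lo, p_hi = np.percentile(yields, [lo, hi], axis=0)
    return p_hi - p_lo

rng = np.random.default_rng(1)
n_models, n_sites = 15, 30

# Hypothetical simulated grain yields (t/ha) before and after model improvement.
before = rng.normal(4.0, 1.0, size=(n_models, n_sites))
after = rng.normal(4.0, 0.6, size=(n_models, n_sites))

r_before = ensemble_range(before).mean()
r_after = ensemble_range(after).mean()
print(f"mean 10-90% range before: {r_before:.2f} t/ha")
print(f"mean 10-90% range after : {r_after:.2f} t/ha")
print(f"reduction in variation  : {100 * (1 - r_after / r_before):.0f}%")
```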

  18. Reducing uncertainty in high-resolution sea ice models.

    Energy Technology Data Exchange (ETDEWEB)

    Peterson, Kara J.; Bochev, Pavel Blagoveston

    2013-07-01

    Arctic sea ice is an important component of the global climate system, reflecting a significant amount of solar radiation, insulating the ocean from the atmosphere and influencing ocean circulation by modifying the salinity of the upper ocean. The thickness and extent of Arctic sea ice have shown a significant decline in recent decades with implications for global climate as well as regional geopolitics. Increasing interest in exploration as well as climate feedback effects make predictive mathematical modeling of sea ice a task of tremendous practical import. Satellite data obtained over the last few decades have provided a wealth of information on sea ice motion and deformation. The data clearly show that ice deformation is focused along narrow linear features and this type of deformation is not well-represented in existing models. To improve sea ice dynamics we have incorporated an anisotropic rheology into the Los Alamos National Laboratory global sea ice model, CICE. Sensitivity analyses were performed using the Design Analysis Kit for Optimization and Terascale Applications (DAKOTA) to determine the impact of material parameters on sea ice response functions. Two material strength parameters that exhibited the most significant impact on responses were further analyzed to evaluate their influence on quantitative comparisons between model output and data. The sensitivity analysis along with ten year model runs indicate that while the anisotropic rheology provides some benefit in velocity predictions, additional improvements are required to make this material model a viable alternative for global sea ice simulations.

  19. Orbit Design for Mars Exploration by the Accurate Dynamic Model

    Institute of Scientific and Technical Information of China (English)

    Chen Yang; Zhao Guoqiang; Baoyin Hexi; Li Junfeng

    2011-01-01

    The precision orbit design for Mars exploration under an accurate dynamic model was studied. The launch window and trans-Mars orbit were determined through the particle swarm optimization (PSO) algorithm within the heliocentric two-body restriction. The patched-conics method was introduced to design the Earth-centred parking orbit and the departure hyperbolic orbit. The solution of the two-body Lambert problem was used as the initial value for the precision orbit design, and the preliminary orbit was corrected by differential iteration under the accurate dynamic model, with the Mars B-plane parameters and the flight time as constraints. Finally, the designed orbit was simulated with the STK software, and the results agree well.
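    A minimal particle swarm optimizer of the kind used above for the launch-window search is sketched below; for brevity the objective is a standard test function, whereas in the orbit-design problem it would be the total delta-v returned by a two-body Lambert solver for each candidate departure-epoch/flight-time pair (that solver is not shown here, and all parameter values are illustrative):

```python
import numpy as np

def pso(objective, bounds, n_particles=40, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Basic particle swarm optimization over a box-bounded search space."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    dim = lo.size
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # particle positions
    v = np.zeros_like(x)                               # particle velocities
    p_best = x.copy()
    p_val = np.apply_along_axis(objective, 1, x)
    g_best = p_best[np.argmin(p_val)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(objective, 1, x)
        improved = val < p_val
        p_best[improved], p_val[improved] = x[improved], val[improved]
        g_best = p_best[np.argmin(p_val)].copy()
    return g_best, p_val.min()

# Stand-in objective (Rosenbrock); in the application this would be the
# Lambert-transfer delta-v as a function of (departure epoch, time of flight).
rosen = lambda z: (1 - z[0])**2 + 100 * (z[1] - z[0]**2)**2
best, best_val = pso(rosen, bounds=([-2, -2], [2, 2]))
print("best point:", best, "objective:", best_val)
```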

  20. A systematic approach for the accurate non-invasive estimation of blood glucose utilizing a novel light-tissue interaction adaptive modelling scheme

    International Nuclear Information System (INIS)

    Diabetes is one of the biggest health challenges of the 21st century. The obesity epidemic, sedentary lifestyles and an ageing population mean that the prevalence of the condition is currently doubling every generation. Diabetes is associated with serious chronic ill health, disability and premature mortality. Long-term complications, including heart disease, stroke, blindness, kidney disease and amputations, make the greatest contribution to the costs of diabetes care. Many of these long-term effects could be avoided with earlier, more effective monitoring and treatment. Currently, blood glucose can only be monitored through the use of invasive techniques. To date there is no widely accepted and readily available non-invasive monitoring technique to measure blood glucose despite the many attempts. This paper challenges one of the most difficult non-invasive monitoring techniques, that of blood glucose, and proposes a novel approach that will enable the accurate and calibration-free estimation of glucose concentration in blood. This approach is based on spectroscopic techniques and a new adaptive modelling scheme. The theoretical implementation and the effectiveness of the adaptive modelling scheme for this application have been described, and a detailed mathematical evaluation has been employed to prove that such a scheme has the capability of accurately extracting the concentration of glucose from complex biological media.

  1. Safety-relevant mode confusions-modelling and reducing them

    Energy Technology Data Exchange (ETDEWEB)

    Bredereke, Jan [Universitaet Bremen, FB 3, P.O. Box 330 440, D-28334 Bremen (Germany)]. E-mail: brederek@tzi.de; Lankenau, Axel [Universitaet Bremen, FB 3, P.O. Box 330 440, D-28334 Bremen (Germany)

    2005-06-01

    Mode confusions are a significant safety concern in safety-critical systems, for example in aircraft. A mode confusion occurs when the observed behaviour of a technical system is out of sync with the user's mental model of its behaviour. But the notion is described only informally in the literature. We present a rigorous way of modelling the user and the machine in a shared-control system. This enables us to propose precise definitions of 'mode' and 'mode confusion' for safety-critical systems. We then validate these definitions against the informal notions in the literature. A new classification of mode confusions by cause leads to a number of design recommendations for shared-control systems. These help in avoiding mode confusion problems. Our approach supports the automated detection of remaining mode confusion problems. We apply our approach practically to a wheelchair robot.

  2. A comparison of updating algorithms for large $N$ reduced models

    CERN Document Server

    Pérez, Margarita García; Keegan, Liam; Okawa, Masanori; Ramos, Alberto

    2015-01-01

    We investigate Monte Carlo updating algorithms for simulating $SU(N)$ Yang-Mills fields on a single-site lattice, such as for the Twisted Eguchi-Kawai model (TEK). We show that performing only over-relaxation (OR) updates of the gauge links is a valid simulation algorithm for the Fabricius and Haan formulation of this model, and that this decorrelates observables faster than using heat-bath updates. We consider two different methods of implementing the OR update: either updating the whole $SU(N)$ matrix at once, or iterating through $SU(2)$ subgroups of the $SU(N)$ matrix. We find the same critical exponent in both cases, and only a slight difference between the two.

  3. The i-V curve characteristics of burner-stabilized premixed flames: detailed and reduced models

    CERN Document Server

    Han, Jie; Casey, Tiernan A; Bisetti, Fabrizio; Im, Hong G; Chen, Jyh-Yuan

    2016-01-01

    The i-V curve describes the current drawn from a flame as a function of the voltage difference applied across the reaction zone. Since combustion diagnostics and flame control strategies based on electric fields depend on the amount of current drawn from flames, there is significant interest in modeling and understanding i-V curves. We implement and apply a detailed model for the simulation of the production and transport of ions and electrons in one dimensional premixed flames. An analytical reduced model is developed based on the detailed one, and analytical expressions are used to gain insight into the characteristics of the i-V curve for various flame configurations. In order for the reduced model to capture the spatial distribution of the electric field accurately, the concept of a dead zone region, where voltage is constant, is introduced, and a suitable closure for the spatial extent of the dead zone is proposed and validated. The results from the reduced modeling framework are found to be in good agre...

  4. An Efficient Reduced-Order Model for the Nonlinear Dynamics of Carbon Nanotubes

    KAUST Repository

    Xu, Tiantian

    2014-08-17

    Because of the inherent nonlinearities in the behavior of CNTs when excited by electrostatic forces, modeling and simulating their behavior is challenging. The complicated form of the electrostatic force describing the interaction of their cylindrical shape, forming the upper electrodes, with the lower electrodes poses serious computational challenges. This presents an obstacle against applying and using several nonlinear dynamics tools that are typically used to analyze the behavior of complicated nonlinear systems, such as shooting, continuation, and integrity analysis techniques. This work presents an attempt to resolve this issue. We present an investigation of the nonlinear dynamics of carbon nanotubes when actuated by large electrostatic forces. We study expanding the complicated form of the electrostatic force into a sufficient number of terms of its Taylor series. We plot and compare the expanded form of the electrostatic force to the exact form and find that at least twenty terms are needed to capture accurately the strong nonlinear form of the force over the full range of motion. Then, we utilize this form along with an Euler–Bernoulli beam model to study the static and dynamic behavior of CNTs. The geometric nonlinearity and the nonlinear electrostatic force are considered. An efficient reduced-order model (ROM) based on the Galerkin method is developed and utilized to simulate the static and dynamic responses of the CNTs. We found that the use of the new expanded form of the electrostatic force enables avoiding the cumbersome evaluation of the spatial integrals involving the electrostatic force during the modal projection procedure in the Galerkin method, which needs to be done at every time step. Hence, the new method proves to be much more efficient computationally.
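    The convergence check described above, expanding the electrostatic force in a Taylor series and comparing it with the exact expression over the gap, is easy to reproduce for a simplified parallel-plate-type force law; the cylindrical-electrode expression used in the paper is more involved, so the sketch below (with a normalized gap and an arbitrary deflection range) only illustrates the idea:

```python
import numpy as np

g = 1.0  # normalized gap between the undeflected beam and the lower electrode

def exact_force(w):
    """Simplified electrostatic pull ~ 1/(g - w)^2 (parallel-plate-like form)."""
    return 1.0 / (g - w)**2

def taylor_force(w, n_terms):
    """Taylor series of 1/(g - w)^2 about w = 0: sum_k (k + 1) w^k / g^(k + 2)."""
    return sum((k + 1) * w**k / g**(k + 2) for k in range(n_terms))

# Deflections up to 80% of the gap, where the force is strongly nonlinear.
w = np.linspace(0.0, 0.8, 200)
for n in (5, 10, 20, 30):
    err = np.max(np.abs(taylor_force(w, n) - exact_force(w)) / exact_force(w))
    print(f"{n:2d} terms: max relative error over the range = {err:.2e}")
```

    The error shrinks slowly as the deflection approaches the gap, which is consistent with the observation that many series terms are required to cover the full range of motion.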

  5. Reduced order models based on local POD plus Galerkin projection

    Science.gov (United States)

    Rapún, María-Luisa; Vega, José M.

    2010-04-01

    A method is presented to accelerate numerical simulations on parabolic problems using a numerical code and a Galerkin system (obtained via POD plus Galerkin projection) on a sequence of interspersed intervals. The lengths of these intervals are chosen according to several basic ideas that include an a priori estimate of the error of the Galerkin approximation. Several improvements are introduced that reduce computational complexity and deal with: (a) updating the POD manifold (instead of calculating it) at the end of each Galerkin interval; (b) using only a limited number of mesh points to calculate the right hand side of the Galerkin system; and (c) introducing a second error estimate based on a second Galerkin system to account for situations in which qualitative changes in the dynamics occur during the application of the Galerkin system. The resulting method, called local POD plus Galerkin projection method, turns out to be both robust and efficient. For illustration, we consider a time-dependent Fisher-like equation and a complex Ginzburg-Landau equation.

  6. On reduced models for gravity waves generated by moving bodies

    CERN Document Server

    Trinh, Philippe H

    2015-01-01

    In 1982, Marshall P. Tulin published a report proposing a framework for reducing the equations for gravity waves generated by moving bodies into a single nonlinear differential equation solvable in closed form [Proc. 14th Symp. on Naval Hydrodynamics, 1982, pp.19-51]. Several new and puzzling issues were highlighted by Tulin, notably the existence of weak and strong wave-making regimes, and the paradoxical fact that the theory seemed to be applicable to flows at low speeds, "but not too low speeds". These important issues were left unanswered, and despite the novelty of the ideas, Tulin's report fell into relative obscurity. Now thirty years later, we will revive Tulin's observations, and explain how an asymptotically consistent framework allows us to address these concerns. Most notably, we will explain, using the asymptotic method of steepest descents, how the production of free-surface waves can be related to the arrangement of integration contours connected to the shape of the moving body. This approach p...

  7. Cure violence: a public health model to reduce gun violence.

    Science.gov (United States)

    Butts, Jeffrey A; Roman, Caterina Gouvis; Bostwick, Lindsay; Porter, Jeremy R

    2015-03-18

    Scholars and practitioners alike in recent years have suggested that real and lasting progress in the fight against gun violence requires changing the social norms and attitudes that perpetuate violence and the use of guns. The Cure Violence model is a public health approach to gun violence reduction that seeks to change individual and community attitudes and norms about gun violence. It considers gun violence to be analogous to a communicable disease that passes from person to person when left untreated. Cure Violence operates independently of, while hopefully not undermining, law enforcement. In this article, we describe the theoretical basis for the program, review existing program evaluations, identify several challenges facing evaluators, and offer directions for future research. PMID:25581151

  8. Pseudo-spectral Maxwell solvers for an accurate modeling of Doppler harmonic generation on plasma mirrors with Particle-In-Cell codes

    CERN Document Server

    Blaclard, G; Lehe, R; Vay, J L

    2016-01-01

    With the advent of PW-class lasers, the very large laser intensities attainable on-target should enable the production of intense high-order Doppler harmonics from relativistic laser-plasma mirror interactions. At present, the modeling of these harmonics with Particle-In-Cell (PIC) codes is extremely challenging as it implies an accurate description of tens of harmonic orders over a broad range of angles. In particular, we show here that standard Finite Difference Time Domain (FDTD) Maxwell solvers used in most PIC codes partly fail to model Doppler harmonic generation because they induce numerical dispersion of electromagnetic waves in vacuum, which is responsible for a spurious angular deviation of harmonic beams. This effect was extensively studied and a simple toy-model based on the Snell-Descartes law was developed that allows us to finely predict the angular deviation of harmonics depending on the spatio-temporal resolution and the Maxwell solver used in the simulations. Our model demonstrates that the miti...
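    The spurious angular deviation discussed above originates in the direction-dependent numerical phase velocity of the standard Yee FDTD scheme. A quick sketch of that dependence in 2D (generic grid parameters chosen for illustration, not those of the PIC runs in the paper) solves the classical Yee dispersion relation for the numerical wavenumber at each propagation angle:

```python
import numpy as np
from scipy.optimize import brentq

c = 1.0                                  # normalized speed of light
dx = dy = 1.0 / 20.0                     # 20 cells per wavelength
dt = 0.95 * dx / (c * np.sqrt(2.0))      # just under the 2D CFL limit
omega = 2.0 * np.pi * c / 1.0            # unit wavelength

def dispersion_residual(k, theta):
    """2D Yee dispersion relation: LHS(omega) - RHS(k, theta)."""
    kx, ky = k * np.cos(theta), k * np.sin(theta)
    lhs = (np.sin(omega * dt / 2.0) / (c * dt))**2
    rhs = (np.sin(kx * dx / 2.0) / dx)**2 + (np.sin(ky * dy / 2.0) / dy)**2
    return lhs - rhs

k_exact = omega / c
for deg in (0, 15, 30, 45):
    theta = np.radians(deg)
    k_num = brentq(dispersion_residual, 0.5 * k_exact, 1.5 * k_exact, args=(theta,))
    print(f"theta = {deg:2d} deg: numerical phase velocity / c = {omega / (k_num * c):.5f}")
```

    The phase-velocity error varies with propagation angle (largest on-axis, smallest along the grid diagonal), which is the mechanism behind the angular deviation of the harmonic beams.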

  9. The effect of audio and video modeling on beginning guitar students' ability to accurately sing and accompany a familiar melody on guitar by ear.

    Science.gov (United States)

    Wlodarczyk, Natalie

    2010-01-01

    The purpose of this research was to determine the effect of audio and visual modeling on music and nonmusic majors' ability to accurately sing and accompany a familiar melody on guitar by ear. Two studies were run to investigate the impact of musical training on the ability to play by ear. All participants were student volunteers enrolled in sections of a beginning class guitar course and were randomly assigned to one of three groups: control, audio modeling only, or audio and visual modeling. All participants were asked to sing the same familiar song in the same key and accompany on guitar. Study 1 compared music majors with nonmusic majors and showed no significant difference between treatment conditions; however, there was a significant difference between music majors and nonmusic majors across all conditions. There was no significant interaction between groups and treatment conditions. Study 2 investigated the operational definition of "musically trained" and compared musically trained with nonmusically trained participants across the same three conditions. Results of Study 2 showed no significant difference between musically trained and nonmusically trained participants; however, there was a significant difference between treatment conditions, with the audio-visual group completing the task in the shortest amount of time. There was no significant interaction between groups and treatment conditions. Results of these analyses support the use of instructor modeling for beginning guitar students and suggest that previous musical knowledge does not play a role in guitar skills acquisition at the beginning level. PMID:21141772

  10. Reducing Uncertainty in Chemistry Climate Model Predictions of Stratospheric Ozone

    Science.gov (United States)

    Douglass, A. R.; Strahan, S. E.; Oman, L. D.; Stolarski, R. S.

    2014-01-01

    Chemistry climate models (CCMs) are used to predict the future evolution of stratospheric ozone as ozone-depleting substances decrease and greenhouse gases increase, cooling the stratosphere. CCM predictions exhibit many common features, but also a broad range of values for quantities such as year of ozone-return-to-1980 and global ozone level at the end of the 21st century. Multiple linear regression is applied to each of 14 CCMs to separate ozone response to chlorine change from that due to climate change. We show that the sensitivity of lower atmosphere ozone to chlorine change ΔO3/ΔCly is a near-linear function of partitioning of total inorganic chlorine (Cly) into its reservoirs; both Cly and its partitioning are controlled by lower atmospheric transport. CCMs with realistic transport agree with observations for chlorine reservoirs and produce similar ozone responses to chlorine change. After 2035 differences in response to chlorine contribute little to the spread in CCM results as the anthropogenic contribution to Cly becomes unimportant. Differences among upper stratospheric ozone increases due to temperature decreases are explained by differences in ozone sensitivity to temperature change ΔO3/ΔT due to different contributions from various ozone loss processes, each with their own temperature dependence. In the lower atmosphere, tropical ozone decreases caused by a predicted speed-up in the Brewer-Dobson circulation may or may not be balanced by middle and high latitude increases, contributing most to the spread in late 21st century predictions.
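    The attribution step above amounts to a multiple linear regression of each model's ozone time series onto chlorine and temperature (or climate) proxies, with the fitted coefficients playing the role of sensitivities such as ΔO3/ΔCly. A schematic version with synthetic series (invented regressors and coefficients, not CCM output):

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1960, 2101)

# Synthetic regressors standing in for inorganic chlorine and stratospheric temperature.
cly = np.interp(years, [1960, 2000, 2100], [1.0, 3.5, 1.5])   # ppb, peaking near 2000
temp = -0.03 * (years - 1980)                                  # steady cooling, K

# Synthetic ozone series: a chlorine term, a temperature term, and noise.
true_sens_cly, true_sens_T = -8.0, -4.0   # made-up sensitivities (DU/ppb, DU/K)
ozone = 300.0 + true_sens_cly * cly + true_sens_T * temp + rng.normal(0.0, 2.0, years.size)

# Multiple linear regression: ozone ~ 1 + Cly + T
X = np.column_stack([np.ones(years.size), cly, temp])
coef, *_ = np.linalg.lstsq(X, ozone, rcond=None)
print(f"fitted dO3/dCly = {coef[1]:.2f} DU/ppb, dO3/dT = {coef[2]:.2f} DU/K")
```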

  11. An Efficient and Accurate Numerical Algorithm for Multi-Dimensional Modeling of Casting Solidification, Part Ⅱ: Combination of FEM and FDM

    Institute of Scientific and Technical Information of China (English)

    Jin Xuesong; Tsai Hailung

    1994-01-01

    This paper is a continuation of Ref. [1]. It employs a first-order accurate Taylor-Galerkin-based finite element approach for casting solidification. The approach is based on expressing the finite-difference approximation of the transient time derivative of temperature, while the expressions of the governing equations are discretized in space via the classical Galerkin scheme using finite-element formulations. The detailed technique is reported in this study. Several casting solidification examples are solved to demonstrate the excellent agreement in comparison with the results obtained by using the control volume method, and to show the viability of combining the finite element method and the finite difference method in multi-dimensional modeling of casting solidification.

  12. Chaotic vibrations of circular cylindrical shells: Galerkin versus reduced-order models via the proper orthogonal decomposition method

    Science.gov (United States)

    Amabili, M.; Sarkar, A.; Païdoussis, M. P.

    2006-03-01

    The geometric nonlinear response of a water-filled, simply supported circular cylindrical shell to harmonic excitation in the spectral neighbourhood of the fundamental natural frequency is investigated. The response is investigated for a fixed excitation frequency by using the excitation amplitude as bifurcation parameter for a wide range of variation. Bifurcation diagrams of Poincaré maps obtained from direct time integration and calculation of the Lyapunov exponents and Lyapunov dimension have been used to study the system. By increasing the excitation amplitude, the response undergoes (i) a period-doubling bifurcation, (ii) subharmonic response, (iii) quasi-periodic response and (iv) chaotic behaviour with up to 16 positive Lyapunov exponents (hyperchaos). The model is based on Donnell's nonlinear shallow-shell theory, and the reference solution is obtained by the Galerkin method. The proper orthogonal decomposition (POD) method is used to extract proper orthogonal modes that describe the system behaviour from time-series response data. These time-series have been obtained via the conventional Galerkin approach (using normal modes as a projection basis) with an accurate model involving 16 degrees of freedom (dofs), validated in previous studies. The POD method, in conjunction with the Galerkin approach, permits building a lower-dimensional model compared to those obtainable via the conventional Galerkin approach. Periodic and quasi-periodic response around the fundamental resonance for fixed excitation amplitude can be very successfully simulated with a 3-dof reduced-order model. However, in the case of large variation of the excitation, even a 5-dof reduced-order model is not fully accurate. Results show that the POD methodology is not as "robust" as the Galerkin method.

  13. A nonlinear POD reduced order model for limit cycle oscillation prediction

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    When the amplitude of the unsteady flow oscillation is large or large changes occur in the mean background flow, such as in limit cycle oscillation, the traditional proper orthogonal decomposition reduced order model based on linearized time or frequency domain small disturbance solvers cannot capture the main nonlinear features. A new nonlinear reduced order model based on the dynamically nonlinear flow equation was investigated. The nonlinear second order snapshot equation in the time domain for proper orthogonal decomposition basis construction was obtained from the Taylor series expansion of the flow solver. The NLR 7301 airfoil configuration and Goland+ wing/store aeroelastic model were used to validate the capability and efficiency of the new nonlinear reduced order model. The simulation results indicate that the proposed new reduced order model can capture the limit cycle oscillation of the aeroelastic system very well, while the traditional proper orthogonal decomposition reduced order model will lose effectiveness.

  14. Comparison of reduced models for blood flow using Runge-Kutta discontinuous Galerkin methods

    CERN Document Server

    Puelz, Charles; Canic, Suncica; Rusin, Craig G

    2015-01-01

    Reduced, or one-dimensional blood flow models take the general form of nonlinear hyperbolic systems, but differ greatly in their formulation. One class of models considers the physically conserved quantities of mass and momentum, while another class describes mass and velocity. Further, the averaging process employed in the model derivation requires the specification of the axial velocity profile; this choice differentiates models within each class. Discrepancies among differing models have yet to be investigated. In this paper, we systematically compare several reduced models of blood flow for physiologically relevant vessel parameters, network topology, and boundary data. The models are discretized by a class of Runge-Kutta discontinuous Galerkin methods.
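    Both model classes compared above can be written as one-dimensional hyperbolic balance laws; a generic template (not the specific systems of the paper: the momentum-correction factor, tube law and friction term below are the model-dependent closures set by the assumed axial velocity profile and wall model) is

```latex
% Conservative (area--flow) class, unknowns A (area) and Q = A\,\bar{u} (flow):
\frac{\partial A}{\partial t} + \frac{\partial Q}{\partial x} = 0,
\qquad
\frac{\partial Q}{\partial t}
  + \frac{\partial}{\partial x}\!\left(\alpha\,\frac{Q^{2}}{A}\right)
  + \frac{A}{\rho}\,\frac{\partial p(A)}{\partial x} = f(A,Q),

% Area--velocity class (shown for \alpha = 1), unknowns A and \bar{u}:
\frac{\partial A}{\partial t} + \frac{\partial (A\bar{u})}{\partial x} = 0,
\qquad
\frac{\partial \bar{u}}{\partial t}
  + \frac{\partial}{\partial x}\!\left(\frac{\bar{u}^{2}}{2} + \frac{p(A)}{\rho}\right)
  = \frac{f(A,\bar{u})}{\rho A},
```

    where alpha is the momentum-correction factor, p(A) the elastic tube law and f the friction term; the choice of these closures is precisely what differentiates the reduced models being compared.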

  15. Time-Dependent Solutions to the Fokker-Planck Equation of Maximum Reduced Air-Sea Coupling Climate Model

    Institute of Scientific and Technical Information of China (English)

    FENG Guolin; DONG Wenjie; GAO Hongxing

    2005-01-01

    The time-dependent solution of the reduced air-sea coupling stochastic-dynamic model is accurately obtained by using the Fokker-Planck equation and the quantum mechanical method. The analysis of the time-dependent solution suggests that when the climate system is in the ground state, the behavior of the system appears to be Brownian motion, thus supporting the foundation of Hasselmann's stochastic climate model; when the system is in the first excitation state, the motion of the system exhibits a time-decaying form or, under certain conditions, a periodic oscillation with the main period being 2.3 yr. Finally, the results are used to discuss the impact of the doubling of carbon dioxide on climate.

  16. Bayesian State-Space Modelling of Conventional Acoustic Tracking Provides Accurate Descriptors of Home Range Behavior in a Small-Bodied Coastal Fish Species

    Science.gov (United States)

    Alós, Josep; Palmer, Miquel; Balle, Salvador; Arlinghaus, Robert

    2016-01-01

    State-space models (SSM) are increasingly applied in studies involving biotelemetry-generated positional data because they are able to estimate movement parameters from positions that are unobserved or have been observed with non-negligible observational error. Popular telemetry systems in marine coastal fish consist of arrays of omnidirectional acoustic receivers, which generate a multivariate time-series of detection events across the tracking period. Here we report a novel Bayesian fitting of a SSM application that couples mechanistic movement properties within a home range (a specific case of random walk weighted by an Ornstein-Uhlenbeck process) with a model of observational error typical for data obtained from acoustic receiver arrays. We explored the performance and accuracy of the approach through simulation modelling and extensive sensitivity analyses of the effects of various configurations of movement properties and time-steps among positions. Model results show an accurate and unbiased estimation of the movement parameters, and in most cases the simulated movement parameters were properly retrieved. Only in extreme situations (when fast swimming speeds are combined with pooling the number of detections over long time-steps) the model produced some bias that needs to be accounted for in field applications. Our method was subsequently applied to real acoustic tracking data collected from a small marine coastal fish species, the pearly razorfish, Xyrichtys novacula. The Bayesian SSM we present here constitutes an alternative for those used to the Bayesian way of reasoning. Our Bayesian SSM can be easily adapted and generalized to any species, thereby allowing studies in freely roaming animals on the ecological and evolutionary consequences of home ranges and territory establishment, both in fishes and in other taxa. PMID:27119718
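    The movement model underlying the SSM above is a random walk attracted to a home-range centre, i.e., a two-dimensional Ornstein–Uhlenbeck process. A minimal Euler–Maruyama simulation of such tracks (illustrative parameter values, not those estimated for the razorfish, and without the acoustic-detection observation model) looks like:

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_ou_track(n_steps, dt, centre, k, sigma, x0=None):
    """Euler-Maruyama simulation of a 2-D Ornstein-Uhlenbeck home-range process.

    dX = k * (centre - X) dt + sigma dW
    k     : strength of attraction towards the home-range centre (1/time)
    sigma : diffusion coefficient (distance per sqrt(time))
    """
    x = np.empty((n_steps, 2))
    x[0] = centre if x0 is None else x0
    for i in range(1, n_steps):
        drift = k * (centre - x[i - 1])
        x[i] = x[i - 1] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(2)
    return x

track = simulate_ou_track(n_steps=5000, dt=1.0 / 60.0,       # one-minute steps, in hours
                          centre=np.array([0.0, 0.0]),
                          k=0.5, sigma=20.0)                  # toy metre-scale values

# The stationary spread of an OU process per axis is sigma / sqrt(2 k);
# the empirical spread of a long simulated track should be close to it.
print("theoretical std per axis:", 20.0 / np.sqrt(2 * 0.5))
print("empirical std per axis  :", track.std(axis=0))
```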

  17. Bayesian State-Space Modelling of Conventional Acoustic Tracking Provides Accurate Descriptors of Home Range Behavior in a Small-Bodied Coastal Fish Species.

    Directory of Open Access Journals (Sweden)

    Josep Alós

    Full Text Available State-space models (SSM) are increasingly applied in studies involving biotelemetry-generated positional data because they are able to estimate movement parameters from positions that are unobserved or have been observed with non-negligible observational error. Popular telemetry systems in marine coastal fish consist of arrays of omnidirectional acoustic receivers, which generate a multivariate time-series of detection events across the tracking period. Here we report a novel Bayesian fitting of a SSM application that couples mechanistic movement properties within a home range (a specific case of random walk weighted by an Ornstein-Uhlenbeck process) with a model of observational error typical for data obtained from acoustic receiver arrays. We explored the performance and accuracy of the approach through simulation modelling and extensive sensitivity analyses of the effects of various configurations of movement properties and time-steps among positions. Model results show an accurate and unbiased estimation of the movement parameters, and in most cases the simulated movement parameters were properly retrieved. Only in extreme situations (when fast swimming speeds are combined with pooling the number of detections over long time-steps) the model produced some bias that needs to be accounted for in field applications. Our method was subsequently applied to real acoustic tracking data collected from a small marine coastal fish species, the pearly razorfish, Xyrichtys novacula. The Bayesian SSM we present here constitutes an alternative for those used to the Bayesian way of reasoning. Our Bayesian SSM can be easily adapted and generalized to any species, thereby allowing studies in freely roaming animals on the ecological and evolutionary consequences of home ranges and territory establishment, both in fishes and in other taxa.

  18. Bayesian State-Space Modelling of Conventional Acoustic Tracking Provides Accurate Descriptors of Home Range Behavior in a Small-Bodied Coastal Fish Species.

    Science.gov (United States)

    Alós, Josep; Palmer, Miquel; Balle, Salvador; Arlinghaus, Robert

    2016-01-01

    State-space models (SSM) are increasingly applied in studies involving biotelemetry-generated positional data because they are able to estimate movement parameters from positions that are unobserved or have been observed with non-negligible observational error. Popular telemetry systems in marine coastal fish consist of arrays of omnidirectional acoustic receivers, which generate a multivariate time-series of detection events across the tracking period. Here we report a novel Bayesian fitting of a SSM application that couples mechanistic movement properties within a home range (a specific case of random walk weighted by an Ornstein-Uhlenbeck process) with a model of observational error typical for data obtained from acoustic receiver arrays. We explored the performance and accuracy of the approach through simulation modelling and extensive sensitivity analyses of the effects of various configurations of movement properties and time-steps among positions. Model results show an accurate and unbiased estimation of the movement parameters, and in most cases the simulated movement parameters were properly retrieved. Only in extreme situations (when fast swimming speeds are combined with pooling the number of detections over long time-steps) the model produced some bias that needs to be accounted for in field applications. Our method was subsequently applied to real acoustic tracking data collected from a small marine coastal fish species, the pearly razorfish, Xyrichtys novacula. The Bayesian SSM we present here constitutes an alternative for those used to the Bayesian way of reasoning. Our Bayesian SSM can be easily adapted and generalized to any species, thereby allowing studies in freely roaming animals on the ecological and evolutionary consequences of home ranges and territory establishment, both in fishes and in other taxa. PMID:27119718

  19. Sensitivity of Reliability Estimates in Partially Damaged RC Structures subject to Earthquakes, using Reduced Hysteretic Models

    DEFF Research Database (Denmark)

    Iwankiewicz, R.; Nielsen, Søren R. K.; Skjærbæk, P. S.

    The subject of the paper is the investigation of the sensitivity of structural reliability estimation by a reduced hysteretic model for a reinforced concrete frame under an earthquake excitation.

  20. The role of chemistry and pH of solid surfaces for specific adsorption of biomolecules in solution—accurate computational models and experiment

    International Nuclear Information System (INIS)

    Adsorption of biomolecules and polymers to inorganic nanostructures plays a major role in the design of novel materials and therapeutics. The behavior of flexible molecules on solid surfaces at a scale of 1–1000 nm remains difficult and expensive to monitor using current laboratory techniques, while playing a critical role in energy conversion and composite materials as well as in understanding the origin of diseases. Approaches to implement key surface features and pH in molecular models of solids are explained, and distinct mechanisms of peptide recognition on metal nanostructures, silica and apatite surfaces in solution are described as illustrative examples. The influence of surface energies, specific surface features and protonation states on the structure of aqueous interfaces and selective biomolecular adsorption is found to be critical, comparable to the well-known influence of the charge state and pH of proteins and surfactants on their conformations and assembly. The representation of such details in molecular models according to experimental data and available chemical knowledge enables accurate simulations of unknown complex interfaces in atomic resolution in quantitative agreement with independent experimental measurements. In this context, the benefits of a uniform force field for all material classes and of a mineral surface structure database are discussed. (paper)

  1. Reducing uncertainty for estimating forest carbon stocks and dynamics using integrated remote sensing, forest inventory and process-based modeling

    Science.gov (United States)

    Poulter, B.; Ciais, P.; Joetzjer, E.; Maignan, F.; Luyssaert, S.; Barichivich, J.

    2015-12-01

    Accurately estimating forest biomass and forest carbon dynamics requires new integrated remote sensing, forest inventory, and carbon cycle modeling approaches. Presently, there is an increasing and urgent need to reduce forest biomass uncertainty in order to meet the requirements of carbon mitigation treaties, such as Reducing Emissions from Deforestation and forest Degradation (REDD+). Here we describe a new parameterization and assimilation methodology used to estimate tropical forest biomass using the ORCHIDEE-CAN dynamic global vegetation model. ORCHIDEE-CAN simulates carbon uptake and allocation to individual trees using a mechanistic representation of photosynthesis, respiration and other first-order processes. The model is first parameterized using forest inventory data to constrain background mortality rates, i.e., self-thinning, and productivity. Satellite remote sensing data for forest structure, i.e., canopy height, is used to constrain simulated forest stand conditions using a look-up table approach to match canopy height distributions. The resulting forest biomass estimates are provided for spatial grids that match REDD+ project boundaries and aim to provide carbon estimates for the criteria described in the IPCC Good Practice Guidelines Tier 3 category. With the increasing availability of forest structure variables derived from high-resolution LIDAR, RADAR, and optical imagery, new methodologies and applications with process-based carbon cycle models are becoming more readily available to inform land management.

  2. Speaking Fluently And Accurately

    Institute of Scientific and Technical Information of China (English)

    Joseph DeVeto

    2004-01-01

    Even after many years of study, students make frequent mistakes in English. In addition, many students still need a long time to think of what they want to say. For some reason, in spite of all the studying, students are still not quite fluent. When I teach, I use one technique that helps students not only speak more accurately, but also more fluently. That technique is dictations.

  3. Accurate Finite Difference Algorithms

    Science.gov (United States)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.

  4. A random forest based risk model for reliable and accurate prediction of receipt of transfusion in patients undergoing percutaneous coronary intervention.

    Directory of Open Access Journals (Sweden)

    Hitinder S Gurm

    Full Text Available BACKGROUND: Transfusion is a common complication of Percutaneous Coronary Intervention (PCI) and is associated with adverse short- and long-term outcomes. There is no risk model for identifying patients most likely to receive transfusion after PCI. The objective of our study was to develop and validate a tool for predicting receipt of blood transfusion in patients undergoing contemporary PCI. METHODS: Random forest models were developed utilizing 45 pre-procedural clinical and laboratory variables to estimate the receipt of transfusion in patients undergoing PCI. The most influential variables were selected for inclusion in an abbreviated model. Model performance estimating transfusion was evaluated in an independent validation dataset using area under the ROC curve (AUC), with net reclassification improvement (NRI) used to compare full and reduced model prediction after grouping in low, intermediate, and high risk categories. The impact of procedural anticoagulation on observed versus predicted transfusion rates was assessed for the different risk categories. RESULTS: Our study cohort was comprised of 103,294 PCI procedures performed at 46 hospitals between July 2009 and December 2012 in Michigan, of which 72,328 (70%) were randomly selected for training the models, and 30,966 (30%) for validation. The models demonstrated excellent calibration and discrimination (AUC: full model = 0.888 (95% CI 0.877-0.899), reduced model AUC = 0.880 (95% CI 0.868-0.892), p for difference 0.003; NRI = 2.77%, p = 0.007). Procedural anticoagulation and radial access significantly influenced transfusion rates in the intermediate and high risk patients, but no clinically relevant impact was noted in low risk patients, who made up 70% of the total cohort. CONCLUSIONS: The risk of transfusion among patients undergoing PCI can be reliably calculated using a novel, easy to use computational tool (https://bmc2.org/calculators/transfusion). This risk prediction
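    A stripped-down version of the pipeline described above (fit a random forest on pre-procedural variables, select the most influential ones for an abbreviated model, then compare discrimination on a held-out set with the AUC) might look as follows with scikit-learn. The data here are synthetic; the registry variables, hospital structure and the published BMC2 calculator are not reproduced.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for 45 pre-procedural clinical/laboratory variables
# with a rare outcome (transfusion).
X, y = make_classification(n_samples=20000, n_features=45, n_informative=12,
                           weights=[0.97, 0.03], random_state=0)

X_train, X_valid, y_train, y_valid = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

full_model = RandomForestClassifier(n_estimators=300, random_state=0, n_jobs=-1)
full_model.fit(X_train, y_train)

# Abbreviated model: keep only the most influential variables.
top = np.argsort(full_model.feature_importances_)[::-1][:10]
reduced_model = RandomForestClassifier(n_estimators=300, random_state=0, n_jobs=-1)
reduced_model.fit(X_train[:, top], y_train)

auc_full = roc_auc_score(y_valid, full_model.predict_proba(X_valid)[:, 1])
auc_reduced = roc_auc_score(y_valid, reduced_model.predict_proba(X_valid[:, top])[:, 1])
print(f"AUC full model    : {auc_full:.3f}")
print(f"AUC reduced model : {auc_reduced:.3f}")
```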

  5. Accurate and Fully Automatic Hippocampus Segmentation Using Subject-Specific 3D Optimal Local Maps Into a Hybrid Active Contour Model.

    Science.gov (United States)

    Zarpalas, Dimitrios; Gkontra, Polyxeni; Daras, Petros; Maglaveras, Nicos

    2014-01-01

    Assessing the structural integrity of the hippocampus (HC) is an essential step toward prevention, diagnosis, and follow-up of various brain disorders due to the implication of the structural changes of the HC in those disorders. In this respect, the development of automatic segmentation methods that can accurately, reliably, and reproducibly segment the HC has attracted considerable attention over the past decades. This paper presents an innovative 3-D fully automatic method to be used on top of the multiatlas concept for the HC segmentation. The method is based on a subject-specific set of 3-D optimal local maps (OLMs) that locally control the influence of each energy term of a hybrid active contour model (ACM). The complete set of the OLMs for a set of training images is defined simultaneously via an optimization scheme. At the same time, the optimal ACM parameters are also calculated. Therefore, heuristic parameter fine-tuning is not required. Training OLMs are subsequently combined, by applying an extended multiatlas concept, to produce the OLMs that are anatomically more suitable to the test image. The proposed algorithm was tested on three different and publicly available data sets. Its accuracy was compared with that of state-of-the-art methods demonstrating the efficacy and robustness of the proposed method. PMID:27170866

  6. CLASH-VLT: Insights on the mass substructures in the Frontier Fields Cluster MACS J0416.1-2403 through accurate strong lens modeling

    CERN Document Server

    Grillo, C; Rosati, P; Mercurio, A; Balestra, I; Munari, E; Nonino, M; Caminha, G B; Lombardi, M; De Lucia, G; Borgani, S; Gobat, R; Biviano, A; Girardi, M; Umetsu, K; Coe, D; Koekemoer, A M; Postman, M; Zitrin, A; Halkola, A; Broadhurst, T; Sartoris, B; Presotto, V; Annunziatella, M; Maier, C; Fritz, A; Vanzella, E; Frye, B

    2014-01-01

    We present a detailed mass reconstruction and a novel study on the substructure properties in the core of the CLASH and Frontier Fields galaxy cluster MACS J0416.1-2403. We show and employ our extensive spectroscopic data set taken with the VIMOS instrument as part of our CLASH-VLT program, to confirm spectroscopically 10 strong lensing systems and to select a sample of 175 plausible cluster members to a limiting stellar mass of log(M_*/M_Sun) ~ 8.6. We reproduce the measured positions of 30 multiple images with a remarkable median offset of only 0.3" by means of a comprehensive strong lensing model comprised of 2 cluster dark-matter halos, represented by cored elliptical pseudo-isothermal mass distributions, and the cluster member components. The latter have total mass-to-light ratios increasing with the galaxy HST/WFC3 near-IR (F160W) luminosities. The measurement of the total enclosed mass within the Einstein radius is accurate to ~5%, including systematic uncertainties. We emphasize that the use of multip...

  7. Reduced-order LPV model of flexible wind turbines from high fidelity aeroelastic codes

    DEFF Research Database (Denmark)

    Adegas, Fabiano Daher; Sønderby, Ivan Bergquist; Hansen, Morten Hartvig;

    2013-01-01

    Linear aeroelastic models used for stability analysis of wind turbines are commonly of very high order. These high-order models are generally not suitable for control analysis and synthesis. This paper presents a methodology to obtain a reduced-order linear parameter varying (LPV) model from a se...

  8. Modeling and Optimization of Direct Chill Casting to Reduce Ingot Cracking

    Energy Technology Data Exchange (ETDEWEB)

    Das, Subodh K.

    2006-01-09

    reheating-cooling method (RCM), was developed and validated for measuring mechanical properties in the nonequilibrium mushy zones of alloys. The new method captures the brittle nature of aluminum alloys at temperatures close to the nonequilibrium solidus temperature, while specimens tested using the reheating method exhibit significant ductility. The RCM has been used for determining the mechanical properties of alloys at nonequilibrium mushy zone temperatures. Accurate data obtained during this project show that the metal becomes more brittle at high temperatures and high strain rates. (4) The elevated-temperature mechanical properties of the alloy were determined. Constitutive models relating the stress and strain relationship at elevated temperatures were also developed. The experimental data fit the model well. (5) An integrated 3D DC casting model has been used to simulate heat transfer, fluid flow, solidification, and thermally induced stress-strain during casting. A temperature-dependent HTC between the cooling water and the ingot surface, cooling water flow rate, and air gap were coupled in this model. An elasto-viscoplastic model based on high-temperature mechanical testing was used to calculate the stress during casting. The 3D integrated model can be used for the prediction of temperature, fluid flow, stress, and strain distribution in DC cast ingots. (6) The cracking propensity of DC cast ingots can be predicted using the 3D integrated model as well as thermodynamic models. Thus, an ingot cracking index based on the ratio of local stress to local alloy strength was established. Simulation results indicate that cracking propensity increases with increasing casting speed. The composition of the ingots also has a major effect on cracking formation. It was found that copper and zinc increase the cracking propensity of DC cast ingots. The goal of this Aluminum Industry of the Future (IOF) project was to assist the aluminum industry in reducing the incidence of stress

  9. Quantum corrections of (fuzzy) spacetimes from a supersymmetric reduced model with Filippov 3-algebra

    OpenAIRE

    Tomino, Dan

    2010-01-01

    1-loop vacuum energies of (fuzzy) spacetimes from a supersymmetric reduced model with Filippov 3-algebra are discussed. A_{2,2} algebra, Nambu-Poisson algebra in flat spacetime, and a Lorentzian 3-algebra are examined as 3-algebras.

  10. Novel Reduced Order in Time Models for Problems in Nonlinear Aeroelasticity Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Research is proposed for the development and implementation of state of the art, reduced order models for problems in nonlinear aeroelasticity. Highly efficient and...

  11. Reduced thermal quadrupole heat transport modeling in harmonic and transient regime scanning thermal microscopy using nanofabricated thermal probes

    Science.gov (United States)

    Bodzenta, J.; Chirtoc, M.; Juszczyk, J.

    2014-08-01

    A thermal model of a nanofabricated thermal probe (NTP) used in scanning thermal microscopy is proposed. It is based on consideration of the heat exchange channels between the electrically heated probe, a sample, and their surroundings, in transient and harmonic regimes. Three zones in the probe-sample system were distinguished and modeled using electrical analogies of heat flow through a chain of quadrupoles built from thermal resistances and thermal capacitances. The analytical transfer functions for two- and three-cell quadrupoles are derived. A reduced thermal quadrupole with merged RC elements allows for thermo-electrical modeling of the complex architecture of an NTP with a minimum of independent parameters (two resistance ratios and two time constants). The validity of the model is examined by comparing computed values of the discrete RC elements with results of finite element simulations and with experimental data. It is shown that a model consisting of two- or three-cell quadrupoles is sufficient for accurate interpretation of experimental results. The bandwidth of the NTP is limited to 10 kHz. The performance in the dc regime can be obtained simply in the limit of zero frequency. One concludes that the low NTP sensitivity to sample thermal conductivity is due, much as in the dc regime, to significant heat bypass by conduction through the cantilever and to the presence of a probe-sample contact resistance in series with the sample.
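
    To make the quadrupole formalism concrete, the minimal sketch below chains lumped RC cells as 2x2 transfer matrices and evaluates the input thermal impedance versus modulation frequency. It is illustrative only, not the authors' code; the cell values R_CELLS and C_CELLS are placeholder assumptions, and the far end is assumed isothermal.

      import numpy as np

      # Hypothetical lumped thermal resistances (K/W) and capacitances (J/K)
      R_CELLS = [2.0e5, 1.0e5]      # two-cell quadrupole, placeholder values
      C_CELLS = [1.0e-9, 5.0e-10]

      def input_impedance(f, r_cells, c_cells):
          """Thermal impedance seen at the heated end, assuming the far end
          is held at the ambient (reference) temperature."""
          w = 2.0 * np.pi * f
          m = np.eye(2, dtype=complex)
          for r, c in zip(r_cells, c_cells):
              series_r = np.array([[1.0, r], [0.0, 1.0]], dtype=complex)
              shunt_c = np.array([[1.0, 0.0], [1j * w * c, 1.0]], dtype=complex)
              m = m @ series_r @ shunt_c
          return m[0, 1] / m[1, 1]   # theta_in / phi_in when the far end is isothermal

      # In the zero-frequency limit the impedance tends to the sum of the resistances,
      # consistent with recovering the dc regime from the harmonic model.
      for f in (10.0, 1.0e3, 1.0e4, 1.0e5):
          z = input_impedance(f, R_CELLS, C_CELLS)
          print(f"f = {f:8.1f} Hz  |Z| = {abs(z):.3e} K/W  phase = {np.degrees(np.angle(z)):6.1f} deg")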

  12. Study on dynamic characteristics' change of hippocampal neuron reduced models caused by the Alzheimer's disease.

    Science.gov (United States)

    Peng, Yueping; Wang, Jue; Zheng, Chongxun

    2016-12-01

    In this paper, based on electrophysiological experimental data, a reduced hippocampal neuron model under the pathological condition of Alzheimer's disease (AD) is built by modifying parameter values. The dynamic characteristics of the reduced neuron model under the effect of AD are studied comparatively. Under direct-current stimulation, the dynamic characteristics of the AD neuron model change markedly compared with the normal neuron model. The neuron model under the AD condition undergoes a supercritical Andronov-Hopf bifurcation from the rest state to the continuous-discharge state, unlike the neuron model under the normal condition, which undergoes a saddle-node bifurcation. Under the action of AD, the neuron model therefore changes from an integrator with a bistable state into a resonator with a monostable state. The research reveals how the neuron model's dynamic characteristics change under the effect of AD and provides a theoretical basis for AD research using neurodynamics theory. PMID:26998957

  14. Reduced-order modeling for cardiac electrophysiology. Application to parameter identification

    CERN Document Server

    Boulakia, Muriel; Gerbeau, Jean-Frédéric

    2011-01-01

    A reduced-order model based on Proper Orthogonal Decomposition (POD) is proposed for the bidomain equations of cardiac electrophysiology. Its accuracy is assessed through electrocardiograms in various configurations, including myocardium infarctions and long-time simulations. We show in particular that a restitution curve can efficiently be approximated by this approach. The reduced-order model is then used in an inverse problem solved by an evolutionary algorithm. Some attempts are presented to identify ionic parameters and infarction locations from synthetic ECGs.
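
    As a generic illustration of the POD step described above (a sketch, not the authors' bidomain code), the snapshot matrix is factorized by an SVD and a linear operator is Galerkin-projected onto the leading modes; the operator A and the snapshot data below are random placeholders.

      import numpy as np

      rng = np.random.default_rng(0)

      # Placeholder full-order problem: dx/dt = A x, with a random stable A
      n = 200
      A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))

      # Placeholder snapshot matrix standing in for stored full-order solutions
      snapshots = rng.standard_normal((n, 40))

      # POD basis = left singular vectors of the snapshot matrix
      U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
      energy = np.cumsum(s**2) / np.sum(s**2)
      r = int(np.searchsorted(energy, 0.999)) + 1   # keep 99.9% of snapshot energy
      Phi = U[:, :r]

      # Galerkin projection: reduced operator of size r x r
      A_r = Phi.T @ A @ Phi
      print(f"full order n = {n}, reduced order r = {r}")
      # The reduced dynamics can be integrated cheaply and lifted back via x ~ Phi @ x_r.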

  15. Integrated assessment of acid deposition impacts using reduced-form modeling. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Sinha, R.; Small, M.J.

    1996-05-01

    Emissions of sulfates and other acidic pollutants from anthropogenic sources result in the deposition of these acidic pollutants on the earth's surface, downwind of the source. These pollutants reach surface waters, including streams and lakes, and acidify them, resulting in a change in the chemical composition of the surface water. Sometimes the water chemistry is sufficiently altered so that the lake can no longer support aquatic life. This document traces the efforts by many researchers to understand and quantify the effect of acid deposition on the water chemistry of populations of lakes, in particular the improvements to the MAGIC (Model of Acidification of Groundwater in Catchments) modeling effort, and describes its reduced-form representation in a decision and uncertainty analysis tool. Previous reduced-form approximations to the MAGIC model are discussed in detail, and their drawbacks are highlighted. An improved reduced-form model for acid neutralizing capacity is presented, which incorporates long-term depletion of the watershed acid neutralization fraction. In addition, improved fish biota models are incorporated in the integrated assessment model, which includes reduced-form models for other physical and chemical processes of acid deposition, as well as the resulting socio-economic and health related effects. The new reduced-form lake chemistry and fish biota models are applied to the Adirondacks region of New York.

  16. Can hypoxia-PET map hypoxic cell density heterogeneity accurately in an animal tumor model at a clinically obtainable image contrast?

    International Nuclear Information System (INIS)

    Background: PET allows non-invasive mapping of tumor hypoxia, but the combination of low resolution, slow tracer adduct-formation and slow clearance of unbound tracer remains problematic. Using a murine tumor with a hypoxic fraction within the clinical range and a tracer post-injection sampling time that results in clinically obtainable tumor-to-reference tissue activity ratios, we have analyzed to what extent inherent limitations actually compromise the validity of PET-generated hypoxia maps. Materials and methods: Mice bearing SCCVII tumors were injected with the PET hypoxia-marker fluoroazomycin arabinoside (FAZA), and the immunologically detectable hypoxia marker, pimonidazole. Tumors and reference tissue (muscle, blood) were harvested 0.5, 2 and 4 h after FAZA administration. Tumors were analyzed for global (well counter) and regional (autoradiography) tracer distribution and compared to pimonidazole as visualized using immunofluorescence microscopy. Results: Hypoxic fraction as measured by pimonidazole staining ranged from 0.09 to 0.32. FAZA tumor to reference tissue ratios were close to unity 0.5 h post-injection but reached values of 2 and 6 when tracer distribution time was prolonged to 2 and 4 h, respectively. A fine-scale pixel-by-pixel comparison of autoradiograms and immunofluorescence images revealed a clear spatial link between FAZA and pimonidazole-adduct signal intensities at 2 h and later. Furthermore, when using a pixel size that mimics the resolution in PET, an excellent correlation between pixel FAZA mean intensity and density of hypoxic cells was observed already at 2 h post-injection. Conclusions: Despite inherent weaknesses, PET-hypoxia imaging is able to generate quantitative tumor maps that accurately reflect the underlying microscopic reality (i.e., hypoxic cell density) in an animal model with a clinically realistic image contrast.

  17. A nonlinear manifold-based reduced order model for multiscale analysis of heterogeneous hyperelastic materials

    Science.gov (United States)

    Bhattacharjee, Satyaki; Matouš, Karel

    2016-05-01

    A new manifold-based reduced order model for nonlinear problems in multiscale modeling of heterogeneous hyperelastic materials is presented. The model relies on a global geometric framework for nonlinear dimensionality reduction (Isomap), and the macroscopic loading parameters are linked to the reduced space using a Neural Network. The proposed model provides both homogenization and localization of the multiscale solution in the context of computational homogenization. To construct the manifold, we perform a number of large three-dimensional simulations of a statistically representative unit cell using a parallel finite strain finite element solver. The manifold-based reduced order model is verified using common principles from the machine-learning community. Both homogenization and localization of the multiscale solution are demonstrated on a large three-dimensional example and the local microscopic fields as well as the homogenized macroscopic potential are obtained with acceptable engineering accuracy.
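
    A schematic sketch of the Isomap-plus-regression idea follows (illustrative only; the data, network sizes, and solver settings are placeholders, not those of the paper). It embeds placeholder "unit-cell solutions" on a low-dimensional manifold and trains a small network to map macroscopic loading parameters to the reduced coordinates.

      import numpy as np
      from sklearn.manifold import Isomap
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(1)

      # Placeholder microscale solutions: one flattened field per macroscopic load case
      loads = rng.uniform(-1.0, 1.0, size=(300, 3))            # macroscopic loading parameters
      fields = np.tanh(loads @ rng.standard_normal((3, 500)))  # stand-in for unit-cell solutions

      # Nonlinear dimensionality reduction of the solution snapshots
      embedding = Isomap(n_neighbors=10, n_components=4)
      reduced = embedding.fit_transform(fields)

      # Link the macroscopic loading parameters to the reduced coordinates
      net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
      net.fit(loads, reduced)

      # For a new load, predict reduced coordinates; lifting back to the microscopic
      # field would additionally require an inverse mapping on the manifold.
      print(net.predict(loads[:2]))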

  18. An optimal control model for reducing and trading of carbon emissions

    Science.gov (United States)

    Guo, Huaying; Liang, Jin

    2016-03-01

    A stochastic optimal control model of reducing and trading carbon emissions is established in this paper. Taking into account the reduction of carbon emission growth and the price of allowances in the market, an optimal policy is sought that minimizes the total cost of meeting the agreed emission reduction targets. The model leads to a two-dimensional HJB equation problem. By means of dimension reduction and the Cole-Hopf transformation, a semi-closed-form solution of the corresponding HJB problem is obtained under some assumptions. For more general cases, numerical calculations, analysis and comparisons are presented.

  19. Accurate market price formation model with both supply-demand and trend-following for global food prices providing policy recommendations.

    Science.gov (United States)

    Lagi, Marco; Bar-Yam, Yavni; Bertrand, Karla Z; Bar-Yam, Yaneer

    2015-11-10

    Recent increases in basic food prices are severely affecting vulnerable populations worldwide. Proposed causes such as shortages of grain due to adverse weather, increasing meat consumption in China and India, conversion of corn to ethanol in the United States, and investor speculation on commodity markets lead to widely differing implications for policy. A lack of clarity about which factors are responsible reinforces policy inaction. Here, for the first time to our knowledge, we construct a dynamic model that quantitatively agrees with food prices. The results show that the dominant causes of price increases are investor speculation and ethanol conversion. Models that just treat supply and demand are not consistent with the actual price dynamics. The two sharp peaks in 2007/2008 and 2010/2011 are specifically due to investor speculation, whereas an underlying upward trend is due to increasing demand from ethanol conversion. The model includes investor trend following as well as shifting between commodities, equities, and bonds to take advantage of increased expected returns. Claims that speculators cannot influence grain prices are shown to be invalid by direct analysis of price-setting practices of granaries. Both causes of price increase, speculative investment and ethanol conversion, are promoted by recent regulatory changes: deregulation of the commodity markets, and policies promoting the conversion of corn to ethanol. Rapid action is needed to reduce the impacts of the price increases on global hunger. PMID:26504216

  20. Ground-level ozone concentration over Spain: an application of Kalman Filter post-processing to reduce model uncertainties

    Science.gov (United States)

    Sicardi, V.; Ortiz, J.; Rincón, A.; Jorba, O.; Pay, M. T.; Gassó, S.; Baldasano, J. M.

    2011-02-01

    The CALIOPE air quality modelling system, namely WRF-ARW/HERMES-EMEP/CMAQ/BSC-DREAM8b, has been used to perform the simulation of ground level O3 concentration for the year 2004, over the Iberian Peninsula. We use this system to study the daily ground-level O3 maximum. We investigate the use of a post-processing such as the Kalman Filter bias-adjustment technique to improve the simulated O3 maximum. The Kalman Filter bias-adjustment technique is a recursive algorithm to optimally estimate bias-adjustment terms from previous measurements and model results. The bias-adjustment technique is found to improve the simulated O3 maximum for the entire year and the whole domain. The corrected simulation presents improvements in statistical indicators such as correlation, root mean square error, mean bias, standard deviation, and gross error. After the post-processing the exceedances of O3 concentration limits, as established by the European Directive 2008/50/CE, are better reproduced and the uncertainty of the modelling system is reduced from 20% to 7.5%. This uncertainty in the model results is below the EU-established limit of 50%. Significant improvements in the O3 average daily cycle and in its amplitude are also observed after the post-processing. The systematic improvements in the O3 maximum simulations suggest that the Kalman Filter post-processing method is a suitable technique to reproduce accurate estimates of ground-level O3 concentration.
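
    A minimal scalar Kalman-filter bias-adjustment recursion in the spirit of the technique described above is sketched here (not the CALIOPE implementation; the noise-variance ratio and the synthetic data are assumed for illustration).

      import numpy as np

      def kf_bias_adjust(forecasts, observations, ratio=0.1):
          """Sequentially estimate the forecast bias with a scalar Kalman filter
          and return bias-corrected forecasts. `ratio` is an assumed
          process-to-observation error variance ratio (a tuning parameter)."""
          b, p = 0.0, 1.0              # initial bias estimate and its error variance
          sigma_v2 = 1.0               # observation-error variance (arbitrary scale)
          sigma_w2 = ratio * sigma_v2  # process-noise variance
          corrected = np.empty_like(forecasts, dtype=float)
          for t, (f, o) in enumerate(zip(forecasts, observations)):
              corrected[t] = f - b          # correct today's forecast with the latest bias
              p = p + sigma_w2              # prediction step
              k = p / (p + sigma_v2)        # Kalman gain
              b = b + k * ((f - o) - b)     # update bias with today's measured error
              p = (1.0 - k) * p
          return corrected

      # Toy example: a model with a persistent positive bias
      rng = np.random.default_rng(2)
      obs = 60 + 10 * np.sin(np.linspace(0, 6, 120))
      fc = obs + 8 + rng.normal(0, 3, size=obs.size)
      print("raw mean bias     :", np.mean(fc - obs))
      print("adjusted mean bias:", np.mean(kf_bias_adjust(fc, obs) - obs))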

  1. Reduced Order Modeling for Prediction and Control of Large-Scale Systems.

    Energy Technology Data Exchange (ETDEWEB)

    Kalashnikova, Irina; Arunajatesan, Srinivasan; Barone, Matthew Franklin; van Bloemen Waanders, Bart Gustaaf; Fike, Jeffrey A.

    2014-05-01

    This report describes work performed from June 2012 through May 2014 as a part of a Sandia Early Career Laboratory Directed Research and Development (LDRD) project led by the first author. The objective of the project is to investigate methods for building stable and efficient proper orthogonal decomposition (POD)/Galerkin reduced order models (ROMs): models derived from a sequence of high-fidelity simulations but having a much lower computational cost. Since they are, by construction, small and fast, ROMs can enable real-time simulations of complex systems for on-the-spot analysis, control and decision-making in the presence of uncertainty. Of particular interest to Sandia is the use of ROMs for the quantification of the compressible captive-carry environment, simulated for the design and qualification of nuclear weapons systems. It is an unfortunate reality that many ROM techniques are computationally intractable or lack an a priori stability guarantee for compressible flows. For this reason, this LDRD project focuses on the development of techniques for building provably stable projection-based ROMs. Model reduction approaches based on continuous as well as discrete projection are considered. In the first part of this report, an approach for building energy-stable Galerkin ROMs for linear hyperbolic or incompletely parabolic systems of partial differential equations (PDEs) using continuous projection is developed. The key idea is to apply a transformation induced by the Lyapunov function for the system, and to build the ROM in the transformed variables. It is shown that, for many PDE systems including the linearized compressible Euler and linearized compressible Navier-Stokes equations, the desired transformation is induced by a special inner product, termed the “symmetry inner product”. Attention is then turned to nonlinear conservation laws. A new transformation and corresponding energy-based inner product for the full nonlinear compressible Navier

  2. CLASH-VLT: INSIGHTS ON THE MASS SUBSTRUCTURES IN THE FRONTIER FIELDS CLUSTER MACS J0416.1–2403 THROUGH ACCURATE STRONG LENS MODELING

    Energy Technology Data Exchange (ETDEWEB)

    Grillo, C. [Dark Cosmology Centre, Niels Bohr Institute, University of Copenhagen, Juliane Maries Vej 30, DK-2100 Copenhagen (Denmark); Suyu, S. H.; Umetsu, K. [Institute of Astronomy and Astrophysics, Academia Sinica, P.O. Box 23-141, Taipei 10617, Taiwan (China); Rosati, P.; Caminha, G. B. [Dipartimento di Fisica e Scienze della Terra, Università degli Studi di Ferrara, Via Saragat 1, I-44122 Ferrara (Italy); Mercurio, A. [INAF - Osservatorio Astronomico di Capodimonte, Via Moiariello 16, I-80131 Napoli (Italy); Balestra, I.; Munari, E.; Nonino, M.; De Lucia, G.; Borgani, S.; Biviano, A.; Girardi, M. [INAF - Osservatorio Astronomico di Trieste, via G. B. Tiepolo 11, I-34143, Trieste (Italy); Lombardi, M. [Dipartimento di Fisica, Università degli Studi di Milano, via Celoria 16, I-20133 Milano (Italy); Gobat, R. [Laboratoire AIM-Paris-Saclay, CEA/DSM-CNRS-Universitè Paris Diderot, Irfu/Service d' Astrophysique, CEA Saclay, Orme des Merisiers, F-91191 Gif sur Yvette (France); Coe, D.; Koekemoer, A. M.; Postman, M. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21208 (United States); Zitrin, A. [Cahill Center for Astronomy and Astrophysics, California Institute of Technology, MS 249-17, Pasadena, CA 91125 (United States); Halkola, A., E-mail: grillo@dark-cosmology.dk; and others

    2015-02-10

    We present a detailed mass reconstruction and a novel study on the substructure properties in the core of the Cluster Lensing And Supernova survey with Hubble (CLASH) and Frontier Fields galaxy cluster MACS J0416.1–2403. We show and employ our extensive spectroscopic data set taken with the VIsible Multi-Object Spectrograph instrument as part of our CLASH-VLT program, to confirm spectroscopically 10 strong lensing systems and to select a sample of 175 plausible cluster members to a limiting stellar mass of log(M_*/M_☉) ≅ 8.6. We reproduce the measured positions of a set of 30 multiple images with a remarkable median offset of only 0.3" by means of a comprehensive strong lensing model comprised of two cluster dark-matter halos, represented by cored elliptical pseudo-isothermal mass distributions, and the cluster member components, parameterized with dual pseudo-isothermal total mass profiles. The latter have total mass-to-light ratios increasing with the galaxy HST/WFC3 near-IR (F160W) luminosities. The measurement of the total enclosed mass within the Einstein radius is accurate to ∼5%, including the systematic uncertainties estimated from six distinct mass models. We emphasize that the use of multiple-image systems with spectroscopic redshifts and knowledge of cluster membership based on extensive spectroscopic information is key to constructing robust high-resolution mass maps. We also produce magnification maps over the central area that is covered with HST observations. We investigate the galaxy contribution, both in terms of total and stellar mass, to the total mass budget of the cluster. When compared with the outcomes of cosmological N-body simulations, our results point to a lack of massive subhalos in the inner regions of simulated clusters with total masses similar to that of MACS J0416.1–2403. Our findings of the location and shape of the cluster dark-matter halo density profiles and on the cluster substructures provide

  3. On the Nonlinear Structural Analysis of Wind Turbine Blades using Reduced Degree-of-Freedom Models

    DEFF Research Database (Denmark)

    Holm-Jørgensen, Kristian; Larsen, Jesper Winther; Nielsen, Søren R.K.

    2008-01-01

    and stability conditions. It is demonstrated that the response predicted by such models in some cases becomes unstable or chaotic. However, as a consequence of the energy flow the stability is increased and the tendency of chaotic vibrations is reduced as the number of modes is increased. The FE model...

  4. Forward Modeling of Reduced Power Spectra From Three-Dimensional k-Space

    OpenAIRE

    von Papen, Michael; Saur, Joachim

    2015-01-01

    We present results from a numerical forward model to evaluate one-dimensional reduced power spectral densities (PSD) from arbitrary energy distributions in $\mathbf{k}$-space. In this model, we can separately calculate the diagonal elements of the spectral tensor for incompressible axisymmetric turbulence with vanishing helicity. Given a critically balanced turbulent cascade with $k_\| \sim k_\perp^\alpha$ and $\alpha

  5. A Reduced Form Framework for Modeling Volatility of Speculative Prices based on Realized Variation Measures

    DEFF Research Database (Denmark)

    Andersen, Torben G.; Bollerslev, Tim; Huang, Xin

    Building on realized variance and bi-power variation measures constructed from high-frequency financial prices, we propose a simple reduced form framework for effectively incorporating intraday data into the modeling of daily return volatility. We decompose the total daily return variability into...... combination of an ACH model for the time-varying jump intensities coupled with a relatively simple log-linear structure for the jump sizes. Lastly, we discuss how the resulting reduced form model structure for each of the three components may be used in the construction of out-of-sample forecasts for the...

  6. A reduced model for ion temperature gradient turbulent transport in helical plasmas

    International Nuclear Information System (INIS)

    A novel reduced model for ion temperature gradient (ITG) turbulent transport in helical plasmas is presented. The model enables one to predict nonlinear gyrokinetic simulation results from linear gyrokinetic analyses. It is shown from nonlinear gyrokinetic simulations of the ITG turbulence in helical plasmas that the transport coefficient can be expressed as a function of the turbulent fluctuation level and the averaged zonal flow amplitude. Then, the reduced model for the turbulent ion heat diffusivity is derived by representing the nonlinear turbulent fluctuations and zonal flow amplitude in terms of the linear growth rate of the ITG instability and the linear response of the zonal flow potentials. It is confirmed that the reduced transport model results are in good agreement with those from nonlinear gyrokinetic simulations for high ion temperature plasmas in the Large Helical Device. (author)

  7. Resilient model approximation for Markov jump time-delay systems via reduced model with hierarchical Markov chains

    Science.gov (United States)

    Zhu, Yanzheng; Zhang, Lixian; Sreeram, Victor; Shammakh, Wafa; Ahmad, Bashir

    2016-10-01

    In this paper, the resilient model approximation problem for a class of discrete-time Markov jump time-delay systems with input sector-bounded nonlinearities is investigated. A linearised reduced-order model is determined with mode changes subject to domination by a hierarchical Markov chain containing two different nonhomogeneous Markov chains. Hence, the reduced-order model obtained not only reflects the dependence of the original system but also models the external influence related to its mode changes. Sufficient conditions formulated in terms of bilinear matrix inequalities for the existence of such models are established, such that the resulting error system is stochastically stable and has a guaranteed l2-l∞ error performance. A linear matrix inequality optimisation coupled with a line search is exploited to solve for the corresponding reduced-order systems. The potential and effectiveness of the developed theoretical results are demonstrated via a numerical example.

  8. Incorporating Prior Knowledge for Quantifying and Reducing Model-Form Uncertainty in RANS Simulations

    CERN Document Server

    Wang, Jian-Xun; Xiao, Heng

    2015-01-01

    Simulations based on Reynolds-Averaged Navier-Stokes (RANS) models have been used to support high-consequence decisions related to turbulent flows. Apart from the deterministic model predictions, the decision makers are often equally concerned about the confidence in those predictions. Among the uncertainties in RANS simulations, the model-form uncertainty is an important or even a dominant source. Therefore, quantifying and reducing the model-form uncertainties in RANS simulations are of critical importance for making risk-informed decisions. Researchers in statistics communities have made efforts on this issue by considering numerical models as black boxes. However, this physics-neutral approach is not the most efficient use of data, and is not practical for most engineering problems. Recently, we proposed an open-box, Bayesian framework for quantifying and reducing model-form uncertainties in RANS simulations by incorporating observation data and physics-prior knowledge. It can incorporate the information from the vast...

  9. Interpolation-based reduced-order models to predict transient thermal output for enhanced geothermal systems

    OpenAIRE

    Mudunuru, M. K.; Karra, S.; D. R. Harp; Guthrie, G. D.; Viswanathan, H. S.

    2016-01-01

    The goal of this paper is to assess the utility of Reduced-Order Models (ROMs) developed from 3D physics-based models for predicting transient thermal power output for an enhanced geothermal reservoir while explicitly accounting for uncertainties in the subsurface system and site-specific details. Numerical simulations are performed based on Latin Hypercube Sampling (LHS) of model inputs drawn from uniform probability distributions. Key sensitive parameters are identified from these simulatio...

  10. Study of the nutrient and plankton dynamics in Lake Tanganyika using a reduced-gravity model

    OpenAIRE

    Naithani, Jaya; Darchambeau, François; Deleersnijder, Eric; Descy, Jean-Pierre; Wolanski, Eric

    2007-01-01

    An eco-hydrodynamic (ECOH) model is proposed for Lake Tanganyika to study the plankton productivity. The hydrodynamic sub-model solves the non-linear, reduced-gravity equations in which wind is the dominant forcing. The ecological sub-model for the epilimnion comprises nutrients, primary production, phytoplankton biomass and zooplankton biomass. In the absence of significant terrestrial input of nutrients, the nutrient loss is compensated for by seasonal, wind-driven, turbulent entrainment of...

  11. Use of Paired Simple and Complex Models to Reduce Predictive Bias and Quantify Uncertainty

    DEFF Research Database (Denmark)

    Doherty, John; Christensen, Steen

    2011-01-01

    of these details born of the necessity for model outputs to replicate observations of historical system behavior. In contrast, the rapid run times and general numerical reliability of simple models often promulgates good calibration and ready implementation of sophisticated methods of calibration...... into the costs of model simplification, and into how some of these costs may be reduced. It then describes a methodology for paired model usage through which predictive bias of a simplified model can be detected and corrected, and postcalibration predictive uncertainty can be quantified. The methodology...

  12. EXPERIMENTS OF A REDUCED GRID IN LASG/IAP WORLD OCEAN GENERAL CIRCULATION MODELS (OGCMs)

    Institute of Scientific and Technical Information of China (English)

    LIU Xiying; LIU Hailong; ZHANG Xuehong; YU Rucong

    2006-01-01

    Due to the decrease in grid size associated with the convergence of meridians toward the poles in spherical coordinates, the time steps in many global climate models using the finite-difference method are restricted to be unpleasantly small. To overcome this problem, a reduced grid is introduced into the LASG/IAP world ocean general circulation models. The reduced grid is first implemented successfully in the coarser-resolution version model L30T63, and is then carried out in the improved version model LICOM with finer resolution. In the experiment with model L30T63, even with the time step unchanged, the execution time per model run is shortened significantly owing to the decrease in the number of grid points and in the filtering operations at high latitudes. Results from additional experiments with L30T63 show that the time step of integration can be quadrupled at most in the reduced grid with a refinement ratio of 3. In the experiment with model LICOM, with the model's original time step unchanged, the model domain is extended to the whole globe from its original configuration, with the grid point at the North Pole treated as an isolated island, and the results of the experiment are shown to be acceptable.

  13. Charge density distributions derived from smoothed electrostatic potential functions: design of protein reduced point charge models.

    Science.gov (United States)

    Leherte, Laurence; Vercauteren, Daniel P

    2011-10-01

    To generate reduced point charge models of proteins, we developed an original approach to hierarchically locate extrema in charge density distribution functions built from the Poisson equation applied to smoothed molecular electrostatic potential (MEP) functions. A charge fitting program was used to assign charge values to the so-obtained reduced representations. In continuation to a previous work, the Amber99 force field was selected. To easily generate reduced point charge models for protein structures, a library of amino acid templates was designed. Applications to four small peptides, a set of 53 protein structures, and four KcsA ion channel models, are presented. Electrostatic potential and solvation free energy values generated by the reduced models are compared with the corresponding values obtained using the original set of atomic charges. Results are in closer agreement with the original all-atom electrostatic properties than those obtained with a previous reduced model that was directly built from the smoothed MEP functions [Leherte and Vercauteren in J Chem Theory Comput 5:3279-3298, 2009]. PMID:21915750
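
    The charge-fitting step mentioned above can be illustrated with a simple least-squares sketch that assigns values to a reduced set of point charges so that they reproduce a reference electrostatic potential on surrounding grid points. The geometry, units, and site selection below are toy assumptions, not the authors' program.

      import numpy as np

      rng = np.random.default_rng(3)

      # Reference (all-atom) charges and positions, and a reduced set of charge sites
      atom_xyz = rng.normal(0.0, 3.0, size=(50, 3))
      atom_q = rng.normal(0.0, 0.3, size=50)
      atom_q -= atom_q.mean()                      # neutral molecule for simplicity
      site_xyz = atom_xyz[::10]                    # 5 reduced sites (placeholder choice)

      # Grid points surrounding the molecule where the MEP is matched
      grid = rng.normal(0.0, 8.0, size=(400, 3))

      def coulomb_matrix(points, centers):
          d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=-1)
          return 1.0 / d                           # Coulomb kernel (charges in e, distances in a.u.)

      phi_ref = coulomb_matrix(grid, atom_xyz) @ atom_q       # reference potential
      A = coulomb_matrix(grid, site_xyz)

      # Least-squares fit of the reduced charges; a Lagrange constraint on the total
      # charge could be added but is omitted here for brevity.
      q_fit, *_ = np.linalg.lstsq(A, phi_ref, rcond=None)
      rms = np.sqrt(np.mean((A @ q_fit - phi_ref) ** 2))
      print("fitted reduced charges:", np.round(q_fit, 3), " RMS error:", rms)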

  14. Formulation of Japanese consensus-building model for HLW geological disposal site determination. 4. The influence of the accurate information on the decision making

    International Nuclear Information System (INIS)

    An investigation was carried out into how accurate scientific information affects the perception of risk. To verify this investigation, dialogue seminars were held. Based on the outcomes of these investigations, an attribution analysis was performed to verify the factors affecting risk perception and acceptance relevant to consensus-building for HLW geological disposal site determination. (author)

  15. Reducing hydrologic model uncertainty in monthly streamflow predictions using multimodel combination

    Science.gov (United States)

    Li, Weihua; Sankarasubramanian, A.

    2012-12-01

    Model errors are inevitable in any prediction exercise. One approach that is currently gaining attention in reducing model errors is combining multiple models to develop improved predictions. The rationale behind this approach primarily lies in the premise that optimal weights could be derived for each model so that the developed multimodel predictions will result in improved predictions. A new dynamic approach (MM-1) to combine multiple hydrological models by evaluating their performance/skill contingent on the predictor state is proposed. We combine two hydrological models, the "abcd" model and the variable infiltration capacity (VIC) model, to develop multimodel streamflow predictions. To quantify precisely under what conditions the multimodel combination results in improved predictions, we compare multimodel scheme MM-1 with the optimal model combination scheme (MM-O) by employing them in predicting the streamflow generated from a known hydrologic model (abcd model or VIC model) with heteroscedastic error variance as well as from a hydrologic model that exhibits a different structure than that of the candidate models (i.e., "abcd" model or VIC model). Results from the study show that streamflow estimated from single models performed better than multimodels under almost no measurement error. However, under increased measurement errors and model structural misspecification, both multimodel schemes (MM-1 and MM-O) consistently performed better than the single model prediction. Overall, MM-1 performs better than MM-O in predicting the monthly flow values as well as in predicting extreme monthly flows. Comparison of the weights obtained from each candidate model reveals that as measurement errors increase, MM-1 assigns weights equally to all the models, whereas MM-O always assigns higher weights to the best-performing candidate model over the calibration period. Applying the multimodel algorithms for predicting streamflows over four different sites revealed that MM-1 performs
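
    A hedged sketch of the optimal-weight idea (in the spirit of MM-O; the state-contingent MM-1 scheme would additionally condition these weights on a predictor state) is given below. The weights are obtained by least squares over a calibration period; the streamflow data here are synthetic placeholders.

      import numpy as np

      rng = np.random.default_rng(4)

      # Synthetic calibration period: observed flow and two candidate-model simulations
      obs = 50 + 20 * np.sin(np.linspace(0, 12, 240)) + rng.normal(0, 3, 240)
      model_a = obs + rng.normal(2, 6, 240)       # biased, noisier candidate
      model_b = 0.9 * obs + rng.normal(0, 4, 240)

      # Least-squares combination weights over the calibration period
      X = np.column_stack([model_a, model_b])
      w, *_ = np.linalg.lstsq(X, obs, rcond=None)
      combined = X @ w

      def rmse(x, y):
          return np.sqrt(np.mean((x - y) ** 2))

      print("weights:", np.round(w, 3))
      print("RMSE A, B, combined:",
            round(rmse(model_a, obs), 2), round(rmse(model_b, obs), 2), round(rmse(combined, obs), 2))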

  16. Identifying the Reducing Resistance to Change Phase in an Organizational Change Model

    OpenAIRE

    Daniela Bradutanu

    2012-01-01

    In this article we examine where in an organizational change process it is better to place the reducing resistance to change phase, so that employees would accept the new changes easier and not manifest too much resistance. After analyzing twelve organizational change models we have concluded that the place of the reducing resistance to change phase in an organizational change process is not the same, it being modified according to the type of change. The results of this study are helpful for...

  17. Approaches for Reduced Order Modeling of Electrically Actuated von Karman Microplates

    KAUST Repository

    Saghir, Shahid

    2016-07-25

    This article presents and compares different approaches to develop reduced order models for nonlinear von Karman rectangular microplates actuated by nonlinear electrostatic forces. The reduced-order models aim to investigate the static and dynamic behavior of the plate under small and large actuation forces. A fully clamped microplate is considered. Different types of basis functions are used in conjunction with the Galerkin method to discretize the governing equations. First we investigate the convergence with the number of modes retained in the model. Then, for validation purposes, a comparison of the static results is made with the results calculated by a nonlinear finite element model. The linear eigenvalue problem for the plate under the electrostatic force is solved for a wide range of voltages up to pull-in. Results among the various reduced-order models are compared and are also validated by comparing to results of the finite-element model. Further, the reduced order models are employed to capture the forced dynamic response of the microplate under small and large vibration amplitudes. Comparisons of the different approaches are made for this case. Keywords: electrically actuated microplates, static analysis, dynamics of microplates, diaphragm vibration, large amplitude vibrations, nonlinear dynamics

  18. Strategies for reducing the climate noise in model simulations: ensemble runs versus a long continuous run

    Science.gov (United States)

    Decremer, Damien; Chung, Chul E.; Räisänen, Petri

    2015-03-01

    Climate modelers often integrate the model with constant forcing over a long time period, and make an average over the period in order to reduce climate noise. If the time series is persistent, as opposed to rapidly varying, such an average does not reduce noise efficiently. In this case, ensemble runs, which ideally represent independent runs, can reduce noise more efficiently. We quantify the noise-reduction gain of using ensemble runs instead of a long continuous run in constant-forcing simulations. We find that in terms of the amplitude of the noise, a continuous simulation of 30 years may be equivalent to as few as five 3-year long ensemble runs in a slab ocean-atmosphere coupled model and as few as two 3-year long ensemble runs in a fully coupled model. The outperformance of ensemble runs over a continuous run is strictly a function of the persistence of the time series. We find that persistence depends on model, location and variable, and that persistence in surface air temperature has robust spatial structures in coupled models. We demonstrate that the lag-1 year autocorrelation represents persistence fairly well, but the use of lag-1 to lag-5 year autocorrelations represents the persistence far more adequately. Furthermore, there is more persistence in coupled model output than in the output of a first-order autoregressive model with the same lag-1 autocorrelation.
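
    The persistence argument can be made concrete with the standard AR(1) result that the effective number of independent samples in a mean of n correlated years is roughly n(1 - r1)/(1 + r1), where r1 is the lag-1 autocorrelation. The Monte Carlo sketch below is a generic illustration with synthetic AR(1) noise and an assumed r1, not output from the models used in the paper.

      import numpy as np

      rng = np.random.default_rng(5)

      def ar1_series(n, r1):
          """AR(1) series with lag-1 autocorrelation r1 and unit marginal variance."""
          x = np.empty(n)
          x[0] = rng.normal()
          for t in range(1, n):
              x[t] = r1 * x[t - 1] + np.sqrt(1.0 - r1 ** 2) * rng.normal()
          return x

      r1, n_trials = 0.7, 5000   # assumed strong persistence

      # Noise of a 30-year mean from one continuous run ...
      means_cont = [ar1_series(30, r1).mean() for _ in range(n_trials)]
      # ... versus the mean over five independent 3-year ensemble members
      means_ens = [np.mean([ar1_series(3, r1).mean() for _ in range(5)]) for _ in range(n_trials)]

      print(f"std of 30-yr continuous mean : {np.std(means_cont):.3f}")
      print(f"std of 5 x 3-yr ensemble mean: {np.std(means_ens):.3f}")
      # The asymptotic formula gives about 5.3 effective years for the continuous run,
      # which is why a handful of short independent members can match it.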

  19. REDUCING PROCESS VARIABILITY BY USING DMAIC MODEL: A CASE STUDY IN BANGLADESH

    Directory of Open Access Journals (Sweden)

    Ripon Kumar Chakrabortty

    2013-03-01

    Nowadays, many leading manufacturing industries have started to practice Six Sigma and Lean manufacturing concepts to boost their productivity as well as the quality of their products. In this paper, the Six Sigma approach has been used to reduce the process variability of a food processing industry in Bangladesh. The DMAIC (Define, Measure, Analyze, Improve, Control) model has been used to implement the Six Sigma philosophy, with the five phases of the model structured step by step. Different tools of Total Quality Management, Statistical Quality Control and Lean Manufacturing, such as quality function deployment, the p control chart, the fishbone diagram, the Analytical Hierarchy Process and Pareto analysis, have been used in different phases of the DMAIC model. Process variability is reduced by identifying the root causes of defects and addressing them. The ultimate goal of this study is to make the process lean and increase its sigma level.

  20. Novel Framework for Reduced Order Modeling of Aero-engine Components

    Science.gov (United States)

    Safi, Ali

    The present study focuses on the popular dynamic reduction methods used in design of complex assemblies (millions of Degrees of Freedom) where numerous iterations are involved to achieve the final design. Aerospace manufacturers such as Rolls Royce and Pratt & Whitney are actively seeking techniques that reduce computational time while maintaining accuracy of the models. This involves modal analysis of components with complex geometries to determine the dynamic behavior due to non-linearity and complicated loading conditions. In such a case the sub-structuring and dynamic reduction techniques prove to be an efficient tool to reduce design cycle time. The components whose designs are finalized can be dynamically reduced to mass and stiffness matrices at the boundary nodes in the assembly. These matrices conserve the dynamics of the component in the assembly, and thus avoid repeated calculations during the analysis runs for design modification of other components. This thesis presents a novel framework for the modeling and meshing of a complex structure, in this case an aero-engine casing. In this study the effect of meshing techniques on the run time is highlighted. The modal analysis is carried out using an extremely fine mesh to ensure all minor details in the structure are captured correctly in the Finite Element (FE) model. This is used as the reference model, to compare against the results of the reduced model. The study also shows the conditions/criteria under which dynamic reduction can be implemented effectively, proving the accuracy of the Craig-Bampton (C.B.) method and the limitations of static condensation. The study highlights the longer runtime needed to produce the reduced matrices of components compared to the overall runtime of the complete unreduced model, although once the components are reduced, the assembly runs are significantly faster. Hence the decision to use Component Mode Synthesis (CMS) is to be taken judiciously considering the number of
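
    To make the reduction idea concrete, a minimal Guyan (static condensation) sketch on random matrices is shown below; the full Craig-Bampton method additionally retains a truncated set of fixed-interface modes. This is illustrative only, not the thesis workflow, and the matrix sizes are placeholders.

      import numpy as np

      rng = np.random.default_rng(6)

      # Placeholder symmetric positive-definite stiffness and mass matrices
      n, n_boundary = 60, 8
      A = rng.standard_normal((n, n))
      K = A @ A.T + n * np.eye(n)
      B = rng.standard_normal((n, n))
      M = B @ B.T + n * np.eye(n)

      b = np.arange(n_boundary)          # boundary (retained) DOFs
      i = np.arange(n_boundary, n)       # interior (condensed) DOFs

      Kii, Kib = K[np.ix_(i, i)], K[np.ix_(i, b)]

      # Static (Guyan) transformation: interior DOFs follow the boundary statically
      T = np.vstack([np.eye(n_boundary), -np.linalg.solve(Kii, Kib)])

      K_red = T.T @ K @ T                # equals Kbb - Kbi @ inv(Kii) @ Kib
      M_red = T.T @ M @ T

      print("reduced matrices:", K_red.shape, M_red.shape)
      # Craig-Bampton would augment T with fixed-interface eigenvectors of (Kii, Mii)
      # to capture interior dynamics, which is why it is more accurate for modal analysis.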

  1. Dynamic Modeling of the Human Coagulation Cascade Using Reduced Order Effective Kinetic Models

    Directory of Open Access Journals (Sweden)

    Adithya Sagar

    2015-03-01

    In this study, we present a novel modeling approach which combines ordinary differential equation (ODE) modeling with logical rules to simulate an archetype biochemical network, the human coagulation cascade. The model consisted of five differential equations augmented with several logical rules describing regulatory connections between model components, and unmodeled interactions in the network. This formulation was more than an order of magnitude smaller than current coagulation models, because many of the mechanistic details of coagulation were encoded as logical rules. We estimated an ensemble of likely model parameters (N = 20) from in vitro extrinsic coagulation data sets, with and without inhibitors, by minimizing the residual between model simulations and experimental measurements using particle swarm optimization (PSO). Each parameter set in our ensemble corresponded to a unique particle in the PSO. We then validated the model ensemble using thrombin data sets that were not used during training. The ensemble predicted thrombin trajectories for conditions not used for model training, including thrombin generation for normal and hemophilic coagulation in the presence of platelets (a significant unmodeled component). We then used flux analysis to understand how the network operated in a variety of conditions, and global sensitivity analysis to identify which parameters controlled the performance of the network. Taken together, the hybrid approach produced a surprisingly predictive model given its small size, suggesting the proposed framework could also be used to dynamically model other biochemical networks, including intracellular metabolic networks, gene expression programs or potentially even cell free metabolic systems.
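
    A minimal particle swarm optimization sketch follows, fitting a toy least-squares objective rather than the coagulation model itself; the swarm settings and the test problem are placeholder assumptions.

      import numpy as np

      rng = np.random.default_rng(7)

      def pso(objective, bounds, n_particles=20, n_iter=200, w=0.7, c1=1.5, c2=1.5):
          """Basic particle swarm optimizer; each particle is a candidate parameter set."""
          lo, hi = bounds
          dim = lo.size
          x = rng.uniform(lo, hi, size=(n_particles, dim))
          v = np.zeros_like(x)
          pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
          gbest = pbest[np.argmin(pbest_f)].copy()
          for _ in range(n_iter):
              r1, r2 = rng.random((2, n_particles, dim))
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
              x = np.clip(x + v, lo, hi)
              f = np.array([objective(p) for p in x])
              improved = f < pbest_f
              pbest[improved], pbest_f[improved] = x[improved], f[improved]
              gbest = pbest[np.argmin(pbest_f)].copy()
          return gbest, pbest_f.min()

      # Toy objective: recover parameters of a decaying oscillation from noisy "data"
      t = np.linspace(0, 10, 200)
      true = np.array([1.3, 0.4])
      data = np.exp(-true[1] * t) * np.sin(true[0] * t) + rng.normal(0, 0.02, t.size)
      residual = lambda p: np.mean((np.exp(-p[1] * t) * np.sin(p[0] * t) - data) ** 2)

      best, best_f = pso(residual, (np.array([0.1, 0.0]), np.array([5.0, 2.0])))
      print("recovered parameters:", np.round(best, 3), " objective:", best_f)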

  2. Design and implementation of linear controllers for the active control of reduced models of thin-walled structures

    OpenAIRE

    Ghareeb, Nader

    2013-01-01

    The main objectives of this work are twofold: 1.) to create reduced models of smart structures that are fully representative and 2.) to design different linear controllers and implement them into the active control of these reduced models. After a short introduction to the theory of piezoelectricity, the reduced model (super element model) is created starting from the finite element model. Damping properties are also calculated and added to the model. The relation between electrical and mecha...

  3. Non-reference Objective Quality Evaluation for Noise-Reduced Speech Using Overall Quality Estimation Model

    OpenAIRE

    Yamada, Takeshi; Kasuya, Yuki; Shinohara, Yuki; KITAWAKI, Nobuhiko

    2010-01-01

    This paper describes non-reference objective quality evaluation for noise-reduced speech. First, a subjective test is conducted in accordance with ITU-T Rec. P.835 to obtain the speech quality, the noise quality, and the overall quality of noise-reduced speech. Based on the results, we then propose an overall quality estimation model. The unique point of the proposed model is that the estimation of the overall quality is done only using the previously estimated speech quality and noise qualit...

  4. Hierarchical mixture of experts and diagnostic modeling approach to reduce hydrologic model structural uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Moges, Edom M.; Demissie, Yonas; Li, Hongyi

    2016-05-18

    In most water resources applications, a single model structure might be inadequate to capture the dynamic multi-scale interactions among different hydrological processes. Calibrating single models for dynamic catchments, where multiple dominant processes exist, can result in displacement of errors from structure to parameters, which in turn leads to over-correction and biased predictions. An alternative to a single model structure is to develop local expert structures that are effective in representing the dominant components of the hydrologic process and adaptively integrate them based on an indicator variable. In this study, the Hierarchical Mixture of Experts (HME) framework is applied to integrate expert model structures representing the different components of the hydrologic process. Various signature diagnostic analyses are used to assess the presence of multiple dominant processes and the adequacy of a single model, as well as to identify the structures of the expert models. The approaches are applied for two distinct catchments, the Guadalupe River (Texas) and the French Broad River (North Carolina) from the Model Parameter Estimation Experiment (MOPEX), using different structures of the HBV model. The results show that the HME approach performs better than the single model for the Guadalupe catchment, where multiple dominant processes are evident from the diagnostic measures, whereas the diagnostics and aggregated performance measures show that the French Broad has a homogeneous catchment response, making the single model adequate to capture it.

  5. Modelling mitigation options to reduce diffuse nitrogen water pollution from agriculture.

    Science.gov (United States)

    Bouraoui, Fayçal; Grizzetti, Bruna

    2014-01-15

    Agriculture is responsible for large scale water quality degradation and is estimated to contribute around 55% of the nitrogen entering the European Seas. The key policy instrument for protecting inland, transitional and coastal water resources is the Water Framework Directive (WFD). Reducing nutrient losses from agriculture is crucial to the successful implementation of the WFD. There are several mitigation measures that can be implemented to reduce nitrogen losses from agricultural areas to surface and ground waters. For the selection of appropriate measures, models are useful for quantifying the expected impacts and the associated costs. In this article we review some of the models used in Europe to assess the effectiveness of nitrogen mitigation measures, ranging from fertilizer management to the construction of riparian areas and wetlands. We highlight how the complexity of models is correlated with the type of scenarios that can be tested, with conceptual models mostly used to evaluate the impact of reduced fertilizer application, and the physically-based models used to evaluate the timing and location of mitigation options and the response times. We underline the importance of considering the lag time between the implementation of measures and effects on water quality. Models can be effective tools for targeting mitigation measures (identifying critical areas and timing), for evaluating their cost effectiveness, for taking into consideration pollution swapping and considering potential trade-offs in contrasting environmental objectives. Models are also useful for involving stakeholders during the development of catchments mitigation plans, increasing their acceptability. PMID:23998504

  6. Accurate and Fully Automatic Hippocampus Segmentation Using Subject-Specific 3D Optimal Local Maps Into a Hybrid Active Contour Model

    OpenAIRE

    ZARPALAS, Dimitrios; Gkontra, Polyxeni; Daras, Petros; Maglaveras, Nicos

    2014-01-01

    Assessing the structural integrity of the hippocampus (HC) is an essential step toward prevention, diagnosis, and follow-up of various brain disorders due to the implication of the structural changes of the HC in those disorders. In this respect, the development of automatic segmentation methods that can accurately, reliably, and reproducibly segment the HC has attracted considerable attention over the past decades. This paper presents an innovative 3-D fully automatic method to be used on to...

  7. Membrane Compartmentalization Reducing the Mobility of Lipids and Proteins within a Model Plasma Membrane.

    Science.gov (United States)

    Koldsø, Heidi; Reddy, Tyler; Fowler, Philip W; Duncan, Anna L; Sansom, Mark S P

    2016-09-01

    The cytoskeleton underlying cell membranes may influence the dynamic organization of proteins and lipids within the bilayer by immobilizing certain transmembrane (TM) proteins and forming corrals within the membrane. Here, we present coarse-grained resolution simulations of a biologically realistic membrane model of asymmetrically organized lipids and TM proteins. We determine the effects of a model of cytoskeletal immobilization of selected membrane proteins using long time scale coarse-grained molecular dynamics simulations. By introducing compartments with varying degrees of restraints within the membrane models, we are able to reveal how compartmentalization caused by cytoskeletal immobilization leads to reduced and anomalous diffusional mobility of both proteins and lipids. This in turn results in a reduced rate of protein dimerization within the membrane and of hopping of membrane proteins between compartments. These simulations provide a molecular realization of hierarchical models often invoked to explain single-molecule imaging studies of membrane proteins.

  8. Dynamic energy conservation model REDUCE. Extension with experience curves, energy efficiency indicators and user's guide

    International Nuclear Information System (INIS)

    The main objective of the energy conservation model REDUCE (Reduction of Energy Demand by Utilization of Conservation of Energy) is the evaluation of the effectiveness of economic, financial, institutional, and regulatory measures for improving the rational use of energy in end-use sectors. This report presents the results of additional model development activities, partly based on the first experiences in a previous project. Energy efficiency indicators have been added as an extra tool for output analysis in REDUCE. The methodology is described and some examples are given. The model has been extended with a method for modelling the effects of technical development on production costs, by means of an experience curve. Finally, the report provides a 'user's guide', describing in more detail the input data specification as well as all menus and buttons. 19 refs
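
    The experience-curve extension mentioned above follows the standard learning-curve form C(P) = C(P0) * (P/P0)^(-b), where P is cumulative production and the cost multiplier per doubling is 2^(-b). The small sketch below is a generic illustration with placeholder numbers, not the REDUCE input format.

      import numpy as np

      def experience_curve(cum_production, c0, p0, progress_ratio):
          """Unit cost after cumulative production `cum_production`, given the cost c0
          at cumulative production p0 and a cost multiplier `progress_ratio` per
          doubling of cumulative production (e.g. 0.8 = 20% cost drop per doubling)."""
          b = -np.log2(progress_ratio)
          return c0 * (cum_production / p0) ** (-b)

      # Placeholder example: 20% learning rate, cost falls from 100 as production doubles
      for p in (1, 2, 4, 8, 16):
          print(p, round(experience_curve(p, c0=100.0, p0=1.0, progress_ratio=0.8), 1))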

  9. Methodology for Constructing Reduced-Order Power Block Performance Models for CSP Applications: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Wagner, M.

    2010-10-01

    The inherent variability of the solar resource presents a unique challenge for CSP systems. Incident solar irradiation can fluctuate widely over a short time scale, but plant performance must be assessed for long time periods. As a result, annual simulations with hourly (or sub-hourly) timesteps are the norm in CSP analysis. A highly detailed power cycle model provides accuracy but tends to suffer from prohibitively long run-times; alternatively, simplified empirical models can run quickly but don't always provide enough information, accuracy, or flexibility for the modeler. The ideal model for feasibility-level analysis combines the detail and accuracy of a first-principles model with the low computational load of a regression model. The work presented in this paper proposes a methodology for organizing and extracting information from the performance output of a detailed model, then using it to develop a flexible reduced-order regression model in a systematic and structured way. A similar but less generalized approach for characterizing power cycle performance and a reduced-order modeling methodology for CFD analysis of heat transfer from electronic devices have been presented. This paper builds on these publications and the non-dimensional approach originally described.
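
    A hedged sketch of the regression step: normalized cycle output sampled from a detailed model (here replaced by a synthetic response surface) is fit with a low-order multivariate polynomial via least squares. The variable names and functional form are placeholders, not the paper's formulation.

      import numpy as np

      rng = np.random.default_rng(8)

      # Stand-in for detailed power-cycle runs: normalized HTF temperature, mass flow,
      # and condenser temperature -> normalized gross power (synthetic response surface)
      T_htf = rng.uniform(0.7, 1.05, 500)
      m_dot = rng.uniform(0.3, 1.05, 500)
      T_cond = rng.uniform(0.8, 1.2, 500)
      power = 1.0 + 1.8 * (T_htf - 1) + 0.9 * (m_dot - 1) - 0.4 * (T_cond - 1) \
              + 0.6 * (T_htf - 1) * (m_dot - 1) + rng.normal(0, 0.005, 500)

      # Second-order polynomial basis in the three normalized inputs
      def basis(t, m, c):
          return np.column_stack([np.ones_like(t), t, m, c, t * m, t * c, m * c,
                                  t ** 2, m ** 2, c ** 2])

      coeffs, *_ = np.linalg.lstsq(basis(T_htf, m_dot, T_cond), power, rcond=None)
      pred = basis(T_htf, m_dot, T_cond) @ coeffs
      print("max abs fit error:", np.max(np.abs(pred - power)))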

  10. A Study of the Equivalence of the BLUEs between a Partitioned Singular Linear Model and Its Reduced Singular Linear Models

    Institute of Scientific and Technical Information of China (English)

    Bao Xue ZHANG; Bai Sen LIU; Chang Yu LU

    2004-01-01

    Consider the partitioned linear regression model A = (y, X_1β_1 + X_2β_2, σ^2 V) and its four reduced linear models, where y is an n × 1 observable random vector with E(y) = Xβ and dispersion matrix Var(y) = σ^2 V, where σ^2 is an unknown positive scalar, V is an n × n known symmetric nonnegative definite matrix, X = (X_1 : X_2) is an n × (p+q) known design matrix with rank(X) = r ≤ (p+q), and β = (β'_1 : β'_2)' with β_1 and β_2 being p × 1 and q × 1 vectors of unknown parameters, respectively. In this article the formulae for the differences between the best linear unbiased estimators (BLUEs) of M_2X_1β_1 under the model A and its best linear unbiased estimators under the reduced linear models of A are given, where M_2 = I - X_2X_2^+. Furthermore, the necessary and sufficient conditions for the equalities between the best linear unbiased estimators of M_2X_1β_1 under the model A and those under its reduced linear models are established. Lastly, we also study the connections between the model A and its linear transformation model.
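
    For readers unfamiliar with BLUEs in this setting, a minimal numerical sketch is given below. It assumes a full-column-rank X and a nonsingular V for simplicity (the paper treats the singular case), computes the generalized-least-squares estimator, and evaluates the estimable function M_2X_1β_1; all data are random placeholders.

      import numpy as np

      rng = np.random.default_rng(9)

      n, p, q = 40, 3, 2
      X1 = rng.standard_normal((n, p))
      X2 = rng.standard_normal((n, q))
      X = np.hstack([X1, X2])
      beta = rng.standard_normal(p + q)

      # Nonsingular covariance V (the singular case needs generalized inverses)
      L = rng.standard_normal((n, n))
      V = L @ L.T + n * np.eye(n)
      y = X @ beta + np.linalg.cholesky(V) @ rng.standard_normal(n) * 0.1

      # BLUE of beta under the full model A = (y, X beta, sigma^2 V), via GLS
      Vinv = np.linalg.inv(V)
      beta_blue = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)

      # BLUE of the estimable function M_2 X_1 beta_1, with M_2 = I - X_2 X_2^+
      M2 = np.eye(n) - X2 @ np.linalg.pinv(X2)
      print("first entries of the BLUE of M2 X1 beta1:", (M2 @ X1 @ beta_blue[:p])[:5])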

  11. Reduced Effective Model for Condensation in Slender Tubes with Rotational Symmetry, Obtained by Generalized Dimensional Analysis

    OpenAIRE

    Dziubek, Andrea

    2011-01-01

    Experimental results for condensation in compact heat exchangers show that the heat transfer due to condensation is significantly better compared to classical heat exchangers, especially when using R134a instead of water as the refrigerant. This suggests that surface tension plays a role. Using generalized dimensional analysis we derive reduced model equations and jump conditions for condensation in a vertical tube with cylindrical cross section. Based on this model we derive a single ordinar...

  12. Implementation of a Diabetes Educator Care Model to Reduce Paediatric Admission for Diabetic Ketoacidosis

    OpenAIRE

    Asma Deeb; Hana Yousef; Layla Abdelrahman; Mary Tomy; Shaker Suliman; Salima Attia; Hana Al Suwaidi

    2016-01-01

    Introduction. Diabetic Ketoacidosis (DKA) is a serious complication that can be life-threatening. Management of DKA needs admission in a specialized center and imposes major constraints on hospital resources. Aim. We plan to study the impact of adapting a diabetes-educator care model on reducing the frequency of hospital admission of children and adolescents presenting with DKA. Method. We have proposed a model of care led by diabetes educators for children and adolescents with diabetes. The ...

  13. Convergence rates of supercell calculations in the reduced Hartree-Fock model

    OpenAIRE

    Gontier, David; Lahbabi, Salma

    2015-01-01

    This article is concerned with the numerical simulations of perfect crystals. We study the rate of convergence of the reduced Hartree-Fock (rHF) model in a supercell towards the periodic rHF model in the whole space. We prove that, whenever the crystal is an insulator or a semi-conductor, the supercell energy per unit cell converges exponentially fast towards the periodic rHF energy per unit cell, with respect to the size of the supercell.

  14. Modal testing and finite element modelling of a reduced-sized tyre for rolling contact investigation

    OpenAIRE

    ZHANG, Yuan-Fang; Cesbron, Julien; BERENGIER, Michel; YIN, Hai Ping

    2015-01-01

    One of the main contributors to the generation of tyre/road noise is the vibrational mechanism. The understanding of the latter requires both numerical modelling of the tyre/road contact problem under rolling conditions and experimental validation. The use of a go-kart tyre presents advantages in comparison with a standard tyre due to its simpler structure for modelling and its reduced size that facilitates experimental studies in laboratory. Modal testing has first been performed on such a t...

  15. Reduced animal use in efficacy testing in disease models with use of sequential experimental designs.

    OpenAIRE

    Waterton JC, Middleton BJ, Pickford R, Allott CP, Checkley D, Keith RA.

    2000-01-01

    Although the use of animals in efficacy tests has declined substantially, there remains a small number of well-documented disease models which provide essential information about the efficacy of new compounds. Such models are typically used after extensive in vitro testing, to evaluate small numbers of compounds and to select the most promising agents for clinical trial in humans. The aim of this study was to reduce the number of animals required to achieve valid results, without compromising...

  16. Mechanical disequilibria in two-phase flow models: approaches by relaxation and by a reduced model

    International Nuclear Information System (INIS)

    This thesis deals with hyperbolic models for the simulation of compressible two-phase flows, to find alternatives to the classical bi-fluid model. We first establish a hierarchy of two-phase flow models, obtained according to equilibrium hypotheses between the physical variables of each phase. The use of Chapman-Enskog expansions enables us to link the different existing models to each other. Moreover, models that take into account small physical imbalances are obtained by means of expansions to first order. The second part of this thesis focuses on the simulation of flows featuring velocity imbalances and pressure equilibrium, in two different ways. First, a two-velocity two-pressure model is used, where non-instantaneous velocity and pressure relaxations are applied so that a balancing of these variables is obtained. A new one-velocity one-pressure dissipative model is then proposed, where the appearance of second-order terms enables us to take into account imbalances between the phase velocities. We develop a numerical method based on a fractional step approach for this model. (author)

  17. A Reduced-Order Model of Transport Phenomena for Power Plant Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Paul Cizmas; Brian Richardson; Thomas Brenner; Raymond Fontenot

    2009-09-30

    A reduced-order model based on proper orthogonal decomposition (POD) has been developed to simulate transient two- and three-dimensional isothermal and non-isothermal flows in a fluidized bed. Reduced-order models of void fraction, gas and solids temperatures, granular energy, and z-direction gas and solids velocity have been added to the previous version of the code. These algorithms are presented and their implementation is discussed. Verification studies are presented for each algorithm. A number of methods to accelerate the computations performed by the reduced-order model are presented. The errors associated with each acceleration method are computed and discussed. Using a combination of acceleration methods, a two-dimensional isothermal simulation using the reduced-order model is shown to be 114 times faster than using the full-order model. In pursuing the objectives of the project and completing the tasks planned for this program, several unplanned and unforeseen results, methods and studies have been generated. These additional accomplishments are also presented and they include: (1) a study of the effect of snapshot sampling time on the computation of the POD basis functions, (2) an investigation of different strategies for generating the autocorrelation matrix used to find the POD basis functions, (3) the development and implementation of a bubble detection and tracking algorithm based on mathematical morphology, (4) a method for augmenting the proper orthogonal decomposition to better capture flows with discontinuities, such as bubbles, and (5) a mixed reduced-order/full-order model, called point-mode proper orthogonal decomposition, designed to avoid unphysical results due to approximation errors. The limitations of the proper orthogonal decomposition method in simulating transient flows with moving discontinuities, such as bubbling flows, are discussed and several methods are proposed to adapt the method for future use.
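    As a companion to the abstract, the following is a generic proper orthogonal decomposition sketch (toy data standing in for the report's void-fraction snapshots, not the project's code): a reduced basis is extracted from a snapshot matrix by a thin SVD and used to reconstruct the field from a handful of modal coefficients.

        import numpy as np

        def pod_basis(snapshots, n_modes):
            """snapshots: (n_dof, n_snap) matrix of flow-field samples."""
            mean = snapshots.mean(axis=1, keepdims=True)
            fluctuations = snapshots - mean
            # Thin SVD: columns of U are the POD modes, ordered by energy content.
            U, s, _ = np.linalg.svd(fluctuations, full_matrices=False)
            return mean, U[:, :n_modes], s

        rng = np.random.default_rng(1)
        snaps = rng.normal(size=(1000, 60))          # 60 snapshots of a 1000-dof field
        mean, Phi, s = pod_basis(snaps, n_modes=10)
        coeffs = Phi.T @ (snaps - mean)              # reduced coordinates (10 x 60)
        recon = mean + Phi @ coeffs                  # rank-10 reconstruction
        print("captured energy fraction:", (s[:10] ** 2).sum() / (s ** 2).sum())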

  18. Accurate Weather Forecasting for Radio Astronomy

    Science.gov (United States)

    Maddalena, Ronald J.

    2010-01-01

    The NRAO Green Bank Telescope routinely observes at wavelengths from 3 mm to 1 m. As with all mm-wave telescopes, observing conditions depend upon the variable atmospheric water content. The site provides over 100 days/yr when opacities are low enough for good observing at 3 mm, but winds on the open-air structure reduce the time suitable for 3-mm observing where pointing is critical. Thus, to maximize productivity, the observing wavelength needs to match weather conditions. For 6 years the telescope has used a dynamic scheduling system (recently upgraded; www.gb.nrao.edu/DSS) that requires accurate multi-day forecasts for winds and opacities. Since opacity forecasts are not provided by the National Weather Service (NWS), I have developed an automated system that takes available forecasts, derives forecasted opacities, and deploys the results on the web in user-friendly graphical overviews (www.gb.nrao.edu/ rmaddale/Weather). The system relies on the "North American Mesoscale" models, which are updated by the NWS every 6 hrs, have a 12 km horizontal resolution, 1 hr temporal resolution, run to 84 hrs, and have 60 vertical layers that extend to 20 km. Each forecast consists of a time series of ground conditions, cloud coverage, etc., and, most importantly, temperature, pressure, and humidity as a function of height. I use Liebe's MWP model (Radio Science, 20, 1069, 1985) to determine the absorption in each layer for each hour for 30 observing wavelengths. Radiative transfer provides, for each hour and wavelength, the total opacity and the radio brightness of the atmosphere, which contributes substantially at some wavelengths to Tsys and the observational noise. Comparisons of measured and forecasted Tsys at 22.2 and 44 GHz imply that the forecasted opacities are good to about 0.01 Nepers, which is sufficient for forecasting and accurate calibration. Reliability is high out to 2 days and degrades slowly for longer-range forecasts.
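    A schematic sketch of the layer-by-layer bookkeeping described above (hypothetical absorption profile, not Liebe's MWP model): per-layer absorption is summed into a zenith opacity, and a discrete radiative-transfer sum gives the corresponding atmospheric brightness temperature seen from the ground.

        import numpy as np

        def zenith_opacity_and_tb(alpha, dz, T, t_bg=2.73):
            """alpha: layer absorption coefficients [1/km], dz: layer thicknesses [km],
            T: layer temperatures [K]; layers are ordered from the ground upward."""
            tau_layer = alpha * dz
            tau_total = tau_layer.sum()
            # Opacity between each layer and the ground (i.e. of the layers below it).
            tau_below = np.concatenate(([0.0], np.cumsum(tau_layer)[:-1]))
            # Each layer emits and is attenuated on the way down to the observer.
            tb = np.sum(T * (1.0 - np.exp(-tau_layer)) * np.exp(-tau_below))
            tb += t_bg * np.exp(-tau_total)          # attenuated cosmic background
            return tau_total, tb

        alpha = np.full(60, 0.002)                   # hypothetical 60-layer profile
        dz = np.full(60, 0.33)                       # roughly 20 km of atmosphere
        T = np.linspace(288.0, 210.0, 60)
        print(zenith_opacity_and_tb(alpha, dz, T))   # (total opacity [Nepers], Tb [K])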

  19. Technical Note: Reducing the spin-up time of integrated surface water–groundwater models

    KAUST Repository

    Ajami, H.

    2014-12-12

    One of the main challenges in the application of coupled or integrated hydrologic models is specifying a catchment's initial conditions in terms of soil moisture and depth-to-water table (DTWT) distributions. One approach to reducing uncertainty in model initialization is to run the model recursively using either a single year or multiple years of forcing data until the system equilibrates with respect to state and diagnostic variables. However, such "spin-up" approaches often require many years of simulations, making them computationally intensive. In this study, a new hybrid approach was developed to reduce the computational burden of the spin-up procedure by using a combination of model simulations and an empirical DTWT function. The methodology is examined across two distinct catchments located in a temperate region of Denmark and a semi-arid region of Australia. Our results illustrate that the hybrid approach reduced the spin-up period required for an integrated groundwater–surface water–land surface model (ParFlow.CLM) by up to 50%. To generalize results to different climate and catchment conditions, we outline a methodology that is applicable to other coupled or integrated modeling frameworks when initialization from an equilibrium state is required.
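    A conceptual sketch of the recursive spin-up idea (not ParFlow.CLM code; the empirical correction is a hypothetical stand-in for the paper's DTWT function): one year of forcing is replayed until the state stops changing, and the hybrid variant jumps the slowly evolving storage toward an empirical equilibrium estimate instead of waiting for it to drift there.

        import numpy as np

        def spin_up(step_one_year, state, tol=1e-3, max_cycles=50, empirical_update=None):
            """step_one_year(state) -> state after one replayed year of forcing."""
            for cycle in range(1, max_cycles + 1):
                new_state = step_one_year(state)
                change = np.max(np.abs(new_state - state))
                if empirical_update is not None:
                    new_state = empirical_update(new_state)   # hybrid correction step
                if change < tol:
                    return new_state, cycle
                state = new_state
            return state, max_cycles

        # Toy "model": depth-to-water-table relaxing toward 3.0 m each replayed year.
        relax = lambda dtwt: dtwt + 0.3 * (3.0 - dtwt)
        state, cycles = spin_up(relax, np.array([10.0]))
        print(state, cycles)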

  20. Mannose 6-Phosphate Receptor Is Reduced in α-Synuclein Overexpressing Models of Parkinson's Disease

    DEFF Research Database (Denmark)

    Matrone, Carmela; Dzamko, Nicolas; Madsen, Peder;

    2016-01-01

    , leading to a reduced CD-mediated α-synuclein degradation and α-synuclein accumulation in neurons. MPR300 is downregulated in brains from α-synuclein overexpressing animal models and in PD patients with an early diagnosis. These data indicate MPR300 as a crucial player in the autophagy-lysosomal dysfunctions...

  1. DESIGNING SULFATE-REDUCING BACTERIA FIELD BIOREACTORS USING THE BEST MODEL

    Science.gov (United States)

    BEST (bioreactor economics, size and time of operation) is a spreadsheet-based model that is used in conjunction with a public domain computer software package, PHREEQCI. BEST is intended to be used in the design process of sulfate-reducing bacteria (SRB) field bioreactors to pas...

  2. Reduced Order Model Implementation in the Risk-Informed Safety Margin Characterization Toolkit

    Energy Technology Data Exchange (ETDEWEB)

    Mandelli, Diego [Idaho National Lab. (INL), Idaho Falls, ID (United States); Smith, Curtis L. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Alfonsi, Andrea [Idaho National Lab. (INL), Idaho Falls, ID (United States); Rabiti, Cristian [Idaho National Lab. (INL), Idaho Falls, ID (United States); Cogliati, Joshua J. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Talbot, Paul W. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Rinaldi, Ivan [Idaho National Lab. (INL), Idaho Falls, ID (United States); Maljovec, Dan [Idaho National Lab. (INL), Idaho Falls, ID (United States); Wang, Bei [Idaho National Lab. (INL), Idaho Falls, ID (United States); Pascucci, Valerio [Idaho National Lab. (INL), Idaho Falls, ID (United States); Zhao, Haihua [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    The RISMC project aims to develop new advanced simulation-based tools to perform Probabilistic Risk Analysis (PRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermo-hydraulic behavior of the reactor primary and secondary systems but also the temporal evolution of external events and component/system ageing. Thus, this is not only a multi-physics problem but also a multi-scale problem (both spatial, µm-mm-m, and temporal, ms-s-minutes-years). As part of the RISMC PRA approach, a large number of computationally expensive simulation runs is required. An important aspect is that even though computational power is regularly growing, the overall computational cost of a RISMC analysis may not be viable for certain cases. A solution that is being evaluated is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the RISMC analysis computational cost by decreasing the number of simulation runs to perform and by employing surrogate models instead of the actual simulation codes. This report focuses on the use of reduced order modeling techniques that can be applied to any RISMC analysis to generate, analyze and visualize data. In particular, we focus on surrogate models that approximate the simulation results in a much faster time (µs instead of hours/days). We apply reduced order and surrogate modeling techniques to several RISMC types of analyses using RAVEN and RELAP-7 and show the advantages that can be gained.
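    A generic surrogate-modeling sketch in the spirit of the report (toy "simulation" and a simple quadratic response surface, not RAVEN or RELAP-7): a handful of expensive runs train a cheap algebraic model that can then be queried many thousands of times at negligible cost.

        import numpy as np

        def expensive_simulation(x):                 # stand-in for a long-running physics code
            return np.sin(3.0 * x[0]) + x[1] ** 2

        rng = np.random.default_rng(4)
        X_train = rng.uniform(-1.0, 1.0, size=(30, 2))          # 30 "expensive" runs
        y_train = np.array([expensive_simulation(x) for x in X_train])

        def features(X):                             # quadratic polynomial basis
            x1, x2 = X[:, 0], X[:, 1]
            return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

        coeff, *_ = np.linalg.lstsq(features(X_train), y_train, rcond=None)

        X_query = rng.uniform(-1.0, 1.0, size=(100000, 2))      # cheap surrogate evaluations
        y_surrogate = features(X_query) @ coeff
        print(y_surrogate[:3])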

  3. Reduced Order Model Implementation in the Risk-Informed Safety Margin Characterization Toolkit

    Energy Technology Data Exchange (ETDEWEB)

    Mandelli, Diego [Idaho National Lab. (INL), Idaho Falls, ID (United States); Smith, Curtis L. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Alfonsi, Andrea [Idaho National Lab. (INL), Idaho Falls, ID (United States); Rabiti, Cristian [Idaho National Lab. (INL), Idaho Falls, ID (United States); Cogliati, Joshua J. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Talbot, Paul W. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Rinaldi, Ivan [Idaho National Lab. (INL), Idaho Falls, ID (United States); Maljovec, Dan [Idaho National Lab. (INL), Idaho Falls, ID (United States); Wang, Bei [Idaho National Lab. (INL), Idaho Falls, ID (United States); Pascucci, Valerio [Idaho National Lab. (INL), Idaho Falls, ID (United States); Zhao, Haihua [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    The RISMC project aims to develop new advanced simulation-based tools to perform Probabilistic Risk Analysis (PRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermo-hydraulic behavior of the reactor primary and secondary systems but also the temporal evolution of external events and component/system ageing. Thus, this is not only a multi-physics problem but also a multi-scale problem (both spatial, µm-mm-m, and temporal, ms-s-minutes-years). As part of the RISMC PRA approach, a large number of computationally expensive simulation runs is required. An important aspect is that even though computational power is regularly growing, the overall computational cost of a RISMC analysis may not be viable for certain cases. A solution that is being evaluated is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the RISMC analysis computational cost by decreasing the number of simulation runs to perform and by employing surrogate models instead of the actual simulation codes. This report focuses on the use of reduced order modeling techniques that can be applied to any RISMC analysis to generate, analyze and visualize data. In particular, we focus on surrogate models that approximate the simulation results in a much faster time (µs instead of hours/days). We apply reduced order and surrogate modeling techniques to several RISMC types of analyses using RAVEN and RELAP-7 and show the advantages that can be gained.

  4. Accurate thickness measurement of graphene

    Science.gov (United States)

    Shearer, Cameron J.; Slattery, Ashley D.; Stapleton, Andrew J.; Shapter, Joseph G.; Gibson, Christopher T.

    2016-03-01

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.

  5. Accurate thickness measurement of graphene.

    Science.gov (United States)

    Shearer, Cameron J; Slattery, Ashley D; Stapleton, Andrew J; Shapter, Joseph G; Gibson, Christopher T

    2016-03-29

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.

  6. Supercell calculations in the reduced Hartree-Fock model for crystals with local defects

    OpenAIRE

    David, Gontier; Salma, Lahbabi

    2015-01-01

    In this article, we study the speed of convergence of the supercell reduced Hartree-Fock (rHF) model towards the whole space rHF model in the case where the crystal contains a local defect. We prove that, when the defect is charged, the defect energy in a supercell model converges to the full rHF defect energy with speed $L^{-1}$, where $L^3$ is the volume of the supercell. The convergence constant is identified as the Makov-Payne correction term when the crystal is isotropic cubic. The resul...

  7. Technical Note: Reducing the spin-up time of integrated surface water–groundwater models

    KAUST Repository

    Ajami, H.

    2014-06-26

    One of the main challenges in catchment scale application of coupled/integrated hydrologic models is specifying a catchment's initial conditions in terms of soil moisture and depth to water table (DTWT) distributions. One approach to reduce uncertainty in model initialization is to run the model recursively using a single or multiple years of forcing data until the system equilibrates with respect to state and diagnostic variables. However, such "spin-up" approaches often require many years of simulations, making them computationally intensive. In this study, a new hybrid approach was developed to reduce the computational burden of spin-up time for an integrated groundwater-surface water-land surface model (ParFlow.CLM) by using a combination of ParFlow.CLM simulations and an empirical DTWT function. The methodology is examined in two catchments located in the temperate and semi-arid regions of Denmark and Australia respectively. Our results illustrate that the hybrid approach reduced the spin-up time required by ParFlow.CLM by up to 50%, and we outline a methodology that is applicable to other coupled/integrated modelling frameworks when initialization from equilibrium state is required.

  8. Technical Note: Reducing the spin-up time of integrated surface water–groundwater models

    Directory of Open Access Journals (Sweden)

    H. Ajami

    2014-06-01

    Full Text Available One of the main challenges in catchment scale application of coupled/integrated hydrologic models is specifying a catchment's initial conditions in terms of soil moisture and depth to water table (DTWT) distributions. One approach to reduce uncertainty in model initialization is to run the model recursively using a single or multiple years of forcing data until the system equilibrates with respect to state and diagnostic variables. However, such "spin-up" approaches often require many years of simulations, making them computationally intensive. In this study, a new hybrid approach was developed to reduce the computational burden of spin-up time for an integrated groundwater-surface water-land surface model (ParFlow.CLM) by using a combination of ParFlow.CLM simulations and an empirical DTWT function. The methodology is examined in two catchments located in the temperate and semi-arid regions of Denmark and Australia respectively. Our results illustrate that the hybrid approach reduced the spin-up time required by ParFlow.CLM by up to 50%, and we outline a methodology that is applicable to other coupled/integrated modelling frameworks when initialization from equilibrium state is required.

  9. Implementation of REDIM reduced chemistry to model an axisymmetric laminar diffusion methane-air flame

    Science.gov (United States)

    Henrique de Almeida Konzen, Pedro; Richter, Thomas; Riedel, Uwe; Maas, Ulrich

    2011-06-01

    The goal of this work is to analyze the use of automatically reduced chemistry by the Reaction-Diffusion Manifold (REDIM) method in simulating axisymmetric laminar coflow diffusion flames. Detailed chemical kinetic models are usually computationally prohibitive for simulating complex reacting flows, and therefore reduced models are required. Automatic model reduction approaches usually exploit the natural multi-scale structure of combustion systems. The novel REDIM approach applies the concept of invariant manifolds to also treat the influence of the transport processes on the reduced model, which overcomes a fundamental problem of model reduction, namely the neglect of the coupling between molecular transport and thermochemical processes. We have considered a previously well-studied atmospheric-pressure nitrogen-diluted methane-air flame as a test case to validate the methodology presented here. First, one-dimensional and two-dimensional REDIMs were computed and tabulated in lookup tables. Then, the full set of governing equations is projected onto the REDIM and implemented in the object-oriented C++ Gascoigne code with a new add-on library to handle the REDIM tables. The projected set of governing equations has been discretized by the Finite Element Method (FEM) and solved by a GMRES iteration preconditioned by a geometric multigrid method. Local grid refinement, adaptive meshing and parallelization are applied to ensure efficiency and precision. The numerical results obtained using the REDIM approach have shown very good agreement with detailed numerical simulations and experimental data.

  10. Reduced Order Model of a Spouted Fluidized Bed Utilizing Proper Orthogonal Decomposition

    Science.gov (United States)

    Beck-Roth, Stephanie R.

    2011-07-01

    A reduced order model utilizing proper orthogonal decomposition for the approximation of gas and solids velocities as well as pressure, solids granular temperature and gas void fraction in multiphase incompressible fluidized beds is developed and presented. The methodology is then tested on data representing a flat-bottom spouted fluidized bed, and comparative results against the software Multiphase Flow with Interphase eXchanges (MFIX) are provided. The governing equations for the model development are based upon those implemented in the MFIX software. The three reduced order models explored are projective, extrapolative and interpolative. The first is an extension of the system solution beyond an original time sequence. The second is a numerical approximation to a new solution based on a small selected parameter deviation from an existing CFD data set. Finally, an interpolative methodology approximates a solution between two existing CFD data sets, both of which vary a single parameter.

  11. Hydro-economic Modeling: Reducing the Gap between Large Scale Simulation and Optimization Models

    Science.gov (United States)

    Forni, L.; Medellin-Azuara, J.; Purkey, D.; Joyce, B. A.; Sieber, J.; Howitt, R.

    2012-12-01

    The integration of hydrological and socio-economic components into hydro-economic models has become essential for water resources policy and planning analysis. In this study we integrate the economic value of water in irrigated agricultural production using SWAP (a StateWide Agricultural Production Model for California) and WEAP (Water Evaluation and Planning System), a climate-driven hydrological model. The integration of the models is performed using a step-function approximation of water demand curves from SWAP, and by relating the demand tranches to the priority scheme in WEAP. In order to do so, a modified version of SWAP was developed, called SWEAP, that has the Planning Area delimitations of WEAP, a Maximum Entropy Model to estimate evenly sized steps (tranches) of water derived demand functions, and the translation of water tranches into crop land. In addition, a modified version of WEAP was created, called ECONWEAP, with minor structural changes for the incorporation of land decisions from SWEAP and a series of iterations run via an external VBA script. This paper shows the validity of this integration by comparing revenues from WEAP versus ECONWEAP as well as by assessing the approximation of the tranches. Results show a significant increase in the resulting agricultural revenues for our case study in California's Central Valley using ECONWEAP while maintaining the same hydrology and regional water flows. These results highlight the gains from allocating water based on its economic value compared to priority-based water allocation systems. Furthermore, this work shows the potential of integrating optimization and simulation-based hydrologic models like ECONWEAP. (Figure: percentage difference in total agricultural revenues, ECONWEAP versus WEAP.)
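    A small sketch of the step-function idea described above (hypothetical demand curve, not SWAP/SWEAP output): a smooth derived demand for water is discretized into evenly sized tranches whose values can then be mapped onto priority levels in the allocation model.

        import numpy as np

        def demand_price(q):                         # hypothetical inverse demand: value vs. quantity
            return 400.0 * np.exp(-0.002 * q)

        def tranches(q_max, n_steps):
            edges = np.linspace(0.0, q_max, n_steps + 1)
            mid = 0.5 * (edges[:-1] + edges[1:])
            widths = np.diff(edges)                  # evenly sized water tranches
            return list(zip(widths, demand_price(mid)))

        for width, value in tranches(q_max=1000.0, n_steps=5):
            print(f"tranche of {width:.0f} units valued at {value:.1f} $/unit")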

  12. Building Accurate 3D Spatial Networks to Enable Next Generation Intelligent Transportation Systems

    DEFF Research Database (Denmark)

    Kaul, Manohar; Yang, Bin; Jensen, Christian S.

    2013-01-01

    The use of accurate 3D spatial network models can enable substantial improvements in vehicle routing. Notably, such models enable eco-routing, which reduces the environmental impact of transportation. We propose a novel filtering and lifting framework that augments a standard 2D spatial network model with elevation information extracted from massive aerial laser scan data and thus yields an accurate 3D model. We present a filtering technique that is capable of pruning irrelevant laser scan points in a single pass, but assumes that the 2D network fits in internal memory and that the points...

  13. Reducing Ambulance Diversion at Hospital and Regional Levels: Systemic Review of Insights from Simulation Models

    Directory of Open Access Journals (Sweden)

    M Kit Delgado

    2013-09-01

    Full Text Available Introduction: Optimal solutions for reducing diversion without worsening emergency department (ED) crowding are unclear. We performed a systematic review of published simulation studies to identify: (1) the tradeoff between ambulance diversion and ED wait times; (2) the predicted impact of patient flow interventions on reducing diversion; and (3) the optimal regional strategy for reducing diversion. Methods: Data Sources: Systematic review of articles using MEDLINE, Inspec, and Scopus. Additional studies were identified through bibliography review, Google Scholar, and scientific conference proceedings. Study Selection: Only simulations modeling ambulance diversion as a result of ED crowding or inpatient capacity problems were included. Data Extraction: Independent extraction by two authors using predefined data fields. Results: We identified 5,116 potentially relevant records; 10 studies met inclusion criteria. In models that quantified the relationship between ED throughput times and diversion, diversion was found to only minimally improve ED waiting room times. Adding holding units for inpatient boarders and ED-based fast tracks, improving lab turnaround times, and smoothing elective surgery caseloads were found to reduce diversion considerably. While two models found that a cooperative agreement between hospitals is necessary to prevent defensive diversion behavior by a hospital when a nearby hospital goes on diversion, one model found there may be more optimal solutions for reducing region-wide wait times than a regional ban on diversion. Conclusion: Measures predicted to reduce diversion include smoothing elective surgery caseloads, adding ED fast tracks as well as holding units for inpatient boarders, improving ED lab turnaround times, and implementing regional cooperative agreements among hospitals. [West J Emerg Med. 2013;14(5):489-498.]

  14. Stochastic stability analysis of a reduced galactic dynamo model with perturbed α-effect

    Science.gov (United States)

    Kelly, Cónall

    2016-09-01

    We investigate the asymptotic behaviour of a reduced αΩ-dynamo model of magnetic field generation in spiral galaxies where fluctuation in the α-effect results in a system with state-dependent stochastic perturbations. By computing the upper Lyapunov exponent of the linearised model, we can identify regions of instability and stability in probability for the equilibrium of the nonlinear model; in this case the equilibrium solution corresponds to a magnetic field that has undergone catastrophic quenching. These regions are compared to regions of exponential mean-square stability and regions of sub- and super-criticality in the unperturbed linearised model. Prior analysis in the literature which focuses on these latter regions does not adequately address the corresponding transition in the nonlinear stochastic model. Finally we provide a visual representation of the influence of drift non-normality and perturbation intensity on these regions.

  15. Oral Administration of Escin Inhibits Acute Inflammation and Reduces Intestinal Mucosal Injury in Animal Models

    Directory of Open Access Journals (Sweden)

    Minmin Li

    2015-01-01

    Full Text Available The present study aimed to investigate the effects of oral administration of escin on acute inflammation and intestinal mucosal injury in animal models. The effects of escin on carrageenan-induced paw edema in a rat model of acute inflammation and on cecal ligation and puncture (CLP)-induced intestinal mucosal injury in a mouse model were observed. It was shown that oral administration of escin inhibits carrageenan-induced paw edema and decreases the production of prostaglandin E2 (PGE2) and cyclooxygenase-2 (COX-2). In the CLP model, a low dose of escin ameliorates endotoxin-induced liver injury and intestinal mucosal injury and increases the expression of the tight junction protein claudin-5 in mice. These findings suggest that escin effectively inhibits acute inflammation and reduces intestinal mucosal injury in animal models.

  16. Optimization of a Reduced Chemical Kinetic Model for HCCI Engine Simulations by Micro-Genetic Algorithm

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    A reduced chemical kinetic model (44 species and 72 reactions) for the homogeneous charge compression ignition (HCCI) combustion of n-heptane was optimized to improve its autoignition predictions under different engine operating conditions. The seven kinetic parameters of the optimized model were determined by using the combination of a micro-genetic algorithm optimization methodology and the SENKIN program of the CHEMKIN chemical kinetics software package. The optimization was performed within the range of equivalence ratios 0.2-1.2, initial temperatures 310-375 K and initial pressures 0.1-0.3 MPa. The engine simulations show that the optimized model agrees better with the detailed chemical kinetic model (544 species and 2,446 reactions) than the original model does.
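    A bare-bones genetic-algorithm sketch of the optimization loop described above (toy objective standing in for SENKIN/CHEMKIN runs; parameter ranges hypothetical): a tiny "micro" population of candidate kinetic parameters evolves to minimize the mismatch with reference ignition delays.

        import numpy as np

        rng = np.random.default_rng(5)
        target = np.array([1.2, 0.8, 0.5, 0.3])      # hypothetical reference ignition delays [ms]

        def ignition_delay(params):                  # stand-in for a SENKIN simulation
            a, b = params
            temps = np.array([310.0, 330.0, 350.0, 375.0])
            return a * np.exp(b * 1000.0 / temps)

        def fitness(params):
            return -np.sum((ignition_delay(params) - target) ** 2)

        pop = rng.uniform([0.0, 0.0], [1.0, 2.0], size=(5, 2))    # micro-GA: 5 individuals
        for generation in range(200):
            scores = np.array([fitness(p) for p in pop])
            elite = pop[np.argmax(scores)]                        # always keep the best
            parents = pop[np.argsort(scores)[-2:]]
            children = parents[rng.integers(0, 2, size=(4, 2)), np.arange(2)]  # uniform crossover
            children += rng.normal(scale=0.02, size=children.shape)            # small mutation
            pop = np.vstack([elite, children])
        print(pop[np.argmax([fitness(p) for p in pop])])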

  17. Modeling of the Propagation of Seismic Waves in Non-Classical Media: Reduced Cosserat Continuum

    Science.gov (United States)

    Grekova, E.; Kulesh, M.; Herman, G.; Shardakov, I.

    2006-12-01

    In rock mechanics, elastic wave propagation is usually modeled in terms of classical elasticity. There are situations, however, when rock behaviour is still elastic but cannot be described by the classical model. In particular, current effective medium theories, based on classical elasticity, do not properly describe strong dispersive or attenuative behaviour of wave propagation observed sometimes. The approach we have taken to address this problem is to introduce supplementary and independent degrees of freedom of material particles, in our case rotational ones. Various models of this kind are widely used in continuum mechanics: Cosserat theory, micropolar model of Eringen, Cosserat pseudocontinuum, reduced Cosserat continuum etc. We have considered the reduced Cosserat medium where the couple stress is zero, while the rotation vector is independent of the translational displacement. In this model, the stress depends on the rotation of a particle relatively to the background continuum of mass centers, but it does not depend on the relative rotation of two neighboring particles. This model seems to be adequate for the description of granular media, consolidated soils, and rocks with inhomogeneous microstructure. A real inhomogeneous medium is considered as effective homogeneous enriched continuum, where proper rotational dynamics of inhomogeneities are taken into account by means of rotation of a particle of the enriched continuum. We have obtained and analyzed theoretical solutions for this model describing the propagation of body waves and surface waves. We have shown both the dispersive character of these waves in elastic space and half space, and the existence of forbidden frequency zones. These results can be used for the preparation, execution, and interpretation of seismic experiments, which would allow one to determine whether (and in which situations) polar theories are important in rock mechanics, and to help with the identification of material parameters

  18. ASPEN: A fully kinetic, reduced-description particle-in-cell model for simulating parametric instabilities

    International Nuclear Information System (INIS)

    A fully kinetic, reduced-description particle-in-cell (RPIC) model is presented in which deviations from quasineutrality, electron and ion kinetic effects, and nonlinear interactions between low-frequency and high-frequency parametric instabilities are modeled correctly. The model is based on a reduced description where the electromagnetic field is represented by three separate temporal envelopes in order to model parametric instabilities with low-frequency and high-frequency daughter waves. Because temporal envelope approximations are invoked, the simulation can be performed on the electron time scale instead of the time scale of the light waves. The electrons and ions are represented by discrete finite-size particles, permitting electron and ion kinetic effects to be modeled properly. The Poisson equation is utilized to ensure that space-charge effects are included. The RPIC model is fully three dimensional and has been implemented in two dimensions on the Accelerated Strategic Computing Initiative (ASCI) parallel computer at Los Alamos National Laboratory, and the resulting simulation code has been named ASPEN. The authors believe this code is the first particle-in-cell code capable of simulating the interaction between low-frequency and high-frequency parametric instabilities in multiple dimensions. Test simulations of stimulated Raman scattering, stimulated Brillouin scattering, and Langmuir decay instability are presented

  19. Prenatal nicotine exposure mouse model showing hyperactivity, reduced cingulate cortex volume, reduced dopamine turnover and responsiveness to oral methylphenidate treatment

    Science.gov (United States)

    Zhu, Jinmin; Zhang, Xuan; Xu, Yuehang; Spencer, Thomas J.; Biederman, Joseph; Bhide, Pradeep G.

    2012-01-01

    Cigarette smoking, nicotine replacement therapy and smokeless tobacco use during pregnancy are associated with cognitive disabilities later in life in children exposed prenatally to nicotine. The disabilities include attention deficit hyperactivity disorder (ADHD) and conduct disorder. However, the structural and neurochemical bases of these cognitive deficits remain unclear. Using a mouse model we show that prenatal nicotine exposure produces hyperactivity, selective decreases in cingulate cortical volume and radial thickness as well as decreased dopamine turnover in the frontal cortex. The hyperactivity occurs in both male and female offspring and peaks during the “active” or dark phase of the light-dark cycle. These features of the mouse model closely parallel the human ADHD phenotype, whether or not the ADHD is associated with prenatal nicotine exposure. A single oral, but not intraperitoneal, administration of a therapeutic equivalent dose (0.75 mg/kg) of methylphenidate decreases the hyperactivity and increases the dopamine turnover in the frontal cortex of the prenatally nicotine exposed mice, once again paralleling the therapeutic effects of this compound in ADHD subjects. Collectively, our data suggest that the prenatal nicotine exposure mouse model has striking parallels to the ADHD phenotype not only in behavioral, neuroanatomical and neurochemical features but also with respect to responsiveness of the behavioral phenotype to methylphenidate treatment. The behavioral, neurochemical and anatomical biomarkers in the mouse model could be valuable for evaluating new therapies for ADHD and mechanistic investigations into its etiology. PMID:22764249

  20. Use of quantitative shape-activity relationships to model the photoinduced toxicity of polycyclic aromatic hydrocarbons: Electron density shape features accurately predict toxicity

    Energy Technology Data Exchange (ETDEWEB)

    Mezey, P.G.; Zimpel, Z.; Warburton, P.; Walker, P.D.; Irvine, D.G. [Univ. of Saskatchewan, Saskatoon, Saskatchewan (Canada); Huang, X.D.; Dixon, D.G.; Greenberg, B.M. [Univ. of Waterloo, Ontario (Canada). Dept. of Biology

    1998-07-01

    The quantitative shape-activity relationship (QShAR) methodology, based on accurate three-dimensional electron densities and detailed shape analysis methods, has been applied to a Lemna gibba photoinduced toxicity data set of 16 polycyclic aromatic hydrocarbon (PAH) molecules. In the first phase of the studies, a shape fragment QShAR database of PAHs was developed. The results provide a very good match to toxicity based on a combination of the local shape features of single rings in comparison to the central ring of anthracene and a more global shape feature involving larger molecular fragments. The local shape feature appears as a descriptor of the susceptibility of PAHs to photomodification and the global shape feature is probably related to photosensitization activity.

  1. Efficient Accurate Context-Sensitive Anomaly Detection

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    For program behavior-based anomaly detection, the only way to ensure accurate monitoring is to construct an efficient and precise program behavior model. A new program behavior-based anomaly detection model, called the combined pushdown automaton (CPDA) model, was proposed, which is based on static analysis of binary executables. The CPDA model incorporates optimized call stack walking and code instrumentation techniques to gain complete context information. Thereby the proposed method can detect more attacks while retaining good performance.

  2. A comparison of local simulations and reduced models of MRI-induced turbulence

    CERN Document Server

    Lesaffre, Pierre; Latter, Henrik

    2009-01-01

    We run mean-field shearing-box numerical simulations with a temperature-dependent resistivity and compare them to a reduced dynamical model. Our simulations reveal the co-existence of two quasi-steady states, a `quiet' state and an `active' turbulent state, confirming the predictions of the reduced model. The initial conditions determine on which state the simulation ultimately settles. The active state is strongly influenced by the geometry of the computational box and the thermal properties of the gas. Cubic domains support permanent channel flows, bar-shaped domains exhibit eruptive behaviour, and horizontal slabs give rise to infrequent channels. Meanwhile, longer cooling time-scales lead to higher saturation amplitudes.

  3. Forward Modeling of Reduced Power Spectra From Three-Dimensional $\mathbf{k}$-Space

    CERN Document Server

    von Papen, Michael

    2015-01-01

    We present results from a numerical forward model to evaluate one-dimensional reduced power spectral densities (PSD) from arbitrary energy distributions in $\mathbf{k}$-space. In this model, we can separately calculate the diagonal elements of the spectral tensor for incompressible axisymmetric turbulence with vanishing helicity. Given a critically balanced turbulent cascade with $k_\| \sim k_\perp^\alpha$ and $\alpha < 1$, we explore the implications on the reduced PSD as a function of frequency. The spectra are obtained under the assumption of Taylor's hypothesis. We further investigate the functional dependence of the spectral index $\kappa$ on the field-to-flow angle $\theta$ between plasma flow and background magnetic field from MHD to electron kinetic scales. We show that critically balanced turbulence asymptotically develops toward $\theta$-independent spectra with a slope corresponding to the perpendicular cascade. This occurs at a transition frequency $f_{2D}(L,\alpha,\theta)$, which is analytically ...

  4. Reduced Effective Model for Condensation in Slender Tubes with Rotational Symmetry, Obtained by Generalized Dimensional Analysis

    CERN Document Server

    Dziubek, Andrea

    2011-01-01

    Experimental results for condensation in compact heat exchangers show that the heat transfer due to condensation is significantly better compared to classical heat exchangers, especially when using R134a instead of water as the refrigerant. This suggests that surface tension plays a role. Using generalized dimensional analysis we derive reduced model equations and jump conditions for condensation in a vertical tube with cylindrical cross section. Based on this model we derive a single ordinary differential equation for the thickness of the condensate film as function of the tube axis. Our model agrees well with commonly used models from existing literature. It is based on the physical dimensions of the problem and has greater geometrical flexibility.

  5. Cluster-based reduced-order modelling of a mixing layer

    CERN Document Server

    Kaiser, Eurika; Cordier, Laurent; Spohn, Andreas; Segond, Marc; Abel, Markus; Daviller, Guillaume; Niven, Robert K

    2013-01-01

    We propose a novel cluster-based reduced-order modelling (CROM) strategy for unsteady flows. CROM builds on the pioneering works of Gunzburger's group in cluster analysis (Burkardt et al. 2006) and Eckhardt's group in transition matrix models (Schneider et al. 2007) and constitutes a potential alternative to POD models. This strategy processes a time-resolved sequence of flow snapshots in two steps. First, the snapshot data is clustered into a small number of representative states, called centroids, in the state space. These centroids partition the state space into complementary non-overlapping regions (centroidal Voronoi cells). Departing from the standard algorithm, the probabilities of the clusters are determined, and the states are sorted by transition-matrix considerations. Secondly, the transitions between the states are dynamically modelled via a Markov process. Physical mechanisms are then distilled by a refined analysis of the Markov process, e.g. with the finite-time Lyapunov exponent and entropic methods...
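    A minimal CROM-style sketch (toy snapshot data, not the paper's mixing-layer fields): snapshots are clustered into a few centroids with a plain Lloyd iteration, and the cluster-to-cluster transitions of the time series are counted to form a row-stochastic Markov matrix.

        import numpy as np

        def kmeans(X, k, n_iter=100, rng=None):
            rng = np.random.default_rng(rng)
            centroids = X[rng.choice(len(X), k, replace=False)]
            for _ in range(n_iter):
                labels = np.argmin(((X[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
                centroids = np.array([X[labels == j].mean(0) if np.any(labels == j)
                                      else centroids[j] for j in range(k)])
            return centroids, labels

        rng = np.random.default_rng(2)
        snapshots = rng.normal(size=(500, 40))       # time-resolved snapshot sequence
        centroids, labels = kmeans(snapshots, k=5, rng=3)

        # Transition matrix between the clusters of consecutive snapshots.
        P = np.zeros((5, 5))
        for a, b in zip(labels[:-1], labels[1:]):
            P[a, b] += 1.0
        P /= np.maximum(P.sum(axis=1, keepdims=True), 1.0)
        print(np.round(P, 2))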

  6. Interpolation-based reduced-order models to predict transient thermal output for enhanced geothermal systems

    CERN Document Server

    Mudunuru, M K; Harp, D R; Guthrie, G D; Viswanathan, H S

    2016-01-01

    The goal of this paper is to assess the utility of Reduced-Order Models (ROMs) developed from 3D physics-based models for predicting transient thermal power output for an enhanced geothermal reservoir while explicitly accounting for uncertainties in the subsurface system and site-specific details. Numerical simulations are performed based on Latin Hypercube Sampling (LHS) of model inputs drawn from uniform probability distributions. Key sensitive parameters are identified from these simulations, which are fracture zone permeability, well/skin factor, bottom hole pressure, and injection flow rate. The inputs for ROMs are based on these key sensitive parameters. The ROMs are then used to evaluate the influence of subsurface attributes on thermal power production curves. The resulting ROMs are compared with field-data and the detailed physics-based numerical simulations. We propose three different ROMs with different levels of model parsimony, each describing key and essential features of the power production cu...
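    A short Latin Hypercube Sampling sketch consistent with the workflow described above (the input ranges are hypothetical placeholders for the four sensitive parameters named in the abstract, not values from the paper): each input dimension is stratified into N intervals and the strata are shuffled independently.

        import numpy as np

        def latin_hypercube(n_samples, bounds, seed=None):
            """bounds: list of (low, high) pairs, one per input dimension."""
            rng = np.random.default_rng(seed)
            d = len(bounds)
            u = (rng.random((n_samples, d)) + np.arange(n_samples)[:, None]) / n_samples
            for j in range(d):                       # independent stratified shuffle per dimension
                u[:, j] = rng.permutation(u[:, j])
            lo = np.array([b[0] for b in bounds])
            hi = np.array([b[1] for b in bounds])
            return lo + u * (hi - lo)

        samples = latin_hypercube(20, [(1e-15, 1e-12),   # fracture-zone permeability [m^2]
                                       (-2.0, 5.0),      # well/skin factor [-]
                                       (5e6, 2e7),       # bottom-hole pressure [Pa]
                                       (10.0, 80.0)],    # injection flow rate [kg/s]
                                  seed=7)
        print(samples.shape)                             # (20, 4)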

  7. Numerical modelling of pressure rise combustion for reducing emissions of future civil aircraft

    OpenAIRE

    Materano Blanco, Gilberto Ignacio

    2014-01-01

    This work assesses the feasibility of designing and implementing the wave rotor (WR), the pulse detonation engine (PDE) and the internal combustion wave rotor (ICWR) as part of novel Brayton cycles able to reduce emissions of future aircraft. The design and evaluation processes are performed using the simplified analytical solution of the devices as well as 1D-CFD models. A code based on the finite volume method is built to predict the position and dimensions of the slots fo...

  8. Reduced order modelling and numerical optimisation approach to reliability analysis of microsystems and power modules

    OpenAIRE

    Rajaguru, Pushparajah

    2014-01-01

    The principal aim of this PhD program is the development of an optimisation and risk based methodology for reliability and robustness predictions of packaged electronic components. Reliability based design optimisation involves the integration of reduced order modelling, risk analysis and optimisation. The increasing cost of physical prototyping and extensive qualification testing for reliability assessment is making virtual qualification a very attractive alternative for the electronics indu...

  9. Manpower Consideration to Reduce Development Time for New Model in Automotive Industry

    Directory of Open Access Journals (Sweden)

    N. M.Z.N. Mohamed

    2005-01-01

    Full Text Available A study of manpower considerations to reduce development time for a new model in the automotive industry is presented. The approach taken is to study the existing practice in car development and to suggest various ways to improve manpower utilization, such as early involvement and input from manufacturing personnel, a proper job scope structure, proper training of new staff so that important tasks can be accomplished at the required time, and a clear definition of the criteria for a Project Manager's appointment.

  10. Reducing Inventory and Optimizing the Lead time in a Custom order, High model mix Environment

    OpenAIRE

    Salian, Shilpashree

    2016-01-01

    In this contemporary world, demand forecasting has become an effective tool for the success of any product organization. This is especially important when components have long lead times and when companies do not build to order. The goal of this thesis is to reduce inventory by improving forecast accuracy while maintaining customer lead time in a custom-order, high-mix model environment. In this master thesis investigation, the research questions that were formulated are answer...

  11. Reduced hippocampal neurogenesis in the GR+/− genetic mouse model of depression

    OpenAIRE

    Kronenberg, Golo; Kirste, Imke; Inta, Dragos; Chourbaji, Sabine; Heuser, Isabella; Endres, Matthias; Gass, Peter

    2009-01-01

    Glucocorticoid receptor (GR) heterozygous mice (GR+/− ) represent a valuable animal model for major depression. GR+/− mice show a depression-related phenotype characterized by increased learned helplessness on the behavioral level and neuroendocrine alterations with hypothalamo-pituitary-adrenal (HPA) axis overdrive characteristic of depression. Hippocampal brain-derived neurotrophic factor (BDNF) levels have also been shown to be reduced in GR+/− animals. Because adult hippocampal neurogenes...

  12. Reduced-order modeling for mistuned centrifugal impellers with crack damages

    Science.gov (United States)

    Wang, Shuai; Zi, Yanyang; Li, Bing; Zhang, Chunlin; He, Zhengjia

    2014-12-01

    An efficient method for nonlinear vibration analysis of mistuned centrifugal impellers with crack damages is presented. The main objective is to investigate the effects of mistuning and cracks on the vibration features of centrifugal impellers and to explore effective techniques for crack detection. Firstly, in order to reduce the input information needed for component mode synthesis (CMS), the whole model of an impeller is obtained by rotation transformation based on the finite element model of a sector model. Then, a hybrid-interface method of CMS is employed to generate a reduced-order model (ROM) for the cracked impeller. The degrees of freedom on the crack surfaces are retained in the ROM to simulate the crack breathing effects. A novel approach for computing the inversion of large sparse matrix is proposed to save memory space during model order reduction by partitioning the matrix into many smaller blocks. Moreover, to investigate the effects of mistuning and cracks on the resonant frequencies, the bilinear frequency approximation is used to estimate the resonant frequencies of the mistuned impeller with a crack. Additionally, statistical analysis is performed using the Monte Carlo simulation to study the statistical characteristics of the resonant frequencies versus crack length at different mistuning levels. The results show that the most significant effect of mistuning and cracks on the vibration response is the shift and split of the two resonant frequencies with the same nodal diameters. Finally, potential quantitative indicators for detection of crack of centrifugal impellers are discussed.

  13. A reduced-complexity model for sediment transport and step-pool morphology

    Science.gov (United States)

    Saletti, Matteo; Molnar, Peter; Hassan, Marwan A.; Burlando, Paolo

    2016-07-01

    A new particle-based reduced-complexity model to simulate sediment transport and channel morphology in steep streams is presented. The model CAST (Cellular Automaton Sediment Transport) contains phenomenological parameterizations, deterministic or stochastic, of sediment supply, bed load transport, and particle entrainment and deposition in a cellular-automaton space with uniform grain size. The model reproduces a realistic bed morphology and typical fluctuations in transport rates observed in steep channels. Particle hop distances, from entrainment to deposition, are well fitted by exponential distributions, in agreement with field data. The effect of stochasticity in both the entrainment and the input rate is shown. A stochastic parameterization of the entrainment is essential to create and maintain a realistic channel morphology, while the intermittent transport of grains in CAST shreds the input signal and its stochastic variability. A jamming routine has been added to CAST to simulate the grain-grain and grain-bed interactions that lead to particle jamming and step formation in a step-pool stream. The results show that jamming is effective in generating steps in unsteady conditions. Steps are created during high-flow periods and they survive during low flows only in sediment-starved conditions, in agreement with the jammed-state hypothesis of Church and Zimmermann (2007). Reduced-complexity models like CAST give new insights into the dynamics of complex phenomena such as sediment transport and bedform stability and are a useful complement to fully physically based models to test research hypotheses.
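    A toy one-dimensional cellular-automaton sketch in the same reduced-complexity spirit (parameters hypothetical, not the CAST code): grains are supplied stochastically upstream, entrained with a fixed probability, hop a geometrically distributed number of cells (a discrete analogue of exponential hop distances), and either deposit or leave at the outlet.

        import numpy as np

        rng = np.random.default_rng(6)
        bed = np.full(200, 5, dtype=int)             # grains stored in each cell
        p_entrain, p_stop, mean_supply = 0.02, 0.3, 2.0

        for step in range(5000):
            bed[0] += rng.poisson(mean_supply)       # stochastic sediment supply upstream
            active = (rng.random(bed.size) < p_entrain) & (bed > 0)
            sources = np.flatnonzero(active)
            bed[sources] -= 1                        # entrainment
            hops = rng.geometric(p_stop, size=sources.size)   # mean hop length ~ 1/p_stop cells
            targets = sources + hops
            inside = targets < bed.size              # grains hopping past the outlet leave the reach
            np.add.at(bed, targets[inside], 1)       # deposition

        print(bed[:5], bed[-5:])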

  14. Parasitic analysis and π-type Butterworth-Van Dyke model for complementary-metal-oxide-semiconductor Lamb wave resonator with accurate two-port Y-parameter characterizations

    Science.gov (United States)

    Wang, Yong; Goh, Wang Ling; Chai, Kevin T.-C.; Mu, Xiaojing; Hong, Yan; Kropelnicki, Piotr; Je, Minkyu

    2016-04-01

    The parasitic effects from electromechanical resonance, coupling, and substrate losses were collected to derive a new two-port equivalent-circuit model for Lamb wave resonators, especially those fabricated in silicon technology. The proposed model is a hybrid π-type Butterworth-Van Dyke (PiBVD) model that accounts for the above-mentioned parasitic effects, which are commonly observed in Lamb-wave resonators. It combines the interdigital capacitance (both plate and fringe capacitance), the interdigital resistance, the Ohmic losses in the substrate, and the acoustic motional behavior of the typical Modified Butterworth-Van Dyke (MBVD) model. In the case studies presented in this paper using two-port Y-parameters, the PiBVD model fitted significantly better than the typical MBVD model, strengthening the capability of characterizing both the magnitude and phase of either Y11 or Y21. The accurate modelling of two-port Y-parameters makes the PiBVD model beneficial for the characterization of Lamb-wave resonators, providing accurate simulation of Lamb-wave resonators and oscillators.
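    A one-port admittance sketch of the classical Butterworth-Van Dyke building block the abstract refers to (component values hypothetical): the motional R-L-C branch in parallel with the static capacitance C0. The paper's PiBVD model adds further parasitic elements for the two-port silicon case, which are not reproduced here.

        import numpy as np

        def bvd_admittance(f, C0, Rm, Lm, Cm):
            """Admittance of the MBVD core: static C0 in parallel with series Rm-Lm-Cm."""
            w = 2.0 * np.pi * f
            y_motional = 1.0 / (Rm + 1j * w * Lm + 1.0 / (1j * w * Cm))
            return 1j * w * C0 + y_motional

        f = np.linspace(400e6, 600e6, 2001)          # sweep around a hypothetical resonance
        Y = bvd_admittance(f, C0=2e-12, Rm=50.0, Lm=1e-5, Cm=1e-14)
        f_series = f[np.argmax(np.abs(Y))]           # series (motional) resonance
        f_parallel = f[np.argmin(np.abs(Y))]         # anti-resonance
        print(f"series resonance near {f_series/1e6:.1f} MHz, "
              f"anti-resonance near {f_parallel/1e6:.1f} MHz")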

  15. Stability analysis of the POD reduced order method for solving the bidomain model in cardiac electrophysiology.

    Science.gov (United States)

    Corrado, Cesare; Lassoued, Jamila; Mahjoub, Moncef; Zemzemi, Néjib

    2016-02-01

    In this paper we show the numerical stability of the Proper Orthogonal Decomposition (POD) reduced order method used in cardiac electrophysiology applications. The difficulty of proving the stability comes from the fact that we are interested in the bidomain model, which is a system of degenerate parabolic equations coupled to a system of ODEs representing the cell membrane electrical activity. The proof of the stability of this method is based on a priori estimates controlling the gap between the reduced order solution and the Galerkin finite element one. We present some numerical simulations confirming the theoretical results. We also combine the POD method with a time splitting scheme, allowing the bidomain problem to be solved faster, and show numerical results. Finally, we conduct numerical simulations in 2D illustrating the stability of the POD method and its sensitivity to the ionic model parameters. We also perform 3D simulations using a massively parallel code. We show the computational gain of using the POD reduced order model. We also show that this method has a better scalability than the full finite element method. PMID:26723278

  16. Determining salt concentrations for equivalent water activity in reduced-sodium cheese by use of a model system.

    Science.gov (United States)

    Grummer, J; Schoenfuss, T C

    2011-09-01

    The range of sodium chloride (salt)-to-moisture ratio is critical in producing high-quality cheese products. The salt-to-moisture ratio has numerous effects on cheese quality, including controlling water activity (a(w)). Therefore, when attempting to decrease the sodium content of natural cheese it is important to calculate the amount of replacement salts necessary to create the same a(w) as the full-sodium target (when using the same cheese making procedure). Most attempts to decrease sodium using replacement salts have used concentrations too low to create the equivalent a(w) due to the differences in the molecular weight of the replacers compared with salt. This could be because of the desire to minimize off-flavors inherent in the replacement salts, but it complicates the ability to conclude that the replacement salts are the cause of off-flavors such as bitter. The objective of this study was to develop a model system that could be used to measure a(w) directly, without manufacturing cheese, to allow cheese makers to determine the salt and salt replacer concentrations needed to achieve the equivalent a(w) for their existing full-sodium control formulas. All-purpose flour, salt, and salt replacers (potassium chloride, modified potassium chloride, magnesium chloride, and calcium chloride) were blended with butter and water at concentrations that approximated the solids, fat, and moisture contents of typical Cheddar cheese. Salt and salt replacers were applied to the model systems at concentrations predicted by Raoult's law. The a(w) of the model samples was measured on a water activity meter, and concentrations were adjusted using Raoult's law if they differed from those of the full-sodium model. Based on the results determined using the model system, stirred-curd pilot-scale batches of reduced- and full-sodium Cheddar cheese were manufactured in duplicate. Water activity, pH, and gross composition were measured and evaluated statistically by linear mixed model
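    A sketch of the Raoult's-law bookkeeping the abstract describes (illustrative amounts, not the study's cheese formulas): in the ideal-solution approximation the water activity is the mole fraction of water, counting each dissolved salt multiplied by the number of ions it releases, so a replacement blend is sized to reproduce the full-sodium mole count.

        # Molar masses [g/mol] and nominal ions released per formula unit.
        MOLAR_MASS = {"water": 18.015, "NaCl": 58.44, "KCl": 74.55, "CaCl2": 110.98}
        IONS = {"NaCl": 2, "KCl": 2, "CaCl2": 3}

        def water_activity(g_water, salts):
            """salts: dict of salt name -> grams dissolved in g_water grams of water."""
            n_water = g_water / MOLAR_MASS["water"]
            n_solute = sum(IONS[s] * g / MOLAR_MASS[s] for s, g in salts.items())
            return n_water / (n_water + n_solute)    # ideal (Raoult) approximation

        full_sodium = water_activity(37.0, {"NaCl": 1.8})            # illustrative full-salt case
        reduced = water_activity(37.0, {"NaCl": 0.9, "KCl": 1.15})   # candidate replacement blend
        print(round(full_sodium, 4), round(reduced, 4))              # nearly equal a(w)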

  17. TU-EF-204-01: Accurate Prediction of CT Tube Current Modulation: Estimating Tube Current Modulation Schemes for Voxelized Patient Models Used in Monte Carlo Simulations.

    OpenAIRE

    McMillan, K; Bostani, M; McCollough, C; McNitt-Gray, M

    2015-01-01

    PURPOSE: Most patient models used in Monte Carlo-based estimates of CT dose, including computational phantoms, do not have tube current modulation (TCM) data associated with them. While not a problem for fixed tube current simulations, this is a limitation when modeling the effects of TCM. Therefore, the purpose of this work was to develop and validate methods to estimate TCM schemes for any voxelized patient model. METHODS: For 10 patients who received clinically-indicated chest (n=5) and ab...
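    The methods portion of this record is truncated, so the following is only a generic, hypothetical illustration of the basic idea behind estimating a TCM scheme from a voxelized patient model: give each slice a relative tube current proportional to its mean attenuation and normalize to a reference mAs. The proportionality rule, function names, and toy volume are assumptions, not the authors' validated method.

```python
# Generic TCM-estimation sketch for a voxelized model (illustrative assumptions only).
import numpy as np

def relative_tcm(volume_hu, reference_mas=200.0):
    """Per-slice relative tube current scaled by mean attenuation relative to water."""
    mu_rel = 1.0 + np.clip(volume_hu, -1000.0, 3000.0) / 1000.0  # mu/mu_water per voxel
    slice_attenuation = mu_rel.mean(axis=(1, 2))                  # one value per z-slice
    weights = slice_attenuation / slice_attenuation.mean()
    return reference_mas * weights

# Toy voxelized "patient": air background, a water-equivalent block, a denser insert.
vol = np.full((40, 64, 64), -1000.0)
vol[:, 16:48, 16:48] = 0.0        # water-equivalent tissue
vol[10:30, 24:40, 24:40] = 300.0  # denser structure in the middle slices
print(np.round(relative_tcm(vol)[:5], 1), "... mAs per slice")
```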

  18. Simple and Accurate Analytical Solutions of the Electrostatically Actuated Curled Beam Problem

    KAUST Repository

    Younis, Mohammad I.

    2014-08-17

    We present analytical solutions of the electrostatically actuated, initially deformed cantilever beam problem. We use a continuous Euler-Bernoulli beam model combined with a single-mode Galerkin approximation. We derive simple analytical expressions for two commonly observed deformed beam configurations: the curled and tilted configurations. The derived analytical formulas are validated by comparing their results to experimental data in the literature and to numerical results of a multi-mode reduced order model. The derived expressions do not involve any complicated integrals or complex terms and can be conveniently used by designers for quick, yet accurate, estimations. The formulas are found to yield accurate results for most commonly encountered microbeams with initial tip deflections of a few microns. For largely deformed beams, we found that these formulas yield less accurate results due to the limitations of the single-mode approximation they are based on. In such cases, multi-mode reduced order models need to be utilized.
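    The paper's closed-form expressions are not reproduced here; as a hedged stand-in, the sketch below uses the simplest single-degree-of-freedom analogue of electrostatic actuation (a lumped spring balanced against a parallel-plate force) to show what solving a single-mode reduced order balance looks like numerically. The stiffness, gap, and electrode area are arbitrary illustrative values, and the lumped model is not the authors' Euler-Bernoulli/Galerkin formulation.

```python
# Lumped single-DOF electrostatic actuation sketch (illustrative values only).
from scipy.optimize import brentq

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def static_deflection(V, k=1.0, gap=2e-6, area=100e-6 * 10e-6):
    """Stable static deflection (m) where the spring force balances the electrostatic force."""
    f = lambda x: k * x - EPS0 * area * V**2 / (2.0 * (gap - x) ** 2)
    # For voltages below pull-in, the stable root lies below gap/3.
    return brentq(f, 0.0, gap / 3.0 - 1e-12)

for V in (5.0, 10.0, 15.0):
    print(f"V = {V:4.1f} V -> deflection = {static_deflection(V) * 1e6:.3f} um")
```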

  19. pH Measurement in High Ionic Strength Brines: Calibration of a combined glass electrode to obtain accurate pH measurements for use in a coupled single pass SWRO boron removal model

    OpenAIRE

    Marvin, Esra

    2013-01-01

    The purpose of this thesis was to calibrate a combined glass electrode to obtain accurate pH measurements in high ionic strength brines. pH measurements in high ionic strength brines made with standard calibrated electrodes are susceptible to significant errors. The work done in this thesis was part of a larger project carried out at the Israel Institute of Technology, where a new single pass boron removal process is being developed and modeled. The goal for the calibration was...
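    The thesis's procedure for high ionic strength brines is not reproduced here; the sketch below only illustrates the basic multi-point glass-electrode calibration it builds on: fit the Nernstian line E = E0 - S*pH to buffer readings, then invert it for an unknown sample. The buffer potentials and the sample reading are hypothetical numbers.

```python
# Multi-point pH electrode calibration sketch with made-up readings.
import numpy as np

buffer_ph = np.array([4.01, 7.00, 10.01])
buffer_mv = np.array([171.2, -1.5, -175.8])   # hypothetical electrode potentials

# Least-squares fit of E = E0 - S * pH (ideal Nernstian slope is ~59.16 mV/pH at 25 C).
neg_slope, E0 = np.polyfit(buffer_ph, buffer_mv, 1)
S = -neg_slope
print(f"E0 = {E0:.1f} mV, slope = {S:.2f} mV/pH ({100.0 * S / 59.16:.1f}% of Nernstian)")

def mv_to_ph(mv):
    """Convert a measured potential (mV) to pH with the fitted calibration line."""
    return (E0 - mv) / S

print("sample pH:", round(mv_to_ph(25.0), 3))
```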

  20. Consistency of Cluster Analysis for Cognitive Diagnosis: The Reduced Reparameterized Unified Model and the General Diagnostic Model.

    Science.gov (United States)

    Chiu, Chia-Yi; Köhn, Hans-Friedrich

    2016-09-01

    The asymptotic classification theory of cognitive diagnosis (ACTCD) provided the theoretical foundation for using clustering methods that do not rely on a parametric statistical model for assigning examinees to proficiency classes. Like general diagnostic classification models, clustering methods can be useful in situations where the true diagnostic classification model (DCM) underlying the data is unknown and possibly misspecified, or the items of a test conform to a mix of multiple DCMs. Clustering methods can also be an option when fitting advanced and complex DCMs encounters computational difficulties. These can range from the use of excessive CPU times to plain computational infeasibility. However, the propositions of the ACTCD have only been proven for the Deterministic Input Noisy Output "AND" gate (DINA) model and the Deterministic Input Noisy Output "OR" gate (DINO) model. For other DCMs, there does not exist a theoretical justification to use clustering for assigning examinees to proficiency classes. But if clustering is to be used legitimately, then the ACTCD must cover a larger number of DCMs than just the DINA model and the DINO model. Thus, the purpose of this article is to prove the theoretical propositions of the ACTCD for two other important DCMs, the Reduced Reparameterized Unified Model and the General Diagnostic Model. PMID:27230079
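    As a hedged illustration of the clustering step that the ACTCD is meant to justify (not the article's proofs), the sketch below simulates DINA-type responses, summarizes each examinee by the attribute-wise sum scores W = YQ used in this literature, and partitions examinees into 2^K candidate proficiency classes with K-means. The simulated Q-matrix, error rates, and choice of K-means are illustrative assumptions.

```python
# Clustering-based classification sketch on simulated DINA-type data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_examinees, n_items, K = 500, 20, 3

Q = rng.integers(0, 2, size=(n_items, K))            # item-by-attribute Q-matrix
Q[Q.sum(axis=1) == 0, 0] = 1                         # every item measures at least one attribute
alpha = rng.integers(0, 2, size=(n_examinees, K))    # true attribute profiles

# Conjunctive (DINA-like) ideal responses, then noisy observed responses.
ideal = (alpha @ Q.T == Q.sum(axis=1)).astype(float)
Y = (rng.random((n_examinees, n_items)) < 0.85 * ideal + 0.15 * (1.0 - ideal)).astype(int)

W = Y @ Q                                            # attribute-wise sum scores
labels = KMeans(n_clusters=2**K, n_init=10, random_state=0).fit_predict(W)
print("cluster sizes:", np.bincount(labels))
```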